\documentclass[]{article}
% Author: Jonah Miller ([email protected])
% Time-stamp: <2013-12-14 21:18:33 (jonah)>
% packages
\usepackage{fullpage}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{latexsym}
\usepackage{graphicx}
\usepackage{mathrsfs}
\usepackage{verbatim}
\usepackage{braket}
\usepackage{listings}
\usepackage{pdfpages}
\usepackage{color}
\usepackage{hyperref}
% for the picture environment
\setlength{\unitlength}{1cm}
%preamble
\title{Numerical Explorations of FLRW Cosmologies}
\author{Jonah Miller\\
\textit{[email protected]}}
\date{General Relativity for Cosmology\\Achim Kempf\\ Fall 2013}
% Macros
\newcommand{\R}{\mathbb{R}} % Real number line
\newcommand{\N}{\mathbb{N}} % Integers
\newcommand{\eval}{\biggr\rvert} %evaluated at
\newcommand{\myvec}[1]{\vec{#1}} % vectors for me
% total derivatives
\newcommand{\diff}[2]{\frac{d #1}{d #2}}
\newcommand{\dd}[1]{\frac{d}{d #1}}
% partial derivatives
\newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\pdd}[1]{\frac{\partial}{\partial #1}}
% t derivatives
\newcommand{\ddt}{\dd{t}}
\newcommand{\dt}[1]{\diff{#1}{t}}
% A 2-vector
\newcommand{\twovector}[2]{\left[\begin{matrix}#1\\#2\end{matrix}\right]}
% A 3 vector.
\newcommand{\threevector}[3]{\left[\begin{matrix} #1\\ #2\\ #3 \end{matrix}\right]}
% Error function
\newcommand{\erf}{\text{erf}}
\begin{document}
\maketitle
% \begin{abstract}
% In this paper, we numerically investigate the past and future of the
% night sky. We briefly discuss the origin of the so-called
% Friedmann-Lemaitre-Robertson-Walker metric, the homogenous,
% isotropic matter-filled universe. We numerically explore the
% evolution of the metric given various equations of state of the
% fluid filling spacetime.
% \end{abstract}
\section{Introduction}
\label{sec:intro}
To attain a picture of the universe as a whole, we must look at it on
the largest possible scales. Since the universe is almost certainly
much larger than the observable universe, or indeed the universe we
will ever observe
\cite{MapOfUniverse,Carroll,MisnerThorneWheeler,Wald}, some guesswork
is necessary \cite{Wald}. The most popular assumption
used to study the universe as a whole is the \textit{Copernican
principle} \cite{BonesOfCopernicus}, the notion that, on the largest
scales, the universe looks the same everywhere and in every direction
\cite{Carroll,Wald}.
In modern language, we assume that the universe is homogeneous and
isotropic. That is, we assume that the energy-momentum tensor that
feeds Einstein's equations is the same everywhere in space and has
no preferred direction
\cite{Carroll,MisnerThorneWheeler,Wald}. Obviously, this isn't the
only approach one can take. Roughly speaking one can generate a
spacetime with an arbitrary amount of symmetry between no symmetry and
maximal symmetry. The homogeneous isotropic universe is almost as
symmetric as one can get. One can add time-like symmetry to attain de
Sitter, anti de Sitter, or Minkowski spacetimes, but that's as
symmetric as you can get
\cite{Carroll,MisnerThorneWheeler,Wald}. Relaxing the symmetry
conditions on the energy-momentum tensor results in, for example, the
mixmaster universe \cite{MisnerThorneWheeler}. However, observational
evidence, most strongly the cosmic microwave background, suggests that
the universe is homogeneous and isotropic to a very high degree
\cite{Planck}.
In this work, we derive the equations governing the evolution of a
homogeneous isotropic universe and numerically explore how such a
universe could evolve. In section \ref{sec:the:metric} we briefly
derive the so-called Friedmann equations, the equations governing such
spacetimes. In section \ref{sec:numerical:approach}, we briefly
discuss the numerical tools we use to evolve these equations. In
section \ref{sec:results}, we present our results. Finally, in section
\ref{sec:conclusion}, we offer some concluding remarks.
\section{The Friedmann-Lemaitre-Robertson-Walker Metric}
\label{sec:the:metric}
First, we derive the metric of a homogeneous, isotropic spacetime. Then
we derive the analytic formulae for its evolution. Because we derived
the Friedmann equations using tetrads in class \cite{Kempf}, we
decided to work in coordinates for another perspective.
\subsection{Deriving the Metric}
\label{subsec:metric:derivation}
% The following discussion borrows from \cite{Carroll} and
% \cite{MisnerThorneWheeler}, but mostly from \cite{Wald}. We start by
% putting the words ``homogeneous'' and ``isotropic'' into precise
% language.
% \begin{itemize}
% \item We define a spacetime as \textit{homogeneous} if we can foliate
% the spacetime by a one-parameter family of spacelike hypersurfaces
% $\Sigma_t$ such that, for a given hypersurface $\Sigma_{t_0}$ at
% time $t_0$ and points $p,q\in\Sigma_{t_0}$, there exists an isometry of
% the four-metric $g_{\mu\nu}$ that maps $p$ into $q$ \cite{Wald}.
% \item We define a spacetime as \textit{isotropic} if at each point
% there exits a congruence of timelike curves $\alpha$ with tangent vectors
% $u^\mu$ such that, for any point $p$ in the spacetime and any two
% space-like vectors $s_1^\mu$ and $s_2^\mu$ orthogonal to $u^\mu$ at
% $p$, there exists an isometry of $g_{\mu\nu}$ which leaves $p$ and
% $u^\mu$ at $p$ fixed but changes $s_1^\mu$ and $s_2^\mu$
% \cite{Wald}.
% \end{itemize}
% Note that if a universe is \textit{both} homogeneous \textit{and}
% isotropic, the foliating hypersurfaces $\Sigma_t$ must be orthogonal
% to the timelike tangent vector field $u^\mu$ \cite{Wald}. Otherwise,
% it is possible to use proof by contradiction to show that the isometry
% guaranteed by isotropy cannot exist \cite{Wald}. This gives us a piece
% of a canonical coordinate system to use. We define the parameter
% indexing our hypersurfaces as time $t$.
%
% On each spacelike hypersurface in our foliation, we can restrict the
% full Lorentzian four-metric $g_{\mu\nu}$ to a Riemannian three-metric
% $h_{ij}$. Because of homogeneity, there must exist a sufficient number
% of isometries of $h_{ij}$ to map every point in the hypersurface into
% every other point \cite{Wald}. Because of isotropy, it must be impo
The following discussion borrows from \cite{Carroll},
\cite{MisnerThorneWheeler}, and \cite{Wald}. However, we mostly follow
\cite{Carroll}. It is possible to rigorously and precisely define the
notions of homogeneous and isotropic \cite{Wald}. However, one can
prove that homogeneous and isotropic spacetimes are foliated by
maximally symmetric spacelike hypersurfaces
\cite{Carroll}. Intuitively, at least, it's not hard to convince
oneself that homogeneity and isotropy are sufficient to enforce
maximal symmetry. There are three classes of such hypersurfaces, all of
constant scalar curvature: spherical spaces, hyperbolic spaces, and
flat spaces. Spherical spaces have constant positive
curvature. Hyperbolic spaces have constant negative curvature, and
flat spaces have zero curvature \cite{Carroll}.
% \begin{figure}[htb]
% \begin{center}
% \leavevmode
% \includegraphics[width=\textwidth]{figures/comparison_of_curvatures.png}
% \caption{A two-dimensional spacelike hypersurface with positive
% curvature (left), negative curvature (middle) and no curvature
% (right).}
% \label{fig:spacelike:curvatures}
% \end{center}
% \end{figure}
Let us consider a homogeneous, isotropic, four-dimensional Lorentzian
spacetime. We can write the metric as \cite{Carroll}
\begin{equation}
\label{eq:hom:iso:most:general}
ds^2 = -dt^2 + a^2(t) d\sigma^2,
\end{equation}
where $t$ is the time coordinate, $a(t)$ is a scalar function known as
the \textit{scale factor}, and $d\sigma^2$ is the metric of the
maximally symmetric spacelike hypersurface. Up to some overall scaling
(which we can hide in $a(t)$), we have \cite{Wald}
\begin{equation}
\label{eq:3-metric}
d\sigma^2 = \begin{cases}
d\chi^2 + \sin^2(\chi)(d\theta^2 + \sin^2(\theta)d\phi^2)&\text{for positive curvature}\\
d\chi^2 + \chi^2(d\theta^2 + \sin^2(\theta)d\phi^2)&\text{for no curvature}\\
d\chi^2 + \sinh^2(\chi)(d\theta^2 + \sin^2(\theta)d\phi^2)&\text{for negative curvature}
\end{cases},
\end{equation}
where $\chi$, $\theta$, and $\phi$ are the usual spherical
coordinates. However, if we choose $r$ such that
\begin{equation}
\label{eq:def:rho}
d\chi = \frac{dr}{\sqrt{1 - k r^2}},
\end{equation}
where
\begin{equation}
\label{eq:k:range}
k\in\{-1,0,1\}
\end{equation}
is the normalized scalar curvature of the hypersurface, we can
renormalize our equation to put it in a nicer form
\cite{Carroll,MisnerThorneWheeler},
\begin{equation}
\label{eq:friedmann:equation}
ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1 - k r^2} + r^2 d\Omega^2\right],
\end{equation}
where
\begin{equation}
\label{eq:def:domega}
d\Omega^2 = d\theta^2 + \sin^2(\theta) d\phi^2
\end{equation}
is the angular piece of the metric. This is known as the
\textit{Friedmann-Lemaitre-Robertson-Walker} (FLRW) metric.
\subsection{Evolution and the Friedmann Equation}
\label{subsec:metric:evolution}
The following discussion draws primarily from the lecture notes
\cite{Kempf}, specifically lecture 16. If we calculate the Einstein
tensor
\begin{equation}
\label{eq:def:einstein:tensor}
G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu}
\end{equation}
(using, e.g., a tool like Maple's differential geometry package
\cite{Maple}) in the frame induced by our choice of coordinates, we
find that it is diagonal \cite{Kempf}:\footnote{We write the tensor as
a linear operator with one raised index because this produces the
nicest form of the equations.}
\begin{eqnarray}
\label{eq:einstein:tensor:calculated:tt}
G^t_{\ t} &=& -3 \frac{k + \dot{a}^2}{a^2}\\
\label{eq:einstein:tensor:calculated:space}
G^i_{\ i} &=& - \frac{k + 2a\ddot{a} + \dot{a}^2}{a^2} \ \forall\ i\in\{1,2,3\},\\
\label{eq:einstein:tensor:off:diagonal}
G^{\mu}_{\ \nu} &=& 0\ \forall\ \mu\neq\nu
\end{eqnarray}
where we have suppressed the $t$-dependence of $a$ and where $\dot{a}$
is a time-derivative of $a$ in the usual way.
If we feed the Einstein tensor into Einstein's equation in geometric units,
\begin{equation}
\label{eq:einsteins:equation:1}
G_{\mu \nu} = 8\pi \tilde{T}_{\mu\nu} - \Lambda g_{\mu\nu},
\end{equation}
since $g_{\mu\nu}$ is diagonal, we see that $T^\mu_{\ \nu}$ must be
diagonal too. Indeed, with one index raised, it looks like the
energy-momentum tensor for a perfect fluid in its rest frame:
\begin{eqnarray}
T^{\mu}_{\ \nu} &=& \tilde{T}^{\mu}_{\ \nu} - \frac{\Lambda}{8\pi}\mathcal{I}\nonumber\\
\label{eq:T:mu:nu}
&=& \left[\begin{array}{cccc}-\rho&0&0&0\\0&p&0&0\\0&0&p&0\\0&0&0&p\end{array}\right] - \frac{\Lambda}{8\pi} \left[\begin{array}{cccc}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{array}\right],
\end{eqnarray}
where $\mathcal{I}$ is the identity operator, $\rho$ is the density of
the fluid, and $p$ is the pressure of the fluid. $\tilde{T}^0_{\ 0}$
is negative because we've raised one index. Obviously, this is just a
mathematical coincidence, and we've even manipulated our definition of
pressure to hide our choice of coordinates. However, one can make weak
physical arguments to justify it \cite{Carroll,MisnerThorneWheeler}.
The time-time component of Einstein's equation
\eqref{eq:einsteins:equation:1} then gives the first Friedmann equation \cite{Carroll,MisnerThorneWheeler,Kempf}:
\begin{equation}
\label{eq:Friedmann:1:orig}
\frac{\dot{a}^2 + k}{a^2} = \frac{8\pi \rho + \Lambda}{3}.
\end{equation}
Any space-space diagonal component gives a second equation
\cite{Carroll,MisnerThorneWheeler,Kempf}:
\begin{equation}
\label{eq:einstein:2}
- \frac{k + 2a\ddot{a} + \dot{a}^2}{a^2} = 8\pi p + \Lambda.
\end{equation}
Now we take
\begin{equation}
\label{eq:trace:condition}
-\frac{1}{2} a \left[\left(\text{equation \eqref{eq:Friedmann:1:orig}}\right) + \frac{1}{3}\left(\text{equation \eqref{eq:einstein:2}}\right)\right],
\end{equation}
which yields
\begin{equation}
\label{eq:Friedmann:2:orig}
\frac{\ddot{a}}{a} = -\frac{4\pi}{3} \left(\rho + 3 p\right) + \frac{\Lambda}{3},
\end{equation}
which is the second Friedmann equation \cite{Kempf}. To use these
equations to describe an FLRW universe, we need one more piece of
information: the relationship between density and pressure, called an
equation of state \cite{Carroll,MisnerThorneWheeler,Kempf}. We
parametrize our ignorance of the equation of state as
\begin{equation}
\label{eq:def:eos}
p = \omega(\rho) \rho,
\end{equation}
where $\omega(\rho)$ is some scalar function.
Finally, we hide the cosmological constant as a type of matter, which
we call dark energy. This corresponds to resetting
\begin{equation}
\label{eq:rho:transformation}
\rho\to\rho - \frac{\Lambda}{8\pi}.
\end{equation}
In this case, the Friedmann equations are
\begin{eqnarray}
\label{eq:Friedmann:1}
\frac{\dot{a}^2 + k}{a^2} &=& \frac{8\pi}{3}\rho\\
\label{eq:Friedmann:2}
\frac{\ddot{a}}{a} &=& -\frac{4\pi}{3} \left(\rho + 3 p\right).
\end{eqnarray}
Then a cosmological constant corresponds to an energy density that
does not dilute with time and that has negative pressure. Unless
otherwise stated, we will make this choice.
In the simple case when $\omega$ is a constant, we get several
different regimes. In the case of $\omega \geq 0$, as for a
radiation-dominated or matter-dominated universe
\cite{MisnerThorneWheeler,Carroll,Kempf},\footnote{If $\omega=0$, we
have a matter-dominated universe where the density scales as one
over the scale factor cubed, in other words, one over volume. If
$\omega=1/3$, we have a radiation-dominated universe, where the
density scales as one over the scale factor to the fourth. The extra
scaling of the scale factor is due to cosmological redshift
\cite{Carroll,Kempf}.} the scale factor increases monotonically and
at a decelerating rate \cite{MisnerThorneWheeler,Carroll,Kempf}. In
the case of $\omega = -1$, one gets a dark-energy dominated universe
with accelerating expansion
\cite{MisnerThorneWheeler,Carroll,Kempf}. And if $\omega$ is simply
very close to $-1$, one gets an inflationary universe
\cite{MisnerThorneWheeler,Carroll,Kempf}. As a test, we will numerically
explore the cases when $\omega$ is constant and we have a known
regime. We will also numerically explore the cases when $\omega$ is a
more sophisticated object.
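For reference, when $\omega$ is a constant and $k=0$, the Friedmann
equations \eqref{eq:Friedmann:1} and \eqref{eq:Friedmann:2} admit the
standard power-law solutions
\begin{equation*}
  \rho \propto a^{-3(1+\omega)},
  \qquad
  a(t) \propto t^{2/\left[3(1+\omega)\right]} \text{ for }\omega\neq -1,
  \qquad
  a(t) \propto e^{Ht} \text{ for }\omega = -1,
\end{equation*}
where $H = \sqrt{8\pi\rho/3}$ is constant. In particular, $\omega=1/3$
gives $a\propto t^{1/2}$ and $\omega=0$ gives $a\propto t^{2/3}$, the
behaviors we will fit our numerical solutions to in section
\ref{sec:results}.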
\section{Numerical Approach}
\label{sec:numerical:approach}
In any numerical problem, there are two steps. The first is to frame
the problem in a way compatible with numerical methods. The second
step is to actually design and implement a method. To limit our
problem domain, from here on out, we assume that
\begin{equation}
\label{eq:def:k}
k = 0.
\end{equation}
\subsection{Framing the Problem}
\label{subsec:framing:the:problem}
We will use a numerical algorithm to solve initial value
problems. These algorithms are usually formulated as first-order ODE
solvers. The ODE is written in the form
\begin{equation}
\label{eq:ode:formulation}
\diff{\myvec{y}}{t} = \myvec{f}(\myvec{y},t)\text{ with }\myvec{y}(0) = \myvec{y}_0,
\end{equation}
where $\myvec{y}$ is a vector containing all the functions of $t$ one
wishes to solve for. Let's see if we can't write the Friedmann
equations in such a form.
Our first step is to reduce the second-order ODE system to a
first-order system. We will do this by eliminating $\ddot{a}$ in favor
of some other variable. If we differentiate equation
\eqref{eq:Friedmann:1}, we get
\begin{equation}
\label{eq:dF1}
2\frac{\dot{a}\ddot{a}}{a^2} - 2\frac{\dot{a}\left(\dot{a}^2 + k\right)}{a^3} = \frac{8\pi}{3}\dot{\rho}.
\end{equation}
We can substitute $\ddot{a}$ in from \eqref{eq:Friedmann:2}, and use
\eqref{eq:Friedmann:1} once more for $(\dot{a}^2+k)/a^2$, to obtain
an equation for the time evolution of $\rho$:
\begin{equation}
\label{eq:Friedmann:rho}
\dot{\rho} = - 3 \left(\frac{\dot{a}}{a}\right)\left(\rho + p\right),
\end{equation}
which expresses conservation of mass-energy. As a last step, we use
equation \eqref{eq:Friedmann:1} and equation \eqref{eq:def:eos} to
eliminate $\dot{a}$ and $p$ from equation \eqref{eq:Friedmann:rho} to
attain an evolution equation for $\rho$:
\begin{equation}
\label{eq:rho:evolution}
\dot{\rho} = \mp 3\left[\rho + \omega(\rho)\rho\right]\sqrt{\frac{8\pi}{3}\rho-k}.
\end{equation}
So, putting it all together, if we define the two-vector
\begin{equation}
\label{eq:def:y}
\myvec{y}(t) = \twovector{a(t)}{\rho(t)},
\end{equation}
then we can write the equations governing the evolution of the
universe as
\begin{equation}
\label{eq:y:prime:orig}
\ddt \myvec{y}(t) = \myvec{f}(\myvec{y})
= \pm\twovector{\sqrt{8\pi a^2\rho/3}}{-3\left(\rho + \omega(\rho)\rho\right)\sqrt{\frac{8\pi}{3}\rho-k}}.
\end{equation}
Since our universe is expanding and we're interested in an expanding
universe, we choose $\dot{a}>0$. This gives us
\begin{equation}
\label{eq:y:prime}
\myvec{f}(\myvec{y}) = \twovector{\sqrt{8\pi a^2\rho/3}}{-3\left(\rho + \omega(\rho)\rho\right)\sqrt{\frac{8\pi}{3}\rho-k}}.
\end{equation}
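To make the translation into code concrete, the right-hand side
$\myvec{f}(\myvec{y})$ of equation \eqref{eq:y:prime} can be written as
a short function. The following C++ sketch is purely illustrative (it
is not excerpted from our actual solver, and all names are ours); it
assumes $k=0$ and that the caller supplies the equation-of-state
function $\omega(\rho)$:
\begin{lstlisting}[language=C++]
#include <array>
#include <cmath>
#include <functional>

using State = std::array<double, 2>;  // y = (a, rho)

// Right-hand side f(y) of the first-order system, assuming k = 0.
State flrw_rhs(const State& y,
               const std::function<double(double)>& omega) {
  const double pi  = std::acos(-1.0);
  const double a   = y[0];
  const double rho = y[1];
  const double a_dot   = std::sqrt(8.0 * pi * a * a * rho / 3.0);
  const double rho_dot = -3.0 * (rho + omega(rho) * rho)
                         * std::sqrt(8.0 * pi * rho / 3.0);
  return {a_dot, rho_dot};
}
\end{lstlisting}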
\subsection{Initial Data}
\label{subsec:initial:data}
Even with evolution equations, we're still missing some critical
information. A first-order ODE system needs one initial value for each
unknown function.
Obviously, equation \eqref{eq:y:prime} breaks down when the scale
factor is zero. Therefore we must be careful to avoid starting a
simulation too close to the big bang singularity. For this reason, we
will never assume that $a(0) = 0$. If convenient, we will choose
$a(0)$ to be close to zero. There is one exception, which is the case
of a dark-energy dominated universe, where we will choose $a(0)=1$,
since such a universe is exponentially growing.
We still need to choose a value for $\rho(0)$. However, so long as
$\rho(0) > 0$, we should expect it not to matter much. General
relativity is invariant under an overall re-scaling of energy
\cite{MisnerThorneWheeler,Wald}. Still, to get a better idea of
qualitative behavior, we will choose a number of values of $\rho(0)$ in
each case.
\subsection{Numerical Method}
\label{subsec:numerical:method}
For a given $\omega(\rho)$ and set of initial data, we solve the ODE
system using a ``Runge-Kutta'' algorithm. Before we define
Runge-Kutta, let's first describe a simpler, similar, method. The
definition of a derivative is
\begin{equation}
\label{eq:derivative:definition}
\ddt\myvec{y}(t) = \lim_{h\to 0}\left[\frac{\myvec{y}(t+h) - \myvec{y}(t)}{h}\right].
\end{equation}
Or, alternatively, if $h$ is sufficiently small,
\begin{equation}
\label{eq:forward:euler}
\myvec{y}(t+h) = \myvec{y}(t) + h \dt{\myvec{y}}(t).
\end{equation}
If we know $\myvec{y}(t_0)$ and $\ddt\myvec{y}(t_0)$, then we can use
equation \eqref{eq:forward:euler} to solve for $\myvec{y}(t+h)$. Then,
let $t_1 = t_0 + h$ and use equation \eqref{eq:forward:euler} to solve
for $\myvec{y}(t_1 + h)$. In this way, we can solve for $\myvec{y}(t)$
for all $t > t_0$. This method is called the ``forward Euler'' method
\cite{Heath}.
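As a schematic example (not our production code), a fixed-step forward
Euler integration of equation \eqref{eq:ode:formulation} simply applies
equation \eqref{eq:forward:euler} in a loop; the function \texttt{f}
below stands for any right-hand side of the form discussed above:
\begin{lstlisting}[language=C++]
#include <array>
#include <vector>

using State = std::array<double, 2>;

// Integrate y' = f(y) from t0 to t1 with fixed step h (forward Euler).
template <typename F>
std::vector<State> forward_euler(State y, double t0, double t1,
                                 double h, F f) {
  std::vector<State> history{y};
  for (double t = t0; t + h <= t1; t += h) {
    const State k = f(y);                    // dy/dt at the current state
    y = {y[0] + h * k[0], y[1] + h * k[1]};  // y(t + h) = y(t) + h dy/dt
    history.push_back(y);
  }
  return history;
}
\end{lstlisting}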
Runge-Kutta methods are more sophisticated. One can use a Taylor
series expansion to define a more accurate iterative scheme that
relies on higher-order derivatives, not just first
derivatives. However, since we only have first derivative information,
we simulate higher-order derivatives by evaluating the first
derivative at a number of different values of $t$. Then, of course,
the higher-order derivatives are finite differences of these
evaluations \cite{Heath,NumericalRecipes}.
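For concreteness, the classical fourth-order Runge-Kutta step (a
fixed-step relative of the embedded scheme we actually use) combines
four evaluations of the right-hand side per step; the sketch below is
illustrative only:
\begin{lstlisting}[language=C++]
#include <array>

using State = std::array<double, 2>;

// Componentwise y + c * k.
State axpy(const State& y, double c, const State& k) {
  return {y[0] + c * k[0], y[1] + c * k[1]};
}

// One classical fourth-order Runge-Kutta step for y' = f(y).
template <typename F>
State rk4_step(const State& y, double h, F f) {
  const State k1 = f(y);
  const State k2 = f(axpy(y, 0.5 * h, k1));
  const State k3 = f(axpy(y, 0.5 * h, k2));
  const State k4 = f(axpy(y, h, k3));
  State out;
  for (int i = 0; i < 2; ++i) {
    out[i] = y[i] + (h / 6.0) * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
  }
  return out;
}
\end{lstlisting}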
We use a fourth-order Runge-Kutta method---which means the method
effectively incorporates the first four derivatives of a
function---with adaptive step sizes: the 4(5) Runge-Kutta-Fehlberg
method \cite{Fehlberg}. We use our own implementation, written in
C++. The Runge-Kutta solver by itself can be found here:
\cite{RKF45}.\footnote{It is good coding practice to make one's code as
flexible as possible. By separating the initial-value solver from a
specific application, we make it possible to reuse our code.} In the
context of FLRW metrics, our implementation can be found here:
\cite{FLRWCode}. Both pieces of code are open-sourced, and the reader
should feel free to download and explore either program.
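The essential idea of the adaptive 4(5) scheme can be sketched as
follows. Here \texttt{fehlberg\_step} is a stand-in for the embedded
Fehlberg step (we do not reproduce its coefficients); its fourth- and
fifth-order estimates are compared to decide whether to accept the
step and how to rescale the step size:
\begin{lstlisting}[language=C++]
#include <algorithm>
#include <array>
#include <cmath>

using State = std::array<double, 2>;

// Schematic step-size control for an embedded 4(5) pair. The stepper
// fills in a fourth-order estimate y4 and a fifth-order estimate y5.
template <typename Stepper>
bool try_step(State& y, double& h, double tolerance, Stepper fehlberg_step) {
  State y4, y5;
  fehlberg_step(y, h, y4, y5);
  const double error = std::max(std::abs(y5[0] - y4[0]),
                                std::abs(y5[1] - y4[1]));
  const bool accepted = (error <= tolerance);
  if (accepted) y = y5;  // keep the higher-order estimate
  // Grow or shrink h with the usual safety factor and 1/5 exponent,
  // clamped so the step size never changes too violently.
  const double factor =
      0.9 * std::pow(tolerance / std::max(error, 1e-300), 0.2);
  h *= std::min(4.0, std::max(0.1, factor));
  return accepted;  // on rejection the caller retries with the smaller h
}
\end{lstlisting}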
\section{Results}
\label{sec:results}
Before we delve into unknown territory, it's a good idea to explore
known physically interesting FLRW solutions. This is what we do
now.
\subsection{Known Solutions}
\label{subsec:known:solutions}
If the universe is dominated by radiation, as is the case in the early
universe, just after reheating \cite{Kempf}, we should expect the
scale factor to grow as $a(t) = \left(t/t_0\right)^{1/2}$
\cite{Kempf}. To account for not quite starting at $a=0$, we fit our
numerical solutions to
\begin{equation}
\label{eq:radiation:dominated:fit}
a(t) = a_0 t^{1/2} + b,
\end{equation}
where $a_0$ and $b$ are constants. Figure \ref{fig:radiation:dominated}
shows scale factors for several different choices of $\rho(0)$ and a
fit to an analytic scale factor solution using equation
\eqref{eq:radiation:dominated:fit}, which matches the data extremely
well.\footnote{In all plots showing a fit, we've removed most of the
  data points. A minimum of 100 data points is generated per
  simulation, but most are hidden to make the accuracy of the fit
  visible.}
\begin{figure}[htb]
\begin{center}
\leavevmode
\hspace{-.2cm}
\includegraphics[width=8.2cm]{figures/radiation_dominated_all_scale_factors.png}
\hspace{-0.3cm}
\includegraphics[width=8.3cm]{figures/radiation_dominated_fit_rho0256.png}
\caption[Radiation Dominated Universe]{The Radiation-Dominated
Universe. Left: Scale factor as a function of time for a number
of different values of $\rho(0)$. Right: A fit to equation
\eqref{eq:radiation:dominated:fit} for $\rho(0)=256$.}
\label{fig:radiation:dominated}
\end{center}
\end{figure}
If the universe is dominated by matter, as was the case until
extremely recently \cite{Kempf}, then we expect the scale factor to
scale as $a(t) = \left(t/t_0\right)^{2/3}$ \cite{Kempf}. To account
for not quite starting at $a=0$, we fit our numerical solutions to
\begin{equation}
\label{eq:matter:dominated:fit}
a(t) = a_0 t^{2/3} + b.
\end{equation}
Figure \ref{fig:matter:dominated} shows the scale factors and fit.
\begin{figure}[htb]
\begin{center}
\leavevmode
\hspace{-.2cm}
\includegraphics[width=8.2cm]{figures/matter_dominated_all_scale_factors.png}
\hspace{-0.3cm}
\includegraphics[width=8.3cm]{figures/matter_dominated_fit_rho032.png}
\caption[Matter Dominated Universe]{The Matter-Dominated
Universe. Left: Scale factor as a function of time for a number
of different values of $\rho(0)$. Right: A fit to equation
\eqref{eq:matter:dominated:fit} for $\rho(0)=32$.}
\label{fig:matter:dominated}
\end{center}
\end{figure}
If the universe is dominated by dark energy, as will be the case very
soon \cite{Kempf}, then we expect the scale factor to scale as
$a(t)=a_0 e^{t_0t}$. For convenience, we fit to
\begin{equation}
\label{eq:dark:energy:dominated:fit}
a(t) = e^{t_0t + b},
\end{equation}
where $t_0$ and $b$ are constants. Figure
\ref{fig:dark:energy:dominated} shows the scale factors and fit.
\begin{figure}[htb]
\begin{center}
\leavevmode
\hspace{-.2cm}
\includegraphics[width=8.2cm]{figures/dark_energy_dominated_universe_scale_factors.png}
\hspace{-0.3cm}
\includegraphics[width=8.3cm]{figures/dark_energy_dominated_fit_rho012.png}
\caption[Dark Energy Dominated Universe]{The Dark Energy-Dominated
Universe. Left: Scale factor as a function of time for a number
of different values of $\rho(0)$. Right: A fit to equation
\eqref{eq:dark:energy:dominated:fit} for $\rho(0)=1.2$.}
\label{fig:dark:energy:dominated}
\end{center}
\end{figure}
\subsection{A Multi-Regime Universe}
\label{subsec:multi-regime}
Now that we've confirmed appropriate behavior in each of the
analytically explored regimes, it's time to move into unknown
territory! We will attempt to construct a history of the universe by
splicing together the known solutions to the Friedmann equations in a
smooth way.
To this end, we let $\omega$ vary smoothly with $\rho$. We want the
radiation, matter, and dark energy dominated regimes of the universe
to emerge naturally, separated by transition regions. Furthermore, we
want $\omega(\rho)$ to be monotonically increasing, since we know that
we first had a radiation dominated universe (in which the energy
density scales as $a^{-4}$), then a matter dominated universe (in which
the density scales as $a^{-3}$), and finally a dark energy dominated
universe (in which the density does not scale with $a$ at all)
\cite{Carroll,MisnerThorneWheeler,Kempf}. The dark energy regime
should take over when $\rho$ is very small and the normal matter and
radiation has diluted away \cite{Carroll,MisnerThorneWheeler,Kempf}.
One function that smoothly, asymptotically, and monotonically
interpolates between two constant values is the error function,
\begin{equation}
\label{eq:def:error:function}
\erf(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}dt.
\end{equation}
$\erf(-\infty)=-1$ and $\erf(+\infty)=+1$. However, the transition
region is very sharp and the error function is effectively $-1$ for
$x<-2$ and effectively $+1$ for $x>2$. We can therefore create a
smoothly varying $\omega(\rho)$ with the desired properties by taking
a linear combination of error functions:
\begin{equation}
\label{eq:interpolating:function}
\omega(\rho) = -1 + \left(\frac{1}{2}\right) \left(1+\frac{1}{3}\right) + \frac{1}{2}\erf\left(\frac{\rho - \rho_{DE}}{\Delta_{DE}}\right) + \left(\frac{1}{2}\right)\left(\frac{1}{3}\right)\erf\left(\frac{\rho - \rho_{RM}}{\Delta_{RM}}\right),
\end{equation}
where $\rho_{DE}$ is the value of $\rho$ marking the change of regimes
from a matter-dominated to a dark energy dominated universe and
$\Delta_{DE}$ is the width of the transition region. $\rho_{RM}$ and
$\Delta_{RM}$ fill analogous roles for the transition from a radiation
to a matter dominated universe.
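Equation \eqref{eq:interpolating:function} translates directly into
code. The sketch below is illustrative (the transition parameters are
simply passed in as arguments) and uses the standard library's
\texttt{std::erf}:
\begin{lstlisting}[language=C++]
#include <cmath>

// Equation of state omega(rho) built from two error functions.
// rho_de/delta_de set the matter/dark-energy transition,
// rho_rm/delta_rm the radiation/matter transition.
double omega_of_rho(double rho,
                    double rho_de, double delta_de,
                    double rho_rm, double delta_rm) {
  return -1.0
         + 0.5 * (1.0 + 1.0 / 3.0)
         + 0.5 * std::erf((rho - rho_de) / delta_de)
         + (0.5 / 3.0) * std::erf((rho - rho_rm) / delta_rm);
}
\end{lstlisting}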
By varying $\rho_{DE}$, $\rho_{RM}$, $\Delta_{DE}$, and $\Delta_{RM}$,
we can select a function where the transition between regimes is very
abrupt or where the transition is very gradual. We chose four such
functions, as shown in figure \ref{fig:omega:of:rho}: an equation of
state with abrupt transitions
($\rho_{DE}=10,\Delta_{DE}=\Delta_{RM}=1,\rho_{RM}=20$), one with
moderate transitions
($\rho_{DE}=20,\Delta_{DE}=\Delta_{RM}=5,\rho_{RM}=40$), one with
gradual transitions
($\rho_{DE}=25,\Delta_{DE}=\Delta_{RM}=12,\rho_{RM}=80$), and,
finally, as a check, we chose a function with no well-defined epochs
at all ($\rho_{DE}=37,\Delta_{DE}=\Delta_{RM}=20,\rho_{RM}=92$).
\begin{figure}[tbh]
\begin{center}
\leavevmode
\includegraphics[width=12cm]{figures/multi_regime_eos_1.png}
\caption[EOS]{The equation of state variable $\omega$ (see
equation \eqref{eq:def:eos}) as a function of energy density
$\rho$, as defined in equation
\eqref{eq:interpolating:function}. We choose four possible
scenarios: an equation of state with abrupt transitions
($\rho_{DE}=10,\Delta_{DE}=\Delta_{RM}=1,\rho_{RM}=20$), one
with moderate transitions
($\rho_{DE}=20,\Delta_{DE}=\Delta_{RM}=5,\rho_{RM}=40$), one
with gradual transitions
($\rho_{DE}=25,\Delta_{DE}=\Delta_{RM}=12,\rho_{RM}=80$), and
one with almost no transitions at all
($\rho_{DE}=37,\Delta_{DE}=\Delta_{RM}=20,\rho_{RM}=92$).}
\label{fig:omega:of:rho}
\end{center}
\end{figure}
The evolution of $a(t)$ for these four choices of $\omega(\rho)$ is
shown in figure \ref{fig:a:varying:omega}, where we also show the
evolution of $\rho$ and $p$ on the same plots. (We have rescaled
$a(t)$ so that it fits nicely with the other variables.) In these
plots, we can see hints of each of the three analytically predicted
epochs of growth. Initially $\ddot{a} < 0$, but it transitions to
$\ddot{a}>0$.
Surprisingly, so long as we allow $\omega(\rho)$ to
vary from $+1/3$ to $0$ to $-1$, $a(t)$ seems remarkably resilient to
details such as the abruptness of the transition or the precise
location of the transition region. Indeed, when we plotted all four
scale factor functions on the same scale, they overlapped completely.
We postulate that this unexpected resilience is due to the $a^{-2}$
term in the formula for $\dot{\rho}$. Although different fixed values
of $\omega$ drive $a(t)$ at different rates, $a(t)$ in turn drives
$\rho(t)$ into a regime that balances out this change. This stability
is a boon to cosmologists, since it means that, for the purpose of
prediction, a piecewise function composed of the analytic solutions is
sufficient---we don't need to look for a more detailed or accurate
equation of state.
\begin{figure}[htb]
\begin{center}
\leavevmode
\hspace{-.2cm}
\includegraphics[width=8.35cm]{figures/multi_regime_abrupt_transition.png}
\hspace{-0.3cm}
\includegraphics[width=8.35cm]{figures/multi_regime_moderate_transition.png}\\
\vspace{0.2cm}
\hspace{-.2cm}
\includegraphics[width=8.35cm]{figures/multi_regime_mild_transition.png}
\hspace{-0.3cm}
\includegraphics[width=8.35cm]{figures/multi_regime_no_transition.png}
\caption[$a(t)$ with a variable $\omega(\rho)$]{$a(t)$ with a
variable $\omega(\rho)$. We plot the scale factor for the four
equations of state shown in figure \ref{fig:omega:of:rho}.}
\label{fig:a:varying:omega}
\end{center}
\end{figure}
\subsection{$\omega \approx -1.2$}
\label{subsec:label:omega:1.2}
The Pan-STARRS1 survey \cite{Pan-STARRS1} predicts that $\omega
\approx -1.2$, which corresponds to a very strange type of matter
indeed. When $\omega=-1$, the equation of state is that of the
cosmological constant term, and the energy density $\rho$ stays
constant as the scale factor changes. If
$\omega<-1$, then the energy density must \textit{increase} with the
scale factor! Since we expect energy density to \textit{dilute} with
increased scale factor \cite{Carroll,Kempf}, this is decidedly odd!
We can test this hypothesis with our FLRW simulation code. And,
indeed, the energy density slowly grows with scale factor, as shown in
figure \ref{fig:omega:1.2}. The result is that the scale factor grows
with time more quickly than an exponential. We fit $\ln(a(t))$ to a
polynomial and find that a cubic fits well. Figure \ref{fig:omega:1.2}
shows the results of one such fit.
\begin{figure}[htb]
\begin{center}
\leavevmode
\hspace{-.2cm}
\includegraphics[width=8.2cm]{figures/our_future_universe_all_variables_rho035.png}
\hspace{-0.3cm}
\includegraphics[width=8.3cm]{figures/our_future_universe_fit_rho035.png}
\caption[Our Future Universe?]{Our future universe? We solve for
the universe when $\omega \approx -1.2$. Left: Scale factor,
density, and pressure as a function of time. Right: A fit to
equation $a(t) = \exp(a_0 t^3 + b t^2 + c t + d)$. In both
cases, $\rho(0) = 0.35$.}
\label{fig:omega:1.2}
\end{center}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In summary, given that the universe is foliated by homogeneous and
isotropic hypersurfaces, we have re-derived how to solve for the
evolution of space as a function of time
\cite{Carroll,MisnerThorneWheeler,Wald,Kempf}. We have also used a
fourth-order Runge-Kutta-Fehlberg method \cite{Fehlberg,RKF45} to
numerically solve for the universe given these evolution equations. We
have reproduced the analytically known epochs of evolution and solved
for the universe as it transitions between analytically known
radiation-dominated, matter-dominated, and dark energy-dominated
regimes. Finally, we have solved for a universe dominated by an
equation of state where $\omega<-1$ and the energy density increases
slowly with scale factor. In this case, we discovered faster than
exponential growth, which agrees qualitatively with expectations.
It is a pleasant surprise to find that the evolution of the universe
is extremely resilient to the choice of equation of state,
$p=\omega(\rho)\rho$, so long as $\omega(\rho)$ varies smoothly and
monotonically from $\omega=-1$ to $\omega=+1/3$. This means that it is
sufficient for cosmologists to study the evolution of the universe
epoch by epoch, and not worry too much about the transitional periods.
We would have liked to study the big bounce scenario wherein the scale
factor shrinks to some minimum value and then grows
monotonically. This is certainly analytically feasible. In the case of
a positively curved spacelike foliation dominated by dark energy, for
example, we analytically attain de Sitter space, where $a(t)\approx
\cosh(t)$. Unfortunately, $\dot{a}$ must pass through zero in this scenario,
and this condition makes the first-order version of the Friedmann
equations singular. A numerical treatment of the big bounce seems
difficult if we continue to treat it as an initial value problem. A
better approach might be to treat the big bounce scenario as a
boundary value problem with initial and final values of $a(t)$ and a
negative value for $\dot{a}(t)$. In general, the numerical methods to
treat initial value problems are very different from those used to
treat boundary value problems \cite{Heath}. Thus it would be necessary
to implement another approach (e.g., finite elements) anew. This would
be an interesting future project.
%Bibliography
\bibliography{frlw_project}
\bibliographystyle{hunsrt}
\end{document}
%%!TEX TS-program = latex
\documentclass[11pt]{article} %DIF >
\usepackage{etex}
\usepackage[utf8]{inputenc}
\input ../AuxFiles/PreambleNotes.tex
\begin{document}
\onehalfspace
\vspace*{\fill}
\begingroup
\centering
\Large {\scshape Introduction to Statistics}\\
(Lectures 7-8: Estimation)
\endgroup
\vspace*{\fill}
\newpage
\section{Estimation}
{\scshape Overview:} \noindent Lectures 7-8 will focus on \emph{estimation}. As we have mentioned before, a strategy in the estimation problem is an \emph{estimator}. The loss function used for this problem is the squared distance between the estimated value and the true parameter. This loss gives rise to the popular \emph{mean squared error} performance criterion.
We will analyze two commonly used estimation strategies---Maximum Likelihood (ML) and Posterior Mean estimators---in the context of the homoskedastic Normal regression model with known variance; i.e.,
\begin{equation} \label{equation:Regression-model}
Y \sim \mathcal{N}(X \beta, \sigma^2 \mathbb{I}_n ),
\end{equation}
where $Y$ is the $n \times 1$ vector of outcome variables, $X$ is the $n \times k$ matrix of non-stochastic regressors, and $\beta \in \mathbb{R}^{k}$ is the parameter of interest.
We will show that if $k \leq n$ and $X$ has full-column rank, the Maximum Likelihood estimator for the model in (\ref{equation:Regression-model}) is the OLS estimator
\[ \widehat{\beta}_{\textrm{OLS}} \equiv (X^{\prime}X)^{-1} X^{\prime}Y, \]
\noindent (which we introduced in the last problem set). We will show this estimator is ``best'' among all \emph{unbiased} estimators and we will connect this result to the \emph{Cram\'er-Rao} bound for the variance of estimators in parametric models. We will also argue that ``best’’ among the class of unbiased estimators need not imply admissibility with respect to all estimators. In particular, we show that if $k \geq 3$, the OLS estimator is dominated and we present an ``empirical Bayes’’ estimator that dominates it.
The posterior mean estimator for the regression model assuming $\beta \sim \mathcal{N}_{k}(0 , (\sigma^2/ \lambda) \mathbb{I}_k )$ is the popular \emph{Ridge estimator}
\[ \widehat{\beta}_{\textrm{Ridge}} \equiv (X^{\prime}X + \lambda \mathbb{I}_k)^{-1} X^{\prime}Y. \]
The Ridge estimator is well-defined regardless of the number of covariates and it is admissible by construction. The Ridge estimator provides a simple example of ``shrinkage’’ or ``regularization’’. A practical concern is that the Ridge estimator is biased, and the bias depends on the choice of the prior hyperparameters and the true parameter at which we evaluate the bias.
One result that we will not cover in the notes, but that is important to keep in mind is that despite the inadmissibility of the OLS estimator for the \emph{full} vector of coefficients $\beta$, OLS is admissible for the problem in which we are interested in estimating only one regression coefficient but controlling for other variables.
\subsection{Quadratic Loss}
Let $\mathcal{A} =\Theta = \mathbb{R}^{k}$. The loss function we will work with is the so-called quadratic loss
\[ \mathcal{L}(a, \beta) = || a- \beta ||^2 = (a-\beta)^{\prime}(a-\beta) = \sum_{j=1}^{k} (a_j - \beta_j)^2, \]
which measures the squared distance between the estimator $\widehat{\beta}$ and the parameter $\beta$.
The risk of any estimator $\widehat{\beta}$ (which is a map from data $(Y,X)$ to $\Theta$) is given by
\begin{equation} \label{equation:MSE}
\mathbb{E}_{\mathbb{P}_{\beta}}[ \mathcal{L}(\widehat{\beta}, \beta) ] = \mathbb{E}_{\mathbb{P}_{\beta}}[ (\widehat{\beta}- \beta)’(\widehat{\beta}-\beta) ].
\end{equation}
Equation (\ref{equation:MSE}) is referred to as \emph{the mean-squared estimation error} at $\beta$. We now present a simple algebraic decomposition of the mean squared error in terms of the ``bias’’ and ``variance’’ of the estimator $\widehat{\beta}$. Assuming that $\overline{\beta} \equiv \mathbb{E}_{\beta}[ \widehat{\beta} ]$ is finite, we define the \underline{bias} of $\widehat{\beta}$ at $\beta$ as
$$ B_\beta(\widehat{\beta}) = \overline{\beta} - \beta.$$
If the covariance matrix of $\widehat{\beta}$---denoted $\textrm{V}_{\beta} (\widehat{\beta})$---is also finite, the mean squared-error can be written as
\begin{eqnarray*}
\mathbb{E}_{\mathbb{P}_{\beta}}[ (\widehat{\beta}- \beta)’(\widehat{\beta}-\beta) ] &=& \mathbb{E}_{\mathbb{P}_{\beta}}[ (\widehat{\beta}-\overline{\beta} +\overline{\beta} - \beta)’(\widehat{\beta}-\overline{\beta} +\overline{\beta} -\beta) ], \\
&=& (\overline{\beta}-\beta)’(\overline{\beta}-\beta) + \mathbb{E}_{\mathbb{P}_{\beta}}[ (\widehat{\beta}-\overline{\beta} )’(\widehat{\beta}-\overline{\beta}) ] \\
&=& ||B_\beta(\widehat{\beta}) ||^2 + \textrm{tr} \left( \textrm{V}_{\beta} (\widehat{\beta}) \right),
\end{eqnarray*}
where $\textrm{tr}(\cdot)$ is the trace operator. The decomposition is fairly straightforward, but it highlights the fact that the bias and variance of the estimator fully determine its risk whenever the loss is quadratic. Also, the correlation between any of the components of $\widehat{\beta}$ is not relevant for the risk calculation.
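As a simple illustration, consider the (rather extreme) constant estimator $\widehat{\beta} \equiv c$ for some fixed $c \in \mathbb{R}^{k}$, which ignores the data altogether. Its variance is zero, so its risk is pure squared bias:
\[ B_\beta(\widehat{\beta}) = c - \beta, \qquad \textrm{V}_{\beta}(\widehat{\beta}) = 0, \qquad \mathbb{E}_{\mathbb{P}_{\beta}}[ \mathcal{L}(\widehat{\beta}, \beta) ] = ||c - \beta||^2. \]
The risk is excellent near $\beta = c$ and arbitrarily bad far from it, which is why we care about how the risk behaves over the entire parameter space.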
\subsection{Maximum Likelihood Estimation}
According to model (\ref{equation:Regression-model}) the vector of outcome variables is a multivariate normal with parameters $X\beta$ and $\sigma^2 \mathbb{I}_n$ and thus has a p.d.f given by
\begin{equation}\label{equation:Regression-pdf}
f( Y | \beta , X ) = \frac{1}{(2 \pi \sigma^2)^{n/2}} \exp \left( -\frac{1}{2 \sigma^2} (Y - X \beta)’ (Y-X\beta) \right).
\end{equation}
This is true regardless of whether we have very many covariates or not. For a fixed realization of the data $(Y,X)$, define the likelihood function, $L(\beta; (Y,X))$, as the value attained by the p.d.f. in (\ref{equation:Regression-pdf}) viewed as a function of the parameter $\beta$; that is
\[ L(\beta, (Y,X)) \equiv f(Y | \beta, X ). \]
The maximum likelihood estimator at data $(Y,X)$ is then defined as the value of $\beta$ that maximizes the likelihood; that is
\[ \widehat{\beta}_{\textrm{ML}} \equiv \textrm{argmax}_{\beta \in \mathbb{R}^{k}} L(\beta, (Y,X)). \]
\noindent The likelihood function implied by (\ref{equation:Regression-pdf}) is decreasing in $(Y-X\beta)^{\prime}(Y-X\beta)$, a term which is usually referred to as the sum of squared residuals. Therefore, maximizing the likelihood is equivalent to solving the \emph{least-squares} problem
\[ \textrm{min}_{\beta \in \mathbb{R}^{k}} (Y-X\beta)’(Y-X\beta). \]
The first-order conditions for the program above, which are necessary and sufficient, yield
\[ X’(Y-X\widehat{\beta}_{\textrm{ML}}) = \textbf{0}_{k \times 1} \iff X’Y = (X’X) \widehat{\beta}_{\textrm{ML}}. \]
\noindent If $k > n$, there are infinitely many solutions that maximize the likelihood. If $k \leq n$ and $X$ has full-column rank, there is a unique solution to this problem given by
\begin{equation} \label{equation:OLS}
\widehat{\beta}_{\textrm{ML}} = \widehat{\beta}_{\textrm{OLS}} = (X’X)^{-1} X’Y.
\end{equation}
\noindent The bias of the maximum likelihood estimator is
\begin{equation} \label{equation:bias}
\textrm{B}_{\beta}(\widehat{\beta}_{\textrm{ML}}) = \mathbb{E}_{\mathbb{P}_{\beta}} [ (X’X)^{-1} X’Y ] - \beta = \mathbf{0}_{k \times 1}.
\end{equation}
for any $\beta$. Thus, we say that the ML estimator of $\beta$ is \underline{unbiased}.\footnote{An estimator $\widehat{\beta}$ is unbiased if $\mathbb{E}_{\beta} [\widehat{\beta}] = \beta$ for all $\beta \in \Theta$.}
The variance of the ML estimator is
\begin{equation} \label{equation:variance}
\mathbb{V}_{\beta} (\widehat{\beta}_{\textrm{ML}}) = \mathbb{E}_{\mathbb{P}_{\beta}} [ (\widehat{\beta}_{\textrm{ML}}-\beta) (\widehat{\beta}_{\textrm{ML}} -\beta )^{\prime} ] = (X’X)^{-1} X’ \mathbb{E}_{\mathbb{P}_{\beta}} [ (Y-X\beta) (Y-X\beta)’] X (X’X)^{-1} = \sigma^2 (X’X)^{-1}.
\end{equation}
This means that the mean squared error at $\beta$---denoted $\textrm{MSE}(\beta ; \widehat{\beta}_{\textrm{ML}})$---equals
\[ \sigma^2 \textrm{tr} \left ( (X’X)^{-1} \right). \]
\noindent for any $\beta$. Consequently, an interesting property of the ML/OLS estimator is that its risk function is constant over the parameter space.
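For example, in the special case of an orthogonal design scaled so that $X^{\prime}X = n \, \mathbb{I}_k$, the risk reduces to
\[ \textrm{MSE}(\beta ; \widehat{\beta}_{\textrm{ML}}) = \sigma^2 \textrm{tr}\left( (n \mathbb{I}_k)^{-1} \right) = \frac{\sigma^2 k}{n}, \]
which does not depend on $\beta$ and decreases at rate $1/n$.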
\noindent
\subsection{Posterior Mean under a Normal Prior}
In this subsection we analyze the Bayes estimator of the parameter $\beta$. We have already shown that any Bayes rule can be obtained by minimizing posterior loss at each data realization. Let $\pi$ denote a prior over the parameter $\beta$. In our set-up the posterior loss of an action $a$ is
\[ \mathbb{E}_{\pi} [ \: || a - \beta ||^2 \: | \: (Y,X) \: ]. \]
Using the same argument that we used to decompose the mean squared-error in terms of bias and variance we can show that
\begin{eqnarray*}
\mathbb{E}_{\pi} [ \: || a - \beta ||^2 \: | \: (Y,X) \: ] &=& \mathbb{E}[ \: || a - \mathbb{E}_{\pi} [ \: \beta \: | \: (Y,X) \: ] + \mathbb{E}_{\pi} [ \beta \: | \: (Y,X) ] - \beta ||^2 \: | \: (Y,X) \: ], \\
&=& || a - \mathbb{E}_{\pi} [ \beta \: | \: (Y,X) ] ||^2 + \textrm{tr} \left( \mathbb{V}_{\pi} ( \beta \: | \: (Y,X) ) \right).
\end{eqnarray*}
This shows that, regardless of the specific prior we pick, the Bayes estimator is the posterior mean of $\beta$
\[ \widehat{\beta}_{\textrm{Bayes}} = \mathbb{E}_{\pi} [ \: \beta \: | \: (Y,X) \: ]. \]
Having established that, under quadratic loss, the Bayes estimator is the posterior mean, we now commit to a specific prior.\\
{\scshape Posterior Mean under a Normal Prior:} Consider then the following prior on $\beta$:
\begin{equation}
\beta \sim \pi (\beta) \equiv \mathcal{N}_{k}( \beta_0 \: , \: \sigma^2 V^{-1}).
\end{equation}
\noindent The prior assumes that the coefficients are jointly normal, centered at the vector $\beta_0$, with covariance matrix given by $\sigma^2 V^{-1}$. There is typically no magical recipe to select a prior. More often than not, the selection of a prior trades off interpretability against convenience of implementation.
We derive the posterior distribution of $\beta$. One way of deriving this posterior distribution is by an application of Bayes Theorem
\begin{equation*}
\pi(\beta \: | \: \sigma^2, y, X ) = \frac{f(Y \: | \: \beta, X) \pi(\beta ) }{\int_{\Theta} f(Y, \: | \: \beta, X) \pi (\beta ) d \beta}.
\end{equation*}
\noindent The posterior is thus proportional to the likelihood times the prior, both of which are Gaussian.
\noindent Consequently, $f(Y | X, \beta, \sigma^2 )\: \pi(\beta | \sigma^2 )$ is, up to a constant that does not depend on $\beta$, proportional to
\begin{equation} \label{equation:posterior}
\exp \left( -\frac{1}{2\sigma^2} (Y-X\beta)^{\prime} (Y-X \beta) \right) \exp \left(-\frac{1}{2 \sigma^2} (\beta-\beta_0)^{\prime} V (\beta-\beta_0) \right).
\end{equation}
\noindent The expression above equals:
\[ \exp \left( -\frac{1}{2\sigma^2} Y^{\prime} Y \right) \exp \left( - \frac{1}{2 \sigma^2 } \beta^{\prime} \left(V + X^{\prime}X \right) \beta + \frac{1}{\sigma^2}(X^{\prime}Y + V\beta_0)^{\prime} \beta \right). \]
Completing the square and ignoring all the terms that do not have $\beta$ on them, gives the posterior distribution as a constant times the exponential of:
\begin{equation*}
-\frac{1}{2 \sigma^2} \left( \beta - \left( V + X^{\prime} X \right)^{-1} (X^{\prime} Y+ V \beta_0) \right)^{\prime} \left(V+ X^{\prime}X \right) \left(\beta - ( V + X^{\prime} X )^{-1} (X^{\prime} Y+ V \beta_0) \right).
\end{equation*}
This implies that:
\begin{equation}
\beta | Y, X \sim \mathcal{N}_{k} \left( \left( V + X’X \right)^{-1} (X^{\prime} Y+V\beta_0) \: , \: \sigma^2 \left(V + X^{\prime} X \right)^{-1} \right).
\end{equation}
This means that the Bayesian Estimator of $\beta$ given the Gaussian prior $\pi(\beta)$ is:
\[ \widehat{\beta}_{\textrm{Bayes}} \equiv \left( V + X^{\prime} X \right)^{-1} \left( X^{\prime} Y + V \beta_0 \right). \]
\noindent The posterior mean estimator is well defined regardless of the number of
covariates.\footnote{If $V$ is positive definite, then the matrix $V+ X^{\prime}X$ is
  positive definite as well, and thus invertible.} The posterior mean estimator
for $V= \lambda \mathbb{I}_{k}$ and $\beta_0 = 0$ is called the Ridge estimator
\[ \widehat{\beta}_{\textrm{Ridge}} \equiv (X^{\prime}X + \lambda \mathbb{I}_{k})^{-1} X^{\prime}Y, \]
\noindent which, by construction, is admissible. The Ridge estimator is biased:
\begin{eqnarray*}
\mathbb{E}_{\mathbb{P}_{\beta}} [ \widehat{\beta}_{\textrm{Ridge}} ] - \beta &=& (X’X + \lambda \mathbb{I}_{k})^{-1} X’X\beta - \beta, \\
&=& (X’X + \lambda \mathbb{I}_{k})^{-1} (X’X\beta - (X’X + \lambda \mathbb{I}_{k}) \beta), \\
&=& -\lambda (X’X + \lambda \mathbb{I}_k)^{-1} \beta.
\end{eqnarray*}
\noindent and the magnitude of the bias depends on $\beta$. The variance of the Ridge estimator is
\[ \mathbb{V}_{\beta} (\widehat{\beta}_{\textrm{Ridge}}) = \sigma^2( X^{\prime}X + \lambda \mathbb{I}_k)^{-1} X^{\prime}X ( X^{\prime}X + \lambda \mathbb{I}_k)^{-1}. \]
\noindent When $k<n$, the trace of this variance has to be smaller than that of the ML/OLS estimator.
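To see the bias-variance trade-off in the simplest setting, suppose $X^{\prime}X = \mathbb{I}_k$, $V = \lambda \mathbb{I}_k$, and $\beta_0 = 0$. Then the bias and variance computed above become
\[ B_\beta(\widehat{\beta}_{\textrm{Ridge}}) = -\frac{\lambda}{1+\lambda} \beta, \qquad \mathbb{V}_{\beta}(\widehat{\beta}_{\textrm{Ridge}}) = \frac{\sigma^2}{(1+\lambda)^2} \mathbb{I}_k, \]
so that
\[ \textrm{MSE}(\beta ; \widehat{\beta}_{\textrm{Ridge}}) = \frac{\lambda^2 ||\beta||^2 + \sigma^2 k}{(1+\lambda)^2}, \]
which is smaller than the constant OLS risk $\sigma^2 k$ whenever $\lambda^2 ||\beta||^2 < (2\lambda + \lambda^2) \sigma^2 k$; in particular, for any fixed $\beta$ this holds for all sufficiently small $\lambda > 0$.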
{\scshape Posterior Mean as a Penalized/Regularized OLS estimator:} \noindent Equation (\ref{equation:posterior}) is decreasing as a function of the \emph{penalized} sum of squared residuals
\[ (y-X\beta)^{\prime} (y-X \beta) + (\beta-\beta_0)^{\prime} V (\beta-\beta_0). \]
\noindent Under the Gaussian posterior the posterior mean and posterior mode are the same. Therefore, another way of deriving the posterior mean estimator is by finding a solution to the problem:
\[ \min_{\beta} (y-X\beta)^{\prime} (y-X \beta) + (\beta-\beta_0)^{\prime} V (\beta-\beta_0) \]
The first-order conditions are
\[ X^{\prime}(Y-X\beta) - V(\beta - \beta_0) = \textbf{0}_{k \times 1} \iff X^{\prime}Y + V \beta_0 = (X^{\prime}X + V) \beta. \]
\noindent Solving for $\beta$ gives the estimator $\widehat{\beta}_{\textrm{Bayes}}$.
\subsection{Optimality of the OLS estimator among unbiased estimators}
In this section we will show that the OLS estimator minimizes risk among the class of all unbiased estimators, provided some regularity conditions are met.
\begin{proposition}(Optimality of OLS) Consider the Normal regression model in (\ref{equation:Regression-model}). Let $\widehat{\beta}$ be an estimator of $\beta$ for which
\[ \frac{\partial}{\partial \beta} \int_{\mathbb{R}^{k}} \widehat{\beta} f(Y | \beta; X ) dY = \int_{\mathbb{R}^{k}} \widehat{\beta} \Big ( \frac{\partial}{\partial \beta} f(Y | \beta , X) \Big)’ dY, \]
at any $\beta$ in the parameter space. If $\widehat{\beta}$ is unbiased, then
\[ \mathbb{V}_{\beta} ( \widehat{\beta} ) - \mathbb{V}_{\beta} (\widehat{\beta}_{\textrm{OLS}}) \]
is positive semi-definite, implying that the mean squared error of $\widehat{\beta}$ is at least as large as that of the OLS estimator everywhere on the parameter space.
\end{proposition}
\begin{proof}
We would like to show that for any $c \in \mathbb{R}^{k}$, $c \neq 0$,
\[ c’ \left( \mathbb{V}_{\beta} ( \widehat{\beta} ) - \mathbb{V}_{\beta} (\widehat{\beta}_{\textrm{OLS}}) \right) c \geq 0. \]
To do this, we start by computing the variance of $c^{\prime}\widehat{\beta} - c^{\prime} \widehat{\beta}_{\textrm{OLS}}$,
which, by definition of variance, has to be nonnegative. Since $\widehat{\beta}$ and $\widehat{\beta}_{\textrm{OLS}}$ are both unbiased,
\begin{eqnarray*}
\mathbb{V}_{\beta} \left( c’\widehat{\beta} - c’ \widehat{\beta}_{\textrm{OLS}} \right)&=& \mathbb{V}_{\beta} (c’\widehat{\beta}) + \mathbb{V}_{\beta} (c’\widehat{\beta}_{\textrm{OLS}}) - 2 \textrm{cov}_{\beta} ( c’\widehat{\beta} , c’ \widehat{\beta}_{\textrm{OLS}} ), \\
&& \textrm{(since $\textrm{Var}(X-Y) = \textrm{Var}(X) + \textrm{Var}(Y)- 2 \textrm{cov}(X,Y)$)}\\
&=& c’ \mathbb{V}_{\beta}(\widehat{\beta}) c + c’ \mathbb{V}_{\beta}(\widehat{\beta}_{\textrm{OLS}}) c - 2 \mathbb{E}_{\mathbb{P}_{\beta}} \left[ c’ (\widehat{\beta}-\beta) (\widehat{\beta}_{\textrm{OLS}}-\beta)’ c \right] \\
&& \textrm{(using the fact that both estimators are unbiased)}. \\
&=& c’ \mathbb{V}_{\beta}(\widehat{\beta}) c + c’ \mathbb{V}_{\beta}(\widehat{\beta}_{\textrm{OLS}}) c - 2 c’ \mathbb{E}_{\mathbb{P}_{\beta}} \left[ \widehat{\beta} (\widehat{\beta}_{\textrm{OLS}}-\beta)’ \right] c.
\end{eqnarray*}
The last term involves the covariance between $\widehat{\beta}$ and the OLS estimation error. Using the definition of OLS
\[ \widehat{\beta}_{\textrm{OLS}}- \beta = (X’X)^{-1} X’(Y-X\beta), \]
and the definition of the Gaussian p.d.f. $f(Y | \beta,X)$, we can verify that
\[\widehat{\beta}_{\textrm{OLS}}- \beta = \sigma^2 (X’X)^{-1} \frac{\partial}{\partial \beta} \ln f(Y | \beta, X ). \]
Consequently
\begin{eqnarray*}
\mathbb{E}_{\mathbb{P}_{\beta}} \left[ \widehat{\beta} (\widehat{\beta}_{\textrm{OLS}}-\beta)’ \right] &=& \sigma^2 \mathbb{E}_{\mathbb{P}_{\beta}} \left[ \widehat{\beta} \left( \frac{\partial}{\partial \beta} \ln f(Y | \beta, X )\right)’ \right] (X’X)^{-1}, \\
&=& \sigma^2 \left( \int_{\mathbb{R}^{k}} \widehat{\beta} \left( \frac{\partial}{\partial \beta} \ln f(Y | \beta, X )\right)’ f(Y | \beta , X) dY \right) (X’X)^{-1} \\
&=& \sigma^2 \left( \int_{\mathbb{R}^{k}} \widehat{\beta} \left( \frac{\partial}{\partial \beta} f(Y | \beta, X )\right)’ dY \right) (X’X)^{-1}\\
&& \textrm{(where we have used the chain rule and $\partial \ln x/ \partial x = 1/x$)}\\
&=& \sigma^2 \frac{\partial }{\partial \beta }\left( \int_{\mathbb{R}^{k}} \widehat{\beta} f(Y | \beta, X )\right) (X’X)^{-1}\\
&& (\textrm{by assumption}) \\
&=& \sigma^2 \left( \frac{\partial }{\partial \beta } \mathbb{E}_{\mathbb{P}_{\beta}}[\widehat{\beta}] \right) (X’X)^{-1}\\
&=& \sigma^2 (X’X)^{-1}\\
&& \textrm{(since $\widehat{\beta}$ is unbiased).} \\
&=& \mathbb{V}_{\beta}(\widehat{\beta}_{\textrm{OLS}}).
\end{eqnarray*}
Hence,
\[ 0 \leq \mathbb{V}_{\beta} \left( c’\widehat{\beta} - c’ \widehat{\beta}_{\textrm{OLS}} \right) = c’ \mathbb{V}_{\beta}(\widehat{\beta}) c - c’ \mathbb{V}_{\beta}(\widehat{\beta}_{\textrm{OLS}}) c, \]
for every $\beta$. The result then follows.
\end{proof}
\subsection{Suboptimality of the OLS estimator}
If $k \geq 3$, the OLS estimator is dominated. In this section we present an estimator that dominates OLS.
Assume that $X^{\prime}X = \mathbb{I}_k$. Start with a Bayesian estimator for $\beta$ under the normal prior $\beta \sim \mathcal{N}_{k}(0, v \mathbb{I}_k ) = \mathcal{N}_{k}(0, \sigma^2 (v/\sigma^2) \mathbb{I}_k ) $. We have shown that such a Bayes estimator is
\begin{eqnarray*}
\widehat{\beta}_{\textrm{Bayes}} &=& \left( \mathbb{I}_k + \frac{\sigma^2}{v} \mathbb{I}_k \right)^{-1} \widehat{\beta} _{\textrm{OLS}}, \\
&=& \left( \frac{v}{v+\sigma^2} \right) \widehat{\beta}_{\textrm{OLS}}, \\
&=& \left(1 - \frac{\sigma^2}{v+\sigma^2} \right) \widehat{\beta}_{\textrm{OLS}}
\end{eqnarray*}
The hyperparameter $v$ ``shrinks’’ the OLS estimator towards the prior mean. Instead of picking $v$ \emph{a priori}, it is possible to use data to estimate it. Such an approach is usually called ``empirical Bayes’’.
The distribution of the data conditional on the parameter is
\[ \widehat{\beta}_{\textrm{OLS}} \sim \mathcal{N}_{k}(\beta, \sigma^2 \mathbb{I}_k). \]
This conditional distribution, along with the prior, specify a full joint distribution over $(\widehat{\beta}_{\textrm{OLS}}, \beta)$. The marginal distribution of the data is
\[\widehat{\beta}_{\textrm{OLS}} \sim \mathcal{N}_{k}\left ( 0 , (\sigma^2 + v) \mathbb{I}_k \right). \]
We can use this statistical model to estimate the ``shrinkage’’ factor that appears in the Bayes estimator. Note that
\[ ||\widehat{\beta}_{\textrm{OLS}} ||^2 \sim (\sigma^2 + v) \chi^2_{k}, \]
Therefore, standard results for the mean of the inverse of a chi-square distribution yield
\[\mathbb{E} \left[ \frac{k-2 }{||\widehat{\beta}_{\textrm{OLS}} ||^2} \right] = \frac{1}{v+\sigma^2}, \]
provided $k \geq 3$. An unbiased estimator for the ``shrinkage’’ factor that appears in the formula of $\widehat{\beta}_{\textrm{Bayes}}$ is thus:
\[\left(1 - \frac{\sigma^2 (k-2)}{||\widehat{\beta}_{\textrm{OLS}} ||^2} \right). \]
We now show that the ``empirical Bayes’’ estimator
\[\widehat{\beta}_{JS} \equiv \left(1 - \frac{\sigma^2 (k-2)}{||\widehat{\beta}_{\textrm{OLS}} ||^2} \right) \widehat{\beta}_{\textrm{OLS}}, \]
dominates OLS. This estimator was first proposed by Willard James and Charles Stein in 1961. The estimator is typically referred to as the James-Stein estimator.
\begin{proposition}
Suppose $X’X = \mathbb{I}_{k}$ and $\sigma^2=1$. If $k \geq 3$, the James-Stein estimator dominates the ML/OLS estimator.
\end{proposition}
\begin{proof}
\[ || \widehat{\beta}_{\textrm{JS}} - \widehat{\beta}_{\textrm{OLS}} ||^2 = || \widehat{\beta}_{\textrm{JS}} - \beta ||^2 + || \beta -\widehat{\beta}_{\textrm{OLS}} ||^2 + 2 ( \widehat{\beta}_{\textrm{JS}} - \beta )’ (\beta -\widehat{\beta}_{\textrm{OLS}}) \]
implies
\[|| \widehat{\beta}_{\textrm{JS}} - \beta ||^2 = || \widehat{\beta}_{\textrm{JS}} - \widehat{\beta}_{\textrm{OLS}} ||^2 - || \beta -\widehat{\beta}_{\textrm{OLS}} ||^2 + 2 ( \widehat{\beta}_{\textrm{JS}} - \beta )’ (\widehat{\beta}_{\textrm{OLS}}- \beta). \]
Taking expectation on both sides yields
\begin{eqnarray*}
\mathbb{E}_{\mathbb{P}_{\beta}} \left[ || \widehat{\beta}_{\textrm{JS}} - \beta ||^2 \right] & = & \mathbb{E}_{P_{\beta}} \left[ || \widehat{\beta}_{\textrm{JS}} - \widehat{\beta}_{\textrm{OLS}}||^2 \right] \\
&-& \mathbb{E}_{\mathbb{P}_{\beta}} \left[ || \widehat{\beta}_{\textrm{OLS}}- \beta ||^2 \right] \\
&+& 2 \mathbb{E}_{\mathbb{P}_{\beta}} \left[ (\widehat{\beta}_{\textrm{JS}} -\beta)’( \widehat{\beta}_{\textrm{OLS}}- \beta) \right].
\end{eqnarray*}
Algebra shows that
\[\widehat{\beta}_{JS} - \widehat{\beta}_{\textrm{OLS}} = -\left( \frac{(k-2)}{||\widehat{\beta}_{\textrm{OLS}} ||^2} \right) \widehat{\beta}_{\textrm{OLS}}. \]
Therefore,
\[ \mathbb{E}_{\mathbb{P}_{\beta}} \left[ ||\widehat{\beta}_{JS} -\beta ||^2 \right] = \mathbb{E}_{\mathbb{P}_{\beta}} \left[ \frac{(k-2)^2}{|| \widehat{\beta}_{\textrm{OLS}} ||^2} \right] - k + 2 \mathbb{E}_{\mathbb{P}_{\beta}} \left[ (\widehat{\beta}_{\textrm{JS}} -\beta)^{\prime}( \widehat{\beta}_{\textrm{OLS}}- \beta) \right]. \]
Also
\begin{eqnarray*}
\mathbb{E}_{\mathbb{P}_{\beta}} \left[ (\widehat{\beta}_{\textrm{JS}} - \beta)’( \widehat{\beta}_{\textrm{OLS}}- \beta) \right] &=& \sum_{j=1}^{k} \textrm{Cov}( \widehat{\beta}^j_{\textrm{JS}},\widehat{\beta}^j_{OLS} ).
\end{eqnarray*}
The sum of covariances can be shown to equal
\begin{eqnarray*}
\sum_{j=1}^{k} \mathbb{E}_{\mathbb{P}_{\beta}} \left[ \frac{\partial \widehat{\beta}^j_{JS} }{\partial \widehat{\beta}^j_{\textrm{OLS}}} \right] &=& k - \mathbb{E}_{\mathbb{P}_{\beta}} \left[ \frac{k (k-2)}{||\widehat{\beta}_{\textrm{OLS}} ||^2} \right] + \mathbb{E}_{\mathbb{P}_{\beta}} \left[ \frac{2(k-2)}{||\widehat{\beta}_{\textrm{OLS}} ||^2} \right] \\
&=& k - \mathbb{E}_{\mathbb{P}_{\beta}} \left[ \frac{(k-2)^2}{||\widehat{\beta}_{\textrm{OLS}} ||^2} \right].
\end{eqnarray*}
Therefore,
\[ \mathbb{E}_{\mathbb{P}_{\beta}} \left[ ||\widehat{\beta}_{\textrm{JS}} -\beta ||^2 \right] = k - \mathbb{E}_{\mathbb{P}_{\beta}} \left[ \frac{(k-2)^2}{||\widehat{\beta}_{\textrm{OLS}} ||^2} \right] < k =\mathbb{E}_{\mathbb{P}_{\beta}} \left[ ||\widehat{\beta}_{\textrm{OLS}} -\beta ||^2 \right], \]
where the inequality is strict because the subtracted expectation is positive for every $\beta$.
\end{proof}
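The risk comparison above can also be illustrated numerically. The following is a minimal Monte Carlo sketch, assuming $X'X=\mathbb{I}_k$, $\sigma^2=1$ and an arbitrary fixed $\beta$, so that $\widehat{\beta}_{\textrm{OLS}}\sim\mathcal{N}_{k}(\beta,\mathbb{I}_k)$; the particular $k$ and $\beta$ below are illustrative choices only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k, reps = 10, 20000
beta = np.full(k, 0.5)                               # illustrative true coefficient vector

ols = beta + rng.standard_normal((reps, k))          # OLS draws: N(beta, I_k)
shrink = 1.0 - (k - 2) / np.sum(ols**2, axis=1)      # James-Stein shrinkage factor
js = shrink[:, None] * ols

risk_ols = np.mean(np.sum((ols - beta)**2, axis=1))  # should be close to k
risk_js = np.mean(np.sum((js - beta)**2, axis=1))    # strictly smaller for k >= 3
print(risk_ols, risk_js)
\end{verbatim}
Consistent with the proposition, the simulated James-Stein risk falls below $k$ while the simulated OLS risk stays near $k$.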
\newpage
\bibliographystyle{../AuxFiles/ecta}
\bibliography{../AuxFiles/BibMaster}
\end{document}
\documentclass[11pt]{article}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{pifont}
\usepackage{setspace}
\usepackage{hyperref}
\usepackage{enumitem}
\usepackage{color}
\usepackage{sectsty}
% \usepackage{raisebox}
% \usepackage{geometry}
% \usepackage{fancyhdr}
% \pagestyle{fancy}
% \lfoot{\emph{CV: Christopher Harman after discussion with Dr. S. Flockton}}
\definecolor{darkblue}{rgb}{0.,0.,0.7}
\sectionfont{\color{darkblue}}
\subsectionfont{\color{darkblue}}
\oddsidemargin -0.25in
\evensidemargin 0.25in
\textwidth 7.0in
\headheight 0.0in
\topmargin -0.8in
%\textheight=9.5in
\textheight=10.0in
\onehalfspacing
\begin{document}
\thispagestyle{empty}
\title{Clear Cut: Algorithm Design}
% \author{Christopher Harman}
\maketitle
\tableofcontents
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%% INPUT DATA DESCRIPTION %%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Input data}
The input data is an image of arbitrary size, $M \times N$, where $M$ is the horizontal dimension and $N$ is the vertical dimension of the image. Most images have three channels (RGB = ``Red-Green-Blue''), which means the data is an array of three $M \times N$ arrays. Images with EXIF data are checked and reoriented so that the image is imported in the intended orientation, e.g. when the image is a photo taken with a phone camera.
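A minimal sketch of this import step, assuming the Pillow library is used (the project's actual loading code may differ):
\begin{verbatim}
import numpy as np
from PIL import Image, ImageOps

def load_image(path):
    """Load an image, apply any EXIF orientation, and return an RGB array."""
    img = Image.open(path)
    img = ImageOps.exif_transpose(img)      # rotate/flip according to EXIF metadata
    return np.asarray(img.convert("RGB"))   # shape (N, M, 3): three M x N channels
\end{verbatim}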
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%% IMAGE SIZE REDUCTION %%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Image size reduction}
To improve the efficiency of the edge detection algorithm, the image size is reduced so that the average image dimension is less than 500 pixels in length. The final image size to be fed into the algorithm is $M_{\text{eff}} \times N_{\text{eff}}$, where
\begin{equation}
\dfrac{M_{\text{eff}} + N_{\text{eff}}}{2}<500~\text{pxl} \text{.}
\label{im_reduction_condition}
\end{equation}
The image is currently reduced gradually by max pooling: the smallest kernel size is calculated and the max pooling is repeated until the condition in Equation~\ref{im_reduction_condition} is satisfied. This could be implemented more efficiently by calculating the smallest kernel that satisfies the condition in Equation~\ref{im_reduction_condition} after a single max-pooling pass.
As it currently exists, the process determines the smallest integer greater than one that divides the height (width) of the image exactly. If there is no such factor, i.e. the height (width) is a prime number, the image is cropped by removing the single pixel row (column) at the $M_{\text{eff}}^{\text{th}}$ ($N_{\text{eff}}^{\text{th}}$) index; this leaves that image dimension divisible by 2, which is then taken as the kernel size in that dimension. Throughout the image reduction process, the values ($M_{\text{eff}}$,~$N_{\text{eff}}$,~$M_{\text{kernel}}$,~$N_{\text{kernel}}$) are stored in a Python dictionary to keep a record of the image reduction history. A sketch of this loop is given below.
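The following Python sketch illustrates the reduction loop under the description above; it is a simplified stand-in, not the project's \texttt{edgeUtility.py} code, and the helper \texttt{smallest\_factor} is a hypothetical name.
\begin{verbatim}
import numpy as np

def smallest_factor(n):
    """Smallest integer factor > 1 of n, or None if n is prime."""
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return f
    return None

def reduce_image(channel, history=None):
    """Repeatedly max-pool a 2-D channel until its average dimension is below 500 px."""
    history = history if history is not None else {"steps": []}
    while (channel.shape[0] + channel.shape[1]) / 2.0 >= 500:
        kernel = []
        for axis, n in enumerate(channel.shape):
            f = smallest_factor(n)
            if f is None:                        # prime dimension: crop one row/column
                channel = channel[:n - 1, :] if axis == 0 else channel[:, :n - 1]
                f = 2
            kernel.append(f)
        kN, kM = kernel                          # kernel sizes along rows and columns
        N, M = channel.shape
        channel = channel.reshape(N // kN, kN, M // kM, kM).max(axis=(1, 3))
        history["steps"].append((channel.shape[1], channel.shape[0], kM, kN))
    return channel, history
\end{verbatim}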
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% EDGE DETECTION PROCEDURE %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Edge detection procedure}
\subsection{\label{sec:grad_img}The gradient image}
\begin{center}
\texttt{For Python code, refer to the {\bf traceObjectsInImage()} function in edgeUtility.py}
\end{center}
In order to determine the edges of an image, we must choose a criterion for deciding whether a single pixel located at $(i_0,j_0)$ is part of ``an edge''. We refer to such pixels as ``edge pixels'' or ``non-edge pixels'' in what follows. The decision is made by mathematically comparing the pixel's value to those of its surrounding pixels. The simplest method is to consider only the neighbouring pixels $(i,j)$, i.e. any combination of pixels that satisfy
\begin{equation}
i_0-1 \leq i \leq i_0+1\text{, and }j_0-1 \leq j \leq j_0+1\text{.}
\end{equation}
A more advanced technique could extend the range of neighbouring pixels considered. A pixel not located on the perimeter of the image has 8 neighbouring pixels.
The mathematical quantity we are interested in is the difference in value between a selected pixel and its neighbouring pixels. In the simple case, an edge pixel is identified as one having a large difference in value between itself and any adjacent neighbouring pixel. This simplest case works well for an image with sharp edges, but not so well for an image with blurry edges. With eight neighbouring pixels, we would have to calculate eight gradients per pixel. It turns out that we can be more efficient and calculate only four of them. This is because when a neighbouring pixel has its gradient calculated, the result has the same magnitude but the opposite sign, and the sign is irrelevant for determining the ``sharpness'' between pixels. The eight-gradient-per-pixel method would therefore double-count the gradients, so we need only calculate four gradients per pixel.
Thus for a generic pixel at $(i,j)$, the difference in value with respect to pixels $(i+1,j-1)$, $(i+1,j)$, $(i,j+1)$, and $(i+1,j+1)$ is calculated; these four directions cover one member of each antipodal pair of neighbours. The choice of directions is a convention and is not expected to make a difference to the final result. Note that not all gradients can be calculated if the pixel under consideration sits along the perimeter of the image. Since each pixel in the ``image space'' has four unique gradients, we obtain a ``gradient space'' of size $2M_{\text{eff}} \times 2N_{\text{eff}}$.
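As a rough illustration of this construction for a single channel (a hedged sketch, not the actual \texttt{traceObjectsInImage()} implementation; array indices are taken as (row, column)):
\begin{verbatim}
import numpy as np

def gradient_space(channel):
    """Four |differences| per pixel of an (N x M) channel, packed into a (2N x 2M) array."""
    chan = channel.astype(float)
    N, M = chan.shape
    grad = np.zeros((2 * N, 2 * M))
    # one direction from each antipodal pair: right, down, down-right, down-left
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for k, (dr, dc) in enumerate(directions):
        diff = np.zeros((N, M))
        r0, r1 = max(0, -dr), N - max(0, dr)    # rows where the shifted pixel exists
        c0, c1 = max(0, -dc), M - max(0, dc)    # columns where the shifted pixel exists
        diff[r0:r1, c0:c1] = np.abs(
            chan[r0:r1, c0:c1] - chan[r0 + dr:r1 + dr, c0 + dc:c1 + dc]
        )
        grad[k // 2::2, k % 2::2] = diff        # one corner of each pixel's 2x2 block
    return grad
\end{verbatim}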
\subsection{Dealing with image channels}
\subsubsection{Obtaining gradient images}
\begin{center}
\texttt{For Python code, refer to the {\bf traceObjectsInImage()} function in edgeUtility.py}
\end{center}
Each of the RGB channels has the edge detection algorithm specified in Section~\ref{sec:grad_img} applied to it. This is achieved by writing out the results to a single array of size $6M_{\text{eff}} \times 2N_{\text{eff}}$, where each channel's gradient image is offset as follows:
\begin{itemize}
\item the Red channel gradient array exists in the domain $(0,~2M_{\text{eff}}-1)$,
\item the Green channel gradient array exists in the domain $(2M_{\text{eff}},~4M_{\text{eff}}-1)$, and
\item the Blue channel gradient array exists in the domain $(4M_{\text{eff}},~6M_{\text{eff}}-1)$.
\end{itemize}
Recall that we could only reduce the number of gradients per pixel by ignoring the gradient's sign. Therefore, we must take care to add up the gradient images element-wise only AFTER taking the magnitude of each element. The initial step in edge detection is to determine whether an (rgb) pixel in the reduced image is, or is not, part of an edge; this turns each pixel into a ``yes'' or ``no'' (boolean) answer. We therefore set a numerical upper and lower limit on what is an edge pixel and what is not. We implement this through a variable parameter called \texttt{imCut}, whereby an edge pixel is one whose gradient lies between:
\begin{equation}
255\times \texttt{imCut}< \text{Gradient}_{\text{edge pixel}} < 255\times (1-\texttt{imCut})\text{.}
\end{equation}
We have found $\texttt{imCut} = 0.07 \pm 0.02$ to work effectively, and we have found that both the upper and lower limits are necessary to extract clear edge pixels for any image. The lower limit is understandable because it classifies adjacent pixels with too similar a colour as non-edge pixels; the upper limit is less trivial. We believe the upper limit is necessary to remove random fluctuations in light/darkness within an image. These may be the result of tiny defective regions of photographic film that cannot be seen by the human eye when the image is viewed at a scale larger than the size of the defect.
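A minimal sketch of this thresholding step (illustrative only; it assumes the gradient values are on the 0--255 scale used in the expression above):
\begin{verbatim}
import numpy as np

def edge_mask(gradients, im_cut=0.07):
    """Boolean edge mask: keep pixels whose gradient lies between the two imCut limits."""
    lower, upper = 255 * im_cut, 255 * (1 - im_cut)
    return (gradients > lower) & (gradients < upper)
\end{verbatim}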
\subsubsection{Returning detected edges in reduced image size}
\begin{center}
\texttt{For Python code, refer to the {\bf mergeChannelsTracedImage()} function in edgeUtility.py}
\end{center}
We now have three gradient space arrays. However, we only want one gradient space array to say whether a single (rgb) pixel in the reduced image is an edge or not. This means that we need to merge the gradient arrays into a single gradient array, and then reduce this gradient array to the same size as the reduced image. The idea behind the merge is that \textbf{the more channels that detect an edge, the more likely it is to be an edge}. This agrees with the human perception that an orange boat on a blue sea is a far more distinct object than a blue boat on a blue sea. We therefore simply add up the channel gradient images element-wise to form our final gradient image.
This final gradient image is still roughly double the width and double the height of the reduced image. We reduce the gradient image to the same size as the reduced image using max pooling with $(2 \times 2)$-sized kernels. Therefore the final ``edge value'' of each (rgb) pixel of the reduced image is determined by the largest of the gradients associated with that pixel, which in turn determines whether it is an edge pixel or not.
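A simplified stand-in for \texttt{mergeChannelsTracedImage()} that performs the merge and the $(2 \times 2)$ max pooling might look as follows (illustrative only):
\begin{verbatim}
import numpy as np

def merge_and_pool(channel_gradients):
    """Element-wise sum of the per-channel gradient spaces, then 2x2 max pooling."""
    merged = np.sum(np.abs(channel_gradients), axis=0)   # magnitudes first, then sum
    twoN, twoM = merged.shape
    return merged.reshape(twoN // 2, 2, twoM // 2, 2).max(axis=(1, 3))
\end{verbatim}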
\subsection{Cleaning the edge data}
Now that we have two arrays of the same size, we can mask the edge image on top of the reduced image to view our current results. In doing so for a number of unique images, we observe two main features:
\begin{enumerate}
\item There exist spurious regions of edge pixels with a size of the order of a few pixels. \textbf{Solution: remove tiny regions of edge data.}
\item There are edge pixels around the perimeter of objects in the reduced image, but with the occasional break, whereby no edge pixels exist. \textbf{Solution: extrapolate long regions of edge data.}
\end{enumerate}
\subsubsection{Removing small scale edge data}
\begin{center}
\texttt{For Python code, refer to the {\bf edgeKiller()} function in edgeUtility.py}
\end{center}
To accomplish this task, we first need to decide what a small scale is. This is necessary for object recognition because many complicated objects are made of smaller, perhaps even more complicated, objects. Fundamentally, if an object is identified as being surrounded by a continuous border of edge pixels, then the smallest possible object is three pixels across. This is because the border must be at least one pixel thick and the central pixel must not be recognised as an edge (one edge pixel on either side of a non-edge pixel in both the $x$ and $y$ directions), i.e. a square drawn around the perimeter of a $3 \times 3$ pixel grid. Practically, it is the choice of the user to specify a tolerance on the size of the object. Take the example of a cat: users may want to extract different features of the cat, depending on the object scale of interest (see Table~\ref{cat_table}).
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
{\bf Feature/object} & {\bf User 1} & {\bf User 2} & {\bf User 3} & {\bf User 4}\\
\hline
{\bf Cat} & y & y & y & y\\
\hline
{\bf Nose} & n & y & y & y\\
\hline
{\bf Eye} & n & n & y & y\\
\hline
{\bf Iris} & n & n & n & y\\
\hline
\end{tabular}
\end{center}
\caption{Features of a cat that different users may wish to extract, depending on the object scale of interest.}
\label{cat_table}
\end{table}
To this end, a user may specify their pixel tolerance using the parameter \texttt{objectTolerance}. Any edge pixels that are found within a border of size $(2\times \texttt{objectTolerance}+1) \times (2\times \texttt{objectTolerance}+1)$, where the border does not contain any edge pixels, are killed off. A sketch of this procedure follows.
% include visualisations here
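The sketch below is a simplified stand-in for \texttt{edgeKiller()}: it scans every window of the given size and clears the interior wherever the window's border is free of edge pixels.
\begin{verbatim}
import numpy as np

def edge_killer(edges, object_tolerance=2):
    """Remove edge pixels enclosed by an edge-free square border (a brute-force sketch)."""
    t = object_tolerance
    out = edges.copy()
    N, M = edges.shape
    for r in range(N - 2 * t):
        for c in range(M - 2 * t):
            window = edges[r:r + 2 * t + 1, c:c + 2 * t + 1]
            border_count = window.sum() - window[1:-1, 1:-1].sum()   # perimeter pixels only
            if border_count == 0:                                    # border has no edge pixels
                out[r + 1:r + 2 * t, c + 1:c + 2 * t] = False        # kill everything inside
    return out
\end{verbatim}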
\subsubsection{Expand large-length edge data}
\begin{center}
\texttt{For Python code, refer to the {\bf edgeFiller()} function in edgeUtility.py}
\end{center}
To address the issue of small gaps within edges, we extrapolate any edges subject to the number of consecutive edge pixels in a given direction. The minimum number of consecutive edge pixels required for the extrapolation to happen is embodied in the parameter \texttt{edge\_bias}. The extrapolation runs through each edge pixel and determines the number of consecutive edge pixels in each of the eight directions $(-1,-1)$, $(-1,0)$, $(-1,+1)$, $(0,-1)$, $(0,+1)$, $(+1,-1)$, $(+1,0)$, and $(+1,+1)$. If the number of edge pixels in a direction is greater than $\texttt{edge\_bias}$, the first non-edge pixel in that direction is changed to an edge pixel. A sketch of this step follows.
% include visualisations here
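A simplified stand-in for \texttt{edgeFiller()} under the description above; the default \texttt{edge\_bias} value here is arbitrary.
\begin{verbatim}
import numpy as np

DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def edge_filler(edges, edge_bias=4):
    """If a run of more than edge_bias edge pixels points along a direction,
    promote the first non-edge pixel after the run to an edge pixel."""
    out = edges.copy()
    N, M = edges.shape
    for r, c in zip(*np.nonzero(edges)):
        for dr, dc in DIRECTIONS:
            run, rr, cc = 0, r, c
            while 0 <= rr < N and 0 <= cc < M and edges[rr, cc]:
                run += 1
                rr, cc = rr + dr, cc + dc
            if run > edge_bias and 0 <= rr < N and 0 <= cc < M:
                out[rr, cc] = True       # fill the first non-edge pixel in that direction
    return out
\end{verbatim}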
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% OBJECT EXTRACTION PROCEDURE %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Object extraction procedure}
\subsection{Theoretical set up}
Randomly pick an initial edge pixel. Determine the directions in which the neighbouring pixel is also an edge pixel. For each such direction, (recursively) check whether the next pixel in that same direction is also an edge pixel, stopping when a non-edge pixel is reached. Note down the number of edge pixels in this direction and add it to the number of edge pixels (again counted until a non-edge pixel is found) in the exact opposite direction. This results in four lengths, which we will call \textit{thicknesses}, one for each of the following direction pairs (a sketch of this calculation is given below):
\begin{itemize}
\item horizontal: $(-1,0)\rightarrow (+1,0)$,
\item vertical: $(0,-1)\rightarrow (0,+1)$,
\item positive gradient diagonal: $(-1,-1)\rightarrow (+1,+1)$,
\item negative gradient diagonal: $(-1,+1)\rightarrow (+1,-1)$.
\end{itemize}
The shortest thickness of consecutive edge pixels at the initial edge pixel will be referred to as the \textbf{race start line}. The length of the race start line is defined to be in the $\hat{y}$-direction of the Cartesian coordinate system $\hat{C}: (\hat{x},\hat{y})$. The initial direction of the \textbf{race path} must have a non-zero $\hat{x}$-component, i.e. the initial path vector is $\vec{O}=a\hat{x}+b\hat{y}$, where $a \neq 0$.
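As an illustration of the thickness calculation (a hedged sketch with hypothetical helper names; it is not the project's implementation):
\begin{verbatim}
import numpy as np

# antipodal direction pairs: horizontal, vertical, positive and negative diagonals
DIRECTION_PAIRS = {
    "horizontal": ((0, -1), (0, 1)),
    "vertical": ((-1, 0), (1, 0)),
    "positive_diagonal": ((-1, -1), (1, 1)),
    "negative_diagonal": ((-1, 1), (1, -1)),
}

def run_length(edges, r, c, dr, dc):
    """Number of consecutive edge pixels beyond (r, c) in direction (dr, dc)."""
    N, M = edges.shape
    n, rr, cc = 0, r + dr, c + dc
    while 0 <= rr < N and 0 <= cc < M and edges[rr, cc]:
        n += 1
        rr, cc = rr + dr, cc + dc
    return n

def thicknesses(edges, r, c):
    """Thickness in each direction pair at an edge pixel (including the pixel itself);
    the pair with the minimum thickness defines the race start line."""
    return {
        name: run_length(edges, r, c, *d1) + run_length(edges, r, c, *d2) + 1
        for name, (d1, d2) in DIRECTION_PAIRS.items()
    }
\end{verbatim}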
\subsection{Definition of an object}
\begin{center}
\texttt{We define an object within an image as the set of pixels that reside\\ within an enclosed path of edge pixels. The enclosed path must return\\ through the race start line with an $\hat{\texttt{x}}$-component that has the same sign\\ as that in
which the path was initialised.}
\end{center}
There are two very important concepts to understand within this definition:
\begin{itemize}
\item \textit{... an enclosed path ...}: this means we should be able to draw a continuous path of edge pixels that start and end at the same edge pixel coordinate.
\item \textit{... $\hat{x}$-component that has the same sign ...}: this means that the path of edge pixels must finish with a path vector whose $\hat{x}$-component has the same sign as the $\hat{x}$-component of the initial path vector when it crosses back over the race start line. Since the path terminates when it has crossed the race start line, this ensures the path cannot simply have doubled back on itself while remaining along the ``same edge''. % Need to include visualisations here of starting at the bottom of a hangmans knot, and starting at the top of the knot.
\end{itemize}
\subsection{Stepping into the edge path}
The edge path cannot be stepped by an entirely random process, as the code would take too long to run and could return nonsense (see Brownian motion). However, neither can the steps be too systematic, as this may guarantee that some forks in the edges are never taken, thus losing potential objects. At each step, there must be some notion of moving in a direction roughly orthogonal to the shortest pixel thickness that is also away from the pixel at the previous step. Therefore, the step is chosen to be close to the vector orthogonal to the start line for the current pixel, but with a small random fluctuation in the ($\hat{y}$-)direction of the start line. A sketch of a single step is given below.
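Purely as an illustration of such a step (the step length and the fluctuation scale are free parameters, not values used by the project):
\begin{verbatim}
import numpy as np

def next_step(orthogonal, start_line, step_length=1.0, spread=0.3, rng=None):
    """Step roughly along the direction orthogonal to the current start line,
    with a small random fluctuation along the start-line (y-hat) direction."""
    rng = np.random.default_rng() if rng is None else rng
    x_hat = np.array(orthogonal, dtype=float)
    x_hat /= np.linalg.norm(x_hat)                      # away from the previous pixel
    y_hat = np.array(start_line, dtype=float)
    y_hat /= np.linalg.norm(y_hat)                      # along the race start line
    step = x_hat + rng.uniform(-spread, spread) * y_hat
    return step_length * step / np.linalg.norm(step)    # fixed length, direction within a cone
\end{verbatim}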
\begin{figure}[h!]
\centering
\includegraphics[width=15.0cm]{visuals/edge_id_aids/Path_algorithm.png}
\caption{\small{Visualisation of the random path method. This image shows two paths starting at the same initial edge pixel (white square) with the same steps up to $i=3$. At this point the random path vector off the $i=3$ start line may go into either the upper or the lower fork of the edge. The green cone has been drawn to show that the random path vector has a specified length (the radius of the cone) but its direction is constrained to lie within the green cone.}}
\label{singleField_PtPlot_fig}
\end{figure}
\end{document}
\chapter{Cluster distortion and bipolaron formation}
\label{chap:ground}
We performed a series of diagonalizations of the hamiltonian matrix (\ref{eq:full-hamiltonian}) using the parameters described in section \ref{sec:model-parameters} with a variable infrared charge-lattice coupling, $\lambda_{ir}$.
We took $\lambda_{ir}$ values in the representative range from 0 eV to 0.25 eV.
This range allowed us to explore the behaviour of the system in the small, middle and strong coupling regimes.
Besides the eigenvalues and eigenvectors of the hamiltonian we also calculated the mean phonon numbers with their dispersions, as well as the projections into phonon coordinates $(u_{ir}, u_R)$ (see section \ref{sec:lattice-distortions}).
As described at the end of section \ref{sec:hamiltonian-and-basis} the basis for our model, which is denoted by $\{\ket{e_1,e_2,ir,R}: e_1,e_2=1,2,3; ir,R=1,2,\ldots\}$, is infinite-dimensional due to the infinite number of phonons.
However, since only a few of the lowest eigenvalues are relevant to describe this system, we can truncate the basis by considering only a finite number of infrared and Raman phonons.
Generally, a larger computational basis set is needed for calculations with larger charge-lattice coupling values.
We found that, for the largest charge-coupling value under consideration, the calculations converge well considering only 40 phonons of each kind in the computational basis.
Increasing the basis size further leaves the results unchanged.
This consideration gives a basis size of 15129 states producing a hamiltonian matrix with $\sim 2.3\times 10^{8}$ elements.
However, only $\sim 7.5 \times 10^{4}$ ($\sim 0.03 \%$) of them are different from zero making this a sparse matrix.
This allows the use of algorithms specially tailored to handle sparse matrices efficiently.
Fortunately the \textit{Octave}\footnote{Freely available at https://www.gnu.org/software/octave/} package provides many algorithms for computing with sparse matrices.
In particular, it provides a diagonalization routine based on the Lanczos algorithm which produces reliable results.
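The calculations reported here were done with Octave's sparse routines; purely as an illustration of the same approach, the sketch below builds a small stand-in sparse symmetric matrix and extracts its lowest eigenpairs with an ARPACK (Lanczos-type) solver in Python/SciPy. It is not the code used for the results in this chapter, and the matrix is arbitrary.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags, random as sparse_random
from scipy.sparse.linalg import eigsh

# a stand-in sparse symmetric "hamiltonian" (the actual basis here has 15129 states)
n = 2000
A = sparse_random(n, n, density=5e-4, format="csr", random_state=1)
H = (A + A.T) + diags(np.linspace(0.0, 1.0, n))   # symmetrize and add a diagonal spread

# only the few lowest eigenvalues/eigenvectors are needed
vals, vecs = eigsh(H, k=6, which="SA")            # "SA": smallest algebraic eigenvalues
print(vals)
\end{verbatim}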
In this chapter we turn our attention to the ground and first excited states of the system, since only the first excitation of the hamiltonian (\ref{eq:full-hamiltonian}) has a sufficiently low energy to be significantly occupied in the temperature range where the pseudogap phase is present.
All other excitations have an energy above $T=\omega_i/k_B \sim 550$ K (see Figure \ref{fig:irSpectra}).
First we calculate the difference in Cu-O bond lengths (if any) as a function of the $\lambda_{ir}$ coupling parameter.
With this we can find the $\lambda_{ir}$ value that reproduces the 0.13 \AA\ observed cluster distortion \cite{MustredeLeon1990}.
From the projection into phonon coordinates we can observe whether this distortion is static or dynamic for different $\lambda_{ir}$ values \cite{MustredeLeon1992}.
The presence of a dynamic cluster distortion signals a correlated movement between atomic centers and charged particles, that is, it signals polaronic behaviour.
The difference between the ground state energy in the absence of charge-lattice interaction ($\lambda_{ir}=0$) and its value at the coupling where this polaronic behaviour sets in can be identified as the \textit{bipolaron binding energy}\footnote{A \textit{polaron} is a correlated movement between a charge carrier and the lattice. Since we have two charge carriers in this model we refer to this as a \textit{bipolaron}.}.
We estimate this binding energy for the experimentally relevant $\lambda_{ir}$ value and compare it to other calculations.
Finally we model an isotopic oxygen substitution $^{16}$O$\rightarrow ^{18}$O and calculate its effects in the bipolaron binding energy.
\section{Cu(1)-O(4) local distortion}
\label{sec:grd-phonon-proj}
The wavefunction projection into phonon coordinates (\ref{eq:phonon-coord-projection}) gives information about the coordinates of the three atoms in the CuO$_2$ cluster.
In particular, the infrared phonon coordinate $u_{ir}$ is directly proportional to the bond distortion in the CuO$_2$ cluster, $d$, as shown in (\ref{eq:dvsuir}).
Since the ground state has zero Raman phonons and we have set $\lambda_R=0$, its projection into $u_R$ will remain unchanged and given by a simple gaussian distribution.
Thus, setting $u_R=0$, we projected the ground state into $u_{ir}$ for different $\lambda_{ir}$ values.
In the left panel of Figure \ref{fig:uirCoupl} we show a plot of these projections.
The projection has a gaussian shape for $\lambda_{ir}=0$.
However, as $\lambda_{ir}$ increases it smoothly develops into two separated peaks.
For a small range in the middle coupling values there are clearly two peaks but they are not separated.
Since the system will tend to be found near the maximum values of these projections, the results can be interpreted as follows: in the small coupling range the bond distances remain equal, albeit with an increased uncertainty; in the middle coupling range two Cu-O distances develop but the system can \textit{tunnel} between them, that is, the distortion is \textit{dynamic}; finally, in the strong coupling range the two peaks are fully separated and the distortion becomes \textit{static}.
Only in the small range $\sim 0.12 - 0.16$ eV has the bond distortion set in while remaining dynamical in nature.
%
\begin{figure}[ht]
\centering
\input{images/uirCoupl}
\caption{Projection into phonon coordinates (left panel) and calculated cluster distortion $d$ (right panel) for different $\lambda_{ir}$ coupling values with $u_R=0$.}
\label{fig:uirCoupl}
\end{figure}
In the right panel of figure \ref{fig:uirCoupl} we show the cluster distortion $d$ assuming the system is found in a maximum value of the projection into phonon coordinates.
We observe that the distortion only sets in for $\lambda_{ir} > \sim 0.12$ eV and monotonically increases with $\lambda_{ir}$.
The two branches in this plot represent the two possible cluster distortions with the longer distance being in either of the CuO bonds.
The measured cluster distortion of $\sim 0.13$ \AA\ \cite{MustredeLeon1990} is reproduced at intermediate coupling values.
In particular, $\lambda_{ir}=0.1263$ eV reproduces that distortion.
The first excitation is an (antisymmetric) infrared-active state with one infrared phonon.
Contrary to the ground state, the antisymmetry of this excitation implies that it is zero at $u_{ir}=0$ for all $\lambda_{ir}$.
Also, the calculations show that increasing $\lambda_{ir}$ reduces the energy of this excitation asymptotically to zero (see Figure \ref{fig:irSpectra}) increasing its natural length scale.
In Figure \ref{fig:phononProjGrdPol} we show the ground and first excited states projected into phonon coordinates $(u_R,u_{ir})$ for $\lambda_{ir}$ values in the weak, middle and strong coupling regimes.
The projection along $u_R$, as previously stated, is a simple gaussian in all cases.
In the middle coupling regime, with $\lambda_{ir}=0.1263$ eV, there are clearly two peaks present although, for the ground state, they are not fully separated.
For large coupling values the two peaks are fully separated in both states (static distortion) and the projections become very similar.
\begin{figure}[ht]
\centering
\input{images/phononProjGrdPol}
\caption[Ground and first excited states projected into phonon coordinates.]
{Projection into phonon coordinates for the ground (top) and first excited (bottom) states for three different values of the coupling parameter $\lambda_{ir}$.}
\label{fig:phononProjGrdPol}
\end{figure}
The polaron-tunneling energy $\omega_1$, at the relevant $\lambda_{ir}=0.1263$ eV is 141.39 cm$^{-1}$, corresponding to a characteristic temperature $T=\hbar\omega_1 / k_B \sim 204$ K.
Thus the atomic motion in the cluster, in the temperature range where the pseudogap is present, can be described by a two-level Hamiltonian $\tilde{H}=(\omega_1/2)\sigma_z$, such that $\tilde{H}\ket{\psi_1}=+(\omega_1/2)\ket{\psi_1}$ and $\tilde{H}\ket{\psi_o}=-(\omega_1/2)\ket{\psi_0}$, where $\sigma_z$ is a Pauli spin matrix and $\ket{\psi_0},\ket{\psi_1}$ denote the (symmetric) ground state and (antisymmetric) first excited state, respectively, separated by a \textit{polaron tunneling} energy $\hbar\omega_1$ \cite{MustredeLeon1990}.
% Figure of projection into definite electron-occupation basis for both, the ground and polaron-tunneling state
\section{Bipolaron binding energy}
\label{sec:PolBindingEnergy}
The energy of the ground state also changes with the coupling parameter $\lambda_{ir}$.
The change in energy between the coupled and uncoupled systems, $\Delta\omega_{g}$, can be identified, in the region where there are bipolaronic objects present, as the \textit{bipolaron binding energy}.
In figure \ref{fig:polaronFormation} we show the dependence of $\Delta\omega_{g}$ with $\lambda_{ir}$.
\begin{figure}[ht]
\centering
\input{images/polaronFormation}
\caption[Bipolaron formation energy as a function of the $\lambda_{ir}$ coupling. ]
{Bipolaron formation energy as a function of the $\lambda_{ir}$ coupling. The vertical line is drawn at the relevant value $\lambda_{ir}=0.1263$ eV.}
\label{fig:polaronFormation}
\end{figure}
We observe a monotonic behaviour, with a small change in the weak coupling regime that becomes stronger at greater coupling values.
In particular, for the $\lambda_{ir}=0.1263$ eV value, which reproduces the observed distortion, we find $\Delta\omega_{g} \sim 38$ meV.
This value compares favorably with the value obtained from femtosecond time-domain spectroscopy ($\sim 45$ meV) for YBa$_2$Cu$_3$O$_7$ \cite{Demsar1999}.
We also find that if we consider a smaller electron-lattice coupling such that the distortion is 0.08 \AA\ ($\lambda_{ir}=0.124$ eV), as observed for the in-plane Cu(2)-O in La$_{1.85}$Sr$_{0.15}$CuO$_4$ \cite{Bianconi1996}, we obtain $\Delta\omega_{g} \sim 35$ meV, which is also comparable to estimates of the pseudogap formation energy in this system (see Fig. 4b in \cite{Kusar2005}).
Furthermore, we calculated the isotopic shift, $\Delta_g$, of $\omega_g$ under the $^{16}$O$\rightarrow ^{18}$O substitution as defined in (\ref{eq:isot-shift-def-grd}).
Figure \ref{fig:isotPolaronFormation} shows $\Delta_g$ as a function of $\lambda_{ir}$.
Contrary to other isotopic shifts, it does not change sign.
However, it shows a maximum in the middle coupling regime, reminiscent of the maxima, minima and inflection points in the isotopic shifts of all other excitations, and falls below its starting point for large $\lambda_{ir}$ values.
For $\lambda_{ir}=0.1263$ eV we predict an isotopic shift of $\sim 9.2$ \% which is slightly larger than the uncoupled value of $\sim 7.2$ \%.
\begin{figure}[ht]
\centering
\input{images/isotPolaronFormation}
\caption[Isotopic shift of the bipolaron formation energy.]
{Isotopic shift of the bipolaron formation energy. The vertical line is placed at the relevant value $\lambda_{ir}=0.1263$ eV.}
\label{fig:isotPolaronFormation}
\end{figure}
<!-- blob contrib key: blob_contributors:v21:87bfae2803eb0f80a8acc923dab1aa70 -->
<div class="file-navigation js-zeroclipboard-container">
<div class="select-menu branch-select-menu js-menu-container js-select-menu left">
<button class="btn btn-sm select-menu-button js-menu-target css-truncate" data-hotkey="w"
title="develop"
type="button" aria-label="Switch branches or tags" tabindex="0" aria-haspopup="true">
<i>Branch:</i>
<span class="js-select-button css-truncate-target">develop</span>
</button>
<div class="select-menu-modal-holder js-menu-content js-navigation-container" data-pjax aria-hidden="true">
<div class="select-menu-modal">
<div class="select-menu-header">
<svg aria-label="Close" class="octicon octicon-x js-menu-close" height="16" role="img" version="1.1" viewBox="0 0 12 16" width="12"><path d="M7.48 8l3.75 3.75-1.48 1.48L6 9.48l-3.75 3.75-1.48-1.48L4.52 8 .77 4.25l1.48-1.48L6 6.52l3.75-3.75 1.48 1.48z"></path></svg>
<span class="select-menu-title">Switch branches/tags</span>
</div>
<div class="select-menu-filters">
<div class="select-menu-text-filter">
<input type="text" aria-label="Find or create a branch…" id="context-commitish-filter-field" class="form-control js-filterable-field js-navigation-enable" placeholder="Find or create a branch…">
</div>
<div class="select-menu-tabs">
<ul>
<li class="select-menu-tab">
<a href="#" data-tab-filter="branches" data-filter-placeholder="Find or create a branch…" class="js-select-menu-tab" role="tab">Branches</a>
</li>
<li class="select-menu-tab">
<a href="#" data-tab-filter="tags" data-filter-placeholder="Find a tag…" class="js-select-menu-tab" role="tab">Tags</a>
</li>
</ul>
</div>
</div>
<div class="select-menu-list select-menu-tab-bucket js-select-menu-tab-bucket" data-tab-filter="branches" role="menu">
<div data-filterable-for="context-commitish-filter-field" data-filterable-type="substring">
<a class="select-menu-item js-navigation-item js-navigation-open "
href="/byuimpactrevisions/numerical_computing/blob/1_i_seg/Algorithms/Haar/Haar.tex"
data-name="1_i_seg"
data-skip-pjax="true"
rel="nofollow">
<svg aria-hidden="true" class="octicon octicon-check select-menu-item-icon" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M12 5l-8 8-4-4 1.5-1.5L4 10l6.5-6.5z"></path></svg>
<span class="select-menu-item-text css-truncate-target js-select-menu-filter-text" title="1_i_seg">
1_i_seg
</span>
</a>
<a class="select-menu-item js-navigation-item js-navigation-open "
href="/byuimpactrevisions/numerical_computing/blob/DataStructures/Algorithms/Haar/Haar.tex"
data-name="DataStructures"
data-skip-pjax="true"
rel="nofollow">
<svg aria-hidden="true" class="octicon octicon-check select-menu-item-icon" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M12 5l-8 8-4-4 1.5-1.5L4 10l6.5-6.5z"></path></svg>
<span class="select-menu-item-text css-truncate-target js-select-menu-filter-text" title="DataStructures">
DataStructures
</span>
</a>
<a class="select-menu-item js-navigation-item js-navigation-open "
href="/byuimpactrevisions/numerical_computing/blob/KevinB/Algorithms/Haar/Haar.tex"
data-name="KevinB"
data-skip-pjax="true"
rel="nofollow">
<svg aria-hidden="true" class="octicon octicon-check select-menu-item-icon" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M12 5l-8 8-4-4 1.5-1.5L4 10l6.5-6.5z"></path></svg>
<span class="select-menu-item-text css-truncate-target js-select-menu-filter-text" title="KevinB">
KevinB
</span>
</a>
<a class="select-menu-item js-navigation-item js-navigation-open selected"
href="/byuimpactrevisions/numerical_computing/blob/develop/Algorithms/Haar/Haar.tex"
data-name="develop"
data-skip-pjax="true"
rel="nofollow">
<svg aria-hidden="true" class="octicon octicon-check select-menu-item-icon" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M12 5l-8 8-4-4 1.5-1.5L4 10l6.5-6.5z"></path></svg>
<span class="select-menu-item-text css-truncate-target js-select-menu-filter-text" title="develop">
develop
</span>
</a>
<a class="select-menu-item js-navigation-item js-navigation-open "
href="/byuimpactrevisions/numerical_computing/blob/image_seg/Algorithms/Haar/Haar.tex"
data-name="image_seg"
data-skip-pjax="true"
rel="nofollow">
<svg aria-hidden="true" class="octicon octicon-check select-menu-item-icon" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M12 5l-8 8-4-4 1.5-1.5L4 10l6.5-6.5z"></path></svg>
<span class="select-menu-item-text css-truncate-target js-select-menu-filter-text" title="image_seg">
image_seg
</span>
</a>
<a class="select-menu-item js-navigation-item js-navigation-open "
href="/byuimpactrevisions/numerical_computing/blob/markov/Algorithms/Haar/Haar.tex"
data-name="markov"
data-skip-pjax="true"
rel="nofollow">
<svg aria-hidden="true" class="octicon octicon-check select-menu-item-icon" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M12 5l-8 8-4-4 1.5-1.5L4 10l6.5-6.5z"></path></svg>
<span class="select-menu-item-text css-truncate-target js-select-menu-filter-text" title="markov">
markov
</span>
</a>
<a class="select-menu-item js-navigation-item js-navigation-open "
href="/byuimpactrevisions/numerical_computing/blob/master/Algorithms/Haar/Haar.tex"
data-name="master"
data-skip-pjax="true"
rel="nofollow">
<svg aria-hidden="true" class="octicon octicon-check select-menu-item-icon" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M12 5l-8 8-4-4 1.5-1.5L4 10l6.5-6.5z"></path></svg>
<span class="select-menu-item-text css-truncate-target js-select-menu-filter-text" title="master">
master
</span>
</a>
<a class="select-menu-item js-navigation-item js-navigation-open "
href="/byuimpactrevisions/numerical_computing/blob/stand_lib/Algorithms/Haar/Haar.tex"
data-name="stand_lib"
data-skip-pjax="true"
rel="nofollow">
<svg aria-hidden="true" class="octicon octicon-check select-menu-item-icon" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M12 5l-8 8-4-4 1.5-1.5L4 10l6.5-6.5z"></path></svg>
<span class="select-menu-item-text css-truncate-target js-select-menu-filter-text" title="stand_lib">
stand_lib
</span>
</a>
<a class="select-menu-item js-navigation-item js-navigation-open "
href="/byuimpactrevisions/numerical_computing/blob/standard_library/Algorithms/Haar/Haar.tex"
data-name="standard_library"
data-skip-pjax="true"
rel="nofollow">
<svg aria-hidden="true" class="octicon octicon-check select-menu-item-icon" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M12 5l-8 8-4-4 1.5-1.5L4 10l6.5-6.5z"></path></svg>
<span class="select-menu-item-text css-truncate-target js-select-menu-filter-text" title="standard_library">
standard_library
</span>
</a>
<a class="select-menu-item js-navigation-item js-navigation-open "
href="/byuimpactrevisions/numerical_computing/blob/wavelets/Algorithms/Haar/Haar.tex"
data-name="wavelets"
data-skip-pjax="true"
rel="nofollow">
<svg aria-hidden="true" class="octicon octicon-check select-menu-item-icon" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M12 5l-8 8-4-4 1.5-1.5L4 10l6.5-6.5z"></path></svg>
<span class="select-menu-item-text css-truncate-target js-select-menu-filter-text" title="wavelets">
wavelets
</span>
</a>
</div>
<!-- </textarea> --><!-- '"` --><form accept-charset="UTF-8" action="/byuimpactrevisions/numerical_computing/branches" class="js-create-branch select-menu-item select-menu-new-item-form js-navigation-item js-new-item-form" data-form-nonce="a114dfe6a2821c41f49ba1eb573bc5d7dc2740fb" method="post"><div style="margin:0;padding:0;display:inline"><input name="utf8" type="hidden" value="✓" /><input name="authenticity_token" type="hidden" value="mOVQiN1l71QBN+sBqfMch8NA+rr4TCn6rCs26v9+FvuHM1D6rKuYopBAFVGVhJSFhc+wN1NL5ypaPjWYPpEX2Q==" /></div>
<svg aria-hidden="true" class="octicon octicon-git-branch select-menu-item-icon" height="16" version="1.1" viewBox="0 0 10 16" width="10"><path d="M10 5c0-1.11-.89-2-2-2a1.993 1.993 0 0 0-1 3.72v.3c-.02.52-.23.98-.63 1.38-.4.4-.86.61-1.38.63-.83.02-1.48.16-2 .45V4.72a1.993 1.993 0 0 0-1-3.72C.88 1 0 1.89 0 3a2 2 0 0 0 1 1.72v6.56c-.59.35-1 .99-1 1.72 0 1.11.89 2 2 2 1.11 0 2-.89 2-2 0-.53-.2-1-.53-1.36.09-.06.48-.41.59-.47.25-.11.56-.17.94-.17 1.05-.05 1.95-.45 2.75-1.25S8.95 7.77 9 6.73h-.02C9.59 6.37 10 5.73 10 5zM2 1.8c.66 0 1.2.55 1.2 1.2 0 .65-.55 1.2-1.2 1.2C1.35 4.2.8 3.65.8 3c0-.65.55-1.2 1.2-1.2zm0 12.41c-.66 0-1.2-.55-1.2-1.2 0-.65.55-1.2 1.2-1.2.65 0 1.2.55 1.2 1.2 0 .65-.55 1.2-1.2 1.2zm6-8c-.66 0-1.2-.55-1.2-1.2 0-.65.55-1.2 1.2-1.2.65 0 1.2.55 1.2 1.2 0 .65-.55 1.2-1.2 1.2z"></path></svg>
<div class="select-menu-item-text">
<span class="select-menu-item-heading">Create branch: <span class="js-new-item-name"></span></span>
<span class="description">from ‘develop’</span>
</div>
<input type="hidden" name="name" id="name" class="js-new-item-value">
<input type="hidden" name="branch" id="branch" value="develop">
<input type="hidden" name="path" id="path" value="Algorithms/Haar/Haar.tex">
</form>
</div>
<div class="select-menu-list select-menu-tab-bucket js-select-menu-tab-bucket" data-tab-filter="tags">
<div data-filterable-for="context-commitish-filter-field" data-filterable-type="substring">
</div>
<div class="select-menu-no-results">Nothing to show</div>
</div>
</div>
</div>
</div>
<div class="btn-group right">
<a href="/byuimpactrevisions/numerical_computing/find/develop"
class="js-pjax-capture-input btn btn-sm"
data-pjax
data-hotkey="t">
Find file
</a>
<button aria-label="Copy file path to clipboard" class="js-zeroclipboard btn btn-sm zeroclipboard-button tooltipped tooltipped-s" data-copied-hint="Copied!" type="button">Copy path</button>
</div>
<div class="breadcrumb js-zeroclipboard-target">
<span class="repo-root js-repo-root"><span class="js-path-segment"><a href="/byuimpactrevisions/numerical_computing"><span>numerical_computing</span></a></span></span><span class="separator">/</span><span class="js-path-segment"><a href="/byuimpactrevisions/numerical_computing/tree/develop/Algorithms"><span>Algorithms</span></a></span><span class="separator">/</span><span class="js-path-segment"><a href="/byuimpactrevisions/numerical_computing/tree/develop/Algorithms/Haar"><span>Haar</span></a></span><span class="separator">/</span><strong class="final-path">Haar.tex</strong>
</div>
</div>
<div class="commit-tease">
<span class="right">
<a class="commit-tease-sha" href="/byuimpactrevisions/numerical_computing/commit/84d572a0bc26a88df56312031973e389164b83a4" data-pjax>
84d572a
</a>
<relative-time datetime="2014-07-08T22:24:15Z">Jul 8, 2014</relative-time>
</span>
<div>
<img alt="@AZaitzeff" class="avatar" height="20" src="https://avatars1.githubusercontent.com/u/2396638?v=3&s=40" width="20" />
<a href="/AZaitzeff" class="user-mention" rel="contributor">AZaitzeff</a>
<a href="/byuimpactrevisions/numerical_computing/commit/84d572a0bc26a88df56312031973e389164b83a4" class="message" data-pjax="true" title="Began merge of the wavelet labs">Began merge of the wavelet labs</a>
</div>
<div class="commit-tease-contributors">
<button type="button" class="btn-link muted-link contributors-toggle" data-facebox="#blob_contributors_box">
<strong>2</strong>
contributors
</button>
<a class="avatar-link tooltipped tooltipped-s" aria-label="abefrandsen" href="/byuimpactrevisions/numerical_computing/commits/develop/Algorithms/Haar/Haar.tex?author=abefrandsen"><img alt="@abefrandsen" class="avatar" height="20" src="https://avatars1.githubusercontent.com/u/2762485?v=3&s=40" width="20" /> </a>
<a class="avatar-link tooltipped tooltipped-s" aria-label="AZaitzeff" href="/byuimpactrevisions/numerical_computing/commits/develop/Algorithms/Haar/Haar.tex?author=AZaitzeff"><img alt="@AZaitzeff" class="avatar" height="20" src="https://avatars1.githubusercontent.com/u/2396638?v=3&s=40" width="20" /> </a>
</div>
<div id="blob_contributors_box" style="display:none">
<h2 class="facebox-header" data-facebox-id="facebox-header">Users who have contributed to this file</h2>
<ul class="facebox-user-list" data-facebox-id="facebox-description">
<li class="facebox-user-list-item">
<img alt="@abefrandsen" height="24" src="https://avatars3.githubusercontent.com/u/2762485?v=3&s=48" width="24" />
<a href="/abefrandsen">abefrandsen</a>
</li>
<li class="facebox-user-list-item">
<img alt="@AZaitzeff" height="24" src="https://avatars3.githubusercontent.com/u/2396638?v=3&s=48" width="24" />
<a href="/AZaitzeff">AZaitzeff</a>
</li>
</ul>
</div>
</div>
<div class="file">
<div class="file-header">
<div class="file-actions">
<div class="btn-group">
<a href="/byuimpactrevisions/numerical_computing/raw/develop/Algorithms/Haar/Haar.tex" class="btn btn-sm " id="raw-url">Raw</a>
<a href="/byuimpactrevisions/numerical_computing/blame/develop/Algorithms/Haar/Haar.tex" class="btn btn-sm js-update-url-with-hash">Blame</a>
<a href="/byuimpactrevisions/numerical_computing/commits/develop/Algorithms/Haar/Haar.tex" class="btn btn-sm " rel="nofollow">History</a>
</div>
<a class="btn-octicon tooltipped tooltipped-nw"
href="https://mac.github.com"
aria-label="Open this file in GitHub Desktop"
data-ga-click="Repository, open with desktop, type:mac">
<svg aria-hidden="true" class="octicon octicon-device-desktop" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path d="M15 2H1c-.55 0-1 .45-1 1v9c0 .55.45 1 1 1h5.34c-.25.61-.86 1.39-2.34 2h8c-1.48-.61-2.09-1.39-2.34-2H15c.55 0 1-.45 1-1V3c0-.55-.45-1-1-1zm0 9H1V3h14v8z"></path></svg>
</a>
<!-- </textarea> --><!-- '"` --><form accept-charset="UTF-8" action="/byuimpactrevisions/numerical_computing/edit/develop/Algorithms/Haar/Haar.tex" class="inline-form js-update-url-with-hash" data-form-nonce="a114dfe6a2821c41f49ba1eb573bc5d7dc2740fb" method="post"><div style="margin:0;padding:0;display:inline"><input name="utf8" type="hidden" value="✓" /><input name="authenticity_token" type="hidden" value="qHLd/J0SPDeSebMYHww/i4CbMRwC4ZrOQGnZ8Qi7TxPy8RJxLG9fgfkb9ku0q5yGQkGBT0LeTUaGn4H9xKfpMw==" /></div>
<button class="btn-octicon tooltipped tooltipped-nw" type="submit"
aria-label="Edit this file" data-hotkey="e" data-disable-with>
<svg aria-hidden="true" class="octicon octicon-pencil" height="16" version="1.1" viewBox="0 0 14 16" width="14"><path d="M0 12v3h3l8-8-3-3-8 8zm3 2H1v-2h1v1h1v1zm10.3-9.3L12 6 9 3l1.3-1.3a.996.996 0 0 1 1.41 0l1.59 1.59c.39.39.39 1.02 0 1.41z"></path></svg>
</button>
</form> <!-- </textarea> --><!-- '"` --><form accept-charset="UTF-8" action="/byuimpactrevisions/numerical_computing/delete/develop/Algorithms/Haar/Haar.tex" class="inline-form" data-form-nonce="a114dfe6a2821c41f49ba1eb573bc5d7dc2740fb" method="post"><div style="margin:0;padding:0;display:inline"><input name="utf8" type="hidden" value="✓" /><input name="authenticity_token" type="hidden" value="wYla/z5NNNEsBdHh2khCL7yKfWOjMsDI5SOHZUmes4xGEaMh50cQ5PJ0NckilxBEN+cAO75W34SrLerDsjSbNQ==" /></div>
<button class="btn-octicon btn-octicon-danger tooltipped tooltipped-nw" type="submit"
aria-label="Delete this file" data-disable-with>
<svg aria-hidden="true" class="octicon octicon-trashcan" height="16" version="1.1" viewBox="0 0 12 16" width="12"><path d="M11 2H9c0-.55-.45-1-1-1H5c-.55 0-1 .45-1 1H2c-.55 0-1 .45-1 1v1c0 .55.45 1 1 1v9c0 .55.45 1 1 1h7c.55 0 1-.45 1-1V5c.55 0 1-.45 1-1V3c0-.55-.45-1-1-1zm-1 12H3V5h1v8h1V5h1v8h1V5h1v8h1V5h1v9zm1-10H2V3h9v1z"></path></svg>
</button>
</form> </div>
<div class="file-info">
691 lines (635 sloc)
<span class="file-info-divider"></span>
33.6 KB
</div>
</div>
<div itemprop="text" class="blob-wrapper data type-tex">
<table class="highlight tab-size js-file-line-container" data-tab-size="8">
\lab{Algorithms}{Introduction to Wavelets}{Intro to Wavelets}

\objective{This lab explains the basic ideas of Wavelet Analysis
using the Haar wavelet as a prototypical example, then presents applications of the Discrete Wavelet Transform in
image denoising and compression.}

Recall that in the context of Fourier analysis, one seeks to represent a
function in the frequency domain, and this is accomplished via the Fourier
transform. The Fourier transform allows us to analyze and process functions
in many useful ways, as you have seen in previous labs. There are, however,
drawbacks to this approach. For example, although a function's Fourier
transform gives us complete information on its frequency spectrum, time
information is lost. We can know which frequencies are the
most prevalent, but not when they occur. This is due in part to the fact that
the sinusoidal function $f(x) = e^{2\pi ix}$ -- on which the Fourier transform
is based -- has infinite support. Its nature is essentially \emph{non-local},
and so the Fourier transform fails to provide local information in both the
time and frequency domains. This brings us to the following question: are
there types of transforms that avoid the shortcomings mentioned above? The
answer is an emphatic yes. Enter wavelet analysis.

\subsection*{The Haar Wavelet}

As noted earlier, the Fourier transform is based on the complex exponential
function. Let us alter the situation and consider instead the following
function, known as the \emph{Haar wavelet}:
\begin{equation*}
\psi(x) =
  \begin{cases}
    1 & \text{if } 0 \leq x < \frac{1}{2} \\
    -1 & \text{if } \frac{1}{2} \leq x < 1 \\
    0 & \text{otherwise.}
  \end{cases}
\end{equation*}

% It might be nice to plot this function and include the image in the lab.

Along with this wavelet, we introduce the associated \emph{scaling function}:
\begin{equation*}
\phi(x) =
  \begin{cases}
    1 & \text{if } 0 \leq x < 1 \\
    0 & \text{otherwise.}
  \end{cases}
\end{equation*}
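The comment above suggests plotting these functions. As an illustration (our
addition, not part of the original lab), a minimal NumPy/Matplotlib sketch
might look like the following; the helper names \li{haar_wavelet} and
\li{scaling_function} are our own.
\begin{lstlisting}
import numpy as np
from matplotlib import pyplot as plt

def haar_wavelet(x):
    # psi(x): 1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere.
    return np.where((0 <= x) & (x < 0.5), 1.0,
                    np.where((0.5 <= x) & (x < 1.0), -1.0, 0.0))

def scaling_function(x):
    # phi(x): 1 on [0, 1), 0 elsewhere.
    return np.where((0 <= x) & (x < 1.0), 1.0, 0.0)

x = np.linspace(-0.5, 1.5, 1000)
plt.plot(x, haar_wavelet(x), label="psi")
plt.plot(x, scaling_function(x), label="phi")
plt.legend()
plt.show()
\end{lstlisting}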

From the wavelet and scaling function, we can generate two countable families
of dyadic dilates and translates given by
\begin{equation*}
\psi_{m,k}(x) = \psi(2^m x - k)
\end{equation*}
\begin{equation*}
\phi_{m,k}(x) = \phi(2^m x - k),
\end{equation*}
where $m,k \in \mathbb{Z}$.

Let us focus for the moment on that second family of functions, $\{\phi_{m,k}\}$.
If we fix $m$ and let $k$ vary over the integers, we have a countable collection of
simple functions. The support of a typical function $\phi_{m,k}$ is the interval
$[k2^{-m}, (k+1)2^{-m}]$, and for any $m \in \mathbb{Z}$ we have
\begin{equation*}
\mathbb{R} = \displaystyle\biguplus_k\,[k2^{-m}, (k+1)2^{-m}],
\end{equation*}
where $\uplus$ denotes a union over disjoint sets. Thus, the supports can be viewed as
a discretization of the real line, and we can use this collection of simple functions
to approximate any $f \in L^2(\mathbb{R})$ in the following sense:
\begin{equation*}
f(x) \approx f_m(x) := \displaystyle\sum_{k \in \mathbb{Z}}\alpha_{m,k}\phi_{m,k}(x),
\end{equation*}
where
\begin{equation*}
\alpha_{m,k} := 2^m \displaystyle \int_{k2^{-m}}^{(k+1)2^{-m}}f(x)\, dx
\end{equation*}
($\alpha_{m,k}$ is simply the average value of $f$ on $[k2^{-m},(k+1)2^{-m}]$). As you
would probably expect, the point-wise error between $f$ and its approximation $f_m$
(called a \emph{frame}) goes to zero as $m \to \infty$.
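To make the approximation frames concrete, the coefficients $\alpha_{m,k}$ can be
estimated numerically as subinterval averages. The following sketch is our own
illustration (the helper name \li{approx_coeffs} is hypothetical), assuming $f$ is a
vectorized function on a finite interval $[a,b]$.
\begin{lstlisting}
import numpy as np

def approx_coeffs(f, m, a=0.0, b=2*np.pi, samples=64):
    """Estimate alpha_{m,k} = 2^m * integral of f over [k 2^-m, (k+1) 2^-m]
    for every dyadic subinterval meeting [a, b], by averaging samples of f."""
    width = 2.0**(-m)
    ks = np.arange(int(np.floor(a / width)), int(np.ceil(b / width)))
    coeffs = []
    for k in ks:
        x = np.linspace(k * width, (k + 1) * width, samples, endpoint=False)
        coeffs.append(f(x).mean())   # average value of f on the subinterval
    return ks, np.array(coeffs)
\end{lstlisting}
For example, \li{approx_coeffs(np.sin, 4)} estimates the coefficients of $f_4$ for
$f(x) = \sin(x)$ on $[0, 2\pi]$.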

These frames are not quite good enough, however. Each coefficient $\alpha_{m,k}$
certainly captures local information about $f$ -- namely its average value on
a certain interval -- but it fails to tell us anything about how $f$ changes
on that interval. We need more information than is provided by $f_m$ in order
to know about discontinuities or high-frequency oscillations of $f$. To this end,
we now consider the wavelet function $\psi$.
Notice that the Haar wavelet is oscillatory in nature, and is thus better suited
to capture local information on how a function changes at a given point. For
any given $m$, we define a function $d_m$, called a \emph{detail}, as follows:
\begin{equation*}
d_m(x) := \displaystyle\sum_{k \in \mathbb{Z}}\beta_{m,k}\psi_{m,k}(x),
\end{equation*}
where
\begin{equation*}
\beta_{m,k} := 2^m \displaystyle \int_{-\infty}^{\infty}f(x) \psi_{m,k}(x)\, dx.
\end{equation*}
Each coefficient $\beta_{m,k}$ gives information about how $f$ changes on
the interval $[k2^{-m}, (k+1)2^{-m}]$, and larger coefficients correspond
to larger spikes of width $2^{-m}$. Thus, as $m$ increases, the
detail function $d_m$ gives information about the higher-frequency oscillations
of the function. The details and approximation frames interact in the following way:
\begin{equation*}
f_{m+1} = f_m + d_m.
\end{equation*}
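Because $\psi_{m,k}$ equals $+1$ on the left half and $-1$ on the right half of
$[k2^{-m}, (k+1)2^{-m}]$, the coefficient $\beta_{m,k}$ works out to half the
difference of the two half-interval averages of $f$. A sketch in the same spirit
as \li{approx_coeffs} above (again our own, hypothetical helper):
\begin{lstlisting}
import numpy as np

def detail_coeffs(f, m, a=0.0, b=2*np.pi, samples=64):
    """Estimate beta_{m,k} = 2^m * integral of f * psi_{m,k}.  Since psi_{m,k}
    is +1 on the left half and -1 on the right half of its support, beta_{m,k}
    is half the difference of the two half-interval averages of f."""
    width = 2.0**(-m)
    ks = np.arange(int(np.floor(a / width)), int(np.ceil(b / width)))
    coeffs = []
    for k in ks:
        left = np.linspace(k * width, (k + 0.5) * width, samples, endpoint=False)
        right = np.linspace((k + 0.5) * width, (k + 1) * width, samples, endpoint=False)
        coeffs.append(0.5 * (f(left).mean() - f(right).mean()))
    return ks, np.array(coeffs)
\end{lstlisting}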
As a result of this fortuitous relationship, one can prove the decomposition
\begin{equation*}
L^2(\mathbb{R}) = V_0 \oplus W_0 \oplus W_1 \oplus \cdots,
\end{equation*}
where $V_j := \text{span}\{\phi_{j,k}\}_{k \in \mathbb{Z}}$ and
$W_j := \text{span}\{\psi_{j,k}\}_{k \in \mathbb{Z}}$. This fact justifies
our hope to approximate and analyze functions using wavelets.
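One way to see why such a splitting is available in the Haar case is the pair of
refinement relations (a standard fact recorded here for reference; it is not stated
explicitly above):
\begin{equation*}
\phi(x) = \phi(2x) + \phi(2x - 1), \qquad \psi(x) = \phi(2x) - \phi(2x - 1),
\end{equation*}
so each coarse scaling function and wavelet lies in the span of the finer-scale
scaling functions, giving $V_m \subset V_{m+1}$ and $V_{m+1} = V_m \oplus W_m$.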
\begin{figure}[t]
\minipage{0.32\textwidth}
  \includegraphics[width=\linewidth]{sinecurve}
  \caption{$f(x) = \sin(x)$}
\endminipage\hfill
\minipage{0.32\textwidth}
  \includegraphics[width=\linewidth]{discreteSineCurve.pdf}
  \caption{$f_4$}
\endminipage\hfill
\minipage{0.32\textwidth}
  \includegraphics[width=\linewidth]{sineCurveDetail}
  \caption{$d_4$}
\endminipage
\end{figure}
\begin{problem}
Calculate and plot the approximation frames for $f(x) = \sin(x)$ on the interval $[0,2\pi]$
for $m = 4, 6, 8$. Note that because we are working on a finite interval,
we only need to calculate certain coefficients $\alpha_{m,k}$. In
particular, we only need the coefficients for $k = 0$ up to the first integer
$n$ such that $(n+1)2^{-m} > 2\pi$ (why?). Furthermore, to plot the frame,
all we need is an array containing the relevant coefficients. Then simply plot
the coefficients against \li{linspace} with appropriate arguments
and set \li{drawstyle='steps'} in the \li{plt.plot} function.
\end{problem}
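A possible outline for this problem (our sketch, not an official solution) reuses the
\li{approx_coeffs} helper above together with the plotting approach described in the
problem statement.
\begin{lstlisting}
import numpy as np
from matplotlib import pyplot as plt

# Plot the approximation frames of sin(x) on [0, 2*pi] for m = 4, 6, 8,
# using the hypothetical approx_coeffs helper defined earlier.
for m in (4, 6, 8):
    ks, alphas = approx_coeffs(np.sin, m, a=0.0, b=2*np.pi)
    x = np.linspace(0, 2*np.pi, len(alphas))
    plt.plot(x, alphas, drawstyle='steps', label="m = {}".format(m))
plt.legend()
plt.show()
\end{lstlisting}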

\begin{problem}
Now calculate the details for $f(x) = \sin(x)$ on the same interval and for the
same $m$ values given above. Use previous results to compute the coefficients
for $f_5$, $f_7$, and $f_9$ and plot them.
\end{problem}

\subsection*{The Discrete Wavelet Transform}

What purpose do these details and approximation frames serve? According to the
properties discussed above, we can approximate $L^2$ functions as follows:
\begin{align*}
f \approx f_{J+1} &= f_J + d_J \\
&= f_{J-1} + d_{J-1} + d_J \\
& \ldots \\
&= f_{I} + d_{I} + d_{I+1} + \cdots + d_J,
\end{align*}
where $1 \leq I \leq J$. If $f$ has compact support (as in the case of a finite-time signal,
for example), only finitely many of the coefficients in the frame and the details are
nonzero, thus enabling us to represent $f$ to a reasonable degree of accuracy in a very
efficient manner. The calculation of these detail coefficients is called the \emph{discrete
wavelet transform}. In the context of signal processing, one can imagine calculating these
coefficients, transmitting them, and then reproducing the approximated signal on the
receiving end. Furthermore, the coefficients of the details reflect the local properties
of the original function $f$ at the particular level of detail and resolution! This means
that we can discard many of the coefficients if we are only interested in reproducing a certain
part of the signal, or in recovering the entire signal to only a limited resolution. We can
also study just those frequencies of the signal that fall within a certain range (called a
sub-band) by examining the detail coefficients at a particular level. These
<td id="LC162" class="blob-code blob-code-inner js-file-line">properties make the discrete wavelet transform an attractive alternative to the Fourier</td>
</tr>
<tr>
<td id="L163" class="blob-num js-line-number" data-line-number="163"></td>
<td id="LC163" class="blob-code blob-code-inner js-file-line">transform in many applications. See Figure <span class="pl-c1">\ref</span>{fig:dwt1D} for an example of the discrete Wavelet transform.</td>
</tr>
<tr>
<td id="L164" class="blob-num js-line-number" data-line-number="164"></td>
<td id="LC164" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L165" class="blob-num js-line-number" data-line-number="165"></td>
<td id="LC165" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\begin</span>{figure}</td>
</tr>
<tr>
<td id="L166" class="blob-num js-line-number" data-line-number="166"></td>
<td id="LC166" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\centering</span></td>
</tr>
<tr>
<td id="L167" class="blob-num js-line-number" data-line-number="167"></td>
<td id="LC167" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\includegraphics</span>[width = 0.5<span class="pl-c1">\textwidth</span>]{dwt1D}</td>
</tr>
<tr>
<td id="L168" class="blob-num js-line-number" data-line-number="168"></td>
<td id="LC168" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\caption</span>{A level 4 wavelet decomposition of a signal. The top panel is the original signal,</td>
</tr>
<tr>
<td id="L169" class="blob-num js-line-number" data-line-number="169"></td>
<td id="LC169" class="blob-code blob-code-inner js-file-line">the next panel down is the approximation frame, and the remaining panels are the detail coefficients.</td>
</tr>
<tr>
<td id="L170" class="blob-num js-line-number" data-line-number="170"></td>
<td id="LC170" class="blob-code blob-code-inner js-file-line">Notice how the approximation frame resembles a smoothed version of the original signal, while the </td>
</tr>
<tr>
<td id="L171" class="blob-num js-line-number" data-line-number="171"></td>
<td id="LC171" class="blob-code blob-code-inner js-file-line">details capture the high-frequency oscillations and noise.}</td>
</tr>
<tr>
<td id="L172" class="blob-num js-line-number" data-line-number="172"></td>
<td id="LC172" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\label</span>{fig:dwt1D}</td>
</tr>
<tr>
<td id="L173" class="blob-num js-line-number" data-line-number="173"></td>
<td id="LC173" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\end</span>{figure}</td>
</tr>
<tr>
<td id="L174" class="blob-num js-line-number" data-line-number="174"></td>
<td id="LC174" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L175" class="blob-num js-line-number" data-line-number="175"></td>
<td id="LC175" class="blob-code blob-code-inner js-file-line">In practice, we are often interested in analyzing discrete signals with compact support (that is,</td>
</tr>
<tr>
<td id="L176" class="blob-num js-line-number" data-line-number="176"></td>
<td id="LC176" class="blob-code blob-code-inner js-file-line">finite-time signals that we have sampled at a finite number of points). If wavelet analysis is</td>
</tr>
<tr>
<td id="L177" class="blob-num js-line-number" data-line-number="177"></td>
<td id="LC177" class="blob-code blob-code-inner js-file-line">to be of any use, we first need an efficient way to calculate the discrete wavelet transform.</td>
</tr>
<tr>
<td id="L178" class="blob-num js-line-number" data-line-number="178"></td>
<td id="LC178" class="blob-code blob-code-inner js-file-line">The process described in the first section, while intuitive and illustrative of the mathematical</td>
</tr>
<tr>
<td id="L179" class="blob-num js-line-number" data-line-number="179"></td>
<td id="LC179" class="blob-code blob-code-inner js-file-line">principles</td>
</tr>
<tr>
<td id="L180" class="blob-num js-line-number" data-line-number="180"></td>
<td id="LC180" class="blob-code blob-code-inner js-file-line">behind wavelet analysis, is not the best approach to calculating the wavelet coefficients. It</td>
</tr>
<tr>
<td id="L181" class="blob-num js-line-number" data-line-number="181"></td>
<td id="LC181" class="blob-code blob-code-inner js-file-line">turns out that the discrete wavelet transform can be implemented as an iterated low-pass/high-pass</td>
</tr>
<tr>
<td id="L182" class="blob-num js-line-number" data-line-number="182"></td>
<td id="LC182" class="blob-code blob-code-inner js-file-line">filter bank, one iteration of which is shown graphically in the figure. We present the</td>
</tr>
<tr>
<td id="L183" class="blob-num js-line-number" data-line-number="183"></td>
<td id="LC183" class="blob-code blob-code-inner js-file-line">algorithm without getting into the details of why it works.</td>
</tr>
<tr>
<td id="L184" class="blob-num js-line-number" data-line-number="184"></td>
<td id="LC184" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\begin</span>{figure}[H]</td>
</tr>
<tr>
<td id="L185" class="blob-num js-line-number" data-line-number="185"></td>
<td id="LC185" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\centering</span></td>
</tr>
<tr>
<td id="L186" class="blob-num js-line-number" data-line-number="186"></td>
<td id="LC186" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\includegraphics</span>[width = 0.5<span class="pl-c1">\textwidth</span>]{dwt1}</td>
</tr>
<tr>
<td id="L187" class="blob-num js-line-number" data-line-number="187"></td>
<td id="LC187" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\caption</span>{The one-dimensional discrete wavelet transform.}</td>
</tr>
<tr>
<td id="L188" class="blob-num js-line-number" data-line-number="188"></td>
<td id="LC188" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\end</span>{figure}</td>
</tr>
<tr>
<td id="L189" class="blob-num js-line-number" data-line-number="189"></td>
<td id="LC189" class="blob-code blob-code-inner js-file-line">The input, <span class="pl-s"><span class="pl-pds">$</span>A_j<span class="pl-pds">$</span></span>, represents the level-<span class="pl-s"><span class="pl-pds">$</span>j<span class="pl-pds">$</span></span> approximation frame, and we initialize <span class="pl-s"><span class="pl-pds">$</span>A_<span class="pl-c1">0</span><span class="pl-pds">$</span></span> to</td>
</tr>
<tr>
<td id="L190" class="blob-num js-line-number" data-line-number="190"></td>
<td id="LC190" class="blob-code blob-code-inner js-file-line">simply be the original signal. Lo and Hi are the low-pass and high-pass filters, respectively.</td>
</tr>
<tr>
<td id="L191" class="blob-num js-line-number" data-line-number="191"></td>
<td id="LC191" class="blob-code blob-code-inner js-file-line">(By <span class="pl-c1">\emph</span>{filter} we mean a vector that serves the purpose of extracting or suppressing a</td>
</tr>
<tr>
<td id="L192" class="blob-num js-line-number" data-line-number="192"></td>
<td id="LC192" class="blob-code blob-code-inner js-file-line">particular feature of the signal. The Lo and Hi filters are obtained from the wavelet at hand;</td>
</tr>
<tr>
<td id="L193" class="blob-num js-line-number" data-line-number="193"></td>
<td id="LC193" class="blob-code blob-code-inner js-file-line">for the Haar wavelet, Lo <span class="pl-s"><span class="pl-pds">$</span>= (<span class="pl-c1">\sqrt</span>{2}^{-1}, <span class="pl-c1">\sqrt</span>{2}^{-1})<span class="pl-pds">$</span></span> and Hi <span class="pl-s"><span class="pl-pds">$</span>= (-<span class="pl-c1">\sqrt</span>{2}^{-1}, <span class="pl-c1">\sqrt</span>{2}</span></td>
</tr>
<tr>
<td id="L194" class="blob-num js-line-number" data-line-number="194"></td>
<td id="LC194" class="blob-code blob-code-inner js-file-line"><span class="pl-s">^{-1})<span class="pl-pds">$</span></span>.) The box means convolve the input with the filter, and the circle means downsample by</td>
</tr>
<tr>
<td id="L195" class="blob-num js-line-number" data-line-number="195"></td>
<td id="LC195" class="blob-code blob-code-inner js-file-line">a factor of two, i.e. remove either the even or odd-indexed entries of the input. The outputs,</td>
</tr>
<tr>
<td id="L196" class="blob-num js-line-number" data-line-number="196"></td>
<td id="LC196" class="blob-code blob-code-inner js-file-line"><span class="pl-s"><span class="pl-pds">$</span>A_{j+1}<span class="pl-pds">$</span></span> and <span class="pl-s"><span class="pl-pds">$</span>D_{j+1}<span class="pl-pds">$</span></span>, are the level-<span class="pl-s"><span class="pl-pds">$</span>(j+<span class="pl-c1">1</span>)<span class="pl-pds">$</span></span> approximation frame and detail coefficients,</td>
</tr>
<tr>
<td id="L197" class="blob-num js-line-number" data-line-number="197"></td>
<td id="LC197" class="blob-code blob-code-inner js-file-line">respectively. Note that the length of the input array is twice that of the output arrays. The</td>
</tr>
<tr>
<td id="L198" class="blob-num js-line-number" data-line-number="198"></td>
<td id="LC198" class="blob-code blob-code-inner js-file-line">detail coefficients <span class="pl-s"><span class="pl-pds">$</span>D_{j+1}<span class="pl-pds">$</span></span> are stored, and <span class="pl-s"><span class="pl-pds">$</span>A_{j+1}<span class="pl-pds">$</span></span> is then fed back into the loop. Continue</td>
</tr>
<tr>
<td id="L199" class="blob-num js-line-number" data-line-number="199"></td>
<td id="LC199" class="blob-code blob-code-inner js-file-line">this process until the length of the output is less than the length of the filters, and</td>
</tr>
<tr>
<td id="L200" class="blob-num js-line-number" data-line-number="200"></td>
<td id="LC200" class="blob-code blob-code-inner js-file-line">return all of the stored detail coefficients as well as the final approximation frame.</td>
</tr>
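
For concreteness, the loop might look like the following minimal sketch, written only for the
Haar filters given above and a stand-in random signal (the problem below asks for a general
function that accepts arbitrary filters).
\begin{lstlisting}
import numpy as np
from scipy.signal import fftconvolve

# Haar decomposition filters, as given above.
lo = np.array([1, 1]) / np.sqrt(2)
hi = np.array([-1, 1]) / np.sqrt(2)

A = np.random.rand(16)          # stand-in signal of length 2^k
details = []
while len(A) >= len(lo):
    # Convolve, omit the first entry, keep the even-indexed entries.
    D = fftconvolve(A, hi, mode='full')[1::2]
    A = fftconvolve(A, lo, mode='full')[1::2]
    details.append(D)
coeffs = [A] + details[::-1]    # [A_n, D_n, ..., D_1]
\end{lstlisting}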
\begin{problem}
Write a function that calculates the discrete wavelet transform as described above.
The inputs should be three one-dimensional NumPy arrays (the signal, low-pass filter, and
high-pass filter). The output should be a list of one-dimensional NumPy arrays in the
following form: $[A_n, D_n, \ldots, D_1]$. (Note: for the convolution, you may use
the \li{fftconvolve} function from the \li{scipy.signal} package using the default
\li{mode = 'full'} parameter, but note that the output array is one entry too large, and so
you need to omit the first entry. For downsampling, only keep the even-indexed entries.)
\end{problem}
We also need to know how to reconstruct the signal from the detail coefficients and
approximation frame. Fortunately, the algorithm described above is entirely reversible,
albeit with slightly different filters:
$$\text{Lo} = (\sqrt{2}^{-1}, \sqrt{2}^{-1})$$ and
$$\text{Hi} = (\sqrt{2}^{-1}, -\sqrt{2}^{-1}).$$
Given $A_{j+1}$ and $D_{j+1}$, simply upsample both arrays (by inserting a zero after
each entry of the original array), convolve the results
with the Lo and Hi filters, respectively (this time omit the \emph{last} entry of the
result), and add the outputs to obtain $A_j$. Continue the
process until you recover $A_0$, the original signal.
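
The sketch below checks this reconstruction rule for a single level, again hardcoding the
Haar filters; it assumes \li{np} and \li{fftconvolve} have been imported as before, and the
names \li{lo_d}, \li{hi_d}, \li{lo_r}, and \li{hi_r} are ours (the next problem asks for the
general version).
\begin{lstlisting}
# One decompose/reconstruct round trip with the Haar filters.
lo_d, hi_d = np.array([1, 1])/np.sqrt(2), np.array([-1, 1])/np.sqrt(2)
lo_r, hi_r = np.array([1, 1])/np.sqrt(2), np.array([1, -1])/np.sqrt(2)

signal = np.random.rand(8)
A1 = fftconvolve(signal, lo_d, mode='full')[1::2]   # one forward step
D1 = fftconvolve(signal, hi_d, mode='full')[1::2]

up_A, up_D = np.zeros(8), np.zeros(8)
up_A[::2], up_D[::2] = A1, D1                       # insert a zero after each entry
recovered = (fftconvolve(up_A, lo_r)[:-1]           # convolve, omit the last
             + fftconvolve(up_D, hi_r)[:-1])        # entry, and add the branches
print(np.allclose(signal, recovered))               # should print True
\end{lstlisting}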
\begin{problem}
Write a function that calculates the inverse wavelet transform as described above.
The inputs should be a list of arrays (of the same form as the output of your discrete
wavelet transform function), the low-pass filter, and the high-pass filter. The output
should be a single array, the recovered signal. In order to check your work, compute
the discrete wavelet transform of a random array of length 64, then compute the inverse
transform, and compare the original signal with the recovered signal. The difference
should be very small.
\end{problem}

\subsection*{The Two-Dimensional Discrete Wavelet Transform}
Our discussion so far has focused on one-dimensional
discrete signals, but it is not difficult to extend the same ideas into the
realm of two-dimensional arrays. As you know, a digital image can be represented
as a matrix of pixel values (for simplicity we will consider grayscale images of
size $2^n \times 2^n$). We can perform the wavelet decomposition of
an image in much the same way as with one-dimensional signals. We once again
calculate detail and approximation coefficients using an iterative filter
bank, but now we generate four arrays of coefficients at each iteration as opposed to
just two. In essence, we perform the one-dimensional wavelet transform first on each
row, and then on each column of the matrix. Notice the similarity with our approach
to extending the one-dimensional Fourier transform to two-dimensional images. See
Figure \ref{fig:dwt2D} for an example of the two-dimensional wavelet transform applied
to an image.

\begin{figure}
  \includegraphics[width=0.8\textwidth]{dwt2D.pdf}
  \caption{The level 2 wavelet coefficients of the Lena image. The upper left quadrant
  is the approximation frame, and the other quadrants are the details. Notice how the
  details highlight the parts of the image with high-frequency textures and borders.}
  \label{fig:dwt2D}
\end{figure}

The algorithm goes as follows.
Given an input matrix of size $2^n \times 2^n$, first operate on the rows as you would
in the one-dimensional wavelet transform (i.e. convolve each row with the filters, then
downsample).
We then have two matrices of size $2^n \times 2^{n-1}$,
since each row has been downsampled by a factor of 2. Then for each of these two
intermediate matrices, operate on each column, yielding a total of four matrices of
size $2^{n-1} \times 2^{n-1}$. Figure \ref{fig:2dwt}
gives a graphical depiction of one iteration of the algorithm.

\begin{figure}[t]
  \includegraphics[width=0.8\textwidth]{2dwt.jpg}
  \caption{The 2-dimensional discrete wavelet transform.}
  \label{fig:2dwt}
\end{figure}

We initialize $LL_0$ to be the
original image matrix, and we terminate once the length of the rows or columns
is less than the length of the filters. We end up with a list of wavelet
coefficients, starting with the final approximation frame $LL_n$ followed by
collections of detail coefficients $(LH_n,HL_n,HH_n)$, $(LH_{n-1},HL_{n-1},HH_{n-1})$,
$\ldots$, $(LH_1,HL_1,HH_1)$. Note that at each iteration we operate first on the
rows (convolve with the filter, then downsample), and then we operate on the columns
of the resulting matrices (\emph{not} the original matrix). The sizes of the output
matrices have been reduced by a factor of two in both dimensions. As with the
one-dimensional algorithm, to reconstruct the image from the coefficients, we simply
reverse the process by upsampling, convolving, and adding (first the columns, then
the rows). We provide sample code for one iteration of the transform and the inverse.

\begin{lstlisting}
import numpy as np
from scipy.signal import fftconvolve

# Given the current approximation frame, image, and the decomposition
# filters lo_d and hi_d, initialize empty arrays for the outputs.
temp = np.zeros([image.shape[0], image.shape[1]//2])
LL = np.zeros([image.shape[0]//2, image.shape[1]//2])
LH = np.zeros([image.shape[0]//2, image.shape[1]//2])
HL = np.zeros([image.shape[0]//2, image.shape[1]//2])
HH = np.zeros([image.shape[0]//2, image.shape[1]//2])

# low-pass filtering along the rows
for i in range(image.shape[0]):
    temp[i] = fftconvolve(image[i], lo_d, mode='full')[1::2]

# low- and high-pass filtering along the columns
for i in range(image.shape[1]//2):
    LL[:,i] = fftconvolve(temp[:,i], lo_d, mode='full')[1::2]
    LH[:,i] = fftconvolve(temp[:,i], hi_d, mode='full')[1::2]

# high-pass filtering along the rows
for i in range(image.shape[0]):
    temp[i] = fftconvolve(image[i], hi_d, mode='full')[1::2]

# low- and high-pass filtering along the columns
for i in range(image.shape[1]//2):
    HL[:,i] = fftconvolve(temp[:,i], lo_d, mode='full')[1::2]
    HH[:,i] = fftconvolve(temp[:,i], hi_d, mode='full')[1::2]
\end{lstlisting}
At this point, the variables \li{LL, LH, HL, HH} contain the current level of wavelet coefficients.
You would then store \li{(LH, HL, HH)} in a list, and feed \li{LL} back into the same
block of code (with \li{LL} replacing \li{image}) to obtain the next level of coefficients.
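
One possible way to organize that iteration is sketched below; \li{dwt2_step} is a
hypothetical helper that wraps the block of code above into a function returning
\li{LL, LH, HL, HH}, and the stopping rule follows the description given earlier.
\begin{lstlisting}
# Sketch of the full decomposition loop. dwt2_step is a hypothetical helper
# wrapping the single-level code above; it returns the arrays LL, LH, HL, HH.
def dwt2(image, lo_d, hi_d):
    coeffs = []
    LL = image
    while min(LL.shape) >= len(lo_d):
        LL, LH, HL, HH = dwt2_step(LL, lo_d, hi_d)
        coeffs.append((LH, HL, HH))
    # [LL_n, (LH_n, HL_n, HH_n), ..., (LH_1, HL_1, HH_1)]
    return [LL] + coeffs[::-1]
\end{lstlisting}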
<tr>
<td id="L316" class="blob-num js-line-number" data-line-number="316"></td>
<td id="LC316" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L317" class="blob-num js-line-number" data-line-number="317"></td>
<td id="LC317" class="blob-code blob-code-inner js-file-line">Now, given a current level of wavelet coefficients, here is the code to recover the previous</td>
</tr>
<tr>
<td id="L318" class="blob-num js-line-number" data-line-number="318"></td>
<td id="LC318" class="blob-code blob-code-inner js-file-line">approximation frame, which is the crucial step in the inverse transform.</td>
</tr>
<tr>
<td id="L319" class="blob-num js-line-number" data-line-number="319"></td>
<td id="LC319" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\begin</span>{lstlisting}</td>
</tr>
<tr>
<td id="L320" class="blob-num js-line-number" data-line-number="320"></td>
<td id="LC320" class="blob-code blob-code-inner js-file-line"># given current coefficients LL, LH, HL, HH</td>
</tr>
<tr>
<td id="L321" class="blob-num js-line-number" data-line-number="321"></td>
<td id="LC321" class="blob-code blob-code-inner js-file-line"># initialize temporary arrays</td>
</tr>
<tr>
<td id="L322" class="blob-num js-line-number" data-line-number="322"></td>
<td id="LC322" class="blob-code blob-code-inner js-file-line">n = LL.shape[0]</td>
</tr>
<tr>
<td id="L323" class="blob-num js-line-number" data-line-number="323"></td>
<td id="LC323" class="blob-code blob-code-inner js-file-line">temp1 = np.zeros([2*n,n])</td>
</tr>
<tr>
<td id="L324" class="blob-num js-line-number" data-line-number="324"></td>
<td id="LC324" class="blob-code blob-code-inner js-file-line">temp2 = np.zeros([2*n,n])</td>
</tr>
<tr>
<td id="L325" class="blob-num js-line-number" data-line-number="325"></td>
<td id="LC325" class="blob-code blob-code-inner js-file-line">up1 = np.zeros(2*n)</td>
</tr>
<tr>
<td id="L326" class="blob-num js-line-number" data-line-number="326"></td>
<td id="LC326" class="blob-code blob-code-inner js-file-line">up2 = np.zeros(2*n)</td>
</tr>
<tr>
<td id="L327" class="blob-num js-line-number" data-line-number="327"></td>
<td id="LC327" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L328" class="blob-num js-line-number" data-line-number="328"></td>
<td id="LC328" class="blob-code blob-code-inner js-file-line"># upsample and filter the columns of the coefficient arrays</td>
</tr>
<tr>
<td id="L329" class="blob-num js-line-number" data-line-number="329"></td>
<td id="LC329" class="blob-code blob-code-inner js-file-line">for i in xrange(n):</td>
</tr>
<tr>
<td id="L330" class="blob-num js-line-number" data-line-number="330"></td>
<td id="LC330" class="blob-code blob-code-inner js-file-line"> up1[1::2] = HH[:,i]</td>
</tr>
<tr>
<td id="L331" class="blob-num js-line-number" data-line-number="331"></td>
<td id="LC331" class="blob-code blob-code-inner js-file-line"> up2[1::2] = HL[:,i]</td>
</tr>
<tr>
<td id="L332" class="blob-num js-line-number" data-line-number="332"></td>
<td id="LC332" class="blob-code blob-code-inner js-file-line"> temp1[:,i] = fftconvolve(up1, hi_r)[1:] + fftconvolve(up2, lo_r)[1:]</td>
</tr>
<tr>
<td id="L333" class="blob-num js-line-number" data-line-number="333"></td>
<td id="LC333" class="blob-code blob-code-inner js-file-line"> up1[1::2] = LH[:,i]</td>
</tr>
<tr>
<td id="L334" class="blob-num js-line-number" data-line-number="334"></td>
<td id="LC334" class="blob-code blob-code-inner js-file-line"> up2[1::2] = LL[:,i] </td>
</tr>
<tr>
<td id="L335" class="blob-num js-line-number" data-line-number="335"></td>
<td id="LC335" class="blob-code blob-code-inner js-file-line"> temp2[:,i] = fftconvolve(up1, hi_r)[1:] + fftconvolve(up2, lo_r)[1:]</td>
</tr>
<tr>
<td id="L336" class="blob-num js-line-number" data-line-number="336"></td>
<td id="LC336" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L337" class="blob-num js-line-number" data-line-number="337"></td>
<td id="LC337" class="blob-code blob-code-inner js-file-line"># upsample and filter the rows, then add results together</td>
</tr>
<tr>
<td id="L338" class="blob-num js-line-number" data-line-number="338"></td>
<td id="LC338" class="blob-code blob-code-inner js-file-line">result = sp.zeros([2*n,2*n])</td>
</tr>
<tr>
<td id="L339" class="blob-num js-line-number" data-line-number="339"></td>
<td id="LC339" class="blob-code blob-code-inner js-file-line">for i in xrange(2*n):</td>
</tr>
<tr>
<td id="L340" class="blob-num js-line-number" data-line-number="340"></td>
<td id="LC340" class="blob-code blob-code-inner js-file-line"> up1[1::2] = temp1[i]</td>
</tr>
<tr>
<td id="L341" class="blob-num js-line-number" data-line-number="341"></td>
<td id="LC341" class="blob-code blob-code-inner js-file-line"> up2[1::2] = temp2[i]</td>
</tr>
<tr>
<td id="L342" class="blob-num js-line-number" data-line-number="342"></td>
<td id="LC342" class="blob-code blob-code-inner js-file-line"> result[i] = fftconvolve(up1, hi_r)[1:] + fftconvolve(up2, lo_r)[1:]</td>
</tr>
<tr>
<td id="L343" class="blob-num js-line-number" data-line-number="343"></td>
<td id="LC343" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\end</span>{lstlisting}</td>
</tr>
<tr>
<td id="L344" class="blob-num js-line-number" data-line-number="344"></td>
<td id="LC344" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L345" class="blob-num js-line-number" data-line-number="345"></td>
<td id="LC345" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\begin</span>{problem}</td>
</tr>
<tr>
<td id="L346" class="blob-num js-line-number" data-line-number="346"></td>
<td id="LC346" class="blob-code blob-code-inner js-file-line">Build off of the sample code to fully implement the two-dimensional discrete</td>
</tr>
<tr>
<td id="L347" class="blob-num js-line-number" data-line-number="347"></td>
<td id="LC347" class="blob-code blob-code-inner js-file-line">wavelet transform as described above.</td>
</tr>
<tr>
<td id="L348" class="blob-num js-line-number" data-line-number="348"></td>
<td id="LC348" class="blob-code blob-code-inner js-file-line">As before, the input to your function should consist of</td>
</tr>
<tr>
<td id="L349" class="blob-num js-line-number" data-line-number="349"></td>
<td id="LC349" class="blob-code blob-code-inner js-file-line">three arrays: the input image, the low-pass filter, and the high-pass filter.</td>
</tr>
<tr>
<td id="L350" class="blob-num js-line-number" data-line-number="350"></td>
<td id="LC350" class="blob-code blob-code-inner js-file-line">You should return a list of the following form: <span class="pl-s"><span class="pl-pds">$$</span>[LL_n,(LH_n,HL_n,HH_n), <span class="pl-c1">\ldots</span></span></td>
</tr>
<tr>
<td id="L351" class="blob-num js-line-number" data-line-number="351"></td>
<td id="LC351" class="blob-code blob-code-inner js-file-line"><span class="pl-s">,(LH_<span class="pl-c1">1</span>,HL_<span class="pl-c1">1</span>,HH_<span class="pl-c1">1</span>)].<span class="pl-pds">$$</span></span> </td>
</tr>
<tr>
<td id="L352" class="blob-num js-line-number" data-line-number="352"></td>
<td id="LC352" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L353" class="blob-num js-line-number" data-line-number="353"></td>
<td id="LC353" class="blob-code blob-code-inner js-file-line">The inverse wavelet transform function should take as input a list</td>
</tr>
<tr>
<td id="L354" class="blob-num js-line-number" data-line-number="354"></td>
<td id="LC354" class="blob-code blob-code-inner js-file-line">of that same form, as well as the reconstruction low-pass and high-pass filters,</td>
</tr>
<tr>
<td id="L355" class="blob-num js-line-number" data-line-number="355"></td>
<td id="LC355" class="blob-code blob-code-inner js-file-line">and should return the reconstructed image.</td>
</tr>
<tr>
<td id="L356" class="blob-num js-line-number" data-line-number="356"></td>
<td id="LC356" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\end</span>{problem}</td>
</tr>
<tr>
<td id="L357" class="blob-num js-line-number" data-line-number="357"></td>
<td id="LC357" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L358" class="blob-num js-line-number" data-line-number="358"></td>
<td id="LC358" class="blob-code blob-code-inner js-file-line">These wavelet coefficients are very useful in a variety of image processing</td>
</tr>
<tr>
<td id="L359" class="blob-num js-line-number" data-line-number="359"></td>
<td id="LC359" class="blob-code blob-code-inner js-file-line">tasks. They allow us to analyze and manipulate images in terms of both their</td>
</tr>
<tr>
<td id="L360" class="blob-num js-line-number" data-line-number="360"></td>
<td id="LC360" class="blob-code blob-code-inner js-file-line">frequency and spatial properties, and at differing levels of resolution.</td>
</tr>
<tr>
<td id="L361" class="blob-num js-line-number" data-line-number="361"></td>
<td id="LC361" class="blob-code blob-code-inner js-file-line">Furthermore, wavelet bases often have the remarkable ability to represent</td>
</tr>
<tr>
<td id="L362" class="blob-num js-line-number" data-line-number="362"></td>
<td id="LC362" class="blob-code blob-code-inner js-file-line">images in a very <span class="pl-c1">\textit</span>{sparse} manner -- that is, most of the image</td>
</tr>
<tr>
<td id="L363" class="blob-num js-line-number" data-line-number="363"></td>
<td id="LC363" class="blob-code blob-code-inner js-file-line">information is captured by a small subset of the wavelet coefficients.</td>
</tr>
<tr>
<td id="L364" class="blob-num js-line-number" data-line-number="364"></td>
<td id="LC364" class="blob-code blob-code-inner js-file-line">In the remainder of this lab, we will see how the discrete wavelet transform</td>
</tr>
<tr>
<td id="L365" class="blob-num js-line-number" data-line-number="365"></td>
<td id="LC365" class="blob-code blob-code-inner js-file-line">plays a role in edge detection, noise removal, and compression.</td>
</tr>
<tr>
<td id="L366" class="blob-num js-line-number" data-line-number="366"></td>
<td id="LC366" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L367" class="blob-num js-line-number" data-line-number="367"></td>
<td id="LC367" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\subsection</span>*{More Wavelets}</td>
</tr>
<tr>
<td id="L368" class="blob-num js-line-number" data-line-number="368"></td>
<td id="LC368" class="blob-code blob-code-inner js-file-line">Up to this point, the only wavelet that we have considered is the Haar wavelet,</td>
</tr>
<tr>
<td id="L369" class="blob-num js-line-number" data-line-number="369"></td>
<td id="LC369" class="blob-code blob-code-inner js-file-line">which is the simplest and historically first example. Wavelet analysis is a broad</td>
</tr>
<tr>
<td id="L370" class="blob-num js-line-number" data-line-number="370"></td>
<td id="LC370" class="blob-code blob-code-inner js-file-line">field, however, and there are myriad other wavelets that have been studied and</td>
</tr>
<tr>
<td id="L371" class="blob-num js-line-number" data-line-number="371"></td>
<td id="LC371" class="blob-code blob-code-inner js-file-line">applied. Your implementation of the discrete wavelet transform is quite general,</td>
</tr>
<tr>
<td id="L372" class="blob-num js-line-number" data-line-number="372"></td>
<td id="LC372" class="blob-code blob-code-inner js-file-line">and you will find that different types of signals or functions call for different</td>
</tr>
<tr>
<td id="L373" class="blob-num js-line-number" data-line-number="373"></td>
<td id="LC373" class="blob-code blob-code-inner js-file-line">wavelets. We will not go into detail here, but be aware that there is a large</td>
</tr>
<tr>
<td id="L374" class="blob-num js-line-number" data-line-number="374"></td>
<td id="LC374" class="blob-code blob-code-inner js-file-line">selection of wavelets out there.</td>
</tr>
<tr>
<td id="L375" class="blob-num js-line-number" data-line-number="375"></td>
<td id="LC375" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L376" class="blob-num js-line-number" data-line-number="376"></td>
<td id="LC376" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\begin</span>{figure}[H]</td>
</tr>
<tr>
<td id="L377" class="blob-num js-line-number" data-line-number="377"></td>
<td id="LC377" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\minipage</span>{0.49<span class="pl-c1">\textwidth</span>}</td>
</tr>
<tr>
<td id="L378" class="blob-num js-line-number" data-line-number="378"></td>
<td id="LC378" class="blob-code blob-code-inner js-file-line"> <span class="pl-c1">\includegraphics</span>[width=<span class="pl-c1">\linewidth</span>]{mexicanHat}</td>
</tr>
<tr>
<td id="L379" class="blob-num js-line-number" data-line-number="379"></td>
<td id="LC379" class="blob-code blob-code-inner js-file-line"> <span class="pl-c1">\caption</span>{The Mexican Hat Wavelet}</td>
</tr>
<tr>
<td id="L380" class="blob-num js-line-number" data-line-number="380"></td>
<td id="LC380" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\endminipage\hfill</span></td>
</tr>
<tr>
<td id="L381" class="blob-num js-line-number" data-line-number="381"></td>
<td id="LC381" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\minipage</span>{0.49<span class="pl-c1">\textwidth</span>}</td>
</tr>
<tr>
<td id="L382" class="blob-num js-line-number" data-line-number="382"></td>
<td id="LC382" class="blob-code blob-code-inner js-file-line"> <span class="pl-c1">\includegraphics</span>[width=<span class="pl-c1">\linewidth</span>]{db5_3}</td>
</tr>
<tr>
<td id="L383" class="blob-num js-line-number" data-line-number="383"></td>
<td id="LC383" class="blob-code blob-code-inner js-file-line"> <span class="pl-c1">\caption</span>{The Cohen-Daubechies-Feauveau 5/3 Wavelet}</td>
</tr>
<tr>
<td id="L384" class="blob-num js-line-number" data-line-number="384"></td>
<td id="LC384" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\endminipage</span></td>
</tr>
<tr>
<td id="L385" class="blob-num js-line-number" data-line-number="385"></td>
<td id="LC385" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\end</span>{figure}</td>
</tr>
<tr>
<td id="L386" class="blob-num js-line-number" data-line-number="386"></td>
<td id="LC386" class="blob-code blob-code-inner js-file-line">
</td>
</tr>
<tr>
<td id="L387" class="blob-num js-line-number" data-line-number="387"></td>
<td id="LC387" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\section</span>*{The PyWavelets Module}</td>
</tr>
<tr>
<td id="L388" class="blob-num js-line-number" data-line-number="388"></td>
<td id="LC388" class="blob-code blob-code-inner js-file-line">PyWavelets is a Python library for Wavelet Analysis. It provides convenient and</td>
</tr>
<tr>
<td id="L389" class="blob-num js-line-number" data-line-number="389"></td>
<td id="LC389" class="blob-code blob-code-inner js-file-line">efficient methods to calculate the one and two-dimensional discrete Wavelet</td>
</tr>
<tr>
<td id="L390" class="blob-num js-line-number" data-line-number="390"></td>
<td id="LC390" class="blob-code blob-code-inner js-file-line">transform, as well as much more. Assuming that the package has been installed on</td>
</tr>
<tr>
<td id="L391" class="blob-num js-line-number" data-line-number="391"></td>
<td id="LC391" class="blob-code blob-code-inner js-file-line">your machine, type the following to get started:</td>
</tr>
\begin{lstlisting}
>>> import pywt
\end{lstlisting}
Performing the basic discrete Wavelet transform is very simple.
Below, we compute the one-dimensional transform for a sinusoidal signal.
\begin{lstlisting}
>>> import numpy as np
>>> f = np.sin(np.linspace(0,8*np.pi, 256))
>>> fw = pywt.wavedec(f, 'haar')
\end{lstlisting}
The variable \li{fw} is now a list of arrays, starting with the final approximation
frame, followed by the various levels of detail coefficients, just like the output
of the wavelet transform function that you coded in the previous lab.
Plot the level 2 detail and verify that it resembles a blocky sinusoid.
\begin{lstlisting}
>>> from matplotlib import pyplot as plt
>>> plt.plot(fw[-2], linestyle='steps')
>>> plt.show()
\end{lstlisting}
We can alter the arguments to the \li{wavedec} function to use different
wavelets or obtain different levels of the wavelet transform. The second
positional argument, as you will notice, is a string that gives the name of the
wavelet to be used. We first used the Haar wavelet, with which you are already
familiar. PyWavelets supports a number of other wavelets, however, which you can
list by executing the following code:
\begin{lstlisting}
>>> # list the available Wavelet families
>>> print pywt.families()
['haar', 'db', 'sym', 'coif', 'bior', 'rbio', 'dmey']
>>> # list the available wavelets in the coif family
>>> print pywt.wavelist('coif')
['coif1', 'coif2', 'coif3', 'coif4', 'coif5']
\end{lstlisting}
We can also include optional arguments \li{mode} and \li{level} when calling the
\li{wavedec} function. Using these arguments, you can adjust the mode for dealing
with border distortion and the level of the Wavelet decomposition, respectively.
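For example, the following call (a sketch; the exact set of mode names available
depends on your version of PyWavelets) computes only three levels of the
decomposition while using periodic signal extension at the borders:
\begin{lstlisting}
>>> # three decomposition levels, periodic extension at the borders
>>> fw3 = pywt.wavedec(f, 'haar', mode='per', level=3)
>>> len(fw3)    # approximation frame plus three levels of detail
4
\end{lstlisting}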

\begin{figure}[t]
  \includegraphics[width=\linewidth]{dwt2.pdf}
  \caption{Level 1 Wavelet decomposition of the Lena image.
  The upper left is the approximation frame, and the remaining
  plots are the detail coefficients.}
  \label{fig:dwt2}
\end{figure}

Now we illustrate how to perform a two-dimensional Wavelet transform using
PyWavelets. We will work with the traditional Lena image, performing a
two-level wavelet transform using the Daubechies 4 Wavelet.
\begin{lstlisting}
>>> import scipy.misc
>>> lena = scipy.misc.lena()
>>> lw = pywt.wavedec2(lena, 'db4', level=2)
\end{lstlisting}
The variable \li{lw} is a list of tuples of arrays. The first entry of the list is
simply the level 2 approximation frame. The second entry of the list is a tuple of
the level 2 detail coefficients $LH$, $HL$, and $HH$ (in that order). The remaining
entries of the list are tuples containing the lower level detail coefficients.
Thus, to plot the level 1 $HL$ detail coefficients, we can execute the following code:
\begin{lstlisting}
>>> HL1 = lw[-1][1]
>>> plt.imshow(np.abs(HL1), cmap=plt.cm.Greys_r, interpolation='none')
>>> plt.show()
\end{lstlisting}
The output of this code should be a plot resembling the lower left plot given in Figure
\ref{fig:dwt2}.
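Similarly, the level 2 approximation frame is just the first entry of \li{lw}.
As a quick check (a sketch, not one of the lab exercises), plotting it should
show a small, coarse version of the original image:
\begin{lstlisting}
>>> LL2 = lw[0]    # level 2 approximation frame
>>> plt.imshow(LL2, cmap=plt.cm.Greys_r)
>>> plt.show()
\end{lstlisting}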

We have only introduced a couple of the basic tools available in PyWavelets. There
are of course many more functions and methods that facilitate a more comprehensive
Wavelet analysis. In the remainder of this lab, we will explore three particular
applications of Wavelet analysis in the realm of image processing and compression.

%\section*{Edge Detection}
%It is often useful to identify the edges of objects and figures
%represented in images. The edge information can be used to classify images
%and group them with other similar images (this is part of a field called
%\textit{computer vision}), to segment the image into component parts, to
%sharpen blurry images, to filter out unnecessary details of the image,
%and so forth. Of course, our human eyes are very adept at recognizing edges,
%but enabling a computer to do the same is much more difficult. An edge can
%be thought of as a discontinuity in the image or a region of high contrast
%in either color or brightness. We can therefore leverage the high-frequency
%detail coefficients of the wavelet transform to detect the edges. Execute the
%following code:
%\begin{lstlisting}
%>>> # calculate one level of wavelet coefficients
%>>> coeffs = pywt.wavedec2(lena,'haar', level=1)
%\end{lstlisting}
%
%Note that the approximation coefficients are very close to the original
%image, while the detail coefficients are much more sparse, and roughly
%capture the edges in the image. In particular, the upper right coefficients
%emphasize the vertical edges, the lower left coefficients emphasize the
%horizontal edges, and the lower right coefficients emphasize the diagonal
%edges.
%
%\begin{problem}
%Now zero out the approximation coefficients and use your inverse DWT
%function to recreate the image. Plot its absolute value. This image is
%a fairly good representation of the edges. If we add this to the original
%image, we can increase the contrast at the edges (that is, make the dark
%side darker, and the light side lighter). Do this, and plot the original
%image side-by-side with the sharpened image. What do you notice? There
%are many image-sharpening techniques, and those based on wavelets
%are more sophisticated than what we have done here, but this gives the
%basic idea.
%\end{problem}
%the above section needs work, or maybe should be taken out completely.

\section*{Noise Removal}
Noise in an image can be defined as unwanted visual artifacts that
obscure the true image. Images can acquire noise from a variety of
sources, including the camera, transmission, and image processing
algorithms. Noise can be completely random and incoherent (as in
Figure \ref{fig:incoherent}), or it can be coherent and display
visual patterns (Figure \ref{fig:coherent}). In this section, we will
focus on reducing a particular type of random noise in images, called
\textit{Gaussian white noise}.

\begin{figure}[t]
\minipage{0.49\textwidth}
  \includegraphics[width=\linewidth]{phantom_random.pdf}
  \caption{The Phantom image with incoherent noise}
  \label{fig:incoherent}
\endminipage\hfill
\minipage{0.49\textwidth}
  \includegraphics[width=\linewidth]{phantom_coherent.pdf}
  \caption{The Phantom image with coherent noise}
  \label{fig:coherent}
\endminipage
\end{figure}

An image that is distorted by Gaussian white noise is one in which
every pixel has been perturbed by a small amount, such that the
perturbations are normally distributed. We can easily add such noise
to an image using the \li{np.random.normal} function.

\begin{lstlisting}
>>> noisyLena = lena + np.random.normal(scale=20, size=lena.shape)
>>> plt.imshow(noisyLena, cmap=plt.cm.Greys_r)
>>> plt.show()
\end{lstlisting}

Given an image with Gaussian white noise, how do we go about reducing
the noise level? Our approach will be based on the idea of thresholding.
It turns out that images are often sparse in the wavelet basis,
particularly in the high-frequency details. The Gaussian noise, however,
spreads across all of the wavelet coefficients, and its contribution to
each detail coefficient is typically small (of magnitude roughly
proportional to the standard deviation of the noise). We can therefore
reduce the noise while preserving the true image by shrinking the
detail coefficients via hard or soft thresholding.

Given a positive threshold value $\tau$, hard thresholding sets
every wavelet coefficient whose magnitude is less than $\tau$ to
zero, while leaving the remaining coefficients untouched. Soft
thresholding also zeros out all coefficients of magnitude less than
$\tau$, but in addition maps every other coefficient $\beta$ to
$\beta - \tau$ if $\beta > 0$ or $\beta + \tau$ if $\beta < 0$.

Implementing these simple thresholding algorithms in Python is
straightforward, but PyWavelets already provides this functionality.
The following code gives an example.

\begin{lstlisting}
>>> A = np.arange(-4,5).reshape(3,3)
>>> A
array([[-4, -3, -2],
       [-1,  0,  1],
       [ 2,  3,  4]])
>>> pywt.thresholding.hard(A,1.5)
array([[-4, -3, -2],
       [ 0,  0,  0],
       [ 2,  3,  4]])
>>> pywt.thresholding.soft(A,1.5)
array([[-2.5, -1.5, -0.5],
       [ 0. ,  0. ,  0. ],
       [ 0.5,  1.5,  2.5]])
\end{lstlisting}

Once the coefficients have been thresholded, we take the inverse
wavelet transform to recover the denoised image. This can be done
by calling the \li{waverec2} function, providing the list of Wavelet
coefficients as well as the name of the desired Wavelet as arguments.
The threshold value is generally a function of the variance of the noise,
and in real situations, we do not know what this variance is. In fact,
noise variance estimation in images is a research area in its own
right, but this goes beyond the scope of this lab, and so we will
assume that we already have a decent estimate of the variance.
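For instance, if \li{lwt} holds a thresholded copy of the coefficient list
\li{lw} from above (the name \li{lwt} is ours, purely for illustration), the
denoised image is recovered with a single call:
\begin{lstlisting}
>>> # invert the transform with the same wavelet used for the decomposition
>>> denoisedLena = pywt.waverec2(lwt, 'db4')
\end{lstlisting}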

\begin{figure}[t]
  \includegraphics[width=\linewidth]{denoise.pdf}
  \caption{Noisy Lena (left), denoised using hard thresholding (center),
  and denoised using soft thresholding (right).}
  \label{fig:denoise}
\end{figure}

\begin{problem}
Write functions that implement the hard and soft thresholding
techniques. The inputs should be a list of wavelet coefficients
in the usual form, as well as the threshold value. The output
should be the thresholded wavelet coefficients (also in
the usual form). Remember that we only want to threshold the
detail coefficients, and not the approximation coefficients.
You should therefore leave the first entry of the input
coefficient list unchanged.
\end{problem}

\begin{problem}
Create a noisy version of the Lena image by adding Gaussian
white noise of mean 0 and standard deviation $\sigma = 20$ (i.e. \li{scale=20}).
Compute four levels of the wavelet coefficients using the Daubechies 4 Wavelet,
and input these into your thresholding functions
(with $\tau = 3\sigma$ for the hard threshold,
and $\tau = 3\sigma/2$ for the soft threshold). Reconstruct the
two denoised images, and then plot these together alongside the
noisy image. Your output should match Figure \ref{fig:denoise}.

What do you notice? How does lowering or raising the
threshold affect the reconstructed images? What happens if you use
a different Wavelet?
\end{problem}


\section*{Image Compression}
We now turn to the problem of image compression. Explicitly saving
the value of every pixel in an image can be very costly in both
storage and transmission, and numerous image compression techniques
have been developed over the years to deal with this problem.
Transform methods have long played an important role in these
techniques; the popular JPEG image compression standard is based on
the discrete cosine transform. Starting in the early 1990s, much
research went into compression methods using the discrete wavelet
transform, with great success. The JPEG2000 compression standard
and the FBI Fingerprint Image database, along with other systems,
take the wavelet approach.

The general framework for compression is fairly straightforward. First,
the image to be compressed undergoes some form of preprocessing (this
can include subtracting out its mean, tiling the image, or perhaps
nothing at all). Next, the wavelet coefficients are computed using some
specially constructed wavelet (JPEG2000 uses either the
Cohen-Daubechies-Feauveau 9/7 or 5/3 wavelet) and then \textit{quantized},
a process that we will explain shortly. The quantized coefficients are
then grouped in a particular way and passed through an entropy encoder
(such as Huffman coding, run-length coding, or arithmetic coding). This
coding step comes from the realm of information theory, and we will not
worry about it in this lab. What you are left with is a compact stream of
bits that can then be saved or transmitted much more efficiently than the
original image. All of the above steps are invertible, allowing us to
reconstruct the image from the bitstream.
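As a rough sketch of the transform step of this pipeline, note that the
biorthogonal (\li{'bior'}) family in PyWavelets contains close relatives of the
CDF wavelets; the particular wavelet name and preprocessing below are
illustrative, not the JPEG2000 specification:
\begin{lstlisting}
>>> # preprocess (subtract the mean), then compute the wavelet coefficients
>>> coeffs = pywt.wavedec2(lena - lena.mean(), 'bior4.4', level=4)
\end{lstlisting}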

The step in this process that we will focus on is quantization. Put simply,
quantization is a process whereby the coefficients are converted into
integers. If the coefficients are floating-point numbers, then this
process introduces some loss of precision, and we call this
\textit{lossy compression}. In the situation where all the coefficients are already
integers, it is possible to compress the image without any loss of precision,
and this is called \textit{lossless compression}. Quantization can be
performed in a variety of ways, and we will explore one particular method
called a \emph{uniform null-zone quantizer}. Given a coefficient $x$, we assign
to it an integer $q$ given by
\begin{equation*}
q =
  \begin{cases}
    \lceil x / \delta - t/2 \rceil, & x \geq 0\\
    \lfloor x / \delta + t/2 \rfloor, & x \leq 0
  \end{cases}
\end{equation*}
where $1 \leq t \leq 2$ and $\delta > 0$ are adjustable parameters.
<tr>
<td id="L660" class="blob-num js-line-number" data-line-number="660"></td>
<td id="LC660" class="blob-code blob-code-inner js-file-line">The inverse process, called de-quantization, consists of recovering</td>
</tr>
<tr>
<td id="L661" class="blob-num js-line-number" data-line-number="661"></td>
<td id="LC661" class="blob-code blob-code-inner js-file-line">the coefficient <span class="pl-s"><span class="pl-pds">$</span>y<span class="pl-pds">$</span></span> from the quantized value <span class="pl-s"><span class="pl-pds">$</span>q<span class="pl-pds">$</span></span> via the equation</td>
</tr>
<tr>
<td id="L662" class="blob-num js-line-number" data-line-number="662"></td>
<td id="LC662" class="blob-code blob-code-inner js-file-line"><span class="pl-c1">\begin</span>{equation*}</td>
</tr>
<tr>
<td id="L663" class="blob-num js-line-number" data-line-number="663"></td>
<td id="LC663" class="blob-code blob-code-inner js-file-line"> y =</td>
</tr>
<tr>
<td id="L664" class="blob-num js-line-number" data-line-number="664"></td>
<td id="LC664" class="blob-code blob-code-inner js-file-line"> <span class="pl-c1">\begin</span>{cases}</td>
</tr>
<tr>
<td id="L665" class="blob-num js-line-number" data-line-number="665"></td>
<td id="LC665" class="blob-code blob-code-inner js-file-line"> (q - 1/2 + t/2)<span class="pl-c1">\delta</span> & q > 0<span class="pl-c1">\\</span></td>
</tr>
<tr>
<td id="L666" class="blob-num js-line-number" data-line-number="666"></td>
<td id="LC666" class="blob-code blob-code-inner js-file-line"> (q + 1/2 - t/2)<span class="pl-c1">\delta</span> & q < 0<span class="pl-c1">\\</span></td>
</tr>
<tr>
<td id="L667" class="blob-num js-line-number" data-line-number="667"></td>
<td id="LC667" class="blob-code blob-code-inner js-file-line"> 0, & q = 0</td>
</tr>
<tr>
<td id="L668" class="blob-num js-line-number" data-line-number="668"></td>
<td id="LC668" class="blob-code blob-code-inner js-file-line"> <span class="pl-c1">\end</span>{cases}</td>
</tr>
<tr>
<td id="L669" class="blob-num js-line-number" data-line-number="669"></td>
<td id="LC669" class="blob-code blob-code-inner js-file-line"> <span class="pl-c1">\end</span>{equation*}</td>
</tr>
<tr>
<td id="L670" class="blob-num js-line-number" data-line-number="670"></td>
<td id="LC670" class="blob-code blob-code-inner js-file-line">What we are essentially doing is mapping all wavelet coefficients that</td>
</tr>
<tr>
<td id="L671" class="blob-num js-line-number" data-line-number="671"></td>
<td id="LC671" class="blob-code blob-code-inner js-file-line">fall in the interval <span class="pl-s"><span class="pl-pds">$</span>[-<span class="pl-c1">\delta</span>,<span class="pl-c1">\delta</span>]<span class="pl-pds">$</span></span> to 0 and all wavelet coefficients</td>
</tr>
<tr>
<td id="L672" class="blob-num js-line-number" data-line-number="672"></td>
<td id="LC672" class="blob-code blob-code-inner js-file-line">in the interval <span class="pl-s"><span class="pl-pds">$</span>[j<span class="pl-c1">\delta</span>,(j+<span class="pl-c1">1</span>)<span class="pl-c1">\delta</span>]<span class="pl-pds">$</span></span> to <span class="pl-s"><span class="pl-pds">$</span>(<span class="pl-c1">2</span>j+<span class="pl-c1">1</span>)<span class="pl-c1">\delta</span>/<span class="pl-c1">2</span><span class="pl-pds">$</span></span> for integers</td>
</tr>
<tr>
<td id="L673" class="blob-num js-line-number" data-line-number="673"></td>
<td id="LC673" class="blob-code blob-code-inner js-file-line"><span class="pl-s"><span class="pl-pds">$</span>j <span class="pl-c1">\geq</span> <span class="pl-c1">1</span><span class="pl-pds">$</span></span> and <span class="pl-s"><span class="pl-pds">$</span>j <span class="pl-c1">\leq</span> -<span class="pl-c1">2</span><span class="pl-pds">$</span></span>. This greatly reduces the number of distinct</td>
</tr>
<tr>
<td id="L674" class="blob-num js-line-number" data-line-number="674"></td>
<td id="LC674" class="blob-code blob-code-inner js-file-line">coefficient values (indeed, most of the coefficients are mapped to</td>
</tr>
<tr>
<td id="L675" class="blob-num js-line-number" data-line-number="675"></td>
<td id="LC675" class="blob-code blob-code-inner js-file-line">zero), allowing us, at the cost of some precision, to store less</td>
</tr>
<tr>
<td id="L676" class="blob-num js-line-number" data-line-number="676"></td>
<td id="LC676" class="blob-code blob-code-inner js-file-line">information. The larger we choose <span class="pl-s"><span class="pl-pds">$</span><span class="pl-c1">\delta</span><span class="pl-pds">$</span></span>, the more compression we</td>
</tr>
<tr>
<td id="L677" class="blob-num js-line-number" data-line-number="677"></td>
<td id="LC677" class="blob-code blob-code-inner js-file-line">can achieve, albeit with a correspondingly larger loss in precision.</td>
</tr>
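
As a minimal illustration of these two formulas (the helper names below are ours, not part of the lab's provided code), the following Python sketch quantizes and de-quantizes a single coefficient:
\begin{lstlisting}
import numpy as np

def quantize_coeff(x, delta, t=2.0):
    # Uniform null-zone quantizer for a single coefficient x.
    if x >= 0:
        return int(np.ceil(x / delta - t / 2.))
    return int(np.floor(x / delta + t / 2.))

def dequantize_coeff(q, delta, t=2.0):
    # Recover an approximate coefficient from the integer q.
    if q > 0:
        return (q - 0.5 + t / 2.) * delta
    if q < 0:
        return (q + 0.5 - t / 2.) * delta
    return 0.0
\end{lstlisting}
For example, with $\delta = 1$ and $t = 2$, a coefficient of $0.7$ is quantized to $0$, while $3.4$ is quantized to $3$ and de-quantized to $3.5 = (2\cdot 3+1)\delta/2$, matching the interval mapping described above.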
\begin{problem}
Write functions \li{quantize} and \li{dequantize} based on the discussion above.
In both cases, the inputs should be a list of wavelet coefficients in
standard form, the $\delta$ parameter, and the $t$ parameter with a default
value of 2. The functions should return the quantized (respectively,
de-quantized) list of wavelet coefficients.

For the Lena image, calculate the complete set of wavelet coefficients, then
quantize and de-quantize the coefficients, and reconstruct the image.
Do this for a few different values of $\delta$ (with $t=2$), and observe how
the image is distorted. Try using different wavelets. Keep in mind that the
specially designed wavelets used in image compression are not available
in PyWavelets, so the distortion will be greater than what would be tolerable
in real-life settings.
\end{problem}
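
As a rough sketch of the workflow in the problem above (the array names, the choice of \li{wavedec2}/\li{waverec2}, and the synthetic stand-in image are our own, not the lab's reference solution), the round trip through quantization might look like this with NumPy and PyWavelets:
\begin{lstlisting}
import numpy as np
import pywt

def quantize(a, delta, t=2.0):
    # Vectorized uniform null-zone quantizer applied to one coefficient array.
    return np.where(a >= 0,
                    np.ceil(a / delta - t / 2.),
                    np.floor(a / delta + t / 2.))

def dequantize(q, delta, t=2.0):
    # Vectorized inverse; q > 0, q < 0 and q == 0 are handled separately.
    pos = (q - 0.5 + t / 2.) * delta
    neg = (q + 0.5 - t / 2.) * delta
    return np.where(q > 0, pos, np.where(q < 0, neg, 0.0))

image = np.random.rand(256, 256)   # stand-in for a grayscale image such as Lena
delta = 0.5

coeffs = pywt.wavedec2(image, 'haar', level=4)
quantized = [quantize(coeffs[0], delta)] + \
    [tuple(quantize(d, delta) for d in details) for details in coeffs[1:]]
restored = [dequantize(quantized[0], delta)] + \
    [tuple(dequantize(d, delta) for d in details) for details in quantized[1:]]
reconstructed = pywt.waverec2(restored, 'haar')  # compare against the original image
\end{lstlisting}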
| {
"alphanum_fraction": 0.6196226798,
"avg_line_length": 64.973388203,
"ext": "tex",
"hexsha": "32bd67e25d6913041d9e8da63f6a86f64e6a17f2",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-11-05T14:45:03.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-11-05T14:45:03.000Z",
"max_forks_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173",
"max_forks_repo_licenses": [
"CC-BY-3.0"
],
"max_forks_repo_name": "joshualy/numerical_computing",
"max_forks_repo_path": "Vol2A/Wavelets/Archived/Haar.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-3.0"
],
"max_issues_repo_name": "joshualy/numerical_computing",
"max_issues_repo_path": "Vol2A/Wavelets/Archived/Haar.tex",
"max_line_length": 826,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173",
"max_stars_repo_licenses": [
"CC-BY-3.0"
],
"max_stars_repo_name": "joshualy/numerical_computing",
"max_stars_repo_path": "Vol2A/Wavelets/Archived/Haar.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 72720,
"size": 236828
} |
\chapter{Instructions}\label{chap:zielbestimmung}
\pagenumbering{arabic}
Dear user,\\
we are delighted that you are using our Framework for Neo4j. For everything to run correctly, you need Neo4j, Maven and Java; the framework runs on every operating system that these technologies support.
If you have any questions, please do not hesitate to contact us via our \glqq Git Repository\grqq{}.
\section{Desktop Version} \label{sec:desktop}
If you are using the desktop version, you have to take the following steps. For older versions see chapter \ref{chap:olderVersion}.
\section{Required Software}\label{sec:neededsoftwareNew}
In order to use our Framework you need:
\begin{itemize}
\item The Neo4J-Desktop Client\footnote{\url{https://neo4j.com/download/}}; it has to be installed, otherwise you can't run the framework.
\item The latest version of Java\footnote{\url{https://java.com/de/download/}} has to be installed and the required root settings have to be set.
\item Your device also needs Maven\footnote{\url{https://maven.apache.org/download.cgi}}, which is needed to build the project and import all dependencies.
\end{itemize}
For a better programming experience we recommend using an IDE. Throughout these instructions we use IntelliJ IDEA\footnote{\url{https://www.jetbrains.com/idea/download}}. If you need detailed guidance, download IntelliJ IDEA and install the Framework step by step (see the step by step guide in section \ref{sec:stepByStepManualNew}).
\newpage
\section {First use guidance for experienced users} \label{sec:beforeFirstUseNew}
\begin{itemize}
\item The required software should be installed on your computer so you can ensure that everything works correctly.
\item Start the Neo4j-Desktop Software and create a new database. Instructions are on the Neo4J website.
\item Now clone our \glqq Git Repository\footnote{\url{https://github.com/vonunwerth/Seacucumber.git}} \grqq{}. If you don't know how to clone the \glqq Git Repository\grqq{} please read the Step by Step Manual.
\item Furthermore you need to open the project with Maven. You can check your individual IDE for further instructions.
\item In the package \glqq matcher\grqq{} you can create a new Java class where you implement your algorithm.
\item The necessary changes to this class, so that \glqq Neo4j - procedures\grqq{} can work properly, are explained in section \ref{sec:startProgrammingNew} \glqq Start coding\grqq{}.
\end{itemize}
\section{Step by step guide for beginners}\label{sec:stepByStepManualNew}
This step by step guide assumes you are using the software \glqq IntelliJ IDEA\grqq{}.
\begin{itemize}
\item The required software should be installed on your computer.
\item Start the Neo4j-Desktop Software and create a new project (the tab with the book). A new database can be created with a click on New Graph on the right side. Neo4j will guide you through the creation process. Please use database version 3.3.3\footnote{as of 08.03.2018}.
\item After downloading and installing, the software \glqq IntelliJ IDEA\grqq{} opens and is ready for use. The following window appears. \newpage
\begin{center}
\includegraphics[width=4.2cm]{common/IntelliJstart.png}\setlength{\unitlength}{1mm}
\end{center}
\item Click on \glqq Check out from Version Control - GitHub\grqq{} and a window with the name \glqq Clone Repository\grqq{} will open. \\
\textbf{Git Repository URL:} https://github.com/vonunwerth/Seacucumber.git \\
\textbf{Parent Directory:} The location where you want to save the project. \\
\textbf{Directory Name:} For example Neo4j (You can choose whatever you want.)\\
\\
Click on Clone and IntelliJ will start to download the repository.
\item You will get a notification in the right corner asking you to add the project as a Maven project. If the notification doesn't appear, look at the troubleshooting chapter \ref{chap:trouble}.
\item All folders should be visible now. Open the src/main/java folder.
\item In the package \glqq matcher\grqq{} you can create a new Java file for your algorithm.
\end{itemize}
\section{Start coding}\label{sec:startProgrammingNew}
After checking out the repository you can start coding. Ensure you use Java JDK 1.8 or higher to compile the project. Otherwise some features of the given classes cannot be used.
\begin{itemize}
\item Create a new class for your new matching algorithm in the matcher package and let your new class extend the abstract class \textit{Matcher}.
\item Implement the \textit{matchingAlgorithm()} method and import \textit{org.neo4j.graphdb} in order to use the nodes of Neo4j. Also import java.util.List instead of the suggested Scala list. Finally, import java.util.Map to get a result map of your keys and lists of nodes.
\lstset{language=Java}
\begin{lstlisting}
@Override
public Map<Integer, List<Node>> matchingAlgorithm() {
//Your own matcher
}
\end{lstlisting}
\item You have to write a constructor in your class. The constructor's name has to be the same as the classname. The following structure can be used:
\lstset{language=Java}
\begin{lstlisting}
public [AlgorithmsName] (org.neo4j.graphdb.GraphDatabaseService db, graph.Graph graph) {
this.db = db; //Describes your database
this.graph = graph; //Describes your graph
}
\end{lstlisting}
%\begin{center}
% \includegraphics[width=10.5cm]{common/MatcherConstrutor.png}\setlength{\unitlength}{1mm}
%\end{center}
A lot of prewritten methods can be found in the abstract \glqq Matcher\grqq{} class. Check our javadoc for more information.
\item Now you have to create a new procedure to access your matcher from your Neo4j database. Go to the \textit{procedure.GraphProcedures} class and, for example, copy one of the example procedures for Dual Simulation or Failures.
\begin{lstlisting}
@Procedure(value = "graph.[NAME]", mode = Mode.READ)
@Description("[DESCRIPTION]")
@SuppressWarnings("unused")
public Stream<NodeResult> [NAME](@Name("query") String query) {
Graph graph = prepareQuery(db, query);
[MATCHER] matcher = new [MATCHER](db, graph);
Set<Node> simulated = matcher.simulate();
return simulated.stream().map(NodeResult::new);
}
\end{lstlisting}
Replace [NAME] with the name of your new procedure and [MATCHER] with the name of your new matcher class.
\end{itemize}
\section{After coding}\label{sec:afterProgrammingNew}
After you write your algorithm in your new class in the Java package matcher, you have to create the \glqq jar\grqq{} for the database.
\begin{itemize}
\item Before you create the jar, you can test the code. Please use the \glqq ProcedureTest\grqq{} class in the test package. For testing, start the main method in the class \glqq ProcedureTest.java\grqq{}.
\item Open the Maven tab in \glqq IntelliJ\grqq{} and expand the entry \glqq Neo4JMatchAlgFramework\grqq{}. The next folder you need is the \glqq Lifecycle\grqq{} folder; there you click \glqq clean\grqq{} and then \glqq package\grqq{}.
\item After it finishes, open your file explorer and go to the folder you cloned the \glqq Git Repository\grqq{} into. There is a new folder named target. Open this folder and copy the \glqq original-Neo4JMatchAlgFramework-1.0.jar\grqq{}. \\
(The other jar is only for testing and is not important. It won't work.)
\item Please go to your Neo4j database from the first steps (see the step by step manual \ref{sec:stepByStepManualNew}).
You must paste the \glqq Neo4JMatchAlgFramework-1.0.jar\grqq{} into the \glqq plugins\grqq{} folder.
\\The \glqq plugins\grqq{} folder can be found by clicking on the \glqq Manage\grqq{} button in the database tab. Then click on Open Folder and you will see the \glqq plugins\grqq{} folder.
\item After doing this you can call your procedure in Neo4j. If you are ready, start the database with the Neo4j software.
\end{itemize}
\section{Work with Neo4j}\label{sec:takeneo4jNew}
To test if your procedure works in Neo4j you can use the following example statements.
\begin{itemize}
\item You want to use your procedures? Then use the CALL statement.\\
For example:
\begin{lstlisting}
CALL graph.exampleQuery(
"MATCH (m:Movie)<-[:DIRECTED]-(p:Person)
RETURN m,p")
\end{lstlisting}
\item You want to run another query on the result of your procedure? Then combine the CALL statement with the YIELD statement. \\
For example:
\begin{lstlisting}
CALL graph.exampleQuery(
"MATCH (m:Movie)<-[:DIRECTED]-(p:Person)
RETURN m,p")
YIELD node MATCH (m:Movie)<-[:DIRECTED]-(p:Person)
RETURN m,p
\end{lstlisting}
\end{itemize}
At this point you know everything we know - have fun and develop new algorithms.
\newpage
\chapter{For older Versions} \label{chap:olderVersion}
If you use an older version (Neo4j Community Edition 3.2.6), the steps are not the same as for the Neo4j-Desktop version.
\section{Required Software}\label{sec:neededsoftware}
In order to use our Framework you need:
\begin{itemize}
\item The Neo4J Client\footnote{\url{https://neo4j.com/download/other-releases/\#releases}} is essential; it has to be installed, otherwise you can't run the framework.
\item The latest version of Java\footnote{\url{https://java.com/de/download/}} has to be installed and the required root settings have to be set.
\item Your device also needs Maven\footnote{\url{https://maven.apache.org/download.cgi}}, which is needed to build the project and import all dependencies.
\end{itemize}
For a better programming experience we recommend using an IDE. Throughout these instructions we use IntelliJ IDEA\footnote{\url{https://www.jetbrains.com/idea/download}}. If you need detailed guidance, download IntelliJ IDEA and install the Framework step by step (see the step by step guide in section \ref{sec:stepByStepManual}).
\newpage
\section {First use guidance for experienced users} \label{sec:beforeFirstUse}
\begin{itemize}
\item The required software should be installed on your computer so you can ensure that everything works correctly.
\item Please start the Neo4j software and create a new database.
\item The plugin folder has to be created in the following directory (neo4j/default.graphdb/plugins). You need this for your \glqq Neo4j - Procedures\grqq{}.
\item Now you need our Framework. You can clone our \glqq Git Repository\footnote{\url{https://github.com/vonunwerth/Seacucumber.git}} \grqq{}. If you don't know how to clone the \glqq Git Repository\grqq{}, please read the Step by Step Manual.
\item Furthermore you need to open the project with Maven. You can check your individual IDE for further instructions.
\item In the package \glqq matcher\grqq{} you can create a new Java class where you implement your algorithm.
\item The things that need to be changed in this class, so that \glqq Neo4j - procedures\grqq{} can work properly, are explained in section \ref{sec:startProgramming} \glqq Start coding\grqq{}.
\end{itemize}
\section{Step by step guide for beginners}\label{sec:stepByStepManual}
This step by step guide assumes you are using the software \glqq IntelliJ IDEA\grqq{}.
\begin{itemize}
\item The required software should be installed on your computer.
\item Please start the Neo4j software and create a new database. Neo4j will guide you through the creation process. It is important to know where the database is stored. (By default the folder is called neo4j/default.graphdb.) \\(Nice to know for the login: by default the username and the password are both neo4j.)
\item The plugin folder has to be created in the following directory (neo4j/default.graphdb/plugins). You need this for your \glqq Neo4j - Procedures\grqq{}.
\newpage
\item After downloading and installing, the software \glqq IntelliJ IDEA\grqq{} opens and is ready for use. The following window appears. \\
\begin{center}
\includegraphics[width=4.2cm]{common/IntelliJstart.png}\setlength{\unitlength}{1mm}
\end{center}
\item You click on \glqq Check out from Version Control - GitHub\grqq{} and a window with the name \glqq Clone Repository\grqq{} will open. \\
\textbf{Git Repository URL:} https://github.com/vonunwerth/Seacucumber.git \\
\textbf{Parent Directory:} The location where you want to save the project. \\
\textbf{Directory Name:} For example Neo4j (You can choose whatever you want.)\\
\\
Click on Clone and IntelliJ will start to download the repository.
\item You will get a notification in the right corner asking you to add the project as a Maven project. If the notification doesn't appear, look at the troubleshooting chapter \ref{chap:trouble}.
\item All folders should be visible now. Open the src/main/java folder.
\item In the package \glqq matcher\grqq{} you can create a new Java file for your algorithm.
\end{itemize}
\section{Start coding}\label{sec:startProgramming}
After checking out the repository you can start coding. Ensure you use Java JDK 1.8 or higher to compile the project. Otherwise some features of the given classes cannot be used. The steps are the same as for the newer versions; please see section \ref{sec:startProgrammingNew}.
\section{After coding}\label{sec:afterProgramming}
After you write your algorithm in your new class in the Java package matcher, you have to create the \glqq jar\grqq{} for the database.
\begin{itemize}
\item Before you create the jar, you can test the code. Please use the \glqq ProcedureTest\grqq{} class in the test package. For testing, start the main method in the class \glqq ProcedureTest.java\grqq{}.
\item Open the Maven tab in \glqq IntelliJ\grqq{} and expand the entry \glqq Neo4JMatchAlgFramework\grqq{}. The next folder you need is the \glqq Lifecycle\grqq{} folder; there you click \glqq clean\grqq{} and then \glqq package\grqq{}.
\item After it finishes, open your file explorer and go to the folder you cloned the \glqq Git Repository\grqq{} into. There is a new folder named target. Open this folder and copy the \glqq original-Neo4JMatchAlgFramework-1.0.jar\grqq{}. \\
(The other jar is only for testing and is not important. It won't work.)
\item Please go to your Neo4j database from the first steps (see the step by step manual).%felix
You must paste the \glqq Neo4JMatchAlgFramework-1.0.jar\grqq{} into the previously created \glqq plugins\grqq{} folder.
\item After doing this you can call your procedure in Neo4j. If you are ready, start the database with the Neo4j software.
\end{itemize}
\section{Work with Neo4j}\label{sec:takeneo4j}
To test if your procedure works in Neo4j you can use the following example statements.
\begin{itemize}
\item You want to use your procedures? Then use the CALL statement.\\
For example:
\begin{lstlisting}
CALL graph.exampleQuery(
"MATCH (m:Movie)<-[:DIRECTED]-(p:Person)
RETURN m,p")
\end{lstlisting}
\item You want to run another query on the result of your procedure? Then combine the CALL statement with the YIELD statement. \\
For example:
\begin{lstlisting}
CALL graph.exampleQuery(
"MATCH (m:Movie)<-[:DIRECTED]-(p:Person)
RETURN m,p")
YIELD node MATCH (m:Movie)<-[:DIRECTED]-(p:Person)
RETURN m,p
\end{lstlisting}
\end{itemize}
At this point you know everything we know - so have fun and develop good new algorithms.
\newpage
\chapter{Trouble shooting} \label{chap:trouble}
\begin{itemize}
\item
The folder with our \glqq Git Repository\grqq{} is displayed in the \glqq IntelliJ\grqq{} window, but you only see the \glqq readme\grqq{} and other files, not the Java files. \\
If you want to see the Java files, open the Maven tab on the right side. Click on the tab and on the circle in the left corner. You will be asked whether you want to \glqq import Maven Projects\grqq{} - click \glqq yes\grqq{}. The \glqq Maven Dependencies\grqq{} are then downloaded, which takes a while.
\begin{center}
\includegraphics[width=10.3cm]{common/MavenImport.png}\setlength{\unitlength}{1mm}
\end{center}
\end{itemize}
| {
"alphanum_fraction": 0.7676875668,
"avg_line_length": 69.1347826087,
"ext": "tex",
"hexsha": "cbfe52323e1db32a48e2664d558a132acb21e5b8",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2018-08-19T06:26:14.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-03-23T14:46:55.000Z",
"max_forks_repo_head_hexsha": "9da416cf052d93ad5c54f3c9226553d13c814d8b",
"max_forks_repo_licenses": [
"ECL-2.0",
"Apache-2.0"
],
"max_forks_repo_name": "vonunwerth/neo4j",
"max_forks_repo_path": "Instruction/Instructions/01_instructions.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "9da416cf052d93ad5c54f3c9226553d13c814d8b",
"max_issues_repo_issues_event_max_datetime": "2018-03-13T16:38:44.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-02-21T07:43:18.000Z",
"max_issues_repo_licenses": [
"ECL-2.0",
"Apache-2.0"
],
"max_issues_repo_name": "vonunwerth/neo4j",
"max_issues_repo_path": "Instruction/Instructions/01_instructions.tex",
"max_line_length": 351,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "9da416cf052d93ad5c54f3c9226553d13c814d8b",
"max_stars_repo_licenses": [
"ECL-2.0",
"Apache-2.0"
],
"max_stars_repo_name": "vonunwerth/neo4j",
"max_stars_repo_path": "Instruction/Instructions/01_instructions.tex",
"max_stars_repo_stars_event_max_datetime": "2021-01-20T15:14:09.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-23T14:46:48.000Z",
"num_tokens": 4188,
"size": 15901
} |
\section*{Abstract} \label{sec:abstract}
Two-dimensional polynomials of various orders are fitted to the noisy Franke function using linear regression methods. The regression methods are ordinary least squares (OLS) and shrinkage methods such as ridge and lasso regression. The parameters are found either through analytical methods, scikit-learn functions, or by optimizing a cost function using stochastic gradient descent. Next, logistic regression and support vector machines are used to learn a classification model that predicts whether a tumor is benign or malignant, based on the Wisconsin Breast Cancer Dataset. Various statistics are utilized to determine the performance of the regression and classification models. | {
"alphanum_fraction": 0.8352272727,
"avg_line_length": 234.6666666667,
"ext": "tex",
"hexsha": "1fe19150883845f7a23f9c8b6febcc2fa25e8e93",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-11-17T10:51:25.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-11-17T10:51:25.000Z",
"max_forks_repo_head_hexsha": "098363c47c9409d6ffce1d03a968b6f2265c5fcc",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "am-kaiser/CompSci-Project-1",
"max_forks_repo_path": "documentation/report/sections/abstract.tex",
"max_issues_count": 9,
"max_issues_repo_head_hexsha": "098363c47c9409d6ffce1d03a968b6f2265c5fcc",
"max_issues_repo_issues_event_max_datetime": "2021-12-16T19:51:18.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-11-01T08:32:11.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "am-kaiser/CompSci-Project-1",
"max_issues_repo_path": "documentation/report/sections/abstract.tex",
"max_line_length": 662,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "098363c47c9409d6ffce1d03a968b6f2265c5fcc",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "am-kaiser/CompSci-Project-1",
"max_stars_repo_path": "documentation/report/sections/abstract.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 124,
"size": 704
} |
\documentclass[11pt]{article}
\usepackage{listings}
\newcommand{\numpy}{{\tt numpy}} % tt font for numpy
\topmargin -.5in
\textheight 9in
\oddsidemargin -.25in
\evensidemargin -.25in
\textwidth 7in
\begin{document}
% ========== Edit your name here
\author{Francesco Penasa}
\title{Algorithms for Bioinformatics - lab1}
\maketitle
\medskip
Some bash cmds
\begin{lstlisting}
locate empty # find a file by name
file empty.txt # recognize the type of data contained in a file
less <filename> # visualize the content
head -n <filename> # show the first n lines
cmd1 | cmd2 | ... | cmdN # chain commands: the output of cmd(N-1) becomes the input of cmdN
ls -l > list_files.txt # redirect output
ls >> list_files.txt # append the output
grep <what> <where> # search for occurences
grep -A3 -B9 <what> <where> # show 3 lines after and 9 lines before each match
grep -C3 <what> <where> # show 3 lines of context before and after each match
grep -i <what> <where> # case insensitive
grep -v <what> <where> # everything that does not match
grep -c <what> <where> # count lines that match
grep -w <what> <where> # match whole word
grep -n <what> <where> # display the line number of each match
whatis ls ; man ls
\end{lstlisting}
\section{Bash exercise} % (fold)
\label{sec:bash_exercise}
\begin{lstlisting}
# Lets go to your home folder
cd /home/Francesco
# Create a folder named “tmp”
mkdir tmp
# Create a file that contains the manual of the ls command
man ls > manual.txt
# Put this file into the “tmp” folder
mv manual.txt tmp/manual.txt
# Count how many occurrences of “ls” are present (case sensitive, whole word)
grep -c -w ls tmp/manual.txt
# Print the first 7 lines that matched (case sensitive, not whole word)
grep ls tmp/manual.txt | head -n 7
# Save the output to a file named “10.txt”
grep ls tmp/manual.txt | head -n 7 > 10.txt
# Print the first 3 lines of the last 5 lines that matched
grep ls tmp/manual.txt | tail -n 5 | head -n 3
# Concatenate this output to the file “10.txt”
grep ls tmp/manual.txt | tail -n 5 | head -n 3 >> 10.txt
# Now remove the “tmp” folder
rm -r tmp
\end{lstlisting}
% section bash_exercise (end)
\end{document}
| {
"alphanum_fraction": 0.7078132224,
"avg_line_length": 26.0602409639,
"ext": "tex",
"hexsha": "9b01c89de13b9a30e8c3a132647bbc52fff9723b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "980c6a3e6f01a7cb98a62bd39eda34912731c9eb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "FrancescoPenasa/Master-ComputerScience-UNITN",
"max_forks_repo_path": "4th_year/2nd/AB/1920/lab1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "980c6a3e6f01a7cb98a62bd39eda34912731c9eb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "FrancescoPenasa/Master-ComputerScience-UNITN",
"max_issues_repo_path": "4th_year/2nd/AB/1920/lab1.tex",
"max_line_length": 86,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "980c6a3e6f01a7cb98a62bd39eda34912731c9eb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "FrancescoPenasa/Master-ComputerScience-UNITN",
"max_stars_repo_path": "4th_year/2nd/AB/1920/lab1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 658,
"size": 2163
} |
% !TEX root = ../thesis.tex
\graphicspath{{./vega-lite/figures/}}
\chapter{A Grammar of Interactive Graphics}
\label{sec:vega-lite}
\begin{figure}[h!]
\vspace{-40pt}
\centering
\includegraphics[width=0.9\columnwidth]{teaser}
% \vspace{-30pt}
\end{figure}
% Reactive Vega demonstrates that the advantages of declarative specification
% extend to interaction design as well\,---\,decoupling specification from
% execution frees designers to iterate more quickly, facilitates retargeting
% across platforms, and allows for language-level optimizations.
Reactive Vega demonstrates that declarative interaction design is an expressive
and performant alternative to imperative event handling callbacks. However, as
its abstractions are relatively low-level, it is best described as an ``assembly
language'' for interactive visualization most suited for \emph{explanatory}
visualization. In contrast, for \emph{exploratory} visualization, higher-level
grammars such as ggplot2~\cite{wickham:layered}, and grammar-based systems such
as Tableau (n\'ee Polaris~\cite{stolte:polaris}), are typically preferred as
they favor conciseness over expressiveness. Analysts rapidly author partial
specifications of visualizations; the grammar applies default values to resolve
ambiguities, and synthesizes low-level details to produce visualizations.
High-level languages can also enable search and inference over the space of
visualizations. For example, Wongsuphasawat et~al. introduced Vega-Lite to power
the Voyager visualization browser~\cite{voyager}. With a smaller surface area
than Vega, Vega-Lite makes systematic enumeration and ranking of data
transformations and visual encodings more tractable.
However, existing high-level languages provide limited support for
interactivity. An analyst can, at most, enable a predefined set of common
techniques (linked selections, panning \& zooming, etc.) or parameterize their
visualization with dynamic query widgets~\cite{shiny}. For custom, direct
manipulation interaction they must turn to imperative event callbacks.
In this chapter, we extend Vega-Lite with a \emph{multi-view} grammar of
graphics and a \emph{grammar of interaction} to enable concise, high-level
specification of interactive data visualizations.
\input{vega-lite/encoding}
\input{vega-lite/selection}
\input{vega-lite/compiler}
\input{vega-lite/examples}
\input{vega-lite/conclusion} | {
"alphanum_fraction": 0.8083160083,
"avg_line_length": 49.0816326531,
"ext": "tex",
"hexsha": "88a5e123eee599a5aa0a6a5809239ff8e6a79e57",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c79821f947743769bcdd130bb96a488372dfc403",
"max_forks_repo_licenses": [
"CC-BY-3.0"
],
"max_forks_repo_name": "arvind/thesis",
"max_forks_repo_path": "vega-lite/index.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c79821f947743769bcdd130bb96a488372dfc403",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-3.0"
],
"max_issues_repo_name": "arvind/thesis",
"max_issues_repo_path": "vega-lite/index.tex",
"max_line_length": 80,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "c79821f947743769bcdd130bb96a488372dfc403",
"max_stars_repo_licenses": [
"CC-BY-3.0"
],
"max_stars_repo_name": "arvind/thesis",
"max_stars_repo_path": "vega-lite/index.tex",
"max_stars_repo_stars_event_max_datetime": "2019-02-22T00:18:21.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-08-26T08:44:57.000Z",
"num_tokens": 561,
"size": 2405
} |
\documentclass{article}%
\usepackage[T1]{fontenc}%
\usepackage[utf8]{inputenc}%
\usepackage{lmodern}%
\usepackage{textcomp}%
\usepackage{lastpage}%
\input{common_symbols_and_format.tex}%
\usepackage{tocloft}%
\renewcommand{\cfttoctitlefont}{\Large\bfseries}%
%
\begin{document}%
\normalsize%
\logo%
\rulename{Static Level Regression}%
\tblofcontents%
\ruledescription{Regresses the one-day price changes against the level of the signal for the specified number of days, using coefficients estimated from the start of the data.}%
\howtotrade
{Given default parameter values, if the asset drift is 0.001 and the error is 0.02 (2\% daily volatility), this rule will take a $0.001 / (0.02)^2 = 2.5$ or 250\% position (leveraged).}%
{
\begin{figure}[H]
\begin{multicols}{2}
\centering
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{\graphdir{market.png}}
\caption{Market series data}
\label{fig:01}
\end{subfigure}
\par
\vspace{5mm}
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{\graphdir{research.png}}
\caption{Research series data}
\label{fig:02}
\end{subfigure}
\par
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{\graphdir{pa(StaticLevelRegression).png}}
\caption{ Suggested volume to buy or sell}
\label{fig:03}
\end{subfigure}
\par
\vspace{5mm}
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{\graphdir{pr(StaticLevelRegression).png}}
\caption{Portfolio return}
\label{fig:04}
\end{subfigure}
\end{multicols}
\caption{Graphical depiction of the Static Level Regression algorithm. 20 days of trading data are visualized in the graphs. (\ref{fig:01}) A line chart showing changes in the market price over multiple trading days. (\ref{fig:02}) A chart displaying the research series data. (\ref{fig:03}) The suggested position: positive values indicate buying the security at x\% of the portfolio, while negative values mean shorting the security by x\%. (\ref{fig:04}) A chart showing the portfolio return when using the Static Level Regression as the trading rule.}
\label{fig:cps_graph}
\end{figure}
}
\ruleparameters{Kelly fraction}{1.0}{Amplitude weighting. 1.0 is maximum growth if regression is exact. <1.0 scales down positions taken.}{$\kellyfraction$}{Regression length}{50}{This is the number of days used to estimate the regression coefficients.}{$\lookbacklength$}%
\stoptable%
\section{Equation}
The following equations describe the static level regression rule:
\begin{equation}
\label{eq:predictedPrice}
\regressionprice_\currenttime= \amplitudecoefficient\research_\currenttime + \constantc
\end{equation}
Using equation (\ref{eq:predictedPrice}), it is possible to calculate the predicted price $\regressionprice_{\currenttime}$ at time $\currenttime$.
Here $\research_{\currenttime}$ is the observed value of the research series, while the coefficients $\amplitudecoefficient$ and $\constantc$ are fitted from the data. To calculate the resulting fractional portfolio allocation $\position_{\currenttime}$ we scale the prediction by the Kelly fraction, which maximizes long-run growth when the regression is exact.
\begin{equation}\label{eq:porfolio}
\position_\currenttime = \kellyfraction \frac{\regressionprice_\currenttime}{ \rmserror_{\regressionprice}^{2}}
\end{equation}
\hspace{200mm}
where\\
$\research$ is the value of the research series,\\
$\regressionprice_\currenttime$ is the predicted price at time $\currenttime$,\\
$\rmserror$ is the standard error between the predicted and observed prices,\\
$\kellyfraction$ is the Kelly fraction, and\\
$\position$ is the resulting fractional portfolio allocation.
\hspace{200mm}
Additionally, the standard error $\rmserror_{\regressionprice}$ is calculated and included in equation (\ref{eq:porfolio}) to normalize the predicted price.
\hspace{200mm}
\\
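
As a minimal illustration of equations (\ref{eq:predictedPrice}) and (\ref{eq:porfolio}), the following Python sketch (our own variable names; a plain least-squares fit rather than the InferTrade implementation) estimates the coefficients on the first $\lookbacklength$ days and sizes the position accordingly.
\begin{verbatim}
import numpy as np

def static_level_regression_position(prices, research,
                                     kelly_fraction=1.0, lookback=50):
    # One-day fractional price changes, aligned with the signal level
    # observed at the start of each one-day interval.
    price_changes = np.diff(prices) / prices[:-1]
    x = research[:lookback]
    y = price_changes[:lookback]
    a, c = np.polyfit(x, y, 1)         # static coefficients from the start of the data
    residuals = y - (a * x + c)
    error_var = residuals.std() ** 2   # squared standard error of the fit
    prediction = a * research[-1] + c  # predicted change for the latest signal level
    return kelly_fraction * prediction / error_var

# Worked example from the description above: a predicted drift of 0.001 with a
# standard error of 0.02 gives 0.001 / 0.02**2 = 2.5, i.e. a 250% position.
\end{verbatim}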
\keyterms%
\furtherlinks%
\end{document}
| {
"alphanum_fraction": 0.7529289248,
"avg_line_length": 41.3010752688,
"ext": "tex",
"hexsha": "fa79ddc9d54ade5ab499e797e4021fc40f360606",
"lang": "TeX",
"max_forks_count": 28,
"max_forks_repo_forks_event_max_datetime": "2021-11-10T18:21:14.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-03-26T14:26:04.000Z",
"max_forks_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "parthgajjar4/infertrade",
"max_forks_repo_path": "docs/strategies/tex/StaticLevelRegression.tex",
"max_issues_count": 137,
"max_issues_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232",
"max_issues_repo_issues_event_max_datetime": "2022-01-28T19:36:30.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-25T10:59:46.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "parthgajjar4/infertrade",
"max_issues_repo_path": "docs/strategies/tex/StaticLevelRegression.tex",
"max_line_length": 518,
"max_stars_count": 34,
"max_stars_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "parthgajjar4/infertrade",
"max_stars_repo_path": "docs/strategies/tex/StaticLevelRegression.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-06T23:03:01.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-03-25T13:32:54.000Z",
"num_tokens": 1047,
"size": 3841
} |
\documentclass[12pt , a4paper]{article}
\usepackage[left=2cm, right=2cm, top=2.5cm, bottom=2.5cm]{geometry}
\usepackage{fancyhdr , lipsum}
\usepackage{mathptmx}
\usepackage{anyfontsize}
\usepackage{t1enc}
\usepackage{csquotes}
\usepackage{blindtext}
\usepackage{enumitem}
\usepackage{xcolor}
\usepackage[linktocpage]{hyperref}
\usepackage{graphicx}
\usepackage{float}
\usepackage{subcaption}
\usepackage[section]{placeins}
\usepackage[T1]{fontenc}
\usepackage{fontspec}
\usepackage{tcolorbox}
\usepackage[english]{babel}
\usepackage[export]{adjustbox}
\newcommand{\thedate}{\today}
%\deflatinfont\titrEn[Scale=1.5]{Linux Libertine}
\title{\normalfont{Neural Networks}}
\author{Hossein Dehghanipour}
\date{\today}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MAIN %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% First Page %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\thispagestyle{empty}
\begin{center}
%picture
\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth]{shirazuLogo.png}
\caption*{}
\label{f-0-0}
\end{figure}
{
\centering
\fontfamily{titr}
\fontsize{16pt}{16pt}
\selectfont
In The Name Of God
}
{
\centering
\fontfamily{titr}
\fontsize{14pt}{14pt}
\selectfont
Sample Title
}
{
\centering
\fontfamily{titr}
\fontsize{12pt}{12pt}
\selectfont
Author : Hossein Dehghanipour
}
{
\centering
\fontfamily{titr}
\fontsize{12pt}{12pt}
\selectfont
Teacher : Dr . Samiei
}
{
\centering
\fontfamily{titr}
\fontsize{12pt}{12pt}
\selectfont
Course : Technical Presentation
}
{
\centering
\fontfamily{titr}
\fontsize{12pt}{12pt}
\selectfont
Shiraz University - \thedate
}
\end{center}
\setmainfont{Times New Roman}
\cleardoublepage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\pagenumbering{roman}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TABLE OF CONTENT %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\tableofcontents
%\thispagestyle{empty}
\cleardoublepage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Introduction %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section { Introduction }
\cleardoublepage
\pagenumbering{arabic}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% SECTION 1 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section { Human Brain } \label{sec:1}
\subsection{A}
\subsubsection{B}
%picture 3
\begin{figure}[h]
\centering
%\setlength{\fboxrule}{5pt}
%\fbox{\includegraphics[width=0.75\textwidth,frame]{Neuron.jpg}}
\includegraphics[width=0.75\textwidth,frame]{Neuron.jpg}
\caption*{Figure 2-3, A Closer Look at a Neuron}
\label{f-1-4}
\end{figure}
\LaTeX
\end{document} | {
"alphanum_fraction": 0.6064073227,
"avg_line_length": 20.484375,
"ext": "tex",
"hexsha": "ff1343f0b40158e3bcb05a2dda0ce849da58d184",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "aca7ef6e98df01dd0780216db24f05cbd0e8d7f5",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hosseindehghanipour1998/Latex-Tutorial",
"max_forks_repo_path": "Samples/Sample 0 - Raw Sample/mine.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "aca7ef6e98df01dd0780216db24f05cbd0e8d7f5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hosseindehghanipour1998/Latex-Tutorial",
"max_issues_repo_path": "Samples/Sample 0 - Raw Sample/mine.tex",
"max_line_length": 97,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "aca7ef6e98df01dd0780216db24f05cbd0e8d7f5",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "hosseindehghanipour1998/Latex-Tutorial",
"max_stars_repo_path": "Samples/Sample 0 - Raw Sample/mine.tex",
"max_stars_repo_stars_event_max_datetime": "2020-07-10T10:20:25.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-05-01T19:38:22.000Z",
"num_tokens": 769,
"size": 2622
} |
%\documentclass[10pt,conference]{IEEEtran}
\documentclass{article}% default font size: 10pt
\usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{algorithm} \usepackage{algorithmic} \usepackage{url}
\usepackage{subfigure}
\newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{definition}{Definition} \newtheorem{assumption}{Assumption} \newtheorem{example}{Example}
\newtheorem{observation}[theorem]{Observation}
%\newtheorem{theorem}{Theorem} \newtheorem{definition}{Definition} \newtheorem{remark}{Remark} \newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{fact}{Fact} \newtheorem{invariant}{Invariant}
\usepackage{color}
\newcounter{todocounter}
\newcommand{\todo}[1]{\stepcounter{todocounter}\textcolor{red}{to-do\#\arabic{todocounter}: #1}}
\newcommand{\newadded}[1]{{\color{magenta} #1}}
\newcommand{\question}[1]{{\color{blue} #1}}
\usepackage{graphicx}
\graphicspath{{./Figures/}}
\begin{document}
\title{Multi-Label Text Classification}
\date{Jul 18, 2014}
\author{Huangxin Wang, Zhonghua Xi\thanks{Department of Computer Science, George Mason University. Fairfax, VA 22030. Email: \textsf{[email protected], [email protected]}}}
\maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Problem Statement}
\section{Algorithm}
\subsection{ReWeight} We use a variant of the TF.IDF model to reweight the word vector of each document. The TF.IDF variant is defined as:
\begin{displaymath}
w_{t,d} = log(tf_{t,d}+1)*idf_t
\end{displaymath}
where $w_{t,d}$ is the reweighted word frequency, $tf_{t,d}$ is the original frequency of term $t$ in document $d$, and $idf_t$ is the inverse document frequency, which is defined as follows:
\begin{displaymath}
idf_t = \frac{|D|}{df_t}
\end{displaymath}
where $|D|$ is the number of documents in the training set, and $df_t$ is the number of documents that contain term $t$.
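
As a small sketch of this reweighting (our own names, assuming a dense document-term count matrix), the computation is:
\begin{verbatim}
import numpy as np

def reweight(tf):
    # tf: (n_docs, n_terms) matrix of raw term frequencies.
    n_docs = tf.shape[0]
    df = np.count_nonzero(tf, axis=0)   # df_t: documents containing term t
    idf = n_docs / np.maximum(df, 1)    # |D| / df_t, guarding against unseen terms
    return np.log(tf + 1.0) * idf       # w_{t,d} = log(tf_{t,d} + 1) * idf_t
\end{verbatim}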
\subsection{Build Classifiers}
We build a classifier for each class; the classifier is calculated as the average of the feature vectors of the instances in that class.
\subsection{Similarity Calculation}
We calculate the similarity of a test instance with each class classifier using cosine similarity.
\begin{displaymath}
cossim(d_i,d_j) = \frac{d_i*d_j}{|d_i|*|d_j|} = \frac{\sum^N_{k=1} w_{k,i}*w_{k,j}}{\sqrt{\sum^N_{k=1} w^2_{k,i}}*\sqrt{\sum^N_{k=1}w^2_{k,j}}}
\end{displaymath}
where $d_i$ is the feature vector of document $i$ and $d_j$ is the feature vector of class $j$. We call $cossim(d_i,d_j)$ the score that document $i$ achieves in class $j$.
\subsection{Multi-Label Classify}
\paragraph{First Label} The first label assigned to document $i$ is the class in which it achieves the highest score.
\paragraph{Second Label} Similarly to the choice of label 1, for the second label we choose the class with the second highest score for document $i$. However, we accept the second label only when $score(label_1)/score(label_2) \leq \alpha$, which means that document $i$ has almost the same degree of similarity to label 2 as to label 1.
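
The following Python sketch summarizes the scoring and the two-label rule (the names and the particular value of $\alpha$ are ours):
\begin{verbatim}
import numpy as np

def class_centroids(X, y, n_classes):
    # X: (n_docs, n_terms) reweighted vectors; y: class index per document.
    return np.vstack([X[y == c].mean(axis=0) for c in range(n_classes)])

def cosine_scores(x, centroids):
    norms = np.linalg.norm(centroids, axis=1) * np.linalg.norm(x)
    return centroids @ x / np.maximum(norms, 1e-12)

def predict_labels(x, centroids, alpha=1.2):
    scores = cosine_scores(x, centroids)
    order = np.argsort(scores)
    first, second = order[-1], order[-2]
    labels = [first]
    # Accept the second label only if score(label_1)/score(label_2) <= alpha.
    if scores[second] > 0 and scores[first] / scores[second] <= alpha:
        labels.append(second)
    return labels
\end{verbatim}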
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
| {
"alphanum_fraction": 0.6983050847,
"avg_line_length": 43.2666666667,
"ext": "tex",
"hexsha": "8678401ff8a1c34a0991ce44ca737e8b1dee2a2d",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2017-06-25T06:29:06.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-02-19T15:34:30.000Z",
"max_forks_repo_head_hexsha": "a117b498952c0163a0761e4faea6d7ace981f6a8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "xizhonghua/mlss2014",
"max_forks_repo_path": "documents/Multi-label-text-classification.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a117b498952c0163a0761e4faea6d7ace981f6a8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "xizhonghua/mlss2014",
"max_issues_repo_path": "documents/Multi-label-text-classification.tex",
"max_line_length": 331,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "a117b498952c0163a0761e4faea6d7ace981f6a8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "xizhonghua/mlss2014",
"max_stars_repo_path": "documents/Multi-label-text-classification.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-28T04:09:46.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-06-26T16:55:57.000Z",
"num_tokens": 902,
"size": 3245
} |
\documentclass{amsart}
\newcommand {\rL} {\mathrm {L}}
\newcommand {\rM} {\mathrm {M}}
\newcommand {\rS} {\mathrm {S}}
\title {Notes on colour theory}
\begin{document}
\maketitle
The main references are \cite{briggs-dim-colour} and the articles on light on Wikipedia.
\section{Auspicious beginning of notes on colour theory}
\label{sec:ausp-beginn-notes}
What is colour?
It is a sensation arising from light hitting you.
What is light?
Lol, as if we knew. Here's an attempt. It is electromagnetic wave and also a bunch of photon particles.
\section{Speculations based on shaky foundation to the best of our knowledge}
\label{sec:spec-best-our}
As an electromagnetic wave, light has properties such as frequencies $f$ and wavelengths $\lambda$. By a basic property of waves, $f\lambda = c$ where $c$ is the speed of light in the medium. Different wavelengths cause the cones in human eyes to produce varying responses. (The cones are in a fixed medium. It is not clear to us if the wavelengths of light in various charts in the references are tied to this particular medium or not.) Each parcel of light carries energy. The energy of a photon with frequency $f$ is equal to $hf$ where $h$ is the Planck constant.
Now consider the scenario of looking at an object. Light has bounced around or got absorbed and re-emitted etc and finally reaches your eye. Specular reflection generally does not change the frequency distribution of light, while diffuse reflection causes changes\cite [Sec.~2.1] {briggs-dim-colour}. We suppose one can always design devious material to make this false.
Assume you are a generic human. Then you are tri-chromatic with cones of types L, M and S. For a given light, let $n (f)$ be the number of photons as a function of the frequency $f$. Since the energy carried by the photons of a given frequency is proportional to their number, $n$ is similar to the spectral power distribution of the light and we will call it the spectral distribution of the given light. For each frequency, each type of cone produces a response of a certain strength. The more photons, the stronger the response. Let the response function be $r_i (f,n)$ where $f$ is the frequency and $n$ the number of photons with frequency $f$ hitting one cone of type $i$ per unit time. Here $i$ is L or M or S. It seems that $r_i$ is largely linear with respect to $n$ at least when $n$ is not too small or too big. There is always some threshold you need to go beyond to get any response and when you are hit by too much energy, you die in general. Then we get the total response $R_i$ to the given light for each type of cone by integrating over all frequencies. This is actually a finite sum as we have only finitely many photons hitting you per unit time. Note that $n$ is a function of $f$ when the light is given. With $R_i$'s produced, they are transformed by opponent cells into three other outputs, one being achromatic which is just $R_\rL+R_\rM+R_\rS$, the other two being chromatic opponencies, which are given by $R_\rL-R_\rM$ and $R_\rS- (R_\rL+ R_\rM)$. This transformation is a bijective function. The exact functions may be off, but the next sentence holds regardless. The former one measures brightness; the latter two quantities give a quantitative measure of how biased the spectral distribution is. It should be pointed out that a lot of information is lost in this process. Many different frequency distributions give rise to the same responses and they are called metamers and are indistinguishable to your visual system. Let us call the set of frequency distributions that are metameric to each other a metameric packet.
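As a quick sanity check of the bijectivity claim (the matrix notation here is ours), the opponent transformation is the linear map
\[
\begin{pmatrix} R_\rL+R_\rM+R_\rS \\ R_\rL-R_\rM \\ R_\rS-(R_\rL+R_\rM) \end{pmatrix}
=
\begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 0 \\ -1 & -1 & 1 \end{pmatrix}
\begin{pmatrix} R_\rL \\ R_\rM \\ R_\rS \end{pmatrix},
\]
and the matrix has determinant $-4\neq 0$, so the map is indeed invertible.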
A naive generalisation to $k$-chromatic creatures is as follows. First they get $R_i$ for $i=1,\ldots,k$ and then these are transformed into brightness and $k-1$ opponencies, which quantify hue and colourfulness. Thus the total sensation/colour space is a domain in a Euclidean space of dimension $k$. This domain should be compact and convex.
Due to the brain's precognitive process (lightness constancy) of compensating for differences in illumination, it is natural to normalise brightness and colourfulness to account for the illumination condition. One way to quantify the illumination condition is to use the brightness of a reference object, which should reflect a balanced distribution of frequencies. Thus we define lightness (resp. chroma) as the ratio of brightness (resp. colourfulness) to the brightness of the reference object under the same illumination condition. As lightness and chroma are mostly independent of the illumination condition (within a reasonable range), they are perceived as intrinsic to the object. There are other constancies at play, such as colour constancy, which makes us perceive different colours at points when our cones produce the same responses. This gives rise to the two different notions of colour, perceived colour and psychophysical colour.
We now try to describe more precisely the normalised sensation/colour space. When we hold lightness constant, the section of the normalised sensation/colour space is of dimension $k-1$. We may perceive this to be a compact convex $(k-1)$-dimensional domain that is diffeomorphic to a $(k-1)$-ball. On the boundary of the domain, we sense the most bias in spectral distribution and recognise the points on the boundary as the most intense colours. The boundary of the domain is labelled by hues. Toward the centre, we sense a balanced spectral distribution and recognise it as neutral like white, grey and black. The distance to the centre is called chroma.
There is a related concept called saturation. It is defined to be the ratio of the colourfulness of the object to the brightness of the object. Note that chroma is the colourfulness of the object relative to the brightness of the reference object, and that lightness is the brightness of the object relative to the brightness of the reference object. Thus we see that saturation is chroma divided by lightness. Thus each circular cone (consisting of lines of a given slope in each 2D slice passing through the lightness axis) in the normalised sensation/colour space has the same saturation. As the cones open wider, saturation increases.
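In symbols (our notation): writing $B$ and $C$ for the brightness and colourfulness of the object and $B_0$ for the brightness of the reference object under the same illumination,
\[
\text{lightness}=\frac{B}{B_0},\qquad \text{chroma}=\frac{C}{B_0},\qquad
\text{saturation}=\frac{C}{B}=\frac{C/B_0}{B/B_0}=\frac{\text{chroma}}{\text{lightness}}.
\]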
\section{Application}
\label{sec:application}
Thus painting is trying to reproduce light to cause the same responses of the cones as caused by the reflected light of an object. Note that the responses are those before being processed by the precognitive lightness/colour constancy process. Hence picking the right metameric packet is difficult. Mixing paints is also difficult due to our visual system. One has only an estimate of the spectral properties of paints when attempting to mix the right metameric packet.
For digital painting, the canvas is the screen which emits light. Its brightness should be calibrated to match that of the environment so that our visual system estimates the correct illumination condition in which to place the digital painting. Painting digitally still is trying to reproduce light to cause the same responses of the cones as caused by reflected light of an object.
\bibliography{Mybib}\bibliographystyle{plain}
\end{document}
| {
"alphanum_fraction": 0.7913902611,
"avg_line_length": 144.5918367347,
"ext": "tex",
"hexsha": "a895e7e8e73a14e323292e78b8981feb796727ac",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1c9208cc4e93754282fe3ad2dfa25ac283f36929",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "heiegg/arto",
"max_forks_repo_path": "notes/colour.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1c9208cc4e93754282fe3ad2dfa25ac283f36929",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "heiegg/arto",
"max_issues_repo_path": "notes/colour.tex",
"max_line_length": 2034,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1c9208cc4e93754282fe3ad2dfa25ac283f36929",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "heiegg/arto",
"max_stars_repo_path": "notes/colour.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1579,
"size": 7085
} |
%!TEX root = thesis.tex
\chapter{\code{FASMA} and machine learning}
\label{cha:fasmaML}
With the increasing amount of astrophysical data, it is important to perform a rapid and trustworthy
analysis. This is one of the strengths of \code{FASMA} when dealing with high quality spectroscopic
data for the determination of stellar atmospheric parameters. Here a classic method for analysing these
large amounts of data has been modernised with a new minimisation technique that utilises the physical
knowledge about the system. However smart this may sound, it can still be improved. The time consuming
part of \code{FASMA} is the calls to \code{MOOG}. In the end a couple of minutes are spent on the
minimisation on a modern computer.
This led to a side project: exploring the use of machine learning to determine stellar atmospheric
parameters. Machine learning (ML) is not a new topic in computer science, but it is a tool that is
steadily becoming more and more popular in today's science. The use of ML here is meant as
a proof of concept, and something that can be improved upon in the future.
The idea was to remove the expensive calls to \code{MOOG} completely. Two things are required in
order to do so in the approach presented here:
\begin{enumerate}
\item Stellar atmospheric parameters of a large sample of stars with a big span in the parameter
space
\item The EW measurements of as many absorption lines as possible (in this case \ion{Fe}{I} and
\ion{Fe}{II})
\end{enumerate}
Additionally, it is required here that the above two quantities are obtained in a homogeneous way.
Luckily such a sample was already analysed during this thesis as a test of \code{FASMA} (see
\sref{sec:fasma_test}), where a sample of 582 stars was analysed, all of which meet the
criteria above.
This data set of measurements of EWs and the parameters was organised and prepared in a big table
as follows:
\begin{itemize}
\item Each row contains both the measurements of the EWs (first N columns) and the parameters
(last four columns). The parameters are $T_\mathrm{eff}$, $\log g$, $[\ion{Fe}/\ion{H}]$,
and $\xi_\mathrm{micro}$
\item The columns excluding the last four are labelled with the wavelength of the absorption line
\item All wavelength columns which contained at least one missing measurement of the EW for any
of the 582 stars were removed
\end{itemize}
There are 58 wavelength columns left after removing the wavelength columns with missing measurements;
before the removal there were 299 wavelength columns. In this case the \code{scikit-learn}
package\footnote{\url{http://scikit-learn.org/}} from the Python ecosystem was used to train on the
data set. This package also includes a tool to split the data into a training part and a testing
part, which is particularly useful when evaluating the accuracy of the model used. Here the
training set consists of 2/3 of the data, and the rest is used for testing. The splitting is done
randomly, giving slightly different results each time the script is executed.
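To make the procedure concrete, a minimal sketch of this workflow is shown below. The file name, the parameter column names, and the use of \code{joblib} for saving the model are assumptions made for this illustration rather than a description of the actual script.
\begin{verbatim}
import pandas as pd
from joblib import dump
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Table with 58 EW columns (labelled by wavelength) + 4 parameter columns.
data = pd.read_csv('ew_parameters.csv')              # assumed file name
X = data.iloc[:, :-4].values                         # EW measurements
y = data[['teff', 'logg', 'feh', 'vmicro']].values   # assumed column names

# 2/3 of the data for training, 1/3 for testing; the split is random.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3)

model = LinearRegression().fit(X_train, y_train)

# Save the trained model so the training only has to be done once.
dump(model, 'fasma_ml_model.joblib')
\end{verbatim}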
The training itself is quite fast (less than \SI{10}{s}). However, the real power comes when the
trained model is saved to disk, from where it can later be loaded again. In this way the training
only has to be done once. Using the model to obtain the parameters takes much less than \SI{1}{s},
which makes it many orders of magnitude faster\footnote{Improvements of the order of millions have
been obtained here.} than a more traditional approach such as \code{FASMA}.
When testing the trained model, the remaining 1/3 of the data set is used to derive parameters. Those
derived parameters are then compared to the actual parameters with the mean absolute error, which gives
an idea of the accuracy. In \fref{fig:ml} the script was run 1000 times (the splitting was done randomly
in each run), and the error for each parameter is shown as a histogram. The mean errors for the
parameters are roughly $T_\mathrm{eff}:\SI{47}{K}$, $\log g:\SI{0.10}{dex}$,
$[\ion{Fe}/\ion{H}]:\SI{0.038}{dex}$, and $\xi_\mathrm{micro}:\SI{0.12}{km/s}$.
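The repeated evaluation behind \fref{fig:ml} can be sketched as follows (again only an illustration, reusing the arrays from the snippet above):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

errors = []
for _ in range(1000):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3)
    y_pred = LinearRegression().fit(X_train, y_train).predict(X_test)
    # one mean absolute error per parameter (Teff, logg, [Fe/H], vmicro)
    errors.append(mean_absolute_error(y_test, y_pred,
                                      multioutput='raw_values'))

print(np.mean(errors, axis=0))  # mean error for each parameter
\end{verbatim}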
\begin{figure}[htpb!]
\centering
\includegraphics[width=1.0\linewidth]{figures/ML.pdf}
\caption{Mean absolute error on each parameter after 1000 runs.}
\label{fig:ml}
\end{figure}
There exist many different algorithms within \code{scikit-learn} for training the final model. In the
test here a simple \code{LinearRegression} was used. Other algorithms were tested as well, such as
\code{Ridge} and \code{Lasso}, with very similar results. The main difference between the
algorithms is found in the details of how the minimisation is done. This is described in
great detail in the online documentation.
| {
"alphanum_fraction": 0.7719447396,
"avg_line_length": 63.5810810811,
"ext": "tex",
"hexsha": "2d6953e463737217112d02c32d6cd5a45fc1552a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "da18d41e48de5d34c8281ffd9e850dfd4fe37824",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "DanielAndreasen/Thesis",
"max_forks_repo_path": "fasmaML.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "da18d41e48de5d34c8281ffd9e850dfd4fe37824",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "DanielAndreasen/Thesis",
"max_issues_repo_path": "fasmaML.tex",
"max_line_length": 100,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "da18d41e48de5d34c8281ffd9e850dfd4fe37824",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "DanielAndreasen/Thesis",
"max_stars_repo_path": "fasmaML.tex",
"max_stars_repo_stars_event_max_datetime": "2018-05-09T13:46:52.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-04-25T08:31:52.000Z",
"num_tokens": 1145,
"size": 4705
} |
\clearpage
\phantomsection
\addcontentsline{toc}{subsection}{copyinto}
\label{subr:copyinto}
\subsection*{copyinto: copy data from a source to a destination}
\subsubsection*{Calling convention}
\begin{description}
\item[\registerop{rd}] void
\item[\registerop{arg0}] Address to copy from
\item[\registerop{arg1}] Length of data
\item[\registerop{arg2}] Destination address
\end{description}
\subsubsection*{Description}
The \subroutine{copyinto} subroutine copies data from a source pointer
into a destination pointer bounded by a size given in the second argument.
\subsubsection*{Failure modes}
This subroutine has no run-time failure modes beyond its constraints.
| {
"alphanum_fraction": 0.7949479941,
"avg_line_length": 28.0416666667,
"ext": "tex",
"hexsha": "7f5e95e6a9098b2900d358a47ad5f1be70757727",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2020-03-17T10:53:19.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-07-10T16:11:37.000Z",
"max_forks_repo_head_hexsha": "ffa4129d0d6caa1e719502f416a082498c54be5d",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "trombonehero/opendtrace-documentation",
"max_forks_repo_path": "specification/subr/copyinto.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "ffa4129d0d6caa1e719502f416a082498c54be5d",
"max_issues_repo_issues_event_max_datetime": "2018-11-27T16:42:55.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-07-29T17:34:24.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "trombonehero/opendtrace-documentation",
"max_issues_repo_path": "specification/subr/copyinto.tex",
"max_line_length": 74,
"max_stars_count": 20,
"max_stars_repo_head_hexsha": "ffa4129d0d6caa1e719502f416a082498c54be5d",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "trombonehero/opendtrace-documentation",
"max_stars_repo_path": "specification/subr/copyinto.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-14T17:13:54.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-04-12T21:24:01.000Z",
"num_tokens": 170,
"size": 673
} |
% !TeX root = ../main.tex
\subsection{Generated C++ code}
\label{sec:gencplusplus}
While the previous section is primarily focused on the code generation from an abstract point of view, this section focuses on the generated \texttt{C++} code.
Since \texttt{C++} is the output language of the HCL compiler, all the language concepts of HCL have to be represented in valid \texttt{C++} code.
\textbf{Primitive types}
The primitive types of HCL include numbers, text and booleans.
Booleans are straightforward, as HCL's boolean behaves exactly like the \texttt{C++} boolean type.
This means that the regular \texttt{C++} boolean type can be used as the output type of HCL booleans.
As HCL aims for simplicity, only one number type is introduced.
This type needs to be able to store both integers and non-integers.
The simple solution is to convert all HCL numbers into \texttt{C++}'s double type.
The solution with doubles is not ideal, but it will fulfill the requirements of the number type in HCL.
\texttt{C++} does have an implementation of a string type in its standard library.
However, the Arduino does not include this implementation, but rather uses its own.
As the string type shares a lot of functionality with that of lists, the representation of strings in the generated \texttt{C++} code is based on the ConstList.
\textbf{Lists}
For lists, \texttt{C++} includes a list type in its standard library called vector.
This vector type is, like the string type, not available in the libraries provided by the Arduino.
Furthermore, since the vector implementation is mutable and the lists in HCL are immutable, a simple implementation of a constant list is implemented instead.
One thing to be aware of when using lists, is that the size of the list cannot always be determined at compile time.
This means that lists will usually do heap allocation.
With heap allocation comes the need to do proper memory handling, e.g.\ cleaning up allocated memory in order to avoid memory leaks.
Memory handling is done by utilizing the \texttt{C++} concept of RAII\footnote{Resource acquisition is initialization} along with \texttt{C++} smart pointers.
This means that a pointer is implemented, which keeps a count of the references to the memory pointed to.
If the counter goes to zero, this means that there are no more references to memory, and the memory may be deallocated safely.
It should be noted that the current HCL compiler uses std::shared\_ptr, which is not available in the Arduino standard library.
The implementation should, however, be easy to port to the Arduino when needed by implementing a similar smart pointer ourselves.
\textbf{Tuples}
Tuples are represented as type defined structures in \texttt{C++}.
HCL has a theoretically infinite number of tuples, so obviously not all tuples can be generated.
As mentioned in the previous section, the type generator will traverse the AST in order to find all the tuples used within the program to be compiled.
For each defined tuple a few helper methods are also generated.
This is the \textit{\textbf{toText}} function, along with accessing methods for each element in the tuple.
\textbf{Functions}
\label{sec:cplusplusfunctions}
Functions are one of the most interesting and advanced parts of the code generation.
This is because closures, per their nature, are advanced concepts.
They have to store information about the content of their body along with scope information.
They also have to potentially be able to expand the life time of scoped variables, if the variables are referenced from within the body of the closure.
Fortunately, the \texttt{C++} standard library includes a std::function data type, which can store the closures that are part of the \texttt{C++} language.
However, once again, this is not a part of the Arduino standard library.
Because of time constraints, the current HCL compiler is only implemented using the std::function data type, and not a custom lambda type that is compatible with the Arduino.
This means that the generated code is not actually compatible with the Arduino at this point in time.
There is one limitation to the std::function combined with the closures in \texttt{C++}.
This is the problem known as dangling references.
The problem occurs when a closure is returned, and executed after a referenced variable has gone out of scope.
The reference now points to some unallocated garbage memory, which leaves undefined behavior in the program.
Although this feature would be nice to have in HCL, it also has to be deemed as undefined behavior for HCL when the target language is \texttt{C++} and the closures are implemented using the method mentioned above.
\textbf{Generics}
As for generic types, which is used in functions and lists, \texttt{C++} has a language concept known as templates.
Templates are \texttt{C++}'s version of generics, and work by determining all the template variants used and then creating functions according to these variants.
Templates are not as powerful as modern generics, for example C\#'s generics, but they are enough for HCL, as HCL's generics are also not that powerful.
One lacking feature of both, is the ability to create an instance of type T.
\textbf{Naming of outputted types and variables}
Aside from storing the types appropriately, there are a few other issues when generating code for \texttt{C++}.
An interesting thing to mention is the translation of names from HCL to \texttt{C++}.
In HCL a lot of special characters are allowed for identifiers.
This includes "+" and "-", both of which are reserved keywords in \texttt{C++}.
The chosen solution for this problem is hashing all the names in HCL and prefixing the hash with, for instance, "IDT" for identifiers or "TPL" for tuples.
This solution leaves HCL vulnerable to hash code collisions.
HCL uses a 32-bit hash code, along with the prefixes.
Therefore hashing is a valid solution for now.
However, there may exist more sophisticated ways of solving this issue, which could be a potential improvement for future versions of the HCL compiler.
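As an illustration of this naming scheme (a sketch of the idea only, not the actual compiler code, and the concrete 32-bit hash function is our own choice), the mangling could look as follows:
\begin{verbatim}
import zlib

def mangle(name, prefix="IDT"):
    # Map an HCL identifier such as "+" to a C++-safe name, e.g. "IDT_8a93...".
    digest = zlib.crc32(name.encode("utf-8")) & 0xFFFFFFFF  # 32-bit hash
    return "{}_{:08x}".format(prefix, digest)

print(mangle("+"))                    # identifier
print(mangle("point", prefix="TPL"))  # tuple
\end{verbatim}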
| {
"alphanum_fraction": 0.7903650115,
"avg_line_length": 69.908045977,
"ext": "tex",
"hexsha": "4a3f1c8d0691cbe0021b5b7fe0b7b7f762e82dbb",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2018-07-04T13:55:53.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-07-04T13:55:53.000Z",
"max_forks_repo_head_hexsha": "b6adbc46cc7347aacd45dce5fddffd6fee2d37ac",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "C0DK/P3-HCL",
"max_forks_repo_path": "report/4.Solution/GeneratedCppCode.tex",
"max_issues_count": 10,
"max_issues_repo_head_hexsha": "b6adbc46cc7347aacd45dce5fddffd6fee2d37ac",
"max_issues_repo_issues_event_max_datetime": "2018-05-13T17:37:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-02-17T14:30:17.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "C0DK/P3-HCL",
"max_issues_repo_path": "report/4.Solution/GeneratedCppCode.tex",
"max_line_length": 214,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "b6adbc46cc7347aacd45dce5fddffd6fee2d37ac",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "C0DK/P3-HCL",
"max_stars_repo_path": "report/4.Solution/GeneratedCppCode.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-09T11:33:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-02-08T12:34:50.000Z",
"num_tokens": 1325,
"size": 6082
} |
%Deep neural networks (DNNs) have been demonstrated effective for approximating complex and high dimensional functions. In the data-driven inverse modeling, we use DNNs to substitute unknown physical relations, such as constitutive relations, in a physical system described by partial differential equations (PDEs). The coupled system of DNNs and PDEs enables describing complex physical relations while satisfying the physics to the largest extent. However, training the DNNs embedded in PDEs is challenging because input-output pairs of DNNs may not be available, and the physical system may be highly nonlinear, leading to an implicit numerical scheme. We propose an approach, physics constrained learning, to train the DNNs from sparse observations data that are not necessarily input-output pairs of DNNs while enforcing the PDE constraints numerically. Particularly, we present an efficient automatic differentiation based technique that differentiates through implicit PDE solvers. We demonstrate the effectiveness of our method on various problems in solid mechanics and fluid dynamics. Our PCL method enables learning a neural-network-based physical relation from any observations that are interlinked with DNNs through PDEs.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Beamer Presentation
% LaTeX Template
% Version 1.0 (10/11/12)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND THEMES
%----------------------------------------------------------------------------------------
\documentclass[usenames,dvipsnames]{beamer}
\usepackage{animate}
\usepackage{float}
\usepackage{bm}
\usepackage{mathtools}
\usepackage{extarrows}
\newcommand{\ChoL}{\mathsf{L}}
\newcommand{\bx}{\mathbf{x}}
\newcommand{\ii}{\mathrm{i}}
\newcommand{\bxi}{\bm{\xi}}
\newcommand{\bmu}{\bm{\mu}}
\newcommand{\bb}{\mathbf{b}}
\newcommand{\bA}{\mathbf{A}}
\newcommand{\bJ}{\mathbf{J}}
\newcommand{\bB}{\mathbf{B}}
\newcommand{\bM}{\mathbf{M}}
\newcommand{\by}{\mathbf{y}}
\newcommand{\bw}{\mathbf{w}}
\newcommand{\bX}{\mathbf{X}}
\newcommand{\bY}{\mathbf{Y}}
\newcommand{\bs}{\mathbf{s}}
\newcommand{\sign}{\mathrm{sign}}
\newcommand{\bt}[0]{\bm{\theta}}
\newcommand{\bc}{\mathbf{c}}
\newcommand{\bzero}{\mathbf{0}}
\renewcommand{\bf}{\mathbf{f}}
\newcommand{\bu}{\mathbf{u}}
\newcommand{\bv}[0]{\mathbf{v}}
\mode<presentation> {
% The Beamer class comes with a number of default slide themes
% which change the colors and layouts of slides. Below this is a list
% of all the themes, uncomment each in turn to see what they look like.
%\usetheme{default}
%\usetheme{AnnArbor}
%\usetheme{Antibes}
%\usetheme{Bergen}
%\usetheme{Berkeley}
%\usetheme{Berlin}
%\usetheme{Boadilla}
%\usetheme{CambridgeUS}
%\usetheme{Copenhagen}
%\usetheme{Darmstadt}
%\usetheme{Dresden}
%\usetheme{Frankfurt}
%\usetheme{Goettingen}
%\usetheme{Hannover}
%\usetheme{Ilmenau}
%\usetheme{JuanLesPins}
%\usetheme{Luebeck}
\usetheme{Madrid}
%\usetheme{Malmoe}
%\usetheme{Marburg}
%\usetheme{Montpellier}
%\usetheme{PaloAlto}
%\usetheme{Pittsburgh}
%\usetheme{Rochester}
%\usetheme{Singapore}
%\usetheme{Szeged}
%\usetheme{Warsaw}
% As well as themes, the Beamer class has a number of color themes
% for any slide theme. Uncomment each of these in turn to see how it
% changes the colors of your current slide theme.
%\usecolortheme{albatross}
\usecolortheme{beaver}
%\usecolortheme{beetle}
%\usecolortheme{crane}
%\usecolortheme{dolphin}
%\usecolortheme{dove}
%\usecolortheme{fly}
%\usecolortheme{lily}
%\usecolortheme{orchid}
%\usecolortheme{rose}
%\usecolortheme{seagull}
%\usecolortheme{seahorse}
%\usecolortheme{whale}
%\usecolortheme{wolverine}
%\setbeamertemplate{footline} % To remove the footer line in all slides uncomment this line
%\setbeamertemplate{footline}[page number] % To replace the footer line in all slides with a simple slide count uncomment this line
%\setbeamertemplate{navigation symbols}{} % To remove the navigation symbols from the bottom of all slides uncomment this line
}
\usepackage{booktabs}
\usepackage{makecell}
\usepackage{soul}
\newcommand{\red}[1]{\textcolor{red}{#1}}
%
%\usepackage{graphicx} % Allows including images
%\usepackage{booktabs} % Allows the use of \toprule, \midrule and \bottomrule in tables
%
%
%\usepackage{amsthm}
%
%\usepackage{todonotes}
%\usepackage{floatrow}
%
%\usepackage{pgfplots,algorithmic,algorithm}
\usepackage{algorithmicx}
\usepackage{algpseudocode}
%\usepackage[toc,page]{appendix}
%\usepackage{float}
%\usepackage{booktabs}
%\usepackage{bm}
%
%\theoremstyle{definition}
%
\newcommand{\RR}[0]{\mathbb{R}}
%
%\newcommand{\bx}{\mathbf{x}}
%\newcommand{\ii}{\mathrm{i}}
%\newcommand{\bxi}{\bm{\xi}}
%\newcommand{\bmu}{\bm{\mu}}
%\newcommand{\bb}{\mathbf{b}}
%\newcommand{\bA}{\mathbf{A}}
%\newcommand{\bJ}{\mathbf{J}}
%\newcommand{\bB}{\mathbf{B}}
%\newcommand{\bM}{\mathbf{M}}
%\newcommand{\bF}{\mathbf{F}}
%
%\newcommand{\by}{\mathbf{y}}
%\newcommand{\bw}{\mathbf{w}}
%\newcommand{\bn}{\mathbf{n}}
%
%\newcommand{\bX}{\mathbf{X}}
%\newcommand{\bY}{\mathbf{Y}}
%\newcommand{\bs}{\mathbf{s}}
%\newcommand{\sign}{\mathrm{sign}}
%\newcommand{\bt}[0]{\bm{\theta}}
%\newcommand{\bc}{\mathbf{c}}
%\newcommand{\bzero}{\mathbf{0}}
%\renewcommand{\bf}{\mathbf{f}}
%\newcommand{\bu}{\mathbf{u}}
%\newcommand{\bv}[0]{\mathbf{v}}
\AtBeginSection[]
{
\begin{frame}
\frametitle{Outline}
\tableofcontents[currentsection]
\end{frame}
}
%----------------------------------------------------------------------------------------
% TITLE PAGE
%----------------------------------------------------------------------------------------
\usepackage{bm}
\newcommand*{\TakeFourierOrnament}[1]{{%
\fontencoding{U}\fontfamily{futs}\selectfont\char#1}}
\newcommand*{\danger}{\TakeFourierOrnament{66}}
\title[Physics Constrained Learning]{Subsurface Inverse Modeling with Physics Based Machine Learning} % The short title appears at the bottom of every slide, the full title is only on the title page
\author[Kailai Xu (\texttt{[email protected]}), et al.]{Kailai Xu and Dongzhuo Li\\ Jerry M. Harris, Eric Darve}% Your name
%\institute[] % Your institution as it will appear on the bottom of every slide, may be shorthand to save space
%{
%%ICME, Stanford University \\ % Your institution for the title page
%%\medskip
%%\textit{[email protected]}\quad \textit{[email protected]} % Your email address
%}
\date{}% Date, can be changed to a custom date
% Mathematics of PDEs
\newcommand\blfootnote[1]{%
\begingroup
\renewcommand\thefootnote{}\footnote{#1}%
\addtocounter{footnote}{-1}%
\endgroup
}
\begin{document}
\usebackgroundtemplate{%
\begin{picture}(0,250)
\centering
{{\includegraphics[width=1.0\paperwidth]{../background}}}
\end{picture}
}
%\usebackgroundtemplate{%
% \includegraphics[width=\paperwidth,height=\paperheight]{figures/back}}
\begin{frame}
\titlepage % Print the title page as the first slide
%dfa
\end{frame}
\usebackgroundtemplate{}
\section{Inverse Modeling}
\begin{frame}
\frametitle{Inverse Modeling}
\begin{itemize}
\item \textbf{Inverse modeling} identifies a certain set of parameters or functions with which the outputs of the forward analysis matches the desired result or measurement.
\item Many real life engineering problems can be formulated as inverse modeling problems: shape optimization for improving the performance of structures, optimal control of fluid dynamic systems, etc.
\end{itemize}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.8\textwidth]{../inverse2}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Inverse Modeling}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{../inverse3}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Inverse Modeling for Subsurface Properties}
There are many forms of subsurface inverse modeling problems.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{../inverse_types}
\end{figure}
\begin{center}
\textbf{\underline{The Central Challenge}}
\end{center}
\begin{center}
\textcolor{red}{\textbf{Can we have a general approach for solving these inverse problems?}}
\end{center}
\end{frame}
\begin{frame}
\frametitle{Parameter Inverse Problem}
We can formulate inverse modeling as a PDE-constrained optimization problem
\begin{equation*}
\min_{\theta} L_h(u_h) \quad \mathrm{s.t.}\; F_h(\theta, u_h) = 0
\end{equation*}
\begin{itemize}
\item The \textcolor{red}{loss function} $L_h$ measures the discrepancy between the prediction $u_h$ and the observation $u_{\mathrm{obs}}$, e.g., $L_h(u_h) = \|u_h - u_{\mathrm{obs}}\|_2^2$.
\item $\theta$ is the \textcolor{red}{model parameter} to be calibrated.
\item The \textcolor{red}{physics constraints} $F_h(\theta, u_h)=0$ are described by a system of partial differential equations. Solving for $u_h$ may require solving linear systems or applying an iterative algorithm such as the Newton-Raphson method.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Function Inverse Problem}
\begin{equation*}
\min_{\textcolor{red}{f}} L_h(u_h) \quad \mathrm{s.t.}\; F_h(\textcolor{red}{f}, u_h) = 0
\end{equation*}
What if the unknown is a \textcolor{red}{function} instead of a set of parameters?
\begin{itemize}
\item Koopman operator in dynamical systems.
\item Constitutive relations in solid mechanics.
\item Turbulent closure relations in fluid mechanics.
\item Neural-network-based physical properties.
\item ...
\end{itemize}
The candidate solution space is \textcolor{red}{infinite dimensional}.
\begin{figure}[hbt]
\includegraphics[width=0.75\textwidth]{../nnfwi.png}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Physics Based Machine Learning}
$$\min_{\theta} L_h(u_h) \quad \mathrm{s.t.}\;F_h(\textcolor{red}{NN_\theta}, u_h) = 0$$
\vspace{-0.5cm}
\begin{itemize}
\item Deep neural networks exhibit capability of approximating high dimensional and complicated functions.
\item \textbf{Physics based machine learning}: \textcolor{red}{the unknown function is approximated by a deep neural network, and the physical constraints are enforced by numerical schemes}.
\item \textcolor{red}{Satisfy the physics to the largest extent}.
\end{itemize}
\begin{figure}[hbt]
\includegraphics[width=0.75\textwidth]{../physics_based_machine_learning.png}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Gradient Based Optimization}
\begin{equation}\label{equ:opt}
\min_{\theta} L_h(u_h) \quad \mathrm{s.t.}\; F_h(\theta, u_h) = 0
\end{equation}
\begin{itemize}
\item We can now apply a gradient-based optimization method to (\ref{equ:opt}).
\item The key is to \textcolor{red}{calculate the gradient descent direction} $g^k$
$$\theta^{k+1} \gets \theta^k - \alpha g^k$$
\end{itemize}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.6\textwidth]{../im.pdf}
\end{figure}
\end{frame}
\section{Automatic Differentiation}
\begin{frame}
\frametitle{Automatic Differentiation}
The fact that bridges the \textcolor{red}{technical} gap between machine learning and inverse modeling:
\begin{itemize}
\item Deep learning (and many other machine learning techniques) and numerical schemes share the same computational model: composition of individual operators.
\end{itemize}
\begin{minipage}[b]{0.4\textwidth}
\begin{center}
\textcolor{red}{Mathematical Fact}
\
Back-propagation
$||$
Reverse-mode
Automatic Differentiation
$||$
Discrete
Adjoint-State Method
\end{center}
\end{minipage}~
\begin{minipage}[b]{0.6\textwidth}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.8\textwidth]{../compare-NN-PDE.png}
\end{figure}
\end{minipage}
\end{frame}
\begin{frame}
\frametitle{Forward Mode vs. Reverse Mode}
\begin{itemize}
\item Reverse mode automatic differentiation evaluates gradients in the \textcolor{red}{reverse order} of forward computation.
\item Reverse mode automatic differentiation is a more efficient way to compute gradients of a many-to-one mapping $J(\alpha_1, \alpha_2, \alpha_3,\alpha_4)$ $\Rightarrow$ suitable for minimizing a loss (misfit) function.
\end{itemize}
\begin{figure}[hbt]
\includegraphics[width=0.8\textwidth]{../fdrd}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Computational Graph for Numerical Schemes}
\begin{itemize}
\item To leverage automatic differentiation for inverse modeling, we need to express the numerical schemes in the ``AD language'': computational graph.
\item No matter how complicated a numerical scheme is, it can be decomposed into a collection of operators that are interlinked via state variable dependencies.
\end{itemize}
\begin{figure}[hbt]
\includegraphics[width=1.0\textwidth]{../cgnum}
\end{figure}
\end{frame}
%\begin{frame}
% \frametitle{Code Example}
% \begin{itemize}
% \item Find $b$ such that $u(0.5)=1.0$ and
% $$-bu''(x)+u(x) = 8 + 4x - 4x^2, x\in[0,1], u(0)=u(1)=0$$
% \end{itemize}
% \begin{figure}[hbt]
% \includegraphics[width=0.8\textwidth]{../code.png}
%\end{figure}
%\end{frame}
\section{Physics Constrained Learning}
\begin{frame}
\frametitle{Challenges in AD}
\begin{minipage}[t]{0.49\textwidth}
\vspace{-3cm}
\begin{itemize}
\item Most AD frameworks only deal with \textcolor{red}{explicit operators}, i.e., the functions with analytical derivatives that are easy to implement.
\item Many scientific computing algorithms are \textcolor{red}{iterative} or \textcolor{red}{implicit} in nature.
\end{itemize}
\end{minipage}~
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=1.0\textwidth]{../sim.png}
\end{minipage}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[]
\begin{tabular}{@{}lll@{}}
\toprule
Linear/Nonlinear & Explicit/Implicit & Expression \\ \midrule
Linear & Explicit & $y=Ax$ \\
Nonlinear & Explicit & $y = F(x)$ \\
\textbf{Linear} & \textbf{Implicit} & $Ay = x$ \\
\textbf{Nonlinear} & \textbf{Implicit} & $F(x,y) = 0$ \\ \bottomrule
\end{tabular}
\end{table}
\end{frame}
\begin{frame}
\frametitle{Implicit Operators in Subsurface Modeling}
\begin{itemize}
\item For reasons such as nonlinearity and stability, implicit operators (schemes) are almost everywhere in subsurface modeling...
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{../nonlinear.PNG}
\end{figure}
\item The ultimate solution: \textcolor{red}{design ``differentiable'' implicit operators}.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Example}
\begin{itemize}
\item Consider a function $f:x\rightarrow y$, which is implicitly defined by
$$F(x,y) = x^3 - (y^3+y) = 0$$
If we do not use the cubic formula for finding the roots, the forward computation consists of iterative algorithms, such as Newton's method or the bisection method.
\end{itemize}
\begin{minipage}[t]{0.48\textwidth}
\centering
\begin{algorithmic}
\State $y^0 \gets 0$
\State $k \gets 0$
\While {$|F(x, y^k)|>\epsilon$}
\State $\delta^k \gets F(x, y^k)/F'_y(x,y^k)$
\State $y^{k+1}\gets y^k - \delta^k$
\State $k \gets k+1$
\EndWhile
\State \textbf{Return} $y^k$
\end{algorithmic}
\end{minipage}~
\begin{minipage}[t]{0.48\textwidth}
\centering
\begin{algorithmic}
\State $a \gets -M$, $b\gets M$, $m\gets 0$
\While {$|F(x, m)|>\epsilon$}
\State $m \gets \frac{a+b}{2}$
\If{$F(x, m)>0$}
\State $a\gets m$
\Else
\State $b\gets m$
\EndIf
\EndWhile
\State \textbf{Return} $m$
\end{algorithmic}
\end{minipage}
\end{frame}
\begin{frame}
\frametitle{Example}
\begin{itemize}
% \item A simple approach is to save part or all intermediate steps, and ``back-propagate''. This approach is expensive in both computation and memory\footnote{Ablin, Pierre, Gabriel Peyré, and Thomas Moreau. ``Super-efficiency of automatic differentiation for functions defined as a minimum.''}.
% \item Nevertheless, the simple approach works in some scenarios where accuracy or cost is not an issue, e.g., automatic differetiation of soft-DTW and Sinkhorn distance.
\item An efficient way is to apply the \textcolor{red}{implicit function theorem}. For our example, $F(x,y)=x^3-(y^3+y)=0$, treat $y$ as a function of $x$ and take the derivative on both sides
$$3x^2 - 3y(x)^2y'(x)-y'(x)=0\Rightarrow y'(x) = \frac{3x^2}{3y(x)^2+1}$$
The above gradient is \textcolor{red}{exact}.
\end{itemize}
\begin{center}
\textbf{Can we apply the same idea to inverse modeling?}
\end{center}
\end{frame}
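\begin{frame}[fragile]
\frametitle{Example: A Minimal Code Sketch}
The sketch below (our added illustration, not part of the original example) solves $F(x,y)=x^3-(y^3+y)=0$ with Newton's method and compares the implicit-function-theorem gradient with a finite difference:
{\scriptsize
\begin{verbatim}
def solve_y(x, tol=1e-12):
    # Newton's method for F(x, y) = x**3 - (y**3 + y) = 0
    y = 0.0
    while abs(x**3 - (y**3 + y)) > tol:
        y -= (x**3 - (y**3 + y)) / (-(3*y**2 + 1))
    return y

def dy_dx(x):
    # implicit function theorem: y'(x) = 3 x^2 / (3 y(x)^2 + 1)
    y = solve_y(x)
    return 3*x**2 / (3*y**2 + 1)

x, h = 1.3, 1e-6
fd = (solve_y(x + h) - solve_y(x - h)) / (2*h)  # finite difference check
print(dy_dx(x), fd)                             # the two values agree
\end{verbatim}
}
\end{frame}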
\begin{frame}
\frametitle{Physics Constrained Learning}
$${\small \min_{\theta}\; L_h(u_h) \quad \mathrm{s.t.}\;\; F_h(\theta, u_h) = 0}$$
\begin{itemize}
\item Assume that we solve for $u_h=G_h(\theta)$ with $F_h(\theta, u_h)=0$, and then
$${\small\tilde L_h(\theta) = L_h(G_h(\theta))}$$
\item Applying the \textcolor{red}{implicit function theorem}
{ \scriptsize
\begin{equation*}
\frac{{\partial {F_h(\theta, u_h)}}}{{\partial \theta }} + {\frac{{\partial {F_h(\theta, u_h)}}}{{\partial {u_h}}}}
\textcolor{red}{\frac{\partial G_h(\theta)}{\partial \theta}}
= 0 \Rightarrow
\textcolor{red}{\frac{\partial G_h(\theta)}{\partial \theta}} = -\Big( \frac{{\partial {F_h(\theta, u_h)}}}{{\partial {u_h}}} \Big)^{ - 1} \frac{{\partial {F_h(\theta, u_h)}}}{{\partial \theta }}
\end{equation*}
}
\item Finally we have
{\scriptsize
\begin{equation*}
\boxed{\frac{{\partial {{\tilde L}_h}(\theta )}}{{\partial \theta }}
= \frac{\partial {{ L}_h}(u_h )}{\partial u_h}\frac{\partial G_h(\theta)}{\partial \theta}=
- \textcolor{red}{ \frac{{\partial {L_h}({u_h})}}{{\partial {u_h}}} } \;
\textcolor{blue}{ \Big( {\frac{{\partial {F_h(\theta, u_h)}}}{{\partial {u_h}}}\Big|_{u_h = {G_h}(\theta )}} \Big)^{ - 1} } \;
\textcolor{ForestGreen}{ \frac{{\partial {F_h(\theta, u_h)}}}{{\partial \theta }}\Big|_{u_h = {G_h}(\theta )} }
}
\end{equation*}
}
\end{itemize}
{\tiny Kailai Xu and Eric Darve, \textit{Physics Constrained Learning for Data-driven Inverse Modeling from Sparse Observations}}
\end{frame}
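\begin{frame}[fragile]
\frametitle{Physics Constrained Learning: A Toy Sketch}
A minimal sketch of the boxed formula (our added illustration, with an assumed linear constraint $F_h(\theta,u_h)=(K+\theta M)u_h-b=0$ and $L_h(u_h)=\frac{1}{2}\|u_h-u_{\mathrm{obs}}\|^2$):
{\scriptsize
\begin{verbatim}
import numpy as np

np.random.seed(0)
n = 5
K = np.eye(n) + 0.1*np.random.randn(n, n)
M = 0.1*np.random.randn(n, n)
b, u_obs = np.random.randn(n), np.random.randn(n)

def loss_and_grad(theta):
    A = K + theta*M                        # dF/du = A,  dF/dtheta = M u
    u = np.linalg.solve(A, b)              # forward solve: F(theta, u) = 0
    L = 0.5*np.sum((u - u_obs)**2)
    lam = np.linalg.solve(A.T, u - u_obs)  # adjoint solve: A^T lam = dL/du
    return L, -lam @ (M @ u)               # dL/dth = -dL/du A^{-1} dF/dth

theta, h = 0.3, 1e-6
L, g = loss_and_grad(theta)
fd = (loss_and_grad(theta + h)[0] - loss_and_grad(theta - h)[0]) / (2*h)
print(g, fd)                               # gradient matches finite difference
\end{verbatim}
}
\end{frame}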
\section{Applications}
\begin{frame}
\frametitle{Parameter Inverse Problem: Elastic Full Waveform Inversion for Subsurface Flow Problems}
\begin{figure}[hbt]
\includegraphics[width=0.8\textwidth]{../geo.png}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Fully Nonlinear Implicit Schemes}
\begin{itemize}
\item The governing equation is a nonlinear PDE
\begin{minipage}[b]{0.48\textwidth}
{\scriptsize
\begin{align*}
&\frac{\partial }{{\partial t}}(\phi {{S_i}}{\rho _i}) + \nabla \cdot ({\rho _i}{\mathbf{v}_i}) = {\rho _i}{q_i},\quad
i = 1,2 \\
& S_{1} + S_{2} = 1\\
&{\mathbf{v}_i} = - \frac{{\textcolor{blue}{K}{\textcolor{red}{k_{ri}}}}}{{{\tilde{\mu}_i}}}(\nabla {P_i} - g{\rho _i}\nabla Z), \quad
i=1, 2\\
&k_{r1}(S_1) = \frac{k_{r1}^o S_1^{L_1}}{S_1^{L_1} + E_1 S_2^{T_1}}\\
&k_{r2}(S_1) = \frac{ S_2^{L_2}}{S_2^{L_2} + E_2 S_1^{T_2}}
\end{align*}
}
\end{minipage}~\vline
\begin{minipage}[b]{0.48\textwidth}
\flushleft
{\scriptsize \begin{eqnarray*}
&& \rho \frac{\partial v_z}{\partial t} = \frac{\partial \sigma_{zz}}{\partial z} + \frac{\partial \sigma_{xz}}{\partial x} \nonumber \\
&& \rho \frac{\partial v_x}{\partial t} = \frac{\partial \sigma_{xx}}{\partial x} + \frac{\partial \sigma_{xz}}{\partial z} \nonumber \\
&& \frac{\partial \sigma_{zz}}{\partial t} = (\lambda + 2\mu)\frac{\partial v_z}{\partial z} + \lambda\frac{\partial v_x}{\partial x} \nonumber \\
&& \frac{\partial \sigma_{xx}}{\partial t} = (\lambda + 2\mu)\frac{\partial v_x}{\partial x} + \lambda\frac{\partial v_z}{\partial z} \nonumber \\
&& \frac{\partial \sigma_{xz}}{\partial t} = \mu (\frac{\partial v_z}{\partial x} + \frac{\partial v_x}{\partial z}),
\end{eqnarray*}}
\end{minipage}
\item For stability and efficiency, implicit methods are the industrial standards.
{\scriptsize $$\phi (S_2^{n + 1} - S_2^n) - \nabla \cdot \left( {{m_{2}}(S_2^{n + 1})K\nabla \Psi _2^n} \right) \Delta t =
\left(q_2^n + q_1^n \frac{m_2(S^{n+1}_2)}{m_1(S^{n+1}_2)}\right)
\Delta t\quad m_i(s) = \frac{k_{ri}(s)}{\tilde \mu_i}
$$}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Inverse Modeling Workflow}
Traditionally, the inversion is typically solved by separately inverting the wave equation (FWI) and the flow transport equations.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../coupled3}
\includegraphics[width=0.7\textwidth]{../coupled1}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Coupled Inversion vs. Decoupled Inversion}
We found that \textcolor{red}{coupled inversion reduces the artifacts from FWI significantly and yields a substantially better results}.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{../coupled2}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Travel Time vs. Full Waveforms}
We also compared using only travel time (left, Eikonal equation) versus using full waveforms (right, FWI) for inversion. We found that \textcolor{red}{full waveforms do contain more information for making a better estimation of the permeability property}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../coupled4}
\end{figure}
{\small The Eikonal equation solver was also implemented with physics constrained learning!}
\end{frame}
\begin{frame}
Check out our package FwiFlow.jl for wave and flow inversion and our recently published paper for this work.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{../fwiflow}~
\includegraphics[width=0.45\textwidth]{../fwiflow2}
\end{figure}
\begin{columns}
\centering
\begin{column}{0.33\textwidth}
\begin{center}
\textcolor{red}{\textbf{High Performance}}
\end{center}
Solves inverse modeling problems faster with our GPU-accelerated FWI module.
\end{column}
\begin{column}{0.33\textwidth}
\begin{center}
\textcolor{red}{\textbf{Designed for Subsurface Modeling}}
\end{center}
Provides many operators that can be reused for different subsurface modeling problems.
\end{column}
\begin{column}{0.33\textwidth}
\begin{center}
\textcolor{red}{\textbf{Easy to Extend}}
\end{center}
Allows users to implement and insert their own custom operators and solve new problems.
\end{column}
\end{columns}
\end{frame}
\newcommand{\bsigma}[0]{\bm{\sigma}}
\newcommand{\bepsilon}[0]{\bm{\epsilon}}
\begin{frame}
\frametitle{Function Inverse Problem: Modeling Viscoelasticity}
%
\begin{itemize}
\item Multi-physics Interaction of Coupled Geomechanics and Multi-Phase Flow Equations
{\small
\begin{align*}
\mathrm{div}\bsigma(\bu) - b \nabla p &= 0\\
\frac{1}{M} \frac{\partial p}{\partial t} + b\frac{\partial \epsilon_v(\bu)}{\partial t} - \nabla\cdot\left(\frac{k}{B_f\mu}\nabla p\right) &= f(x,t) \\
\bsigma &= \bsigma(\bepsilon, \dot\bepsilon)
\end{align*}
}
\item Approximate the constitutive relation by a neural network
{\small
$$\bsigma^{n+1} = \mathcal{NN}_{\bt} (\bsigma^n, \bepsilon^n) + H\bepsilon^{n+1}$$}
\end{itemize}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.5\textwidth]{../ip}~
\includegraphics[width=0.3\textwidth]{../cell}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Neural Networks: Inverse Modeling of Viscoelasticity}
\begin{itemize}
\item We propose the following form for modeling viscosity (assume the time step size is fixed):
% $$\bsigma^{n+1} = \mathcal{NN}_{\bt} (\bsigma^n, \bepsilon^n) + H\bepsilon^{n+1}$$
$$\bsigma^{n+1} - \bsigma^{n} = \mathcal{NN}_{\bt} (\bsigma^n, \bepsilon^n) + H (\bepsilon^{n+1} - \bepsilon^n)$$
\end{itemize}
\begin{itemize}
\item $H$ is a free optimizable \textcolor{red}{symmetric positive definite matrix} (SPD). Hence the numerical stiffness matrix is SPD.
\item Implicit linear equation
% $$\bsigma^{n+1} = H(\bepsilon^{n+1} - \bepsilon^n) + \left(\mathcal{NN}_{\bt} (\bsigma^n, \bepsilon^n)+H\bepsilon^n \right)$$
$$\bsigma^{n+1} - H \bepsilon^{n+1} = - H \bepsilon^n
+ \mathcal{NN}_{\bt} (\bsigma^n, \bepsilon^n) + \bsigma^{n}:= \mathcal{NN}_{\bt}^* (\bsigma^n, \bepsilon^n)$$
\item Linear system to solve in each time step $\Rightarrow$ good balance between \textcolor{red}{numerical stability} and \textcolor{red}{computational cost}.
\item Good performance in our numerical examples.
\end{itemize}
\end{frame}
% \begin{frame}
% \frametitle{Neural Networks: Inverse Modeling of Viscoelasticity}
% \begin{figure}
% \centering
% \includegraphics[width=0.7\textwidth]{figures/strainstreess}
% \end{figure}
% \end{frame}
\begin{frame}
\frametitle{Training Strategy and Numerical Stability}
\begin{itemize}
\item Physics constrained learning = improved numerical stability in predictive modeling.
\item For simplicity, consider two strategies to train an NN-based constitutive relation using direct data $\{(\epsilon_o^n, \sigma_o^n)\}_n$
$$\Delta \sigma^n = H \Delta \epsilon^n + \mathcal{NN}_{\bt} (\sigma^n, \epsilon^n),\quad H \succ 0$$
\item Training with input-output pairs
$$\min_{\bt} \sum_n \Big(\sigma_o^{n+1} - \big(H\epsilon_o^{n+1} + \mathcal{NN}_{\bt}^* (\sigma_o^n, \epsilon_o^n)\big) \Big)^2$$
\item Better stability using training on trajectory = \textcolor{red}{physics constrained learning}
\begin{gather*}
\min_{\bt} \ \sum_n (\sigma^n(\bt) - \sigma_o^n)^2 \\
\text{s.t. }\text{I.C.} \ \sigma^1 = \sigma^1_o \text{ and time integrator\ }
{\small \Delta \sigma^n = H \Delta \epsilon^n + \mathcal{NN}_{\bt} (\sigma^n, \epsilon^n)}
\end{gather*}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Experimental Data}
\includegraphics[width=1.0\textwidth]{../experiment}
{\scriptsize Experimental data from: Javidan, Mohammad Mahdi, and Jinkoo Kim. ``Experimental and numerical Sensitivity Assessment of Viscoelasticity for polymer composite Materials.'' Scientific Reports 10.1 (2020): 1--9.}
\end{frame}
\begin{frame}
\frametitle{Inverse Modeling of Viscoelasticity}
\begin{itemize}
\item Comparison with space varying linear elasticity approximation
\begin{equation*}
\bsigma = H(x, y) \bepsilon
\end{equation*}
\end{itemize}
\begin{figure}[hbt]
\includegraphics[width=1.0\textwidth]{../visco1}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Inverse Modeling of Viscoelasticity}
\begin{figure}[hbt]
\includegraphics[width=0.7\textwidth]{../visco2}
\end{figure}
\end{frame}
\section{ADCME: Scientific Machine Learning for Inverse Modeling}
\begin{frame}
\frametitle{Physical Simulation as a Computational Graph}
\begin{figure}[hbt]
\includegraphics[width=1.0\textwidth]{../custom.png}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{A General Approach to Inverse Modeling}
\begin{figure}[hbt]
\includegraphics[width=1.0\textwidth]{../summary.png}
\end{figure}
%
\end{frame}
%}
%\usebackgroundtemplate{}
%----------------------------------------------------------------------------------------
% PRESENTATION SLIDES
%----------------------------------------------------------------------------------------
%------------------------------------------------
\end{document} | {
"alphanum_fraction": 0.6895275419,
"avg_line_length": 33.4036363636,
"ext": "tex",
"hexsha": "3e23a91ac24eece108851e97f402526375ef7a4d",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-08-14T09:14:08.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-08-14T09:14:08.000Z",
"max_forks_repo_head_hexsha": "eee87c3aea6ad990bdf80f63e33c7a926f8d5392",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "z403173402/ADCME.jl",
"max_forks_repo_path": "docs/src/assets/Slide/Subsurface.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "eee87c3aea6ad990bdf80f63e33c7a926f8d5392",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "z403173402/ADCME.jl",
"max_issues_repo_path": "docs/src/assets/Slide/Subsurface.tex",
"max_line_length": 1235,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "eee87c3aea6ad990bdf80f63e33c7a926f8d5392",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "z403173402/ADCME.jl",
"max_stars_repo_path": "docs/src/assets/Slide/Subsurface.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 8749,
"size": 27558
} |
\documentclass[11pt]{article}
\usepackage{amsmath, amsfonts, amsthm, amssymb} % Some math symbols
\usepackage{enumerate}
\usepackage{fullpage}
\usepackage[x11names, rgb]{xcolor}
\usepackage{tikz}
\usepackage[colorlinks=true, urlcolor=blue]{hyperref}
\usepackage{graphicx}
\usepackage{gensymb}
\usetikzlibrary{snakes,arrows,shapes}
\usepackage{listings}
\usepackage{array}
\usepackage{mathtools}
\setlength{\parindent}{0pt}
\setlength{\parskip}{5pt plus 1pt}
\pagestyle{empty}
\def\indented#1{\list{}{}\item[]}
\let\indented=\endlist
\newcounter{questionCounter}
\newenvironment{question}[2][\arabic{questionCounter}]{%
\addtocounter{questionCounter}{1}%
\setcounter{partCounter}{0}%
\vspace{.25in} \hrule \vspace{0.5em}%
\noindent{\bf #2}%
\vspace{0.8em} \hrule \vspace{.10in}%
}{}
\newcounter{partCounter}[questionCounter]
\renewenvironment{part}[1][\alph{partCounter}]{%
\addtocounter{partCounter}{1}%
\vspace{.10in}%
\begin{indented}%
{\bf (#1)} %
}{\end{indented}}
%%%%%%%%%%%%%%%%% Identifying Information %%%%%%%%%%%%%%%%%
%% This is here, so that you can make your homework look %%
%% pretty when you compile it. %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\myhwname}{Homework 5}
\newcommand{\mysection}{CS Fundamentals: Pong Review: Math! [due Tuesday]}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{center}
{\Large \myhwname} \\
\mysection \\
\today
\end{center}
%%%%%%%%%%%%%%%%% PROBLEM 1: Probability Review %%%%%%%%%%%%%%%%%%%%%%%%%
\section{Review: Pong Game [Expected Duration: 15 - 20 min]}
\textbf{If you have any questions about the directions or any blocks you have not used before, let me know via email or text!}\\\\
\noindent\makebox[\linewidth]{\rule{\paperwidth}{0.4pt}}\\
In the last lesson, we built a very cool pong game together.
We built this cool game by using many things we learned previously and MATH.
For homework, I want you to do some math with angles. Math is quite important in programming! Have fun!\\\\
I want you to use the \textbf{word bank} below to complete this assignment.
Note that \textbf{the words in the word bank are each used exactly once... all words are used}.\\\\
If you have any questions, \textbf{please} let me know!\\\\
\noindent\makebox[\linewidth]{\rule{\paperwidth}{0.4pt}}\\
\textbf{Instructions/Notes:}\\
$0\degree = 360\degree = -360\degree$ is Up.\\
$90\degree = -270 \degree$ is East.\\
$-90\degree = 270 \degree$ is West.\\
$180\degree = -180\degree$ is South.\\\\
This homework is hard! You will need to know how to do reflections over the x-axis and y-axis. Do the best you can! It is okay to not finish. Looking at the code for the pong game can help!
\\
\noindent\makebox[\linewidth]{\rule{\paperwidth}{0.4pt}}
\begin{enumerate}
\item \textbf{Bouncing off of the top wall}\\
The only walls the ball in pong can bounce off of are the ones on the bottom and at the top. Drawing pictures may help solve the problems below.
% $\rule{2.5cm}{0.15mm}$
\begin{enumerate}[a.]
\item Let's say a ball was approaching the top wall at a $-45\degree$ (NorthWest) angle. To simulate bouncing off of the wall, we send the ball going at a $-135\degree$ (SouthWest) angle. Was this a reflection over the x-axis or the y-axis (think 2D coordinate! if you don't know what this means, email me!)?\\\\\\
\item If the ball was approaching the top wall at a $50 \degree$ angle, what direction (degree) should the ball be traveling once the ball bounces? (Think reflection.)\\\\\\
\item Generally, if the ball was approaching the top wall at an $x \degree$ angle, what direction $y \degree$ should the ball be traveling once the ball bounces? Write an equation for $y$ in terms of $x$. (note: this is hard! try your best! the code can help!)\\\\\\\\\\\\
\end{enumerate}
\end{enumerate}
\noindent\makebox[\linewidth]{\rule{\paperwidth}{0.4pt}}
\begin{enumerate}
\item \textbf{Bouncing off of the bottom wall}\\
The only walls the ball in pong can bounce off of are the ones on the bottom and at the top. Drawing pictures may help solve the problems below.
% $\rule{2.5cm}{0.15mm}$
\begin{enumerate}[a.]
\item Let's say a ball was approaching the bottom wall at a $135\degree$ (Southeast) angle. To simulate bouncing off of the wall, we send the ball going at a $45\degree$ (Northeast) angle. Was this a reflection over the x-axis or the y-axis (think 2D coordinate! if you don't know what this means, email me!)?\\\\\\
\item If the ball was approaching the bottom wall at a $-100 \degree$ angle, what direction (degree) should the ball be traveling once the ball bounces? (Think reflection.)\\\\\\
\item Generally, if the ball was approaching the bottom wall at an $x \degree$ angle, what direction $y \degree$ should the ball be traveling once the ball bounces? Write an equation for $y$ in terms of $x$. (note: this is hard! try your best! the code can help!)\\\\\\\\\\\\\\\\\\\\\\
\end{enumerate}
\end{enumerate}
\begin{enumerate}
\item \textbf{Bouncing off of the right player's paddle}\\
There are 2 players in pong. Each has a paddle. The ball can bounce off of these paddles. Drawing pictures may help solve the problems below.
% $\rule{2.5cm}{0.15mm}$
\begin{enumerate}[a.]
\item Let's say a ball was approaching the right player's paddle at a $135\degree$ (Southeast) angle. To simulate bouncing off of the paddle, we send the ball going at a $-135 \degree$ (Southwest) angle. Was this a reflection over the x-axis or the y-axis (think 2D coordinate! if you don't know what this means, email me!)?\\\\\\
\item If the ball was approaching the right player's paddle at a $100 \degree$ angle, what direction (degree) should the ball be traveling once the ball bounces? (Think reflection again.)\\\\\\
\item Generally, if the ball was approaching the right player's paddle at an $x \degree$ angle, what direction $y \degree$ should the ball be traveling once the ball bounces? Write an equation for $y$ in terms of $x$. (note: this is hard! try your best! the code can help!)\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
\end{enumerate}
\end{enumerate}
\begin{enumerate}
\item \textbf{Bouncing off of the left player's paddle}\\
There are 2 players in pong. Each has a paddle. The ball can bounce off these paddles. Drawing pictures may help solve the problems below.
% $\rule{2.5cm}{0.15mm}$
\begin{enumerate}[a.]
\item Let's say a ball was approaching the left player's paddle at a $-45\degree$ (NorthWest) angle. To simulate bouncing off of the paddle, we send the ball going at a $45 \degree$ (NorthEast) angle. Was this a reflection over the x-axis or the y-axis (think 2D coordinate! if you don't know what this means, email me!)?\\\\\\
\item If the ball was approaching the left player's paddle at a $120 \degree$ angle, what direction (degree) should the ball be traveling once the ball bounces? (Think reflection again.)\\\\\\
\item Generally, if the ball was approaching the left player's paddle at a $x \degree$ angle, what direction $y \degree$ should the ball be traveling once the ball bounces? Write an equation for $y$ in terms of $x$. (note: this is hard! try your best! the code can help!)
\end{enumerate}
\end{enumerate}
\end{document}
| {
"alphanum_fraction": 0.7001509538,
"avg_line_length": 62.8189655172,
"ext": "tex",
"hexsha": "ae59bc684b084d540b97d1705e89f788e7498e40",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "49e2ba1bf53e65a6dcd7dd521443ff12359f9f7a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "kevink97/Teaching-Materials",
"max_forks_repo_path": "tutor_ms/hw5/hw5.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "49e2ba1bf53e65a6dcd7dd521443ff12359f9f7a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "kevink97/Teaching-Materials",
"max_issues_repo_path": "tutor_ms/hw5/hw5.tex",
"max_line_length": 331,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "49e2ba1bf53e65a6dcd7dd521443ff12359f9f7a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "kevink97/Teaching-Materials",
"max_stars_repo_path": "tutor_ms/hw5/hw5.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2043,
"size": 7287
} |
\chapter{The Initial Mass Function: Theory}
\label{ch:imf_th}
\marginnote{
\textbf{Suggested background reading:}
\begin{itemize}
\item \href{http://adsabs.harvard.edu/abs/2014arXiv1402.0867K}{Krumholz, M.~R. 2014, Phys.~Rep., 539, 49}, section 6 \nocite{krumholz14c}
\end{itemize}
\textbf{Suggested literature:}
\begin{itemize}
\item \href{http://adsabs.harvard.edu/abs/2012MNRAS.423.2037H}{Hopkins, 2012, MNRAS, 423, 2037} \nocite{hopkins12d}
\item \href{http://adsabs.harvard.edu/abs/2012ApJ...754...71K}{Krumholz et al., 2012, ApJ, 754, 71} \nocite{krumholz12b}
\end{itemize}
}
The previous chapter discussed observations of the initial mass function, both how they are made and what they tell us. We now turn to theoretical attempts to explain the IMF. As with theoretical models of the star formation rate, there is at present no completely satisfactory theory for the origin of the IMF, just different ideas that do better or worse at various aspects of the problem. To recall, the things we would really like to explain most are (1) the slope of the powerlaw at high masses, and (2) the location of the peak mass. We would also like to explain the little-to-zero variation in these quantities with galactic environment. Furthermore, we would like to explain the origin of the distribution of binary properties.
\section{The Powerlaw Tail}
We begin by considering the powerlaw tail at high masses, $dn/dm \propto m^{-\Gamma}$ with $\Gamma \approx 2.3$. There are two main classes of theories for how this powerlaw tail is set: competitive accretion and turbulence. Both are scale-free processes that could plausibly produce a powerlaw distribution of masses comparable to what is observed.
\subsection{Competitive Accretion}
One hypothesis for how to produce a powerlaw mass distribution is to consider what will happen in a region where a collection of small ``seed'' stars form, and then begin to accrete at a rate that is a function of their current mass. Quantitatively, and for simplicity, suppose that every star accretes at a rate proportional to some power of its current mass, i.e.,
\begin{equation}
\frac{dm}{dt} \propto m^\eta.
\end{equation}
If we start with a mass $m_0$ and accretion rate $\dot{m}_0$ at time $t_0$, this ODE is easy to solve for the mass at later times. We get
\begin{equation}
m(t) = m_0 \left\{
\begin{array}{ll}
[1 - (\eta-1)\tau]^{1/(1-\eta)}, & \eta \neq 1 \\
\exp(\tau), & \eta = 1
\end{array}
\right.,
\end{equation}
where $\tau = t / (m_0/\dot{m}_0)$ is the time measured in units of the initial mass-doubling time. The case for $\eta = 1$ is the usual exponential growth, and the case for $\eta > 1$ is even faster, running away to infinite mass in a finite amount of time $\tau = 1/(\eta-1)$.
Now suppose that we start with a collection of stars that all begin at mass $m_0$, but have slightly different values of $\tau$ at which they stop growing, corresponding either to growth stopping at different physical times from one star to another, to stars stopping at the same time but having slightly different initial accretion rates $\dot{m}_0$, or some combination of both. What will the mass distribution of the resulting population be? If $dn/d\tau$ is the distribution of stopping times, then we will have
\begin{equation}
\frac{dn}{dm} \propto \frac{dn/d\tau}{dm/d\tau} \propto m(\tau)^{-\eta} \frac{dn}{d\tau}.
\end{equation}
Thus the final distribution of masses will be a powerlaw in mass, with index $-\eta$, going from $m(\tau_{\rm min})$ to $m(\tau_{\rm max})$; a powerlaw distribution naturally results.
The index of this powerlaw will depend on the index of the accretion law, $\eta$. What should this be? In the case of a point mass accreting from a uniform, infinite medium at rest, the accretion rate was worked out by \citet{hoyle46a} and \citet{bondi52a}, and the problem is known as Bondi-Hoyle accretion. The accretion rate scales as $\dot{m} \propto m^2$, so if this process describes how stars form, then the expected mass distribution should follow $dn/dm\propto m^{-2}$, not so far from the actual slope of $-2.3$ that we observe. A number of authors have argued that this difference can be made up by considering the effects of a crowded environment, where the feeding regions of smaller stars get tidally truncated, and thus the growth law winds up being somewhat steeper than $\dot{m}\propto m^2$.
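As a concrete, if highly idealized, illustration of this argument, the short Python sketch below draws a large population of seed stars that all obey $\dot{m}\propto m^2$ and stop accreting at uniformly distributed times, and then measures the slope of the resulting mass function; the sample size and the stopping-time range are arbitrary choices made for the example, not part of any published model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

# Toy competitive accretion: dm/dt = m^eta with eta = 2 (Bondi-Hoyle-like),
# in units where m_0 = 1 and the initial mass-doubling time is 1.
eta = 2.0
# Stopping times drawn short of the runaway time tau = 1/(eta - 1) = 1.
tau = rng.uniform(0.0, 0.95, size=200_000)
m = (1.0 - (eta - 1.0) * tau) ** (1.0 / (1.0 - eta))   # solution quoted above

# Fit the high-mass tail of dn/d(ln m); the argument predicts dn/dm ~ m^-eta.
hist, edges = np.histogram(np.log(m), bins=50)
centers = 0.5 * (edges[1:] + edges[:-1])
tail = (hist > 0) & (centers > np.log(3.0))
slope = np.polyfit(centers[tail], np.log(hist[tail]), 1)[0]
print("estimated dn/dm index:", slope - 1.0)           # close to -2
\end{verbatim}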
\begin{marginfigure}
\includegraphics[width=\linewidth]{imf_bate09}
\caption[IMF in a competitive accretion simulation]{
\label{fig:imf_bate09}
The IMF measured in a simulation of the collapse of a 500 $M_\odot$ initially uniform density cloud. The single-hatched histogram shows all objects in the simulation, while the double-hatched one shows objects that have stopped accreting. Credit: \citeauthor{bate09b}, 2009, MNRAS, 392, 590, reproduced by permission of Oxford University Press on behalf of the RAS.
}
\end{marginfigure}
This is an extremely simple model, requiring no physics but hydrodynamics and gravity, and thus it is easy to simulate. Simulations done based on this model do sometimes return a mass distribution that looks much like the IMF, as illustrated in Figure \ref{fig:imf_bate09}. However, this appears to depend on the choice of initial conditions. Generally speaking, one gets about the right IMF if one starts with something with a virial ratio $\alpha_{\rm vir} \sim 1$ and no initial density structure, just velocities. Simulations that start with either supervirial or sub-virial initial conditions, or that begin with turbulent density structures, do not appear to grow as predicted by competitive accretion (e.g., \citealt{clark08a}).
Another potential problem with this model is that it only seems to work in environments where there is no substantial feedback to drive the turbulence or eject the gas. In simulations where this is not true, there appears to be no competitive accretion. The key issue is that competitive accretion seems to require a global collapse where all the stars fall together into a region where they can compete, and this is hard to accomplish in the presence of feedback.
Yet a third potential issue is that this model has trouble making sense of the IMF peak, as we will discuss in Section \ref{sec:imfpeak}.
\subsection{Turbulent Fragmentation}
A second class of models for the origin of the powerlaw slope is based on the physics of turbulence. The first of these models was proposed by \citet{padoan97a}, and there have been numerous refinements since \citep[e.g.,][]{padoan02a, padoan07a, hennebelle08b, hennebelle09a, hopkins12e, hopkins12d}. The basic assumption in the turbulence models is that the process of shocks repeatedly passing through an isothermal medium leads to a broad range of densities, and that stars form wherever a local region happens to be pushed to the point where it becomes self-gravitating. We then proceed as follows. Suppose we consider the density field smoothed on some size scale $\ell$. The mass of an object of density $\rho$ in this smoothed field is
\begin{equation}
m \sim \rho \ell^3,
\end{equation}
and the total mass of objects with characteristic density between $\rho$ and $\rho+d\rho$ is
\begin{equation}
dM_{\rm tot} \sim \rho p(\rho) \,d\rho,
\end{equation}
where $p(\rho)$ is the density PDF. Then the total number of objects in the mass range from $m$ to $m+dm$ on size scale $\ell$ can be obtained just by dividing the total mass of objects at a given density by the mass per object, and integrating over the density PDF on that size scale
\begin{equation}
\frac{dn_\ell}{dm} = \frac{dM_{\rm tot}}{m} \sim \ell^{-3} \int p(\rho) \,d\rho.
\end{equation}
Not all of these structures will be bound. To filter out the ones that are, we impose a density threshold, analogous to the one we used in computing the star formation rate in Section \ref{ssec:eff_th}.\footnote{Indeed, several of the models discussed there allow simultaneous computation of the star formation rate and the IMF.} We assert that an object will be bound only if its gravitational energy exceeds its kinetic energy, that is, only if the density exceeds a critical value given by
\begin{equation}
\frac{Gm^2}{\ell} \sim m \sigma(\ell)^2
\qquad\Longrightarrow\qquad
\rho_{\rm crit} \sim \frac{\sigma(\ell)^2}{G \ell^2},
\end{equation}
where $\sigma(\ell)$ is the velocity dispersion on size scale $\ell$, which we take from the linewidth-size relation, $\sigma(\ell) = c_s (\ell/\ell_s)^{1/2}$. Thus we have a critical density
\begin{equation}
\rho_{\rm crit} \sim \frac{c_s^2}{G \ell_s \ell},
\end{equation}
and this forms a lower limit on the integral.
There are two more steps in the argument. One is simple: just integrate over all length scales to get the total number of objects. That is,
\begin{equation}
\frac{dn}{dm} \propto \int \frac{dn_\ell}{dm} \, d\ell.
\end{equation}
The second is that we must know the functional form $p(\rho)$ for the smoothed density PDF. One can estimate this in a variety of ways, but to date no one has performed a fully rigorous calculation. For example, \citet{hopkins12e} assumes that the PDF is lognormal no matter what scale it is smoothed on, and all that changes as one alters the smoothing scale is the dispersion. He obtains this by setting the dispersion on some scale $\sigma_\ell$ equal to an integral over the dispersions on all smaller scales. In contrast, \citet{hennebelle08b, hennebelle09a} assume that the density power spectrum is a powerlaw, and derive the density PDF from that. These two assumptions yield similar but not identical results.
\begin{marginfigure}
\includegraphics[width=\linewidth]{imf_hopkins12}
\caption[IMF from an analytic model of turbulent fragmentation]{
\label{fig:imf_hopkins12}
The IMF predicted by an analytic model of turbulent fragmentation. Credit: \citeauthor{hopkins12d}, MNRAS, 423, 2037, reproduced by permission of Oxford University Press on behalf of the RAS.
}
\end{marginfigure}
At this point we will cease following the mathematical formalism in detail; interested readers can find it worked out in the papers referenced above. We will simply assert that one can at this point evaluate all the integrals to get an IMF. The result clearly depends only on two dimensional quantities: the sound speed $c_s$ and the sonic length $\ell_s$. However, at masses much greater than the sonic mass $M_s \approx c_s^2 \ell_s/G$, the result is close to a powerlaw with approximately the right index. Figure \ref{fig:imf_hopkins12} shows an example prediction.
As with the competitive accretion model, this hypothesis encounters certain difficulties. First, there is the technical problem that the choice of smoothed density PDF estimate is not at all rigorous, and there are noticeable differences in the result depending on how the choice is made. Second, the dependence on the sonic length is potentially problematic, because real molecular clouds do not have constant sonic lengths. Regions of massive star formation are observed to be systematically more turbulent. Third, the theory does not address the question of why gravitationally-bound regions do not sub-fragment as they collapse. Indeed, \citet{guszejnov15a} and \citet{guszejnov16a} argue that, when this effect is taken into account, the IMF (as opposed to the core mass distribution) becomes a pure powerlaw. As a result, the model has trouble explaining the IMF peak.
\section{The Peak of the IMF}
\label{sec:imfpeak}
\subsection{Basic Theoretical Considerations}
A powerlaw is scale-free, but the peak has a definite mass scale. This mass scale is one basic observable that any theory of star formation must be able to predict. Moreover, the presence of a characteristic mass scale immediately tells us something about the physical processes that must be involved in producing the IMF. We have thus far thought of molecular clouds as consisting mostly of isothermal, turbulent, magnetized, self-gravitating gas. However, we can show that there \textit{must} be additional processes beyond these at work in setting a peak mass.
We can see this in a few ways. First we can demonstrate it in a more intuitive but not rigorous manner, and then we can demonstrate it rigorously. The intuitive argument is as follows. In the system we have described, there are four energies in the problem: thermal energy, bulk kinetic energy, magnetic energy, and gravitational potential energy. From these energies we can define three dimensionless ratios, and the behavior of the system will be determined by these three ratios. As an example, we might define the Mach number, plasma beta, and Jeans number via
\begin{equation}
\mathcal{M} = \frac{\sigma}{c_s} \qquad \beta = \frac{8\pi \rho c_s^2}{B^2}
\qquad n_J = \frac{\rho L^3}{c_s^3/\sqrt{G^3\rho}}.
\end{equation}
The ratios describe the ratio of kinetic to thermal energy, the ratio of thermal to magnetic energy, and the ratio of thermal to gravitational energy. All other dimensionless numbers we normally use can be derived from these, e.g., the Alfv\'enic Mach number $\mathcal{M}_A = \mathcal{M}\sqrt{\beta/2}$ is simply the ratio of kinetic to magnetic energy.
Now notice the scalings of these numbers with density $\rho$, velocity dispersion $\sigma$, magnetic field strength $B$, and length scale $L$:
\begin{equation}
\mathcal{M} \propto \sigma
\qquad
\beta \propto \rho B^{-2}
\qquad
n_J \propto \rho^{3/2} L^3.
\end{equation}
If we scale the problem by $\rho\rightarrow x \rho$, $L\rightarrow x^{-1/2} L$, $B\rightarrow x^{1/2} B$, all of these dimensionless numbers remain fixed. Thus the behavior of two systems, one with density a factor of $x$ times larger than the other one, length a factor of $x^{-1/2}$ smaller, and magnetic field a factor of $x^{1/2}$ stronger, are simply rescaled versions of one another. If the first system fragments to make a star out of a certain part of its gas, the second system will too. Notice, however, that the {\it masses} of those stars will not be the same! The first star will have a mass that scales as $\rho L^3$, while the second will have a mass that scales as $(x\rho) (x^{-1/2} L)^3 = x^{-1/2} \rho L^3$. We learn from this an important lesson: isothermal gas is scale-free. If we have a model involving only isothermal gas with turbulence, gravity, and magnetic fields, and this model produces stars of a given mass $m_*$, then we can rescale the system to obtain an arbitrarily different mass.
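The same point can be made numerically. The Python sketch below evaluates the three dimensionless numbers for a fiducial clump and for a rescaled copy with $\rho\rightarrow x \rho$, $L\rightarrow x^{-1/2} L$, $B\rightarrow x^{1/2} B$; the cgs values chosen for the fiducial clump are purely illustrative.
\begin{verbatim}
import numpy as np

G, c_s = 6.674e-8, 2.0e4          # cgs; c_s appropriate for ~10 K molecular gas

def dimensionless(rho, sigma, B, L):
    mach = sigma / c_s
    beta = 8.0 * np.pi * rho * c_s**2 / B**2
    n_J = rho * L**3 / (c_s**3 / np.sqrt(G**3 * rho))   # mass over Jeans mass
    return mach, beta, n_J

x = 100.0
rho, sigma, B, L = 1e-20, 2.0e5, 1e-5, 3.0e18
same = np.allclose(dimensionless(rho, sigma, B, L),
                   dimensionless(x * rho, sigma, np.sqrt(x) * B, L / np.sqrt(x)))
print(same)                                             # True: identical dynamics
print((x * rho) * (L / np.sqrt(x))**3 / (rho * L**3))   # but the mass ratio is x^(-1/2)
\end{verbatim}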
Now that we understand the basic idea, we can show this a bit more formally. Consider the equations describing this system. For simplicity we will omit both viscosity and resistivity. These are
\begin{eqnarray}
\frac{\partial \rho}{\partial t} & = & -\nabla \cdot (\rho \vecv) \\
\frac{\partial}{\partial t}(\rho \vecv) & = & -\nabla \cdot (\rho\vecv\vecv) - c_s^2 \nabla \rho
\nonumber \\
& & {} + \frac{1}{4\pi} (\nabla \times \vecB) \times \vecB - \rho \nabla \phi \\
\frac{\partial\vecB}{\partial t} & = & -\nabla \times (\vecB\times\vecv) \\
\nabla^2 \phi & = & 4\pi G \rho
\end{eqnarray}
One can non-dimensionalize these equations by choosing a characteristic length scale $L$, velocity scale $V$, density scale $\rho_0$, and magnetic field scale $B_0$, and making a change of variables $\mathbf{x} = \mathbf{x}'L$, $t = t' L/V$, $\rho = r \rho_0$, $\mathbf{B} = \mathbf{b} B_0$, $\mathbf{v} = \mathbf{u} V$, and $\phi = \psi G\rho_0 L^2$. With fairly minimal algebra, the equations then reduce to
\begin{eqnarray}
\frac{\partial r}{\partial t'} & = & -\nabla'\cdot (r\mathbf{u}) \\
\frac{\partial}{\partial t'}(r \mathbf{u}) & = & -\nabla' \cdot \left(r\mathbf{uu}\right) - \frac{1}{\mathcal{M}^2}\nabla' r \nonumber \\
& & {} + \frac{1}{\mathcal{M}_A^2} (\nabla'\times\mathbf{b})\times\mathbf{b} - \frac{1}{\alpha_{\rm vir}} r \nabla' \psi \\
\frac{\partial\mathbf{b}}{\partial t'} & = & -\nabla'\times\left(\mathbf{b}\times\mathbf{u}\right) \\
\nabla'^2 \psi & = & 4\pi r,
\end{eqnarray}
where $\nabla'$ indicates differentiation with respect to $x'$. The dimensionless ratios appearing in these equations are
\begin{eqnarray}
\mathcal{M} & = & \frac{V}{c_s} \\
\mathcal{M}_A & = & \frac{V}{V_A} = V\frac{\sqrt{4\pi \rho_0}}{B_0} \\
\alpha_{\rm vir} & = & \frac{V^2}{G \rho_0 L^2},
\end{eqnarray}
which are simply the Mach number, Alfv\'{e}n Mach number, and virial ratio for the system. These equations are fully identical to the original ones, so any solution to them is also a valid solution to the original equations. In particular, suppose we have a system of size scale $L$, density scale $\rho_0$, magnetic field scale $B_0$, velocity scale $V$, and sound speed $c_s$, and that the evolution of this system leads to a star-like object with a mass
\begin{equation}
m \propto \rho_0 L^3.
\end{equation}
One can immediately see that a system with length scale $L' = y L$, density scale $\rho_0' = \rho_0 / y^2$, magnetic field scale $B_0' = B_0/y$, and velocity scale $V' = V$ has exactly the same values of $\mathcal{M}$, $\mathcal{M}_A$, and $\alpha_{\rm vir}$ as the original system, and therefore has exactly the same evolution. However, in this case the star-like object will instead have a mass
\begin{equation}
m' \propto \rho' L'^3 = y m
\end{equation}
Thus we can make objects of arbitrary mass just by rescaling the system.
This analysis implies that explaining the IMF peak requires appealing to some physics beyond that of isothermal, magnetized turbulence plus self-gravity. This immediately shows that the competitive accretion and turbulence theories we outlined to explain the powerlaw tail of the IMF cannot be adequate to explain the IMF peak, at least not by themselves. Something must be added, and models for the origin of the IMF peak can be broadly classified based on what extra physics they choose to add.
\subsection{The IMF From Galactic Properties}
One option is to hypothesize that the IMF is set at the outer scale of the turbulence, where the molecular clouds join the atomic ISM (in a galaxy like the Milky Way), or at sizes of order the galactic scale height (for a molecule-dominated galaxy). Something at this outer scale picks out the characteristic mass of stars at the IMF peak.
This hypothesis comes in two flavors. The simplest is that characteristic mass is simply set by the Jeans mass at the mean density $\overline{\rho}$ of the cloud, so that
\begin{equation}
m_{\rm peak} \propto \frac{c_s^3}{\sqrt{G^3 \overline{\rho}}}
\end{equation}
While simple, this hypothesis immediately encounters problems. Molecular clouds have about the same temperature everywhere, but they do not all have the same density -- indeed, based on our result that the surface density is about constant, the density should vary with cloud mass as $M^{-1/2}$. Thus at face value this hypothesis would seem to predict a factor of $\sim 3$ difference in characteristic peak mass between $10^4$ and $10^6$ $M_\odot$ clouds in the Milky Way. This is pretty hard to reconcile with observations. The problem is even worse if we think about other galaxies, where the range of density variation is much greater and thus the predicted IMF variation is too. One can hope for a convenient cancellation, whereby an increase in the density is balanced by an increase in temperature, but this seems to require a coincidence.
A somewhat more refined hypothesis, which is adopted by all the turbulence models, is that the IMF peak is set by the sound speed and the normalization of the linewidth-size relation. As discussed above, in the turbulence models the only dimensional free parameters are $c_s$ and $\ell_s$, and from them one can derive a mass in only one way:
\begin{equation}
m_{\rm peak} \sim \frac{c_s^2 \ell_s}{G}.
\end{equation}
\citet{hopkins12d} calls this quantity the sonic mass, but it's the same thing as the characteristic masses in the other models.
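To get a feel for the numbers, take illustrative Milky Way-like values: a $10$ K cloud with $\mu = 2.33$ has $c_s \approx 0.2$ km s$^{-1}$, and a sonic length of order $\ell_s \sim 0.1$ pc then gives
\begin{equation}
m_{\rm peak} \sim \frac{c_s^2 \ell_s}{G} \approx \frac{(0.2\mbox{ km s}^{-1})^2 (0.1\mbox{ pc})}{G} \approx 1\; M_\odot,
\end{equation}
within a factor of a few of the observed peak.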
This value can be expressed in a few ways. Suppose that we have a cloud of characteristic mass $M$ and radius $R$. We can write the velocity dispersion in terms of the virial parameter:
\begin{equation}
\alpha_{\rm vir} \sim \frac{\sigma^2 R}{G M}
\qquad\Longrightarrow\qquad
\sigma \sim \sqrt{\alpha_{\rm vir} \frac{G M}{R}}.
\end{equation}
This is the velocity dispersion on the outer scale of the cloud, so we can also define the Mach number on this scale as
\begin{equation}
\mathcal{M} = \frac{\sigma}{c_s} \sim \sqrt{\alpha_{\rm vir} \frac{G M}{R c_s^2}}
\end{equation}
The sonic length is just the length scale at which $\mathcal{M} \sim 1$, so if the velocity dispersion scales with $\ell^{1/2}$, then we have
\begin{equation}
\ell_s \sim \frac{R}{\mathcal{M}^2} \sim \frac{c_s^2}{\alpha_{\rm vir} G \Sigma},
\end{equation}
where $\Sigma\sim M/R^2$ is the surface density. Substituting this in, we have
\begin{equation}
m_{\rm peak} \sim \frac{c_s^4}{\alpha_{\rm vir} G^2 \Sigma},
\end{equation}
and thus the peak mass simply depends on the surface density of the cloud. We can obtain another equivalent expression by noticing that
\begin{equation}
\frac{M_J}{\mathcal{M}} \sim \frac{c_s^3}{\sqrt{G^3 \overline{\rho}}} \sqrt{\frac{R c_s^2}{\alpha_{\rm vir} G M}}
\sim \frac{c_s^4}{\alpha_{\rm vir} G^2 \Sigma} \sim m_{\rm peak}
\end{equation}
Thus, up to a factor of order unity, this hypothesis is also equivalent to assuming that the characteristic mass is simply the Jeans mass divided by the Mach number.
An appealing aspect of this argument is that it naturally explains why molecular clouds in the Milky Way all make stars at about the same mass. A less appealing result is that it would seem to predict that the masses could be quite different in regions of different surface density, and we observe that there are star-forming regions where $\Sigma$ is indeed much higher than the mean of the Milky Way GMCs. This is doubly-true if we extend our range to extragalactic environments. One can hope that this will cancel because the temperature will be higher and thus $c_s$ will increase, but this again seems to depend on a lucky cancellation, and there is no \textit{a priori} reason why it should.
\subsection{Non-Isothermal Fragmentation}
The alternative to breaking the isothermality at the outer scale of the turbulence is to relax the assumption that the gas is isothermal on small scales. This has the advantage that it avoids any ambiguity about what constitutes the surface density or linewidth-size relation normalization for a ``cloud''.
\paragraph{Fixed equation of state models.}
\begin{marginfigure}
\includegraphics[width=\linewidth]{rhot_masunaga00}
\caption[Temperature versus density in a collapsing core]{
\label{fig:rhot_masunaga00}
Temperature versus density found in a one-dimensional calculation of the collapse of a $1$ $M_\odot$ gas cloud, at the moment immediately before a central protostar forms. Credit: \citet{masunaga00a}, \copyright AAS. Reproduced with permission.
}
\end{marginfigure}
The earliest versions of these models were proposed by \citet{larson05a}, and followed up by \citet{jappsen05a}. The basic idea of these models is that the gas in star-forming clouds is only approximately isothermal. Instead, there are small deviations from isothermality, which can pick out preferred mass scales. We will discuss these in more detail in Chapters \ref{ch:first_stars} and \ref{ch:protostar_form}, but for now we assert that there are two places where significant deviations from isothermality are expected (Figure \ref{fig:rhot_masunaga00}).
At low density the main heating source is cosmic rays and UV photons, both of which produce a constant heating rate per nucleus if attenuation is not significant. This is because the flux of CRs and UV photons is about constant, and the rate of energy deposition is just proportional to the number of target atoms or dust grains for them to interact with. Cooling is primarily by lines, either of CO once the gas is mostly molecular, or of C$^+$ or O where it is significantly atomic.
In both cases, at low density the gas is slightly below the critical density of the line, so the cooling rate per nucleus or per molecule is an increasing function of density. Since heating per nucleus is constant but cooling per nucleus increases, the equilibrium temperature decreases with density. As one goes to higher density and passes the CO critical density this effect ceases. At that point one generally starts to reach densities such that shielding against UV photons is significant, so the heating rate goes down and thus the temperature continues to drop with density.
This begins to change at a density of around $10^{-18}$ g cm$^{-3}$, $n\sim 10^5 - 10^6$ cm$^{-3}$. By this point the gas and dust have been thermally well-coupled by collisions, and the molecular lines are extremely optically thick, so dust is the main thermostat. As long as the gas is optically thin to thermal dust emission, which it is at these densities, the dust cooling rate per molecule is fixed, since the cooling rate just depends on the number of dust grains. Heating at these densities comes primarily from compression as the gas collapses, i.e., it is just $P\, dV$ work. If the compression were at a constant rate, the heating rate per molecule would be constant. However, the free-fall time decreases with density, so the collapse rate and thus the heating rate per molecule increase with density. The combination of fixed cooling rate and increasing heating rate causes the temperature to begin rising with density. At still higher densities, $\sim 10^{-13}$ g cm$^{-3}$, the gas becomes optically thick to dust thermal emission. At this point the gas simply acts adiabatically, with all the $P\,dV$ work being retained, so the heating rate with density rises again.
\citet{larson05a} pointed out that deviations from isothermality are particularly significant for filamentary structures, which dominate in turbulent flows. It is possible to show that a filament cannot go into runaway collapse if $T$ varies as a positive power of $\rho$, while it can collapse if $T$ varies as a negative power of $\rho$. This suggests that filaments will collapse indefinitely in the low-density regime, but that their collapse will then halt around $10^{-18}$ g cm$^{-3}$, forcing them to break up into spheres in order to collapse further. The upshot of all these arguments is that the Jeans or Bonnor-Ebert mass one should be using to estimate the peak of the stellar mass spectrum is the one corresponding to the point where there is a changeover from sub-isothermal to super-isothermal.
In other words, the $\rho$ and $T$ that should be used to evaluate $M_J$ or $M_{\rm BE}$ are the values at that transition point. Combining all these effects, \citet{larson05a} proposed a single simple equation of state
\begin{equation}
T = \left\{
\begin{array}{ll}
4.4 \,\rho_{18}^{-0.27}\mbox{ K}, \qquad & \rho_{18} < 1 \\
4.4 \,\rho_{18}^{0.07}\mbox{ K}, & \rho_{18} \ge 1
\end{array}
\right.
\end{equation}
where $\rho_{18}=\rho/(10^{-18}\mbox{ g cm}^{-3})$. Conveniently enough, the Bonnor-Ebert mass at the minimum temperature here is $M_{\rm BE} = 0.067$ $\msun$, which is not too far off from the observed peak of the IMF at $M=0.2$ $\msun$. (The mass at the second break is a bit less promising. At $\rho = 10^{-13}$ g cm$^{-3}$ and $T=10$ K, we have $M_{\rm BE} = 7\times 10^{-4}$ $\msun$.)
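As a quick numerical cross-check of these two numbers, the short Python snippet below evaluates the Bonnor-Ebert mass, including its order-unity coefficient of 1.18, in cgs units at the two breaks; the value $\mu = 2.33$ is assumed.
\begin{verbatim}
import numpy as np

kB, G, mH, Msun = 1.381e-16, 6.674e-8, 1.673e-24, 1.989e33
mu = 2.33

def m_BE(T, rho):
    cs = np.sqrt(kB * T / (mu * mH))
    return 1.18 * cs**3 / np.sqrt(G**3 * rho)

print(m_BE(4.4, 1e-18) / Msun)    # ~0.07 Msun at the temperature minimum
print(m_BE(10.0, 1e-13) / Msun)   # ~7e-4 Msun at the second break
\end{verbatim}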
Simulations done adopting this proposed equation of state seem to verify the conjecture that the characteristic fragment mass does depend critically on the break in the EOS (Figure \ref{fig:imf_jappsen05}).
\begin{figure}
\includegraphics[width=\linewidth]{imf_jappsen05}
\caption[IMF from simulations of non-isothermal fragmentation]{
\label{fig:imf_jappsen05}
Measured stellar mass distributions in a series of simulations of turbulent fragmentation using non-isothermal equations of state. Each row shows a single simulation, measured at a series of times, characterized by a particular mass in stars as indicated in each panel. Different rows use different equations of state, with the vertical line in each panel indicating the Jeans mass evaluated at the temperature minimum of the equation of state. Histograms show the mass distributions measured for the stars. Credit: \citeauthor{jappsen05a}, A\&A, 435, 611, 2005, reproduced with permission \copyright\, ESO.
}
\end{figure}
\paragraph{Radiative models.}
While this is a very interesting result, there are two problems. First, the proposed break in the EOS occurs at $n=4\times 10^5$ cm$^{-3}$. This is a fairly high density in a low mass star-forming region, but it is actually quite a low density in more typical, massive star-forming regions. For example, the Orion Nebula cluster now consists of $\approx 2000$ $\msun$ of stars in a radius of $0.8$ pc, giving a mean density $n\approx 2\times 10^4$ cm$^{-3}$. Since the star formation efficiency was less than unity and the cluster is probably expanding due to mass loss, the mean density was almost certainly higher while the stars were still forming. Moreover, recall that, in a turbulent medium, the bulk of the mass is at densities above the volumetric mean density. The upshot of all this is that almost all the gas in Orion was probably over \citet{larson05a}'s break density while the stars were forming. Since Orion managed to form a normal IMF, it is not clear how the break temperature could be relevant.
A second problem is that, in dense regions like the ONC, the simple model proposed by \citet{larson05a} is a very bad representation of the true temperature structure, because it ignores the effects of radiative feedback from stars. In dense regions the stars that form will heat the gas around them, raising the temperature. Figure \ref{fig:rhot_krumholz11} shows the density-temperature distribution of gas in simulations that include radiative transfer, and that have conditions chosen to be similar to those of the ONC.
\begin{figure}
\includegraphics[width=\linewidth]{rhot_krumholz11}
\caption[Density-temperature distribution from a simulation of the formation of the ONC]{
\label{fig:rhot_krumholz11}
Density-temperature distributions measured from a simulation of the formation of an ONC-like star cluster, including radiative transfer and stellar feedback. The panels show the distribution at different times in the simulation, characterized by the fraction of mass that has been turned into stars. Dotted lines show lines of constant Bonnor-Ebert mass (in $M_\odot$), while dashed lines show the threshold for sink particle formation in the simulation. Credit: \citet{krumholz11c}, \copyright AAS. Reproduced with permission.
}
\end{figure}
These two observations suggest that one can build a model for the IMF around radiative feedback. There are a few numerical and analytic papers that attempt to do so, including \citet{bate09a, bate12a}, \citet{krumholz11e}, \citet{krumholz12b}, and \citet{guszejnov16a}. The central idea for these models is that radiative feedback shuts off fragmentation at a characteristic mass scale that sets the peak of the IMF.
Suppose that we form a first, small protostar that radiates at a rate $L$. The temperature of the material at a distance $R$ from it, assuming the gas is optically thick, will be roughly
\begin{equation}
L \approx 4 \pi \sigma_{\rm SB} R^2 T^4,
\end{equation}
where $\sigma_{\rm SB}$ is the Stefan-Boltzmann constant. Now let us compute the Bonnor-Ebert mass using the temperature $T$:
\begin{equation}
M_{\rm BE} \approx \frac{c_s^3}{\sqrt{G^3\rho}} = \sqrt{\left(\frac{k_B T}{\mu m_{\rm H} G}\right)^3 \frac{1}{\rho}},
\end{equation}
where $\mu=2.33$ is the mean particle mass, and we are omitting the factor of $1.18$ for simplicity. Note that $M_{\rm BE}$ here is a function of $R$. At small $R$, the temperature is large and thus $M_{\rm BE}$ is large, while at larger distances the gas is cooler and $M_{\rm BE}$ falls.
Now let us compare this mass to the mass enclosed within the radius $R$, which is $M=(4/3)\pi R^3 \rho$. At small radii, $M_{\rm BE}$ greatly exceeds the enclosed mass, while at large radii $M_{\rm BE}$ is much less than the enclosed mass. A reasonable hypothesis is that fragmentation will be suppressed out to the point where $M \approx M_{\rm BE}$. If we solve for the radius $R$ and mass $M$ at which this condition is met, we obtain
\begin{equation}
M \approx \left(\frac{1}{36\pi}\right)^{1/10} \left(\frac{k_B}{G \mu m_{\rm H}}\right)^{6/5} \left(\frac{L}{\sigma_{\rm SB}}\right)^{3/10} \rho^{-1/5}.
\end{equation}
To go further, we need to know the luminosity $L$. The good news is that, for reasons to be discussed in Chapter \ref{ch:protostar_evol}, the luminosity is dominated by accretion, and the energy produced by accretion is simply the accretion rate multiplied by a roughly fixed energy yield per unit mass. In other words, we can write
\begin{equation}
L \approx \psi \dot{M},
\end{equation}
where $\psi \approx 10^{14}$ erg g$^{-1}$, and can in fact be written in terms of fundamental constants. Taking this on faith for now, if we further assume that stars form over a time of order a free-fall time, then
\begin{equation}
\dot{M} \approx M \sqrt{G\rho},
\end{equation}
and substituting this into the equation for $M$ above and solving gives
\begin{eqnarray}
M & \approx & \left(\frac{1}{36\pi}\right)^{1/7} \left(\frac{k_B}{G \mu m_{\rm H}}\right)^{12/7} \left(\frac{\psi}{\sigma_{\rm SB}}\right)^{3/7} G^{3/14} \rho^{-1/14} \\
& = & 0.3 \left(\frac{n}{100\mbox{ cm}^{-3}}\right)^{-1/14} M_\odot,
\end{eqnarray}
where $n = \rho/(\mu m_{\rm H})$. Thus we get a characteristic mass that is a good match to the IMF peak, and that depends only very, very weakly on the ambient density.
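As a sanity check on this number, the Python snippet below evaluates the expression above in cgs units at $n = 100$ cm$^{-3}$, using the same fiducial $\mu = 2.33$ and $\psi = 10^{14}$ erg g$^{-1}$ adopted in the text.
\begin{verbatim}
import numpy as np

kB, G, mH, sigmaSB, Msun = 1.381e-16, 6.674e-8, 1.673e-24, 5.670e-5, 1.989e33
mu, psi = 2.33, 1e14                  # mean particle mass, accretion yield [erg/g]
rho = 100.0 * mu * mH                 # n = 100 cm^-3

M = ((1.0 / (36.0 * np.pi))**(1.0 / 7.0)
     * (kB / (G * mu * mH))**(12.0 / 7.0)
     * (psi / sigmaSB)**(3.0 / 7.0)
     * G**(3.0 / 14.0)
     * rho**(-1.0 / 14.0))
print(M / Msun)                       # ~0.3
\end{verbatim}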
Simulations including radiation seem to support the idea that this effect can pick out a characteristic peak stellar mass. The main downside to this hypothesis is that it has little to say by itself about the powerlaw tail of the IMF. This is not so much a problem with the model as an omission, and a promising area of research seems to be joining a non-isothermal model such as this onto a turbulent fragmentation or competitive accretion model to explain the full IMF.
| {
"alphanum_fraction": 0.763008487,
"avg_line_length": 114.2809364548,
"ext": "tex",
"hexsha": "ce04641ca77e954f17fb2df3a58fdcd98f9a5564",
"lang": "TeX",
"max_forks_count": 14,
"max_forks_repo_forks_event_max_datetime": "2020-05-08T15:58:05.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-05-22T17:47:29.000Z",
"max_forks_repo_head_hexsha": "d1c8a10f84fc1676b492ddb4f3bd8b73455b5d07",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "keflavich/star_formation_notes",
"max_forks_repo_path": "chapters/chapter13.tex",
"max_issues_count": 9,
"max_issues_repo_head_hexsha": "d1c8a10f84fc1676b492ddb4f3bd8b73455b5d07",
"max_issues_repo_issues_event_max_datetime": "2022-01-31T02:07:47.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-05-31T17:15:19.000Z",
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "keflavich/star_formation_notes",
"max_issues_repo_path": "chapters/chapter13.tex",
"max_line_length": 1183,
"max_stars_count": 67,
"max_stars_repo_head_hexsha": "d1c8a10f84fc1676b492ddb4f3bd8b73455b5d07",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "Open-Astrophysics-Bookshelf/star_formation_notes",
"max_stars_repo_path": "chapters/chapter13.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-02T02:02:57.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-05-05T22:43:39.000Z",
"num_tokens": 8984,
"size": 34170
} |
%!TEX root = ../thesis.tex
\section{Problem Statement}\label{sec:problemStatement}
% Definition test case
In this work a test case is a specification of an environment and a test setup.
The environment describes the curvatures of roads and the placements of static obstacles.
The test setup describes the initial states of vehicles and the test criteria.
% There are too many test cases to formalize
The resulting number of possible test cases is too large to create a systematic way to define test cases \ie{} a formalization that allows one to describe arbitrary test cases without losing the level of detail which is required to specify concrete test cases.
So a formalization that is supposed to define concrete test cases can only treat a subset of the whole test case space.
% There is no standardized subset
Currently there is no standardized subset which specifies a comprehensive but sufficient test suite that ensures the safety of \glspl{av} to a high degree.\par
% There are too many ADASs to test
Concerning a subset of test cases which explicitly target safety-critical \glspl{adas}, \eg{} \gls{acc}, lane centering, emergency braking or collision avoidance, the number of possible test cases is still too large for a formalization.
A simple option is to reduce the subset to a number of certain \glspl{adas}.
This reduces the generality of the formalization and raises the problem of reasoning about which \glspl{adas} should be supported.
Another option is to subsume \glspl{adas} into groups.
This raises the problem of determining shared characteristics of \glspl{adas} that separate \glspl{adas}.
Again, if the number of groups is too large a formalization cannot handle all of them, and if it is too small a formalization may not be able to express all the specific details which are needed to specify concrete test cases.\\
% Testing an ADAS is complex
Testing an \gls{adas} is complex.
% There are no standards
Each \gls{adas} requires certain input metrics to operate, \eg{} current positions, distances or speeds of the \gls{av} to which the \gls{adas} is attached or of other participants.
There are no standards to define which metrics \glspl{adas} require and how they have to be tested.
% Increasing number of metrics and data
Further input metrics may be properties about the \gls{av} like damage, steering angle or the state of certain electronic components \eg{} the headlight.
Depending on the implementation of an \gls{adas} it may not require certain metrics as its input directly but other data like camera images or \gls{lidar} data which further increases the variety of input metrics.
In order to test the results of \glspl{adas} even further metrics that \eg{} represent a ground truth are required.
This yields the problem that a formalization has to support many different kinds of metrics in order to provide \glspl{adas} with input metrics and to test them.
The more metrics a formalization supports the more complex it may get.\par
\Glspl{av} under test are controlled by \glspl{ai}.
% Testing the efficiency of an AI is hard
Concerning a subset of test cases which evaluate the efficiency of a given \gls{ai}, the problem arises that the execution time of the frequent verification of test criteria, the overhead of the underlying simulator and the discrepancy between the hardware used for testing and the actual hardware used with a real \gls{av} falsify time measurements.
% An external AI needs communication
Given all the metrics which an \gls{adas} that is attached to an \gls{av} requires a simulation needs to exchange these possibly highly diverse metrics with the \gls{ai} that controls the \glspl{av}.
Since \glspl{ai} differ greatly in their implementation they can not be included in the simulation directly and have to run separately.
Additionally a tester may not want to expose the implementation of an \gls{ai} to \drivebuild{}.
Hence \glspl{ai} have to run externally \ie{} not within the internal architecture of \drivebuild{}.
% Network latency influences the results of external AI
In case of an external \gls{ai} the communication with a simulation and the network latency further falsify time measurements.
% Communication scheme required
In case of external \glspl{ai} there is also the problem of creating a communication scheme which allows to request and exchange lots of diverse data and which exposes mechanisms to implement interactions between \glspl{ai} and a simulation.\par
% Extending the range of supported test cases complicates the validation and evaluation of criteria
The more subsets a formalization has to consider, and the bigger they are, the greater the diversity of metrics and test criteria which are required to define test cases.
This results in the problem of an increasing complexity in the definition of test cases plus in the validation and evaluation of test criteria.
% There are no standardized test criteria
There is no standardized set of test criteria which are sufficient for many test cases.
There is also no standardized way of how to declare test criteria and how to specify reference tests and their expected results~\cite{noStandard}.
% Extensibility of test criteria
Testing \glspl{adas} that are not explicitly considered during the creation of the formalization may need test criteria which the formalization does not provide.
Allowing a user to introduce additional criteria on the client side leads to the problem of distributing test criteria between the underlying platform and the user, and thus divides the corresponding responsibilities for the verification of test criteria.\par
% AIs require training data
However, any subset of test cases involves \glspl{ai} which control \glspl{av}.
These \glspl{ai} have to be trained before they are able to control an \gls{av} suitably.
Therefore a tester wants to use training data which is collected in the same environment that is later used to run tests for an \gls{ai}.
This yields the problem of manually controlling participants in a simulation to efficiently generate training data.\par
% Efficient execution of large quantities requires parallelism
In order to ensure a high degree of safety of \glspl{ai} many test executions are required.
In order to execute many tests simultaneously they may be distributed over a cluster.
% Distributing tests requires prediction of load
When distributing test runs across a cluster a common goal is high utilization of its provided resources.
This leads to the problem of finding a strategy to distribute test executions based on their predicted load and their estimated execution time.
Therefore, characteristics of formalized test cases have to be determined that are predictive of the resulting load and the actual execution time.\par
% Goals of this work
The goals of this work are the creation of a scheme which formalizes test cases, the support of training \glspl{ai}, the specification of a life cycle for handling the execution of tests and the actual implementation of \drivebuild{}.
The formalization shall focus on \glspl{adas} and be able to describe static elements (\eg{} roads and obstacles), dynamic elements (\eg{} participants and their movements), test criteria and sensor data which \glspl{ai} require.
| {
"alphanum_fraction": 0.8053645117,
"avg_line_length": 100.9722222222,
"ext": "tex",
"hexsha": "8c646f0467f9561843a72e8c8cdb849ec68e3efa",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2792203d28d6c7b62f54545344ee6772d2ec5b64",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "TrackerSB/MasterThesis",
"max_forks_repo_path": "thesis/sections/problemStatement.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2792203d28d6c7b62f54545344ee6772d2ec5b64",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "TrackerSB/MasterThesis",
"max_issues_repo_path": "thesis/sections/problemStatement.tex",
"max_line_length": 349,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2792203d28d6c7b62f54545344ee6772d2ec5b64",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "TrackerSB/MasterThesis",
"max_stars_repo_path": "thesis/sections/problemStatement.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1546,
"size": 7270
} |
\documentclass{article}
\usepackage{amsmath,amssymb,siunitx,graphicx}
\usepackage[margin=1in]{geometry}
\DeclareSIUnit\ergs{ergs}
\DeclareSIUnit\yr{yr}
\DeclareSIUnit\AU{AU}
\DeclareSIUnit\msun{\ensuremath{\mathrm{M}_{\odot}}}
\title{Asteroid Defense}
\author{Matthias J. Raives}
\begin{document}
\maketitle{}
\section{Deflection}
Consider an asteroid at some distance $R$ from Earth, heading directly towards the center of the Earth at a speed $v_{0}$. To deflect it, we need to change its velocity such that its path just barely grazes the gravitational focusing radius $b$.
The gravitational focusing radius can be derived from a conservation of energy and angular momentum argument:
\begin{align}
\frac{1}{2}mv_{0}^{2} &= \frac{1}{2}mv_{max}^{2} - \frac{GM_{\oplus}m}{R_{\oplus}}\\
bv_{0} &= v_{max}R_{\oplus}
\end{align}
where $v_{max}$ is the velocity at impact. We solve for $b$ as:
\begin{equation}
b = R_{\oplus}\sqrt{1+\frac{v_{esc}^{2}}{v_{0}^{2}}}
\end{equation}
where $v_{esc}$ is the escape velocity from the surface of the Earth
\begin{align}
v_{esc}^{2} = \frac{2GM_{\oplus}}{R_{\oplus}}.
\end{align}
Now consider the incoming asteroid. It has an initial momentum $\mathbf{p}_{0}=m\mathbf{v}_{0}$. Consider our deflection to change its momentum by some $\Delta \mathbf{p}$ but without changing the total energy, i.e., $|\mathbf{p}_{0}|=|\mathbf{p}_{1}|$. The momenta $\mathbf{p}_{0}$, $\mathbf{p}_{1}$, and $\Delta\mathbf{p}$ form an isosceles triangle, which gives:
\begin{equation}
\sin\frac{\theta}{2} = \frac{\Delta p}{2p_{0}}
\end{equation}
where $\theta$ is determined by
\begin{equation}
\sin\theta = \frac{b}{R}.
\end{equation}
Using the small-angle approximation (valid if $R\gg b$, i.e., $R\gg R_{\oplus}$ and $v_{0}\gg v_{esc}$), we obtain
\begin{equation}
\Delta p = p_{0}\frac{b}{R} = mv_{0}\frac{R_{\oplus}}{R}\sqrt{1+\frac{v_{esc}^{2}}{v_{0}^{2}}}
\end{equation}
In the reference frame initially co-moving with the asteroid, the magnitudes of the initial and final momenta are not the same, and the change in energy (and thus, by conservation of energy, the energy required to change its trajectory) is just:
\begin{equation}
\Delta E = \frac{\Delta p^{2}}{2m} = \frac{1}{2}mv_{0}^{2}\frac{R_{\oplus}^{2}}{R^{2}}\left(1+\frac{v_{esc}^{2}}{v_{0}^{2}}\right)
\end{equation}
Suppose the asteroid is the same size as the Chicxulub impactor, and is detected at the moon's orbital distance. The asteroid's mass can be determined assuming it has an average density of $\rho=\SI{5}{\gram\per\cm\cubed}$:
\begin{equation}
m = \frac{4\pi}{3}\rho R^{3} \sim \SI{3e18}{\gram}.
\end{equation}
If the earth-moon orbital distance is not known, it can be determined from Kepler's 3rd law:
\begin{align}
\left(\frac{P}{\si{\yr}}\right)^{2} &= \left(\frac{\si{\msun}}{M}\right)\left(\frac{R}{\si{\AU}}\right)^{3}\\
R &= \left(\frac{P}{\si{\yr}}\right)^{2/3}\left(\frac{M}{\si{\msun}}\right)^{1/3}\:\si{\AU}\\
R &\sim \SI{2e-3}{\AU} \sim \SI{3e5}{\km}
\end{align}
The initial velocity can be assumed to be the free-fall velocity at \SI{1}{\AU}, i.e.,
\begin{equation}
v_{0} = \sqrt{\frac{2G\si{\msun}}{\SI{1}{\AU}}} \sim \SI{4e6}{\cm\per\second}.
\end{equation}
Thus, the energy required is:
\begin{equation}
\Delta E \sim \SI{1.5e28}{\ergs}.
\end{equation}
Note that gravitational focusing had a minimal effect on the answer. This is because the asteroid is moving quickly; an object with a small relative velocity (say, small bodies near Earth during solar system formation) will be focused much more effectively.
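The arithmetic above can be checked with a few lines of Python (cgs units). An impactor radius of roughly \SI{5}{\km} and the rounded Earth-Moon distance are the assumptions behind the quoted numbers; the result agrees with the estimate above to within the rounding of the inputs.
\begin{verbatim}
import numpy as np

G, Msun, Mearth = 6.674e-8, 1.989e33, 5.972e27
Rearth, AU = 6.371e8, 1.496e13

m = (4.0 / 3.0) * np.pi * 5.0 * (5e5)**3     # radius ~5 km, rho = 5 g/cm^3
R = 3e10                                     # ~ Earth-Moon distance
v0 = np.sqrt(2.0 * G * Msun / AU)            # free-fall speed at 1 AU
vesc2 = 2.0 * G * Mearth / Rearth            # escape speed squared

dE = 0.5 * m * v0**2 * (Rearth / R)**2 * (1.0 + vesc2 / v0**2)
print(m, v0, dE)                             # ~3e18 g, ~4e6 cm/s, ~1e28 ergs
\end{verbatim}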
\section{Destruction}
Smaller asteroids will have a smaller impact velocity as they have a smaller terminal velocity. Terminal velocity is defined as the velocity at which the drag force and gravitational force balance out, so:
\begin{align}
mg &= \frac{1}{2}\rho_{A} v_{T}^{2}c_{D}A\\
v_{T}^{2} &= {\frac{2mg}{\rho_{A} c_{D}A}},
\end{align}
where $\rho_{A}$ is the density of air, $A$ is the cross-sectional area of the falling object, and $c_{D}$ is the drag coefficient. We can use $mg$ for the gravitational force because the height of the atmosphere is small compared to the radius of the earth, especially when you consider that the bulk of air resistance is in the lower atmosphere, where the air is more dense. The energy of the impact thus goes as:
\begin{equation}
E = \frac{1}{2}mv_{T}^{2} \propto m^{2}A^{-1}.
\end{equation}
The area can be written in terms of the mass if we know the density of the asteroid:
\begin{align}
A &= \pi R^{2}\\
m &= \frac{4\pi}{3}\rho R^{3}\\
A &= \pi \left(\frac{3m}{4\pi\rho}\right)^{2/3}.
\end{align}
Thus, we have:
\begin{equation}
E\propto m^{4/3}
\end{equation}
Consider now, instead of one large asteroid of mass $m$, we have $N$ smaller asteroids, each of mass $m_{N}\equiv \frac{m}{N}$. The energy then goes as
\begin{equation}
\Sigma{E} \propto N\, m_{N}^{4/3} = N\left(\frac{m}{N}\right)^{4/3} \propto N^{-1/3}
\end{equation}
Thus, to reduce the total impact energy by a factor of 10, we would need to break the asteroid into $N=1000$ smaller impactors.
\end{document}
| {
"alphanum_fraction": 0.6071922545,
"avg_line_length": 59.6288659794,
"ext": "tex",
"hexsha": "18ee16bae6162118235770d8bad17562fd3837bf",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2018-01-10T21:05:11.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-01-10T21:05:11.000Z",
"max_forks_repo_head_hexsha": "5ad4fa6de9c9a8c60a3043adacfad41aef24ed4a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "osugoom/questions",
"max_forks_repo_path": "Asteroid Defense/Asteroid_Defense_Answer.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5ad4fa6de9c9a8c60a3043adacfad41aef24ed4a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "osugoom/questions",
"max_issues_repo_path": "Asteroid Defense/Asteroid_Defense_Answer.tex",
"max_line_length": 424,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5ad4fa6de9c9a8c60a3043adacfad41aef24ed4a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "osugoom/questions",
"max_stars_repo_path": "Asteroid Defense/Asteroid_Defense_Answer.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1757,
"size": 5784
} |
\SetPicSubDir{ch-Intro}
\chapter{Introduction}
\vspace{2em}
The concept of graphs was first used by Leonhard Euler to study the Königsberg bridges problem \cite{euler:konis} in 1735. This laid the foundation for the field of mathematics known as graph theory.
\noindent
These days graphs are used to model and study complex networks. Networks like the internet, the electric grid, road networks, social networks, etc., are too large to be studied directly. In such situations a mathematical model of the structure allows us to study its properties more efficiently.
\noindent
In this thesis we shall be studying one such property of a graph, known as its hamiltonicity, on particular classes of graphs.
\section{Preliminaries}
In the following section we shall briefly introduce some of the preliminaries and conventions that shall be used throughout the thesis.
\subsection{Graph Theory}
\begin{defn}
We define a \textbf{Graph} $G$ as the pair of sets $(V, E)$ where $V$ is the \textit{vertex set}, and the \textit{edge set}, $E \subseteq V \times V$.
\end{defn}
In an \textit{undirected graph} the order of vertices in an edge does not matter. In such graphs we represent an edge as $\{u, v\}$. On the other hand, in a \textit{directed graph} (or digraph) the order of vertices in an edge matters. Edges in such a graph are represented as $(u, v)$.
\begin{defn}
The number of edges \textit{incident} to a vertex is known as the \textbf{Degree} of that vertex.
\end{defn}
Note that in a directed graph, we have separate in and out-degrees for each vertex. Henceforth, we shall be using $\delta$ to represent the minimum degree of a vertex in a graph, and similarly $\Delta$ to represent the maximum degree.
\begin{defn}
A sequence of distinct vertices $P = v_0, v_1, \cdots v_k$ in a directed graph $G(V, E)$, where $\forall i<k, e_i = (v_i, v_{i+1}) \in E$ is known as a \textbf{Path} in the graph.
\end{defn}
\begin{defn}
A sequence of vertices $C = v_0, v_1, \cdots v_k$ in graph $G(V, E)$, where consecutive vertices are joined by edges, $v_0 = v_k$, and the vertices $v_0, \cdots v_{k-1}$ are distinct, is known as a \textbf{Cycle}.
\end{defn}
\begin{defn}
A cycle $C$ such that it passes through every vertex of the graph, is called a \textbf{Hamiltonian Cycle}.
A graph which contains a Hamiltonian cycle, is said to be a \textbf{Hamiltonian Graph}.
\end{defn}
\subsection{Random Graph Models}
The graphs we shall be looking at in this thesis shall all be sampled from one of the distributions described below.
\begin{description}
\item[$D_{n, m}$ model: ] A graph is picked uniformly at random from the set of all digraphs with $n$ vertices and $m$ edges.
\item[$k$-in, $k$-out: ] In the $D_{k-in, k-out}$ model, for a graph with $n$ vertices, $k$ in-neighbours and $k$ out-neighbours are chosen for each vertex independently at random from the vertex set $V_n$ (see the sketch after this list).
\item[$k$-regular digraph: ] In a random regular digraph $D_{n, k}$, each of the $n$ vertices has in-degree and out-degree exactly $k$.
\item[Powerlaw graphs: ] Given a fixed degree sequence that has a power law tail, we pick a graph uniformly at random from the set of all graphs that realize the given degree sequence.
\item[Random Tournaments: ] In a complete graph $K_n$, we assign a direction uniformly at random to each edge.
\item[Random n-lifts: ] The \textit{n-lift} of a given graph $G(V, E)$ is obtained by replacing each vertex in $V$ by a set of $n$ vertices, and by adding, for every $e_i = (u, v) \in E$, a random perfect matching between the sets corresponding to $u$ and $v$.
\end{description}
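For concreteness, the short Python sketch below shows one way to sample from two of these models. The helper names are ours, and the $k$-in, $k$-out sampler simply merges any repeated edges.
\begin{verbatim}
import random

def random_tournament(n):
    # Orient each edge of the complete graph K_n uniformly at random.
    return [(u, v) if random.random() < 0.5 else (v, u)
            for u in range(n) for v in range(u + 1, n)]

def random_k_in_k_out(n, k):
    # Every vertex picks k out-neighbours and k in-neighbours at random.
    edges = set()
    for v in range(n):
        others = [u for u in range(n) if u != v]
        edges.update((v, u) for u in random.sample(others, k))
        edges.update((u, v) for u in random.sample(others, k))
    return edges

print(len(random_tournament(5)), len(random_k_in_k_out(6, 2)))
\end{verbatim}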
\subsection{The Ramsey RESET test}
The Ramsey Regression Equation Specification Error Test (RESET) checks whether non-linear functions of the fitted values help explain the response, which would indicate a misspecified functional form.
Once we have run our OLS regression we have the fitted values \(\hat y\).
The Ramsey RESET test is an additional stage, which takes these fitted values and estimates:
\(y=\theta x+\sum_{i=2}^{4}\alpha_i \hat{y}^i\)
(the powers of \(\hat y\) start at two because \(\hat y\) itself is an exact linear combination of the regressors).
We then run an F-test on the \(\alpha_i\), with the null hypothesis that all \(\alpha_i = 0\).
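For reference, and as a standard result rather than anything specific to this note, the F statistic compares the restricted regression (without the powers of \(\hat y\)) to the unrestricted one:
\[
F=\frac{(RSS_r-RSS_u)/q}{RSS_u/(n-k)},
\]
where \(RSS_r\) and \(RSS_u\) are the restricted and unrestricted residual sums of squares, \(q\) is the number of added powers of \(\hat y\), \(n\) is the number of observations and \(k\) is the number of parameters in the unrestricted model; under the null, \(F\sim F_{q,\,n-k}\).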
\chapter{Implementation}
\label{chp:implementation}
\nomad is a \verb|C++| implementation of reverse mode automatic
differentiation that uses an \textit{operator overloading} strategy.
More precisely, we introduce a dual number type and then overload
common functions to accept this type.
A key difference between \nomad and other implementations is its generalization
to higher orders. Most higher-order automatic differentiation
implementations leverage the recursive nature of higher-order dual
numbers directly, automatically differentiating through a first-order
implementation to compute higher-order partial derivatives and
the subsequent linear differential operators. This approach allows for
arbitrary-order automatic differentiation, but only at the cost of inefficient and
sometimes numerically unstable results. \nomad,
on the other hand, explicitly implements second and third-order operators
with a focus on performance and numerical stability.
In this chapter we introduce the architecture behind the \nomad automatic
differentiation implementation. We first review the internal representation
of the expression graph and how it interfaces with function overloads before
discussing the user interface itself.
\section{Internal Representation of the Expression Graph}
\label{sec:exp_graph_rep}
While the expression graph represents the entire composite function, each
node in the expression graph represents just a single component function.
Consequently each node must be responsible for both implementing the
local pushforward and pullback of the corresponding function and storing
the dual numbers, input nodes, and any auxiliary storage convenient for
the operator implementations, such as partial derivatives.
Like many other automatic differentiation implementations, \nomad
topologically sorts the expression graph into a linear stack of nodes,
sometimes known as a tape (Figure \ref{fig:topologicalSort}). This ordering
preserves the structure of the expression graph, ensuring that a sweep
through the stack executes a valid forward or reverse sweep through the
expression graph.
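To make this concrete, the following is a minimal sketch of such a topological sort, written here for illustration rather than taken from the \nomad source; the adjacency representation and the function name are ours.
\begin{verbatim}
// Minimal illustrative sketch (not the Nomad source): flatten an
// expression graph into a tape in which every node appears after all
// of its inputs, using an iterative depth-first search.
#include <cstddef>
#include <utility>
#include <vector>

// inputs[i] lists the indices of the nodes feeding node i.
std::vector<std::size_t> topological_sort(
    const std::vector<std::vector<std::size_t> >& inputs) {
  std::vector<std::size_t> tape;
  std::vector<bool> visited(inputs.size(), false);
  for (std::size_t root = 0; root < inputs.size(); ++root) {
    if (visited[root]) continue;
    // Each stack entry is (node, index of the next unexplored input).
    std::vector<std::pair<std::size_t, std::size_t> > stack;
    stack.push_back(std::make_pair(root, std::size_t(0)));
    while (!stack.empty()) {
      std::size_t node = stack.back().first;
      std::size_t& next_input = stack.back().second;
      if (next_input < inputs[node].size()) {
        std::size_t child = inputs[node][next_input++];
        if (!visited[child])
          stack.push_back(std::make_pair(child, std::size_t(0)));
      } else {
        if (!visited[node]) { visited[node] = true; tape.push_back(node); }
        stack.pop_back();
      }
    }
  }
  return tape;  // forward sweep order; traverse in reverse for a reverse sweep
}
\end{verbatim}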
\begin{figure}
\setlength{\unitlength}{0.1in}
\centering
\begin{picture}(50, 20)
%
%\put(0, 0) { \framebox(50, 20){} }
%\put(25, 0) { \framebox(25, 30){} }
%\put(25, 0) { \framebox(6.25, 30){} }
%\put(25, 0) { \framebox(12.5, 30){} }
%\put(25, 0) { \framebox(18.75, 30){} }
%
%\put(25, 0) { \framebox(3.125, 30){} }
%\put(25, 0) { \framebox(9.375, 30){} }
%\put(25, 0) { \framebox(15.625, 30){} }
%
% Expression Graph
%
\put(6.25, 2.5) { \circle{4} }
\put(6.25, 2.5) { \makebox(0, 0) {$ x_{1} $} }
%
\put(12.5, 2.5) { \circle{4} }
\put(12.5, 2.5) { \makebox(0, 0) { $ x_{2} $ } }
%
\put(18.75, 2.5) { \circle{4} }
\put(18.75, 2.5) { \makebox(0, 0) { $ x_{3} $ } }
%
\put(6.25, 4.5) { \vector(3, 4){2.75} }
\put(12.5, 4.5) { \vector(-3, 4){2.75} }
\put(12.5, 4.5) { \vector(3, 4){2.75} }
\put(18.75, 4.5) { \vector(-3, 4){2.75} }
%
\put(10, 10) {\circle{4} } % Tweaked to the right
\put(9.375, 10) { \makebox(0, 0) { $f_{1}$ } }
%
\put(16.25, 10) {\circle{4} } % Tweaked to the right
\put(15.625, 10) { \makebox(0, 0) { $f_{2}$ } }
%
\put(9.375, 12) { \vector(3, 4){2.75} }
\put(15.625, 12) { \vector(-3, 4){2.75} }
%
\put(13, 17.5) {\circle{4} } % Tweaked to the right
\put(12.5, 17.5) { \makebox(0, 0) { $ g $ } }
%
% Middle Arrow
%
\put(21.875, 10) { \thicklines \vector(1, 0){6.25} }
%
% Stack
%
\put(33.5, 4) { \framebox(8, 2){ $x_{1}$} }
\put(33.5, 6) { \framebox(8, 2){ $x_{2}$ } }
\put(33.5, 8) { \framebox(8, 2){ $x_{3}$ } }
\put(33.5, 10) { \framebox(8, 2){ $f_{1}$ } }
\put(33.5, 12) { \framebox(8, 2){ $f_{2}$ } }
\put(33.5, 14) { \framebox(8, 2){ $g$ } }
%
\end{picture}
\caption{
A topological sort of the expression graph yields a linear stack of nodes,
sometimes known as a \textit{tape}, ordered such that a pass through the
stack yields a valid forward or reverse sweep of the expression graph.
}
\label{fig:topologicalSort}
\end{figure}
Nodes are implemented with the \verb|var_node| class which defines
default pushforward and pullback methods that can be specialized
for functions with structured Jacobians. Because the amount of storage for
each node can vary widely depending on the corresponding function and the
order of the desired linear differential operator, storage is decoupled from
each node. Instead the necessary data are stored in three global stacks, the
inputs stack, dual numbers stack, and partials stack, with \verb|var_node|
containing only an address to the relevant data in each
(Figure \ref{fig:architecture}), as well as accessor and mutator methods that
abstract the indirect storage.
Storage for each stack is pre-allocated and expanded only as necessary with
an arena-based allocation pattern.
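As an illustration of this storage strategy, here is a minimal, hypothetical sketch of an arena-style stack that pre-allocates storage and grows geometrically only when its capacity is exhausted; it is not the actual \nomad implementation.
\begin{verbatim}
// Minimal illustrative sketch (not the Nomad implementation) of an
// arena-style stack: storage is pre-allocated and only ever grows, so
// pushes are cheap and handed-out indices stay valid within a graph.
#include <cstddef>
#include <vector>

class arena_stack {
public:
  explicit arena_stack(std::size_t initial_capacity = 1 << 16)
    : data_(initial_capacity), top_(0) {}

  // Reserve n contiguous slots and return the index of the first one.
  std::size_t push(std::size_t n) {
    if (top_ + n > data_.size())
      data_.resize(2 * (top_ + n));  // grow geometrically, keep old data
    std::size_t start = top_;
    top_ += n;
    return start;
  }

  double& operator[](std::size_t i) { return data_[i]; }

  // Reset for the next expression graph without releasing memory.
  void clear() { top_ = 0; }

private:
  std::vector<double> data_;
  std::size_t top_;
};
\end{verbatim}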
\begin{figure}
\setlength{\unitlength}{0.1in}
\centering
\begin{picture}(50, 30)
%
%\put(0, 0) { \framebox(50, 30){} }
%
%\put(0, 0) { \framebox(12.5, 30){} }
%\put(0, 0) { \framebox(25, 30){} }
%\put(0, 0) { \framebox(37.5, 30){} }
%
%\put(0, 0) { \framebox(50, 7.5){} }
%\put(0, 0) { \framebox(50, 15){} }
%\put(0, 0) { \framebox(50, 22.5){} }
%
% Var Body
%
\put(8.5, 2) { \makebox(8, 2){Var Node} }
\put(8.5, 6) { \framebox(8, 2){} }
\put(8.5, 8) { \framebox(8, 2){} }
\put(8.5, 10) { \framebox(8, 2){} }
\put(8.5, 12) { \framebox(8, 2){} }
\put(8.5, 14) { \framebox(8, 2){} }
\put(8.5, 16) { \framebox(8, 2){} }
\put(8.5, 18) { \framebox(8, 2){} }
\put(8.5, 20) { \framebox(8, 2){} }
\put(8.5, 22) { \framebox(8, 2){} }
\put(8.5, 24) { \framebox(8, 2){} }
%
% Inputs
%
\put(39, 24) { \makebox(8, 2){Inputs} }
\put(25, 22) { \framebox(2, 6){} }
\put(27, 22) { \framebox(2, 6){} }
\put(29, 22) { \framebox(2, 6){} }
\put(31, 22) { \framebox(2, 6){} }
\put(33, 22) { \framebox(2, 6){} }
\put(35, 22) { \framebox(2, 6){} }
\put(37, 22) { \framebox(2, 6){} }
%
% Dual Numbers
%
\put(39, 15) { \makebox(8, 2){Dual} }
\put(39, 13) { \makebox(8, 2){Numbers} }
\put(25, 12) { \framebox(2, 6){} }
\put(27, 12) { \framebox(2, 6){} }
\put(29, 12) { \framebox(2, 6){} }
\put(31, 12) { \framebox(2, 6){} }
\put(33, 12) { \framebox(2, 6){} }
\put(35, 12) { \framebox(2, 6){} }
\put(37, 12) { \framebox(2, 6){} }
%
% Partials
%
\put(39, 4) { \makebox(8, 2){Partials} }
\put(25, 2) { \framebox(2, 6){} }
\put(27, 2) { \framebox(2, 6){} }
\put(29, 2) { \framebox(2, 6){} }
\put(31, 2) { \framebox(2, 6){} }
\put(33, 2) { \framebox(2, 6){} }
\put(35, 2) { \framebox(2, 6){} }
\put(37, 2) { \framebox(2, 6){} }
%
% Arrows
%
\thicklines
\put(32, 22) { \vector(-1, -1){4} }
\put(34, 22) { \vector(0, -1){4} }
%
\put(16.5, 15) { \vector(4, 3){15.5} }
\put(16.5, 15) { \vector(1, 0){19.5} }
\put(16.5, 15) { \vector(2, -1){17.5} }
%
\end{picture}
\caption{
Upon a topological sort (Figure \ref{fig:topologicalSort}), the expression graph
is represented by a stack of \texttt{var\_node} objects. The input nodes,
dual numbers, and partial derivatives are stored in external stacks, with each
\texttt{var\_node} storing only addresses to each. Note that the inputs stack
addresses not the \texttt{var\_node} objects directly, but rather only the dual numbers
of those nodes needed for implementing the pushforward and pullback operators.
}
\label{fig:architecture}
\end{figure}
\subsubsection{The Inputs Stack}
The inputs stack represents the edges in the expression graph
needed for directing the forward and reverse sweeps. Because
the pushforward and pullback operators need to read and write only
the dual numbers at each node, edges index the dual numbers of any
dependencies directly (Figure \ref{fig:architecture}) and avoid the overhead
that would otherwise be incurred by indirect access through the \verb|var_node|
objects.
\subsubsection{Dual Number Stack}
The dual number stack stores the $2^{k}$ components of the $k$th-order
dual numbers at each node. For example, a first-order expression graph
will need only two elements for each node while a third-order expression
graph will need eight (Figure \ref{fig:dualNumberStorage}).
\begin{figure}
\setlength{\unitlength}{0.1in}
\centering
\begin{picture}(50, 20)
%
%\put(0, 0) { \framebox(50, 20){} }
%
% First-Order
%
\put(0, 15) { \makebox(8, 2){First-} }
\put(0, 13) { \makebox(8, 2){Order} }
\put(9, 17) { \makebox(2, 2){ $\ldots$ } }
\put(9, 11) { \makebox(2, 2){ $\ldots$ } }
\put(11, 12) { \framebox(4, 6){ $x $ } }
\put(15, 12) { \framebox(4, 6){ $ \delta x $ } }
\put(19, 17) { \makebox(2, 2){ $\ldots$ } }
\put(19, 11) { \makebox(2, 2){ $\ldots$ } }
%
{ \thicklines \put(13, 20) { \vector(0, -1){2} } }
%
% Second-Order
%
\put(0, 5) { \makebox(8, 2){Third-} }
\put(0, 3) { \makebox(8, 2){Order} }
\put(9, 7) { \makebox(2, 2){ $\ldots$ } }
\put(9, 1) { \makebox(2, 2){ $\ldots$ } }
\put(11, 2) { \framebox(4, 6){ $s$ } }
\put(15, 2) { \framebox(4, 6){ $\delta s$ } }
\put(19, 2) { \framebox(4, 6){ $\delta t$ } }
\put(23, 2) { \framebox(4, 6){ $\delta^{2} t$ } }
\put(27, 2) { \framebox(4, 6){ $\delta u$ } }
\put(31, 2) { \framebox(4, 6){ $\delta^{2} u$ } }
\put(35, 2) { \framebox(4, 6){ $\delta^{2} v$ } }
\put(39, 2) { \framebox(4, 6){ $\delta^{3} v$ } }
\put(43, 7) { \makebox(2, 2){ $\ldots$ } }
\put(43, 1) { \makebox(2, 2){ $\ldots$ } }
%
{ \thicklines \put(13, 10) { \vector(0, -1){2} } }
%
\end{picture}
\caption{
The dual number stack is able to accommodate storage of dual numbers
of any order without reallocating memory. For example, first-order dual
numbers require only two elements while third-order dual numbers require
eight elements.
}
\label{fig:dualNumberStorage}
\end{figure}
\subsubsection{Partials Stack}
A common approach in many automatic differentiation implementations
is to compute partial derivatives online as necessary and avoid storing them
in the expression graph. This strategy is sound for first-order reverse
mode calculations where the partials are used only once, but higher-order
calculations require multiple sweeps that reuse the partials. Recomputing
the partials for each sweep then becomes a significant computational burden.
With a focus on implementing efficient higher-order methods, \nomad
explicitly stores partial derivatives in a dedicated partials stack. When
constructing a $k$th-order expression graph, only the $k$th-order partial
derivatives and lower are stored, and only if the partial derivatives are
non-zero. For example, a second-order expression graph will calculate
and store only the first and second-order partial derivatives.
To avoid redundant calculations and storage, \nomad stores only the
unique higher-order values. In a second-order calculation a node
representing the component function
$f: \mathbb{R}^{N} \rightarrow \mathbb{R}$ will store
%
\begin{equation*}
\frac{ \partial^{2} f }{ \partial x_{i} \partial x_{j} }, \, i \in 1, \ldots, N, j \in 1, \ldots i,
\end{equation*}
%
while a third-order calculation will store
%
\begin{equation*}
\frac{ \partial^{3} f }{ \partial x_{i} \partial x_{j} \partial x_{k} }, \,
i \in 1, \ldots, N, j \in 1, \ldots i, k \in 1, \ldots, j.
\end{equation*}
%
For example, when executing a third-order calculation the node representing
a binary function, $f : \mathbb{R}^{2} \rightarrow \mathbb{R}$, utilizes the
compact storage demonstrated in Figure \ref{fig:partialsStorage}.
\begin{figure}
\setlength{\unitlength}{0.1in}
\centering
\begin{picture}(50, 10)
%
%\put(0, 0) { \framebox(50, 10){} }
%
% Second-Order
%
\put(5, 7) { \makebox(2, 2){ $\ldots$ } }
\put(5, 1) { \makebox(2, 2){ $\ldots$ } }
\put(7, 2) { \framebox(4, 6){ $ \frac{ \partial f }{ \partial x} $ } }
\put(11, 2) { \framebox(4, 6){ $ \frac{ \partial f }{ \partial y} $ } }
\put(15, 2) { \framebox(4, 6){ $ \frac{ \partial^{2} f }{ \partial x^{2}} $ } }
\put(19, 2) { \framebox(4, 6){ $ \frac{ \partial^{2} f }{ \partial x \partial y} $ } }
\put(23, 2) { \framebox(4, 6){ $ \frac{ \partial^{2} f }{ \partial y^{2}} $ } }
\put(27, 2) { \framebox(4, 6){ $ \frac{ \partial^{3} f }{ \partial x^{3}} $ } }
\put(31, 2) { \framebox(4, 6){ $ \frac{ \partial^{3} f }{ \partial x^{2} \partial y} $ } }
\put(35, 2) { \framebox(4, 6){ $ \frac{ \partial^{3} f }{ \partial x \partial y^{2}} $ } }
\put(39, 2) { \framebox(4, 6){ $ \frac{ \partial^{3} f }{ \partial y^{3}} $ } }
\put(43, 7) { \makebox(2, 2){ $\ldots$ } }
\put(43, 1) { \makebox(2, 2){ $\ldots$ } }
%
{ \thicklines \put(9, 10) { \vector(0, -1){2} } }
%
\end{picture}
\caption{
Only unique partial derivatives are stored in the partials stack,
as demonstrated here for a node representing a binary function,
$f: \mathbb{R}^{2} \rightarrow \mathbb{R}$, in a third-order
expression graph that requires first, second, and third-order
partial derivatives.
}
\label{fig:partialsStorage}
\end{figure}
In general a function with $N$ inputs and non-vanishing derivatives at
all orders will require $\binom{N + M - 1}{M}$ elements to store the unique
$M$th-order partial derivatives, and hence $\binom{N + M}{M} - 1$ elements in
total for the unique partial derivatives of orders $1$ through $M$.
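To make the ordering of the stored partial derivatives concrete, the following short sketch, illustrative only and not part of \nomad, enumerates the unique third-order index triples $(i, j, k)$ with $i \geq j \geq k$ in the order implied by the expressions above.
\begin{verbatim}
// Illustrative sketch only: enumerate the unique third-order partial
// derivative indices (i, j, k) with i >= j >= k for a node with
// n_inputs arguments, mirroring the nested ranges above.
#include <cstddef>
#include <iostream>

int main() {
  const std::size_t n_inputs = 2;  // e.g. the binary function above
  std::size_t count = 0;
  for (std::size_t i = 1; i <= n_inputs; ++i)
    for (std::size_t j = 1; j <= i; ++j)
      for (std::size_t k = 1; k <= j; ++k) {
        std::cout << "d^3 f / dx_" << i << " dx_" << j
                  << " dx_" << k << "\n";
        ++count;
      }
  std::cout << count << " unique third-order partials\n";  // prints 4
  return 0;
}
\end{verbatim}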
\section{Extending Functions to Accept Dual Numbers}
In order to implement automatic differentiation calculations, each
dual-number-valued function is responsible for not only computing the function
value but also expanding the expression graph with the addition of
a node with the desired pushforward and pullback methods and
addresses to the top of the inputs, dual numbers, and partials stacks.
The evaluation of a composite dual number-valued function will then
build the complete expression graph primed for the application of
linear differential operators.
Dual numbers in \nomad are exposed to the user by the \verb|var|
class with \verb|var|-valued functions responsible for the management
of the expression graph. In this section we present the details of
the \verb|var| class and the implementation of \verb|var|-valued functions.
\subsection{The var Class}
The \verb|var| class itself is only a lightweight wrapper for an underlying
\verb|var_node| in the expression graph, storing an address to the
corresponding node with a variety of member functions that expose the
node's data. Most importantly, \verb|var| is templated to allow for the
compile-time configuration of the expression graph,
%
\begin{verbatim}
template <short AutodiffOrder, bool StrictSmoothness, bool ValidateIO>
class var { ...
\end{verbatim}
%
\nomad also defines a variety of type definitions for the most commonly
used template configurations (Table \ref{tab:typedefs}).
\begin{table*}[t!]
\centering
\renewcommand{\arraystretch}{2}
\begin{tabular}{cccc}
\rowcolor[gray]{0.9} Type Definition & \verb|AutodiffOrder|
& \verb|StrictSmoothness| & \verb|ValidateIO| \\
\verb|var1| & \verb|1| & \verb|true| & \verb|false| \\
\rowcolor[gray]{0.9} \verb|var2| & \verb|2| & \verb|true| & \verb|false| \\
\verb|var3| & \verb|3| & \verb|true| & \verb|false| \\
\rowcolor[gray]{0.9} \verb|var1_debug| & \verb|1| & \verb|true| & \verb|true| \\
\verb|var2_debug| & \verb|2| & \verb|true| & \verb|true| \\
\rowcolor[gray]{0.9} \verb|var3_debug| & \verb|3| & \verb|true| & \verb|true| \\
\verb|var1_wild| & \verb|1| & \verb|false| & \verb|false| \\
\rowcolor[gray]{0.9} \verb|var2_wild| & \verb|2| & \verb|false| & \verb|false| \\
\verb|var3_wild| & \verb|3| & \verb|false| & \verb|false| \\
\end{tabular}
\caption{The \texttt{nomad} namespace includes helpful type definitions for
the most common \texttt{var} configurations.}
\label{tab:typedefs}
\end{table*}
\subsubsection{AutodiffOrder}
The template parameter \verb|AutodiffOrder| defines the maximum order
linear differential operator that the corresponding expression graph will
admit. Attempting to apply a second-order linear differential operator
to a graph built from \verb|var|s with \verb|AutodiffOrder = 1|, for example,
will induce a compile-time error.
Note that lower-order graphs require the computation and storage of
fewer dual numbers and partial derivatives, not to mention faster pushforward
and pullback implementations. Consequently, although a higher-order
graph will admit lower-order operators, it is much more efficient to match
the order of the expression graph, via the choice of \verb|AutodiffOrder|,
to the linear differential operators of interest.
\subsubsection{StrictSmoothness}
\verb|StrictSmoothness| defines the smoothness requirements of
component functions.
Formally, automatic differentiation requires only that each component
function have well-behaved derivatives defined in some neighborhood
containing the point at which the function is being evaluated. For
example, a linear differential operator applied to the absolute value
function is well defined everywhere except at the origin. Many
algorithms that use these operators, however, actually require that
the derivatives be well-defined \textit{everywhere} because even
functions with only point discontinuities in their derivatives will
manifest undesired pathologies.
With these algorithms in mind, when \verb|StrictSmoothness = true|
\nomad will disable functions and operators that are not everywhere
smooth, as well as those that admit comparisons that may introduce cusps.
\subsubsection{ValidateIO}
\verb|ValidateIO| enables strict validation of the inputs and outputs
to each component function.
Identifying the source of potential floating-point pathologies like
underflow, overflow and \verb|NaN| in a large, composite function
is often a challenging debugging task. When \verb|ValidateIO = true|,
\nomad assists users by checking all input values, output values,
and output partial derivatives for \verb|NaN| and throws an
exception identifying the responsible component function if any
are found. Domain validation will also be done if implemented in
the function. These checks, however, are a nontrivial computational
burden and should be disabled for high-performance applications.
\subsection{var-valued Functions}
Extending a function to take \verb|var| arguments requires not just propagating the
value of the function but also creating a new node in the expression
graph. In this section we review the steps necessary for implementing
a \verb|var|-valued function and present some specific examples.
\subsubsection{Input Validation}
Firstly, a \verb|var|-valued function has to validate its inputs if
\verb|ValidateIO| is \verb|true|. \nomad provides helper functions
that validate double values and throw \nomad-specific exceptions
if \verb|NaN|s are encountered,
%
\begin{verbatim}
inline void validate_input(double val, std::string f_name)
\end{verbatim}
%
Here \verb|f_name| is the name of the function being implemented
and is used to trace the exception to the origin of the violation.
Domain constraints may also be validated here with helper functions
such as
%
\begin{verbatim}
inline void validate_lower_bound(double val, double lower, std::string f_name)
\end{verbatim}
\subsubsection{Expression Graph Node Creation}
With the inputs validated we can now append a new node to the
expression graph with the \verb|create_node| function. This function
is templated to accept the type of node being used by the function
and will require the number of inputs to the function unless the
specific \verb|var_node| implementation does not require it.
For example, if the function is utilizing the default \verb|var_node|
implementation then the call would be
%
\begin{verbatim}
create_node<var_node<AutodiffOrder, PartialsOrder>>(n_inputs);
\end{verbatim}
%
On the other hand, a function utilizing a \verb|var_node| specialization
with a predefined number of inputs would not need to specify an
argument,
%
\begin{verbatim}
create_node<binary_var_node<AutodiffOrder, PartialsOrder>>();
\end{verbatim}
\subsubsection{Pushing Inputs}
Once the node representing the component function has been created
we can now begin to push the necessary data onto the external stacks.
First are the addresses of the inputs to the function,
%
\begin{verbatim}
inline void push_inputs(nomad_idx_t input)
\end{verbatim}
%
which are readily accessed from the input \verb|var| arguments.
\subsubsection{Pushing Dual Numbers}
Next are the dual numbers. The function
%
\begin{verbatim}
template<short AutodiffOrder, bool ValidateIO>
inline void push_dual_numbers(double val)
\end{verbatim}
%
pushes $2^{\mathrm{AutodiffOrder}}$ components onto the stack, with
the first component set to the value of the function, \verb|val|, and the
rest set to zero.
When \verb|ValidateIO = true| the function \verb|push_dual_numbers|
will also check the output \verb|val| for a \verb|NaN| and throw an
exception if necessary. Consequently the call to \verb|push_dual_numbers|
must be wrapped in a \verb|try/catch| block, preferably one that throws
a new exception identifying the current function, for example
%
\begin{verbatim}
try {
push_dual_numbers<AutodiffOrder, ValidateIO>(binary_function(x, y));
} catch(nomad_error) {
throw nomad_output_value_error("binary_function");
}
\end{verbatim}
\subsubsection{Pushing Partials}
Finally we can compute the partial derivatives and push them onto the
partials stack with
%
\begin{verbatim}
template<bool ValidateIO>
inline void push_partials(double partial)
\end{verbatim}
%
As with \verb|push_dual_numbers|, \verb|push_partials| will optionally
validate the values of the partial derivatives and must be wrapped in
a \verb|try/catch| block,
%
\begin{verbatim}
try {
if (AutodiffOrder >= 1) {
push_partials<ValidateIO>(df_dx);
push_partials<ValidateIO>(df_dy);
}
...Push higher-order partial derivatives...
} catch(nomad_error) {
throw nomad_output_partial_error("binary_function");
}
\end{verbatim}
Because partial derivatives only up to \verb|AutodiffOrder| need to be
computed and stored, efficient implementations of a \verb|var|-valued
function should evaluate partial derivatives only conditionally on
\verb|AutodiffOrder|.
\subsubsection{Returning a New Var}
Finally we return a new \verb|var| that wraps the freshly-created node.
%
\begin{verbatim}
return var<AutodiffOrder, StrictSmoothness, ValidateIO>(next_node_idx_ - 1);
\end{verbatim}
%
\verb|next_node_idx_| addresses the top of the \verb|var_node| stack, hence
\verb|next_node_idx_ - 1| addresses the last node pushed to the stack.
\subsubsection{Example Implementation of a Smooth Function}
The logic of a \verb|var|-valued function implementation is clearer
when each step is presented together. Here is an example implementation
of a smooth, binary function using the default \verb|var_node| implementation.
\begin{verbatim}
template <short AutodiffOrder, bool StrictSmoothness, bool ValidateIO>
inline var<AutodiffOrder, StrictSmoothness, ValidateIO>
binary_function(const var<AutodiffOrder, StrictSmoothness, ValidateIO>& v1,
const var<AutodiffOrder, StrictSmoothness, ValidateIO>& v2) {
// Validate input values if ValidateIO is true
if (ValidateIO) {
validate_input(v1.first_val(), "binary_function");
validate_input(v2.first_val(), "binary_function");
}
// Create a var_node on the top of the stack
const short partials_order = 3;
const unsigned int n_inputs = 2;
create_node<var_node<AutodiffOrder, partials_order>>(n_inputs);
// Push dependencies to top of the inputs stack
push_inputs(v1.dual_numbers());
push_inputs(v2.dual_numbers());
// Push dual numbers to the the top of the dual numbers stack
double x = v1.first_val();
double y = v2.first_val();
try {
push_dual_numbers<AutodiffOrder, ValidateIO>(binary_function(x, y));
} catch(nomad_error) {
throw nomad_output_value_error("binary_function");
}
// Push partial derivatives to the top of the partials stack
try {
if (AutodiffOrder >= 1) {
...Compute df_dx and df_dy...
push_partials<ValidateIO>(df_dx);
push_partials<ValidateIO>(df_dy);
}
if (AutodiffOrder >= 2) {
...Compute df2_dx2, df2_dxy, and df2_dy2...
push_partials<ValidateIO>(df2_dx2);
push_partials<ValidateIO>(df2_dxdy);
push_partials<ValidateIO>(df2_dy2);
}
if (AutodiffOrder >= 3) {
...Compute df3_dx3, df3_dx2dy, df3_dxdy2, and df3_dy3...
push_partials<ValidateIO>(df3_dx3);
push_partials<ValidateIO>(df3_dx2dy);
push_partials<ValidateIO>(df3_dxdy2);
push_partials<ValidateIO>(df3_dy3);
}
} catch(nomad_error) {
throw nomad_output_partial_error("binary_function");
}
// Return a new var that wraps the newly created node
return var<AutodiffOrder, StrictSmoothness, ValidateIO>(next_node_idx_ - 1);
}
\end{verbatim}
%
Note that expensive calculations, such as validating inputs and computing
and storing partial derivatives, are conditioned on template parameters wherever
possible to avoid unnecessary computation. Because these template parameters
are known at compile-time these checks incur no run-time penalties.
When the partial derivatives of a function are structured then we can achieve
much higher performance by specializing the underlying \verb|var_node|.
For example, the only non-zero partial derivatives of the addition operator
are at first-order, and they are all equal to $1$. Consequently we can speed
up the automatic differentiation computations by specializing \verb|var_node|
to avoid unnecessary multiplications by $1$.
This is done in the \verb|binary_sum_var_node| and the implementation of
the addition operator then becomes
%
\begin{verbatim}
template <short AutodiffOrder, bool StrictSmoothness, bool ValidateIO>
inline var<AutodiffOrder, StrictSmoothness, ValidateIO>
operator+(const var<AutodiffOrder, StrictSmoothness, ValidateIO>& v1,
const var<AutodiffOrder, StrictSmoothness, ValidateIO>& v2) {
if (ValidateIO) {
validate_input(v1.first_val(), "operator+");
validate_input(v2.first_val(), "operator+");
}
// Create a specialized var_node on the stack
create_node<binary_sum_var_node<AutodiffOrder>>();
push_inputs(v1.dual_numbers());
push_inputs(v2.dual_numbers());
try {
push_dual_numbers<AutodiffOrder, ValidateIO>(v1.first_val() + v2.first_val());
} catch(nomad_error) {
throw nomad_output_value_error("operator+");
}
// The binary_sum_var_node pushforward and pullback implementations
// use hardcoded partial derivatives and don't require partial derivatives to
// be pushed onto the partials stack
return var<AutodiffOrder, StrictSmoothness, ValidateIO>(next_node_idx_ - 1);
}
\end{verbatim}
\subsubsection{Example Implementation of a Non-Smooth Function}
Non-smooth functions are implemented almost exactly the same as smooth functions.
The only difference is in the function signature, which uses the \verb|enable_if|
template metaprogram to disable the function whenever \verb|StrictSmoothness| is enabled.
For example, the absolute value function is implemented as
%
\begin{verbatim}
template <short AutodiffOrder, bool StrictSmoothness, bool ValidateIO>
inline typename
std::enable_if<!StrictSmoothness,
var<AutodiffOrder, StrictSmoothness, ValidateIO> >::type
fabs(const var<AutodiffOrder, StrictSmoothness, ValidateIO>& input) {
if (ValidateIO) validate_input(input.first_val(), "fabs");
const short partials_order = 1;
const unsigned int n_inputs = 1;
create_node<unary_var_node<AutodiffOrder, partials_order>>(n_inputs);
double x = input.first_val();
try {
push_dual_numbers<AutodiffOrder, ValidateIO>(fabs(x));
} catch(nomad_error) {
throw nomad_output_value_error("fabs");
}
push_inputs(input.dual_numbers());
try {
if (AutodiffOrder >= 1) {
if (x < 0)
push_partials<ValidateIO>(-1);
else
push_partials<ValidateIO>(1);
}
} catch(nomad_error) {
throw nomad_output_partial_error("fabs");
}
return var<AutodiffOrder, StrictSmoothness, ValidateIO>(next_node_idx_ - 1);
}
\end{verbatim}
\section{The \nomad User Interface}
\label{sec:user_interface}
In \nomad, linear differential operators are implemented as functionals acting
on \verb|var|-valued functors.
Specifically, functors are classes deriving from \verb|base_functor|
%
\begin{verbatim}
template<class T>
class base_functor {
public:
virtual T operator()(const Eigen::VectorXd& x) const;
typedef T var_type;
};
\end{verbatim}
%
that implement a \verb|var|-valued \verb|operator()| taking an
\verb|Eigen::VectorXd&| of input values. For example, at
first-order the composite function
%
\begin{equation*}
f \! \left( x, y, z \right) = \cos \! \left( e^{x} + e^{y} \right) / z
\end{equation*}
%
would be defined as
%
\begin{verbatim}
class example_functor: public base_functor<var1> {
var1 operator()(const Eigen::VectorXd& x) const {
var1 v1 = x[0];
var1 v2 = x[1];
var1 v3 = x[2];
return cos( exp(v1) + exp(v2) ) / v3;
}
};
\end{verbatim}
The functionals implementing linear differential operators then
take a functor instance as an argument, first calling \verb|operator()|
to build the expression graph and then executing the various sweeps
necessary to compute the differential operator itself. For example,
the gradient is defined with the signature
%
\begin{verbatim}
template <typename F>
void gradient(const F& f,
const Eigen::VectorXd& x,
Eigen::VectorXd& g)
\end{verbatim}
%
Consequently computing a gradient is straightforward,
%
\begin{verbatim}
int n = 100;
Eigen::VectorXd x = Eigen::VectorXd::Ones(n);
Eigen::VectorXd grad(x.size());
example_functor example;
gradient(example, x, grad);
std::cout << grad.transpose() << std::endl;
\end{verbatim}
For the complete list of \verb|var|-valued functions and linear differential
operators implemented in \nomad please consult Chapter \ref{chap:reference_guide}.
\section{0. Introduction}\label{introduction}
This project focuses on building a web application to predict house
prices for house buyers and house sellers.
The value of a house is more than just location and square footage.
Houses have several features that make up their value. We are going to
take advantage of all these features to make accurate predictions about
the price of any house.
We developed our application using a series of logical steps to ensure
that users can easily use the application and make accurate predictions. The report is organized into the following sections:
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\setcounter{enumi}{-1}
\itemsep1pt\parskip0pt\parsep0pt
\item
Introduction
\item
Problem definition
\item
Solution approach
\item
Results and discussions
\item
Conclusions
\item
References
\end{enumerate}
\section{1. Problem definition}\label{problem-definition}
We used a simple case study to understand the problem. There are two
clients at the same time without a conflict of interest.
The house buyer is a client that wants to buy their dream home. They have
some locations in mind. Now, the client wants to know if the house price
matches the value. With the application, they can understand which
features influence the final price. If the final price matches the
value predicted by the application, they can be confident they are getting a
fair price.
The house seller is a client that buys houses, fixes them up and then sells
them to make a profit. This client wants to take advantage of the features
that influence the price of a house the most. They typically want to buy
a house at a low price and invest in the features that will give the highest
return.
\section{2. Solution approach}\label{solution-approach}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\itemsep1pt\parskip0pt\parsep0pt
\item
Define requirements
\item
Gather data, analyze and build models
\item
Build web backend API to use model
\item
Design and develop frontend
\item
Integrate both frontend and backend
\item
Test the entire application
\end{enumerate}
\subsection{1. Define requirements}\label{define-requirements}
The requirements were gathered from the problem and formally defined.
\subsubsection{User function}\label{user-function}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
Predict house price
\item
Customize house parameters
\item
Assign a unique label to every prediction
\item
Save recent predictions
\end{itemize}
\subsubsection{Operating environment}\label{operating-enviroment}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
client/server system (Web)
\item
client: Web browsers
\item
server: Python/Flask
\item
database: sqlite
\item
platform: Python/Javascript/HTML5/CSS3
\item
Operating system: Windows, Mac, Linux
\end{itemize}
\subsection{2. Gather data, analyze and build
models}\label{gather-data-analyze-and-build-models}
We found an online Kaggle challenge that contained the data we needed to
solve the problem. The data was downloaded from
\href{https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data}{here}
We broke everything into the following steps. We started by loading the data
and the packages we needed for the research. We then analyzed the data to
understand the relationships between the price and the other features. We
cleaned the data and, using some domain knowledge, replaced some missing
values. The next step was feature transformation to make the data
compatible with our models. We then trained our model and started
performing some predictions.
\subsection{3. Build web backend API to use
model}\label{build-web-backend-api-to-use-model}
Using Python and the Flask web framework we built a web API that takes
advantage of our model. The API consumer can make a request containing a
JSON map of features and their values. The Flask server receives this
request and sends a response containing the predicted price.
\subsection{4. Design and develop
frontend}\label{design-and-develope-frontend}
The user interface of the application was built using HTML, CSS3 and
JavaScript.
\subsection{5. Integrate both frontend and
backend}\label{intergrate-both-frontend-and-backend}
Using JavaScript, we send data from the forms on the webpage to the
Flask server, and the server sends a response, which is a prediction of
the price matching those features.
\subsection{6. Test the entire
application}\label{test-the-entire-application}
We ran multiple tests and fixed bugs in the code.
\section{3. Results and discussions}\label{results-and-discussions}
\subsection{Screenshot of the
application}\label{screenshot-of-the-application}
\includegraphics{images/screenshot-01.png}
\includegraphics{images/screenshot-02.png}
\includegraphics{images/screenshot-04.png}
\includegraphics{images/screenshot-03.png}
We were able to build a web application that can predict the price of a
house given certain features. The application runs in the browser and
talks to a Flask server that takes the data and passes it to a machine
learning model.
\section{4. Conclusion}\label{conclusion}
There are real world problems that can be solved with machine learning.
Some of these solutions can take real world data and make very accurate
predictions that can be useful to our daily lives. Users can leverage
the power of machine learning without being data scientists when easy-to-use
applications are built around some of these complicated models.
\section{5. References}\label{refrences}
Pedro Marcelino, \emph{Comprehensive data exploration with Python},
Kaggle, February 2017. Accessed on: April 19, 2021. {[}Online{]}
Available:
\url{https://www.kaggle.com/pmarcelino/comprehensive-data-exploration-with-python}
J. Ade-Ojo, \emph{Predicting House Prices With Machine Learning},
Towards Data Science, January 8, 2021. Accessed on: April 19, 2021.
{[}Online{]} Available:
\url{https://towardsdatascience.com/predicting-house-prices-with-machine-learning-62d5bcd0d68f}
\emph{House Prices - Advanced Regression Techniques}, Kaggle, Accessed
on: April 19, 2021. {[}Online{]} Available:
\url{https://www.kaggle.com/c/house-prices-advanced-regression-techniques}
\emph{House Prices EDA}, Kaggle, Accessed on: April 19, 2021.
{[}Online{]} Available:
\url{https://www.kaggle.com/dgawlik/house-prices-eda}
Using the sample Java web application %
MyBatis JPetStore,\footnote{\url{http://www.mybatis.org/spring/sample.html}} this example %
demonstrates how to employ \KiekerMonitoringPart{} for monitoring a Java application %
running in a Java~EE container---in this case Jetty.\footnote{\url{http://www.eclipse.org/jetty/}} %
Monitoring probes based on the Java~EE Servlet API, Spring, %
and AspectJ are used to monitor execution, trace, and session data (see Section~\ref{chap:aspectJ}). %
The directory \dir{\JavaEEServletExampleReleaseDirDistro/} contains the prepared Jetty %
server with the MyBatis JPetStore application and the \Kieker-based demo %
analysis application known from \url{http://demo.kieker-monitoring.net/}. %
\section{Setting}
The subdirectory \file{jetty/} includes the %
Jetty server with the JPetStore application already deployed to the server's %
\file{webapps/} directory. The example is prepared to use two alternative %
types of \Kieker{} probes: either the \Kieker{} Spring interceptor (default) or the
\Kieker{} AspectJ aspects. Both alternatives additionally use \Kieker{}'s Servlet
filter. %
\paragraph{Required Libraries and \KiekerMonitoringPart{} Configuration}
Both settings require the files \file{\aspectJWeaverJar{}} and \file{\mainJar}, %
which are already included in the webapps's \dir{WEB-INF/lib/} directory. %
Also, a \Kieker{} configuration file is already included in the Jetty's root directory, %
where it is considered for configuration by \KiekerMonitoringPart{} in both modes.
\paragraph{Servlet Filter (Default)}
The file \file{web.xml} is located in the webapps's %
\dir{WEB-INF/} directory. \Kieker{}'s Servlet filters are already enabled:
\setXMLListing
\lstinputlisting[firstline=50,lastline=61, numbers=none, linewidth=1\textwidth, caption=Enabling the Servlet filter in \file{web.xml}]{\JavaEEServletExampleDir/jetty/webapps/jpetstore/WEB-INF/web.xml}
\noindent This filter can be used with both the Spring-based and the %
AspectJ-based instrumentation mode.
\paragraph{Spring-based Instrumentation (Default)}
\Kieker{}'s Spring interceptor is already enabled in the file
\file{applicationContext.xml}, located in the webapps's \dir{WEB-INF/} directory:
\setXMLListing
\lstinputlisting[firstline=38,lastline=43, numbers=none, linewidth=1\textwidth, caption=Enabling the Spring interceptor in \file{applicationContext.xml}]{\JavaEEServletExampleDir/jetty/webapps/jpetstore/WEB-INF/applicationContext.xml}
\NOTIFYBOX{When using, for example, the \texttt{@Autowired} feature in your Spring beans, it can be necessary to force the usage of CGLIB proxy objects with \texttt{<aop:aspectj-autoproxy proxy-target-class="true"/>}.}
\paragraph{AspectJ-based Instrumentation}
\enlargethispage{1cm}
In order to use AspectJ-based instrumentation, the following changes need to
be performed. The file \file{start.ini}, located in Jetty's root directory, %
allows to pass various JVM arguments, JVM system properties, and other options %
to the server on startup. When using AspectJ for instrumentation, the respective %
JVM argument needs to be activated in this file. %
The AspectJ configuration file \file{aop.xml} is already located in the %
webapps's\linebreak \dir{WEB-INF/classes/META-INF/} directory and configured to instrument %
the JPetStore classes with \Kieker{}'s \class{OperationExecutionAspectFull} aspect %
(Section~\ref{chap:aspectJ}).
\
When using the AspectJ-based instrumentation, make sure to disable the Spring %
interceptor in the file \file{applicationContext.xml}, mentioned above. %
\begin{compactenum}
\item Start the Jetty server using the \file{start.jar} file (e.g., via \texttt{java -jar start.jar}). You should make %
sure that the server started properly by taking a look at %
the console output that appears during server startup.
\item Now, you can access the JPetStore application by opening the URL
\url{http://localhost:8080/jpetstore/} (Figure~\ref{fig:jpetstore}). %
\Kieker{} initialization messages should appear in the console output. %
\begin{figure}[h]\centering
\includegraphics[width=0.8\textwidth]{images/jpetstore-example-FFscrsh}
\caption{MyBatis JPetStore}\label{fig:jpetstore}%
\end{figure}
\item Browse through the application to generate some monitoring data. %
\item In this example, \Kieker{} is configured to write the monitoring data %
to JMX in order to communicate with the \Kieker-based demo analysis %
application, which is accessible via \url{localhost:8080/livedemo/}.
\item In order to write the monitoring data to the file system, the %
JMX writer needs to be disabled in the file \file{kieker.monitoring.properties}, %
which is located in the directory \file{webapps/jpetstore/WEB-INF/classes/META-INF/}.
After a restart of the Jetty server, the Kieker startup output includes the %
information where the monitoring data is written to (should be a %
\dir{kieker-<DATE-TIME>/} directory) located in the default temporary %
directory. %
This data can be analyzed and visualized using \KiekerTraceAnalysis{}, %
as described in Chapter~\ref{chap:aspectJ}.
\end{compactenum}
\medskip
\chapter{Implementation}\label{implementation}
This chapter describes the implementation of the methods presented in chapter \ref{methods}. As an introduction, a flowchart (figure \ref{flowchart1}) gives a broad perspective on the implementation and the structure of the first node, which handles the main part of the mining tasks and publishes the ``execution'' time of the tasks as well as the ores collected in the process. A flowchart for the second node is not included, as it only handles separate console outputs in a callback function subscribed to the /mininglog topic.
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{RPro-Mini-Project-B332b/00 - Images/broad_flowchart.png}
\caption{Broad flowchart of the first node}
\label{flowchart1}
\end{figure}
\newpage
\section{Publisher}
The ROS publisher publishes messages onto the ROS network of nodes for any subscriber to read. In the code, this is handled as described below.\\
\\
\begin{figure}[!ht]
\begin{lstlisting}
void printLog()
{
rpro_mini_project::logOutput log;
log.comm = logVar;
mine_pub.publish(log);
  ros::spinOnce();
}
\end{lstlisting}
\vspace{-10mm}
\caption{ROS publisher}
\end{figure}
The $logOutput$ class of the $rpro\_mini\_project$ namespace is instantiated as \textit{log}. Then the contents of the class member \textit{comm} are set equal to the contents of the \textit{logVar} string variable.
Next \textit{log} is passed as a parameter when the $mine\_pub.publish$ function is called. $mine\_pub$ is described below. Lastly the function \textit{ros::spinOnce()} is called, which gives ROS a chance to process its pending callbacks once; the message itself is sent by the preceding \textit{publish} call.\\
\\
\begin{figure}[!ht]
\begin{lstlisting}
mine_pub = nh.advertise<rpro_mini_project::logOutput>("miningLog",10);
\end{lstlisting}
\vspace{-10mm}
\caption{Nodehandle for publisher}
\end{figure}
\\
In $mine\_pub$ we have nodehandle.advertise which advertises new messages onto the ROS network of nodes. In this case the message $rpro\_mini\_project::logOutput$ is advertised onto the topic \textit{miningLog}.
The second parameter is the message queue - how many messages will be kept in memory if our system cannot keep up with the flow of messages.
\newpage
\section{Subscriber}
The ROS subscriber subscribes to topics on the ROS network of nodes and reads any advertised messages. In the code, this is handled as described below.\\
\\
\begin{figure}[!ht]
\begin{lstlisting}
void printLog(const rpro_mini_project::logOutput& logOut)
{
if (logOut.comm == "cls")
{
system("clear");
}
else
{
std::cout << logOut.comm << std::endl;
}
}
\end{lstlisting}
\vspace{-10mm}
\caption{ROS subscriber}
\end{figure}
The printLog function takes a constant reference to a $rpro\_mini\_project::logOutput$ message, named \textit{logOut}. When a message of the \textit{logOutput} class is published on the \textit{miningLog} topic, its contents are passed through the if-statements.
If the content of the class member \textit{comm} equals ``cls'', the \textit{system} function is called with the parameter ``clear'', which clears the terminal window.
If \textit{comm} contains anything else, the contents are printed in the console.\\
\\
\begin{figure}[!ht]
\begin{lstlisting}
mine_sub = nh.subscribe("miningLog", 1000, &MiningOutput::printLog, this);
\end{lstlisting}
\vspace{-10mm}
\caption{Nodehandle for subscriber}
\end{figure}
\\
In $mine\_sub$ we have nodehandle.subscribe, which subscribes to new messages, in this case on the \textit{miningLog} topic. The second parameter is the message queue: how many messages will be kept in memory if our system cannot keep up with the flow of messages. Next is the pointer to the class member function that handles incoming messages. The last parameter is the keyword \textit{this}, which points to the address of the object on which the member function is called.
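To tie the pieces together, a sketch of how the entry point of the subscriber node could be wired up is shown below. The node name and the \textit{MiningOutput} constructor signature are assumptions made purely for illustration and are not taken from the project source.\\
\begin{figure}[!ht]
\begin{lstlisting}
// Illustrative sketch only: the node name and the MiningOutput
// constructor signature are assumptions, not taken from the project.
#include "ros/ros.h"

int main(int argc, char** argv)
{
    ros::init(argc, argv, "mining_output");  // assumed node name
    ros::NodeHandle nh;
    MiningOutput miningOutput(nh);  // assumed: constructor sets up mine_sub
    ros::spin();  // hand control to ROS so incoming messages trigger printLog
    return 0;
}
\end{lstlisting}
\vspace{-10mm}
\caption{Sketch of a possible entry point for the subscriber node (illustrative)}
\end{figure}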
"alphanum_fraction": 0.7495728582,
"avg_line_length": 47.091954023,
"ext": "tex",
"hexsha": "264e2163de597481304c2b4de66085afdacfe638",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f4b8c0d0056a2e30fbb12d2e8ec5a335ada4dff7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "RiceCurry2/rpro-mini-project",
"max_forks_repo_path": "RPro-Mini-Project-B332b/02 - Sections/03-Implementation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f4b8c0d0056a2e30fbb12d2e8ec5a335ada4dff7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "RiceCurry2/rpro-mini-project",
"max_issues_repo_path": "RPro-Mini-Project-B332b/02 - Sections/03-Implementation.tex",
"max_line_length": 568,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f4b8c0d0056a2e30fbb12d2e8ec5a335ada4dff7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "RiceCurry2/rpro-mini-project",
"max_stars_repo_path": "RPro-Mini-Project-B332b/02 - Sections/03-Implementation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1022,
"size": 4097
} |
% ----------------------------------------------------------------------
\lecture{Heuristic-driven solving}{hsolving}
% ----------------------------------------------------------------------
\part{Heuristic-driven solving}
% ----------------------------------------------------------------------
\section{Motivation}
% ------------------------------
\input{heuristic-driven-solving/motivation}
\input{solving/cdcl}
\input{heuristic-driven-solving/decide}
% ----------------------------------------------------------------------
\section{Heuristically modified ASP}
% ------------------------------
\subsection{Language}
% ------------------------------
\input{heuristic-driven-solving/hlanguage}
\input{heuristic-driven-solving/hstrips}
% ------------------------------
\subsection{Options}
% ------------------------------
\input{heuristic-driven-solving/hoptions}
\input{heuristic-driven-solving/hminimality}
% ----------------------------------------------------------------------
\section{Summary}
% ------------------------------
\input{heuristic-driven-solving/summary}
% ----------------------------------------------------------------------
%
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End:
\section{Floquet Conductivity in Landau Levels}
Now we can derive the conductivity expression for a given Landau level using the Floquet conductivity expression derived in [*Ref:my report 2.488]. Before that, let us consider the inverse scattering time matrix element from the previous section. From Eq. \eqref{5.3} we can express the $N$th Landau level's central inverse-scattering-time element ($n=n'=0$) as
\begin{equation} \label{6.1}
\begin{aligned}
\qty(\frac{1}{\tau(\varepsilon,k_x)})^{00}_{N} =
\frac { N_{imp}^2 A^2 \hbar V_{imp}}{16\pi^4 \qty(eB)^2}
\delta(\varepsilon - \varepsilon_{N}) &
\int_{-\infty}^{\infty} d {k'}_x \;
J_0^2\qty(\frac{g\hbar}{eB}[{k}_x - {k'}_x])
\\
& \times
\qty|
\int_{-\infty}^{\infty} d\bar{k} \;
{\chi}_{N}\qty(\frac{\hbar}{eB}\bar{k})
{\chi}_{N}\qty(\frac{\hbar}{eB} \qty[{k'}_x - {k}_x - \bar{k}])|^2.
\end{aligned}
\end{equation}
Now we can introduce a new parameter with the physical meaning of a scattering-induced broadening of the Landau level as follows
\begin{equation} \label{6.2}
\Gamma^{00}_{N}(\varepsilon,k_x) \equiv \hbar \qty(\frac{1}{\tau(\varepsilon,k_x)})^{00}_{N}
\end{equation}
and this modifies our previous expression as
\begin{equation} \label{6.3}
\begin{aligned}
\Gamma^{00}_{N}(\varepsilon,k_x) =
\frac { N_{imp}^2 A^2 \hbar V_{imp}}{16\pi^4 \qty(eB)^2}
\delta(\varepsilon - \varepsilon_{N}) &
\int_{-\infty}^{\infty} d {k'}_x \;
J_0^2\qty(\frac{g\hbar}{eB}[{k}_x - {k'}_x])
\\
& \times
\qty|
\int_{-\infty}^{\infty} d\bar{k} \;
{\chi}_{N}\qty(\frac{\hbar}{eB}\bar{k})
{\chi}_{N}\qty(\frac{\hbar}{eB} \qty[{k'}_x - {k}_x - \bar{k}])|^2.
\end{aligned}
\end{equation}
In addition, for the case of elastic scattering within the same Landau level, one can represent the delta distribution of the energy using the same physical interpretation, as follows
\begin{equation} \label{6.4}
\delta(\varepsilon - \varepsilon_{N}) \approx
\frac{1}{\pi \Gamma^{00}_{N}(\varepsilon,k_x)}
\end{equation}
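This approximation can be motivated, as a standard step stated here for completeness, by replacing the sharp level with a Lorentzian of half-width $\Gamma^{00}_{N}$ (arguments suppressed for brevity) and evaluating it at the level centre,
\begin{equation*}
\delta(\varepsilon - \varepsilon_{N}) \;\rightarrow\;
\frac{1}{\pi}\,
\frac{\Gamma^{00}_{N}}
{\qty(\varepsilon - \varepsilon_{N})^2 + \qty[\Gamma^{00}_{N}]^2}
\quad\Longrightarrow\quad
\delta(\varepsilon - \varepsilon_{N})\big|_{\varepsilon \to \varepsilon_{N}}
\approx \frac{1}{\pi \Gamma^{00}_{N}}.
\end{equation*}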
Substituting Eq. \eqref{6.4} into Eq. \eqref{6.3} then leads to
\begin{equation} \label{6.5}
\begin{aligned}
\qty[\Gamma^{00}_{N}(\varepsilon,k_x)]^2 =
\frac { N_{imp}^2 A^2 \hbar V_{imp}}{16\pi^5 \qty(eB)^2}
\int_{-\infty}^{\infty} d {k'}_x \;
J_0^2\qty(\frac{g\hbar}{eB}[{k}_x - {k'}_x])
\qty|
\int_{-\infty}^{\infty} d\bar{k} \;
{\chi}_{N}\qty(\frac{\hbar}{eB}\bar{k})
{\chi}_{N}\qty(\frac{\hbar}{eB} \qty[{k'}_x - {k}_x - \bar{k}])|^2.
\end{aligned}
\end{equation}
and
\begin{equation} \label{6.6}
\begin{aligned}
\Gamma^{00}_{N}(\varepsilon,k_x) =
\qty[
\frac { N_{imp}^2 A^2 \hbar V_{imp}}{16\pi^5 \qty(eB)^2}
\int_{-\infty}^{\infty} d {k'}_x \;
J_0^2\qty(\frac{g\hbar}{eB}[{k}_x - {k'}_x])
\qty|
\int_{-\infty}^{\infty} d\bar{k} \;
{\chi}_{N}\qty(\frac{\hbar}{eB}\bar{k})
{\chi}_{N}\qty(\frac{\hbar}{eB} \qty[{k'}_x - {k}_x - \bar{k}])|^2
]^{-1/2}.
\end{aligned}
\end{equation}
Numerical calculations show that the above integral does not depend on the value of $k_x$, so we can choose any value for $k_x$. Therefore, setting $k_x =0$ and letting $k'_x \rightarrow k_1,\bar{k}\rightarrow k_2$, we can rewrite our equation as
\begin{equation} \label{6.7}
\begin{aligned}
\Gamma^{00}_{N}(\varepsilon,k_x) =
\qty[
\frac { N_{imp}^2 A^2 \hbar V_{imp}}{16\pi^5 \qty(eB)^2}
\int_{-\infty}^{\infty} d k_1\;
J_0^2\qty(\frac{g\hbar}{eB} k_1)
\qty|
\int_{-\infty}^{\infty} dk_2 \;
{\chi}_{N}\qty(\frac{\hbar}{eB} k_2)
{\chi}_{N}\qty(\frac{\hbar}{eB} \qty[k_1 - k_2])|^2
]^{-1/2}.
\end{aligned}
\end{equation}
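To make the numerical evaluation concrete, the following sketch computes the double integral in Eq.~\eqref{6.7} and the resulting $\Gamma^{00}_{N}$. It assumes that the $\chi_{N}$ are the usual normalized harmonic-oscillator (Landau-level) eigenfunctions with unit oscillator length, and the magnetic field, the parameter $g$, and the prefactor $N_{imp}^2 A^2 \hbar V_{imp}/(16\pi^5(eB)^2)$ are illustrative placeholders rather than values used in this work.
\begin{verbatim}
# Numerical sketch of Eq. (6.7); chi_N assumed to be normalized
# harmonic-oscillator eigenfunctions (unit oscillator length), and
# B, g, and the prefactor are illustrative placeholders.
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval
from scipy.special import j0
from scipy.integrate import trapezoid

hbar, e = 1.054571817e-34, 1.602176634e-19
B, g = 1.0, 1.0e8          # assumed magnetic field [T] and dressing parameter
N = 0                      # Landau level index
l2 = hbar / (e * B)        # l_B^2 = hbar/(eB)

def chi(n, x):
    # normalized Hermite-Gaussian eigenfunction (unit oscillator length)
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return norm * np.exp(-0.5 * x**2) * hermval(x, c)

k1 = np.linspace(-10.0 / l2, 10.0 / l2, 1001)   # outer integration grid
k2 = np.linspace(-10.0 / l2, 10.0 / l2, 2001)   # inner integration grid

# inner integral over k2 for every value of k1
inner = np.array([trapezoid(chi(N, l2 * k2) * chi(N, l2 * (q - k2)), k2)
                  for q in k1])
# outer integral over k1, weighted by the squared Bessel function
outer = trapezoid(j0(g * l2 * k1)**2 * np.abs(inner)**2, k1)

prefactor = 1.0   # stands in for N_imp^2 A^2 hbar V_imp / (16 pi^5 (eB)^2)
Gamma00_N = (prefactor * outer)**(-0.5)
print(Gamma00_N)
\end{verbatim}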
\noindent
Now we can compare the central element of the energy-level broadening for each Landau level using the normalized Landau-level broadening (inverse scattering time) as a function of the amplitude ($E$) of the applied dressing field's electric field, defined as follows
\begin{equation} \label{6.8}
\Lambda^{00}_{N} \equiv
\frac{\Gamma^{00}_{N}(\varepsilon,k_x)}
{\Gamma^{00}_{N}(\varepsilon,k_x)\big|_{E=0}}.
\end{equation}
As shown in Fig. \ref{fig6.1}, as the intensity of the applied dressing field increases, the broadening of the Landau energy level decreases. The strength of this suppression depends on the Landau level under consideration: the higher the level, the weaker the effect.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{figures/fig04.pdf}
\caption{Normalized broadening of Landau levels against the dressing field's amplitude. The red line represents $N=0$, the blue line $N=1$, the green line $N=2$, and the purple line $N=3$.}
\label{fig6.1}
\end{figure}
\noindent
Then we can use the Floquet conductivity expression derived in [*Ref:my report 2.488] as follows
\begin{equation} \label{6.9}
\begin{aligned}
\lim_{\omega \to 0}
\text{Re}[{\sigma}^{xx}(0,\omega)] &=
\frac{-1}{4\pi\hbar A}
\int_{\lambda-\hbar\Omega/2}^{\lambda+ \hbar\Omega/2} d\varepsilon
\bigg(
-\frac{\partial f}{\partial \varepsilon} \bigg)
\frac{1}{A}\sum_{\mb{k}} \\
& \times
\sum_{s,s'=-\infty}^{\infty}
{j}^x_s(\mb{k}){j}^x_{s'}(\mb{k})
\tr_s \big[
\big(
\mb{G}^{r}_0 (\varepsilon;\mb{k}) - \mb{G}^{a}_0 (\varepsilon;\mb{k})
\big)
\odot_s
\big(
\mb{G}^{r}_0 (\varepsilon;\mb{k}) - \mb{G}^{a}_0 (\varepsilon;\mb{k})
\big)
\big].
\end{aligned}
\end{equation}
However, in this case we consider only the $x$-directional momentum as the quantum number separating different states, and we let $\lambda = \varepsilon_N$. In addition, if the current-operator derivation yields component values only for $s=s'=0$, our conductivity equation will be modified to
\begin{equation} \label{6.10}
\begin{aligned}
\lim_{\omega \to 0}
\text{Re}[{\sigma}^{xx}(0,\omega)] &=
\frac{-1}{4\pi\hbar A}
\int_{\varepsilon_N - \hbar\Omega/2}^{\varepsilon_N + \hbar\Omega/2} d\varepsilon
\bigg(
-\frac{\partial f}{\partial \varepsilon} \bigg)
\frac{1}{L_x}\sum_{{k_x}}
\qty[{j}^x_0({k_x})]^2
\\
& \times
\tr \big[
\big(
\mb{G}^{r}_0 (\varepsilon;{k_x}) - \mb{G}^{a}_0 (\varepsilon;{k_x})
\big)
\big(
\mb{G}^{r}_0 (\varepsilon;{k_x}) - \mb{G}^{a}_0 (\varepsilon;{k_x})
\big)
\big].
\end{aligned}
\end{equation}
Now we can expand the above expression using the unitary transformation
\begin{equation} \label{6.11}
\qty(\mb{T})_{\alpha}^{nn'} \equiv \ket*{\phi_{\alpha}^{n+n'}}
\end{equation}
where
\begin{equation} \label{6.12}
\ket{\phi_{\alpha}(t)} = \sum_{n = - \infty}^{\infty} e^{-in\omega t}
\ket{\phi_{\alpha}^{n}}.
\end{equation}
Therefore our conductivity equation becomes
\begin{equation} \label{6.13}
\begin{aligned}
\lim_{\omega \to 0}
\text{Re}[{\sigma}^{xx}(0,\omega)] &=
\frac{-1}{4\pi\hbar A}
\int_{\varepsilon_N - \hbar\Omega/2}^{\varepsilon_N + \hbar\Omega/2} d\varepsilon
\bigg(
-\frac{\partial f}{\partial \varepsilon} \bigg)
\frac{1}{L_x}\sum_{{k_x}}
\qty[{j}^x_0({k_x})]^2
\\
& \times
\qty[
\mb{T}^{\dagger}(k_x)
\big(
\mb{G}^{r}_0 (\varepsilon;{k_x}) - \mb{G}^{a}_0 (\varepsilon;{k_x})
\big)
\mb{T}(k_x)
]^2_{00}.
\end{aligned}
\end{equation}
As derived in [*Ref:my report 2.547], we can write the result of this matrix multiplication as follows
\begin{equation} \label{6.14}
\begin{aligned}
\qty[
\mb{T}^{\dagger}(k_x)
\big(
\mb{G}^{r}_0 (\varepsilon;{k_x}) - \mb{G}^{a}_0 (\varepsilon;{k_x})
\big)
\mb{T}(k_x)
]^2_{00} &
\approx
\qty[
\mb{T}^{\dagger}(k_x)
\big(
\mb{G}^{r}_0 (\varepsilon;{k_x})\mb{G}^{a}_0 (\varepsilon;{k_x})
\big)
\mb{T}(k_x)
]_{00} \\
& =
\frac{-1}
{
\qty(\frac{\varepsilon}{\hbar} - \frac{\varepsilon_N}{\hbar})^2
+
\qty(\frac{\Gamma^{00}_{N}(\varepsilon,k_x)}{2\hbar})^2
}
\end{aligned}
\end{equation}
and this can be simplified further as
\begin{equation} \label{6.15}
\begin{aligned}
\qty[
\mb{T}^{\dagger}(k_x)
\big(
\mb{G}^{r}_0 (\varepsilon;{k_x}) - \mb{G}^{a}_0 (\varepsilon;{k_x})
\big)
\mb{T}(k_x)
]^2_{00} =
\frac{-\hbar^2}
{\qty[\Gamma^{00}_{N}(\varepsilon,k_x)]^2}
\qty[
1 +
\qty(\frac{{\varepsilon}-{\varepsilon_N}}{\Gamma^{00}_{N}(\varepsilon,k_x)/2})^2
]^{-1}.
\end{aligned}
\end{equation}
Since the squared term in the square brackets goes to zero under the conditions of interest, we can apply the binomial approximation to the bracketed term and obtain
\begin{equation} \label{6.16}
\begin{aligned}
\qty[
\mb{T}^{\dagger}(k_x)
\big(
\mb{G}^{r}_0 (\varepsilon;{k_x}) - \mb{G}^{a}_0 (\varepsilon;{k_x})
\big)
\mb{T}(k_x)
]^2_{00} =
\frac{-\hbar^2}
{\qty[\Gamma^{00}_{N}(\varepsilon,k_x)]^2}
\qty[
1 -
4\qty(\frac{{\varepsilon}-{\varepsilon_N}}{\Gamma^{00}_{N}(\varepsilon,k_x)})^2
]
\end{aligned}
\end{equation}
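As a quick numerical sanity check of this step, the snippet below compares the exact bracketed factor of Eq.~\eqref{6.15} with the binomial approximation used in Eq.~\eqref{6.16} for a few arbitrary small values of $x = 2(\varepsilon-\varepsilon_N)/\Gamma^{00}_{N}$.
\begin{verbatim}
# Check of the binomial approximation (6.15) -> (6.16):
# (1 + x^2)^(-1) ~ 1 - x^2 for |x| << 1, with
# x = (eps - eps_N) / (Gamma/2) = 2 (eps - eps_N) / Gamma.
import numpy as np
x = np.array([0.0, 0.1, 0.2, 0.3])      # arbitrary sample values
exact = 1.0 / (1.0 + x**2)
approx = 1.0 - x**2
print(np.c_[x, exact, approx, np.abs(exact - approx)])
\end{verbatim}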
\noindent
Now the conductivity expression is modified as
\begin{equation} \label{6.17}
\begin{aligned}
\lim_{\omega \to 0}
\text{Re}[{\sigma}^{xx}(0,\omega)] =
\frac{\hbar}{4\pi A L_x}
\int_{\varepsilon_N - \hbar\Omega/2}^{\varepsilon_N + \hbar\Omega/2} d\varepsilon
\bigg(
-\frac{\partial f}{\partial \varepsilon} \bigg)
\sum_{{k_x}}
\qty[{j}^x_0({k_x})]^2
\frac{\hbar^2}
{\qty[\Gamma^{00}_{N}(\varepsilon,k_x)]^2}
\qty[
1 -
4\qty(\frac{{\varepsilon}-{\varepsilon_N}}{\Gamma^{00}_{N}(\varepsilon,k_x)})^2
]
\end{aligned}
\end{equation}
\noindent
Then, assuming we are considering fermions at zero temperature in this scenario, we can describe the particle distribution function using the zero-temperature limit of the Fermi-Dirac distribution,
\begin{equation} \label{6.18}
-\frac{\partial f}{\partial \varepsilon} = \delta(\varepsilon_F - \varepsilon)
\end{equation}
where $\varepsilon_F$ represents the Fermi energy of the material under consideration.
\noindent
With this approximation we can derive that
\begin{equation} \label{6.19}
\begin{aligned}
\lim_{\omega \to 0}
\text{Re}[{\sigma}^{xx}(0,\omega)] =
\frac{\hbar}{4\pi A L_x}
\int_{\varepsilon_N - \hbar\Omega/2}^{\varepsilon_N + \hbar\Omega/2} d\varepsilon
\delta(\varepsilon_F - \varepsilon)
\sum_{{k_x}}
\qty[{j}^x_0({k_x})]^2
\frac{\hbar^2}
{\qty[\Gamma^{00}_{N}(\varepsilon,k_x)]^2}
\qty[
1 -
4\qty(\frac{{\varepsilon}-{\varepsilon_N}}{\Gamma^{00}_{N}(\varepsilon,k_x)})^2
]
\end{aligned}
\end{equation}
and
\begin{equation} \label{6.20}
\begin{aligned}
\lim_{\omega \to 0}
\text{Re}[{\sigma}^{xx}(0,\omega)] =
\frac{\hbar}{4\pi A L_x}
\sum_{{k_x}}
\qty[{j}^x_0({k_x})]^2
\frac{\hbar^2}
{\qty[\Gamma^{00}_{N}(\varepsilon_F,k_x)]^2}
\qty[
1 -
4\qty(\frac{{\varepsilon_F}-{\varepsilon_N}}{\Gamma^{00}_{N}(\varepsilon_F,k_x)})^2
]
\end{aligned}
\end{equation}
\hfill$\blacksquare$
| {
"alphanum_fraction": 0.6080945884,
"avg_line_length": 32.7232142857,
"ext": "tex",
"hexsha": "73bf957368fd2815fb550e57590f2aa2ee394a71",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "91c5df1b018579b4b9c91d84f2d60ee482a001de",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "KosalaHerath/magnetic-2DEG-conductivity",
"max_forks_repo_path": "theory/sec_06_1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "91c5df1b018579b4b9c91d84f2d60ee482a001de",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "KosalaHerath/magnetic-2DEG-conductivity",
"max_issues_repo_path": "theory/sec_06_1.tex",
"max_line_length": 349,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "91c5df1b018579b4b9c91d84f2d60ee482a001de",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "KosalaHerath/magnetic-2DEG-conductivity",
"max_stars_repo_path": "theory/sec_06_1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4280,
"size": 10995
} |
\filetitle{plan}{Create new empty simulation plan object}{plan/plan}
\paragraph{Syntax}\label{syntax}
\begin{verbatim}
P = plan(M,Range)
\end{verbatim}
\paragraph{Input arguments}\label{input-arguments}
\begin{itemize}
\item
\texttt{M} {[} model {]} - Model object that will be simulated subject
to this simulation plan.
\item
\texttt{Range} {[} numeric {]} - Simulation range; this range must
exactly correspond to the range on which the model will be simulated.
\end{itemize}
\paragraph{Output arguments}\label{output-arguments}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\texttt{P} {[} plan {]} - New empty simulation plan.
\end{itemize}
\paragraph{Description}\label{description}
You need to use a simulation plan object to set up the following types
of more complex simulations or forecasts:
\begin{itemize}
\item
simulations or forecasts with some of the model variables temporarily
exogenised;
\item
  simulations with some of the non-linear equations solved exactly;
\item
  forecasts conditioned upon some variables.
\end{itemize}
The plan object is passed to the \href{model/simulate}{simulate} or
\href{model/jforecast}{\texttt{jforecast}} functions through the option
\texttt{'plan='}.
\paragraph{Example}\label{example}
| {
"alphanum_fraction": 0.7543035994,
"avg_line_length": 24.5769230769,
"ext": "tex",
"hexsha": "23f9459762dd1fbe5a6eb1ba75a13baf1afc0a0f",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z",
"max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave",
"max_forks_repo_path": "-help/plan/plan.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef",
"max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave",
"max_issues_repo_path": "-help/plan/plan.tex",
"max_line_length": 72,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave",
"max_stars_repo_path": "-help/plan/plan.tex",
"max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z",
"num_tokens": 341,
"size": 1278
} |
% !TeX program = pdfLaTeX
\documentclass[smallextended]{svjour3} % onecolumn (second format)
%\documentclass[twocolumn]{svjour3} % twocolumn
%
\smartqed % flush right qed marks, e.g. at end of proof
%
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[utf8]{inputenc}
\usepackage[hyphens]{url} % not crucial - just used below for the URL
\usepackage{hyperref}
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
%
% \usepackage{mathptmx} % use Times fonts if available on your TeX system
%
% insert here the call for the packages your document requires
%\usepackage{latexsym}
% etc.
%
% please place your own definitions here and don't use \def but
% \newcommand{}{}
%
% Insert the name of "your journal" with
% \journalname{myjournal}
%
%% load any required packages here
% Pandoc citation processing
\begin{document}
\title{Perdiz arrow points from Caddo burial contexts aid in defining
discrete behavioral regions \thanks{This manuscript is dedicated to the
memory of Professor Alston Thoms. Components of the analytical workflow
were developed and funded by a Preservation Technology and Training
grant (P14AP00138) to RZS from the National Center for Preservation
Technology and Training, as well as grants to RZS from the Caddo Nation
of Oklahoma, National Forests and Grasslands in Texas
(15-PA-11081300-033) and the United States Forest Service
(20-PA-11081300-074).} }
\titlerunning{Perdiz arrow points from Caddo burial contexts}
\author{ Robert Z. Selden 1 \and John E. Dockall 2 }
\authorrunning{ Selden and Dockall }
\institute{
Robert Z. Selden 1 \at
Heritage Research Center, Stephen F. Austin State University;
Department of Biology, Stephen F. Austin State University; and Cultural
Heritage Department, Jean Monnet University \\
\email{\href{mailto:[email protected]}{\nolinkurl{[email protected]}}} % \\
% \emph{Present address:} of F. Author % if needed
\and
John E. Dockall 2 \at
Cox\textbar McClain Environmental Consultants, Inc. \\
\email{\href{mailto:[email protected]}{\nolinkurl{[email protected]}}} % \\
% \emph{Present address:} of F. Author % if needed
}
\date{Received: date / Accepted: date}
% The correct dates will be entered by the editor
\maketitle
\begin{abstract}
Recent research in the ancestral Caddo area yielded evidence for
distinct \emph{behavioral regions}, across which material culture from
Caddo burials---bottles and Gahagan bifaces---has been found to express
significant morphological differences. This inquiry assesses whether
Perdiz arrow points from Caddo burials, assumed to reflect design
intent, may differ across the same geography, and extend the pattern of
shape differences to a third category of Caddo material culture. Perdiz
arrow points collected from the geographies of the northern and southern
Caddo \emph{behavioral regions} defined in a recent social network
analysis were employed to test the hypothesis that morphological
attributes differ, and are predictable, between the two communities.
Results indicate significant between-community differences in maximum
length, width, stem length, and stem width, but not thickness. Using the
same traditional metrics combined with the tools of machine learning, a
predictive model---support vector machine---was designed to assess the
degree to which community differences could be predicted, achieving a
receiver operator curve score of 97 percent, and an accuracy score of 94
percent. The subsequent geometric morphometric analysis identified
significant differences in Perdiz arrow point shape, size, and
allometry, coupled with significant results for modularity and
morphological integration. These findings bolster recent arguments that
established two discrete \emph{behavioral regions} in the ancestral
Caddo area defined on the basis of discernible morphological differences
across three categories of Caddo material culture.
\\
\keywords{
American Southeast \and
Caddo \and
Texas \and
archaeoinformatics \and
computational archaeology \and
machine learning \and
museum studies \and
digital humanities \and
STEM \and
}
\end{abstract}
\def\spacingset#1{\renewcommand{\baselinestretch}%
{#1}\small\normalsize} \spacingset{1}
\hypertarget{intro}{%
\section{Introduction}\label{intro}}
Perdiz arrow points (Fig. 1) are considered the epitome of the Late
Prehistoric Toyah lithic assemblage in Texas---which also includes
convex end scrapers or unifaces, prismatic blades, as well as two- and
four-beveled bifacial knives---and are representative of the Late
Prehistoric transition to the Protohistoric \cite{RN9718}. This
technological assemblage is typically attributed to groups of highly
mobile bison hunters, and has been documented across the full extent of
Texas. Our present understanding of the Toyah tool kit indicates that it
was successfully implemented in a broad-spectrum of hunting and foraging
lifeways that included not only bison (\emph{Bison bison}), but deer
(\emph{Odocoileus spp.}) and numerous other animal prey species
\cite{RN9718,RN9786}.
\begin{figure}
\includegraphics[width=1\linewidth]{ms-figs/figure0} \caption{Perdiz arrow points used in this study come from Caddo burial contexts at the Tuck Carpenter (41CP5), Johns (41CP12), B. J. Horton (41CP20), Washington Square Mound (41NA49), and Morse Mound (41SY27) sites in northeast Texas. Additional information for each Perdiz arrow point, including the option to download full-resolution 2D images of individual projectiles, can be found at https://scholarworks.sfasu.edu/ita-perdiz/.}\label{fig:fig0}
\end{figure}
While Perdiz arrow points have not been used to address more complex
research issues, the Toyah tool kit was recognized as a potential
contributor to discussions of Late Prehistoric social and cultural
identity. Initially identified by J. Charles Kelley on the basis of
technological and morphological differences in material culture, the
Toyah Phase (CE 1300 - 1700) occurred between the Protohistoric and the
preceding Austin Phase of the Late Prehistoric Period
\cite{RN9719,RN9720}. As noted by Arnn:
\begin{quote}
Toyah represents something of a paradox in which archaeologists have
identified one archaeological or material culture in the same region
where historians have documented numerous Native American groups and
significant cultural diversity \cite[47]{RN9718}.
\end{quote}
Stemming from the observations of Kelley, as well as later researchers
who viewed Toyah as a cultural entity, technological origins became a
point of further interest and debate from which two schools of thought
emerged regarding Toyah cultural manifestations: 1) that Toyah
represented the technology of Plains groups moving into Texas following
the bison herds \cite{RN9721,RN9722}, or 2) a \emph{technocomplex} or
suite of artifacts adopted by multiple distinct groups across Texas as
they participated in bison hunting \cite{RN9008,RN9723,RN9724}. In both
interpretations, primary agency is environmental \cite{RN9718}; either
people followed the bison from elsewhere, or the influx of bison spurred
adoption of the technology among the numerous distinct groups in Texas.
Research by Arnn \cite{RN9718,RN9716,RN5784,RN9717} emphasized aspects
of Toyah social identity, social fields, and agency, as well as the
archaeological visibility of these phenomena. Arnn recognized three
important scales of identity and interaction in his work:
community/band, marriage/linguistic group, and long-distance social
networks \cite{RN9718}. His ideas are important here because they
supplant a simple monocausal environmental explanation of material
culture variability with a multi-causal and scaled concept that includes
\emph{social identity}.
\hypertarget{perdiz-arrow-points}{%
\subsection{Perdiz arrow points}\label{perdiz-arrow-points}}
Perdiz arrow points generally follow two distinct manufacturing
trajectories, one that enlists flakes and another that enlists blade flakes
\cite{RN8999,RN9361,RN9000,RN9364}, and they encompass a greater range of
variation in shape and size than most arrow point types in Texas
\cite{RN7795,RN3149}. Lithic tool stone in the ancestral Caddo area of
northeast Texas is relatively sparse and consists primarily of chert,
quartzite, and silicified wood characteristic of the local geological
formations, which may contribute to local variation in shape and size
\cite{RN9364,RN439}. It has been demonstrated elsewhere that
morphological attributes of Perdiz arrow points from northeast Texas
vary significantly by time, raw material, and burial context
\cite{RN9364}. In outline, Perdiz arrow points possess a:
\begin{quote}
{[}t{]}riangular blade with edges usually quite straight but sometimes
slightly convex or concave. Shoulders sometimes at right angles to stem
but usually well barbed. Stem contracted, often quite sharp at base, but
may be somewhat rounded. Occasionally, specimen may be worked on one
face only or mainly on one face \ldots{} {[}w{]}orkmanship generally
good, sometimes exceedingly fine with minutely serrated blade edges
\cite[504]{RN5769}.
\end{quote}
A social network analysis of diagnostic artifacts from Historic Caddo
(post-CE 1680) sites in northeast Texas demonstrated two spatially
distinct \emph{behavioral regions} \cite{RN8031} (Fig. 2). The network
analysis was limited to Historic Caddo types; however, Formative Early
Caddo (CE 800 -- 1200) Gahagan bifaces and Caddo bottle types have been
found to express significant morphological variability following the
same geographic extent \cite{RN8074,RN7927,RN8370,RN8312}, extending the
prehistoric longevity for the \emph{behavioral regions} based on local
alterity. Gahagan bifaces from the ancestral Caddo area also differ
significantly in shape, size, and form compared with those recovered
from central Texas sites \cite{RN8322}, suggesting a second \emph{shape
boundary} between the ancestral Caddo area and central Texas.
\begin{figure}
\includegraphics[width=1\linewidth]{ms-figs/figure1} \caption{Historic Caddo network (CE 1680+) generated using the co-presence of diagnostic ceramic and lithic types---which include Perdiz arrow points---illustrating the ancestral Caddo region (gray outline) and the two (north [blue] and south [red]) Caddo behavioral regions [21]. The regions were identified using a modularity statistic to identify those nodes more densely connected to one another than to the rest of the network.}\label{fig:fig1}
\end{figure}
The goal of this exploratory endeavor was to assess whether metrics
collected for Perdiz arrow points support the \emph{shape boundary}
posited in recent social network and geometric morphometric analyses, to
determine whether linear metrics and shape variables might be useful
predictors of regional membership, and---if so---to identify those
morphological features that articulate with each \emph{behavioral
region}. It is assumed that complete Perdiz arrow points included as
offerings in Caddo burials represent the \emph{design intent} of the
maker. Should the analysis yield significant results, it would bolster
the argument for at least two discrete Caddo \emph{behavioral regions}
in northeast Texas; each empirically defined by discernible
morphological differences across three discrete categories of Caddo
material culture.
\hypertarget{caddo-behavioral-regions}{%
\subsection{Caddo behavioral regions}\label{caddo-behavioral-regions}}
In a June 18, 1937 Works Progress Administration interview with Lillian
Cassaway, Sadie Bedoka---a Caddo-Delaware woman raised with the
Caddo---stated that:
\begin{quote}
Each {[}Caddo{]} clan had its own shape to make its pottery. One clan
never thought of making anything the same pattern of another clan.
\emph{You could tell who made the pottery by the shape}
\cite[395]{RN9357x}.
\end{quote}
General differences in Caddo ceramic forms have been noted elsewhere
\cite{RN5650,RN7162}; however, the study of the Clarence H. Webb
collection was the first to illustrate a significant north-south
geographic shape difference among Hickory Engraved and Smithport Plain
Caddo bottle types \cite{RN8370}. That preliminary observation was later
confirmed using more robust samples of Hickory Engraved and Smithport
Plain bottles \cite{RN8074,RN7927}, then expanded to include a greater
variety of Caddo bottle types across a larger spatial and temporal
extent \cite{RN8312}.
The co-presence of diagnostic artifact and attribute types has been used
to define Caddo phases and periods, which serve as a heuristic tool that
aids archaeologists in explaining the local cultural landscape, as well
as regional differences between local landscapes. The Historic Caddo
network expands those efforts, augmenting the previously-defined phases
and periods, and emphasizing the dynamic and manifold relational
connections that reinforce and transcend the current categories
\cite{RN8031}. This was achieved by enlisting a multi-scalar
methodological approach \cite{RN5644,RN8039}, where northern and
southern communities were parsed into constituent groups using the
co-presence of diagnostic types paired with a modularity algorithm
\cite{RN8051,RN8024}. Most of the constituent groups identified in the
network analysis were found to articulate with known Caddo polities,
while others were not \cite{RN8031}.
A subsequent analysis of Gahagan bifaces confirmed that a second
category of Caddo material culture expressed significant morphological
differences across the same geography as the Hickory Engraved and
Smithport Plain bottles \cite{RN8158}. The morphology of Gahagan bifaces
from sites in central Texas was later found to differ significantly when
compared with those recovered from the Caddo region \cite{RN8322}. That
Gahagan bifaces were found to differ across two \emph{spatial
boundaries} was noteworthy, particularly since it has regularly been
assumed that these large bifaces were manufactured in central Texas and
arrived in the ancestral Caddo area as products of trade and/or exchange
\cite{RN8322,RN8158}. Further, that Gahagan bifaces were found to differ
across the same geography as those communities posited in the Historic
Caddo network analysis suggested that the temporal range of the
\emph{shape boundary} might extend to the Formative/Early Caddo period
(CE 800 - 1250); a hypothesis that was later confirmed in a more
comprehensive analysis of Caddo bottles \cite{RN8312}.
\hypertarget{methods-and-results}{%
\section{Methods and results}\label{methods-and-results}}
Sixty-seven intact Perdiz arrow points recovered from Caddo burial
contexts in Camp, Nacogdoches, and Shelby counties comprise the basis of
this study (\href{https://seldenlab.github.io/perdiz3/}{supplementary
materials}). A standard suite of linear metrics was collected for each
specimen, including maximum length, width, thickness, stem length, and
stem width. Following collection, data were imported to R 4.1.1
\cite{RN8584} (\href{https://seldenlab.github.io/perdiz3/}{supplementary
materials}), where boxplots were produced, along with a Principal
Components Analysis (PCA) followed by analyses of variance (ANOVA) to
test whether the morphology of Perdiz arrow points differs across the
shape boundary (Fig. 2).
Boxplots illustrate the distribution and mean for each of the five
variables (Fig. 3a-e), and the PCA (Fig. 3f) illustrates over 92 percent
of the variation in the sample among PC1 (84.65 percent) and PC2 (11.71
percent). ANOVAs demonstrate significant differences in Perdiz arrow
point morphology among four of the five variables (maximum length,
width, stem length, and stem width)
(\href{https://seldenlab.github.io/perdiz3/}{supplementary materials}).
Maximum thickness does not differ significantly between the northern and
southern communities, which led to the decision to conduct the
subsequent geometric morphometric analysis as a two dimensional, rather
than a three-dimensional, study
(\href{https://seldenlab.github.io/perdiz3/}{supplementary materials}).
\begin{figure}
\includegraphics[width=0.95\linewidth]{ms-figs/figure2} \caption{Boxplots for a, maximum length; b, maximum width; c, maximum thickness; d, stem length; e, stem width, and f, PCA for linear metrics associated with the Perdiz arrow points. Additional information related to the analysis, including data and code needed to reproduce these results, can be found in the supplemental materials at https://seldenlab.github.io/perdiz3/.}\label{fig:fig2}
\end{figure}
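Although the analysis was carried out in R, the linear-metric workflow (boxplots, PCA, and ANOVAs by region) can be sketched in Python as follows; the file name and the column names (\texttt{length}, \texttt{width}, \texttt{thickness}, \texttt{stem\_length}, \texttt{stem\_width}, \texttt{region}) are hypothetical placeholders rather than the authors' actual variable names.
\begin{verbatim}
# Sketch of the linear-metric workflow (boxplots, PCA, one-way ANOVA).
# The authors used R 4.1.1; this Python version is illustrative only,
# and the CSV path and column names are hypothetical.
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("perdiz_linear_metrics.csv")          # hypothetical file
metrics = ["length", "width", "thickness", "stem_length", "stem_width"]

# boxplots of each metric by behavioral region
df.boxplot(column=metrics, by="region")

# PCA on the standardized linear metrics
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(df[metrics]))

# one-way ANOVA (north vs. south) for each metric
for m in metrics:
    north = df.loc[df.region == "north", m]
    south = df.loc[df.region == "south", m]
    print(m, stats.f_oneway(north, south))
\end{verbatim}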
\hypertarget{predictive-model}{%
\subsection{Predictive model}\label{predictive-model}}
A \emph{support vector machine} is a supervised machine learning model
regularly used in classifying archaeological materials
\cite{RN9515,RN9516,RN9514,RN9513,RN10755,RN10754}, which has utility in
comparing and classifying datasets aggregated from digital repositories,
comparative collections, open access reports, as well as other digital
assets. For this effort, linear data were imported and modeled using the
\texttt{scikit-learn} package in Python \cite{scikit-learn,sklearn_api}
(\href{https://seldenlab.github.io/perdiz3/}{supplementary materials}),
and subsequently split into training (75 percent) and testing (25
percent) subsets. A \emph{standard scaler} was used to decrease the
sensitivity of the algorithm to outliers by standardizing features, and
a \emph{nested cross validation} of the training set was used to achieve
unbiased estimates of model performance, resulting in a mean cross
validation score of 86 percent
(\href{https://seldenlab.github.io/perdiz3/}{supplementary materials}).
The model was subsequently fit on the training set, yielding a receiver
operator curve score of 97 percent, and an accuracy score of 94 percent
(\href{https://seldenlab.github.io/perdiz3/}{supplementary materials}).
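A minimal \texttt{scikit-learn} sketch of the modeling steps described above (scaling, a 75/25 train--test split, nested cross-validation of a support vector classifier, and ROC-AUC and accuracy scoring on the held-out set) is given below; the feature matrix, labels, and hyperparameter grid are illustrative assumptions rather than the authors' actual settings.
\begin{verbatim}
# Illustrative support-vector-machine workflow; X and y are assumed to come
# from the linear-metric table sketched earlier, and the hyperparameter grid
# is a placeholder.
from sklearn.model_selection import (train_test_split, GridSearchCV,
                                     cross_val_score)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score

X = df[metrics].values
y = (df.region == "north").astype(int).values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.75, stratify=y, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC(probability=True))
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10],
                           "svc__gamma": ["scale", 0.1, 1]}, cv=5)

# nested cross-validation on the training set
print("mean CV score:", cross_val_score(grid, X_train, y_train, cv=5).mean())

# fit on the training set, then score on the held-out test set
grid.fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, grid.predict_proba(X_test)[:, 1]))
print("accuracy:", accuracy_score(y_test, grid.predict(X_test)))
\end{verbatim}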
\hypertarget{geometric-morphometrics}{%
\subsection{Geometric morphometrics}\label{geometric-morphometrics}}
Each of the arrow points was imaged using a flatbed scanner (HP Scanjet
G4050) at 600 dpi. The landmarking protocol developed for this study
(\href{https://seldenlab.github.io/perdiz3/}{supplementary materials})
included six landmarks and 24 equidistant semilandmarks to characterize
Perdiz arrow point shape, applied using the \texttt{StereoMorph} package
in R \cite{RN8973}. The characteristic points and tangents used in the
landmarking protocol were inspired by the work of Birkhoff
\cite{RN5700}.
Landmarks were aligned to a global coordinate system
\cite{RN8102,RN8587,RN8384}, achieved through generalized Procrustes
superimposition \cite{RN8525}, performed in R 4.1.1 \cite{RN8584} using
the \texttt{geomorph} package v4.0.1 \cite{RN8565,RN9565} (Fig.4).
Procrustes superimposition translates, scales, and rotates the
coordinate data allowing for comparisons among objects
\cite{RN5698,RN8525}. The \texttt{geomorph} package uses a partial
Procrustes superimposition that projects the aligned specimens into
tangent space subsequent to alignment in preparation for the use of
multivariate methods that assume linear space \cite{RN8511,RN8384}.
\begin{figure}
\includegraphics[width=1\linewidth]{ms-figs/figure3} \caption{Results of generalized Procrustes analysis, illustrating mean shape (black) and all specimens in the sample (gray). Additional information related to the GPA, including those data and code needed to reproduce these results, can be found in the supplemental materials at https://seldenlab.github.io/perdiz3/.}\label{fig:fig3}
\end{figure}
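For readers unfamiliar with Procrustes superimposition, a minimal Python sketch of the core steps (translation to the centroid, scaling to unit centroid size, and iterative rotation onto a consensus shape) is given below. It is a simplified stand-in for the partial Procrustes superimposition and tangent-space projection performed by the \texttt{geomorph} package, and the landmark array dimensions reflect the 30 landmarks and semilandmarks used in this study.
\begin{verbatim}
# Minimal generalized Procrustes analysis for 2D landmark data; a simplified
# illustration of the alignment performed by geomorph (without semilandmark
# sliding or tangent-space projection).
import numpy as np
from scipy.linalg import orthogonal_procrustes

def gpa(shapes, n_iter=10):
    # shapes: array of shape (n_specimens, n_landmarks, 2)
    X = np.asarray(shapes, dtype=float).copy()
    X -= X.mean(axis=1, keepdims=True)                   # translate to centroid
    X /= np.linalg.norm(X, axis=(1, 2), keepdims=True)   # unit centroid size
    mean = X[0]
    for _ in range(n_iter):
        for i in range(len(X)):
            R, _ = orthogonal_procrustes(X[i], mean)     # optimal rotation
            X[i] = X[i] @ R
        mean = X.mean(axis=0)
        mean /= np.linalg.norm(mean)                     # keep unit size
    return X, mean

# aligned, consensus = gpa(landmarks)   # landmarks: (67, 30, 2) in this study
\end{verbatim}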
Principal components analysis \cite{RN8576,RN10875} was used to
visualize shape variation among the arrow points (Fig. 5). The shape
changes described by each principal axis are commonly visualized using
thin-plate spline warping of a reference image or 3D mesh
\cite{RN8555,RN8553}. A residual randomization permutation procedure
(RRPP; n = 10,000 permutations) was used for all Procrustes ANOVAs
\cite{RN8579,RN8334}, which has higher statistical power and a greater
ability to identify patterns in the data should they be present
\cite{RN6995}. To assess whether shape changes with size (allometry),
and differs by group (region), Procrustes ANOVAs \cite{RN7046} were also
run that enlist effect-sizes (z-scores) computed as standard deviates of
the generated sampling distributions \cite{RN8477}. Procrustes variance
was used to discriminate between regions and compare the amount of shape
variation (morphological disparity) \cite{RN5703}, estimated as
Procrustes variance using residuals of linear model fit \cite{RN8314}. A
pairwise comparison of morphological integration was used to test the
strength of integration between blade and basal morphology using a
z-score \cite{RN8588,RN8477,RN8340,RN10874}.
\begin{figure}
\includegraphics[width=1\linewidth]{ms-figs/figure4} \caption{Principal components analysis plot (PC1/PC2) for Perdiz arrow points by behavioral region/community (top; gray squares, north; orange triangles, south), and results of modularity (bottom left) and blade/base morphological integration (bottom right) analyses. Additional information related to the PCA, including the full listing of results and those data and code needed to reproduce these results, can be found in the supplemental materials at https://seldenlab.github.io/perdiz3/.}\label{fig:fig4}
\end{figure}
The analysis of modularity, which compares within-module covariation of
landmarks against between-module covariation was significant (Fig. 4 and
\href{https://seldenlab.github.io/perdiz3/}{supplementary materials})
\cite{RN10874,RN5170}, demonstrating that Perdiz arrow point blades and
bases are, in fact, modular. The test for morphological integration was
also significant (Fig. 4 and
\href{https://seldenlab.github.io/perdiz3/}{supplementary materials}),
indicating that the blades and bases of Perdiz arrow points are
integrated. These results demonstrate that blade and base shapes for
Perdiz arrow points are predictable; a finding that would have utility
in subsequent studies of Perdiz arrow point morphology that incorporate
fragmentary specimens.
A Procrustes ANOVA was used to test whether a significant difference
exists in Perdiz arrow point (centroid) size (RRPP = 10,000; Rsq =
0.30681; Pr(\textgreater F) = 1e-04), followed by another to test
whether a significant difference exists in arrow point shape by region
(northern vs.~southern) (RRPP = 10,000; Rsq = 0.0536; Pr(\textgreater F)
= 0.0161). A comparison of mean consensus configurations was used to
characterize intraspecific shape variation of Perdiz arrow points from
the northern and southern \emph{behavioral regions}. Diacritical
morphology occurs primarily in basal shape: the angle between the
shoulder and base is more acute, and the base is generally shorter
and narrower, in the southern \emph{behavioral region} than it is in the
north (\href{https://seldenlab.github.io/perdiz3/}{supplementary
materials}).
\hypertarget{discussion}{%
\section{Discussion}\label{discussion}}
The \emph{shape boundary} empirically delineates two discrete
\emph{behavioral regions} in the ancestral Caddo area. That the Perdiz
arrow points recovered from Caddo burials north and south of the
\emph{shape boundary} were found to differ significantly, expands the
scope of the \emph{behavioral regions} to include three classes of
artifacts (Caddo bottles, bifaces, and---now---arrow points)
\cite{RN8074,RN7927,RN8370,RN8312,RN8322,RN8158}. For material culture
offerings included in burial contexts, the Caddo were \emph{selecting}
for significant morphological differences in bottles and bifaces
recovered from either side of the \emph{shape boundary} (Fig. 6).
Results clearly illustrate that those morphological differences among
Perdiz arrow points found in the northern and southern \emph{behavioral
regions} (Fig. 6d) are predictable
(\href{https://seldenlab.github.io/perdiz3/}{supplementary materials}),
and can be disaggregated using the standard suite of linear metrics
regularly collected in the context of cultural resource management
endeavors.
\begin{figure}
\includegraphics[width=1\linewidth]{ms-figs/figure6} \caption{Mean shapes and comparisons for a, Formative/Early and b, Late/Historic bottles; c, Formative/Early Gahagan bifaces; and d, Middle/Late Perdiz arrow points from Caddo burial contexts in the northern and southern behavioral regions. In the comparisons of mean shape, the northern population appears in gray, and the southern population appears in black.}\label{fig:fig5}
\end{figure}
The geometric morphometric analysis demonstrated significant
morphological differences for Perdiz arrow points recovered north and
south of the \emph{shape boundary}, where the most pronounced difference
was found to occur in basal morphology (Fig. 6). Allometry was also
found to be significant, demonstrating that Perdiz arrow point shape
differs with size. Those arrow points used in this study are considered
to represent \emph{design intent}, and are not thought to exhibit
retouch or resharpening. This finding provides evidence in support of
the argument that Perdiz arrow point morphology is labile \cite{RN9364}.
The character of those morphological differences found to occur in
Perdiz arrow points (basal morphology and size) is also suggestive of
differential approaches to the practice of hafting.
Blades and bases of Perdiz arrow points were found to be both modular
and morphologically integrated. This indicates that each module
functions independently, and that basal shape is a predictor of blade
shape, and vice-versa. Further work is warranted to assess whether
Perdiz arrow points from groups within the boundaries of the northern
and southern \emph{behavioral regions} may express unique morphologies,
aiding in further delimiting local boundaries associated with
constituent Caddo groups.
\hypertarget{standardization-and-specialization}{%
\subsection{Standardization and
specialization}\label{standardization-and-specialization}}
Standardization has been conceptually linked to the notion of
approximating a perceived ideal through the realization of a mental
template \cite[156]{RN9715}. Morphological attributes are representative
of intentional attributes related to morphological characteristics
\cite{RN7051}, and dimensional standardization has utility in
identifying the range of variation and overlap of product morphology
both in and between communities \cite{RN5779}. Further, relative
dimensional standardization may imply a smaller number of production
units when contrasted with larger units \cite{RN7051}. Standardization
can also result from raw material selection and (for lithics) reduction
practices \cite{RN9712,RN9713,RN6494,RN9714}.
One common argument used to establish the presence of specialization
includes large quantities of highly standardized products interpreted as
representative of a single---or limited number of---production units
\cite{RN7019}. There also exists the possibility that an increase in
specialization can result in increased morphological diversity,
depending on the organization of production \cite{RN6451}. Correlations
between specialization and standardization have often, but not always,
been supported by ethnographic and experimental data
\cite{RN9725,RN9726,RN7137}. In the case of material culture from Caddo
burials in the northern and southern \emph{behavioral regions}, bottle
and biface morphologies communicate important information related to
group affiliation \cite{RN9727} through deliberate and differential
production (bottles) and aesthetic choices (bifaces). Differences in
Caddo bottle morphology have aided in the identification of two large
production units; one in each of the \emph{behavioral regions}, and it
is likely the case that each is comprised of a series of smaller
constituent production units bounded by geography and time.
While Caddo bottles are representative of a complete system---or
unit---the function of a biface (Perdiz arrow points and Gahagan bifaces
in this context) is more limited, in that they represent a singular
component of a much more dynamic system. Dimensional standardization in
size has been identified for both the Caddo bottles and Gahagan bifaces
\cite{RN8074,RN7927,RN8370,RN8312,RN8322}. The similarities and
differences in bottle shapes were found to transcend typological
assignments, and bottles from the northern \emph{behavioral region} have
been found to express a significantly greater diversity in size,
yielding evidence for size standardization in the southern
\emph{behavioral region} \cite{RN8312}. It is also the case that Gahagan
bifaces from the ancestral Caddo area have been found to express a
significantly greater diversity in size compared to those from central
Texas \cite{RN8322}. These findings suggest that social and
manufacturing tolerances, as they relate to Caddo burial traditions, may
have been guided by regionally-specific morphological traits that
signaled group \emph{identity}.
\hypertarget{morphologically-distinct-behavioral-regions}{%
\subsection{Morphologically-distinct behavioral
regions}\label{morphologically-distinct-behavioral-regions}}
In considering the role(s) of artifacts as aspects of \emph{social
identity}, it is important not to lose sight of the fact that people and
artifacts are active agents in the production and maintenance of
\emph{social identity(ies)}. Both categories of artifacts (bottles and
bifaces) contribute to local and regional \emph{communities of identity}
and \emph{communities of practice} \cite{RN8061}. Generally, this
concept may be more easily applied to bottles since they were
manufactured and used by individuals sharing collective \emph{Caddo
identities}. Bifaces (Perdiz arrow points and Gahagan bifaces)
potentially represent multiple \emph{identities}---those being the
Caddo, as users; and non-Caddo, as producers---at least with regard to
chipped stone tools incorporated in mortuary contexts. This idea lends
defensible credence to the concept of morphologically-distinct
\emph{behavioral regions} among the Caddo, while integrating the
possibility of understanding interactions between Caddo and non-Caddo
groups, to include the movement of material culture between Caddo
\emph{behavioral regions}.
Three categories of Caddo material culture differ north and south of the
\emph{shape boundary}, indicating a haecceity of regionally-distinct
perspectives related to production (bottles), and aesthetic
choice/cultural interaction (bifaces). These differing perspectives
incorporate group decisions that include shape, size, form, and
decorative expression, which likely represent the culmination of
generational perspectives \cite{RN5610}. Simply stated, such
perspectives are representative of tradition. Eckert and colleagues
\cite{RN8061} indicate that provenance, the origin or source of an item,
is a significant component of understanding the interrelatedness of
\emph{communities of identity} and \emph{communities of practice}.
Perdiz arrow points and Gahagan bifaces recovered north and south of the
\emph{shape boundary} are morphologically distinct. A second \emph{shape
boundary} demonstrates that Gahagan bifaces differ significantly between
the ancestral Caddo region and central Texas, where they are currently
thought to have been manufactured. This suggests that those
\emph{communities of practice} that articulate with the production of
chipped stone artifacts recovered from Caddo interments are not Caddo.
It is entirely possible that there are no \emph{communities of practice}
at all for chipped stone artifacts recovered from Caddo mortuary
contexts. However, there do appear to have been \emph{communities of
practice} associated with Perdiz arrow points recovered from
non-mortuary contexts in the ancestral Caddo area, which may more
readily reflect the distinct retouch or resharpening approaches employed
by Caddo knappers \cite{RN9364,RN8486}. Similar interpretations can be
applied to Gahagan bifaces, as few have been reported outside of Caddo
mortuary contexts. It may be more fitting to conceive of Perdiz arrow
points and Gahagan bifaces as indicative of \emph{communities of
identity} rather than \emph{communities of practice}, due to the
contextual discrepancy evinced through mortuary and non-mortuary
settings. The provenance of bifaces from Caddo mortuary contexts can
most assuredly be considered non-local, or produced outside of the
ancestral Caddo region, based on multiple factors that include raw
material, workmanship, morphology, and context.
\hypertarget{conclusion}{%
\section{Conclusion}\label{conclusion}}
This study demonstrated that linear metrics and shape variables
collected for Perdiz arrow points support the \emph{shape boundary}
posited in recent social network and geometric morphometric analyses,
and determined that the same metrics can be used to predict regional
membership. Those morphological features that discriminate between
Perdiz arrow points recovered from each \emph{behavioral region} were
identified using geometric morphometrics, with substantive differences
found to occur in the size and basal morphology of the projectiles.
Blade and base shape were found to be both modular and morphologically
integrated, suggesting that blade and base shapes are predictable. While
evidence from one category---Caddo bottles---supports discussions of
Caddo production, the other two---bifaces---more readily articulate with
production activities outside of the region by non-Caddo makers. Such
production activity is more likely to be localized than exchange
systems, thus assumed to leave a clearer signature \cite{RN7019}.
Further work is warranted to expand the scope of this research program
to include analyses of differential production in the two
\emph{behavioral regions} using a greater diversity of diagnostic Caddo
artifact types.
\hypertarget{acknowledgments}{%
\section*{Acknowledgments}\label{acknowledgments}}
\addcontentsline{toc}{section}{Acknowledgments}
We express our gratitude to the Caddo Nation of Oklahoma and the
Anthropology and Archaeology Laboratory at Stephen F. Austin State
University for the requisite permissions and access to the NAGPRA
objects from the Washington Square Mound site and Turner collections,
and to Tom A. Middlebrook for brokering access to the Perdiz arrow
points from burials at the Morse Mound site. We wish to thank Michael J.
Shott and Casey Wayne Riggs for their useful comments and constructive
criticisms, which further improved this manuscript.
\hypertarget{data-management}{%
\section*{Data Management}\label{data-management}}
\addcontentsline{toc}{section}{Data Management}
The data and analysis code associated with this project can be accessed
through the GitHub repository
(\url{https://github.com/seldenlab/perdiz3}) or the supplementary
materials (\url{https://seldenlab.github.io/perdiz3/}); which are
digitally curated on the Open Science Framework at \newline DOI:
10.17605/OSF.IO/UK9ZD.
\bibliographystyle{spphys}
\bibliography{mybibfile.bib}
\end{document}
| {
"alphanum_fraction": 0.8097308665,
"avg_line_length": 55.5,
"ext": "tex",
"hexsha": "04044582273374ea0788ded84b236ad9a90b9cc5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f34e4054c9ce521992675009b5be2f90f4f092f3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "seldenlab/perdiz3",
"max_forks_repo_path": "ms/perdiz3/perdiz3.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f34e4054c9ce521992675009b5be2f90f4f092f3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "seldenlab/perdiz3",
"max_issues_repo_path": "ms/perdiz3/perdiz3.tex",
"max_line_length": 561,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f34e4054c9ce521992675009b5be2f90f4f092f3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "seldenlab/perdiz3",
"max_stars_repo_path": "ms/perdiz3/perdiz3.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 8488,
"size": 35187
} |
% !TEX root = ../thesis_phd.tex
%
\pdfbookmark[0]{Acknowledgement}{Acknowledgement}
\chapter*{Acknowledgement}
\label{sec:acknowledgement}
\vspace*{-10mm}
% \blindtext
First of all, I would like to thank my supervisor, Dr LUO Xiapu Daniel, for guiding me through the completion of this thesis during a difficult time. Daniel is knowledgeable and helpful; he led me to choose an interesting topic and then provided valuable suggestions on how to learn the most from this research work.
Before starting this thesis, I had no idea about blockchain and I was not an experienced programmer. It was a huge challenge for me to settle on a research topic of analyzing transactions on the Ethereum blockchain. Daniel was patient enough to clear up all my questions and make me comfortable and confident in taking on the challenge.
Finally, I would like to thank all my friends who have supported me in completing my thesis. With all of your help, I keep improving myself and have learned to solve problems more efficiently. | {
"alphanum_fraction": 0.7997936017,
"avg_line_length": 74.5384615385,
"ext": "tex",
"hexsha": "a1195a3d43d2cb4394e1ad3662c22055e29b3250",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-09-15T08:52:26.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-09-15T08:52:26.000Z",
"max_forks_repo_head_hexsha": "6993accf0cb85e23013bf7ae6b04145724a6dbd2",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "onecklam/ethereum-graphviz",
"max_forks_repo_path": "thesis-report/content/acknowledgment.tex",
"max_issues_count": 3,
"max_issues_repo_head_hexsha": "6993accf0cb85e23013bf7ae6b04145724a6dbd2",
"max_issues_repo_issues_event_max_datetime": "2021-09-08T02:48:20.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-06-08T22:52:09.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "cklamstudio/ethereum-graphviz",
"max_issues_repo_path": "thesis-report/content/acknowledgment.tex",
"max_line_length": 320,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "6993accf0cb85e23013bf7ae6b04145724a6dbd2",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "cklamstudio/ethereum-graphviz",
"max_stars_repo_path": "thesis-report/content/acknowledgment.tex",
"max_stars_repo_stars_event_max_datetime": "2021-07-04T04:26:50.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-09-13T09:15:02.000Z",
"num_tokens": 205,
"size": 969
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% tH %%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection[\tHq]{\tHq}
\label{subsec:tHq}
This section describes the MC samples used for the modelling of \tH\ production.
Section~\ref{subsubsec:tHq_aMCP8} describes the \MGNLOPY[8] samples.
\subsubsection[MadGraph5\_aMC@NLO+Pythia8]{\MGNLOPY[8]}
\label{subsubsec:tHq_aMCP8}
\paragraph{Samples}
%\label{par:tHq_aMCP8_samples}
The descriptions below correspond to the samples in Table~\ref{tab:tHq_aMCP8}.
\begin{table}[htbp]
\begin{center}
\caption{Nominal \tH\ samples produced with \MGNLOPY[8].}
\label{tab:tHq_aMCP8}
\begin{tabular}{ l | l }
\hline
DSID range & Description \\
\hline
346188 & \tHq\, \Hgg\, four flavour \\
346229 & \tHq\, \Hbb\, four flavour \\
346230 & \tHq\, \Htautau/\HZZ/\HWW\, four flavour \\
346414 & \tHq\, \Hllll, four flavour \\
346677 & \tHq\, \Hgg, four flavour, UFO model \\
\hline
\end{tabular}
\end{center}
\end{table}
\paragraph{Exceptions:}
If and only if you are using the UFO model sample, the correct version is
\MGNLO[2.6.2].
\paragraph{Short description:}
The production of \tHq events was modelled using the \MGNLO[2.6.0]~\cite{Alwall:2014hca}
generator at NLO with the \NNPDF[3.0nlo]~\cite{Ball:2014uwa} parton distribution function~(PDF).
The events were interfaced with \PYTHIA[8.230]~\cite{Sjostrand:2014zea} using the A14 tune~\cite{ATL-PHYS-PUB-2014-021} and the
\NNPDF[2.3lo]~\cite{Ball:2014uwa} PDF set.
The decays of bottom and charm hadrons were simulated using the \EVTGEN[1.6.0] program~\cite{Lange:2001uf}.
\paragraph{Long description:}
The \tHq samples were simulated using the \MGNLO[2.6.0]~\cite{Alwall:2014hca}
generator at NLO with the \NNPDF[3.0nlo]~\cite{Ball:2014uwa} parton distribution function~(PDF). The events were interfaced with
\PYTHIA[8.230]~\cite{Sjostrand:2014zea}~ using the A14 tune~\cite{ATL-PHYS-PUB-2014-021} and the \NNPDF[2.3lo]~\cite{Ball:2014uwa} PDF set.
The top quark was decayed at LO using \MADSPIN~\cite{Frixione:2007zp,Artoisenet:2012st} to preserve spin correlations,
whereas the Higgs boson was decayed by \PYTHIA in the parton shower. The samples
were generated in the four-flavour scheme.
The functional form of the renormalisation and factorisation scales was set to the
default scale $0.5\times \sum_i \sqrt{m^2_i+p^2_{\text{T},i}}$, where the sum runs over
all the particles generated from the matrix element calculation.
The decays of bottom and charm hadrons were simulated using the \EVTGEN[1.6.0] program~\cite{Lange:2001uf}.
| {
"alphanum_fraction": 0.7127906977,
"avg_line_length": 43,
"ext": "tex",
"hexsha": "a46067e9c50a8e0468b00b5533c39e45bd447938",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e2640e985974cea2f4276551f6204c9fa50f4a17",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "diegobaronm/QTNote",
"max_forks_repo_path": "template/MC_snippets/tHq.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e2640e985974cea2f4276551f6204c9fa50f4a17",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "diegobaronm/QTNote",
"max_issues_repo_path": "template/MC_snippets/tHq.tex",
"max_line_length": 140,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e2640e985974cea2f4276551f6204c9fa50f4a17",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "diegobaronm/QTNote",
"max_stars_repo_path": "template/MC_snippets/tHq.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 904,
"size": 2580
} |
\subsection{Strategy results}
\label{sec:results_strategy}
Table \ref{table:financial_metrics} shows the results of the benchmark and the
strategy under test. We can see that the Sharpe Ratio estimate is larger for
the benchmark than for the strategy under test. Later, the Probabilistic
Sharpe Ratio is evaluated to analyze the confidence in the Sharpe Ratio
estimate for different reference values. The Sortino ratio is larger for the
strategy under test, which is consistent with the win-loss ratio observation.
There are no considerable differences in the other indices, with the exception
of the volatility, which differs by roughly two orders of magnitude.
Two indices were not computed for the benchmark strategy: the correlation to
the underlying (because it would be equal to 1) and the return on execution
costs. The latter would only add noise because just two trades are executed:
one at the start of the time series and one at the end. Finally, because of
the low observed correlation value, we can state that the two strategies
behave quite differently.
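For reference, the core indices in Table \ref{table:financial_metrics} can be computed from a return series as sketched below. The per-period (non-annualized) convention, the zero risk-free rate, and the particular downside-deviation definition used for the Sortino ratio are assumptions, so exact values depend on the conventions adopted in this work.
\begin{verbatim}
# Sketch of per-period financial metrics for a return series; assumes a zero
# risk-free rate, no annualization, and a downside-deviation Sortino ratio.
import numpy as np

def summary(returns):
    r = np.asarray(returns, dtype=float)
    wins, losses = r[r > 0], r[r < 0]
    downside = np.sqrt(np.mean(np.minimum(r, 0.0) ** 2))
    return {
        "sharpe": r.mean() / r.std(ddof=1),
        "sortino": r.mean() / downside,
        "win_loss_ratio": wins.mean() / abs(losses.mean()),
        "win_rate": (r > 0).mean(),
        "avg_return": r.mean(),
        "volatility": r.std(ddof=1),
    }
\end{verbatim}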
\begin{table}[H]
\centering
\begin{tabular}{| c | c | c |}
\hline
\multicolumn{3}{|c|}{Financial metrics} \\
\hline
Metric & Buy and hold & Strategy under test \\
\hline
Sharpe ratio & 2.2156 & 1.2289 \\
\hline
Sortino ratio & 2.9138 & 5.7169 \\
\hline
Win loss ratio & 1.1171 & 8.0973 \\
\hline
Win rate & 0.5637 & 0.5833 \\
\hline
Average return & 0.0073 & 0.0065 \\
\hline
Volatility & 1.2062 & 0.0352 \\
\hline
Correlation to underlying & - & 0.0454 \\
\hline
Return on execution costs & - & 21.2264 \\
\hline
\end{tabular}
\caption{Financial metrics benchmark.}
\label{table:financial_metrics}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{results/images/log_odds_ratio_psr.png}
\caption{Odds ratio for the Probabilistic Sharpe Ratio of the buy and hold strategy and the strategy under test.}
\label{fig:prob_sharpe_ratio}
\end{figure}
Because the Probabilistic Sharpe Ratio is compared against a sequence of target
Sharpe Ratios (0.01 increments between 0 and 1), it is better shown in terms of
the log-odds ratio of the PSR values in figure \ref{fig:prob_sharpe_ratio}. It
is observed that for both strategies the odds quickly drop to quite low values
for target Sharpe Ratios above 0.1. This is explained by analyzing the
Probabilistic Sharpe Ratio equation (\ref{eqn:prob_sharpe_ratio}). The
denominator of the test statistic, i.e. the standard error, is large because of
the extremely large kurtosis and skewness coefficients that both strategies
exhibit. See table \ref{table:return_moments} and recall that the normal
distribution has a skewness of 0 and a kurtosis of 3. Together with the table,
figures \ref{fig:b_h_return_distribution} and \ref{fig:st_return_distribution}
display histograms for the return distributions.
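A minimal sketch of the Probabilistic Sharpe Ratio computation behind figure \ref{fig:prob_sharpe_ratio} is shown below. It assumes the standard Bailey--L\'opez de Prado form of Eq. (\ref{eqn:prob_sharpe_ratio}); the sample size $n$ used here is a placeholder, while the Sharpe Ratio, skewness, and kurtosis values are taken from Tables \ref{table:financial_metrics} and \ref{table:return_moments}.
\begin{verbatim}
# Sketch of the Probabilistic Sharpe Ratio, assuming the standard form
# PSR(SR*) = Phi( (SR_hat - SR*) * sqrt(n - 1)
#                 / sqrt(1 - skew*SR_hat + (kurt - 1)/4 * SR_hat**2) ).
import numpy as np
from scipy.stats import norm

def psr(sr_hat, sr_target, n, skew, kurt):
    denom = np.sqrt(1.0 - skew * sr_hat + (kurt - 1.0) / 4.0 * sr_hat**2)
    return norm.cdf((sr_hat - sr_target) * np.sqrt(n - 1.0) / denom)

# log-odds over a grid of target Sharpe Ratios, as in the figure
targets = np.arange(0.0, 1.0, 0.01)
n_obs = 252   # placeholder: number of return observations is not given here
p = psr(sr_hat=1.2289, sr_target=targets, n=n_obs, skew=19.11, kurt=450.52)
log_odds = np.log(p / (1.0 - p))
\end{verbatim}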
\begin{table}[H]
\centering
\begin{tabular}{| c | c | c |}
\hline
\multicolumn{3}{|c|}{Moments of returns} \\
\hline
Metric & Buy and hold & Strategy under test \\
\hline
Mean of returns & 0.0073 & 0.0065 \\
\hline
Variance of returns & 0.0039 & 3.4018 . $10^{-6}$ \\
\hline
Skewness of returns & 0.2973 & 19.1115 \\
\hline
Kurtosis of returns & 11.9335 & 450.5218 \\
\hline
\end{tabular}
\caption{Moments of the returns for both strategies.}
\label{table:return_moments}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=1.1\textwidth]{results/images/hist_returns_btc.png}
\caption{Buy and hold return distribution.}
\label{fig:b_h_return_distribution}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1.1\textwidth]{results/images/hist_returns_st.png}
\caption{Strategy under test return distribution.}
\label{fig:st_return_distribution}
\end{figure}
\documentclass[9pt,twocolumn,twoside]{styles/osajnl}
\usepackage{fancyvrb}
\usepackage[colorinlistoftodos,prependcaption,textsize=normal]{todonotes}
\newcommand{\TODO}[2][]{\todo[color=red!10,inline,#1]{#2}}
\newcommand{\GE}{\TODO{Grammar}}
\newcommand{\SE}{\TODO{Spelling}}
\newcommand{\TE}{\TODO{Term}}
\newcommand{\CE}{\TODO{Citation}}
\journal{i524}
\title{Ansible}
\author[1]{Anurag Kumar Jain}
\author[1,*]{Gregor von Laszewski}
\affil[1]{School of Informatics and Computing, Bloomington, IN 47408, U.S.A.}
\affil[*]{Corresponding authors: [email protected]}
\dates{Paper 1, \today}
\ociscodes{Cloud, Ansible, Playbook, Roles}
% replace this with your url in github/gitlab
\doi{\url{https://github.com/anurag2301/sp17-i524}}
\begin{abstract}
Ansible is a powerful open-source automation tool that can be used for
configuration management, deployment, and
orchestration \cite{www-ansible-wikipedia}. This paper gives an in-depth
overview of Ansible.
\newline
\end{abstract}
\setboolean{displaycopyright}{true}
\begin{document}
\maketitle
\TODO{This review document is provided for you to achieve your
best. We have listed a number of obvious opportunities for
improvement. When improving it, please keep this copy untouched and
instead focus on improving report.tex. The review does not include
all possible improvement suggestions, and if you see a comment you may
want to check if this comment applies elsewhere in the document.}
\TODO{Abstract: remove ``This paper''; it's clear that this is a paper, so you do
not have to mention it. Furthermore, it's actually a technology
review. The abstract has grammar errors. Remove the instructor from the author list}
\section{Introduction}
Ansible \CE is an open source automation engine which can be used to
automate cloud provisioning, configuration management, and application
deployment. It is designed to be minimal in nature, secure,
consistent, and highly reliable \cite{www-ansible3}. In many respects,
it differs from other management tools and aims to provide large
productivity gains. It has an extremely low learning curve and seeks
to solve major unsolved IT challenges under a single banner. Michael
DeHaan developed the Ansible \GE platform and Ansible\GE, Inc. was the
company set up to commercially support and sponsor Ansible. The
company was later acquired by Red Hat Inc. It is available on Fedora,
Red Hat Enterprise Linux, CentOS and other operating
systems \cite{www-ansible-wikipedia}.
\TODO{place the proper citation after ansible in the firs sentence. Also, except for the beginning of sentence , lowercase ansible }
\section{Architecture}
One of the primary differences between Ansible \GE and many other tools in
the space is its architecture. Ansible is an agentless tool: it
doesn't require any software to be installed on the remote machines
to make them manageable. By default it manages remote machines over
SSH or WinRM, which are natively present on those
platforms \cite{www-ansible}.
Like many other configuration management tools, Ansible \GE
distinguishes two types of servers: controlling machines and
nodes. Ansible uses a single controlling machine where the
orchestration begins. Nodes are managed by a controlling machine over
SSH. The location of the nodes is described by the inventory of the
controlling machine \cite{www-ansible3}.
Modules are deployed by Ansible \GE over SSH. These modules are
temporarily stored in the nodes and communicate with the controlling
machine through a JSON protocol over the standard
output \cite{www-ansible}.
The design goals of Ansible \GE include consistency, high reliability, a
low learning curve, security, and a minimalistic design. The
security comes from the fact that Ansible \GE doesn't require users to
deploy agents on nodes and manages remote machines using SSH or
WinRM. Ansible doesn't require dedicated users or credentials - it
respects the credentials that the user supplies when running
Ansible \GE. Similarly, Ansible \GE does not require administrator access; it
leverages sudo, su, and other privilege escalation methods on request
when necessary \cite{github-ansible}. If needed, Ansible \GE can connect
with LDAP, Kerberos, and other centralized authentication management
systems \cite{www-ansible2}.
\TODO{ except for the beginning of sentences , lowercase ansible }
\section{Advanced Features}
The Ansible \GE Playbook language includes a variety of features which
allow complex automation flows: conditional execution of
tasks, the ability to gather variables and information from the remote
system, the ability to spawn asynchronous long-running actions, the ability to
operate in either a push or a pull configuration, a
“check” mode to test for pending changes without applying them, and
the ability to tag certain plays and tasks so that only certain parts
of a configuration are applied \cite{www-ansible3}. These features allow your
applications and environments to be modelled \SE simply and easily, in a
logical framework that is easily understood not just by the automation
developer, but by anyone from developers to operators to CIOs. Ansible
has low overhead and is much smaller when compared to other tools like \GE
Puppet \cite{www-ansible4}.
\TODO{ except for the beginning of sentences , lowercase ansible. Also, please revise spelling of modelled. }
\section{Playbook and Roles}
Playbooks are what Ansible \GE uses to perform automation and
orchestration. They are Ansible's \GE configuration, deployment, and
orchestration language. They can be used to describe the policy you need
your remote systems to enforce, or a set of steps in a general IT
process \cite{www-ansible5}.
At \GE a basic level, playbooks can be used to manage configurations and
deployments to remote machines. At a more advanced level, they can
be used to sequence multi-tier rollouts involving rolling
updates, and to delegate actions to other hosts while
interacting with monitoring servers and load balancers at the same time.
Playbooks consist of a series of ‘plays,’ \GE which are used to define
automation across a set of hosts, known as the ‘inventory’. These
‘plays’ generally consist of multiple ‘tasks’ that can select one,
many, or all of the hosts in the inventory, where each task is a call
to an Ansible \GE module - a small piece of code for doing a specific
task. The tasks may be complex, such as spinning up an entire cloud
formation infrastructure in Amazon EC2. Ansible includes hundreds of
modules which help it perform a vast range of
tasks \cite{www-ansible}.
Similar to many other languages, Ansible \GE supports encapsulating
Playbook tasks into reusable units called ‘roles.’ These roles can be
used to easily apply common configurations in different scenarios,
such as having a common web server configuration role which can be
used in development, test, and production automation. The Ansible \GE
Galaxy community site contains thousands of customizable roles that
can be reused to build Playbooks \GE.
\TODO{ except for the beginning of sentences , lowercase ansible and lowercase playbook. Also, in the first sentence of Playbooks consists ... please place a comma after plays. }
\section{Complex Orchestration Using Playbooks}
Playbooks can be used to combine multiple tasks to achieve complex
automation \cite{www-ansible5}. Playbooks \GE and Ansible \GE can easily be used
to implement a cluster-wide rolling update that consists of
consulting a configuration/settings repository for information about
the involved servers, configuring the base OS on all machines, and
enforcing the desired state. They can also be used to signal the
monitoring system of an outage window prior to bringing the servers
offline and to signal load balancers to take the application servers
out of a load-balanced pool. The usage doesn't end here: playbooks can be
used to start and stop servers, run appropriate tests on a
new server, or even deploy/update the server code, data, and
content. Ansible can also be used to send email reports and as a
logging tool \cite{www-ansible}.
\TODO{ except for the beginning of sentences , lowercase ansible }
\section{Extensibility}
Tasks in Ansible \GE are performed by Ansible ‘modules’, which are pieces
of code that run on remote hosts. Although the vast set of existing
modules covers most of what a user may require, there
might be a need to implement a new module to handle some portion of the IT
infrastructure. Ansible makes this simple by allowing modules
to be written in any language, with the only requirement being that they
take JSON as input and produce JSON as
output \cite{www-ansible}.
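As an illustration of this contract, the sketch below shows what a minimal
custom module written in Python might look like. How the arguments are
delivered varies between Ansible versions; for this sketch we assume they
arrive as a JSON file whose path is passed as the first command-line
argument, and the argument name \texttt{name} is made up for the example.
\begin{verbatim}
#!/usr/bin/env python
# Minimal custom-module sketch: JSON in, JSON out.
import json
import sys

def main():
    # Assumption for this sketch: the module arguments
    # arrive as a JSON file whose path is passed as the
    # first command-line argument.
    with open(sys.argv[1]) as f:
        args = json.load(f)
    name = args.get("name", "world")
    # Report the result as a single JSON object on stdout.
    result = {"changed": False,
              "msg": "Hello, %s!" % name}
    print(json.dumps(result))

if __name__ == "__main__":
    main()
\end{verbatim}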
We can also extend Ansible \GE through its dynamic inventory, which allows
Ansible Playbooks \GE to run against machines and infrastructure
discovered at runtime. Out of the box, Ansible \GE includes support for
all major cloud providers; it can also be easily extended to add
support for new providers and new sources as needed.
\TODO{ except for the beginning of sentences , lowercase ansible and lowercase playbooks }
\section{One Tool To Do It All}
Ansible is designed to make IT configurations and processes both
simple to read and to write; the code is human readable and can be understood
even by those who are not trained in reading such
configurations. Ansible \GE is different from generally used software
programming languages: it uses basic textual descriptions of desired
states and processes, while being neutral to the types of processes
described. Ansible’s \GE simple, task-based nature makes it unique, and it
can be applied to a variety of use cases including configuration
management, application deployment, orchestration, and as-needed test
execution.
Ansible combines these approaches into a single tool. This not only
allows for integrating multiple disparate processes into a single
framework for easy training, education, and operation, but also means
that \GE Ansible \GE seamlessly fits in with other tools. It can be used for
any of the above-mentioned use cases without modifying existing
infrastructure that may already be in use.
\TODO{ except for the beginning of sentences , lowercase ansible. Also, check the spelling of genrally. Probably could eliminate the word that -- doesn't offer value}
\section{Integration Of Cloud And Infrastructure}
Ansible can easily deploy workloads to a variety of public and
on-premise cloud environments. This capability includes cloud
providers such as Amazon Web Services, Microsoft Azure, Rackspace, and
Google Compute Engine, and local infrastructure such as VMware,
OpenStack, and CloudStack. This covers not just compute
provisioning, but storage and networks as well, and the capability
doesn't end there. As noted, further integrations are easy, and more
are added with each new Ansible \GE release. And \GE since Ansible is open source, \GE
anyone can contribute \cite{www-ansible}.
\TODO{ except for the beginning of sentences , lowercase ansible.}
\section{Automating Network}
Ansible can easily automate traditional IT server and software
installations, but it strives to automate IT infrastructure entirely,
including areas which are not covered by traditional IT automation
tools. Due to Ansible’s \GE agentless, task-based nature it can easily
be utilized in the networking space, and built-in support is
included with Ansible \GE for automating networking equipment from major vendors
such as Cisco, Juniper, Hewlett Packard Enterprise, Cumulus and
more \cite{www-ansible3}.
By harnessing this networking support, network automation is no longer
a task that has to be done by a separate team, with separate tools and
separate processes. It can easily be done with the same tools and
processes used by the other automation procedures you already
have. Ansible tasks can also include configuring switching and VLANs for a
new service, so users no longer need to file a ticket whenever a new
service comes in.
\TODO{ except for the beginning of sentences , lowercase ansible. Also, sections 4-9 should be subsections of 3 since this explains features}
\section{Conclusion}
Ansible is an open source community project sponsored by Red Hat. It
is one of the easiest ways to automate IT. Ansible code is easy to read
and write, and the commands are in a human-readable format. The power of
the Ansible \GE language can be utilized across entire IT teams – from systems
and network administrators to developers and managers. Ansible
includes thousands of reusable modules, which makes it even more user
friendly, and users can write new modules in any language, which makes
it flexible too. Ansible by Red Hat provides enterprise-ready
solutions to automate your entire application lifecycle – from servers
to clouds to containers and everything in between. Ansible Tower by
Red Hat is a commercial offering that helps teams manage complex
multi-tier deployments by adding control, knowledge, and delegation to
Ansible-powered \GE environments.
\TODO{ except for the beginning of sentences , lowercase ansible. }
% Bibliography
\bibliography{references}
\end{document}
\chapter{Expansion Board Assembly}
\label{expboard}
While the main board comes fully assembled, you will have to assemble
the newer expansion board yourself. If you are not familiar with soldering,
please attend the soldering workshop to learn the basics, and if you
have any further questions, no matter how small, please contact a
member of the staff before you proceed. We'll be much happier if you
come to us before you've made a mistake than afterward. That said,
building the expansion board should not be much of a challenge for
anyone, and we recommend that you get started on it as soon as
possible, and not fall behind. The circuit board may look
complicated, but do not despair! Many of its parts (such as circuitry
to make it compatible with LEGO Mindstorms sensors) are optional and
not needed for 6.270. For reference, a complete, assembled Handy
Board is pictured at
\begin{quote}
\begin{verbatim}
http://www.mit.edu/~6.270/images/expbd.jpg
\end{verbatim}
\end{quote}
\documentclass{article}
\usepackage{hyperref}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\usepackage[section]{placeins}
\renewcommand{\thesubsection}{\thesection.\alph{subsection}}
\usepackage{listings}
\usepackage{amssymb}
\title{\bf{CSE397: Assignment \#2}}
\author{Nicholas Malaya \\ Department of Mechanical Engineering \\
Institute for Computational Engineering and Sciences \\ University of
Texas at Austin} \date{}
\begin{document}
\maketitle
\newpage
\section{Problem 1}
\subsection{Find and classify all stationary points}
\begin{figure}[!htb]
\includegraphics[scale=.5]{figs/1_3d.pdf}
\caption{The topology of $-\cos(x)\cos(y/10)$ in the domain.}
\label{fig:3d}
\end{figure}
Figure~\ref{fig:3d} shows a clear stationary point at $(0,0)$. In addition, we are
seeking all points where $\nabla f = 0$. The gradient is $\nabla f =
(\sin(x)\cos(y/10),\frac{\cos(x)}{10}\sin(y/10))$. Solving
$\sin(x)\cos(y/10) = 0$ and
$\frac{\cos(x)}{10}\sin(y/10) = 0$ shows that while the
$x$-derivative is zero at $y = \pm \frac{10 \pi}{2}$ and the
$y$-derivative is zero at $x = \pm \frac{\pi}{2}$, the entire gradient is
only zero at $(0,0)$. Thus, this is the only stationary point in the region we
are considering.
\subsection{Find the region where the Hessian Matrix of f(x,y) is positive definite.}
\begin{equation*}
H = \left(
\begin{array}{ c c }
\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x
\partial y} \\
\frac{\partial^2 f}{\partial x
\partial y} & \frac{\partial^2 f}{\partial y^2}
\end{array} \right)
=
\left(
\begin{array}{ c c }
\text{cos}(x)\text{cos}(y/10) & -\frac{\text{sin}(x)}{10}\text{sin}(y/10) \\
-\frac{\text{sin}(x)}{10}\text{sin}(y/10) & \frac{\text{cos}(x)}{100}\text{cos}(y/10)
\end{array} \right)
\end{equation*}
Setting the determinant of $H - \lambda I$ to zero yields the characteristic equation:
\begin{equation*}
\lambda^2 - \frac{101}{100}\cos(x)\cos(\frac{y}{10})\lambda + \frac{1}{100}(\cos^2(x)\cos^2(\frac{y}{10}) - \sin^2(x)\sin^2(\frac{y}{10})) = 0
\end{equation*}
The eigenvalues are therefore,
\begin{equation*}
\lambda = \frac{1}{200}\left[101\cos(x)\cos(\frac{y}{10}) \pm \sqrt{9801\cos^2(x)\cos^2(\frac{y}{10}) + 400\sin^2(x)\sin^2(\frac{y}{10})}\right]
\end{equation*}
Both eigenvalues are positive (and hence the Hessian is positive definite)
when the trace $\frac{101}{100}\cos(x)\cos(\frac{y}{10})$ is positive and the
determinant (the constant term of the characteristic equation) is positive,
that is,
\begin{equation*}
\cos^2(x)\cos^2(\frac{y}{10}) - \sin^2(x)\sin^2(\frac{y}{10}) > 0
\end{equation*}
In other words,
\begin{equation*}
\cos^2(x)\cos^2(\frac{y}{10}) > \sin^2(x)\sin^2(\frac{y}{10})
\end{equation*}
\begin{figure}[!htb]
\includegraphics[scale=.5]{figs/eigen.pdf}
\caption{The Contours of the Eigenvalues. The red line at the top is
the region where the Hessian is positive definite. }
\end{figure}
The bounds of this region are plotted in figure 2.
\newpage
\subsection{Derive expressions for the search directions}
The search direction for steepest descent is the negative gradient,
e.g.
\begin{equation*}
g = -\nabla f =\left(
\begin{array}{ c }
- \text{sin}(x)\text{cos}(y/10)\\
- \frac{\text{cos}(x)}{10}\text{sin}(y/10)
\end{array} \right)
\end{equation*}
For Newton's method, the step direction is given by $H_k p_k = g_k$, or
\begin{equation*}
\left(
\begin{array}{ c c }
\text{cos}(x)\text{cos}(y/10) & -\frac{\text{sin}(x)}{10}\text{sin}(y/10) \\
-\frac{\text{sin}(x)}{10}\text{sin}(y/10) & \frac{\text{cos}(x)}{100}\text{cos}(y/10)
\end{array} \right)
p_k
=
\left(
\begin{array}{ c }
- \text{sin}(x)\text{cos}(y/10)\\
- \frac{\text{cos}(x)}{10}\text{sin}(y/10)
\end{array} \right)
\end{equation*}
Which is,
\begin{equation*}
p_k =
\left(
\begin{array}{c}
\frac{\sin (2 x) \left(\cos \left(\frac{y}{5}\right)-\sin
\left(\frac{y}{5}\right)-1\right)}{2 \left(\cos (2 x)+\cos
\left(\frac{y}{5}\right)\right)} \\
-\frac{20 \sin \left(\frac{y}{10}\right) \left(\cos
\left(\frac{y}{10}\right) \cos ^2(x)+\sin ^2(x) \sin
\left(\frac{y}{10}\right)\right)}{\cos (2 x)+\cos
\left(\frac{y}{5}\right)}
\end{array}
\right)
\end{equation*}
\subsection{Write a program that performs both steepest descent and a
Newton Iteration}
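A minimal sketch of the two iterations is given below for reference. It
assumes a simple backtracking (Armijo) line search and a fixed iteration
budget; the tolerances, constants, and function names are illustrative and
are not taken from the code that produced the figures.
\begin{verbatim}
import numpy as np

def f(p):
    x, y = p
    return -np.cos(x) * np.cos(y / 10.0)

def grad(p):
    x, y = p
    return np.array([np.sin(x) * np.cos(y / 10.0),
                     np.cos(x) * np.sin(y / 10.0) / 10.0])

def hess(p):
    x, y = p
    return np.array(
        [[np.cos(x) * np.cos(y / 10.0),
          -np.sin(x) * np.sin(y / 10.0) / 10.0],
         [-np.sin(x) * np.sin(y / 10.0) / 10.0,
          np.cos(x) * np.cos(y / 10.0) / 100.0]])

def backtracking(p, d, alpha=1.0, rho=0.5, c=1e-4):
    # Shrink the step until the Armijo condition holds.
    for _ in range(50):
        if f(p + alpha * d) <= f(p) + c * alpha * grad(p) @ d:
            break
        alpha *= rho
    return alpha

def minimize(p0, newton=False, line_search=True,
             tol=1e-8, max_iter=500):
    p = np.array(p0, dtype=float)
    for k in range(max_iter):
        g = grad(p)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(p), -g) if newton else -g
        alpha = backtracking(p, d) if line_search else 1.0
        p = p + alpha * d
    return p, k
\end{verbatim}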
\begin{figure}[!htb]
\includegraphics[scale=.5]{figs/P1SDNLSwin.jpg}
\caption{Steepest Descent from within the region of stability,
converging to the solution at (0,0).}
\end{figure}
\begin{figure}[!htb]
\includegraphics[scale=.5]{figs/P1SDNLSFail.jpg}
\caption{Steepest Descent from outside the region of stability, yet
still converging to the solution at (0,0).}
\end{figure}
\begin{figure}[!htb]
\includegraphics[scale=.5]{figs/P1NMWLSwin.jpg}
\caption{Newton's Method with line search inside the region of
stability, converging correctly.}
\end{figure}
\begin{figure}[!htb]
\includegraphics[scale=.5]{figs/P1NMWLSFail.jpg}
\caption{Newton's Method with line search outside the region of
stability, diverging. Hint: look at the bottom right.}
\end{figure}
\begin{figure}[!htb]
\includegraphics[scale=.5]{figs/P1NMNLSwin.jpg}
\caption{Newton's Method without line search inside the region of
stability, converging correctly.}
\end{figure}
\begin{figure}[!htb]
\includegraphics[scale=.5]{figs/P1NMNLSFailinside.jpg}
\caption{Newton's Method without line search inside the region of
stability, yet not converging correctly.}
\end{figure}
\subsection{What do you observe about convergence rates in these cases?}
The number of iterations required to converge was nearly what I
expected. Steepest descent always converges, but is the slowest. A line
search improves it by dialing back the otherwise overly greedy step
sizes, but still significantly underperforms Newton's method.
Newton's method is fastest without a line search, but only within the
region where the function is very nearly quadratic. Outside of this
region, it fails.
\newline
\begin{tabular}{| l | c |}
\hline
Method & Number of iterations \\
\hline
Steepest Descent without line search & 212 \\
Steepest Descent with line search & 81 \\
Newton's Method with Line Search & 33 \\
Newton's Method without Line Search & 3 \\
\hline
\end{tabular}
\section{Problem 2}
\subsection{Show that the two problems have the same minimizers}
We can see that the problems must have the same minimizers
(i.e. the same $x^\star$) because $\beta$ is just a positive scale factor, and the
solution should be invariant to such homogeneous scalings.
Intuitively, if we multiply our function at each point by a factor of 10, then
the lowest point should remain in the same place (and likewise for the
maximum).
More formally, the first-order necessary condition for a local
minimizer requires that if $x^\star$ is a local minimizer of
$f_2(x)$, then $g_2(x^\star) = 0$.
However, $f_2(x) = \beta f_1(x)$ implies $g_2(x) = \beta g_1(x)$, so
$g_2(x^\star) = \beta g_1(x^\star) = 0$ and, since $\beta \neq 0$,
$g_1(x^\star) = 0$. Therefore, the gradient of $f_1$ is also zero at $x^\star$.
\subsection{Compare Steepest Descent and Newton Directions at $x_0 \in \mathbb{R}$}
\begin{align*}
p_k&=-g_k = -\nabla f_k\\
p_{k,1}&=-g_{k,1}\\
p_{k,2}&=-g_{k,2} = -\beta g_{k,1}
\end{align*}
Thus,
\begin{equation}
p_{k,2} \ne p_{k,1}
\end{equation}
for steepest descent: the search directions
vary by a scale factor. However, using Newton's method, the first search
direction is,
\begin{align*}
H_k p_k&=-g_k \\
H_{k,1} p_{k,1}&=-g_{k,1}\\
\end{align*}
And the second is,
\begin{align*}
H_{k,2} p_{k,2}&=-g_{k,2}\\
\beta H_{k,1} p_{k,2}&=- \beta g_{k,1}\\
H_{k,1} p_{k,2}&=-g_{k,1}
\end{align*}
And therefore,
\begin{equation}
p_{k,2} = p_{k,1}
\end{equation}
Thus, Newton's method is insensitive to the scale of the underlying
problem. Steepest descent, on the other hand, does depend on the scale of
the underlying problem, and therefore it is not possible to choose a good
initial step length $\alpha$ independently of that scale.
\subsection{Multidimensional expansion}
The result from the previous section generalizes to multiple
dimensions. Now, instead of $\beta$ we have $B$, a matrix of
positive elements that scales the function $f(x)$ that we are trying to
minimize. Note that $B$ is invertible, has full rank, and is independent of
$x$. The first-order condition for the scaled problem then reads
\begin{equation}
Bg(x) = 0.
\end{equation}
As before, our Newton step is insensitive to the scale of the system we
are solving,
\begin{align*}
H_{k} p_{k}&=-g_{k}\\
B H_{k} p_{k}&=- B g_{k}\\
(B^{-1}B)H_{k} p_{k}&=-(B^{-1}B)g_{k} \\
H_{k} p_{k}&=-g_{k}
\end{align*}
This is because the same scale factor ``pushes through'' the Hessian
matrix, i.e.
\begin{equation}
H(x) = \nabla\nabla \left( B f(x) \right) = B \nabla\nabla f(x).
\end{equation}
Thus, Newton's method is affine invariant, i.e. independent of linear
changes of coordinates.
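A quick numerical check of this invariance is sketched below: scaling the
gradient and Hessian of an arbitrary smooth test function by the same
positive factor leaves the Newton step unchanged, while the steepest descent
direction is rescaled. The test function and the factor $\beta = 10$ are
arbitrary choices for illustration.
\begin{verbatim}
import numpy as np

def grad(x):
    # Gradient of the test function
    # f(x) = 0.25*(x0**4 + 4*x1**2).
    return np.array([x[0] ** 3, 2.0 * x[1]])

def hess(x):
    return np.array([[3.0 * x[0] ** 2, 0.0],
                     [0.0, 2.0]])

x0 = np.array([1.3, -0.7])
beta = 10.0  # positive scale factor

newton_1 = np.linalg.solve(hess(x0), -grad(x0))
newton_2 = np.linalg.solve(beta * hess(x0),
                           -beta * grad(x0))
sd_1, sd_2 = -grad(x0), -beta * grad(x0)

print(np.allclose(newton_1, newton_2))  # True: invariant
print(np.allclose(sd_1, sd_2))          # False: rescaled
\end{verbatim}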
\section{Problem 3}
Write a program to minimize a quartic form -- a homogeneous polynomial
of degree four.
\newpage
\subsection{Compare the performance between Newton and Steepest Descent}
% mu = 0 sigma = 10
\begin{figure}[!htb]
\centering
\begin{subfigure}[bh]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/P3SDmu0sig1.jpg}
\end{subfigure}%
\begin{subfigure}[bh]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/P3NCGmu0sig1eta1.jpg}
\end{subfigure}
\centering
\begin{subfigure}[bh]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/P3NCGmu0sig1eta2.jpg}
\end{subfigure}%
\begin{subfigure}[bh]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/P3NCGmu0sig1eta3.jpg}
\end{subfigure}
\caption{Comparison between steepest descent and Newton CG at
varying $\eta$, for $\mu=0$, $\sigma=1$.}
\end{figure}
In figure 9, we can see that roughly all of the curves converge
super-linearly. This is surprising, because I did not expect steepest
descent to converge at this rate. It is possible that gradient descent
converges super-linearly because of this particular function. I would
conjecture that
this is related to $\mu=0$: by killing the lower-order term
in the function, we make it easier for our gradient vector to
point in the correct ``downhill'' direction.
% mu = 10, sigma = 1
\begin{figure}[!htb]
\centering
\begin{subfigure}[bh]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/P3SDmu10sig1.jpg}
\end{subfigure}%
\begin{subfigure}[bh]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/P3NCGmu10sig1eta1.jpg}
\end{subfigure}
\centering
\begin{subfigure}[bh]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/P3NCGmu10sig1eta2.jpg}
\end{subfigure}%
\begin{subfigure}[bh]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/P3NCGmu10sig1eta3.jpg}
\end{subfigure}
\caption{Comparison between steepest descent and Newton CG at
varying $\eta$, for $\mu=10$, $\sigma=1$.}
\end{figure}
In figure 10, steepest descent only converges linearly. We can see that
when $\eta$ is fixed at one half, the convergence is only linear for
Newton as well, as expected. Both of the other methods for selecting
$\eta$ give superior (super-linear) convergence rates, as expected.
Therefore, aside from the $\mu=0$ case, the theoretical convergence
rates for these different choices are what we expected.
\end{document}
%% Copernicus Publications Manuscript Preparation Template for LaTeX Submissions
%% ---------------------------------
%% This template should be used for copernicus.cls
%% The class file and some style files are bundled in the Copernicus Latex Package, which can be downloaded from the different journal webpages.
%% For further assistance please contact Copernicus Publications at: [email protected]
%% https://publications.copernicus.org/for_authors/manuscript_preparation.html
%% Please use the following documentclass and journal abbreviations for discussion papers and final revised papers.
%% 2-column papers and discussion papers
\documentclass[journal abbreviation, manuscript]{copernicus}
%% Journal abbreviations (please use the same for discussion papers and final revised papers)
% Advances in Geosciences (adgeo)
% Advances in Radio Science (ars)
% Advances in Science and Research (asr)
% Advances in Statistical Climatology, Meteorology and Oceanography (ascmo)
% Annales Geophysicae (angeo)
% Archives Animal Breeding (aab)
% ASTRA Proceedings (ap)
% Atmospheric Chemistry and Physics (acp)
% Atmospheric Measurement Techniques (amt)
% Biogeosciences (bg)
% Climate of the Past (cp)
% Drinking Water Engineering and Science (dwes)
% Earth Surface Dynamics (esurf)
% Earth System Dynamics (esd)
% Earth System Science Data (essd)
% E&G Quaternary Science Journal (egqsj)
% Fossil Record (fr)
% Geographica Helvetica (gh)
% Geoscientific Instrumentation, Methods and Data Systems (gi)
% Geoscientific Model Development (gmd)
% History of Geo- and Space Sciences (hgss)
% Hydrology and Earth System Sciences (hess)
% Journal of Micropalaeontology (jm)
% Journal of Sensors and Sensor Systems (jsss)
% Mechanical Sciences (ms)
% Natural Hazards and Earth System Sciences (nhess)
% Nonlinear Processes in Geophysics (npg)
% Ocean Science (os)
% Primate Biology (pb)
% Proceedings of the International Association of Hydrological Sciences (piahs)
% Scientific Drilling (sd)
% SOIL (soil)
% Solid Earth (se)
% The Cryosphere (tc)
% Web Ecology (we)
% Wind Energy Science (wes)
%% \usepackage commands included in the copernicus.cls:
%\usepackage[german, english]{babel}
%\usepackage{tabularx}
%\usepackage{cancel}
%\usepackage{multirow}
%\usepackage{supertabular}
%\usepackage{algorithmic}
%\usepackage{algorithm}
%\usepackage{amsthm}
%\usepackage{float}
%\usepackage{subfig}
%\usepackage{rotating}
\begin{document}
\title{A neural network approach to estimate a posteriori distributions of Bayesian retrieval problems}
% \Author[affil]{given_name}{surname}
\Author[1]{Simon}{Pfreundschuh}
\Author[1]{Patrick}{Eriksson}
\Author[1]{David}{Duncan}
\Author[2]{Bengt}{Rydberg}
\Author[3]{Nina}{H{\aa}kansson}
\Author[3]{Anke}{Thoss}
\affil[1]{Department of Space, Earth and Environment, Chalmers University of Technology, Gothenburg, Sweden}
\affil[2]{Möller Data Workflow Systems AB, Gothenburg, Sweden}
\affil[3]{Swedish Meteorological and Hydrological Institute (SMHI), Norrköping, Sweden}
%% The [] brackets identify the author with the corresponding affiliation. 1, 2, 3, etc. should be inserted.
\runningtitle{TEXT}
\runningauthor{TEXT}
\correspondence{[email protected]}
\received{}
\pubdiscuss{} %% only important for two-stage journals
\revised{}
\accepted{}
\published{}
%% These dates will be inserted by Copernicus Publications during the typesetting process.
\firstpage{1}
\maketitle
\newcounter{enumic}
\begin{abstract}
A neural network based method, Quantile Regression Neural Networks (QRNNs), is
proposed as a novel approach to estimate the a posteriori distribution of
Bayesian remote sensing retrievals. The advantage of QRNNs over conventional
neural network retrievals is that they not only learn to predict a single
retrieval value but also the associated, case specific uncertainties. In this
study, the retrieval performance of QRNNs is characterized and compared to
that of other state-of-the-art retrieval methods. A synthetic retrieval
scenario is presented and used as a validation case for the application of
QRNNs to Bayesian retrieval problems. The QRNN retrieval performance is
evaluated against Markov chain Monte Carlo simulation and another Bayesian
method based on Monte Carlo integration over a retrieval database. The
scenario is also used to investigate how different hyperparameter
configurations and training set sizes affect the retrieval performance. In the
second part of the study, QRNNs are applied to the retrieval of cloud top
pressure from observations by the moderate resolution imaging
spectroradiometer (MODIS). It is shown that QRNNs are not only capable of
achieving similar accuracy as standard neural network retrievals, but also
provide statistically consistent uncertainty estimates for non-Gaussian
retrieval errors. The results presented in this work show that QRNNs are able
to combine the flexibility and computational efficiency of the machine
learning approach with the theoretically sound handling of uncertainties of
the Bayesian framework. Together with this article, a Python implementation of
QRNNs is released through a public repository to make the method available to
the scientific community.
\end{abstract}
%\copyrightstatement{TEXT}
\introduction %% \introduction[modified heading if necessary]
The retrieval of atmospheric quantities from remote sensing measurements
constitutes an inverse problem that generally does not admit a unique, exact
solution. Measurement and modeling errors, as well as limited sensitivity of the
observation system, preclude the assignment of a single, discrete solution to a
given observation. A meaningful retrieval should thus consist of a retrieved
value and an estimate of uncertainty describing a range of values that are
likely to produce a measurement similar to the one observed. However, even
if a retrieval method allows for explicit modeling of retrieval uncertainties,
their computation and representation is often possible only in an approximate
manner.
The Bayesian framework provides a formal way of handling the ill-posedness of
the retrieval problem and its associated uncertainties. In the Bayesian
formulation \citep{rodgers}, the solution of the inverse problem is given by the
a posteriori distribution $p(x | \vec{y})$, i.e. the conditional distribution of
the retrieval quantity $x$ given the observation $\vec{y}$. Under the modeling
assumptions, the posterior distribution represents all available knowledge about
the retrieval quantity $x$ after the measurement, accounting for all considered
retrieval uncertainties. Bayes' theorem states that the a posteriori
distribution is proportional to the product $p(\vec{y} | x)p(x)$ of the a priori
distribution $p(x)$ and the conditional probability of the observed measurement
$p(\vec{y} | x)$. The a priori distribution $p(x)$ represents knowledge about
the quantity $x$ that is available before the measurement and can be used to aid
the retrieval with supplementary information.
For a given retrieval, the a posteriori distribution can generally not be
expressed in closed form and different methods have been developed to compute
approximations to it. In cases that allow a sufficiently precise and efficient
simulation of the measurement, a forward model can be used to guide the solution
of the inverse problem. If such a forward model is available, the most general
technique to compute the a posteriori distribution is Markov chain Monte Carlo
(MCMC) simulation. MCMC denotes a set of methods that iteratively generate a
sequence of samples, whose sampling distribution approximates the true a
posteriori distribution. MCMC simulations have the advantage of allowing the
estimation of the a posteriori distribution without requiring any simplifying
assumptions on a priori knowledge, measurement error or the forward model. The
disadvantage of MCMC simulation is that each retrieval requires a
high number of forward model evaluations, which in many cases makes the method
computationally too demanding to be practical. For remote sensing retrievals, the
method is therefore of interest rather for testing and validation \citep{tamminen}, such
as for example in the retrieval algorithm developed by \cite{evans_2}.
A method that avoids costly forward model evaluations during the retrieval has
been proposed by \cite{kummerow_1}. The method is based on Monte Carlo
integration of importance weighted samples in a retrieval database
$\{(\vec{y}_i, x_i)\}_{i = 0}^n$, which consists of pairs of observations
$\vec{y}_i$ and corresponding values $x_i$ of the retrieval quantity. The
method will be referred to in the following as Bayesian Monte Carlo integration
(BMCI). Even though the method is less computationally demanding than methods
involving forward model calculations during the retrieval, it may require the
traversal of a potentially large retrieval database. Furthermore, the
incorporation of ancillary data to aid the retrieval requires careful
stratification of the retrieval database, as it is performed in the Goddard
Profiling Algorithm \citep{gprof} for the retrieval of precipitation profiles.
Further applications of the method can be found for example in the work by
\cite{rydberg_2} or \cite{evans_2}.
The optimal estimation method \citep{rodgers}, in short OEM (also 1DVAR, for
one-dimensional variational retrieval), simplifies the Bayesian retrieval
problem assuming that a priori knowledge and measurement uncertainty both follow
Gaussian distributions and that the forward model is only moderately non-linear.
Under these assumptions, the a posteriori distribution is approximately
Gaussian. The retrieved values in this case are the mean and maximum of the a
posteriori distribution, which coincide for a Gaussian distribution, together
with the covariance matrix describing the width of the a posteriori
distribution. In cases where an efficient forward model for the computation of
simulated measurements and corresponding Jacobians is available, the OEM has
become the quasi-standard method for Bayesian retrievals. Nonetheless, even
neglecting the validity of the assumptions of Gaussian a priori and measurement
errors as well as linearity of the forward model, the method is unsuitable for
retrievals that involve complex radiative processes. In particular, since the
OEM requires the computation of the Jacobian of the forward model,
processes such as surface or cloud scattering become too expensive to model
online during the retrieval.
Compared to the Bayesian retrieval methods discussed above, machine learning
provides a more flexible approach to learn computationally efficient retrieval
mappings directly from data. Large amounts of data available from simulations,
collocated observations or in situ measurements, as well as increasing computational
power to speed up the training, have made machine learning techniques
an attractive alternative to approaches based on (Bayesian) inverse modeling.
Numerous applications of machine learning regression methods to retrieval
problems can be found in recent literature \citep{jimenez, holl, strandgren, wang, hakansson, brath}.
All of these examples, however, neglect the probabilistic character of the
inverse problem and provide only a scalar estimate of the retrieval. Uncertainty
estimates in these retrievals are provided in the form of mean errors computed
on independent test data, which is a clear drawback compared to Bayesian
methods. A notable exception is the work by \citet{aires_2},
which applies the Bayesian framework to estimate errors in the retrieved
quantities due to uncertainties on the learned neural network parameters.
However, the only difference to the approaches listed above is that the
retrieval errors, estimated from the error covariance matrix observed on the
training data, are corrected for uncertainties in the network parameters. With
respect to the intrinsic retrieval uncertainties, the approach is thus afflicted
with the same limitations. Furthermore, the complexity of the required numerical
operations make it suitable only for small training sets and simple networks.
In this article, quantile regression neural networks (QRNNs) are proposed as a
method to use neural networks to estimate the a posteriori distribution of
remote sensing retrievals. Originally proposed by \citet{koenker_bassett},
quantile regression is a method for fitting statistical models to quantile
functions of conditional probability distributions. Applications of quantile
regression using neural networks \citep{cannon} and other machine learning
methods \citep{meinshausen} exist, but to the best of the authors' knowledge this
is the first application of QRNNs to remote sensing retrievals. The aim of this
work is to combine the flexibility and computational efficiency of the machine
learning approach with the theoretically sound handling of uncertainties in the
Bayesian framework.
A formal description of QRNNs and the retrieval methods against which they will
be evaluated are provided in Sect.~\ref{sec:methods}. A simulated retrieval scenario is used to validate
the approach against BMCI and MCMC in Sect.~\ref{sec:synthetic}. Section~\ref{sec:ctp} presents the
application of QRNNs to the retrieval of cloud top pressure and associated
uncertainties from satellite observations in the visible and infrared. Finally,
the conclusions from this work are presented in Sect.~\ref{sec:conclusions}.
\section{Methods}
\label{sec:methods}
This section introduces the Bayesian retrieval formulation and the retrieval
methods used in the subsequent experiments. Two Bayesian methods, Markov chain
Monte Carlo simulation and Bayesian Monte Carlo integration, are presented.
Quantile regression neural networks are introduced as a machine learning
approach to estimate the a posteriori distribution of Bayesian retrieval
problems. The section closes with a discussion of the statistical metrics that
are used to compare the methods.
\subsection{The Retrieval Problem}
The general problem considered here is the retrieval of a scalar quantity $x \in
\mathrm{R}$ from an indirect measurement given in the form of an observation
vector $\vec{y} \in \mathrm{R}^m$. In the Bayesian framework, the retrieval
problem is formulated as finding the posterior distribution $p(x | \vec{y})$ of
the quantity $x$ given the measurement $\vec{y}$. Formally, this solution can be
obtained by application of Bayes theorem:
\begin{equation}\label{eq:bayes}
p(x | \vec{y}) = \frac{p(\vec{y} | x)p(x)}{\int p(x', \vec{y}) dx'}.
\end{equation}
The a priori distribution $p(x)$ represents the knowledge about the quantity $x$
that is available prior to the measurement. The a priori knowledge introduced
into the retrieval formulation regularizes the ill-posed inverse problem and
ensures that the retrieval solution is physically meaningful. The a posteriori
distribution of a scalar retrieval quantity $x$ can be represented by the
corresponding cumulative distribution function (CDF) $F_{x | \vec{y}}(x)$,
which is defined as
\begin{equation}\label{eq:cdf}
F_{x | \vec{y}}(x) = \int_{-\infty}^{x} p(x' | \vec{y}) \: dx'.
\end{equation}
\subsection{Bayesian Retrieval Methods}
Bayesian retrieval methods are methods that use the expression for the a
posteriori distribution in Eq. (\ref{eq:bayes}) to compute a solution to the
retrieval problem. Since the a posteriori distribution can generally not be
computed or sampled directly, these methods approximate the posterior
distribution to varying degrees of accuracy.
\subsubsection{Markov chain Monte Carlo}
Markov chain Monte Carlo (MCMC) simulation denotes a set of methods for the
generation of samples from arbitrary posterior distributions $p(x | \vec{y})$.
The general principle is to compute samples from an approximate distribution and
refine them in a way such that their distribution converges to the true a
posteriori distribution \citep{bda}. In this study, the Metropolis algorithm is
used to implement MCMC. The Metropolis algorithm iteratively generates a
sequence of states $\vec{x}_0, \vec{x}_1, \ldots$ using a symmetric proposal
distribution $J_t(\vec{x}^* | \vec{x}_{t-1})$. In each step of the algorithm, a
proposal $\vec{x}^*$ for the next step is generated by sampling from
$J_t(\vec{x}^* | \vec{x}_{t-1})$. The proposed state $\vec{x}^*$ is accepted as
the next simulation step $\vec{x}_t$ with probability $\text{min} \left \{1,
\frac{p(\vec{x}^* | \vec{y})}{p(\vec{x}_{t-1} | \vec{y})} \right \}$. Otherwise
$\vec{x}^*$ is rejected and the current simulation step $\vec{x}_{t-1}$ is kept
for $\vec{x}_t$. If the proposal distribution $J(\vec{x}^*, \vec{x}_{t-1})$ is
symmetric and samples generated from it satisfy the Markov chain property with a
unique stationary distribution, the Metropolis algorithm is guaranteed to
produce a distribution of samples which converges to the true a posteriori
distribution.
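For illustration, a minimal implementation of this sampler is sketched below.
It assumes a Gaussian random-walk proposal and a user-supplied function
returning the unnormalized log a posteriori density; the step size, chain
length, and function names are placeholders rather than the settings used in
the experiments of Sect.~\ref{sec:synthetic}.
\begin{verbatim}
import numpy as np

def metropolis(log_posterior, x0, n_steps=10000,
               step=0.1, rng=None):
    """Random-walk Metropolis sampler with a symmetric
    Gaussian proposal distribution."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    lp = log_posterior(x)
    for t in range(n_steps):
        proposal = x + step * rng.standard_normal(x.shape)
        lp_prop = log_posterior(proposal)
        # Accept with probability min(1, p(x*|y)/p(x_{t-1}|y)).
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        samples[t] = x
    return samples
\end{verbatim}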
\subsubsection{Bayesian Monte Carlo integration}
The BMCI method is based on the use of importance sampling to approximate
integrals over the a posteriori distribution of a given retrieval case. Consider an
integral of the form
\begin{equation}\label{eq:bmci_int}
\int f(x') p(x'|\vec{y}) \: dx'.
\end{equation}
Applying Bayes' theorem, the integral can be written as
\begin{equation*}
\int f(x') p(x' | \vec{y}) \: dx' =
\int f(x') \frac{p(\vec{y} | x')p(x')}
{\int p(\vec{y} | x'') \: dx''} \: dx'.
\end{equation*}
The last integral can be approximated by a sum over an observation
database $\{(\vec{y}_i, x_i)\}_{i = 1}^n$ that is distributed according
to the a priori distribution $p(x)$
\begin{equation*}
\int f(x') p(x' | \vec{y}) \: dx' \approx \frac{1}{C} \sum_{i = 1}^n w_i(\vec{y}) f(x_i).
\end{equation*}
with the normalization factor $C$ given by $C = \sum_{i = 1}^n w_i(\vec{y}).$
The weights $w_i(\vec{y})$ are given by the probability $p(\vec{y} | \vec{y}_i)$
of the observed measurement $\vec{y}$ conditional on the database
measurement $\vec{y_i}$, which is usually assumed to be multivariate
Gaussian with covariance matrix $\vec{S}_o$:
\begin{equation*}
w_i(\vec{y}) \propto \exp \left \{- \frac{(\vec{y} - \vec{y}_i)^T \vec{S}_o^{-1}
(\vec{y} - \vec{y}_i)}{2} \right \}.
\end{equation*}
By approximating integrals of the form (\ref{eq:bmci_int}), it is possible
to estimate the expectation value and variance of the a posteriori distribution by
choosing $f(x) = x$ and $f(x) = (x - \mathcal{E}(x | \vec{y}))^2$, respectively.
While this is suitable to represent Gaussian distributions, a more general
representation of the a posteriori distribution can be obtained by
estimating the corresponding CDF (c.f. Eq.~(\ref{eq:cdf})) using
\begin{equation}
\label{eq:bmci_cdf}
F_{x | \vec{y}}(x) \approx \frac{1}{C} \sum_{x_i < x} w_i(\vec{y}).
\end{equation}
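A direct implementation of this estimator is sketched below. It assumes a
diagonal observation-error covariance matrix $\vec{S}_o$ and uses
illustrative variable names; it is a sketch rather than the released
implementation.
\begin{verbatim}
import numpy as np

def bmci_cdf(y, y_db, x_db, sigma_o, x_grid):
    """Posterior mean and CDF of x given observation y,
    from importance weights over a database (y_db, x_db)."""
    # Gaussian observation error, diagonal covariance
    # (assumption for this sketch).
    d = (y_db - y) / sigma_o
    w = np.exp(-0.5 * np.sum(d * d, axis=1))
    c = w.sum()
    mean = np.sum(w * x_db) / c
    # F(x) ~ sum of weights of entries with x_i < x.
    cdf = np.array([w[x_db < x].sum() for x in x_grid]) / c
    return mean, cdf
\end{verbatim}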
\subsection{Machine Learning}
Neglecting uncertainties, the retrieval of a quantity $x$ from a measurement
vector $\vec{y}$ may be viewed as a simple multiple regression task. In machine
learning, regression problems are typically approached by training a
parametrized model $f: \vec{x} \mapsto \vec{y}$ to predict a desired output
$\vec{y}$ from given input $\vec{x}$. Unfortunately, the use of the variables
$\vec{x}$ and $\vec{y}$ in machine learning is directly opposite to their use in
inverse theory. For the remainder of this section the variables $\vec{x}$ and
$\vec{y}$ will be used to denote, respectively, the input and output to the
machine learning model to ensure consistency with the common notation in the
field of machine learning. The reader must keep in mind that the method is
applied in the later sections to predict a retrieval quantity $x$ from a
measurement $\vec{y}$.
\subsubsection{Supervised learning and loss functions}
Machine learning regression models are trained using supervised training, in
which the model $f$ learns the regression mapping from a training set
$\{\vec{x}_i, \vec{y}_i\}_{i = 1}^n$ with input values $\vec{x}_i$ and expected
output values $\vec{y}_i$. The training is performed by finding model parameters
that minimize the mean of a given loss function $\mathcal{L}(f(\vec{x}),
\vec{y})$ on the training set. The most common loss function for regression
tasks is the squared error loss
\begin{equation}
\mathcal{L}_{se}(f(\vec{x}), \vec{y}) = (f(\vec{x}) - \vec{y})^T (f(\vec{x}) - \vec{y}),
\end{equation}
which trains the model $f$ to minimize the mean squared distance
of the neural network prediction $f(\vec{x})$ from the expected output
$\vec{y}$ on the training set.
If the estimand $\vec{y}$ is a random vector drawn from a conditional
probability distribution $p(\vec{y} | \vec{x})$, a regressor trained using a
squared error loss function learns to predict the conditional expectation value of
the distribution $p(\vec{y} | \vec{x})$ \citep{bishop_mdn}. Depending on the
choice of the loss function, the regressor can also learn to predict other statistics of
the distribution $p(\vec{y} | \vec{x})$ from the training data.
\subsubsection{Quantile regression}
Given the cumulative distribution function $F(x)$ of a probability distribution
$p$, its $\tau\text{th}$ quantile $x_\tau$ is defined as
\begin{align}
x_\tau &= \inf \{x \: : \: F(x) \geq \tau \},
\end{align}
i.e. the greatest lower bound of all values of $x$ for which $F(x) \geq \tau$.
As shown by \citet{koenker}, the $\tau\text{th}$ quantile $x_\tau$ of $F$
minimizes the expectation value $\mathcal{E}_x\left ( \mathcal{L}_\tau(x_\tau, x) \right) = \int_{-\infty}^\infty \mathcal{L}_\tau(x_\tau, x') p(x') \: dx'$
of the function
\begin{align}\label{eq:quantile_loss}
\mathcal{L}_{\tau}(x_\tau, x) &=
\begin{cases}
\tau |x - x_\tau|, & x_\tau < x \\
(1 - \tau)|x - x_\tau|, &\text{otherwise}.
\end{cases}
\end{align}
By training a machine learning regressor $f$ to minimize the mean of the quantile loss
function $\mathcal{L}_\tau(f(\vec{x}), y)$ over a training set $\{\vec{x}_i,
y_i\}_{i = 1}^n$, the regressor learns to predict the quantiles of the
conditional distribution $p(y | \vec{x})$. This can be extended to obtain an
approximation of the cumulative distribution function of $F_{y | \vec{x}}(y)$ by
training the network to estimate multiple quantiles of $p(y | \vec{x})$.
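As a concrete example, a vectorized form of the quantile loss in
Eq.~(\ref{eq:quantile_loss}), averaged over samples and over a set of
quantile fractions, could be written as follows (array shapes and names are
illustrative):
\begin{verbatim}
import numpy as np

def quantile_loss(y_pred, y_true, taus):
    """Mean pinball loss for predictions of shape
    (n_samples, n_quantiles) against targets (n_samples,)."""
    taus = np.asarray(taus)            # e.g. [0.05, 0.25, ...]
    e = y_true[:, np.newaxis] - y_pred # > 0 if prediction too low
    loss = np.maximum(taus * e, (taus - 1.0) * e)
    return loss.mean()
\end{verbatim}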
\subsubsection{Neural networks}
A neural network computes a vector of output activations $\vec{y}$ from a vector
of input activations $\vec{x}$. Feed-forward artificial neural networks (ANNs)
compute the vector $\vec{y}$ by application of a given number of subsequent,
learnable transformations to the input activations $\vec{x}$:
\begin{align*}
\vec{x}_0 &= \vec{x}\\
\vec{x}_i &= f_{i}
\left ( \vec{W}_{i} \vec{x}_{i - 1}+ \vec{\theta}_i \right ) \\
\vec{y} &= \vec{x}_{n}.
\end{align*}
The activation functions $f_i$ as well as the number and sizes of the hidden
layers $\vec{x}_1, \ldots, \vec{x}_{n-1}$ are prescribed, structural parameters of a
neural network model, generally referred to as hyperparameters. The learnable
parameters of the model are the weight matrices $\vec{W}_i$ and bias vectors
$\vec{\theta}_i$ of each layer. Neural networks can be efficiently trained in
a supervised manner by using gradient based minimization methods to find
suitable weights $\vec{W}_i$ and bias vectors $\vec{\theta}_i$. By using the
mean of the quantile loss function $\mathcal{L}_\tau$ as the training criterion,
a neural network can be trained to predict the quantiles of the distribution
$p(\vec{y} | \vec{x})$, thus turning the network into a quantile regression
neural network.
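A QRNN is thus an ordinary feed-forward network whose output layer contains
one neuron per estimated quantile and which is trained with the quantile
loss. The sketch below illustrates the idea using PyTorch; the layer widths,
quantile grid, and optimizer settings are arbitrary choices and not those
used in the experiments.
\begin{verbatim}
import torch
import torch.nn as nn

taus = torch.tensor([0.05, 0.25, 0.50, 0.75, 0.95])

class QRNN(nn.Module):
    def __init__(self, n_inputs, n_hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, len(taus)))  # one output
                                             # per quantile
    def forward(self, x):
        return self.net(x)

def pinball_loss(y_pred, y_true):
    e = y_true.unsqueeze(-1) - y_pred
    return torch.max(taus * e, (taus - 1.0) * e).mean()

# Training loop sketch (x: inputs, y: scalar targets):
# model = QRNN(n_inputs=x.shape[1])
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for epoch in range(n_epochs):
#     opt.zero_grad()
#     loss = pinball_loss(model(x), y)
#     loss.backward()
#     opt.step()
\end{verbatim}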
\subsubsection{Adversarial training}
\label{sec:adversarial_training}
Adversarial training is a data augmentation technique that has been proposed
to increase the robustness of neural networks to perturbations in the input
data \citep{goodfellow_2}. It has been shown to be effective also as a method
to improve the calibration of probabilistic predictions from neural networks
\citep{lakshminarayanan}. The basic principle of adversarial training is to
augment the training data with perturbed samples that are likely to
yield a large change in the network prediction. The method used here to
implement adversarial training is the fast gradient sign method
proposed by \citet{goodfellow_2}. For a training sample $(\vec{x}_i, \vec{y}_i)$
consisting of input $\vec{x}_i \in \mathrm{R}^{n}$ and expected output
$\vec{y}_i \in \mathrm{R}^m$, the corresponding adversarial sample
$(\vec{\tilde{x}}_i, \vec{y}_i)$ is chosen to be
\begin{align}\label{eq:adversarial_training}
\vec{\tilde{x}}_i = \vec{x}_i + \delta_\text{adv} \text{sign} \left (
\frac{d \mathcal{L}(f(\vec{x}_i), \vec{y}_i)}{d\vec{x}_i}
\right ),
\end{align}
i.e. the direction of the perturbation is chosen in such a way that
it maximizes the absolute change in the loss function $\mathcal{L}$ due
to an infinitesimal change in the input parameters. The adversarial
perturbation factor $\delta_{\text{adv}}$ determines the strength of the perturbation
and becomes an additional hyperparameter of the neural network model.
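In practice, the adversarial samples of Eq.~(\ref{eq:adversarial_training})
can be generated on the fly during training and appended to each batch, for
example along the lines of the following sketch (PyTorch-style, with
illustrative names):
\begin{verbatim}
import torch

def fgsm_samples(model, loss_fn, x, y, delta_adv=0.05):
    """Fast gradient sign method: perturb the inputs in the
    direction that maximally increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + delta_adv * x.grad.sign()
    return x_adv.detach()
\end{verbatim}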
\subsection{Evaluating Probabilistic Predictions}
A problem that remains is how to compare two estimates $p'(x | \vec{y}), p''(x
| \vec{y})$ of a given a posteriori distribution against a single observed sample
$x$ from the true distribution $p(x | \vec{y})$. A good
probabilistic prediction for the value $x$ should be sharp, i.e. concentrated in
the vicinity of $x$, but at the same time well calibrated, i.e. predicting
probabilities that truthfully reflect observed frequencies \citep{gneiting_2}.
Summary measures for the evaluation of predicted conditional distributions are
called scoring rules \citep{gneiting}. An important property of scoring rules
is propriety, which formalizes the concept of the scoring rule rewarding both
sharpness and calibration of the prediction. Besides providing reliable
measures for the comparison of probabilistic predictions, proper scoring rules
can be used as loss functions in supervised learning to incentivize
statistically consistent predictions.
The quantile loss function given in equation (\ref{eq:quantile_loss}) is a
proper scoring rule for quantile estimation and can thus be used to compare the
skill of different methods for quantile estimation \citep{gneiting}. Another
proper scoring rule for the evaluation of an estimated cumulative distribution
function $F$ against an observed value $x$ is the continuous ranked probability
score (CRPS):
\begin{align}\label{eq:crps}
\text{CRPS}(F, x) &= \int_{-\infty}^{\infty} \left ( F(x') - I_{x \leq x'}
\right )^2 \: dx'.
\end{align}
Here, $I_{x \leq x'}$ is the indicator function that is equal to 1 when
the condition $x \leq x'$ is true and $0$ otherwise.
For the methods used in this article, the integral can only be evaluated
approximately. The exact way in which this is done for each method is
described in detail in Sect.~\ref{sec:implementation_qrnn}
and \ref{sec:implementation_bmci}.
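As an illustration, the CRPS of a predicted CDF that is available on a finite
grid can be approximated with the trapezoidal rule as sketched below. This is a
simplified sketch: the integration range is truncated to the grid, which is
assumed to cover the bulk of the distribution, and the names are illustrative.
\begin{verbatim}
import numpy as np

def crps(x_grid, cdf, x_obs):
    # x_grid: increasing grid of x' values
    # cdf:    predicted CDF F(x') evaluated on x_grid
    # x_obs:  observed value x
    indicator = (x_grid >= x_obs).astype(float)   # I_{x <= x'}
    return np.trapz((cdf - indicator) ** 2, x_grid)
\end{verbatim}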
The scoring rules presented above evaluate probabilistic predictions against a
single observed value. However, since MCMC simulations can be used to
approximate the true a posteriori distribution to an arbitrary degree of
accuracy, the probabilistic predictions obtained from BMCI and QRNN can be
compared directly to the a posteriori distributions obtained using MCMC. In
the idealized case where the modeling assumptions underlying the MCMC
simulations are true, the sampling distribution obtained from MCMC will
converge to the true posterior and can be used as a ground truth to assess the
predictions obtained from the other methods.
\subsubsection{Calibration plots}
Calibration plots are a graphical method to assess the calibration of prediction
intervals derived from probabilistic predictions. For a set of prediction
intervals with probabilities $p = p_1, \dots, p_n$, the fraction of cases
for which the true value lies within the bounds of the interval is plotted
against the value $p$. If the predictions are well calibrated, the probabilities
$p$ match the observed frequencies and the calibration curve is close to the
diagonal $y = x$. An example of a calibration plot for three different
predictors is given in Figure~\ref{fig:calibration_plot_example}. Compared to the
scoring rules described above, the advantage of the calibration curves is that
they indicate whether the predicted intervals are too narrow or too wide.
Predictions that overestimate the uncertainty yield intervals that are too wide and
result in a calibration curve that lies above the diagonal, whereas predictions
that underestimate the uncertainty yield a calibration curve that lies below
the diagonal.
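Such a calibration curve can be computed directly from predicted interval
bounds, as sketched below; the names are illustrative, and the interval bounds
are assumed to have been derived from the predicted quantiles or CDF.
\begin{verbatim}
import numpy as np

def calibration_curve(x_true, lower, upper):
    # x_true: observed values, shape (n_samples,)
    # lower:  lower interval bounds, shape (n_samples, n_intervals)
    # upper:  upper interval bounds, shape (n_samples, n_intervals)
    # Returns the observed coverage of each prediction interval.
    x = x_true[:, np.newaxis]
    inside = (lower <= x) & (x <= upper)
    return inside.mean(axis=0)
\end{verbatim}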
\begin{figure}[hbpt!]
\centering
\includegraphics[width = 0.5\linewidth]{../plots/fig01}
\caption{Example of a calibration plot displaying calibration curves for
overly confident predictions (dark gray), well calibrated predictions (red),
and overly cautious predictions (blue).}
\label{fig:calibration_plot_example}
\end{figure}
\section{Application to a synthetic retrieval case}
\label{sec:synthetic}
In this section, a simulated retrieval of column water vapor from passive
microwave observations is used to benchmark the performance of BMCI and QRNN
against MCMC simulation. The retrieval case has been set up to provide an
idealized but realistic scenario in which the true a posteriori distribution can
be approximated using MCMC simulation. The MCMC results can therefore be used as
the reference to investigate the retrieval performance of QRNNs and BMCI. Furthermore
it is investigated how different hyperparameters influence the performance of
the QRNN, and lastly how the size of the training set and retrieval database
impact the performance of QRNNs and BMCI.
\subsection{The retrieval}
For this experiment, the retrieval of column water vapor (CWV) from passive
microwave observations over the ocean is considered. The state of the
atmosphere is represented by profiles of temperature and water vapor
concentrations on 15 pressure levels between $10^3$ and $10\:\unit{hPa}$. The
variability of these quantities has been estimated based on ECMWF ERA
Interim data \citep{era_interim} from the year 2016, restricted to latitudes
between $23^\circ$ and $66^\circ$ N. Parametrizations of the multivariate
distributions of temperature and water vapor were obtained by fitting a joint
multivariate normal distribution to the temperature and the logarithm of
water vapor concentrations. The fitted distribution represents the a priori
knowledge on which the simulations are based.
\subsubsection{Forward model simulations}
The Atmospheric Radiative Transfer Simulator (ARTS, \citet{arts_new}) is used to
simulate satellite observations of the atmospheric states sampled from the a
priori distribution. The observations consist of simulated brightness
temperatures from five channels around $23$, $88$, $165$, and $183\:\unit{GHz}$
(cf. Table~\ref{tab:channels}) of the ATMS sensor.
\begin{table}[hbpt]
\centering
\begin{tabular}{|r|c|c|c|}
\hline
Channel & Center frequency & Offset & Bandwidth \\
\hline
1 & $23.8 \unit{GHz}$ & --- & $270 \unit{MHz}$ \\
2 & $88.2 \unit{GHz}$ & --- & $500 \unit{MHz}$ \\
3 & $165.5\unit{GHz}$ & --- & $300 \unit{MHz}$ \\
4 & $183.3\unit{GHz}$ & $7 \unit{GHz}$ & $2000\unit{MHz}$ \\
5 & $183.3\unit{GHz}$ & $3 \unit{GHz}$ & $1000\unit{MHz}$ \\
\hline
\end{tabular}
\caption{Observation channels used for the synthetic retrieval of column water vapor.}
\label{tab:channels}
\end{table}
The simulations take into account only absorption and emission from water vapor.
Ocean surface emissivities are computed using the FASTEM-6 model \citep{fastem6}
with an assumed surface wind speed of zero. The sea surface temperature is
assumed to be equal to the temperature at the pressure level closest to the
surface but no lower than $270\unit{K}$. Sensor characteristics and absorption
lines are taken from the ATMS sensor descriptions that are provided within the
ARTS XML Data package. Simulations are performed for a nadir looking sensor and
neglecting polarization. The observation uncertainty is assumed to be
independent Gaussian noise with a standard deviation of $1\:\unit{K}$.
\subsubsection{MCMC implementation}
The MCMC retrieval is based on a Python implementation of the Metropolis
algorithm \citep[Ch. 12]{bda} that has been developed within the context of
this study. It is released as part of the \textit{typhon: tools for atmospheric
research} software package \citep{typhon}.
The MCMC retrieval is performed in the space of atmospheric states described
by the profiles of temperature and the logarithm of water vapor
concentrations. The multivariate Gaussian distribution obtained by fitting to
the ERA Interim data is taken as the a priori distribution. A random
walk is used as the proposal distribution, with its covariance matrix taken as
the a priori covariance matrix. A single MCMC retrieval consists of 8
independent runs, initialized with different random states sampled from the a
priori distribution. Each run starts with a warm-up phase followed by an
adaptive phase during which the covariance matrix of the proposal distribution
is scaled adaptively to keep the acceptance rate of proposed states close to
the optimal $21\%$ \citep{bda}. This is followed by a production phase during
which 5000 samples of the a posteriori distribution are generated. Only 1 out
of 20 generated samples is kept in order to decrease the correlation between
the resulting states. Convergence of each simulation is checked by computing
the scale reduction factor $\hat{R}$ and the effective number of independent
samples. The retrieval is accepted only if the scale reduction factor is
smaller than 1.1 and the effective sample size larger than 100. Each MCMC
retrieval generates a sequence of atmospheric states from which the column
water vapor is obtained by integration of the water vapor concentration
profile. The distribution of these CWV values is then taken as the
retrieved a posteriori distribution.
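The core of such a random walk Metropolis sampler can be sketched in a few
lines of Python. The sketch below omits the warm-up and adaptive phases, the
use of multiple independent runs, and the convergence diagnostics; the log of
the (unnormalized) a posteriori density is assumed to be provided by a function
that combines the a priori distribution with the ARTS forward model.
\begin{verbatim}
import numpy as np

def metropolis(log_posterior, x0, cov, n_keep, thin=20, rng=None):
    # log_posterior: log of the unnormalized a posteriori density
    # x0:            initial state vector
    # cov:           covariance of the Gaussian random walk proposal
    # thin:          keep only every thin-th generated state
    rng = np.random.default_rng() if rng is None else rng
    x, lp = x0, log_posterior(x0)
    samples = []
    for i in range(n_keep * thin):
        x_prop = rng.multivariate_normal(x, cov)
        lp_prop = log_posterior(x_prop)
        # Accept with probability min(1, p(x_prop) / p(x)).
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        if (i + 1) % thin == 0:
            samples.append(x)
    return np.array(samples)
\end{verbatim}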
\subsubsection{QRNN implementation}
\label{sec:implementation_qrnn}
The implementation of quantile regression neural networks is based on the
Keras Python package for deep learning \citep{keras}. It is also released
as part of the typhon package.
For the training of quantile regression neural networks, the quantile loss
function $\mathcal{L}_\tau(x_\tau, x)$ has been implemented so that it can be
used as a training loss function within the Keras framework. The function can
be initialized with a sequence of quantile fractions $\tau_1, \ldots,
\tau_k$ allowing the neural network to learn to predict the corresponding
quantiles $x_{\tau_1}, \ldots, x_{\tau_k}$.
Custom data generators have been added to the implementation to incorporate
information on measurement uncertainty into the training
process. If the training data is noise free, the data generator can be used to
add noise to each training batch according to the assumptions on measurement
uncertainty. The noise is added immediately before the data is passed to the
neural network, keeping the original training data noise free. This ensures
that the network does not see the same, noisy training sample twice during
training, thus counteracting overfitting.
% For the case that no specific assumptions
% on measurement uncertainty can be made, a data generator for adversarial
% training using the fast gradient sign method has been implemented. In this
% case the perturbation factor $\delta_{\text{adv}}$ becomes an additional
% hyperparameter of the network and needs to be tuned using independent
% validation data. Since for this experiment the noise characteristics of the
% sensor were known, this was not necessary here.
An adaptive form of stochastic batch gradient descent is used for the neural
network training. During the training, loss is monitored on a validation set.
When the loss on the validation set has not decreased for a certain number of
epochs, the learning rate is reduced by a given reduction factor. The training
stops when a predefined minimum learning rate is reached.
The reconstruction of the CDF from the estimated quantiles is obtained
by using the quantiles as nodes of a piece-wise linear approximation and
extending the first and last segments out to 0 and 1, respectively.
This approximation is also used to compute the CRPS score on the test
data.
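The reconstruction of the CDF from the predicted quantiles can be sketched as
follows. This is a simplified illustration of the approach described above,
not the actual typhon code; at least two predicted quantiles are assumed.
\begin{verbatim}
import numpy as np

def cdf_from_quantiles(quantiles, taus):
    # Piece-wise linear CDF through the nodes (x_tau, tau); the first
    # and last segments are extended until the CDF reaches 0 and 1.
    q = np.asarray(quantiles, dtype=float)
    t = np.asarray(taus, dtype=float)
    x_lo = q[0] - t[0] * (q[1] - q[0]) / (t[1] - t[0])
    x_hi = q[-1] + (1.0 - t[-1]) * (q[-1] - q[-2]) / (t[-1] - t[-2])
    x = np.concatenate([[x_lo], q, [x_hi]])
    f = np.concatenate([[0.0], t, [1.0]])
    return x, f
\end{verbatim}
The resulting CDF nodes can be interpolated onto a regular grid (e.g. with
np.interp) in order to evaluate the CRPS approximation described above.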
\subsubsection{BMCI implementation}
\label{sec:implementation_bmci}
The BMCI method has likewise been implemented in Python and added to the
typhon package. In addition to retrieving the first two moments of the
posterior distribution, the implementation provides functionality to
retrieve the posterior CDF using Eq.~(\ref{eq:bmci_cdf}). Approximate posterior
quantiles are computed by interpolating the inverse CDF at the desired quantile
values. To compute the CRPS score for a given retrieval, the trapezoidal rule
is used to perform the integral over the values $x_i$ in the retrieval database
$\{\vec{y}_i, x_i\}_{i = 1}^n$.
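A simplified sketch of this procedure is given below, assuming Gaussian
observation errors with a known (inverse) covariance matrix; the actual typhon
implementation differs in details such as normalization and performance
optimizations, and the names used here are illustrative.
\begin{verbatim}
import numpy as np

def bmci_cdf(y_obs, y_db, x_db, s_inv):
    # y_obs: observed measurement vector, shape (m,)
    # y_db:  database observations, shape (n, m)
    # x_db:  database values of the retrieval quantity, shape (n,)
    # s_inv: inverse of the observation error covariance matrix
    d = y_db - y_obs
    w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, s_inv, d))
    order = np.argsort(x_db)
    x_sorted = x_db[order]
    cdf = np.cumsum(w[order]) / np.sum(w)
    return x_sorted, cdf, w

def bmci_quantiles(x_sorted, cdf, taus):
    # Interpolate the inverse CDF at the desired quantile fractions.
    return np.interp(taus, cdf, x_sorted)
\end{verbatim}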
\subsection{QRNN model selection}
Just as with common neural networks, QRNNs have several hyperparameters that cannot
be learned directly from the data, but need to be tuned independently. For this
study the dependence of the QRNN performance on its hyperparameters has been
investigated. The results are included here as they may be a helpful reference
for future applications of QRNNs.
For this analysis, hyperparameters describing the structure of the QRNN model
are investigated separately from training parameters. The hyperparameters
describing the structure of the QRNN are:
\begin{enumerate}
\item the number of hidden layers,
\item the number of neurons per layer,
\item the type of activation function.
\setcounter{enumic}{\value{enumi}}
\end{enumerate}
The training method described in Sect.~\ref{sec:implementation_qrnn} is
defined by the following training parameters:
\begin{enumerate}
\setcounter{enumi}{\value{enumic}}
\item the batch size used for stochastic batch gradient descent,
\item the minimum learning rate at which the training is stopped,
\item the learning rate decay factor,
\item the number of training epochs without progress on the validation set
before the learning rate is reduced.
\end{enumerate}
\subsubsection{Structural parameters}
To investigate the influence of hyperparameters 1--3 on the performance of
the QRNN, 10-fold cross validation on the training set consisting of $10^6$
samples has been used to estimate the performance of different hyperparameter
configurations. As performance metric the mean quantile loss on the validation
set averaged over all predicted quantiles for $\tau = 0.05, 0.1, 0.2, \ldots,
0.9, 0.95$ is used. A grid search over a subspace of the configuration space
was performed to find optimal parameters. The results of the analysis are
displayed in Figure~\ref{fig:hyperparams}. For the configurations considered,
the layer width has the most significant effect on the performance.
Nevertheless, only small performance gains are obtained by increasing the
layer width to values above 64 neurons. Another general observation is that
networks with three hidden layers generally outperform networks with fewer
hidden layers. Networks using Rectified Linear Unit (ReLU) activation
functions not only achieve slightly better performance than networks using
tanh or sigmoid activation functions, but also show significantly lower
variability. Based on these results, a neural network with three hidden
layers, 128 neurons in each layer and ReLU activation functions has been
selected for the comparison to BMCI.
\begin{figure}[hbpt!]
\centering
\includegraphics[width = 1.0\linewidth]{../plots/fig02}
\caption{Mean validation set loss (solid lines) and standard deviation (shading)
of different hyperparameter configurations with respect to layer width (number of neurons).
Different lines display the results for different numbers of hidden layers $n_h$.
The three panels show the results for ReLU, tanh, and sigmoid activation functions.}
\label{fig:hyperparams}
\end{figure}
\subsubsection{Training parameters}
For the optimization of the training parameters 4--7, a very coarse grid
search was performed, using only three different values for each parameter.
In general, the training parameters showed only a small effect ($< 2\%$ for the
combinations considered here) on the QRNN performance compared to the structural
parameters. The best cross-validation performance was obtained for slow
training with a small learning rate reduction factor of $1.5$ and decreasing the
learning rate only after $10$ training epochs without reduction of the
validation loss. No significant increase in performance could be observed for
values of the learning rate minimum below $10^{-4}$. With respect to the batch
size, the best results were obtained for a batch size of 128 samples.
%However,
%while this very slow training works well for this example, a likely reason for
%this is the size and statistical homogeneity of the training and validation
%data. For real world data, a configuration that stops the training earlier
%may yield better results.
\subsection{Comparison against MCMC}
% The main purpose of the retrieval simulations is to provide a validation
% and benchmark case for QRNNs. They were set up so that MCMC simulations
% can be used to approximate the true a posteriori distribution. In this way,
% the probabilistic predictions obtained from QRNNs (and BMCI) can be directly
% compared to the approximated true a posteriori distribution.
In this section, the performance of a single QRNN and an ensemble of 10 QRNNs
are analyzed. The predictions from the ensemble are obtained by averaging the
predictions from each network in the ensemble. All tests in this subsection are
performed for a single QRNN, the ensemble of QRNNs, and BMCI. The retrieval database used
for BMCI and the training of the QRNNs in this experiment consists of $10^6$ entries.
Figure~\ref{fig:cdfs} displays retrieval results for eight example cases. The
choice of the cases is based on the Kolmogorov-Smirnov (KS) statistic, which
corresponds to the maximum absolute deviation of the predicted CDF from the
reference CDF obtained by MCMC simulation. A small KS value indicates a good
prediction of the true CDF, while a high value is obtained for large deviations
between predicted and reference CDF. The cases shown correspond to the 10th, 50th,
90th and 99th percentile of the distribution of KS values obtained using BMCI
or a single QRNN. In this way they provide a qualitative overview of the performance
of the methods.
In the displayed cases, both methods are generally successful in predicting the
a posteriori distribution. Only for the $99$th percentile of the KS value distribution
does the BMCI prediction show significant deviations from the reference distribution.
The jumps in the estimated a posteriori CDF indicate that the deviations are due to
undersampling of the input space in the retrieval database. This results in
excessively high weights attributed to the few entries close to the
observation. For this specific case the QRNN provides a better estimate of the a
posteriori CDF even though both predictions are based on the same data.
\begin{figure}[hbtp!]
\centering
\includegraphics[width = 1.0\linewidth]{../plots/fig03}
\caption{Retrieved a posteriori CDFs obtained using MCMC (gray), BMCI
(blue), a single QRNN (red line) and an ensemble of QRNNs (red marker). Cases
displayed in the first row correspond to the 1st, 50th, 90th, and 99th
percentiles of the distribution of the Kolmogorov-Smirnov statistic of BMCI
compared to the MCMC reference. The second row displays the same percentiles of
the distribution of the Kolmogorov-Smirnov statistic of the single QRNN
predictions compared to MCMC.}
\label{fig:cdfs}
\end{figure}
Another way of displaying the estimated a posteriori distribution is by means
of its probability density function (PDF), which is defined as the derivative
of its CDF. For the QRNN, the PDF is approximated by differentiating
the piece-wise linear approximation to the CDF and setting the boundary values
to zero. For BMCI, the a posteriori PDF can be approximated using a histogram of the
CWV values in the database weighted by the corresponding weights $w_i(\mathbf{y})$.
The PDFs for the cases corresponding to the CDFs shown in
Figure~\ref{fig:cdfs} are shown in Figure~\ref{fig:pdfs}.
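Both PDF approximations are straightforward to compute, as sketched below; the
sketch is for illustration only, and choices such as the number of histogram
bins are arbitrary here.
\begin{verbatim}
import numpy as np

def pdf_from_cdf_nodes(x_nodes, cdf_nodes):
    # Derivative of a piece-wise linear CDF: a piece-wise constant PDF.
    slopes = np.diff(cdf_nodes) / np.diff(x_nodes)
    centers = 0.5 * (x_nodes[1:] + x_nodes[:-1])
    return centers, slopes

def bmci_pdf(x_db, weights, bins=50):
    # Weighted, normalized histogram of the database values x_i.
    density, edges = np.histogram(x_db, bins=bins, weights=weights,
                                  density=True)
    return 0.5 * (edges[1:] + edges[:-1]), density
\end{verbatim}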
\begin{figure}[hbtp!]
\centering
\includegraphics[width = 1.0\linewidth]{../plots/fig04}
\caption{Retrieved a posteriori PDFs corresponding to the CDFs displayed
in Figure~\ref{fig:cdfs} obtained using MCMC (gray), BMCI (blue), a single
QRNN (red line) and an ensemble of QRNNs (red marker).}
\label{fig:pdfs}
\end{figure}
To obtain a more comprehensive view on the performance of QRNNs and BMCI,
the predictions obtained from both methods are compared to those obtained
from MCMC for 6500 test cases. For the comparison, let the \textit{effective
quantile fraction} $\tau_{\text{eff}}$ be defined as the fraction of MCMC
samples that are less than or equal to the predicted quantile
$\widehat{x_\tau}$ obtained from QRNN or BMCI. In general, the predicted
quantile $\widehat{x_\tau}$ will not correspond exactly to the true quantile
$x_\tau$, but rather to an effective quantile $x_{\tau_\text{eff}}$, defined by
the fraction $\tau_\text{eff}$ of the samples of the distribution that are
smaller than or equal to the predicted value $\widehat{x_\tau}$. The
resulting distributions of the effective quantile fractions for BMCI and
QRNNs are displayed in Figure~\ref{fig:quantile_fractions} for the estimated
quantiles for $\tau = 0.1, 0.2, \ldots, 0.9$.
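The effective quantile fraction is computed directly from the MCMC reference
samples, e.g. as in the following sketch (illustrative names only):
\begin{verbatim}
import numpy as np

def effective_quantile_fraction(mcmc_samples, x_tau_pred):
    # Fraction of MCMC samples that are <= the predicted quantile;
    # for an ideal prediction of x_tau this value approaches tau.
    return np.mean(mcmc_samples <= x_tau_pred)
\end{verbatim}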
\begin{figure}[hbpt!]
\centering
\includegraphics[width = 0.8\linewidth]{../plots/fig05}
\caption{Distribution of effective quantile fractions $\tau_\text{eff}$ achieved by
QRNN and BMCI on the test data. The left plot displays the performance of a
single QRNN compared to BMCI, the right plot the performance of the ensemble.}
\label{fig:quantile_fractions}
\end{figure}
For an ideal estimator of the quantile $x_\tau$, the resulting distribution
would be a delta function centered at $\tau$. Due to the estimation error,
however, the $\tau_{\text{eff}}$ values are distributed around the true quantile
fraction $\tau$. The results show that both BMCI and QRNN provide fairly
accurate estimates of the quantiles of the a posteriori distribution. Furthermore,
all methods yield equally good predictions, making the distributions virtually
identical.
\subsection{Training set size impact}
Finally, we investigate how the size of the training data set used in the training
of the QRNN (or as retrieval database for BMCI) affects the performance of the
retrieval method. This has been done by randomly generating training subsets
from the original training data with sizes logarithmically spaced between $10^3$
and $10^6$ samples. For each size, five random training subsets have been
generated and used to retrieve the test data with a single QRNN and BMCI. As test data,
a separate test set consisting of $10^5$ simulated observation vectors and
corresponding CWV values is used.
Figure~\ref{fig:mape_crps} displays the means of the mean absolute percentage
error (MAPE, Panel (a)) and the mean continuous ranked probability score (CRPS,
Panel (b)) achieved by both methods on the differently sized training sets.
For the computation of the MAPE, the CWV prediction is taken as the median of
the estimated a posteriori distribution obtained using QRNNs or BMCI. This value
is compared to the true CWV value corresponding to the atmospheric state that
has been used in the simulation. As expected, the performance of both methods
improves with the size of the training set. With respect to the MAPE, both
methods perform equally well for a training set size of $10^6$, but the QRNN
outperforms BMCI for all smaller training set sizes. With respect to CRPS, a
similar behavior is observed. These are reassuring results, as they indicate
that not only the accuracy of the predictions (measured by the MAPE and CRPS) improves
as the amount of training data increases, but also their calibration (measured
only by the CRPS).
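For reference, the MAPE used here can be computed from the predicted medians as
sketched below; whether a factor of 100 is included is a matter of convention
and is assumed here, and the names are illustrative.
\begin{verbatim}
import numpy as np

def mape(x_true, x_median_pred):
    # Mean absolute percentage error of the retrieved CWV, where the
    # prediction is the median of the estimated a posteriori
    # distribution.
    return 100.0 * np.mean(np.abs(x_median_pred - x_true)
                           / np.abs(x_true))
\end{verbatim}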
\begin{figure}[hbpt!]
% \centering
\includegraphics[width = 0.8\linewidth]{../plots/fig06}
\caption{MAPE (Panel (a)) and CRPS (Panel (b)) achieved by QRNN (red) and BMCI (blue)
on the test set using differently sized training sets and retrieval
databases. For each size, five random subsets of the original training data were
generated. The lines display the means of the observed values. The shading
indicates the range of $\pm \sigma$ around the mean.}
\label{fig:mape_crps}
\end{figure}
Finally, the mean of the quantile loss $\mathcal{L}_\tau$ on the test set for
$\tau = 0.1, 0.5, 0.9$ has been considered (Figure~\ref{fig:quantile_losses}).
Qualitatively, the results are similar to the ones obtained using MAPE and CRPS.
The QRNN outperforms BMCI for smaller training set sizes but converges to similar
values for training set sizes of $10^6$.
\begin{figure}[hbpt!]
\centering
\includegraphics[width = 0.8\linewidth]{../plots/fig07}
\caption{Mean quantile loss for different training set sizes $n_\text{train}$ and
$\tau = 0.1, 0.5, 0.9$.}
\label{fig:quantile_losses}
\end{figure}
The results presented in this section indicate that QRNNs can, at least
under idealized conditions, be used to estimate the a posteriori distribution of
Bayesian retrieval problems. Moreover, they were shown to work equally well
as BMCI for large data sets. What is interesting is that for smaller data sets,
QRNNs even provide better estimates of the a posteriori distribution than BMCI.
This indicates that QRNNs provide a better representation of the functional
dependency of the a posteriori distribution on the observation data, thus
achieving better interpolation in the case of scarce training data. Nonetheless,
it remains to be investigated whether this advantage can also be observed for
real-world data.
A possible approach to handling scarce retrieval databases with BMCI is to
artificially increase the assumed measurement uncertainty. This has not been
performed for the BMCI results presented here and may improve the performance of
the method. The difficulty with this approach is that the method formulation
is based on the assumption of a sufficiently large database and thus can,
at least formally, not handle scarce training data. Finding a suitable way to
increase the measurement uncertainty would thus require either additional
methodological development or the invention of a heuristic approach, both of which
are outside the scope of this study.
\section{Retrieving cloud top pressure from MODIS using QRNNs}
\label{sec:ctp}
In this section, QRNNs are applied to retrieve cloud top pressure (CTP) using
observations from the moderate resolution imaging spectroradiometer (MODIS,
\citet{modis}). The experiment is based on the work by \cite{hakansson} who
developed the NN-CTTH algorithm, a neural network based retrieval of cloud top
pressure. A QRNN based CTP retrieval is compared to the NN-CTTH algorithm and it
is investigated how QRNNs can be used to estimate the retrieval uncertainty.
\subsection{Data}
The QRNN uses the same data for training as the reference NN-CTTH algorithm. The
data set consists of MODIS Level 1B data \citep{myd021km, myd03} collocated with
cloud properties obtained from CALIOP \citep{calipso}. The \textit{top layer
pressure} variable from the CALIOP data is used as retrieval target. The data
was taken from all orbits from 24 days (the 1st and 14th of every month) from
the year 2010. In \citet{hakansson} multiple neural networks are trained using
varying combinations of input features derived from different MODIS channels and
ancillary NWP data in order to compare retrieval performance for different
inputs. Of the different neural network configurations presented in
\citet{hakansson}, the version denoted by NN-AVHRR is used for comparison
against the QRNN. This version uses only the $11 \unit{\mu m}$ and $12\unit{\mu
m}$ channels from MODIS. In addition to single pixel input, the input features
comprise structural information in the form of various statistics computed on a
$5 \times 5$ neighborhood around the center pixel. The ancillary numerical weather
prediction data provided to the network consists of surface pressure and
temperature, temperatures at five pressure levels, as well as column integrated
water vapor. These are also the input features that are used for the training of
the QRNN. The training data used for the QRNN are the \textit{training} and
\textit{during-training validation set} from \cite{hakansson}. The comparison to
the NN-AVHRR version of the NN-CTTH algorithm uses the data set for
\textit{testing under development} from \cite{hakansson}.
\subsection{Training}
The same training scheme as described in Sect.~\ref{sec:implementation_qrnn} is
used for the training of the QRNNs. The training progress, based on which the
learning rate is reduced or training aborted, is monitored using the
\textit{during-training validation} data set from \cite{hakansson}. After
performing a grid search (results not shown) over width, depth and minibatch
size, the best performance on the validation set was obtained for networks with
four layers with 64 neurons each, ReLU activation functions, and a batch size of
128 samples.
The main difference in the training process compared to the experiment from the
previous section is how measurement uncertainties are incorporated. For the
simulated retrieval, the training data was noise-free, so measurement
uncertainties could be realistically represented by adding noise according to
the sensor characteristics. This is not the case for MODIS observations;
instead, adversarial training is used here to ensure well-calibrated
predictions. For the tuning of the perturbation parameter $\delta_{\text{adv}}$
(cf. Sect.~\ref{sec:adversarial_training}), the calibration on the
during-training validation set was monitored using a calibration plot. Ideally,
a separate data set would be used to tune this parameter, but relying on the
validation set was sufficient in this case to achieve good results on the test data. The
calibration curves obtained using different values of $\delta_\text{adv}$ are
displayed in Figure~\ref{fig:validation_calibration}. It can be seen from the
plot that without adversarial training ($\delta_\text{adv} = 0$) the predictions
obtained from the QRNN are overly confident, leading to prediction intervals
that underrepresent the uncertainty in the retrieval. Since adversarial training
may be viewed as a way of representing observation uncertainty in the training
data, larger values of $\delta_\text{adv}$ lead to less confident predictions.
Based on these results, $\delta_\text{adv} = 0.05$ is chosen for the training.
\begin{figure}[hbpt!]
\centering
\includegraphics[width = 0.6\linewidth]{../plots/fig08}
\caption{Calibration of the QRNN prediction intervals on the validation set
used during training. The curves display the results for no adversarial training
($\delta_\text{adv} = 0$) and adversarial training with perturbation
factor $\delta_\text{adv} = 0.01, 0.05, 0.1$.}
\label{fig:validation_calibration}
\end{figure}
Except for the use of adversarial training, the structure of the underlying
network and the training process of the QRNN are fairly similar to what is used
for the NN-CTTH retrieval. The QRNN uses four hidden layers instead of two, with
64 neurons in each of them instead of 30 in the first and 15 in the second layer.
While this makes the neural network used in the QRNN slightly more complex, this
should not be a major drawback since computational performance is generally not
critical for neural network retrievals.
\subsection{Prediction accuracy}
Most data analyses will likely require a single predicted value for the cloud top
pressure. To derive a point value from the QRNN prediction, the median of the
estimated a posteriori distribution is used.
The distributions of the resulting median pressure values
on the \textit{testing during development} data set are displayed in
Figure~\ref{fig:ctp_results_p_dist} together with the retrieved pressure
values from the NN-CTTH algorithm. The distributions are displayed
separately for low, medium and high clouds (as classified by the CALIOP
feature classification flag) as well as the complete data set. From these
results it can be seen that the values predicted by the QRNN have stronger peaks
low in the atmosphere for low clouds and high in the atmosphere for
high clouds. For medium clouds, the peak is more spread out, and the distribution
has heavier tails low and high in the atmosphere than that of the values retrieved
by the NN-CTTH algorithm.
Figure~\ref{fig:ctp_results} displays the error distributions of the predicted
CTP values on the \textit{testing during development} data set, again separated by
cloud type as well as for the complete data set. Both the simple QRNN and the ensemble
of QRNNs perform slightly better than the NN-CTTH algorithm for low and high clouds.
For medium clouds, no significant difference in the performance of the methods can
be observed. The ensemble of QRNNs seems to slightly improve upon the prediction
accuracy of a single QRNN but the difference is likely negligible. Compared to
the QRNN results, the CTP predicted by NN-CTTH is biased low for low clouds and
biased high for high clouds.
Even though both the QRNN and the NN-CTTH retrieval use the same input and
training data, the predictions from both retrievals differ considerably. Using
the Bayesian framework, this can likely be explained by the fact that the two
retrievals estimate different statistics of the a posteriori distribution. The
NN-CTTH algorithm has been trained using a squared error loss function which
will lead the algorithm to predict the mean of the a posteriori distribution.
The QRNN retrieval, on the other hand, predicts the median of the a posteriori
distribution. Since the median minimizes the expected absolute error, it is
expected that the CTP values predicted by the QRNN yield overall smaller errors.
\begin{figure}[htbp!]
\centering
\includegraphics[width = 0.8\linewidth]{../plots/fig09}
\caption{Distributions of predicted CTP values $(\text{CTP}_{\text{pred}})$
for high clouds in panel (a), medium clouds in panel (b), low
clouds in panel (c) and the complete test set in panel (d).}
\label{fig:ctp_results_p_dist}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width = 0.8\linewidth]{../plots/fig10}
\caption{Error distributions of predicted CTP values $(\text{CTP}_{\text{pred}})$
with respect to CTP from CALIOP ($\text{CTP}_{\text{ref}}$) for high clouds in
panel (a), medium clouds in panel (b), low clouds in panel (c) and the
complete test set in panel (d).}
\label{fig:ctp_results}
\end{figure}
\subsection{Uncertainty estimation}
The NN-CTTH algorithm retrieves CTP but does not provide case-specific
uncertainty estimates. Instead, an estimate of uncertainty is provided in the
form of the observed mean absolute error on the test set. In order to compare
these uncertainty estimates with those obtained using QRNNs, Gaussian error
distributions are fitted to the observed error based on the observed mean
absolute error (MAE) and mean squared error (MSE). A Gaussian error model
is chosen here as it is arguably the most common distribution used to represent
random errors.
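For a zero-mean Gaussian error model, the expected absolute error is
$\sigma \sqrt{2 / \pi}$ and the expected squared error is $\sigma^2$, so the
fitted standard deviations can be obtained as sketched below. The zero-mean
assumption is made here for illustration only.
\begin{verbatim}
import numpy as np

def gaussian_sigma_from_errors(errors):
    # Standard deviations of zero-mean Gaussian error models that
    # reproduce the observed MAE and MSE, respectively.
    mae = np.mean(np.abs(errors))
    mse = np.mean(errors ** 2)
    return mae * np.sqrt(np.pi / 2.0), np.sqrt(mse)
\end{verbatim}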
A plot of the errors observed on the \textit{testing during development} data
set and the fitted Gaussian error distributions is displayed in Panel (a) of
Figure~\ref{fig:error_fit}. The fitted error curves correspond to the Gaussian
probability density functions with the same MAE and MSE as observed on the test
data. Panel (b) displays the observed error together with the predicted error
obtained from a single QRNN. The predicted error is computed as the deviation of
a random sample of the estimated a posteriori distribution from its median. The
fitted Gaussian error distributions clearly do not provide a good fit to the
observed error. On the other hand, the predicted errors obtained from the QRNN a
posteriori distributions yield good agreement with the observed error. This
indicates that the QRNN successfully learned to predict retrieval uncertainties.
Furthermore, the results show that the ensemble of QRNNs actually provides a
slightly worse fit to the observed error than a single QRNN. An ensemble of
QRNNs thus does not necessarily improve the calibration of the predictions.
\begin{figure}[hbpt!]
\centering
\includegraphics[width = 1.0\linewidth]{../plots/fig11.pdf}
\caption{Predicted and observed error distributions. Panel (a)
displays the observed error for the NN-CTTH retrieval as well as the
Gaussian error distributions that have been fitted to the observed
error distribution based on the MAE and MSE. Panel (b)
displays the observed test set error for a single QRNN as well as the
predicted error obtained as the deviation of a random sample of the
predicted a posteriori distribution from the median. Panel (c) displays
the same for the ensemble of QRNNs.}
\label{fig:error_fit}
\end{figure}
The Gaussian error model based on the MAE fit has also been used to produce
prediction intervals for the CTP values obtained from the NN-CTTH algorithm.
Figure~\ref{fig:calibration} displays the resulting calibration curves for the
NN-CTTH algorithm, a simple QRNN and an ensemble of QRNNs. The results support
the finding that a single QRNN is able to provide well calibrated probabilistic
predictions of the a posteriori distribution. The calibration curve for the
ensemble predictions is virtually identical to that for the single network. The
NN-CTTH predictions using a Gaussian fit are not as well calibrated and tend to
provide prediction intervals that are too wide for $p = 0.1, 0.3, 0.5, 0.7$ but
overly narrow intervals for $p = 0.9$.
\begin{figure}[hbpt!]
\centering
\includegraphics[width = 0.5\linewidth]{../plots/fig12}
\caption{Calibration plot for prediction intervals derived from the Gaussian
error model for the NN-CTTH algorithm (blue), the single QRNN (dark gray) and
the ensemble of QRNNs (red).}
\label{fig:calibration}
\end{figure}
\subsection{Sensitivity to a priori distribution}
As shown above, the predictions obtained from the QRNN are statistically
consistent in the sense that they predict probabilities that match observed
frequencies when applied to test data. This however requires that the test data
is statistically consistent with the training data. Statistically consistent
here means that both data sets come from the same generating distribution, or in
more Bayesian terms, the same a priori distribution. What happens when this is
not the case can be seen when the calibration with respect to different cloud
types is computed. Figure~\ref{fig:calibration_cloud_type} displays calibration
curves computed separately for low, medium and high clouds. As can be seen from the plot, the QRNN
predictions are no longer equally well calibrated. Viewed from the Bayesian
perspective, this is not very surprising as CTP values for medium clouds have a
significantly different a priori distribution compared to CTP values for all
cloud types, thus giving different a posteriori distributions.
For the NN-CTTH algorithm, the results look different. While for low clouds
the calibration deteriorates, the calibration is even slightly improved for
high clouds. This is not surprising as the Gaussian fit may be more
appropriate on different subsets of the test data.
\begin{figure}[hbpt!]
\centering
\includegraphics[width = 0.5\linewidth]{../plots/fig13}
\caption{Calibration of the prediction intervals obtained from NN-CTTH (blue) and
a single QRNN (red) with respect to specific cloud types.}
\label{fig:calibration_cloud_type}
\end{figure}
\conclusions %% \conclusions[modified heading if necessary]
\label{sec:conclusions}
In this article, quantile regression neural networks have been proposed as a
method to estimate a posteriori distributions of Bayesian remote sensing retrievals.
They have been applied to two retrievals of scalar atmospheric variables. It has
been demonstrated that QRNNs are capable of providing accurate and
well-calibrated probabilistic predictions in agreement with the Bayesian
formulation of the retrieval problem.
The synthetic retrieval case presented in Sect.~\ref{sec:synthetic} shows that
the conditional distribution learned by the QRNN is the same as the Bayesian a
posteriori distribution obtained from methods that are directly based on the
Bayesian formulation. This in itself seems worthwhile to note, as it reveals the
importance of the training set statistics that implicitly represent the a priori
knowledge. On the synthetic data set, QRNNs compare well to BMCI and even perform
better for small data sets. This indicates that they are able to handle the
``curse of dimensionality'' \citep{friedman} better than BMCI, which would make them more suitable
for the application to retrieval problems with high-dimensional measurement
spaces.
While the optimization of the computational performance of BMCI has not been
investigated in this work, QRNNs allow for retrievals that are at least one order
of magnitude faster than a naive implementation of BMCI. QRNN retrievals
can be easily parallelized, and hardware-optimized implementations are available
for all modern computing architectures, thus providing very good performance out
of the box.
Based on these very promising results, the next step in this line of research
should be to compare QRNNs and BMCI on a real retrieval case to investigate if
the findings from the simulations carry over to the real world. If this is the
case, significant reductions in the computational cost of operational retrievals
and maybe even better retrieval performance could be achieved using QRNNs.
In the second retrieval application presented in this article, QRNNs have been
used to retrieve cloud top pressure from MODIS observations. The results show
that not only are QRNNs able to improve upon state-of-the-art retrieval accuracy
but they can also learn to predict retrieval uncertainty. The ability of QRNNs
to provide statistically consistent, case-specific uncertainty estimates should
make them a very interesting alternative to non-probabilistic neural network
retrievals. Nonetheless, the sensitivity of the QRNN approach to a priori
assumptions has also been demonstrated. The posterior distribution learned by the
QRNN depends on the validity of the a priori assumptions encoded in the training
data. In particular, accurate uncertainty estimates can only be expected if the
retrieved observations follow the same distribution as the training data. This,
however, is a limitation inherent to all empirical methods.
The second application case presented here demonstrated the ability of QRNNs to represent
non-Gaussian retrieval errors. While, as shown in this study, this is also the case
for BMCI (Eq.~(\ref{eq:bmci_cdf})), it is common in practice to estimate only mean and standard
deviation of the a posteriori distribution. Furthermore, implementations usually
assume Gaussian measurement errors, which is an unlikely assumption if the
observations in the retrieval database contain modeling errors. By requiring no
assumptions whatsoever on the involved uncertainties, QRNNs may provide a more
suitable way of representing (non-Gaussian) retrieval uncertainties.
The application of the Bayesian framework to neural network retrievals opens the
door to a number of interesting applications that could be pursued in future
research. It would for example be interesting to investigate if the a priori
information can be separated from the information contained in the retrieved
measurement. This would make it possible to remove the dependency of the
probabilistic predictions on the a priori assumptions, which can currently be
considered a limitation of the approach. Furthermore, estimated a posteriori
distributions obtained from QRNNs could be used to estimate the information
content in a retrieval following the methods outlined by \citet{rodgers}.
In this study only the retrieval of scalar quantities was considered. Another
aspect of the application of QRNNs to remote sensing retrievals that remains to
be investigated is how they can be used to retrieve vector-valued
quantities, such as, for example, concentration profiles of atmospheric gases or
particles. While the generalization to marginal, multivariate quantiles should
be straightforward, it is unclear whether a better approximation of the
quantile contours of the joint a posteriori distribution can be obtained using
QRNNs.
%What has not been considered in this work is the quantification of
%out-of-distribution uncertainty, which refers to uncertainty due to statistical
%differences between the training data and the data to which the neural network
%is applied. It has been shown by \citet{lakshminarayanan} that ensemble methods
%can be used to detect out-of-sample application of neural networks. It would
%therefore be interesting to investigate if and how ensembles of QRNN can be used
%to detect out-of-sample uncertainty.
%% The following commands are for the statements about the availability of data sets and/or software code corresponding to the manuscript.
%% It is strongly recommended to make use of these sections in case data sets and/or software code have been part of your research the article is based on.
\codeavailability{The implementation of the retrieval methods that were used in
this article have been published as parts of the \textit{typhon: tools for
atmospheric research} \citep{typhon} software package. The source code for the
calculations presented in Sect.~\ref{sec:synthetic} and \ref{sec:ctp} are
accessible from public repositories \citep{predictive_uncertainty, smhi}.}
%\dataavailability{TEXT} %% use this section when having only data sets available
%\codedataavailability{TEXT} %% use this section when having data sets and software code available
%\appendix
%\section{} %% Appendix A
%
%\subsection{} %% Appendix A1, A2, etc.
\noappendix %% use this to mark the end of the appendix section
%% Regarding figures and tables in appendices, the following two options are possible depending on your general handling of figures and tables in the manuscript environment:
%% Option 1: If you sorted all figures and tables into the sections of the text, please also sort the appendix figures and appendix tables into the respective appendix sections.
%% They will be correctly named automatically.
%% Option 2: If you put all figures after the reference list, please insert appendix tables and figures after the normal tables and figures.
%% To rename them correctly to A1, A2, etc., please add the following commands in front of them:
%\appendixfigures %% needs to be added in front of appendix figures
%
%\appendixtables %% needs to be added in front of appendix tables
%
% \begin{table}[ht]
% \begin{center}
%
% \vspace{0.5cm}
% \resizebox{\textwidth}{!}{
% \begin{tabular}{|l|ccccccc|}
% \multicolumn{8}{c}{Linear}\\
% \hline
% \input{../tables/linear.tbl}
% \end{tabular}}
%
% \vspace{0.5cm}
% \resizebox{\textwidth}{!}{
% \begin{tabular}{|l|ccccccc|}
% \multicolumn{8}{c}{Sigmoid}\\
% \hline
% \input{../tables/sigmoid.tbl}
% \end{tabular}}
%
% \vspace{0.5cm}
% \resizebox{\textwidth}{!}{
% \begin{tabular}{|l|ccccccc|}
% \multicolumn{8}{c}{tanh}\\
% \hline
% \input{../tables/tanh.tbl}
% \end{tabular}}
%
% \vspace{0.5cm}
% \resizebox{\textwidth}{!}{
% \begin{tabular}{|l|ccccccc|}
% \multicolumn{8}{c}{ReLU}\\
% \hline
% \input{../tables/relu.tbl}
% \end{tabular}}
%
% \caption{Mean quantile loss and standard deviation for different activation functions, varying numbers
% $n_h$ of hidden layers and $n_n$ of neurons per layer. Results were obtained using 10-fold
% cross validation on the training set.}
%
% \label{tab:model_selection}
%
% \end{center}
% \end{table}
%% Please add \clearpage between each table and/or figure. Further guidelines on figures and tables can be found below.
\authorcontribution{All authors contributed to the study through discussion and
feedback. Patrick Eriksson and Bengt Rydberg proposed the application of QRNNs
to remote sensing retrievals. The study was designed and implemented by Simon Pfreundschuh,
who also prepared the manuscript including figures, text and tables.
Anke Thoss and Nina H{\aa}kansson provided the training data for the cloud top
pressure retrieval.} %% optional section
\competinginterests{The authors declare that they have no conflict of interest.} %% this section is mandatory even if you declare that no competing interests are present
%\disclaimer{TEXT} %% optional section
\begin{acknowledgements}
The scientists at Chalmers University of Technology were funded by the Swedish National Space Board.
The authors would like to acknowledge the work of Ronald Scheirer and Sara
Hörnquist who were involved in the creation of the collocation data set that was
used as training and test data for the cloud top pressure retrieval.
Numerous free software packages were used to perform the numerical experiments
presented in this article and visualize their results. The authors would like to
acknowledge the work of all the developers that contributed to making these tools
freely available to the scientific community, in particular the work by
\citet{matplotlib, ipython, numpy, python}.
\end{acknowledgements}
%% REFERENCES
%% The reference list is compiled as follows:
%% Since the Copernicus LaTeX package includes the BibTeX style file copernicus.bst,
%% authors experienced with BibTeX only have to include the following two lines:
%%
\bibliographystyle{copernicus}
\bibliography{literature.bib}
%%
%% URLs and DOIs can be entered in your BibTeX file as:
%%
%% URL = {http://www.xyz.org/~jones/idx_g.htm}
%% DOI = {10.5194/xyz}
%% LITERATURE CITATIONS
%%
%% command & example result
%% \citet{jones90}| & Jones et al. (1990)
%% \citep{jones90}| & (Jones et al., 1990)
%% \citep{jones90,jones93}| & (Jones et al., 1990, 1993)
%% \citep[p.~32]{jones90}| & (Jones et al., 1990, p.~32)
%% \citep[e.g.,][]{jones90}| & (e.g., Jones et al., 1990)
%% \citep[e.g.,][p.~32]{jones90}| & (e.g., Jones et al., 1990, p.~32)
%% \citeauthor{jones90}| & Jones et al.
%% \citeyear{jones90}| & 1990
%% FIGURES
%% When figures and tables are placed at the end of the MS (article in one-column style), please add \clearpage
%% between bibliography and first table and/or figure as well as between each table and/or figure.
%% ONE-COLUMN FIGURES
%%f
%\begin{figure}[t]
%\includegraphics[width=8.3cm]{FILE NAME}
%\caption{TEXT}
%\end{figure}
%
%%% TWO-COLUMN FIGURES
%
%%f
%\begin{figure*}[t]
%\includegraphics[width=12cm]{FILE NAME}
%\caption{TEXT}
%\end{figure*}
%
%
%%% TABLES
%%%
%%% The different columns must be seperated with a & command and should
%%% end with \\ to identify the column brake.
%
%%% ONE-COLUMN TABLE
%
%%t
%\begin{table}[t]
%\caption{TEXT}
%\begin{tabular}{column = lcr}
%\tophline
%
%\middlehline
%
%\bottomhline
%\end{tabular}
%\belowtable{} % Table Footnotes
%\end{table}
%
%%% TWO-COLUMN TABLE
%
%%t
%\begin{table*}[t]
%\caption{TEXT}
%\begin{tabular}{column = lcr}
%\tophline
%
%\middlehline
%
%\bottomhline
%\end{tabular}
%\belowtable{} % Table Footnotes
%\end{table*}
%
%
%%% MATHEMATICAL EXPRESSIONS
%
%%% All papers typeset by Copernicus Publications follow the math typesetting regulations
%%% given by the IUPAC Green Book (IUPAC: Quantities, Units and Symbols in Physical Chemistry,
%%% 2nd Edn., Blackwell Science, available at: http://old.iupac.org/publications/books/gbook/green_book_2ed.pdf, 1993).
%%%
%%% Physical quantities/variables are typeset in italic font (t for time, T for Temperature)
%%% Indices which are not defined are typeset in italic font (x, y, z, a, b, c)
%%% Items/objects which are defined are typeset in roman font (Car A, Car B)
%%% Descriptions/specifications which are defined by itself are typeset in roman font (abs, rel, ref, tot, net, ice)
%%% Abbreviations from 2 letters are typeset in roman font (RH, LAI)
%%% Vectors are identified in bold italic font using \vec{x}
%%% Matrices are identified in bold roman font
%%% Multiplication signs are typeset using the LaTeX commands \times (for vector products, grids, and exponential notations) or \cdot
%%% The character * should not be applied as mutliplication sign
%
%
%%% EQUATIONS
%
%%% Single-row equation
%
%\begin{equation}
%
%\end{equation}
%
%%% Multiline equation
%
%\begin{align}
%& 3 + 5 = 8\\
%& 3 + 5 = 8\\
%& 3 + 5 = 8
%\end{align}
%
%
%%% MATRICES
%
%\begin{matrix}
%x & y & z\\
%x & y & z\\
%x & y & z\\
%\end{matrix}
%
%
%%% ALGORITHM
%
%\begin{algorithm}
%\caption{...}
%\label{a1}
%\begin{algorithmic}
%...
%\end{algorithmic}
%\end{algorithm}
%
%
%%% CHEMICAL FORMULAS AND REACTIONS
%
%%% For formulas embedded in the text, please use \chem{}
%
%%% The reaction environment creates labels including the letter R, i.e. (R1), (R2), etc.
%
%\begin{reaction}
%%% \rightarrow should be used for normal (one-way) chemical reactions
%%% \rightleftharpoons should be used for equilibria
%%% \leftrightarrow should be used for resonance structures
%\end{reaction}
%
%
%%% PHYSICAL UNITS
%%%
%%% Please use \unit{} and apply the exponential notation
\end{document}
\chapter{Power series and Taylor series}
Polynomials are very well-behaved functions,
and are studied extensively for that reason.
From an analytic perspective, for example, they are smooth,
and their derivatives are easy to compute.
In this chapter we will study \emph{power series},
which are literally ``infinite polynomials'' $\sum_n a_n x^n$.
Armed with our understanding of series and differentiation,
we will see three great things:
\begin{itemize}
\ii Many of the functions we see in nature
actually \emph{are} given by power series.
Among them are $e^x$, $\log x$, $\sin x$.
\ii Their convergence properties are actually quite well behaved:
from the string of coefficients,
we can figure out which $x$ they converge for.
\ii The derivative of $\sum_n a_n x^n$
is actually just $\sum_n n a_n x^{n-1}$.
\end{itemize}
\section{Motivation}
To get the ball rolling, let's start with
one infinite polynomial you'll recognize:
for any fixed number $-1 < x < 1$ we have the series convergence
\[ \frac{1}{1-x} = 1 + x + x^2 + \dots \]
by the geometric series formula.
Let's pretend we didn't see this already in
\Cref{prob:geometric}.
So, we instead have a smooth function $f \colon (-1,1) \to \RR$ by
\[ f(x) = \frac{1}{1-x}. \]
Suppose we wanted to pretend that it was equal to
an ``infinite polynomial'' near the origin, that is
\[ (1-x)\inv = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + \dots. \]
How could we find that polynomial,
if we didn't already know?
Well, for starters we can first note that by plugging in $x = 0$
we obviously want $a_0 =1$.
We have derivatives, so actually,
we can then differentiate both sides to obtain that
\[ (1-x)^{-2} = a_1 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3 + \dots. \]
If we now set $x = 0$, we get $a_1 = 1$.
In fact, let's keep taking derivatives and see what we get.
\begin{alignat*}{8}
(1-x)^{-1} &{}={}& a_0 &{}+{}& a_1x &{}+{}& a_2x^2 &{}+{}& a_3x^3 &{}+{}& a_4x^4 &{}+{}& a_5x^5 &{}+{}& \dots \\
(1-x)^{-2} &{}={}& && a_1 &{}+{}& 2a_2 x &{}+{}& 3a_3 x^2 &{}+{}& 4a_4 x^3 &{}+{}& 5a_5 x^4 &{}+{}&\dots \\
2(1-x)^{-3} &{}={}& && && 2a_2 &{}+{}& 6a_3 x &{}+{}& 12 a_4 x^2 &{}+{}& 20 a_5 x^3 &{}+{}& \dots \\
6(1-x)^{-4} &{}={}& && &&&& 6a_3 &{}+{}& 24 a_4 x &{}+{}& 60 a_5 x^2 &{}+{}& \dots \\
24(1-x)^{-5} &{}={}& && && && && 24 a_4 &{}+{}& 120 a_5 x &{}+{}& \dots \\
&{}\vdotswithin={}.
\end{alignat*}
If we set $x=0$ we find $1 = a_0 = a_1 = a_2 = \dots$
which is what we expect;
the geometric series $\frac{1}{1-x} = 1 + x + x^2 + \dots$.
And so actually taking derivatives was enough to get the right claim!
\section{Power series}
\prototype{$\frac{1}{1-z} = 1 + z + z^2 + \dots$, which converges on $(-1,1)$.}
Of course this is not rigorous,
since we haven't described what the right-hand side is,
much less show that it can be differentiated term by term.
So we define the main character now.
\begin{definition}
A \vocab{power series} is a sum of the form
\[ \sum_{n = 0}^\infty a_n z^n
= a_0 + a_1 z + a_2 z^2 + \dots \]
where $a_0$, $a_1$, \dots\ are real numbers,
and $z$ is a variable.
\end{definition}
\begin{abuse}
[$0^0=1$]
If you are very careful, you might notice
that when $z=0$ and $n=0$ we find $0^0$ terms appearing.
For this chapter the convention is that
they are all equal to one.
\end{abuse}
Now, if I plug in a \emph{particular} real number $h$,
then I get a series of real numbers $\sum_{n = 0}^{\infty} a_n h^n$.
So I can ask, when does this series converge?
It turns out there is a precise answer for this.
\begin{definition}
Given a power series $\sum_{n=0}^{\infty} a_n z^n$,
the \vocab{radius of convergence} $R$ is defined
by the formula
\[ \frac 1R = \limsup_{n \to \infty} \left\lvert a_n \right\rvert^{1/n}. \]
with the convention that $R = 0$ if the right-hand side is $\infty$,
and $R = \infty$ if the right-hand side is zero.
\end{definition}
\begin{theorem}
[Cauchy-Hadamard theorem]
Let $\sum_{n=0}^{\infty} a_n z^n$
be a power series with radius of convergence $R$.
Let $h$ be a real number, and consider the infinite series
\[ \sum_{n=0}^\infty a_n h^n \]
of real numbers.
Then:
\begin{itemize}
\ii The series converges absolutely if $|h| < R$.
\ii The series diverges if $|h| > R$.
\end{itemize}
\end{theorem}
\begin{proof}
This is not actually hard,
but it won't be essential, so not included.
\end{proof}
\begin{remark}
In the case $|h| = R$, it could go either way.
\end{remark}
\begin{example}
[$\sum z^n$ has radius $1$]
Consider the geometric series $\sum_{n} z^n = 1 + z + z^2 + \dots$.
Since $a_n = 1$ for every $n$, we get $R = 1$,
which is what we expected.
\end{example}
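As one more quick check on the formula, and because it will reappear when $\exp$ is defined later in this chapter, consider $\sum_{n \ge 0} \frac{z^n}{n!}$, so that $a_n = \frac{1}{n!}$.
The only outside fact needed is that $(n!)^{1/n} \to \infty$
(for instance $n! \ge (n/2)^{n/2}$, hence $(n!)^{1/n} \ge \sqrt{n/2}$), and so
\[ \limsup_{n \to \infty} \left\lvert a_n \right\rvert^{1/n}
	= \lim_{n \to \infty} \frac{1}{(n!)^{1/n}} = 0. \]
By the convention in the definition this means $R = \infty$:
the series converges for every real number $h$.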
Therefore, if $\sum_n a_n z^n$ is a power
series with a nonzero radius $R > 0$ of convergence,
then it can \emph{also} be thought of as a function
\[ (-R, R) \to \RR
\quad\text{ by }\quad
h \mapsto \sum_{n \ge 0} a_n h^n. \]
This is great.
Note also that if $R = \infty$,
this means we get a function $\RR \to \RR$.
\begin{abuse}
[Power series vs.\ functions]
There is some subtlety going on with ``types'' of objects again.
Analogies with polynomials can help.
Consider $P(x) = x^3 + 7x + 9$, a polynomial.
You \emph{can}, for any real number $h$,
plug in $P(h)$ to get a real number.
However, in the polynomial \emph{itself},
the symbol $x$ is supposed to be a \emph{variable} ---
which sometimes we will plug in a real number for,
but that happens only after the polynomial is defined.
Despite this, ``the polynomial $p(x) = x^3+7x+9$''
(which can be thought of as the coefficients)
and ``the real-valued function $x \mapsto x^3+7x+9$''
are often used interchangeably.
The same is about to happen with power series:
while they were initially thought of as a sequence of
coefficients, the Cauchy-Hadamard theorem
lets us think of them as functions too,
and thus we blur the distinction between them.
% Pedants will sometimes \emph{define} a polynomial
% to be the sequence of coefficients, say $(9,7,0,1)$,
% with $x^3+7x+9$ being ``intuition only''.
% Similarly they would define a power series
% to be the sequence $(a_0, a_1, \dots)$.
% We will not go quite this far, but they have a point.
%
% I will be careful to use ``power series''
% for the one with a variable,
% and ``infinite series'' for the sums of real numbers from before.
\end{abuse}
\section{Differentiating them}
\prototype{We saw earlier $1+x+x^2+x^3+\dots$ has derivative $1+2x+3x^2+\dots$.}
As promised, differentiation works exactly as you want.
\begin{theorem}
[Differentiation works term by term]
Let $\sum_{n \ge 0} a_n z^n$ be a power series
with radius of convergence $R > 0$,
and consider the corresponding function
\[ f \colon (-R,R) \to \RR \quad\text{by}\quad
f(x) = \sum_{n \ge 0} a_n x^n. \]
Then all the derivatives of $f$ exist and
are given by power series
\begin{align*}
f'(x) &= \sum_{n \ge 1} n a_n x^{n-1} \\
f''(x) &= \sum_{n \ge 2} n(n-1) a_n x^{n-2} \\
&\vdotswithin=
\end{align*}
which also converge for any $x \in (-R, R)$.
In particular, $f$ is smooth.
\end{theorem}
\begin{proof}
Also omitted.
The right way to prove it is to define the notion
``converges uniformly'',
and strengthen Cauchy-Hadamard to have
this as a conclusion as well.
However, we won't use this later.
\end{proof}
\begin{corollary}
[A description of power series coefficients]
Let $\sum_{n \ge 0} a_n z^n$ be a power series
with radius of convergence $R > 0$,
and consider the corresponding function $f(x)$ as above.
Then
\[ a_n = \frac{f^{(n)}(0)}{n!}. \]
\end{corollary}
\begin{proof}
Take the $n$th derivative and plug in $x=0$.
\end{proof}
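As a quick consistency check (nothing later depends on it), this formula recovers the coefficients we guessed in the motivation section: for $f(x) = \frac{1}{1-x}$ one computes
\[ f^{(n)}(x) = \frac{n!}{(1-x)^{n+1}}
	\implies a_n = \frac{f^{(n)}(0)}{n!} = \frac{n!}{n!} = 1 \]
for every $n$, matching the geometric series $1 + x + x^2 + \dots$ we started with.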
\section{Analytic functions}
\prototype{The piecewise $e^{-1/x}$ or $0$ function is not analytic,
but is smooth.}
With all these nice results about power series,
we now have a way to do this process the other way:
suppose that $f \colon U \to \RR$ is a function.
Can we express it as a power series?
Functions for which this \emph{is} true
are called analytic.
\begin{definition}
A function $f \colon U \to \RR$ is \vocab{analytic} at
the point $p \in U$
if there exists an open neighborhood $V$ of $p$ (inside $U$)
and a power series $\sum_n a_n z^n$ such that
\[ f(x) = \sum_{n \ge 0} a_n (x-p)^n \]
for any $x \in V$.
As usual, the whole function is analytic
if it is analytic at each point.
\end{definition}
\begin{ques}
Show that if $f$ is analytic, then it's smooth.
\end{ques}
Moreover, if $f$ is analytic,
then by the corollary above its coefficients are
actually described exactly by
\[ f(x) = \sum_{n \ge 0} \frac{f^{(n)}(p)}{n!} (x-p)^n. \]
Even if $f$ is smooth but not analytic,
we can at least write down the power series;
we give this a name.
\begin{definition}
For smooth $f$,
the power series $\sum_{n \ge 0} \frac{f^{(n)}(p)}{n!} z^n$
is called the \vocab{Taylor series} of $f$ at $p$.
\end{definition}
\begin{example}
[Examples of analytic functions]
\listhack
\label{ex:nonanalytic}
\begin{enumerate}[(a)]
\ii Polynomials, $\sin$, $\cos$, $e^x$, $\log$
all turn out to be analytic.
\ii The smooth function from before defined by
\[ f(x) = \begin{cases}
\exp(-1/x) & x > 0 \\
0 & x \le 0 \\
\end{cases}
\]
is \emph{not} analytic.
Indeed, suppose for contradiction it was.
As all the derivatives are zero,
its Taylor series would be $0 + 0x + 0x^2 + \dots$.
This Taylor series does \emph{converge}, but not to the right value ---
as $f(\eps) > 0$ for any $\eps > 0$, contradiction.
\end{enumerate}
\end{example}
\begin{theorem}
[Analytic iff Taylor series has positive radius]
Let $f \colon U \to \RR$ be a smooth function.
Then $f$ is analytic if and only if for any point $p \in U$,
its Taylor series at $p$ has positive radius of convergence.
\end{theorem}
\begin{example}
It now follows that $f(x) = \sin(x)$ is analytic.
To see that, we can compute
\begin{align*}
f(0) &= \sin 0 = 0 \\
f'(0) &= \cos 0 = 1 \\
f''(0) &= -\sin 0 = 0 \\
f^{(3)}(0) &= -\cos 0 = -1 \\
f^{(4)}(0) &= \sin 0 = 0 \\
f^{(5)}(0) &= \cos 0 = 1 \\
f^{(6)}(0) &= -\sin 0 = 0 \\
&\vdotswithin=
\end{align*}
and so by continuing the pattern
(which repeats every four) we find the Taylor series is
\[ z - \frac{z^3}{3!} + \frac{z^5}{5!} - \frac{z^7}{7!} + \dots \]
which is seen to have radius of convergence $R = \infty$.
\end{example}
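For the record, the identical computation with $\cos$ (whose derivatives cycle through $-\sin$, $-\cos$, $\sin$, $\cos$) gives the Taylor series
\[ 1 - \frac{z^2}{2!} + \frac{z^4}{4!} - \frac{z^6}{6!} + \dots \]
again with radius of convergence $R = \infty$;
this may be handy for the problems at the end of the chapter.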
Like with differentiable functions:
\begin{proposition}
[All your usual closure properties for analytic functions]
The sums, products, compositions, nonzero quotients
of analytic functions are analytic.
\end{proposition}
The upshot of this is that most of your usual
functions that occur in nature,
or even artificial ones like $f(x) = e^x + x \sin(x^2)$,
will be analytic, hence describable locally by Taylor series.
\section{A definition of Euler's constant and exponentiation}
We can actually give a definition of $e^x$ using the tools we have now.
\begin{definition}
We define the map $\exp \colon \RR \to \RR$ by
using the following power series,
which has infinite radius of convergence:
\[ \exp(x) = \sum_{n \ge 0} \frac{x^n}{n!}. \]
We then define Euler's constant as $e = \exp(1)$.
\end{definition}
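Concretely (and purely as a decimal approximation that nothing below relies on), the first several terms of the series already pin down the familiar value
\[ e = \sum_{n \ge 0} \frac{1}{n!}
	= 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \dots \approx 2.71828. \]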
\begin{ques}
Show that under this definition, $\exp' = \exp$.
\end{ques}
We are then settled with:
\begin{proposition}
[$\exp$ is multiplicative]
Under this definition,
\[ \exp(x+y) = \exp(x) \exp(y). \]
\end{proposition}
\begin{proof}
[Idea of proof.]
There is some subtlety here with switching
the order of summation that we won't address.
Modulo that:
\begin{align*}
\exp(x) \exp(y)
&= \sum_{n \ge 0} \frac{x^n}{n!}
\sum_{m \ge 0} \frac{y^m}{m!}
= \sum_{n \ge 0} \sum_{m \ge 0}
\frac{x^n}{n!} \frac{y^m}{m!} \\
&= \sum_{k \ge 0} \sum_{\substack{m+n = k \\ m,n \ge 0}}
\frac{x^n y^m}{n! m!}
= \sum_{k \ge 0} \sum_{\substack{m+n = k \\ m,n \ge 0}}
\binom{k}{n} \frac{x^n y^m}{k!} \\
&= \sum_{k \ge 0} \frac{(x+y)^k}{k!} = \exp(x+y). \qedhere
\end{align*}
\end{proof}
\begin{corollary}
[$\exp$ is positive]
\listhack
\begin{enumerate}[(a)]
\ii We have $\exp(x) > 0$ for any real number $x$.
\ii The function $\exp$ is strictly increasing.
\end{enumerate}
\end{corollary}
\begin{proof}
First \[ \exp(x) = \exp(x/2)^2 \ge 0 \]
which shows $\exp$ is nonnegative.
Also, $1 = \exp(0) = \exp(x) \exp(-x)$ implies $\exp(x) \ne 0$
for any $x$, proving (a).
(b) is just since $\exp'$ is strictly positive (racetrack principle).
\end{proof}
The log function then comes after.
\begin{definition}
We may define $\log \colon \RR_{>0} \to \RR$
to be the inverse function of $\exp$.
\end{definition}
Since its derivative is $1/x$ it is smooth;
and then one may compute its coefficients to show it is analytic.
Note that this actually gives us a rigorous way to define
$a^r$ for any $a > 0$ and $r > 0$, namely
\[ a^r \defeq \exp\left( r \log a \right). \]
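As one small sanity check that this definition deserves the name (again, not needed later), the usual exponent law follows directly from the multiplicativity of $\exp$: for $a > 0$ and $r, s > 0$,
\[ a^{r+s} = \exp\left( (r+s) \log a \right)
	= \exp\left( r \log a \right) \exp\left( s \log a \right)
	= a^r a^s. \]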
\section{This all works over complex numbers as well,
except also complex analysis is heaven}
We now mention that every theorem we referred to above
holds equally well if we work over $\CC$,
with essentially no modifications.
\begin{itemize}
\ii Power series are defined by $\sum_n a_n z^n$ with $a_n \in \CC$,
rather than $a_n \in \RR$.
\ii The definition of radius of convergence $R$ is unchanged!
The series will converge if $|z| < R$.
\ii Differentiation still works great.
(The definition of the derivative is unchanged.)
\ii Analytic still works great for functions
$f \colon U \to \CC$, with $U \subseteq \CC$ open.
\end{itemize}
In particular, we can now even define complex exponentials,
giving us a function \[ \exp \colon \CC \to \CC \]
since the power series still has $R = \infty$.
More generally if $a > 0$ and $z \in \CC$
we may still define \[ a^z \defeq \exp(z \log a). \]
(We still require the base $a$ to be a positive real
so that $\log a$ is defined, though.
So this $i^i$ issue is still there.)
However, if one tries to study calculus for complex functions
as we did for the real case,
in addition to most results carrying over,
we run into a huge surprise:
\begin{quote}
\itshape
If $f \colon \CC \to \CC$ is differentiable,
it is analytic.
\end{quote}
And this is just the beginning of the nearly unbelievable
results that hold for complex analytic functions.
But this is the part on real analysis,
so you will have to read about this later!
\section{\problemhead}
\begin{problem}
Find the Taylor series of $\log(1-x)$.
\end{problem}
\begin{dproblem}
[Euler formula]
Show that \[ \exp(i \theta) = \cos \theta + i \sin \theta \]
for any real number $\theta$.
\begin{hint}
Because you know all derivatives of $\sin$ and $\cos$,
you can compute their Taylor series,
which converge everywhere on $\RR$.
At the same time, $\exp$ was defined as a Taylor series,
so you can also compute it.
Write them all out and compare.
\end{hint}
\end{dproblem}
\begin{dproblem}
[Taylor's theorem, Lagrange form]
Let $f \colon [a,b] \to \RR$ be continuous
and $n+1$ times differentiable on $(a,b)$.
Define
\[ P_n = \sum_{k=0}^n \frac{f^{(k)}(a)}{k!} \cdot (b-a)^k. \]
Prove that there exists $\xi \in (a,b)$ such that
\[ f^{(n+1)}(\xi) = (n+1)! \cdot \frac{f(b) - P_n}{(b-a)^{n+1}}. \]
This generalizes the mean value theorem
(which is the special case $n = 0$, where $P_0 = f(a)$).
\begin{hint}
Use repeated Rolle's theorem.
You don't need any of the theory in this chapter to solve this,
so it could have been stated much earlier;
but then it would be quite unmotivated.
\end{hint}
\end{dproblem}
\begin{problem}
[Putnam 2018 A5]
\yod
Let $f \colon \RR \to \RR$ be smooth,
and assume that $f(0) = 0$, $f(1) = 1$, and $f(x) \ge 0$
for every real number $x$.
Prove that $f^{(n)}(x) < 0$ for some positive integer $n$
and real number $x$.
\begin{hint}
Use Taylor's theorem.
\end{hint}
\end{problem}
| {
"alphanum_fraction": 0.6628679962,
"avg_line_length": 34.1883116883,
"ext": "tex",
"hexsha": "3b4aca2a881db28108c3e563e9999af2dcc530a4",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e",
"max_forks_repo_licenses": [
"Apache-2.0",
"MIT"
],
"max_forks_repo_name": "aDotInTheVoid/ltxmk",
"max_forks_repo_path": "corpus/napkin/tex/calculus/taylor.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0",
"MIT"
],
"max_issues_repo_name": "aDotInTheVoid/ltxmk",
"max_issues_repo_path": "corpus/napkin/tex/calculus/taylor.tex",
"max_line_length": 114,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e",
"max_stars_repo_licenses": [
"Apache-2.0",
"MIT"
],
"max_stars_repo_name": "aDotInTheVoid/ltxmk",
"max_stars_repo_path": "corpus/napkin/tex/calculus/taylor.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5400,
"size": 15795
} |
\section*{Introduction}
This template was made for the course introduction to data science \cite{IDS2017}.
In this section we explain some of the conventions which should be followed for answering theory questions, and how you should organize the report when explaining the experimental part of the
assignment. Here are some general rules for the report you have to hand in:
\begin{itemize}
\item You are expected to \textbf{read and follow} instructions via Nestor and Github. Adhere to the rules and conventions of this report template.
\item The answers to the assignment questions should have the same number codes as the questions being answered. Students are requested not to change the number codes. For example, if one is writing the
answer of question 2.1 sub part (a), the answer should have the tag 2.1(a). If the answer to the question 2.1(a) is tagged as II.(i)(a) or 2.1.A or (ii).I.(a) or anything which is different
from the convention followed in numbering the assignment questions, the answers might be missed and not graded.
\item Do not exceed the page limit of 5 pages, single column.
\item Do not change the font size and page font.
\item Do not change the page margins.
\item Provide snippets of codes in the main report only when \textbf{explicitly} asked for it. Else provide code in Appendices only if you really want to share.
\item We ask for the code to be submitted separately anyway. We will run this code, and only if it produces the same figures as those in your report will your report be validated.
\end{itemize}
The following part guides you on how you should organize the report when answering the practical parts of the assignment, where you have to perform experiments.
\begin{enumerate}
\item Start with the correct number and letter tag of the question of the assignment.
\item Provide a \textbf{Motivation} for the experiments [\textit{Had to do it for IDS assignment} or \textit{because the question asked me/us to do so} will not be accepted as motivation].
\item Mention the problem statement, where you point out exactly which sub-sets of the issues associated with the \textit{Motivation} you are trying to investigate, and why.
\item Give a brief overview of the available resources and tools which have in the past and recent past been applied to find solutions to the problems you are trying to handle.
\item Explain how your experiment report is organized, i.e., you mention that:
\begin{itemize}
\item the first part is the \textbf{Introduction} to the topic, which contains the motivation, problem statement and literature review briefly.
\item the second part is the \textbf{Methodology} where you have explained briefly the \textit{n} techniques you have used in the concerned experiment.
\item the third part is the \textbf{Results} where you have put your figures, tables, graphs, etc.
\item the fourth part is \textbf{Discussion} where you have discussed the results.
\item the report ends with a \textbf{Conclusion} where you have drawn conclusions about your findings.
\end{itemize}
\end{enumerate} | {
"alphanum_fraction": 0.7851827887,
"avg_line_length": 96.59375,
"ext": "tex",
"hexsha": "25d281f792ced0e6047fc0db17bbc84fcc04d2a5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fb68ad0f3b29bf378915ff644accbfb3983748b2",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "fudanwqd/cskg_counterfactual_generation-main",
"max_forks_repo_path": "paper/0introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fb68ad0f3b29bf378915ff644accbfb3983748b2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "fudanwqd/cskg_counterfactual_generation-main",
"max_issues_repo_path": "paper/0introduction.tex",
"max_line_length": 195,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "fb68ad0f3b29bf378915ff644accbfb3983748b2",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "fudanwqd/cskg_counterfactual_generation-main",
"max_stars_repo_path": "paper/0introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 700,
"size": 3091
} |
\section{Summary}
Since the library is meant as an open-source project, the whole source code is available on \url{https://github.com/Ivellien/pgQuery4s/} as a public repository. The project is separated into multiple submodules.
\begin{description}[font=$\bullet$~\normalfont\scshape\color{black}\\]
\item [Native] \hfill \newline
This module contains everything related to native code. There is the \texttt{PgQueryWrapper} class, which implements a single method, \texttt{pgQueryParse}, with the \texttt{@native} annotation. Then there is the \textit{libpg\_query} library itself, and the JNI implementation of the native method, which directly calls the C library.
\item [Parser] \hfill \newline
Possibly the most important part of the library. The Parser submodule contains the whole existing case class structure representation of the parse tree in the \textit{node} and \textit{enums} packages. The \texttt{PgQueryParser} object then defines part of our usable public API. Each one of these methods takes a string representing an SQL query or expression:
\begin{itemize}
\item \texttt{json} - Returns JSON representation of the passed query as received from \texttt{libpg\_query}
\item \texttt{prettify} - Creates the \texttt{Node} representation of the passed SQL query and then deparses it back to string again.
\item \texttt{parse} - Attempts to parse the whole SQL query. Returns the result as \texttt{PgQueryResult[Node]}.
\item \texttt{parseExpression} - Same as \texttt{parse}, but instead of parsing the whole input as a query it prepends \textit{"SELECT "} to the expression. The expression is then extracted from the parse tree using pattern matching. Returns the result as \texttt{PgQueryResult[ResTarget]}.
\end{itemize}
\item [Macros] \hfill \newline
The macros submodule is further split into two other subprojects - \textit{liftable} and \textit{macros}. The macros were one of the reasons for splitting up the project, because a macro always has to be compiled before it can be used elsewhere.
The macros subproject currently contains macro implementations for parsing queries, expressions and for implicit conversion from \texttt{String} to \texttt{ResTarget}.
The \textit{liftable} subproject contains generators of \texttt{Liftable} objects, as explained in section 4.5.2.
\item [Core] \hfill \newline
The core uses the macros package and contains definitions of the custom interpolators for queries, expressions, and implicit conversions.
\end{description}
So far, we have a library, which can validate queries using a C library called \textit{libpg\_query}. To connect our Scala code with the native code, we are using sbt-jni plugins. The JSON containing the parse tree representation is then parsed to our custom case class structure using a functional library for working with JSON, \textit{circe}. Then we implemented our interpolators, one for expressions and another one for queries. To achieve compile-time validation, we used macros, where we are working with abstract syntax trees of the program itself. The final query is then type-checked and throws compilation errors whenever the types don't match.
\section{Future work}
The library can be, for now, considered a prototype. It covers the majority of generally used SQL keywords and queries. However, the list of SQL keywords is long, and together with all the possible combinations, it leaves room for improvement. The library can be further expanded to eventually cover the whole SQL node structure.
At the end of May 2021, the newest version of \texttt{libpg\_query} was also released. It contains plenty of changes: support for PostgreSQL 13, changes to the JSON output format, a new \texttt{Protobuf} parse tree output format, added deparsing functionality from the parse tree back to SQL, and more \cite{libpgquery13}.
| {
"alphanum_fraction": 0.7941793393,
"avg_line_length": 115.5757575758,
"ext": "tex",
"hexsha": "1fe5ac28027b857313e7e0a88c5ffd076fe33f31",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-07-28T13:38:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-07-28T13:38:18.000Z",
"max_forks_repo_head_hexsha": "36a1fe26aac0a1767b439d4568165fc1d2c19eae",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "coollib-dev/pgquery4s",
"max_forks_repo_path": "thesis/03_implementation/09_summary.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "36a1fe26aac0a1767b439d4568165fc1d2c19eae",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "coollib-dev/pgquery4s",
"max_issues_repo_path": "thesis/03_implementation/09_summary.tex",
"max_line_length": 655,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "36a1fe26aac0a1767b439d4568165fc1d2c19eae",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "coollib-dev/pgquery4s",
"max_stars_repo_path": "thesis/03_implementation/09_summary.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-28T07:52:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-01-28T07:52:07.000Z",
"num_tokens": 883,
"size": 3814
} |
\clearpage
\subsection{C Terminal Input} % (fold)
\label{sub:c_terminal_input}
C comes with a range of \nameref{sub:library}s that provide reusable programming artefacts, including reusable \nameref{sub:function} and \nameref{sub:procedure}s. The \texttt{stdio.h} header refers to the Standard Input/Output library, and includes code to read input from the Terminal. The \texttt{scanf} function is used to read data from the Terminal.
\begin{table}[h]
\centering
\begin{tabular}{|c|p{9cm}|}
\hline
\multicolumn{2}{|c|}{\textbf{Function Prototype}} \\
\hline
\multicolumn{2}{|c|}{} \\
\multicolumn{2}{|c|}{\texttt{int scanf(char *format, \ldots )}} \\
\multicolumn{2}{|c|}{} \\
\hline
\multicolumn{2}{|c|}{\textbf{Returns}} \\
\hline
\texttt{int} & The number of values read by \texttt{scanf}. \\
\hline
\textbf{Parameter} & \textbf{Description} \\
\hline
\texttt{ format } & The format specifier describing what is to be read from the Terminal. See \tref{tbl:format specifiers}. \\
& \\
\texttt{\ldots} & The variables into which the values will be read. There must be at least as many variables as format tags in the format specifier. \\
\hline
\end{tabular}
\caption{Parameters that must be passed to \texttt{scanf}}
\label{tbl:scanf parameters}
\end{table}
The \texttt{scanf} function is controlled by the \texttt{format} parameter. This parameter tells \texttt{scanf} what it must read from the input. Details of how to construct the \texttt{format} String are shown in \tref{tbl:format specifiers}.
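Because \texttt{scanf} returns the number of values it successfully read (see \tref{tbl:scanf parameters}), that return value can be used to detect invalid input. The following is a minimal sketch of the idea, assuming a standard C environment; the prompt text and the \texttt{age} variable are illustrative only, and a fuller worked example of reading data appears later in this section.

\begin{verbatim}
#include <stdio.h>

int main(void)
{
    int age;

    printf("Enter your age: ");

    /* scanf returns the number of values successfully read,
       so anything other than 1 means no integer was read */
    if ( scanf(" %d", &age) == 1 )
    {
        printf("You entered %d\n", age);
    }
    else
    {
        printf("That was not a whole number.\n");
    }

    return 0;
}
\end{verbatim}

The leading space in the format string skips any white space, such as a leftover newline character, before the number is read, matching the first row of \tref{tbl:format specifiers}.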
\begin{table}[htbp]
\begin{minipage}{\textwidth}
\centering
\begin{tabular}{|c|p{8cm}|l|}
\hline
\textbf{} & \textbf{Description} & \textbf{Example Usage} \\
\hline
\emph{white space} & Skips white space at this point in the input. & \csnipet{scanf("{ } \%d", \&age);} \\
\hline
\emph{non white space}\footnote{Except for the percent character which is used in the Format Tag.} & Matches the input against characters, skipping this text in the input. Fails if input does not match. The example looks for `age: ' and reads the following integer.& \csnipet{scanf("age: \%d", \&age);} \\
\hline
\emph{format tag} & The tag indicates the kind of data to read and store in a Variable. This starts with a percent character. See \tref{tbl:scanf format tag} and \fref{csynt:scanf-format-string} for the syntax of the format tags. & \csnipet{scanf("\%d", \&age);}\\
\hline
\end{tabular}
\caption{Elements of the Format String in \texttt{scanf}}
\label{tbl:format specifiers}
\end{minipage}
\end{table}
\csyntax{csynt:scanf-format-string}{Format Tags for \texttt{scanf}}{storing-using-data/scanf-format-tags}
\csection{\ccode{clst:scanf}{Example of reading data using \texttt{scanf}.}{code/c/storing-using-data/test-scanf.c}}
\begin{table}[p]
\begin{minipage}{\textwidth}
\centering
\begin{tabular}{|c|p{7cm}|l|}
\hline
\textbf{} & \textbf{Description} & \textbf{Example Usage} \\
\hline
\texttt{*} & Read the data, but ignore it. Does not store the value in a Variable. & \csnipet{scanf("\%*d");} \\
\hline
\multicolumn{3}{c}{} \\
\hline
\textbf{Width} & \textbf{Description} & \textbf{Example Usage} \\
\hline
\emph{number} & The maximum number of characters to read in the current operation. & \csnipet{scanf("\%3d", \&age);}\\
\hline
\multicolumn{3}{c}{} \\
\hline
\textbf{Modifier} & \textbf{Description} & \textbf{Example Usage} \\
\hline
\texttt{h} & Reads a \texttt{short int} for the \texttt{d} or \texttt{i} Types. & \csnipet{scanf("\%hi", \&age);}\\
\hline
\texttt{l} & Reads a \texttt{long int} for the \texttt{d} or \texttt{i} Types, or a \texttt{double} for \texttt{f}. & \csnipet{scanf("\%lf \%li", \&height, \&count);} \\
\hline
\texttt{L} & Reads a \texttt{long double} for \texttt{f}. & \csnipet{scanf("\%Lf", \&range);} \\
\hline
\multicolumn{3}{c}{} \\
\hline
\textbf{Type} & \textbf{Data Read} & \textbf{Example Usage} \\
\hline
\texttt{c} & A single character. & \csnipet{scanf("\%c", \&ch);} \\
\hline
\texttt{d} or \texttt{i} & Decimal integer. This is able to read a starting + or - if present. & \csnipet{scanf("\%d", \&height);} \\
\hline
\texttt{f} & Decimal floating point number. Can be signed, or in scientific notation. & \csnipet{scanf("\%f", \&radius);} \\
\hline
\texttt{s} & Text data. Should be preceded by the number of characters to read. The c-string must have sufficient space to store the data read\footnote{This will be covered in future chapters.}. & \csnipet{scanf("\%40s", name);} \\
\hline
\texttt{[\emph{pattern}]} & Text data. As with \texttt{\%s}, but this allows you to specify the pattern of characters that can be read. & \csnipet{scanf("\%7[1234567890]", num_text);} \\
\hline
\texttt{[\emph{{\textasciicircum}pattern}]} & Text data. As with \texttt{\%s}, but this allows you to specify the pattern of characters that can \textbf{not} be read. & \texttt{\small scanf("\%40[\textasciicircum \textbackslash n]", name);} \\
\hline
\end{tabular}
\end{minipage}
\caption{Details for \texttt{scanf}'s Format Tag type, specifiers, modifiers, and width}
\label{tbl:scanf format tag}
\end{table}
% subsection c_terminal_input (end) | {
"alphanum_fraction": 0.6654288897,
"avg_line_length": 49.8703703704,
"ext": "tex",
"hexsha": "bec1150aa43a78915c51f8f7dc524ad2c141f285",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2022-03-24T07:42:53.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-06-02T03:18:37.000Z",
"max_forks_repo_head_hexsha": "8f3040983d420129f90bcc4bd69a96d8743c412c",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "macite/programming-arcana",
"max_forks_repo_path": "topics/storing-using-data/c/c-input.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07",
"max_issues_repo_issues_event_max_datetime": "2021-12-29T19:45:10.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-12-29T19:45:10.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "thoth-tech/programming-arcana",
"max_issues_repo_path": "topics/storing-using-data/c/c-input.tex",
"max_line_length": 349,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "thoth-tech/programming-arcana",
"max_stars_repo_path": "topics/storing-using-data/c/c-input.tex",
"max_stars_repo_stars_event_max_datetime": "2021-08-10T04:50:54.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-08-10T04:50:54.000Z",
"num_tokens": 1758,
"size": 5386
} |
\chapter{A Random Chapter Name}
\lipsum
| {
"alphanum_fraction": 0.775,
"avg_line_length": 13.3333333333,
"ext": "tex",
"hexsha": "7ca7b2bd8cb3149af767855c4a3ef001254c3dd2",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2019-01-12T10:16:04.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-04-30T12:14:41.000Z",
"max_forks_repo_head_hexsha": "fdd9aed7715029c3f1968b933c17d445d282cfb1",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alpenwasser/latex-sandbox",
"max_forks_repo_path": "cli-args/chapters/some-random-chapter-name.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "fdd9aed7715029c3f1968b933c17d445d282cfb1",
"max_issues_repo_issues_event_max_datetime": "2019-11-14T15:58:26.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-03-14T22:58:44.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alpenwasser/TeX",
"max_issues_repo_path": "cli-args/chapters/some-random-chapter-name.tex",
"max_line_length": 31,
"max_stars_count": 12,
"max_stars_repo_head_hexsha": "fdd9aed7715029c3f1968b933c17d445d282cfb1",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alpenwasser/latex-sandbox",
"max_stars_repo_path": "cli-args/chapters/some-random-chapter-name.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-23T17:11:45.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-03-29T22:59:20.000Z",
"num_tokens": 12,
"size": 40
} |
% LaTeX resume using res.cls
\documentclass[margin]{res}
%\usepackage{helvetica} % uses helvetica postscript font (download helvetica.sty)
%\usepackage{newcent} % uses new century schoolbook postscript font
\setlength{\textwidth}{5.1in} % set width of text portion
\usepackage{hyperref}
\usepackage[T1]{fontenc}
\usepackage{hhline}
\hypersetup{
colorlinks=true,
linkcolor=blue,
filecolor=magenta,
urlcolor=cyan,
}
\renewcommand{\thepage}{\roman{page}}
\begin{document}
% Center the name over the entire width of resume:
\moveleft.5\hoffset\centerline{\large\textbf{\textsc{Katie Chamberlain}} \footnotesize{she/her}}
% Draw a horizontal line the whole width of resume:
\vspace{-.3\baselineskip}
\moveleft\hoffset\vbox{\hrule width\resumewidth height .8pt}
% \moveleft1\hoffset\vbox{\leaders\vrule width \resumewidth\vskip.7pt} % or other desired thickness
%\vskip\no % ditto
\vspace{-.8\baselineskip}
\moveleft\hoffset\vbox{\hrule width\resumewidth height .4pt} % or other desired thickness
\vskip\smallskipamount
% \moveleft\hoffset\vbox{\hrule width\resumewidth height .4pt}\smallskip
% address begins here
% Again, the address lines must be centered over entire width of resume:
\moveleft.5\hoffset\centerline{933 N. Cherry Avenue, Tucson, AZ 85721}
% \moveleft.5\hoffset\centerline{(406) 465-5814}
\moveleft.5\hoffset\centerline{{\footnotesize{email:}} [email protected]}
\moveleft.5\hoffset\centerline{{\footnotesize{website:}} \href{http://katiechambe.github.io}{katiechambe.github.io}}
\begin{resume}
\section{\textsc{Education}}
{\sl Ph.D., Astronomy and Astrophysics} \hfill May 2023 (Expected)\\
{\sl M.S., Astronomy and Astrophysics} \hfill May 2021 (Expected)\\
Steward Observatory, University of Arizona, Tucson, AZ\\
Advisor: Gurtina Besla
%Cumulative GPA: -
{\sl B.S., Physics} \hfill May 2018 \\
\textit{Secondary Major: Mathematics}\\
Montana State University, Bozeman, MT\\
Advisor: Nicolas Yunes
%Cumulative GPA: 3.85\\
%Major GPA: 3.91
%Reed College, Portland, OR, B.A., Physics, May 2012\\
% Montana State University, Bozeman, MT, M.S., Physics, May 2014\\
% Montana State University, Bozeman, MT, Ph.D., Physics, Degree in progress
\bigskip
\section{\textsc{Research Interests}}
Galaxy dynamics
\begin{itemize}
\item[-] Frequency of isolated pairs and satellites
\item[-] Orbital and internal dynamics of interacting pairs
\item[-] Cosmological simulations
\item[-] Local Group dynamics
\end{itemize}
\bigskip
\section{\textsc{Publications}}
\emph{Frequency-domain waveform approximants capturing Doppler shifts}\\
\textbf{K. Chamberlain}, C. Moore, D. Gerosa, N. Yunes\\
Phys. Rev. D \textbf{99}, 024025 (2019)
\emph{Theoretical Physics Implications of Gravitational Wave Observation
with Future\\ Detectors}\\
\textbf{K. Chamberlain}, N. Yunes\\
Phys. Rev. D \textbf{96}, 084039 (2017)
\emph{Theory-Agnostic Constraints on Black-Hole Dipole Radiation with
Multi-Band Gravitational-Wave Astrophysics}\\
E. Barausse, N. Yunes, \textbf{K. Chamberlain} \\
Physical Review Letters \textbf{116}, 241104 (2016)
\smallskip
\section{\textsc{Research Experience}}
{\sl Galaxy Dynamics Summer Workshop} \\
Center for Computational Astrophysics, Flatiron Institute, NY \hfill 2021\\
Advisor: Adrian Price-Whelan
{\sl Graduate Research Assistant} \\
Steward Observatory, University of Arizona, AZ \hfill 2018 - Present\\
Advisor: Gurtina Besla
{\sl LIGO Summer Undergraduate Research Fellowship (REU)} \\
California Institute of Technology, CA\hfill Summer 2017
{\sl Undergraduate Research Assistant} \\
eXtreme Gravity Institute - Montana State University, MT \hfill 2015 - 2018\\
Advisor: Nicolas Yunes
{\sl Research Apprentice} \\
Montana Space Grant Consortium, MT \hfill 2016 - 2017
\medskip
\section{\textsc{Computational Skills}}
\emph{Proficient:} Python, Mathematica, LaTeX, HPC, HTML, git, emacs. \\
\emph{Some experience:} Java, MATLAB, bash, vi. \\
\bigskip
\section{\textsc{Honors,\\ Awards \& Grants}}
{\textsc{Montana State University}}\\
{\sl Undergraduate Scholars Program Grant}\hfill 2015 - 2018\\
{\sl Research Travel Grants} \hfill 2017, 2018\\
{\sl Montana University System Honors Scholarship}\hfill 2013 - 2017
{\textsc{Montana State University Physics Department}}\\
{\sl Outstanding Graduating Senior in Physics}\hfill 2018\\
{\sl Outstanding Undergraduate Physics Researcher Award} \hfill 2017 - 2018 \\
{\sl Georgeanne Caughlan Scholarship for Women in Physics} \hfill 2017 - 2018 \\
{\sl John and Marilyn (Milburn) Asbridge Family Physics Scholarship}\hfill 2015 - 2017
{\textsc{Montana State University Mathematics Department}}\\
{\sl Outstanding Graduating Senior in Math}\hfill 2018\\
{\sl John L Margaret Mathematics and Science Scholarship} \hfill 2017 - 2018 \\
{\sl Mathematics Department Scholarship for Excellence in Coursework}\hfill 2014 - 2015
{\textsc{Other}}\\
American Physical Society Division of Gravitation Travel Award \hfill 2018
\bigskip
\section{\textsc{Talks$^{\dagger}$ \& Posters$^{*}$}}
\emph{${}^{\dagger}$Frequency of dwarf galaxy pairs throughout cosmic time}. \\
TiNy Titans Collaboration.\\
Virtual, Sep 2021
\emph{${}^{\dagger}$LMC's impact on the inferred
Local Group mass}. \\
Galaxy Dynamics Workshop, CCA Flatiron Institute.\\
New York, NY, July 2021
\emph{${}^{*}$Frequency of dwarf galaxy pairs throughout cosmic time}. \\
Division of Dynamical Astronomy, AAS.\\
Virtual, May 2021
\emph{${}^{*}$Frequency of dwarf galaxy pairs throughout cosmic time}. \\
Local Group Symposium, Space Telescope Science Institute.\\
Virtual, September 2020
\emph{${}^{*}$Frequency of dwarf galaxy pairs throughout cosmic time}. \\
Small Galaxies, Cosmic Questions Meeting.\\
Durham, UK, July 2019
\emph{${}^{\dagger}$Towards a ``Kicked'' Frequency-Domain Waveform Approximant}. \\
APS April Meeting.\\
Columbus, Ohio, April 2018
\emph{${}^{\dagger}$Testing Modified Gravity with Future Gravitational Wave Detectors}. \\
Pacific Coast Gravity Meeting.\\
California Institute of Technology, March 2018
\emph{${}^{\dagger}$Measuring Black Hole Kicks with Future Gravitational Wave Detectors}. \\
LIGO-Caltech Summer Research Celebration.\\
California Institute of Technology, August 2017
\emph{$^{*}$Theoretical Physics Implications with Future Gravitational Wave Detectors}.\\
Poster Presentation at Montana Space Grant Consortium Research Celebration. \\
Bozeman, MT, May 2017
\emph{$^{*}$Theoretical Physics Implications with Future Gravitational Wave Detectors}. \\
Poster Presentation at Undergraduate Research Celebration.\\
Montana State University, April 2017
\emph{${}^{\dagger}$Gravitational Wave Tests of General Relativity with Future Detectors}.\\
APS April Meeting. \\
Washington D.C., January 2017
\emph{${}^{\dagger}$Constraints on Modified Gravity with Future Gravitational Wave Detectors}. \\
Relativity and Astrophysics Seminar. \\
Montana State University, October 2016
\emph{$^{*}$Theoretical Physics with Multi-Band Gravitational Wave Astrophysics}. \\
Poster Presentation at Undergraduate Research Celebration\\ Montana State University, April 2016
\bigskip
\section{\textsc{Press}}
{\sl ``Looking for nothing to test gravity''} \hfill 2018\\
Interview with Symmetry Magazine
\section{\textsc{Outreach \& Service}}
{\sl Member }\hfill 2019 - Present\\
Steward Graduate Student Council\\
University of Arizona\\
{\footnotesize Acting liaison between faculty and graduate students, responsible for graduate student town halls to discuss the state of the graduate program.}
{\sl Volunteer }\hfill 2019, 2020\\
Warrior-Scholar Project\\
University of Arizona\\
{\footnotesize Research project leader - Taught students an introduction to programming in Python, led a Jupyter Notebook-based exoplanet research project, and provided tutoring help during students' homework sessions.}
{\sl Founder}\hfill 2016 - 2018\\
Society of Physics Students \emph{Coding Nights}\\
Montana State University\\
{\footnotesize Teaching basic coding courses in Mathematica and LaTeX to early undergraduate physics students.}
{\sl Society of Physics Students} \hfill 2013 - 2018\\
Montana State University
\begin{itemize}
\item[-] President \hfill 2016 - 2017
\item[-]Vice President \hfill 2015 - 2016
\item[-]Student Representative \hfill 2017 - 2018
\end{itemize}
\bigskip
\section{\textsc{Memberships}}
\begin{itemize}
\item[-] TiNy Titans (TNT) Collaboration \hfill 2018 - Present
\item[-] American Astronomical Society \hfill 2018 - Present
\item[-] LIGO Scientific Collaboration \hfill 2017
\item[-] American Physical Society \hfill 2016 - 2019
\item[-] eXtreme Gravity Institute at Montana State University \hfill 2015 - 2018
\end{itemize}
\bigskip
%\section{\textsc{References}}
%\textbf{Nicolas Yunes} (Advisor) \hfill \textbf{Neil Cornish}\\
%\emph{Associate Professor of Physics} \hfill \emph{Professor of Physics}\\
%Montana State University \hfill Montana State University\\
%203 Barnard Hall \hfill 261A Barnard Hall\\
%Bozeman, MT 59717 \hfill Bozeman, MT 59717\\
%1 (406) 994 3614 \hfill 1 (406) 994 7986 \\
%[email protected] \hfill [email protected]
%
%
%\textbf{Carla Riedel}\\
%\emph{Teaching Professor of Physics}\\
%Montana State University\\
%211 Barnard Hall\\
%Bozeman, MT 59717
%1 (406) 994 6178\\
%[email protected]
\end{resume}
\end{document}
| {
"alphanum_fraction": 0.7463615271,
"avg_line_length": 35.1185185185,
"ext": "tex",
"hexsha": "754c6ba0a569b360ab27cd0d291d479e72d07573",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a6e74972817c3471c8243c5f396a24576ae07b5c",
"max_forks_repo_licenses": [
"CC-BY-3.0"
],
"max_forks_repo_name": "katiechambe/katiechambe.github.io",
"max_forks_repo_path": "cv/CV.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a6e74972817c3471c8243c5f396a24576ae07b5c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-3.0"
],
"max_issues_repo_name": "katiechambe/katiechambe.github.io",
"max_issues_repo_path": "cv/CV.tex",
"max_line_length": 219,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a6e74972817c3471c8243c5f396a24576ae07b5c",
"max_stars_repo_licenses": [
"CC-BY-3.0"
],
"max_stars_repo_name": "katiechambe/katiechambe.github.io",
"max_stars_repo_path": "cv/CV.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2751,
"size": 9482
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\title{PS9 Yarberry}
\author{Megan N. Yarberry }
\date{March 31, 2020}
\begin{document}
\maketitle
\section{Question 5}
What is the dimension of your training data (housing.train)?
\begin{itemize}
\item 404 objects by 648 variables
\end{itemize}
\section{Question 6 - LASSO Model}
What is the optimal value of λ?
\begin{itemize}
\item 7120.064
\end{itemize}
What is the in-sample RMSE?
\begin{itemize}
\item 0.1740835
\end{itemize}
What is the out-of-sample RMSE (i.e. the RMSE in the test data)?
\begin{itemize}
\item 0.1692024
\end{itemize}
\section{Question 7 - Ridge Regression Model}
What is the optimal value of λ?
\begin{itemize}
\item 7120.064
\end{itemize}
What is the in-sample RMSE?
\begin{itemize}
\item 0.1545797
\end{itemize}
What is the out-of-sample RMSE (i.e. the RMSE in the test data)?
\begin{itemize}
\item 0.1548843
\end{itemize}
\section{Question 8 - Elastic Net Model}
What is the optimal value of λ?
\begin{itemize}
\item 0.192089
\end{itemize}
What is the in-sample RMSE?
\begin{itemize}
\item 0.061
\end{itemize}
What is the out-of-sample RMSE (i.e. the RMSE in the test data)?
\begin{itemize}
\item 0.202
\end{itemize}
Does the optimal value of α lead you to believe that you should use LASSO or ridge regression for this prediction task?
\begin{itemize}
\item It would be better to use the ridge regression model
\end{itemize}
\section{Question 9}
Based on the bias-variance tradeoff, after estimating a simple linear regression model on the housing.train dataframe, you are more likely to go with the Ridge Regression Model since its in-sample and out-of-sample RMSEs are the closest.
\end{document}
| {
"alphanum_fraction": 0.7520759193,
"avg_line_length": 25.1641791045,
"ext": "tex",
"hexsha": "c5e90b988cde22e5a8447931dce1e43013879b86",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "da3e45e48a26e7c8c37da33696e845e992a34711",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "myarberry26/DScourseS20",
"max_forks_repo_path": "ProblemSets/PS9/PS9_Yarberry.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "da3e45e48a26e7c8c37da33696e845e992a34711",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "myarberry26/DScourseS20",
"max_issues_repo_path": "ProblemSets/PS9/PS9_Yarberry.tex",
"max_line_length": 220,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "da3e45e48a26e7c8c37da33696e845e992a34711",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "myarberry26/DScourseS20",
"max_stars_repo_path": "ProblemSets/PS9/PS9_Yarberry.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 534,
"size": 1686
} |
\chapter{Background}\label{cap:estadodelarte}
\noindent This second chapter tries to acquaint the reader with the key concepts that define Cloud Computing as well as the MapReduce archetype. Later chapters will elaborate on top of them.
\section{Cloud Computing}\label{sec:computacioncloud}
\noindent As has already been discussed, Cloud Computing is, in essence, a distributed computational model that attempts to ease on-demand consumption of computational infrastructure, by exporting it as virtual resources, platforms or services. Despite how it may seem, Cloud Computing is no new technology; it introduces a new way to exploit idle computing capacity. What it intends is to make orchestration of enormous data centers more flexible, so as to let a user start or destroy virtual machines as required --- Infrastructure as a Service (\emph{IaaS}) ---, leverage a testing environment over a particular operating system or software platform --- Platform as a Service (\emph{PaaS}) --- or use a specific service like remote backup --- Software as a Service (\emph{SaaS}). Figure \ref{fig:cloudlayers} shows the corresponding high level layer diagram of a generic cloud.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.69\textwidth]{imagenes/003.pdf}
\caption{Software layers in a particular cloud deployment}
\label{fig:cloudlayers}
\end{center}
\end{figure}
Different IaaS frameworks will cover the functionality that is required to drive the cloud-defining \emph{physical} infrastructure. Nonetheless, an effort to analyze, design, configure, install and maintain the intended service will be needed, bearing in mind that the degree of elaboration grows from IaaS services to SaaS ones. In effect, PaaS and SaaS layers are stacked, supported by those immediately under them (figure \ref{fig:cloudlayers}) --- as happens with software in general, which is implemented over a particular platform which, in turn, is also built upon a physical layer. Every Cloud framework focuses on giving the option to configure a stable environment in which to run virtual machines whose computing capabilities are defined by four variables: Virtual CPU count, virtual RAM, virtual persistent memory and virtual networking devices. Such an environment makes it possible to deploy virtual clusters upon which to install platforms or services to be subsequently consumed by users, building up the software layers that constitute the PaaS and SaaS paradigms respectively.
No less important features like access control, execution permissions, quota or persistent or safe storage will also be present in every framework.
\subsection{Architecture}\label{subsec:arquitecturacloud}
\noindent Figure \ref{fig:cloudlayers} showed possible layers that could be found in a cloud deployment. Depending on the implemented layers, the particular framework and the role played by the cluster node, there will appear different particular modules making it possible to consume the configured services. These modules may be thought of as cloud subsystems that connect each one of the parts that are required to execute virtual machines. Those virtual machines' capabilities are defined by the four variables previously discussed --- VCPUS, RAM, HDD and networking. As there is no methodology guiding how those subsystems should be in terms of size and responsibility, each framework makes its own modular partition regarding infrastructure management.
Setting modularity aside, one common feature among different clouds is the separation of responsibility into two main roles: \emph{Cloud Controller} and \emph{Cloud Node}. Figure \ref{fig:archcloud} shows a generic cloud deployment in a cluster with both roles defined. The guidelines followed for having these two roles lie close to the \emph{Master-Slave} architecture approach. In such architectures, in general, there is a set of computers labeled as coordinators which are expected to control execution, and another set made up of those machines that are to carry out the actual processing.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.9\textwidth]{imagenes/004.pdf}
\caption{Cloud Controller and Cloud Node}
\label{fig:archcloud}
\end{center}
\end{figure}
In a cluster, host computers or cluster nodes --- labeled as Cloud Controllers or Cloud Nodes --- cooperate in a time synchronized fashion with \emph{NTP} (\emph{Network Time Protocol}) and communicate via \emph{message-passing} supported by asynchronous queues. To store services' metadata and status they typically draw upon a \emph{DBMS} (\emph{Data Base Management System}) implementation, which is regularly kept running in a dedicated cluster node, or sharded (distributed) among the members of a database federation.
Although there is no practical restriction on configuring both Cloud Controller and Cloud Node within a single machine in a cluster, this approach should be limited to development or prototyping environments due to the considerable impact on performance that it would imply.
\subsubsection{Cloud Controller}\label{subsubsec:cloudcontroller}
\noindent The fundamental task of a Cloud Controller is to maintain all of the cloud's constituent modules working together coordinating their cooperation. It is a Controller's duty to:
\begin{itemize}
\item Control authentication and authorization.
\item Recount available infrastructure resources.
\item Manage quota.
\item Keep track of usage balance.
\item Maintain an inventory of users and projects.
\item Expose an API for service consumption.
\item Monitor the cloud in real time.
\end{itemize}
Being an essential part of a cloud as it is, the Controller node is usually replicated in physically distinct computers. Figure \ref{fig:cloudcontroller} shows a Cloud Controller's architecture from a high level perspective.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{imagenes/005.pdf}
\caption{Cloud Controller in detail}
\label{fig:cloudcontroller}
\end{center}
\end{figure}
As a general rule, clients will interact with the cloud through a Web Service API --- \emph{RESTful} APIs (\emph{REpresentational State Transfer}) mostly. Those APIs vary slightly from company to company, as usual, which forces clients to be partially coupled to the cloud implementation they intend to use. That is why there has been an increasing trend for unifying and standardizing those APIs in order to guarantee compatibility inter-framework. Of special mention is the cloud standard proposed by the \emph{Open Grid Forum}: \emph{OCCI} (\emph{Open Cloud Computing Interface} \cite{occisdraft}).
The cloud's modules support its functional needs. Each one of them will have a well-defined responsibility, and so there will appear networking-centered modules, access and security control modules, storage modules, etc. Many of them existed before the advent of Cloud Computing, but they worked only locally to a cluster or organization. Inter-module communication is handled by an asynchronous message queue that guarantees an efficient broadcasting system off the Cloud Controller. To store and expose configuration data to the cluster in a single place while managing concurrent requests to update these data, every IaaS Framework evaluated resorts to a DBMS.
Hardware requirements for the cluster nodes vary with each particular framework implementation and the expected \emph{QoS}, but they generally require something around 10 GB of RAM, a quad core CPU, Gigabit Ethernet and one TB of storage to get started.
\subsubsection{Cloud Node}\label{subsubsec:cloudnode}
\noindent If the Cloud Controller is entrusted with the cloud's correct functioning, acting like glue for its parts, the actual task processing is performed in the Cloud Nodes using the VCPU, VRAM and VHDD that are borrowed from the corresponding CPU, RAM and HDD of the real nodes of the cluster.
Cloud Nodes may be heterogeneous according to their hardware characteristics. They will configure a resource set that, seen from outside of the cluster, will appear to be a homogeneous whole where the summation of capacities of every participating node is the cloud's dimension. Furthermore, this homogeneous space could be provisioned, as discussed, on-demand. It is the Cloud Controller's responsibility to single out the optimal distribution of virtual servers throughout the cluster, attending to the physical aspects of both the virtual machine and the computer in which it will run.
The most important subsystem in a Cloud Node is the \emph{hypervisor} or \emph{VMM} (\emph{Virtual Machine Monitor}). The hypervisor is responsible for making possible the execution of virtual servers --- or virtual instances --- by creating the virtual architecture needed and the \emph{virtual execution domain} that will be managed with the help of the kernel of the operating system. To generate this architecture there fundamentally exist three techniques: \emph{Emulation}, \emph{Paravirtualization} and \emph{Hardware Virtualization} --- or \emph{Full Virtualization}. Different hypervisors support them to different degrees, and most will only cover one of them.
\subsection{Virtualization Techniques}\label{subsec:tecnicasemu}
\noindent What follows is a brief review of the main methods to create virtual infrastructure.
\subsubsection{Emulation}\label{subsubsec:emulacion}
\noindent Emulation is the most general virtualization method, in the sense that it does not require anything special to be present in the underlying hardware. However, it also carries the highest performance penalty. With emulation, every structure sustaining the virtual machine's operation is created as a functional software copy of its hardware counterpart; i.e., every machine instruction to be executed on the virtual hardware must first be run in software and translated on the fly into another machine instruction runnable in the physical domain --- the cluster node. The interpreter implementation and the divergence between emulated and real hardware directly impact the translation overhead, which keeps emulation from being widely employed in performance-critical deployments. Nonetheless, thanks to its operational flexibility, it is commonly used as a mechanism to support legacy systems or as a development platform. Besides, the kernel of the guest operating system --- the kernel of the virtual machine's operating system --- needs no alteration whatsoever, and the cluster node's kernel need only load a module.
\subsubsection{Hardware Virtualization}\label{subsubsec:virthardware}
\noindent Hardware Virtualization, on the contrary, allows guest processes to run directly atop the physical hardware layer with no interpretation. Logically, this provides a considerable speedup over emulation, though it requires special treatment of virtual processes. Regarding CPUs, both AMD's and Intel's support virtual process execution --- the capacity to run processes belonging to the virtual domain with little performance overhead --- as long as the corresponding hardware extensions are present (\emph{SVM} and \emph{VT-x} respectively \cite{intelvtx}). Just as with emulation, an unaltered guest kernel may be used. This is important: were it not so, the range of OSs that could be run as guests would be limited. Lastly, it should be pointed out that the hardware architecture is exposed to the VM as it is, i.e. with no software middleware or transformation.
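On Linux hosts, the presence of these CPU extensions is usually reflected in the \texttt{flags} field of \texttt{/proc/cpuinfo} (\texttt{vmx} for Intel VT-x, \texttt{svm} for AMD). The small Python sketch below is only an illustrative, Linux-specific check and is not part of any framework discussed here.
\begin{verbatim}
def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    # "vmx" marks Intel VT-x, "svm" marks AMD's extension (Linux-specific check).
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"VT-x": "vmx" in flags, "SVM": "svm" in flags}

print(virtualization_flags())
\end{verbatim}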
\subsubsection{Paravirtualization}\label{subsubsec:paravirt}
\noindent Paravirtualization uses a different approach. To begin with, it is indispensable that the guest kernel be modified so that it can interact with a paravirtualized environment. When the guest runs, the hypervisor separates the regions of instructions that have to be executed in kernel mode on the CPU from those in user mode, which are executed as regular host instructions. The hypervisor then manages a contract between host and guest that allows the latter to run kernel-mode sections as if they belonged to the real execution domain --- as if they were instructions running in the host, not in the guest --- with almost no performance penalty. Paravirtualization, in turn, does not require any special hardware extension to be present in the CPU.
\subsection{Cloud IaaS frameworks}\label{subsec:frameworksiaas}
\noindent Cloud IaaS frameworks are the software systems that abstract away the complexity of provisioning on demand and administering failure-prone generic infrastructure. Although almost all of them are open source --- which fosters reuse and collaboration --- they have evolved in different contexts. This has led to a lack of interoperability among them, with each maturing its own non-standard API, though today those divergences are fading away. These frameworks and APIs are the product of efforts to improve and ease the control of the particular clusters of machines on which they germinated, so it is not surprising that their advances originated in parallel with the infrastructure they drove, leaving compatibility as a secondary issue.
Slowly but steadily these management systems grew in reach and responsibility, boosted by an increasing interest in the sector. Software and systems engineering evolved them into more abstract software packages capable of managing a wider variety of physical architectures. The appearance of AWS finished forging the latent need for standardization, and as of today most frameworks try to define APIs close to Amazon's and OCCI's \cite{occisdraft}.
\section{MapReduce Paradigm}\label{sec:mapred}
\noindent The paradigm originates in a publication by two Google employees \cite{googlemapreduce}. In this paper they describe an implementation devised to abstract the parts common to distributed computations --- the parts that made simple but large problems much harder to solve when developers parallelized their own code on massive clusters.
A concise definition states that MapReduce is ``\emph{a data processing model and execution environment that runs on large clusters of commodity computers}'' \cite{hadoopdefguide}.
\subsection{Programming Model}\label{subsec:programacionmapred}
\noindent The MapReduce programming model requires the developer to express a problem as a partition into two well-defined pieces. The first part deals with reading the input data and producing a set of intermediate results that will be scattered over the cluster nodes. These intermediate results are grouped according to an intermediate key value. The second phase begins with that grouping of intermediate results and concludes when every \emph{reduce} operation on the groupings succeeds. Seen from another vantage point, the first phase corresponds, broadly speaking, to the behavior of the functional \emph{map} and the second to the functional \emph{fold}.
In terms of the MapReduce model, these functional-programming concepts have given rise to the \emph{map} and \emph{reduce} functions. Both have to be supplied by the developer, which may require breaking down the original problem differently in order to fit the model's execution mechanics. In return, the MapReduce model takes care of parallelizing the computation, distributing input data across the cluster, handling exceptions that may arise during execution and recovering output results, all of it transparently to the engineer.
\subsubsection{Map Function}\label{map}
\noindent The typical functional map takes any function \emph{F} and a list of elements \emph{L} (or, in general, any recursive data structure) and returns the list that results from applying \emph{F} to each element of \emph{L}. Figure \ref{fig:functionalmap} shows its signature and an example.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{|l|}
\hline
$\mathbf{map:} \: \left ( \alpha \rightarrow \beta \right ) \: \rightarrow \: \alpha \: list \: \rightarrow \: \beta \: list$ \\
$\mathbf{map} \: \left( \mathbf{pow}\:2 \right) \: \left[ 1,2,3 \right] \: \Rightarrow \: \left[ 1,4,9 \right ]$ \\
\hline
\end{tabular}
\caption{Map function example (functional version)}
\label{fig:functionalmap}
\end{center}
\end{figure}
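As a concrete analogue of the signature in figure \ref{fig:functionalmap}, the same computation can be written in one line of Python (an illustrative sketch, not part of the model being described):
\begin{verbatim}
# Functional map: apply a function to every element of a list.
squares = list(map(lambda x: x ** 2, [1, 2, 3]))   # -> [1, 4, 9]
\end{verbatim}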
In its MapReduce realization, the map function receives a tuple as input and produces a list of intermediate \emph{(key, value)} tuples as output. It is the MapReduce library's responsibility to feed the map function by transforming the data contained in the input files into \emph{(key, value)} pairs. Then, after the mapping has been completed, it groups the intermediate tuples by key before passing them as input to the reduce function. Input and output data types correspond to those shown in the function signature in figure \ref{fig:mapreducemap}.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{|l|}
\hline
$\mathbf{map:} \: \left( k1,v1 \right) \: \rightarrow \: \left( k2,v2 \right) list$ \\
$\mathbf{k:} \: key$ \\
$\mathbf{v:} \: value$ \\
$\mathbf{\left(kn,vn \right):} \: \left( key,value \right) \: pair \: in \: a \: domain \: n$ \\
\hline
\end{tabular}
\caption{Map function signature (MapReduce version)}
\label{fig:mapreducemap}
\end{center}
\end{figure}
\subsubsection{Reduce function}\label{reduce}
\noindent The typical functional fold expects any function \emph{G}, a list \emph{L} (or, in general, any recursive data type) and an initial element \emph{I} of the accumulator type. Fold returns the value built up from \emph{I} by applying \emph{G} to each element of \emph{L} in turn. Figure \ref{fig:fold} presents fold's signature as well as an example.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{|l|}
\hline
$\mathbf{fold:} \: \left( \alpha \rightarrow \beta \rightarrow \alpha \right) \: \rightarrow \: \alpha \: \rightarrow \: \beta \: list \: \rightarrow \: \alpha$ \\
$\mathbf{fold} \: \left( \mathbf{+} \right) \: 0 \: \left[ 1,2,3 \right] \: \Rightarrow \: 6$ \\
\hline
\end{tabular}
\caption{Fold function example}
\label{fig:fold}
\end{center}
\end{figure}
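For concreteness, the same fold can be written with Python's \texttt{functools.reduce} (an illustrative sketch):
\begin{verbatim}
from functools import reduce

# Fold (+) over [1, 2, 3] starting from 0.
total = reduce(lambda acc, x: acc + x, [1, 2, 3], 0)   # -> 6
\end{verbatim}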
Contrary to map, reduce expects the intermediate groups as input and produces a smaller set of values for each group as output, because reduce iteratively \emph{folds} each grouping into values. Those reduced intermediate values are passed to the reduce function again if more values with the same key appear as input. Reduce's signature is shown in figure \ref{fig:reduce}. Just as with map, MapReduce handles the transmission of intermediate results from map into reduce. The model also allows the definition of a \emph{Combiner} function that acts after map, partially reducing the values with the same key in order to lower network traffic --- the combiner usually runs on the same machine as the map.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{|l|}
\hline
$\mathbf{reduce:} \: \left( k2,v2 \: list \right) \: \rightarrow \: v2 \: list$ \\
$\mathbf{k:} \: key$ \\
$\mathbf{v:} \: value$ \\
$\mathbf{\left(kn,vn \right):} \: \left(key,value\right) \: pair \: in \: a \: domain \: n$ \\
\hline
\end{tabular}
\caption{Reduce function signature}
\label{fig:reduce}
\end{center}
\end{figure}
\subsubsection{A word counter in MapReduce}\label{subsubsec:wordcount}
\noindent As an example, figure \ref{fig:wordcount} shows the pseudocode of a MapReduce application that counts the number of occurrences of each word in a document set.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{|l|}
\hline
\texttt{{\bf Map} (String key, String value):} \\
\texttt{// key: document name} \\
\texttt{// value: document contents} \\
\texttt{{\bf for each} word w {\bf in} value:} \\
\texttt{{\bf EmitIntermediate} (w, ``1'')};\\ \\
\texttt{{\bf Reduce} (String key, Iterator values):} \\
\texttt{// key: a word} \\
\texttt{// values: an Iterable over intermediate counts of the word key} \\
\texttt{{\bf int} result = 0;} \\
\texttt{{\bf for each} v {\bf in} values:} \\
\texttt{result += {\bf ParseInt} (v);} \\
\texttt{{\bf Emit} ({\bf AsString} (result));} \\
\hline
\end{tabular}
\caption{MapReduce wordcount pseudocode. Source: \cite{googlemapreduce}}
\label{fig:wordcount}
\end{center}
\end{figure}
In a Wordcount execution flow the following happens: map is presented with a set of names of all the plain-text documents whose words are to be counted. Map then iterates over each document in the set, emitting the tuple \emph{(<word>, ``1'')} for each word found. An explosion of intermediate pairs is thus generated as map output, distributed over the network and progressively folded in the reduce phase. Reduce is fed every pair generated by map, but in a different form: on each invocation it accepts a pair \emph{(<word>, list(``1''))}. The list of \emph{``1''}s, or generically an \texttt{Iterable} over \emph{``1''}s, contains as many elements as there were instances of the word \emph{<word>} in the document set, assuming the map phase finished before the reduce phase started and that every occurrence of \emph{<word>} was submitted to the same reducer (a cluster node executing the reduce function) in the cluster.
Once the flow has completed, MapReduce returns a listing with every word in the documents and the number of times it appeared.
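To make these mechanics tangible, the following Python sketch simulates the whole flow in a single process; it only illustrates the model (map, group by key, reduce) and is not the distributed implementation described in \cite{googlemapreduce}.
\begin{verbatim}
from collections import defaultdict

def map_fn(doc_name, contents):
    for word in contents.split():
        yield (word, 1)                      # EmitIntermediate(w, "1")

def reduce_fn(word, counts):
    return word, sum(counts)                 # Emit(AsString(result))

def run_wordcount(documents):
    intermediate = defaultdict(list)         # shuffle: group values by key
    for name, contents in documents.items():
        for key, value in map_fn(name, contents):
            intermediate[key].append(value)
    return dict(reduce_fn(k, v) for k, v in intermediate.items())

print(run_wordcount({"doc1": "to be or not to be"}))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
\end{verbatim}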
\subsection{Applicability of the Model}\label{subsec:aplicabilidad}
\noindent The wide range of problems that can be expressed following the MapReduce programming paradigm is clearly reflected in \cite{googlemapreduce}; a subset of them:
\begin{description}
\item[Distributed grep:] Map emits every line matching the regular expression. Reduce only forwards its input to its output acting as the identity function.
\item[Count of URL access frequency:] Like wordcount, but with the different URLs as input to map.
\item[Reverse web-link graph:] For each URL contained in a web document, map generates the pair \emph{(<target\_URL>, <source\_URL>)}. Reduce will emit the pair \emph{(target, list(source))}.
\item[Inverted index:] Map parses each document and emits a series of tuples of the form \emph{(<word>, <document\_id>)}. All of them are passed as input to reduce, which generates the sequence of pairs \emph{(<word>, list(document\_id))}; a short sketch of this case follows the list.
\end{description}
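As an illustration of the last case, a single-process Python sketch of the inverted index (the function names and sample data are hypothetical):
\begin{verbatim}
from collections import defaultdict

def map_fn(doc_id, text):
    for word in set(text.split()):           # emit (word, doc_id) once per document
        yield (word, doc_id)

def inverted_index(documents):
    index = defaultdict(list)                # reduce: (word, list(document_id))
    for doc_id, text in documents.items():
        for word, d in map_fn(doc_id, text):
            index[word].append(d)
    return dict(index)

print(inverted_index({"d1": "big data", "d2": "big clusters"}))
# e.g. {'big': ['d1', 'd2'], 'data': ['d1'], 'clusters': ['d2']}
\end{verbatim}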
\subsection{Processing Model}\label{subsec:processingmodel}
\noindent Besides defining the structure that applications must follow to leverage MapReduce's capabilities --- so that they need not code their own distribution mechanisms --- \cite{googlemapreduce} introduced an implementation of the model that allowed Google to stay protocol, architecture and system agnostic while keeping their commodity clusters fully utilized. This agnosticism allows for deploying distributed systems free of vendor lock-in.
The MapReduce model works by receiving self-contained processing requests called \emph{job}s. Each job is a \emph{partition} of smaller duties called \emph{task}s. A job is not complete until no task is pending execution. The processing model's main intent is to distribute the tasks throughout the cluster in a way that reduces job latency. In general, task processing within each phase is done in parallel and the phases execute in sequence; yet it is not technically required that reduce wait until map is complete.
Figure \ref{fig:exmapreduce} summarizes a typical execution flow. It is worth going into its details, as many other MapReduce implementations take similar approaches.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.99\textwidth]{imagenes/006.pdf}
\caption{MapReduce execution diagram. Source: \cite{googlemapreduce}}
\label{fig:exmapreduce}
\end{center}
\end{figure}
\begin{enumerate}
\item MapReduce divides the input files into \emph{M} parts, whose size is controlled by a parameter, and distributes as many copies of the user's MapReduce program as there are nodes participating in the computation.
\item From this moment onward, each program copy resides in a cluster node. A copy is chosen at random among them and labeled the \emph{Master Replica}, effectively assigning the \emph{Master Role} to the node holding it; every other node in the cluster is given the \emph{Worker Role}. The worker nodes receive the actual MapReduce tasks, and their execution is driven by the master node. There will be \emph{M} map tasks and \emph{R} reduce tasks.
\item Workers assigned map tasks read their corresponding portions of the input files and parse the contents, generating \emph{(key, value)} tuples that are submitted to the map function for processing. Map outputs are buffered in memory.
\item Periodically, the buffered pairs are \emph{spilled} to local disk and partitioned into \emph{R} regions; a short sketch of this partitioning rule follows the list. Their paths on disk are then sent back to the master, which is responsible for forwarding them to the \emph{reduce workers}.
\item When a reducer is notified that it should start processing, the path to the data to be reduced is sent along, and the reducer fetches the data directly from the mapper via \emph{RPC} (\emph{Remote Procedure Call}). Before actually invoking the reduce function, the node sorts the intermediate pairs by key.
\item Lastly, the reducer iterates over the key-sorted pairs, submitting to the user-defined reduce function each key together with the Iterable of values associated with it. The output of the reduce function is appended to a file stored on the distributed file system.
\end{enumerate}
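The partitioning into \emph{R} regions mentioned above is commonly implemented with a simple hash-based rule; the following Python fragment sketches the idea (the number of regions and the sample pairs are hypothetical):
\begin{verbatim}
R = 4                                        # hypothetical number of reduce regions

def partition(key, R):
    return hash(key) % R                     # every value of a key lands in one region

regions = {r: [] for r in range(R)}
for key, value in [("to", 1), ("be", 1), ("to", 1)]:
    regions[partition(key, R)].append((key, value))
\end{verbatim}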
When every map and reduce task has succeeded, the partitioned output space --- the file set within each partition --- is returned to the client application that made the MapReduce invocation.
This processing model is abstract enough to be applied to the resolution of very large problems running on huge clusters.
\subsection{Fault Tolerance}\label{subsec:toleranciafallos}
\noindent Providing an environment for jobs long enough to require large sets of machines to keep latency within reasonable bounds calls for a policy able to assure a degree of fault tolerance. If unattended, failures lead to errors; some would cause finished tasks to be lost, others would take intermediate data offline. Consequently, if no measures were taken to prevent or deal with failure, job throughput would suffer, as some jobs would have to be rescheduled from the beginning.
The MapReduce model describes a policy that foresees a series of issues within an execution flow and implements a series of actions to prevent or handle them.
\subsubsection{Worker Failure}\label{subsubsec:fallotrabajador}
\noindent This is the least taxing of the problems. To check that every worker is up, the master polls them periodically. If a worker repeatedly fails to reply to pings, it is marked as \emph{failed}.
A worker marked failed will neither be scheduled new tasks nor be accessed remotely by reducers to load any intermediate map results it may hold --- a fact that could prevent the workflow from succeeding. In that case the master resolves the access to these data by marking the affected map tasks as \emph{idle} again, so that they can be rescheduled later and their results stored on an active worker.
\subsubsection{Master Failure}\label{subsubsec:fallomaestro}
\noindent Failure of the master node is more troublesome. The proposed approach consists in having the master periodically write a snapshot from which a previous state can be restored if it goes down unexpectedly. It is a harder problem than a worker failure mainly because there can only be one master per cluster, and the time it would take another node to take over the master role would leave the scheduling pipeline stalled. Having the master on a single machine has, nonetheless, the benefit of a low probability of failure --- which is precisely why the Google MapReduce paper \cite{googlemapreduce} proposed simply aborting the entire job. Still, since leaving a \emph{single point of failure} is poor design, subsequent MapReduce implementations have proposed replicating the master on other nodes in the same cluster.
\subsection{Additional Characteristics}\label{subsec:caracteristicasadicionales}
\noindent What follows is a summary of additional features of the original MapReduce implementation.
\subsubsection{Locality}\label{subsubsec:localidad}
\noindent The typical bottleneck in modern distributed deployments is network bandwidth. In MapReduce executions, the information flows into the cluster from an external client. As already discussed, each node in a MapReduce cluster holds a certain amount of the input data and contributes its processing capacity to run MapReduce tasks over those data. Each stage in the MapReduce execution pipeline requires the network to handle a lot of traffic, which would reduce throughput if no sufficiently wide channel were deployed and no locality-exploiting strategy were implemented.
In fact, MapReduce uses locality as an additional resource. The idea is for the distributed file system to place data as close as possible to where they will be transformed --- it will try to store map and reduce data close to the mappers and reducers that will transform them --- effectively diminishing transport over the network.
\subsubsection{Complexity}\label{subsubsec:complejidad}
\noindent A priori, the variables \emph{M} and \emph{R} --- the number of partitions of the input space and of the intermediate space, respectively --- may be configured to take any value whatsoever. Yet there are practical limits to their values. For every running job the master has to make $O(M + R)$ scheduling decisions --- assuming no error forces it to reschedule tasks --- as each partition of the input space has to be submitted to a mapper and each intermediate partition has to be transmitted to a reducer; hence $O(M + R)$ is the expression of the \emph{temporal complexity}. Regarding \emph{spatial complexity}, the master has to maintain $O(M \cdot R)$ pieces of state in memory, as the intermediate results of a map task may be propagated to each of the \emph{R} pieces of the reduce space.
\subsubsection{Backup Tasks}\label{subsubsec:secundarias}
\noindent A situation can arise in which a cluster node executes map or reduce tasks much more slowly than it theoretically could, for instance because of a damaged drive that slows read and write operations down. Since a job completes only when all of its composing tasks have finished, the faulty node (the \emph{straggler}) would curb the global throughput. To alleviate this, when only a few tasks of a particular job are left incomplete, \emph{Backup Task}s are created and submitted to additional workers, so that a single task is executed concurrently as two task instances. As soon as one copy of the task succeeds it is labeled \emph{completed}, duly reducing the impact of stragglers at the cost of wasting some computational resources.
\subsubsection{Combiner Function}\label{subsubsec:combiner}
\noindent It often happens that there is a good number of repeated intermediate pairs. Taking wordcount as an example, it is easy to see that every mapper will generate as many \emph{(``a'', ``1'')} tuples as there are \emph{a}'s in the input documents. A mechanism to lower the number of tuples that have to be emitted to reducers is to allow the definition of a \emph{Combiner Function} that groups outputs of the map function, on the same mapper node, before sending them out over the network, effectively cutting down traffic.
In fact, it is usual for both combiner and reduce functions to share the same implementation, even though the former writes its output to local disk while the latter writes directly to the distributed file system.
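For concreteness, a small Python sketch of such a combiner for the wordcount case (the names are illustrative and not tied to any particular framework):
\begin{verbatim}
from collections import Counter

def combine(mapper_output):
    # Collapse the repeated (word, 1) pairs emitted by one mapper into (word, n)
    # before anything is sent over the network.
    counts = Counter()
    for word, one in mapper_output:
        counts[word] += one
    return list(counts.items())

print(combine([("a", 1), ("a", 1), ("b", 1)]))   # [('a', 2), ('b', 1)]
\end{verbatim}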
\subsection{Other MapReduce Implementations}\label{subsec:frameworksmapred}
\noindent Since 2004, multiple frameworks implementing the ideas exposed in \cite{googlemapreduce} have come out. The following listing clearly shows the impact MapReduce has had.
\begin{description}
\item[Hadoop] \cite{hadoopdefguide} One of the first implementations to cover the MapReduce processing model and the reference framework for other MapReduce implementations. It is by far the most widely deployed, tested, configured and profiled today.
\item[GridGain] \cite{gridgainvshadoop} Commercial and centered on in-memory processing to speed up execution: lower data-access latency at the expense of a smaller I/O space.
\item[Twister] \cite{twister} Developed as a research project at Indiana University, it tries to separate and abstract the common parts required to run MapReduce workflows in order to keep them longer in the cluster's distributed memory. With such an approach, the time taken to configure mappers and reducers across multiple executions is lowered by doing their setup only once. \emph{Twister} really shines at executing \emph{iterative} MapReduce jobs --- those jobs where maps and reduces do not finish in a single map-reduce sequence, but need instead a multitude of map-reduce cycles to complete.
\item[GATK] \cite{gatk} Used for genetic research to sequence and evaluate DNA fragments from multiple species.
\item[Qizmt] \cite{qizmt} Written in C\# and deployed for MySpace.
\item[misco] \cite{misco} Written 100\% in Python and based on previous work at Nokia, it is positioned as a MapReduce implementation capable of running on mobile devices.
\item[Peregrine] \cite{peregrine} By optimizing how intermediate results are transformed and by passing every I/O operation through an asynchronous queue, its developers claim to have considerably accelerated the task execution rate.
\item[Mars] \cite{mars} Implemented in \emph{NVIDIA CUDA}, it revolves around extracting higher performance by moving the map and reduce operations onto the graphics card. It is claimed to improve processing throughput by over an order of magnitude.
\end{description}
Hadoop is undoubtedly the most used MapReduce implementation nowadays. Its open-source nature and its flexibility, both for processing and for storage, have attracted increasing interest from the IT industry. This has brought about many pluggable extensions that enhance Hadoop's applicability and usage.
| {
"alphanum_fraction": 0.794092219,
"avg_line_length": 122.1830985915,
"ext": "tex",
"hexsha": "92d94cedd5f6d518291b3828277b03d2c7c18b0b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d72225b631606495a391f2543d3ee37a844a0125",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "marcos-sb/qosh-book",
"max_forks_repo_path": "stateoftheart.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d72225b631606495a391f2543d3ee37a844a0125",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "marcos-sb/qosh-book",
"max_issues_repo_path": "stateoftheart.tex",
"max_line_length": 1161,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d72225b631606495a391f2543d3ee37a844a0125",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "marcos-sb/qosh-book",
"max_stars_repo_path": "stateoftheart.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7622,
"size": 34700
} |
% A (minimal) template for problem sets and solutions using the exam document class
% Organization:
%% Define new commands, macros, etc. in macros.tex
%% Anything that you would put before \begin{document} should go in prelude.tex
%% For multiple psets, each should get its own file to \input into main with a \section{}
\input{prelude}
\begin{document}
\title{Fourier Analysis and Wavelets\\Homework 1}
\author{Francisco Jose Castillo Carrasco}
\date{\today}
\maketitle
\input{macros}
%% Content goes here
\section*{Problem 2}
\input{problem2}
\newpage
\section*{Problem 7}
\input{problem7}
\newpage
\section*{Problem 11}
\input{problem11}
\section*{Problem 14}
\input{problem14}
\newpage
\section*{Problem 23}
\input{problem23}
\end{document}
| {
"alphanum_fraction": 0.7516600266,
"avg_line_length": 20.9166666667,
"ext": "tex",
"hexsha": "f0f9fcc852f3af76363f2069ae6de7f50567e1ce",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "12ab3e86a4a44270877e09715eeab713da45519d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "fjcasti1/Courses",
"max_forks_repo_path": "FourierAnalysisAndWavelets/Homework1/HW1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "12ab3e86a4a44270877e09715eeab713da45519d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "fjcasti1/Courses",
"max_issues_repo_path": "FourierAnalysisAndWavelets/Homework1/HW1.tex",
"max_line_length": 89,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "12ab3e86a4a44270877e09715eeab713da45519d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "fjcasti1/Courses",
"max_stars_repo_path": "FourierAnalysisAndWavelets/Homework1/HW1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 217,
"size": 753
} |
% !TEX root = ../thesis.tex
\pdfbookmark[0]{Declaration}{Declaration}
\chapter*{Declaration}
\thispagestyle{empty}
I declare that I have completed this work on my own and only with the help of the references mentioned above.
\bigskip
\noindent\textit{\thesisUniversityCity, \thesisDate}
\smallskip
\begin{flushright}
\begin{minipage}{5cm}
\rule{\textwidth}{1pt}
\centering\thesisName
\end{minipage}
\end{flushright}
| {
"alphanum_fraction": 0.7624703088,
"avg_line_length": 22.1578947368,
"ext": "tex",
"hexsha": "f949c9e4e750cd7683d2fe1f904c415ccf259f41",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6bf2d5363df683db37e15ea6a828160210401763",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "OmeGak/ai-thesis",
"max_forks_repo_path": "content/declaration.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6bf2d5363df683db37e15ea6a828160210401763",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "OmeGak/ai-thesis",
"max_issues_repo_path": "content/declaration.tex",
"max_line_length": 104,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6bf2d5363df683db37e15ea6a828160210401763",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "OmeGak/ai-thesis",
"max_stars_repo_path": "content/declaration.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 126,
"size": 421
} |
\documentclass[10pt,a4paper,twocolumn]{article}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage[affil-it]{authblk}
\usepackage{titlesec}
\usepackage{enumitem}
\usepackage{url}
\urldef{\mailsa}\path|{i.c.t.m.speek}@student.tudelft.nl|
\newcommand{\keywords}[1]{\par\addvspace\baselineskip%
\noindent\keywordname\enspace\ignorespaces#1}
\titlespacing\section{0pt}{20pt plus 4pt minus 2pt}{8pt plus 2pt minus 2pt}
\titlespacing\subsection{0pt}{12pt plus 4pt minus 2pt}{5pt plus 2pt minus 2pt}
\titlespacing\subsubsection{0pt}{12pt plus 4pt minus 2pt}{5pt plus 2pt minus 2pt}
% first the title is needed
\title{ScavengeIT - a face verification based photo database scavenger}
\author{I.C.T.M Speek%
\thanks{Electronic address: \texttt{[email protected]}; Corresponding author}}
\affil{Computer Vision \\ Delft University of Technology}
% the affiliations are given next; don't give your e-mail address
% unless you accept that it will be published
\begin{document}
\maketitle
%------------------------------------------------------------------------------
\begin{abstract}
\end{abstract}
%------------------------------------------------------------------------------
\section{Introduction}
\label{sec:introduction}
Since the emergence of social media websites, finding images of friends has become easier. By manually adding tags to photos, websites such as Facebook are able to identify your friends in your online images. This report focuses on developing an application in Matlab to scavenge your local file system for any image containing a queried friend.
%------------------------------------------------------------------------------
\section{Problem definition}
\label{sec:problem}
Scavenging a database of images for photos of a particular person requires verifying one or multiple input images against the faces found in each image and returning a list of images in which the input face is verified. This process can be divided into several parts:
\begin{itemize}[noitemsep]
\item Processing the database images
\item Detecting faces
\item Extract discriminant features from input and detected faces
\item Classify the similarities
\item Decide whether faces are the same
\item Return the verified faces
\end{itemize}
%------------------------------------------------------------------------------
\section{Solution}
\label{sec:solution}
\todo[inline]{Say something more about V\&J}
To detect the faces in the images we can use the Viola \& Jones cascaded classifiers. These cascaded classifiers are particularly convenient when speed is an issue and it is uncertain whether or not a face is present in the image. The cascade of weak-learner decision stumps returns a negative output as soon as a single classifier does so. This way negative samples are processed significantly faster and only detected faces are processed by all classifiers. As pretrained cascaded classifiers are already available online from OpenCV, there is no need to train them, a process that can take a week's time.
When the faces are detected, discriminant features can be extracted. As described in \cite{klare2010taxonomy} there are 3 levels of features by which persons can be identified. Level 1 represents the global appearance of the person and captures the size of the face, gender and racial discriminant features, distinguishing persons at a first glance. Level 2 contains the most discriminative features and represents a shape and appearance model for the face and smaller local patches within the face region. Level 3 contains the most specific features, such as birthmarks, wrinkles and other detailed characteristics, and will be left out of this report.
The IntraFace Facial Feature Detection \& Tracking Matlab library can be used to fit a shape model onto a detected face. This model annotates 49 feature landmarks in the face, as can be seen in the upper left image in figure \ref{fig:ratio}. This shape model can be used to verify face images against one another.
The easiest and least robust way to distinguish between faces would be to measure the person-specific ratio of the distance between the eyes to the length of the nose. By setting a threshold on the ratio of the input face image, a decision can be made whether or not the found faces belong to the same person. This method works relatively well when dealing only with frontal faces. Because the scavenge application should handle a broad range of images, the face discriminator should be invariant to pose, lighting and occlusion of features, which calls for another solution. \cite{gao2009normalization} presents normalization techniques that normalize the input faces to a shape mean (a frontal view). By doing so, the faces can be verified more reliably, as we are no longer dealing with foreshortening of the distances between features, since the effect of different poses is removed from the face.
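As a rough illustration of this ratio-and-threshold idea, the following Python sketch computes such a ratio from an array of landmark coordinates; the landmark indices and the tolerance are hypothetical (they do not correspond to the actual IntraFace numbering), and the report's own implementation is in Matlab.
\begin{verbatim}
import numpy as np

def eye_nose_ratio(landmarks, l_eye, r_eye, nose_top, nose_tip):
    # landmarks: (n_landmarks, 2) array; indices below are placeholders.
    d_eyes = np.linalg.norm(landmarks[l_eye] - landmarks[r_eye])
    d_nose = np.linalg.norm(landmarks[nose_top] - landmarks[nose_tip])
    return d_eyes / d_nose

def same_person(ratio_a, ratio_b, tolerance=0.1):
    return abs(ratio_a - ratio_b) < tolerance   # tolerance is a tunable assumption
\end{verbatim}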
\subsection{Active appearance models for face verification}
\label{sec:aam}
Active Appearance models as described in \cite{edwards1998aam} try to model the face appearance as complete as possible. This justified approximation of a case can be used to understand the interface (between different face identities) and intraface (between the same identity) variability. As the face approximation describes an optimization problem which is the same for every face, generality is solved offline to achieve rapid convergence online. To achieve this, active appearance models are generated by combining a level 2 shape variation model with a shape-normalized level 1 appearance model.
\todo[inline]{Add active appaearance image of appaearance as well}
These variation models are trained using a training set of labelled images where landmarks mark key positions on each face. These points are aligned in a common co-ordinate frame and represented by a vector. A \emph{Principal Component Analysis} (PCA) generates a statistical model of this shape variation, where equation \ref{eq:pca} describes an example face.
\begin{equation}
x = \bar{x} + P_s b_s
\label{eq:pca}
\end{equation}
Where $x$ describes the example face, $\bar{x}$ the mean model, $P_s$ is a set of orthogonal modes of shape variation and $b_s$ is a set of shape parameters \cite{edwards1998aam}. For each input face a set of shape and gray-level parameters is extracted which varies in identity, expression, pose and lighting. To efficiently match faces, the Mahalanobis distance measure can be used, which enhances the effect of inter-class variation (identity) and suppresses the effect of intra-class variation (pose, lighting and expression) when the intra-class covariance matrix is available. This would mean that multiple input images of the same person should be provided in order to determine the intra-class covariance matrix. Assuming this is not available, a \emph{Linear Discriminant Analysis} (LDA) can be used to separate the inter-class variability from the intra-class variability, assuming the intra-class variation is similar for each individual. This way the mean covariance matrix can be used to estimate the variance.
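For illustration only, a compact Python/\texttt{numpy} sketch of the shape model in equation \ref{eq:pca}: fitting the modes with PCA, reconstructing a face from shape parameters, and projecting a face onto the model. The report's implementation is in Matlab and the variable names here are assumptions.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

def fit_shape_model(shapes, n_modes=10):
    # shapes: (n_faces, 2 * n_landmarks) array of aligned landmark coordinates
    pca = PCA(n_components=n_modes).fit(shapes)
    return pca.mean_, pca.components_            # x_bar and P_s

def reconstruct(x_bar, P_s, b_s):
    return x_bar + P_s.T @ b_s                   # x = x_bar + P_s b_s

def shape_parameters(x, x_bar, P_s):
    return P_s @ (x - x_bar)                     # b_s for a given face x
\end{verbatim}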
%------------------------------------------------------------------------------
\section{Implementation}
\label{sec:implementation}
The project follows the same order as presented in section \ref{sec:problem}. This section presents the approach chronologically, starting with the libraries that are used and explaining the additional steps that had to be taken to solve the face verification problem.
\subsection{Used libraries and data models}
As recommended, the state-of-the-art facial feature detection model IntraFace was used. This Matlab library provides functionality for fitting a feature model with 49 landmarks on detected faces within an image. Figure \ref{fig:alt} displays the initial performance of the facial feature detection functionality using the supplied 'auto' OpenCV face detection method. However, when using the 'interactive' mode, which allows selecting a bounding box as the face detection, all but 1 face allow the shape model to be fitted, as can be seen in figure \ref{fig:int}. This indicates that a better face detection algorithm is needed.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5\textwidth]{alt2.jpg}
\caption{IntraFace demo facial feature detection 'auto' mode}
\label{fig:alt}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5\textwidth]{interactive.jpg}
\caption{IntraFace demo facial feature detection 'interactive' mode}
\label{fig:int}
\end{center}
\end{figure}
\subsection{Own work}
\subsubsection{Face detection}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5\textwidth]{ratio.png}
\caption{Normalized face and ratio}
\label{fig:ratio}
\end{center}
\end{figure}
\subsubsection{Normalizing the faces}
\subsubsection{Feature Extraction}
\subsubsection{Classification}
\subsubsection{Decision model}
%------------------------------------------------------------------------------
\section{Experiments}
%------------------------------------------------------------------------------
\section{Results}
%------------------------------------------------------------------------------
\section{Future Works}
By learning the positive faces whilst doing the linear discriminant analysis, the faces can be used to determine the intra-class covariance matrix, and thus the Mahalanobis distance can be used to identify the individual.
\bibliographystyle{plain}
\bibliography{vision}
\end{document}
| {
"alphanum_fraction": 0.7464978728,
"avg_line_length": 58.4060606061,
"ext": "tex",
"hexsha": "40a260ca4a007ef95987adba0e0bf29a8a25d813",
"lang": "TeX",
"max_forks_count": 17,
"max_forks_repo_forks_event_max_datetime": "2020-03-20T16:54:27.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-02-13T03:31:26.000Z",
"max_forks_repo_head_hexsha": "faad71f06dc45b1b887d02e0de1a05b7647e5d77",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "Imara90/PhotoVision",
"max_forks_repo_path": "report/FaceVerification_Imara.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "faad71f06dc45b1b887d02e0de1a05b7647e5d77",
"max_issues_repo_issues_event_max_datetime": "2017-12-30T21:28:07.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-02-06T23:09:16.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "Imara90/PhotoVision",
"max_issues_repo_path": "report/FaceVerification_Imara.tex",
"max_line_length": 1028,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "faad71f06dc45b1b887d02e0de1a05b7647e5d77",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "Imara90/PhotoVision",
"max_stars_repo_path": "report/FaceVerification_Imara.tex",
"max_stars_repo_stars_event_max_datetime": "2019-12-06T03:24:06.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-20T06:30:22.000Z",
"num_tokens": 2082,
"size": 9637
} |
\documentclass{article}
%\usepackage[margin=0.5in]{geometry}
\begin{document}
\title{HW\#4 || Graph Clustering Algorithms}
\author{Saket Choudhary}
\maketitle
\section*{Markov Clustering(MCL)}
Markov clustering makes use of stochastic flows in a graph. The underlying idea is
that a random walk started in a dense cluster is likely to stay on the vertices of this cluster
for a long time before jumping to another cluster. This property is exploited by simulating a
stochastic flow on the graph such that the flow is inflated wherever the current is strong and
downweighted otherwise. The process can thus be formulated on a Markov graph, and the flows
are modelled by computing successive powers of the transition matrix.
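A compact \texttt{numpy} sketch of this expansion-and-inflation iteration is given below; the inflation exponent, the fixed number of iterations and the absence of pruning are simplifying assumptions, so this is an illustration rather than a full MCL implementation.
\begin{verbatim}
import numpy as np

def mcl(adjacency, r=2.0, iterations=50):
    M = adjacency + np.eye(len(adjacency))   # add self-loops
    M = M / M.sum(axis=0)                    # column-stochastic transition matrix
    for _ in range(iterations):
        M = np.linalg.matrix_power(M, 2)     # expansion: simulate the random walk
        M = M ** r                           # inflation: strengthen strong currents
        M = M / M.sum(axis=0)                # renormalise columns
    return M                                 # clusters are read off the attractor rows
\end{verbatim}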
\section*{Restricted Neighborhood Search(RNSC)}
RNSC tries to minimise a cost function that captures the weight of inter- and intra-cluster
edges. Starting from a random initial clustering, RNSC iteratively reassigns a node to another
cluster if doing so leads to a lower cost (a local minimum). Termination can be defined by the
number of iterations or by convergence of the cost function.
\section*{Molecular Complex Detection(MCODE)}
Molecular complex detection assigns each vertex a weight proportional to the number of its
neighbors. Starting from the heaviest vertex, it iteratively moves outward, adding a vertex to
the cluster if its weight is above a certain threshold.
\section*{Comparison}
All three algorithms are parametric, and hence the quality of the clustering depends on the choice of appropriate parameters.

\textbf{Robustness}: MCL and RNSC are more robust to noise and missing information in the data, and also more robust to the choice of parameters. Hence, in case of missing data, MCL or RNSC might be a better choice than MCODE.

\textbf{Heuristics}: RNSC and MCODE are heuristic, and hence their convergence depends on the initial set of clusters/vertices chosen.
\section*{References}
Brohee, Sylvain, and Jacques Van Helden. "Evaluation of clustering algorithms for protein-protein interaction networks." BMC bioinformatics 7.1 (2006): 488.
\end{document} | {
"alphanum_fraction": 0.8058536585,
"avg_line_length": 52.5641025641,
"ext": "tex",
"hexsha": "2235fa035c2cb6e0ccbb8d57fe8333b4538343e3",
"lang": "TeX",
"max_forks_count": 12,
"max_forks_repo_forks_event_max_datetime": "2022-02-10T03:21:09.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-09-25T19:06:45.000Z",
"max_forks_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "saketkc/hatex",
"max_forks_repo_path": "2015_Spring/MATH578/Unit3/hw4.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836",
"max_issues_repo_issues_event_max_datetime": "2015-09-23T21:21:52.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-09-16T23:11:00.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "NeveIsa/hatex",
"max_issues_repo_path": "2015_Spring/MATH578/Unit3/hw4.tex",
"max_line_length": 241,
"max_stars_count": 19,
"max_stars_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "NeveIsa/hatex",
"max_stars_repo_path": "2015_Spring/MATH578/Unit3/hw4.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-10T03:20:47.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-09-10T02:45:33.000Z",
"num_tokens": 466,
"size": 2050
} |
\documentclass{ximera}
\input{../preamble}
\title{Power Series}
%%%%%\author{Philip T. Gressman}
\begin{document}
\begin{abstract}
We introduce the concept of a power series and some related fundamental properties.
\end{abstract}
\maketitle
\section*{(Video) Calculus: Single Variable}
\youtube{S1aouJc4hoM}
\section*{Online Texts}
\begin{itemize}
\item \link[OpenStax II 6.1: Power Series]{https://openstax.org/books/calculus-volume-2/pages/6-1-power-series-and-functions} and \link[6.2: Properties of Power Series]{https://openstax.org/books/calculus-volume-2/pages/6-2-properties-of-power-series}
\item \link[Ximera OSU: Power Series]{https://ximera.osu.edu/mooculus/calculus2/powerSeries/titlePage}
\item \link[Community Calculus 11.8: Power Series]{https://www.whitman.edu/mathematics/calculus_online/section11.08.html} and \link[11.9: Calculus with Power Series]{https://www.whitman.edu/mathematics/calculus_online/section11.09.html}
\end{itemize}
\section*{Examples}
\begin{example}
\end{example}
\begin{example}
\end{example}
\end{document}
| {
"alphanum_fraction": 0.7686496695,
"avg_line_length": 31.1470588235,
"ext": "tex",
"hexsha": "734d3e095733f6ac9848fe40081a3046c3bed7f9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ptgressman/math104",
"max_forks_repo_path": "powerseries/25powerserieswarm.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ptgressman/math104",
"max_issues_repo_path": "powerseries/25powerserieswarm.tex",
"max_line_length": 251,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ptgressman/math104",
"max_stars_repo_path": "powerseries/25powerserieswarm.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 316,
"size": 1059
} |
\section{\module{xml.etree} ---
The ElementTree API for XML}
\declaremodule{standard}{xml.etree}
\modulesynopsis{Package containing common ElementTree modules.}
\moduleauthor{Fredrik Lundh}{[email protected]}
\versionadded{2.5}
The ElementTree package is a simple, efficient, and quite popular
library for XML manipulation in Python.
The \module{xml.etree} package contains the most common components
from the ElementTree API library.
In the current release, this package contains the \module{ElementTree},
\module{ElementPath}, and \module{ElementInclude} modules from the full
ElementTree distribution.
% XXX To be continued!
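A short usage example (not part of the original module documentation; only standard
ElementTree calls are used):

\begin{verbatim}
from xml.etree import ElementTree as ET

root = ET.fromstring("<catalog><book id='1'><title>XML Basics</title></book></catalog>")
for book in root.findall("book"):
    print(book.get("id") + ": " + book.find("title").text)   # 1: XML Basics
\end{verbatim}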
\begin{seealso}
\seetitle[http://effbot.org/tag/elementtree]
{ElementTree Overview}
{The home page for \module{ElementTree}. This includes links
to additional documentation, alternative implementations, and
other add-ons.}
\end{seealso}
| {
"alphanum_fraction": 0.7391763464,
"avg_line_length": 33.8214285714,
"ext": "tex",
"hexsha": "789062d81de74e63554bb3c5d917f336586ae296",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2019-07-18T21:33:17.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-01-30T21:52:13.000Z",
"max_forks_repo_head_hexsha": "c6bee7fe0fb35893190c6c65181d05345880d0fc",
"max_forks_repo_licenses": [
"PSF-2.0"
],
"max_forks_repo_name": "linpingchuan/Python-2.5",
"max_forks_repo_path": "Doc/lib/xmletree.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c6bee7fe0fb35893190c6c65181d05345880d0fc",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"PSF-2.0"
],
"max_issues_repo_name": "linpingchuan/Python-2.5",
"max_issues_repo_path": "Doc/lib/xmletree.tex",
"max_line_length": 72,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "c6bee7fe0fb35893190c6c65181d05345880d0fc",
"max_stars_repo_licenses": [
"PSF-2.0"
],
"max_stars_repo_name": "linpingchuan/Python-2.5",
"max_stars_repo_path": "Doc/lib/xmletree.tex",
"max_stars_repo_stars_event_max_datetime": "2015-10-23T02:57:29.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-10-23T02:57:29.000Z",
"num_tokens": 216,
"size": 947
} |
\chapter{Classical and Modern Spectrum Estimation}
\input{sections/part1/11.tex}
\input{sections/part1/12.tex}
\input{sections/part1/13.tex}
\input{sections/part1/14.tex}
\input{sections/part1/15.tex}
\input{sections/part1/16.tex} | {
"alphanum_fraction": 0.7835497835,
"avg_line_length": 28.875,
"ext": "tex",
"hexsha": "c1d257d7bdc3ff76cb0f08259dcd38e626644be9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "88d8c848909fdcbfd55907201575ef2b67601c93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "zdhank/Adaptive-Signal-Processing",
"max_forks_repo_path": "Report/sections/Part1/Part1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "88d8c848909fdcbfd55907201575ef2b67601c93",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "zdhank/Adaptive-Signal-Processing",
"max_issues_repo_path": "Report/sections/Part1/Part1.tex",
"max_line_length": 50,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "88d8c848909fdcbfd55907201575ef2b67601c93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "zdhank/Adaptive-Signal-Processing",
"max_stars_repo_path": "Report/sections/Part1/Part1.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-19T08:55:10.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-09-05T10:27:57.000Z",
"num_tokens": 71,
"size": 231
} |
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{book}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={Editable Distributed Hydrological Model},
pdfauthor={Kan Lei},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage{longtable,booktabs}
% Correct order of tables after \paragraph or \subparagraph
\usepackage{etoolbox}
\makeatletter
\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
\makeatother
% Allow footnotes in longtable head/foot
\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
\makesavenoteenv{longtable}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
\thm@preskip=8pt plus 2pt minus 4pt
\thm@postskip=\thm@preskip
}
\makeatother
\usepackage{booktabs}
\usepackage{longtable}
\usepackage{array}
\usepackage{multirow}
\usepackage{wrapfig}
\usepackage{float}
\usepackage{colortbl}
\usepackage{pdflscape}
\usepackage{tabu}
\usepackage{threeparttable}
\usepackage{threeparttablex}
\usepackage[normalem]{ulem}
\usepackage{makecell}
\usepackage{xcolor}
\usepackage[]{natbib}
\bibliographystyle{apalike}
\title{Editable Distributed Hydrological Model}
\author{Kan Lei}
\date{2021-04-15}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\hypertarget{the-document-and-the-edhm-package}{%
\chapter*{The Document and the EDHM Package}\label{the-document-and-the-edhm-package}}
\addcontentsline{toc}{chapter}{The Document and the EDHM Package}
\hypertarget{the-document}{%
\section*{The Document}\label{the-document}}
\addcontentsline{toc}{section}{The Document}
This document is the user guide for EDHM and introduces some further concepts about building hydrological models (\textbf{HM}). Chapter \ref{base} explains the basic concepts of the hydrological cycle and the important concepts and ideas behind EDHM. Chapter \ref{develop} shows the workflow for using a hydrological model with EDHM and the way to develop a new model. Chapters \ref{module} and \ref{model} show the basic information, e.g.~input data, parameters and output data, of every module and model.
\hypertarget{edhm}{%
\section*{EDHM}\label{edhm}}
\addcontentsline{toc}{section}{EDHM}
EDHM is an R package for hydrological models, intended to simplify model building, especially for distributed hydrological models. The package contains many complete \textbf{MODEL}s that can be used directly, and many \textbf{MODULE}s from which a new MODEL can be built. All of the MODELs and MODULEs are built with matrix arithmetic, which handles the distributed case well. The package also provides many tools to calibrate parameters or to build a new MODEL or MODULE. The package is published only on GitHub; for first-time use, please install the packages \href{https://github.com/LuckyKanLei/EDHM}{EDHM} and \href{https://github.com/LuckyKanLei/HMtools}{HMtools} using the following code:
\begin{verbatim}
install.packages("devtools")
devtools::install_github("LuckyKanLei/HMtools")
devtools::install_github("LuckyKanLei/EDHM")
\end{verbatim}
A summary of the Processes and Modules is shown in the following table:
\begin{table}[!h]
\centering
\begin{tabular}{l|l}
\hline
PROCESS & MODULE\\
\hline
ReferenceET & \protect\hyperlink{ReferenceET.Hargreaves}{ReferenceET.Hargreaves} | \protect\hyperlink{ReferenceET.Linacre}{ReferenceET.Linacre} | \protect\hyperlink{ReferenceET.PenMon}{ReferenceET.PenMon}\\
\cline{1-2}
ActualET & \protect\hyperlink{ActualET.Gr4j}{ActualET.Gr4j} | \protect\hyperlink{ActualET.Vic}{ActualET.Vic}\\
\cline{1-2}
SNOW & \protect\hyperlink{SNOW.17}{SNOW.17} | \protect\hyperlink{SNOW.Ddf}{SNOW.Ddf}\\
\cline{1-2}
BASEFLOW & \protect\hyperlink{BASEFLOW.ARNO}{BASEFLOW.ARNO}\\
\cline{1-2}
INTERCEPTION & \protect\hyperlink{INTERCEPTION.Gash}{INTERCEPTION.Gash}\\
\cline{1-2}
InfiltratRat & \protect\hyperlink{InfiltratRat.GreenAmpt}{InfiltratRat.GreenAmpt}\\
\cline{1-2}
Infiltration & \protect\hyperlink{Infiltration.OIER}{Infiltration.OIER} | \protect\hyperlink{Infiltration.SER}{Infiltration.SER}\\
\cline{1-2}
RUNOFF & \protect\hyperlink{RUNOFF.Gr4j}{RUNOFF.Gr4j} | \protect\hyperlink{RUNOFF.OIER}{RUNOFF.OIER} | \protect\hyperlink{RUNOFF.SER}{RUNOFF.SER} | \protect\hyperlink{RUNOFF.Vic}{RUNOFF.Vic} | \protect\hyperlink{RUNOFF.VM}{RUNOFF.VM}\\
\cline{1-2}
GROUNDWATER & \protect\hyperlink{GROUNDWATER.Vic}{GROUNDWATER.Vic}\\
\cline{1-2}
ROUTE & \protect\hyperlink{ROUTE.G2RES}{ROUTE.G2RES} | \protect\hyperlink{ROUTE.Gr4j}{ROUTE.Gr4j}\\
\hline
\end{tabular}
\end{table}
\hypertarget{base}{%
\chapter{Basic Concept}\label{base}}
\hypertarget{hydrological-cycle}{%
\section{Hydrological Cycle}\label{hydrological-cycle}}
\hypertarget{atmosphe-atmos}{%
\subsection{Atmosphe (Atmos)}\label{atmosphe-atmos}}
\hypertarget{snow}{%
\subsection{Snow}\label{snow}}
\hypertarget{canopy}{%
\subsection{Canopy}\label{canopy}}
\hypertarget{surface-runoff-route}{%
\subsection{Surface (Runoff, Route)}\label{surface-runoff-route}}
\hypertarget{subsurface-subsur}{%
\subsection{Subsurface (Subsur)}\label{subsurface-subsur}}
\hypertarget{ground}{%
\subsection{Ground}\label{ground}}
\textbf{\emph{Base flow / runoff:}} Sustained or fair-weather runoff. In most streams, base flow is composed largely of ground-water effluent (Langbein and others, 1947, p.~6). The term base flow is often used in the same sense as base runoff. However, the distinction is the same as that between stream flow and runoff. When the concept in the terms base flow and base runoff is that of the natural flow in a stream, base runoff is the logical term.
In EDHM, \textbf{Baseflow} is the unified name for both base flow and base runoff.
\hypertarget{important-concept-of-edhm}{%
\section{Important Concept of EDHM}\label{important-concept-of-edhm}}
The key concepts of EDHM are:
\begin{itemize}
\item Process
\item Method
\item Module
\item Model
\item Run\_Model
\item Evaluate
\item Calibrate
\end{itemize}
\hypertarget{data-and-parameter}{%
\section{Data and Parameter}\label{data-and-parameter}}
\hypertarget{variable-naming}{%
\subsection{Variable naming}\label{variable-naming}}
\hypertarget{data-structure}{%
\subsection{Data Structure}\label{data-structure}}
\hypertarget{data-or-parameter}{%
\subsection{Data or Parameter}\label{data-or-parameter}}
\hypertarget{develop}{%
\chapter{Model Use and Develop}\label{develop}}
Choose a Model (advantage: convenience; disadvantage: poor adaptability).
\hypertarget{model-structure-or-concept}{%
\section{Model Structure or Concept}\label{model-structure-or-concept}}
Design a Model
\hypertarget{use-model-with-a-model-or-run_model}{%
\section{Use Model with a MODEL or Run\_MODEL}\label{use-model-with-a-model-or-run_model}}
\hypertarget{check-the-indata-list}{%
\subsection{Check the InData list}\label{check-the-indata-list}}
\hypertarget{data-preparation}{%
\subsection{Data Preparation}\label{data-preparation}}
\hypertarget{evaluate}{%
\subsection{Evaluate}\label{evaluate}}
\hypertarget{calibrate}{%
\subsection{Calibrate}\label{calibrate}}
\hypertarget{copuling-a-new-model-with-module}{%
\section{Coupling a new Model with MODULEs}\label{copuling-a-new-model-with-module}}
\hypertarget{choose-module}{%
\subsection{Choose MODULE}\label{choose-module}}
\hypertarget{set-the-data-flow}{%
\subsection{Set the Data-Flow}\label{set-the-data-flow}}
\hypertarget{build-the-model-and-run_model}{%
\subsection{Build the MODEL and Run\_MODEL}\label{build-the-model-and-run_model}}
\hypertarget{design-a-new-module}{%
\section{Design a new MODULE}\label{design-a-new-module}}
\hypertarget{method-and-formula}{%
\subsection{Method and Formula}\label{method-and-formula}}
\hypertarget{coding-the-inhalt}{%
\subsection{Coding the Content}\label{coding-the-inhalt}}
\hypertarget{set-inoutdata-and-parameter}{%
\subsection{Set In/OutData and Parameter}\label{set-inoutdata-and-parameter}}
Here is a review of existing methods.
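As a purely schematic illustration of the InData / Param / OutData convention documented in the tables of Chapter \ref{module} (all names and the toy rule below are hypothetical and do not reproduce the EDHM API), a MODULE can be pictured as an R function that maps grouped input matrices and a parameter list to grouped output matrices:
\begin{verbatim}
## Schematic sketch only: hypothetical names, not an EDHM function.
## Every variable is assumed here to be a matrix with Param$PeriodN rows
## (time steps) and Param$GridN columns (effective grids).
MODULE.Example <- function(InData, Param) {
  ## toy rule: actual ET equals reference ET, capped by available moisture
  AET <- pmin(InData$Evatrans$RET, InData$Ground$MoistureVolume)
  list(Evatrans = list(AET = AET))  ## OutData, grouped as in the tables
}
\end{verbatim}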
\hypertarget{module}{%
\chapter{Basic Information of MODULEs}\label{module}}
Overview of \textbf{MODULEs}
\begin{table}[!h]
\centering
\begin{tabular}{l|l}
\hline
PROCESS & MODULE\\
\hline
ReferenceET & \protect\hyperlink{ReferenceET.Hargreaves}{ReferenceET.Hargreaves} | \protect\hyperlink{ReferenceET.Linacre}{ReferenceET.Linacre} | \protect\hyperlink{ReferenceET.PenMon}{ReferenceET.PenMon}\\
\cline{1-2}
ActualET & \protect\hyperlink{ActualET.Gr4j}{ActualET.Gr4j} | \protect\hyperlink{ActualET.Vic}{ActualET.Vic}\\
\cline{1-2}
SNOW & \protect\hyperlink{SNOW.17}{SNOW.17} | \protect\hyperlink{SNOW.Ddf}{SNOW.Ddf}\\
\cline{1-2}
BASEFLOW & \protect\hyperlink{BASEFLOW.ARNO}{BASEFLOW.ARNO}\\
\cline{1-2}
INTERCEPTION & \protect\hyperlink{INTERCEPTION.Gash}{INTERCEPTION.Gash}\\
\cline{1-2}
InfiltratRat & \protect\hyperlink{InfiltratRat.GreenAmpt}{InfiltratRat.GreenAmpt}\\
\cline{1-2}
Infiltration & \protect\hyperlink{Infiltration.OIER}{Infiltration.OIER} | \protect\hyperlink{Infiltration.SER}{Infiltration.SER}\\
\cline{1-2}
RUNOFF & \protect\hyperlink{RUNOFF.Gr4j}{RUNOFF.Gr4j} | \protect\hyperlink{RUNOFF.OIER}{RUNOFF.OIER} | \protect\hyperlink{RUNOFF.SER}{RUNOFF.SER} | \protect\hyperlink{RUNOFF.Vic}{RUNOFF.Vic} | \protect\hyperlink{RUNOFF.VM}{RUNOFF.VM}\\
\cline{1-2}
GROUNDWATER & \protect\hyperlink{GROUNDWATER.Vic}{GROUNDWATER.Vic}\\
\cline{1-2}
ROUTE & \protect\hyperlink{ROUTE.G2RES}{ROUTE.G2RES} | \protect\hyperlink{ROUTE.Gr4j}{ROUTE.Gr4j}\\
\hline
\end{tabular}
\end{table}
\hypertarget{ReferenceET}{%
\section{ReferenceET}\label{ReferenceET}}
\hypertarget{ReferenceET.Hargreaves}{%
\subsection{ReferenceET.Hargreaves}\label{ReferenceET.Hargreaves}}
This MODULE is based on the following literature: Reference Crop Evapotranspiration from Temperature \citep{GeorgeH.Hargreaves.1985}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& TAir & Cel & Average Air temperature in Timestep\\
\cline{2-4}
& TMax & Cel & Maximal Air temperature in one day\\
\cline{2-4}
\multirow{-3}{*}{\raggedright\arraybackslash MetData} & TMin & Cel & Minimal Air temperature in one day\\
\cline{1-4}
GeoData & Latitude & deg & Latitude\\
\cline{1-4}
TimeData & NDay & -- & Day number in the year\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
PeriodN & 1 & 9999 & -- & The number of time steps\\
\cline{1-5}
GridN & 1 & 9999 & -- & The number of effective grids\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Evatrans & RET & mm & Reference evapotranspiration\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{ReferenceET.Linacre}{%
\subsection{ReferenceET.Linacre}\label{ReferenceET.Linacre}}
This MODULE is based on the following literature: A simple formula for estimating evaporation rates in various climates, using temperature data alone \citep{Linacre.1977}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& TAir & Cel & Average Air temperature in Timestep\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash MetData} & Actual\_vapor\_press & mPa & Actual vapor pressure\\
\cline{1-4}
& Latitude & deg & Latitude\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash GeoData} & Elevation & m & Elevation\\
\cline{1-4}
TimeData & NDay & -- & Day number in the year\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
PeriodN & 1 & 9999 & -- & The number of time steps\\
\cline{1-5}
GridN & 1 & 9999 & -- & The number of effective grids\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Evatrans & RET & mm & Reference evapotranspiration\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{ReferenceET.PenMon}{%
\subsection{ReferenceET.PenMon}\label{ReferenceET.PenMon}}
This MODULE is based on the following literature: Step by Step Calculation of the Penman-Monteith Evapotranspiration (FAO-56) \citep{LincolnZotarelli.2014}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& TAir & Cel & Average Air temperature in Timestep\\
\cline{2-4}
& TMax & Cel & Maximal Air temperature in one day\\
\cline{2-4}
& TMin & Cel & Minimal Air temperature in one day\\
\cline{2-4}
& RelativeHumidity & \% & Relative Humidity, not greater than 100\\
\cline{2-4}
& WindSpeed & m/s & Average Wind Speed\\
\cline{2-4}
& WindH & m & The height at which the WindSpeed is measured\\
\cline{2-4}
\multirow{-7}{*}{\raggedright\arraybackslash MetData} & SunHour & h & Sunshine duration in one day\\
\cline{1-4}
& Latitude & deg & Latitude\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash GeoData} & Elevation & m & Elevation\\
\cline{1-4}
TimeData & NDay & -- & Day number in the year\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
PeriodN & 1 & 9999 & -- & The number of time steps\\
\cline{1-5}
GridN & 1 & 9999 & -- & The number of effective grids\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Evatrans & RET & mm & Reference evapotranspiration\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{ActualET}{%
\section{ActualET}\label{ActualET}}
\hypertarget{ActualET.Gr4j}{%
\subsection{ActualET.Gr4j}\label{ActualET.Gr4j}}
This MODULE is based on the following literature: Improvement of a parsimonious model for streamflow simulation \citep{Perrin.2003}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Evatrans & RET & mm & Reference evapotranspiration\\
\cline{1-4}
Ground & MoistureVolume & mm & Moisture volume\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
Gr4j\_X1 & 0.1 & 9.99 & mm & NA\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Evatrans & AET & mm & Actual evapotranspiration\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{ActualET.Vic}{%
\subsection{ActualET.Vic}\label{ActualET.Vic}}
This MODULE is not based on a specific literature reference.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& AerodynaResist & s/m & Aerodynamic resistance\\
\cline{2-4}
& ArchitecturalResist & s/m & Architectural resistance\\
\cline{2-4}
\multirow{-3}{*}{\raggedright\arraybackslash Aerodyna} & StomatalResist & s/m & Stomatal resistance\\
\cline{1-4}
Canopy & StorageCapacity & mm & Canopy Storage Capacity for Intercept and Evaporation from Canopy\\
\cline{1-4}
Evatrans & RET & mm & Reference evapotranspiration\\
\cline{1-4}
& MoistureVolume & mm & Moisture volume\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Ground} & MoistureCapacityMax & mm & Maximal Moisture Capacity\\
\cline{1-4}
Intercept & Interception & mm & Interception in Canopy\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
SoilMoistureCapacityB & 0 & 0 & -- & --\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& EvaporationCanopy & mm & Evaporation from Canopy\\
\cline{2-4}
& Transpiration & mm & Transpiration (water from Root layer of vegetation)\\
\cline{2-4}
\multirow{-3}{*}{\raggedright\arraybackslash Evatrans} & EvaporationLand & mm & Evaporation from the land surface (sometimes containing the evaporation from water surfaces such as lakes)\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{SNOW}{%
\section{SNOW}\label{SNOW}}
\hypertarget{SNOW.17}{%
\subsection{SNOW.17}\label{SNOW.17}}
This MODULE is based on the following literature: National Weather Service river forecast system: snow accumulation and ablation model \citep{Anderson.1973}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
MetData & TAir & Cel & Average Air temperature in Timestep\\
\cline{1-4}
& Ice\_Volume & mm & Solid ice volume, not depth\\
\cline{2-4}
& Liquid\_Volume & mm & Liquid Volume\\
\cline{2-4}
& SN17\_ATI & -- & --\\
\cline{2-4}
\multirow{-4}{*}{\raggedright\arraybackslash Snow} & SN17\_HD & mm & --\\
\cline{1-4}
& SnowFall & mm & Snow\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Prec} & RainFall & mm & Rain\\
\cline{1-4}
GeoData & Elevation & m & Elevation\\
\cline{1-4}
TimeData & NDay & -- & Day number in the year\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
SN17\_SCF & 0.70 & 1.40 & -- & Snowfall correction factor\\
\cline{1-5}
SN17\_MFMAX & 0.50 & 2.00 & mm/6hCel & Maximum melt factor considered to occur on June 21\\
\cline{1-5}
SN17\_MFMIN & 0.05 & 0.49 & mm/6hCel & Minimum melt factor considered to occur on December 21\\
\cline{1-5}
SN17\_UADJ & 0.03 & 0.19 & mm/6hCel & The average wind function during rain-on-snow periods\\
\cline{1-5}
SN17\_NMF & 0.05 & 0.50 & mm/6hCel & Maximum negative melt factor\\
\cline{1-5}
SN17\_TIPM & 0.10 & 1.00 & -- & Antecedent snow temperature index\\
\cline{1-5}
SN17\_PXTEMP & -2.00 & 2.00 & Cel & Temperature that separates rain from snow\\
\cline{1-5}
SN17\_MBASE & 0.00 & 1.00 & Cel & Base temperature for non-rain melt factor\\
\cline{1-5}
SN17\_PLWHC & 0.02 & 0.30 & -- & Percent of liquid–water capacity\\
\cline{1-5}
SN17\_DAYGM & 0.00 & 0.30 & mm/d & Daily melt at snow–soil interface\\
\cline{1-5}
TimeStepSec & 1.00 & 9999.00 & s & Seconds per time step\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& Ice\_Volume & mm & Solid ice volume, not depth\\
\cline{2-4}
& Liquid\_Volume & mm & Liquid Volume\\
\cline{2-4}
& SN17\_ATI & -- & --\\
\cline{2-4}
\multirow{-4}{*}{\raggedright\arraybackslash Snow} & SN17\_HD & mm & --\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{SNOW.Ddf}{%
\subsection{SNOW.Ddf}\label{SNOW.Ddf}}
This MODULE is not based on a specific literature reference.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Ground & MoistureVolume & mm & Moisture volume\\
\cline{1-4}
Snow & Volume & mm & Total volume of ice and liquid water, not depth\\
\cline{1-4}
& SnowFall & mm & Snow\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Prec} & RainFall & mm & Rain\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
Factor\_Day\_degree & 0 & 0 & -- & --\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Snow & Volume & mm & Total volume of ice and liquid water, not depth\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{BASEFLOW}{%
\section{BASEFLOW}\label{BASEFLOW}}
\hypertarget{BASEFLOW.ARNO}{%
\subsection{BASEFLOW.ARNO}\label{BASEFLOW.ARNO}}
This MODULE is based on the following literature: Large area hydrologic modeling and assessment part I: Model development \citep{Arnold.1998}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& MoistureVolume & mm & Moisture volume\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Ground} & MoistureCapacityMax & mm & Maximal Moisture Capacity\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
ExponentARNOBase & 0 & 0 & -- & --\\
\cline{1-5}
ARNOBaseThresholdRadio & 0 & 0 & -- & --\\
\cline{1-5}
DrainageLossMax & 0 & 0 & -- & --\\
\cline{1-5}
DrainageLossMin & 0 & 0 & -- & --\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Ground & BaseFlow & mm & Base Flow\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{INTERCEPTION}{%
\section{INTERCEPTION}\label{INTERCEPTION}}
\hypertarget{INTERCEPTION.Gash}{%
\subsection{INTERCEPTION.Gash}\label{INTERCEPTION.Gash}}
This MODULE is based on the following literature: An analytical model of rainfall interception by forests \citep{Gash.1979}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Canopy & StorageCapacity & mm & Canopy Storage Capacity for Intercept and Evaporation from Canopy\\
\cline{1-4}
Evatrans & EvaporationCanopy & mm & Evaporation from Canopy\\
\cline{1-4}
Intercept & Interception & mm & Interception in Canopy\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
CoefficientFreeThroughfall & 0 & 0 & -- & --\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Intercept & Interception & mm & Interception in Canopy\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{InfiltratRat}{%
\section{InfiltratRat}\label{InfiltratRat}}
\hypertarget{InfiltratRat.GreenAmpt}{%
\subsection{InfiltratRat.GreenAmpt}\label{InfiltratRat.GreenAmpt}}
This MODULE is based on the following literature: Drainage to a water table analysed by the Green-Ampt approach \citep{Youngs.1976}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& MoistureVolume & mm & Moisture volume\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Ground} & Depth & mm & Ground Depth\\
\cline{1-4}
& Conductivity & m/s & Soil actual Conductivity\\
\cline{2-4}
& WettingFrontSuction & m/s & Wetting Front Suction\\
\cline{2-4}
\multirow{-3}{*}{\raggedright\arraybackslash SoilData} & Porosity & 100\% & Soil Porosity, not greater than 1\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
GridN & 1 & 9999 & -- & The number of effective grids\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Infilt & InfiltrationRat & mm & Infiltration Rate (for some INFILTRATION MODULEs)\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{Infiltration}{%
\section{Infiltration}\label{Infiltration}}
\hypertarget{Infiltration.OIER}{%
\subsection{Infiltration.OIER}\label{Infiltration.OIER}}
This MODULE is not based on a specific literature reference.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Infilt & InfiltrationRat & mm & Infiltration Rate (for some INFILTRATION MODULEs)\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
InfiltrationRateB & 0 & 0 & -- & --\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Infilt & Infiltration & mm & Infiltration\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{Infiltration.SER}{%
\subsection{Infiltration.SER}\label{Infiltration.SER}}
This MODULE is not based on a specific literature reference.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& MoistureCapacityMax & mm & Maximal Moisture Capacity\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Ground} & MoistureCapacity & mm & Moisture Capacity\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
SoilMoistureCapacityB & 0 & 0 & -- & --\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Infilt & Infiltration & mm & Infiltration\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{RUNOFF}{%
\section{RUNOFF}\label{RUNOFF}}
\hypertarget{RUNOFF.Gr4j}{%
\subsection{RUNOFF.Gr4j}\label{RUNOFF.Gr4j}}
This MODULE is based on the following literature: Improvement of a parsimonious model for streamflow simulation \citep{Perrin.2003}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Ground & MoistureVolume & mm & Moisture volume\\
\cline{1-4}
Evatrans & AET & mm & Actual evapotranspiration\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
Gr4j\_X1 & 0.1 & 9.99 & mm & NA\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& Runoff & mm & Runoff (more than one value when the runoff is divided into different forms)\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Ground} & MoistureVolume & mm & Moisture volume\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{RUNOFF.OIER}{%
\subsection{RUNOFF.OIER}\label{RUNOFF.OIER}}
This MODULE is not based on a specific literature reference.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Infilt & InfiltrationRateMax & mm & Maximal Infiltration Rate (for some INFILTRATION MODULEs)\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
InfiltrationRateB & 0 & 0 & -- & --\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Ground & Runoff & mm & Runoff (more than one value when the runoff is divided into different forms)\\
\cline{1-4}
Infilt & Infiltration & mm & Infiltration\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{RUNOFF.SER}{%
\subsection{RUNOFF.SER}\label{RUNOFF.SER}}
This MODULE is not based on a specific literature reference.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& MoistureCapacityMax & mm & Maximal Moisture Capacity\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Ground} & MoistureVolume & mm & Moisture volume\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
SoilMoistureCapacityB & 0 & 0 & -- & --\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Ground & Runoff & mm & Runoff (more than one value when the runoff is divided into different forms)\\
\cline{1-4}
Infilt & Infiltration & mm & Infiltration\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{RUNOFF.Vic}{%
\subsection{RUNOFF.Vic}\label{RUNOFF.Vic}}
This MODULE is based on the following literature: A new surface runoff parameterization with subgrid-scale soil heterogeneity for land surface models \citep{Liang.2001}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& MoistureCapacityMax & mm & Maximal Moisture Capacity\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Ground} & MoistureVolume & mm & Moisture volume\\
\cline{1-4}
Infilt & InfiltrationRat & mm & Infiltration Rate (for some INFILTRATION MODULEs)\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
SoilMoistureCapacityB & 0 & 0 & -- & --\\
\cline{1-5}
InfiltrationRateB & 0 & 0 & -- & --\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Ground & Runoff & mm & Runoff (more than one value when the runoff is divided into different forms)\\
\cline{1-4}
Infilt & Infiltration & mm & Infiltration\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{RUNOFF.VM}{%
\subsection{RUNOFF.VM}\label{RUNOFF.VM}}
This MODULE is not based on a specific literature reference.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& MoistureCapacity & mm & Moisture Capacity\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Ground} & MoistureCapacityMax & mm & Maximal Moisture Capacity\\
\cline{1-4}
Infilt & InfiltrationRateMax & mm & Maximal Infiltration Rate (for some INFILTRATION MODULEs)\\
\cline{1-4}
Prec & Precipitation & mm & Precipitation, sum of rain and snow\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
SoilMoistureCapacityB & 0 & 0 & -- & --\\
\cline{1-5}
InfiltrationRateB & 0 & 0 & -- & --\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Ground & Runoff & mm & Runoff (more than one value when the runoff is divided into different forms)\\
\cline{1-4}
Infilt & Infiltration & mm & Infiltration\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{GROUNDWATER}{%
\section{GROUNDWATER}\label{GROUNDWATER}}
\hypertarget{GROUNDWATER.Vic}{%
\subsection{GROUNDWATER.Vic}\label{GROUNDWATER.Vic}}
This MODULE is not based on a specific literature reference.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& ZoneMoistureVolume & mm & Moisture volume, when the ground is divided into more than one layer\\
\cline{2-4}
& ZoneDepth & mm & Ground depth, when the ground is divided into more than one layer\\
\cline{2-4}
\multirow{-3}{*}{\raggedright\arraybackslash Ground} & BaseFlow & mm & Base Flow\\
\cline{1-4}
Infilt & Infiltration & mm & Infiltration\\
\cline{1-4}
Intercept & Interception & mm & Interception in Canopy\\
\cline{1-4}
& Porosity & 100\% & Soil Porosity, not greater than 1\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash SoilData} & SaturatedConductivity & m/s & Soil Saturated Conductivity\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
GridN & 1 & 9999 & -- & The number of effective grids\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& Overflow & mm & Overflow, when the calculated water volume is greater than the capacity\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Ground} & ZoneMoistureVolume & mm & Moisture volume, when the ground is divided into more than one layer\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{ROUTE}{%
\section{ROUTE}\label{ROUTE}}
\hypertarget{ROUTE.G2RES}{%
\subsection{ROUTE.G2RES}\label{ROUTE.G2RES}}
This MODULE is not based on a specific literature reference.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& WaterSource & mm & Water source for routing, sometimes the same data as the Runoff\\
\cline{2-4}
& UHAll & -- & All the UH data for all of the grids, for routing with IUH\\
\cline{2-4}
& TypeGridID & -- & The grid type, for routing with IUH\\
\cline{2-4}
\multirow{-4}{*}{\raggedright\arraybackslash Route} & TransAll & -- & All of the transform matrices for all of the grids, for routing with IUH\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
PeriodN & 1 & 9999 & -- & The number of time steps\\
\cline{1-5}
GridN & 1 & 9999 & -- & The number of effective grids\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
Route & StaFlow & m3/s & Station flow in the specified grid\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{ROUTE.Gr4j}{%
\subsection{ROUTE.Gr4j}\label{ROUTE.Gr4j}}
This MODULE is based on the following literature: Improvement of a parsimonious model for streamflow simulation \citep{Perrin.2003}.
\textbf{\emph{InData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& WaterSource & mm & Water source for routing, sometimes the same data as the Runoff\\
\cline{2-4}
& Store & mm & Store in the Route (for some Module)\\
\cline{2-4}
& Gr4j\_UH1 & -- & UH form 1 only for Module ROUTE.Gr4j, made by the function\\
\cline{2-4}
\multirow{-4}{*}{\raggedright\arraybackslash Route} & Gr4j\_UH2 & -- & UH form 2, only for MODULE ROUTE.Gr4j\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{Param}}
\begin{table}[!h]
\centering
\begin{tabular}{l|r|r|l|l}
\hline
Parameter & Min & Max & Unit & Description\\
\hline
Gr4j\_X2 & 0.1 & 9.99 & mm/Step & The catchment water exchange coefficient\\
\cline{1-5}
Gr4j\_X3 & 0.1 & 9.99 & mm & The one-day maximal capacity of the routing reservoir\\
\cline{1-5}
Gr4j\_X4 & 1.0 & 9.99 & mm/Step & The UH1 unit hydrograph time base\\
\cline{1-5}
time\_step\_i & 1.0 & 9999.00 & -- & The time step index\\
\hline
\end{tabular}
\end{table}
\textbf{\emph{OutData}}
\begin{table}[!h]
\centering
\begin{tabular}{l|l|l|l}
\hline
Group & Variable & Unit & Description\\
\hline
& StaFlow & m3/s & Station flow in the specified grid\\
\cline{2-4}
\multirow{-2}{*}{\raggedright\arraybackslash Route} & Store & mm & Store in the Route (for some Module)\\
\hline
\end{tabular}
\end{table}
Return to the \protect\hyperlink{module}{Overview} of MODULEs.
\hypertarget{model}{%
\chapter{Model}\label{model}}
\hypertarget{classical-vic}{%
\section{Classical VIC}\label{classical-vic}}
The MODULEs used by this MODEL are documented in Chapter \ref{module}; see, e.g., Section \ref{ReferenceET} (\protect\hyperlink{ReferenceET}{ReferenceET}).
\hypertarget{gr4j}{%
\section{GR4J}\label{gr4j}}
\hypertarget{final-words}{%
\chapter{Final Words}\label{final-words}}
This concludes the EDHM user guide.
\bibliography{book.bib,packages.bib}
\end{document}
% template.tex for Biometrics papers
%
% This file provides a template for Biometrics authors. Use this
% template as the starting point for creating your manuscript document.
% See the file biomsample.tex for an example of a full-blown manuscript.
% ALWAYS USE THE referee OPTION WITH PAPERS SUBMITTED TO BIOMETRICS!!!
% You can see what your paper would look like typeset by removing
% the referee option. Because the typeset version will be in two
% columns, however, some of your equations may be too long. DO NOT
% use the \longequation option discussed in the user guide!!! This option
% is reserved ONLY for equations that are impossible to split across
% multiple lines; e.g., a very wide matrix. Instead, type your equations
% so that they stay in one column and are split across several lines,
% as are almost all equations in the journal. Use a recent version of the
% journal as a guide.
%
\documentclass[useAMS,usenatbib,referee]{biom}
%documentclass[useAMS]{biom}
%
% If your system does not have the AMS fonts version 2.0 installed, then
% remove the useAMS option.
%
% useAMS allows you to obtain upright Greek characters.
% e.g. \umu, \upi etc. See the section on "Upright Greek characters" in
% this guide for further information.
%
% If you are using AMS 2.0 fonts, bold math letters/symbols are available
% at a larger range of sizes for NFSS release 1 and 2 (using \boldmath or
% preferably \bmath).
%
% Other options are described in the user guide. Here are a few:
%
% - If you use Patrick Daly's natbib to cross-reference your
% bibliography entries, use the usenatbib option
%
% - If you use \includegraphics (graphicx package) for importing graphics
% into your figures, use the usegraphicx option
%
% If you wish to typeset the paper in Times font (if you do not have the
% PostScript Type 1 Computer Modern fonts you will need to do this to get
% smoother fonts in a PDF file) then uncomment the next line
% \usepackage{Times}
%%%%% PLACE YOUR OWN MACROS HERE %%%%%
\def\bSig{\mathbf{\Sigma}}
\newcommand{\VS}{V\&S}
\newcommand{\tr}{\mbox{tr}}
\newcommand{\RNum}[1]{\uppercase\expandafter{\romannumeral #1\relax}}
%packages
\usepackage{amsmath,amsfonts,amssymb}
%\usepackage{biblatex}
\usepackage{bm}
\usepackage{hyperref}
\usepackage{graphicx}
%\usepackage{natbib}
%\usepackage{subcaption}
% The rotating package allows you to have tables displayed in landscape
% mode. The rotating package is NOT included in this distribution, but
% can be obtained from the CTAN archive. USE OF LANDSCAPE TABLES IS
% STRONGLY DISCOURAGED -- create landscape tables only as a last resort if
% you see no other way to display the information. If you do do this,
% then you need the following command.
%\usepackage[figuresright]{rotating}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Here, place your title and author information. Note that in
% use of the \author command, you create your own footnotes. Follow
% the examples below in creating your author and affiliation information.
% Also consult a recent issue of the journal for examples of formatting.
\title[Addressing Confounding and Measurement Error]{Addressing Confounding and Exposure Measurement Error Using Conditional Score Functions}
% Here are examples of different configurations of author/affiliation
% displays. According to the Biometrics style, in some instances,
% the convention is to have superscript *, **, etc footnotes to indicate
% which of multiple email addresses belong to which author. In this case,
% use the \email{ } command to produce the emails in the display.
% In other cases, such as a single author or two authors from
% different institutions, there should be no footnoting. Here, use
% the \emailx{ } command instead.
% The examples below correspond to almost every possible configuration
% of authors and may be used as a guide. For other configurations, consult
% a recent issue of the the journal.
% Single author -- USE \emailx{ } here so that no asterisk footnoting
% for the email address will be produced.
%\author{John Author\emailx{[email protected]} \\
%Department of Statistics, University of Warwick, Coventry CV4 7AL, U.K.}
% Two authors from the same institution, with both emails -- use
% \email{ } here to produce the asterisk footnoting for each email address
%\author{John Author$^{*}$\email{[email protected]} and
%Kathy Authoress$^{**}$\email{[email protected]} \\
%Department of Statistics, University of Warwick, Coventry CV4 7AL, U.K.}
% Exactly two authors from different institutions, with both emails
% USE \emailx{ } here so that no asterisk footnoting for the email address
% is produced.
%\author
%{John Author\emailx{[email protected]} \\
%Department of Statistics, University of Warwick, Coventry CV4 7AL, U.K.
%\and
%Kathy Author\emailx{[email protected]} \\
%Department of Biostatistics, University of North Carolina at Chapel Hill,
%Chapel Hill, North Carolina, U.S.A.}
% Three or more authors from same institution with all emails displayed
% and footnoted using asterisks -- use \email{ }
%\author{John Author$^*$\email{[email protected]},
%Jane Author$^{**}$\email{[email protected]}, and
%Dick Author$^{***}$\email{[email protected]} \\
%Department of Statistics, University of Warwick, Coventry CV4 7AL, U.K}
% Three or more authors from same institution with one corresponding email
% displayed
%\author{John Author$^*$\email{[email protected]},
%Jane Author, and Dick Author \\
%Department of Statistics, University of Warwick, Coventry CV4 7AL, U.K}
% Three or more authors, with at least two different institutions,
% more than one email displayed
%\author{John Author$^{1,*}$\email{[email protected]},
%Kathy Author$^{2,**}$\email{[email protected]}, and
%Wilma Flinstone$^{3,***}$\email{[email protected]} \\
%$^{1}$Department of Statistics, University of Warwick, Coventry CV4 7AL, U.K \\
%$^{2}$Department of Biostatistics, University of North Carolina at
%Chapel Hill, Chapel Hill, North Carolina, U.S.A. \\
%$^{3}$Department of Geology, University of Bedrock, Bedrock, Kansas, U.S.A.}
% Three or more authors with at least two different institutions and only
% one email displayed
\author{Bryan S. Blette$^{1}$, Peter B. Gilbert$^{2}$, and Michael G. Hudgens$^{1,*}$\email{[email protected]} \\ $^{1}$Department of Biostatistics, University of North Carolina at Chapel Hill
\\ Chapel Hill, NC, U.S.A. \\
$^{2}$Department of Biostatistics, University of Washington and Fred Hutchinson Cancer Research Center
\\ Seattle, Washington, U.S.A.}
\usepackage{Sweave}
\begin{document}
\input{Biometrics-draft-concordance}
% This will produce the submission and review information that appears
% right after the reference section. Of course, it will be unknown when
% you submit your paper, so you can either leave this out or put in
% sample dates (these will have no effect on the fate of your paper in the
% review process!)
%\date{{\it Received October} 2007. {\it Revised February} 2008. {\it
%Accepted March} 2008.}
% These options will count the number of pages and provide volume
% and date information in the upper left hand corner of the top of the
% first page as in published papers. The \pagerange command will only
% work if you place the command \label{firstpage} near the beginning
% of the document and \label{lastpage} at the end of the document, as we
% have done in this template.
% Again, putting a volume number and date is for your own amusement and
% has no bearing on what actually happens to your paper!
%\pagerange{\pageref{firstpage}--\pageref{lastpage}}
%\volume{64}
%\pubyear{2008}
%\artmonth{December}
% The \doi command is where the DOI for your paper would be placed should it
% be published. Again, if you make one up and stick it here, it means
% nothing!
%\doi{10.1111/j.1541-0420.2005.00454.x}
% This label and the label ``lastpage'' are used by the \pagerange
% command above to give the page range for the article. You may have
% to process the document twice to get this to match up with what you
% expect. When using the referee option, this will not count the pages
% with tables and figures.
\label{firstpage}
% put the summary for your paper here
\begin{abstract}
Confounding and measurement error are common barriers to drawing causal inference. While there are broad methodologies for addressing each phenomenon individually, confounding and measurement biases frequently co-occur and there is a paucity of methods that address them simultaneously. The few existing methods that do so rely on supplemental data or strong distributional and extrapolation assumptions to correct for measurement error. In this paper, methods are derived which instead leverage the likelihood structure under classical additive measurement error to draw inference using only measured variables. Three estimators are proposed based on g-computation, inverse probability weighting, and doubly-robust estimation techniques. The estimators are shown to be consistent and asymptotically normal, and the doubly-robust estimator is shown to exhibit its namesake property. The methods perform well in finite samples under both confounding and measurement error as demonstrated by simulation studies. The proposed doubly-robust estimator is applied to study the effects of two biomarkers on HIV-1 infection using data from the HVTN 505 vaccine trial.
\end{abstract}
% Please place your key words in alphabetical order, separated
% by semicolons, with the first letter of the first word capitalized,
% and a period at the end of the list.
%
\begin{keywords}
Causal inference; Confounding; G-formula; HIV/AIDS;
Marginal structural models; Measurement error.
\end{keywords}
% As usual, the \maketitle command creates the title and author/affiliations
% display
\maketitle
% If you are using the referee option, a new page, numbered page 1, will
% start after the summary and keywords. The page numbers thus count the
% number of pages of your manuscript in the preferred submission style.
% Remember, ``Normally, regular papers exceeding 25 pages and Reader Reaction
% papers exceeding 12 pages in (the preferred style) will be returned to
% the authors without review. The page limit includes acknowledgements,
% references, and appendices, but not tables and figures. The page count does
% not include the title page and abstract. A maximum of six (6) tables or
% figures combined is often required.''
% You may now place the substance of your manuscript here. Please use
% the \section, \subsection, etc commands as described in the user guide.
% Please use \label and \ref commands to cross-reference sections, equations,
% tables, figures, etc.
%
% Please DO NOT attempt to reformat the style of equation numbering!
% For that matter, please do not attempt to redefine anything!
\section{Introduction}
\label{s:intro}
Confounding bias and measurement error are common barriers to identification, estimation, and inference of causal effects. These phenomena often occur together, but are rarely both addressed in an analysis, with researchers typically focusing on whichever is more egregious in their study or ignoring both entirely. The last few decades witnessed a proliferation of interest in and development of methods for causal inference and a parallel trend for methods accounting for measurement error, but comparatively few methods exist at the important intersection of these fields.
The measurement error literature is commonly split into (i) functional methods, which make no or limited assumptions on the distribution of mismeasured variables, and (ii) structural methods, which make explicit distributional assumptions~\citep{carroll2006}. Several causal methods based on structural approaches have been developed (\citealp{kuroki2014,edwards2015multiple,braun2017}; \citealp*{hong2017}). Likewise, three of the four most popular functional methods (regression calibration, SIMEX, and methods based on instrumental variables) have been adapted to a variety of causal problems (\citealp*{vansteelandt2009}; \citealp{cole2010,kendall2015,lockwood2015,kyle2016,wu2019}). These methods all rely on either supplemental data, such as replication, validation, or instrumental data, or on strong distributional and extrapolation assumptions to draw inference.
In contrast, the fourth main functional approach, that of score functions, leverages the likelihood structure under classical additive measurement error to draw inference without supplemental data or strong assumptions. Despite this advantage, this method has only been adapted to perform causal inference in very limited capacity. In particular, \citet*{mccaffrey2013} suggested using weighted conditional score equations to correct for measurement error in confounders and the approach was eventually implemented in \citet{shu2019}. In this paper, this methodology is expanded in multiple directions by considering exposure/treatment measurement error and defining g-formula, inverse probability weighted (IPW), and doubly-robust (DR) estimators.
The proposed methods are motivated by the HVTN 505 vaccine trial. This HIV vaccine efficacy trial stopped administering immunizations early after reaching predetermined cutoffs for efficacy futility~\citep{hammer2013}. However, subsequent analyses of trial data identified several immunologic biomarker correlates of risk among HIV vaccine recipients~\citep{janes2017,fong2018,neidich2019}. Some of these biomarkers correspond to possible target immune responses for future vaccines, but their causal effects on HIV-1 infection have not been described. The methods derived in this paper are motivated by this problem, where the biomarkers are measured with error and their effects are likely subject to confounding, but there are no supplemental data available.
This paper proceeds as follows. In Section 2 notation and the estimand are defined. In Section 3 assumptions are stated and three estimators that adjust for concurrent confounding and exposure measurement error using a conditional score approach are proposed. In Section 4 the proposed estimators are evaluated in a simulation study, and in Section 5 one of the estimators is applied to study two biomarkers collected in the HVTN 505 vaccine trial. Section 6 concludes with a discussion of the advantages and limitations of the proposed methods.
\section{Notation and Estimand}
\label{s:notation}
Suppose there are $m$ exposures/treatments of interest which may or may not be measured correctly. Let $\bm{A} = (A_{1}, A_{2}, ..., A_{m})$ be a row vector denoting the true exposures and $\bm{A}^{*} = (A^{*}_{1}, A^{*}_{2}, ..., A^{*}_{m})$ be the corresponding set of measured exposures. Suppose only the first $j$ exposures are measured with error, such that $A_{k} = A^{*}_{k}$ for $k > j$. For example, in the HVTN 505 trial, a biomarker of interest $A$ is antibody-dependent cellular phagocytosis activity. This biomarker was not observed exactly, but an imperfect phagocytic score $A^{*}$ was measured. Exposures subject to measurement error are assumed to be continuous random variables, while exposures known to be correctly measured may be continuous or discrete. Let $Y$ be the outcome of interest. Define $Y(\bm{a})$ to be the potential outcome under exposure $\bm{A} = \bm{a} = (a_{1}, a_{2}, ..., a_{m})$. Assuming $j \geq 1$, there is at least one continuous exposure and each individual has infinite potential outcomes. Let $\bm{L} = (L_{1}, L_{2}, ..., L_{p})$ represent a vector of baseline covariates measured prior to exposure. Assume that $n$ i.i.d. copies of the random variables $(\bm{L}, \bm{A}^{*}, Y)$ are observed.
The estimand of interest is the mean dose-response surface, namely $\text{E}\{ Y(\bm{a}) \}$ for $\bm{a} \in \bm{\mathcal{A}}$ where $\bm{\mathcal{A}}$ represents the $m$-dimensional space of exposure values of interest. For example, with one exposure, $\text{E}\{ Y(\bm{a}) \}$ may be a dose response curve across a closed interval of exposure values. Each of the proposed estimators described in this paper will make assumptions that explicitly or implicitly impose restrictions on the surface. For example, the proposed IPW estimator will target the parameters of a marginal structural model (MSM) given by
\begin{equation}
g(\text{E}[Y(\bm{a})]) = \gamma (1, \bm{a})^{T}
\end{equation}
where $g$ is a link function for an exponential family density, e.g., $g(\cdot) = \text{logit}(\cdot)$. The MSM parameter $\gamma = (\gamma_{0}, ..., \gamma_{m})$ is a row vector of length $m+1$ which quantifies the effects of the exposures on the outcome. Thus the IPW estimator assumes the dose-response surface is linear on the scale of the link function. The linearity assumption facilitates use of the conditional score method (CSM) described in the next section; estimation procedures for non-linear MSM specifications are considered in Web Appendix D. The g-formula and doubly-robust estimators will not make this linearity assumption, but they will make an outcome regression assumption that implicitly places restrictions on the dose-response surface.
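For instance, with a single exposure and a binary outcome, taking $g(\cdot) = \text{logit}(\cdot)$ in (1) gives the special case $\text{logit}\,\text{E}\{ Y(a) \} = \gamma_{0} + \gamma_{1} a$, so that $\exp(\gamma_{1})$ is the marginal causal odds ratio associated with a one-unit increase in the exposure.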
In general, let $F(x)$ denote $P(X \leq x)$ and $F(w | X)$ denote $P(W \leq w | X)$ for any random variables $X$ and $W$. The proposed methods in Section 3 rely on the CSM modeling assumptions described in Section 3.1 as well as a standard set of assumptions used in causal inference: (i) causal consistency, $Y = Y(\bm{a})$ when $\bm{A} = \bm{a}$; (ii) conditional exchangeability, $Y(\bm{a}) \perp \!\!\! \perp \bm{A} | \bm{L}$; and (iii) positivity, $dF(\bm{a} | \bm{l}) > 0$ for all $\bm{l}$ such that $dF(\bm{l}) > 0$. In addition, assume that the outcome and covariates are not measured with error and that there is no model mis-specification unless otherwise stated.
\begin{figure}
\centering
\fbox{\includegraphics[width=\textwidth]{DAG_eps2.png}}
\caption{An example directed acyclic graph (DAG), with variables defined in Section 2 and $\epsilon_{me_{1}}$ and $\epsilon_{me_{4}}$ corresponding to measurement error. This DAG represents a scenario with $m=4$ exposures, one each of the following: mismeasured and unconfounded ($A_{1}$), correctly measured and unconfounded ($A_{2}$), correctly measured and confounded ($A_{3}$), and mismeasured and confounded ($A_{4}$). In this case, $A_{2} = A^{*}_{2}$ and $A_{3} = A^{*}_{3}$ since they are both measured without error.}
\label{fig:one}
\end{figure}
The methods proposed in this paper are applicable in settings such as the example directed acyclic graph (DAG) in Figure 1. The methods can accommodate the four types of exposures in the DAG: unconfounded and correctly measured, unconfounded and mismeasured, confounded and correctly measured, and confounded and mismeasured. Here and throughout, exposure measurement error is assumed non-differential with respect to the outcome, e.g., in Figure~\ref{fig:one}, $\epsilon_{me_{1}}, \epsilon_{me_{4}} \perp \!\!\! \perp Y$ because the corresponding measured exposures $A^{*}_{1}$ and $A^{*}_{4}$ are colliders on all paths between the errors and the outcome. Many existing methods that adjust for both confounding and measurement biases focus on only mismeasured exposures or on a set of correctly measured exposures with one additional mismeasured exposure; the methods developed in this paper provide a more broadly usable modeling framework where all four types of exposures represented in Figure~\ref{fig:one} can be analyzed simultaneously.
\section{Methods}
\label{s:methods}
The proposed methods combine existing methods for (i) correcting exposure measurement error using CSM and (ii) adjusting for confounding using g-formula, inverse probability weighting, and doubly-robust techniques. To begin, CSM is briefly reviewed.
\subsection{Review of Conditional Score Method}
Suppose the conditional density of the outcome given exposures and covariates is in the exponential family~\citep{mccullagh1989}, i.e., $f(y | \bm{a}, \bm{l}, \Theta) = \exp [ \{y\eta - \mathcal{D}(\eta)\}/ \phi + c(y, \phi) ]$ where $\eta = \beta_{0} + \bm{a}\beta_{a} + \bm{l}\beta_{l}$, $\mathcal{D}$ and $c$ are functions, and $\Theta = (\beta_{0}, \beta^{T}_{a}, \beta^{T}_{l}, \phi)$ represents the parameters to be estimated. Assume a classical additive measurement error model $\bm{A}^{*} = \bm{A} + \epsilon_{me}$, where $\epsilon_{me}$ is multivariate normal with mean zero and covariance matrix $\Sigma_{me}$. To account for $A_{j+1}, ..., A_{m}$ being correctly measured, assume $\Sigma_{me}$ has the block diagonal form
\begin{equation*}
\Sigma_{me} =
\begin{bmatrix}
\Sigma_{e} & 0_{j,m-j} \\
0_{m-j,j} & 0_{m-j,m-j}
\end{bmatrix}
\end{equation*}
where $\Sigma_{e}$ is the measurement error covariance matrix for exposures $A_{1},...,A_{j}$ and $0_{a,b}$ denotes an $a \times b$ matrix of zeros.
CSM leverages the fact that $\bm{\Delta} = \bm{A}^{*} + Y(\Sigma_{me}\beta_{a})^{T}/\phi$ is a sufficient statistic for $\bm{A}$. Furthermore, the conditional density of the outcome given covariates $\bm{L} = \bm{l}$ and sufficient statistic $\bm{\Delta} = \bm{\delta}$ is in the exponential family with $\eta_{*} = \beta_{0} + \bm{\delta}\beta_{a} + \bm{l}\beta_{l}$, $\mathcal{D}_{*}(\eta_{*}, \phi, \beta_{a}^{*}) = \phi \log \bigg[ \int \exp \{ y\eta_{*}/\phi + c_{*}(y, \phi, \beta_{a}^{*}) \} dy \bigg]$, and $c_{*}(y, \phi, \beta_{a}^{*}) = c(y, \phi) - \frac{1}{2}(y/\phi)^{2}\beta_{a}^{*}$, where $\beta_{a}^{*} = \beta_{a}^{T}\Sigma_{me}\beta_{a}$. This implies that the score equations from the likelihood conditional on $\bm{\Delta}$ yield consistent estimators for $\beta_{a}$ that only depend on the observed data. The form of the score equations depends on the outcome model specification; the general form is
\begin{equation}
\psi(Y, \bm{L}, \bm{A}^{*}, \Theta) =
\begin{bmatrix}
\{ Y - \text{E}(Y | \bm{L}, \bm{\Delta}) \} (1, \bm{L}, \bm{\Delta})^{T} \\
\phi - \{ Y - \text{E}(Y | \bm{L}, \bm{\Delta}) \}^{2} / \{ \text{Var}(Y | \bm{L}, \bm{\Delta}) / \phi \}
\end{bmatrix}
\end{equation}
where $\text{E}(Y | \bm{L}, \bm{\Delta}) = \partial \mathcal{D}_{*} / \partial \eta_{*}$ and $\text{Var}(Y | \bm{L}, \bm{\Delta}) = \phi \partial^{2} \mathcal{D}_{*} / \partial \eta^{2}_{*}$. This procedure can be extended to handle interaction terms~\citep{dagalp2001} by letting $\eta = \beta_{0} + \bm{a}\beta_{a} + \bm{l}\beta_{l} + \bm{a}\beta_{al}\bm{l}^{T}$, where $\beta_{al}$ is an $m \times p$ matrix of interaction parameters, $\bm{\Delta} = \bm{A}^{*} + Y\{ \Sigma_{me}(\beta_{a} + \beta_{al}\bm{L}^{T}) \}^{T}/\phi$, and the appropriate elements of $\beta_{al}$ are constrained to zero such that only relevant interactions are included in the model. Then one can derive score equations of the same form as (2), but replacing $(1, \bm{L}, \bm{\Delta})^{T}$ with $(1, \bm{L}, \bm{\Delta}, \bm{L} \otimes \bm{\Delta})^{T}$ where $\bm{L} \otimes \bm{\Delta}$ is a row vector containing the product of the $k^{th}$ element of $\bm{L}$ and the $r^{th}$ element of $\bm{\Delta}$ if and only if the $(r, k)$ element of $\beta_{al}$ is not constrained to be zero.
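As a concrete illustration (a special case that follows from the expressions above, though not written out in the text), consider a Bernoulli outcome with the logistic link, so that $\phi = 1$ and $c(y, \phi) = 0$. Because $y^{2} = y$ for $y \in \{0, 1\}$, the conditional density of $Y$ given $(\bm{L}, \bm{\Delta})$ is again Bernoulli with
\begin{equation*}
\text{E}(Y | \bm{L}, \bm{\Delta}) = \frac{\exp\left( \beta_{0} + \bm{\Delta}\beta_{a} + \bm{L}\beta_{l} - \tfrac{1}{2}\beta_{a}^{T}\Sigma_{me}\beta_{a} \right)}{1 + \exp\left( \beta_{0} + \bm{\Delta}\beta_{a} + \bm{L}\beta_{l} - \tfrac{1}{2}\beta_{a}^{T}\Sigma_{me}\beta_{a} \right)}
\end{equation*}
and $\text{Var}(Y | \bm{L}, \bm{\Delta}) = \text{E}(Y | \bm{L}, \bm{\Delta})\{1 - \text{E}(Y | \bm{L}, \bm{\Delta})\}$, so the estimating function (2) depends only on observed quantities. The following R sketch (illustrative only; it is not the software used for the analyses in this paper, and all object names are hypothetical) solves the resulting equations for a single mismeasured exposure and a single covariate by minimizing the squared norm of the stacked estimating function with base R:
\begin{verbatim}
expit <- function(x) 1 / (1 + exp(-x))

## Stacked conditional score function of equation (2) for a logistic
## outcome: theta = (b0, ba, bl); phi = 1, so the dispersion row is dropped.
cs_estfun <- function(theta, y, a_star, l, sigma_me) {
  b0 <- theta[1]; ba <- theta[2]; bl <- theta[3]
  delta <- a_star + y * sigma_me * ba           # sufficient statistic Delta
  eta_star <- b0 + delta * ba + l * bl
  mu <- expit(eta_star - ba^2 * sigma_me / 2)   # E(Y | L, Delta)
  r <- y - mu
  c(sum(r), sum(r * l), sum(r * delta))         # {Y - E(Y|L,Delta)}(1, L, Delta)^T
}

## Solve psi = 0 by minimising its squared norm with base-R optim().
cs_fit <- function(y, a_star, l, sigma_me, start = c(0, 0, 0)) {
  obj <- function(theta) sum(cs_estfun(theta, y, a_star, l, sigma_me)^2)
  optim(start, obj, method = "BFGS")$par
}

## Toy data: classical additive error with known variance 0.25.
set.seed(1)
n <- 500
l <- rnorm(n); a <- 0.5 * l + rnorm(n)
y <- rbinom(n, 1, expit(-0.5 + a + 0.5 * l))
a_star <- a + rnorm(n, sd = 0.5)
theta_hat <- cs_fit(y, a_star, l, sigma_me = 0.25)
\end{verbatim}
A sandwich variance estimator for the resulting parameter estimates follows from standard M-estimation theory.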
\subsection{G-formula CSM Estimator}
\sloppy The first proposed method combines the g-formula with CSM. When there is no measurement error, the g-formula estimator is $\hat{\text{E}}\{ Y(\bm{a}) \} = n^{-1} \sum_{i=1}^{n} \hat{\text{E}}(Y_{i} | \bm{A} = \bm{a}, \bm{L}_{i})$ where the predicted mean outcomes $\hat{\text{E}}(Y_{i} | \bm{A} = \bm{a}, \bm{L}_{i})$ are typically estimated using parametric models. To accommodate exposure measurement error, the proposed g-formula CSM estimator takes the same form, but instead utilizes a CSM outcome model with relevant covariates and interaction terms. The g-formula CSM estimator can be computed by solving the estimating equation $\sum_{i=1}^{n} \psi_{GF-CSM}(Y_{i}, \bm{L}_{i}, \bm{A}_{i}^{*}, \Sigma_{me}, \Theta_{GF}) = 0$, where
\begin{equation}
\psi_{GF-CSM}(Y, \bm{L}, \bm{A}^{*}, \Sigma_{me}, \Theta_{GF}) =
\begin{bmatrix}
\{ Y - \text{E}(Y | \bm{L}, \bm{\Delta}) \} (1, \bm{L}, \bm{\Delta}, \bm{L} \otimes \bm{\Delta})^{T} \\
\phi - \{ Y - \text{E}(Y | \bm{L}, \bm{\Delta}) \}^{2} / \{ \text{Var}(Y | \bm{L}, \bm{\Delta}) / \phi \} \\
g^{-1}(\beta_{0} + \bm{a}\beta_{a} + \bm{L}\beta_{l} +
\bm{a}\beta_{al}\bm{L}^{T}) - \text{E} \{ Y(\bm{a}) \}
\end{bmatrix}
\end{equation}
and $\Theta_{GF} = (\beta_{0}, \beta^{T}_{a}, \beta^{T}_{l}, vec(\beta_{al}), \phi, \text{E} \{ Y(\bm{a}) \})$. To estimate the dose response surface in practice, one can compute $\hat{\text{E}}\{ Y(\bm{a}) \}$ for a large grid of points $\bm{a}$ in the space of interest.
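For intuition, the standardization step of the g-formula CSM estimator might be sketched as below for a single exposure, a single covariate, and an identity link; the coefficient estimates are assumed to come from an interaction-augmented conditional score fit, and all names are hypothetical.
\begin{verbatim}
## Hypothetical sketch of the g-formula standardisation step for a single
## exposure and covariate with identity link; `fit` holds CSM estimates.
gformula_curve <- function(fit, l, a_grid) {
  sapply(a_grid, function(a) {
    mu_a <- fit["beta0"] + a * fit["beta_a"] + l * fit["beta_l"] +
            a * l * fit["beta_al"]   # predicted outcome with exposure set to a
    mean(mu_a)                       # average over the empirical distribution of L
  })
}

## e.g. gformula_curve(c(beta0 = .2, beta_a = .5, beta_l = -.3, beta_al = .1),
##                     l = rnorm(500), a_grid = seq(0, 4, by = 0.2))
\end{verbatim}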
\subsection{IPW CSM Estimator}
Another common causal inference technique is to weight an estimator by the inverse probability/density of exposure(s) conditional on confounders, which effectively blocks the backdoor paths between exposures and the outcome through the confounder set. Methods that use this technique are typically referred to as IPW methods. Since the CSM estimator can be obtained by solving a set of estimating equations, it is straightforward to define an IPW extension. In particular, consider an estimator of the MSM parameters in (1) that solves $\sum_{i=1}^{n} \psi_{IPW-CSM}(Y_{i}, \bm{L}_{i}, \bm{A}^{*}_{i}, \Sigma_{me}, \Theta_{IPW}) = 0$ where
\begin{equation}
\psi_{IPW-CSM}(Y, \bm{L}, \bm{A}^{*}, \Sigma_{me}, \Theta_{IPW}) =
\begin{bmatrix}
\psi_{PS}(\bm{L}, \bm{A}^{*}, \xi, \zeta) \\
SW \{ Y - \text{E}(Y | \bm{\Delta}) \} (1, \bm{\Delta})^{T} \\
SW \left [ \phi - \{ Y - \text{E}(Y | \bm{\Delta})\}^{2} / \{ \text{Var}(Y | \bm{\Delta}) / \phi \} \right ]
\end{bmatrix},
\end{equation}
\begin{equation}
SW = \frac{dF(\bm{A}^{*}; \xi)}{dF(\bm{A}^{*} | \bm{L}; \zeta)},
\end{equation}
$\Theta_{IPW} = (\xi, \zeta, \gamma_{0}, \gamma^{T}_{a}, \phi)$, $\psi_{PS}(\bm{L}, \bm{A}^{*}, \xi, \zeta)$ is an estimating function corresponding to propensity score models with parameters $\xi$ and $\zeta$, and $\gamma_{0}$ and $\gamma_{a}$ are the MSM parameters from (1). Heuristically, weighting works by creating a pseudo-population where confounding is no longer present. For continuous exposures, this is accomplished by weighting by the inverse of the joint density of exposures conditional on confounders. The unconditional joint density in the numerator of $SW$ is used to stabilize the weights. Since weighting eliminates confounding, the estimating equations of (4) are actually simpler than (2) because covariates $\bm{L}$ do not need to be included in the outcome model. Thus, the conditional expectation and variance nested in (4) are not conditional on $\bm{L}$ as they were in (2) and (3), and the equations are indexed by the MSM parameters of interest $\gamma_{0}$ and $\gamma_{a}$ rather than the $\beta$ parameters from equations (2) and (3). The new forms of the conditional expectation and variance follow from setting $\eta_{*} = \gamma_{0} + \bm{\delta}\gamma_{a}$ in the equations in Section 3.1.
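As a rough sketch of the weighted score component of (4) (hypothetical names; logistic MSM, one mismeasured exposure, known $\phi = 1$, weights treated as given), one could write:
\begin{verbatim}
## Hypothetical sketch of the weighted conditional score in (4):
## logistic MSM, one mismeasured exposure, known phi = 1, weights sw.
ipw_csm_score <- function(gamma, y, a_star, sw, sigma_me) {
  delta <- a_star + y * sigma_me * gamma[2]          # sufficient statistic
  p     <- plogis(gamma[1] + delta * gamma[2] - 0.5 * gamma[2]^2 * sigma_me)
  colSums(sw * cbind(y - p, (y - p) * delta))        # score for (gamma0, gamma_a)
}
\end{verbatim}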
Methods for fitting MSM with multiple treatments often use weights of the form $SW = \prod_{j=1}^{m} dF(A^{*}_{j}) / dF(A^{*}_{j} | \bm{L})$, e.g., as in \citet{hernan2001}; to factorize the denominator in this way, the $m$ exposures $A_{1}, \ldots, A_{m}$ are assumed to be independent conditional on $\bm{L}$. This assumption will be made in the simulation study, but it may be dubious in various applications, such as when a treatment has a direct effect on another treatment or when treatments have an unmeasured common cause. The assumption does have testable implications; see \citet{zhang2012} for a related testing procedure. For examples of instead estimating the joint propensity of dependent binary exposures using mixed models, see \citet{tchetgen2012}, \citet{perez2014}, or \citet*{liu2016}. If a subset of exposure effects are not subject to confounding, then those exposures do not need to be included in the product defining $SW$.
When the weights $SW$ are known, $\psi_{PS}$ can be removed from $\psi_{IPW-CSM}$. In practice, the weights are usually not known and must be estimated. Models used to estimate weights will be referred to as propensity models and weight components corresponding to continuous exposures will be constructed using a ratio of normal densities estimated from linear models~\citep{hirano2004}. To illustrate this, first consider the setting where the true exposures are observed. For continuous exposure $A_{j}$, a model of the form $A_{j} = \zeta (1, \bm{L})^{T} + \epsilon_{ps}$ might be used, where $\epsilon_{ps} \sim \mathcal{N}(0, \sigma^{2}_{ps})$ and in general $\mathcal{N}(\mu, \nu)$ denotes a normal distribution with mean $\mu$ and variance $\nu$. Then based on the fitted model, the estimated conditional density $f(A_{j} | \bm{L}; \hat{\zeta})$ is used to estimate $dF(A_{j} | \bm{L})$. An intercept-only model is used similarly to estimate the weight numerators. Other methods and more flexible choices for weight models~\citep{naimi2014} can also be used for continuous exposures in practice.
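For one continuous exposure, the weight construction just described might be sketched as follows (illustrative names; intercept-only numerator model and covariate-adjusted denominator model, both fit by ordinary least squares). With multiple exposures assumed conditionally independent given $\bm{L}$, the component weights would simply be multiplied as in the factorization above.
\begin{verbatim}
## Sketch of stabilized weight estimation for one continuous exposure,
## using the ratio of estimated normal densities described above.
estimate_sw <- function(a_star, l) {
  num_fit <- lm(a_star ~ 1)        # marginal (numerator) model
  den_fit <- lm(a_star ~ l)        # propensity (denominator) model
  num_dens <- dnorm(a_star, mean = fitted(num_fit), sd = summary(num_fit)$sigma)
  den_dens <- dnorm(a_star, mean = fitted(den_fit), sd = summary(den_fit)$sigma)
  num_dens / den_dens              # stabilized weights SW
}
\end{verbatim}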
Now consider the exposure measurement error setting, where only $\bm{A}^{*}$ is observed. For the IPW CSM estimator, exposure measurement error has an opportunity to introduce additional bias via the estimation of weights. Fortunately, for weight estimation the exposures are now the propensity model(s) outcomes, and measurement error is usually much less problematic for a model outcome than for model covariates~\citep{carroll2006}. The proposed IPW CSM estimator is shown to be consistent when the propensity models are fit using the mismeasured exposures $\bm{A}^{*}$ in Web Appendix A. Finally, weight components corresponding to discrete exposures (which are assumed to be always correctly measured) can be estimated using common practice approaches, such as logistic and multinomial regression.
\subsection{Doubly-Robust CSM Estimator}
Both g-formula and IPW methods rely on model specifications that may not be correct in practice. The g-formula provides consistent estimation of potential outcome means only when the outcome model conditional on exposures and confounders is correctly specified. Likewise, IPW estimators are consistent only when the propensity score models are correctly specified. In contrast, doubly-robust (DR) estimators entail specifying both propensity and outcome models, but remain consistent if one is misspecified and the other is not (\citealp*{robins1994}; \citealp{lunceford2004,bang2005}).
A DR estimator for the additive measurement error setting with an identity link can be derived as an extension of the doubly-robust standardization procedure described in \citet{vansteelandt2011} and \citet{kang2007}. Namely, one can use the standard g-formula estimator, but specify a weighted outcome regression where the weights are the inverse probability weights given in equation (5). This DR estimator is a solution to the estimating equation $\sum_{i=1}^{n} \psi_{DR-CSM}(Y_{i}, \bm{L}_{i}, \bm{A}_{i}^{*}, \Sigma_{me}, \Theta_{DR}) = 0$, where $\Theta_{DR} = (\zeta, \Theta_{GF})$ and
\begin{equation}
\psi_{DR-CSM}(Y, \bm{L}, \bm{A}^{*}, \Sigma_{me}, \Theta_{DR}) =
\begin{bmatrix}
\psi_{PS}(\bm{L}, \bm{A}^{*}, \zeta) \\
SW\{ Y - \text{E}(Y | \bm{L}, \bm{\Delta}) \} (1, \bm{L}, \bm{\Delta}, \bm{L} \otimes \bm{\Delta})^{T} \\
SW[\phi - \{ Y - \text{E}(Y | \bm{L}, \bm{\Delta}) \}^{2} / \{ \text{Var}(Y | \bm{L}, \bm{\Delta}) / \phi \}] \\
\beta_{0} + \bm{a}\beta_{a} + \bm{L}\beta_{l} +
\bm{a}\beta_{al}\bm{L}^{T} - \text{E} \{ Y(\bm{a}) \}
\end{bmatrix}
\end{equation}
This estimator will be referred to as the DR CSM estimator. If the stabilized weights $SW$ are unknown, they are estimated as described for the IPW CSM estimator; if the weights are known, $\psi_{PS}$ can be removed from $\psi_{DR-CSM}$. Like the g-formula CSM estimator, the DR CSM estimator can be evaluated over a grid of values $\bm{a}$ of interest to estimate a dose-response surface.
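Putting the pieces together, a rough outline of the DR CSM computation (hypothetical helper functions; single exposure, identity link) is sketched below, where \texttt{weighted\_csm\_fit} denotes a hypothetical weighted analogue of the conditional score fit sketched earlier.
\begin{verbatim}
## Rough outline of the DR CSM computation using hypothetical helpers:
## estimate weights, fit an SW-weighted CSM outcome model, then standardise.
dr_csm_curve <- function(y, a_star, l, sigma_me, a_grid) {
  sw  <- estimate_sw(a_star, l)                        # inverse probability weights
  fit <- weighted_csm_fit(y, a_star, l, sigma_me, sw)  # SW-weighted conditional score fit
  gformula_curve(fit, l, a_grid)                       # g-formula standardisation
}
\end{verbatim}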
\subsection{Handling Unknown Measurement Error Variance}
Although the proposed methods require no supplemental data and no distributional assumption on the true exposures $\bm{A}$, a priori knowledge of the measurement error covariance matrix is required. Sometimes this matrix will be known from properties of the measurement devices (e.g., some bioassays, certain widely studied survey instruments) or will be available from a prior study. Here, some guidelines are provided for analyses where this matrix is unknown.
Firstly, there are many types of studies for which the covariance matrix should be diagonal (i.e., the measurement errors should be uncorrelated). For example, biological assays run on different samples and analyzed by separate machines/researchers should have uncorrelated measurement errors, and different assays run on the same sample may often have uncorrelated or very weakly correlated measurement errors. For other types of data such as survey responses, analysts should be more cautious, noting that response bias, recall bias, and other forms of measurement error in survey instruments may be correlated within individuals.
Some studies may have available supplemental data in the form of replicates of the potentially mismeasured variables. These replicates can be used to estimate $\Sigma_{me}$ as described in \citet{carroll2006}. In particular, suppose for individual $i$ there are $k_{i}$ replicates of the mismeasured exposures, $\bm{A}^{*}_{i1}, ..., \bm{A}^{*}_{ik_{i}}$ with mean $\bm{A}^{*}_{i.}$. Then an estimator for the measurement error covariance is given by:
\begin{equation*}
\hat{\Sigma}_{me} = \frac{\sum_{i=1}^{n} \sum_{j=1}^{k_{i}} (\bm{A}^{*}_{ij} - \bm{A}^{*}_{i.})^{T}(\bm{A}^{*}_{ij} - \bm{A}^{*}_{i.})}{\sum_{i=1}^{n}(k_{i} - 1)}
\end{equation*}
When replicates are available, some of the existing methods described in the Introduction are applicable and should give similar results to the proposed methods.
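A direct translation of the replicate-based estimator displayed above (illustrative names; \texttt{a\_star\_list} is a list of $n$ matrices, each $k_{i} \times m$ with rows holding replicate measurements) is:
\begin{verbatim}
## Sketch of the replicate-based estimator of Sigma_me displayed above.
estimate_sigma_me <- function(a_star_list) {
  m <- ncol(a_star_list[[1]])
  num <- matrix(0, m, m)
  den <- 0
  for (a_i in a_star_list) {
    centered <- sweep(a_i, 2, colMeans(a_i))   # subtract within-person means
    num <- num + t(centered) %*% centered      # sum of outer products
    den <- den + nrow(a_i) - 1
  }
  num / den
}
\end{verbatim}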
When no replicates are available and there is no prior knowledge of $\Sigma_{me}$, the proposed methods can still be used in conjunction with a sensitivity analysis. In some settings, an upper bound on the exposure measurement error variance may be assumed; for example, the variance of a correctly measured exposure (if estimated in a prior study) can act as a reasonable upper bound on measurement error variance for the corresponding mismeasured exposure. Once upper bounds are determined, inference may be repeated using the proposed methods for a range of $\Sigma_{me}$ specifications to assess robustness of point estimates and confidence intervals to the degree of assumed measurement error; this procedure is more straightforward when the matrix is small and diagonal, and becomes difficult to interpret as the number of non-zero parameters grows.
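Operationally, such a sensitivity analysis is a loop over candidate values of $\Sigma_{me}$. A minimal sketch for a single mismeasured exposure, reusing the hypothetical \texttt{dr\_csm\_curve} helper above and an illustrative upper bound of 0.3 on the measurement error variance, is:
\begin{verbatim}
## Sketch of a sensitivity analysis over assumed measurement error variances
## for a single mismeasured exposure (0.3 is an illustrative upper bound).
## y, a_star, l: observed data vectors.
sigma_grid <- seq(0, 0.3, by = 0.05)
curves <- lapply(sigma_grid, function(s2)
  dr_csm_curve(y, a_star, l, sigma_me = s2, a_grid = seq(0, 4, by = 0.5)))
\end{verbatim}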
\subsection{Large-Sample Properties and Variance Estimation}
In Web Appendix A, each of the three proposed estimators is shown to be consistent and asymptotically normal by demonstrating that its corresponding estimating function has expectation 0~\citep{stefanski2002}. In addition, the DR CSM estimator is shown to be consistent when only one of the propensity and outcome models is correctly specified. Finally, the proposed estimators are shown to be generalizations of existing popular estimators used in the no measurement error setting.
Since each proposed estimator is an M-estimator, consistent estimators of their asymptotic variances are given by the empirical sandwich variance technique. Estimating equations corresponding to the estimation of weights (and $\hat{\Sigma}_{me}$ if estimated) should be included in the estimating equation vector for each method when computing the sandwich variance estimator. Wald $100(1-\alpha)\%$ confidence intervals for the parameters of interest can then be constructed in the usual fashion. The geex R package~\citep{saul2017} streamlines variance estimation for M-estimators and is used in the simulation and application sections.
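For intuition, the empirical sandwich computation can be sketched in a few lines without relying on any particular package (names illustrative; \texttt{psi\_fun} returns the stacked estimating function for one observation, and a central-difference Jacobian stands in for analytic derivatives):
\begin{verbatim}
## Bare-bones empirical sandwich variance sketch for an M-estimator.
## psi_fun(theta, data_i) returns the estimating function for one observation;
## returns the estimated covariance matrix of theta_hat.
sandwich_var <- function(psi_fun, theta_hat, data_list, eps = 1e-5) {
  p <- length(theta_hat)
  bread <- matrix(0, p, p)   # sum of Jacobians of psi
  meat  <- matrix(0, p, p)   # sum of psi psi^T
  for (d in data_list) {
    meat <- meat + tcrossprod(psi_fun(theta_hat, d))
    for (j in seq_len(p)) {
      hi <- lo <- theta_hat
      hi[j] <- hi[j] + eps; lo[j] <- lo[j] - eps
      bread[, j] <- bread[, j] + (psi_fun(hi, d) - psi_fun(lo, d)) / (2 * eps)
    }
  }
  b_inv <- solve(bread)
  b_inv %*% meat %*% t(b_inv)   # A^{-1} B A^{-T}
}
\end{verbatim}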
\section{Simulation Study}
The performance of the proposed methods was evaluated in three simulation studies. The first simulation compared the proposed g-formula CSM approach to standard methods in a scenario where confounding and additive exposure measurement error were present. A total of 2000 data sets of $n = 800$ individuals were simulated, each with the following variables: a confounder $L_{1}$ simulated as a Bernoulli random variable with expectation $0.5$, i.e., $L_{1} \sim \text{Bern}(0.5)$, a second confounder $L_{2} \sim \text{Bern}(0.2)$, an exposure $A \sim \mathcal{N}(2 + 0.3L_{1} - 0.5L_{2}, 0.6)$, and an outcome $Y \sim \text{Bern}(\text{logit}^{-1}(-2 + 0.7A - 0.6L_{1} + 0.4L_{2} - 0.4AL_{1} - 0.2AL_{2}))$. The exposure was subject to additive measurement error, simulated as $A^{*} \sim \mathcal{N}(A, 0.25)$.
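For reference, one data set from this data-generating process can be simulated as follows (a direct transcription of the description above; recall that $\mathcal{N}(\mu, \nu)$ is parameterised by the variance, so \texttt{rnorm} takes $\sqrt{\nu}$):
\begin{verbatim}
## One simulated data set from the first simulation study's design.
simulate_study1 <- function(n = 800) {
  l1 <- rbinom(n, 1, 0.5)
  l2 <- rbinom(n, 1, 0.2)
  a  <- rnorm(n, 2 + 0.3 * l1 - 0.5 * l2, sqrt(0.6))
  y  <- rbinom(n, 1, plogis(-2 + 0.7 * a - 0.6 * l1 + 0.4 * l2 -
                            0.4 * a * l1 - 0.2 * a * l2))
  a_star <- rnorm(n, a, sqrt(0.25))     # additive measurement error
  data.frame(y, a_star, l1, l2)
}
\end{verbatim}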
Dose-response curves were estimated using four methods: (i) logistic regression specifying $Y$ as a function of $A^{*}, L_{1}, L_{2}$ and the interactions between $A^{*}$ and each covariate; (ii) CSM with the same outcome specification as the logistic regression and correctly specified measurement error variance; (iii) the g-formula with a correctly specified outcome model using the mismeasured exposures; and (iv) the proposed g-formula CSM estimator with a correctly specified outcome regression. For methods (i) and (ii), dose-response curves were estimated as $\hat{\text{E}}\{ Y(a) \} = \text{logit}^{-1}(\hat{\beta} a)$ where $\hat{\beta}$ is the estimated coefficient for the exposure from each model, in essence assuming that adjusting for $L_{1}$ and $L_{2}$ in the models will appropriately control for confounding. For the point on the true dose response curve $\text{E}\{ Y(a = 3) \}$, the average bias, average estimated standard error, empirical standard error, and percentage of estimated confidence intervals covering the true value were estimated across the 2000 simulation runs.
Biases of the estimated dose-response curves are displayed in Figure~\ref{fig:two}. Only the proposed g-formula CSM estimator was approximately unbiased. In contrast, the other three methods performed poorly with high bias across nearly the entire range of exposure values evaluated. This is expected because these methods do not adjust for both confounding and measurement error. Likewise, only the proposed g-formula CSM estimator was unbiased with corresponding confidence intervals achieving nominal coverage for the potential outcome mean at $a = 3$ on the dose-response curve, as summarized in Table~\ref{tab:one}.
\begin{figure}
\centering
\includegraphics[width=6in]{paper1-fig2-grey.pdf}
\caption{Estimated dose-response curve bias for each of the four methods compared in the first simulation study. Bias refers to the average bias across 2,000 simulated data sets for each method evaluated at each point on the horizontal axis $a = (0, 0.2, 0.4, ..., 4)$. The gray horizontal line corresponds to zero bias.}
\label{fig:two}
\end{figure}
The second simulation study compared the proposed IPW CSM approach to standard methods. Let $\text{Exp}(\lambda)$ refer to an exponential random variable with expectation $\lambda$. A total of 2000 data sets of $n = 800$ individuals were simulated, each with the following variables: a confounder $L \sim \text{Exp}(3)$, first true exposure $A_{1} \sim \mathcal{N}(4 + 0.9L, 1.1)$, second true exposure $A_{2} \sim \mathcal{N}(2.5, 0.7)$, third true exposure $A_{3} \sim \mathcal{N}(1.4 + 0.5L, 0.6)$, and an outcome $Y \sim \text{Bern}(\exp(-1.7 + 0.4A_{1} - 0.4A_{2} - 0.6A_{3} + 0.7L - 0.6A_{1}L - 0.9A_{3}L - \log(\lambda / (\lambda - 0.7 + 0.6A_{1} + 0.9A_{3}))) / (1 + \exp(-1.7 + 0.4A_{1} - 0.4A_{2} - 0.6A_{3})))$ such that $\text{logit}[\text{E}\{ Y(a_{1}, a_{2}, a_{3})\} ] = \gamma_{0} + \gamma_{1}a_{1} + \gamma_{2}a_{2} + \gamma_{3}a_{3} = -1.7 + 0.4a_{1} - 0.4a_{2} - 0.6a_{3}$. The third exposure $A_{3}$ was allowed to be correctly measured, while $A_{1}$ and $A_{2}$ were subject to additive measurement error simulated as $A_{1}^{*} \sim \mathcal{N}(A_{1}, 0.36)$ and $A_{2}^{*} \sim \mathcal{N}(A_{2}, 0.25)$ (i.e., the measurement error covariance matrix was diagonal).
\begin{table}[]
\centering
%\footnotesize
\caption{Results from the first simulation study. Bias: 100 times the average bias across simulated data sets for each method; ASE: 100 times the average of estimated standard errors; ESE: 100 times the standard deviation of parameter estimates; Cov: Empirical coverage of 95$\%$ confidence intervals for each method, rounded to the nearest integer.}
\begin{tabular}{lrrrr}
\hline
Estimator & Bias & ASE & ESE & Cov \\
\hline
%\hhline{|=|=|=|=|=|=|}
Regression & -51.5 & 18.3 & 18.3 & 19\% \\
CSM & -21.8 & 28.4 & 28.6 & 87\% \\
G-formula & -3.9 & 2.6 & 2.6 & 67\% \\
G-formula CSM & 0.5 & 4.0 & 4.1 & 95\% \\
\hline
\end{tabular}
\label{tab:one}
\end{table}
The MSM parameters of interest $\gamma_{1}$, $\gamma_{2}$, and $\gamma_{3}$ were estimated from the observed data $\{ Y, A_{1}^{*}, A_{2}^{*}, A_{3}, L \}$ using four methods: (i) logistic regression specifying $Y$ as a function of $A_{1}^{*}, A_{2}^{*}, A_{3}, L$ and the interactions between $A_{1}^{*}$ and $L$ as well as $A_{3}$ and $L$; (ii) the CSM method with the same outcome specification as the logistic regression and correctly specified measurement error variances; (iii) an IPW estimator where a correctly specified propensity model was fit for both $A_{1}^{*}$ and $A_{3}$ and the product of the resulting inverse probability weights was used to weight a logistic regression of $Y$ on the three observed exposures; and (iv) the proposed IPW CSM estimator using correctly specified measurement error variances and weights from correctly specified propensity models. Methods (i) and (ii) estimate $\gamma_{1}$, $\gamma_{2}$, and $\gamma_{3}$ by the main effect coefficient estimates of the respective exposures, assuming that adjusting for $L$ in the models will appropriately control for confounding.
\begin{table}[]
\caption{Results from the second simulation study. Bias, ASE, ESE, and Cov defined as in Table 1.}
\begin{center}
\begin{tabular}{clrrrr}
\hline
Parameter & Estimator & Bias & ASE & ESE & Cov \\
\hline
$\gamma_{1}$ & Regression & 5.8 & 13.6 & 13.3 & 93\% \\
$\gamma_{1}$ & CSM & 22.0 & 19.2 & 18.7 & 80\% \\
$\gamma_{1}$ & IPW & -9.7 & 9.1 & 9.0 & 80\% \\
$\gamma_{1}$ & IPW CSM & 0.3 & 12.5 & 12.3 & 95\% \\[4pt]
$\gamma_{2}$ & Regression & 11.7 & 13.3 & 13.0 & 84\% \\
$\gamma_{2}$ & CSM & -3.9 & 21.1 & 20.5 & 95\% \\
$\gamma_{2}$ & IPW & 13.8 & 13.3 & 12.9 & 80\% \\
$\gamma_{2}$ & IPW CSM & -0.3 & 20.7 & 20.1 & 95\% \\[4pt]
$\gamma_{3}$ & Regression & 10.4 & 27.9 & 27.4 & 92\% \\
$\gamma_{3}$ & CSM & 9.2 & 28.9 & 27.8 & 92\% \\
$\gamma_{3}$ & IPW & 0.3 & 19.8 & 19.4 & 94\% \\
$\gamma_{3}$ & IPW CSM & -0.4 & 20.1 & 19.7 & 95\% \\
\hline
\end{tabular}
\end{center}
\label{tab:two}
\end{table}
The average bias, average estimated standard error, empirical standard error, and percentage of confidence intervals covering the true values across the 2000 simulation runs are presented in Table~\ref{tab:two}. For $\gamma_{2}$, the parameter corresponding to the mismeasured but unconfounded exposure, methods (ii) and (iv) achieved near-zero average bias and approximately nominal coverage, while methods (i) and (iii) were biased and had lower coverage. For $\gamma_{3}$, the parameter corresponding to the confounded but correctly measured exposure, methods (iii) and (iv) achieved low bias and about nominal coverage, while methods (i) and (ii) performed poorly. For $\gamma_{1}$, the parameter corresponding to the mismeasured and confounded exposure, only the method proposed in this paper (iv) had approximately 0 bias and nominal coverage.
The third simulation study compared the proposed g-formula and IPW CSM approaches to the proposed DR CSM estimator under various model specifications. In particular, 2000 data sets of $n = 2000$ individuals were simulated with the following variables: a confounder $L_{1} \sim \text{Bern}(0.5)$, a second confounder $L_{2} \sim \mathcal{N}(1, 0.5)$, an exposure $A \sim \mathcal{N}(2 + 0.9L_{1} - 0.6L_{2}, 1.1)$, and a continuous outcome $Y \sim \mathcal{N}(1.5 + 0.7A + 0.9L_{1} - 0.7AL_{1} - 0.6L_{2} + 0.4AL_{2})$ such that the assumptions of all three methods hold given correct model specifications. The methods were compared on their performance in estimating the coefficient of $a$ in the marginal structural model implied by these distributional assumptions. The exposure $A$ was subject to additive measurement error simulated as $A^{*} \sim \mathcal{N}(A, 0.16)$. Then the three approaches were compared under scenarios where only the propensity model was correctly specified, only the outcome regression was correctly specified, and where both were correctly specified. The propensity model was misspecified by not including the confounder $L_{1}$ and the outcome regression was misspecified by leaving out $L_{1}$ and the interaction between $A$ and $L_{1}$.
\begin{table}[]
\centering
%\footnotesize
\caption{Results from the third simulation study. Bias, ASE, ESE, and Cov defined as in Table 1. PS indicates the propensity score model is correctly specified; OR indicates the outcome regression is correctly specified.}
\begin{tabular}{lcrrrr}
\hline
Estimator & Correct Specifications & Bias & ASE & ESE & Cov \\
\hline
%\hhline{|=|=|=|=|=|=|}
G-formula CSM & PS & -6.6 & 1.9 & 1.9 & 8\% \\
IPW CSM & PS & 0.0 & 3.2 & 3.1 & 95\% \\
DR CSM & PS & 0.0 & 2.6 & 2.6 & 94\% \\[4pt]
G-formula CSM & OR & 0.0 & 1.7 & 1.7 & 94\% \\
IPW CSM & OR & -6.3 & 2.0 & 2.0 & 12\% \\
DR CSM & OR & 0.1 & 1.7 & 1.7 & 95\% \\[4pt]
G-formula CSM & PS and OR & 0.0 & 1.7 & 1.7 & 94\% \\
IPW CSM & PS and OR & 0.0 & 3.2 & 3.1 & 95\% \\
DR CSM & PS and OR & 0.1 & 1.9 & 1.9 & 94\% \\
\hline
\end{tabular}
\label{tab:three}
\end{table}
The simulation results are presented in Table~\ref{tab:three} and match the theoretical results described in Section 3.4. Namely, when only the propensity score model was specified correctly, the IPW estimator performed well, but the g-formula estimator was subject to substantial bias and undercoverage. Likewise when only the outcome model was specified correctly, the g-formula estimator performed well, but the IPW estimator was biased and had lower than nominal coverage. However, the doubly-robust estimator achieved low bias and approximately nominal coverage when only one of the two models was misspecified, exhibiting its namesake double-robustness property. While all simulations in this section generated data under positivity and additive measurement error, simulations of the proposed methods under violations of the positivity and additive measurement error assumptions are presented in Web Table 2 and Web Table 3.
\section{Application}
To illustrate the proposed methods, the DR CSM estimator was applied to data from the HVTN 505 vaccine trial. This trial studied a candidate HIV vaccine with a primary endpoint of diagnosis of HIV-1 infection after the Month 7 study visit and through the Month 24 study visit. Immunologic markers were measured from blood samples at the Month 7 study visit. As discussed in the Introduction, the candidate HIV vaccine was not found to be effective, but follow-up research described several interesting immunologic marker correlates of risk. In particular, \citet{neidich2019} found that antibody-dependent cellular phagocytosis and antigen-specific recruitment of Fc$\gamma$ receptors of several HIV-1 specific Env proteins were associated with reduced HIV-1 risk.
In this section, the primary analysis of \citet{neidich2019} is reassessed by (i) adjusting for measured potential confounders beyond simple inclusion in an outcome regression and (ii) allowing for additive measurement error of the exposures. Attention is focused on the log transforms of markers measuring antibody-dependent cellular phagocytosis and recruitment of Fc$\gamma$R\RNum{2}a of the H131-Con S gp140 protein, which will be referred to as ADCP and R\RNum{2}. The primary analysis of \citet{neidich2019} focused on the association of each of these exposures individually with HIV-1 acquisition among vaccine recipients. For each exposure, they fit a logistic regression model and reported odds ratios for the main effect of exposure adjusting for age, BMI, race, and behavior risk, as well as CD4 and CD8 polyfunctionality scores (CD4-P and CD8-P).
In this section, the data is analyzed making a Normal distributional assumption for the outcome such that the proposed doubly-robust CSM estimator can be used. Notably, the immunologic markers were not measured in all participants, but rather were measured in all vaccine recipients who acquired the HIV-1 infection primary endpoint and in a stratified random sample of vaccine recipients who completed 24 months of follow-up without diagnosis of HIV-1 infection. To account for this two-phase sampling design, the weights in the doubly-robust estimator are multiplied by inverse probability of sampling weights, following the procedure described in \citet{wang2009}. This version of the proposed estimator is described in more detail and evaluated in a simulation study in Web Appendix B. ADCP and R\RNum{2} were modeled separately to match the univariate-style analysis performed in \citet{neidich2019}; accordingly, separate propensity models were fit for each exposure. For each propensity model specification, main effects for the baseline covariates age, race, BMI, behavior risk, CD4-P, and CD8-P were included. For the outcome model specification, main effects for the exposure of interest, age, race, BMI, behavior risk, CD4-P, and CD8-P were included, along with interactions of the exposure with age and the exposure with behavior risk. Based on the theoretical and empirical results in Sections 3 and 4, the estimator should be consistent if either specification is correct. Finally, each exposure was assumed to follow a classical additive measurement error model where a sensitivity analysis was performed by varying the measurement error variances from 0 to 1/6, 1/4, and 1/3 of the variance for each exposure variable when restricted to vaccine recipients with an immune response. Since ADCP and R\RNum{2} are log-transformations of strictly positive random variables, this setup is equivalent to assuming that their corresponding non-log transformed variables follow multiplicative measurement error models. Dose response curves were estimated for each exposure and each measurement error level across a range of 0.5 to 3 for ADCP and 7 to 10 for R\RNum{2}; exposure levels above 3 and 10 respectively were associated with no or almost no risk.
\begin{figure}[h!]
\centering
\includegraphics[width=5.6in]{fig3-linear2.pdf}
\caption{HVTN 505 results. Each panel shows the dose-response curve for ADCP or R\RNum{2} estimated by the DR CSM estimator, as well as their respective shaded 95\% pointwise confidence regions. From top to bottom, each panel reflects increasing user-specified variances of measurement error $\sigma^{2}_{me}$ corresponding to proportions of 0, 1/6, 1/4, and 1/3 of $\sigma^{2}$, the estimated total exposure variances among vaccinees with immune responses. Lower confidence limits that were estimated to be negative were truncated at 0.}
\label{fig:three}
\end{figure}
The analysis results are plotted in Figure~\ref{fig:three}. For each exposure, lower values corresponded to higher HIV risk among the vaccine recipients, in line with prior results and biological theory. Moving across panels from top to bottom in Figure~\ref{fig:three}, the assumed measurement error variance increases and the confidence regions become wider, as expected. While higher levels of each exposure appear to be protective, there is substantial uncertainty when allowing for higher magnitudes of measurement error and because of the low sample size. Thus, studies measuring these biomarkers in more participants are needed to draw stronger conclusions.
\section{Discussion}
In this paper, estimators are proposed which adjust for both confounding and measurement error and which are consistent without any supplemental data and without specifying strong distributional or extrapolation assumptions. The proposed methods are semi-parametric in that while parametric assumptions for the errors were made, no assumptions were made about the distributions of true exposures. Accompanying this paper, the R package causalCSME has been developed (see Supporting Information); it includes code examples for the conditional score method, which have not previously been available even in the standard (non-causal-inference) setting.
While the proposed methods were shown to have favorable theoretical and empirical properties, they are not without limitation. In particular, the methods require that the measurement error covariance is known or has been previously estimated. As demonstrated in Section 5, if the covariance is unknown then sensitivity analysis can be straightforward and informative if the covariance matrix is small or restricted such that it has few parameters. In addition, modeling assumptions on $\bm{A}^{*}$ and $Y$ that may be implausible in some settings were necessary. However, both additive measurement error models and the generalized linear model framework are commonly used and realistic in many settings. The DR CSM estimator provides some protection from mis-specification of these models, requiring only one but not necessarily both to be correctly specified.
There are several possible extensions of the methods described in this paper. Methods that accommodate different measurement error models and more flexible outcome model specifications would be useful. Generalizations of the conditional score approach such as in \citet{tsiatis2004} could be adapted to the causal setting to weaken some of the parametric assumptions made in this paper. A version of this idea is proposed by \citet{liu2017}, which considers adding confounders as additional covariates in the outcome regressions of \citet{tsiatis2004}. In the setting with no measurement error, adding confounders to outcome regressions has been shown to yield consistent estimates of causal parameters under certain assumptions, but those assumptions are more restrictive than those needed for g-methods such as IPW and the g-formula to yield consistent estimates. Assuming similar results hold in the measurement error setting, expanding the \citet{liu2017} approach using IPW, g-formula, and DR techniques could be a useful extension of this work.
Another possible future direction of research would be to combine g-estimation and CSM to estimate parameters of a structural nested model where at least one exposure is measured with error. In addition, while this paper expands on the conditional score estimation procedure described in \citet{stefanski1987}, the corrected score estimation procedure~\citep{nakamura1990} is a related method which could be extended to a causal inference setting in similar ways. This approach also adjusts for measurement error without supplemental data and has advantages and disadvantages compared to CSM in various settings. Finally, this paper focuses on point-exposures, but conditional-score based approaches addressing measurement error for longitudinal and survival data have been described and could be extended to define new causal inference methods in those settings.
% The \backmatter command formats the subsequent headings so that they
% are in the journal style. Please keep this command in your document
% in this position, right after the final section of the main part of
% the paper and right before the Acknowledgements, Supplementary Materials,
% and References sections.
\backmatter
% This section is optional. Here is where you will want to cite
% grants, people who helped with the paper, etc. But keep it short!
\section*{Acknowledgements}
The authors thank Kayla Kilpatrick, Shaina Mitchell, Sam Rosin, Bonnie Shook-Sa, and Jaffer Zaidi for helpful comments and suggestions, as well as the investigators, staff, and participants of the HVTN 505 trial. This work was supported by NIH grant R37 AI054165. \vspace*{-8pt}
\section*{Data Availability Statement}
The data that support the findings of this study are available on Atlas at \href{https://atlas.scharp.org/cpas/project/HVTN\%20Public\%20Data/HVTN\%20505/begin.view}{https://atlas.scharp.org/cpas/project/HVTN\%20Public\%20Data/HVTN\%20505/begin.view}.\vspace*{-8pt}
% Here, we create the bibliographic entries manually, following the
% journal style. If you use this method or use natbib, PLEASE PAY
% CAREFUL ATTENTION TO THE BIBLIOGRAPHIC STYLE IN A RECENT ISSUE OF
% THE JOURNAL AND FOLLOW IT! Failure to follow stylistic conventions
% just lengthens the time spend copyediting your paper and hence its
% position in the publication queue should it be accepted.
% We greatly prefer that you incorporate the references for your
% article into the body of the article as we have done here
% (you can use natbib or not as you choose) than use BiBTeX,
% so that your article is self-contained in one file.
% If you do use BiBTeX, please use the .bst file that comes with
% the distribution.
\bibliographystyle{biom}
\bibliography{refs}
%\begin{thebibliography}{}
%\end{thebibliography}
% If your paper refers to supplementary web material, then you MUST
% include this section!! See Instructions for Authors at the journal
% website http://www.biometrics.tibs.org
\section*{Supporting Information}
Web Appendices, Tables, and Figures referenced in Sections 3 and 6 are available with this paper at the Biometrics website on Wiley Online Library. All R code used in the simulations and application is publicly available at \href{https://github.com/bblette1/causalCSME}{https://github.com/bblette1/causalCSME}.\vspace*{-8pt}
\label{lastpage}
\end{document}
| {
"alphanum_fraction": 0.7454638735,
"avg_line_length": 107.6887661142,
"ext": "tex",
"hexsha": "72cd0affa0a101e6a722e8fcbbd07c87fa89f603",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "028b52f947126a0e835c5233a176fc84f56b2600",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "bblette1/causalCSME",
"max_forks_repo_path": "Biometrics paper draft/Biometrics-draft.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "028b52f947126a0e835c5233a176fc84f56b2600",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "bblette1/causalCSME",
"max_issues_repo_path": "Biometrics paper draft/Biometrics-draft.tex",
"max_line_length": 2261,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "028b52f947126a0e835c5233a176fc84f56b2600",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "bblette1/causalCSME",
"max_stars_repo_path": "Biometrics paper draft/Biometrics-draft.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 15388,
"size": 58475
} |
\section*{Introduction}
What is the question you are going to answer?
\begin{itemize}
\item Why is it an exciting question (basically the problem statement)
\item Why is this question important to our field
\item How is my work going to help to answer it?
\end{itemize}
"I have a major question. How did I get to this question? Why is that important? That’s essentially the introduction. And then I have the second part which is this idea that now I’m going to lead you through how I answered it, and that’s our methods and results. So, I think the story part comes in putting your work into context of the field, of other people’s work, of why it’s important, and it’ll make your results much more compelling" \cite{mensh2017ten}.
Readers might look at the title, they might skim your abstract and they might look at your figures, so we try to make our figures tell the story as much as possible.
\paragraph{How to structure a paragraph}
"For the whole paper, the introduction sets the context, the results present the content and the discussion brings home the conclusion" \cite{mensh2017ten}..
"In each paragraph, the first sentence defines the context, the body contains the new idea and the final sentence offers a conclusion" \cite{mensh2017ten}..
\paragraph{From 'Ten Simple Rules for structuring papers'}
"The introduction highlights the gap that exists in current knowledge or methods and why it is important. This is usually done by a set of progressively more specific paragraphs that culminate in a clear exposition of what is lacking in the literature, followed by a paragraph summarizing what the paper does to fill that gap.
As an example of the progression of gaps, a first paragraph may explain why understanding cell differentiation is an important topic and that the field has not yet solved what triggers it (a field gap). A second paragraph may explain what is unknown about the differentiation of a specific cell type, such as astrocytes (a subfield gap). A third may provide clues that a particular gene might drive astrocytic differentiation and then state that this hypothesis is untested (the gap within the subfield that you will fill). The gap statement sets the reader’s expectation for what the paper will deliver.
The structure of each introduction paragraph (except the last) serves the goal of developing the gap. Each paragraph first orients the reader to the topic (a context sentence or two) and then explains the “knowns” in the relevant literature (content) before landing on the critical “unknown” (conclusion) that makes the paper matter at the relevant scale. Along the path, there are often clues given about the mystery behind the gaps; these clues lead to the untested hypothesis or undeveloped method of the paper and give the reader hope that the mystery is solvable. The introduction should not contain a broad literature review beyond the motivation of the paper. This gap-focused structure makes it easy for experienced readers to evaluate the potential importance of a paper—they only need to assess the importance of the claimed gap.
The last paragraph of the introduction is special: it compactly summarizes the results, which fill the gap you just established. It differs from the abstract in the following ways: it does not need to present the context (which has just been given), it is somewhat more specific about the results, and it only briefly previews the conclusion of the paper, if at all." \cite{mensh2017ten}.
\paragraph{ End of introduction:}
Here you say what problem you are tackling: this should be made more clear:
\begin{enumerate}
\item What is the missing gap and
\item Why is it exciting and important.
\item What can be said/done / tested experimentally if we have modelled this, i.e. what consequences does it have or what hypothesis can be derived.
\end{enumerate}
Such a section is particularly important for a journal.
| {
"alphanum_fraction": 0.7887145749,
"avg_line_length": 85.9130434783,
"ext": "tex",
"hexsha": "d16e868a2311ac48e689377cb5a85c0541b75205",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e193afe6f9dac581e6b4fd3590b038da6f2879b8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "oesst/Paper-Template",
"max_forks_repo_path": "01_introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e193afe6f9dac581e6b4fd3590b038da6f2879b8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "oesst/Paper-Template",
"max_issues_repo_path": "01_introduction.tex",
"max_line_length": 839,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e193afe6f9dac581e6b4fd3590b038da6f2879b8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "oesst/Paper-Template",
"max_stars_repo_path": "01_introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 831,
"size": 3952
} |
\documentclass[gr-notes.tex]{subfiles}
\begin{document}
\setcounter{chapter}{4}
\chapter{Preface to Curvature}
\setcounter{section}{7}
\section{Exercises}
\textbf{1}
(a)
Repeat the argument leading to Equation 5.1, but this time assume that only a fraction $\epsilon < 1$ of the mass's kinetic energy is converted into a photon.
If only a fraction $\epsilon$ of the energy is converted into a photon, then it will start with an energy of $\epsilon (m + mgh + \order{\boldsymbol{v}^4})$, but once it reaches the top it should have an energy of $\epsilon m$, as it loses the component due to gravitational potential energy. Thus
%
\begin{displaymath}
\frac{E'}{E} =
\frac{\epsilon m}{\epsilon (m + mgh + \order{\boldsymbol{v}^4)}} =
\frac{m}{m + mgh + \order{\boldsymbol{v}^4}} =
1 - gh + \order{\boldsymbol{v}^4}
\end{displaymath}
(b)
Assume Equation 5.1 does not hold. Devise a perpetual motion device.
If we assume that the photon does not return to an energy $m$ once it reaches the top, but instead has an energy $m' > m$, then we could create the perpetual motion device shown in Figure \ref{fig:ch5-problem-1b}. A black box consumes the photon with energy $m'$, and splits it into a new object of mass $m$, and a photon of energy $m' - m$. The object repeats the action of the original falling mass, creating an infinite loop.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{img/ch5_problem_1b}
\caption{Problem 1: Perpetual motion device.}
\label{fig:ch5-problem-1b}
\end{figure}
\textbf{2}
Explain why a uniform gravitational field would not be able to create tides on Earth.
Tides depend on there being a gravitational field gradient. If the curvature closer to the source of the field (e.g. the Moon) is greater than it is further away, then the closer side will move towards the source more than the further side, thus creating tides. In the absence of such a gradient, there would be no difference in curvature between the two sides, and thus they would not stretch relative to each other.
\textbf{7}
Calculate the components of $\tensor{\Lambda}{^{\alpha'}_\beta}$ and $\tensor{\Lambda}{^\mu_{\nu'}}$ for transformations $(x, y) \leftrightarrow (r, \theta)$.
~
\begin{align*}
\mqty( \Delta r \\ \Delta \theta ) &=
\mqty( \pdv*{r}{x} & \pdv*{r}{y} \\
\pdv*{\theta}{x} & \pdv*{\theta}{y} )
\mqty( \Delta x \\ \Delta y )
&
\mqty( \Delta x \\ \Delta y ) &=
\mqty( \pdv*{x}{r} & \pdv*{x}{\theta} \\
\pdv*{y}{r} & \pdv*{y}{\theta} )
\mqty( \Delta r \\ \Delta \theta )
\\ &=
\mqty( x / \sqrt{x^2+y^2} & y / \sqrt{x^2 + y^2} \\
-y / (x^2 + y^2) & x / (x^2 + y^2) )
\mqty( \Delta x \\ \Delta y )
&&=
\mqty( \cos\theta & -r \sin\theta \\
\sin\theta & r \cos\theta )
\mqty( \Delta r \\ \Delta \theta )
\\ &=
\mqty( \cos\theta & \sin\theta \\
-(1/r) \sin\theta & (1/r) \cos\theta )
\mqty( \Delta x \\ \Delta y )
&&=
\mqty( x / \sqrt{x^2 + y^2} & -y \\
y / \sqrt{x^2 + y^2} & x)
\mqty( \Delta r \\ \Delta \theta )
\end{align*}
%
\begin{align*}
\tensor{\Lambda}{^r_x} &= x / \sqrt{x^2 + y^2} = \cos\theta
&
\tensor{\Lambda}{^x_r} &= \cos\theta = x / \sqrt{x^2 + y^2}
\\
\tensor{\Lambda}{^r_y} &= y / \sqrt{x^2 + y^2} = \sin\theta
&
\tensor{\Lambda}{^y_r} &= \sin\theta = y / \sqrt{x^2 + y^2}
\\
\tensor{\Lambda}{^\theta_x} &= -y / (x^2 + y^2) = -(1/r) \sin\theta
&
\tensor{\Lambda}{^x_\theta} &= -r \sin\theta = -y
\\
\tensor{\Lambda}{^\theta_y} &= x / (x^2 + y^2) = (1/r) \cos\theta
&
\tensor{\Lambda}{^y_\theta} &= r \cos\theta = x
\end{align*}
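As a quick consistency check, the two transformation matrices should be inverses of one another, and indeed
\begin{displaymath}
\mqty( \cos\theta & \sin\theta \\
       -(1/r) \sin\theta & (1/r) \cos\theta )
\mqty( \cos\theta & -r \sin\theta \\
       \sin\theta & r \cos\theta ) =
\mqty( 1 & 0 \\ 0 & 1 ),
\end{displaymath}
since the diagonal entries are $\cos^2\theta + \sin^2\theta = 1$ and the off-diagonal entries cancel.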
\textbf{8}
(a)
$f \equiv x^2 + y^2 + 2xy$, $\vec{V} \underset{(x,y)}{\to} (x^2 + 3y, y^2 + 3x)$, $\vec{W} \underset{(r,\theta)}{\to} (1, 1)$. Express $f = f(r, \theta)$, and find the components of $\vec{V}$ and $\vec{W}$ in a polar basis, as functions of $r$ and $\theta$.
\begin{align*}
f &= x^2 + y^2 + 2xy = (x + y)^2
\\ &=
(r \cos\theta + r \sin\theta)^2 =
r^2 \sin^2\theta + r^2 \cos^2\theta + 2 r^2 \sin\theta \cos\theta
\\ &=
r^2 (1 + \sin(2 \theta))
\\
%%%%%
\vec{V} &\underset{(x,y)}{\to}
\mqty( r^2 \cos^2\theta + 3 r \sin\theta \\
r^2 \sin^2\theta + 3 r \cos\theta )
\\
\vec{V} &\underset{(r,\theta)}{\to}
\mqty( \cos\theta & \sin\theta \\
-(1/r) \sin\theta & (1/r) \cos\theta )
\mqty( r^2 \cos^2\theta + 3 r \sin\theta \\
r^2 \sin^2\theta + 3 r \cos\theta )
\\ &\underset{(r,\theta)}{\to}
\mqty(
r^2 \cos^2\theta + 6 r \sin\theta \cos\theta + r^2 \sin^3\theta
\\
-r \cos^2\theta \sin\theta - 3 \sin^2\theta +
r \sin^2\theta \cos\theta + 3 \cos^2\theta
)
\\ &\underset{(r,\theta)}{\to}
\mqty(
r^2 (\sin^3\theta + \cos^3\theta) + 6 r \sin\theta \cos\theta
\\
r \sin\theta \cos\theta (\sin\theta - \cos\theta) +
3 (\cos^2\theta - \sin^2\theta)
)
\\ &\underset{(r,\theta)}{\to}
\mqty(
r^2 (\sin^3\theta + \cos^3\theta) + 3 r \sin(2\theta)
\\
(r/2) \sin(2\theta) (\sin\theta - \cos\theta) + 3 \cos(2\theta)
)
\\
%%%%%
\vec{W} &\underset{(r,\theta)}{\to}
\mqty( \cos\theta & \sin\theta \\
-(1/r) \sin\theta & (1/r) \cos\theta )
\mqty( 1 \\ 1 )
\\ &\underset{(r,\theta)}{\to}
\mqty(
\cos\theta + \sin\theta
\\
(1/r) (\cos\theta - \sin\theta)
)
\end{align*}
(b)
Express the components of $\tilde{\dd}f$ in $(x,y)$ and obtain them in $(r,\theta)$ by:
(i) using direct calculation in $(r, \theta)$:
\begin{displaymath}
\tilde{\dd}f \underset{(r,\theta)}{\to}
\qty( \pdv*{f}{r}, \pdv*{f}{\theta} ) =
\qty( 2 r (1 + \sin(2 \theta)), 2 r^2 \cos(2 \theta) )
\end{displaymath}
(ii) transforming the components in $(x, y)$:
\begin{displaymath}
\tilde{\dd}f \underset{(x,y)}{\to}
\qty( \pdv*{f}{x}, \pdv*{f}{y} ) =
\qty( 2 (x+y), 2 (x+y) ) =
\qty( 2 r (\cos\theta + \sin\theta), 2 r (\cos\theta + \sin\theta))
\end{displaymath}
\begin{align*}
\mqty( (\tilde{\dd}f)_r & (\tilde{\dd}f)_\theta ) &=
\mqty( 1 & 1 )
\mqty( \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta )
\qty[ 2 r (\cos\theta + \sin\theta) ]
\\ &=
\mqty( 2 r (\cos^2\theta + \sin^2\theta + 2 \sin\theta\cos\theta) &
2 r^2 (\cos^2\theta - \sin^2\theta) )
\\ &=
\mqty( 2 r (1 + \sin(2 \theta)) & 2 r^2 \cos(2 \theta) )
\end{align*}
(c)
Now find the $(r,\theta)$ components of the one-forms $\tilde{V}$ and $\tilde{W}$ associated with the vectors $\vec{V}$ and $\vec{W}$ by
(i)
using the metric tensor in $(r,\theta)$:
\begin{align*}
V_r &=
g_{r\alpha} V^\alpha =
g_{rr} V^r + g_{r\theta} V^\theta
\\ &=
r^2 (\sin^3\theta + \cos^3\theta) + 3 r \sin(2\theta)
\\
V_\theta &=
g_{\theta r} V^r + g_{\theta\theta} V^\theta =
(1/2) r^3 \sin(2\theta) (\sin\theta - \cos\theta) +
3 r^2 \cos(2\theta)
\\
%%%%%
W_r &=
g_{r\alpha} W^\alpha =
g_{rr} W^r + g_{r\theta} W^\theta
\\ &=
1 (\cos\theta + \sin\theta) +
0 \qty[ (1/r) (\cos\theta - \sin\theta) ]
\\ &=
\cos\theta + \sin\theta
\\
%%%%%
W_\theta &=
g_{\theta r} W^r + g_{\theta\theta} W^\theta
\\ &=
0 (\cos\theta + \sin\theta) +
r^2 \qty[ (1/r) (\cos\theta - \sin\theta) ]
\\ &=
r (\cos\theta - \sin\theta)
\end{align*}
(ii)
using the metric tensor in $(x,y)$ and then doing a coordinate transformation:
\begin{align*}
V_x &= V^x; \quad V_y = V^y
\\
V_r &=
\tensor{\Lambda}{^\alpha_r} V_\alpha =
\tensor{\Lambda}{^x_r} V_x + \tensor{\Lambda}{^y_r} V_y
\\ &=
\cos\theta V_x + \sin\theta V_y
\\ &=
r^2 \cos^3\theta + (3/2) r \sin(2\theta) +
r^2 \sin^3\theta + (3/2) r \sin(2\theta)
\\ &=
r^2 (\cos^3\theta + \sin^3\theta) + 3 r \sin(2\theta)
\\
V_\theta &=
\tensor{\Lambda}{^\alpha_\theta} V_\alpha =
\tensor{\Lambda}{^x_\theta} V_x + \tensor{\Lambda}{^y_\theta} V_y
\\ &=
(-r \sin\theta) V_x + (r \cos\theta) V_y
\\ &=
-r^3 \cos^2\theta \sin\theta - 3 r^2 \sin^2\theta +
r^3 \sin^2\theta \cos\theta + 3 r^2 \cos^2\theta
\\ &=
r^3 \sin\theta \cos\theta (\sin\theta - \cos\theta) +
3 r^2 (\cos^2\theta - \sin^2\theta)
\\ &=
(1/2) r^3 \sin(2\theta) (\sin\theta - \cos\theta) +
3 r^2 \cos(2\theta)
\\
%%%%%
W_x &= W^x = W_y = W^y = 1
\\
W_r &=
\tensor{\Lambda}{^\alpha_r} W_\alpha =
\tensor{\Lambda}{^x_r} W_x + \tensor{\Lambda}{^y_r} W_y
\\ &=
\cos\theta + \sin\theta
\\
W_\theta &=
\tensor{\Lambda}{^\alpha_\theta} W_\alpha =
\tensor{\Lambda}{^x_\theta} W_x + \tensor{\Lambda}{^y_\theta} W_y
\\ &=
-r \sin\theta + r \cos\theta
\\ &=
r (\cos\theta - \sin\theta)
\end{align*}
\textbf{11}
Consider $V \underset{(x,y)}{\to} (x^2 + 3y, y^2 + 3x)$.
(a)
Find $\tensor{V}{^\alpha_{,\beta}}$ in Cartesian coordinates.
%
\begin{displaymath}
\tensor{V}{^x_{,x}} = 2 x; \quad
\tensor{V}{^y_{,y}} = 2 y; \quad
\tensor{V}{^x_{,y}} = \tensor{V}{^y_{,x}} = 3.
\end{displaymath}
(b)
\begin{align*}
\tensor{V}{^{\mu'}_{;\nu'}} &=
\tensor{\Lambda}{^{\mu'}_\alpha}
\tensor{\Lambda}{^\beta_{\nu'}}
\tensor{V}{^\alpha_{,\beta}}
\\
\tensor{V}{^r_{;r}} &=
\tensor{\Lambda}{^r_x} \tensor{\Lambda}{^x_r} \tensor{V}{^x_{,x}} +
\tensor{\Lambda}{^r_y} \tensor{\Lambda}{^y_r} \tensor{V}{^y_{,y}} +
\tensor{\Lambda}{^r_x} \tensor{\Lambda}{^y_r} \tensor{V}{^x_{,y}} +
\tensor{\Lambda}{^r_y} \tensor{\Lambda}{^x_r} \tensor{V}{^y_{,x}}
\\ &=
(\cos^2\theta) (2r \cos\theta) +
(\sin^2\theta) (2r \sin\theta) +
(\sin\theta \cos\theta) (3) +
(\sin\theta \cos\theta) (3)
\\ &=
2r (\cos^3\theta + \sin^3\theta) +
3 \sin(2\theta)
\\
\tensor{V}{^\theta_{;\theta}} &=
\tensor{\Lambda}{^\theta_x} \tensor{\Lambda}{^x_\theta} \tensor{V}{^x_{,x}} +
\tensor{\Lambda}{^\theta_y} \tensor{\Lambda}{^y_\theta} \tensor{V}{^y_{,y}} +
\tensor{\Lambda}{^\theta_x} \tensor{\Lambda}{^y_\theta} \tensor{V}{^x_{,y}} +
\tensor{\Lambda}{^\theta_y} \tensor{\Lambda}{^x_\theta} \tensor{V}{^y_{,x}}
\\ &=
(\sin^2\theta) (2 r \cos\theta) + (\cos^2\theta) (2 r \sin\theta) +
(-\sin\theta \cos\theta) (3) + (-\sin\theta \cos\theta) (3)
\\ &=
\sin(2\theta) [ r (\sin\theta + \cos\theta) - 3 ]
\\
\tensor{V}{^r_{;\theta}} &=
\tensor{\Lambda}{^r_x} \tensor{\Lambda}{^x_\theta} \tensor{V}{^x_{,x}} +
\tensor{\Lambda}{^r_y} \tensor{\Lambda}{^y_\theta} \tensor{V}{^y_{,y}} +
\tensor{\Lambda}{^r_x} \tensor{\Lambda}{^y_\theta} \tensor{V}{^x_{,y}} +
\tensor{\Lambda}{^r_y} \tensor{\Lambda}{^x_\theta} \tensor{V}{^y_{,x}}
\\ &=
(-r \sin\theta \cos\theta) (2 r \cos\theta) +
(r \sin\theta \cos\theta) (2 r \sin\theta) +
(r \cos^2\theta) (3) + (-r \sin^2\theta) (3)
\\ &=
r^2 \sin(2\theta) (\sin\theta - \cos\theta) +
3 r \cos(2 \theta)
\\
\tensor{V}{^\theta_{;r}} &=
\tensor{\Lambda}{^\theta_x} \tensor{\Lambda}{^x_r} \tensor{V}{^x_{,x}} +
\tensor{\Lambda}{^\theta_y} \tensor{\Lambda}{^y_r} \tensor{V}{^y_{,y}} +
\tensor{\Lambda}{^\theta_x} \tensor{\Lambda}{^y_r} \tensor{V}{^x_{,y}} +
\tensor{\Lambda}{^\theta_y} \tensor{\Lambda}{^x_r} \tensor{V}{^y_{,x}}
\\ &=
(-(1/r) \sin\theta \cos\theta) (2 r \cos\theta) +
( (1/r) \sin\theta \cos\theta) (2 r \sin\theta) +
(-(1/r) \sin^2\theta) (3) +
( (1/r) \cos^2\theta) (3)
\\ &=
\sin(2\theta) (\sin\theta - \cos\theta) +
\frac{3}{r} \cos(2 \theta)
\end{align*}
(c) compute $\tensor{V}{^{\mu'}_{;\nu'}}$ directly in polars using the Christoffel symbols.
Recall that we have $\tensor{\Gamma}{^\mu_{rr}} = \tensor{\Gamma}{^r_{r\theta}} = \tensor{\Gamma}{^\theta_{\theta\theta}} = 0$, $\tensor{\Gamma}{^\theta_{r\theta}} = 1/r$, and $\tensor{\Gamma}{^r_{\theta\theta}} = -r$.
\begin{align*}
\tensor{V}{^{\mu'}_{;\nu'}} &=
\tensor{V}{^{\mu'}_{,\nu'}} +
\tensor{V}{^{\alpha'}} \tensor{\Gamma}{^{\mu'}_{\alpha'\nu'}}
\\
\tensor{V}{^r_{;r}} &=
\tensor{V}{^r_{,r}} +
\tensor{V}{^{\alpha'}} \tensor{\Gamma}{^r_{\alpha'r}}
\\
\tensor{V}{^r_{,r}} &=
\pdv*{V^r}{r} =
2 r (\sin^3\theta + \cos^3\theta) + 3 \sin(2\theta)
\\
\tensor{V}{^\alpha} \tensor{\Gamma}{^r_{\alpha r}} &=
V^r \tensor{\Gamma}{^r_{rr}} +
V^\theta \tensor{\Gamma}{^r_{\theta r}} =
0
\\
\tensor{V}{^r_{;r}} &=
\tensor{V}{^r_{,r}} =
2 r (\sin^3\theta + \cos^3\theta) + 3 \sin(2\theta)
\\
\tensor{V}{^\theta_{;\theta}} &=
\tensor{V}{^\theta_{,\theta}} +
\tensor{V}{^{\alpha'}} \tensor{\Gamma}{^\theta_{\alpha'\theta}}
\\
\tensor{V}{^\theta_{,\theta}} &=
\pdv*{V^\theta}{\theta} =
(r/2) \sin(2\theta) (\sin\theta + \cos\theta) +
r \cos(2\theta) (\sin\theta - \cos\theta) -
6 \sin(2\theta)
\\
\tensor{V}{^{\alpha'}} \tensor{\Gamma}{^\theta_{\alpha'\theta}} &=
\tensor{V}{^r} \tensor{\Gamma}{^\theta_{r\theta}} +
\tensor{V}{^\theta} \tensor{\Gamma}{^\theta_{\theta\theta}}
\\ &=
[r^2 (\sin^3\theta + \cos^3\theta) + 3r \sin(2\theta)] (1/r)
\\ &=
r (\sin^3\theta + \cos^3\theta) + 3 \sin(2\theta)
\\
\tensor{V}{^\theta_{;\theta}} &=
\sin(2\theta) [r (\sin\theta + \cos\theta) - 3]
\\
\tensor{V}{^r_{;\theta}} &=
\tensor{V}{^r_{,\theta}} +
V^r \tensor{\Gamma}{^r_{r\theta}} +
V^\theta \tensor{\Gamma}{^r_{\theta\theta}}
\\ &=
\tensor{V}{^r_{,\theta}} +
V^\theta \tensor{\Gamma}{^r_{\theta\theta}} =
\pdv*{V^r}{\theta} - r V^\theta
\\ &=
6 r \cos(2\theta) +
(3/2) r^2 \sin(2\theta) (\sin\theta-\cos\theta) -
((1/2) r^2 \sin(2\theta) (\sin\theta - \cos\theta) + 3 r \cos(2\theta))
\\ &=
r^2 \sin(2\theta) (\sin\theta - \cos\theta) + 3 r \cos(2\theta)
\\
\tensor{V}{^\theta_{;r}} &=
\tensor{V}{^\theta_{,r}} +
V^r \tensor{\Gamma}{^\theta_{rr}} +
V^\theta \tensor{\Gamma}{^\theta_{\theta r}} =
\tensor{V}{^\theta_{,r}} +
\frac{1}{r}
V^\theta
\\ &=
(1/2) \sin(2\theta) (\sin\theta - \cos\theta) +
(1/2) \sin(2\theta) (\sin\theta - \cos\theta) +
(3/r) \cos(2\theta)
\\ &=
\sin(2\theta) (\sin\theta - \cos\theta) +
(3/r) \cos(2\theta)
\end{align*}
(d)
Calculate the divergence using the results from part (a)
%
\begin{displaymath}
\tensor{V}{^\alpha_{,\alpha}} =
\tensor{V}{^x_{,x}} + \tensor{V}{^y_{,y}} =
2 (x + y) =
2 r (\sin\theta + \cos\theta)
\end{displaymath}
(e)
Calculate the divergence using the results from either part (b) or (c).
%
\begin{align*}
\tensor{V}{^{\mu'}_{;\mu'}} &=
\tensor{V}{^{r}_{;r}} +
\tensor{V}{^{\theta}_{;\theta}}
\\ &=
2 r (\sin^3\theta + \cos^3\theta) +
3 \sin(2\theta) +
\sin(2\theta) [r (\sin\theta + \cos\theta) - 3]
\\ &=
2 r (\sin\theta + \cos\theta)
\end{align*}
(f)
Compute $\tensor{V}{^{\mu'}_{;\mu'}}$ using Equation 5.56.
\begin{displaymath}
\tensor{V}{^{\mu'}_{;\mu'}} =
\frac{1}{r} \pdv{r} (r V^r) + \pdv{\theta} (V^\theta) =
2 r (\sin\theta + \cos\theta)
\end{displaymath}
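Spelling out the two terms,
\begin{align*}
\frac{1}{r} \pdv{r} (r V^r) &=
3 r (\sin^3\theta + \cos^3\theta) + 6 \sin(2\theta)
\\
\pdv{\theta} (V^\theta) &=
r \cos(2\theta) (\sin\theta - \cos\theta) +
(r/2) \sin(2\theta) (\sin\theta + \cos\theta) -
6 \sin(2\theta),
\end{align*}
and their sum reduces to $2 r (\sin\theta + \cos\theta)$, in agreement with parts (d) and (e).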
\textbf{12}
%
\begin{displaymath}
\tilde{p} \underset{(x,y)}{\to} (x^2 + 3y, y^2 + 3x).
\end{displaymath}
(a)
Find the components $p_{\alpha,\beta}$ in Cartesian coordinates.
Since $p_{\alpha,\beta} = \pdv*{p_\alpha}{x^\beta}$, it's simply $p_{x,x} = 2x$, $p_{y,y} = 2y$, and $p_{x,y} = p_{y,x} = 3$.
(b)
Find the components $p_{\mu';\nu'}$ in polar coordinates by using the transformation $\tensor{\Lambda}{^\alpha_{\mu'}} \tensor{\Lambda}{^\beta_{\nu'}} p_{\alpha,\beta}$.
\begin{align*}
p_{r;r} &=
\qty(\tensor{\Lambda}{^x_r})^2 p_{x,x} +
\qty(\tensor{\Lambda}{^y_r})^2 p_{y,y} +
2 \tensor{\Lambda}{^x_r} \tensor{\Lambda}{^y_r} p_{x,y}
\\ &=
(\cos^2\theta) (2 r \cos\theta) +
(\sin^2\theta) (2 r \sin\theta) +
2 (\sin\theta \cos\theta) (3)
\\ &=
2 r (\sin^3\theta + \cos^3\theta) + 3 \sin(2\theta)
\\
p_{\theta;\theta} &=
\qty(\tensor{\Lambda}{^x_\theta})^2 p_{x,x} +
\qty(\tensor{\Lambda}{^y_\theta})^2 p_{y,y} +
2 \tensor{\Lambda}{^x_\theta} \tensor{\Lambda}{^y_\theta} p_{x,y}
\\ &=
(-r \sin\theta)^2 (2 r \cos\theta) +
(r \cos\theta)^2 (2 r \sin\theta) +
2 (3 (-r \sin\theta) (r \cos\theta))
\\ &=
r^2 \sin(2\theta) (r (\sin\theta + \cos\theta) - 3)
\\
p_{r;\theta} &=
\tensor{\Lambda}{^x_r} \tensor{\Lambda}{^x_\theta} p_{x,x} +
\tensor{\Lambda}{^y_r} \tensor{\Lambda}{^y_\theta} p_{y,y} +
\tensor{\Lambda}{^x_r} \tensor{\Lambda}{^y_\theta} p_{x,y} +
\tensor{\Lambda}{^y_r} \tensor{\Lambda}{^x_\theta} p_{y,x}
\\ &=
(-r \sin\theta \cos\theta) (2 r \cos\theta) +
(r \sin\theta \cos\theta) (2 r \sin\theta) +
3 (r \cos^2\theta - r \sin^2\theta)
\\ &=
r^2 \sin(2\theta) (\sin\theta - \cos\theta) + 3 r \cos(2\theta),
\end{align*}
and by the symmetry of $p_{\alpha,\beta}$ in Cartesian coordinates, $p_{\theta;r} = p_{r;\theta}$.
(c)
Now find $p_{\mu';\nu'}$ using the Christoffel symbols.
\begin{align*}
p_{r;r} &=
p_{r,r} -
p_r \tensor{\Gamma}{^r_{rr}} -
p_\theta \tensor{\Gamma}{^\theta_{rr}} =
p_{r,r} = \pdv*{p_r}{r}
\\ &=
\pdv*{r} \qty[ r^2 (\cos^3\theta + \sin^3\theta) + 3 r \sin(2\theta) ] =
2 r (\sin^3\theta + \cos^3\theta) + 3 \sin(2\theta)
\\
p_{\theta;\theta} &=
p_{\theta,\theta} -
p_r \tensor{\Gamma}{^r_{\theta\theta}} -
p_\theta \tensor{\Gamma}{^\theta_{\theta\theta}} =
p_{\theta,\theta} + r p_r
\\ &=
\pdv*{\theta} \qty[
(1/2) r^3 \sin(2\theta) (\sin\theta - \cos\theta) +
3 r^2 \cos(2\theta)
] +
r \qty[
r^2 (\cos^3\theta + \sin^3\theta) + 3 r \sin(2\theta)
]
\\ &=
r^2 \sin(2\theta) \qty[ r (\sin\theta + \cos\theta) - 3 ]
\\
p_{r;\theta} &=
p_{r,\theta} -
p_r \tensor{\Gamma}{^r_{r\theta}} -
p_\theta \tensor{\Gamma}{^\theta_{r\theta}} =
\pdv*{p_r}{\theta} - (1/r) p_\theta
\\ &=
r^2 \sin(2\theta) (\sin\theta - \cos\theta) + 3 r \cos(2\theta)
\\
p_{\theta;r} &=
p_{\theta,r} -
p_r \tensor{\Gamma}{^r_{\theta r}} -
p_\theta \tensor{\Gamma}{^\theta_{\theta r}} =
\pdv*{p_\theta}{r} - (1/r) p_\theta
\\ &=
r^2 \sin(2\theta) (\sin\theta - \cos\theta) + 3 r \cos(2\theta)
\end{align*}
\textbf{13}
Show in polars that $g_{\mu'\alpha'} \tensor{V}{^{\alpha'}_{;\nu'}} = p_{\mu';\nu'}$.
\begin{align*}
g_{r\alpha'} \tensor{V}{^{\alpha'}_{;r}} &=
g_{rr} \tensor{V}{^{r}_{;r}} + g_{r\theta} \tensor{V}{^{\theta}_{;r}}
\\ &=
1 \tensor{V}{^r_{;r}} =
p_{r;r}
\\
g_{\theta\alpha'} \tensor{V}{^{\alpha'}_{;\theta}} &=
g_{\theta r} \tensor{V}{^r_{;\theta}} +
g_{\theta\theta} \tensor{V}{^\theta_{;\theta}}
\\ &=
r^2 \tensor{V}{^\theta_{;\theta}} =
p_{\theta;\theta}
\\
g_{r \alpha'} \tensor{V}{^{\alpha'}_{;\theta}} &=
g_{rr} \tensor{V}{^r_{;\theta}} +
g_{r\theta} \tensor{V}{^\theta_{;\theta}}
\\ &=
1 \tensor{V}{^r_{;\theta}} =
p_{r;\theta}
\\
g_{\theta\alpha'} \tensor{V}{^{\alpha'}_{;r}} &=
g_{\theta r} \tensor{V}{^r_{;r}} +
g_{\theta\theta} \tensor{V}{^\theta_{;r}}
\\ &=
r^2 \tensor{V}{^\theta_{;r}} =
p_{\theta;r}
\end{align*}
\textbf{14}
Compute $\grad_\beta A^{\mu\nu}$ for the tensor $\boldsymbol{A}$ with components:
%
\begin{align*}
A^{rr} &= r^2, & A^{r\theta} &= r \sin\theta,
\\
A^{\theta\theta} &= \tan\theta, & A^{\theta r} &= r \cos\theta
\end{align*}
%
\begin{align*}
\tensor{A}{^{rr}_{,r}} &= 2r &
\tensor{A}{^{rr}_{,\theta}} &= 0
\\
\tensor{A}{^{\theta\theta}_{,r}} &= 0 &
\tensor{A}{^{\theta\theta}_{,\theta}} &= \sec^2\theta
\\
\tensor{A}{^{r\theta}_{,r}} &= \sin\theta &
\tensor{A}{^{r\theta}_{,\theta}} &= r \cos\theta
\\
\tensor{A}{^{\theta r}_{,r}} &= \cos\theta &
\tensor{A}{^{\theta r}_{,\theta}} &= -r \sin\theta
\end{align*}
%
\begin{displaymath}
\grad_\beta A^{\mu \nu} =
\tensor{A}{^{\mu \nu}_{,\beta}} +
A^{\alpha \nu} \tensor{\Gamma}{^\mu_{\alpha \beta}} +
A^{\mu \alpha} \tensor{\Gamma}{^\nu_{\alpha \beta}}
\end{displaymath}
%
\begin{align*}
\grad_r A^{rr} &=
\tensor{A}{^{rr}_{,r}} +
A^{\alpha r} \tensor{\Gamma}{^r_{\alpha r}} +
A^{r\alpha} \tensor{\Gamma}{^r_{\alpha r}}
\\ &=
\tensor{A}{^{rr}_{,r}} +
A^{r r} \tensor{\Gamma}{^r_{r r}} +
A^{\theta r} \tensor{\Gamma}{^r_{\theta r}} +
A^{r r} \tensor{\Gamma}{^r_{r r}} +
A^{r \theta} \tensor{\Gamma}{^r_{\theta r}}
\\ &=
\tensor{A}{^{rr}_{,r}} =
2 r
\\
\grad_\theta A^{r r} &=
\tensor{A}{^{r r}_{,\theta}} +
A^{\alpha r} \tensor{\Gamma}{^r_{\alpha \theta}} +
A^{r \alpha} \tensor{\Gamma}{^r_{\alpha \theta}}
\\ &=
\tensor{A}{^{r r}_{,\theta}} +
A^{r r} \tensor{\Gamma}{^r_{r \theta}} +
A^{\theta r} \tensor{\Gamma}{^r_{\theta \theta}} +
A^{r r} \tensor{\Gamma}{^r_{r \theta}} +
A^{r \theta} \tensor{\Gamma}{^r_{\theta \theta}}
\\ &=
(A^{\theta r} + A^{r \theta}) \tensor{\Gamma}{^r_{\theta \theta}} =
-r^2 (\sin\theta + \cos\theta)
\\
\grad_r A^{\theta \theta} &=
\tensor{A}{^{\theta \theta}_{,r}} +
A^{\alpha \theta} \tensor{\Gamma}{^\theta_{\alpha r}} +
A^{\theta \alpha} \tensor{\Gamma}{^\theta_{\alpha r}}
\\ &=
\tensor{A}{^{\theta \theta}_{,r}} +
A^{r \theta} \tensor{\Gamma}{^\theta_{r r}} +
A^{\theta \theta} \tensor{\Gamma}{^\theta_{\theta r}} +
A^{\theta r} \tensor{\Gamma}{^\theta_{r r}} +
A^{\theta \theta} \tensor{\Gamma}{^\theta_{\theta r}}
\\ &=
2 A^{\theta \theta} \tensor{\Gamma}{^\theta_{\theta r}} =
(2/r) \tan\theta
\\
\grad_\theta A^{\theta \theta} &=
\tensor{A}{^{\theta \theta}_{,\theta}} +
A^{\alpha \theta} \tensor{\Gamma}{^\theta_{\alpha \theta}} +
A^{\theta \alpha} \tensor{\Gamma}{^\theta_{\alpha \theta}}
\\ &=
\tensor{A}{^{\theta \theta}_{,\theta}} +
A^{r \theta} \tensor{\Gamma}{^\theta_{r \theta}} +
A^{\theta \theta} \tensor{\Gamma}{^\theta_{\theta \theta}} +
A^{\theta r} \tensor{\Gamma}{^\theta_{r \theta}} +
A^{\theta \theta} \tensor{\Gamma}{^\theta_{\theta \theta}}
\\ &=
\tensor{A}{^{\theta \theta}_{,\theta}} +
(A^{r \theta} + A^{\theta r}) \tensor{\Gamma}{^\theta_{r \theta}} =
\sin\theta + \cos\theta + \sec^2\theta
\\
\grad_r A^{r \theta} &=
\tensor{A}{^{r \theta}_{,r}} +
A^{\alpha \theta} \tensor{\Gamma}{^r_{\alpha r}} +
A^{r \alpha} \tensor{\Gamma}{^\theta_{\alpha r}}
\\ &=
\tensor{A}{^{r \theta}_{,r}} +
A^{r \theta} \tensor{\Gamma}{^r_{r r}} +
A^{\theta \theta} \tensor{\Gamma}{^r_{\theta r}} +
A^{r r} \tensor{\Gamma}{^\theta_{r r}} +
A^{r \theta} \tensor{\Gamma}{^\theta_{\theta r}}
\\ &=
\tensor{A}{^{r \theta}_{,r}} +
A^{r \theta} \tensor{\Gamma}{^\theta_{\theta r}} =
2 \sin\theta
\\
\grad_\theta A^{r \theta} &=
\tensor{A}{^{r \theta}_{,\theta}} +
A^{\alpha \theta} \tensor{\Gamma}{^r_{\alpha \theta}} +
A^{r \alpha} \tensor{\Gamma}{^\theta_{\alpha \theta}}
\\ &=
\tensor{A}{^{r \theta}_{,\theta}} +
A^{r \theta} \tensor{\Gamma}{^r_{r \theta}} +
A^{\theta \theta} \tensor{\Gamma}{^r_{\theta \theta}} +
A^{r r} \tensor{\Gamma}{^\theta_{r \theta}} +
A^{r \theta} \tensor{\Gamma}{^\theta_{\theta \theta}}
\\ &=
\tensor{A}{^{r \theta}_{,\theta}} +
A^{\theta \theta} \tensor{\Gamma}{^r_{\theta \theta}} +
A^{r r} \tensor{\Gamma}{^\theta_{r \theta}} =
r (1 + \cos\theta - \tan\theta)
\\
\grad_r A^{\theta r} &=
\tensor{A}{^{\theta r}_{,r}} +
A^{\alpha r} \tensor{\Gamma}{^\theta_{\alpha r}} +
A^{\theta \alpha} \tensor{\Gamma}{^r_{\alpha r}}
\\ &=
\tensor{A}{^{\theta r}_{,r}} +
A^{r r} \tensor{\Gamma}{^\theta_{r r}} +
A^{\theta r} \tensor{\Gamma}{^\theta_{\theta r}} +
A^{\theta r} \tensor{\Gamma}{^r_{r r}} +
A^{\theta \theta} \tensor{\Gamma}{^r_{\theta r}}
\\ &=
\tensor{A}{^{\theta r}_{,r}} +
A^{\theta r} \tensor{\Gamma}{^\theta_{\theta r}} =
2 \cos\theta
\\
\grad_\theta A^{\theta r} &=
\tensor{A}{^{\theta r}_{,\theta}} +
A^{\alpha r} \tensor{\Gamma}{^\theta_{\alpha \theta}} +
A^{\theta \alpha} \tensor{\Gamma}{^r_{\alpha \theta}}
\\ &=
\tensor{A}{^{\theta r}_{,\theta}} +
A^{r r} \tensor{\Gamma}{^\theta_{r \theta}} +
A^{\theta r} \tensor{\Gamma}{^\theta_{\theta \theta}} +
A^{\theta r} \tensor{\Gamma}{^r_{r \theta}} +
A^{\theta \theta} \tensor{\Gamma}{^r_{\theta \theta}}
\\ &=
\tensor{A}{^{\theta r}_{,\theta}} +
A^{r r} \tensor{\Gamma}{^\theta_{r \theta}} +
A^{\theta \theta} \tensor{\Gamma}{^r_{\theta \theta}} =
r (1 - \sin\theta - \tan\theta)
\end{align*}
\textbf{15}
Find the components of $\tensor{V}{^\alpha_{;\beta;\gamma}}$ for the vector $V^r = 1$, $V^\theta = 0$.
We start by finding the components of $\tensor{V}{^\alpha_{;\beta}}$.
%
\begin{displaymath}
\tensor{V}{^\alpha_{;\beta}} =
\tensor{V}{^\alpha_{,\beta}} +
V^\mu \tensor{\Gamma}{^\alpha_{\mu\beta}}.
\end{displaymath}
%
By noting that $\tensor{V}{^\alpha_{,\beta}} = V^\theta = \tensor{\Gamma}{^r_{rr}} = \tensor{\Gamma}{^r_{r\theta}} = 0$, we can simplify this to
%
\begin{displaymath}
\tensor{V}{^\alpha_{;\beta}} =
V^r \tensor{\Gamma}{^\alpha_{r\beta}},
\end{displaymath}
%
which means
%
\begin{displaymath}
\tensor{V}{^r_{;r}} = \tensor{V}{^r_{;\theta}} = \tensor{V}{^\theta_{;r}} = 0;
\quad
\tensor{V}{^\theta_{;\theta}} = \frac{1}{r}.
\end{displaymath}
%
Now we can say
%
\begin{displaymath}
\tensor{V}{^\alpha_{;\beta;\mu}} =
\grad_\mu \tensor{V}{^\alpha_{;\beta}} =
\tensor{V}{^\alpha_{;\beta,\mu}} +
\tensor{V}{^\gamma_{;\beta}} \tensor{\Gamma}{^\alpha_{\gamma\mu}} -
\tensor{V}{^\alpha_{;\gamma}} \tensor{\Gamma}{^\gamma_{\beta\mu}}.
\end{displaymath}
%
Note that $\tensor{V}{^\theta_{;\theta}}$ is a function only of $r$, so $\tensor{V}{^\theta_{;\theta,r}} = -1/r^2$ and all other partial derivatives are zero. We can also see, by inspecting the components, that $\tensor{V}{^r_{;\mu;\nu}} = \tensor{V}{^\theta_{;\mu}} \tensor{\Gamma}{^r_{\theta\nu}}$, as all other terms vanish. Likewise, we can see that $\tensor{V}{^\theta_{;r;\mu}} = -\tensor{V}{^\theta_{;\theta}} \tensor{\Gamma}{^\theta_{r\mu}}$. It then becomes easy to find all the individual components. I summarize their values in Table \ref{tab:ch5-ex15}.
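For example, the only non-vanishing component with $\alpha = r$ is
%
\begin{displaymath}
\tensor{V}{^r_{;\theta;\theta}} =
\tensor{V}{^\theta_{;\theta}} \tensor{\Gamma}{^r_{\theta\theta}} =
\frac{1}{r} (-r) =
-1.
\end{displaymath}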
\begin{table}[b]
\centering
\begin{tabular}{ccc|c}
$\alpha$ & $\beta$ & $\mu$ & $\tensor{V}{^\alpha_{;\beta;\mu}}$ \\
\hline
$\theta$ & $\theta$ & $\theta$ & $0$ \\
$\theta$ & $\theta$ & $r$ & $-1/r^2$ \\
$\theta$ & $r$ & $\theta$ & $-1/r^2$ \\
$\theta$ & $r$ & $r$ & $0$ \\
$r$ & $\theta$ & $\theta$ & $-1$ \\
$r$ & $\theta$ & $r$ & $0$ \\
$r$ & $r$ & $\theta$ & $0$ \\
$r$ & $r$ & $r$ & $0$ \\
\end{tabular}
\caption{Components of the tensor in Exercise 15.}
\label{tab:ch5-ex15}
\end{table}
\textbf{16}
Repeat the steps leading from Equation 5.74 to 5.75.
Recalling that $g_{\alpha\mu;\beta} = 0$, we can rewrite Equation 5.72 as
%
\begin{displaymath}
g_{\alpha\beta,\mu} =
\tensor{\Gamma}{^\nu_{\alpha\mu}} g_{\nu\beta} +
\tensor{\Gamma}{^\nu_{\beta\mu}} g_{\alpha\nu}.
\end{displaymath}
%
Now if we switch the $\beta$ and $\mu$ indices, and then switch the $\alpha$ and $\beta$ indices, we get two more equations,
%
\begin{align*}
g_{\alpha\mu,\beta} &=
\tensor{\Gamma}{^\nu_{\alpha\beta}} g_{\nu\mu} +
\tensor{\Gamma}{^\nu_{\mu\beta}} g_{\alpha\nu},
\\*
g_{\beta\mu,\alpha} &=
\tensor{\Gamma}{^\nu_{\beta\alpha}} g_{\nu\mu} +
\tensor{\Gamma}{^\nu_{\mu\alpha}} g_{\beta\nu}.
\end{align*}
%
Now we add the first two equations and subtract the third, getting
%
\begin{align*}
g_{\alpha\beta,\mu} + g_{\alpha\mu,\beta} - g_{\beta\mu,\alpha} &=
\tensor{\Gamma}{^\nu_{\alpha\mu}} g_{\nu\beta} +
\tensor{\Gamma}{^\nu_{\beta\mu}} g_{\alpha\nu} +
\tensor{\Gamma}{^\nu_{\alpha\beta}} g_{\nu\mu} +
\tensor{\Gamma}{^\nu_{\mu\beta}} g_{\alpha\nu} -
\tensor{\Gamma}{^\nu_{\beta\alpha}} g_{\nu\mu} -
\tensor{\Gamma}{^\nu_{\mu\alpha}} g_{\beta\nu}
\\ &=
{\color{red}
\tensor{\Gamma}{^\nu_{\alpha\mu}} g_{\beta\nu}} +
{\color{green}
\tensor{\Gamma}{^\nu_{\beta\mu}} g_{\alpha\nu}} +
{\color{blue}
\tensor{\Gamma}{^\nu_{\alpha\beta}} g_{\nu\mu}} +
{\color{green}
\tensor{\Gamma}{^\nu_{\beta\mu}} g_{\alpha\nu}} -
{\color{blue}
\tensor{\Gamma}{^\nu_{\alpha\beta}} g_{\nu\mu}} -
{\color{red}
\tensor{\Gamma}{^\nu_{\alpha\mu}} g_{\beta\nu}}
\\ &=
2 \tensor{\Gamma}{^\nu_{\beta\mu}} g_{\alpha\nu}.
\end{align*}
%
Recalling that $g^{\alpha\gamma} g_{\alpha\nu} = \tensor{g}{^\gamma_\nu} = \tensor{\delta}{^\gamma_\nu}$, we divide both sides by $2$ and multiply by $g^{\alpha\gamma}$, arriving at Equation 5.75:
%
\begin{align*}
\frac{1}{2}
g^{\alpha\gamma}
(g_{\alpha\beta,\mu} + g_{\alpha\mu,\beta} - g_{\beta\mu,\alpha}) &=
\frac{2}{2}
g^{\alpha\gamma} g_{\alpha\nu}
\tensor{\Gamma}{^\nu_{\beta\mu}}
\\ &=
\tensor{\Gamma}{^\gamma_{\beta\mu}}.
\end{align*}
\textbf{17}
Show how $\tensor{V}{^\beta_{,\alpha}}$ and $V^\mu \tensor{\Gamma}{^\beta_{\mu\alpha}}$ transform under a change of coordinates. Neither follows a tensor transformation law, but their \emph{sum} does.
\begin{align*}
\tensor{V}{^{\alpha'}_{,\beta'}} &=
\pdv{V^{\alpha'}}{x^{\beta'}} =
\tensor{\Lambda}{^\beta_{\beta'}}
\pdv{x^\beta} \qty[ \tensor{\Lambda}{^{\alpha'}_\alpha} V^\alpha ]
\\ &=
\tensor{\Lambda}{^\beta_{\beta'}} \qty[
V^\alpha \pdv{x^\beta} \tensor{\Lambda}{^{\alpha'}_\alpha} +
\tensor{\Lambda}{^{\alpha'}_\alpha} \pdv{x^\beta} V^\alpha
]
\\ &=
\tensor{\Lambda}{^\beta_{\beta'}} V^\alpha
\tensor{\Lambda}{^{\alpha'}_{\alpha,\beta}} +
\tensor{\Lambda}{^\beta_{\beta'}} \tensor{\Lambda}{^{\alpha'}_\alpha}
\tensor{V}{^\alpha_{,\beta}}
\\ &\neq
\tensor{\Lambda}{^\beta_{\beta'}} \tensor{\Lambda}{^{\alpha'}_\alpha}
\tensor{V}{^\alpha_{,\beta}}
\\
\pdv{\vec{e}_{\alpha'}}{x^{\beta'}} &=
\tensor{\Lambda}{^\beta_{\beta'}}
\pdv{x^\beta} \qty[ \tensor{\Lambda}{^\alpha_{\alpha'}} \vec{e}_\alpha ]
\\ &=
\tensor{\Lambda}{^\beta_{\beta'}} \qty[
\tensor{\Lambda}{^\alpha_{\alpha'}} \pdv{x^\beta} \vec{e}_\alpha +
\vec{e}_\alpha \pdv{x^\beta} \tensor{\Lambda}{^\alpha_{\alpha'}}
]
\\ &=
\tensor{\Lambda}{^\beta_{\beta'}}
\tensor{\Lambda}{^\alpha_{\alpha'}}
\tensor{\Gamma}{^\mu_{\alpha\beta}} \vec{e}_\mu +
\tensor{\Lambda}{^\beta_{\beta'}}
\tensor{\Lambda}{^\alpha_{\alpha',\beta}}
\vec{e}_\alpha
\\ &\neq
\tensor{\Lambda}{^\beta_{\beta'}}
\tensor{\Lambda}{^\alpha_{\alpha'}}
\tensor{\Gamma}{^\mu_{\alpha\beta}} \vec{e}_\mu,
\end{align*}
%
so we have shown that $\pdv*{\vec{e}_{\alpha'}}{x^{\beta'}}$ does not transform as a tensor.
Since $V^\mu$ is a tensor, and the product of a tensor with a non-tensor cannot transform as a tensor, $V^\mu \tensor{\Gamma}{^\beta_{\mu\alpha}}$ is not a tensor either.
According to Carroll, the precise transformation is
%
\begin{displaymath}
\tensor{\Gamma}{^{\nu'}_{\mu'\lambda'}} =
\tensor{\Lambda}{^\mu_{\mu'}}
\tensor{\Lambda}{^\lambda_{\lambda'}}
\tensor{\Lambda}{^{\nu'}_\nu}
\tensor{\Gamma}{^\nu_{\mu\lambda}} -
\tensor{\Lambda}{^\mu_{\mu'}}
\tensor{\Lambda}{^\lambda_{\lambda'}}
\tensor{\Lambda}{^{\nu'}_{\lambda,\mu}}.
\end{displaymath}
%
Now we add the two expressions, in order to show that it is a tensor equation
%
\begin{align*}
\tensor{V}{^{\nu'}_{,\lambda'}} +
V^{\mu'} \tensor{\Gamma}{^{\nu'}_{\mu'\lambda'}} &=
\tensor{\Lambda}{^\lambda_{\lambda'}}
V^\nu \tensor{\Lambda}{^{\nu'}_{\nu,\lambda}} +
\tensor{\Lambda}{^\lambda_{\lambda'}}
\tensor{\Lambda}{^{\nu'}_\nu} \tensor{V}{^\nu_{,\lambda}} +
\tensor{\Lambda}{^\lambda_{\lambda'}}
\tensor{\Lambda}{^{\nu'}_\nu} V^\mu
\tensor{\Gamma}{^\nu_{\mu\lambda}} -
\tensor{\Lambda}{^\lambda_{\lambda'}}
V^\mu \tensor{\Lambda}{^{\nu'}_{\lambda,\mu}}
\\ &=
\tensor{\Lambda}{^\lambda_{\lambda'}}
\tensor{\Lambda}{^{\nu'}_\nu}
\qty(
\tensor{V}{^\nu_{,\lambda}} +
V^\mu \tensor{\Gamma}{^\nu_{\mu\lambda}}
)
\end{align*}
%
The first and last terms cancel because $\tensor{\Lambda}{^{\nu'}_{\nu,\lambda}} = \tensor{\Lambda}{^{\nu'}_{\lambda,\nu}}$ (mixed partial derivatives commute), so the sum does in fact transform like a tensor, meaning $\tensor{V}{^\nu_{;\lambda}}$ is a tensor!
\textbf{18}
Verify Equation 5.78:
%
\begin{displaymath}
\left.
\begin{array}{r}
\vec{e}_{\hat\alpha} \cdot \vec{e}_{\hat\beta} \equiv
g_{\hat\alpha \hat\beta} =
\delta_{\hat\alpha \hat\beta}
\\
\tilde\omega^{\hat\alpha} \cdot \tilde\omega^{\hat\beta} \equiv
g^{\hat\alpha \hat\beta} =
\delta^{\hat\alpha \hat\beta}
\end{array}
\right\}
\end{displaymath}
For the basis \emph{vectors}, we have
%
\begin{align*}
g_{\hat{r} \hat{r}} &=
\vec{e}_{\hat{r}} \cdot \vec{e}_{\hat{r}} =
\vec{e}_r \cdot \vec{e}_r =
g_{rr} =
1
\\
g_{\hat\theta \hat\theta} &=
\vec{e}_{\hat\theta} \cdot \vec{e}_{\hat\theta} =
\qty(\frac{1}{r} \vec{e}_\theta) \cdot \qty(\frac{1}{r} \vec{e}_\theta) =
\frac{1}{r^2} (\vec{e}_\theta \cdot \vec{e}_\theta) =
\frac{1}{r^2} g_{\theta\theta} =
1
\\
g_{\hat{r} \hat\theta} &=
\vec{e}_{\hat{r}} \cdot \vec{e}_{\hat\theta} =
\vec{e}_r \cdot \qty(\frac{1}{r} \vec{e}_\theta) =
\frac{1}{r} (\vec{e}_r \cdot \vec{e}_\theta) =
\frac{1}{r} g_{r\theta} =
0
\\
g_{\hat\theta \hat{r}} &=
g_{\hat{r} \hat\theta} =
0
\end{align*}
%
So it is indeed true that $g_{\hat\alpha \hat\beta} = \delta_{\hat\alpha \hat\beta}$.
Now for the basis \emph{one-forms}, we have
%
\begin{align*}
g^{\hat{r} \hat{r}} &=
\tilde\omega^{\hat{r}} \cdot \tilde\omega^{\hat{r}} =
\tilde\dd{r} \cdot \tilde\dd{r} =
g^{rr} =
1
\\
g^{\hat\theta \hat\theta} &=
\tilde\omega^{\hat\theta} \cdot \tilde\omega^{\hat\theta} =
(r \tilde\dd\theta) \cdot (r \tilde\dd\theta) =
r^2 (\tilde\dd\theta \cdot \tilde\dd\theta) =
r^2 g^{\theta\theta} =
r^2 (1/r^2) =
1
\\
g^{\hat{r} \hat\theta} &=
\tilde\omega^{\hat{r}} \cdot \tilde\omega^{\hat\theta} =
\tilde\dd{r} \cdot (r \tilde\dd\theta) =
r (\tilde\dd{r} \cdot \tilde\dd\theta) =
r g^{r\theta} =
0
\\
g^{\hat\theta \hat{r}} &=
g^{\hat{r} \hat\theta} =
0
\end{align*}
%
So it is indeed true that $g^{\hat\alpha \hat\beta} = \delta^{\hat\alpha \hat\beta}$.
\textbf{19}
Repeat the calculations going from Equations 5.81 to 5.84, with $\tilde\dd{r}$ and $\tilde\dd\theta$ as your bases. Show that they form a coordinate basis.
\begin{align*}
\tilde\dd{r} &=
\cos\theta \tilde\dd{x} + \sin\theta \tilde\dd{y} =
\pdv{\xi}{x} \tilde\dd{x} + \pdv{\xi}{y} \tilde\dd{y}
\\
\pdv{\xi}{x} &=
\cos\theta; \quad
\pdv{\xi}{y} =
\sin\theta
\\
\pdv{y} \pdv{\xi}{x} &=
\pdv{x} \pdv{\xi}{y} \implies
\pdv{y} (x/r) = \pdv{x} (y/r),
\end{align*}
%
which is true, so we have shown that at least $\tilde\dd{r}$ may be part of a coordinate basis.
%
\begin{align*}
\tilde\dd\theta &=
-\frac{1}{r} \sin\theta \tilde\dd{x} +
\frac{1}{r} \cos\theta \tilde\dd{y} =
\pdv{\eta}{x} \tilde\dd{x} +
\pdv{\eta}{y} \tilde\dd{y}
\\
\pdv{\eta}{x} &=
-\frac{1}{r} \sin\theta; \quad
\pdv{\eta}{y} =
\frac{1}{r} \cos\theta
\\
\pdv{y} \pdv{\eta}{x} &=
\pdv{x} \pdv{\eta}{y} \implies
\pdv{y} \qty[ -\frac{1}{r} \sin\theta ] =
\pdv{x} \qty[ \frac{1}{r} \cos\theta ],
\end{align*}
%
which is also true, and thus we have shown that $\tilde\dd{r}$ and $\tilde\dd\theta$ form a coordinate basis.
\textbf{20}
For a non-coordinate basis $\{ \vec{e}_\mu \}$, let $\tensor{c}{^\alpha_{\mu\nu}} \vec{e}_\alpha = \grad_{\vec{e}_\mu} \vec{e}_\nu - \grad_{\vec{e}_\nu} \vec{e}_\mu$. Use this in place of Equation 5.74 to derive a more general expression for Equation 5.75.
First, we show that $\boldsymbol{c}$ is antisymmetric in its lower indices.
%
\begin{align*}
\tensor{c}{^\alpha_{\mu\nu}} \vec{e}_\alpha +
\tensor{c}{^\alpha_{\nu\mu}} \vec{e}_\alpha &=
(\grad_{\vec{e}_\mu} \vec{e}_\nu - \grad_{\vec{e}_\nu} \vec{e}_\mu) +
(\grad_{\vec{e}_\nu} \vec{e}_\mu - \grad_{\vec{e}_\mu} \vec{e}_\nu) =
0
\\ \implies
\tensor{c}{^\alpha_{\mu\nu}} \vec{e}_\alpha &=
-\tensor{c}{^\alpha_{\nu\mu}} \vec{e}_\alpha
\\ \implies
\tensor{c}{^\alpha_{\mu\nu}} &=
-\tensor{c}{^\alpha_{\nu\mu}}
\end{align*}
%
Expanding the covariant derivatives in the original expression, we get
\begin{align*}
\tensor{c}{^\alpha_{\mu\nu}} \vec{e}_\alpha &=
\vec{e}_{\nu;\mu} - \vec{e}_{\mu;\nu}
\\ &=
(\vec{e}_{\nu,\mu} - \vec{e}_\alpha \tensor{\Gamma}{^\alpha_{\nu\mu}}) -
(\vec{e}_{\mu,\nu} - \vec{e}_\alpha \tensor{\Gamma}{^\alpha_{\mu\nu}})
\\ &=
\vec{e}_\alpha
(\tensor{\Gamma}{^\alpha_{\mu\nu}} - \tensor{\Gamma}{^\alpha_{\nu\mu}})
\\
\tensor{c}{^\alpha_{\mu\nu}} &=
\tensor{\Gamma}{^\alpha_{\mu\nu}} - \tensor{\Gamma}{^\alpha_{\nu\mu}}
\end{align*}
%
Now we recall the result from Exercise 16, but without assuming symmetry of the Christoffel symbols
%
\begin{align*}
g_{\alpha\beta,\mu} + g_{\alpha\mu,\beta} - g_{\beta\mu,\alpha} &=
\tensor{\Gamma}{^\nu_{\alpha\mu}} g_{\nu\beta} +
\tensor{\Gamma}{^\nu_{\beta\mu}} g_{\alpha\nu} +
\tensor{\Gamma}{^\nu_{\alpha\beta}} g_{\nu\mu} +
\tensor{\Gamma}{^\nu_{\mu\beta}} g_{\alpha\nu} -
\tensor{\Gamma}{^\nu_{\beta\alpha}} g_{\nu\mu} -
\tensor{\Gamma}{^\nu_{\mu\alpha}} g_{\beta\nu}
\\ &=
{\color{red} \tensor{\Gamma}{^\nu_{\alpha\mu}} g_{\beta\nu}} +
{\color{green} \tensor{\Gamma}{^\nu_{\beta\mu}} g_{\alpha\nu}} +
{\color{blue} \tensor{\Gamma}{^\nu_{\alpha\beta}} g_{\nu\mu}} +
{\color{green} \tensor{\Gamma}{^\nu_{\mu\beta}} g_{\alpha\nu}} -
{\color{blue} \tensor{\Gamma}{^\nu_{\beta\alpha}} g_{\nu\mu}} -
{\color{red} \tensor{\Gamma}{^\nu_{\mu\alpha}} g_{\beta\nu}}
\\ &=
g_{\beta\nu}
(\tensor{\Gamma}{^\nu_{\alpha\mu}} - \tensor{\Gamma}{^\nu_{\mu\alpha}}) +
g_{\alpha\nu}
(\tensor{\Gamma}{^\nu_{\beta\mu}} + \tensor{\Gamma}{^\nu_{\mu\beta}}) +
g_{\nu\mu}
(\tensor{\Gamma}{^\nu_{\alpha\beta}} - \tensor{\Gamma}{^\nu_{\beta\alpha}})
\\ &=
g_{\beta\nu} \tensor{c}{^\nu_{\alpha\mu}} +
g_{\alpha\nu}
(\tensor{\Gamma}{^\nu_{\beta\mu}} +
\tensor{\Gamma}{^\nu_{\mu\beta}} +
\tensor{\Gamma}{^\nu_{\beta\mu}} -
\tensor{\Gamma}{^\nu_{\beta\mu}}) +
g_{\nu\mu} \tensor{c}{^\nu_{\alpha\beta}}
\\ &=
g_{\beta\nu} \tensor{c}{^\nu_{\alpha\mu}} +
g_{\nu\mu} \tensor{c}{^\nu_{\alpha\beta}} +
g_{\alpha\nu}
(2 \tensor{\Gamma}{^\nu_{\beta\mu}} + \tensor{c}{^\nu_{\mu\beta}})
\\
2 g_{\alpha\nu} \tensor{\Gamma}{^\nu_{\beta\mu}} &=
g_{\alpha\beta,\mu} +
g_{\alpha\mu,\beta} -
g_{\beta\mu,\alpha} -
c_{\beta\alpha\mu} -
c_{\mu\alpha\beta} -
c_{\alpha\mu\beta}
\\
\tensor{\Gamma}{^\gamma_{\beta\mu}} &=
\frac{1}{2}
g^{\alpha\gamma} (
g_{\alpha\beta,\mu} +
g_{\alpha\mu,\beta} -
g_{\beta\mu,\alpha} -
c_{\beta\alpha\mu} -
c_{\mu\alpha\beta} -
c_{\alpha\mu\beta}
),
\end{align*}
%
where the lowered-index commutation coefficients are defined by $c_{\beta\alpha\mu} \equiv g_{\beta\nu} \tensor{c}{^\nu_{\alpha\mu}}$, and similarly for the other terms.
\textbf{21}
A uniformly accelerated observer has world line
%
\begin{displaymath}
t(\lambda) = a \sinh\lambda,
\quad
x(\lambda) = a \cosh\lambda
\end{displaymath}
(a) Show that the line tangent to his world line (which is parameterized by $\lambda$) is orthogonal to the (spacelike) line parameterized by $a$.
The tangent vector to his world line is
%
\begin{displaymath}
\vec{V} \to
\dv{\lambda} (t, x) =
(a \cosh\lambda, a \sinh\lambda).
\end{displaymath}
%
The tangent vector to the line parameterized by $a$ is
%
\begin{displaymath}
\vec{W} \to
\dv{a} (t, x) =
(\sinh\lambda, \cosh\lambda)
\end{displaymath}
%
If they are orthogonal, then their dot product must be zero
%
\begin{displaymath}
\vec{V} \cdot \vec{W} =
-(a \cosh\lambda \sinh\lambda) + (a \sinh\lambda \cosh\lambda) =
0,
\end{displaymath}
%
which it is.
(b) To prove that this defines a valid coordinate transform from $(\lambda,a)$ to $(t,x)$, we show that the determinant of the transformation matrix is non-zero.
%
\begin{align*}
\det\mqty( \pdv*{t}{\lambda} & \pdv*{t}{a} \\
\pdv*{x}{\lambda} & \pdv*{x}{a} ) &=
\pdv{t}{\lambda} \pdv{x}{a} - \pdv{t}{a} \pdv{x}{\lambda}
\\ &=
a \cosh^2\lambda - a \sinh^2\lambda = a
\\ &\neq 0,
\end{align*}
%
and so it is indeed a valid coordinate transform.
To plot the curves parameterized by $a$, we take
%
\begin{align*}
-t^2 + x^2 &=
a^2 (\cosh^2\lambda - \sinh^2\lambda)
\\ &=
a^2,
\end{align*}
%
which gives us a family of hyperbolae (the observers' time-like world lines), one for each chosen value of $a$.
To plot the curves parameterized by $\lambda$, we take
%
\begin{align*}
x &= a \cosh\lambda \implies a = x / \cosh\lambda
\\
t &= a \sinh\lambda = x \sinh\lambda / \cosh\lambda = x \tanh\lambda,
\end{align*}
%
which gives us a family of space-like lines, depending on the chosen value of $\lambda$.
A plot of these curves is given in Figure \ref{fig:ch5-problem-21}, from which it is clear that only half of the $t$--$x$ plane is covered.
When $\abs{t} = \abs{x}$ we have $a = 0$, since $-t^2 + x^2 = a^2$. We already found that the determinant of the coordinate transformation is $a$, so the transformation becomes singular there.
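Indeed, inverting the transformation gives $a^2 = x^2 - t^2$ and $\lambda = \tanh^{-1}(t/x)$, which only makes sense for $x^2 > t^2$; both coordinates degenerate ($a \to 0$, $\lambda \to \pm\infty$) as $\abs{t} \to \abs{x}$.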
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\textwidth]{img/ch5_problem_21}
\caption{Lines of constant $\lambda$ and $a$ in Problem 21.}
\label{fig:ch5-problem-21}
\end{figure}
(c) Find the metric tensor and Christoffel symbols in $(\lambda,a)$ coordinates.
First we find the basis vectors:
%
\begin{align*}
\vec{e}_\lambda &=
a (\cosh\lambda \vec{e}_t + \sinh\lambda \vec{e}_x),
\\
\vec{e}_a &=
\sinh\lambda \vec{e}_t + \cosh\lambda \vec{e}_x.
\end{align*}
Now we find the components of the metric tensor $\vb{g}$ as
%
\begin{align*}
g_{\lambda\lambda} &=
a^2
( \cosh\lambda \vec{e}_t +
\sinh\lambda \vec{e}_x )^2
\\ &=
a^2
( \cosh^2\lambda \eta_{tt} +
\sinh^2\lambda \eta_{xx} +
2 \sinh\lambda \cosh\lambda \eta_{tx} )
\\ &=
a^2 (\sinh^2\lambda - \cosh^2\lambda)
\\ &=
-a^2
\\
g_{aa} &=
(\sinh\lambda \vec{e}_t + \cosh\lambda \vec{e}_x)^2
\\ &=
\sinh^2\lambda \eta_{tt} +
\cosh^2\lambda \eta_{xx} +
2 \sinh\lambda \cosh\lambda \eta_{tx}
\\ &=
1
\\
g_{\lambda a} = g_{a \lambda} &=
a (\cosh\lambda \vec{e}_t + \sinh\lambda \vec{e}_x)
(\sinh\lambda \vec{e}_t + \cosh\lambda \vec{e}_x)
\\ &=
a
( \cosh\lambda \sinh\lambda (\eta_{tt} + \eta_{xx}) +
(\cosh^2\lambda + \sinh^2\lambda) \eta_{tx} )
\\ &=
0
\\
\vb{g} &\underset{(\lambda,a)}{\to}
\mqty( -a^2 & 0 \\ 0 & 1 )
\end{align*}
%
Now for the Christoffel symbols, since we know this is a coordinate basis, we can use
%
\begin{displaymath}
\tensor{\Gamma}{^\gamma_{\beta \mu}} =
\frac{1}{2} g^{\alpha \gamma}
(g_{\alpha \beta , \mu} + g_{\alpha \mu , \beta} - g_{\beta \mu , \alpha})
\end{displaymath}
%
\begin{align*}
\tensor{\Gamma}{^\lambda_{\lambda \lambda}} &=
\frac{1}{2} g^{\alpha \lambda}
(g_{\alpha \lambda , \lambda} + g_{\alpha \lambda , \lambda} - g_{\lambda \lambda , \alpha}) =
\frac{1}{2} g^{a\lambda} (-g_{\lambda\lambda,a})
\\ &=
0
\\
\tensor{\Gamma}{^a_{a a}} &=
\frac{1}{2} g^{\alpha a}
(g_{\alpha a , a} + g_{\alpha a , a} - g_{a a , \alpha})
\\ &=
0
\\
\tensor{\Gamma}{^\lambda_{\lambda a}} &=
\frac{1}{2} g^{\alpha \lambda}
(g_{\alpha \lambda , a} + g_{\alpha a , \lambda} - g_{\lambda a , \alpha}) =
\frac{1}{2} g^{\lambda\lambda} g_{\lambda\lambda,a} =
\frac{1}{2} (-a^{-2}) (-2a)
\\ &=
1/a
\\
\tensor{\Gamma}{^a_{\lambda a}} &=
\frac{1}{2} g^{\alpha a}
(g_{\alpha \lambda , a} + g_{\alpha a , \lambda} - g_{\lambda a , \alpha}) =
\frac{1}{2} g^{\lambda a} g_{\lambda\lambda,a}
\\ &=
0
\\
\tensor{\Gamma}{^\lambda_{a a}} &=
\frac{1}{2} g^{\alpha \lambda}
(g_{\alpha a , a} + g_{\alpha a , a} - g_{a a , \alpha})
\\ &=
0
\\
\tensor{\Gamma}{^a_{\lambda \lambda}} &=
\frac{1}{2} g^{\alpha a}
(g_{\alpha \lambda , \lambda} + g_{\alpha \lambda , \lambda} - g_{\lambda \lambda , \alpha}) =
\frac{1}{2} g^{aa} (-g_{\lambda\lambda,a}) =
\frac{1}{2} \cdot 2 \cdot a
\\ &=
a
\end{align*}
\textbf{22}
Show that if $U^\alpha \grad_\alpha V^\beta = W^\beta$, then $U^\alpha \grad_\alpha V_\beta = W_\beta$.
%
\begin{align*}
U^\alpha \grad_\alpha V^\beta = W^\beta &\implies
U^\alpha \tensor{V}{^\gamma_{;\alpha}} = W^\gamma
\\ &\implies
g_{\gamma\beta} U^\alpha \tensor{V}{^\gamma_{;\alpha}} =
g_{\gamma\beta} W^\gamma
\\ &\implies
U^\alpha V_{\beta;\alpha} = W_\beta
\\ &\implies
U^\alpha \grad_\alpha V_\beta = W_\beta,
\end{align*}
%
where we used metric compatibility, $g_{\gamma\beta;\alpha} = 0$, to move the metric inside the covariant derivative.
\end{document}
\input{preamble}
\begin{document}
\header{2}{Volumetric Path Tracing}
\begin{figure}[h]
\includegraphics[width=\linewidth]{imgs/colored_smoke.png}
\caption{The figure shows a heterogeneous volume with a spectrally varying density over space, rendered with multiple-scattering. Smoke data are generated using Wenzel Jakob's \href{http://www.mitsuba-renderer.org/misc.html}{fsolver}.}
\label{fig:gallery}
\end{figure}
In this homework, we will build a volumetric path tracer in lajolla that can handle scattering and absorption inside participating media. We will split the development into 6 steps and build 6 volumetric path tracers, each with more features than the previous one.\footnote{This approach is inspired by Steve Marschner's \href{https://www.cs.cornell.edu/courses/cs6630/2015fa/notes/10volpath.pdf}{course note} on volumetric path tracing.} Your $n$-th volumetric path tracer should be able to render all scenes the $(n-1)$-th one can handle. If you want, you can submit only the final volumetric path tracer and let all the rest call the final code. This process is for helping you slowly and steadily build up your rendering algorithm. You will notice that the scores sum up to more than $100\%$. This is because the last volumetric path tracer is a bit more mathematically involved (the implementation is not too bad though), so if you only do the first five, you still get $90\%$ of the scores. If you do all of them, you get $110\%$.
\href{https://cs.dartmouth.edu/~wjarosz/publications/novak18monte-sig.html}{This SIGGRAPH course note} is a good reference if you fail to understand the handout. Steve Marschner's course note in the footnote is also very useful. Most of the math in the last part is from Miller et al.~\cite{Miller:2019:NPI} and Georgiev et al.~\cite{Georgiev:2019:IFV}'s articles.
\paragraph{Submission and grading.} Please upload a zip file to Canvas including your code, and a text file (readme.txt) answering the questions below. For the questions, as long as you say something plausible, you will get full scores.
Participating media are volumes with many infinitesimal particles absorbing and scattering lights. Given a ray inside the volume parametrized by distance $\mathbf{p}(t)$, the radiance along the ray is modeled by the \emph{radiative transfer equation}:
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} L(\mathbf{p}(t), \omega) = -(\sigma_a(\mathbf{p}(t)) + \sigma_s(\mathbf{p}(t))) L(\mathbf{p}(t), \omega) + L_e(\mathbf{p}(t), \omega) + \sigma_s(\mathbf{p}(t)) \int_{S^2} \rho(\mathbf{p}(t), \omega, \omega') L(\mathbf{p}(t), \omega') \mathrm{d}\omega',
\label{eq:rte}
\end{equation}
where $L$ is the radiance, $\sigma_a$ is the \emph{absorption coefficient}, $\sigma_s$ is the \emph{scattering coefficient}, $L_e$ is the (volumetric) emission, $\rho$ is the \emph{phase function} that is akin to BSDFs in surface rendering, and $S^2$ is the spherical domain.
This looks a bit scary, so let's break it down. For now, we'll drop the arguments for $\sigma_a$ and $\sigma_s$, but in general, they can still be spatially varying. The radiative transfer equation is made of four components: \textbf{absorption}, \textbf{emission}, \textbf{in-scattering}, and \textbf{out-scattering}. Absorption and emission handle particles that absorb and emit lights:
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} L_a(\mathbf{p}(t), \omega) = -\sigma_a L_a(\mathbf{p}(t), \omega) + L_e(\mathbf{p}(t), \omega).
\end{equation}
Notice how this is a linear ordinary differential equation $x' = ax + b$, where $\sigma_a$ attenuates lights and $L_e$ is the gain.
The in-scattering accounts for all the light that bounces between the particles into the ray, just like the surface rendering equation:
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} L_{is}(\mathbf{p}(t), \omega) = \sigma_s \int_{S^2} \rho(\mathbf{p}(t), \omega, \omega') L(\mathbf{p}(t), \omega') \mathrm{d}\omega'.
\end{equation}
However, the light does not just bounce \emph{into} the ray, it also bounces \emph{out}. That's what the out-scattering considers:
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} L_{os}(\mathbf{p}(t), \omega) = -\sigma_s L_{os}(\mathbf{p}(t), \omega).
\end{equation}
Combining all these components, we get the full radiative transfer equation (Equation~\ref{eq:rte}). Notice that the full radiative transfer equation is also like a linear ODE: $-(\sigma_a + \sigma_s) L$ attenuates light, and $L_e$ and the spherical integral are the gain that makes things brighter. For this reason, we often let $\sigma_t = \sigma_a + \sigma_s$ and call it the \emph{extinction coefficient}.
We'll start from a simplified version of the radiative transfer equation, then slowly handle more complex situations. To make things simpler, we will assume throughout that our media do not emit light themselves: they only receive lighting from other surfaces in the scene.
Before that, let's introduce lajolla's data structures for storing the participating media.
\section{Lajolla's participating media data structures and interfaces}
\begin{figure}
\includegraphics[width=\linewidth]{imgs/media.pdf}
\caption{Lajolla assumes that media are separated by closed surface boundaries. At the object surface, we store the ID of the interior and exterior media (if both of them are vacuum, set the ID to \lstinline{-1}). The outermost medium is specified at the camera (and can be accessed through \lstinline{camera.medium_id}). A surface can be \emph{index-matching}, meaning that light passes through without changing direction or losing energy. In this case, the \lstinline{material_id} of the surface is set to \lstinline{-1}. A surface can also be transmissive. In this case, it is assigned a transmissive material like \lstinline{roughdielectric}.}
\label{fig:data_structure}
\end{figure}
\paragraph{The Medium struct in lajolla.} Lajolla's medium interface is for querying the media parameters $\sigma_a$, $\sigma_s$, phase function $\rho$, and the \emph{majorant} which is the upper bound of the extinction coefficient $\sigma_t = \sigma_a + \sigma_s$ -- we will need the majorant in our final renderer.
\begin{lstlisting}[language=c++]
struct MediumBase {
PhaseFunction phase_function;
};
struct HomogeneousMedium : public MediumBase {
Spectrum sigma_a, sigma_s;
};
struct HeterogeneousMedium : public MediumBase {
VolumeSpectrum albedo, density;
};
using Medium = std::variant<HomogeneousMedium, HeterogeneousMedium>;
/// the maximum of sigma_t = sigma_s + sigma_a over the whole space
Spectrum get_majorant(const Medium &medium, const Ray &ray);
Spectrum get_sigma_s(const Medium &medium, const Vector3 &p);
Spectrum get_sigma_a(const Medium &medium, const Vector3 &p);
inline PhaseFunction get_phase_function(const Medium &medium) {
return std::visit([&](const auto &m) { return m.phase_function; }, medium);
}
\end{lstlisting}
You will need these functions to obtain the necessary quantities in the homework.
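For example, here is a pseudocode sketch (in the Python style used for the algorithms later in this handout, not lajolla's actual C++ syntax) of querying the quantities we will need at a point \lstinline{p} along a ray inside a medium:
\begin{lstlisting}[language=python]
medium = scene.media[medium_id]       # look up the Medium by its ID
sigma_a = get_sigma_a(medium, p)      # absorption coefficient at p
sigma_s = get_sigma_s(medium, p)      # scattering coefficient at p
sigma_t = sigma_a + sigma_s           # extinction coefficient at p
majorant = get_majorant(medium, ray)  # upper bound of sigma_t along the ray
phase = get_phase_function(medium)    # the medium's phase function
\end{lstlisting}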
A \lstinline{HomogeneousMedium} should be straightforward: it contains constant $\sigma_a$ and $\sigma_s$.
We will talk more about \lstinline{HeterogeneousMedium} and \lstinline{PhaseFunction} later.
Lajolla assumes that the media are separated by surface boundaries (Figure~\ref{fig:data_structure}). It's up to the upstream users to make sure they are consistent with each other, and the surfaces are closed (if they are not, the results are undefined).\footnote{Modern production volume renderers have paid special attention to make sure the renderers can handle all sorts of inputs, including nested volumes. See \href{https://graphics.pixar.com/library/ProductionVolumeRendering/index.html}{here} for more information.}
In the scene file, each object is marked with its corresponding exterior and interior media:
\begin{lstlisting}[language=xml]
<medium type="homogeneous" id="medium">
<rgb name="sigmaA" value="0.5 0.5 0.5"/>
<rgb name="sigmaS" value="0.0 0.0 0.0"/>
<float name="scale" value="3"/>
</medium>
<shape type="sphere">
<!-- ... -->
<ref name="exterior" id="medium"/>
</shape>
<sensor type="perspective">
<!-- ... -->
<ref id="medium"/>
</sensor>
\end{lstlisting}
If an object has the same interior and exterior media, then it does not need to specify them.
The \lstinline{Medium}s are stored in \lstinline{scene.media} which you can access through \lstinline{scene.media[medium_id]}. The \lstinline{intersection} routine in lajolla returns a \lstinline{PathVertex} object which contains relevant information of the intersection:
\begin{lstlisting}[language=c++]
struct PathVertex {
Vector3 position;
Vector3 geometry_normal;
// ...
int shape_id = -1;
int primitive_id = -1; // For triangle meshes. This indicates which triangle it hits.
int material_id = -1;
// If the path vertex is inside a medium, these two IDs
// are the same.
int interior_medium_id = -1;
int exterior_medium_id = -1;
};
\end{lstlisting}
\section{Single monochromatic absorption-only homogeneous volume}
\begin{figure}
\includegraphics[width=\linewidth]{imgs/absorption_medium.pdf}
\caption{The setup of our first volumetric renderer.}
\label{fig:volpath1_illustration}
\end{figure}
Our first volume renderer will make four assumptions.
\begin{itemize}
\item There is only a single, homogeneous ($\sigma_a$ and $\sigma_s$ are constants over space) volume.
\item The volume does not scatter light: $\sigma_s = 0$.
\item The surfaces in the scene only emit lights (with intensity $L_e$) and do not reflect/transmit lights.
\item The volume is monochromatic: the three color channels of $\sigma_s$ and $\sigma_a$ have the same values.
\end{itemize}
Under these assumptions, the radiative transfer equation becomes
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} L_1(\mathbf{p}(t), \omega) = -\sigma_a L_1(\mathbf{p}(t), \omega),
\end{equation}
and we know $L_1(\mathbf{p}(t_{\text{hit}}), \omega) = L_e(\mathbf{p}(t_{\text{hit}}))$ where $t_{\text{hit}}$ is the distance between the origin of the ray and the emissive surface.
This ordinary differential equation has a simple closed-form:
\begin{equation}
L_1(\mathbf{p}(0), \omega) = \exp\left(-\sigma_a t_{\text{hit}} \right) L_e(\mathbf{p}(t_{\text{hit}})).
\end{equation}
Figure~\ref{fig:volpath1_illustration} illustrates the setup. In volume rendering the exponential term
$\exp\left(-\sigma_a t_{\text{hit}} \right)$ is often called the ``transmittance''.
Our rendering algorithm (that generates a sample for a pixel) is as follows (in Python-style pseudo-code).
\begin{lstlisting}[language=python]
def L(screen_pos, rng):
camera_ray = sample_primary(camera, screen_pos, rng)
isect = intersect(scene, camera_ray)
if isect:
        transmittance = exp(-sigma_a * isect.t_hit)
Le = 0
if is_light(isect):
Le = isect.Le
return transmittance * Le
return 0
\end{lstlisting}
While our volumes are monochromatic, the light can be colored, so the final results can still have color.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{imgs/volpath_1.png}
\caption{How the setup in Figure~\ref{fig:volpath1_illustration} would be rendered.}
\label{fig:volpath1}
\end{figure}
\paragraph{Task (13\%).} You will implement the algorithm above in the \lstinline{vol_path_tracing_1} function in \lstinline{vol_path_tracing.h}. You might want to first look at the surface path tracing code in \lstinline{path_tracing.h} to understand lajolla's API better. Test your rendering using the scene \lstinline{scenes/volpath_test/volpath_test1.xml}. You should get an image that looks like Figure~\ref{fig:volpath1}.
\paragraph{Ray differentials.} In the surface path tracer, lajolla uses ray differentials for texture lookups. Determining ray differentials for volumetric scattering is an unsolved problem. For this homework, we will disable ray differentials by setting them to 0 and never changing them:
\begin{lstlisting}[language=c++]
RayDifferential ray_diff = RayDifferential{Real(0), Real(0)};
\end{lstlisting}
\paragraph{Environment maps.} Throughout the homework, we assume there is no environment map in the scene.
\paragraph{Question(s) (5\%).} Change the absorption parameters to zero in \lstinline{scenes/volpath_test/volpath_test1.xml}. What do you see? Why?
\section{Single monochromatic homogeneous volume with absorption and single-scattering, no surface lighting}
\begin{figure}
\includegraphics[width=\linewidth]{imgs/single_scattering.pdf}
\caption{The figure shows the setup of our second volumetric renderer. Light can now scatter inside the medium once, and we need to account for the phase function $\rho$ and the extra transmittance $\exp\left(-\sigma_t t'\right)$. We need to integrate over the camera ray to account for all scattering.}
\label{fig:volpath2_illustration}
\end{figure}
Our next renderer adds single scattering to the volume (Figure~\ref{fig:volpath2_illustration}). We still assume:
\begin{itemize}
\item There is only a single, homogeneous ($\sigma_a$ and $\sigma_s$ are constants over space) volume.
\item The surfaces in the scene only emit lights (with intensity $L_e$) and do not reflect/transmit lights.
\item The volume is monochromatic: the three color channels of $\sigma_s$ and $\sigma_a$ have the same values.
\item Light only scatters (changes direction) once in the volume.
\end{itemize}
Now we need to solve the radiative transfer equation (Equation~\ref{eq:rte}), except that the spherical integral only recurses once:
\begin{equation}
L_{\text{scatter}1}(\mathbf{p}, \omega) = \int_{\mathcal{M}} \rho(\mathbf{p}(t), \omega, \omega') L_e(\mathbf{p}', \omega') \exp\left(-\sigma_t \left\| \mathbf{p}(t) - \mathbf{p}' \right\|\right) \frac{\left| \omega' \cdot \mathbf{n}_{\mathbf{p}'} \right|}{{\left\| \mathbf{p}(t) - \mathbf{p}' \right\|}^2} \text{visible}(\mathbf{p}(t), \mathbf{p}') \mathrm{d}\mathbf{p}',
\end{equation}
where instead of integrating over the sphere $S^2$, we apply a change of variables to integrate over the scene manifold $\mathcal{M}$, i.e., over all surface points $\mathbf{p}'$. $\frac{\left| \omega' \cdot \mathbf{n}_{\mathbf{p}'} \right|}{{\left\| \mathbf{p}(t) - \mathbf{p}' \right\|}^2} \text{visible}(\mathbf{p}(t), \mathbf{p}')$ is the Jacobian of the transformation (together with the visibility), and $\omega' = \frac{\mathbf{p}' - \mathbf{p}(t)}{\left\| \mathbf{p}' - \mathbf{p}(t) \right\|}$. $\exp\left(-\sigma_t \left\| \mathbf{p}(t) - \mathbf{p}' \right\|\right)$ is the transmittance between $\mathbf{p}(t)$ and $\mathbf{p}'$, which represents the energy loss due to absorption and out-scattering (hence it uses the extinction coefficient $\sigma_t = \sigma_a + \sigma_s$ instead of $\sigma_a$ or $\sigma_s$). The subscript ``$\mathrm{scatter}1$'' stands for ``scattering once''.
The radiative transfer equation is then
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} L_2(\mathbf{p}(t), \omega) = -\sigma_t L_2(\mathbf{p}(t), \omega) + \sigma_s L_{\text{scatter}1}(\mathbf{p}, \omega).
\label{eq:rte_single_scattering}
\end{equation}
We no longer have a simple closed-form solution and resort to Monte Carlo sampling. To make the differential equation more familiar to us who are more used to Monte Carlo rendering, let's convert the equation above into an integral:
\begin{equation}
L_2(\mathbf{p}(0), \omega) = \int_{0}^{t_{\text{hit}}} \exp\left(-\sigma_t t \right) \sigma_s L_{\text{scatter}1}(\mathbf{p}, \omega) \mathrm{d}t + \exp\left(-\sigma_t t_{\text{hit}} \right) L_e(\mathbf{p}(t_{\text{hit}}))
\label{eq:rte_single_scattering_integral_form}
\end{equation}
To sample this integral, we importance sample the transmittance $\exp\left(-\sigma_t t\right)$. To do this we need to sample $t$ such that $p(t) \propto \exp\left(-\sigma_t t\right)$. This can be done by standard inverse transform sampling. We first integrate $\exp\left(-\sigma_t t \right)$:
\begin{equation}
\int_{0}^{t} \exp\left(-\sigma_t s\right) \mathrm{d}s = -\frac{\exp\left(-\sigma_t t\right)}{\sigma_t} + \frac{1}{\sigma_t}.
\end{equation}
We thus know that $p(t) = \sigma_t\exp\left(-\sigma_t t\right)$ (setting $t \rightarrow \infty$ we can get our normalization factor). Integrating $p(t)$ to get the CDF we get
\begin{equation}
P(t) = 1 - \exp\left(-\sigma_t t\right) = u.
\end{equation}
Inverting the CDF $P$ we get
\begin{equation}
t = -\frac{\log\left(1 - u\right)}{\sigma_t}.
\end{equation}
Now, note that our integral is only in $[0, t_{\text{hit}}]$. Our sampling above can generate $t > t_{\text{hit}}$. In this case, we hit a surface and account for the surface emission $L_e(\mathbf{p}(t_{\text{hit}}))$ in Equation~\ref{eq:rte_single_scattering_integral_form}. We need to account for the probability of that:
\begin{equation}
P(t \geq t_{\text{hit}}) = \int_{t_{\text{hit}}}^{\infty} \sigma_t \exp\left(-\sigma_t t\right) \mathrm{d}t = \exp\left(-\sigma_t t_{\text{hit}}\right)
\label{eq:hit_surface_prob}
\end{equation}
Thus our rendering algorithm (that generates a sample for a pixel) looks like this:
\begin{lstlisting}[language=python]
def L(screen_pos, rng):
camera_ray = sample_primary(camera, screen_pos, rng)
isect = intersect(scene, camera_ray)
u = next(rng) # u \in [0, 1]
t = -log(1 - u) / sigma_t
if t < isect.t_hit:
trans_pdf = exp(-sigma_t * t) * sigma_t
transmittance = exp(-sigma_t * t)
# compute L_s1 using Monte Carlo sampling
p = camera_ray.org + t * camera_ray.dir
# Equation 7
L_s1_estimate, L_s1_pdf = L_s1(p, sample_point_on_light(rng))
return (transmittance / trans_pdf) * sigma_s * (L_s1_estimate / L_s1_pdf)
else:
# hit a surface, account for surface emission
trans_pdf = exp(-sigma_t * isect.t_hit)
transmittance = exp(-sigma_t * isect.t_hit)
Le = 0
if is_light(isect):
Le = isect.Le
return (transmittance / trans_pdf) * Le
\end{lstlisting}
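The call to \lstinline{L_s1} above is left abstract. Below is one possible sketch of that helper, estimating the single-scattering integral (Equation 7) with one light sample. This is only a sketch: it assumes \lstinline{sample_point_on_light} returns the sampled point, its normal, and the area-measure PDF, and the helpers \lstinline{normalize}, \lstinline{visible}, \lstinline{eval_phase}, and \lstinline{Le_at} are placeholders rather than lajolla's actual API; \lstinline{camera_ray}, \lstinline{sigma_t}, and \lstinline{phase_function} come from the enclosing routine.
\begin{lstlisting}[language=python]
def L_s1(p, light_sample):
    p_prime, n_prime, pdf_nee = light_sample  # point, normal, area-measure pdf
    dir_light = normalize(p_prime - p)
    d = distance(p, p_prime)
    # geometry term of the change of variables, with binary visibility
    G = abs(dot(dir_light, n_prime)) / (d * d)
    if not visible(scene, p, p_prime):
        G = 0
    rho = eval_phase(phase_function, -camera_ray.dir, dir_light)
    T = exp(-sigma_t * d)  # transmittance between p and p_prime
    return rho * Le_at(p_prime) * T * G, pdf_nee
\end{lstlisting}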
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{imgs/volpath_2.png}
\caption{How a scene like the setup in Figure~\ref{fig:volpath2_illustration} would be rendered. Now that there is scattering, the area surrounding the solid sphere can have non-zero radiance.}
\label{fig:volpath2}
\end{figure}
\paragraph{Task (13\%).} You will implement the algorithm above in the \lstinline{vol_path_tracing_2} function in \lstinline{vol_path_tracing.h}. For evaluating the phase function $\rho$, use the \lstinline{eval} function in \lstinline{phase_function.h}.
Test your rendering using the scene \lstinline{scenes/volpath_test/volpath_test2.xml}. You should get an image that looks like Figure~\ref{fig:volpath2}. Note that the rendered image will be a bit noisy even with a high sample count. This is expected, since we are not importance sampling the geometry term $\frac{\left| \omega' \cdot \mathbf{n}_{\mathbf{p}'} \right|}{{\left\| \mathbf{p}(t) - \mathbf{p}' \right\|}^2} \text{visible}(\mathbf{p}(t), \mathbf{p}')$ along $t$. Such an importance sampling scheme is called \emph{equiangular sampling}~\cite{Kulla:2012:IST} (or the \emph{track-length estimator} in nuclear engineering~\cite{Rief:1984:TLE}). Equiangular sampling is out of the scope of this homework, but you're free to implement it!
\paragraph{Question(s) (5\%).} Play with the parameters $\sigma_s$ and $\sigma_a$, how do they affect the final image? Why? (we assume monochromatic volumes, so don't set the parameters to be different for each channel.)
\paragraph{Bonus (25\%).} Implement the \emph{equiangular sampling} strategy~\cite{Kulla:2012:IST}. Potentially combine it with the transmittance sampling using multiple importance sampling. (I do not recommend implementing this until you've finished the whole homework.) If you implement this bonus, please mention it in the README file, and provide rendering comparisons.
\section{Multiple monochromatic homogeneous volumes with absorption and multiple-scattering using only phase function sampling, no surface lighting}
\begin{figure}
\includegraphics[width=\linewidth]{imgs/multiple_scattering.pdf}
\caption{Setup of our third volumetric renderer.}
\label{fig:volpath3_illustration}
\end{figure}
Our next volumetric path tracer starts to get a bit more complicated. This time we will consider multiple scattering between multiple volumes (Figure~\ref{fig:volpath3_illustration}). We make the following assumptions:
\begin{itemize}
\item There are multiple homogeneous ($\sigma_a$ and $\sigma_s$ are constants over space) volumes.
\item The surfaces in the scene only emit lights (with intensity $L_e$) and do not reflect/transmit lights.
\item The volumes are monochromatic: the three color channels of $\sigma_s$ and $\sigma_a$ have the same values.
\item Light can scatter (changes direction) multiple times in the volume, but we only sample the scattering by sampling the phase function $\rho$.
\end{itemize}
Under this assumption, our volumetric integral becomes:
\begin{equation}
L_3(\mathbf{p}(0), \omega) = \int_{0}^{t_{\text{hit}}} \exp\left(-\sigma_t t \right) \sigma_s L_{\text{scatter}}(\mathbf{p}, \omega) \mathrm{d}t + \exp\left(-\sigma_t t_{\text{hit}} \right) L_e(\mathbf{p}(t_{\text{hit}})),
\label{eq:rte_multiple_scattering_integral_form}
\end{equation}
where $L_{\text{scatter}}$ is recursive:
\begin{equation}
L_{\text{scatter}}(\mathbf{p}, \omega) = \int_{S^2} \rho(\mathbf{p}(t), \omega, \omega') L_3(\mathbf{p}(t), \omega') \mathrm{d}\omega'.
\end{equation}
Our strategy of sampling the integral $L_3$ is as follows: we first generate a distance $t$ by sampling the transmittance as before. If $t < t_{\text{hit}}$, we need to evaluate $L_{\text{scatter}}$. We do so by sampling the phase function $\rho$ for a direction $\omega'$. We then sample the next distance for evaluating $L_3$ inside the $S^2$ integral. If we hit a surface ($t' \geq t'_{\text{hit}}$ for some distance $t'$ along our sampling), we include the emission $L_e$ and terminate.
\paragraph{Number of bounces.} We use the \lstinline{scene.options.max_depth} parameter to control the number of bounces. If \lstinline{max_depth = -1}, then light can bounce an unlimited number of times. Otherwise, a light path can have at most \lstinline{max_depth + 1} vertices, including the camera vertex and the light vertex. \lstinline{max_depth = 2} corresponds to the single scattering case in the previous section.
\paragraph{Index-matched surfaces.} Sometimes we will hit surfaces that have no materials assigned (\lstinline{material_id = -1}). For these surfaces, we need to pass through them. Passing through an index-matched surface counts as one bounce.
\paragraph{Russian roulette.} We use the \lstinline{scene.options.rr_depth} to control the Russian roulette behavior. We initialize Russian roulette when a light path has \lstinline{rr_depth + 1} vertices. We set the probability for termination to
\begin{equation}
P_{rr} = \min\left(\frac{\text{contrib}(\text{path})}{p(\text{path})}, 0.95\right),
\end{equation}
where $\text{contrib}(\text{path})$ is the evaluation of the integrand of the path in the volumetric integral, and $p(\text{path})$ is the probability density of generating that path.
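In the pseudocode below, the termination probability is written directly as \lstinline{min(current_path_throughput, 0.95)}; since the throughput is a color in lajolla, one common scalar choice (an assumption here -- the handout does not prescribe a particular one) is its luminance or maximum component:
\begin{lstlisting}[language=python]
# one possible scalar Russian roulette probability; luminance is a placeholder
rr_prob = min(luminance(current_path_throughput), 0.95)
if next(rng) > rr_prob:
    break  # terminate the path
current_path_throughput /= rr_prob  # keep the estimator unbiased
\end{lstlisting}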
The pseudo code looks like this:
\begin{lstlisting}[language=python]
def L(screen_pos, rng):
ray = sample_primary(camera, screen_pos, rng)
current_medium = camera.medium
current_path_throughput = 1
radiance = 0
bounces = 0
    while True:
scatter = False
isect = intersect(scene, ray)
# isect might not intersect a surface, but we might be in a volume
transmittance = 1
trans_pdf = 1
if current_medium:
# sample t s.t. p(t) ~ exp(-sigma_t * t)
# compute transmittance and trans_pdf
# if t < t_hit, set scatter = True
# ...
ray.org = ray.org + t * ray.dir
current_path_throughput *= (transmittance / trans_pdf)
if not scatter:
# reach a surface, include emission
radiance += current_path_throughput * Le(isect)
if bounces == max_depth - 1 and max_depth != -1:
# reach maximum bounces
break
if not scatter and isect:
if isect.material_id == -1:
# index-matching interface, skip through it
                current_medium = update_medium(isect, ray, current_medium)
bounces += 1
continue
        # sample next direction & update path throughput
if scatter:
next_dir = sample_phase_function(-ray.dir, rng)
current_path_throughput *=
(phase_function(-ray.dir, next_dir) / sample_phase_pdf(-ray.dir, next_dir)) * sigma_s
else:
# Hit a surface -- don't need to deal with this yet
break
# Russian roulette
rr_prob = 1
if (bounces >= rr_depth):
rr_prob = min(current_path_throughput, 0.95)
if next(rng) > rr_prob:
break
else:
current_path_throughput /= rr_prob
bounces += 1
return radiance
\end{lstlisting}
The pseudocode relies on the \lstinline{update_medium} routine, which updates the current medium the ray lies in:
\begin{lstlisting}[language=python]
def update_medium(isect, ray, medium):
if (isect.interior_medium != isect.exterior_medium):
# At medium transition. Update medium.
if dot(ray.dir, isect.geometry_normal) > 0:
medium = isect.exterior_medium_id
else:
medium = isect.interior_medium_id
return medium
\end{lstlisting}
We only update the medium if the intersection specifies a ``medium transition'', where the interior medium
is different from the exterior medium.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{imgs/volpath_3.png}
\caption{How the setup in Figure~\ref{fig:volpath3_illustration} would be rendered. The top left is a light source, and the bottom right is a volume with an index-matched interface.}
\label{fig:volpath3}
\end{figure}
\paragraph{Task (13\%).} You will implement the algorithm above in the \lstinline{vol_path_tracing_3} function in \lstinline{vol_path_tracing.h}. For sampling the phase function $\rho$, use the \lstinline{sample_phase_function} and \lstinline{pdf_sample_phase} functions in \lstinline{phase_function.h}.
Test your rendering using the scene \lstinline{scenes/volpath_test/volpath_test3.xml}. You should get an image that looks like Figure~\ref{fig:volpath3}. Again, the image can be a bit noisy since we have not implemented next event estimation.
\paragraph{Question(s) (5\%).} Play with the parameters $\sigma_s$, $\sigma_a$ of different volumes, and change \lstinline{max_depth}, how do they affect the final image? Why?
\section{Multiple monochromatic homogeneous volumes with absorption and multiple-scattering with both phase function sampling and next event estimation, no surface lighting}
\begin{figure}
\includegraphics[width=\linewidth]{imgs/volume_next_event_estimation.pdf}
\caption{Our 4th volumetric path tracer adds next event estimation to the previous one. Given a point in the volume, we first select a point on the light, then we trace a shadow ray towards the light. Unlike normal next event estimation, where the visibility is a binary function, here the visibility is determined by the transmittance between the volume point and the light source. Our shadow ray needs to trace through index-matching surfaces and skip through them, accounting for the transmittance of all the media in between.}
\label{fig:volpath4_illustration}
\end{figure}
Sampling only using the phase function is inefficient, especially when the light source is relatively small. Our fourth volumetric path tracer adds the next event estimation to the sampling (Figure~\ref{fig:volpath4_illustration}) as our section title grows out of control. Instead of sampling from the phase function, we pick a point on the light source, and trace a shadow ray towards the light to account for the transmittance in between. We also need to account for \emph{index-matching surfaces} -- surfaces that don't have a material assigned to them (so the index of refraction is the same inside and outside). Our shadow ray will pass through all index-matching surfaces and account for all the transmittance in between.
Mathematically, the next event estimation is a change of variable of the spherical single scattering integral:
\begin{equation}
\int_{S^2} \rho(\mathbf{p}, \omega, \omega') \tau(\mathbf{p}, \mathbf{p}') L_e(\mathbf{p}') \mathrm{d} \omega' =
\int_{E} \rho(\mathbf{p}, \omega, \omega') \tau(\mathbf{p}, \mathbf{p}') L_e(\mathbf{p}') \frac{\left|\omega' \cdot \mathbf{n}_{\mathbf{p}'}\right|}{\left|\mathbf{p} - \mathbf{p}'\right|^2} \mathrm{d} \mathbf{p}',
\end{equation}
where $\mathbf{p}'$ are points on the light sources, $\tau(\mathbf{p}, \mathbf{p}')$ is the transmittance between point $\mathbf{p}$ and $\mathbf{p}'$. We also need to include the geometry term $\frac{\left|\omega' \cdot \mathbf{n}_{\mathbf{p}'}\right|}{\left|\mathbf{p} - \mathbf{p}'\right|^2}$ as the Jacobian of the change of variable.
The pseudo-code for the volumetric next event estimation looks like this:
\begin{lstlisting}[language=python]
def next_event_estimation(p, current_medium):
    p_prime = sample_point_on_light(p)
    dir_light = normalize(p_prime - p)  # direction from p towards the light
# Compute transmittance to light. Skip through index-matching shapes.
T_light = 1
shadow_medium = current_medium
shadow_bounces = 0
p_trans_dir = 1 # for multiple importance sampling
while True:
shadow_ray = Ray(p, p_prime - p)
isect = intersect(scene, shadow_ray)
next_t = distance(p, p_prime)
if isect:
next_t = distance(p, isect.position)
# Account for the transmittance to next_t
if shadow_medium:
T_light *= exp(-sigma_t * next_t)
p_trans_dir *= exp(-sigma_t * next_t)
if not isect:
# Nothing is blocking, we're done
break
else:
# Something is blocking: is it an opaque surface?
if isect.material_id >= 0:
# we're blocked
return 0
# otherwise, it's an index-matching surface and
# we want to pass through -- this introduces
# one extra connection vertex
shadow_bounces += 1
if max_depth != -1 and bounces + shadow_bounces >= max_depth:
# Reach the max no. of vertices
return 0
shadow_medium = update_medium(isect, ray, shadow_medium)
p = p + next_t * dir_light
if T_light > 0:
# Compute T_light * G * f * L & pdf_nee
# ...
contrib = T_light * G * f * L / pdf_nee
# Multiple importance sampling: it's also possible
# that a phase function sampling + multiple exponential sampling
# will reach the light source.
# We also need to multiply with G to convert phase function PDF to area measure.
pdf_phase = pdf_sample_phase(phase_function, dir_view, dir_light) * G * p_trans_dir
# power heuristics
w = (pdf_nee * pdf_nee) / (pdf_nee * pdf_nee + pdf_phase * pdf_phase)
return w * contrib
return 0
\end{lstlisting}
Then we can put \lstinline{next_event_estimation()} in our previous code and include its contribution. Remember to multiply the result of next event estimation with the transmittance from the previous path vertex to $p$ and $\sigma_s(\mathbf{p})$.
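For instance, a sketch of how that contribution might be added right after a scatter event at point \lstinline{p} in the main loop (the variable names follow the earlier pseudocode):
\begin{lstlisting}[language=python]
if scatter:
    # current_path_throughput already contains the transmittance (divided by
    # its pdf) from the previous path vertex up to p (= ray.org here)
    nee_contrib = next_event_estimation(p, current_medium)
    radiance += current_path_throughput * sigma_s * nee_contrib
\end{lstlisting}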
Next, we need to include the multiple importance sampling weight for phase function sampling as well. Previously we included that contribution whenever our path hit a light source:
\begin{lstlisting}[language=python]
if not scatter:
# reach a surface, include emission
radiance += current_path_throughput * Le(isect)
\end{lstlisting}
This time, we need to multiply the contribution by the multiple importance sampling weight $\frac{p_{\text{phase}}^2}{p_{\text{phase}}^2 + p_{\text{nee}}^2}$ -- how do we get those values? The problem is that the quantities required for computing these two PDFs might have been produced several bounces ago, since our main loop in \lstinline{L} skips through index-matching surfaces. My strategy for resolving this is to cache the necessary quantities during the main loop. We introduce \lstinline{dir_pdf} (the PDF of the latest phase function sampling), \lstinline{nee_p_cache} (the last position that can issue a next event estimation -- a position on an index-matching surface cannot issue one), and \lstinline{multi_trans_pdf} (the product of the transmittance sampling PDFs accumulated while passing through index-matching surfaces since the last phase function sampling):
\begin{lstlisting}[language=python]
def L(screen_pos, rng):
# ...
current_path_throughput = 1
radiance = 0
bounces = 0
    dir_pdf = 0  # in solid angle measure
nee_p_cache = None
multi_trans_pdf = 1
while True:
# ...
\end{lstlisting}
We update these cache variables accordingly, and then when we hit a light source in the main loop, we use them to
compute the multiple importance sampling weight:
\begin{lstlisting}[language=python]
# If we reach a surface and didn't scatter, include the emission.
if not scatter:
if bounces == 0:
# This is the only way we can see the light source, so
# we don't need multiple importance sampling.
radiance += current_path_throughput * Le(isect);
else:
# Need to account for next event estimation
light_point = isect
# Note that pdf_nee needs to account for the path vertex that issued
# next event estimation potentially many bounces ago.
# The vertex position is stored in nee_p_cache.
pdf_nee = pdf_point_on_light(isect.light, light_point, nee_p_cache, scene)
# The PDF for sampling the light source using phase function sampling + transmittance sampling
# The directional sampling pdf was cached in dir_pdf in solid angle measure.
# The transmittance sampling pdf was cached in multi_trans_pdf.
        dir_pdf_ = dir_pdf * multi_trans_pdf * G
        w = (dir_pdf_ * dir_pdf_) / (dir_pdf_ * dir_pdf_ + pdf_nee * pdf_nee)
        # current_path_throughput already accounts for transmittance.
        radiance += current_path_throughput * emission(light_point, -ray.dir, scene) * w
\end{lstlisting}
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{imgs/volpath_4.png}
\includegraphics[width=0.48\linewidth]{imgs/volpath_4_2.png}
\caption{Test scenes for our fourth volumetric path tracer. The left scene is similar to the previous test scene in volumetric path tracer 3, but the light source is smaller. The right scene contains a dense volumetric ball floating in the air.}
\label{fig:volpath4}
\end{figure}
\paragraph{Debugging tip.} Implement the method without multiple importance sampling first. Render the previous scene from our third volumetric path tracer and make sure it converges to the same image.
\paragraph{Task (13\%).} You will implement the algorithm above in the \lstinline{vol_path_tracing_4} function in \lstinline{vol_path_tracing.h}. Test your rendering using the scenes \lstinline{scenes/volpath_test/volpath_test4.xml} and \lstinline{scenes/volpath_test/volpath_test4_2.xml}. You should get images that look like the ones in Figure~\ref{fig:volpath4}.
\paragraph{Question(s) (5\%).} In \lstinline{scenes/volpath_test/volpath_test4_2.xml}, we render a scene with an object composed of dense volume. How does it compare to rendering the object directly with a Lambertian material? Why are they alike or different?
\section{Multiple monochromatic homogeneous volumes with absorption and multiple-scattering with both phase function sampling and next event estimation, with surface lighting}
We are finally adding surface lighting to our volumetric renderer. This should be much easier compared to the last two parts. So far, whenever we encounter a surface, we use $L_e$ to represent its emission. Adding surface lighting is just replacing $L_e$ with the full rendering equation (that includes transmittance and volumetric scattering). Code-wise, we just need to go through all the places where we sample or evaluate the phase function, and also consider the case where we hit a surface and include the BSDF sampling and evaluation. I will let you figure out the details!
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{imgs/volpath_5.png}
\includegraphics[width=0.48\linewidth]{imgs/volpath_5_2.png}
\caption{Test scenes for our fifth volumetric path tracer. The left scene contains a volumetric ball and an opaque Lambertian ball (with red reflectance). The right scene contains an outer sphere with a dielectric interface and an inner sphere with Lambertian material, and between them is a dense volume.}
\label{fig:volpath5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{imgs/volpath_5_cbox.png}
\includegraphics[width=0.48\linewidth]{imgs/volpath_5_cbox_teapot.png}
\caption{Cornell box scenes that we can render with our fifth volumetric path tracer.}
\label{fig:volpath5_cbox}
\end{figure}
\paragraph{Task (13\%).} You will add surface lighting in the \lstinline{vol_path_tracing_5} function in \lstinline{vol_path_tracing.h}. Test your rendering using the scenes \lstinline{scenes/volpath_test/volpath_test5.xml} and \lstinline{scenes/volpath_test/volpath_test5_2.xml}. You should get images that look like the ones in Figure~\ref{fig:volpath5}. Also check out \lstinline{scenes/volpath_test/vol_cbox.xml} and \lstinline{scenes/volpath_test/vol_cbox_teapot.xml} to see what we can render now (Figure~\ref{fig:volpath5_cbox})!
\paragraph{Question(s) (5\%).} Play with the index of refraction parameter of the dielectric interface in \\
\lstinline{scenes/volpath_test/volpath_test5_2.xml}. How does that affect appearance? Why?
\section{Multiple chromatic heterogeneous volumes with absorption and multiple-scattering with both phase function sampling and next event estimation, with surface lighting}
We reach the final stage of this homework. This last part is a bit tough due to the mathematical complexity (but the implementation isn't that bad), so don't worry too much if you cannot finish it. Even if you don't finish this part, you will still get $90\%$ of the score. But the math is fun and you should try to understand it!
So far, we have been assuming that 1) the volumes are monochromatic, and 2) they are homogeneous. We are going to solve the two problems at once with a super clever idea called \emph{null-scattering}. Let's focus on heterogeneous media first and look at the radiative transfer equation (assuming no volumetric emission):
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t}L(\mathbf{p}(t), \omega) = -\sigma_t(\mathbf{p}(t)) L(\mathbf{p}(t), \omega) + \sigma_s(\mathbf{p}(t)) \int_{S^2} \rho(\omega, \omega') L(\mathbf{p}(t), \omega') d\omega',
\end{equation}
with boundary condition $L(\mathbf{p}(t_{\text{hit}}), \omega) = L_{\text{surface}}(\mathbf{p}(t_{\text{hit}}))$ to account for surface lighting.
Integrating over distance, we get our integral volume rendering equation:
\begin{equation}
L(\mathbf{p}(t), \omega) = \int_{0}^{t_{\text{hit}}} \tau(\mathbf{p}(0), \mathbf{p}(t'))
\left(\sigma_s(\mathbf{p}(t')) \int_{S^2} \rho(\omega, \omega') L(\mathbf{p}(t'), \omega') \mathrm{d}\omega' \right) \mathrm{d}t' + \tau(\mathbf{p}(0), \mathbf{p}(t_{\text{hit}})) L_{\text{surface}}(\mathbf{p}(t_{\text{hit}})),
\end{equation}
where $\tau$ is the transmittance:
\begin{equation}
\tau(\mathbf{p}(0), \mathbf{p}(t')) = \exp\left(-\int_{0}^{t'} \sigma_t\left(\mathbf{p}(t'')\right) \mathrm{d}t'' \right).
\end{equation}
The transmittance has a closed form when the volume is homogeneous. However, when we have heterogeneous volumes (i.e., $\sigma_t$ changes with position), we need to use Monte Carlo to estimate the transmittance. The exponential makes an unbiased estimate tricky: $E[\exp(X)] \neq \exp(E[X])$ for most random variables $X$, where $E$ denotes expectation. This means that when we have an estimate $X \approx \int_{0}^{t'} \sigma_t\left(\mathbf{p}(t'')\right) \mathrm{d}t''$ of the integral, even if $X$ is unbiased ($E[X] = \int_{0}^{t'} \sigma_t\left(\mathbf{p}(t'')\right) \mathrm{d}t''$), the exponential of $X$ is not an unbiased estimate of the exponential of the integral.
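As a quick standalone sanity check of this bias (this snippet is not part of the renderer), consider a made-up density $\sigma_t(u) = 2u$ on $[0,1]$, whose integral is exactly $1$: the naive estimator $\exp(-X)$ converges to roughly $0.43$, while the true transmittance is $e^{-1} \approx 0.37$.
\begin{lstlisting}[language=python]
import math, random

def sigma_t(u):
    return 2.0 * u   # made-up heterogeneous density; its integral over [0, 1] is exactly 1

N = 200000
xs = [sigma_t(random.random()) for _ in range(N)]   # one-sample estimates X of the integral
print(sum(xs) / N)                                  # ~1.0: X is unbiased for the integral
print(sum(math.exp(-x) for x in xs) / N)            # ~0.43: E[exp(-X)] ...
print(math.exp(-1.0))                               # ~0.37: ... but the true transmittance is exp(-1)
\end{lstlisting}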
\paragraph{Homogenized free-flight sampling.} To resolve this, we apply a trick called \emph{homogenization}. That is, we convert the heterogeneous medium into a homogeneous one by inserting \emph{fake} particles that do not scatter light. Then we can use the closed-form solution for the homogeneous medium to obtain an answer. Mathematically, we modify the radiative transfer equation as follows:
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t}L(\mathbf{p}(t), \omega) = -(\sigma_t(\mathbf{p}(t)) + \sigma_n(\mathbf{p}(t))) L(\mathbf{p}(t), \omega) + \sigma_n(\mathbf{p}(t)) L(\mathbf{p}(t), \omega) + \sigma_s(\mathbf{p}(t)) \int_{S^2} \rho(\omega, \omega') L(\mathbf{p}(t), \omega') d\omega',
\end{equation}
where $\sigma_n$ is the density of the \emph{fake} particles. All we did was add $-\sigma_n L + \sigma_n L$ to the right-hand side, so we did not alter the values of the radiative transfer equation at all.
Now, if we choose the fake particle density $\sigma_n(\mathbf{p}(t))$ such that $\sigma_t(\mathbf{p}(t)) + \sigma_n(\mathbf{p}(t))$ is a constant for all $t$, then we convert the radiative transfer equation back to a homogeneous medium!
What constant should we choose? A common choice is the upper bound of $\sigma_t(\mathbf{p}(t))$ for all $t$, so that $\sigma_n(\mathbf{p}(t)) \geq 0$. We call the upper bound the \emph{majorant} ($\sigma_m$). Now we can write the volume integral equation as:
\begin{equation}
\begin{aligned}
L(\mathbf{p}(t), \omega) = &\int_{0}^{t_{\text{hit}}} \tau_m(\mathbf{p}(0), \mathbf{p}(t'))
\left(\sigma_n(\mathbf{p}(t')) L(\mathbf{p}(t'), \omega) + \sigma_s(\mathbf{p}(t')) \int_{S^2} \rho(\omega, \omega') L(\mathbf{p}(t'), \omega') \mathrm{d}\omega' \right) \mathrm{d}t' + \\
&\tau_m(\mathbf{p}(0), \mathbf{p}(t_{\text{hit}})) L_{\text{surface}}(\mathbf{p}(t_{\text{hit}})),
\end{aligned}
\end{equation}
where $\tau_m$ is the \emph{homogenized} transmittance:
\begin{equation}
\tau_m(\mathbf{p}(0), \mathbf{p}(t')) = \exp\left(-\sigma_m t'\right).
\end{equation}
To evaluate $L$, we importance sample the homogenized transmittance $\tau_m$. Every time we sample a distance $t'$ based on the homogenized transmittance, three things can happen:
\begin{enumerate}
\item We hit the surface ($t' \geq t_{\text{hit}}$), and need to compute $L_{\text{surface}}$.
\item We hit a \emph{real} particle, and need to compute the $S^2$ integral to scatter the particle.
\item We hit a \emph{fake} particle, and need to keep evaluating $L$ by continuing the ray in a straight line.
\end{enumerate}
The probability of the first event happening is the same as in Equation~\ref{eq:hit_surface_prob}. For the second and third events, we can use whatever probabilities we like (it's just importance sampling). We will choose the \emph{real} probability to be $\frac{\sigma_t}{\sigma_m}$ and the \emph{fake} probability to be $\frac{\sigma_n}{\sigma_m}$ (a better way to assign the probability is to set the real probability to $\frac{\sigma_s}{\sigma_s + \sigma_n}$, but then it is possible that $\sigma_s + \sigma_n = 0$ and we would need to handle those corner cases, so we opt for simplicity here). We arrive at the classical \emph{delta tracking} (or Woodcock tracking) algorithm~\cite{Woodcock:1965:TGC}.
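For concreteness, here is a minimal monochromatic sketch of delta tracking. It assumes a hypothetical scene-provided lookup \lstinline{sigma_t_at(p)} and a known majorant, and it is not the final implementation (which appears later and also handles RGB channels):
\begin{lstlisting}[language=python]
import math

def delta_tracking(ray_org, ray_dir, t_hit, sigma_m, sigma_t_at, rng):
    # ray_org, ray_dir: ray origin/direction (3-tuples, ray_dir normalized); rng() returns U[0, 1).
    t = 0.0
    while True:
        t += -math.log(1 - rng()) / sigma_m               # tentative collision in the homogenized medium
        if t >= t_hit:
            return None                                   # case 1: reached the surface first
        p = tuple(o + t * d for o, d in zip(ray_org, ray_dir))
        if rng() < sigma_t_at(p) / sigma_m:
            return t                                      # case 2: real particle -> scatter here
        # case 3: fake particle -> keep marching
\end{lstlisting}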
\paragraph{Homogenized next event estimation.} Next, we need to evaluate the transmittance (instead of importance sampling) for the next event estimation.
We will use the same homogenization trick.
We have the following integral to estimate between two points $\mathbf{p}(0)$ and $\mathbf{p}(t')$:
\begin{equation}
\tau(\mathbf{p}(0), \mathbf{p}(t')) = \tau_\mathbf{p}(t') = \exp\left(-\int_{0}^{t'}\sigma_t(\mathbf{p}(t''))\mathrm{d}t''\right).
\end{equation}
We again observe that $\tau_\mathbf{p}(t')$ is the solution of an ordinary differential equation
\begin{equation}
\begin{aligned}
\frac{\mathrm{d}\tau_\mathbf{p}\left(t'\right)}{\mathrm{d}t'} &= -\sigma_t(\mathbf{p}(t'))\tau_\mathbf{p}\left(t'\right) = -\sigma_m \tau_\mathbf{p}\left(t'\right) + \sigma_n(\mathbf{p}(t'))\tau_\mathbf{p}\left(t'\right), \\
\tau_\mathbf{p}(0) &= 1
\end{aligned}
\end{equation}
where we homogenize the ODE using the majorant by substituting $\sigma_t(\mathbf{p}(t')) = \sigma_m - \sigma_n(\mathbf{p}(t'))$.
To solve this ODE, we apply a change of variable $\tilde{\tau}_{\mathbf{p}}(t') = \exp\left(\sigma_m t'\right) \tau_{\mathbf{p}}(t')$. This gives us
\begin{equation}
\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t'}\tilde{\tau}_{\mathbf{p}}(t') &= \frac{\mathrm{d}}{\mathrm{d}t'}\left(\exp\left(\sigma_m t'\right) \tau_{\mathbf{p}}(t')\right) = \sigma_n(\mathbf{p}(t')) \tilde{\tau}_{\mathbf{p}}(t') \\
\tilde{\tau}_{\mathbf{p}}(0) &= 1
\end{aligned}.
\end{equation}
Instead of solving the ODE above using exponentiation (which gives us back the exponential integral), we simply integrate both sides and get $\tilde{\tau}_{\mathbf{p}}(t') = 1 + \int_0^{t'} \sigma_n(\mathbf{p}(t'')) \tilde{\tau}_{\mathbf{p}}(t'') \mathrm{d}t''$. Changing the variable back to $\tau$ from $\tilde{\tau}$ gives us:
\begin{equation}
\begin{aligned}
\tau_{\mathbf{p}}(t') = \frac{\tilde{\tau}_{\mathbf{p}}(t')}{\exp\left(\sigma_m t'\right)} &= \exp\left(-\sigma_m t'\right) \left(1 + \int_{0}^{t'} \exp\left(\sigma_m t''\right) \sigma_n(\mathbf{p}(t'')) \tau_{\mathbf{p}}(t'') \mathrm{d}t'' \right) \\
&= \exp\left(-\sigma_m t'\right) + \int_{0}^{t'} \exp\left(\sigma_m (t''-t')\right) \sigma_n(\mathbf{p}(t'')) \tau_{\mathbf{p}}(t'') \mathrm{d}t'',
\end{aligned}
\label{eq:transmittance_recursive_integral}
\end{equation}
and now we have a recursive integral equation that we can directly estimate using Monte Carlo. (We could also get an integral that involves neither homogenization nor exponentiation by simply integrating both sides of $\frac{\mathrm{d}\tau_\mathbf{p}\left(t'\right)}{\mathrm{d}t'} = -\sigma_t(\mathbf{p}(t'))\tau_\mathbf{p}\left(t'\right)$, but Monte Carlo sampling of that integral turns out to be very slow; see Georgiev et al.'s article for a deeper discussion~\cite{Georgiev:2019:IFV}.) This change of variable trick is called an \emph{exponential integrator} in the ODE literature (Google it if you're interested ;).
The Monte Carlo sampler we will apply to solve Equation~\ref{eq:transmittance_recursive_integral} is called \emph{ratio tracking}. The idea is to use a path-tracing-like algorithm with Russian roulette. Notice that the recursive integral looks a bit like the rendering equation: we have the \emph{light source term} $\exp\left(-\sigma_m t'\right)$ and the recursive integral $\int_{0}^{t'} \exp\left(\sigma_m (t''-t')\right) \sigma_n(\mathbf{p}(t'')) \tau_{\mathbf{p}}(t'') \mathrm{d}t''$.
Starting from $z=t'$, we sample a step $z_d$ with $p(z_d) \propto \exp\left(-\sigma_m z_d\right)$ and update $z$. If we reach 0, we evaluate the light source term; otherwise, we evaluate the recursive term. One way to think about this is that we hit a fake particle whenever we choose to evaluate the recursive term. The probability of evaluating the light source term is $\exp\left(-\sigma_m z\right)$ (Equation~\ref{eq:hit_surface_prob}), and the probability density of evaluating the recursive term is $\exp\left(-\sigma_m z_d\right) \sigma_m$. We accumulate these probabilities by multiplying them, while also accounting for the integrand.
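Here is a minimal monochromatic sketch of that estimator, again assuming a hypothetical \lstinline{sigma_t_at(p)} lookup and a known majorant; the full chromatic version is folded into the next event estimation pseudo-code further below.
\begin{lstlisting}[language=python]
import math

def ratio_tracking(ray_org, ray_dir, t_prime, sigma_m, sigma_t_at, rng):
    # Unbiased estimate of the transmittance between p(0) and p(t_prime).
    T = 1.0
    t = 0.0
    while True:
        t += -math.log(1 - rng()) / sigma_m               # step sampled with density sigma_m * exp(-sigma_m z)
        if t >= t_prime:
            return T                                      # we evaluated the "light source" term
        p = tuple(o + t * d for o, d in zip(ray_org, ray_dir))
        T *= 1.0 - sigma_t_at(p) / sigma_m                # recursive term: multiply by sigma_n / sigma_m
\end{lstlisting}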
\paragraph{Dealing with chromatic media.} Finally, we need to deal with colors. So far we have been assuming that $\sigma_a$ and $\sigma_s$ are monochromatic, but in practice they can be RGB colors (or even vectors representing a per-wavelength response). We deal with this by sampling a channel (R, G, or B) whenever we need to sample a distance. Importantly, when computing the probability density, we take the average probability density over all channels: this is a form of multiple importance sampling (called \emph{one-sample} multiple importance sampling~\cite{Veach:1995:OCS}).
\paragraph{Multiple importance sampling.} To combine phase function sampling with the next event estimation, we need the PDFs for the ratio tracking (the sequence of decisions we make to evaluate either the recursive term or the source term) during free-flight sampling, and the PDFs of free-flight sampling (the real/fake particle events) during next event estimation. So when evaluating the PDF for one of them, we must also accumulate the PDF of the other.
Combining all of these, let's look at the pseudo-code. For homogenized free-flight sampling, it looks like this:
\begin{lstlisting}[language=python]
scatter = False
transmittance = make_const_spectrum(1)
trans_dir_pdf = make_const_spectrum(1) # PDF for free-flight sampling
trans_nee_pdf = make_const_spectrum(1) # PDF for next event estimation
if in_medium():
majorant = get_majorant(medium, ray);
# Sample a channel for sampling
u = next(rng) # u \in [0, 1]
channel = clamp(int(u * 3), 0, 2)
accum_t = 0
iteration = 0
while True:
if majorant[channel] <= 0:
break
if iteration >= max_null_collisions:
break
t = -log(1 - next(rng)) / majorant[channel]
dt = t_hit - accum_t
# Update accumulated distance
accum_t = min(accum_t + t, t_hit)
if t < dt: # haven't reached the surface
# sample from real/fake particle events
real_prob = sigma_t / majorant
if next(rng) < real_prob[channel]:
# hit a "real" particle
                scatter = True
transmittance *= exp(-majorant * t) / max(majorant)
trans_dir_pdf *= exp(-majorant * t) * majorant * real_prob / max(majorant)
# don't need to account for trans_nee_pdf since we scatter
break
else:
# hit a "fake" particle
transmittance *= exp(-majorant * t) * sigma_n / max(majorant)
trans_dir_pdf *= exp(-majorant * t) * majorant * (1 - real_prob) / max(majorant)
trans_nee_pdf *= exp(-majorant * t) * majorant / max(majorant)
else: # reach the surface
transmittance *= exp(-majorant * dt)
trans_dir_pdf *= exp(-majorant * dt)
trans_nee_pdf *= exp(-majorant * dt)
break
iteration += 1
\end{lstlisting}
Two crucial differences between the pseudo-code and the math above are:
1) We terminate the loop when the iteration count reaches a maximum number of fake-particle collisions (the default is $1000$). While this is not mathematically correct, I found that numerical error can sometimes lead to unbounded loops.
2) We divide both the transmittance and the PDFs by the maximum component of the majorant. This improves numerical robustness: whenever we hit a fake particle, both the transmittance and
the free-flight PDF need to be multiplied by $\sigma_n$ (note that $\sigma_m (1 - \sigma_t / \sigma_m) = \sigma_n$),
which can be a few hundred or larger. If we did not normalize these values, the numbers would
quickly blow up and overflow to infinity.
For the transmittance estimation in next event estimation, the pseudo-code looks like this:
\begin{lstlisting}[language=python]
p_prime = sample_point_on_light(p)
# Compute transmittance to light. Skip through index-matching shapes.
T_light = make_const_spectrum(1)
p_trans_nee = make_const_spectrum(1)
p_trans_dir = make_const_spectrum(1)
while True:
shadow_ray = Ray(p, p_prime - p)
isect = intersect(scene, shadow_ray)
next_t = distance(p, p_prime)
if isect:
next_t = distance(p, isect.position)
# Account for the transmittance to next_t
if shadow_medium:
u = next(rng) # Sample a channel for sampling
channel = clamp(int(u * 3), 0, 2)
        majorant = get_majorant(shadow_medium, shadow_ray)  # bound on sigma_t inside the shadow medium
        accum_t = 0
        iteration = 0
while True:
if majorant[channel] <= 0:
break
            if iteration >= max_null_collisions:
break
t = -log(1 - next(rng)) / majorant[channel]
dt = next_t - accum_t
accum_t = min(accum_t + t, next_t)
if t < dt:
# didn't hit the surface, so this is a null-scattering event
T_light *= exp(-majorant * t) * sigma_n / max(majorant)
p_trans_nee *= exp(-majorant * t) * majorant / max(majorant)
real_prob = sigma_t / majorant
p_trans_dir *= exp(-majorant * t) * majorant * (1 - real_prob) / max(majorant)
if max(T_light) <= 0: # optimization for places where sigma_n = 0
break
else: # hit the surface
T_light *= exp(-majorant * dt);
p_trans_nee *= exp(-majorant * dt);
p_trans_dir *= exp(-majorant * dt);
iteration += 1
# same as the previous next event estimation code...
\end{lstlisting}
Notice that whenever we compute the Monte Carlo estimate of the transmittance, we need to divide it by the
average of the PDFs over the RGB channels. For example, when updating the path throughput:
\begin{lstlisting}
current_path_throughput *= (transmittance / avg(trans_dir_pdf))
\end{lstlisting}
The same applies when computing the multiple importance sampling weights.
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{imgs/volpath_6.png}
\caption{Test scene for our final volumetric path tracer.}
\label{fig:volpath6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{imgs/hetvol.png}
\caption{Monochromatic heterogeneous volume. Scene data courtesy of Wenzel Jakob.}
\label{fig:hetvol}
\end{figure}
\paragraph{Task (15\%).} You will implement the algorithms above in the \lstinline{vol_path_tracing} function in \lstinline{vol_path_tracing.h}. For getting the majorant, use \lstinline{get_majorant} defined in \lstinline{medium.h}. For accessing \lstinline{max_null_collisions}, use \lstinline{scene.options.max_null_collisions}. Test your rendering using the scenes \lstinline{scenes/volpath_test/volpath_test6.xml} (chromatic homogeneous medium), \lstinline{scenes/volpath_test/hetvol.xml}, \lstinline{scenes/volpath_test/hetvol_colored.xml}. You should get images that look like the ones in Figure~\ref{fig:gallery},~\ref{fig:volpath6}, and~\ref{fig:hetvol}.
\paragraph{Question(s) (5\%).} Play with the \lstinline{g} parameter of the Henyey-Greenstein phase function in \lstinline{scenes/volpath_test/hetvol.xml} (the valid range is $(-1, 1)$). How does it change the appearance? Why?
\paragraph{Bonus (10\%).} Generate some heterogeneous volumes yourself and render them! Look at \lstinline{volume.cpp} for the format of the 3D volume grid. If you did this bonus, please submit the scene files and renderings, and mention it in the README file.
\bibliographystyle{plain}
\bibliography{refs}
\end{document}
%&latex
\documentclass[letterpaper,11pt]{article}
\usepackage{graphicx}
\usepackage[margin=1.0in]{geometry}
\usepackage[hyphens]{url}
\usepackage{hyperref}
\makeatletter
\setlength{\@fptop}{0pt}
\makeatother
\title{TEBreak: Generalised insertion detection}
\author{Adam D. Ewing ([email protected])}
\begin{document}
\date{April 10, 2017}
\maketitle
\section{Introduction}
\subsection{Software Dependencies and Installation}
TEBreak requires the following software packages be available:
\begin{enumerate}
\item samtools (\url{http://samtools.sourceforge.net/})
\item bwa (\url{http://bio-bwa.sourceforge.net/})
\item LAST (\url{http://last.cbrc.jp/})
\item minia (\url{http://minia.genouest.org/})
\item exonerate (\url{https://github.com/adamewing/exonerate.git})
\end{enumerate}
Please run the included setup.py to check that external dependencies are installed properly and to install the required python libraries:
\begin{verbatim}
python setup.py build
python setup.py install
\end{verbatim}
\section{Testing / example run}
\subsection{Run tebreak.py}
TEBreak includes a small example intended to test whether the software is installed properly.
Starting in the TEBreak root directory, try the following:
\begin{verbatim}
tebreak/tebreak.py \
-b test/data/example.ins.bam \
-r test/data/Homo_sapiens_chr4_50000000-60000000_assembly19.fasta \
--pickle test/example.pickle \
--detail_out test/example.tebreak.detail.out
\end{verbatim}
This will yield two files, in this case test/example.pickle and test/example.tebreak.detail.out. If not specified, the output filenames will be based on the input BAM file name (or the first BAM in a comma delimited list of BAMs). The "pickle" file contains information that will be passed on to subsequent steps in TEBreak, the "detail" output contains raw putative insertion sites which generally require further refinement, annotation, and filtering, but can be useful in troubleshooting when a true positive site is missed.
\subsection{Run resolve.py}
The next step is to resolve putative insertions, in this case human transposable element insertions, using resolve.py:
\begin{verbatim}
tebreak/resolve.py \
-p test/example.pickle \
-i lib/teref.human.fa \
--detail_out test/example.resolve.detail.out \
-o test/example.table.txt
\end{verbatim}
If all went well, test/example.table.txt should contain evidence for 5 transposable element insertions (2 L1s and 3 Alus). If it does not, please double check that all prerequisites are installed, try again, and e-mail me at [email protected] with error messages if you are still not having any luck.
\section{Insertion site discovery (tebreak.py)}
\subsection{Usage}
The following output can be obtained by running tebreak.py -h:
\begin{verbatim}
usage: tebreak.py [-h] -b BAM -r BWAREF [-p PROCESSES] [-c CHUNKS]
[-i INTERVAL_BED] [-d DISCO_TARGET] [--minMWP MINMWP]
[--min_minclip MIN_MINCLIP] [--min_maxclip MIN_MAXCLIP]
[--min_sr_per_break MIN_SR_PER_BREAK]
[--min_consensus_score MIN_CONSENSUS_SCORE] [-m MASK]
[--rpkm_bam RPKM_BAM] [--max_fold_rpkm MAX_FOLD_RPKM]
[--max_ins_reads MAX_INS_READS]
[--min_split_reads MIN_SPLIT_READS]
[--min_prox_mapq MIN_PROX_MAPQ]
[--max_N_consensus MAX_N_CONSENSUS]
[--exclude_bam EXCLUDE_BAM]
[--exclude_readgroup EXCLUDE_READGROUP]
[--max_bam_count MAX_BAM_COUNT] [--map_tabix MAP_TABIX]
[--min_mappability MIN_MAPPABILITY]
[--max_disc_fetch MAX_DISC_FETCH]
[--min_disc_reads MIN_DISC_READS] [--bigcluster BIGCLUSTER]
[--skipbig] [--tmpdir TMPDIR] [--pickle PICKLE]
[--detail_out DETAIL_OUT] [--debug]
Find inserted sequences vs. reference
optional arguments:
-h, --help show this help message and exit
-b BAM, --bam BAM target BAM(s): can be comma-delimited list or .txt
file with bam locations
-r BWAREF, --bwaref BWAREF
bwa/samtools indexed reference genome
-p PROCESSES, --processes PROCESSES
split work across multiple processes
-c CHUNKS, --chunks CHUNKS
split genome into chunks (default = # processes),
helps control memory usage
-i INTERVAL_BED, --interval_bed INTERVAL_BED
BED file with intervals to scan
-d DISCO_TARGET, --disco_target DISCO_TARGET
limit breakpoint search to discordant mate-linked
targets (e.g. generated with
/lib/make_discref_hg19.sh)
--minMWP MINMWP minimum Mann-Whitney P-value for split qualities
(default = 0.01)
--min_minclip MIN_MINCLIP
min. shortest clipped bases per cluster (default = 3)
--min_maxclip MIN_MAXCLIP
min. longest clipped bases per cluster (default = 10)
--min_sr_per_break MIN_SR_PER_BREAK
minimum split reads per breakend (default = 1)
--min_consensus_score MIN_CONSENSUS_SCORE
quality of consensus alignment (default = 0.9)
-m MASK, --mask MASK BED file of masked regions
--rpkm_bam RPKM_BAM use alternate BAM(s) for RPKM calculation: use
original BAMs if using reduced BAM(s) for -b/--bam
--max_fold_rpkm MAX_FOLD_RPKM
ignore insertions supported by rpkm*max_fold_rpkm
reads (default = None (no filter))
--max_ins_reads MAX_INS_READS
maximum number of reads to use per insertion call
(default = 100000)
--min_split_reads MIN_SPLIT_READS
minimum total split reads per insertion call (default
= 4)
--min_prox_mapq MIN_PROX_MAPQ
minimum map quality for proximal subread (default =
10)
--max_N_consensus MAX_N_CONSENSUS
exclude breakend seqs with > this number of N bases
(default = 4)
--exclude_bam EXCLUDE_BAM
may be comma delimited
--exclude_readgroup EXCLUDE_READGROUP
may be comma delimited
--max_bam_count MAX_BAM_COUNT
maximum number of bams supporting per insertion
--map_tabix MAP_TABIX
tabix-indexed BED of mappability scores
--min_mappability MIN_MAPPABILITY
minimum mappability (default = 0.1; only matters with
--map_tabix)
--max_disc_fetch MAX_DISC_FETCH
maximum number of discordant reads to fetch per
insertion site per BAM (default = 50)
--min_disc_reads MIN_DISC_READS
if using -d/--disco_target, minimum number of
discordant reads to trigger a call (default = 4)
--bigcluster BIGCLUSTER
set big cluster warning threshold (default = 1000)
--skipbig drop clusters over size set by --bigcluster
--tmpdir TMPDIR temporary directory (default = /tmp)
--pickle PICKLE pickle output name
--detail_out DETAIL_OUT
file to write detailed output
--debug
\end{verbatim}
\subsection{Description}
Insertion sites are discovered through clustering and scaffolding of clipped reads. Additional support is obtained through local assembly of discordant read pairs, if applicable. Input requirements are minimal, consisting of one or more indexed BAM files and the reference genome corresponding to the alignments in the BAM file(s). Many additional options are available and recommended to improve performance and/or sensitivity.
\subsection{Input}
\paragraph{BAM Alignment input (-b/--bam)}
BAMs should ideally adhere to the SAM specification, i.e. they should validate via picard's ValidateSamFile. BAMs should be sorted in coordinate order and indexed. BAMs may consist of either paired-end reads, fragment (single-end) reads, or both. Multiple BAM files can be input as a comma-delimited list.
\paragraph{Reference genome (-r/--bwaref)}
The reference genome should be the \textbf{same as that used to create the target BAM file}, specifically the chromosome names and lengths in the reference FASTA must be the same as in the BAM header. The reference must be indexed for bwa (\texttt{bwa index}) and indexed with samtools (\texttt{samtools faidx}).
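For example, the indexing steps would look like the following (the filename here is only illustrative):
\begin{verbatim}
bwa index reference.fasta
samtools faidx reference.fasta
\end{verbatim}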
\paragraph{Pickled output (--pickle)}
Output data in python's pickle format, meant for input to other scripts including resolve.py and picklescreen.py (in /scripts). Default is the basename of the input BAM with a .pickle extension.
\paragraph{Detailed human-readable output (--detail\_out)}
This is a file containing detailed information about consensus reads, aligned segments, and statistics for each putative insertion site detected. Note that this is done with minimal filtering, so these should not be used blindly. Default filename is tebreak.out
\paragraph{Options important for optimal runtime}
\begin{itemize}
\item -p/--processes : Split work across multiple processes. Parallelism is accomplished through python's multiprocessing module. If specific regions are input via -i/--interval\_bed, these intervals will be distributed one per process. If a whole genome is to be analysed (no -i/--interval\_bed), the genome is split into chunks, one per process, unless a specific number of chunks is specified via -c/--chunks.
\item -i/--interval\_bed : BED file specifying intervals to be scanned for insertion evidence, the first three columns must be chromosome, start, end. A number of intervals equal to the value specified by -p/--processes will be searched in parallel. Overrides -c/--chunks.
\item -c/--chunks : if no input BED is specified via -i/--interval\_bed and no set of discordant targets is specified by -d/--disco\_target, split the genome up into some number of "chunks" (default = number of processes). For whole-genome sequencing data split across 32-64 cores, 30000-40000 chunks often yields reasonable runtimes.
\item -m/--mask : BED file of regions to mask. Reads will not be considered if they fall into regions in this file. Usually this file would include regions likely to have poor unique alignability and a tendency to generate large pileups that leads to slow parsing such as centromeres, telomeres, recent repeats, and extended homopolymers and simple sequence repeats. An example is included for hg19/GRCh37 (lib/hg19.centromere\_telomere.bed).
\item -d/--disco\_target : if paired reads are present and the sequencing scheme permits, you can use discordant read pair mapping to narrow down regions to search for split reads as evidence for insertion breakpoints. Most Illumina WGS/WES sequencing approaches support this. This can result in a dramatic speedup with the tradeoff of a slightly reduced sensitivity in some cases.
\end{itemize}
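As a sketch of how these options might be combined for a whole-genome run (all paths and values below are purely illustrative, and the -d target would be generated beforehand, e.g. with lib/make\_discref\_hg19.sh):
\begin{verbatim}
tebreak/tebreak.py \
    -b sample.bam \
    -r reference.fasta \
    -p 32 -c 30000 \
    -m lib/hg19.centromere_telomere.bed \
    -d disco_targets.txt \
    --pickle sample.pickle \
    --detail_out sample.tebreak.detail.out
\end{verbatim}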
\paragraph{Additional options}
\begin{itemize}
\item --minMWP: Minimum value of Mann-Whitney P-value used to check similarity between the distribution of mapped base qualities versus clipped base qualities for a soft-clipped read (default = 0.01).
\item --min\_minclip : the shortest amount of soft clipping that will be considered (default = 3, minimum = 2).
\item --min\_maxclip : For a given cluster of clipped reads, the greatest number of bases clipped from any read in the cluster must be at least this amount (default = 10).
\item --min\_sr\_per\_break : minimum number of split (clipped) reads required to form a cluster (default = 1)
\item --min\_consensus\_score : minimum quality score for the scaffold created from clipped reads (default = 0.95)
\item -m/--mask : BED file of regions to mask. Reads will not be considered if they fall into regions in this file.
\item --rpkm\_bam : use alternate BAM file(s) for RPKM calculation for avoiding over-aligned regions (useful for subsetted BAMs).
\item --max\_fold\_rpkm : reject cluster if RPKM for clustered region is greater than the mean RPKM by this factor (default = None (no filter))
\item --max\_ins\_reads : maximum number of reads per insertion call (default = 100000)
\item --min\_split\_reads : minimum total split (clipped) read count per insertion (default = 4)
\item --min\_prox\_mapq : minimum mapping quality for proximal (within-cluster) alignments (default = 10)
\item --max\_N\_consensus : exclude reads and consensus breakends with greater than this number of N (undefined) bases (default = 4)
\item --exclude\_bam : only consider clusters that do not include reads from these BAM(s) (may be comma-delimited list)
\item --exclude\_readgroup : only consider clusters that do not include reads from these readgroup(s) (may be comma-delimited list)
\item --max\_bam\_count : set maximum number of BAMs involved per insertion
\item --insertion\_library : pre-select insertions containing sequence from specified FASTA file (not generally recommended but may improve running time in some instances)
\item --map\_tabix : tabix-indexed BED of mappability scores. Generate for human with script in lib/human\_mappability.sh.
\item --min\_mappability : minimum mappability for cluster (default = 0.5; only effective if --map\_tabix is also specified)
\item --max\_disc\_fetch : maximum number of discordant mates to fetch per insertion site per BAM. Sites with more than this number of discordant reads associated will be downsampled. This helps with runtime as fetching discordant mates is time-consuming (default = 50).
\item --min\_disc\_reads : sets the threshold for calling a cluster of discordant reads when using -d/--disco\_target (default = 4)
\item --tmpdir : directory for temporary files (default = /tmp)
\end{itemize}
\section{Resolution of specific insertion types (resolve.py)}
\subsection{Usage}
\begin{verbatim}
usage: resolve.py [-h] -p PICKLE [-t PROCESSES] -i INSLIB_FASTA
[-m FILTER_BED] [-v] [-o OUT] [-r REF]
[--max_bam_count MAX_BAM_COUNT]
[--min_ins_match MIN_INS_MATCH]
[--min_ref_match MIN_REF_MATCH]
[--min_cons_len MIN_CONS_LEN] [--min_discord MIN_DISCORD]
[--min_split MIN_SPLIT] [--ignore_filters]
[-a ANNOTATION_TABIX] [--refoutdir REFOUTDIR] [--use_rg]
[--keep_all_tmp_bams] [--detail_out DETAIL_OUT] [--unmapped]
[--usecachedLAST] [--uuid_list UUID_LIST] [--callmuts]
[--tmpdir TMPDIR]
Resolve insertions from TEbreak data
optional arguments:
-h, --help show this help message and exit
-p PICKLE, --pickle PICKLE
pickle file output from tebreak.py
-t PROCESSES, --processes PROCESSES
split work across multiple processes
-i INSLIB_FASTA, --inslib_fasta INSLIB_FASTA
reference for insertions (not genome)
-m FILTER_BED, --filter_bed FILTER_BED
BED file of regions to mask
-v, --verbose output detailed status information
-o OUT, --out OUT output table
-r REF, --ref REF reference genome fasta, expect bwa index, triggers
transduction calling
--max_bam_count MAX_BAM_COUNT
skip sites with more than this number of BAMs (default
= no limit)
--min_ins_match MIN_INS_MATCH
minumum match to insertion library (default 0.95)
--min_ref_match MIN_REF_MATCH
minimum match to reference genome (default 0.98)
--min_cons_len MIN_CONS_LEN
min total consensus length (default=250)
--min_discord MIN_DISCORD
minimum mapped discordant read count (default = 8)
--min_split MIN_SPLIT
minimum split read count (default = 8)
--ignore_filters
-a ANNOTATION_TABIX, --annotation_tabix ANNOTATION_TABIX
can be comma-delimited list
--refoutdir REFOUTDIR
output directory for generating tebreak references
(default=tebreak_refs)
--use_rg use RG instead of BAM filename for samples
--keep_all_tmp_bams leave ALL temporary BAMs (warning: lots of files!)
--detail_out DETAIL_OUT
file to write detailed output
--unmapped report insertions that do not match insertion library
--usecachedLAST try to used cached LAST db, if found
--uuid_list UUID_LIST
limit resolution to UUIDs in first column of input
list (can be tabular output from previous resolve.py
run)
--callmuts detect changes in inserted seq. vs ref. (requires
bcftools)
--tmpdir TMPDIR directory for temporary files
\end{verbatim}
\subsection{Description}
This script is the second step in insertion analysis via TEBreak; it is separated from the initial insertion discovery script (tebreak.py) to facilitate running multiple different analyses on the same set of putative insertion sites (e.g. detecting transposable elements, viral insertions, processed transcript insertions, and novel sequence insertions from the same WGS data).
\subsection{Input}
\paragraph{Insertion call input (-p/--pickle)}
This is the 'pickle' containing information about putative insertion sites derived from tebreak.py (filename specified by --pickle).
\paragraph{Insertion sequence library (-i/--inslib\_fasta)}
A FASTA file containing template insertion sequences (e.g. reference transposable elements, viral sequences, mRNAs, etc.). For transposable elements, sequence superfamilies and subfamilies can be specified by separating with a colon (:) as follows:
\begin{verbatim}
>ALU:AluYa5
GGCCGGGCGCGGTGGCTCACGCCTGTAATCCCAGCACTTTGGGAGGCCGAGGCGGGCGGATCACGAGGTCAGGAGATCG
AGACCATCCCGGCTAAAACGGTGAAACCCCGTCTCTACTAAAAATACAAAAAATTAGCCGGGCGTAGTGGCGGGCGCCT
GTAGTCCCAGCTACTTGGGAGGCTGAGGCAGGAGAATGGCGTGAACCCGGGAGGCGGAGCTTGCAGTGAGCCGAGATCC
CGCCACTGCACTCCAGCCTGGGCGACAGAGCGAGACTCCGTCTCAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
\end{verbatim}
\paragraph{Detailed human-readable output (--detail\_out)}
This is a file containing detailed information about consensus reads, aligned segments, and statistics for each putative insertion site detected. Note that this is done with minimal filtering, so these should not be used blindly. Default filename is the input BAM filename minus the .bam extension plus .resolve.out
\paragraph{Tabular output (-o/--out)}
Filename for the output table which is usually the primary means to assess insertion detection. This format is described in detail later in this document. Default filename is the input BAM filename minus the .bam extension plus .table.txt
\paragraph{Additional options}
\begin{itemize}
\item -t/--processes : Split work across multiple processes.
\item -m/--filter\_bed : BED file of regions to mask (will not be output)
\item -v/--verbose : Output status information.
\item --max\_bam\_count : do not analyse insertions represented by more than this number of BAMs (default = no filter)
\item --min\_ins\_match : minimum percent match to insertion library
\item --min\_ref\_match : minimum percent match to reference genome
\item --min\_discord : minimum number of discordant reads associated with insertion sites (default = 8)
\item --min\_cons\_len : minimum consensus length (sum of 5p and 3p insertion consensus contigs)
\item --min\_split : minimum number of split reads associated with insertion sites (default = 8)
\item --annotation\_tabix : tabix-indexed file, entries overlapping insertions will be annotated in output
\item --refoutdir : non-temporary output directory for various references generated by resolve.py. This includes LAST references for the insertion library and insertion-specific BAMs if --keep\_ins\_bams is enabled.
\item --use\_rg : use the readgroup name instead of the BAM name to count and annotate samples
\item --keep\_all\_tmp\_bams : retain all temporary BAMs in temporary directory (warning: this can easily be in excess of 100000 files for WGS data and may lead to unhappy filesystems).
\item --unmapped : also report insertions that do not match the insertion library
\item --usecachedLAST : useful if -i/--inslib FASTA is large, you can use a pre-built LAST reference (in --refoutdir) e.g. if it was generated on a previous run.
\item --uuid\_list : limit analysis to set of UUIDs in the first column of specified file (generally, this is the table output by a previous run of resolve.py) - this is useful for changing annotations, altering parameters, debugging, etc.
\item --callmuts : reports changes in inserted sequence vs insertion reference in the 'Variants' column of the tabular output
\item --tmpdir : directory for temporary files (default is /tmp)
\item --te : enable additional filtering specific to transposable element insertions
\end{itemize}
\subsection{Output}
A table (tab-delimited) is output (with a header) is written to the file specified by -o/--out in resolve.py. The columns are as follows:
\begin{itemize}
\item UUID : Unique identifier
\item Chromosome : Chromosome
\item Left\_Extreme : Coordinate of the left-most reference base mapped to a read supporting the insertion
\item Right\_Extreme : Coordinate of the right-most reference base mapped to a read supporting the insertion
\item 5\_Prime\_End : Breakend corresponding to the 5' end of the inserted sequence (where a 5' end can be defined, e.g. for transposable elements)
\item 3\_Prime\_End : Breakend corresponding to the 3' end of the inserted sequence (where a 3' end can be defined, e.g. for transposable elements)
\item Superfamily : Superfamily of the insertion, if a superfamily is defined (see option -i/--inslib in resolve.py)
\item Subfamily : Subfamily of the insertion, if a subfamily is defined (see option -i/--inslib in resolve.py)
\item TE\_Align\_Start : alignment start position within inserted sequence relative to reference
\item TE\_Align\_End : alignment end position within inserted sequence relative to reference
\item Orient\_5p : Orientation derived from the 5' end of the insertion (if 5' is defined), assuming insertion reference is read 5' to 3'
\item Orient\_3p : Orientation derived from the 3' end of the insertion (if 3' is defined), assuming insertion reference is read 5' to 3'
\item Inversion : If Orient\_5p and Orient\_3p disagree, there may be an inversion in the inserted sequence relative to the reference sequence for the insertion
\item 5p\_Elt\_Match : Fraction of bases matched to reference for inserted sequence on insertion segment of 5' supporting contig
\item 3p\_Elt\_Match : Fraction of bases matched to reference for inserted sequence on insertion segment of 3' supporting contig
\item 5p\_Genome\_Match : Fraction of bases matched to reference genome on genomic segment of 5' supporting contig
\item 3p\_Genome\_Match : Fraction of bases matched to reference genome on genomic segment of 3' supporting contig
\item Split\_reads\_5prime : Number of split (clipped) reads supporting the 5' end of the insertion
\item Split\_reads\_3prime : Number of split (clipped) reads supporting the 3' end of the insertion
\item Remapped\_Discordant : Number of discordant read ends re-mappable to insertion reference sequence
\item Remap\_Disc\_Fraction : The proportion of remapped discordant reads mapping to the reference insertion sequence
\item Remapped\_Splitreads : Number of split reads re-mappable to insertion reference sequence
\item Remap\_Split\_Fraction : The proportion of remapped split reads mapping to the reference insertion sequence
\item 5p\_Cons\_Len : Length of 5' consensus sequence (Column Consensus\_5p)
\item 3p\_Cons\_Len : Length of 3' consensus sequence (Column Consensus\_3p)
\item 5p\_Improved : Whether the 5' consensus sequence was able to be improved through local assembly in tebreak.py
\item 3p\_Improved : Whether the 3' consensus sequence was able to be improved through local assembly in tebreak.py
\item TSD\_3prime : Target site duplication (if present) based on 5' overlap on consensus sequence
\item TSD\_5prime : Target site duplication (if present) based on 3' overlap on consensus sequence (can be different from 5' end, but in many cases it is advisable to filter out putative insertions that disagree on TSDs)
\item Sample\_count : Number of samples (based on BAM files or readgroups, depending on --use\_rg option in resolve.py)
\item Sample\_support : Per-sample split read count supporting insertion presence (Sample$\vert$Count)
\item Genomic\_Consensus\_5p : Consensus sequence for 5' supporting contig assembled from genome side of breakpoint
\item Genomic\_Consensus\_3p : Consensus sequence for 3' supporting contig assembled from genome side of breakpoint
\item Insert\_Consensus\_5p : Consensus sequence for 5' supporting contig assembled from insertion side of breakpoint
\item Insert\_Consensus\_3p : Consensus sequence for 3' supporting contig assembled from insertion side of breakpoint
\item Variants : if --callmuts option given to resolve.py, this is a list of changes detected between inserted sequence and insertion reference sequence
\item Genotypes : per-sample read depth information useful for determining genotype
\end{itemize}
\section{Filtering}
\subsection{Pre-filtering}
In some cases it is not necessary to analyse the entire genome, or only specific regions are of interest. Regions of interest in BED format may be input to tebreak.py via the -i/--interval\_bed option.
\subsection{Post-filtering}
The insertion resolution script (resolve.py) includes a number of options for filtering insertions, with default parameters generally being tuned towards high sensitivity for high-quality WGS samples at greater than 30x coverage. In many instances, additional filtering may be necessary. For example, false positives may arise due to homopolymer expansions and contractions around reference transposable elements, as is particularly the case in mismatch-repair defective tumour genomes. False positives may also arise from sample preparation methods that induce PCR artefacts such as chimeric reads. Generally, when searching for rare events, the decreased signal-to-noise ratio necessitates a low false positive rate. To perform additional filtering, a script is included in scripts/general\_filter.py which should be regarded as a starting point for obtaining an optimal analysis.
\subsection{Miscellaneous methods for improving runtime}
\begin{itemize}
\item For whole genome sequencing (WGS) data, it is only the soft-clipped reads and reads in discordant pairs that are informative; the rest of the mapped reads can be discarded. This can be accomplished using the script 'reduce\_bam.py', located in the scripts directory. Pre-processing with this script may yield lower runtime due to less disk access.
\item For sequence derived from targeted sequencing methods (e.g. whole exome sequencing), it may be useful to only analyse regions covered to at least a certain read depth. This can be accomplished using the script 'covered\_segments.py' in the scripts directory.
\item To improve the runtime of resolve.py over large datasets (e.g. WGS) the 'pickle' generated by tebreak.py can be reduced using picklescreen.py (in the scripts directory). This will pre-align all junction reads to an insertion reference library, reducing the number of candidates for further analysis by resolve.py.
\item For larger cohorts, splitting the 'pickle' file up into per-chromosome files may be required to reduce memory usage. This can be accomplished with the 'picklesplit.py' script in the scripts directory.
\end{itemize}
\end{document}
\documentclass[fancy]{article}
\setDate{November 2018}
\begin{document}
\begin{Name}{1}{rulesmc}{author}{LDL Tools}{title}
rulesmc -- model-checker for dsl4sc
\end{Name}
\section{Synopsis}
rulesmc <option>* <model_file>? <infile>
\section{Description}
rulesmc reads a model M (from <model_file>) and a set of requirements φ (from <infile>),
examines if M ⊨ φ (i.e., ⊨ M → φ) holds or not, and then
returns either "claim holds" (when M ⊨ φ holds) or "claim does not hold" (otherwise).
\section{Options}
\begin{description}
%
\item[\Opt{--model} <model_file>]
reads a model from <model_file>
%
\item[\Opt{--reachability}]
check reachability (i.e., M ∧ φ ≠ empty), instead of entailment.
%
\item[\Opt{-v}, \Opt{--verbose}]
become verbose
%
\item[\Opt{-h}, \Opt{--help}]
show usage
\end{description}
\section{See Also}
rulessat, ldlmc
\section{Author}
LDLTools development team at IBM Research.
\begin{itemize}
\item URL: \URL{https://ldltools.github.io}
\item Email: \Email{[email protected]}
\end{itemize}
\section{Copyright}
(C) Copyright IBM Corp. 2018.
License Apache 2.0.\\
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
\end{document}
\chapter{Declaring Variables}
let and const are two relatively new types of variable declarations in JavaScript. let is similar to var in some respects, but allows users to avoid some of the common "gotchas" that users run into in JavaScript. const is an augmentation of let in that it prevents re-assignment to a variable.
With TypeScript being a superset of JavaScript, the language naturally supports let and const. Here we will elaborate on these new declarations and why they're preferable to var.
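As a quick illustration of that difference (the names here are made up for the example):
\begin{lstlisting}
const maxRetries = 3;
maxRetries = 4; // error: cannot re-assign a 'const'

const config = { retries: 3 };
config.retries = 5; // OK: the binding is constant, not the object's contents
\end{lstlisting}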
\section{var declarations}
Declaring a variable in JavaScript has traditionally been done with the var keyword. We can declare a variable inside of a function, and we can also access those same variables from within other functions, as the example below shows.
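\begin{lstlisting}
function f(){
    var a = 10;
    return function g(){
        var b = a + 1; // 'g' can access 'a' declared in 'f'
        return b;
    }
}
\end{lstlisting}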
\subsubsection{Scoping rules}
var declarations have some odd scoping rules for those used to other languages. var declarations are accessible anywhere within their containing function, module, namespace, or global scope -- all of which we'll go over later on -- regardless of the containing block. Some people call this var-scoping or function-scoping. Parameters are also function scoped.
These scoping rules can cause several types of mistakes. One problem they exacerbate is the fact that it is not an error to declare the same variable multiple times:
\begin{lstlisting}
function sumMatrix(matrix: number[][]){
var sum = 0;
for(var i = 0; i < matrix.length; i++){
var currentRow = matrix[i];
for(var i = 0; i < currentRow.length; i++){
sum += currentRow[i];
}
}
return sum;
}
\end{lstlisting}
Maybe it was easy for some to spot, but the inner for-loop will accidentally overwrite the variable i because i refers to the same function-scoped variable. As experienced developers know by now, similar sorts of bugs slip through code reviews and can be an endless source of frustration.
\subsubsection{Variable capturing quirks}
With the next piece of code in mind, try to guess what the output will be:
\begin{lstlisting}
for(var i = 0; i < 10; i++)
{
setTimeout(function(){console.log(i);}, 100 * i);
}
\end{lstlisting}
For those unfamiliar, setTimeout will try to execute a function after a certain number of milliseconds (though waiting for anything else to stop running). The output of this code will be the number 10 displayed ten times rather than the list of numbers between 0 and 9. This is because every function expression we pass to setTimeout actually refers to the same i from the same scope. setTimeout will run a function after some number of milliseconds, but only after the for loop has stopped executing. By the time the for loop has stopped executing, the value of i is 10. So each time the given function gets called, it will print out 10.
A common work around is to use an IIFE - an Immediately Invoked Function Expression - to capture i at each iteration:
\begin{lstlisting}
for(var i = 0; i < 10; i++){
//capture the current state of 'i'
//by invoking a function with its current value
(function(i){
setTimeout(function() {console.log(i); }, 100 * i);
})(i);
}
\end{lstlisting}
This odd-looking pattern is actually pretty common. The i in the parameter list actually shadows the i declared in the for loop, but since we named them the same, we didn't have to modify the loop body too much.
\section{let declarations}
By now we've figured out that var has some problems, which is precisely why let statements were introduced. Apart from the keyword used, let statements are written the same way var statements are.
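For instance, declaring a variable with let looks just like doing so with var:
\begin{lstlisting}
let hello = "Hello!";
\end{lstlisting}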
The key difference is not in the syntax, but in the semantics, which we'll dive into.
\subsection{Block-scoping}
When a variable is declared using let, it uses what some call lexical-scoping or block-scoping. Unlike variables declared with var whose scopes leak out to their containing function, block-scoped variables are not visible outside of their nearest containing block or for-loop.
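For example (the names here are illustrative):
\begin{lstlisting}
function f(input: boolean){
    let a = 100;
    if(input){
        // still okay to reference 'a'
        let b = a + 1;
        return b;
    }
    // error: 'b' doesn't exist here
    return b;
}
\end{lstlisting}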
Variables declared in a catch clause also have similar scoping rules. Another property of block-scoped variables is that they can't be read or written to before they're actually declared. While these variables are ``present'' throughout their scope, all points up until their declaration are part of their temporal dead zone. This is just a sophisticated way of saying we can't access them before the let statement, and luckily TypeScript will let us know that.
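A minimal sketch of the temporal dead zone (the function and variable names are just illustrative):
\begin{lstlisting}
function demoTemporalDeadZone() {
    // error: 'count' is used before its declaration
    console.log(count);
    let count = 0;
}
\end{lstlisting}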
Something to note is that a function can still capture a block-scoped variable before it's declared. The only catch is that it's illegal to call that function before the declaration. If targeting ES2015, a modern runtime will throw an error; however, right now TypeScript is permissive and won't report this as an error.
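A small sketch of that situation (the names foo and a are illustrative):
\begin{lstlisting}
function foo() {
    // okay to capture 'a' here
    return a;
}

// illegal to call 'foo' before 'a' is declared;
// an ES2015 runtime should throw an error here
foo();

let a;
\end{lstlisting}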
\subsection{Re-declarations and Shadowing}
With var declarations, we mentioned that it didn't matter how many times we declared our variables; we just got one, meaning that if we declare the same variable two or more times, we always end up referring to the same single variable. This often ends up being a source of bugs. Thankfully, let declarations are not as forgiving, and the compiler will report an error if we try to declare the same variable twice. The variables don't both need to be block-scoped for TypeScript to tell us that there is a problem:
\begin{lstlisting}
function f(x) {
    let x = 100; // error: interferes with parameter declaration
}

function g() {
    let x = 100;
    var x = 100; // error: can't have both declarations of 'x'
}
\end{lstlisting}
That's not to say that a block-scoped variable can never share a name with a function-scoped variable. The block-scoped variable just needs to be declared within a distinctly different block:
\begin{lstlisting}
function f(condition, x) {
    if (condition) {
        let x = 100;
        return x;
    }
    return x;
}
f(false, 0); //returns '0'
f(true, 0); //returns '100'
\end{lstlisting}
The act of introducing a new name in a more nested scope is called shadowing. It is a bit of a double-edged sword in that it can introduce certain bugs on its own in the event of accidental shadowing, while also preventing certain bugs. For example:
\begin{lstlisting}
function sumMatrix(matrix: number[][]) {
    let sum = 0;
    for (let i = 0; i < matrix.length; i++) {
        var currentRow = matrix[i];
        for (let i = 0; i < currentRow.length; i++) {
            sum += currentRow[i];
        }
    }
    return sum;
}
\end{lstlisting}
This version of the loop will actually perform the summation correctly because the inner loop's i shadows i from the outer loop. Shadowing should usually be avoided in the interest of writing cleaner code. While there are some scenarios where it may be fitting to take advantage of it, we should use our best judgement.
\subsection{Block-scoped variable capturing}
When we first touched on the idea of variable capturing with var declarations, we briefly went into how variables act once captured. To build a better intuition for this: each time a scope is run, it creates an ``environment'' of variables. That environment and its captured variables can exist even after everything within its scope has finished executing.
\begin{lstlisting}
function theCityThatAlwaysSleeps() {
    let getCity;
    if (true) {
        let city = "Seattle";
        getCity = function () {
            return city;
        };
    }
    return getCity;
}
\end{lstlisting}
Because we've captured city from within its environment, we're still able to access it despite the fact that the if block finished executing. Recall that with our earlier setTimeout example, we ended up needing to use an IIFE to capture the state of a variable for every iteration of the for loop. In effect, what we were doing was creating a new variable environment for our captured variables. That was a bit of a pain, but luckily, you'll never have to do that again in TypeScript.
let declarations have drastically different behaviour when declared as part of a loop. Rather than just introducing a new environment to the loop itself, these declarations effectively create a new scope per iteration. Since this is what we were doing anyway with our IIFE, we can change our old setTimeout example to just use a let declaration:
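For example, the earlier loop written with let prints the numbers 0 through 9, one per callback, with no IIFE required:
\begin{lstlisting}
for (let i = 0; i < 10; i++) {
    setTimeout(function () { console.log(i); }, 100 * i);
}
\end{lstlisting}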
\section{const declarations}
const declarations are another way of declaring variables. They are like let declarations but, as their name implies, their value cannot be changed once they are bound. In other words, they have the same scoping rules as let, but we can't re-assign to them. This should not be confused with the idea that the values they refer to are immutable.
\begin{lstlisting}
const numLivesForCat = 9;
const kitty = {
    name: "Aurora",
    numLives: numLivesForCat
};

// error
kitty = {
    name: "Danielle",
    numLives: numLivesForCat
};

// all "okay"
kitty.name = "Rory";
kitty.name = "Kitty";
kitty.name = "Cat";
kitty.numLives--;
\end{lstlisting}
Unless we take specific measures to avoid it, the internal state of a const variable is still modifiable. Fortunately, TypeScript allows us to specify that members of an object are readonly.
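As a small sketch of that option (the inline object type here is illustrative and not part of the original example):
\begin{lstlisting}
const cat: { readonly name: string; numLives: number } = {
    name: "Aurora",
    numLives: 9
};
cat.numLives--;           // still okay
// cat.name = "Danielle"; // error: 'name' is a read-only property
\end{lstlisting}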
\section{let vs. const}
Given that we have two types of declarations with similar scoping semantics, it's natural to find ourselves asking which one to use. Like most broad questions, the answer is: it depends.
Applying the principle of least privilege, all declarations other than those we plan to modify should use const. The rationale is that if a variable doesn't need to be written to, others working on the same codebase shouldn't automatically be able to write to it, and they will need to consider whether they really need to reassign the variable. Using const also makes code more predictable when reasoning about the flow of data.
Use your best judgement and, if applicable, discuss the matter with the rest of your team.
\section{Destructuring}
Another ECMAScript 2015 feature that TypeScript has is destructuring.
\subsection{Array destructuring}
The simplest form of destructuring is array destructuring assignment:
\begin{lstlisting}
let input = [1, 2];
let [first, second] = input;
console.log(first);  // outputs 1
console.log(second); // outputs 2
\end{lstlisting}
This creates two new variables named first and second. This is equivalent to using indexing, but it is much more convenient. Destructuring works with already-declared variables as well, and with parameters to a function:
\begin{lstlisting}
// swap variables
[first, second] = [second, first];

function f([third, fourth]: [number, number]) {
    console.log(third);
    console.log(fourth);
}
f([1, 2]);
\end{lstlisting}
We can create a variable for the remaining items in a list using the syntax ...variableName (an example follows the snippet below). Of course, since this is JavaScript, we can also just ignore trailing elements we don't care about, or skip other elements:
\begin{lstlisting}
let [first] = [1, 2, 3, 4];
console.log(first); // outputs 1
let [, second, , fourth] = [1, 2, 3, 4];
\end{lstlisting}
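As a quick sketch of the rest syntax mentioned above (the names head and rest are just illustrative):
\begin{lstlisting}
let [head, ...rest] = [1, 2, 3, 4];
console.log(head); // outputs 1
console.log(rest); // outputs [ 2, 3, 4 ]
\end{lstlisting}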
\subsection{Object destructuring}
We can also destructure objects:
\begin{lstlisting}
let o = {
    a: "foo",
    b: 12,
    c: "bar"
};
let { a, b } = o;
\end{lstlisting}
This creates new variables a and b from o.a and o.b. Notice that we can skip c if we don't need it. Like array destructuring, we can have assignment without declaration, but in that case we need to surround the statement with parentheses, because JavaScript normally parses a leading \{ as the start of a block.
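A brief sketch of such an assignment, reusing the a and b declared above (the new values here are made up for illustration):
\begin{lstlisting}
// the parentheses keep the '{' from being parsed as a block
({ a, b } = { a: "baz", b: 101 });
\end{lstlisting}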
We can create a variable for the remaining items in an object using the syntax ...variableName.
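A short sketch of that syntax follows; the wrapper function is only there to keep the names from clashing with the earlier snippets:
\begin{lstlisting}
function logRest(obj: { a: string, b: number, c: string }) {
    let { a, ...passthrough } = obj;
    // passthrough holds everything except 'a'
    console.log(passthrough.b); // outputs 12 for the object o above
}
logRest(o);
\end{lstlisting}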
\subsection{Property renaming}
We can also give different names to properties:
\begin{lstlisting}
let { a: newName1, b: newName2 } = o;
\end{lstlisting}
Here the syntax starts to get confusing. We can read a: newName1 as ``a as newName1''. The direction is left-to-right, as if we had written:
\begin{lstlisting}
let newName1 = o.a;
let newName2 = o.b;
\end{lstlisting}
Confusingly, the colon here does not indicate the type. The type, if we specify it, still needs to be written after the entire destructuring:
\begin{lstlisting}
let {a, b}: {a: string, b: number} = o;
\end{lstlisting}
\subsection{Default values}
Default values let us specify a default value in case a property is undefined:
\begin{lstlisting}
function keepWholeObject(wholeObject: { a: string, b?: number }) {
    let { a, b = 1001 } = wholeObject;
}
\end{lstlisting}
keepWholeObject now has a variable for wholeObject as well as the properties a and b, even if b is undefined.
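A tiny usage sketch (the argument values are made up):
\begin{lstlisting}
keepWholeObject({ a: "yes" });        // inside, b defaults to 1001
keepWholeObject({ a: "no", b: 42 });  // inside, b is 42
\end{lstlisting}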
\subsection{Function declarations}
Destructuring also works in function declarations. For simple cases this is straightforward:
\begin{lstlisting}
type C = { a: string, b?: number };

function f({ a, b }: C): void {
    // ...
}
\end{lstlisting}
But specifying defaults is more common for parameters, and getting defaults right with destructuring can be tricky. First of all, we need to remember to put the pattern before the default value:
\begin{lstlisting}
function f({a, b} = {a: "", b: 0}) : void{
// ...
}
f(); // ok, default to {a: "", b: 0}
\end{lstlisting}
Then, we need to remember to give a default for optional properties on the destructured property instead of the main initializer. Remember that C was defined with b optional. | {
"alphanum_fraction": 0.7313970588,
"avg_line_length": 52.5096525097,
"ext": "tex",
"hexsha": "2b215700c6b41d49e525259cd63191cfd7545d3e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2b2cecb916b94d2e61b9789ea9ebade4b0517e50",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alexcosta97/e-learning",
"max_forks_repo_path": "AngularCourse/docs/Section2/declaringVariables.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2b2cecb916b94d2e61b9789ea9ebade4b0517e50",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alexcosta97/e-learning",
"max_issues_repo_path": "AngularCourse/docs/Section2/declaringVariables.tex",
"max_line_length": 644,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2b2cecb916b94d2e61b9789ea9ebade4b0517e50",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alexcosta97/e-learning",
"max_stars_repo_path": "AngularCourse/docs/Section2/declaringVariables.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3078,
"size": 13600
} |
\documentclass[12pt,english]{article}
\usepackage{geometry}
\geometry{verbose, letterpaper, tmargin = 2.54cm, bmargin = 2.54cm,
lmargin = 2.54cm, rmargin = 2.54cm}
\geometry{letterpaper}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{setspace}
\usepackage{url}
\usepackage{lineno}
\usepackage{xcolor}
\usepackage{bm}
\renewcommand\linenumberfont{\normalfont\tiny\sffamily\color{gray}}
\modulolinenumbers[2]
\usepackage{booktabs}
\usepackage{bm}
\textheight 22.2cm
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
%\usepackage[hidelinks]{hyperref}
\usepackage{nameref}
% Fix line numbering and align environment
% http://phaseportrait.blogspot.ca/2007/08/lineno-and-amsmath-compatibility.html
\newcommand*\patchAmsMathEnvironmentForLineno[1]{%
\expandafter\let\csname old#1\expandafter\endcsname\csname #1\endcsname
\expandafter\let\csname oldend#1\expandafter\endcsname\csname end#1\endcsname
\renewenvironment{#1}%
{\linenomath\csname old#1\endcsname}%
{\csname oldend#1\endcsname\endlinenomath}}%
\newcommand*\patchBothAmsMathEnvironmentsForLineno[1]{%
\patchAmsMathEnvironmentForLineno{#1}%
\patchAmsMathEnvironmentForLineno{#1*}}%
\AtBeginDocument{%
\patchBothAmsMathEnvironmentsForLineno{equation}%
\patchBothAmsMathEnvironmentsForLineno{align}%
\patchBothAmsMathEnvironmentsForLineno{flalign}%
\patchBothAmsMathEnvironmentsForLineno{alignat}%
\patchBothAmsMathEnvironmentsForLineno{gather}%
\patchBothAmsMathEnvironmentsForLineno{multline}%
}
\hyphenation{glmmfields}
\usepackage{lscape}
\usepackage{makecell}
\usepackage{array}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\usepackage{ragged2e}
\setlength{\RaggedRightParindent}{\parindent}
\usepackage[round,sectionbib]{natbib}
\bibpunct{(}{)}{,}{a}{}{,}
\bibliographystyle{ecology}
\usepackage{titlesec}
\titlespacing*{\section}
{0pt}{1.5ex plus 1ex minus .2ex}{1.0ex plus .2ex}
\titlespacing*{\subsection}
{0pt}{1.5ex plus 1ex minus .2ex}{1.0ex plus .2ex}
% Suppress section numbers
\makeatletter
\renewcommand{\@seccntformat}[1]{}
\makeatother
\title{Appendix S1\\
\Large{Black swans in space: modelling spatiotemporal\\ processes with extremes}}
\author{
Sean C. Anderson$^{1,2\ast}$ and
Eric J. Ward$^3$
}
\date{}
\clearpage
\RaggedRight
\hyphenpenalty=500
\newlabel{sec:testing-recovery}{{2.3}{6}{Testing the recovery of spatial extremeness}{subsection.2.3}{}}
\newlabel{sec:diagnosing}{{2.4}{7}{Testing the advantage of allowing for extremes}{subsection.2.4}{}}
\newlabel{sec:beetles-methods}{{2.5}{8}{Mountain pine beetles in the US Pacific Northwest}{subsection.2.5}{}}
\newlabel{fig:nu}{{1}{18}{}{figure.1}{}}
\newlabel{fig:sim-performance}{{3}{20}{}{figure.3}{}}
\newlabel{fig:map-etc}{{4}{21}{}{figure.4}{}}
\newlabel{fig:beetle-pred}{{5}{22}{}{figure.5}{}}
\newlabel{fig:recapture}{{2}{19}{}{}{}}
\begin{document}
%\begin{spacing}{1.8}
\maketitle
\renewcommand{\theequation}{S\arabic{equation}}
% \setcounter{section}{0}
% \section{Appendix S1}
% \linenumbers
% \setcounter{equation}{0}
\subsection{Posterior and joint distributions}
In this section, for clarity, we use the notation $[a|b]$ to describe the
conditional distribution of $a$ given $b$. We use bold symbols to refer to vectors or matrices.
The joint posterior distribution for
the model in the section \textit{\nameref{sec:testing-recovery}} can be written
as:
\begin{equation}
[
\bm{\gamma}^*,
\theta_{\mathrm{GP}},
\nu,
\mathrm{CV}_\mathrm{obs},
\sigma_{\mathrm{GP}}^2
|
\bm{y}]
\propto
[\bm{y} | \bm{\gamma}^*, \mathrm{CV}_\mathrm{obs}]
[\bm{\gamma}^* | \theta_{\mathrm{GP}}, \nu, \sigma_{\mathrm{GP}}^2]
[\theta_{\mathrm{GP}}]
[\nu]
[\mathrm{CV}_\mathrm{obs}]
[\sigma_{\mathrm{GP}}^2],
\end{equation}
\noindent where the parameters are defined in the main Methods section.
Importantly, we use parameters with an asterisk (e.g.\ $\bm{\gamma}^*$) to describe
the random effects at the locations of the unobserved knots, and parameters
without the asterisk to describe the projected random effects
at the locations of sampled data (projection equation summarized in main text
and described in \citet{latimer2009}). The joint
probability is calculated over points in space $s$, and time $t$. Missing
observations or replicate observations for given points in space and time may be
present.
The joint posterior distribution for the model in the section
\textit{\nameref{sec:diagnosing}} can be written as:
\begin{equation}
[
\bm{\gamma}^*,
\theta_{\mathrm{GP}},
\nu,
\sigma^2_{\mathrm{obs}},
\sigma_{\mathrm{GP}}^2
|
\bm{y}]
\propto
[\bm{y} | \bm{\gamma}^*, \sigma^2_{\mathrm{obs}}]
[\bm{\gamma}^* | \theta_{\mathrm{GP}}, \nu, \sigma_{\mathrm{GP}}^2]
[\theta_{\mathrm{GP}}]
[\nu]
[\sigma^2_{\mathrm{obs}}]
[\sigma_{\mathrm{GP}}^2],
\end{equation}
\noindent
and the joint posterior distribution
for the model of mountain pine beetle outbreaks can be written as:
\begin{multline}
[
\bm{\beta},
\bm{\gamma}^*,
\phi,
\theta_{\mathrm{GP}},
\nu,
\sigma^2_{\mathrm{obs}},
\sigma^2_{\beta},
\sigma_{\mathrm{GP}}^2
|
\bm{y}]
\propto \\
[\bm{y} | \beta_{t}, \bm{\gamma}^*, \sigma^2_{\mathrm{obs}}] \\
\times
[\beta_t | \beta_{t-1}, \sigma^2_{\beta}] \\
\times
[\bm{\gamma}^*_{t} | \bm{\gamma}^*_{t-1}, \phi, \theta_{\mathrm{GP}}, \nu, \sigma_{\mathrm{GP}}^2] \\
\times
[\beta_{t=1}]
[\bm{\gamma}^*_{t=1}]
[\phi]
[\theta_{\mathrm{GP}}]
[\nu]
[\sigma^2_{\mathrm{obs}}]
[\sigma^2_{\beta}]
[\sigma_{\mathrm{GP}}^2].
\end{multline}
\subsection{Prior distributions}
In the section \emph{\nameref{sec:testing-recovery}}, we placed weakly
informative half-$t$(3, 0, 3) priors on $\sigma_{\mathrm{GP}}$, $\theta_{\mathrm{GP}}$, and the CV (i.e.\
Student-$t$(3, 0, 3)[0, $\infty$], where the Student-$t$ parameters refer to the
degrees of freedom, the location, and the scale parameters).
In the section \emph{\nameref{sec:diagnosing}}, we used these same priors and a
half-$t$(3,0,3) prior on $\sigma_{\mathrm{obs}}$. In the section
\emph{\nameref{sec:beetles-methods}}, we fit our models with a normal(0, 10)
prior on the initial year effect ($\beta_{t=1}$) (i.e.\ mean $0$ and standard
deviation $10$, not variance, following the convention in Stan), half-$t$(3,
0, 3) priors on all scale parameters, and a normal(0, 0.5)[-1, 1] prior on
$\phi$.
\subsection{MCMC sampling details}
For the models fit to the simulated datasets, we initially sampled from each
model with 500 iterations across four chains discarding the first half of the
iterations as warm-up. If the samples had not converged after this initial run,
we re-sampled from the model with 2000 iterations across four chains.
For the models fit to the mountain pine beetle data, we sampled
2000 iterations, discarding the first half as warm-up. We measured
convergence as a minimum effective sample size of $\ge 100$ and a maximum
$\hat{R}$ of $\le 1.05$ across all parameters.
\subsection{Simulation details}
In the section \emph{\nameref{sec:testing-recovery}}, we ran 100 simulations for
each parameter configuration.
In the section \emph{\nameref{sec:diagnosing}}, we ran 150 simulations.
We ran more simulations in the second case to smooth the model
comparison histograms in Fig.~\ref{fig:sim-performance}.
\subsection{Model comparison}
We compared MVN and MVT models via log predictive density, $\Delta$ LOOIC
(leave-one-out information criteria), RMSE (root mean squared error), and
credible interval width. We calculated the log predictive density (lpd) for
held-out data as:
\begin{equation}
\mathrm{lpd} = \sum^{n}_{i=1}{\log p(y_{i} |
\widehat{y_{i}}, \theta)},
\end{equation}
\noindent where $y_{i}$ represents the observed beetle coverage response value
for omitted data row $i$ (representing a given point in space and time),
$\widehat{y_{i}}$ indicates the predicted response value, and $\theta$
represents estimated parameters. We display the distribution of these quantities
across MCMC samples.
The Bayesian LOO (leave-one-out) expected out-of-sample
predictive fit $\mathrm{elpd}_\mathrm{loo}$ is defined as:
\begin{equation}
\mathrm{elpd}_\mathrm{loo} = \sum^{n}_{i=1}{\log p(y_i | y_{-i}) },
\end{equation}
\noindent where
\begin{equation}
p(y_i | y_{-i}) = \int p(y_i | \theta)\, p(\theta | y_{-i})\, d\theta ,
\end{equation}
\noindent is the predictive density given the data without the $i$th data point
and $\theta$ represents all estimated parameters
\citep{vehtari2017}. For computational efficiency, the LOO R package calculates
LOOIC using an approach called Pareto smoothed importance sampling
\citep{vehtari2017}. LOOIC is then defined as $-2\ \mathrm{elpd}_\mathrm{loo}$
so that lower values indicate increased parsimony.
We calculated $\Delta$LOOIC as $\mathrm{LOOIC}_\mathrm{MVT}
- \mathrm{LOOIC}_\mathrm{MVN}$.
We calculated the root mean squared error (RMSE) of predictions as:
\begin{equation}
\sqrt{ \sum^{T}_{t=1}{ \sum^{S}_{s=1}{ \left( \log(\mu_{s,t}) - \log(\widehat{ \mu_{s,t} }) \right)^2 } } },
\end{equation}
\noindent where $\widehat{\mu_{s,t}}$ represents the median of the posterior of
$\mu_{s,t}$. Finally, we calculated the ratio of the credible interval
widths as:
\begin{equation}
\frac{
\mathrm{CI}_\mathrm{MVN}^\mathrm{upper} - \mathrm{CI}_\mathrm{MVN}^\mathrm{lower}
}{
\mathrm{CI}_\mathrm{MVT}^\mathrm{upper} -
\mathrm{CI}_\mathrm{MVT}^\mathrm{lower}
},
\end{equation}
\noindent where $\mathrm{CI}_\mathrm{MVN}^\mathrm{upper}$, for example,
represents the upper 95\% quantile-based credible interval of
$\widehat{\mu_{s,t}}$ for the model assuming MVN random fields.
\clearpage
%\end{spacing}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\thetable}{S\arabic{table}}
\setcounter{figure}{0}
\setcounter{table}{0}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.7\textwidth]{../figs/pp-illustration.pdf}
\caption{
Illustration of the steps to fitting a predictive process model.
First we observe spatial data, we select knot locations,
and we calculate the covariance between the knots and the observed data.
Then we fit the model with the knots and data remaining constant throughout.
For each MCMC iteration, values are proposed for the
knots, those values are projected from the knots to the locations of the observed data,
and the likelihood is evaluated at the locations of the observed data.}
\label{fig:didactic}
\end{center}
\end{figure}
% Might be good to just clarify the values being proposed are values of observed data - not spatial locations
\clearpage
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.90\textwidth]{../figs/sim-recapture.pdf}
\caption{
Full factorial results from the simulation shown in Fig.~\ref{fig:recapture}.
Individual dots show the median estimates from individual simulation runs.
Polygons indicate probability density of the median estimates of
individual simulation runs.
The colour scale indicates the true degree of heavy tailedness from
yellow (effectively normal) to red (very heavy tailed).
}
\label{fig:recapture-factorial}
\end{center}
\end{figure}
\renewcommand\theadfont{\scriptsize}
\renewcommand\theadalign{cl}
\begin{landscape}
\begin{table}
\begin{minipage}{\textwidth}
\caption{
Comparison of select R packages for spatiotemporal analysis with random
fields. ``Formula interface'' refers to the ability to pass a formula to
an R function such as \texttt{y $\sim$ a + b} as is common in many R functions
such as \texttt{lm()} or \texttt{glm()}.
}
\label{tab:packages}
\begin{scriptsize}
\begin{tabular}{llllL{2cm}lL{2.4cm}lL{2.0cm}llL{2.0cm}}
\toprule
Package & \thead[l]{MVT \\ random \\ fields} & \thead[l]{Formula \\ interface} & \thead[l]{Multi- \\ variate} & Estimation & \thead[l]{Simulation \\ function} & \thead[l]{Observation \\ model} & \thead[l]{Maximum \\ likelihood \\ estimation} & \thead[l]{Covariance \\ functions} & \thead[l]{AR Spatial \\ Fields} & \thead[l]{Dynamic \\ Linear \\ Models} & URL \\
\midrule
spTimer & No & Yes & No & MCMC & No & Gaussian & No & Exponential, Gaussian, Matern family, spherical & Yes & No & \url{https://CRAN.R-project.org/package=spTimer} \\
spate & No & No & No & MCMC/SPDE & Yes & Gaussian, skewed Tobit & Yes & Matern family & No & No & \url{https://CRAN.R-project.org/package=spate} \\
spBayes & No & Yes & Yes & MCMC & No & Gaussian, binomial, Poisson & No & Exponential, Gaussian, Matern family, spherical & No & Yes & \url{https://CRAN.R-project.org/package=spBayes} \\
INLA & No & Yes & Yes & Approximate posterior/SPDE & Yes & Many & Yes & Exponential, Gaussian, Matern family, many others & Yes & Yes & \url{http://www.r-inla.org/} \\
VAST & No & No & Yes & Maximum likelihood/SPDE & Yes & Gaussian, lognormal, gamma, binomial, Poisson, negative binomial & Yes & Exponential, Matern family & Yes & Yes & \url{https://github.com/James-Thorson/VAST} \\
glmmfields & Yes & Yes & No & MCMC/NUTS & Yes & Gaussian, lognormal, gamma, binomial, Poisson, negative binomial & No & Exponential, Gaussian, Matern family & Yes & Yes & \url{https://github.com/seananderson/glmmfields} \\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{minipage}
\end{table}
\end{landscape}
\bibliography{spatial-extremes}
%\end{spacing}
\end{document}
| {
"alphanum_fraction": 0.6656782291,
"avg_line_length": 39.7287671233,
"ext": "tex",
"hexsha": "81c9d8a4eec27d711d0bb7a3d7dae4aa92ad81bc",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-05-07T05:40:21.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-06-29T23:00:32.000Z",
"max_forks_repo_head_hexsha": "f3551878472f71814f39e354fbd4598373803210",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "seananderson/spatial-extremes",
"max_forks_repo_path": "text/Appendix-S1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f3551878472f71814f39e354fbd4598373803210",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "seananderson/spatial-extremes",
"max_issues_repo_path": "text/Appendix-S1.tex",
"max_line_length": 478,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "f3551878472f71814f39e354fbd4598373803210",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "seananderson/spatial-extremes",
"max_stars_repo_path": "text/Appendix-S1.tex",
"max_stars_repo_stars_event_max_datetime": "2019-08-17T13:02:35.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-05-04T12:36:47.000Z",
"num_tokens": 4351,
"size": 14501
} |
% \usepackage{hyperref}
\startreport{Creating Input to Aggregator}
\reportauthor{Ramesh Subramonian}
\section{Introduction}
\subsection{Related Work}
Relevant prior work can be found in \cite{sarawagi99}
and \cite{Fagin05}
\section{Raw Input}
Following \cite{Fagin05}, we assume that the input is a
single relational table where columns are either dimensions or measures.
Dimensions are
attributes along which aggregations will be performed e.g., time, age,
gender, product category. These may be numeric, like time or
categorical, like gender.
Examples of dimension columns: time of sale, store ID, product category, \ldots
Measures are numerical and are the target of aggregations. Examples are sale
price, number of items, \ldots We might want to know how average sale price
differs based on Gender. Here, Price and Number are the measures and Gender is
the dimension.
In our context, each row refers to a customer transaction. It could alternatively
refer to a document.
We distinguish between raw columns and derived columns. As an example, see
Table~\ref{derived_columns}. In this case,
\be
\item the raw attribute TimeOfSale
has 4 derived attributes, DayOfWeek, Month, Quarter, IsWeekend
\item the raw attribute StoreId
has 3 derived attributes, District, State, RegionalDistributionCenter
\item the raw attribute Age
has 2 derived attributes, Generation and AgeBand
\ee
\begin{table}[hbtp]
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline \hline
Raw Column & Derived 1 & Derived 2 & Derived 3 & Derived 4 \\ \hline
\hline
TimeOfSale & DayOfWeek & Month & Quarter & IsWeekend \\ \hline
StoreId & District & State & RegionalDistributionCenter & --- \\ \hline
Age & Generation & AgeBand & --- & --- \\ \hline
\end{tabular}
\caption{Examples of derived columns}
\label{derived_columns}
\end{table}
%-------------------------------------------------------
Discuss how raw input is converted to input to aggregator. \TBC
\section{Input to Aggregator}
We assume that the input is provided as follows. Let there be 3 raw attributes, \(a,
b, c\). Assume that
\be
\item \(a\) has 2 derived attributes \(a_1, a_2\)
\item \(b\) has 3 derived attributes \(b_1, b_2, b_3\)
\item \(c\) has 4 derived attributes \(c_1, c_2, c_3, c_4\)
\ee
We will introduce a dummy derived attribute, \(a_0, b_0, c_0\), for each raw
attribute. The value of this attribute is always 0.
The input looks like Table~\ref{example_input_to_agg}.
\begin{table}[hbtp]
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline \hline
{\bf Derived Column} & {\bf Value} \\ \hline \hline
\(a_0\) & 0 \\ \hline
\(a_1\) & \(v_{a, 1}\) \\ \hline
\(a_2\) & \(v_{a, 2}\) \\ \hline
%
\(b_0\) & 0 \\ \hline
\(b_1\) & \(v_{b, 1}\) \\ \hline
\(b_2\) & \(v_{b, 2}\) \\ \hline
\(b_3\) & \(v_{b, 3}\) \\ \hline
%
\(c_0\) & 0 \\ \hline
\(c_1\) & \(v_{c, 1}\) \\ \hline
\(c_2\) & \(v_{c, 2}\) \\ \hline
\(c_3\) & \(v_{c, 3}\) \\ \hline
\(c_4\) & \(v_{c, 4}\) \\ \hline
%
\end{tabular}
\caption{Result of processing one row of original input}
\label{example_input_to_agg}
\end{table}
%-------------------------------------------------------
Let \(N(x) \) be the number of derived attributes for \(x\).
Hence, in our example, \(N(a) = 2+1 = 3\), \(N(b) = 3+1 = 4\), \(N(c) = 4+1 = 5\).
This means that consuming one row of the input will cause \(\prod_x N(x)\) writes to the
aggregator. In our example, this means \((3 \times 4 \times 5) -1 = 59\).
Using our current example, the keys generated
are shown in Table~\ref{example_keys_to_aggregator}. We haven't shown all rows
but we hope the reader gets the picture.
\newpage
{\small
\begin{table}[hbtp]
\centering
\begin{tabular}{|c|l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|} \hline \hline
{\bf ID} & {\bf Key} & \(a_0\) & \(a_1\) & \(a_2\) &
\(b_0\) & \(b_1\) & \(b_2\) & \(b_3\) &
\(c_0\) & \(c_1\) & \(c_2\) & \(c_3\) & \(c_4\) \\ \hline \hline
\input{keys_for_agg}
\end{tabular}
\caption{Keys presented to aggregator}
\label{example_keys_to_aggregator}
\end{table}
}
We now explain the \(-1\) in the earlier formula. This is because
the first key is a ``match-all'' and hence will be disregarded.
Let's take a look at the second key
\((a_0 = \bot, b_0 = \bot, c_1 = v_{c, 1})\). Let ``price'' be a measure
attribute. Then, passing this to the aggregator is equivalent to wanting the
answer to the SQL statement
\begin{verbatim}
SELECT SUM(price) FROM in_tbl WHERE C = V_C_1
\end{verbatim}
One more example to drive home the point. Consider Row 40 with key
\((a_1 = v_{a, 1}, b_3 = v_{b, 3}, c_4 = v_{c, 4})\). The equivalent SQL would be
\begin{verbatim}
SELECT SUM(price) FROM in_tbl WHERE A = V_A_1 AND B = V_B_3 AND C = V_C_4
\end{verbatim}
\section{Implementation Details}
We now discuss how to create a simple and efficient implementation in C.
\begin{table}[hbtp]
\centering
\begin{tabular}{|l|l|l|} \hline \hline
{\bf Logical } & {\bf C} \\ \hline
Number of raw attributes & {\tt int nR;} \\ \hline
Number of derived attributes per raw attribute & \verb+ int *nDR; /* [n] */+ \\ \hline
Number of inputs & {\tt int nX;} \\ \hline
Table~\ref{example_input_to_agg} & \verb+uint8_t *X; /*[nX] */+ \\ \hline
Key for aggregator & {\tt int key; } \\ \hline
\end{tabular}
\caption{Data Structures for C}
\label{data_structures_for_C}
\end{table}
Things to note
\bi
\item \(nX = \sum_i \mathrm{nDR}[i]\)
\item We assume that \(nX \leq 16\). This allows us to encode a derived
attribute in 4 bits.
\item We assume that the maximum number of values that any derived attribute can
take on is 16. This allows us to encode the value in 4 bits.
\item Hence, every row of Table~\ref{example_input_to_agg} can be encoded using
an 8 bit unsigned integer.
\item It is relatively simple to relax these constraints. However, one must
recognize that allowing unconstrained relaxation is meaningless since it really
doesn't make sense to do a group by when the number of values of the attribute
is very high e.g., one would never say ``group by time since epoch in seconds''
\item We assume that \(nR \leq 4\). Once again, this is easy to relax but the
assumption serves us well for now.
\ei
\bibliographystyle{alpha}
\bibliography{../../Q_PAPER/ref}
| {
"alphanum_fraction": 0.6828009035,
"avg_line_length": 36.2456140351,
"ext": "tex",
"hexsha": "0a5f85ae9d69e084e3803c5483d879b841a17ec4",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2015-05-14T22:34:13.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-05-14T22:34:13.000Z",
"max_forks_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "subramon/qlu",
"max_forks_repo_path": "OPERATORS/MDB/doc/mdb_1.tex",
"max_issues_count": 7,
"max_issues_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47",
"max_issues_repo_issues_event_max_datetime": "2020-09-26T23:47:22.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-07-29T16:48:25.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "subramon/qlu",
"max_issues_repo_path": "OPERATORS/MDB/doc/mdb_1.tex",
"max_line_length": 88,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "subramon/qlu",
"max_stars_repo_path": "OPERATORS/MDB/doc/mdb_1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2016,
"size": 6198
} |
%
% Appendix B
%
\chapter{Misidentification rate method}
\label{fakefactor}
The misidentified lepton background is dominant in hadronic channels, and it is estimated using the misidentification rate method discussed in Chapter~\ref{bkg_est}. The systematic uncertainties associated with estimating the misidentified lepton background impact the median expected limits significantly. To recap, we are using a normalization uncertainty of 30\% for the misidentified lepton background along with a 10\% normalization uncertainty uncorrelated across the categories of each channel. Similarly, the shape uncertainties are estimated in a \PW boson enriched control region, and these uncertainties have a high impact and are constrained by the data. To improve the analysis's sensitivity, we need a better handle on the misidentified lepton background and its associated uncertainties. An improvement to the misidentification rate method is presented here.
Since the jet flavor composition of different background processes is different, leading to different probabilities of these jets getting misidentified as \tauh, the misidentification rate is measured separately for each dominant process. The dominant background processes are \wjets, QCD multijet, and \ttbar. First, determination regions are defined, each enriched in one of the above backgrounds. From each determination region, one extracts a dedicated misidentification rate, which is parametrized by its main dependencies to reduce statistical fluctuation. Further, so-called closure corrections in important variables not used in the misidentification rates parametrization are calculated. Contaminations in the different determination regions other than the process of interest are subtracted from data by using MC simulation and embedding samples. The misidentification rate is applied on an event-by-event basis as a weighted sum in the so-called application region. The application region is equivalent to the ``background-like'' region and signal region is equivalent to the ``signal-like'' region defined in Chapter~\ref{bkg_est}. The weights applied to each misidentification rate contribution are called fractions and represent the probability of the event being of a particular background process. Fractions are determined from MC simulation and embedding samples. Figure~\ref{fig:ff} shows a graphical summary of the misidentification rate method.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{plots/appendix/FF.png}
\caption{Illustration of the misidentification rate method. For each of the processes (\wjets, QCD, and \ttbar), a dedicated misidentification rate is measured in a process enriched determination region. The measured misidentification rate is corrected and then applied on an event by event basis inside the application region. The application of the misidentification rate inside the application region also involves the fractions $f_i$, representing the probabilities of an event in the application region originating from process $i$.}
\label{fig:ff}
\end{figure}
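Schematically, and only as a sketch of the weighted sum described above (the symbols $F_{i}$ for the process-specific misidentification rates are notation introduced here for illustration, not taken from the analysis), the per-event weight applied in the application region takes the form
\[
w = \sum_{i} f_{i}\, F_{i}, \qquad i \in \{W+\mathrm{jets},\ \mathrm{QCD},\ t\bar{t}\},
\]
where the fractions $f_{i}$ are the per-event process probabilities described above.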
The basic assumption of the misidentification rate method is the so-called universality. The misidentification rate measured in a determination region is the same as if measured in the signal region or any other phase space region. Universality is not strictly given because by measuring the misidentification rate in a determination region, one introduces biases—for example, the jet-flavor composition changes in the determination region with respect to the signal region. Therefore, the misidentification rate method also implements process-dependent bias corrections to restore the misidentification rate universality. In summary, the misidentification rate is measured in dedicated determination regions and parametrized in variables reflecting its most dominant dependencies. Afterward, corrections are derived for improving the modeling of the jet-\tauh misidentified contribution in variables not used for the parametrization. A last step bias correction is calculated, accounting for possible differences between the determination region where the misidentification rate is measured and the application region where the misidentification rate is applied.
A future Higgs boson LFV decays search can benefit from using this method for calculating the misidentified lepton background in hadronic channels. The corrections used for improving the modeling and the bias corrections can give a better description of the misidentified lepton background in hadronic channels. This method has been successfully incorporated in the SM \Htt analyses~\cite{CMS:2019pyn}.
| {
"alphanum_fraction": 0.8324742268,
"avg_line_length": 211.6363636364,
"ext": "tex",
"hexsha": "e2058d3e590bd4d3d446d91d4417f1eacd883c89",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9a7a4ae447331fb76b458374b9a3511298df309d",
"max_forks_repo_licenses": [
"LPPL-1.3c"
],
"max_forks_repo_name": "psiddire/nddiss",
"max_forks_repo_path": "thesis/appendixB.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9a7a4ae447331fb76b458374b9a3511298df309d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"LPPL-1.3c"
],
"max_issues_repo_name": "psiddire/nddiss",
"max_issues_repo_path": "thesis/appendixB.tex",
"max_line_length": 1464,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9a7a4ae447331fb76b458374b9a3511298df309d",
"max_stars_repo_licenses": [
"LPPL-1.3c"
],
"max_stars_repo_name": "psiddire/nddiss",
"max_stars_repo_path": "thesis/appendixB.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 872,
"size": 4656
} |
\documentclass[10pt, twocolumn]{article}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage[margin=0.25in]{geometry}
\usepackage{pdfpages}
\newcommand*{\Perm}[2]{{}^{#1}\!P_{#2}}%
\newcommand*{\Comb}[2]{{}^{#1}C_{#2}}%
\newcommand\given[1][]{\:#1\vert\:}
\begin{document}
\section{STAT 400, Aryn Harmon}
\subsection{Midterm 1 Material}
\textbf{Probability} is a real-valued function:
1. $\mathbb{P}(S) = 1$; 2. $\mathbb{P}(A) \geq 0$; 3. If $A_1, A_2$ are mutually exclusive events, $\mathbb{P}(A_1 \cup A_2) = \mathbb{P}(A_1) + \mathbb{P}(A_2)$ and so on.\\
\textbf{Inclusion-Exclusion:} $\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B)$.\\
\textbf{Conditional Probability:} $\mathbb{P}(A|B) = \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)}$ given $\mathbb{P}(B) > 0$. $\mathbb{P}(A|B) \neq \mathbb{P}(A)$ unless $A$ and $B$ are independent. Also see Bayes's Theorem. Probability of a string of unions given $B$ is equal to the sum of the individual conditional probabilities.\\
\textbf{Multiplication Rule:} Probability of two events both occurring: $\mathbb{P}(A \cap B) = \mathbb{P}(A) \cdot \mathbb{P}(B|A)$ or $\mathbb{P}(A \cap B) = \mathbb{P}(B) \cdot \mathbb{P}(A|B)$ (one is easier than the other).\\
\textbf{Bayes's Rule:} $\mathbb{P}(A|B) = \frac{\mathbb{P}(A) \cdot \mathbb{P}(B|A)}{\mathbb{P}(B)}$. $\mathbb{P}(A)$ is the \textit{prior probability} of $A$. $\mathbb{P}(A|B)$ is the \textit{posterior probability} of $A$ given that $B$ occurred. Use to invert probabilities.\\
\textbf{Bayes's Rule 2:} $\mathbb{P}(A|B) = \frac{\mathbb{P}(A) \cdot \mathbb{P}(B|A)}{\mathbb{P}(A) \cdot \mathbb{P}(B|A) + \mathbb{P}(A') \cdot \mathbb{P}(B|A')}$.\\
\textbf{Bayes's Rule Full:} Given some partition of $S$: $A_1 \cup \dots \cup A_k = S$. $\mathbb{P}(A_i|B) = \frac{\mathbb{P}(A_i) \cdot \mathbb{P}(B|A_i)}{\sum_{m=1}^{k} \mathbb{P}(A_m) \cdot \mathbb{P}(B|A_m)}$.\\
\textbf{Independence:} $\mathbb{P}(A|B) = \mathbb{P}(A)$, $\mathbb{P}(B|A) = \mathbb{P}(B)$, and $\mathbb{P}(A \cap B) = \mathbb{P}(A) \cdot \mathbb{P}(B)$.\\
\textbf{Pairwise Independence:} All of the following must hold: $\mathbb{P}(A \cap B) = \mathbb{P}(A) \cdot \mathbb{P}(B)$, $\mathbb{P}(A \cap C) = \mathbb{P}(A) \cdot \mathbb{P}(C)$, and $\mathbb{P}(B \cap C) = \mathbb{P}(B) \cdot \mathbb{P}(C)$.\\
\textbf{Mutual Independence:} $A$, $B$, and $C$ must be pairwise independent and in addition: $\mathbb{P}(A \cap B \cap C) = \mathbb{P}(A) \cdot \mathbb{P}(B) \cdot \mathbb{P}(C)$.
\textbf{Number of ways to take $r$ objects from $n$ candidates:}
\begin{tabular}{lll}
& Ordered Sample & Unordered Sample \\
W. R. & $n^r$ & ${n+r-1 \choose r}$ \\
W/o. R. & $\Perm{n}{r}$ & ${n \choose r}$
\end{tabular}
$\Perm{n}{k}=\frac{n!}{(n-k)!}$ - permutation\\
$\binom nk=\Comb{n}{k}=\frac{n!}{k!(n-k)!}$ - combination
\textbf{Binomial Theorem:} $(a+b)^n = \sum_{r=0}^n {n \choose r} a^r b^{n-r}$. ${n \choose n-r} = {n \choose r}$, ${n \choose 0} = {n \choose n} = 1$, ${n \choose 1} = {n \choose n-1} = n$.\\
\textit{Some magic formulae:} 1. $\sum_{r=0}^n {n \choose r} = 2^n$. 2. $\sum_{r=0}^n (-1)^r {n \choose r} = 0$. 3. $\sum_{r=0}^n {n \choose r} p^r (1-p)^{n-r} = 1$.\\
\textbf{Pascal's Equation:} ${n \choose r} = {n-1 \choose r} + {n-1 \choose r-1}$.\\
\textbf{Hypergeometric Distribution:} $f(x) = \mathbb{P}(X=x) = \frac{{N_1 \choose x} \cdot {N_2 \choose n-x}}{{N \choose n}}$, where $x \leq n$, $x \leq N_1$, $n-x \leq N_2$ and $N = N_1 + N_2$. $\mathbb{E}(X) = n\frac{N_1}{N}$ and $Var(X) = n \cdot \frac{N_1}{N} \cdot \frac{N_2}{N} \cdot \frac{N-n}{N-1}$. Example: Urn model: $N_1$ red balls, $N_2$ blue balls, draw $n$ balls from $N_1 + N_2$ balls, then look at the probability that there are $x$ red balls in the selected $n$ balls.\\
\textbf{Mean:} $\mu = \mathbb{E}(X) = \sum_{x \in S} x f(x)$.\\
\textbf{Variance:} $\sigma^2 = Var(X) = \mathbb{E}(X - \mu)^2 = \mathbb{E}(X^2) - \mu^2 = \sum_{i=1}^{k} x_i^2 f(x_i) - \mu^2$. \textbf{Standard Deviation:} $\sigma$.\\
\textbf{r-th Moment:} $\mathbb{E}(|X|^r) = \sum_{x \in S} |x|^r f(x) < \infty$; is the moment about the origin.\\
\textbf{r-th Moment about $b$:} $\mathbb{E}((X-b)^r) = \sum_{x \in S} (x-b)^r f(x)$; is the moment about $b$. Facts: $\mu$ is the first moment of $X$ about the origin. $\sigma^2$ is the second moment of $X$ about $\mu$.\\
\textit{Example of Variance Properties:} $Var(aX+b) = a^2 \cdot Var(X)$.
\textbf{Bernoulli Distribution:} A random experiment is called a set of Bernoulli trials if each trial has only two outcomes, has a constant $p$, and each trial is independent. $f(x) = p$ if $x=1$, $1-p$ if $x=0$, with $0 \leq p \leq 1$. $\mathbb{E}(X) = p$ and $Var(X) = p(1-p)$.\\
\textbf{Binomial Distribution:} Let $X$ be the number of successes in $n$ independent Bernoulli trials with $p$. Then $X \sim b(n,p)$. $f(x) = \mathbb{P}(X=x) = {n \choose x} p^x (1-p)^{n-x}$. $\mathbb{E}(X) = np$ and $Var(X) = np(1-p)$.\\
A note: binomial and hypergeometric are similar, but binomial has replacement and one category, and hypergeometric has two categories and no replacement.\\
\textbf{Cumulative Distribution Function:} $F(x) = \mathbb{P}(X \leq x), x \in (-\infty, \infty)$. For discrete random variables: $f(x) = \mathbb{P}(X=x) = \mathbb{P}(X \leq x) - \mathbb{P}(X \leq x - 1) = F(x) - F(x-1)$.\\
\textbf{Geometric Distribution:} $f(x) = p(1-p)^{x-1}, x = 1,2,3,\dots$. $X$ represents the draw in which the first success is drawn. $f(x)$ is the probability of getting a success in the $x$-th draw.\\
\textbf{Negative Binomial Distribution:} $f(x) = {x-1 \choose r-1} p^r (1-p)^{x-r}$ and $x = r,r+1,r+2,\dots$. $X$ is the number of trials it takes to get $r$ successes. $f(x)$ is the probability that the $r$-th success occurs on the $x$-th trial.
\subsection{Midterm 2 Material}
\textbf{Moment Generating Function:} $M(t) = \mathbb{E}(e^{tX}) = \sum_{x \in S}{e^{tX}f(x)}$ if $f(x)$ is the p.d.f. of some distribution and $t \in V_h(0)$ is finite. \textit{Theorem:} $\mathbb{E}(X^r) = M^{(r)}(0)$, so $\mu = \mathbb{E}(X) = M'(0)$ and $\sigma^2 = \mathbb{E}(X^2) - [\mathbb{E}(X)]^2 = M''(0) - [M'(0)]^2$. To calculate the m.g.f. and p.d.f. of some random variable given only its moments, use the Taylor series expansion centered at zero. $M(t) = M(0) + M'(0)\left(\frac{t}{1!}\right) + M''(0)\left(\frac{t^2}{2!}\right) + \dots = 1 + \mathbb{E}(X)\left(\frac{t}{1!}\right) + \mathbb{E}(X^2)\left(\frac{t^2}{2!}\right) + \dots$.\\
\textbf{Poisson distribution:} \textit{Definition:} Poisson process counts the number of events occurring in a fixed time/space given a rate $\lambda$. Let $X_t$ be the number events which occur in $t$ unit time intervals. $\mathbb{P}(X_t = x) = \frac{(\lambda t)^x}{x!} e^{-\lambda t}$. $\mu = \lambda, \sigma^2 = \lambda$.\\
\textbf{Poisson Approximation} of Binomial Distribution: If $X \sim b(n,p)$ and $n$ is large while $p$ is small, then $X$ can be approximated as $\hat{X} \sim poi(\lambda) \, s.t. \, \lambda = np$.\\
\textbf{Mean:} $\mu = \int_{-\infty}^{\infty} x f(x) dx$.\\
\textbf{Variance:} $\sigma^2 = \int_{-\infty}^{\infty} (x - \mu)^2 f(x) dx$.\\
\textbf{M.G.F.:} $M(t) = \mathbb{E}(e^{tX}) = \int_{-\infty}^{\infty} e^{tx} f(x) dx, t \in (-h,h)$.\\
\textbf{Percentile:} $p = \int_{-\infty}^{\pi_p} f(x) dx = F(\pi_p)$.\\
\textbf{Uniform Distribution:} $f(x) = \frac{1}{b-a}, a \leq x \leq b$. If $X \sim U(a,b)$, then $\mathbb{E}(X) = \frac{a+b}{2}$, $\sigma^2 = \frac{(b-a)^2}{12}$, and $M(t) = \mathbb{E}(e^{tX}) = \frac{e^{tb} - e^{ta}}{t(b-a)}, t \neq 0$ and $M(0) = 1$.\\
\textbf{Exponential Distribution:} This describes the waiting time between events in a Poisson process with rate $\lambda$. Let $\theta = \frac{1}{\lambda}$. Then if $X \sim Exp(\theta)$, $f(x) = \frac{1}{\theta}e^{-\frac{x}{\theta}}, x \geq 0$ and 0 otherwise. $\mu = \theta$, $\sigma^2 = \theta^2$, and $M(t) = (1 - \theta t)^{-1}, t < \frac{1}{\theta}$.\\
\textbf{Memoryless Property} of the Exponential Distribution: What happened in the past does not matter now. Only the present can determine the future. $\mathbb{P}(X > a+b \given[\big] X > a) = \mathbb{P}(X > b), \forall a, b \geq 0$.\\
\textbf{Gamma Function:} $\Gamma(x) = \int_{0}^{\infty} y^{x-1} e^{-y} dy, \, x > 0$.\\
\textbf{Gamma Distribution:} $f(t)$ represents the waiting time until the $\alpha$-th occurrence of some Poisson process with rate $\lambda$. $f(t) = \frac{\lambda^\alpha t^{\alpha-1}}{(\alpha-1)!} e^{-\lambda t}, \, t \geq 0$. Letting $\theta = \lambda^{-1}$, $f(t) = \frac{1}{\Gamma(\alpha)\, \theta^{\alpha}} t^{\alpha - 1} e^{\frac{-t}{\theta}}, \, t \geq 0, \, \alpha > 0$. $\mu = \alpha \theta$, $\sigma^2 = \alpha \theta^2$, and $M(t) = (1 - \theta t)^{-\alpha}$. If $\alpha = 1$, $\mathrm{gamma}(1,\theta) = \mathrm{Exp}(\theta)$.\\
\textbf{$\chi^2$ Distribution:} If $X \sim \mathrm{gamma}(\alpha,\theta)$, and $\theta = 2$ and $\alpha = \frac{r}{2}$, where $r$ is a positive integer, then $X$ is a $\chi^2$ distribution with $r$ degrees of freedom. $X \sim \chi^2(r)$. $\mu = r$, $\sigma^2 = 2r$, and $M(t) = (1-2t)^{\frac{-r}{2}}, \, t < \frac{1}{2}$. In particular, $\mathrm{Exp}(2) = \mathrm{gamma}(1,2) = \chi^2(2)$.\\
\textbf{Normal Distribution:} Bell curve! $f(x) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \, x \in \mathbb{R}$. $X \sim N(\mu, \sigma^2)$. $X \sim N(0, 1)$ is the standard normal distribution. $\mu = \mu$, $\sigma^2 = \sigma^2$, and $M(t) = e^{\mu t + \frac{\sigma^2 t^2}{2}}$. Standardization: $X \sim N(\mu, \sigma^2)$, then $Z = \frac{X-\mu}{\sigma} \sim N(0, 1)$.\\
\textbf{Normal Square Distribution:} Let $X \sim N(\mu, \sigma^2)$ and $Z = \frac{X-\mu}{\sigma}$. Then the random variable $V = Z^2 \sim \chi^2(1)$.\\
\textbf{Cauchy Distribution:} (Why do we even need this distribution?) $f(x) = \frac{1}{\pi (1 + x^2)}, \, x \in \mathbb{R}$. Symmetric about zero, so median is zero, but $\mu$ is undefined because the tail of the p.d.f. is too heavy (i.e. each integral of the distribution does not converge). $c.d.f. = F(x) = \frac{1}{\pi} \arctan(x) + \frac{1}{2}, \, x \in \mathbb{R}$.\\
\textbf{Joint Probability Density Function:} $\mathbb{P}((X, Y) \in A) = \int\int f(x,y) dx dy$.\\
\textbf{Marginal Probability Density Function:} $f_x(x) = \int_{-\infty}^{\infty} f(x,y) dy$ and the other way around for $y$.\\
\textbf{Mathematical Expectation:} $\mathbb{E}[g(X,Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x,y) f(x,y) dx dy$.\\
\textbf{Independent Random Variables:} Two events $A$ and $B$ are independent iff $f(x,y) = f_x(x) \cdot f_y(y) \, \forall x,y$. This works for both the p.d.f. and the c.d.f.\\
\textbf{Trinomial Distribution:} This is an extension of the binomial distribution into two dimensions. $f(x_1,x_2) = \mathbb{P}(X_1 = x_1, X_2 = x_2) = \frac{n!}{x_1! x_2! (n - x_1 - x_2)!} p_1^{x_1} p_2^{x_2} (1 - p_1 - p_2)^{n - x_1 - x_2}$ for $x_1, x_2 \geq 0$, $x_1 + x_2 \leq n$.\\
\textbf{Covariance:} $Cov(X,Y) = \mathbb{E}[(X - \mu_x)(Y - \mu_y)], \mu_x = \mathbb{E}(X), \mu_y = \mathbb{E}(Y)$. Useful identity: $Cov(X,Y) = \mathbb{E}(XY) - \mathbb{E}(X)\mathbb{E}(Y)$.\\
\textbf{Properties of Covariance:} Zero-angle: $Cov(X,X) = Var(X)$. Symmetry: $Cov(X,Y) = Cov(Y,X)$. $Cov(aX + b, Y) = a \cdot Cov(X,Y)$. $Cov(X+Y, W) = Cov(X,W) + Cov(Y,W)$. $Cov(aX+bY, cX+dY) = (ac) \cdot Var(X)+(bd) \cdot Var(Y)+(ad+bc) \cdot Cov(X, Y)$. $Var(aX + bY) = a^2 \cdot Var(X) + 2ab \cdot Cov(X, Y) + b^2 \cdot Var(Y)$.\\
\textbf{Correlation Coefficient:} $\rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \cdot \sigma_Y} = \frac{Cov(X,Y)}{\sqrt{Var(X)} \cdot \sqrt{Var(Y)}}$. Uncorrelated: $\rho_{XY} = 0 \Leftrightarrow \sigma_{XY} = 0$. If $X$ and $Y$ are independent, then $\rho_{XY} = 0$, but the implication \textbf{does not} go the other way.\\
\textbf{I.I.D.:} If every marginal probability density function is equal and independent, then all of the random variables are independent and identically distributed (i.i.d.).\\
\textbf{Combinations:} If $Y$ is a linear combination of independent random variables with constant multiples, then the expectation of $Y$ is simply the sum of the expectations of random variables. The variance of $Y$ is the sum of the variances of the random variables with their constant multiples squared. The moment of a linear combination of random variables is the product of their individual moments.\\
\textbf{Summary of Combinations:} Suppose $X$ and $Y$ are independent. Then,
\begin{itemize}
\item $bern(p) + bern(p) = b(2,p)$
\item $b(n_1,p) + b(n_2,p) = b(n_1 + n_2, p)$
\item $geo(p) + geo(p) = negbin(2,p)$
\item $negbin(r_1,p) + negbin(r_2,p) = negbin(r_1 + r_2, p)$
\item $poi(\lambda_1) + poi(\lambda_2) = poi(\lambda_1 + \lambda_2)$
\item $exp(\theta) + exp(\theta) = gamma(2,\theta)$
\item $gamma(\alpha_1,\theta) + gamma(\alpha_2,\theta) = gamma(\alpha_1 + \alpha_2, \theta)$
\item $N(\mu_1,\sigma_1^2) + N(\mu_2,\sigma_2^2) = N(\mu_1 + \mu_2,\sigma_1^2 + \sigma_2^2)$
\end{itemize}
\subsection{Final Material}
Everything taught in the class is on the final.\\
\textbf{Chebyshev's Inequality:} If $k \geq 1$: $\mathbb{P}(|X - \mu| \geq k \sigma) \leq \frac{1}{k^2}$. Equivalently, letting $\epsilon = k \sigma$: $\mathbb{P}(|X - \mu| \geq \epsilon) \leq \frac{\sigma^2}{\epsilon^2}$.\\
\textbf{Central Limit Theorem (CLT):} $\frac{\sqrt{n}(\overline{X} - \mu)}{\sigma} \rightarrow Z, \;\mathrm{as}\; n \rightarrow \infty$.\\
\textbf{Normal Approximation to Binomial:} If $np \geq 5$ and $n(1-p) \geq 5$, then\\
\begin{tabular}{lll}
& $b(n,p)$ & $N(\mu,\sigma^2)$ \\
Mean & $np$ & $\mu$ \\
Variance& $np(1-p)$ & $\sigma^2$
\end{tabular}\\
\textbf{Normal Approximation to Poisson:}\\
\begin{tabular}{lll}
& Poisson$(\lambda)$ & $N(\mu,\sigma^2)$ \\
Mean & $\lambda$ & $\mu$ \\
Variance& $\lambda$ & $\sigma^2$
\end{tabular}\\
\textbf{Jensen's Inequality:} If $g : \mathbb{R} \rightarrow \mathbb{R}$ is a convex function and $X$ is a random variable, then $\mathbb{E}[g(X)] \geq g(\mathbb{E}(X))$.\\
\textbf{Maximum Likelihood Estimator:} Maximize the likelihood function subject to the parameter space; a worked example follows the steps below.
\begin{enumerate}
\item Construct likelihood function, $F(x; \theta) = \Pi_{i=1}^{n}(f(x_i))$.
\item If applicable, take the log of this to make things easier to differentiate and to change the product into a sum. $l(x; \theta)$.
\item Differentiate to get $\frac{dl(x;\theta)}{d\theta}$.
\item Assume convexity (ha) and find optimal $\theta$ by solving $\frac{dl(x;\theta)}{d\theta} = 0$.
\end{enumerate}
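For example, if $X_1, \dots, X_n$ are i.i.d. $\mathrm{Exp}(\theta)$ with $f(x;\theta) = \frac{1}{\theta}e^{-x/\theta}$, then $l(\theta) = -n\log\theta - \frac{1}{\theta}\sum_{i=1}^{n}x_i$, so solving $\frac{dl}{d\theta} = -\frac{n}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^{n}x_i = 0$ gives $\hat{\theta}_{\mathrm{MLE}} = \overline{X}$.\\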
\textbf{Sample Mean:} $\frac{1}{n}\sum_{i=1}^{n}X_i \equiv \overline{X}$.\\
\textbf{Sample Variance:} For $z$ distribution: $\frac{1}{n}\sum_{i=1}^{n}(X_i - \overline{X})^2$, and for $t$ distribution: $\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \overline{X})^2$.\\
\textbf{Method of Moments Estimator:} Construct a system of equations using moments and sample data. For example, equate the theoretical mean with the sample mean, and the theoretical variance with the sample variance, then solve.\\
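For example, if $X_i \sim \mathrm{Exp}(\theta)$, the first theoretical moment is $\mathbb{E}(X) = \theta$, so equating with the sample mean gives $\hat{\theta}_{\mathrm{MM}} = \overline{X}$. With two parameters, e.g. $N(\mu, \sigma^2)$, equate $\mathbb{E}(X) = \mu$ with $\overline{X}$ and $\mathbb{E}(X^2) = \mu^2 + \sigma^2$ with $\frac{1}{n}\sum_{i=1}^{n}X_i^2$, giving $\hat{\mu} = \overline{X}$ and $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \overline{X})^2$.\\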
\textbf{Helpful Distribution Values:}\\
\begin{tabular}{ll}
$\alpha$ & $z_{1-\alpha}$ \\
0.2 & 0.840 \\
0.1 & 1.280 \\
0.05 & 1.645 \\
0.025 & 1.960 \\
\end{tabular}\\
\textbf{Confidence Intervals for Means:}\\
\tiny
\begin{tabular}{rcc}
& Known Variance & Unknown Variance \\
Two Side & $\left[ \overline{X} - z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}, \overline{X} + z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \right]$ & $\left[ \overline{X} - t_{\frac{\alpha}{2}}(n-1) \frac{S}{\sqrt{n}}, \overline{X} + t_{\frac{\alpha}{2}}(n-1) \frac{S}{\sqrt{n}} \right]$ \\
Lower & $\left[ \overline{X} - z_{\alpha} \frac{\sigma}{\sqrt{n}}, \infty \right)$ & you can \\
Upper & $\left( -\infty, \overline{X} + z_{\alpha} \frac{\sigma}{\sqrt{n}} \right]$ & figure it out \\
\end{tabular}\\
\normalsize
$S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \overline{X})^2$ be the sample variance.
When the population distribution is normal, use the ($n-1$) version unless you need to use MLE for which the ($n$) version is needed. For other distributions, use the ($n$) version. If no distribution is mentioned, then assume it to be normal and use the ($n-1$) version.\\
\textbf{Confidence Interval for Variance:} $\left[ \frac{(n-1)S^2}{b}, \frac{(n-1)S^2}{a} \right], a=\chi_{1-\frac{\alpha}{2}}^2(n-1), b=\chi_{\frac{\alpha}{2}}^2(n-1)$.\\
\textbf{Confidence Interval for Standard Deviation:} Use $a$ and $b$ from above, and just square root the expressions for the variance C.I.\\
\textbf{Confidence Interval for Proportions (and Wilcox Interval for large $n$):} Let $\hat{p} = \frac{Y}{n}$. Then $\hat{p} \pm z_{\frac{\alpha}{2}} \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$. Lower bound: $\left[ \hat{p} - z_{\alpha} \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}, 1 \right]$. Upper bound: $\left[ 0, \hat{p} + z_{\alpha} \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \right]$.\\
\includepdf[pages={1-},scale=1.0]{gdoc.pdf}
\end{document} | {
"alphanum_fraction": 0.614706401,
"avg_line_length": 116.5273972603,
"ext": "tex",
"hexsha": "a7b14e947e4d9024d7ebf21939c0843fc2256e53",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "22a348a5618625b985408f7524a4be468ac9b84f",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "achcello/effective-umbrella",
"max_forks_repo_path": "eq-sheet.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "22a348a5618625b985408f7524a4be468ac9b84f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "achcello/effective-umbrella",
"max_issues_repo_path": "eq-sheet.tex",
"max_line_length": 651,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "22a348a5618625b985408f7524a4be468ac9b84f",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "achcello/effective-umbrella",
"max_stars_repo_path": "eq-sheet.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6785,
"size": 17013
} |
%% Introduction / first chapter.
\chapter{Introduction to \latex and eBooks}
Considering writing a book? Good for you! Publishing a book is easier and cheaper today
than it ever has been.
If you know what you want to write about, the next step (and sometimes the
hardest step) is getting started. The most important part is the material itself:
you can write paragraphs in any number of text and document editors,
and figure out how to format the material as a book later. But sooner or later,
the question arises of how to turn the material into a book format. That could mean a paper book,
which requires physical resources for printing and distributing. Or it could mean
an eBook, to be read on an eReader such as an Amazon Kindle or Kobo
Clara, or using an app on a phone, tablet, or laptop.
So how do you turn your document into an eBook? This is a very short book demonstrating one
way to do this, and if you're reading this on an eReader, that shows
that it already worked at least once --- for this book.
To start with, I chose to use the \latex typesetting system.
The original \tex was created by the famous computer scientist Donald Knuth \citep{knuth1984texbook},
and added to by Leslie Lamport to make \latex \citep{lamport1985latex}.
\latex is used throughout scientific fields to write papers and books. There are many
reasons for preferring \latex to one of the more `point, click, and type' text editors:
it handles figures, tables, chapters, sections, headings, cross-references,
bibliographic references, mathematical equations, and many more things that can be
notoriously irritating and time consuming to get right. And it's possible to use different typesetting
programs and commands to create all sorts of output from \latex input, including eBooks.
If \tex and \latex are entirely unfamiliar to you, even this automation and flexibility
may not make it the best choice for you to write a book, because there is quite a technical
learning curve for \latex. It's not just writing text, it's writing commands telling the
typesetting program how to `compile' the output document --- in many ways, \latex
feels much more like programming than writing a Google or Microsoft Word document. But if
\latex is something you've already used to write a dissertation or paper, you're
probably well aware of its benefits (and its hassles).
In summary, if you've used \latex to write papers, now you want to write an eBook,
and you need to figure out how to do this, then this little book and
the open source template that it comes from might be ideal.
There are other ways to make eBooks, and other ways to use
\latex to make eBooks --- this book isn't comprehensive, but it might just
enable you to make an eBook end-to-end quickly, cheaply, and easily.
%%%
\section{How to Use this Book}
This is a book about itself --- it's about how it was written using
the templates and tools described in
the next few chapters. These are the main ways you can use this material:
\begin{itemize}
\item You can read it. It shouldn't take long, it outlines the process
used to make this eBook end to end, and then you can decide if this is something you want to try.
\item You can use it as an instruction manual, learning and following some of the procedures step-by-step to make your own book.
\item You can use it as a template. All of the source files used to make this book
are freely available on GitHub at {\small \url{https://github.com/dwiddows/ebookbook}} and on Overleaf.
The source files are laid out in a way that should make it easy to clone the project and adapt it for your own book.
\end{itemize}
It follows that you could recreate this eBook for yourself, following just the process
described in the book: which is basically to clone the GitHub project as a template, build
the project, send the output HTML document through an ePub converter, and send this to
your eReader device.
So why would anyone buy a book if it's free? Because
anyone who reads the steps above with enough familiarity to think
``git clone \ldots check dependencies \ldots build.sh \ldots
check dependencies again \ldots build.sh \ldots \ldots yeah, alright'' will expect it to take
more than a few minutes, hopefully less than an hour, and will price their own time realistically.
If you want to read this book for free on your eReader,
compiling from source is the way to go about it. Or if you want to just click `buy' now and send the author most of the
\$2.99 price tag, please go ahead, and thank you!
Either way, if you're not put off by \latex and \smalltt{git}
commands, keep reading. I hope the book is useful to you, and wish you
all the best of luck and persistence writing your book!
| {
"alphanum_fraction": 0.7735331363,
"avg_line_length": 59.9746835443,
"ext": "tex",
"hexsha": "556a2f0fd6102b60e4eab68e1700daccfa151c97",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2022-01-23T13:19:55.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-09-08T16:33:47.000Z",
"max_forks_repo_head_hexsha": "c616cae80dc674eb317761fdf715f4153033e298",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "sthagen/dwiddows-ebookbook",
"max_forks_repo_path": "chapters/introduction.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "c616cae80dc674eb317761fdf715f4153033e298",
"max_issues_repo_issues_event_max_datetime": "2022-02-26T20:11:41.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-09-04T21:51:59.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "sthagen/dwiddows-ebookbook",
"max_issues_repo_path": "chapters/introduction.tex",
"max_line_length": 132,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "c616cae80dc674eb317761fdf715f4153033e298",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "dwiddows/ebookbook",
"max_stars_repo_path": "chapters/introduction.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-23T13:19:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-09-08T12:21:06.000Z",
"num_tokens": 1096,
"size": 4738
} |
\chapter{Research Approach}
\label{chap:method}
To understand the effects that CL can have on LMs, we will explore various curriculum methods and their impact on training, evaluation, and usage in downstream tasks.
Our methodology is driven around three main research questions:
\begin{enumerate}
\item Does curriculum learning help a language model converge to a more optimal global minimum?
\item Does the language representation learned via curriculum learning improve performance on downstream tasks when compared to non-curriculum methods?
\item Do curriculum learning methods increase model convergence speed in both pre-training and downstream task fine-tuning?
\end{enumerate}
In Section \ref{chap:method:sec:structure}, we will first describe our experiment's structure as scoped to various implementations of CL. Next, in Section \ref{chap:method:sec:cldevelopment} we will describe and discuss our methods for curriculum development. Then in Section \ref{chap:method:sec:training} we will discuss how we will train and evaluate our LMs and how we will explore hyperparameter tuning.
\section{Experiment Structure}
\label{chap:method:sec:structure}
Our experimentation strategy is simple: train many models with a fixed structure and hyper-parameters using different curricula. We implement this strategy by exploring two different curriculum types: corpus replacement style and competence-based curriculum. Once a robust set of models has been trained with each curriculum method, all models are evaluated on a held-out portion of the training corpus and fine-tuned on the GLUE Benchmark. In the corpus replacement style (CRS) we create multiple distinct training datasets which are used to train an unmodified LM. In the competence-based curriculum (CBC) we first assign a notion of difficulty to each sample in a dataset and then alter the batch sampling method within the language model to sample from gradually more difficult training data. Our CRS method provides few controls but is computationally cheap, while CBC provides ample control at the cost of computational overhead. \\
\subsection{Language Model}
To optimize how quickly we can train our systems, we only explore the curriculum's effect on an established, successful baseline, ELMo \cite{Smith2019ContextualWR}. We leverage the code\footnote{https://github.com/allenai/bilm-tf} used for the original ELMo experiments and use the same hyper-parameters reported in their paper. We make two changes to the original implementation: the ability to use a CBC and the introduction of a padding token $<PAD>$. In the original implementation, the training loader loads a text file, shuffles all the lines, and then iterates through them. In our implementation of CBC, we load the full corpus, which we do not shuffle, and then select a batch at random from the examples that meet our model's current competence. Our implementation therefore changes data sampling from unconstrained random sampling without replacement to sampling with replacement. Since our implementation of the competence curriculum is based on sentence-level difficulty, batches can include sentences that are shorter than the context length used in the original ELMo implementation (20 tokens). As a result, we have to introduce a padding token, $<PAD>$, which ensures that each sentence is at least the size of the context window. To avoid having the model learn to predict the $<PAD>$ token, we introduce a masking function that sets the loss to 0 for any padding token. \\
It is worth noting that the introduction of this token increases the computational cost of training a model, as all padding tokens are essentially wasted computation. While we did not compute this cost for each of our curriculum methods, we estimated the inefficiency introduced by this method by counting the padding tokens added to the wikitext-103 corpus. The wikitext-103 training corpus has 101,425,658 tokens. When the corpus is parsed into sentences and we sum the modulo of the context window (20) over all sentences, we find we must introduce 42,547,653 padding tokens. If we do not split the corpus into sentences, we still must introduce 12,204,311 padding tokens. This means that, before even accounting for the cost of the curriculum implementation, our model training requires about 12.03 or 41.95 percent more FLOPs.
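The percentages quoted above follow directly from these padding-token counts:
\[
\frac{12{,}204{,}311}{101{,}425{,}658} \approx 12.03\%, \qquad \frac{42{,}547{,}653}{101{,}425{,}658} \approx 41.95\%.
\]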
\subsection{Datasets}
For our training corpus, we leverage two well-established language modeling benchmarks, wikitext-2 and wikitext-103 \cite{Merity2016PointerSM}. These datasets collect verified good and featured articles from English Wikipedia and contain 2 million and 103 million tokens respectively. Each dataset comprises full articles with original punctuation, numbers, and case; the only processing is the replacement of all words which occur fewer than 3 times with an $<UNK>$ token representing unknown words. Each corpus has already been tokenized, processed, and split into train, validation, and evaluation components and is available for broad usage. While most other research on language modeling has been focusing on bigger and bigger data, our focus on smaller data allows us to experiment with more curricula. We understand this will limit our model performance relative to current top methods, as a smaller corpus limits model performance \cite{Kaplan2020ScalingLF}. Moreover, these datasets were chosen because they are standard benchmarks for language modeling and of sufficient size to train large-scale language models. We believe that the diversity of our two datasets (wikitext-103 is 50x larger than wikitext-2) will allow us to draw broad conclusions about CL independent of data size. More information about corpus size and diversity can be found in Table \ref{table:corpussize}. Information about the corpus size used for other LMs is included for comprehensiveness \cite{Chelba2014OneBW}.\\
\begin{table}[h!]
\begin{tabular}{|l|l|l|l|l|} \hline
\textbf{Corpus Name} & \textbf{vocabulary Size} & \textbf{Tokens} & \textbf{lines} & \textbf{sentences} \\ \hline
wikitext-2 & 33278 & 2507007 & 44836 & 131262 \\ \hline
wikitext-103 & 267735 & 103690236 & 1809468 & 5343947 \\ \hline
1B Word Benchmark & 793471 & 829250940 & N/A & N/A \\ \hline
\end{tabular}
\caption{Training Corpus details}
\label{table:corpussize}
\end{table}
As our competence methodology focuses on sentences, we make two versions of the wikitext datasets: sentence-level and line-level. For our sentence-level corpus, we leverage SPACY.IO's \cite{spacy2}\footnote{https://spacy.io/} sentence boundary detector to split the original corpus into one delimited by sentences. The two different corpus splitting mechanisms mean that each of our competence-based curricula is run on four different datasets.
\subsection{Evaluation}
To evaluate our models, we will focus on three aspects of LM performance: performance on the trained task, performance on a downstream task, and the speed at which a model improves its performance on the trained task. The broad goal of our LMs is to represent a given corpus accurately and to do so we will evaluate model perplexity on the held-out portions of our datasets.
To measure the quality of our LMs downstream we will use the industry standard GLUE tasks.
\section{Curriculum Construction Methods}
\label{chap:method:sec:cldevelopment}
\subsection{Baseline}
Before we study the effect of CL on language modeling, we first retrain the public ELMo model on our datasets and evaluate it on GLUE. We also download the publicly released ELMo models and evaluate them on GLUE. Our training of ELMo is done for 10 epochs on wikitext-2 and wikitext-103.
\subsection{Corpus Replacement Style}
Corpus Replacement Style is inspired by the Bengio et al., 2009 \cite{Bengio2009CurriculumL} implementation of CL for LM. In their implementation, they train an AR LM with examples from a corpus with a context window of size 5. In the first epoch the model trains on all 5-token spans which contain only the 5,000 most common tokens. For the second epoch, training mirrors the prior epoch but with a threshold of the 10,000 most common words. This process continues until the model is training on the full, unaltered corpus. \\
In early experiments we explored an approach modeled after the Bengio et al., 2009 method, but as the context size of ELMo is much larger, performance suffered. Seeking to preserve the separation of model training and training data creation, we created what we refer to as the Corpus Replacement Style (CRS) curriculum. In this implementation we simplify the corpus not by removing training spans but by replacing words less common than the threshold with an $<UNK>$ token. Threshold selection was done by exploring various increment sizes and initial threshold sizes, and our final, most successful method produces 6 unique datasets per input dataset. To match the training time of our baseline we train one epoch on each corpus and an additional 4 on the unaltered corpus. Details on corpus thresholds can be found in Table \ref{table:2}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline
\textbf{Corpus Name} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7-10} \\ \hline
wikitext-2 & 5,000 & 10,000 & 15,000 & 20,000 & 25,000 & 30,000 & 33,278 \\ \hline
wikitext-103 & 25,000 & 50,000 & 75,000 & 100,000 & 150,000 & 250,000 & 267,735 \\ \hline
\end{tabular}
\caption{Vocabulary size per epoch}
\label{table:2}
\end{center}
\end{table}
\subsection{Competence Method}
Following the work of \cite{Platanios2019CompetencebasedCL} introduced in Section \ref{chap:prior:sec:cl:cl} we will apply the notion of a CBC to LM training. CBC methods rely on the ability to assign a difficulty score to each sample in the training corpus and use this to only allow the model to train on samples that are easier than its current competence level. A model's competence score is defined by how far along in a training regime the model is. \\
Our training corpus $X$ is a collection of sentences $S$, where each sentence $s_i$ is a sequence of words $s_i= w_0^i,w_1^i,\dots,w_n^i$ and each sentence is assigned a difficulty $\epsilon_{s_i}$. At each step in training, model competence is represented by $\lambda_t$, where $\lambda_0$ represents the initial competence and $\lambda_{increment}$ represents how much model competence increases after each training batch. Prior to training, each sentence in the training data is assigned a difficulty score from 0 to 1, represented by $\epsilon_{s_i}$. For each training batch, the model is only able to train on samples with difficulty $\epsilon_{s_i} \leq \lambda_t$. \\
To keep the curriculum length for CBC and CRS equal, we set our curriculum length to 6 epochs and do a grid search over $\lambda_{increment}$ and $\lambda_0$ values. After our grid search we select the parameters that provide the lowest training perplexity: $\lambda_0 = 0.1$ for wikitext-2 and $\lambda_0 = 0.01$, $\lambda_{increment} = 0.00001$ for wikitext-103. The CBC-based sampling algorithm is formalized in Algorithm \ref{algo:competence}. \\
\begin{algorithm}[H]
\label{algo:competence}
\SetAlgoLined
\KwResult{Model Trained with Competence Based Curriculum}
Input: X, $\lambda_0$, $\lambda_{increment}$ \;
Compute difficulty, $\epsilon_{s_i}$ for $s_i \in X$\;
Compute Cumulative density of $\epsilon_{s_i}$\;
$\lambda_t = \lambda_0$\;
\For{training step t = 1,...,n}{
Sample batch $b$ from X such that $\epsilon_{s_i} < \lambda_t$\;
Train on batch $b$\;
$\lambda_{t+1} = \lambda_t + \lambda_{increment}$\;
}
\caption{Competence-based curriculum}
\end{algorithm}
To understand the effect of CBC we study 8 different curricula per dataset: 2 baselines and 6 difficulty heuristics. The first baseline is a curriculum where we set $\lambda_0 = 1$, which means that our model can sample from the entire training dataset. This baseline aims to establish the effect that changing training from sampling without replacement to sampling with replacement has on LM performance. The second baseline is a random curriculum, where we sort the file randomly to create our sentence difficulty scores. The goal of this baseline is to establish the effect of an arbitrary curriculum on LM training. The six heuristics we explore are based on common NLP difficulty metrics, the original CBC paper, and some linguistically motivated difficulties: sentence length, unigram sentence probability, bigram sentence probability, trigram sentence probability, part-of-speech diversity, and sentence dependency complexity. For each methodology, for each $s_i$ in $X$, we compute a difficulty value $\epsilon_{s_i}$ and then sort the dataset by this difficulty score. Using the sorted dataset we compute the cumulative density function (CDF), giving each sentence a difficulty score $\epsilon_{s_i} \in [0,1]$. We now describe each method.
\subsubsection{Sentence Length}
Formalized in Equation \ref{equation:sentencelen}, this curriculum is built on the idea that it is much harder to model longer sentences, as longer sentences require better tracking of dependencies. We believe this method would be particularly effective in Transformer-based models, as it can steer the model into learning how to leverage its multi-headed attention with different sentence lengths. \begin{equation}
\text{sentence-length-}\epsilon_{s_i} = length(s_i).
\label{equation:sentencelen}
\end{equation}
\subsubsection{Sentence Entropy}
Another aspect of language that can be difficult to model is words with varying frequency in the corpora. Models, if assumed to behave like humans, would find it difficult to understand the meaning of a word if they neither see it in a corpus nor have a diversity of usages from which to infer meaning. Since the statistical strength of training samples with rare words is low, and the word embeddings learned early by the model are likely to have high variance, exposing a model to rare words early can result in badly estimated representations. To quantify this difficulty we propose computing a sentence entropy for each sentence with respect to its unigram, bigram, and trigram probabilities. These products can be thought of as an approximate, naive language model, as they assume words are sampled independently. Note that we are not calculating the conditional probability of each word given the preceding $n$ words but the probability of the $n$-gram given the text corpus. To produce a difficulty score $\epsilon_{s_i}$ we first calculate an $n$-gram probability for each unigram, bigram, and trigram in the training corpus. Then, using these probabilities, we calculate the $n$-gram difficulty of an input $s_i$ by computing the $\log$ product of each $n$-gram $\in s_i$, as shown in Equation \ref{equation:gramprob}. Here $uc$, $bc$, and $tc$ are the counts of unique unigrams, bigrams, and trigrams in the corpus, $C$ is the corpus, $x \in C$ is a line in the corpus, $w_i \in x$ is a word in a line, and $l(x)$ represents the length of $x$ in $n$-grams.
\begin{equation}
\begin{split}
p(w_n) &= \frac{\sum_{x \in C} \sum_{i=0}^{l(x)} (w_i == w_{n})}{uc} \\
p(w_{n}, w_{m}) & = \frac{\sum_{x \in C} \sum_{i=0}^{l(x)-1}(w_i == w_{n} \;\&\; w_{i+1} == w_m)}{bc} \\
p(w_{n}, w_{m}, w_{j}) & = \frac{\sum_{x \in C}\sum_{i=0}^{l(x)-2} (w_i == w_{n} \;\&\; w_{i+1} == w_m \;\&\; w_{i+2} == w_j)}{tc} \\
\text{unigram-} \epsilon_{s_i} &= \prod_{n=0}^{length(s_i)} \log(p(w_n)) \\
\text{bigram-} \epsilon_{s_i} &= \prod_{n=0}^{length(s_i)-1} \log(p(w_{n}, w_{n+1})) \\
\text{trigram-} \epsilon_{s_i} &= \prod_{n=0}^{length(s_i)-2} \log(p(w_{n}, w_{n+1}, w_{n+2}))
\end{split}
\label{equation:gramprob}
\end{equation}
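For intuition (hypothetical counts): a unigram occurring 50 times in a corpus normalized by $uc = 10{,}000$ has $p(w) = 0.005$ and $\log(p(w)) \approx -5.3$, while a unigram occurring only once has $\log(p(w)) \approx -9.2$; rare words therefore contribute log-probabilities of much larger magnitude to the product in Equation \ref{equation:gramprob}, which these heuristics treat as harder to model.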
\subsubsection{Sentence Dependency Complexity}
There are various ways to define sentence complexity, but in our experiments we scope complexity to the complexity of a dependency parse. We leverage the language processing framework spacy\footnote{spacy.io}: for each sentence we generate a dependency parse and, starting at the root, measure the depth of the tree. Sentence difficulty is formalized in Equation \ref{equation:dep}.
\begin{equation}
\text{dep-}\epsilon_{s_i} = \text{depth}(s_i)
\label{equation:dep}
\end{equation}
\subsubsection{Part Of Speech Diversity}
Another core aspect of language complexity can be derived from the diversity of parts of speech in a sentence. We believe that more difficult sentences feature a higher diversity of parts of speech (POS). We leverage the part-of-speech tagger from spacy to produce the set of all POS tags in each sentence. POS diversity is formalized in Equation \ref{equation:pos}.
\begin{equation}
\text{pos-}\epsilon_{s_i} = \text{len}(\text{set}(\text{pos}({s_i})))
\label{equation:pos}
\end{equation}
\subsubsection{Sentence Vs. Line CBC}
As mentioned previously, our curriculum methods are designed for sentence-level sampling, but most modern LMs train using a context window of a line of text or larger. As a result, we apply the CBC methods to both the sentence-split corpus and a line-delimited corpus. For the line-delimited corpus we use the existing line breaks in the wikitext-* corpora and apply the same heuristics at the line level instead of the sentence level. This means we favor short paragraphs far more in our line corpus than in our sentence corpus. In our sentence corpus, easy sentences will show up earlier in training, while in our line corpus, an easy sentence that is part of a long line of text will not show up until later in training. \\
It is worth noting that since the percentage of padding tokens varies between these two methods, line-based curricula effectively use larger batches. Since our batch size is limited by our GPU memory, we instead extend the training time of the sentence-based method by roughly 30\%. While there are no guarantees on the actual number of update steps, this approximately allows both methods to train on the same number of tokens as the original ELMo implementation, which is $10 \times \text{tokens in corpus}$.
\section{Model Training and Evaluation}
\label{chap:method:sec:training}
For model pre-training we follow the original implementation of ELMo and use 2 stacked 4096-dimensional BiLSTMs trained bidirectionally. We use a dropout of 0.1, a batch size of 128, a context window of 20 tokens, and train for 10 epochs on the full text corpus. In our pre-training we train a total of 36 models. We train 2 baselines (one for each corpus size) using the original ELMo implementation and then train 2 CRS models using the same setup. Each of these 4 models is evaluated on the validation portion of its training corpus at the end of each epoch. For the competence-based curriculum we train 32 models using our modified implementation: 4 corpora and 8 curricula per corpus. Since the training examples each model sees are different, we track model performance over time by evaluating on the validation portion of the training corpus every 100 batches. Training was done using 3 Nvidia 2080 Ti GPUs; training on the wikitext-103 corpus takes about 30 hours and training on wikitext-2 is under an hour. \\
For model fine-tuning, everything was implemented using the JIANT framework. Model weights were dumped from our pretrained models and downloaded for the public original ELMo model. Using these weights, each model is fine-tuned on each of the GLUE sub-tasks. Using JIANT, we normalized our training to a batch size of 8, a random seed of 42, an initial learning rate of 0.0001, a dropout of 0.2, and a multi-layer perceptron with 512 hidden dimensions. Training of each model continues until the learning rate dips below 0.000001 or the model has trained for 1000 epochs. Then, for each sub-task, the best result the model predicted is taken as the GLUE score for that task. Each model fine-tuning takes about 8 hours using the same training setup. \\
To compare model performance across curricula we will look at 3 different results: model perplexity on held out portion of training corpus, how this perplexity changes over time, and model transfer performance on GLUE. | {
"alphanum_fraction": 0.7851162333,
"avg_line_length": 188,
"ext": "tex",
"hexsha": "bcb15bd7c5408e0138a1c745f9ceafbdea2f5698",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6d13c7cfff8ce80c2c31d0c33696eed58294ff50",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "spacemanidol/UWTHESIS",
"max_forks_repo_path": "Thesis/method.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6d13c7cfff8ce80c2c31d0c33696eed58294ff50",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "spacemanidol/UWTHESIS",
"max_issues_repo_path": "Thesis/method.tex",
"max_line_length": 1549,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6d13c7cfff8ce80c2c31d0c33696eed58294ff50",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "spacemanidol/UWTHESIS",
"max_stars_repo_path": "Thesis/method.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4825,
"size": 20304
} |
% !TeX TXS-program:compile = txs:///pdflatex/
% !TeX program = pdflatex
% !BIB program = biber
%\sisetup{
% detect-all,
% round-integer-to-decimal = true,
% group-digits = true,
% group-minimum-digits = 5,
% group-separator = {\kern 1pt},
% table-align-text-pre = false,
% table-align-text-post = false,
% input-signs = + -,
% input-symbols = {*} {**} {***},
% input-open-uncertainty = ,
% input-close-uncertainty = ,
% retain-explicit-plus
%}
\section{\texttt{siunitx} Example Tables}
\label{sec:app:example_tables}
\begin{table}[h!]
\caption{An~Example of a~Regression Table \citep[Adapted from][]{Gerhardt2017}. Never Forget to Mention the Dependent Variable!}
\label{tab:lin_reg_interactions}
\input{siunitx_Example_Regression}
\end{table}
\begin{table}[h!]
\caption{Figure Grouping via \texttt{siunitx} in a~Table.}
\input{siunitx_Example_Figure_Grouping}
\end{table}
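For quick reference, the following minimal inline table sketches an \texttt{S}-column alignment directly in the source (the numbers are purely illustrative, and \texttt{siunitx} and \texttt{booktabs} are assumed to be loaded, as for the tables above).
\begin{table}[h!]
\caption{A~Minimal Inline \texttt{siunitx} Example with Illustrative Values.}
\begin{tabular}{l S[table-format=2.2] S[table-format=1.3]}
\toprule
 & {Mean} & {Std.\ Dev.} \\
\midrule
Group A & 12.30 & 0.415 \\
Group B & 4.70 & 1.082 \\
\bottomrule
\end{tabular}
\end{table}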
\newcommand{\mc}[2]{\multicolumn{1}{@{} c #2}{#1}}
\begin{table}[p]
\caption{Overview of the Choice Lists Presented to Subjects \citep[Adapted from][]{Gerhardt2017}.}
\label{tab:choice_lists}%
\input{siunitx_Example_Choice_Lists}
\end{table}% | {
"alphanum_fraction": 0.6739495798,
"avg_line_length": 28.3333333333,
"ext": "tex",
"hexsha": "86eb11bbe14441b5d73b0c8f411d5dd0d9fe3467",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2021-10-12T04:13:23.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-11-02T03:10:26.000Z",
"max_forks_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "lemoxiao/Awesome-Beamer-Collection",
"max_forks_repo_path": "200+ beamer 模板合集/TeXTemplates(论文,报告,beamer,学术报告)/1_Example_Content/9_Appendix/siunitx_Examples.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "lemoxiao/Awesome-Beamer-Collection",
"max_issues_repo_path": "200+ beamer 模板合集/TeXTemplates(论文,报告,beamer,学术报告)/1_Example_Content/9_Appendix/siunitx_Examples.tex",
"max_line_length": 129,
"max_stars_count": 13,
"max_stars_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "lemoxiao/Awesome-Beamer-Collection",
"max_stars_repo_path": "200+ beamer 模板合集/TeXTemplates(论文,报告,beamer,学术报告)/1_Example_Content/9_Appendix/siunitx_Examples.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-24T09:27:26.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-07-30T04:09:54.000Z",
"num_tokens": 388,
"size": 1190
} |
\section{Natural numbers}
| {
"alphanum_fraction": 0.75,
"avg_line_length": 7,
"ext": "tex",
"hexsha": "b300e51af91bb7329620bc28070d0c648c9138c6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/logic/sets/01-00-Natural_numbers.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/logic/sets/01-00-Natural_numbers.tex",
"max_line_length": 25,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/logic/sets/01-00-Natural_numbers.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7,
"size": 28
} |
\section{Method}
An overview of the method is presented in figure \ref{fig:workflow}. There are three main routines: feature extraction, classification of vegetation, and segmentation of linear objects. In the feature extraction we use the data of each point and its neighbors to obtain additional information that can help determine to which class a point belongs. We then use this information to classify which points belong to higher-than-herb vegetation. These points we subsequently segment into rectangular regions, which can be checked for linearity. The various steps are described in detail in the following sections.
\begin{figure}[t]
\centering
\begin{tikzpicture}[node distance=0.70cm, scale=0.7, every node/.style={transform shape}]
\node (pro0) [process, fill=darkorange!20] {Point-based feature extraction};
\node (pro1) [process, right=of pro0, fill=darkorange!20] {Neighborhood-based geometry feature extraction};
\node (pro2) [process, right=of pro1, fill=darkorange!20] {Neighborhood-based eigenvalue feature extraction};
\node (param) [title, above=0.4cm of pro1] {Feature extraction};
\begin{scope}[on background layer]
\node (fit1) [fit=(pro0)(pro1)(pro2)(param), inner sep=4pt, transform shape=false, draw=black!80, fill=darkorange, fill opacity=0.5] {};
\end{scope}
\node (in1) [io, left=of fit1] {Unclassified point cloud};
\node (out1) [io, below right=0.9cm and 11.7cm of fit1] {Point cloud with features};
\node (class) [title, below=1.2cm of pro1] {Vegetation classification};
\node (pro5) [process, below=0.2cm of class, fill=lightblue!20] {Supervised classification (random forest)};
\node (pro4) [process, right=of pro5, fill=lightblue!20] {Data trimming};
\node (in2) [io, below=of pro5] {Manually classified point cloud};
\node (pro6) [process, left=of pro5, fill=lightblue!20] {Accuracy assessment (cross validation)};
\begin{scope}[on background layer]
\node (fit2) [fit=(class)(pro4)(pro5)(pro6), inner sep=4pt, transform shape=false, draw=black!80, fill=lightblue, fill opacity=0.5] {};
\end{scope}
\node (out2) [io, below left=4.9cm and 3.7cm of fit2] {Classified point cloud};
\node (lin) [title, below=of in2] {Linear object segmentation};
\node (pro11) [process, below=0.2cm of lin, fill=turquoise!20] {Clustering (using DBSCAN)};
\node (pro10) [process, left=of pro11, fill=turquoise!20] {Preprocessing (2D conversion and downsampling)};
\node (pro12) [process, right=of pro11, fill=turquoise!20] {Region growing (based on rectangularity)};
\node (pro13) [process, below=of pro10, fill=turquoise!20] {Object merging};
\node (pro14) [process, right=of pro13, fill=turquoise!20] {Elongatedness assessment};
\node (pro15) [process, right=of pro14, fill=turquoise!20] {Accuracy assessment};
\begin{scope}[on background layer]
\node (fit3) [fit=(lin)(pro10)(pro11)(pro12)(pro13)(pro14)(pro15), inner sep=4pt, transform shape=false, draw=black!80, fill=turquoise, fill opacity=0.5] {};
\end{scope}
\node (in3) [io, below=of pro15] {Manually segmented vegetation objects};
\node (out3) [io, right=of fit3] {Linear vegetation objects};
% \node (out3) [io, right=3.2cm of pro14] {Linear vegetation objects};
\draw [arrow] (in1) -- (fit1);
\draw [arrow] (fit1) -| (out1);
\draw [arrow] (out1) |- (fit2);
\draw [arrow] (in2) -- (fit2);
\draw [arrow] (pro0) -- (pro1);
\draw [arrow] (pro1) -- (pro2);
\draw [arrow] (pro4) -- (pro5);
\draw [arrow] (pro5) -- (pro6);
\draw [arrow] (pro6) -- (pro5);
\draw [arrow] (fit2) -| (out2);
\draw [arrow] (out2) |- (fit3);
\draw [arrow] (pro10) -- (pro11);
\draw [arrow] (pro11) -- (pro12);
\draw [arrow] (pro12.south) -| +(0,-0.3) -| (pro13.north);
\draw [arrow] (pro13) -- (pro14);
\draw [arrow] (pro14) -- (pro15);
\draw [arrow] (in3) -- (in3 |- fit3.south);
\draw [arrow] (fit3) -- (out3);
% \draw [arrow] (out3 -| fit3.east) -- (out3);
\end{tikzpicture}
\caption{Workflow for feature extraction (orange), classification (blue), and segmentation of linear objects (green), with datasets represented as parallelograms and processes as rectangles.}
\label{fig:workflow}
\end{figure}
\subsection{Feature extraction}
The relevance of various input features for separating urban objects from vegetation has been studied extensively \citep{chehata2009airborne, guo2011relevance, mallet2011relevance}, but concise information for vegetation classification is scarce. After reviewing the relevant literature, we selected fourteen features (table \ref{tbl:features}). These features are based on echo information and on local neighborhood information (geometric and eigenvalue based), and they are considered efficient for discriminating vegetation objects in point clouds \citep{chehata2009airborne}.
\subsubsection{Point-based features}
The point cloud \(\mathcal{P}\) is a set of points \(\{p_{1}, p_{2}, \dots, p_{n}\}\) \(\in \mathbb{R}^3\), where each point \(p_{i}\) has x, y and z coordinates. In addition, an intensity value (\(I\)), a return number (\(R\)), and a number of returns (\(R_{t}\)) of the returned signal are stored. Since we do not have all the data required to perform a radiometric correction of the intensity data, we omitted this feature from the classification \citep{kashani2015review}. As an additional echo-based feature we calculated a normalized return number (\(R_{n}\)) (table \ref{tbl:features}).
\subsubsection{Neighborhood-based features}
In addition to point-based features, we calculated four geometric features related to local neighborhoods. We defined a neighborhood set \(\mathcal{N}_{i}\) of points \(\{q_{1}, q_{2}, \dots, q_{k}\}\) for each point \(p_{i}\), where \(q_{1} = p_{i}\), by using the k-nearest neighbors method with \(k = 10\) points. This method performs well for datasets that vary in point densities (Weinmann et al., 2014). The four geometric features were: the height difference, the height standard deviation, the local radius and local point density (table \ref{tbl:features}).
We further calculated eigenvalue-based features, which describe the distribution of points in a neighborhood \citep{hoppe1992surface, chehata2009airborne}. We used the local structure tensor to estimate the surface normal and to define surface variation \citep{pauly2002efficient}. The structure tensor describes the dominant directions of the neighborhood of a point. It is obtained by determining the covariance matrix of the x, y and z coordinates of the set of neighborhood points and computing the eigenvalues of this matrix (\(\lambda_{1}, \lambda_{2}, \lambda_{3}\), ranked by magnitude so that \(\lambda_{1} > \lambda_{2} > \lambda_{3}\)). The magnitude of each eigenvalue describes the spread of the points in the direction of the corresponding eigenvector. The points are linearly distributed if the eigenvalue of the first principal direction is significantly bigger than the other two (\(\lambda_{1} \gg \lambda_{2} \approx \lambda_{3}\)), planarly distributed if the eigenvalues of the first two principal directions are about equal and significantly larger than the third (\(\lambda_{1} \approx \lambda_{2} \gg \lambda_{3}\)), and scattered in all directions if all eigenvalues are about equal (\(\lambda_{1} \approx \lambda_{2} \approx \lambda_{3}\)). These and additional properties are quantified using the formulas in table \ref{tbl:features}. The eigenvector belonging to the third eigenvalue is equal to the normal vector (\(\vec{N} = (N_{x}, N_{y}, N_{z})\)) \citep{pauly2002efficient}.
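For illustration, with hypothetical normalized eigenvalues \((\lambda_{1}, \lambda_{2}, \lambda_{3}) = (0.90, 0.05, 0.05)\), the formulas in table \ref{tbl:features} give a linearity of \((0.90 - 0.05)/0.90 \approx 0.94\), a planarity of \((0.05 - 0.05)/0.90 = 0\) and a scatter of \(0.05/0.90 \approx 0.06\), indicating a strongly linear neighborhood.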
\begin{table}[t]
\caption{The features used for classification, split into two main groups: point-based and neighborhood-based. The point-based features are based on echo information and the neighborhood-based features are based on the local geometry and eigenvalue characteristics.}
\label{tbl:features}
\footnotesize
\begin{tabular}{l l l l l}
\toprule
\textbf{Feature group} & \textbf{Feature} & \textbf{Symbol} & \textbf{Formula} & \textbf{Reference} \\
\midrule
\textbf{Point} \\ \\
- Echo & Number of returns & \(R_{t}\) & \\ \\
& Normalized return number & \(R_{n}\) & \(R/R_{t}\) & \citet{guo2011relevance} \\ \\
\textbf{Neighborhood} \\ \\
- Geometric & Height difference & \(\Delta_{z}\) & \(\max_{j:\mathcal{N}_{i}}(q_{z_{j}}) - \min_{j:\mathcal{N}_{i}}(q_{z_{j}})\) & \cite{weinmann2015semantic} \\ \\
& Height standard deviation & \(\sigma_{z}\) & \(\sqrt{\frac{1}{k} \sum_{j=1}^k (q_{z_{j}} - \overline{q_{z}})^2}\) & \cite{weinmann2015semantic} \\ \\
& Local radius & \(r_{l}\) & \(\max_{j: \mathcal{N}_{i}}(|p_{i} - q_{j}|)\) & \cite{weinmann2015semantic} \\ \\
& Local point density & \(D\) & \(k/(\frac{4}{3} \pi r_{l}^3)\) & \cite{weinmann2015semantic} \\ \\
- Eigenvalue & Normal vector Z & \(N_{z}\) & & \citet{pauly2002efficient} \\ \\
& Linearity & \(L_{\lambda}\) & \(\frac{\lambda_{1} - \lambda_{2}}{\lambda_{1}}\) & \citet{west2004context} \\ \\
& Planarity & \(P_{\lambda}\) & \(\frac{\lambda_{2} - \lambda_{3}}{\lambda_{1}}\) & \citet{west2004context} \\ \\
& Scatter & \(S_{\lambda}\) & \(\frac{\lambda_{3}}{\lambda_{1}}\) & \citet{west2004context} \\ \\
& Omnivariance & \(O_{\lambda}\) & \(\sqrt[3]{\lambda_{1} \lambda_{2} \lambda_{3}}\) & \citet{west2004context} \\ \\
& Eigenentropy & \(E_{\lambda}\) & \(-\lambda_{1}\ln(\lambda_{1}) -\lambda_{2}\ln(\lambda_{2}) -\lambda_{3}\ln(\lambda_{3})\) & \citet{west2004context} \\ \\
& Sum of eigenvalues & \(\sum_{\lambda}\) & \(\lambda_{1} + \lambda_{2} + \lambda_{3}\) & \cite{mallet2011relevance} \\ \\
& Curvature & \(C_{\lambda}\) & \(\frac{\lambda_{3}}{\lambda_{1} + \lambda_{2} + \lambda_{3}}\) & \citet{pauly2002efficient} \\ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Vegetation classification}
The fourteen features served as input for the classification of vegetation, which required data trimming, a supervised vegetation classifier and accuracy assessment.
\subsubsection{Data trimming}
To facilitate efficient processing, points that certainly do not belong to higher-than-herb vegetation were removed from the dataset. These points are characterized by a locally planar neighborhood and are selected on the basis of the scatter feature (table \ref{tbl:features}): points with a scatter value \(S_{\lambda} < 0.05\) were removed. This threshold was chosen conservatively to ensure a large reduction of the data size while still preserving the higher-than-herb vegetation points.
\subsubsection{Supervised classification}
A random forest classifier provides a good trade-off between classification accuracy and computational efficiency \citep{breiman2001random, weinmann2015semantic}. The random forest algorithm creates a collection of decision trees, where each tree is based on a random subset of the training data \citep{ho1998random}. Random forest parameters such as the maximum number of features, minimum samples per leaf, minimum samples per split and the ratio between minority and majority samples were optimized using a cross-validated grid search. During the grid search a range of applicable values was chosen for each parameter and all combinations were tested and evaluated for performance using cross validation. To save time, first a coarse grid was created to identify the region of best performance and subsequently a finer grid was made to find the best performing parameter set in that region \citep{hsu2003practical}.
The trimmed point cloud is imbalanced, and includes a lot more vegetation than `other' points. Imbalanced training data can lead to undesirable classification results \citep{he2009learning}. Therefore, we used a balanced random forest algorithm. In this algorithm the subsets are created by taking a bootstrap sample from the minority class and a random sample from the majority class with the same sample size as the minority class sample \citep{chen2004using}. By employing enough trees eventually all majority class data are used, while still maintaining a balance between the two classes. The decision trees were created using a Classification and Regression Tree (CART) algorithm \citep{breiman1984classification}. A manual annotation of the trimmed point cloud into `vegetation' and `other' classes was done using an interpretation of the point cloud and high resolution aerial photos.
%The point cloud and high resolution aerial photos were interpreted and used to manually annotate part of the trimmed 3D point cloud as `vegetation' and `other' classes.
\subsubsection{Accuracy assessment}
The receiver operating characteristic (ROC) curve \citep{bradley1997use}, the Matthews correlation coefficient (MCC) \citep{matthews1975comparison} and the geometric mean \citep{kubat1998machine} were used in the accuracy assessment. These metrics remain informative even when dealing with an imbalanced dataset \citep{kohavi1995study, sun2009classification, lopez2013insight}. To create a ROC curve, the true positive (TP) rate is plotted against the false positive (FP) rate at various decision thresholds. The area under a ROC curve (AUROCC) is a measure of the performance of the classifier \citep{bradley1997use}. The MCC measures the correlation between the observed and the predicted data and is defined as:
\begin{equation}
\label{eq:MCC}
{\text{MCC}}={\frac{TP\times TN-FP\times FN}{{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}}}
\end{equation}
where TN are the true negatives and FN the false negatives. The geometric mean is defined as:
\begin{equation}
\label{eq:geom}
{\text{Geometric mean}}={\sqrt{\text{producer's accuracy class 1} \times \text{producer's accuracy class 2}}}
\end{equation}
The MCC, AUROCC and the geometric mean were obtained using a 10-fold cross validation. This is done by splitting the data into 10 randomly mutually exclusive subsets and using a subset as testing data and a classifier trained on the remaining data \citep{kohavi1995study}.
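For illustration, with a hypothetical confusion matrix of TP = 90, FN = 20, TN = 80 and FP = 10, equation \ref{eq:MCC} gives \(\text{MCC} = (90 \times 80 - 10 \times 20)/\sqrt{100 \times 110 \times 90 \times 100} \approx 0.70\) and equation \ref{eq:geom} gives a geometric mean of \(\sqrt{(90/110) \times (80/90)} \approx 0.85\).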
\subsection{Linear object segmentation}
To segment the vegetation points into linear regions we did a preprocessing step, clustered the points using a DBSCAN, applied a region growing algorithm, merged nearby and aligned objects, evaluated their elongatedness and assessed the accuracy.
\subsubsection{Preprocessing}
The point cloud was converted to 2D by removing the z-coordinate of the vegetation points. In addition, the data were spatially downsampled to 1 meter distance between vegetation points. This substantially decreased computation time without losing too much precision (figure \ref{fig:downsampling}).
\begin{figure}
\centering
\includegraphics[scale=0.80]{./img/downsampling.pdf}
\caption{The vegetation points of a piece of tree line within the research area before (blue) and after (red) downsampling the point cloud, plotted on top of the high resolution orthophoto, in RD coordinates.}
\label{fig:downsampling}
\end{figure}
\subsubsection{Clustering}
After reducing the amount of points we clustered the remaining points together using a DBSCAN clustering algorithm \citep{ester1996density}. This algorithm is able to quickly cluster points together based on density and removes outlying points in the process. This decreases the processing time needed in the subsequent region growing step, since the amount of possible neighboring points is reduced.
\subsubsection{Region growing}
Region growing is an accepted way of decomposing point clouds \citep{rabbani2006segmentation, vosselman2013point} or raster imagery \citep{blaschke2014geographic} into homogeneous objects. Normally, seed locations are selected and regions are grown based on proximity and similarity of the attributes of the points. In our method, seed selection was based on the coordinates and regions were grown based on proximity and a rectangularity constraint. The proximity is constrained by taking the eight nearest neighbors of each point during the growing process. The rectangularity of an object is defined as the ratio between the area of the object and the area of its minimum bounding rectangle (Rosin, 1999). The minimum bounding rectangle (figure \ref{fig:hulls}b) is computed with rotating calipers \citep{toussaint1983solving}. First a convex hull is constructed (figure \ref{fig:hulls}a) by using the QuickHull algorithm \citep{preparata1985computational}. The minimum bounding rectangle has a side collinear with one of the edges of the convex hull \citep{freeman1975determining}. Hence the minimum bounding rectangle can be found by rotating the system by the angle each edge of the convex hull makes with the x-axis and checking the bounding rectangle of each rotation. The area of the object can be calculated by computing the concave hull of the set of points belonging to the object (figure \ref{fig:hulls}c). For this we use a concave hull algorithm based on a \(k\)-nearest neighbors approach \citep{moreira2007concave}. This algorithm starts by finding a point with a minimum or maximum of one of the coordinates. Subsequently it traverses the points by repeatedly finding the neighbor whose edge makes the largest counterclockwise angle with the previous edge. This process is continued until the current point is the starting point. % Finally a check is done to ensure all the points fall within the hull. If this is not the case the \(k\) is increased by 1 and the algorithm repeated until an acceptable concave hull has been found.
In this way, for each cluster, the point with the minimum x-coordinate and its 20 closest neighbors were used as the starting region. Subsequent points were added as long as the region's rectangularity did not drop below a set threshold of 0.55 (figure \ref{fig:regiongrowing}). The growing procedure was then repeated for the next region until the entire cluster was segmented into rectangular regions.
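For illustration, a candidate region whose concave hull covers a hypothetical \(120\ \mathrm{m^2}\) inside a minimum bounding rectangle of \(200\ \mathrm{m^2}\) has a rectangularity of \(120/200 = 0.60\) and can keep growing, whereas a growth step that lowers this ratio to, say, \(100/200 = 0.50\) would not be accepted under the 0.55 threshold.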
%\begin{figure}
% \centering
% \includegraphics[scale=0.50]{./img/hulls.pdf}
% \caption{The vegetation points of a piece of tree line within the research area are plotted on top of the high resolution orthophoto, showing the effect of downsampling the point cloud (a) and the different hulls used during the region growing algorithm: the convex hull (b), the minimal oriented bounding box (c), and the concave hull (d). During the region growing the rectangularity is calculated by dividing the area of the concave hull (d) by the area of the bounding box (c).}
%\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./img/hulls.pdf}
\caption{The downsampled vegetation points of a piece of tree line within the research area plotted on top of the high resolution orthophoto in RD coordinates, showing the different hulls used during the region growing algorithm: the convex hull (a), the minimal oriented bounding box (b), and the concave hull (c). During the region growing the rectangularity is calculated by dividing the area of the concave hull (c) by the area of the bounding box (b).}
\label{fig:hulls}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./img/region_growing.pdf}
\caption{An example of the region growing process for one point. First the eight nearest neighbors are computed and the neighbors which are not already part of the region are considered for addition to the region (a). A bounding box and concave hull are computed for the region with the considered point added and the rectangularity is calculated (b). If this rectangularity is above a certain threshold the point is added to the region. Subsequently the process is repeated for the other points to consider (c, d). When all points to consider are checked, the next point of the region is checked for nearest neighbors and the whole process is repeated until all points of the region, including the ones added during the growing process, have been checked once.}
\label{fig:regiongrowing}
\end{figure}
\subsubsection{Object merging}
The resulting objects can be fragmented as the result of minor curves in the linear vegetation or small interruptions in the vegetation. These objects were merged when they were in close proximity, faced a similar compass direction, and were aligned. The compass direction was determined by computing the angle between one of the long sides of the minimum bounding box and the x-axis. The alignment was checked by comparing the angle of the line between the two center points with the directions of the objects. Once merged, the lengths of the objects were added and the maximum of the widths was taken as the new width.
\subsubsection{Elongatedness}
The merged objects were assessed for linearity by computing the elongatedness of an object, which is defined as the ratio between its length and its width \citep{nagao2013structural}. What constitutes a linear object is not clearly defined and is consequently somewhat arbitrary. After trial and error, we set the minimum elongatedness at 2.5 and, to exclude long and wide patches from the resulting linear objects, a maximum width of 60 meters.
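For illustration, a merged object of 150 by 20 meters has an elongatedness of \(150/20 = 7.5\) and is retained, whereas a patch of 100 by 50 meters has an elongatedness of only \(100/50 = 2.0\) and is rejected, even though its width stays below the 60 meter limit.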
\subsubsection{Accuracy assessment}
The accuracy of the delineated linear objects was assessed by calculating the user’s, producer’s and overall accuracy, as well as the harmonic mean of the precision and recall (F1), kappa and MCC scores \citep{congalton2008assessing}. We manually annotated the vegetation data into linear and nonlinear objects, after converting the classified vegetation points into polygons. Consequently this assessment evaluates the accuracy of the segmentation given the accuracy of the vegetation points. Differencing of the automated and manually constructed data resulted in confusion matrices to compare true positives, true negatives, false positives and false negatives in the area. | {
"alphanum_fraction": 0.7637159982,
"avg_line_length": 108.9949748744,
"ext": "tex",
"hexsha": "98c4b4ee274eddb8b4349dfc5158b2008347b3f3",
"lang": "TeX",
"max_forks_count": 5,
"max_forks_repo_forks_event_max_datetime": "2021-10-07T12:56:39.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-01-07T18:03:32.000Z",
"max_forks_repo_head_hexsha": "8e45a40dca472ca9d5cbb58593d9f5b5bc855bf4",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "chrislcs/linear-vegetation-elements",
"max_forks_repo_path": "Article/tex/__latexindent_temp.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8e45a40dca472ca9d5cbb58593d9f5b5bc855bf4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "chrislcs/linear-vegetation-elements",
"max_issues_repo_path": "Article/tex/__latexindent_temp.tex",
"max_line_length": 2063,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "8e45a40dca472ca9d5cbb58593d9f5b5bc855bf4",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "chrislcs/linear-vegetation-elements",
"max_stars_repo_path": "Article/tex/__latexindent_temp.tex",
"max_stars_repo_stars_event_max_datetime": "2020-11-02T06:48:26.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-06-16T09:05:54.000Z",
"num_tokens": 5573,
"size": 21690
} |
\documentclass{ecnreport}
\stud{Master 2 CORO-IMARO}
\topic{Task-based control}
\author{O. Kermorgant}
\begin{document}
\inserttitle{Task-based control}
\insertsubtitle{Lab 1: visual servoing}
\section{Goals}
The goal of this lab is to observe the image-space and 3D-space behaviors that are induced when using various visual features.
The ones that are considered are:
\begin{itemize}
\item Cartesian coordinates of image points
\item Polar coordinates of image points
\item $2\frac{1}{2}$D visual servoing
\item 3D translation (either $\Pass{c}{\mathbf{t}}{o}$ or $\Pass{c*}{\mathbf{t}}{c}$)
\item 3D rotation (either $\Pass{c}{\mathbf{R}}{c*}$ or $\Pass{c*}{\mathbf{R}}{c}$)
\end{itemize}
Several initial / desired poses are available, and some features that lead to a good behavior in some cases may lead to an undesired one in other cases.
\section{Running the lab}
The lab should be done on Ubuntu (either P-robotics room or the Virtual Machine).\\
\subsection{Virtual Machine update}
On the \textbf{virtual machine} you may need to install the \texttt{log2plot} module. If the \texttt{latest\_patch.sh} script is in your home directory, just run it: \texttt{./latest\_patch.sh}. If it is not, run the following lines (and run the script at the end):
\begin{center}\bashstyle
\begin{lstlisting}
cd ~
git clone https://github.com/CentraleNantesRobotics/ecn_install .ecn_install
ln -s .ecn_install/virtual_machine/latest_patch.sh .
\end{lstlisting}
\end{center}
\subsection{Download and setup}
You can download the lab from github. Navigate to the folder you want to place it and call:
\begin{center}\bashstyle
\begin{lstlisting}
git clone https://github.com/oKermorgant/ecn_visualservo
cd ecn_visualservo
gqt
\end{lstlisting}
\end{center}
This will download the git repository and prepare everything for the compilation.\\
You can then open QtCreator and load the \texttt{CMakeLists.txt} file of this project.
\subsection{Lab files}
Here are the files for this project:
\begin{center}
\begin{minipage}{.4\linewidth}
\dirtree{%
.1 ecn\_visualservo.
.2 include.
.3 feature\_stack.h.
.3 simulator.h.
.2 src.
.3 feature\_stack.cpp.
.3 simulator.cpp.
.2 CMakeLists.txt.
.2 config.yaml.
.2 main.cpp.
}
\end{minipage}
\begin{minipage}{.55\linewidth}
Only two files are to be modified:
\begin{itemize}
\item \texttt{main.cpp}: to build the feature stack and control law
\item \texttt{config.yaml}: configuration file to change which features you want to use
\end{itemize}
\end{minipage}
\end{center}
Feel free to have a look at the other files to see how everything is done under the hood.\\
The simulation runs with the green \texttt{Run} button on the bottom left corner of QtCreator.\\
Of course, as no visual features are used initially, the camera does not move.
\section{Implementation}
Only a few things need to be implemented in C++ to get a valid simulation: adding features to the stack and computing the control law.
\subsection{Implement the choice of the features}
In the \texttt{main.cpp} file, a few design variables are loaded from the configuration (\texttt{useXY, usePolar}, etc.).\\
They are used to tell which features we want to use for this simulation. All these features are already available in the ViSP library, and we just have to register them inside a custom feature stack.\\
The first part of this lab is hence to add features to the stack depending on the configuration variables.\\
We assume that:
\begin{itemize}
\item If \texttt{useXY} is \texttt{true}, we want to stack all points from the simulation with Cartesian coordinates.
\item If \texttt{usePolar} is \texttt{true}, we want to stack all points from the simulation with polar coordinates.
\item If \texttt{use2Half} is \texttt{true}, we want to stack:
\begin{itemize}
\item the center of gravity of the simulation with Cartesian coordinates
\item the center of gravity of the simulation with depth coordinate
\item a 3D rotation (either $\Pass{c}{\mathbf{R}}{c*}$ or $\Pass{c*}{\mathbf{R}}{c}$).\\Use \texttt{stack.setRotation3D("cdRc")} or \texttt{stack.setRotation3D("cRcd")}
\end{itemize}
\item The 3D translation and rotation are automatically used if they have a valid value in the configuration file:
\begin{itemize}
\item Translation can be \texttt{"cdTc"} ($\Pass{c*}{\mathbf{t}}{c}$) or \texttt{"cTo"} ($\Pass{c}{\mathbf{t}}{o}$)
\item Rotation can be \texttt{"cdRc"} ($\Pass{c*}{\mathbf{R}}{c}$) or \texttt{"cRcd"} ($\Pass{c}{\mathbf{R}}{c*}$)
\end{itemize}
\end{itemize}
Documentation on the \texttt{FeatureStack} class is available in Appendix \ref{app:stack}. It describes how to add new features to the stack.\\
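As an illustration, a partial sketch of the 2D cases is given below; it only relies on the \texttt{useXY} / \texttt{usePolar} variables of \texttt{main.cpp} and is not the full reference solution:
\begin{center}\cppstyle
\begin{lstlisting}
// sketch only: register 2D point features depending on the configuration
if(useXY)
    for(auto P: sim.observedPoints())
        stack.addFeaturePoint(P, PointDescriptor::XY);      // Cartesian (x,y)
if(usePolar)
    for(auto P: sim.observedPoints())
        stack.addFeaturePoint(P, PointDescriptor::Polar);    // polar (rho,theta)
// the 2 1/2 D case combines the CoG (XY + Depth) with stack.setRotation3D(...)
\end{lstlisting}
\end{center}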
From this point, the stack is able to update the features from a given pose $\cMo$ and give the current features and their interaction matrix.
\subsection{Implement the control law}
In the main control loop, you should retrieve the current feature values \texttt{s} from the stack, and their corresponding interaction matrix \texttt{L}.\\
Then compute \texttt{v} as the classical control $\v = -\lambda\L^+(\s-\ss)$.\\
This velocity twist will be sent to the simulation and a new pose will be computed.
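A minimal sketch of this step is given below; the surrounding loop, the stopping criterion and the use of the other configuration parameters are left to you:
\begin{center}\cppstyle
\begin{lstlisting}
vpColVector s  = stack.s();     // current feature values
vpColVector sd = stack.sd();    // desired feature values
vpMatrix    L  = stack.L();     // current interaction matrix
// classical law: v = -lambda * L^+ (s - s*), lambda being the configured gain
vpColVector v = (L.pseudoInverse() * (s - sd)) * (-lambda);
\end{lstlisting}
\end{center}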
Some gains and parameters are loaded from the configuration: \texttt{lambda, err\_min}, etc.\\
Observe their influence on the simulation.
\section{Combining features}
Test that your simulation works for all types of features. \\
Initially it starts from a position tagged as \texttt{cMo\_t} in the configuration file, while the desired position is tagged as \texttt{cdMo}.
Try various combinations of starting / desired poses, together with various choices on the feature set. Typically:
\begin{itemize}
\item Large translation / rotation error, using XY or Polar
\item A rotation error of $180^\circ$ using XY
\item Very close poses (\texttt{cMo\_visi})
\end{itemize}
What happens if many features are used at the same time, such as Cartesian + polar coordinates, or 2D + 3D features?
\subsection{On Z-estimation}
The \texttt{z\_estim} keyword of the configuration file allows changing how the Z-depth is estimated in 2D visual servoing.\\
Compare the behavior when using the current depth (negative \texttt{z\_estim}), the one at desired position (set \texttt{z\_estim} to 0) or an arbitrary constant estimation (positive \texttt{z\_estim}).
\subsection{Output files}
All simulations are recorded so you can easily compare them. They are placed in the folder:
\begin{center}
\texttt{ecn\_visualservo/results/<start>-<end>/<feature set>/}
\end{center}
All files begin with the Z-estimation method and the lambda gain in order to compare the same setup with various gains / estimation methods.
\newpage
\appendix
\section{Class helpers}
Below are detailed the useful methods of the main classes of the project. Other methods have explicit names and can be listed through auto-completion.
\subsection{The \texttt{FeatureStack} class}\label{app:stack}
This class is instantiated under the variable \texttt{stack} in the main function. It has the following methods:
\paragraph{Adding feature to the stack}
\begin{itemize}
\item \texttt{void addFeaturePoint(vpPoint P, PointDescriptor descriptor)}
adds a point to the stack, where \texttt{descriptor} can be:
\begin{itemize}
\item \texttt{PointDescriptor::XY}
\item \texttt{PointDescriptor::Polar}
\item \texttt{PointDescriptor::Depth}
\end{itemize}
\item \texttt{void setTranslation3D(std::string descriptor)}: adds the 3D translation feature (cTo, cdTc or none)
\item \texttt{void setRotation3D(std::string descriptor)}: adds the 3D rotation feature (cRcd, cdRc or none)
\item \texttt{void updateFeatures(vpHomogeneousMatrix cMo)}: updates the features from the passed \texttt{cMo}
\end{itemize}
\paragraph{Stack information}
\begin{itemize}
\item \texttt{void summary()}: prints a summary of the added features and their dimensions
\item \texttt{vpColVector s()}: returns the current value of the feature vector
\item \texttt{vpColVector sd()}: returns the desired value of the feature vector
\item \texttt{vpMatrix L()}: returns the current interaction matrix
\end{itemize}
\subsection{The \texttt{Simulator} class}\label{app:sim}
This class is instantiated under the variable \texttt{sim}. It has the following methods:
\begin{itemize}
\item \texttt{std::vector<vpPoint> observedPoints()}: returns the list of considered 3D points in the simulation
\item \texttt{vpPoint cog()}: returns the center of gravity as a 3D point
\end{itemize}
For example, to add all points of the simulation as XY features, plus the depth of the CoG, just write:
\begin{center}\cppstyle
\begin{lstlisting}
// stack all simulated points with Cartesian (x,y) coordinates
for(auto P: sim.observedPoints())
    stack.addFeaturePoint(P, PointDescriptor::XY);
// then add the depth of the center of gravity
stack.addFeaturePoint(sim.cog(), PointDescriptor::Depth);
\end{lstlisting}
\end{center}
\end{document}
\documentclass[11pt,a4paper,leqno]{extarticle}
\usepackage[margin=1in]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{booktabs} % for toprule, midrule and bottomrule
\usepackage{adjustbox}
\usepackage{amsmath}
\usepackage{bbold}
\usepackage{etoolbox}
\usepackage{setspace} % for \onehalfspacing and \singlespacing macros
\usepackage[hidelinks]{hyperref}
\usepackage{array}
\usepackage{graphicx}
\usepackage{setspace}
\usepackage{caption}
\usepackage{pdflscape}
\usepackage{caption}
\usepackage{tabularx}
\usepackage{authblk}
\usepackage{float}
\usepackage{siunitx}
\usepackage{titlesec}
\usepackage{pgfplots}
\usepackage[authoryear]{natbib}
\usepackage{scrextend}
\usepackage{nicefrac}
\usepackage{enumitem}
\usepackage{multirow}
\usepackage{xcolor}
\usepackage{cleveref}
\usepackage{varwidth}
\usepackage{gensymb}
\usepackage{lineno}
% section headings
\renewcommand{\thesection}{\Roman{section}.\hspace{-0.5em}}
\renewcommand\thesubsection{\Alph{subsection}.\hspace{-0.5em}}
\renewcommand\thesubsubsection{\hspace{-1em}}
\newcommand{\subsubsubsection}[1]{\begin{center}{\textit{#1}}\end{center}}
\titleformat{\section}
{\bf\centering\large}{\thesection}{1em}{}
\titleformat{\subsection}
{\itshape\centering}{\thesubsection}{1em}{}
\titleformat{\subsubsection}
{\bf}{\thesubsubsection}{1em}{}
% section referencing
\crefformat{section}{\S#2#1#3}
\crefformat{subsection}{\S#2#1#3}
\crefformat{subsubsection}{\S#2#1#3}
\crefrangeformat{section}{\S\S#3#1#4 to~#5#2#6}
\crefmultiformat{section}{\S\S#2#1#3}{ and~#2#1#3}{, #2#1#3}{ and~#2#1#3}
% appendix caption numbering
\DeclareCaptionLabelFormat{AppendixATable}{Table A.#2}
\DeclareCaptionLabelFormat{AppendixBFigure}{Figure B.#2}
% multiline cells
\newcommand{\specialcell}[2][c]{%
\begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}
% proper caption centering
\DeclareCaptionFormat{centerproper}{%
% #1: label (e.g. "Table 1")
% #2: separator (e.g. ": ")
% #3: caption text
\begin{varwidth}{\linewidth}%
\centering
#1#2#3%
\end{varwidth}%
}
% caption set up
\captionsetup[table]{
font = {sc},
labelfont = {bf}
}
% caption set up
\captionsetup[figure]{
font = {sc},
labelfont = {bf}
}
% math claims
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}{Proposition}
\newtheorem{lemma}{Lemma}
\newenvironment{proof}[1][Proof]{\noindent\textbf{#1:} }{\ \rule{0.5em}{0.5em}}
%math shortcuts
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
% hyperlinks
\definecolor{darkblue}{RGB}{0,0,150}
\hypersetup{
colorlinks=true,
linkcolor = darkblue,
urlcolor = darkblue,
citecolor = darkblue,
anchorcolor = darkblue
}
% bibliography
\makeatletter
\renewenvironment{thebibliography}[1]
{\section{References}%
\@mkboth{\MakeUppercase\refname}{\MakeUppercase\refname}%
\list{}%
{\setlength{\labelwidth}{0pt}%
\setlength{\labelsep}{0pt}%
\setlength{\leftmargin}{\parindent}%
\setlength{\itemindent}{-\parindent}%
\@openbib@code
\usecounter{enumiv}}%
\sloppy
\clubpenalty4000
\@clubpenalty \clubpenalty
\widowpenalty4000%
\sfcode`\.\@m}
{\def\@noitemerr
{\@latex@warning{Empty `thebibliography' environment}}%
\endlist}
\makeatother
% etoolbox
\AtBeginEnvironment{quote}{\singlespacing}
%% line numbers
%\linenumbers
\begin{document}
\title{\singlespacing{\textbf{Non-Constant Elasticity of Substitution and Intermittent Renewable Energy}}\\\vspace{1em}
Appendix}
\author[]{Saketh Aleti\thanks{Saketh Aleti: Graduate Student, Department of Economics, Duke University, 419 Chapel Drive, 213 Social Sciences Bldg., Durham, NC 27708-0097, USA (email: [email protected]).} \, and Gal Hochman\thanks{ Gal Hochman: Professor, Department of Agriculture, Food \& Resource Economics, Rutgers University, 116 Cook Office Building, 55 Dudley Road, New Brunswick, NJ 08901, USA (email: [email protected]).}}
% \date{\vspace{-1em}}
\maketitle
% % Syntax: \begin{addmargin}[<left indentation>]{<indentation>}
% \begin{addmargin}[0.5in]{0.5in}
% \textit{In this paper, we present a model of the electricity sector where generation technologies are intermittent. The economic value of an electricity generation technology is given by integrating its production profile with the market price of electricity. We use estimates of the consumer's intertemporal elasticity of substitution for electricity consumption while parameterizing the model empirically to numerically calculate the elasticity between renewables and fossil energy. We find that there is a non-constant elasticity of substitution between renewable and fossil energy that depends on prices and intermittency. This suggests that the efficacy and welfare effects of carbon taxes and renewable subsidies vary geographically. Subsidizing research into battery technology and tailoring policy for local energy markets can mitigate these distributional side effects while complementing traditional policies used to promote renewable energy.
% }
% \\
% \noindent\textbf{Key words:} renewable energy, intermittency, pollution, environment
%
% \noindent\textbf{JEL Classifications:} Q28, Q41, Q48, Q52, Q55, Q58
% \end{addmargin}
%
\section{Appendix A: Supplementary Proofs}
\label{sec:appendixa}
\renewcommand{\theequation}{\arabic{equation}a}
\subsection{Cobb-Douglas Case with Two Periods \& Two Technologies}
\label{sec:cobbdoug}
In this section, we consider a simpler case of our general model to better understand its implications. Firstly, we restrict the utility function to its Cobb-Douglas form which is simply the case where the elasticity of substitution $\sigma = 1$. Secondly, we limit the number of periods and technologies to 2. And, thirdly, we normalize the prices such that our representative consumer's income $I$ is $1$.
\subsubsection{Equilibrium Results}
Firstly, our demand equations, obtained by maximizing the Cobb-Douglas utility $U = Z_t^{\alpha_t} Z_s^{\alpha_s}$ (with $\alpha_t + \alpha_s = 1$) subject to the unit budget $p_t Z_t + p_s Z_s = 1$, simplify to:
\begin{align}
Z_t &= \alpha_t / p_t \\
Z_s &= \alpha_s / p_s
\end{align}
where $t$ and $s$ are our two periods. Next, solving the first-order condition (FOC) for profit maximization, we have:
\begin{align}
\mathbf{p} &= \boldsymbol{\xi}^{-1} \mathbf{c} = \begin{pmatrix}
-\dfrac{c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}}{\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}} \\[2ex]
\dfrac{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}}{\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}}
\end{pmatrix}
\end{align}
And, substituting back into our demand equations, we find the equilibrium quantities for $\mathbf{Z}$ and $\mathbf{X}$.
\begin{align}
\mathbf{Z} &= \begin{pmatrix}
\dfrac{\alpha _{t}\,\left(\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)}{c_{2}\,\xi _{\mathrm{1s}} - c_{1}\,\xi _{\mathrm{2s}}} \\[2ex]
\dfrac{\alpha _{s}\,\left(\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)}{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}}
\end{pmatrix}\\
\implies
\mathbf{X} &= \begin{pmatrix}
\dfrac{\alpha _{t}\,\xi _{\mathrm{2s}}}{c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}}+\dfrac{\alpha _{s}\,\xi _{\mathrm{2t}}}{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}} \\[2ex]
-\dfrac{\alpha _{t}\,\xi _{\mathrm{1s}}}{c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}}-\dfrac{\alpha _{s}\,\xi _{\mathrm{1t}}}{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}}
\end{pmatrix}
\end{align}
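As a quick numerical check of these expressions (the numbers are purely illustrative and are not taken from the empirical calibration in the main text), let $\alpha_t = \alpha_s = 1/2$, $c_1 = c_2 = 1$, $\xi_{1t} = 1$, $\xi_{1s} = 2$, $\xi_{2t} = 2$, and $\xi_{2s} = 1$. Then
\begin{equation*}
p_t = p_s = \tfrac{1}{3}, \qquad Z_t = Z_s = \tfrac{3}{2}, \qquad X_1 = X_2 = \tfrac{1}{2},
\end{equation*}
so that the budget constraint $\mathbf{c}^T \mathbf{X} = 1 = I$ and the production relation $\mathbf{Z} = \boldsymbol{\xi}^T \mathbf{X}$ both hold; these values also satisfy the Case 1 restrictions derived below.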
We now derive the restrictions on the parameters $\boldsymbol{\xi}$ and $\mathbf{c}$ that ensure $\mathbf{Z}, \mathbf{X} > 0$. These restrictions are given by Lemma 1 from the Model Equilibrium section:
\hfill
\noindent \textbf{Lemma 1A:} \textit{ Assume that, for all technologies $i$ and periods $t$, we have $\xi_{i,t} > 0 \, , \alpha_t > 0 \, ,$ and $ c_i > 0$. Then, for technology $j$ to be economical, there needs to exist a period $s$ in which the following three conditions are met:
\begin{itemize}
\item $\xi_{j,s}/c_j > \xi_{i,s}/c_i$ for all $i$
\item $\xi_{j,s}/\xi_{j,t} > \xi_{i,s}/\xi_{i,t} $ where $i \neq j$ and $t \neq s$
\item Period $s$ demand needs to be sufficiently large, i.e., $\alpha_s$ is large enough
\end{itemize}
}
In order to have $X_1, X_2 > 0$, we need to have the conditions of this lemma hold for both technologies. This is equivalent to requiring one of two possible sets of symmetrical restrictions on $\boldsymbol{\xi}$ and $\mathbf{c}$ which are detailed in \autoref{tab:paramrest}. The first set, Case 1, assumes that technology 2 is more cost efficient in period $t$, while the second set, Case 2, assumes that technology 1 is more cost efficient in period $t$. If a given set of parameters does not fall into either case, we are left with an edge case where one of the technologies is not used. Additionally, these inequalities compare two types of efficiency -- output efficiency and cost efficiency; we define output efficiency as electricity output per unit of input and cost efficiency in terms of electricity output per dollar of input. We refer to the last set of restrictions as mixed, because they relate both cost and output efficiency.
\vspace{0.15in}
\begin{center}
[INSERT Table A.1: Parameter Restrictions for $\mathbf{Z}, \mathbf{X} > 0$]
\end{center}
\vspace{0.15in}
\begin{proof}
We aim to derive conditions on $\xi$ and $c$ required to have positive $\mathbf{Z}$ and $\mathbf{X}$, so we begin by assuming $\mathbf{Z}, \mathbf{X} > 0$. Second, since the equations so far are symmetrical, note that there are two symmetrical sets of potential restrictions we must impose on the parameters. Thus, we first assume the inequality $c_1 \xi_{2t} - c_2 \xi_{1t} > 0$ to restrict ourselves to one of the two cases. This assumption results in the denominator of $Z_s$ being positive. Hence, we must also have $\xi_{1s}\xi_{2t} - \xi_{2s}\xi_{1t} > 0 $ for $Z_s > 0$. This same term appears in the numerator for $Z_t$, hence its denominator must be positive: $c_2 \xi_{1s} - c_1 \xi_{2s} > 0$. Now, rewriting these inequalities, we have:
\begin{align*}
c_1 \xi_{2t} - c_2 \xi_{1t} > 0 &\implies \xi_{2t}/c_2 > \xi_{1t}/c_1 \\
c_2 \xi_{1s} - c_1 \xi_{2s} > 0 &\implies \xi_{1s}/c_1 > \xi_{2s}/c_2 \\
\xi_{1s}\xi_{2t} - \xi_{2s}\xi_{1t} > 0 &\implies \xi_{1s}/\xi_{1t} > \xi_{2s}/\xi_{2t} \\
&\implies \xi_{1t}/\xi_{1s} < \xi_{2t}/\xi_{2s}
\end{align*}
Note that the latter two restrictions can be derived from the former two. Additionally, we implicitly assume that we have $\boldsymbol{\xi} > 0$. However, this is not a necessary assumption, since the invertibility of $\boldsymbol{\xi}$ only requires $\xi_{1t} \xi_{2s} > 0$ or $\xi_{1s} \xi_{2t} > 0$. Instead, we may leave the latter two inequalities in the form $\xi_{1s}\xi_{2t} > \xi_{2s}\xi_{1t}$, which remains valid when values of $\xi$ are equal to $0$. Lastly, the mixed efficiency restrictions come from $X > 0$. To start, for $X_1$, we have:
\begin{align*}
X_1 > 0 &\implies (\alpha_t \xi_{2s})(c_1 \xi_{2t} - c_2\xi_{1t}) + (\alpha_s \xi_{2t})(c_1 \xi_{2s} - c_2 \xi_{1s}) < 0\\
&\implies (\alpha_t \xi_{2s})(c_1 \xi_{2t} - c_2\xi_{1t}) < (\alpha_s \xi_{2t})(c_2 \xi_{1s} - c_1 \xi_{2s}) \\
&\implies (\xi_{2s}/\xi_{2t}) < (\alpha_s (c_2 \xi_{1s} - c_1 \xi_{2s}))/(\alpha_t(c_1 \xi_{2t} - c_2\xi_{1t})) \\
&\implies (\xi_{2s}/\xi_{2t}) < (\alpha_s (\xi_{1s}/c_1 - \xi_{2s}/c_2))/(\alpha_t(\xi_{2t}/c_2 - \xi_{1t}/c_1))
\end{align*}
Similarly, for $X_2$, note that only the numerators differ; $\xi_{2s}$ is replaced with $-\xi_{1s}$ and $\xi_{2t}$ is replaced with $-\xi_{1t}$. Hence, we have
\begin{align*}
X_2 > 0 &\implies (\alpha_t \xi_{1s})(c_1 \xi_{2t} - c_2\xi_{1t}) + (\alpha_s \xi_{1t})(c_1 \xi_{2s} - c_2 \xi_{1s}) > 0\\
&\implies (\xi_{1s}/\xi_{1t}) > (\alpha_s (\xi_{1s}/c_1 - \xi_{2s}/c_2))/(\alpha_t(\xi_{2t}/c_2 - \xi_{1t}/c_1))
\end{align*}
To double check, note that combining the inequalities from $X_1>0$ and $X_2 > 0$ leads to $\xi_{2s}/\xi_{2t} < \xi_{1s}/\xi_{1t}$. This is precisely the earlier result obtained from $\mathbf{Z} > 0$. Again, it is important to note that we assume $\boldsymbol{\xi} > 0$ to simplify the inequalities for $X_1 > 0$ and $X_2 > 0$. Otherwise, we may leave the inequalities in their original forms and they are still valid when $\xi_{1t} \xi_{2s} > 0$ or $\xi_{1s} \xi_{2t} > 0$. \\ \hfill
\end{proof}
Let us consider the set of restrictions belonging to Case 1. The first inequality, our initial assumption, states that technology 2 is relatively more cost efficient in period $t$. The second inequality claims technology 1 is relatively more cost efficient in period $s$. The implications are fairly straightforward; if a technology is to be used, it must have an absolute advantage in cost efficiency in at least one period. The third condition states that the relative output efficiency of technology 2 is greater than that of the first technology in period $t$. And, the fourth condition makes a symmetrical claim but for technology 1 and period $s$. These latter two restrictions regarding output efficiency enter $\mathbf{Z}$ and $\mathbf{X}$ through $\mathbf{p}$; they are simply a restatement of the invertibility of $\xi$ and can also be derived through the cost efficiency restrictions.
The mixed efficiency restrictions are less intuitive. Firstly, note that $\left(\xi_{1s}/c_1 - \xi_{2s}/c_2\right)$ is the difference in cost efficiency for the two technologies in period $s$; this is equivalent to the increase in $Z_s$ caused by shifting a marginal dollar towards technology 1. Similarly, the bottom term $\left( \xi_{2t}/c_2 - \xi_{1t}/c_1 \right)$ represents the change in $Z_t$ caused by shifting a marginal dollar towards technology 1. Both these terms are then multiplied by the share parameter of the utility function for their respective time periods. Furthermore, note that $\alpha_t$ $(\alpha_s)$ is the elasticity of utility with respect to $Z_t$ $(Z_s)$. Hence, in total, the mixed efficiency restrictions relate the relative cost efficiencies of each technology with their output efficiency and the demand for energy. So, for example, suppose that consumers prefer, \textit{ceteris paribus}, that nearly all their electricity arrives in period $t$. This would imply $\alpha_t$ is arbitrarily large which results in the left-hand side of the fraction becoming arbitrarily small. This violates the first mixed efficiency restriction but not the second; consequently, use of the first technology, which is less cost efficient in period $t$, approaches $0$.
In more practical terms, suppose that our first technology is coal power and the latter is solar power. Although coal power is dispatchable, it does not easily ramp up or down within a day; hence, it is reasonable to apply our model where capacities are fixed over time so long as our time frame is sufficiently short. Hence, we now assume periods $t$ and $s$ represent the peak and off-peak for a day. And, we expect that there is more available solar radiation during peak hours than off-peak hours, since peak hours are usually during the middle of the day. This implies that the output efficiency of solar power is higher in period $t$ due to more available solar radiation. Additionally, since the energy output of a unit of coal is independent of time, we know that the output efficiency of coal is constant. In total, this implies that we have met the output efficiency restrictions, since we have $\xi_{2t}/\xi_{2s} > \xi_{1t}/\xi_{1s}$. Next, we can reasonably assume that coal is more cost efficient than solar in the off-peak period when there is less sun; hence, the second cost efficiency restriction is satisfied. Then, for there to be an incentive to use solar power, we must satisfy the first cost-efficiency condition; that is, solar needs to be cost efficient during peak hours; otherwise, we hit an edge case where no solar is employed. And, finally, solar must also satisfy the mixed efficiency condition, which essentially implies that there must be sufficient demand for electricity during period $t$, when solar is more effective, for it to be a feasible technology. So, overall, for a technology to be economical, it must meet three conditions: it must be the most cost efficient technology in a particular period, it must have a comparative advantage in output efficiency in the same period, and there must be a sufficient amount of demand in that period.
%\footnote{This same analysis can be further extended to any $n$ technologies. However, the number restrictions and different cases to ensure $X , Z > 0$ expands very quickly ($O(n!)$). For instance, if we had 3 technologies and 3 periods, we must first assume each technology is more cost efficient than the other two in a unique period; this adds 3*2 restrictions. Then, we must make the output efficiency restrictions comparing each pair of technologies for each pair of periods.}
\subsubsection{Comparative Statics}
The comparative statics are similarly intuitive. The equilibrium quantity of a technology is increasing with its output efficiency and decreasing with its cost per unit. Additionally, the equilibrium quantities for a particular technology move in the opposite direction with respect to the output efficiency and cost of the other technologies. For a practical example, consider again coal and solar power from before. An increase in the output efficiency of solar or a decrease in solar power's cost will reduce the optimal quantity of coal power. Likewise, as coal power's efficiency improves, its adoption rises. To find the effects of $\boldsymbol{\alpha}$ on $\mathbf{X}$, we must assume one of the cases of restrictions shown in \autoref{tab:paramrest}. So, again, let us assume Case 1 is true; this implies that $X_2$ is the most cost efficient technology in period $t$ and likewise for $X_1$ in period $s$. Firstly, note that $\boldsymbol{\alpha}$ determines the demand for electricity in a period. Hence, when $\alpha_t$ rises, we see the optimal level of $X_2$ rise as well; likewise, $X_1$ rises with $\alpha_s$. In short, the optimal quantity of a technology rises linearly with the demand for electricity in the period it specializes in. Moreover, these relationships are reversed with respect to demand in each technology's suboptimal period. So, for example, we would expect the use of solar energy to rise when the demand for electricity during peak hours rises, and it would fall when demand for energy in the off-peak rises. On the other hand, use of coal power would rise with off-peak demand and fall with peak demand. This concept carries through for the comparative statics of $\mathbf{Z}$. When the output efficiency of technology 1 rises or its cost falls, we see output $Z_s$ rise and output $Z_t$ fall. This is because technology 1 is optimal in period $s$ given the Case 1 restrictions. Likewise, we see symmetrical results for the output with respect to the cost and output efficiency of technology 2; improvements in the efficiency of $X_2$ result in greater output in $Z_t$ and smaller output in $Z_s$. In total, we have Proposition 1:
\hfill
\noindent \textbf{Proposition 1A:} \textit{Suppose that the conditions of Lemma 1 hold for each technology, so we are not in an edge case. Then,
\begin{itemize}
\item The equilibrium quantity of a technology is increasing with its output and decreasing with its cost; at the same time, it is decreasing with the output of other technologies and increasing with the cost of other technologies.
\item Also, suppose that some technology $i$ is the most cost efficient in period $t$. Then, its equilibrium quantity is increasing with respect to the demand parameter $\alpha_t$ and decreasing with respect to the demand parameters in other periods.
\item Furthermore, again assuming technology $i$ is the most cost efficient in period $t$, the comparative statics of $Z_t$ and $X_i$ are equivalent.
\end{itemize}}
\hfill
\begin{proof}
We begin by deriving the comparative statics of the cost and efficiency parameters with respect to $\mathbf{X}$. Firstly, we take derivatives with respect to the cost vectors:
\begin{align*}
\frac{\partial X_1}{\partial \mathbf{c}} &=
\begin{pmatrix}
\dfrac{-\alpha _{t}\,{\xi _{\mathrm{2s}}}^2}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2}-\dfrac{\alpha _{s}\,{\xi _{\mathrm{2t}}}^2}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2}<0 \\
\dfrac{\alpha _{t}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2s}}}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2}+\dfrac{\alpha _{s}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2t}}}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2}>0
\end{pmatrix}\\
\frac{\partial X_2}{\partial \mathbf{c}} &=
\begin{pmatrix}
\dfrac{\alpha _{t}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2s}}}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2}+\dfrac{\alpha _{s}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2t}}}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2}>0 \\
\dfrac{-\alpha _{t}\,{\xi _{\mathrm{1s}}}^2}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2}-\dfrac{\alpha _{s}\,{\xi _{\mathrm{1t}}}^2}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2}<0
\end{pmatrix}
\end{align*}
The first and second terms of $\partial X_1 / \partial c_1$ are clearly both negative independent of the restrictions on the parameters. Similarly, all terms of $\partial X_1 / \partial c_2$ are positive independent of any restrictions. Since the structure of this problem is symmetrical with respect to $X_1$ and $X_2$, the same comparative statics apply but in reverse for $X_2$. Next, we derive comparative statics for each element of $\xi$.
\begin{alignat*}{1}
\frac{\partial X_1}{\partial \boldsymbol{\xi}} &=
\begin{pmatrix}
\dfrac{\alpha _{s}\,c_{2}\,\xi _{\mathrm{2t}}}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2}>0 & \dfrac{\alpha _{t}\,c_{2}\,\xi _{\mathrm{2s}}}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2}>0 \\
\dfrac{-\alpha _{s}\,c_{2}\,\xi _{\mathrm{1t}}}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2}<0 & \dfrac{-\alpha _{t}\,c_{2}\,\xi _{\mathrm{1s}}}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2}<0 \\
\end{pmatrix}\\
\frac{\partial X_2}{\partial \boldsymbol{\xi}} &=
\begin{pmatrix}
\dfrac{-\alpha _{s}\,c_{1}\,\xi _{\mathrm{2t}}}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2} <0& \dfrac{-\alpha _{t}\,c_{1}\,\xi _{\mathrm{2s}}}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2} <0\\
\dfrac{\alpha _{s}\,c_{1}\,\xi _{\mathrm{1t}}}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2}>0& \dfrac{\alpha _{t}\,c_{1}\,\xi _{\mathrm{1s}}}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2} >0\\
\end{pmatrix}
\end{alignat*}
Again, the signs are fairly straightforward. The optimal quantity of $X_1$ increases with its output efficiency in both periods; however, it decreases with the output efficiency of $X_2$ in both periods. Similarly, symmetrical results are shown for $X_2$. Next, we study the effects of $\boldsymbol{\alpha}$ on $\mathbf{X}$; this requires us to place some restrictions on the parameters, so we use those belonging to Case 1 in \autoref{tab:paramrest}. Then, we have
\begin{align*}
\frac{\partial X_1}{\partial \boldsymbol{\alpha}} &=
\begin{pmatrix}
\dfrac{\xi _{\mathrm{2s}}}{c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}}<0 \\
\dfrac{\xi _{\mathrm{2t}}}{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}}>0
\end{pmatrix}\\
\frac{\partial X_2}{\partial \boldsymbol{\alpha}} &=
\begin{pmatrix}
\dfrac{-\xi _{\mathrm{1s}}}{c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}}>0 \\
\dfrac{-\xi _{\mathrm{1t}}}{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}}<0
\end{pmatrix}
\end{align*}
Note that our restrictions imply that $c_1 \xi_{2t} - c_2 \xi_{1t} > 0$ and $c_2 \xi_{1s} - c_1 \xi_{2s} > 0$. From here, the intuition is clear; we assume that $X_2$ is more cost efficient in period $t$, so increases in demand during period $t$ (caused by increases in $\alpha_t$) will increase the optimal quantity of $X_2$. And, the same applies to $X_1$ with respect to period $s$ and $\alpha_s$. Again, due to symmetry, the statics are reversed when the technologies are flipped. Similarly, the signs would also be flipped if we used the restrictions given by Case 2 instead.
Next, we derive the comparative statics for $\mathbf{Z}$. From our restrictions, we have $\xi_{1s}\xi_{2t} > \xi_{2s}\xi_{1t}$. The signs of the results below follow from this inequality and the cost efficiency restrictions.
\begin{align*}
\frac{\partial Z_t}{\partial \mathbf{c}} &=
\begin{pmatrix}
\dfrac{\alpha _{t}\,\xi _{\mathrm{2s}}\,\left(\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2}>0\\
\dfrac{-\alpha _{t}\,\xi _{\mathrm{1s}}\,\left(\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2}<0
\end{pmatrix}\\
\frac{\partial Z_s}{\partial \mathbf{c}} &=
\begin{pmatrix}
\dfrac{-\alpha _{s}\,\xi _{\mathrm{2t}}\,\left(\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2} <0\\
\dfrac{\alpha _{s}\,\xi _{\mathrm{1t}}\,\left(\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2} >0
\end{pmatrix}\\
\frac{\partial Z_t}{\partial \boldsymbol{\xi}} &=
\begin{pmatrix}
\dfrac{\alpha _{t}\,\xi _{\mathrm{2s}}}{c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}} < 0& \dfrac{-\alpha _{t}\,\xi _{\mathrm{2s}}\,\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2} <0 \\
\dfrac{-\alpha _{t}\,\xi _{\mathrm{1s}}}{c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}} >0& \dfrac{\alpha _{t}\,\xi _{\mathrm{1s}}\,\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}{{\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}^2} >0\\
\end{pmatrix}\\
\frac{\partial Z_s}{\partial \boldsymbol{\xi}} &=
\begin{pmatrix}
\dfrac{-\alpha _{s}\,\xi _{\mathrm{2t}}\,\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2}> 0& \dfrac{\alpha _{s}\,\xi _{\mathrm{2t}}}{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}} > 0\\
\dfrac{\alpha _{s}\,\xi _{\mathrm{1t}}\,\left(c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}\right)}{{\left(c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}\right)}^2} < 0& \dfrac{-\alpha _{s}\,\xi _{\mathrm{1t}}}{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}} <0\\
\end{pmatrix}
\end{align*}
Again, recall that we have $c_1 \xi_{2t} - c_2 \xi_{1t} > 0$ and $c_2 \xi_{1s} - c_1 \xi_{2s} > 0$; the rest follows. And finally, we have:
\begin{align*}
\frac{\partial Z_t}{\partial \boldsymbol{\alpha}} &=
\begin{pmatrix}
\dfrac{\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}}{c_{2}\,\xi _{\mathrm{1s}}-c_{1}\,\xi _{\mathrm{2s}}} > 0 \\
0
\end{pmatrix} \\
\frac{\partial Z_s}{\partial \boldsymbol{\alpha}} &=
\begin{pmatrix}
0 \\
\dfrac{\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}}{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}} > 0
\end{pmatrix}
\end{align*}
These are fairly trivial, since $Z_t = \alpha_t / p_t$ (and $Z_s = \alpha_s/ p_s$) and prices are positive. \\
\hfill
\end{proof}
\subsubsection{Elasticity of Substitution}
\label{sec:EOS_derivation}
We derive the elasticity of substitution between technologies in this two-period, two-technology setting. By definition, the elasticity of substitution is
\begin{equation}
e_{12} \equiv \frac{\partial \log(X_1/X_2)}{\partial \log(c_2/c_1)} = \frac{\partial X_1/X_2 }{ \partial (c_2/c_1)} \cdot \frac{c_2/c_1}{X_1/X_2} = \frac{\partial X_1/X_2}{ \partial c_2} \cdot \frac{\partial c_2}{\partial (c_2/c_1)} \cdot \frac{c_2/c_1}{X_1/X_2}
\end{equation}
Note that
\begin{align}
X_1/X_2 &= -\frac{\alpha _{s}\,c_{1}\,\xi _{\mathrm{2s}}\,\xi _{\mathrm{2t}}-\alpha _{s}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}+\alpha _{t}\,c_{1}\,\xi _{\mathrm{2s}}\,\xi _{\mathrm{2t}}-\alpha _{t}\,c_{2}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}}{\alpha _{s}\,c_{1}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}-\alpha _{s}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{1t}}+\alpha _{t}\,c_{1}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\alpha _{t}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{1t}}} \\
\frac{\partial X_1/X_2}{\partial c_2} &= \frac{\alpha _{s}\,\alpha _{t}\,c_{1}\,{\left(\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)}^2}{{\left(\alpha _{s}\,c_{1}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}-\alpha _{s}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{1t}}+\alpha _{t}\,c_{1}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\alpha _{t}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{1t}}\right)}^2} \\
\frac{\partial c_2}{\partial (c_2/c_1)} &= \left( \frac{\partial (c_2/c_1)}{\partial c_2} \right)^{-1} = c_1
\end{align}
Thus, the elasticity of substitution is
\begin{equation}
e_{12} =
\frac{-\alpha _{s}\,\alpha _{t}\,c_{1}\,c_{2}\,{\left(\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)}^2 \left(\alpha _{s}\,c_{1}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}-\alpha _{s}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{1t}}+\alpha _{t}\,c_{1}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\alpha _{t}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{1t}}\right)^{-1}}{
\,\left(\alpha _{s}\,c_{1}\,\xi _{\mathrm{2s}}\,\xi _{\mathrm{2t}}-\alpha _{s}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}+\alpha _{t}\,c_{1}\,\xi _{\mathrm{2s}}\,\xi _{\mathrm{2t}}-\alpha _{t}\,c_{2}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)
}
\end{equation}
The mixed efficiency restrictions (either case) from \autoref{tab:paramrest} imply that $(\alpha _{s}\,c_{1}\,\xi _{\mathrm{2s}}\,\xi _{\mathrm{2t}}-\alpha _{s}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}+\alpha _{t}\,c_{1}\,\xi _{\mathrm{2s}}\,\xi _{\mathrm{2t}}-\alpha _{t}\,c_{2}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}})$ and $(\alpha _{s}\,c_{1}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}-\alpha _{s}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{1t}}+\alpha _{t}\,c_{1}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\alpha _{t}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{1t}})$ have opposite signs. Then, including the negative sign on the first term, the elasticity of substitution is positive as expected.
At this point, we cannot further simplify the elasticity of substitution. Furthermore, this expression and how it changes with the given parameters is not intuitive. Hence, we primarily rely on numerical simulations in the main text. However, in the following two subsections, we look at two special cases for a better theoretical intuition of this elasticity.
\subsection{Equilibrium in the Case of $\sigma \to \infty $}
Here, we consider what the competitive equilibrium looks like when $\sigma \to \infty$. This corresponds to the case where electricity consumption in each period is a perfect substitute for electricity consumption in the other periods.
Recall that utility is given by
\begin{equation}
U = \left( \sum_t \alpha_t Z_t^\phi \right)^{1/\phi}
\end{equation}
Since $\sigma = 1/(1-\phi)$, we have $\phi \to 1$ which implies that the utility function becomes
\begin{equation}
U= \sum_t \alpha_t Z_t = \boldsymbol{\alpha}^T \boldsymbol{\xi}^T \mathbf{X}
\end{equation}
Utility here is simply a weighted sum of total electricity consumption. In other words, $Z_t$ is a perfect substitute for $Z_s$ for all periods $t,s$.
In order to find the equilibrium solution, we can maximize utility directly with respect to $\mathbf{X}$. To see why, first note that we have assumed that markets are perfectly competitive with no frictions; also, the utility function satisfies local nonsatiation. With these assumptions, we can apply the First Welfare Theorem which states that the market equilibrium will be equivalent to the social planner's solution. Consequently, we can abstract from the firm problem and set $\mathbf{X}$ directly in a way that maximizes consumer utility.
Firstly, we can see that the marginal utility of each technology $i$ is given by $\sum_t \alpha_t \xi_{i,t}$. This is simply the sum of the electricity output of that input across periods, weighted by the share parameters. Next, the cost of each input $i$ is $c_i$ by definition. Therefore, the total utility from spending all of income $I$ on a particular input is given by $(I/c_i) \sum_t \alpha_t \xi_{i,t}$. Now, consider the set
\begin{equation}
S = \argmax_i \, \sum_t \alpha_t \xi_{i,t}/c_i
\end{equation}
This set contains the indices of the inputs that have the maximal cost-adjusted marginal utility.
The optimal solution for $\mathbf{X}$ is defined by the set
\begin{equation}
W =
\{ \mathbf{X} \, : \, \mathbf{c}^T \mathbf{X} = I \;,\; \mathbf{X} \geq 0 \;,\; \text{and} \;\; (x_i > 0 \implies i \in S) \;\; \forall i \}
\end{equation}
This set contains vectors that define bundles of inputs that maximize utility. The intuition behind this result is based on the linearity of utility. This causes the total utility offered by a particular input to be equal to its quantity multiplied by its marginal utility. Consequently, since each input's marginal utility is a constant, exogenous value, the optimal bundle of inputs consists of a convex combination of those inputs in $S$. Specifically, the indices in $S$ denote the technologies that have maximal cost-adjusted marginal utility, so these inputs will individually and in combination maximize consumer utility against the budget constraint. This gives us the last proposition:
\hfill
\noindent \textbf{Proposition 2A:} \textit{Suppose that we have $\sigma \to \infty$. Then,
\begin{itemize}
\item Electricity consumption in each period $t$ is a perfect substitute for electricity consumption in period $s$ for all periods $t,s$;
\item The utility function takes on the linear form $U = \sum_t \alpha_t Z_t$; and
\item The set of optimal bundles of inputs $\mathbf{X}$ is given by
$$
W =
\{ \mathbf{X} \, : \, \mathbf{c}^T \mathbf{X} = I \,,\; \mathbf{X} \text{ non-negative} \,,\; \text{and} \;\; (x_i > 0 \implies i \in S) \;\; \forall i \}
$$
$$\text{where} \quad S = \argmax_i \, \sum_t \alpha_t \xi_{i,t}/c_i$$
\end{itemize}
$S$ represents the set of indices for technologies that have maximal cost-adjusted marginal utility. In other words, any vector of inputs $\mathbf{X}$ consisting of a feasible (non-negative $\mathbf{X}$) and affordable ($\mathbf{c}^T \mathbf{X} = I$) combination of technologies in $S$ represents a valid equilibrium solution. Furthermore, the set $Y = \{ \mathbf{Z} \,: \, \mathbf{Z} = \boldsymbol{\xi}^T \mathbf{X} \; \, \forall \, \mathbf{X} \in W \}$ contains all possible equilibrium values of electricity output. }
\hfill
\begin{proof}
Consider an arbitrary input bundle $\mathbf{Y} \not\in W$ where $\mathbf{Y} \geq 0$ (feasible) and $\mathbf{c}^T \mathbf{Y} \leq I$ (affordable). Note that $\mathbf{Y} \not\in W$ implies we have either $\mathbf{c}^T \mathbf{Y} < I$ or there exists some $i$ such that $y_i > 0$ but $i \not\in S$.
In the first case, $\mathbf{Y}$ does not satisfy the budget constraint with equality. Consequently, it is possible to raise some element of $\mathbf{Y}$ by $\epsilon$ while maintaining the budget constraint and achieving higher utility. Thus, $\mathbf{Y}$ cannot be optimal. That is, utility is strictly monotonic with respect to each input, so the budget constraint must be satisfied with equality at an optimal solution.
So, suppose alternatively that $\mathbf{Y}$ satisfies the budget constraint with equality but does not satisfy the second condition. Specifically, let $j$ be an index where $y_j > 0$ but $j \not\in S$. Now, pick an arbitrary $k \in S$, and define
$$ \mathbf{Y}^* = \mathbf{Y} - e_j y_j + e_k y_j (c_j/c_k)$$
where $e_j$ is a unit vector which contains 0 in all entries except the $j$'th entry, which is equal to 1. Note that subtracting $e_j y_j$ reduces the cost by $c_j y_j$, while adding $ e_k y_j (c_j/c_k)$ increases the cost by $y_j c_j$. Consequently, we have $ \mathbf{c}^T \mathbf{Y}^* = I$. Furthermore, note that subtracting $e_j y_j$ still means that the $j$'th input is feasible since it will then be 0. And, the $k$'th input will still be feasible because adding $ e_k y_j (c_j/c_k)$ means it will remain non-negative. Next, in terms of utility, we have
\begin{align*}
U(\mathbf{Y}^*) &= \sum_t \alpha_t \left( \sum_i \xi_{i,t} ( \mathbf{Y} - e_j y_j + e_k y_j (c_j/c_k) )_i \right) \\
&= \sum_t \alpha_t \left( \sum_i \xi_{i,t} y_i \right) - \sum_t \alpha_t \xi_{j,t} y_j + \sum_t \alpha_t \xi_{k,t} y_j (c_j/c_k)
\end{align*}
Note that
\begin{align*}
\sum_t \alpha_t \xi_{k,t} y_j (c_j/c_k) &> \sum_t \alpha_t \xi_{j,t} y_j \\
\iff \sum_t \alpha_t \xi_{k,t} /c_k &> \sum_t \alpha_t \xi_{j,t} /c_j
\end{align*}
The last inequality must be true given that $k \in S$ but $j \not\in S$. Therefore, we have
\begin{align*}
- \sum_t \alpha_t \xi_{j,t} y_j + \sum_t \alpha_t \xi_{k,t} y_j (c_j/c_k) &> 0 \\
\implies U(\mathbf{Y}^*) > U(\mathbf{Y})
\end{align*}
Thus, all feasible and affordable vectors of inputs $\mathbf{Y} \not\in W$ must be suboptimal, since there exists another feasible and affordable vector of inputs that offers higher utility.
Now, we show that every element in $W$ gives the same amount of utility.
Consider an arbitrary vector $\mathbf{Y} \in W$. Each input gives a constant marginal utility of $\sum_t \alpha_t \xi_{i,t}$. Since utility is linear, the total utility offered by $\mathbf{Y}$ is
\begin{align*}
U(\mathbf{Y}) &= \sum_i \left( \sum_t \alpha_t \xi_{i,t} \right) y_i \\
&= \sum_t \alpha_t \xi_{1,t} y_1 + \dots + \sum_t \alpha_t \xi_{n,t} y_n
\end{align*}
where $n$ is the number of inputs. By definition, we have $y_i > 0 \implies i \in S$, so we have $i \not\in S \implies y_i = 0$. Thus, we can simplify the utility to
$$U(\mathbf{Y}) = \sum_t \alpha_t \xi_{s_1,t} y_{s_1} + \dots + \sum_t \alpha_t \xi_{s_m,t} y_{s_m} $$
where $m$ is the number of elements in $S$. If $m = 1$, there is only one element in $S$, so there is only one element in $W$ and we are done; otherwise, assume that $m > 1$. Now, suppose that we reduce element $s_m$ of $\mathbf{Y}$ to 0 and shift the additional income left over to the first element $s_1$. This gives us
$$\mathbf{Y}^{(m)} = \mathbf{Y} - e_{s_m} y_{s_m} + e_{s_1} y_{s_m} (c_{s_m}/c_{s_1})$$
Again, linear utility gives us
\begin{align*}
U(\mathbf{Y}^{(m)}) &= U(\mathbf{Y}) - U(e_{s_m} y_{s_m}) + U(e_{s_1} y_{s_m} (c_{s_m}/c_{s_1}))
\end{align*}
And, since $s_1, s_m \in S$, we have
\begin{align*}
\sum_t \alpha_t \xi_{s_1, t} /c_{s_1} &= \sum_t \alpha_t \xi_{s_m, t} /c_{s_m} \\
\sum_t \alpha_t \xi_{s_1, t} (c_{s_m}/c_{s_1}) &= \sum_t \alpha_t \xi_{s_m, t} \\
\sum_t \alpha_t \xi_{s_1, t} (c_{s_m}/c_{s_1}) y_{s_m} &= \sum_t \alpha_t \xi_{s_m, t} y_{s_m} \\
U(e_{s_1} y_{s_m} (c_{s_m}/c_{s_1})) &= U(e_{s_m} y_{s_m})\\
\end{align*}
Therefore, $U(\mathbf{Y}^{(m)}) = U(\mathbf{Y})$. Furthermore, it is easy to see that $\mathbf{Y}^{(m)}$ is feasible (elements non-negative) and satisfies the budget constraint by an argument similar to one used earlier. Next, if we repeat this procedure $m-1$ times, we arrive at $U(\mathbf{Y}^{(1)}) = U(\mathbf{Y})$ where $\mathbf{Y}^{(1)}$ is a bundle consisting only of the first input. Note that $\mathbf{Y}^{(1)}$ must satisfy the budget constraint with equality, so it must be equal to $e_{s_1} (I/c_{s_1})$. In short, we have shown that every vector in $W$ offers utility $U( e_{s_1} (I/c_{s_1})) = \sum_t \alpha_t \xi_{{s_1},t} (I/c_{s_1})$.
Overall, we have shown that all bundles in $W$ offer the same utility, and all feasible and affordable bundles not in $W$ offer strictly less utility than those in $W$. \\
\hfill
\end{proof}
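To illustrate the proposition with purely hypothetical numbers, suppose there are two periods and two technologies with $\alpha_t = \alpha_s = 1/2$, $c_1 = c_2 = 1$, $\xi_{1t} = \xi_{1s} = 1$, $\xi_{2t} = 2$, and $\xi_{2s} = 1/2$. Then
\begin{equation*}
\sum_t \alpha_t \xi_{1,t}/c_1 = 1 < \tfrac{5}{4} = \sum_t \alpha_t \xi_{2,t}/c_2 ,
\end{equation*}
so $S = \{2\}$ and $W = \{(0, \, I/c_2)\}$: all income is spent on the second technology, and the equilibrium output is $\mathbf{Z} = \boldsymbol{\xi}^T \mathbf{X} = (2I, \, I/2)$.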
%This further allows us to write the marginal utility of each input $i$ as $\sum_t \alpha_t \xi_{i,t}$ -- a weighted sum of the electricity produced by a marginal unit of that input. Since we have assumed that firms are competitive,
\subsection{CES Production as a Special Case}
\label{sec:CESspecialcase}
Our framework nests the case where there exists a CES production structure across technologies. This occurs when each technology can only produce in a single, unique period; note that this is not a realistic scenario. For instance, this would occur if we had one technology that can only output electricity during the day and another that only outputs electricity at night. In this case, the CES production function's elasticity parameter will be equivalent to that of the consumer's CES utility function -- the intertemporal elasticity of substitution for electricity consumption.
\begin{proof}
Firstly, note that we can reindex our technologies such that $\xi$ is diagonal, since each technology only produces in one period. Hence, without loss of generality, we have diagonal $\xi$. Next, we may say that the electricity output in period $i$ is given by $Z_i = \xi_{i,i} X_i$. Now, recall that the FOC for profit-maximization is given by $p = \xi^{-1} c$, hence we have $p_i = c_i / \xi_{i,i}$. Combining these equations with the FOC for utility maximization, we have:
\begin{align*}
\frac{Z_i}{Z_j} &= \left( \frac{\alpha_i p_j}{\alpha_j p_i} \right)^\sigma \\
\implies \frac{ X_i }{X_j} &= \left( \frac{ \alpha_i p_j \xi_{j,j}^{1/\sigma} }{ \alpha_j p_i \xi_{i,i}^{1/\sigma} } \right)^\sigma \\
\implies \frac{ X_i }{X_j} &= \left( \frac{ \alpha_i c_j \xi_{j,j}^{1/\sigma - 1} }{ \alpha_j c_i \xi_{i,i}^{1/\sigma - 1} } \right)^\sigma
\end{align*}
By definition, the elasticity of substitution between any two arbitrary technologies $i$ and $j$ is constant. Moreover, it can be shown that this FOC can be rearranged to give the following demand equation for each technology $i$
\begin{align*}
X_i &= \left(\frac{\beta_i}{c_i} \right)^\sigma \frac{I}{P} \\
P &= \sum_i \beta_i^\sigma c_i^{1-\sigma}
\end{align*}
where $\beta_i = \alpha_i \xi_{i,i}^{\phi}$, $\sigma = 1 / (1-\phi)$, and $I$ is the consumer's income. So, in total, accounting for both the producer's and consumer's objectives, we are essentially solving for:
\begin{align*}
V &= \left( \sum_i \beta_i X_i^\phi \right)^{(1/\phi)} \\
\text{such that\quad} I &= \sum_i c_i X_i
\end{align*}
This is a standard CES function.
\\ \hfill
\end{proof}
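As a consistency check on the definition of $\beta_i$, note that $1/\sigma - 1 = -\phi$, so the demand ratio implied by this CES form coincides with the FOC derived above:
\begin{equation*}
\frac{X_i}{X_j} = \left( \frac{\beta_i \, c_j}{\beta_j \, c_i} \right)^\sigma
= \left( \frac{\alpha_i \, c_j \, \xi_{i,i}^{\phi}}{\alpha_j \, c_i \, \xi_{j,j}^{\phi}} \right)^\sigma
= \left( \frac{\alpha_i \, c_j \, \xi_{j,j}^{1/\sigma - 1}}{\alpha_j \, c_i \, \xi_{i,i}^{1/\sigma - 1}} \right)^\sigma .
\end{equation*}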
\subsection{Asymptotic Elasticity of Substitution}
\label{sec:asympeos}
Suppose we are in a two-period, two-technology setting with $\sigma = 1$. Furthermore, suppose that the output of our first technology is constant in both periods, $\xi_{1t} = \xi_{2t}$, but the output of our second technology is zero in the second period, $\xi_{2s} = 0$. And, assume we have the parameter restrictions mentioned earlier in \autoref{tab:paramrest} that ensure $\mathbf{X}, \mathbf{Z} > 0$. This is a simple case where we have (1) a constant output technology and (2) a highly intermittent technology. We now show that, in this case, the elasticity of substitution approaches $1$ as the relative cost of our second technology $c_2/c_1$ approaches $0$. Furthermore, we show that the elasticity of substitution between $X_1$ and $X_2$ is a linear function of $X_1/X_2$.
\begin{proof}
Firstly, note that from earlier we have:
$$
X = \begin{pmatrix}
\dfrac{\alpha _{t}\,\xi _{\mathrm{2s}}}{c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}}+\dfrac{\alpha _{s}\,\xi _{\mathrm{2t}}}{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}} \\[2ex]
-\dfrac{\alpha _{t}\,\xi _{\mathrm{1s}}}{c_{1}\,\xi _{\mathrm{2s}}-c_{2}\,\xi _{\mathrm{1s}}}-\dfrac{\alpha _{s}\,\xi _{\mathrm{1t}}}{c_{1}\,\xi _{\mathrm{2t}}-c_{2}\,\xi _{\mathrm{1t}}}
\end{pmatrix}
$$
Let $\xi_1 = \xi_{1t} = \xi_{2t}$ and note that $\xi_{2s} = 0$. Hence, we have:
$$
X = \begin{pmatrix}
\dfrac{\alpha_s}{c_1 - c_2} \\[2ex]
\dfrac{\alpha_t}{c_2} - \dfrac{\alpha_s}{c_1 - c_2}
\end{pmatrix}
$$
Now, note that $X_1/X_2$ is given by:
\begin{align*}
\frac{X_1}{X_2} &= \frac{\alpha _{s}\,c_{2}}{\alpha _{t}\,c_{1} - c_2}
\end{align*}
where $\alpha_t + \alpha_s = 1$ by definition (see the \hyperref[sec:consumers]{Consumer subsection} of the Electricity Market Equilibrium section). Next, using the earlier result, we have
\begin{align*}
e_{12} &=
\frac{-\alpha _{s}\,\alpha _{t}\,c_{1}\,c_{2}\,{\left(\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)}^2 \left(\alpha _{s}\,c_{1}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}-\alpha _{s}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{1t}}+\alpha _{t}\,c_{1}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}-\alpha _{t}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{1t}}\right)^{-1}}{
\,\left(\alpha _{s}\,c_{1}\,\xi _{\mathrm{2s}}\,\xi _{\mathrm{2t}}-\alpha _{s}\,c_{2}\,\xi _{\mathrm{1s}}\,\xi _{\mathrm{2t}}+\alpha _{t}\,c_{1}\,\xi _{\mathrm{2s}}\,\xi _{\mathrm{2t}}-\alpha _{t}\,c_{2}\,\xi _{\mathrm{1t}}\,\xi _{\mathrm{2s}}\right)
}\\
&= \frac{-\alpha_s \, \alpha_t \, c_1 \, c_2 \, (\xi_{1s} \xi_{2t})^2 ( -\alpha_s c_2 \xi_{1s} \xi_{1t} + \alpha_t c_1 \xi_{1s} \xi_{2t} - \alpha_t c_2 \xi_{1s} \xi_{1t})^{-1}}{
( -\alpha_s c_2 \xi_{1s} \xi_{2t})
} \\
&=
\frac{-\alpha _{t}\,c_{1}\,\xi _{\mathrm{1t}}}{\alpha _{s}\,c_{2}\,\xi _{\mathrm{1t}}-\alpha _{t}\,c_{1}\,\xi _{\mathrm{1t}}+\alpha _{t}\,c_{2}\,\xi _{\mathrm{1t}}} \\
&=
\frac{\alpha _{t}\,c_{1} }{\alpha _{t}\,c_{1} - c_2}
\end{align*}
Finally, it is simple to see that:
$$ \lim_{c_2/c_1 \to 0} \frac{\partial \log(X_1/X_2)}{\partial \log(c_2/c_1)} = 1$$
Additionally, we can see that the elasticity of substitution between $X_1$ and $X_2$ is linear with respect to $X_1/X_2$. That is, note that we may rewrite the elasticity above as:
\begin{align*}
\frac{\partial \log(X_1/X_2)}{\partial \log(c_2/c_1)} &= \frac{\alpha _{t}\,c_{1} }{\alpha _{t}\,c_{1} - c_2}\\
&= \left( \frac{\alpha _{s}\,c_{2} }{\alpha _{t}\,c_{1} - c_2} \right) \left( \frac{ \alpha_t c_1 }{ \alpha_s c_2 } \right)\\
&= \left( \frac{X_1}{X_2} \right) \left( \frac{ \alpha_t c_1 }{\alpha_s c_2 } \right)
\end{align*}
Hence, we have shown that $e$ can be written as a linear function of $X_1/X_2$.
\\ \hfill
\end{proof}
\pagebreak
\section{Appendix B: Supplementary Figures}
\label{sec:AppendixB}
\autoref{fig:regstaterobust} is a robustness check for fit (3) of Table 2 from the main text. We regress fit (3) on subsamples where each state in our dataset is dropped out. In this figure, we plot the regression results; specifically, the estimated coefficients and the inverse of their standard error.
\vspace{0.15in}
\begin{center}
[INSERT Figure B.1: Partially Linear IV Regression Estimates with State Drop Outs]
\end{center}
\vspace{0.15in}
\autoref{fig:eosrange} below models the elasticity of substitution for two technologies that are close to being non-intermittent. That is, for technology 1, we have $\xi_1 = (0.95,\, 1)$, and, for technology 2, we have $\xi_2 = (1, \,0.95)$. We further set their costs, $c_1$ and $c_2$, to equal values and allow $\alpha_t = \alpha_s$. This example illustrates how the elasticity of substitution between technologies $e_{1,2}$ would appear with minimal intermittency. We can see that $e_{1,2}$ takes on a U-shape.
\vspace{0.15in}
\begin{center}
[INSERT Figure B.2: The Elasticity of Substitution Between Two Minimally Intermittent Technologies]
\end{center}
\vspace{0.15in}
\autoref{fig:eosalt} models the elasticity of substitution between coal and a hypothetical renewable technology. This technology is parametrized equivalently to solar except that we have $\xi_2 = (0.1, \, 1)$. In other words, it produces most of its energy during the off-peak rather than peak. This figure illustrates how the hockey-stick shape of $e$ persists even in a model where the intermittent renewable technology does not primarily generate energy during the peak period.
\vspace{0.15in}
\begin{center}
[INSERT Figure B.3: The Elasticity of Substitution Between Coal and a Hypothetical Renewable Technology]
\end{center}
\vspace{0.15in}
Additionally, we repeat the exercise done to produce Figure 4 from the main text but with $\xi_2 = (1, 0.01)$. That is, we assume here that solar is far more intermittent. We plot our results below in \autoref{fig:ves_int}.
\vspace{0.15in}
\begin{center}
[INSERT Figure B.4: The VES Approximation of the Elasticity of Substitution between Highly Intermittent Solar and Coal]
\end{center}
\vspace{0.15in}
\pagebreak
\section{Appendix C: Econometric Methodology}
\label{sec:AppendixC}
We aim to estimate $\sigma$ in the following set of equations:
\begin{align*}
\ln (Z_{ t, i} / Z_{ s, i}) &= -\sigma \ln (P_{t,i} / P_{s,i}) + f \left( A_{t,i}, A_{s,i}, \Delta_{t,s} \right) + u_i \\
\ln (Z_{ t, i} / Z_{ s, i}) &= \beta \ln (P_{t,i} / P_{s,i}) + g \left( \ln (C_{t,i} / C_{s,i}) \right) + v_{i}
\end{align*}
We base our estimation procedure on \citet{Newey}. To start, for simplicity, let us rewrite the above set of equations as:
\begin{align*}
Q &= P \beta_d + f(T) + u \\
Q &= P \beta_s + g(W) + v
\end{align*}
where $\beta_d$ is the parameter of interest. Furthermore, we assume that
\begin{align*}
E(u \, | \, T, W) &= 0 \\
E(v \, | \, T, W) &= 0
\end{align*}
Next, with $\alpha \equiv (\beta_d - \beta_s)^{-1}$ and assuming $\beta_d \neq \beta_s$, note that:
\begin{align*}
P &= (\beta_d - \beta_s)^{-1} \, \left( g(W) - f(T) + v - u \right) \\
E(P\,|\,T) &= \alpha \left( E(g(W)|T) - f(T) \right)\\
E(P\,|\,W) &= \alpha \left( g(W) - E(f(T)|W) \right) \\
E(P\,|\,T,W) &= \alpha \left( g(W) - f(T) \right)
\end{align*}
Now, differencing $Q$ with its conditional expectation, we have:
\begin{align*}
Q - E(Q \,|\,T) &= (P - E(P\,|\,T))\beta_d + (f(T) - E(f(T) \,|\,T)) + (u - E(u|T))\\
&= (\alpha(g(W) - E(g(W)|T)))\beta_d + 0 + u
\end{align*}
Furthermore, it can be shown that:
$$E(P\,|\,T,W) - E(P\,|\,T) = \alpha (g(W) - E(g(W)|T))$$
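Indeed, subtracting the two conditional expectations of $P$ derived above gives
$$E(P\,|\,T,W) - E(P\,|\,T) = \alpha \left( g(W) - f(T) \right) - \alpha \left( E(g(W)|T) - f(T) \right) = \alpha \left( g(W) - E(g(W)|T) \right)$$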
Hence, we can regress
$$(Q - E(Q \,|\,T)) = (E(P\,|\,T,W) - E(P\,|\,T)) \beta_d + u_d $$
to estimate $\beta_d$. However, we cannot know the true values of these expectations. Hence, we estimate each conditional expectation in this regression by using Nadaraya–Watson kernel regressions. This requires us to trim the data, so we drop 1\% of outliers of $Q, P, T$, and $W$. Additionally, we use the \citet{Silverman} rule-of-thumb to select bandwidths. After estimating $E(Q \,|\,T)$, $E(P\,|\,T,W)$, and $E(P\,|\,T)$ through kernel regressions, we finally regress
$$(Q - \hat{E}(Q \,|\,T)) = (\hat{E}(P\,|\,T,W) - \hat{E}(P\,|\,T)) \beta_d + u_d $$
to obtain $\hat{\beta}_d$. Recall the earlier substitution, $Q = \ln (Z_{ t, i} / Z_{ s, i})$ and $P = \ln (P_{t,i} / P_{s,i})$. Hence $\sigma = -\beta_d$ is the intertemporal elasticity of substitution.
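For concreteness, the estimation steps above can be sketched as follows. This is an illustrative sketch rather than the code used for the paper's estimates: the conditioning variables are treated as one-dimensional, the 1\% trimming step is omitted, and all function names are hypothetical.
\begin{verbatim}
# Illustrative sketch of the partially linear estimator described above:
# Nadaraya-Watson kernel regressions with Silverman's rule-of-thumb bandwidth,
# followed by a no-intercept least-squares step. Trimming of outliers is
# omitted, and T and W are treated as one-dimensional for brevity.
import numpy as np

def silverman_bandwidth(x):
    # One common form of Silverman's rule of thumb for a Gaussian kernel.
    return 1.06 * np.std(x, ddof=1) * len(x) ** (-1 / 5)

def nw(y, x, x_eval):
    """Nadaraya-Watson estimate of E(y | x) at the points x_eval."""
    h = silverman_bandwidth(x)
    k = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (k @ y) / k.sum(axis=1)

def nw2(y, x1, x2):
    """E(y | x1, x2) via a product Gaussian kernel, evaluated at the sample."""
    h1, h2 = silverman_bandwidth(x1), silverman_bandwidth(x2)
    u1 = (x1[:, None] - x1[None, :]) / h1
    u2 = (x2[:, None] - x2[None, :]) / h2
    k = np.exp(-0.5 * (u1 ** 2 + u2 ** 2))
    return (k @ y) / k.sum(axis=1)

def estimate_sigma(Q, P, T, W):
    """Regress (Q - E(Q|T)) on (E(P|T,W) - E(P|T)) without an intercept."""
    y = Q - nw(Q, T, T)
    x = nw2(P, T, W) - nw(P, T, T)
    beta_d = (x @ y) / (x @ x)     # no-intercept OLS slope
    return -beta_d                 # sigma_hat = -beta_d
\end{verbatim}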
\pagebreak
\pagebreak
\section{Tables and Figures}
\begin{table}[h!]
\captionsetup{format=centerproper, labelformat=AppendixATable}
\caption{Parameter Restrictions for $Z, X > 0$}
\label{tab:paramrest}
\small
\centering
\begin{tabular}{@{\extracolsep{2em}}l@{\hspace{-0.5 em}}cc}
\\[-4ex]
\toprule \\[-2.5ex]
& \textbf{Case 1} & \textbf{Case 2 } \\
\cmidrule(lr){2-2} \cmidrule(lr){3-3} \\[-1.5ex]
\textbf{Cost Efficiency}& $\xi_{2t}/c_2 > \xi_{1t}/c_1$ & $\xi_{2t}/c_2 < \xi_{1t}/c_1 $\\
\textbf{Restrictions} & $\xi_{1s}/c_1 > \xi_{2s}/c_2$ & $\xi_{1s}/c_1 < \xi_{2s}/c_2 $ \\ [3ex]
\textbf{Output Efficiency}& $\xi_{2t}/\xi_{2s} > \xi_{1t}/\xi_{1s}$ & $\xi_{2t}/\xi_{2s} < \xi_{1t}/\xi_{1s}$\\
\textbf{Restrictions} & $\xi_{1s}/\xi_{1t} > \xi_{2s}/\xi_{2t} $ & $\xi_{1s}/\xi_{1t} < \xi_{2s}/\xi_{2t} $ \\ [3ex]
\multirow{2}{10em}{\textbf{Mixed Efficiency Restrictions}}& $\dfrac{\alpha_s \left(\xi_{1s}/c_1 - \xi_{2s}/c_2\right)}{\alpha_t \left( \xi_{2t}/c_2 - \xi_{1t}/c_1 \right)} > \xi_{2s}/\xi_{2t}$ & $\dfrac{\alpha_s \left(\xi_{1s}/c_1 - \xi_{2s}/c_2\right)}{\alpha_t \left( \xi_{2t}/c_2 - \xi_{1t}/c_1 \right)} < \xi_{2s}/\xi_{2t}$\\
& $\dfrac{\alpha_s \left(\xi_{1s}/c_1 - \xi_{2s}/c_2\right)}{\alpha_t \left( \xi_{2t}/c_2 - \xi_{1t}/c_1 \right)} < \xi_{1s}/\xi_{1t} $ & $\dfrac{\alpha_s \left(\xi_{1s}/c_1 - \xi_{2s}/c_2\right)}{\alpha_t \left( \xi_{2t}/c_2 - \xi_{1t}/c_1 \right)} > \xi_{1s}/\xi_{1t} $ \\[2ex]
\midrule
\multicolumn{3}{@{}p{40em}@{}}{\footnotesize \textit{Note: } The inequalities in this table assume that all elements of $\xi$ are greater than $0$. The full proof given below provides equivalent restrictions for the zero cases. }
\end{tabular}
\end{table}
\clearpage
\begin{figure}[h!]
\captionsetup{format=centerproper, labelformat=AppendixBFigure}
\caption{Partially Linear IV Regression Estimates with State Drop Outs}
\label{fig:regstaterobust}
\footnotesize
\vspace{-1em}
\begin{tabular}{@{\extracolsep{0em}}c}
\includegraphics[width=0.75\linewidth]{../figures/regression_state_robustness_check.png} \\
\multicolumn{1}{@{\hspace{0.2in}}p{5.9in}@{}}{ \textit{Note: } This is a joint plot of the IES $\hat{\sigma}$ against the inverse of its estimated standard deviation from fit (3) of Table 2. Each point represents an estimate obtained from regressing on a dataset that drops one state from the full sample. Since the sample consists of the 48 contiguous US states, each such regression uses a sample that is 2.08\% smaller than the full dataset. On the top and right side of the graph are histograms for the two variables.} \\
\end{tabular}
\end{figure}
\clearpage
\begin{figure}[h!]
\captionsetup{format=centerproper, labelformat=AppendixBFigure}
\caption{The Elasticity of Substitution Between Two \newline Minimally Intermittent Technologies}
\label{fig:eosrange}
\footnotesize
\vspace{-1em}
\begin{tabular}{@{\extracolsep{0em}}c}
\includegraphics[width=1\linewidth]{../figures/fig_elasticity_range} \\
\multicolumn{1}{@{\hspace{0.2in}}p{5.9in}@{}}{ \textit{Note: } The y-axis of the first plot is equivalent to $\log(X_1/X_2)$ and the x-axis of both plots is equivalent to $\log(c_2/c_1)$. Technologies 1 and 2 represent two arbitrary technologies that are practically non-intermittent. The legend in the upper subplot also applies to the lower subplot. These results were obtained using the following parameters: $\alpha_t = 0.5$, $\alpha_s = 0.5$, $\xi_1 = (0.95, \, 1)$, $\xi_2 = (1, \, 0.95)$, $c_1 = 100$, $c_2 = 100$. In order to generate these numerical results, we first found the optimal quantities of $\mathbf{X}$ over a range of prices $c_1^* \in (0.5\, c_1, 1.5 \,c_1)$. Then, we obtained estimates of the elasticity of substitution by numerically differentiating $\ln(X_1/X_2)$ with respect to $-\ln(c_1/c_2)$. That is, the elasticity of substitution between technologies 1 and 2 is given by the slope of the upper subplot, and it is graphed in the lower subplot. Finally, we repeated this procedure for various values of $\sigma$. } \\
\end{tabular}
\end{figure}
\clearpage
\begin{figure}[h!]
\captionsetup{format=centerproper, labelformat=AppendixBFigure}
\caption{The Elasticity of Substitution Between Coal \newline and a Hypothetical Renewable Technology}
\label{fig:eosalt}
\footnotesize
\vspace{-1em}
\begin{tabular}{@{\extracolsep{0em}}c}
\includegraphics[width=1\linewidth]{../figures/fig_elasticity_alt} \\
\multicolumn{1}{@{\hspace{0.2in}}p{5.9in}@{}}{\textit{Note: } Technology 1 is coal and technology 2 is a hypothetical renewable technology. This hypothetical technology has $\xi_2 = (0.1, \, 1)$ but is otherwise equivalent to solar. The legend in the upper subplot also applies to the lower subplot. These results were obtained using the following parameters: $\alpha_t = 0.6$, $\alpha_s = 0.4$, $\xi_1 = (1, \, 1)$, $\xi_2 = (0.1, \, 1)$, $c_1 = 104.3$, $c_2 = 60$. Furthermore, we set the parameter for the intertemporal elasticity of substitution for electricity consumption equal to our estimate $\hat{\sigma} = 0.8847$. In order to generate these numerical results, we first found the optimal quantities of $\mathbf{X}$ over a range of prices $c_1^* \in (0.5\, c_1, 2 \,c_1)$. Then, we obtained estimates of the elasticity of substitution by numerically differentiating $\ln(X_1/X_2)$ with respect to $-\ln(c_1/ c_2)$. That is, the elasticity of substitution between technology 1 and 2 is given by the slope of the upper subplot, and it is graphed in the lower subplot. Finally, we repeat this procedure with $\sigma$ equal to two standard deviations above and below its estimated value $\hat{\sigma}$; that is, the dashed lines represent $\sigma = 0.8847 \pm (1.96)(0.044)$. } \\
\end{tabular}
\end{figure}
\clearpage
\begin{figure}[h!]
\captionsetup{format=centerproper, labelformat=AppendixBFigure}
\caption{The VES Approximation of the Elasticity of Substitution \\ between Highly Intermittent Solar and Coal}
\label{fig:ves_int}
\footnotesize
\vspace{-1em}
\begin{tabular}{@{\extracolsep{0em}}c}
\includegraphics[width=1\linewidth]{../figures/fig_ves_approx_int} \\
\multicolumn{1}{@{\hspace{0.2in}}p{5.9in}@{}}{ \textit{Note: } Technology 1 is coal and technology 2 is a highly intermittent version of solar. The purple dash-dot line represents a linear approximation of $e_{1,2}$ for $\sigma = 0.8847$ with a fixed intercept of 1. These results were obtained using the following parameters: $\alpha_t = 0.6$, $\alpha_s = 0.4$, $\xi_1 = (1, \, 1)$, $\xi_2 = (1, \, 0.01)$, $c_1 = 104.3$, $c_2 = 60$. Furthermore, we set the parameter for the intertemporal elasticity of substitution for electricity consumption equal to our estimate $\hat{\sigma} = 0.8847$. In order to generate these numerical results, we first found the optimal quantities of $\mathbf{X}$ over a range of prices $c_1^* \in (c_1, 2 \,c_1)$. Then, we obtained estimates of the elasticity of substitution by numerically differentiating $\ln(X_1/X_2)$ with respect to $-\ln(c_1/c_2)$. } \\
\end{tabular}
\end{figure}
\clearpage
\end{document}
% This LaTeX was auto-generated from MATLAB code.
% To make changes, update the MATLAB code and republish this document.
\documentclass{article}
\usepackage{graphicx}
\usepackage{color}
\sloppy
\definecolor{lightgray}{gray}{0.5}
\setlength{\parindent}{0pt}
\begin{document}
\section*{Gradient Descent}
\subsection*{Contents}
\begin{itemize}
\setlength{\itemsep}{-1ex}
\item Gradient
\item Gradient Descent
\item Fail situation
\item Plot contour
\item Reference
\end{itemize}
\subsection*{Gradient}
\begin{par}
The gradient descent method is based on the gradient
\end{par} \vspace{1em}
\begin{par}
$$ \nabla f = \frac{\partial f}{\partial x_1 }\mathbf{e}_1 + \cdots +
\frac{\partial f}{\partial x_n }\mathbf{e}_n $$
\end{par} \vspace{1em}
\begin{par}
The gradient always points in the ascent direction.
\end{par} \vspace{1em}
\subsection*{Gradient Descent}
\begin{par}
f is the objective function, and the problem is unconstrained
\end{par} \vspace{1em}
\begin{par}
$$\min_{x} f $$
\end{par} \vspace{1em}
\begin{verbatim}
% objective function: f(x1, x2) = exp(x1 - 1) + exp(1 - x2) + (x1 - x2)^2  (smooth and convex)
f = (@(X) (exp(X(1,:)-1) + exp(1-X(2,:)) + (X(1,:) - X(2,:)).^2));
%f = (@(X) (sin(0.5*X(1,:).^2 - 0.25 * X(2,:).^2 + 3) .* cos(2*X(1,:) + 1 - exp(X(2,:))) ))
\end{verbatim}
\subsection*{Fail situation}
\begin{par}
For the Rosenbrock function, the gradient descent/ascent algorithm zig-zags because the gradient is nearly orthogonal to the direction of the local minimum in these regions, so it is hard to converge.
\end{par} \vspace{1em}
\begin{par}
$$ f(x, y) = (1-x)^2 + 100 \, (y - x^2)^2 $$
\end{par} \vspace{1em}
\begin{verbatim}
%f = (@(X) (1-X(1,:)).^2 + 100 * (X(2,:) - X(1,:).^2).^2);
\end{verbatim}
\subsection*{Plot contour}
\begin{verbatim}
[X, Y] = meshgrid(-2:0.1:2);
% stack the grid points as 2-by-N coordinates so f can be evaluated in one call
XX = [reshape(X, 1, numel(X)); reshape(Y, 1, numel(Y))];
%surf(X, Y, reshape(f(XX), length(X), length(X)))
contour(X, Y, reshape(f(XX), length(X), length(X)), 50);
hold on;
\end{verbatim}
\includegraphics [width=4in]{test_01.eps}
\begin{par}
plot gradient of function
\end{par} \vspace{1em}
\begin{verbatim}
% draw the (scaled) gradient at every 5th grid point;
% gradient_of_function is a user-defined numerical gradient (not listed here)
for i=1:5:length(XX)
tmp = XX(:,i);
g = gradient_of_function(f, tmp);
%plot([tmp(1),tmp(1)+g(1)*0.02],[tmp(1),tmp(2)+g(1)*0.02]);
quiver(tmp(1),tmp(2),g(1)*0.02,g(2)*0.02);
end
\end{verbatim}
\includegraphics [width=4in]{test_02.eps}
\begin{par}
Run the calculation and compare the result with the built-in solver
\end{par} \vspace{1em}
\begin{verbatim}
x0 = [-1; -1];    % starting point
% user-defined gradient descent (not listed here): returns the minimiser x,
% the objective value v, and the history of iterates h
[x, v, h] = gradient(f, x0)
% built-in method, for comparison
[x_in, v_in] = fminunc(f, x0)
\end{verbatim}
\color{lightgray} \begin{verbatim}
x =
0.7960
1.2038
v =
1.7974
h =
Columns 1 through 7
-1.0000 -1.0271 -0.4515 0.6432 0.8185 0.7755 0.7859
-1.0000 0.4778 0.2130 1.0809 1.1279 1.1801 1.2059
Columns 8 through 12
0.7925 0.7963 0.7956 0.7959 0.7960
1.2007 1.2024 1.2033 1.2039 1.2038
Warning: Gradient must be provided for trust-region algorithm;
using line-search algorithm instead.
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
x_in =
0.7961
1.2039
v_in =
1.7974
\end{verbatim} \color{black}
\begin{par}
plot descent steps
\end{par} \vspace{1em}
\begin{verbatim}
% connect successive iterates with red arrows to visualise the descent path
for i=2:length(h)
tmp1 = h(:,i-1);
tmp2 = h(:,i);
quiver(tmp1(1),tmp1(2),tmp2(1)-tmp1(1),tmp2(2)-tmp1(2), 0, 'r','LineWidth',2)
end
\end{verbatim}
\includegraphics [width=4in]{test_03.eps}
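\begin{par}
The custom \texttt{gradient\_of\_function} and \texttt{gradient} routines called above are not listed in this document. As a rough illustration only (written here in Python rather than MATLAB), a central-difference gradient combined with a fixed-step descent loop that records the iterate history might look as follows; the actual MATLAB implementation is not shown and may differ, e.g.\ in its step-size selection and stopping rule.
\end{par} \vspace{1em}
\begin{verbatim}
# Illustrative sketch only, mirroring the interface [x, v, h] = gradient(f, x0)
# used above; this is not the MATLAB code behind the results shown.
import numpy as np

def gradient_of_function(f, x, eps=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        step = np.zeros_like(x, dtype=float)
        step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return g

def gradient_descent(f, x0, lr=0.1, tol=1e-6, max_iter=10000):
    """Return the minimiser, its objective value, and the iterate history."""
    x = np.asarray(x0, dtype=float)
    history = [x.copy()]
    for _ in range(max_iter):
        g = gradient_of_function(f, x)
        if np.linalg.norm(g) < tol:     # stop when the gradient is small
            break
        x = x - lr * g                  # step along the negative gradient
        history.append(x.copy())
    return x, f(x), np.array(history)

# Example usage on the objective defined earlier:
# f = lambda x: np.exp(x[0] - 1) + np.exp(1 - x[1]) + (x[0] - x[1]) ** 2
# x, v, h = gradient_descent(f, [-1.0, -1.0])
\end{verbatim}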
\subsection*{Reference}
\begin{enumerate}
\setlength{\itemsep}{-1ex}
\item \begin{verbatim}http://www.onmyphd.com/?p=gradient.descent\end{verbatim}
\item Convex Optimization
\item \begin{verbatim}https://en.wikipedia.org/wiki/Gradient\end{verbatim}
\item \begin{verbatim}https://en.wikipedia.org/wiki/Gradient_descent\end{verbatim}
\item \begin{verbatim}http://stronglyconvex.com/blog/gradient-descent.html\end{verbatim}
\end{enumerate}
\end{document}
\chapter{What Pear Does in Fog}\label{sec-decisions}
Big things start small. In this chapter, we provide further details about what we have chosen to do and why we have chosen to do it. Then, at the end, we introduce several high priority projects.
\section{The Power of International Standards}
Today, people who frequently download media files occasionally come across legacy video formats like \texttt{.avi}, \texttt{.rmvb}, or \texttt{.mkv}, although for the last few years almost all videos have been encoded in H.264/AVC and encapsulated in MP4 containers, with a \texttt{.mp4} extension (see Figure~\ref{fig:video-formats}). This is mainly because these technologies were standardised by working groups from MPEG, ITU and ISO/IEC. No matter what proprietary codecs a particular browser or client supports, it must support internationally standardised formats at the same time. In the case of video formats, \texttt{.mp4} is essential, but this is merely one example among many relevant international standards.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.80\textwidth]{fig/decisions/video-formats.png}
\caption{Changes of prevalent video formats.}\label{fig:video-formats}
\end{figure}
Just as media file extensions have undergone change, computing paradigms have as well.
Parallel Computing was primarily designed to speed up vision and graphics rendering via PCs' CPUs and/or GPUs. It required low-level programming with OpenMP, Cilk, TBB, CUDA, OpenCL or other low-level concurrent/parallel libraries. This was then superseded by Distributed Computing, in which network and storage capabilities were integrated, although some systems still required hand-written networking or file-system code, and resource allocation and task scheduling also had to be handled by programmers. Today, Cloud Computing has encapsulated the above two, and it relies on another international standard --- the representational state transfer (RESTful) architectural style used for web development. Cloud providers rely on REST to improve network management, simplify updates and facilitate safe third-party development, as they only expose RESTful APIs to users and developers at the application layer.
\begin{figure}[ht]
\centering
\includegraphics[width=0.80\textwidth]{fig/decisions/parallel-to-cloud.png}
\caption{Changes of computing paradigms.}\label{fig:parallel-to-cloud}
\end{figure}
The Cloud RESTful APIs are mostly represented in JSON format. Listing~\ref{lt:googlemaps-api} outlines a JSON response of Google BigQuery's \texttt{getQueryResults} API.
\lstinputlisting[caption={JSON Result of a Google BigQuery API.},label={lt:googlemaps-api}]{code/bigquery.json}
One may argue that JSON versus XML is an exception, as XML is an ISO standard but is used less and less compared with JSON in many scenarios, such as cloud APIs. However, viewed from another angle, JSON is a part of JavaScript, one of the ISO-standardised programming languages \cite{ecmascriptiso}\footnote{The current four ISO-standardised programming languages that are still popular are C, C++, Ada, and JavaScript. However, Ada is used only in military applications.}. Besides, JavaScript is now the main language for manipulating the Web.
Going back to the Web, before HTTP and HTML, Gopher was one protocol that facilitated distributing, publishing, searching, and retrieving documents over the Internet. Unfortunately, when the Web became standardised, Gopher almost disappeared from popular applications.
Before WebRTC, there was another dominant technology that enabled browsers with video/audio decoding, and even P2P capabilities. It was Adobe's Flash plug-in. Flash technologies are 100\% proprietary, as argued in Steve Jobs' letter on Adobe Flash \cite{thoughts-on-flash}, while HTML5 is not. Today, almost all web browsers are moving away from Flash \cite{ff-away-flash}, and even Adobe no longer supports Flash. This transformation will also create lots of new business opportunities.
Therefore, taking a long-term view, only open and standardised technologies will survive in the future.
Accordingly, Pear has chosen to embrace several new and upcoming protocols that have just become or will soon become international standards:
\begin{itemize}
\item Pear mainly uses WebRTC for end-to-end data communication.
\item Pear works with DASH for media content representations.
\item Pear mainly adopts WebSocket for client-to-server signalling.
\item Pear represents its service APIs mainly in JSON, as well as in XML in some places.
\item Pear's Fog components are implemented in C, which makes them portable across all types of devices; Pear's initial SDK will be provided in JavaScript, which is the most web-friendly language.
\end{itemize}
\section{From Cloud to Fog}
\subsection{Problems of the Cloud and Advantages with the Fog}
Here is an incomplete list of current problems of the pure Cloud model:
\begin{itemize}
\item Long latency\\
The communication latency is positively correlated with the physical distance and the number of hops (paths between routers) between the client and the Cloud. More and more financial and industrial applications require millisecond-level latencies, which can hardly be satisfied with pure-Cloud models.
\item Wasting power, computation, and bandwidth resources.\\
In pure-Cloud systems, each client has to communicate with the Cloud upon every state change or data generation, not to mention the pervasive heartbeat messages. Is it really necessary for data to circle round ({\em i.e.} be transmitted thousands of km away, processed, and then delivered back) all the time? Moreover, if there are billions of clients, what kind of pressure will be put on the backbone networks? What is worse, clouds consume much more electricity and resources than they truly need. Take China as an example: ``Cloud'' data centres in China consumed 100 billion kWh of electricity in 2014, but only 6\%--12\% was used for processing tasks, and the rest was wasted when servers were idle.
\item Still expensive\\
Most service providers ({\em e.g.} CPs) have to pay considerable fees for IDC resources, especially bandwidth. For some large CPs ({\em e.g.} Tencent), bandwidth cost constitutes over 50\% of the total operational cost. Moreover, the cost of storing historically accumulated data cannot be neglected when the data volume is large, even though cloud storage cost had fallen to less than RMB~0.1 per GB per month as of 2012.
\item Single point of failure risks\\
Remember that clouds have control or ``master'' nodes. Because of routeing and load-balancing needs, data access points are essentially centralised. Today, there is a nefarious underground industry, from which individuals and institutions can easily buy CC, DDoS or NTP amplification attacks, DNS hijacking or poisoning, and even customised intrusion services to harm their rivals or competitors. If one control node is under attack, the service may be unavailable or unreachable. Beyond these, as Internet giants monopolise users' data, information islands can easily be formed due to the lack of a sharing mechanism. One may still remember the cancellation of Google Wave, Google Reader and Google Code, the capacity reduction of Microsoft OneDrive, and the change of allowed synchronised clients of Evernote. If a monopolising giant just changes a single service term or policy, how many data disasters will there be?
\item Security and privacy issues\\
The essentially centralised storage (as well as control or access points) of the Cloud increases the risk of data leakage. Recall \emph{iCloud Nude Leaks} in the US, as well as database ``drag'' incidents of hotel chains, travel agencies, and web forums in China: users essentially have no control over their data stored in clouds, because it is centralised and far away.
\item Not being scalable in the ongoing explosion\\
Suppose a video sharing startup wants to expand to the size of YouTube; will it have enough power resources to transcode all the content in real-time? Alternatively, imagine a gene analytics company wanting to scale up its business to perform DNA sequencing or gene analysis; will it have enough computation power to undertake this task? In the future IoT, a pure Cloud model will definitely not be enough for handling the massive number of constantly-online smart devices that is increasing by orders of magnitude.
\end{itemize}
Given the above problems of the Cloud, the Fog has definite advantages:
\begin{itemize}
\item Its ability to offload the pressure of the Cloud\\
Fog can undertake simple tasks that eat up a great portion of the Cloud resources.
\item Physical proximity to end-users\\
Fog nodes are as near as 0-hop to users, so the Fog can achieve higher performance \& lower delay in most scenarios. Kevin Kelly even predicted that by 2020, over 2/3 of data will be processed locally (within 1km from where it is generated), instead of being uploaded to the ``Cloud''\footnote{\url{http://www.cyzone.cn/a/20141027/264795.html}}. In addition, fog nodes are near users' sensors or computing devices, which are placed in their living environments and generally not accessible directly from remote clouds.
\item Low cost\\
How many Wi-Fi routers are online 24/7? How many unused USB drives do you have? With some \textbf{rebate} incentives, fog has the potential to utilise users' idle resources in a crowdsourcing way, create a new sharing economy in computing and thus lower everyone's costs.
\item Eco-friendly characteristics\\
Most computer programs are either data parallelisable or task parallelisable. In fact, we can achieve linear speedups in most parallelisation work. Obviously, low-power fog nodes are more eco-friendly in processing these tasks (refer to Section~\ref{sec-dg-vs-dc}).
\item Huge amount of resources\\
With a strong and sustainable sharing business model with multiplexing, all Internet users, AP device vendors, operators and service providers can participate in the Fog.
\end{itemize}
\subsection{Where Fog Works}
\subsubsection{Hardware Resources Analysis of the Fog}\label{sec-hardware-res-analysis}
We now perform an analysis of the different hardware resources in the Fog (Table~\ref{tb:analysis-fog-resources}).
\begin{table}[ht]
\small
\centering
\caption{Analysis over different types of fog hardware resources.}\label{tb:analysis-fog-resources}
\begin{tabular}{p{0.16\linewidth}p{0.79\linewidth}}
\toprule
Resource Type & Suitable For \\
\midrule
CPU \& GPU & Logical \& numerical processing, computing, analytics, rendering, coding, and (deep) learning\\
RAM & Data relay, live streaming, VoIP, video conferencing, broadcasting, dynamic accelerations\\
Storage\tablefootnote{May only be available in some smart APs.} & VoD, backup, distributed cache \& database, blockchains\\
IP Address & Communication, signaling, certain applications requiring a multi-source effect\\
$\cdots$ & $\cdots$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htb]
\small
\centering
\caption{First Batch of Fog Applications.}\label{tb:proposed-fog-app}
\begin{tabular}{llc}
\toprule
Applications & Mainly Uses & Estimated Difficulty\\
\midrule
Media Coding & CPU and GPU cycles & $\star \star \star$ \\
E-mail Push\tablefootnote{Proposed by Francis Kowk, the CEO of RADICA Systems} & Node IPs/IDs, CPU cycles & $\star \star$ \\
VPN & Bi-directional bandwidth & $\star \star \star$ \\
CDN & Up-link bandwidth, storage(in VoD) & $\star \star \star \star \star$ \\
IoT & Proximity, reachability, computational power & $\star \star \star \star$ \\
Distributed AI & Computational power, storage, network & $\star \star \star \star \star$ \\
$\cdots$ & $\cdots$ & $\cdots$ \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Proposed Applications}
We can abstract a generalised model of fog applications, as depicted in Figure~\ref{fig:fog-generalised-model}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\textwidth]{fig/fog-models/model.pdf}
\caption{A generalised fog model.}\label{fig:fog-generalised-model}
\end{figure}
We are working on several joint projects that can be launched first, as shown in Table~\ref{tb:proposed-fog-app}. With a \textbf{rebate} scheme, the collaboration will benefit \textbf{all} parties.
\section{A Stable Hardware Carrier} \label{sec-stable-carrier}% Do hardware!!!
To get a revenue stream to start to flow as quickly as possible, we have to attract contractual or subscribed users. To attract and retain business users, we have to provide a stable, cost-saving and sufficient service. To provide the service, we have to find a sufficient number of suitable hardware carriers powered by Pear's fog programs. Figure~\ref{fig:issue-tree-fog-nodes} shows an issue-tree-like analysis of the hardware carrier solutions.
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{fig/decisions/issue-tree-fog-nodes.pdf}
\caption{Issue-tree-like analysis of the hardware carrier solutions.}\label{fig:issue-tree-fog-nodes}
\end{figure}
Individual fog peers can be slow, forbidden, blocked, busy, over-loaded, unavailable, or unreachable. However, to provide fog services, the network size or the system capacity cannot fluctuate too often or too much. Therefore, we had better find stable hardware carriers to run our fog programs on.
As analysed in Section~\ref{sec-dev-hardware}, we have considered designing an all-in-one hardware device. Whatever form the hardware takes, it should have some AP capabilities, so the devices can serve as home/business Wi-Fi routers and gateways. Below are some of their attributes, online patterns, and advantages in their use scenarios:
\begin{itemize}
\item Stable\\
In fact, most AP devices in the real-world are online 24/7, so our devices should be too.
\item Relieving us from some NAT traversal problems\\
Most APs themselves are NAT devices with dedicated IP addresses. With simple port-forwarding tricks, a considerable number of the devices can work as fog servers that can be accessed from outside.
\item Transparent and tolerable resource-scheduling plans\\
Different from the traditional P2P applications 10 years ago, which would ``steal'' the resources on PCs to serve others, scheduling the resources on AP devices to provide fog services has no influence over the users' Internet services since they are not devices the users are \emph{using} directly. In addition, we have monetised incentives for initial acceptance as well as ongoing tolerance of occasional bandwidth reduction (during some peak hours).
\item Hardware compatibility and utility\\
The APs should be very versatile, as discussed in Section~\ref{sec-hardware-res-analysis}.
\end{itemize}
To find a suitable hardware carrier, another thing we must do is to estimate the processing power in terms of ``performance per dollar''.
After analysing different types of devices, we found that the current TV-boxes can meet our three main requirements:
\begin{itemize}
\item Meet our basic needs for running most fog components
\item Have a lower cost than most other types of devices
\item Have low energy consumption --- less than 15W Thermal Design Power (TDP)
\end{itemize}
It is hard to measure the performance in a single index or metric, but from the models in Table~\ref{tb:device-comparision}, we can easily see that a TV box is the most suitable platform to run Pear's fog programs, in terms of performance per unit of cost.
%\doublerulesep 0.1pt
\begin{table}[htb]
\small
\centering
\caption{Performance-Price Comparisons of Different Types of Devices}\label{tb:device-comparision}
\begin{tabular}{p{0.13\linewidth}p{0.25\linewidth}p{0.25\linewidth}l}
\toprule
Device Type & Smart Phone & TV Box & Wi-Fi Router \\
\midrule % \parbox[t]{\linewidth}{ARM A7 2.2Ghz $\times$ 4 + \\ARM A17 1.7GHz $\times$ 4}
CPU & \parbox[t]{8cm}{ARM A7 $2.2$\,Ghz $\times$ $4$ + \\ARM A17 $1.7$\,GHz $\times$ $4$} & ARM A7 $1.3$\,GHz $\times$ $4$ & MIPS 24KEc $580$\,MHz $\times$ $2$\\
GPU & PowerVR G6200 & Mali-400MP2 & N/A\\
RAM & $2$\,GB DDR3 & $1$\,GB DDR3 & $128$\,MB DDR2\\
ROM & N/A & $128$\,MB NAND Flash& $64$\,MB Nor Flash\\
Storage & $32$\,GB eMMC & $8$\,GB eMMC & N/A\\
Cost & CNY~$500$ & CNY~$80$ & CNY~$100$\\
\bottomrule
\end{tabular}
\end{table}
From Table~\ref{tb:device-comparision}, we can see that the Pear smart router's ``all-in-one'' feature and the ``ARM'' facts and trends make it an easy choice for our hardware carrier. Regarding the Operating System, we will use embedded Linux first, and then later port all components to native programs on Android. Here is our rationale:
\begin{enumerate}
\item Embedded Linux distributions, like OpenWrt, are widely used in smart routers.\\
So to utilise the existing components, we had better build on systems like OpenWrt.
\item The Android OS has the strongest ecosystem currently.\\
It works not only on smart phones, but also on TV-boxes, tablets, projectors, PCs, watches, and it should work on routers in the near future. Its community is even stronger than Windows.
\item The technology that wins the developers wins the world.\\
This has been proven many times.
\end{enumerate}
Finally, we will partner with existing hardware vendors, and at the same time cooperate with them to make our own ``exemplary'' fog devices. Table~\ref{tb:pear-device-spec} shows the proposed specifications of Pear's fog node device.
\begin{table}[htb]
\small
\centering
\caption{Proposed specifications of Pear fog device (smart router).}\label{tb:pear-device-spec}
\begin{tabular}{lc}
\toprule
Component & Specification\\
\midrule
OS & Linux, or Android with OpenWrt port\\
CPU & Intel Atom or Core M or ARM Cortex, Quad Core\\
GPU & Intel HD Graphics or ARM Mali\\
RAM & 1-4GB DDR3\\
ROM & 4GB NAND Flash\\
Storage & 32-128GB eMMC/HFS onboard, SSD/HDD extendible\\
Slots/Connectors & USB3.0 or Type C, SD/MMC\\
Networking & 100/1000Mbps Ethernet, 802.11ac Wireless\\
Shape & Compute Stick, or TV-Box like\\
Functionalities & Mini-server, NAS, Wi-Fi router, TV-Box, USB Drive\\
\bottomrule
\end{tabular}
\end{table}
\section{Fog for Content Delivery}
Among all the potential revenue streams, fog content delivery is paramount. Below we justify its superiority over traditional content delivery over the Cloud, and then we list some key objectives in such content delivery systems.
\subsection{In Content Delivery, Where Fog Works Better} % Time in-sensitive!!!
For retrieving an image, a web page, a JavaScript, Flash, or CSS file, which is likely to be a few dozen KB in size, fog can hardly beat the Cloud. Typically this kind of job can be done within a few hundred milliseconds, even taking the RTT into account. By contrast, a typical fog system is organised in a structured P2P fashion, where peers and resources are indexed by DHTs. Each query is expected to be routed through roughly $\log(n)$ hops (where $n$ is the network size), so naively throwing these jobs at the Fog will induce a large delay.
Even in traditional CDNs, handling small-sized data is usually done in a reverse-proxying fashion, as:
\begin{enumerate}
\item there might also be routeing delay in querying the cloud storage system;
\item redirection mechanisms such as HTTP 302 incur a considerable delay that is much higher than the transmission/processing time.
\end{enumerate}
Figure~\ref{fig:cdn-req} depicts different request-response models in CDNs.
\begin{figure}[ht]
\centering
\includegraphics[width=0.72\textwidth]{fig/decisions/CDN_req.pdf}
\caption{Request-Response models in CDNs.}\label{fig:cdn-req}
\end{figure}
We do not even need to mention fog, which is obviously not suitable for small sized static content. Instead, we further analyse the use of fog in four different content service scenarios:
\begin{description}
\item[VoD.] VoD is mostly latency-tolerable; fog takes on the relatively hot portion of all content. Fog helps achieve two main benefits:
\begin{itemize}
\item High total bandwidth along the edge with higher distribution density\\
Even if the per-connection rate is limited, we can aggregate the bandwidth with the help of multi-connection effects.
\item Low bandwidth fees at the edge.\\
\end{itemize}
\item[Web Content.] This application is time-sensitive in content accessibility, and the data volume is huge. Fog only helps with the large-sized files. In terms of popularity, the hottest portion is already cached in traditional CDN edge regions and the coldest portion should be placed in data centres. Fog mainly helps the medium popularity content objects that are not popular enough to be cached on Cloud CDN edges. Fog helps achieve:
\begin{itemize}
\item Relatively low latency if with a considerable replication density
\item Lower bandwidth cost for content providers
\end{itemize}
\item[Live Streaming.] For interactive scenarios, fog is not suitable, because the application is too time-sensitive. However, fog can readily handle channels with a 10-second to 2-minute delay, for example live broadcasting and live event streaming with little or no user interaction.
\item[Historical Internet Archives.] These contain huge numbers of files. When equipped with large storage capacities, fog nodes can also help to quickly replicate the cold portion, providing CPs with a multi-path delivery channel or a failover solution.
\end{description}
\subsection{Key Objectives for Fog Content Delivery}
In this section, we list the key objectives of Pear's fog platform. This can also be deemed as an extension with architectural solution answers to the ``pillars'' part of the OpenFog white paper \cite{openfog}.
%Connectivity, Security, Interoperability, Transparency, Reliability, Availability, Serviceability, Scalability, Autonomy, Programmability, Proximity, Efficiency, Resiliency(Elasticity), Performance, Agility, Functionality, Utility, Hierarchy ... Robustness Profitability!!! (commerciality) change to descriptions:
\subsubsection{Connectivity, Reliability, Availability, Serviceability, Durability}
As most user-end network devices are behind NATs and/or firewalls, and IPv6 is still not widely used, NAT traversal is critical in fog systems.
Although Pear's fog devices are also Wi-Fi APs with NAT capabilities, these devices may work behind users' or ISPs' NATs.
WebRTC has built-in STUN and ICE components to traverse cone-type NATs. However, a considerable portion ($\approx$ 30\%) of NATs are symmetric.
Ford {\em et al.}~\cite{Ford:2005:PCA:1247360.1247373} generalised the procedures of NAT traversal. To traverse symmetric NATs with random port assignment ($\approx$ 0.5\% of total routers), Tsai~\cite{sqt2008} proposed a brute-force ``occupying'' method, but it induced too much overhead. Wei~\cite{wei2008new} and Klinec~\cite{Klinec:2014:TSN:2659651.2659698} proposed methods based on port prediction.
We build a hybrid [UPnP] + [PCP] (or [NAT-PMP]) + [ICE] solution to increase connectivity at fog nodes. If a port mapping by UPnP or PCP can be successfully set up, then our fog nodes can serve as TCP and HTTP servers; otherwise we resort to STUN in ICE to establish a P2P connection on top of UDP. In our STUN implementation, we apply a machine learning-based mechanism to predict the range the next assigned port is likely to fall into.
We also use the parallel mechanisms in Trickle ICE to reduce the communication set-up delay.
Hence, our fog program on end-users' ``0-hop'' devices can be seen both as a super Web server that supports HTTP and WebRTC and as a P2P node. The architecture of Pear's fog node engine is roughly depicted in Figure~\ref{fig:pear-fog-node-engine}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\textwidth]{fig/decisions/pear-fog-node-engine.pdf}
\caption{Architecture of Pear's fog node engine.}\label{fig:pear-fog-node-engine}
\end{figure}
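To make the fallback order concrete, the connectivity cascade described above can be sketched as follows. This is a purely illustrative sketch in Python: the helper functions are stubs standing in for the real UPnP/PCP/ICE logic and are not Pear's actual API.
\begin{verbatim}
# Illustrative sketch of the connectivity cascade; the helpers below are stubs,
# not Pear's real implementation or API.
def try_upnp_mapping(listen_port):
    """Stub: request a UPnP port mapping from the gateway; (ip, port) or None."""
    return None   # pretend UPnP is unavailable in this sketch

def try_pcp_mapping(listen_port):
    """Stub: request a PCP (or NAT-PMP) mapping; (ip, port) or None."""
    return None

def start_ice_with_stun():
    """Stub: gather ICE candidates via STUN; return a reflexive candidate."""
    return ("203.0.113.7", 40123)   # made-up, documentation-range address

def establish_connectivity(listen_port):
    # 1. Prefer an explicit port mapping, so the fog node can act as a plain
    #    TCP/HTTP server reachable from outside the NAT.
    for try_mapping in (try_upnp_mapping, try_pcp_mapping):
        external = try_mapping(listen_port)
        if external is not None:
            return {"mode": "server", "external": external}
    # 2. Otherwise fall back to STUN/ICE hole punching on top of UDP.
    return {"mode": "p2p", "candidate": start_ice_with_stun()}

print(establish_connectivity(8080))
\end{verbatim}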
The system needs to survive, and its capacity should not fluctuate too much. Therefore, durability is a clear success factor. We mainly use monetary incentives to encourage end-users to power and run the devices in a stable and durable fashion.
\subsubsection{Proximity, Scalability, Autonomy}
Both the Fog and the Cloud provide computation, storage, bandwidth, and application resources. Perhaps the most salient difference between the two is the Fog's proximity to users, the environment, and the underlying accessing nodes. The Fog is more localised, while the Cloud is more generalised.
In content delivery, to achieve high throughput and to confine traffic within ISPs, years ago, the Global Network Positioning (GNP) method was introduced. Its key idea is to approximately represent the complex structure of the Internet by a simple geometric space ({\em e.g.} an N-dimensional Euclidean space). In this representation, each host in the Internet is characterized by its position in the geometric space with a set of geometric coordinates. Usually, a number of stable servers are used as landmarks.
However, Choffnes~{\em et al.}~\cite{net-pos-edge} revealed that Internet positioning from the edge at scale exhibits noticeably worse performance than previously reported in studies conducted on research testbeds. A few hundred landmarks might be needed to capture most of the variance.
We devised an online node clustering method in which peers \texttt{ping} each other and form groups in an Expectation-maximisation (E-M) like way. With the help of Radix-tree and GeoHashing, our system allows fast $k$NN and range queries over the whole Internet topology.
In the near future, we plan to implement this component into ALTO and DHT protocols.
Beyond gossip protocols like SCAMP \cite{SCAMP}, we will support hierarchical peer regions/groups.
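The coordinate assignment and clustering details are beyond the scope of this chapter, but the role of GeoHashing can be illustrated with a small, self-contained sketch (a toy in Python, not Pear's implementation): peers whose geohash strings share a long common prefix are close together, so a radix (prefix) tree over the encoded strings supports cheap range and $k$NN-style queries.
\begin{verbatim}
# Toy illustration (not Pear's code): standard geohash encoding of 2-D peer
# coordinates. Nearby peers share long prefixes, which is what makes radix-tree
# indexing and prefix-based range/kNN queries cheap.
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=8):
    """Interleave longitude/latitude bisection bits, 5 bits per character."""
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    out, bits, bit_count, even = [], 0, 0, True
    while len(out) < precision:
        rng, val = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        bits <<= 1
        if val >= mid:
            bits |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:
            out.append(_BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(out)

# Example: peers "a" and "b" are close and share a long prefix; "c" does not.
peers = {"a": (22.54, 114.06), "b": (22.55, 114.10), "c": (48.85, 2.35)}
print({name: geohash(lat, lon) for name, (lat, lon) in peers.items()})
\end{verbatim}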
\subsubsection{Security}
Security is a big issue across all layers. Security in Pear's fog can be categorised into two aspects:
\begin{enumerate}
\item Security for CPs' content and control;
\item Security for end-users' devices, data and privacy.
\end{enumerate}
Currently, we do not have enough human resources to ensure a 100\% secure system, but building a TLS layer with bi-directional host verification can ensure security for most of the scenarios in the above aspects. Figure~\ref{fig:TLS-hypo} is a Hypothesis-Tree-like analysis of why to construct this TLS layer at an early stage.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.00\textwidth]{fig/decisions/tls-hypothesis.pdf}
\caption{Hypothesis-Tree-like Analysis on the TLS issue}\label{fig:TLS-hypo}
\end{figure}
To ensure data integrity and avoid data being replaced by illegal content, Pear will implement its own Self-verifying File System (SFS) with \texttt{lfuse}.
\subsubsection{Interoperability, Transparency, Programmability}
WebRTC endows the system with strong interoperability with the Web world. In addition, we provide SDKs for different platforms such as Windows, Android and embedded Linux.
Our SDKs work nearly transparently. Section~\ref{sec-streaming-protocolic} shows a one-line enabling demo for the web front-end.
We also provide plug-in development capabilities on top of Pear Fog components. To program with Pear Fog, a developer is just required to override or implement a few call-back functions.
\subsubsection{Efficiency}
Different from BitTorrent's design logic, Pear's Fog system cares more about efficiency than fairness or equality.
On top of the shared platform and resources, Pear implements different algorithms for different service scenarios.
\subsubsection{Resiliency, Elasticity, Agility}
We will soon develop an automatic Operating Support System (OSS) that distributes solid and reliable agents to every fog node, partly as described in the U-2 project in Section~\ref{sec-medium-term-projs}.
\subsubsection{Commerciality, Profitability}
Last but not least, every service the Fog platform provides shall be business-friendly. Cooperating with CPs, Pear is going to create a digital coin and build sustainable monetary incentives.
As discussed before, we will try our best to introduce a strong Android eco-system to our platform and make the plug-in development profitable. We will try to win the developers --- from the beginning.
\section{The Economy of Attention and User-centric Innovation}
As the amount of information increases, the most valuable resource becomes attention --- not the information itself. During the past decade, most successful marketing philosophies were built on top of ``the economy of attention''. However, this approach is inherently built around ``informing'' and often aims only at a deal or a transaction. It lacks warmth and ignores the humanistic solicitude of customers.
Now there is a shift from ``the economy of attention'' to ``the economy of willingness''.
The customers not only engage in the design but also create and share value among themselves in the communities. It is user-centric: no informing or persuasion -- just touching users' hearts through deeds and matters of the heart. In an era of a willingness economy, engagement in design and production is more important than obtaining the product. The competitiveness of future enterprises will lie in the ability to turn willingness into demand.
Before turning willingness into demand, we have to create the willingness, and before creating the willingness, we have to build up a good image. Pear partners will actively participate in technical web forums on OpenWrt, Android, or wireless routers. A considerable number of the members of these communities are likely to be Pear Fog's early adopters, both for software and hardware.
On the other hand, they can also serve as volunteers to help improve Pear's products.
\section{A Single-point Breakthrough}
For a small start-up, it is not wise to do something that obviously sits on a giant's main road. Instead, most successful start-ups have grown big because they grabbed a narrow trail that had the potential to lead to a broad land where they could dominate. At first, their trails looked so trivial and non-profitable that the giants saw no value in paying attention, let alone putting resources into them.
We have narrowed our potential projects down to a single thing: iWebRTC, a smart fog gateway that
\begin{itemize}
\item supports all Web-friendly protocols such as HTTP, WebRTC, and WebSocket;
\item supports all P2P-friendly techniques such as uTP, UPnP, PCP (or NAT-PMP), STUN, Trickle ICE, DHT, GOSSIP and ALTO;
\item unites the Fog, the Cloud, and the Web.
\end{itemize}
It is an all-purpose project that connects:
\begin{itemize}
\item the router world and the browser world
\item the communication domain and the computing domain
\item the circuit switching net and the packet switching net
\item the very back-end and the very front-end
\end{itemize}
Moving forward, this project enables us to perform domain-specific applications, such as healthcare; going backward, this project enables us to do better traffic-related services.
We will try our best to make a single-point breakthrough on top of it. We expect that in a foreseeable time ({\em i.e.} 2-3 years), Pear's iWebRTC will be to the real-time data/media communication world what Nginx has become to the Web server field.
Specifically, in the first stage, even for the WebRTC stack, we will avoid jumping too deeply into the media codec part, because this part is still not settled within the standardisation committee, as it is where giants fight with each other (see Figure~\ref{fig:giants-fight}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.86\textwidth]{fig/decisions/giants_fight.jpg}
\caption{Video Coding Standards: where giants fight.}\label{fig:giants-fight}
\end{figure}
\section{Three Medium-Term Projects of Pear}\label{sec-medium-term-projs}
\begin{description}
\item[Manhattan] First of all, we will create a business that utilises the idle bandwidth at the user-end to help with a lot of fog computing scenarios. Our core is the software suite that runs on the devices that process the data requests, plus the ``coordinator'' programs on our servers that optimise the traffic on the whole network. Our architecture and algorithms greatly outperform the traditional ones. It is like an atomic bomb. So we have named this part the ``Manhattan'' project.
\item[B-29] But even if we successfully make an atomic bomb, we will still be unable to drop it on our enemy -- because it is too far away and we have no long-range carrier. Obviously, we should develop a software launcher beforehand. It must automatically update itself and work in an incremental and peer-to-peer fashion, thus saving bandwidth --- imagine tens of millions of clients: how can we update them all to the latest version in a single day? We need a combination of the advantages of Google's, Microsoft's, and Tencent QQ's updating schemes. We call this one the ``B-29'' project. This project also contains a hardware carrier selection process.
\item[U-2] After the above two weapons are done, the system will be in operation. To save as much labour-cost as possible, we should also develop a system that monitors and manages the devices anywhere and everywhere. It should also work automatically as much as possible and provide fault-tolerance capabilities. This system is to us in our competition what the U-2 plane was to the US in the cold war.
\begin{figure}[ht]
\centering
\subfigure[B-29.] { \label{fig:B-29}
\includegraphics[width=0.33\columnwidth]{fig/decisions/B-29.jpg}
}
\subfigure[Manhattan.] { \label{fig:Manhattan}
\includegraphics[width=0.27\columnwidth]{fig/decisions/fat-man-little-boy.jpg}
}
\subfigure[U-2.] { \label{fig:U-2}
\includegraphics[width=0.33\columnwidth]{fig/decisions/U-2.jpg}
}
\caption{Three metaphors of the 3 pivotal projects of Pear.}
\label{fig:3projs}
\end{figure}
\end{description}
\newpage
\addtocontents{toc}{\protect\setcounter{tocdepth}{0}}
%\renewcommand\thechapter{}
%\renewcommand\thesection{\arabic{section}}.
%\chapter*{Data}
\newpage
\section*{Data for $ H_{competition} $}
blub
\section*{Data for $ H_{vm-predict} $}
blab
%-------------------------
% Resume in Latex
% Author : Sidratul Muntaha Ahmed
% License : MIT
%------------------------
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[hidelinks]{hyperref}
\usepackage{fancyhdr}
\usepackage[english]{babel}
\usepackage{tabularx}
\input{glyphtounicode}
%----------FONT OPTIONS----------
% sans-serif
% \usepackage[sfdefault]{FiraSans}
% \usepackage[sfdefault]{roboto}
% \usepackage[sfdefault]{noto-sans}
% \usepackage[default]{sourcesanspro}
% serif
% \usepackage{CormorantGaramond}
% \usepackage{charter}
\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{
\vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]
% Ensure that generate pdf is machine readable/ATS parsable
\pdfgentounicode=1
%-------------------------
% Custom commands
\newcommand{\resumeItem}[1]{
\item\small{
{#1 \vspace{-2pt}}
}
}
\newcommand{\resumeSubheading}[4]{
\vspace{-2pt}\item
\begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\end{tabular*}\vspace{-7pt}
}
\newcommand{\resumeSubSubheading}[2]{
\item
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\textit{\small#1} & \textit{\small #2} \\
\end{tabular*}\vspace{-7pt}
}
\newcommand{\resumeProjectHeading}[2]{
\item
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\small#1 & #2 \\
\end{tabular*}\vspace{-7pt}
}
\newcommand{\resumeSubItem}[1]{\resumeItem{#1}\vspace{-4pt}}
\renewcommand\labelitemii{$\vcenter{\hbox{\tiny$\bullet$}}$}
\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=0.15in, label={}]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}
%-------------------------------------------
%%%%%% RESUME STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING----------
\begin{center}
\textbf{\Huge \scshape Remi Guan} \\ \vspace{1pt}
\href{mailto:[email protected]}{[email protected]} $|$
\href{https://github.com/remi-crystal}{\underline{github.com/remi-crystal}}
% {\underline{x.x.com}} $|$
% \href{https://linkedin.com/in/x}{\underline{linkedin.com/in/x}} $|$
\end{center}
%-----------EDUCATION-----------
\section{Education}
\resumeSubHeadingListStart
\resumeSubheading
{University of London}{April 2022 – Present}
      {Bachelor of Computer Science}{Remote}
% \resumeItemListStart
% \resumeItem{WAM/GPA: X}
% \resumeItem{Award/Achievement/Involvement 1}
% \resumeItem{Award/Achievement/Involvement 2}
% \resumeItem{Award/Achievement/Involvement 3}
% \resumeItemListEnd
\resumeSubHeadingListEnd
%-----------EXPERIENCE-----------
\section{Experience}
\resumeSubHeadingListStart
\resumeSubheading
    {Full Stack Web Developer}{May 2021 – Present}
{Qiubi Tech, Ltd}{Shanghai, China}
\resumeItemListStart
        \resumeItem{Developed web pages and backend services in React and NestJS}
\resumeItem{Managed the servers}
\resumeItemListEnd
\resumeSubHeadingListEnd
%-----------STUDY-----------
\section{Study}
\resumeSubHeadingListStart
\resumeProjectHeading
{\textbf{Data Structures And
Algorithms} $|$ \emph{Java}}{August 2021 -- Present}
\resumeItemListStart
            \resumeItem{Learning about basic algorithms and data structures, such as arrays, lists, and trees}
\resumeItemListEnd
\resumeProjectHeading
{\textbf{Coursera}}{February 2022 -- March 2022}
\resumeItemListStart
\resumeItem{\textbf{From Nand To Tetris}; Learned how to build a modern computer from logic gates}
\resumeItemListEnd
\resumeProjectHeading
{\textbf{CS 61A} $|$ \emph{Python}}{March 2021 -- February 2022}
\resumeItemListStart
            \resumeItem{Learned about the essence of functional programming, such as abstraction, higher-order functions, and recursion}
            \resumeItem{Also learned about programming languages and wrote my own Scheme interpreter}
\resumeItemListEnd
\resumeProjectHeading
{\textbf{Human resource machine} $|$ \emph{Game}}{December 2021 -- January 2022}
\resumeItemListStart
            \resumeItem{Learned about register machines and programming in assembly}
            \resumeItem{Additionally learned about optimizing assembly with classical tricks such as block reordering, common subexpression elimination, peephole optimization, and loop unrolling}
\resumeItemListEnd
\resumeProjectHeading
{\textbf{Turing Complete} $|$ \emph{Game}}{March 2022 - Present}
\resumeItemListStart
\resumeItem{Learning about building a computer and writing assembly code}
\resumeItemListEnd
\resumeSubHeadingListEnd
%-----------PROJECTS-----------
\section{Projects}
\resumeSubHeadingListStart
\resumeProjectHeading
{\textbf{Meta Signature Util} $|$ \emph{TypeScript}}{November 2021 -- December 2021}
\resumeItemListStart
            \resumeItem{NPM library to sign articles with the Curve25519 algorithm}
\resumeItemListEnd
\resumeProjectHeading
{\textbf{Meta UCenter} $|$ \emph{Nest.js}}{July 2021 -- Present}
\resumeItemListStart
            \resumeItem{Backend for user signup and login, based on TypeScript and NestJS}
\resumeItemListEnd
\resumeProjectHeading
{\textbf{Meta CMS} $|$ \emph{React}}{August 2021 -- November 2021}
\resumeItemListStart
\resumeItem{Frontend for users to create a blog and manage their account. Based on Ant Design Pro}
\resumeItemListEnd
\resumeSubHeadingListEnd
%
%-----------TECHNICAL SKILLS-----------
\section{Technical Skills}
\begin{itemize}[leftmargin=0.15in, label={}]
\item{
     Basic algorithms,
Functional programming}
\\
\item{JavaScript, TypeScript, Python3, LaTeX} \\
\item{HTML, CSS, React, Vue, Next.js, Nest.js} \\
\end{itemize}
%
%-----------MISC-----------
\section{Misc}
\begin{itemize}[leftmargin=0.15in, label={}]
\item{
\resumeItemListStart
\resumeItem{19k Reputation at Stack Overflow with answers to Python questions (\href{https://stackoverflow.com/users/5299236}{Remi Crystal})}
\resumeItemListEnd
}
\end{itemize}
%-------------------------------------------
\end{document}
\documentclass{standalone}
\begin{document}
\chapter*{Introduction}\addcontentsline{toc}{chapter}{Introduction}
\markboth{Introduction}{Introduction}
Since the end of 2019, COVID-19 has spread widely all over the world. Up to now, the gold standards for the diagnosis of this disease are the reverse transcription-polymerase chain reaction (RT-PCR) and the gene sequencing of sputum, throat swabs and lower respiratory tract secretions~\cite{ART:Zhao}.
An initial prospective study by Huang et al.~\cite{ART:Huang} on chest CT scans of patients affected by COVID-19 showed that $98\%$ of the examined subjects had a bilateral patchy shadow or ground-glass opacity (GGO) and consolidation (CS). Their severity, shape and the involved percentage of the lungs were related to the stage of the disease~\cite{ART:Bernheim}. Finally, other studies have monitored the change in volume and shape of these features in healed patients~\cite{ART:Ai} to check their actual recovery. This behaviour can be seen in \figurename\,\ref{fig:HealthVSCovid}, in which chest CT scans of patients with different severities of the disease are compared.
\begin{figure}[h!]
\centering
\includegraphics[scale=.37]{lesionStage.png}
\caption{Ground Glass Opacity and Consolidation on chest CT scans of COVID-19 patients with different severities of the disease. From left to right we can observe an increase in the GGO areas in the lungs.}\label{fig:HealthVSCovid}
\end{figure}
GGO and CS are not exclusive to COVID-19; they may also be caused by pulmonary edema, bacterial infections, other viral infections or alveolar haemorrhage~\cite{ART:Collins}. The combination of CT scan information with other diagnostic techniques, like the RT-PCR mentioned above, may help the diagnosis, the monitoring of the course of the disease and the verification of recovery in healed patients. The study of these patterns may also help to understand the infection pathogenesis, which is not well known since COVID-19 is a new illness.
Identification and quantification of these lesions in chest CT scans is a fundamental task. Up to now, the segmentation has been performed in a manual or semi-automatic way; both approaches are time-consuming (several hours or days) and subject to operator experience, since they require interaction with trained personnel. Moreover, these kinds of segmentation cannot be reproduced. To overcome these issues, an automatic and fast segmentation method is required.
This thesis work, carried out in collaboration with the Department of Diagnostic and Preventive Medicine of the Policlinico Sant'Orsola - Malpighi, aims to develop an automatic pipeline for the identification of GGO and CS in chest CT scans of patients affected by COVID-19. The work was developed and tested on chest CT scans provided by Sant'Orsola, but public repositories~\cite{DATA:ZENODO}~\cite{DATA:MOSMED} were also used as a benchmark.
We start our discussion by explaining what a digital medical image is, its physical meaning and its digital representation, focusing mainly on Computed Tomography. After that, a brief review of the main image segmentation techniques will be presented, focusing on the ones used in the implementation of the pipeline.
The discussion will continue by describing the main characteristics of the pipeline and its structure: we will see how colour quantization was used to achieve the segmentation and how digital image properties were used to take different image features into account. We will also discuss how a preliminary lung segmentation improves the identification performance. After that, we will describe the pipeline implementation in detail.
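As a rough illustration of the colour quantization idea (a minimal sketch under my own assumptions, not the pipeline developed in this thesis), the intensity values of a CT slice can be clustered into a small number of levels with a plain k-means:
\begin{verbatim}
# Sketch: intensity quantization of a CT slice via a simple k-means,
# mapping each pixel to the nearest of k intensity centroids.
import numpy as np

def quantize(slice_hu, k=4, iters=20):
    """slice_hu: 2D array of CT intensities; returns a k-level label map."""
    x = slice_hu.astype(float).ravel()
    centroids = np.linspace(x.min(), x.max(), k)   # initial centroids
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        for j in range(k):                         # recompute each centroid
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean()
    return labels.reshape(slice_hu.shape)

demo = np.random.normal(-500.0, 300.0, size=(512, 512))   # synthetic slice
label_map = quantize(demo, k=4)
\end{verbatim}
In the actual pipeline the quantization is of course combined with the preliminary lung segmentation and the other steps described in the following chapters.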
In the end, we will discuss the segmentation results. The pipeline performance was assessed with different methods, such as visual comparison with other segmentation techniques, quantitative comparison against manual annotations and blind evaluation by experts.
\end{document} | {
"alphanum_fraction": 0.8082573455,
"avg_line_length": 127.3548387097,
"ext": "tex",
"hexsha": "975183a7d9ee6663b6c428cd03dfa2cbe87cce32",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "RiccardoBiondi/SCDthesis",
"max_forks_repo_path": "tex/introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "RiccardoBiondi/SCDthesis",
"max_issues_repo_path": "tex/introduction.tex",
"max_line_length": 669,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "RiccardoBiondi/SCDthesis",
"max_stars_repo_path": "tex/introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 843,
"size": 3948
} |
\section{Giant Rat}
asdf | {
"alphanum_fraction": 0.7037037037,
"avg_line_length": 9,
"ext": "tex",
"hexsha": "6658ec709abae22a376b3d44f8579c32b135504b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_forks_repo_path": "npcs/beasts/giantrat.tex",
"max_issues_count": 155,
"max_issues_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_issues_repo_issues_event_max_datetime": "2022-03-03T13:49:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-03-18T13:19:57.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_issues_repo_path": "npcs/beasts/giantrat.tex",
"max_line_length": 20,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_stars_repo_path": "npcs/beasts/giantrat.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-03T09:32:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-13T09:33:31.000Z",
"num_tokens": 8,
"size": 27
} |
\documentclass[11pt]{article}
\usepackage{listings}
\usepackage{tikz}
\usepackage{alltt}
\usepackage{hyperref}
\usepackage{url}
%\usepackage{algorithm2e}
\usetikzlibrary{arrows,automata,shapes}
\tikzstyle{block} = [rectangle, draw, fill=blue!20,
text width=5em, text centered, rounded corners, minimum height=2em]
\tikzstyle{bt} = [rectangle, draw, fill=blue!20,
text width=1em, text centered, rounded corners, minimum height=2em]
\newtheorem{defn}{Definition}
\newtheorem{crit}{Criterion}
\newcommand{\true}{\mbox{\sf true}}
\newcommand{\false}{\mbox{\sf false}}
\newcommand{\handout}[5]{
\noindent
\begin{center}
\framebox{
\vbox{
\hbox to 5.78in { {\bf Software Testing, Quality Assurance and Maintenance } \hfill #2 }
\vspace{4mm}
\hbox to 5.78in { {\Large \hfill #5 \hfill} }
\vspace{2mm}
\hbox to 5.78in { {\em #3 \hfill #4} }
}
}
\end{center}
\vspace*{4mm}
}
\newcommand{\lecture}[4]{\handout{#1}{#2}{#3}{#4}{Lecture #1}}
\topmargin 0pt
\advance \topmargin by -\headheight
\advance \topmargin by -\headsep
\textheight 8.9in
\oddsidemargin 0pt
\evensidemargin \oddsidemargin
\marginparwidth 0.5in
\textwidth 6.5in
\parindent 0in
\parskip 1.5ex
%\renewcommand{\baselinestretch}{1.25}
%\renewcommand{\baselinestretch}{1.25}
% http://gurmeet.net/2008/09/20/latex-tips-n-tricks-for-conference-papers/
\newcommand{\squishlist}{
\begin{list}{$\bullet$}
{ \setlength{\itemsep}{0pt}
\setlength{\parsep}{3pt}
\setlength{\topsep}{3pt}
\setlength{\partopsep}{0pt}
\setlength{\leftmargin}{1.5em}
\setlength{\labelwidth}{1em}
\setlength{\labelsep}{0.5em} } }
\newcommand{\squishlisttwo}{
\begin{list}{$\bullet$}
{ \setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\topsep}{0pt}
\setlength{\partopsep}{0pt}
\setlength{\leftmargin}{2em}
\setlength{\labelwidth}{1.5em}
\setlength{\labelsep}{0.5em} } }
\newcommand{\squishend}{
\end{list} }
\begin{document}
\lecture{25 --- March 9, 2015}{Winter 2015}{Patrick Lam}{version 1}
\section*{Logic Coverage}
We now shift from graphs to logical expressions, for instance:
\begin{center}
\tt if (visited \&\& x > y || foo(z))
\end{center}
Graphs are made up of nodes, connected by edges. Logical expressions,
or predicates, are made up of clauses, connected by operators.
\paragraph{Motivation.} Logic coverage is required by a standard if you're writing
avionics software. The idea is that software makes conditional decisions. While a
condition evaluates to true or false, there are a number of ways that it might
have done so, depending on its subparts. Logic coverage aims to make sure that
test cases explore the subparts adequately.
The standard is called DO-178B. You can find a tutorial about logic coverage,
and in particular the coverage called Modified Condition/Decision Coverage (MC/DC), here:
\url{http://shemesh.larc.nasa.gov/fm/papers/Hayhurst-2001-tm210876-MCDC.pdf}.
\paragraph{Predicates.} A \emph{predicate} is an expression that evaluates
to a logical value. Example:
\[ a \wedge b \leftrightarrow c \]
Here are the operators we allow, in order of precedence (high to low):
\begin{itemize}
\item $\neg$: negation
\item $\wedge$: and (not short-circuit)
\item $\vee$: or (not short-circuit)
\item $\rightarrow$: implication
\item $\oplus$: exclusive or
\item $\leftrightarrow$: equivalence
\end{itemize}
We do not allow quantifiers; they are harder to reason about. Note
also that our operators are not quite the same as the ones in typical
programming languages.
\paragraph{Clauses.} Predicates without logical operators are \emph{clauses};
clauses are, in some sense, ``atomic''. The following predicate contains three clauses:
\begin{center}
{\tt (x > y) || foo(z) \&\& bar}
\end{center}
\paragraph{Logical Equivalence.} Two predicates may be logically equivalent, e.g.
\[ x \wedge y \vee z \equiv (x \vee z) \wedge (y \vee z), \]
and these predicates are not equivalent to $x \leftrightarrow z$. Equivalence is harder with
short-circuit operators.
\paragraph{Sources of Predicates:} source code, finite state machines, specifications.
\section*{Logic Expression Coverage Criteria}
We'll use the following notation:
\begin{itemize}
\item $P$: a set of predicates;
\item $C$: all clauses making up the predicates of $P$.
\end{itemize}
Let $p \in P$. Then we write $C_p$ for the clauses of the predicate $p$,
i.e.
\[ C_p = \{ c\ |\ c \in p \}; \qquad C = \bigcup_{p \in P} C_p \]
Given a set of predicates $P$, we might want to cover all of the predicates.
\begin{crit}
{\bf Predicate Coverage} (PC). For each $p \in P$, TR contains two requirements:
1) $p$ evaluates to true; and 2) $p$ evaluates to false.
\end{crit}
PC is analogous to edge coverage on a CFG. (Let $P$ be the
predicates associated with branches.)
{\sf Example:} ~\\[1em]
\[ P = \{ ( x + 1 == y) \wedge z \}. \]
% predicate coverage:
% x = 4, y = 5, z = true -> T
% x = 3, y = 5, z = true -> F
PC gives a very coarse-grained view of each predicate.
We can break up predicates into clauses to get more details.
\begin{crit}
{\bf Clause Coverage} (CC). For each $c \in C$, TR contains two requirements:
1) $c$ evaluates to true; 2) $c$ evaluates to false.
\end{crit}
{\sf Example:} ~\\[2em]
\[ (x + 1 == y) \wedge z\] now needs:
% x + 1 == y -> T and F
% z -> T and F
% This gives the tests (x = 4, y = 5, z = T) and (x = 4, y = 4, z = F).
\paragraph{Subsumption.} Are there subsumption relationships between
CC and PC?\\[3em]
% no, example: p = a \wedge b, C = \{ a, b \}
% T_c = \{ \langle a:T, b:F \rangle, \langle a:F, b:T \rangle \} satisfies CC but not PC;
% T_p = \{ \langle a:T, b:F \rangle, \langle a:F, b:F \rangle \} satisfies PC but not CC.
The obvious exhaustive approach: try everything. (This obviously subsumes
everything else).
\begin{crit}
{\bf Combinatorial Coverage} (CoC). For each $p \in P$, TR has test
requirements for the clauses in $C_p$ to evaluate to each possible
combination of truth values.
\end{crit}
This is also known as multiple condition coverage. Unfortunately, the
number of test requirements, while finite, grows
\underline{\hspace*{7em}} and is hence unscalable.
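As a quick illustration (my own sketch, not part of the original notes), the CoC test requirements are just all clause-value combinations, which are trivial to enumerate but exponential in the number of clauses:
\begin{verbatim}
# Sketch: enumerate the CoC test requirements for p = (x + 1 == y) and z,
# treating "x + 1 == y" and "z" as clauses a and b.
from itertools import product

clauses = ["a", "b"]                 # a: x + 1 == y,  b: z
p = lambda a, b: a and b
for values in product([True, False], repeat=len(clauses)):
    print(dict(zip(clauses, values)), "->", p(*values))
\end{verbatim}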
\section*{A Heuristic: Active Clauses}
So, in general, we can't evaluate every value of every clause, because
there are too many of them. The next best thing is to focus on each
clause and make sure it affects the predicate. That is, we'll test
each clause while it is \emph{active}.
{\sf Example.} Consider the following predicate:
\[ p = x \wedge y \vee (z \wedge w) \]
Let's say that we focus on $y$; we'll call it the major clause. (We
may designate any clause as a major clause at our whim.) That makes $x, z, w$
minor clauses.
We want to make $y$ \emph{determine} $p$ with certain minor clause
values. That is:
\begin{itemize}
\item if we set $y$ to {\sf true}, then $p$ evaluates to some value $X$;
\item if we set $y$ to {\sf false}, then $p$ must evaluate to $\neg X$.
\end{itemize}
The truth assignment:
\[ x = \mbox{\textvisiblespace} \quad z = \mbox{\textvisiblespace} \quad w = \mbox{\textvisiblespace} \]
will make $y$ determine $p$; in particular, $y$ {\sf true} makes $p$
{\sf true} and $y$ {\sf false} makes $p$ {\sf false}.
% x = true, z = false, w = true.
\begin{defn}
Designate $c_i$ as a major clause in predicate $p$. Then $c_i$
\emph{determines} $p$ if the minor clauses $c_j \in p, j \neq i$, have
values such that changing the truth value of $c_i$ changes the truth value
of $p$.
\end{defn}
We do \emph{not} require that $c_i$ has the same truth value as $p$.
{\sf That requirement leads to trouble e.g. for the predicate: }
Informally, determination tests each clause in a context where that
clause has an effect.
\paragraph{Example.}
\[ p = a \vee b \]
{\sf Note that $b$ does not determine $p$ when:}\\[1em]
% a is true
That is, testing $b$ has no effect; the test set {\sf \{ true, false
\} } does not test $a$ or $b$ effectively.
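Determination is mechanical enough to check by brute force. The following sketch (mine, not from the notes, using a fresh example predicate) flips the major clause under a fixed assignment of the minor clauses and reports whether the predicate flips as well:
\begin{verbatim}
# Sketch: does the major clause determine predicate p under the given
# minor-clause values?
def determines(p, major, minor_values):
    """p: function of a dict of clause values; major: name of the major clause."""
    as_true = dict(minor_values, **{major: True})
    as_false = dict(minor_values, **{major: False})
    return p(as_true) != p(as_false)       # flipping the major clause must flip p

# Example predicate p = a and b: b determines p only when a is true.
p = lambda v: v["a"] and v["b"]
print(determines(p, "b", {"a": True}))     # True
print(determines(p, "b", {"a": False}))    # False
\end{verbatim}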
Here is a variant of clause coverage which uses determination.
\begin{defn}
Active Clause Coverage (ACC). For each $p \in P$ and making each
clause $c_i \in C_p$ major, choose assignments for minor clauses $c_j,
j \neq i$ such that $c_i$ determines $p$. TR has two requirements for each
$c_i$: $c_i$ evaluates to true and $c_i$ evaluates to false.
\end{defn}
This definition is somewhat ambiguous. We'll refine it soon.
\paragraph{Example.} For $p = a \vee b$, make $a$ major. We need
$b$ to be false for $a$ to determine $p$. {\sf This leads to the TRs:} \\[1em]
%\[ \{ \langle a:\true, b:\true \rangle, \langle a:\false, b:\false \rangle \} \]
and similarly for $b$ to determine $p$ {\sf we need TRs:}\\[1em]
%\[ \{ \langle a:\false, b:\true \rangle, \langle a:\false, b:\false \rangle \} \]
Note the overlap between test requirements; it will always exist, meaning
that {\sf our set of TRs for active clause coverage are:}
%\[ \{ \langle a:\true, b:\true \rangle, \langle a:\false, b:\false \rangle, \langle a:\false, b:\true \rangle \} \]
In general, for a predicate with $n$ clauses, we need $n+1$ test requirements.
(It might seem that we'd need $2n$ test requirements, but we don't. Why?)
\end{document}
| {
"alphanum_fraction": 0.695026178,
"avg_line_length": 35.9529411765,
"ext": "tex",
"hexsha": "8d8bab78882d9ec9cfc3b12d9a19d030ca9bbbf4",
"lang": "TeX",
"max_forks_count": 14,
"max_forks_repo_forks_event_max_datetime": "2018-04-14T20:06:46.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-01-09T18:29:15.000Z",
"max_forks_repo_head_hexsha": "63db2b4e97c0f53e49f3d91696e969ed73d67699",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "patricklam/stqam-2017",
"max_forks_repo_path": "2015-notes/L25.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "63db2b4e97c0f53e49f3d91696e969ed73d67699",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "patricklam/stqam-2017",
"max_issues_repo_path": "2015-notes/L25.tex",
"max_line_length": 116,
"max_stars_count": 30,
"max_stars_repo_head_hexsha": "63db2b4e97c0f53e49f3d91696e969ed73d67699",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "patricklam/stqam-2017",
"max_stars_repo_path": "2015-notes/L25.tex",
"max_stars_repo_stars_event_max_datetime": "2018-04-15T22:27:00.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-12-17T01:10:11.000Z",
"num_tokens": 2886,
"size": 9168
} |
\documentclass[12pt]{article}
\input{physics1}
\begin{document}
\section*{NYU Physics I---dimensional analysis of orbits}
Here we exercise dimensional analysis to derive some properties of
gravitational orbits, including one of Kepler's laws.
For context, the Earth ($m= 6\times 10^{24}\,\kg$) orbits the Sun ($M= 2\times
10^{30}\,\kg$) at a distance of $R= 1.5\times 10^{11}\,\m$.
\paragraph{\theproblem}\refstepcounter{problem}%
What is the orbital period $T$ of the Earth in units of $\s$?
\paragraph{\theproblem}\refstepcounter{problem}%
What combination of $M$, $R$, and $T$ has dimensions of acceleration?
What has dimensions of force? What has dimensions of momentum?
\paragraph{\theproblem}\refstepcounter{problem}%
The force on the Earth from the Sun that keeps it in orbit is
gravitational. What is the magnitude of the gravitational force, in
terms of $G$, $m$, $M$, and $R$? What about acceleration?
\paragraph{\theproblem}\refstepcounter{problem}%
If you assume that the gravitational force provides the
dimensional-analysis force you derived above, how much longer would
the year be if the Earth's orbit were four times larger in radius than
it is? The general answer is one of Kepler's laws!
\paragraph{\theproblem}\refstepcounter{problem}%
Now solve the problem and find speed $v$ in terms of $G$, $m$, $M$,
and $R$ for a circular orbit. Do the same for period $T$. By what
factor were your dimensional-analysis answers wrong?
\paragraph{\theproblem}\refstepcounter{problem}%
You got Kepler's law correct even though your dimensional-analysis result
was off by a constant factor. Why?
\paragraph{\theproblem}\refstepcounter{problem}%
If the Earth were four times less massive than it is, but were still on
a circular orbit at its current radius, how much longer or shorter
would the year be?
\paragraph{\theproblem}\refstepcounter{problem}%
If the Sun were four times less massive than it is, but the Earth were
still on a circular orbit at its current radius, how much longer or
shorter would the year be?
\paragraph{\theproblem}\refstepcounter{problem}%
The Sun orbits the Milky Way on a close-to-circular orbit at a speed
of $2\times 10^5\,\mps$ and a distance of $R= 3\times 10^{20}\,\m$.
What is the orbital time of the Sun around the Milky Way?
\paragraph{\theproblem}\refstepcounter{problem}%
What, therefore, is the mass of the Milky Way, approximately, in units
of the mass of the Sun?
\end{document}
| {
"alphanum_fraction": 0.7601309865,
"avg_line_length": 41.406779661,
"ext": "tex",
"hexsha": "b71589fc94b413655882a0d0dee538e384977584",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "davidwhogg/Physics1",
"max_forks_repo_path": "tex/worksheet_orbits.tex",
"max_issues_count": 29,
"max_issues_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda",
"max_issues_repo_issues_event_max_datetime": "2019-01-29T22:47:25.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-10-07T19:48:57.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "davidwhogg/Physics1",
"max_issues_repo_path": "tex/worksheet_orbits.tex",
"max_line_length": 78,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "davidwhogg/Physics1",
"max_stars_repo_path": "tex/worksheet_orbits.tex",
"max_stars_repo_stars_event_max_datetime": "2017-11-13T03:48:56.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-11-13T03:48:56.000Z",
"num_tokens": 662,
"size": 2443
} |
%=========================================================================
% sec-xpc
%=========================================================================
\section{Thesis Proposal: Explicit-Parallel-Call Architectures}
\label{sec-xpc}
\begin{figure}[t]
\begin{minipage}[b]{0.42\tw}
\input{fig-intro-overview}
\end{minipage}%
\hfill%
\begin{minipage}[b]{0.56\tw}
\input{fig-intro-vision}
\end{minipage}
\end{figure}
%In order to \emph{expose} fine-grain parallel tasks, we propose a
%parallel programming library coupled with a unified XPC instruction set
%architecture (ISA). To \emph{execute} parallel tasks, we propose a
%heterogeneous mix of microarchitectural tiles specialized for
%conventional and amorphous data parallelism. Finally, to \emph{schedule}
%parallel tasks, we propose a software runtime with hardware support for
%adaptive migration of tasks.
\subsection{Exposing Fine-Grain Parallel Tasks}
%\textbf{Exposing Parallel Tasks --}
The XPC programming framework exposes parallelism at a fine granularity
and provides support for nested parallelism. The framework enables
application developers to focus on exploring different algorithms for
implementing a given application (e.g., recursive, iterative, etc.) and
evaluating the tradeoffs of mapping different algorithms to different XPC
tiles. Several parallel primitives in the spirit of the Intel
TBB~\cite{reinders-tbb-book2007} library are provided, including
\TT{parallel\_invoke} for recursively generating tasks and
\TT{parallel\_for} for parallelizing loop iterations.
%We take advantage of modern C++11 lambda functions to achieve a clean and
%convenient method for defining tasks.
The XPC ISA extends a traditional RISC ISA with three new instructions: a
parallel function call instruction (\TT{pcall}), a parallel function
return instruction (\TT{pret}), and a parallel synchronization
instruction (\TT{psync}). A parallel function call enables the software
to inform the hardware that it is acceptable to execute the callee in
parallel with the caller, but critically, parallel execution is \emph{not
required}. \TT{pcall}s can be nested to arbitrary depths, and recursive
\TT{pcall}s are also allowed. A \TT{pret} returns from a \TT{pcall} and
also acts as an implicit synchronization point. A particularly novel
feature of the XPC ISA is the ability to encode a parallel multi-call
which informs the hardware that the given function should be executed $n$
times where $n$ can be either specified at compile time or run time.
%While it is certainly possible to achieve the same effect by using a
%basic \TT{pcall} within a loop, a parallel multi-call enables very
%efficient execution on data-parallel accelerator tiles.
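As a loose illustration of these semantics (a Python sketch of my own, not the XPC framework or its ISA), a \TT{parallel\_invoke}-style primitive expresses that its callees \emph{may}, but need not, execute in parallel with the caller, with the join playing a role analogous to the implicit synchronization at \TT{pret}:
\begin{verbatim}
# Sketch (not the XPC framework): a parallel_invoke-style primitive.  The
# callees *may* run in parallel with one another, but a serial execution
# ([t() for t in tasks]) would be an equally legal implementation.
from concurrent.futures import ThreadPoolExecutor

def parallel_invoke(*tasks):
    """Run zero-argument callables, possibly in parallel, and join on all."""
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(t) for t in tasks]
        return [f.result() for f in futures]   # implicit synchronization point

def fib(n):                                    # nested/recursive parallel calls
    if n < 2:
        return n
    if n < 20:                                 # below a cutoff, stay serial
        return fib(n - 1) + fib(n - 2)
    a, b = parallel_invoke(lambda: fib(n - 1), lambda: fib(n - 2))
    return a + b

print(fib(25))                                 # 75025
\end{verbatim}
The serial cutoff mirrors the point of the \TT{pcall} hint: the system is free to serialize small tasks when parallel execution is not worthwhile.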
\subsection{Executing Fine-Grain Parallel Tasks}
%The underlying XPC microarchitecture is a heterogeneous mix of XPC tiles
%that are specialized for exploiting a broad range of data-parallelism.
%\textbf{Executing Parallel Tasks --}
\textbf{XPC TCL Tile --} An XPC tile with tightly coupled lanes (TCL) is
a data-parallel accelerator specialized for regular data-parallelism. A
simple in-order, single-issue control processor manages an array of
lanes. Instruction fetch and decode are amortized by the control
processor and an issue unit schedules a group of threads to execute in
lock-step on the tightly coupled lanes.
%Control irregularity can cause divergence between threads, serializing
%execution of both paths of the branch. Memory accesses are dynamically
%coalesced by a memory unit by comparing the addresses generated across
%the lanes.
%Unlike GPGPUs, the TCL tile focuses on exploiting intra-warp parallelism
%instead of relying on extreme temporal multithreading to exploit
%inter-warp parallelism.
The TCL tile is inspired by my previous work on accelerating
regular data-parallel applications on GPGPUs by exploiting value
structure~\cite{kim-simt-vstruct-isca2013}.
%Similar techniques can be used to eliminate redundant computation.
%Although the TCL tile will achieve high performance and energy efficiency
%by amortizing the front-end and memory accesses as well as exploiting
%value structure for conventional data parallel tasks, it will struggle
%with the serialization caused by control and memory-access irregularity
%in amorphous data parallel tasks.
\textbf{XPC LCL Tile --} An XPC tile with loosely coupled lanes (LCL) is
an accelerator specialized for more irregular data-parallelism.
The key difference between a TCL and an LCL tile is that an LCL tile
allows the control flow between lanes to be decoupled. This is achieved
by using per-lane instruction buffers that are pre-populated by the
control processor. The control processor still amortizes instruction
fetch and a lane management unit distributes tasks across the lanes. Each
LCL lane is more lightweight than its TCL counterpart since expensive
long-latency functional units and memory ports are shared across all
lanes and the control processor.
%As a result, LCL tiles will likely include more lanes than TCL tiles for
%roughly the same area.
The LCL design is inspired by previous work in our research group on
architectural specialization for inter-iteration loop dependence
patterns~\cite{srinath-xloops-micro2014}.
% The LCL tile can be used as reasonable middle-ground for accelerating
% amorphous data parallelism before resorting to a general-purpose tile
% because of its higher tolerance for control irregularity. However, the
% LCL tile is still better suited for simpler tasks on the lanes and can
% suffer from high memory-access irregularity.
\textbf{XPC CMC Tile --} An XPC tile with co-operative multicores (CMC)
can be used for highly irregular data-parallel applications
that cannot be efficiently accelerated on other XPC tiles. A CMC tile
includes an array of discrete cores connected by a crossbar to facilitate
rapid inter-core communication and includes hardware acceleration for
scheduling fine-grain parallel tasks. The CMC tile can function as a
default tile for the profiling phase when statistics are collected to
determine if the application would be more suitable for a TCL or an LCL
tile.
% in addition to being the optimal tile for highly irregular amorphous
% data parallel tasks which cannot fully utilize the more structured TCL
% or LCL tiles.
% Each CMC core, albeit relatively simple, is akin to a stripped-down
% control processor in a TCL/LCL tile, making it capable of handling high
% control and memory-access irregularity with acceptable performance. The
% tradeoff is that CMC is the least energy efficient tile of the three
% XPC tiles for applications which exhibit more regular amorphous data
% parallelism. The key difference between the CMC tile and a traditional
% multicore is that
\subsection{Scheduling Fine-Grain Parallel Tasks}
%\begin{figure}
% \begin{minipage}[b]{0.53\tw}
% \input{fig-xpc-task-cache}
% \end{minipage}%
% \hfill%
% \begin{minipage}[b]{0.45\tw}
% \input{fig-xpc-task-network}
% \end{minipage}
%\end{figure}
%The fine-grain parallel tasks exposed in software are mapped to the XPC
%tiles in hardware through a software runtime that adaptively schedules
%tasks onto the best suited tiles. This process can be accelerated in
%hardware with a \emph{task cache} and \emph{task distribution network}.
%\textbf{Scheduling Parallel Tasks --}
One way for the software runtime to manage parallel tasks is with
per-thread task queues in memory similar to the Intel TBB runtime
implementation~\cite{reinders-tbb-book2007}.
%Entries in the task queues are composed of a function pointer to the task
%available for parallel execution, as well as the thread ID of the parent
%that spawned this parallel task. A convenient consequence of using
%lambdas is that the arguments and local variables normally allocated on
%the stack are naturally captured to create a closure. This library-based
%approach is similar in spirit to the Intel TBB runtime
%implementation~\cite{reinders-tbb-book2007}. Idle threads can steal tasks
%by dequeuing tasks from the top of another thread's task queue, but a
%thread may always execute any tasks from its own task queue by dequeueing
%from the bottom.
I plan to explore techniques to bias the work stealing algorithm such
that tasks naturally migrate to the most appropriate XPC tile. The
runtime can collect statistics on control and memory-access irregularity
as proxies for performance and energy efficiency.
%Higher irregularity would suggest the task is better suited for more
%general-purpose tiles, whereas highly regular tasks would be better
%suited for data-parallel accelerator tiles.
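One possible (purely hypothetical) shape for such a biased policy is sketched below in Python: an idle tile scans its victims' queues and steals the task whose measured irregularity best matches what the tile tolerates, rather than always taking the head of the first non-empty queue.
\begin{verbatim}
# Hypothetical sketch of biased work stealing (not the proposed XPC runtime):
# tasks carry an irregularity statistic, and each tile prefers tasks whose
# irregularity matches what that tile handles well.
import random
from collections import deque

TILE_PREF = {"TCL": 0.2, "LCL": 0.5, "CMC": 0.9}   # assumed tolerance per tile

class Tile:
    def __init__(self, kind):
        self.kind, self.queue = kind, deque()       # per-tile task queue

    def steal_from(self, victims):
        # Pick the queued task that best matches this tile's preference.
        candidates = [(abs(t["irregularity"] - TILE_PREF[self.kind]), v, t)
                      for v in victims for t in v.queue]
        if not candidates:
            return None
        _, victim, task = min(candidates, key=lambda c: c[0])
        victim.queue.remove(task)
        return task

tiles = [Tile("TCL"), Tile("LCL"), Tile("CMC")]
tiles[2].queue.extend({"id": i, "irregularity": random.random()} for i in range(8))
print(tiles[0].steal_from(tiles[1:]))   # TCL steals the task closest to its preference
\end{verbatim}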
Although the runtime normally generates and distributes tasks through
memory, hardware acceleration can be used to streamline these
processes. For example, a per-tile \emph{task cache} that exploits
intra-tile task locality can be used to store entries of the task queue.
%in a hardware cache to avoid memory accesses. This is based on the
%observation that each thread or tile will almost always need to access
%its own task queue whenever it is ready to execute a new
%task.
Furthermore, a \emph{task distribution network} can be used to
connect task caches in order to facilitate efficient task stealing
between tiles.
%The runtime triggers a broadcast of task steal requests if the tile it is
%running on is idle.
%For scalability, the request is only broadcasted to the tile's direct
%neighbors.
The network also communicates meta-data with each request describing the
type of task for which it is best suited, which can be considered along with
collected statistics to determine whether or not a tile should allow task
stealing. These hardware techniques for storing and distributing tasks are
inspired by my previous work in task distribution for irregular
data-parallel applications on GPGPUs~\cite{kim-hwwl-micro2014}. Similar
approaches to task distribution in traditional multicore systems have
been proposed~\cite{kumar-carbon-isca2007,sanchez-adm-asplos2010}, but
these techniques do not address heterogeneous architectures where a new
dimension of using heuristics to optimally schedule tasks across
different microarchitectural tiles must be considered.
| {
"alphanum_fraction": 0.7935420744,
"avg_line_length": 52.1428571429,
"ext": "tex",
"hexsha": "813bf53bbe9a18dd7442f4bbc0edf8b0eb73855f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f68f30c11388bc9e11913f8e7dcb95696e2cda4e",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "jyk46/fellowship-nvidia-xpc",
"max_forks_repo_path": "src/sec-xpc.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f68f30c11388bc9e11913f8e7dcb95696e2cda4e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "jyk46/fellowship-nvidia-xpc",
"max_issues_repo_path": "src/sec-xpc.tex",
"max_line_length": 74,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f68f30c11388bc9e11913f8e7dcb95696e2cda4e",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "jyk46/fellowship-nvidia-xpc",
"max_stars_repo_path": "src/sec-xpc.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2309,
"size": 10220
} |
%% LyX 1.4.2 created this file. For more info, see http://www.lyx.org/.
%% Do not edit unless you really know what you are doing.
\documentclass[english]{article}
\usepackage[T1]{fontenc}
\usepackage[latin1]{inputenc}
\IfFileExists{url.sty}{\usepackage{url}}
{\newcommand{\url}{\texttt}}
\newcommand{\version}{1.3}
\makeatletter
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% LyX specific LaTeX commands.
\providecommand{\LyX}{L\kern-.1667em\lower.25em\hbox{Y}\kern-.125emX\@}
%% Bold symbol macro for standard LaTeX users
\providecommand{\boldsymbol}[1]{\mbox{\boldmath $#1$}}
%% Because html converters don't know tabularnewline
\providecommand{\tabularnewline}{\\}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands.
\usepackage{metatron}
\renewcommand{\abstractname}{Executive Summary}
\title{LedgerSMB Manual v. \version}
\author{The LedgerSMB Core Team}
\date{\today}
\usepackage{babel}
\makeatother
\begin{document}
\maketitle
Except for the included scripts (which are licensed under the GPL v2 or later),
the following permissive license governs this work.
Copyright 2005-2012 The LedgerSMB Project.
Redistribution and use in source (\LaTeX) and 'compiled' forms (SGML,
HTML, PDF, PostScript, RTF and so forth) with or without modification, are
permitted provided that the following conditions are met:
\begin{enumerate}
\item Redistributions of source code (\LaTeX) must retain the above
copyright notice, this list of conditions and the following disclaimer as the
first lines of this file unmodified.
\item Redistributions in compiled form (converted to
PDF, PostScript, RTF and other formats) must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
\end{enumerate}
THIS DOCUMENTATION IS PROVIDED BY THE LEDGERSMB PROJECT "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE LEDGERSMB PROJECT BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
\tableofcontents{}
\listoffigures
\clearpage
\part{LedgerSMB and Business Processes}
\section{Introduction to LedgerSMB}
\subsection{What is LedgerSMB}
LedgerSMB is an open source financial accounting software program which is
rapidly developing additional business management features. Our goal is to
provide a robust financial management suite for small to midsize businesses.
\subsection{Why LedgerSMB}
\subsubsection{Advantages of LedgerSMB}
\begin{itemize}
\item Flexibility and Central Management
\item Accessibility over the Internet (for some users)
\item Relatively open data format
\item Integration with other tools
\item Excellent accounting options for Linux users
\item Open Source
\item Flexible, open framework that can be extended or modified to fit
your business.
\item Security-conscious development community.
\end{itemize}
\subsubsection{Key Features}
\begin{itemize}
\item Accounts Receivable
\begin{itemize}
\item Track sales by customer
\item Issue Invoices, Statements, Receipts, and more
\item Do job costing and time entry for customer projects
\item Manage sales orders and quotations
\item Ship items from sales orders
\end{itemize}
\item Accounts Payable
\begin{itemize}
\item Track purchases and debts by vendor
\item Issue RFQ's Purchase Orders, etc.
\item Track items received from purchase orders
\end{itemize}
\item Budgeting
\begin{itemize}
\item Track expenditures and income across multiple departments
\item Track all transactions across departments
\end{itemize}
\item Check Printing
\begin{itemize}
\item Customize template for any check form
\end{itemize}
\item General Ledger
\item Inventory Management
\begin{itemize}
\item Track sales and orders of parts
\item Track cost of goods sold using First In/First Out method
\item List all parts below reorder point
\item Track ordering requirements
\item Track, ship, receive, and transfer parts to and from multiple warehouses
\end{itemize}
\item Localization
\begin{itemize}
\item Provide Localized Translations for Part Descriptions
\item Provide Localized Templates for Invoices, Orders, Checks, and more
\item Select language per customer, invoice, order, etc.
\end{itemize}
\item Manufacturing
\begin{itemize}
\item Track cost of goods sold for manufactured goods (assemblies)
\item Create assemblies and stock assemblies, tracking materials on hand
\end{itemize}
\item Multi-company/Multiuser
\begin{itemize}
\item One isolated database per company
\item Users can have localized systems independent of company data set
\item Depending on configuration, users may be granted permission to different
companies separately.
\end{itemize}
\item Point of Sale
\begin{itemize}
\item Run multiple cash registers against main LedgerSMB installation
\item Suitable for retail stores and more
\item Credit card processing via TrustCommerce
\item Supports some POS hardware out of the box including:
\begin{itemize}
\item Logic Controls PD3000 pole displays (serial or parallel)
\item Basic text-based receipt printers
\item Keyboard wedge barcode scanners
\item Keyboard wedge magnetic card readers
\item Printer-attached cash drawers
\end{itemize}
\end{itemize}
\item Price Matrix
\begin{itemize}
\item Track different prices for vendors and customers across the board
\item Provide discounts to groups of customers per item or across the board
\item Store vendors' prices independent of the other last cost in the parts
record
\end{itemize}
\item Reporting
\begin{itemize}
\item Supports all basic financial statements
\item Easily display customer history, sales data, and additional information
\item Open framework allows for ODBC connections to be used to generate
reports using third party reporting tools.
\end{itemize}
\item Tax
\begin{itemize}
\item Supports Retail Sales Tax and Value Added Tax type systems
\item Flexible framework allows one to customize reports to change the tax
reporting framework to meet any local requirement.
\item Group customers and vendors by tax form for easy reporting (1099, EU VAT
reporting, and similar)
\end{itemize}
\item Fixed Assets
\begin{itemize}
\item Group fixed assets for easy depreciation and disposal
\item Straight-line depreciation supported out of the box
\item Framework allows for easy development of production-based and time-based
depreciation methods
\item Track full or partial disposal of assets
\end{itemize}
\end{itemize}
\subsection{Limitations of LedgerSMB}
\begin{itemize}
\item No payroll module (Payroll must be done manually)
\item Some integration limitations
\item Further development/maintenance requires a knowledge of a relatively
broad range of technologies
\end{itemize}
\subsection{System Requirements of LedgerSMB}
\begin{itemize}
\item PostgreSQL 8.3 or higher
\item A CGI-enabled Web Server (for example, Apache)
\item Perl 5.8.x or higher
\item An operating system which supports the above software (usually Linux,
though Windows, MacOS X, etc. do work)
\item \LaTeX{}\ (optional) is required to create PDF or PostScript invoices
\item The following CPAN modules:
\begin{itemize}
\item Data::Dumper
\item Log::Log4perl
\item Locale::Maketext
\item DateTime
\item Locale::Maketext::Lexicon
\item DBI
\item MIME::Base64
\item Digest::MD5
\item HTML::Entities
\item DBD::Pg
\item Math::BigFloat
\item IO::File
\item Encode
\item Locale::Country
\item Locale::Language
\item Time::Local
\item Cwd
\item Config::Std
\item MIME::Lite
\item Template
\item Error
\item CGI::Simple
\item File::MimeInfo
\end{itemize}
and these optional ones:
\begin{itemize}
\item Net::TCLink
\item Parse::RecDescent
\item Template::Plugin::Latex
\item XML::Twig
\item Excel::Template::Plus
\end{itemize}
\end{itemize}
\section{Installing LedgerSMB}
The process of installing LedgerSMB is described in detail
in the INSTALL file which comes with the distribution archive.
In outline, the process is:
\begin{enumerate}
\item Install the base software: web server (Apache), database server (PostgreSQL) and Perl,
from your distribution's package manager or from source. Read the INSTALL file for details
\item Install the Perl module dependencies,
from your distribution's package manager or from CPAN. Read the INSTALL file for details
\item Give the web server access to the ledgersmb directory
\item Edit ./ledgersmb/ledgersmb.conf so that LedgerSMB can access the database and locate the relevant PostgreSQL contrib scripts
\item Initialize a company database with the database setup and upgrade script at http://localhost/ledgersmb/setup.pl
\item Log in with your name (database username), password (database user password) and company (database name)
\end{enumerate}
\section{User Account and Database Administration Basics}
LedgerSMB 1.3 offers a variety of tools for setting up new databases, and most
functionality (aside from creating new databases) is now offered directly within
the main application. LedgerSMB 1.2 users will note that the admin.pl script
is no longer included.
\subsection{Companies and Datasets}
LedgerSMB 1.3 stores data for each company in a separate ``database''. A
database is a PostgreSQL concept for grouping tables, indexes, etc.
To create a database you will need to know a PostgreSQL superuser name and
password. If you do not know this information you can set authentication to
``trust'', then set the password, then switch back to a more reasonable setting after
this process. Please see the PostgreSQL documentation for details.
Each company database must be named. This name is essentially the system
identifier within PostgreSQL for the company's dataset. The name for the
company database can only contain letters, digits and underscores.
Additionally, it must start with a letter. Company database names are
case insensitive, meaning you can't create two separate company databases
called 'Ledgersmb' and 'ledgersmb'.
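These rules are easy to check mechanically; the following snippet (a hypothetical helper, not part of LedgerSMB) accepts or rejects a candidate company database name:
\begin{verbatim}
# Hypothetical check for the company database name rules described above:
# letters, digits and underscores only, and the name must start with a letter.
# Remember that names differing only in case refer to the same company database.
import re

def valid_company_dbname(name):
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z0-9_]*", name))

print(valid_company_dbname("testinc"))       # True
print(valid_company_dbname("1st_company"))   # False: must start with a letter
\end{verbatim}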
One way you can create databases fairly easily is by directing your web browser
to the setup.pl script at your installed ledgersmb directory. So if the
base URL is http://localhost/ledgersmb/, you can access the database setup and
upgrade script at http://localhost/ledgersmb/setup.pl. This is very different
from the approaches taken by LedgerSMB 1.2.x and earlier and by SQL-Ledger:
it provides a wizard that walks you through the process.
An alternative method is the 'prepare-company-database.sh' script contributed by
Erik Huelsmann. This script can be useful in creating and populating databases
from the command line and it offers a reference implementation written in BASH
for how this process is done.
The 'prepare-company-database.sh' script in the tools/ directory will set
up databases to be used for LedgerSMB. The script should be run as 'root'
because it wants to 'su' to the postgres user. Alternatively, if you
know the password of the postgres user, you can run the script as any other
user. You'll be prompted for the password. Additionally, the script creates
a superuser to assign ownership of the created company database to. By
default this user is called 'ledgersmb'. The reason for this choice is that
when removing the ledgersmb user, you'll be told about any unremoved parts
of the database, because the owner of an existing database can't be removed
until that database is itself removed.
The following invocation of the script sets up your first test company,
when invoked as the root user and from the root directory of the LedgerSMB
sources:
\$ ./tools/prepare-company-database.sh --company testinc
The script assumes your PostgreSQL server runs on 'localhost' with
PostgreSQL's default port (5432).
Upon completion, it will have created a company database with the name
'testinc', a user called 'ledgersmb' (password: 'LEDGERSMBINITIALPASSWORD'),
a single user called 'admin' (password: 'admin') and the roles required to
manage authorizations.
Additionally, it will have loaded a minimal list of languages required
to successfully navigate the various screens.
All these can be adjusted using arguments provided to the setup script. See
the output generated by the --help option for a full list of options.
\subsection{How to Create a User}
In the database setup workflow, a simple application user will be created. This
user, by default, only has user management capabilities. Ideally actual work
should be done with accounts which have fewer permissions.
To set up a user, log in with your administrative credentials (created when
initializing the database) and then go to System/Admin Users/Add User. From
here you can create a user and add appropriate permissions.
\subsection{Permissions}
Permissions in LedgerSMB 1.3 are enforced using database roles. These are
functional categories and represent permissions levels needed to do basic tasks.
They are assigned when adding/editing users.
The roles follow a naming convention which allows several LSMB databases to
exist on the same PostgreSQL instance. Each role is named lsmb\_\lbrack
dbname\rbrack\_\_ followed by the role name. Note that two underscores
separate the database and role names. If this naming convention is followed,
then the interface will find the defined groups in the database and display
them along with other permissions.
\subsubsection{List of Roles}
Roles here are listed minus their prefix (lsmb\_$[$database name$]$\_\_, note
the double underscore at the end of the prefix).
\begin{itemize}
\item Contact Management: Customers and Vendors
\begin{description}
\item[contact\_read] Allows the user to read contact information
\item[contact\_create] Allows the user to enter new contact information
\item[contact\_edit] Allows the user to update the contact information
\item[contact\_all] provides permission for all of the above. Member of:
\begin{itemize}
\item contact\_read
\item contact\_create
\item contact\_edit
\end{itemize}
\end{description}
\item Batch Creation and Approval
\begin{description}
\item[batch\_create] Allows the user to create batches
\item[batch\_post] Allows the user to take existing batches and post them
to the books
\item[batch\_list] Allows the user to list batches and vouchers within
a batch. This role also grants access to listing draft
transactions (i.e. non-approved transactions not a
part of a batch). Member of:
\begin{itemize}
\item ar\_transaction\_list
\item ap\_transaction\_list
\end{itemize}
\end{description}
\item AR: Accounts Receivable
\begin{description}
\item[ar\_transaction\_create] Allows user to create transactions. Member
of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[ar\_transaction\_create\_voucher]. Allows a user to create AR
transaction vouchers. Member of:
\begin{itemize}
\item contact\_read
\item batch\_create
\end{itemize}
\item[ar\_invoice\_create] Allows user to create sales invoices. Member
of:
\begin{itemize}
\item ar\_transaction\_create
\end{itemize}
\item[ar\_transaction\_list] Allows user to view transactions. Member Of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[ar\_transaction\_all], all non-voucher permissions above, member of:
\begin{itemize}
\item ar\_transaction\_create
\item ar\_transaction\_list
\end{itemize}
\item[sales\_order\_create] Allows user to create sales order. Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[sales\_quotation\_create] Allows user to create sales quotations.
Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item [sales\_order\_list] Allows user to list sales orders. Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[sales\_quotation\_list] Allows a user to list sales quotations.
Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[ar\_all]: All AR permissions, member of:
\begin{itemize}
\item ar\_voucher\_all
\item ar\_transaction\_all
\item sales\_order\_create
\item sales\_quotation\_create
\item sales\_order\_list
\item sales\_quotation\_list
\end{itemize}
\end{description}
\item AP: Accounts Payable
\begin{description}
\item[ap\_transaction\_create] Allows user to create transactions. Member
of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[ap\_transaction\_create\_voucher]. Allows a user to create AP
transaction vouchers. Member of:
\begin{itemize}
\item contact\_read
\item batch\_create
\end{itemize}
\item[ap\_invoice\_create] Allows user to create vendor invoices. Member
of:
\begin{itemize}
\item ap\_transaction\_create
\end{itemize}
\item[ap\_transaction\_list] Allows user to view transactions. Member Of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[ap\_transaction\_all], all non-voucher permissions above, member of:
\begin{itemize}
\item ap\_transaction\_create
\item ap\_transaction\_list
\end{itemize}
\item[purchase\_order\_create] Allows user to create purchase orders,
Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[rfq\_create] Allows user to create requests for quotations.
Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item [purchase\_order\_list] Allows user to list purchase orders.
Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[rfq\_list] Allows a user to list requests for quotations.
Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[ap\_all]: All AP permissions, member of:
\begin{itemize}
\item ap\_voucher\_all
\item ap\_transaction\_all
\item purchase\_order\_create
\item rfq\_create
\item purchase\_order\_list
\item rfq\_list
\end{itemize}
\end{description}
\item Point of Sale
\begin{description}
\item[pos\_enter] Allows user to enter point of sale transactions
Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[close\_till] Allows a user to close his/her till
\item[list\_all\_open] Allows the user to enter all open transactions
\item[pos\_cashier] Standard Cashier Permissions. Member of:
\begin{itemize}
\item pos\_enter
\item close\_till
\end{itemize}
\item[pos\_all] Full POS permissions. Member of:
\begin{itemize}
\item pos\_enter
\item close\_till
\item list\_all\_open
\end{itemize}
\end{description}
\item Cash Handling
\begin{description}
\item[reconciliation\_enter] Allows the user to enter reconciliation
reports.
\item[reconciliation\_approve] Allows the user to approve/commit
reconciliation reports to the books.
\item[reconciliation\_all] Allows a user to enter and approve
reconciliation reports. Don't use if separation of duties is
required. Member of:
\begin{itemize}
\item reconciliation\_enter
\item reconciliation\_approve
\end{itemize}
\item[payment\_process] Allows a user to enter payments. Member of:
\begin{itemize}
\item ap\_transaction\_list
\end{itemize}
\item[receipt\_process] Allows a user to enter receipts. Member of:
\begin{itemize}
\item ar\_transaction\_list
\end{itemize}
\item[cash\_all] All above cash roles. Member of:
\begin{itemize}
\item reconciliation\_all
\item payment\_process
\item receipt\_process
\end{itemize}
\end{description}
\item Inventory Control
\begin{description}
\item[part\_create] Allows user to create new parts.
\item[part\_edit] Allows user to edit parts
\item[inventory\_reports] Allows user to run inventory reports
\item[pricegroup\_create] Allows user to create pricegroups.
Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[pricegroup\_edit] Allows user to edit pricegroups
Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[assembly\_stock] Allows user to stock assemblies
\item[inventory\_ship] Allows user to ship inventory. Member of:
\begin{itemize}
\item sales\_order\_list
\end{itemize}
\item[inventory\_receive] Allows user to receive inventory. Member of:
\begin{itemize}
\item purchase\_order\_list
\end{itemize}
\item[inventory\_transfer] Allows user to transfer inventory between
warehouses.
\item[warehouse\_create] Allows user to create warehouses.
\item[warehouse\_edit] Allows user to edit warehouses.
\item[inventory\_all] All permissions groups in this section.
Member of:
\begin{itemize}
\item part\_create
\item part\_edit
\item inventory\_reports
\item pricegroup\_create
\item pricegroup\_edit
\item assembly\_stock
\item inventory\_ship
\item inventory\_transfer
\item warehouse\_create
\item warehouse\_edit
\end{itemize}
\end{description}
\item GL: General Ledger and General Journal
\begin{description}
\item[gl\_transaction\_create] Allows a user to create journal entries
or drafts.
\item[gl\_voucher\_create] Allows a user to create GL vouchers and
batches.
\item[gl\_reports] Allows a user to run GL reports, listing all financial
transactions in the database. Member of:
\begin{itemize}
\item ar\_list\_transactions
\item ap\_list\_transactions
\end{itemize}
\item[yearend\_run] Allows a user to run the year-end processes
\item[gl\_all] All GL permissions. Member of:
\begin{itemize}
\item gl\_transaction\_create
\item gl\_voucher\_create
\item gl\_reports
\item yearend\_run
\end{itemize}
\end{description}
\item Project Accounting
\begin{description}
\item[project\_create] Allows a user to create project entries. User must
have contact\_read permission to assign them to customers, however.
\item[project\_edit] Allows a user to edit a project. User must
have contact\_read permission to assign them to customers, however.
\item[project\_timecard\_add] Allows user to add time card. Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[project\_timecard\_list] Allows a user to list timecards. Necessary
for order generation. Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[project\_order\_generate] Allows a user to generate orders from
time cards. Member of:
\begin{itemize}
\item project\_timecard\_list
\item orders\_generate
\end{itemize}
\end{description}
\item Order Generation, Consolidation, and Management
\begin{description}
\item[orders\_generate] Allows a user to generate orders. Member of:
\begin{itemize}
\item contact\_read
\end{itemize}
\item[orders\_sales\_to\_purchase] Allows creation of purchase orders
from sales orders. Member of:
\begin{itemize}
\item orders\_generate
\end{itemize}
\item[orders\_purchase\_consolidate] Allows the user to consolidate
purchase orders. Member of:
\begin{itemize}
\item orders\_generate
\end{itemize}
\item[orders\_sales\_consolidate] Allows user to consolidate sales
orders. Member of:
\begin{itemize}
\item orders\_generate
\end{itemize}
\item[orders\_manage] Allows full management of orders. Member of:
\begin{itemize}
\item project\_order\_generate
\item orders\_sales\_to\_purchase
\item orders\_purchase\_consolidate
\item orders\_sales\_consolidate
\end{itemize}
\end{description}
\item Financial Reports
\begin{description}
\item[financial\_reports] Allows a user to run financial reports.
Member of:
\begin{itemize}
\item gl\_reports
\end{itemize}
\end{description}
\item Batch Printing
\begin{description}
\item[print\_jobs\_list] Allows the user to list print jobs.
\item[print\_jobs] Allows user to print the jobs
Member of:
\begin{itemize}
\item print\_jobs\_list
\end{itemize}
\end{description}
\item System Administration
\begin{description}
\item[system\_settings\_list] Allows the user to list system settings.
\item[system\_settings\_change] Allows user to change system settings.
Member of:
\begin{itemize}
\item system\_settings\_list
\end{itemize}
\item[taxes\_set] Allows setting of tax rates and order.
\item[account\_create] Allows creation of accounts.
\item[account\_edit] Allows one to edit accounts.
\item[auditor] Allows one to access audit trails.
\item[audit\_trail\_maintenance] Allows one to truncate audit trails.
\item[gifi\_create] Allows one to add GIFI entries.
\item[gifi\_edit] Allows one to edit GIFI entries.
\item[account\_all] A general group for accounts management. Member of:
\begin{itemize}
\item account\_create
\item account\_edit
\item taxes\_set
\item gifi\_create
\item gifi\_edit
\end{itemize}
\item[department\_create] Allow the user to create departments.
\item[department\_edit] Allows user to edit departments.
\item[department\_all] Create/Edit departments. Member of:
\begin{itemize}
\item department\_create
\item department\_edit
\end{itemize}
\item[business\_type\_create] Allow the user to create business types.
\item[business\_type\_edit] Allows user to edit business types.
\item[business\_type\_all] Create/Edit business types. Member of:
\begin{itemize}
\item business\_type\_create
\item business\_type\_edit
\end{itemize}
\item[sic\_create] Allow the user to create SIC entries.
\item[sic\_edit] Allows user to edit SIC entries.
\item[sic\_all] Create/Edit SIC entries. Member of:
\begin{itemize}
\item sic\_create
\item sic\_edit
\end{itemize}
\item[tax\_form\_save] Allow the user to save the tax form entries.
\item[template\_edit] Allow the user to save new templates. This
requires sufficient file system permissions.
\item[users\_manage] Allows an admin to create, edit, or remove users.
Member of:
\begin{itemize}
\item contact\_create
\item contact\_edit
\end{itemize}
\item[system\_admin] General role for accounting system administrators.
Member of:
\begin{itemize}
\item system\_settings\_change
\item account\_all
\item department\_all
\item business\_type\_all
\item sic\_all
\item tax\_form\_save
\item template\_edit
\item users\_manage
\end{itemize}
\end{description}
\item Manual Translation
\begin{description}
\item[language\_create] Allow user to create languages
\item[language\_edit] Allow user to update language entries
\item[part\_translation\_create] Allow user to create translations of
parts to other languages.
\item[project\_translation\_create] Allow user to create translations of
project descriptions.
\item[manual\_translation\_all] Full management of manual translations.
Member of:
\begin{itemize}
\item language\_create
\item language\_edit
\item part\_translation\_create
\item project\_translation\_create
\end{itemize}
\end{description}
\end{itemize}
\subsection{Creating Custom Groups}
Because LedgerSMB uses database roles and naming conventions to manage
permissions, it is possible to create additional roles and use them to manage
groups. There is not currently a way of doing this from the front-end, but as
long as you follow the conventions, roles you create can be assigned to users
through the front-end. One can also create super-groups which the front-end
cannot see but which assign permissions to groups of users across multiple
databases. This section will cover both of these approaches.
\subsubsection{Naming Conventions}
In PostgreSQL, roles are global to the instance of the server. This means that
a single role can exist and be granted permissions on multiple databases. We
therefore have to be careful to avoid naming collisions which could have the
effect of granting permissions unintentionally to individuals who are not
intended to be application users.
The overall role consists of a prefix and a name. The prefix starts with lsmb\_
to identify the role as one created by this application, and then typically the
name of the database. This convention can be overridden by setting this in the
defaults table (the setting is named 'role\_prefix') but this is typically done
only when renaming databases. After the prefix follow {\bf two} underscores.
So by default a role for LedgerSMB in a company named mtech\_test would start
with lsmb\_mtech\_test\_\_. To create a role for LedgerSMB all we have to do is
create one in the database with these conventions.
\subsubsection{Example}
Suppose mtech\_test is a database for a financial services company
and most users must have appropriate permissions to enter batches and the like,
but not to approve them. A role could be created like this:
\begin{verbatim}
CREATE ROLE lsmb_mtech_test__user;
GRANT lsmb_mtech_test__all_ap,
lsmb_mtech_test__create_batch,
lsmb_mtech_test__read_contact,
lsmb_mtech_test__list_batches,
lsmb_mtech_test__create_contact,
lsmb_mtech_test__all_gl,
lsmb_mtech_test__process_payment
TO lsmb_mtech_test__user;
\end{verbatim}
Then when going to the user interface to add roles, you will see an entry that
says "user" and this can be granted to the user.
\section{Contact Management}
Every business does business with other persons, corporate or natural. They may
sell goods and services to these persons or they may purchase goods and
services from these persons. With a few exceptions, those to whom goods and
services are sold are tracked as customers, and those from whom goods and
services are purchased are tracked as vendors. The formal distinction is
that vendors are entities that the business pays, while customers pay the
business. Here are some key terms:
\begin{description}
\item[Credit Account] An agreement between your business and another person
or business about payment for the delivery of goods and services on an
ongoing basis. These credit accounts define customer and vendor
relationships.
\item[Customer] Another person or business who pays your business money.
\item[Vendor] Another person or business you pay money to.
\end{description}
Prior versions of LedgerSMB required that customers and vendors be entirely
separate. In 1.3, however, a given entity can have multiple agreements with the
business, some being as a customer, and some being as a vendor.
All customers and vendors are currently tracked as companies, while employees
are tracked as natural persons but this will be
changing in future versions so that natural persons can be tracked as customers
and vendors too.
Each contact must be attached to a country for tax reporting purposes. Credit
accounts can then be attached to a tax form for that country (for 1099 reporting
in the US or EU VAT reporting).
\subsection{Addresses}
Each contact, whether an employee, customer, or vendor, can have one or more
addresses attached, but only one can be a billing address.
\subsection{Contact Info}
Each contact can have any number of contact info records attached. These convey
phone, email, instant messenger, etc. info for the individual. New types of
records can be generated easily by adding them to the contact\_class table.
\subsection{Bank Accounts}
Each contact can have any number of bank accounts attached, but only one can be
the primary account for a given credit account. There are only two fields here.
The first (BIC, or Bank Identifier Code) represents the bank's identifier,
such as an ABA routing number or a SWIFT code, while the second (IBAN)
represents the individual's account number.
\subsection{Notes}
In 1.3, any number of read-only notes can be attached either to an entity (in
which case they show up for all credit accounts for that entity), or a credit
account, in which case they show up only when the relevant credit account is
selected.
\section{Chart of Accounts}
The Chart of Accounts provides a basic overview of the logical structure
of the accounting program. One can customize this chart to allow for
tracking of different sorts of information.
\subsection{Introduction to Double Entry Bookkeeping}
In order to set up your chart of accounts in LedgerSMB you will need to
understand a bit about double entry bookkeeping. This section provides a
brief overview of the essential concepts. There is a list of references
for further reading at the end.
\subsubsection{Business Entity}
You always want to keep your personal expenses and income separate from that of
the business or you will not be able to tell how much money it is making (if
any). For the same reason you will want to track how much money
you put into and take out of the business, so you should set up a
completely separate set of records for it and treat it almost as if it had
a life of its own.
\subsubsection{Double Entry}
Examples:
\begin{itemize}
\item When you buy you pay money and receive goods.
\item When you sell you get money and give goods.
\item When you borrow you get money and give a promise to pay it back.
\item When you lend you give money and get a promise to pay it back.
\item When you sell on credit you give goods and get a promise to pay.
\item When you buy on credit you give a promise to pay and get goods.
\end{itemize}
You need to record both sides of each transaction: thus double entry.
Furthermore, you want to organize your entries, recording those having to
do with money in one place, value of goods bought and sold in another,
money owed in yet another, etc. Hence you create accounts, and record each
half of each transaction in an appropriate account. Of course, you won't
have to actually record the amount in more than one place yourself: the
program takes care of that.
\subsubsection{Accounts}
\begin{description}
\item[Assets] Money and anything that can be converted into money without
reducing the net equity of the business. Assets include money owed, money held,
goods available for sale, property, and the like.
\item[Liabilities] Debts owed by the business such as bank loans and unpaid
bills.
\item[Equity or Capital] What would be left for the owner if all the assets were
converted to money and all the liabilities paid off (``Share Capital'' on the
LedgerSMB default chart of accounts: not to be confused with ``Capital Assets'').
\item[Revenue] Income from business activity; it increases Equity.
\item[Expense] The light bill, the cost of goods sold, etc.; it decreases Equity.
\end{description}
All other accounts are subdivisions of these. The relationship between the
top-level accounts is often stated in the form of the Accounting Equation
(don't worry: you won't have to solve it):
\[
\text{Assets} = \text{Liabilities} + \text{Equity} + (\text{Revenue} - \text{Expenses})
\]
You won't actually use this equation while doing your bookkeeping, but it's
a useful tool for understanding how the system works.
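As a purely illustrative check with made-up numbers: a business holding
\$10,000 of assets and owing \$4,000, with \$5,000 of owner investment,
\$3,000 of revenue, and \$2,000 of expenses, balances as
\[
\$10{,}000 = \$4{,}000 + \$5{,}000 + (\$3{,}000 - \$2{,}000).
\]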
\subsubsection{Debits and Credits}
The words "Debit" and "Credit" come from Latin roots. Debit is related to our
word "debt" while credit can indicate a loan or something which edifies an
individual or business. The same applies to double entry accounting as it
involves equity accounts. Debts debit equity, moneys owed to the business
credit the business. Credits to equity accounts make the business more valuable
while debits make it less. Thus when you invest money in your business you are
crediting that business (in terms of equity), and when you draw money, perhaps
to pay yourself, you are debiting that business.
Double entry accounting systems grew out of single entry ones. The goal was to
create a system with inherent checks against human error. Consequently,
accounts and transactions are arranged such that debits across all accounts
always equal credits across all accounts.
If you invest money in your business, that credits an equity account, and the
other side of the transaction must therefore be a debit. Because the other side
of the transaction is an asset account (for example a bank account), it is
debited. Similarly, as liability accounts increase, the equity of the business
decreases. Consequently, liabilities increase with credits. Income and expense
accounts are often the flip sides of transactions involving assets and
liabilities and represent changes in equity. Therefore they follow the same
rules as equity accounts.
\begin{itemize}
\item Debits increase assets
\item Debits increase expense
\item Credits increase liabilities
\item Credits increase capital
\item Credits increase revenue
\end{itemize}
Example:
You go to the bank and make a deposit. The teller tells you that he is
going to credit your account. This is correct: your account is money the
bank owes you and so is a liability from their point of view. Your deposit
increased this liability and so they will credit it. They will make an
equal debit to their cash account. Back in your own books, you will debit your
bank deposits account because you have increased that asset, and credit cash
on hand because you have decreased it.
\subsubsection{Accrual}
Early accounting systems were usually run on a cash basis. Money owed was
generally not considered to affect the financial health of a company, so
expenses were posted when paid, as was income.
The problem with this approach is that it becomes very difficult or impossible
to truly understand the exact nature of the financial health of a business. One
cannot get the full picture of the financial health of a business because
outstanding debts are not considered. Furthermore, this does not allow for
revenue to be tied to cost effectively, so it becomes difficult to assess how
profitable a given activity truly is.
To solve this problem, accrual-based systems were designed. The basic principle
is that income and expense should be posted as they are incurred, or accrued.
This allows one to track income relative to expense for specific projects or
operations, and make better decisions about which activities will help one
maximize profitability.
To show how these systems differ, imagine that you bill a customer for time and
materials for a project you have just completed. The customer pays the bill
after 30 days. In a cash based system, you would post the income at the time
when the customer pays, while in an accrual system, the income is posted at the
time when the project is completed.
\subsubsection{Separation of Duties}
There are two important reasons not to trust accounting staff too much regarding
the quality of data that is entered into the system. Human error does occur,
and a second set of eyes can help reduce that error considerably.
A second important reason to avoid trusting accounting staff too much is that
those with access to financial data are in a position to steal money from the
business. All too often, this actually happens. Separation of duties is the
standard solution to this problem.
Separation of duties then refers to the process of separating parts of the
workflow such that one person's work must be reviewed and approved by someone
else. For example, a bookkeeper might enter transactions and these might later
be reviewed by a company accountant or executive. This cuts down both on
errors (the transaction is not on the books until it is approved), and on the
possibility of embezzlement.
Typically, the way duties are separated will depend on the specific
concerns of the company. If fraud is the primary concern, all transactions will
be required to go through approval and nobody will ever be allowed to approve
their own transactions. If fraud is not a concern, then typically transactions
will be entered, stored, and later reviewed/approved by someone different, but
allowances may be made for someone to review and approve the transactions
he or she entered. This latter arrangement doesn't strictly enforce separation
of duties, but encourages it nonetheless.
By default, LedgerSMB is set up not to strictly enforce the separation of
duties. This can be changed by adding a database constraint to ensure that
batches and drafts cannot be approved by the same user that enters them.
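As a sketch of the kind of constraint mentioned above, and assuming (purely
for illustration) a batch table with created\_by and approved\_by columns
identifying the entering and approving users, a check constraint could refuse
self-approval. The table and column names here are assumptions; verify them
against your schema before applying anything like this.
\begin{verbatim}
-- Illustrative only: table and column names are assumed.
ALTER TABLE batch
  ADD CONSTRAINT batch_no_self_approval
  CHECK (approved_by IS NULL OR approved_by <> created_by);
\end{verbatim}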
In the age of computers, separation of duties finds one more important
application: it allows review and approval by a human being of automatically
entered transactions. This allows the accounting department to double-check
numbers before they are posted to the books and thus avoid posting incorrect
numbers (for example, due to a software bug in custom code).
Unapproved transactions may be deleted as they are not full-fledged transactions
yet. Approved transactions should be reversed rather than deleted.
Separation of duties is not available yet for sales/vendor invoice documents,
but something similar can be handled by feeding them through the order entry
workflow (see \ref{oe}).
\subsubsection{References}
\url{http://www.accounting-and-bookkeeping-tips.com/learning-accounting/accounting-basics-credit.htm}\\
Discussion of debits and credits as well as links to other accounting subjects.\\
\noindent \url{http://www.computer-consulting.com/accttips.htm}\\
Discussion of double entry bookkeeping.\\
\noindent \url{http://www.minnesota.com/~tom/sql-ledger/howtos/}\\
A short glossary, some links, and a FAQ (which makes the "credit=negative
number" error). The FAQ focuses on SQL-Ledger, LedgerSMB's ancestor.\\
\noindent \url{http://bitscafe.com/pub2/etp/sql-ledger-notes\#expenses}\\
Some notes on using SQL-Ledger (LedgerSMB's ancestor).\\
\noindent \url{http://en.wikipedia.org/wiki/List\_of\_accounting\_topics}\\
Wikipedia articles on accounting.\\
\noindent \url{http://www.bized.ac.uk/learn/accounting/financial/index.htm}\\
Basic accounting tutorial.\\
\noindent \url{http://www.asset-analysis.com/glossary/glo\_index.html}\\
Financial dictionary and glossary.\\
\noindent \url{http://www.geocities.com/chapleaucree/educational/FinanceHandbook.html}\\
Financial glossary.\\
\noindent \url{http://www.quickmba.com/accounting/fin/}\\
Explanation of fundamentals of accounting, including good discussions
of debits and credits and of double-entry.
\subsection{General Guidelines on Numbering Accounts}
In general, most drop-down boxes in LedgerSMB order the accounts
by account number. Therefore, by setting appropriate account numbers,
one can control which accounts appear first and are offered by default.
A second consideration is to try to keep things under each heading
appropriate to that heading. Thus setting an account number for a
bank loan account in the assets category is not generally advisable.
If in doubt, review a number of bundled chart of accounts templates to see what
sorts of numbering schemes are used.
\subsection{Adding/Modifying Accounts}
These features are listed under System-\textgreater Chart of Accounts.
One can list the accounts and click on the account number to modify
them or click on the \char`\"{}add account\char`\"{} option to create
new accounts.
\begin{itemize}
\item Headings are just broad categories and do not store values themselves,
while accounts are used to store the transactional information.
\item One cannot have an account that is a summary account (like AR)
and that also serves another function.
\item GIFI is mostly of interest to Canadian customers but it can be used
to create reports of account hierarchies.
\end{itemize}
\subsection{Listing Account Balances and Transactions}
One can list the account balances via the Reports-\textgreater Chart
of Accounts report. Clicking on the account number will provide a
ledger for that account.
\section{Administration}
This section covers other (non-Chart of Accounts) aspects to the
setup of the LedgerSMB accounting package. These are generally accessed
in the System submenu.
\subsection{Taxes, Defaults, and Preferences}
Since LedgerSMB 1.2, sales tax has been modular, allowing different tax
accounts to be governed by different rules for calculating taxes (although only
one such module is supplied with LedgerSMB to date). This allows one to
install different tax modules and then
select which taxes are applied by which programming modules. The sales tax
module has access to everything on the submitted form, so it is able to make
complex determinations about what is taxable based on arbitrary criteria.
The tax rules drop-down box allows one to select any installed tax module
(LedgerSMB 1.3 ships only with the simple module), while the ordering is an
integer which causes a given tax to be calculated after any taxes with lower
values in this box. This allows for compounding of sales tax (for example,
where PST is calculated on a total that already includes GST).
As of 1.3.16, new APIs have been added to allow one to pass information to tax
modules about minimum and maximum taxable values. The Simple tax module applies
these values to the subtotal of the invoice.
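To illustrate the compounding described above with purely made-up rates:
suppose a 5\% tax runs first and an 8\% tax has a higher ordering value, so
that it is calculated on the tax-inclusive total. On a \$100.00 invoice the
first tax is \$5.00 and the second is
\[
8\% \times (\$100.00 + \$5.00) = \$8.40,
\]
for an invoice total of \$113.40.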
\subsubsection{Adding A Sales Tax Account}
Sales Tax is collected on behalf of a state or national government
by the individual store. Thus a sales tax account is a liability:
it represents money owed by the business to the government.
To add a sales tax account, create an account in the Chart of Accounts
as a liability account and check all of the ``tax'' checkboxes.
Once this account is created, one can set the tax rate.
\subsubsection{Setting a Sales Tax Amount}
Go to System-\textgreater Defaults and the tax account will be listed
near the bottom of the page. The rate can be set there.
\subsubsection{Default Account Setup}
These accounts are the default accounts for part creation and foreign
exchange tracking.
\subsubsection{Currency Setup}
The US chart of accounts templates list this as USD:CAD:EUR. One can add other
currencies here, such as IDR (Indonesian Rupiah). Currencies are separated
by colons.
\subsubsection{Sequence Settings}
These sequences are used to generate the identifiers for quotations,
invoices, and the like. If an identifier is not entered manually, the next
number in the sequence will be used.
A common application is to set invoices, etc. to start at 1000 in
order to hide the number of issued invoices from a customer.
Leading zeros are preserved. Other special values which can be embedded using
$<$?lsmb ?$>$ tags include the following (an example follows the list):
\begin{description}
\item[DATE] expands to the current date
\item[YYMMDD] expands to a six-digit version of the date. The components of
this date can be re-arranged in any order, so MMDDYY, DDMMYY,
or even just MMYY are all options.
\item[NAME] expands to the name of the customer or vendor
\item[BUSINESS] expands to the type of business assigned to the customer or
vendor.
\item[DESCRIPTION] expands to the description of the part. Valid only for parts.
\item[ITEM] expands to the item field. Valid only for parts.
\item[PERISCOPE] expands to the partsgroup. Valid only for parts.
\item[PHONE] expands to the telephone number for customers and vendors.
\end{description}
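For example, a sales invoice sequence set to something like
\begin{verbatim}
INV-<?lsmb YYMMDD ?>-1000
\end{verbatim}
would embed the current date in each generated invoice number while the
numeric portion increments; the exact output depends on the date and on how
far the sequence has advanced, so treat this pattern as illustrative only.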
\subsection{Audit Control}
Auditability is a core concern of the architects of any accounting
system. It ensures that any modification to the accounting information
leaves a trail which can be followed to determine the nature of the
change. Audits can help ensure that the data in the accounting system
is meaningful and accurate, and that no foul play (such as embezzlement)
is occurring.
\subsubsection{Explaining transaction reversal}
In paper accounting systems, it was necessary to have a means to authoritatively
track corrections of mistakes. The means by which this was done was
known as ``transaction reversal.''
When a mistake was made, one would reverse the erroneous transaction
and then enter it correctly. For example, let us say that an office
was renting space for \$300 per month but inadvertently
entered it as a \$200 expense.
The original transaction would be:
\begin{tabular}{l|r|r}
Account &
Debit &
Credit \tabularnewline
\hline
5760 Rent &
\$200 &
\tabularnewline
2100 Accounts Payable &
&
\$200\tabularnewline
\end{tabular}
The reversal would be:
\begin{tabular}{l|r|r}
Account &
Debit &
Credit \tabularnewline
\hline
5760 Rent &
&
\$200\tabularnewline
2100 Accounts Payable &
\$200 &
\tabularnewline
\end{tabular}
This would be followed by re-entering the rent data with the correct
numbers. This was meant to ensure that one did not erase data from
the accounting books (and as such that erasing data would be a sign
of foul play).
LedgerSMB has a capability to require such reversals if the business
deems this to be necessary. When this option is enabled, existing
transactions cannot be modified and one will need to post reversing
transactions to void existing transactions before posting corrected
ones.
Most accountants prefer this means to other audit trails because it
is well proven and understood by them.
\subsubsection{Close books option}
You cannot alter a transaction that was entered before the closing date.
\subsubsection{Audit Trails}
This option stores additional information in the database to help
auditors trace individual transactions. The information stored, however,
is limited and it is intended to be supplemental to other auditing
facilities.
The information added includes which table stored the record, which
employee entered the information, which form was used, and what the
action was. No direct financial information is included.
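For example, an auditor with direct database access could list recent audit
trail rows with a query along the following lines. The audittrail table and
column names shown here are assumptions carried over from SQL-Ledger-derived
schemas, so confirm them against your own database before relying on this.
\begin{verbatim}
-- Illustrative query; table and column names are assumptions.
SELECT transdate, tablename, formname, action, person_id
  FROM audittrail
 ORDER BY transdate DESC
 LIMIT 50;
\end{verbatim}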
\subsection{Departments}
Departments are logical divisions of a business. They allow for budgets
to be prepared for the individual department as well as the business
as a whole. This allows larger businesses to use LedgerSMB to meet
their needs.
\subsubsection{Cost vs. Profit Centers}
In general business units are divided into cost and profit centers.
Cost centers are generally regarded as business units where the business
expects to lose money and profit centers are where they expect to
gain money. For example, the legal department in most companies is
a cost center.
One of the serious misunderstandings people run up against is that
LedgerSMB tends to more narrowly define cost and profit centers than
most businesses do. In LedgerSMB a cost center is any department
of the business that does not issue AR transactions. Although many
businesses may have cost centers (like technical support) where customer
fees may subsidize the cost of providing the service, in LedgerSMB,
these are profit centers.
LedgerSMB will not allow cost centers to be associated with AR transactions.
So if you want this functionality, you must create the department
as a profit center.
\subsection{Warehouses}
LedgerSMB has the ability to track inventory by warehouse. Inventory
items can be moved between warehouses, and shipped from any warehouse
where the item is in stock. We will explore this concept more later.
\subsection{Languages}
Languages allow for goods and services to be translated so that one
can maintain offices in different countries and allow for different
goods and service descriptions to be translated to different languages
for localization purposes.
\subsection{Types of Businesses}
One can create types of businesses and then give them discounts across
the board. For example, one might give a firm that uses one's services
as a subcontractor a 10\% discount or more.
\subsection{Misc.}
\subsubsection{GIFI}
GIFI is a requirement for Canadian customers. This feature allows
one to link accounts with Canadian tax codes to simplify the reporting
process. Some European countries now use a similar system.
People that don't otherwise have a use for GIFI can use it to create reports
which agregate accounts together.
\subsubsection{SIC}
Standard Industrial Classification is a way of tracking the type of
business that a vendor or customer is in. For example, an accountant
would have an SIC of 8721 while a graphic design firm would have an
SIC of 7336. The classification is hierarchical so one could use this
field for custom reporting and marketing purposes.
\subsubsection{Overview of Template Editing}
The templates for invoices, orders, and the like can be edited from
within LedgerSMB. The submenus within the System submenu such as
HTML Templates, Text Templates and \LaTeX{} templates provide access
to this functionality.
\subsubsection{Year-end}
Although the Year-end functionality in LedgerSMB is very useful,
it does not entirely make the process simple and painless. One must
still manually enter adjustments prior to closing the books. The extent
to which these adjustments are necessary for any given business is
a matter best discussed with an accountant.
The standard way books are normally closed at the end of the year
is by moving all adjusted\footnote{Adjustments would be entered via the General
Ledger. The exact process is beyond the scope of this document, however.} income
and expenses to an equity account usually called `Retained
Earnings.' Assets and liabilities are not moved. Equity drawing/dividend
accounts are also moved, but the investment accounts are not. The
reasoning behind this process is that one wants a permanent record
of the amount invested in a business, but any dividends ought not
to count against their recipients when new investors are brought on
board.
LedgerSMB automatically moves all income and expense into the specified
year-end/retained earnings account. It does not move the drawing account
(this must be done manually), nor does it automate the process of
making adjustments.
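As a purely illustrative example with made-up figures: if income accounts
total \$50,000 and expense accounts total \$30,000 for the year, closing the
books zeroes those accounts out and credits
\[
\$50{,}000 - \$30{,}000 = \$20{,}000
\]
to Retained Earnings (a net loss would be debited instead).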
Contrary to its name, this function can close the books at any time,
though this would likely be of limited use.
Once the books are closed, no transactions can be entered into the closed
period. Additionally the year end routines cannot be run if there are
unapproved transactions in a period to be closed.
\subsection{Options in the ledger-smb.conf}
The ledger-smb.conf configures the software by assigning site-wide
variables. Most of these should be left alone unless one knows what
one is doing. However, on some systems some options might need to
be changed, so all options are presented here for reference:
\begin{description}
\item[auth] is the form of authentication used. If in doubt, use `DB' auth.
\item[decimal\_places] Number of decimal places for money.
\item[templates] is the directory where the templates are stored.
\item[sendmail] is the command to use to send a message. It must read the
email from standard input.
\item[language] allows one to set the language for the login screen and
admin page.
\item[latex] tells LedgerSMB whether \LaTeX{} is installed. \LaTeX{} is
required for generating Postscript and PDF invoices and the like.
\item[Environmental variables] can be set here too. One
can add paths for searching for \LaTeX{}, etc.
\item[Printers] section can be used to set a hash table of printers for the software.
The primary example is:
\begin{verbatim}
[printers]
Default = lpr
Color = lpr -PEpson
\end{verbatim}
However, this can use any program that can accept print documents
(in Postscript) from standard input, so there are many more possibilities.
\item[database] provides connection parameters for the database, typically the
host and port, but also the location of the contrib scripts (needed for the
setup.pl), and the default namespace.
\end{description}
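The following is an illustrative sketch of such a file, based only on the
options described above; the exact section layout and option names in the
ledger-smb.conf shipped with your release may differ, so use that file as the
authoritative reference.
\begin{verbatim}
# Illustrative sketch only; check the shipped ledger-smb.conf.
auth = DB
decimal_places = 2
templates = templates
sendmail = /usr/sbin/sendmail -t
latex = 1

[printers]
Default = lpr
Color = lpr -PEpson

[database]
host = localhost
port = 5432
\end{verbatim}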
\section{Goods and Services}
The Goods and Services module will focus on the definition of goods
and services and the related accounting concepts.
\subsection{Basic Terms}
\begin{description}
\item [{COGS}] Cost of Goods Sold. When an item is sold, the expense
of its purchase is accrued and attached to the income from the sale.
\item [{List Price}] The recommended retail price.
\item [{Markup}] The percentage increase applied to the last
cost to get the sell price (see the example following this list).
\item [{ROP}] The re-order point. Items with fewer units in stock than this
will show up on short reports.
\item [{Sell Price}] The price at which the item is sold.
\end{description}
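As an illustration of the markup calculation referenced in the list above,
the sell price is derived from the last cost as
\[
\text{Sell Price} = \text{Last Cost} \times \left(1 + \frac{\text{Markup}}{100}\right),
\]
so a last cost of \$40.00 with a 25\% markup gives a sell price of \$50.00.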
\subsection{The Price Matrix}
It is possible to set different prices for different groups of customers,
or for different customers individually. Similarly, one can track
different prices from different vendors along with the required lead
time for an order.
\subsection{Pricegroups}
Pricegroups are used to help determine the discount a given customer
may have.
\subsection{Groups}
Groups represent a way of categorizing POS items for a touchscreen
environment. This feature is not fully functional yet, but is sufficiently
complete that, with some stylesheet changes, it could be made to work.
\subsection{Labor/Overhead}
Labor/overhead is usually used for tracking manufacturing expenses.
It is not directly billed to a customer. It is associated with an
expense/Cost of Goods Sold (COGS) account.
\subsection{Services}
Services include any labor that is billed directly to the customer.
It is associated with an expense/COGS account and an income account.
Services can be associated with sales tax.
\subsubsection{Shipping and Handling as a Service}
One approach to dealing with shipping and handling is to add it as
a service. Create a service called ``Shipping and Handling''
with a sell price of \$1 per unit and a 0\% markup. Bill it at \$1 per
unit. This allows one to add the exact amount of shipping and handling
as necessary.
\subsection{Parts}
A part is any single item you might purchase and either might resell
or use in manufacturing an assembly. It is linked to an expense/COGS
account, an income account, and an inventory account. Parts can be
associated with sales tax.
\subsection{Assemblies and Manufacturing}
Manufacturers order parts but they sell the products of their efforts.
LedgerSMB supports manufacturing using the concept of assemblies.
An assembly is any product which is manufactured on site. It consists
of a selection of parts, services, and/or labor and overhead. Assemblies
are treated as parts in most other regards.
However, one cannot order assemblies from vendors. One must instead
order the components and then stock the assemblies once they are manufactured.
\subsubsection{Stocking Assemblies}
One stocks assemblies in the Stock Assembly entry on the Goods and
Services submenu. When an assembly is stocked the inventory is adjusted
properly.
The Check Inventory option will cause LedgerSMB to refuse to stock
an assembly if doing so would drop any required part below its
reorder point.
\subsection{Reporting}
\subsubsection{All Items and Parts Reports}
The All Items report provides a unified view of assemblies, parts, services,
and labor for the company, while the Parts report confines it to parts.
Types of reports are:
\begin{description}
\item [{Active}] Lists all items not marked as obsolete.
\item [{On Hand}] Lists current inventory.
\item [{Short}] Lists all items which are stocked below their ROP.
\item [{Obsolete}] Lists all items which are marked as obsolete.
\item [{Orphaned}] Lists all items which have never had a transaction associated
with them.
\end{description}
One can also list these goods by invoice, order, or quotation.
For best results, it is a good idea to enter some AR and AP data before
running these reports.
\subsubsection{Requirements}
This report is designed to assist managers in determining the quantities
of goods to order and/or stock. It compares the quantity on hand with
the activity in a given time frame and provides a list of goods which
need to be ordered and the relevant quantity.
\subsubsection{Services and Labor}
This is similar to the Parts and All Items menu but only supports
Active, Obsolete, and Orphaned reports.
\subsubsection{Assemblies}
This is similar to the Parts and All Items reports but it also provides
the ability to list the individual items in each assembly.
AP Invoices, Purchase Orders, and RFQ's are not available on this
report.
\subsubsection{Groups and Pricegroups}
These reports provide a simple interface for locating groups and pricegroups.
The report types are similar to what they are for services.
\subsection{Translations}
One can add translations so that they show up in the customer's native
language in the issued invoice.
To issue translations, one must have languages defined. One can then
add translations to descriptions and part groups.
\subsection{How Cost of Goods Sold is tracked}
Cost of Goods Sold is tracked on a First-In, First-out (FIFO) basis.
When a part is purchased, its cost is recorded in the database. The
cost of the item is then added to the inventory asset account. When
the good is sold, the cost of the item is moved to the cost of goods
sold account.
This means that one must actually provide invoices for all goods entered
at their actual cost. If one enters in \$0 for the cost, the cost
of goods sold will also be \$0 when the item is sold. We will cover
this entire process in more depth after we cover the AP and AR units
below.
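To make the FIFO flow concrete with made-up numbers: suppose 10 units of a
part are purchased at \$5.00 each and later 10 more at \$6.00 each. Selling 15
units moves
\[
10 \times \$5.00 + 5 \times \$6.00 = \$80.00
\]
from the inventory asset account to Cost of Goods Sold, leaving 5 units valued
at \$6.00 each (\$30.00) in inventory.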
\section{Transaction Approval}
With the exception of Sales/Vendor Invoices (with inventory control), any
financial transaction entered by default must be approved before it shows up in
financial reports. Because there are two ways these can be set up, there are
two possibly relevant workflows.
For sales/vendor invoices where goods and services are tracked (as distinct
from AR/AP transactions which only track amounts), the separation of duties
interface is not complete. Here, you should use orders for initial entry, and
convert these to invoices.
\subsection{Batches and Vouchers}
Larger businesses often need to enter batches of transactions which may need
to be approved or rolled back together. Batches are thus a generic
``container'' for vouchers. A given batch can have AR, AP, payment, receipt, and
GL vouchers in it together. That same batch, however, will be classified by its
main purpose.
For example, one may have a batch for processing payments. That batch may
include payment transactions, but also related AR/AP transactions (for
specific charges connected with the payment or receipt). The batch would still
be classified as a payment batch, however.
In the ``Transaction Approval/Batches'' screen, one can enter search criteria
for batches.
The next screen shows a list of batches including control codes, amounts covered
in the batch, and descriptions. Clicking on the control code leads you to a
details screen where specific vouchers can be dropped from the batch.
When the batch is approved, all transactions in it are approved with it.
\subsection{Drafts}
Drafts are single transactions which have not yet been approved. For example,
a journal entry or AR Transaction would become a ``draft'' that would need to be
approved after entry.
As with batches, one searches for drafts on the first screen and then can
approve either on the summary screen or the details screen (by clicking
through).
\section{AP}
\subsection{Basic AP Concepts}
The Accounts Payable module tracks all financial commitments that
the company makes to other businesses. This includes rent, utilities,
etc. as well as orders of goods and services.
\subsection{Vendors}
A vendor is any business that the company agrees to pay money to.
One can enter vendor information under AP-\textgreater Vendors-\textgreater
Add Vendor. The vendor list can be searched under AP-\textgreater
Vendors-\textgreater Reports-\textgreater Search.
Please see the Contact Management section above for more on managing vendors.
\subsection{AP Transactions}
AP Transactions are generally used for items other than goods and
services. Utilities, rent, travel expenses, etc. could be entered
in as an AP transaction.
If the item is paid partially or in full when the transaction is entered,
one can add payments to the payment section.
All other payments can and should be entered under cash payment (below).
The PO Number and Order Number fields are generally used to track
associations with purchase orders sent to vendors, etc. These fields
can be helpful for adding misc. expenses to orders for reporting purposes.
The department drop-down box appears when one has created one or more
departments. A transaction is not required to be associated with a
department, but one can use this feature for budget tracking.
With AP Transactions, there is no option for internal notes. All notes
will appear on any printed version of the transaction.
Note: Printing a transaction does not post it. No data is committed
until the invoice is posted.
\subsection{AP Invoices}
AP Invoices are used to enter in the receipt of goods and services.
Goods and services are deemed entered into the inventory when they
are invoiced.
This screen is reasonably similar to the AP Transaction Screen, though
the part entry section is a bit different.
The AP Invoice section has a capacity to separate internal notes from
notes printed on the invoice. Note, however, that since these are
received invoices, it is rare that one needs this ability.
Note that LedgerSMB can search for partial part numbers or descriptions.
Also, if you have defined part groups, you can use them to select the part.
To remove a line item from an invoice or order, delete the partnumber
and click update.
\subsubsection{Correcting an AP Invoice}
If an invoice is entered improperly, the methods used to correct it
will vary depending on whether transaction reversal is enforced or
not. If transaction reversal is not enforced, one can simply correct
the invoice or transaction and repost. Note, however, that this violates
generally accepted accounting principles.
If transaction reversal is in effect, one needs to create a duplicate
invoice with exactly opposite values entered. If one unit of a part was listed
as received, one should enter a quantity of negative one. This invoice can be
given the same number as the original, with an R added to the
end to show that it is a reversing transaction. Once this is posted, one can
enter the invoice correctly.
\subsection{Cash payment And Check Printing}
It is a bad idea to repost invoices/transactions just to enter a payment.
The Cash-\textgreater Payment window allows one to enter payments against
AP invoices or transactions.
The printing capability can be used to print checks. The default template
is NEBS 9085, though you can use 9082 as well (as QuickBooks does).
The source field is used to store an identifying number of the source
document, such as the check number. One must select the item to be paid
and then enter the amount. One can then print a check.
\subsubsection{Batch Payment Entry Screen}
For bulk payment entry, we provide the batch payment workflow. You can use this
to pay any or all vendors filtered by business class or the like. Each payment
batch is saved, and hits the books after it is reviewed. It is possible to pay
over ten thousand invoices a week using this interface. It is found under
Cash/Vouchers/Payments.
\subsection{Transaction/Invoice Reporting}
\subsubsection{Transactions Report}
This report is designed to help you locate AP transactions based on
various criteria. One can search by vendor, invoice number, department,
and the like. One can even search by the shipping method.
The summary button will show what was placed where, while the details
button will show all debits and credits associated with the transaction.
To view the invoice, click on the invoice number. In the detail view,
to view the account transactions as a whole, click on the account
number.
Open invoices are ones not fully paid off, while closed invoices
are those that have been paid.
\subsubsection{Outstanding Report}
The outstanding report is designed to help you locate AP transactions
that are not paid yet. The ID field is mostly useful for locating
the specific database record if a duplicate invoice number exists.
\subsubsection{AP Aging Report}
This report can tell you how many invoices are past due and by how
much.
A summary report just shows vendors while a detail report shows individual
invoices.
\subsubsection{Tax Paid and Non-taxable Report}
These reports have known issues. It is better to use the GL reports and filter
accordingly.
\subsection{Vendor Reporting}
\subsubsection{Vendor Search}
The Vendor Search screen can be used to locate vendors or AP transactions
associated with those vendors.
The basic types of reports are:
\begin{description}
\item [{All}] Lists all vendors.
\item [{Active}] Lists those vendors currently active.
\item [{Inactive}] Lists those vendors who are currently inactive.
\item [{Orphaned}] Lists those vendors who do not have transactions associated
with them. These vendors can be deleted.
\end{description}
One can include purchase orders, Requests for Quotations, AP invoices,
and AP transactions on this report as well if they occur between the
from and to dates.
\subsubsection{Vendor History}
This report can be used to obtain information about the past goods
and services ordered or received from vendors. One can find quantities,
part numbers, and sell prices on this report. This facility can be used
to search RFQ's, Purchase Orders, and AP Invoices.
\section{AR}
\subsection{Customers}
Customers are entered in using the AR-\textgreater Customers-\textgreater
Add Customer menu.
The salesperson field is autopopulated with the currently logged-in user.
Otherwise, the screen looks fairly similar to the Vendor input screen.
Customers, like vendors, can be assigned languages, but here it is more
important to do so because invoices will be printed and sent to them.
The credit limit field can be used to assign the amount of credit one is
willing to extend to a customer.
\subsubsection{Customer Price Matrix}
The price list button can be used to enter specific discounts to the
customer, and groups of customers can be assigned a pricegroup for
the purpose of offering specific discounts on specific parts to the
customer. Such discounts can be temporary or permanent.
\subsection{AR Transactions}
AR Transactions are where one can enter moneys owed to the business by
customers. One can associate these transactions with income accounts,
and add payments if the item is paid when the invoice is issued.
The PO number field is used to track the PO that the customer sent.
This makes it easier to find items when a customer is asking for clarification
on a bill, for example.
\subsection{AR Invoices}
AR Invoices are designed to provide for the delivery of goods and
services to customers. One would normally issue these invoices once
everything necessary to get paid by the customer has been done.
As with AP invoices, one can search for matches to partial part numbers
and descriptions, and enter initial payments at this screen.
\subsection{Cash Receipt}
The Cash-\textgreater Receipt screen allows you to accept prepayments
from customers or pay single or multiple invoices after they have
been posted. One can print a receipt; however, the current templates
seem to be based on check printing templates and so are unsuitable
for this purpose. This presents a great opportunity for improvement.
\subsubsection{Cash Receipts for multiple customers}
The cash-\textgreater receipts screen allows you to accept payments
on all open customer invoices of all customers at once. One could
print (directly to a printer only) all receipts to be sent out if
this was desired.
\subsection{AR Transaction Reporting}
The AR Outstanding report is almost identical to the AP Outstanding
report and is not covered in any detail in this document.
\subsubsection{AR Transactions Report}
This is almost identical to the AP Transactions Report.
If a customer's PO has been associated with this transaction, one
can search under this field as well.
\subsubsection{AR Aging Report}
This report is almost identical to the AP Aging report, with the exception
that one can print up statements for customer accounts that are overdue.
Another application is to calculate interest based on the balance owed
so that it can be entered as an AR transaction associated with the
customer.
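As a purely illustrative calculation: if a customer carries a \$1,000.00
balance that is past due and the agreed terms charge 1.5\% per month, the
interest to enter as an AR transaction for one month would be
\[
1.5\% \times \$1{,}000.00 = \$15.00.
\]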
\subsection{Customer Reporting}
These reports are almost identical to the AP Vendor reports and are
not discussed in these notes.
\section{Projects}
\subsection{Project Basics}
A project is a logical collection of AR and AP transactions, orders,
and the like that allows one to better manage specific service or product
offerings. LedgerSMB does not offer comprehensive project management
capabilities, and projects are only used here as they relate to accounting.
One can also add translated descriptions to the project names.
\subsection{Timecards}
Timecards allow one to track time entered on specific services. These
can then be used to generate invoices for the time entered.
The non-chargeable field holds the number of hours that are not billed on the
invoice.
One can then generate invoices based on this information.
The project field is not optional.
\subsection{Projects and Invoices}
One can select the project id for line items of both AR and AP invoices.
These will then be tracked against the project itself.
\subsection{Reporting}
\subsubsection{Timecard Reporting}
The Timecard Report allows one to search for timecards associated
with one or more projects. One can then use the total time in issuing
invoices (this is not automated yet).
\subsubsection{Project Transaction Reporting}
The Standard or GIFI options can be used to create different reports
(for example, for Canadian Tax reporting purposes).
This report brings up a summary that looks somewhat like a chart of
accounts. If one clicks on the account numbers, one can see the transactions
associated with the project.
\subsubsection{List of Projects}
This provides a simple way of searching for projects to edit or modify.
\subsection{Possibilities for Using Projects}
\begin{itemize}
\item One can use them similarly to departments for tracking work done for
a variety of customers.
\item One can use them for customer-specific projects.
\end{itemize}
\section{Quotations and Order Management}
\label{oe}
This unit will introduce the business processes that LedgerSMB allows.
These processes are designed to allow various types of businesses
to manage their orders and allow for rudimentary customer relationship
management processes to be built around this software. In this section,
we will introduce the work flow options that many businesses may use
in their day-to-day use of the software.
\subsection{Sales Orders}
Sales orders represent orders from customers that have not been delivered
or shipped yet. These orders can be for work in the future, for
back-ordered products, or for work in progress. A sales order can be generated
from an AR invoice or from a quotation automatically.
\subsection{Quotations}
Quotations are offers made to a customer to which the customer
has not yet committed. Quotations can be created from sales
orders or AR invoices automatically.
\subsection{Shipping}
The Shipping module (Shipping-\textgreater Shipping) allows one to
ship part or all of an existing sales order, printing pick
lists and packing slips.
One can then generate invoices for those parts that were shipped.
In general, these features are most likely to be used by businesses that
ship from multiple warehouses. Most other users will simply generate
invoices directly from orders.
\subsection{AR Work Flow}
\subsubsection{Service Example}
A customer contacts your firm and asks for a quote on some services.
Your company would create a quotation for the job and email it to
the customer or print it and mail it. Once the customer agrees to
pay, one creates a sales order from the quotation.
When the work is completed, the sales order is converted into a sales
invoice and this is presented to the customer as a bill.
Note that in some cases, this procedure may be shortened. If the customer
places an order without asking for a quotation and is offered a verbal
quote, then one might merely prepare the sales order.
%
\begin{figure}[hbtp]
\caption{Simple AR Service Invoice Workflow Example}
\input{simple_ar_dataflow}
\end{figure}
\subsubsection{Single Warehouse Example}
A customer contacts your firm and asks for a quotation for shipping
a part. You would create the quotation and when you get confirmation,
convert it to an order. Once the parts are in place you could go to
shipping and ship the part.
The billing department can then generate the invoice from the sales
order based on what merchandise has been shipped and mail it to the
customer.
Note that this requires that you have the part in your inventory.
%
\begin{figure}[hbtp]
\caption{AR Workflow with Shipping}
\input{ar_workflow_ship}
\end{figure}
\subsubsection{Multiple Warehouse Example}
A customer contacts your firm and asks for a quotation for a number
of different parts. You would create a quotation and when you get
confirmation, convert it to a sales order. When you go to ship the items,
you would select the warehouse in the drop-down menu, and select the
parts to ship. One would repeat with other warehouses until the entire
order is shipped.
Then the billing department would go to the sales order and generate
the invoice. It would then be mailed to the customer.
%
\begin{figure}[hbtp]
\caption{Complex AR Workflow with Shipping}
\input{ar_workflow_complex}
\end{figure}
\subsection{Requests for Quotation (RFQ)}
A request for quotation would be a formal document one might submit
to a vendor to ask for a quote on a product or service they might
offer. These can be generated from Purchase Orders or AP Invoices.
\subsection{Purchase Orders}
A purchase order is a confirmation that is issued to the vendor to
order the product or service. Many businesses will require a purchase
order with certain terms in order to begin work on a product. These
can be generated from RFQ's or AP Invoices.
\subsection{Receiving}
The Shipping-\textgreater Receiving screen allows you to track the
parts received from an existing purchase order. Like shipping, it
does not post an invoice but tracks the received parts in the order.
\subsection{AP Work Flow}
\subsubsection{Bookkeeper entering the received items, order completed in full}
Your company inquires about the price of a given good or service from
another firm. You submit an RFQ to the vendor, and finding that the
price is reasonable, you convert it to an order, adjust the price
to what they have quoted, and save it. When the goods are delivered
you convert the order into an AP invoice and post it.
%
\begin{figure}[hbtp]
\caption{Simple AP Workflow}
\input{simple_ap_workflow}
\end{figure}
\subsubsection{Bookkeeper entering received items, order completed in part}
Your company inquires about the price of a given good or service from
another firm. You submit an RFQ to the vendor, and finding that the
price is acceptable, you convert it into an order, adjusting the price
to what they have quoted, and save it. When some of the goods are
received, you open up the purchase order, enter the number of parts
received, convert that order into an invoice, and post it. Repeat
until all parts are received.
%
\begin{figure}[hbtp]
\caption{AP Workflow with Receiving}
\input{ap_workflow_ship}
\end{figure}
\subsubsection{Receiving staff entering items}
Your company inquires about the price of a given good or service from
another firm. You submit an RFQ to the vendor, and finding that the
price is acceptable, you convert it into an order, adjusting the price
to what they have quoted, and save it. When some or all of the goods
are received, the receiving staff goes to Shipping-\textgreater Receiving,
locates the purchase order, and fills in the number of items received.
The bookkeeper can then determine when all items have been received
and post the invoice at that time.
%
\begin{figure}[hbtp]
\caption{Complex AP Workflow}
\input{ap_workflow_complex}
\end{figure}
\subsection{Generation and Consolidation}
\subsubsection{Generation}
The Generation screen allows you to generate Purchase Orders based
on sales orders. One selects the sales orders one wants to use, and
clicks \char`\"{}Generate Purchase Orders.\char`\"{} Then one selects
clicks on the parts to order, adjusts the quantity if necessary, and
clicks \char`\"{}Select Vendor.\char`\"{} This process is repeated
for every vendor required. Then the Generate Orders button is clicked.
\subsubsection{Consolidation}
One can consolidate sales and/or purchase orders using this screen.
For the consolidation to work you must have more than one order associated
with the relevant customer or vendor.
\subsection{Reporting}
The reporting functionality in the order management is largely limited
to the ability to locate purchase orders, sales orders, RFQ's, and
quotations.
\subsection{Shipping Module: Transferring Inventory between Warehouses}
One can transfer inventory between warehouses if necessary by using
the Shipping-\textgreater Transfer Inventory screen.
\section{Fixed Assets}
One of the new features in LedgerSMB 1.3.x is fixed asset management, which
includes tracking, depreciation, and disposal. In general, the LedgerSMB
approach to these topics is implemented in a streamlined fashion but with
an ability to add advanced methods of depreciation later.
\subsection{Concepts and Workflows}
Fixed asset management and accounting provides a better ability to track
the distribution of expenses relative to income. Many fixed assets may be
depreciated so that the expense of obtaining the asset can be spread across
the usable life of that asset. This is done in accordance with the idea of
attempting to match actual resource consumption (as expenses) with income
in order to better gauge the financial health of the business.
\subsubsection{Fixed Assets and Capital Expenses}
Fixed assets are pieces of property that a business may own which cannot be
easily converted into cash. They are differentiated from liquid assets, which
include inventory, cash, bank account balances, some securities, etc. Fixed
assets, by their nature are typically purchased and used for an extended period
of time, called the estimated usable life. During the estimated usable life,
the fixed asset is being utilized by the business, and so we would want to
track the expense as gradually incurred relative to the income that the asset
helps produce. This expense is called a "capital expense" and refers to either
a purchase of a fixed asset or an expense which improves it.
Examples of capital expenses and fixed assets might include (using a pizza
place as an example):
\begin{itemize}
\item A company vehicle
\item Tables, chairs, ovens, etc.
\item Changes to the leased property needed for the business.
\item Major repairs to the company vehicle.
\end{itemize}
\subsubsection{Asset Classes}
LedgerSMB allows assets and capital expenses to be grouped together in asset
classes for easy management. An asset class is a collection of assets which are
depreciated together, and which are depreciated in the same way. One cannot mix
depreciation methods within a class. Account links for the asset class are used
as default links for fixed assets, but assets may be tied to different accounts
within an asset class.
\subsubsection{Depreciation}
Depreciation is a method for matching the portion of a capital expense to
income related to it. Expenses are depreciated so that they are spread out over
the usable life of the asset or capital expense. Depreciation may be linear or
non-linear and may be spread out over time or over units of production. For
example, one might wish to depreciate a car based on miles driven, over a usable
life of, say, 100000 miles, or one might want to depreciate it based on a useful
life of five years.
LedgerSMB currently only supports variations on straight-line depreciation,
either with an estimated life measured in months or in years.
Depreciation is subject to separation of duties. Permissions for entering and
approving depreciation reports are separate, and the GL transactions created
must currently be approved in order to show up on the books.
\subsubsection{Disposal}
Fixed assets may be disposed of through sale or abandonment. Abandonment
generally is a method of giving the asset away. A sale involves getting
something for the asset.
The disposal workflow is conceptually similar to the depreciation workflow
except that additionally one can track proceeds for sales, and tie in
gains/losses to appropriate income or expense accounts.
Gains are realized where the salvage value is greater than the undepreciated
value of the asset, and losses where the salvage value is less.
\subsubsection{Net Book Value}
Net book value represents the remaining depreciable value of a fixed asset. It is
defined as the basis value minus depreciation that has been recorded. The basis
is further defined as the purchase value minus the estimated salvage value for
LedgerSMB purposes. We track all capital expenses separately for depreciation
purposes, and so capital expenses which adjust value of other fixed assets have
their own net book value records. This is separate from how capital gain and
loss might need to be tracked for tax purposes in the US.
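In symbols, the definitions above amount to:
\[
\mbox{basis} = \mbox{purchase value} - \mbox{estimated salvage value},
\qquad
\mbox{net book value} = \mbox{basis} - \mbox{accumulated depreciation}.
\]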
\subsubsection{Supported Depreciation Methods}
Currently we only ship with the following variations on the straight line
depreciation method:
\begin{description}
\item[Annual Straight Line Daily] Life is measured in years, depreciation is an
equal amount per year, divided up into equal portions daily.
\item[Annual Straight Line Monthly] Life is measured in years, depreciation
is an equal amount per year, divided up into equal portions each month. This
differs from daily in that February would have less depreciation than August.
This method is more commonly used than daily depreciation because it is
easier to calculate and thus more transparent.
\item[Whole Month Straight Line] Life is measured in months, and depreciation
occurs only per whole month.
\end{description}
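As a worked example (with purely fictitious numbers), consider an oven
purchased for \$12,600 with an estimated salvage value of \$600 and an
estimated usable life of five years. Under the annual straight line monthly
method, each month's depreciation is
\[
\frac{12600 - 600}{5 \times 12} = \frac{12000}{60} = 200,
\]
that is, \$200 per month. The whole month straight line method with a life of
60 months gives the same \$200 for each whole month, while the annual straight
line daily method spreads each year's \$2,400 over the days of that year.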
\section{HR}
The HR module is currently limited to tracking employees and their
start and end dates. It has very little other functionality. One could
build payroll systems that could integrate with it however.
\section{POS}
LedgerSMB 1.2 includes a number of components merged from Metatron Technology
Consulting's SL-POS. Although it is still not a perfect solution, it is greatly improved in both workflow and hardware support. It is suitable for retail
establishments at the moment.
\subsection{Sales Screen}
The sales screen looks very much like a normal invoice entry screen
with a few differences.
\begin{itemize}
\item The discount text field is not available, nor is the unit field.
\item The next part number is automatically focused when the data loads
for rapid data entry.
\item Hot keys for the buttons are Alt-U for update, Alt-P for print, Alt-O
for post, and Alt-R for print and post.
\item Part Groups appear at the bottom of the screen.
\item Alt-N moves the cursor to the next free payment line.
\end{itemize}
\subsection{Possibilities for Data Entry}
\begin{itemize}
\item Barcode scanners can be used to scan items in as they are being rung
in.
\item One could use touch screens, though this would ideally require some
custom stylesheets to make it efficient.
\end{itemize}
\subsection{Hardware Support}
As LedgerSMB is a web-based application, the web browser usually
does not allow the page to write to arbitrary files. Therefore hardware
support for pole displays, etc. is not readily possible from the application
itself. LedgerSMB gets around this limitation by using an additional set of
network sockets from the server to the client to control its hardware. This
naturally requires that other software is also running on the client.
Notes for specific types of hardware are as follows:
\begin{description}
\item [{Touch}] screens: The default stylesheet is not really usable from
a touchscreen as the items are often too small. One would need to
modify the stylesheets to ensure that the relevant items are a reasonable
size. Lowering the screen resolution would also help.
\item [{Receipt}] Printers: ESC/POS printers generally work in text mode.
Control sequences can be embedded in the template as necessary.
\item [{Pole}] Displays: Generally supported. Only the Logic Controls PD3000 is
supported out of the box, but making this work for other models ought to be
trivial.
\item [{Cash}] Drawers: These should be attached to the printer. The control
code is then specified in the pos.conf.pl file so that the command is sent to the
printer when the open till button is pushed.
\item [{Barcode}] Scanners: Most customers use decoded barcode scanners
through a keyboard wedge interface. This allows them to scan items
as if they were typing them on the keyboard.
\end{description}
\subsection{Reports}
\subsubsection{Open Invoices}
The POS-\textgreater Open screen allows one to find any POS receipts
that are not entirely paid off.
\subsubsection{Receipts}
The POS-\textgreater Receipts screen allows one to bring up a basic
record of the POS terminals. It is not sufficient for closing the
till, however, though it may help for reconciliation.
The till column is the last component or octet of the terminal's IP
address. Therefore it is a good idea to try to avoid having IP addresses
where the last octet is the same.
All entries are grouped by date and source in this report.
\section{General Ledger}
\subsection{GL Basics}
The General Ledger is the heart of LedgerSMB. Indeed, LedgerSMB
is designed to be as close as possible to a software equivalent of
a paper-based accounting system (but with no difference between the
General Ledger and General Journal).
\subsubsection{Paper-based accounting systems and the GL}
In order to understand the principle of the General Ledger, one must
have a basic understanding of the general process of bookkeeping using
double-entry paper-based accounting systems.
Normally when a transaction would be recorded, it would first be recorded
in the ``General Journal'' which would contain detailed
information about the transaction, notes, etc. Then the entries from
the General Journal would be transcribed to the General Ledger, where
one could keep closer tabs on what was going on in each account.
In the general journal, all transactions are listed chronologically
with whatever commentary is deemed necessary, while in the general
ledger each account has its own page and transactions are recorded
in a simple and terse manner. The General Journal is the first place
the transaction is recorded and the General Ledger is the last.
At the end of the accounting period, the GL transactions would be
summarized into a trial balance and this would be used for creating
financial statements and closing the books at the end of the year.
\subsubsection{Double Entry Examples on Paper}
Let us say that John starts his business with an initial investment
of \$10,000.
This is recorded in the General Journal as follows (in this example,
suppose it is page 1):
\begin{tabular}{|l|l|l|r|r|}
\hline
Date &
Accounts and Explanation &
Ref &
DEBIT &
CREDIT \tabularnewline
\hline
March 1 &
Checking Account &
1060 &
10000.00 &
\tabularnewline
&
John Doe Capital &
3011 &
&
10000.00\tabularnewline
&
John Doe began a business &
&
&
\tabularnewline
&
with an investment of &
&
&
\tabularnewline
&
\$10000 &
&
&
\tabularnewline
\hline
\end{tabular}\medskip{}
This would then be transcribed into two pages of the General Ledger.
The first page might be the Checking Account page:\medskip{}
\begin{tabular}{|l|l|l|r|l|l|l|r|}
\hline
DATE &
EXPLANATION &
REF. &
DEBITS &
DATE &
EXPLANATION &
REF. &
CREDITS\tabularnewline
\hline
March 1 &
&
J1 &
10000.00 &
&
&
&
\tabularnewline
\hline
\end{tabular}\medskip{}
On the John Doe Capital page, we would add a similar entry:\medskip{}
\begin{tabular}{|l|l|l|r|l|l|l|r|}
\hline
DATE &
EXPLANATION &
REF. &
DEBITS &
DATE &
EXPLANATION &
REF. &
CREDITS\tabularnewline
\hline
&
&
&
&
March 1 &
&
J1 &
10000.00\tabularnewline
\hline
\end{tabular}\medskip{}
\subsubsection{The GL in LedgerSMB}
The paper-based accounting procedure works well when one is stuck
with paper recording requirements but it has one serious deficiency---
all of this transcribing creates an opportunity for errors.
Relational databases relieve the need for such transcription as it
is possible to store everything physically in a way similar to the
way a General Journal is used in the paper-based systems and then
present the same information in ways which are more closely related
to the General Ledger book.
This is the exact way that the General Ledger is used in LedgerSMB.
The actual data is entered and stored as if it was a general journal,
and then the data can be presented in any number of different ways.
All modules of LedgerSMB that involve COA accounts store their data
in the General Ledger (it is a little more complex than this but this
is very close to the actual mechanism).
\subsection{Cash Transfer}
The simplest form of GL entry in LedgerSMB is the Cash-\textgreater
Transfer screen. This screen shows two transaction lines, and fields
for reference, department, description, and notes.
The field descriptions are as follows:
\begin{description}
\item [{Reference}] refers to the source document for the transfer. One
can use transfer sheets, bank receipt numbers, etc for this field.
\item [{Description}] is optional but really should be filled in. It ought
to be a description of the transaction.
\item [{Notes}] provide supplemental information for the transaction.
\item [{FX}] indicates whether foreign exchange is a factor in this transaction.
\item [{Debit}] indicates money going \textbf{into} the asset account.
\item [{Credit}] indicates money coming \textbf{out} of the asset account.
\item [{Source}] is the source document for that portion of the transaction.
\item [{Memo}] lists additional information as necessary.
\item [{Project}] allows you to assign this line to a project.
\end{description}
The credit and debit options seem to be the opposite of what one would
think of concerning one's bank account. The reason is that your bank
statement is done from the bank's point of view. Your bank account balance
is an asset to you and therefore you show it as having a debit balance, but
to the bank it is money they owe you and so they show it as having a credit
balance.
Note that when this screen is updated, it will reduce the number of lines
to those already filled in, plus an extra blank line for new data entry.
\subsection{GL Transactions}
The GL Transaction screen (General Ledger-\textgreater Add Transaction)
is identical to the Cash Transfer screen with the exception that it
starts with nine lines instead of two. Otherwise, they are identical.
Again, one must be careful with debits and credits. Often it is easy
to get confused. It is generally worthwhile to go back to the principle
that one tracks them with regard to their impact on the equity accounts.
So expenses are entered as debits because they reduce (debit) the equity
accounts, and income is entered as a credit because it increases (credits)
the retained earnings equity account.
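For example, paying a \$100 utility bill out of the checking account is
entered as a debit to the utilities expense account and a credit to the
checking account (the expense account number below is purely
illustrative):\medskip{}

\begin{tabular}{|l|r|r|}
\hline
Account &
Debit &
Credit \tabularnewline
\hline
5102 Utilities Expense &
100 &
\tabularnewline
1060 Checking Acct &
&
100 \tabularnewline
\hline
\end{tabular}\medskip{}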
\subsection{Payroll as a GL transaction}
Currently payroll must be done as a GL transaction. The attempts to
create a payroll system that would ship with LSMB have largely stalled.
Most customers running their businesses will have an idea of how to
do this.
%
\begin{figure}[hbtp]
\caption{Payroll as a GL Transaction (Purely fictitious numbers)}
\begin{tabular}{|l|r|r|}
\hline
Account &
Debit &
Credit \tabularnewline
5101 Wages and Salaries &
500 &
\tabularnewline
2032 Accrued Wages &
&
450 \tabularnewline
2033 Fed. Income Tax wthd &
&
30 \tabularnewline
2034 State Inc. Tax. wthd &
&
15 \tabularnewline
2035 Social Security wthd &
&
3 \tabularnewline
2036 Medicare wthd &
&
2 \tabularnewline
2032 Accrued Wages &
450 &
\tabularnewline
1060 Checking Acct &
&
450 \tabularnewline
\hline
\end{tabular}
\end{figure}
\subsection{Reconciliation}
To reconcile an account (say, when one would get a checking account
statement), one would go to cash/reconciliation, and check off the
items that have cleared. One can then attempt to determine where any
errors lie by comparing the total on the statement with the total
that LSMB generates.
This can be done for other accounts too, such as petty cash.%
\footnote{Petty cash denotes a drawer of cash that is used to pay small
expenses. When an expense is paid, it is recorded on a slip of paper that is
stored for reconciliation purposes.} Some users even reconcile liability
accounts and the like.
In LedgerSMB 1.3.x, the reconciliation framework has been completely rewritten
to allow for easier reconciliation especially where there are large numbers of
transactions. It is now possible to reconcile accounts with thousands of
transactions per month, and to track what was reconciled in each report.
Reconciliation is now also subject to separation of duties, allowing a
reconciliation report to be committed to the books only when approved.
The reconciliation screen is now divided into four basic parts. At the top is a
header with information about which account is being reconciled, ending
balances, variances, etc. Then comes a list of cleared transactions, as well
as totals of debits and credits.
After this comes a (usually empty) list of failed matches when the file import
functionality is used. These again list the source, then the debits/credits in
LedgerSMB and the debits/credits in the bank import.
Finally there is a list of uncleared transactions. To move a transaction
from the error or uncleared section into the cleared section, check one or more
off and click update. Update also checks for new uncleared transactions in the
reconciliation period, and it saves the current report so it can be continued
later.
\subsubsection{File Import Feature}
1.3.x has a plugin model that allows one to write parsers against a variety of
file formats, one or more per account. The file can be placed in the
LedgerSMB/Reconciliation/CSV/Formats directory, and the function must be called
parse\_ followed by the account id. The function must be in the
LedgerSMB::Reconciliation::CSV namespace.
This obviously has a few limitations, since parsers are tied to account id's.
Hosting providers might want to start the account tables 10,000 apart as far as
the initial id's go, or provide dedicated LedgerSMB instances per client.
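To make the naming convention concrete, here is a minimal, hypothetical parser
sketch. Only the package name and the parse\_ plus account id naming come from
the description above; the arguments the function receives and the structure it
returns are assumptions for illustration and should be checked against the
reconciliation module's own documentation and the shipped format files.
\begin{verbatim}
# Hypothetical plugin sketch: a parser for the account with id 10001, which
# would live in a file under LedgerSMB/Reconciliation/CSV/Formats.
package LedgerSMB::Reconciliation::CSV;
use strict;
use warnings;

sub parse_10001 {
    # Assumed interface: the reconciliation object and the raw file contents.
    my ($self, $contents) = @_;
    my @entries;
    for my $line (split /\r?\n/, $contents) {
        next if $line =~ /^\s*$/;               # skip blank lines
        # Assumed bank export format: date,source,amount
        my ($date, $source, $amount) = split /,/, $line, 3;
        push @entries, {
            clear_date => $date,
            source     => $source,
            amount     => $amount,
        };
    }
    return @entries;                            # assumed return convention
}

1;
\end{verbatim}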
\subsection{Reports}
The most flexible report in LedgerSMB is the GL report because it
has access to the entire set of financial transactions of a business.
Every invoice posted, payment made or received, etc. can be located
here.
The search criteria include:
\begin{description}
\item [{Reference}] is the invoice number, or other reference number associated
with the transaction.
\item [{Source}] is the field related to the source document number in
a payment or other transaction.%
\footnote{Source documents are things like receipts, canceled checks, etc. that
can be used to verify the existence and nature of a transaction.%
}
\item [{Memo}] relates to the memo field on a payment.
\item [{Department}] can be used to filter results by department.
\item [{Account}] Type can be used to filter results by type of account
(Asset, Liability, etc.)
\item [{Description}] can be used to filter by GL description or by
customer/vendor name.
\end{description}
The actual format of the report looks more like what one would expect
in a paper accounting system's general journal than a general ledger
per se. A presentation of the data that is more like the paper general
ledger is found in the Chart of Accounts report.
\subsubsection{GL as access to almost everything else}
The GL reports can be used to do all manner of things. One can determine,
for example, which AP invoice or transaction was paid with a certain
check number or which invoice by a specific customer was paid by a specific
check number.
\section{Recurring Transactions}
Any transaction or invoice may be repeated a number of times in regular
intervals. To schedule any GL, AR, or AP transaction or invoice, click
the schedule button.
In general the reference number should be left blank as this will
force LedgerSMB to create a new invoice or transaction number for
each iteration. The rest of the options are self-explanatory. Note
that a blank number of iterations will result in no recurrences of
the transaction.
To process the recurring transactions, click on the Recurring Transactions
option on the main menu, select the ones you want to process, and click
\char`\"{}Process Transactions.\char`\"{}
An enhanced recurring transaction interface is forthcoming from the LedgerSMB
project.
\section{Financial Statements and Reports}
Financial statements and reports are a very important part of any
accounting system. Accountants and business people rely on these reports
to determine the financial soundness of the business and its prospects
for the next accounting period.
\subsection{Cash v. Accrual Basis}
Financial statements, such as the Income Statement and Balance Sheet
can be prepared either on a cash or accrual basis. In cash-basis accounting,
the income is deemed earned when the customer pays it, and the expenses
are deemed incurred when the business pays them.
There are a number of problems with cash-basis accounting from a business
point of view. The most serious is that one can misrepresent the well-being
of a business by delaying payment of a large expense until after a reporting
deadline. Thus cash-basis
accounting does not allow one to accurately pair the income with the
related expense as these are recorded at different times. If one cannot
accurately pair the income with the related expense, then financial
statements cannot be guaranteed to tell one much of anything about
the well-being of the business.
In accrual basis accounting, income is considered earned when the
invoice is posted, and expenses are considered incurred at the time
when the goods or services are delivered to the business. This way,
one can pair the income made from the sale of a product with the expense
incurred in bringing that product to sale. This pairing allows for
greater confidence in business reporting.
\subsection{Viewing the Chart of Accounts and Transactions}
The Reports--\textgreater Chart of Accounts screen will provide the chart
of accounts along with current totals in each account.
If you click on an account number, you will get a screen that allows
you to filter out transactions in that account by various criteria.
One can also include AR/AP, and Subtotal in the report.
The report format is similar to that of a paper-based general ledger.
\subsection{Trial Balance}
\subsubsection{The Paper-based function of a Trial Balance}
In paper-based accounting systems, the accountant at the end of the
year would total up the debits and credits in every account and transfer
them onto another sheet called the trial balance. The accountant would
check to determine that the total debits and credits were equal and
would then transfer this information onto the financial statements.
It was called a trial balance because it was the main step at which
the error-detection capabilities of double-entry accounting systems
were used.
\subsubsection{Running the Trial Balance Report}
This report is located under Reports --\textgreater Trial Balance.
One can filter out items by date, accounting period, or department.
One can run the report by accounts or using GIFI classifications to
group accounts together.
From this report, you can click on the account number and see all
transactions on the trial balance as well as whether or not they have
been reconciled.
\subsubsection{What if the Trial Balance doesn't Balance?}
If the trial balance does not balance, get technical support immediately.
This usually means that transactions were not entered properly. Some
may have been out of balance, or some may have gone into non-existent
accounts (believe it or not, LedgerSMB does not check this latter
issue).
\subsubsection{Trial Balance as a Summary of Account Activity}
The trial balance offers a glance at the total activity in every account.
It can provide a useful look at financial activity at a glance for
the entire business.
\subsubsection{Trial Balance as a Budget Planning Tool}
By filtering out departments, one can determine what a department
earned and spent during a given financial interval. This can be used
in preparing budgets for the next accounting period.
\subsection{Income Statement}
The Income Statement is another tool that can be used to assist with
budgetary planning as well as provide information on the financial
health of a business.
The report is run from Reports--\textgreater Income Statement. The
report preparation screen shows the following fields:
\begin{description}
\item [{Department}] allows you to run reports for individual departments.
This is useful for budgetary purposes.
\item [{Project}] allows you to run reports on individual projects. This
can show how profitable a given project was during a given time period.
\item [{From}] and To allow you to select arbitrary from and to dates.
\item [{Period}] allows you to specify a standard accounting period.
\item [{Compare to}] fields allow you to run a second report for comparison
purposes for a separate range of dates or accounting period.
\item [{Decimalplaces}] allows you to display numbers to a given precision.
\item [{Method}] allows you to select between accrual and cash basis reports.
\item [{Include}] in Report provides various options for reporting.
\item [{Accounts}] allows you to run GIFI reports instead of the standard
ones.
\end{description}
The report shows all income and expense accounts with activity during
the period when the report is run, the balances accrued during the
period, as well as the total income and expense at the bottom of each
section. The total expense is subtracted from the total income to
provide the net income during the period. If there is a loss, it appears
in parentheses.
\subsubsection{Uses of an Income Statement}
The income statement provides a basic snapshot of the overall ability
of the business to make money. It is one of the basic accounting statements
and is required, for example, on many SEC forms for publicly traded
firms.
Additionally, businessmen use the income statement to look at overall
trends in the ability of the business to make money. One can compare
a given month, quarter, or year with a year prior to look for trends
so that one can make adjustments in order to maximize profit.
Finally, these reports can be used to provide a look at each department's
performance and their ability to work within their budget. One can
compare a department or project's performance to a year prior and
look for patterns that can indicate problems or opportunities that
need to be addressed.
\subsection{Balance Sheet}
The balance sheet is the second major accounting statement supported
by LedgerSMB. The balance sheet provides a snapshot of the current
financial health of the business by comparing assets, liabilities,
and equity.
In essence the balance sheet is a statement of the current state of
owner equity. Traditionally, it does not track changes in owner equity
in the same way the Statement of Owner Equity does.
The Balance Sheet report preparation screen is much simpler than the
Income Statement screen. Balance sheets don't apply to projects, but
they do apply to departments. Also, unlike an income statement, a
balance sheet is fixed for a specific date in time. Therefore one
does not need to select a period.
The fields in creating a balance sheet are:
\begin{description}
\item [{Department}] allows you to run separate balance sheets for each
department.
\item [{As}] at specifies the date. If blank this will be the current date.
\item [{Compare to}] specifies the date to compare the balance sheet to.
\item [{Decimalplaces}] specifies the number of decimal places to use.
\item [{Method}] selects between cash and accrual basis.
\item [{Include}] in report allows you to select supplemental information
on the report.
\item [{Accounts}] allows you to select between standard and GIFI reports.
\end{description}
The balance sheet lists all asset, liability, and equity accounts
with a balance. Each category has a total listed, and the total of
the equity and liability accounts is also listed.
The total assets should be equal to the sum of the totals of the liability
and equity accounts.
\subsection{What if the Balance Sheet doesn't balance?}
Get technical support immediately. This may indicate that out-of-balance
transactions were entered or that transactions did not post properly.
\subsection{No Statement of Owner Equity?}
The Statement of Owner Equity is the one accounting statement that
LedgerSMB does not support. However, it can be simulated by running
a balance sheet at the end of the time frame in question and comparing
it to the beginning. One can check this against an income statement
for the period in question to verify its accuracy. The statement of
owner equity is not as commonly used now as it once was.
\section{The Template System}
LedgerSMB allows most documents to be generated according to a template
system. This allows financial statements, invoices, orders, and the
like to be customized to meet the needs of most businesses. Company
logos can be inserted, the format can be radically altered, one can
print letters to be included with checks to vendors instead of the
checks themselves, and the like. In the end, there is very little
that cannot be accomplished regarding modification of these documents
with the template system.
One can define different templates for different languages, so that
a customer in Spain gets a different invoice than a customer in Canada.
LedgerSMB provides templates in a variety of formats including text, html,
LaTeX, Excel, and Open Document Spreadsheet. Each of these is processed using
TemplateToolkit\footnote{See \url{http://template-toolkit.org/} for more
information and documentation.} with a few specific modifications:
\begin{itemize}
\item start tag is \textless?lsmb and end tag is ?\textgreater
\item text(string) is available as a function in the templates for localization
purposes.
\item gettext(string, language) can be used to translate a string into a
specified language rather than the default language (useful for multi-lingual
templates).
\end{itemize}
Additionally, the UI/lib/ui-header.html template can be used to provide
standardized initial segments for HTML documents, and inputs can be entered via
the interfaces in the UI/lib/elements.html template.
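As a standalone illustration of the modified tags (this is not how LedgerSMB
itself wires up its template processing), the following short Perl script
renders a string template using the \textless?lsmb ... ?\textgreater\
delimiters and a stand-in text() function that simply echoes its argument:
\begin{verbatim}
#!/usr/bin/perl
# Standalone sketch of the modified Template Toolkit tags; the text()
# stub below only echoes its argument instead of doing real localization.
use strict;
use warnings;
use Template;

my $tt = Template->new({
    START_TAG => quotemeta('<?lsmb'),
    END_TAG   => quotemeta('?>'),
}) or die Template->error;

my $template =
    q{Invoice <?lsmb invnumber ?>: <?lsmb text('Total') ?> <?lsmb total ?>};

my %vars = (
    invnumber => 'INV-1001',
    total     => '150.00',
    text      => sub { return $_[0] },   # localization stand-in
);

my $output;
$tt->process(\$template, \%vars, \$output) or die $tt->error;
print "$output\n";
\end{verbatim}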
\subsubsection{What is \LaTeX{}\ ?}
\LaTeX{}\ (pronounced LAY-tech) is an extension on the \TeX{}\ typesetting
system. It largely consists of a set of macros that allow one to focus
on the structure of the document while letting the \TeX{}\ engine
do the heavy lifting in terms of determining the optimal formatting
for the page. \LaTeX{}\ is used in a large number of academic journals
(including those of the American Mathematical Society). It is available
at \url{http://www.tug.org} and is included in most Linux distributions.
Like HTML, \LaTeX{}\ uses plain text documents to store the formatting
information and then when the document is rendered, attempts to fit
it onto a page. \LaTeX{}\
supports the concept of stylesheets, allowing one to separate content
from format, and this feature is used in many higher-end applications,
like journal publication.
Unlike HTML, \LaTeX{}\ is a complete though simple programming language
that allows one to redefine internals of the system for formatting
purposes.
This document is written in \LaTeX{}.
\subsubsection{Using \LyX{} to Edit \LaTeX{}\ Templates}
\LyX{} is a graphical \LaTeX{}\ editor that runs on Windows, UNIX/Linux,
and Mac OS X. It requires an installed \LaTeX{}-2e implementation
and can be obtained at \url{http://www.lyx.org}. Like the most common
\LaTeX{}\ implementations, it is open source and is included with most
Linux distributions.
\subsection{Customizing Logos}
\LaTeX{}\ requires different formats of logos depending on whether
the document is going to be generated as a PDF or as postscript. Postscript
requires an encapsulated PostScript (EPS) graphic, while PDF requires any type
of graphic other than encapsulated PostScript. Usually one uses PNGs
for PDFs, though GIFs could be used as well. The logo for a \LaTeX{}\ document
must be fully qualified as to its path.
HTML documents can have logos in many different formats. PNGs are
generally preferred for printing reasons. The image can be stored
anywhere and merely referenced in the HTML.
Note: Always test an invoice with images to ensure that
the rest of the page format is not thrown off by it.
\subsection{How are They Stored in the Filesystem?}
The template directory (``templates'' in the root
LedgerSMB install directory) contains all the root templates used
by LedgerSMB. The default templates are stored in the demo folder.
Inside the templates directory are one or more subdirectories where the
relevant
templates have been copied as default language templates for the user.
Many users can use the same user directory (which bears the name of
the LedgerSMB username). Within this directory are more subdirectories
for translated templates, one for each language created. If the requested
language template is not found, LedgerSMB will attempt to translate the one in
the main folder.
\subsection{Upgrade Issues}
When LedgerSMB is upgraded, the templates are not replaced. This
is designed to prevent the upgrade script from overwriting changes
made during the course of customizing the templates.
Occasionally, however, the data model changes in a way which can cause
the templates to stop printing certain information. When information
that was showing up before an upgrade stops showing up, one can either
upgrade the templates by copying the source template over the existing
one, or one can edit the template to make the change.
\section{An Introduction to the CLI for Old Code}
This section applies to the following Perl scripts:
\begin{description}
\item[am.pl] System Management
\item[ap.pl] Accounts Payable Transactions and Reports
\item[ar.pl] Accounts Receivable Transactions and Reports
\item[bp.pl] Batch Printing
\item[ca.pl] Chart of Accounts (some functions, others are in the accounts.pl
script)
\item[gl.pl] General Ledger Reports and Journal Entry
\item[ic.pl] Inventory control
\item[ir.pl] Invoices Received (vendor invoices)
\item[is.pl] Invoices Shipped (sales invoices)
\item[jc.pl] Job costing (timecards)
\item[oe.pl] Order Entry
\item[pe.pl] Projects
\item[ps.pl] Point of Sale
\item[rp.pl] Reports
\end{description}
\subsection{Conventions}
The command-line API will be referred to as the API.
\subsection{Preliminaries}
All scripts included in the documentation can also be found in the doc/samples
directory.
Consider a simple example:
\begin{verbatim}
cd /usr/local/ledger-smb
./ct.pl "login=name&path=bin&password=xxxxx&action=search&db=customer"
\end{verbatim}
The cd command moves your terminal session's current working directory into
the main LedgerSMB directory. Then the LedgerSMB perl script ct.pl is called
with one long line as an argument. The argument is really several variable=value pairs
separated by ampersands (\&). The value for the login variable is the username
that LedgerSMB is to use, and the value for the password variable is the plaintext password.
To build our examples we will use a username of "clarkkent" who has a password
of "lOis,lAn3".
\begin{verbatim}
cd /usr/local/ledger-smb
./ct.pl "login=clarkkent&path=bin&password=lOis,lAn3&action=search&db=customer"
\end{verbatim}
If we execute these commands we will get the html for the search form for
the customer database. This result isn't useful in itself, but it shows we
are on the right track.
\subsection{First Script: lsmb01-cli-example.sh}
With a working example, we can start to build reproducible routines that we can grow
to do some useful work.
This is a bash script which:
\begin{enumerate}
\item sets NOW to the current working directory
\item prompts for and reads your LedgerSMB login
\item prompts for and reads (non-echoing) your LedgerSMB password
\item changes directory to /usr/local/ledger-smb
\item constructs login and logout commands and a transaction command
\item logins into ledger-smb (in a real program, output would be checked for
success or failure)
\item executes the transaction
\item logs out of ledger-smb (although this is not necessary)
\item returns to the original working directory
\item exits
\end{enumerate}
Running lsmb01-cli-example.sh produces:
\begin{verbatim}
$ lsmb01-cli-example.sh
LedgerSMB login: clarkkent
LedgerSMB password:
<body>
<form method=post action=ct.pl>
<input type=hidden name=db value=customer>
<table width=100%>
<tr>
<th class=listtop>Search</th>
.
.
.
\end{verbatim}
A script like this would work well for simple batch transactions, but
bash is not a very friendly language for application programming.
A nicer solution would be to use a language such as perl to drive the
command line API.
\subsubsection{Script 1 (Bash)}
\begin{verbatim}
#!/bin/bash
#######################################################################
#
# lsmb01-cli-example.sh
# Copyright (C) 2006. Louis B. Moore
#
# $Id: $
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
#
#######################################################################
NOW=`pwd`
echo -n "Ledger-SMB login: "
read LSLOGIN
echo
echo -n "Ledger-SMB password: "
stty -echo
read LSPWD
stty echo
echo
ARG="login=${LSLOGIN}&password=${LSPWD}&path=bin&action=search&db=customer"
LGIN="login=${LSLOGIN}&password=${LSPWD}&path=bin&action=login"
LGOT="login=${LSLOGIN}&password=${LSPWD}&path=bin&action=logout"
cd /usr/local/ledger-smb
./login.pl $LGIN 2>&1 > /dev/null
./ct.pl $ARG
./login.pl $LGOT 2>&1 > /dev/null
cd $NOW
exit 0
\end{verbatim}
\subsection{Second Script: lsmb02-cli-example.pl}
Our second script is written in perl and logs you in but it still uses the API
in its simplest form, that is, it builds commands and then executes them. This
type of script can be used for more complex solutions than the simple bash script
above, though it is still fairly limited. If your needs require, rather than
having the script build and then execute the commands, it could be written to
generate a shell script which is then executed by hand.
This script begins by prompting for your LedgerSMB login and password. Using
the supplied values a login command is constructed and passed into the runLScmd
subroutine. runLScmd changes directory to /usr/local/ledger-smb/ for the
duration of the call. It formats the command, executes it, and returns both the
output and error information to the caller in a scalar.
The script checks to see if there was an error in the login, exiting if there was.
Next, the script reads some records which are stored in the program following the
\_\_END\_\_ token. It takes each record in turn, formats it, feeds the transaction
through runLScmd, and looks at the results for a string that signals success.
Once all the transactions are processed, runLScmd is called one last time to
logout and the script exits.
\subsubsection{Script 2 (Perl)}
\begin{verbatim}
#!/usr/bin/perl -w
#
# File: lsmb02-cli-example.pl
# Environment: Ledger-SMB 1.2.0+
# Author: Louis B. Moore
#
# Copyright (C) 2006 Louis B. Moore
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# Revision:
# $Id$
#
#
use File::chdir;
use HTML::Entities;
print "\n\nLedger-SMB login: ";
my $login = <STDIN>;
chomp($login);
print "\nLedger-SMB password: ";
system("stty -echo");
my $pwd = <STDIN>;
system("stty echo");
chomp($pwd);
print "\n\n";
$cmd = "login=" . $login . '&password=' . $pwd . '&path=bin&action=login';
$signin = runLScmd("./login.pl",$cmd);
if ( $signin =~ m/Error:/ ) {
print "\nLogin error\n";
exit;
}
while (<main::DATA>) {
chomp;
@rec = split(/\|/);
$arg = 'path=bin/mozilla&login=' . $login . '&password=' . $pwd .
'&action=' . escape(substr($rec[0],0,35)) .
'&db=' . $rec[1] .
'&name=' . escape(substr($rec[2],0,35)) .
'&vendornumber=' . $rec[3] .
'&address1=' . escape(substr($rec[4],0,35)) .
'&address2=' . escape(substr($rec[5],0,35)) .
'&city=' . escape(substr($rec[6],0,35)) .
'&state=' . escape(substr($rec[7],0,35)) .
'&zipcode=' . escape(substr($rec[8],0,35)) .
'&country=' . escape(substr($rec[9],0,35)) .
'&phone=' . escape(substr($rec[10],0,20)) .
'&tax_2150=1' .
'&taxaccounts=2150' .
'&taxincluded=0' .
'&terms=0';
$rc=runLScmd("./ct.pl",$arg);
if ($rc =~ m/Vendor saved!/) {
print "$rec[2] SAVED\n";
} else {
print "$rec[2] ERROR\n";
}
}
$cmd = "login=" . $login . '&password=' . $pwd . '&path=bin&action=logout';
$signin = runLScmd("./login.pl",$cmd);
if ( $signin =~ m/Error:/ ) {
print "\nLogout error\n";
}
exit;
#*******************************************************
# Subroutines
#*******************************************************
sub runLScmd {
my $cmd = shift;
my $args = shift;
my $i = 0;
my $results;
local $CWD = "/usr/local/ledger-smb/";
$cmd = $cmd . " \"" . $args . "\"";
$results = `$cmd 2>&1`;
return $results;
}
sub escape {
my $str = shift;
if ($str) {
decode_entities($str);
$str =~ s/([^a-zA-Z0-9_.-])/sprintf("%%%02x", ord($1))/ge;
}
return $str;
}
#*******************************************************
# Record Format
#*******************************************************
#
# action | db | name | vendornumber | address1 | address2 | city | state | zipcode | country | phone
#
__END__
save|vendor|Parts are Us|1377|238 Riverview|Suite 11|Cheese Head|WI|56743|USA|555-123-3322|
save|vendor|Widget Heaven|1378|41 S. Riparian Way||Show Me|MO|39793|USA|555-231-3309|
save|vendor|Consolidated Spackle|1379|1010 Binary Lane|Dept 1101|Beverly Hills|CA|90210|USA|555-330-7639 x772|
\end{verbatim}
\clearpage
\part{Technical Overview}
\section{Basic Architecture}
LedgerSMB is a web-based Perl program that interfaces with PostgreSQL
using the relevant Perl modules. The code is well partitioned, and
the main operation modules are written in an object oriented way.
\subsection{The Software Stack}
%
\begin{figure}[hbtp]
\label{fig-sl-stack} \input{sl-stack.tex}
\caption{The LedgerSMB software stack in a Typical Implementation}
\end{figure}
LedgerSMB runs in a Perl interpreter. I do not currently know whether
it is possible to run it through Perl2C or other language converters
in other environments. However, except for high-capacity environments,
Perl is a good language choice for this sort of program.
LedgerSMB used to support DB2 and Oracle as well as PostgreSQL. However,
currently some of the functionality is implemented using PostgreSQL
user-defined functions. These would need to be ported to other database
managers in order to make the software work on these. It should not
be too hard, but the fact that it has not been done yet may mean that
there is no real demand for running the software under other RDBMS's.
One can substitute other web servers for Apache. Normally LedgerSMB
is run as a CGI program but it may be possible to run it in the web
server process (note that this may not be entirely thread-safe).
The operating system can be any that supports a web server and Perl
(since PostgreSQL need not run on the same system). However, there
are a few issues running LedgerSMB on Windows (most notably in trying
to get Postscript documents to print properly).
On the client side, any web-browser will work. Currently, the layout
is different for Lynx (which doesn't support frames), and the layout
is not really useful under eLinks (the replacement for Lynx which
does support frames). Some functionality requires Javascript to work
properly, though the application is usable without these features.
\subsection{Capacity Planning}
Some companies may ask how scalable LedgerSMB is. In general, it
is assumed that few companies are going to have a need for a high-concurrency
accounting system. However, with all the features available in LedgerSMB,
the staff that may have access to some part of the application may be numerous
enough to make the question worthwhile.
As of 1.3, there are users with databases containing over a million financial
transactions, thousands of customers and vendors, and a full bookkeeping
department. In general, 1.3 is considered scalable for midsize businesses.
In general if you are going to have a large number of users for the software, or
large databases, we'd generally suggest running the database on a separate
server, since this makes it easier to isolate and address performance issues.
Additionally, the database server is the hardest part to scale horizontally and
so you want to put more hardware there than on the web servers.
\subsubsection{Scalability Strategies}
LedgerSMB is a fairly standard web-based application. However,
sometimes the database schema changes during upgrades. In these cases,
it becomes impossible to use different versions of the software against
the same database version safely. LedgerSMB checks the version of
the database and if the version is higher than the version of the
software that is running, will refuse to run.
Therefore although one strategy might be to run several front-end
web servers with LedgerSMB, in reality this can be a bit of a problem.
One solution is to take half of the front-end servers off-line while
doing the initial upgrade, and then take the other half offline to upgrade
once the first half is brought back online.
The database manager is less scalable in the sense that one cannot
just add more database servers and expect to carry on as normal. However,
aside from the known issues listed below, there are few performance
issues with it. If complex reports are necessary, these can be moved
to a replica database (perhaps using Slony-I or the streaming replication of
PostgreSQL 9.x).
If this solution is insufficient for database scalability, one might
be able to move staff who do not need real-time access to new entries
onto a PG-Pool/Slony-I cluster where new transactions are entered
on the master and other data is looked up on the replica. In certain
circumstances, one can also offload a number of other queries from
the master database in order to minimize the load. LedgerSMB has
very few issues in the scalability of the application itself, and those we
find we correct as quickly as possible.
\subsubsection{Database Maintenance}
PostgreSQL uses a technique called Multi-version Concurrency Control
(MVCC) to provide a snapshot of the database at the beginning of a
statement or transaction (depending on the transaction isolation level).
When a row is updated, PostgreSQL leaves the old row in the database,
and inserts a new version of that row into the table. Over time, unless
those old rows are removed, performance can degrade as PostgreSQL
has to search through all the old versions of the row in order to
determine which one ought to be the current one.
Due to the way the SQL statements are executed in LedgerSMB, many
inserts will also create a dead row.
A second problem occurs in that each transaction is given a transaction
id. These id's are numbered using 32-bit integers. If the transaction
id wraps around (prior to 8.1), data from transactions that appear
(due to the wraparound) to be in the future suddenly becomes inaccessible.
This problem was corrected in PostgreSQL 8.1, where the database will
refuse to accept new transactions if the transaction ID gets too close
to a wraparound. So while the problem is not as serious in 8.1, the
application merely becomes inaccessible rather than displaying apparent
data loss. Wraparound would occur after about a billion transactions
between all databases running on that instance of PostgreSQL.
Prior to 8.1, the main way to prevent both these problems was to run
a periodic vacuumdb command from cron (UNIX/Linux) or the task scheduler
(Windows). In 8.1 or later, autovacuum capabilities are part of the
back-end and can be configured with the database manager. See the
PostgreSQL documentation for treatment of these subjects.
In general, if performance appears to be slowly degrading, one should
try to run vacuumdb -z from the shell in order to attempt to reclaim
space and provide the planner with accurate information about the
size and composition of the tables. If this fails, then one can go
to other methods of determining the bottleneck and what to do about
it.
\subsubsection{Known issues}
There are no known issues in PostgreSQL performance as it relates to LedgerSMB
performance in supported versions.
\section{Customization Possibilities}
LedgerSMB is designed to be customized relatively easily and rapidly.
In general, the source code is well written and compartmentalized.
This section covers the basic possibilities involving customization.
\subsection{Brief Guide to the Source Code}
LedgerSMB is an application with over 120000 lines of code.\footnote{The line
count provided by Ohloh.net is 216000 lines of code, but this includes database
documentation, the source for this manual, and more. The source code is
certainly above 120000 lines by any metric though.} While
it is not possible to cover the entire application here, a brief overview
of the source code is in order.
In the root of the install directory, one will find a setup.pl program,
a number of other .pl programs, and a number of directories. The setup.pl
program is used to update or install LedgerSMB. The other .pl programs
provide a basic set of services for the framework (including authentication)
and then pass the work on to the data entry screen file in the bin
directory.
The bin directory contains the workflow scripts inherited from SQL-Ledger.
These do basic calculations and generate the user interface. The scripts/
directory does the same with new code.
The css directory in the root install directory contains CSS documents
to provide various stylesheets one can select for changing various
aspects of the look and feel of the application.
The locale directory contains translation files that LedgerSMB uses
to translate between different languages. One could add translations
to these files if necessary.
The UI directory contains user interface templates for old and new code. These
templates use TemplateToolkit to generate the basic HTML. The directory also
contains template-specific javascript and css files.
The LedgerSMB directory is where the Perl modules reside that provide the
core business logic in LedgerSMB. These modules provide functionality
such as form handling, email capabilities, and access to the database
through its at least partially object oriented API.
Finally, the sql directory provides the database schemas and upgrade
scripts.
\subsection{Data Entry Screens}
One can customize the data entry screens to optimize work flow, display
additional information, etc.
\subsubsection{Examples}
For one customer, we set up hot keys for payment lines, automatically focused
the keyboard on the last partnumber field, removed the separate print and post
buttons to ensure that invoices were always printed and posted together, and
removed the ability to print to the screen. We even added the ability to
scan items in when an invoice was received (using a portable data
terminal) and import this data into LedgerSMB. Finally we added the
ability to reconcile the till online in a paperless manner.
For another customer, we added the ability to print AR invoices in
plain text format and added templates (based on the POS sales template)
to do this.
\subsection{Extensions}
One can add functionality to the Perl modules in the LedgerSMB directory
and often add missing functions easily.
\subsubsection{Examples}
For one customer, we added a module to take data from a portable data
terminal collected when inventory items were taken and use that to
add shrinkage and loss adjustments. We also extended the parts model
to add a check id flag (for alcohol sales) and added this flag to
the user interface.
For another customer, we added a complex invoice/packing slip tracking
system that allowed one to track all the printed documents associated
with an order or invoice.
\subsection{Templates}
As noted before templates can be modified or extended, though sometimes
this involves extending the user interface scripts. Most templates
are easy enough to modify.
\subsubsection{Examples}
For one customer we added text-only invoices for AR and AP transactions/Invoices
and an ability to use Javascript in them to automatically print them
on load.
\subsection{Reports}
The fact that all the data is available within the database manager
is a huge advantage of LedgerSMB over Quickbooks and the like. It
allows one to rapidly develop reports of any sort within LedgerSMB.
Beginning in 1.4, LedgerSMB has a reporting framework that allows you to turn
general SQL queries into reports that can be output in a variety of formats
including HTML, PDF, CSV, and ODS. This reporting framework is not too
difficult to learn. This section guides you through how to write such a report
in LedgerSMB 1.4.
In this example we will walk through the implementation of the Contact Search
report, which is a fairly basic one. Many features will be
mentioned in passing along with which files they are used in, so if you need to
use advanced features you will know where to look (I recommend checking the
development documentation in the POD of the actual program modules as your
primary resource here).
\subsubsection{Essential Parts of a Report}
A report in LedgerSMB 1.4 consists of the following basic components:
\begin{itemize}
\item Filter screen
\item Stored Procedure
\item Report module
\item Workflow hooks
\end{itemize}
Each of these will be covered in turn.
\subsubsection{Filter Screen}
Every filter screen must have a unique name; this one is placed in
UI/Reports/filters/contact\_search.html. This file creates an HTML form that
submits the search criteria back to a specified script. It is invoked through a
URL like
\url{http://myhost/ledgersmb/reports.pl?action=begin\_report\&report\_name=contact\_search\&module=gl}.
The arguments passed by the HTTP request are: action (what to do), report\_name
(which filter file to render), and module (which set of business reporting units
we might be interested in). Most commonly the correct module is gl which is for
all financial transactions.
As always the filter screen is an HTML template rendered with Template Toolkit,
but using SGML processing instruction tags rather than the default \%-based ones.
The choice of the SGML $<$?lsmb ... ?$>$ PI tags is largely to support LaTeX
better as well as to better conform to HTML and SGML standards.
You may also want to look at UI/lib/report\_base.html for prebuilt controls for
things like date ranges and lists of business report units.
\subsubsection{The Stored Procedure}
The primary work for the report is the stored procedure. There are a couple
general LedgerSMB rules to be aware of. Prefix your arguments with in\_ and
it's generally better to let the query mapper generate the queries than try to
pass args in directly. Typically package and method name are separated by a
double underscore (\_\_) which can be a bit hard to see sometimes.
The stored procedure for the Contact Search report is defined as follows:
\begin{verbatim}
CREATE OR REPLACE FUNCTION contact__search
(in_entity_class int, in_contact text, in_contact_info text[],
in_meta_number text, in_address text, in_city text, in_state text,
in_mail_code text, in_country text, in_active_date_from date,
in_active_date_to date,
in_business_id int, in_name_part text, in_control_code text,
in_notes text)
RETURNS SETOF contact_search_result AS $$
DECLARE
out_row contact_search_result;
loop_count int;
t_contact_info text[];
BEGIN
t_contact_info = in_contact_info;
FOR out_row IN
SELECT e.id, e.control_code, ec.id, ec.meta_number,
ec.description, ec.entity_class,
c.legal_name, c.sic_code, b.description , ec.curr::text
FROM (select * from entity
where control_code like in_control_code || '%'
union
select * from entity where in_control_code is null) e
JOIN (SELECT legal_name, sic_code, entity_id
FROM company
WHERE legal_name @@ plainto_tsquery(in_name_part)
UNION ALL
SELECT legal_name, sic_code, entity_id
FROM company
WHERE in_name_part IS NULL
UNION ALL
SELECT coalesce(first_name, '') || ' '
|| coalesce(middle_name, '') || ' '
|| coalesce(last_name, ''), null, entity_id
FROM person
WHERE coalesce(first_name, '') || ' '
|| coalesce(middle_name, '') || ' '
|| coalesce(last_name, '')
@@ plainto_tsquery(in_name_part)
UNION ALL
SELECT coalesce(first_name, '') || ' '
|| coalesce(middle_name, '') || ' '
|| coalesce(last_name, ''), null, entity_id
FROM person
WHERE in_name_part IS NULL) c ON (e.id = c.entity_id)
LEFT JOIN entity_credit_account ec ON (ec.entity_id = e.id)
LEFT JOIN business b ON (ec.business_id = b.id)
WHERE coalesce(ec.entity_class,e.entity_class) = in_entity_class
AND (c.entity_id IN (select entity_id
FROM entity_to_contact
WHERE contact ILIKE
ANY(t_contact_info))
OR '' ILIKE
ALL(t_contact_info)
OR t_contact_info IS NULL)
AND ((in_address IS NULL AND in_city IS NULL
AND in_state IS NULL
AND in_country IS NULL)
OR (c.entity_id IN
(select entity_id FROM entity_to_location
WHERE location_id IN
(SELECT id FROM location
WHERE (line_one @@ plainto_tsquery(
in_address)
OR
line_two @@ plainto_tsquery(
in_address)
OR
line_three @@ plainto_tsquery(
in_address))
AND city ILIKE
'%' ||
coalesce(in_city, '')
|| '%'
AND state ILIKE
'%' ||
coalesce(in_state, '')
|| '%'
AND mail_code ILIKE
'%' ||
coalesce(in_mail_code,
'')
|| '%'
AND country_id IN
(SELECT id FROM country
WHERE name ilike
in_country
OR short_name
ilike
in_country)))))
AND (ec.business_id =
coalesce(in_business_id, ec.business_id)
OR (ec.business_id IS NULL
AND in_business_id IS NULL))
AND (ec.startdate <= coalesce(in_active_date_to,
ec.startdate)
OR (ec.startdate IS NULL))
AND (ec.enddate >= coalesce(in_active_date_from,
ec.enddate)
OR (ec.enddate IS NULL))
AND (ec.meta_number like in_meta_number || '%'
OR in_meta_number IS NULL)
AND (in_notes IS NULL OR e.id in (
SELECT entity_id from entity_note
WHERE note @@ plainto_tsquery(in_notes))
OR ec.id IN (select ref_key FROM eca_note
WHERE note @@ plainto_tsquery(in_notes)))
LOOP
RETURN NEXT out_row;
END LOOP;
END;
$$ language plpgsql;
\end{verbatim}
This query is fairly complex, but most of it is just a set of filter
conditions: the filters run and return results based on the inputs.
Moreover, because of the magic of the DBObject mapper, an input on the filter
screen named notes is automatically mapped to the in\_notes argument here.
\subsubsection{Report Module}
The report module for this report is
LedgerSMB/DBObject/Report/Contact/Search.pm. This file defines the basic layout
of the report. These modules must all inherit from LedgerSMB::DBObject::Report
and therefore must also use Moose. This also automatically makes the DBObject
query mapping routines available to report objects.
All reports are assumed to be tabular structures. This isn't always the case,
but non-tabular reports are an advanced topic beyond this introduction; see
the income statement and balance sheet reports for examples of how they are
handled.
All criteria are handled as Moose properties (using the ``has'' function), and
it is important that they be listed here; properties that are not listed will
not be available to your stored procedure. Reports are also assumed to have
rows, and the Report class handles the rows property automatically.
Additionally the following functions are required in your module:
\begin{description}
\item[columns] This returns a list of column data. Each is a hashref with
the following attributes:
\begin{description}
\item[col\_id] The unique name of the column
\item[type] Typically `text' though `checkbox' and `href' are not
uncommon.
\item[href\_base] Only valid for type href, and is the beginning of the
href.
\item[pwidth] Size to print on an arbitrary scale, for PDF only.
\item[name] Label for the column
\end{description}
\item[name] This returns the name of the report, to print in the header
\item[header\_lines] Returns a list of hashrefs (name/text are keys) used to
    print the criteria information in the header of the report.
\end{description}
Additionally subtotal\_on (returning a list of column names for subtotals) is
optional.
Typically the convention is that a prepare\_input function takes a \$request
object and turns dates into appropriate types, while a run\_report function
actually runs the report and sets the rows property. render(\$request) is
provided by the Report class (see workflow hooks below).
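A rough sketch of such a module, under the conventions just described, might
look like the following. The property names, column definitions, and the
exec\_method call are illustrative assumptions rather than the actual contents
of Search.pm.
\begin{verbatim}
package LedgerSMB::DBObject::Report::Contact::Search;
use Moose;
extends 'LedgerSMB::DBObject::Report';

# Criteria are Moose properties; anything not declared here is not
# available to the stored procedure.
has 'name_part'    => (is => 'rw', isa => 'Maybe[Str]');
has 'control_code' => (is => 'rw', isa => 'Maybe[Str]');

sub columns {           # column definitions for the tabular output
    return [
      { col_id => 'legal_name',  type => 'text', name => 'Name', pwidth => 3 },
      { col_id => 'meta_number', type => 'href', name => 'Account Number',
        href_base => 'contact.pl?action=get&id=', pwidth => 2 },
    ];
}

sub name { return 'Contact Search' }   # printed in the report header

sub header_lines {      # criteria summary printed under the header
    return [ { name => 'name_part', text => 'Name Part' } ];
}

sub run_report {        # run the stored procedure and set the rows property
    my ($self) = @_;
    my @rows = $self->exec_method(funcname => 'contact__search');
    $self->rows(\@rows);
}

__PACKAGE__->meta->make_immutable;
1;
\end{verbatim}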
\subsubsection{Workflow Hooks}
In the workflow script one will typically see some lines like:
\begin{verbatim}
sub search{
my ($request) = @_;
LedgerSMB::DBObject::Report::Contact::Search->prepare_criteria($request);
my $report = LedgerSMB::DBObject::Report::Contact::Search->new(%$request);
$report->run_report;
$report->render($request);
}
\end{verbatim}
A few notes about the above lines are worth making, though usually this code
can simply be copied, pasted, and modified to fit.
The render call requires a request object because it checks whether a specific
set of columns has been requested. If any columns are explicitly requested, it
displays only those; otherwise it displays all available columns. A column is
explicitly requested if the request contains a true value for a parameter named
col\_\$colname, so a column named city would be requested by setting col\_city.
Ordering and subtotals are then handled automatically as well.
\section{Integration Possibilities}
An open database system and programming API allow for many types
of integration. There are some challenges, but in the end one can
integrate a large number of tools.
\subsection{Reporting Tools}
Any reporting tool which can access the PostgreSQL database can be
used with LedgerSMB for custom reporting. These can include programs
like Microsoft Access and Excel (using the ODBC drivers), PgAccess
(A PostgreSQL front-end written in TCL/Tk with a similar feel to Access),
Rekall, Crystal Reports, OpenOffice and more.
\subsubsection{Examples}
We have created spreadsheets of the summaries of activity by day and
used the ODBC driver to import these into Excel. Excel can also read
HTML tables, so one can use PostgreSQL to create an HTML table of
the result and save it with a .xls extension so that Windows opens
it with Excel. These could then be served via the same web server
that serves LedgerSMB.
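As a concrete illustration of the HTML approach, a psql invocation along the
following lines (the table and column names are placeholders only) writes an
HTML table that Excel will open when given a .xls extension:
\begin{verbatim}
psql -d ledgersmb -H \
     -c "SELECT transdate, sum(amount) AS total
           FROM acc_trans
          GROUP BY transdate
          ORDER BY transdate" \
     -o daily_summary.xls
\end{verbatim}
The resulting file can then be dropped into the directory served by the same
web server mentioned above.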
\subsection{Line of Business Tools on PostgreSQL}
Various line of business tools have been written using PostgreSQL
in large part due to the fact that it is far more mature than MySQL
in areas relating to data integrity enforcement, transactional processing,
and the like. These tools can be integrated with LedgerSMB in various
ways. One could integrate this program with the HERMES CRM framework,
for example.
\subsubsection{Strategies}
In general, it is advisable to run all such programs that benefit
from integration in the same database but under different schemas.
This allows PostgreSQL to become the main method of synchronizing
the data in real time. However, sometimes this can require dumping
the database, recreating the tables etc. in a different schema and
importing the data back into LedgerSMB.
One possibility for this sort of integration is to use database triggers
to replicate the data between the applications in real-time. This
can avoid the main issue of duplicate id's. One issue that can occur
however relates to updates. If one updates a customer record in HERMES,
for example, how do we know which record to update in LedgerSMB?
There are solutions to this problem but they do require some forethought.
A second possibility is to use views to allow one application to present
the data from the other as its own. This can be cleaner regarding
update issues, but it can also pose issues regarding duplicate id
fields.
\subsubsection{Examples}
Others have integrated L'ane POS and LedgerSMB in order to make it
work better with touch screen devices. Still others have successfully
integrated LedgerSMB and Interchange. In both cases, I believe that
triggers were used to perform the actual integration.
\subsection{Line of Business Tools on other RDBMS's}
Often there are requests to integrate LedgerSMB with applications
like SugarCRM, OSCommerce, and other applications running on MySQL
or other database managers. This is a far more complex field and it
requires a great deal more effort than integrating applications within
the same database.
\subsubsection{Strategies}
Real-time integration is not always possible. MySQL does
not support the SQL/MED extension (Management of External Data) for
accessing non-MySQL data sources, so it is not possible to replicate the
data in real time. Therefore
one generally resorts to integrating the systems using time-based updates.
Replication may be somewhat error-prone unless the database manager
supports triggers (first added to MySQL in 5.0) or other mechanisms
to ensure that all changed records can be detected and replicated.
In general, it is usually advisable to add two fields to the record:
one that shows the insert time and one that shows the last update.
Additionally, I would suggest adding extra information to the
LedgerSMB tables so that you can track the source record in the
other application in the case of an update.
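As a sketch only, assuming a hypothetical customer table being mirrored, the
tracking fields and a source reference might be added like this:
\begin{verbatim}
-- Track when each row was inserted and last changed (the update column
-- must be maintained by the application or by a trigger).
ALTER TABLE customer
  ADD COLUMN inserted_at timestamp NOT NULL DEFAULT now(),
  ADD COLUMN updated_at  timestamp NOT NULL DEFAULT now();

-- Remember which record in the other application this row came from.
ALTER TABLE customer
  ADD COLUMN external_source text,
  ADD COLUMN external_id     text;
\end{verbatim}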
In general, one must write replication scripts that dump the information
from one and add it to the other. This must go both ways.
\subsubsection{Integration Products and Open Source Projects}
While many people write Perl scripts to accomplish the replication,
an open source project exists called DBI-Link. This package requires
PL/Perl to be installed in PostgreSQL, and it allows PostgreSQL to
present any data accessible via Perl's DBI framework as PostgreSQL
tables. DBI-Link can be used to allow PostgreSQL to pull the data
from MySQL or other database managers.
DBI-Link can simplify the replication process by reducing the operation
to a set of SQL queries.
\section{Customization Guide}
This section is meant to provide a programmer with enough of an
understanding of the technologies involved to get up to speed quickly
and to minimize the time spent becoming familiar with the software.
Topics in this section are listed in order of complexity. As it appeals
to a narrower audience than the previous discussions of this topic, it
is listed separately.
\subsection{General Information}
The main framework scripts (the ar.pl, ap.pl, etc. scripts found in
the root of the installation directory) handle such basic features
as instantiating the form object, ensuring that the user is logged
in, and the like. They then pass the execution off to the user interface
script (usually in the bin/mozilla directory).
LedgerSMB may in many ways look object oriented in its design,
but in reality it is far more data-driven than object oriented. The
Form object is used largely as a global symbol table and also as a
collection of fundamental routines for things like database access.
It also breaks the query string down into sets of variables, which
are stored in its attribute hash table.
In essence one can and often will store all sorts of data structures
in the primary Form object. These can include almost anything. It
is not uncommon to see lists of hashes stored as attributes to a Form
object.
\subsection{Customizing Templates}
Templates are used to generate printed checks, invoices, receipts,
and more in LedgerSMB. Often the format of these items does not fit
a specific set of requirements and needs to be changed. This document
will not include \LaTeX{} or HTML instruction, but will include a
general introduction to editing templates. Also, this is not intended
to function as a complete reference.
Template instructions are contained in tags \textless?lsmb and ?\textgreater.
The actual parsing is done by the parse\_template function in LSMB/Form.pm.
\subsubsection{Template Control Structures}
As of 1.3, all templates use Template Toolkit syntax for generating \LaTeX{},
text, and HTML output. The \LaTeX{} can then be processed to create PostScript
or PDF files, and this could trivially be extended to allow for DVI output as
well. Template Toolkit provides a rich set of structures for controlling flow,
well beyond what was available in previous versions of LedgerSMB. The only
difference from stock Template Toolkit is in the start and end tag sequences,
where we use \textless?lsmb and ?\textgreater{} in order to avoid problems when
rendering \LaTeX{} templates for testing purposes.
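For example, a fragment of an invoice template that loops over line items could
look like the following; the variable names are illustrative, but the tag and
control structure syntax is standard Template Toolkit:
\begin{verbatim}
<?lsmb FOREACH line IN invoice.lines ?>
  <?lsmb line.description ?>  <?lsmb line.amount ?>
<?lsmb END ?>

<?lsmb IF invoice.notes ?>
Notes: <?lsmb invoice.notes ?>
<?lsmb END ?>
\end{verbatim}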
\subsection{Customizing Forms}
Data entry forms and other user interface pieces are in the bin directory.
In LedgerSMB 1.0.0 and later, symlinks are not generally used.
Each module is identified with a two letter combination: ar, ap, cp,
etc. These combinations are generally explained in the comment headers
on each file.
Execution in these files begins with the function designated by the
form-\textgreater$\lbrace$action$\rbrace$ variable. This variable is usually
derived from configuration parameters in menu.ini or from the name
of the button that was clicked to submit the previous page. Due
to localization requirements, the following process is used to determine
the appropriate action to take:
The \$locale-\textgreater getsub routine is called. This routine
checks the locale package to determine if the value needs to be translated
back into an appropriate LSMB function. If not, the variable is lower-cased,
and all spaces are converted into underscores.
In general there is no substitute for reading the code to understand
how this can be customized and how one might go about doing this.
In 1.3, all UI templates for new code (referenced in the scripts directory), and
even some for the old code (in the bin directory), use Template Toolkit and
follow the same rules as the other templates.
\subsection{Customizing Modules}
The Perl Modules (.pm files) in the LedgerSMB directory contain the
main business logic of the application including all database access.
Most of the modules are relatively easy to follow, but the code quality
leaves something to be desired. The API calls themselves are
likely to be replaced in the future, so work on documenting API calls is
now focused solely on those calls that are considered to be stable.
At the moment, the best place to request information on the APIs is the
Developers' Email List (\mailto{[email protected]}).
Many of these modules have a fair bit of dormant code in them which
was written for forthcoming features, such as payroll and bills of
materials.
One can add a new module through the normal means and connect it to
other existing modules.
\subsubsection{Database Access}
Both the Form (old code) and LedgerSMB (new code) objects have a dbh property
which is a database handle to the PostgreSQL database. This handle does not
autocommit.
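A minimal sketch of using that handle directly (the query is illustrative only)
looks like the following; because AutoCommit is off, writes are only made
permanent by an explicit commit:
\begin{verbatim}
my $dbh = $form->{dbh};  # new code gets the same handle from the LedgerSMB object

# Always use placeholders rather than interpolating values into the SQL.
my $sth = $dbh->prepare(
    'SELECT id, legal_name FROM company WHERE legal_name ILIKE ?');
$sth->execute('%' . $name_part . '%');

while (my ($id, $legal_name) = $sth->fetchrow_array) {
    # ... work with each row ...
}

# Any INSERT/UPDATE/DELETE issued on this handle only becomes permanent here:
$dbh->commit;
\end{verbatim}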
\clearpage
\part{Appendix}
\appendix
%dummy comment inserted by tex2lyx to ensure that this paragraph is not empty
\section{Where to Go for More Information}
There are a couple of relevant sources of information on LedgerSMB
in particular.
The most important resources are the LedgerSMB web site
(\url{http://www.ledgersmb.org}) and the email lists found at our Sourceforge
site.
In addition, it is generally recommended that the main bookkeeper
of a company using LedgerSMB work through at least one accounting
textbook. Which textbook is not as important as the fact that a textbook
is used.
\section{Quick Tips}
\subsection{Understanding Shipping Addresses and Carriers}
Each customer can have a default shipping address. This address is
displayed prominently on the add new customer screen. To change the
shipping address for a single order, one can use the Ship To button
at the bottom of the quote, order, or invoice screen.
The carrier can be noted in the Ship Via field. However, this is a
freeform field and is largely used as commentary (or instructions
for the shipping crew).
\subsection{Handling bad debts}
In the event that a customer's check bounces or a collection requirement
is made, one can flag the customer's account by setting the credit
limit to a negative number.
If a debt needs to be written off, one can either use the allowance
method (by writing it off against the contra asset account of \char`\"{}Allowance
for Bad Debts\char`\"{}) or use the direct writeoff method, where
it is posted as an expense.
\section{Step by Steps for Vertical Markets}
\subsection{Common Installation Errors}
\begin{itemize}
\item LedgerSMB is generally best installed in its own directory outside
of the wwwroot directory. While it is possible to install it inside
the wwwroot directory, the instructions and the FAQ do not cover the
problems commonly encountered with that setup.
\item Altering the chart of accounts (COA) so that it no longer contains the
appropriate items can make it impossible to define goods and services
properly. In general, until you are familiar with the software, it is best to
rename and add accounts rather than delete them.
\end{itemize}
\subsection{Retail With Light Manufacturing}
For purposes of this example we will use a business that assembles
computers and sells them in a retail store.
\begin{enumerate}
\item Install LedgerSMB.
\item Set preferences, and customize chart of accounts.
\begin{enumerate}
\item Before customizing the COA it is suggested that you consult an accountant.
\end{enumerate}
\item Define Goods, Labor, and Services as raw parts ordered from the vendors.
\begin{itemize}
\item These are located under the goods and services menu node.
\end{itemize}
\item Define assemblies.
\begin{itemize}
\item These are also located under goods and services.
\item Component goods and services must be defined prior to creating the assembly.
\end{itemize}
\item Enter an AP Invoice to populate inventory with proper raw materials.
\begin{itemize}
\item One must generally add a generic vendor first. The vendor is added
under AP-\textgreater Vendors-\textgreater Add Vendor.
\end{itemize}
\item To pay an AP invoice with a check, go to Cash-\textgreater Payment. Fill out
the appropriate fields and click print.
\begin{itemize}
\item Note that one should select an invoice and enter in the payment amount
in the appropriate line of the invoice list. If you add amounts to
the master amount list, you will find that they are added to the amount
paid on the invoice as a prepayment.
\item The source field is the check number.
\end{itemize}
\item Stock assemblies
\item One can use AR Invoices or the POS interface to sell goods and services.
\begin{itemize}
\item Sales Invoice
\begin{itemize}
\item Can be generated from orders or quotations
\item Cannot include labor/overhead except as part of an assembly
\item One can make the mistake of printing the invoice and forgetting to
post it. In this event, the invoice does not officially exist in the
accounting system.
\item For new customers, you must add the customer first (under AR-\textgreater
Customers-\textgreater Add Customer).
\end{itemize}
\item POS Interface
\begin{itemize}
\item Cannot include labor/overhead except as part of an assembly
\item Printing without posting is often even easier in the POS because of
the rapid workflow. Yet it is just as severe a problem.
\end{itemize}
\item Ecommerce and Mail Order Operations
\begin{itemize}
\item See the shipping workflow documentation above.
\end{itemize}
\item Customers are set up by going to AR-\textgreater Customers-\textgreater
Add Customer (or the equivalent localized translation). The appropriate
fields are filled out and the buttons are used at the bottom to save
the record and optionally use it to create an invoice, etc.
\begin{itemize}
\item Saving a customer returns to the customer screen. After the appropriate
invoice, transaction, etc. is entered and posted, LedgerSMB will
return to the add customer screen.
\end{itemize}
\end{itemize}
\item One can use the requirements report to help determine what parts need
to be ordered though one cannot generate PO's directly from this report.
\end{enumerate}
Note that LedgerSMB is mostly suited to light manufacturing
operations (assembling computers, for example). More manufacturing
capabilities are expected to be released in the next version.
A custom assembly is a bit more difficult to create. One must add the assembly
prior to invoicing (this is not true of goods and services). If the
assembly is based on a different assembly but may cost more (due to
non-standard parts), you can load the old assembly using Goods and
Services-\textgreater Reports-\textgreater Assemblies, make the
necessary changes (including to the SKU/Partnumber), and save it as
new.
Then one can add it to the invoice.
\section{Glossary}
\begin{description}
\item [{BIC}] Bank Identifier Code is often the same as the S.W.I.F.T.
code. This is a code for the bank a customer uses for automated money
transfers.
\item [{COGS}] is Cost of Goods Sold. When an item is sold, the expense
of its purchase is recognized against the income of the sale.
It is tracked as COGS.
\item [{Credit}] : A logical transactional unit in double entry accounting.
It is the opposite of a debit. Credits affect different account types
as follows:
\begin{description}
\item [{Equity}] : Credits are added to the account when money is invested
in the business.
\item [{Asset}] : Credits are added when money is deducted from an asset
account.
\item [{Liability}] : Credits are added when the amount the business owes
increases.
\item [{Income}] : Credits are added when income is earned.
\item [{Expense}] : Credits are used to apply adjustments at the end of
accounting periods to indicate that not all the expense for an AP
transaction has been fully accrued.
\end{description}
\item [{Debit}] : A logical transactional unit in double entry accounting
systems. It is the opposite of a credit. Debits affect different account
types as follows:
\begin{description}
\item [{Equity}] : Debits are added when money is paid to business owners.
\item [{Asset}] : Debits are added when money is added to an account.
\item [{Liability}] : Debits are added when money that is owed is paid
off.
\item [{Income}] : Debits are used to temporarily adjust income to defer
unearned income to the next accounting period.
\item [{Expense}] : Debits are added as expenses are incurred.
\end{description}
\item [{IBAN}] International Bank Account Number is related to the BIC
and is used for cross-border automated money transfers.
\item [{List}] Price is the recommended retail price.
\item [{Markup}] is the percentage increase that is applied to the last
cost to get the sell price.
\item [{ROP}] Re-order point. Items with fewer in stock than this will
show up on short reports.
\item [{Sell}] Price is the price at which the item is sold.
\item [{SKU}] Stock Keeping Unit: a number designating a specific product.
\item [{Source}] Document : a paper document that can be used as evidence
that a transaction occurred. Source documents can include canceled
checks, receipts, credit card statements and the like.
\item [{Terms}] is the number of days one has to pay the invoice. Most
businesses abbreviate the terms as Net n where n is the number of
days. For example, Net 30 means the customer has 30 days to pay the
net due on an invoice before it is late and incurs late fees. Sometimes
you will see 2 10 net 30 or similar. This means 2\% discount if paid within
10 days but due within 30 days in any case.
\end{description}
\end{document}
% Default to the notebook output style
% Inherit from the specified cell style.
\documentclass[11pt]{article}
\usepackage[T1]{fontenc}
% Nicer default font (+ math font) than Computer Modern for most use cases
\usepackage{mathpazo}
% Basic figure setup, for now with no caption control since it's done
% automatically by Pandoc (which extracts  syntax from Markdown).
\usepackage{graphicx}
% We will generate all images so they have a width \maxwidth. This means
% that they will get their normal width if they fit onto the page, but
% are scaled down if they would overflow the margins.
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth
\else\Gin@nat@width\fi}
\makeatother
\let\Oldincludegraphics\includegraphics
% Set max figure width to be 80% of text width, for now hardcoded.
\renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}}
% Ensure that by default, figures have no caption (until we provide a
% proper Figure object with a Caption API and a way to capture that
% in the conversion process - todo).
\usepackage{caption}
\DeclareCaptionLabelFormat{nolabel}{}
\captionsetup{labelformat=nolabel}
\usepackage{adjustbox} % Used to constrain images to a maximum size
\usepackage{xcolor} % Allow colors to be defined
\usepackage{enumerate} % Needed for markdown enumerations to work
\usepackage{geometry} % Used to adjust the document margins
\usepackage{amsmath} % Equations
\usepackage{amssymb} % Equations
\usepackage{textcomp} % defines textquotesingle
% Hack from http://tex.stackexchange.com/a/47451/13684:
\AtBeginDocument{%
\def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code
}
\usepackage{upquote} % Upright quotes for verbatim code
\usepackage{eurosym} % defines \euro
\usepackage[mathletters]{ucs} % Extended unicode (utf-8) support
\usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document
\usepackage{fancyvrb} % verbatim replacement that allows latex
\usepackage{grffile} % extends the file name processing of package graphics
% to support a larger range
% The hyperref package gives us a pdf with properly built
% internal navigation ('pdf bookmarks' for the table of contents,
% internal cross-reference links, web links for URLs, etc.)
\usepackage{hyperref}
\usepackage{longtable} % longtable support required by pandoc >1.10
\usepackage{booktabs} % table support for pandoc > 1.12.2
\usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment)
\usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout)
% normalem makes italics be italics, not underlines
% Colors for the hyperref package
\definecolor{urlcolor}{rgb}{0,.145,.698}
\definecolor{linkcolor}{rgb}{.71,0.21,0.01}
\definecolor{citecolor}{rgb}{.12,.54,.11}
% ANSI colors
\definecolor{ansi-black}{HTML}{3E424D}
\definecolor{ansi-black-intense}{HTML}{282C36}
\definecolor{ansi-red}{HTML}{E75C58}
\definecolor{ansi-red-intense}{HTML}{B22B31}
\definecolor{ansi-green}{HTML}{00A250}
\definecolor{ansi-green-intense}{HTML}{007427}
\definecolor{ansi-yellow}{HTML}{DDB62B}
\definecolor{ansi-yellow-intense}{HTML}{B27D12}
\definecolor{ansi-blue}{HTML}{208FFB}
\definecolor{ansi-blue-intense}{HTML}{0065CA}
\definecolor{ansi-magenta}{HTML}{D160C4}
\definecolor{ansi-magenta-intense}{HTML}{A03196}
\definecolor{ansi-cyan}{HTML}{60C6C8}
\definecolor{ansi-cyan-intense}{HTML}{258F8F}
\definecolor{ansi-white}{HTML}{C5C1B4}
\definecolor{ansi-white-intense}{HTML}{A1A6B2}
% commands and environments needed by pandoc snippets
% extracted from the output of `pandoc -s`
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\newenvironment{Shaded}{}{}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}}
\newcommand{\RegionMarkerTok}[1]{{#1}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\NormalTok}[1]{{#1}}
% Additional commands for more recent versions of Pandoc
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}}
\newcommand{\ImportTok}[1]{{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}}
\newcommand{\BuiltInTok}[1]{{#1}}
\newcommand{\ExtensionTok}[1]{{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
% Define a nice break command that doesn't care if a line doesn't already
% exist.
\def\br{\hspace*{\fill} \\* }
% Math Jax compatability definitions
\def\gt{>}
\def\lt{<}
% Document parameters
\title{Quantum Computing Basics}
% Pygments definitions
\makeatletter
\def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax%
\let\PY@ul=\relax \let\PY@tc=\relax%
\let\PY@bc=\relax \let\PY@ff=\relax}
\def\PY@tok#1{\csname PY@tok@#1\endcsname}
\def\PY@toks#1+{\ifx\relax#1\empty\else%
\PY@tok{#1}\expandafter\PY@toks\fi}
\def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{%
\PY@it{\PY@bf{\PY@ff{#1}}}}}}}
\def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}}
\expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}}
\expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}}
\expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}}
\expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}}
\expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}}
\expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}}
\expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}}
\expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit}
\expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf}
\expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}}
\expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}}
\expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}}
\expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\def\PYZbs{\char`\\}
\def\PYZus{\char`\_}
\def\PYZob{\char`\{}
\def\PYZcb{\char`\}}
\def\PYZca{\char`\^}
\def\PYZam{\char`\&}
\def\PYZlt{\char`\<}
\def\PYZgt{\char`\>}
\def\PYZsh{\char`\#}
\def\PYZpc{\char`\%}
\def\PYZdl{\char`\$}
\def\PYZhy{\char`\-}
\def\PYZsq{\char`\'}
\def\PYZdq{\char`\"}
\def\PYZti{\char`\~}
% for compatibility with earlier versions
\def\PYZat{@}
\def\PYZlb{[}
\def\PYZrb{]}
\makeatother
% Exact colors from NB
\definecolor{incolor}{rgb}{0.0, 0.0, 0.5}
\definecolor{outcolor}{rgb}{0.545, 0.0, 0.0}
% Prevent overflowing lines due to hard-to-break entities
\sloppy
% Setup hyperref package
\hypersetup{
breaklinks=true, % so long urls are correctly broken across lines
colorlinks=true,
urlcolor=urlcolor,
linkcolor=linkcolor,
citecolor=citecolor,
}
% Slightly bigger margins than the latex defaults
\geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in}
\begin{document}
\maketitle
\section{A Brief Introduction to Quantum
Mechanics}\label{a-breif-introduction-to-quantum-mechanics}
Quantum mechanics studies the smallest parts of our universe. One main
difference between the quantum mechanical world and the world we
interact with daily is that quantum particles behave probabilistically.
That is, where a classical object will always follow the same
interactions, quantum particles can behave differently each time, with their
behavior described by probability density functions. A particle's
probability density function is given by its \emph{wavefunction},
\(\psi(x)\), where \(|\psi(x)|^2\) (the magnitude squared) is the
probability density for finding that particle at position \(x\).
What does it mean to define a particle's position by a continuous
function? This is where quantum mechanics requires a new method of
thinking. Normally when we think of an object's position, it can be
defined in terms of \((x, y, z)\) coordinates, and exists only in that
location. Quantum particles, however, in a sense ``exist'' in \emph{all}
locations until measured. Once measured, the continuous wavefunction
collapses to a single point, defining the location of the particle.
For example, if a particle has a wave function defined by a sine
function on the interval \([0, \pi]\), the PDF will look like:
\begin{figure}
\centering
\includegraphics{images/sin_pdf.png}
\caption{}
\end{figure}
Notice that the function has a normalization constant out front. This
is required because the total probability of finding the particle somewhere
must equal one. Mathematically, the normalization constant can be found by
setting the integral of the PDF over the whole interval equal to one and
solving for the constant, as in the short worked example below.
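For the sine wavefunction above, writing \(\psi(x) = A\sin(x)\) on \([0, \pi]\)
and requiring the total probability to be one gives
\[
\int_0^\pi |A\sin(x)|^2\,dx = A^2\,\frac{\pi}{2} = 1
\quad\Longrightarrow\quad A = \sqrt{\frac{2}{\pi}},
\]
so the normalized wavefunction is \(\psi(x) = \sqrt{2/\pi}\,\sin(x)\).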
This wavefunction suggests that the \emph{most likely} location to
measure our particle is in the middle, at \(\frac{\pi}{2}\). However, if
we were to measure a series of particles that all had this wavefunction, not
all would be found at \(\frac{\pi}{2}\), which is the most important
point of this example. The locations of particles are probabilistically
defined by their wavefunctions and are unknown until measured. This
phenomenon is the beginning of what gives quantum processors their
advantages over classical computers and will become more apparent in
later sections.
\subsubsection{Dirac Notation}\label{dirac-notation}
Before continuing into quantum mechanics, it is important to establish a
different style for representing linear algebra. Dirac notation, also
known as Bra-Ket notation, allows us to easily represent quantum states.
Bra-ket notation uses two new vector representations, bras and kets,
which are equivalent to row vectors and column vectors, respectively.
Bras are represented as \[\langle a |\] and kets by \[| b \rangle\]
We define some general bras and kets, each with two elements, for
example purposes:
\[\langle a | = \begin{pmatrix} a_1 \ a_2 \end{pmatrix}\]
\[| b \rangle = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}\]
Like normal vectors, bras and kets can be scaled
\[n\langle a | = \begin{pmatrix} na_1 \ na_2 \end{pmatrix}\]
\[n| b \rangle = \begin{pmatrix} nb_1 \\ nb_2 \end{pmatrix}\]
Bras can be added to bras, and kets can be added to kets
\[ | b \rangle + | c \rangle = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} + \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} b_1 + c_1 \\ b_2 + c_2 \end{pmatrix}\]
A bra can be transformed into a ket and vice versa by taking the
conjugate transpose, or dagger operator of the state:
\[\langle a |^{*T} = \langle a |^\dagger = \begin{pmatrix} a_1 \ a_2 \end{pmatrix}^\dagger = \begin{pmatrix} a_1^* \\ a_2^* \end{pmatrix} = | a \rangle\]
And a normal linear algebra dot product is represented by
\[\langle a | b \rangle = \begin{pmatrix} a_1 \ a_2 \end{pmatrix} \cdot \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = a_1b_1 + a_2b_2\]
which is referred to as the \emph{inner product}. Reversing the order
of the bra and ket results in a matrix, known as the
\emph{outer product}.
\[ | b \rangle \langle a | = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} \begin{pmatrix} a_1 \ a_2 \end{pmatrix} = \begin{pmatrix} b_1a_1 & b_1a_2 \\ b_2a_1 & b_2a_2 \end{pmatrix} \]
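As a quick numerical check of both operations, take
\(\langle a | = \begin{pmatrix} 1 & 2 \end{pmatrix}\) and
\(| b \rangle = \begin{pmatrix} 3 \\ 4 \end{pmatrix}\):
\[
\langle a | b \rangle = 1\cdot 3 + 2\cdot 4 = 11,
\qquad
| b \rangle \langle a | = \begin{pmatrix} 3 & 6 \\ 4 & 8 \end{pmatrix}.
\]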
\subsubsection{Basis Vectors and
Superposition}\label{basis-vectors-and-superposition}
Bras and kets are used to define the \textbf{basis vectors} which will
be incredibly important going forwards. In the Z basis, these are
defined as
\[|0\rangle \ = \ \begin{pmatrix} 1 \\ 0 \end{pmatrix} \ \ \ \ \ \ |1\rangle \ = \ \begin{pmatrix} 0 \\ 1 \end{pmatrix}\]
These are the most fundamental vectors for our purposes, as a particle
in the \(|0\rangle\) state will have an output measurement of 0, and a
particle in the \(|1\rangle\) state will have an output measurement of
1, \emph{when taking a z measurement}.
It is possible for a particle to be in a \emph{linear combination} of
states. For example, a particle might be in the general state
\[|\psi\rangle = \alpha|0\rangle + \beta|1\rangle\] where \(\alpha\) and
\(\beta\) are constants. Quantum states must always be normalized,
meaning \(|\alpha|^2 + |\beta|^2 = 1\). A particle existing in such a
linear combination of states is said to be in \emph{superposition}.
When measuring a qubit, the resulting output will either be a 0 or a 1,
meaning that our superposition state is not directly observable. When
measuring the state \(|\psi\rangle\), the probability of measuring a 0
is \(|\alpha|^2\), and the probability of measuring a 1 is
\(|\beta|^2\).
Let us solve for a state which has equal probabilities for the outcome
of a z measurement. Equal probabilities means
\(|\alpha|^2 = |\beta|^2\), and using our normalization condition we can
solve for the constants.
\[|\alpha|^2 + |\beta|^2 = 1\]
\[2|\alpha|^2 = 1\]
\[\alpha = \frac{1}{\sqrt{2}}\]
\[\implies |\psi\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)\]
Measuring a particle in this state will result in 0 and 1 with equal
probability. The state we just produced is one of the \textbf{x basis}
states, which are written as
\[|+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)\]
and
\[|-\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\]
The third basis, which is the \textbf{y basis}, is defined with complex
numbers, as follows
\[|\circlearrowright\rangle = \frac{ | 0 \rangle + i | 1 \rangle}{\sqrt{2}}\]
\[|\circlearrowleft\rangle = \frac{ | 0 \rangle -i | 1 \rangle}{\sqrt{2}}\]
\subsubsection{Classical Bits versus Quantum Bits
(Qubits)}\label{classical-bits-versus-quantum-bits-qubits}
The quantum mechanical particles described above are the base unit of
quantum information used in a quantum computer. Where in classical
computing the base unit is called a \emph{bit}, in quantum computing
they are called quantum bits, or \emph{qubits}. A single qubit can be in
the states described above, and it is the physical entity that we
manipulate and measure.
A classical bit is straightforward and easy to define. A bit either
takes the value of a 0 or a 1. The classical bit is in a single state at
any given time, and any measurement will simply result in that state,
which is intuitive. As described above, a qubit can be in 0 or 1 like
the classical bit, but it can also be in any superposition between 0 and 1.
This is most easily visualized on the \textbf{Bloch sphere}, the
unit sphere representing the possible linear combinations of the x,
y, and z basis states. The following figure shows the Bloch sphere, with a
vector representing the \(|0\rangle\) state.
\begin{figure}
\centering
\includegraphics{images/bloch_0.png}
\caption{}
\end{figure}
The \(|+\rangle\) state points purely along the x axis, which is halfway
between \(|0\rangle\) and \(|1\rangle\).
\begin{figure}
\centering
\includegraphics{images/bloch_+.png}
\caption{}
\end{figure}
And the \(|\circlearrowright\rangle\) state is along the y axis.
\begin{figure}
\centering
\includegraphics{images/bloch_y.png}
\caption{}
\end{figure}
Along with the three sets of basis vectors, measurements are made in a
single basis. The default measurement used in Qiskit measures the z
component of a state. When a particle is measured once and reveals an
output state, further measurements on that particle without any other
state changes will give the same output. For example, if the
\(|+\rangle\) state is measured and the outcome is 1, performing more
measurements will \emph{always} give 1. However, after measuring
in the z basis, the particle's x and y measurement outcomes become 50/50 again.
We will see how to manipulate the qubit to measure in the x and y bases
later, but it is important to note that the particle can only be
deterministic in one basis at a time.
\subsubsection{Manipulating Qubits}\label{manipulating-qubits}
When qubits are initialized, they are put into the \(|0\rangle\) state.
In order to change a qubit's state, we use quantum gates. Quantum gates
have matrix representations and typically perform rotations of the state
vector around the Bloch sphere. The output state is given by the
matrix multiplication of the gate and the input state. We will start
with the simplest gates, known as the Paulis: \(X, Y\) and \(Z\). These
gates have the following matrix representations:
\[
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad
Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
\]
We will look at an example of applying the \(X\) gate to the
\(|0\rangle\) state:
\[X |0\rangle = \begin{pmatrix} 0&1 \\ 1&0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} = |1\rangle\]
This takes a 0 input state and transforms it to a 1. Other transitions
are
\[
Z |+\rangle = |-\rangle, \qquad Z |-\rangle = |+\rangle,
\]
\[
Y |0\rangle = i|1\rangle, \qquad Y |1\rangle = -i|0\rangle.
\]
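For instance, the first of these follows directly from the matrix form of
\(Z\) and the definition of \(|+\rangle\):
\[
Z|+\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1&0 \\ 0&-1 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix}
= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = |-\rangle.
\]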
These are just a couple examples of how qubits can be manipulated. The
following sections will cover the various gates and their implementation
in Qiskit.
\subsubsection{Quantum Circuits}\label{quantum-circuits}
A quantum circuit represents the control flow that happens in a quantum
processor, defining the stages at which gates are applied and when
measurements are taken. The following image is an example of a quantum
circuit. Two qubits are represented by \(q_0\) and \(q_1\). \(q_0\),
which is initially loaded in the state \(|0\rangle\), has the Pauli \(X\)
gate applied. Then between the barriers (which are not physical, just
for the output diagram) both qubits are manipulated with another gate
(which will be discussed), and finally both are measured into classical bits.
The Python code for generating a circuit without any gates is as
follows:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}1}]:} \PY{k+kn}{from} \PY{n+nn}{qiskit} \PY{k}{import} \PY{o}{*}
\PY{k+kn}{from} \PY{n+nn}{qiskit}\PY{n+nn}{.}\PY{n+nn}{visualization} \PY{k}{import} \PY{n}{plot\PYZus{}histogram}\PY{p}{,} \PY{n}{plot\PYZus{}bloch\PYZus{}vector}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}2}]:} \PY{n}{n\PYZus{}q} \PY{o}{=} \PY{l+m+mi}{2} \PY{c+c1}{\PYZsh{} number of qubits}
\PY{n}{n\PYZus{}b} \PY{o}{=} \PY{l+m+mi}{2} \PY{c+c1}{\PYZsh{} number of output bits}
\PY{n}{qc} \PY{o}{=} \PY{n}{QuantumCircuit}\PY{p}{(}\PY{n}{n\PYZus{}q}\PY{p}{,} \PY{n}{n\PYZus{}b}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{draw}\PY{p}{(}\PY{n}{output}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mpl}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}2}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_18_0.png}
\end{center}
{ \hspace*{\fill} \\}
Adding in some measurement gates
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}3}]:} \PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{draw}\PY{p}{(}\PY{n}{output}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mpl}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}3}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_20_0.png}
\end{center}
{ \hspace*{\fill} \\}
And running the program on a simulator, where we expect to see 0 as the
output state for both qubits
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}4}]:} \PY{n}{counts} \PY{o}{=} \PY{n}{execute}\PY{p}{(}\PY{n}{qc}\PY{p}{,} \PY{n}{Aer}\PY{o}{.}\PY{n}{get\PYZus{}backend}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{qasm\PYZus{}simulator}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{result}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{get\PYZus{}counts}\PY{p}{(}\PY{p}{)}
\PY{n}{plot\PYZus{}histogram}\PY{p}{(}\PY{n}{counts}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}4}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_22_0.png}
\end{center}
{ \hspace*{\fill} \\}
The output is stored as a 2-bit binary string, corresponding to each
qubit's output state. We see 0 measured 100\% of the time on both qubits
because we did nothing to the circuit. Let's see what happens if we add
the Pauli \(X\) gate to \(q_0\).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}5}]:} \PY{c+c1}{\PYZsh{} reinitialize our circuit to add the X gate partway through}
\PY{n}{qc} \PY{o}{=} \PY{n}{QuantumCircuit}\PY{p}{(}\PY{n}{n\PYZus{}q}\PY{p}{,} \PY{n}{n\PYZus{}b}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{x}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{barrier}\PY{p}{(}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{draw}\PY{p}{(}\PY{n}{output}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mpl}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}5}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_24_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}6}]:} \PY{n}{counts} \PY{o}{=} \PY{n}{execute}\PY{p}{(}\PY{n}{qc}\PY{p}{,} \PY{n}{Aer}\PY{o}{.}\PY{n}{get\PYZus{}backend}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{qasm\PYZus{}simulator}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{result}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{get\PYZus{}counts}\PY{p}{(}\PY{p}{)}
\PY{n}{plot\PYZus{}histogram}\PY{p}{(}\PY{n}{counts}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}6}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_25_0.png}
\end{center}
{ \hspace*{\fill} \\}
Notice that \(q_0\)'s output is given by the least significant bit in
the output string. We see the flip from 0 to 1, as expected from our
matrix derivation. The output probability is still 100\%, as we have
simply put the qubits into z basis states, which are deterministic upon
measurement. We will move on to describing more quantum gates, where the
output will become more interesting.
\subsubsection{Single Qubit Gates}\label{single-qubit-gates}
The following section will focus on the gates which manipulate a single
qubit, along with Qiskit implementations and the Bloch sphere
visualization. All code cells require the imports from above.
\paragraph{Pauli X}\label{pauli-x}
As above, the Pauli X gate is given by
\[X= \begin{pmatrix} 0&1 \\ 1&0 \end{pmatrix}\]
and is similar to a classical NOT gate, performing a half rotation of the
Bloch sphere about the x axis, resulting in the vector below:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}7}]:} \PY{n}{plot\PYZus{}bloch\PYZus{}vector}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}7}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_31_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}8}]:} \PY{c+c1}{\PYZsh{} reinitialize our circuit to add the X gate partway through}
\PY{n}{qc} \PY{o}{=} \PY{n}{QuantumCircuit}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{x}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{draw}\PY{p}{(}\PY{n}{output}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mpl}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}8}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_32_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}9}]:} \PY{n}{counts} \PY{o}{=} \PY{n}{execute}\PY{p}{(}\PY{n}{qc}\PY{p}{,} \PY{n}{Aer}\PY{o}{.}\PY{n}{get\PYZus{}backend}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{qasm\PYZus{}simulator}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{result}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{get\PYZus{}counts}\PY{p}{(}\PY{p}{)}
\PY{n}{plot\PYZus{}histogram}\PY{p}{(}\PY{n}{counts}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}9}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_33_0.png}
\end{center}
{ \hspace*{\fill} \\}
\paragraph{Hadamard}\label{hadamard}
The Hadamard gate is highly important: it performs a half rotation of
the Bloch sphere that exchanges the z axis and the x axis, changing the
\(|0\rangle\) state to the \(|+\rangle\) state. This gives a split
probability of measuring 0 or 1, as the qubit has been put into a
superposition of states. The Hadamard gate is given by
\[H= \frac{1}{\sqrt{2}}\begin{pmatrix} 1&1 \\ 1&-1 \end{pmatrix}\]
The output will not be perfectly split 50/50, as the simulator runs a
finite number of times, but the results will be close to an even split.
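Concretely, acting with \(H\) on the initial \(|0\rangle\) state produces the
equal superposition described earlier:
\[
H|0\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1&1 \\ 1&-1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix}
= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}
= \frac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr) = |+\rangle.
\]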
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{qc.h(}\DecValTok{0}\NormalTok{) }\CommentTok{# Hadamard on qubit 0}
\end{Highlighting}
\end{Shaded}
The new state is shown in the Bloch sphere as:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}10}]:} \PY{n}{plot\PYZus{}bloch\PYZus{}vector}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}10}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_36_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}11}]:} \PY{n}{qc} \PY{o}{=} \PY{n}{QuantumCircuit}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{h}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{draw}\PY{p}{(}\PY{n}{output}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mpl}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}11}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_37_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}12}]:} \PY{n}{counts} \PY{o}{=} \PY{n}{execute}\PY{p}{(}\PY{n}{qc}\PY{p}{,} \PY{n}{Aer}\PY{o}{.}\PY{n}{get\PYZus{}backend}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{qasm\PYZus{}simulator}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{result}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{get\PYZus{}counts}\PY{p}{(}\PY{p}{)}
\PY{n}{plot\PYZus{}histogram}\PY{p}{(}\PY{n}{counts}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}12}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_38_0.png}
\end{center}
{ \hspace*{\fill} \\}
\paragraph{S and S Dagger}\label{s-and-s-dagger}
The S and S\(^\dagger\) gates rotate the Bloch sphere by a quarter turn about
the \(z\) axis, mapping the \(x\) basis onto the \(y\) basis and back. Their
matrices are given by
\[
S = \begin{pmatrix} 1&0 \\ 0&i \end{pmatrix}, \, \, \, \, S^\dagger = \begin{pmatrix} 1&0 \\ 0&-i \end{pmatrix},
\]
and their effects are
\[
S |+\rangle = |\circlearrowright\rangle, \, \, \, \, S |-\rangle = |\circlearrowleft\rangle,
\]
\[
S^\dagger |\circlearrowright\rangle = |+\rangle, \, \, \, \, S^\dagger |\circlearrowleft\rangle = |-\rangle.
\]
Applying S\(^\dagger\) followed by a Hadamard gate before the measurement
effectively measures the \(y\) component of the qubit.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{qc.s(}\DecValTok{0}\NormalTok{) }\CommentTok{# S on qubit 0}
\NormalTok{qc.sdg(}\DecValTok{0}\NormalTok{) }\CommentTok{# S† on qubit 0}
\end{Highlighting}
\end{Shaded}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}23}]:} \PY{n}{qc} \PY{o}{=} \PY{n}{QuantumCircuit}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{sdg}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{h}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{draw}\PY{p}{(}\PY{n}{output}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mpl}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}23}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_41_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}24}]:} \PY{n}{counts} \PY{o}{=} \PY{n}{execute}\PY{p}{(}\PY{n}{qc}\PY{p}{,} \PY{n}{Aer}\PY{o}{.}\PY{n}{get\PYZus{}backend}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{qasm\PYZus{}simulator}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{result}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{get\PYZus{}counts}\PY{p}{(}\PY{p}{)}
\PY{n}{plot\PYZus{}histogram}\PY{p}{(}\PY{n}{counts}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}24}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_42_0.png}
\end{center}
{ \hspace*{\fill} \\}
\paragraph{Rotation gates}\label{rotation-gates}
Much like how the Pauli X, Y and Z gates perform rotations about their
respective axes, more general gates exist which allow rotations by arbitrary
angles about those axes. These are implemented as the \(R_x(\theta)\),
\(R_y(\theta)\) and \(R_z(\theta)\) gates.
\[
R_x(\theta) = \begin{pmatrix} \cos(\frac{\theta}{2})& -i\sin(\frac{\theta}{2}) \\ -i\sin(\frac{\theta}{2})&\cos(\frac{\theta}{2}) \end{pmatrix}
\]
\[
R_y(\theta) = \begin{pmatrix} \cos(\frac{\theta}{2})& -\sin(\frac{\theta}{2}) \\ \sin(\frac{\theta}{2})&\cos(\frac{\theta}{2}) \end{pmatrix}
\]
\[
R_z(\theta) = \begin{pmatrix} e^{\frac{-i\theta}{2}}&0 \\ 0&e^{\frac{i\theta}{2}} \end{pmatrix}
\]
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{qc.rx(theta, }\DecValTok{0}\NormalTok{) }\CommentTok{# rx rotation on qubit 0}
\NormalTok{qc.ry(theta, }\DecValTok{0}\NormalTok{) }\CommentTok{# ry rotation on qubit 0}
\NormalTok{qc.rz(theta, }\DecValTok{0}\NormalTok{) }\CommentTok{# rz rotation on qubit 0}
\end{Highlighting}
\end{Shaded}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}15}]:} \PY{k+kn}{from} \PY{n+nn}{numpy} \PY{k}{import} \PY{n}{pi}
\PY{n}{theta} \PY{o}{=} \PY{n}{pi}\PY{o}{/}\PY{l+m+mi}{4}
\PY{n}{qc} \PY{o}{=} \PY{n}{QuantumCircuit}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{rx}\PY{p}{(}\PY{n}{theta}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{draw}\PY{p}{(}\PY{n}{output}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mpl}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}15}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_45_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}16}]:} \PY{n}{counts} \PY{o}{=} \PY{n}{execute}\PY{p}{(}\PY{n}{qc}\PY{p}{,} \PY{n}{Aer}\PY{o}{.}\PY{n}{get\PYZus{}backend}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{qasm\PYZus{}simulator}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{result}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{get\PYZus{}counts}\PY{p}{(}\PY{p}{)}
\PY{n}{plot\PYZus{}histogram}\PY{p}{(}\PY{n}{counts}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}16}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_46_0.png}
\end{center}
{ \hspace*{\fill} \\}
\paragraph{T and T Dagger}\label{t-and-t-dagger}
Two specific angles for the \(R_z(\theta)\) gate have their own names.
\(T\) and \(T^\dagger\) correspond, up to a global phase, to the angles
\(\theta=\pm \pi/4\) and have the matrix form
\[
T = \begin{pmatrix} 1&0 \\ 0&e^{i\pi/4}\end{pmatrix}, \, \, \, \, T^\dagger = \begin{pmatrix} 1&0 \\ 0&e^{-i\pi/4} \end{pmatrix}.
\]
These gates leave a qubit's \(|0\rangle\) amplitude unchanged, while
multiplying the \(|1\rangle\) amplitude by the phase \(e^{\pm i\pi/4}\).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{qc.t(}\DecValTok{0}\NormalTok{) }\CommentTok{# t on qubit 0}
\NormalTok{qc.tdg(}\DecValTok{0}\NormalTok{) }\CommentTok{# t† on qubit 0}
\end{Highlighting}
\end{Shaded}
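As a quick illustrative sketch (not one of the original notebook cells),
applying \(T\) and then \(T^\dagger\) between two Hadamard gates cancels the
phases again, so measuring the circuit below should return 0 nearly every
time:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{qc = QuantumCircuit(}\DecValTok{1}\NormalTok{, }\DecValTok{1}\NormalTok{)}
\NormalTok{qc.h(}\DecValTok{0}\NormalTok{)    }\CommentTok{# prepare |+>, so the phase acts on a superposition}
\NormalTok{qc.t(}\DecValTok{0}\NormalTok{)    }\CommentTok{# add a pi/4 phase to |1>}
\NormalTok{qc.tdg(}\DecValTok{0}\NormalTok{)  }\CommentTok{# remove the phase again}
\NormalTok{qc.h(}\DecValTok{0}\NormalTok{)    }\CommentTok{# map |+> back to |0>}
\NormalTok{qc.measure(}\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{)  }\CommentTok{# counts should be (almost) all 0}
\end{Highlighting}
\end{Shaded}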
\subsubsection{Multi-Qubit Gates}\label{multi-qubit-gates}
For quantum computers to compete with their classical counterparts,
there must exist gates that facilitate the interaction between multiple
qubits. The main ones are the two-qubit controlled-NOT gate (CNOT) and
the three-qubit Toffoli.
\paragraph{CNOT}\label{cnot}
The CNOT gate is analogous to the XOR gate. Operating on two qubits, the
behavior is as follows:
\[\textbf{CNOT} \ |00\rangle \Rightarrow |00\rangle\]
\[\textbf{CNOT} \ |01\rangle \Rightarrow |01\rangle\]
\[\textbf{CNOT} \ |10\rangle \Rightarrow |11\rangle\]
\[\textbf{CNOT} \ |11\rangle \Rightarrow |10\rangle\]
Using this notation, the first qubit is the control and the second qubit is
the target. The target qubit flips if and only if the control qubit is 1, so
after the gate the target holds the XOR of the two inputs. We can also define
controlled Y and Z gates.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{qc.cx(}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{) }\CommentTok{# CNOT with control qubit 0, target qubit 1}
\NormalTok{qc.cy(}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{) }\CommentTok{# controlled-Y, control qubit 0, target qubit 1}
\NormalTok{qc.cz(}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{) }\CommentTok{# controlled-Z, control qubit 0, target qubit 1}
\end{Highlighting}
\end{Shaded}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}17}]:} \PY{n}{qc} \PY{o}{=} \PY{n}{QuantumCircuit}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{x}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{cx}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{draw}\PY{p}{(}\PY{n}{output}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mpl}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}17}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_54_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}18}]:} \PY{n}{counts} \PY{o}{=} \PY{n}{execute}\PY{p}{(}\PY{n}{qc}\PY{p}{,} \PY{n}{Aer}\PY{o}{.}\PY{n}{get\PYZus{}backend}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{qasm\PYZus{}simulator}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{result}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{get\PYZus{}counts}\PY{p}{(}\PY{p}{)}
\PY{n}{plot\PYZus{}histogram}\PY{p}{(}\PY{n}{counts}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}18}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_55_0.png}
\end{center}
{ \hspace*{\fill} \\}
\paragraph{Toffoli}\label{toffoli}
The Toffoli gate uses two control qubits and one target qubit. It is the
quantum analogue of the classical AND gate: the target qubit is flipped only
if both control qubits are in the \(|1\rangle\) state.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{qc.ccx(}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{) }\CommentTok{# Toffoli controlled on qubits 0 and 1 with qubit 2 as target}
\end{Highlighting}
\end{Shaded}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}19}]:} \PY{n}{qc} \PY{o}{=} \PY{n}{QuantumCircuit}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{x}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{ccx}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{draw}\PY{p}{(}\PY{n}{output}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mpl}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}19}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_58_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}20}]:} \PY{n}{counts} \PY{o}{=} \PY{n}{execute}\PY{p}{(}\PY{n}{qc}\PY{p}{,} \PY{n}{Aer}\PY{o}{.}\PY{n}{get\PYZus{}backend}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{qasm\PYZus{}simulator}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{result}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{get\PYZus{}counts}\PY{p}{(}\PY{p}{)}
\PY{n}{plot\PYZus{}histogram}\PY{p}{(}\PY{n}{counts}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}20}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_59_0.png}
\end{center}
{ \hspace*{\fill} \\}
\paragraph{Swap}\label{swap}
The swap gate causes the two qubits to trade states.
\[\textbf{SWAP} \ |00\rangle \Rightarrow |00\rangle\]
\[\textbf{SWAP} \ |01\rangle \Rightarrow |10\rangle\]
\[\textbf{SWAP} \ |10\rangle \Rightarrow |01\rangle\]
\[\textbf{SWAP} \ |11\rangle \Rightarrow |11\rangle\]
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{qc.swap(}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{) }\CommentTok{# swap qubit 0 and 1's states}
\end{Highlighting}
\end{Shaded}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}21}]:} \PY{n}{qc} \PY{o}{=} \PY{n}{QuantumCircuit}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{x}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{swap}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{measure}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{qc}\PY{o}{.}\PY{n}{draw}\PY{p}{(}\PY{n}{output}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mpl}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}21}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_62_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}22}]:} \PY{n}{counts} \PY{o}{=} \PY{n}{execute}\PY{p}{(}\PY{n}{qc}\PY{p}{,} \PY{n}{Aer}\PY{o}{.}\PY{n}{get\PYZus{}backend}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{qasm\PYZus{}simulator}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{result}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{get\PYZus{}counts}\PY{p}{(}\PY{p}{)}
\PY{n}{plot\PYZus{}histogram}\PY{p}{(}\PY{n}{counts}\PY{p}{)}
\end{Verbatim}
\texttt{\color{outcolor}Out[{\color{outcolor}22}]:}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_63_0.png}
\end{center}
{ \hspace*{\fill} \\}
% Add a bibliography block to the postdoc
\end{document}
\documentclass{article}
\input{../homework-problems}
\toggletrue{solutions}
%\togglefalse{solutions}
\toggletrue{answers}
\newtheorem{problem}{Problem}
\newcommand{\hide}[1]{}
\renewcommand{\fcProblemRef}{\theproblem.\theenumi}
\renewcommand{\fcSubProblemRef}{\theenumi.\theenumii}
\begin{document}
\begin{center}
\Large
Master Problem Sheet \\ Calculus I \\
\end{center}
%\noindent \textbf{Name:} \hfill{~}
%\begin{tabular}{c|c|c|c|c|c|c|c|c||c}
%Problem&1 &2&3&4&5&6&7&8& $\sum$\\ \hline
%Score&&&&&&&&&\\ \hline
%Max&20&20&20&20&20&10&20&20&150
%\end{tabular}
This master problem sheet contains all freecalc problems on the topics studied in Calculus I. For a list of contributors/authors of the freecalc project (and in particular, the present problem collection) see file contributors.tex.
\fcLicenseContent
\tableofcontents
\section{Functions, Basic Facts}\label{secMPSfunctionBasics}
\subsection{Understanding function notation}
\begin{problem}
\input{\freecalcBaseFolder/modules/precalculus/homework/functions-evaluate-simplify}
\end{problem}
\subsection{Domains and ranges}
\begin{problem}
\input{../../modules/precalculus/homework/functions-domains-ranges-1}
\end{problem}
\subsection{Piecewise Defined Functions}
\begin{problem}
\input{../../modules/functions-basics/homework/piecewise-linear-formula-from-plot-1}
\end{problem}
\begin{problem}
\input{../../modules/functions-basics/homework/piecewise-linear-piecewise-circular-formula-from-plot-1}
\end{problem}
\begin{problem}
\input{../../modules/precalculus/homework/piecewise-defined-plot-from-formula}
\end{problem}
\subsection{Function composition}
\begin{problem}
\input{../../modules/functions-basics/homework/function-composition-domains-and-ranges-1}
\end{problem}
\begin{problem}
\input{../../modules/functions-basics/homework/functions-composing-fractional-linear-1}
\end{problem}
\subsection{Linear Transformations and Graphs of Functions}
\begin{problem}
\input{../../modules/precalculus/homework/functions-plot-transformations}
\end{problem}
\begin{problem}
\input{../../modules/function-graph-linear-transformations/homework/use-known-graph-to-sketch-its-linear-transformation-1}
\end{problem}
\input{../../modules/function-graph-linear-transformations/homework/use-known-graph-to-sketch-its-linear-transformation-1-solutions}
\section{Trigonometry}\label{secMPStrigonometry}
\subsection{Angle conversion}
\begin{problem}
\input{../../modules/trigonometry/homework/trigonometry-angle-conversion-degree-to-radian}
\end{problem}
\begin{problem}
\input{../../modules/trigonometry/homework/trigonometry-angle-conversion-radian-to-degree}
\end{problem}
\subsection{Trigonometry identities}
\begin{problem}
\input{../../modules/trigonometry/homework/trigonometry-identities}
\end{problem}
\subsection{Trigonometry equations}
\begin{problem}
\input{../../modules/trigonometry/homework/trigonometry-equations}
\end{problem}
\input{../../modules/trigonometry/homework/trigonometry-equations-solutions}
\section{Limits and Continuity}
\subsection{Limits as $x$ tends to a number}\label{secMPSlimitsXtendsToNumer}
\begin{problem}
\input{../../modules/limits/homework/limit-x-tends-to-a-common-factor}
\end{problem}
\input{../../modules/limits/homework/limit-x-tends-to-a-common-factor-solutions}
\begin{problem}
\input{../../modules/limits/homework/limit-x-tends-to-a-common-factor2}
\end{problem}
\begin{problem}
\input{../../modules/limits/homework/limit-x-tends-to-a-direct-substitution}
\end{problem}
\subsection{Limits involving $\infty $}
\subsubsection{Limits as $x\to\pm \infty$}\label{secMPSlimitsXtoInfty}
\begin{problem}
\input{../../modules/limits/homework/limit-x-tends-to-infinity-1}
\end{problem}
\input{../../modules/limits/homework/limit-x-tends-to-infinity-1-solutions}
\subsubsection{Limits involving vertical asymptote}\label{secMPSlimitsVerticalAsymptote}
\begin{problem}
\input{../../modules/curve-sketching/homework/asymptotes-vertical}
\end{problem}
\begin{problem}
\input{../../modules/curve-sketching/homework/asymptotes-vertical2}
\end{problem}
\input{../../modules/curve-sketching/homework/asymptotes-vertical2-solutions}
\subsubsection{Find the Horizontal and Vertical Asymptotes}\label{secMPShorAndVertAsymptotes}
\begin{problem}
\input{../../modules/curve-sketching/homework/asymptotes-vertical-horizontal}
\end{problem}
\input{../../modules/curve-sketching/homework/asymptotes-vertical-horizontal-solutions}
\subsection{Limits - All Cases - Problem Collection}
\begin{problem}
\input{../../modules/limits/homework/limit-problem-collection-1}
\end{problem}
\subsection{Continuity}
\subsubsection{Continuity to evaluate limits}
\begin{problem}
\input{../../modules/continuity/homework/continuity-to-evaluate-limits-1}
\end{problem}
\subsubsection{Conceptual problems} \label{secMPScontinuityConceptual}
\begin{problem}
\input{../../modules/continuity/homework/continuity-problems-involving-understanding}
\end{problem}
\begin{problem}
\input{../../modules/continuity/homework/continuity-problems-involving-understanding-2}
\end{problem}
\subsubsection{Continuity and Piecewise Defined Functions} \label{secMPScontinuityPiecewise}
\begin{problem}
\input{../../modules/continuity/homework/continuity-with-piecewise-defined-functions}
\end{problem}
\begin{problem}
\input{../../modules/continuity/homework/continuity-with-piecewise-defined-functions-2}
\end{problem}
\begin{problem}
\input{../../modules/continuity/homework/continuity-with-piecewise-defined-functions-3}
\end{problem}
\subsection{Intermediate Value Theorem}\label{secMPSintermediateValueTheorem}
\begin{problem}
\input{../../modules/continuity/homework/IVT-problems}
\end{problem}
\input{../../modules/continuity/homework/IVT-problems-solutions}
\begin{problem}
\input{../../modules/continuity/homework/IVT-problems2}
\end{problem}
\input{../../modules/continuity/homework/IVT-problems2-solutions}
\section{Inverse Functions}\label{secMPSInverseFunctions}
\subsection{Problems Using Rational Functions Only}
\begin{problem}
\input{../../modules/inverse-functions/homework/inverse-functions-no-exponents-logarithms-1}
\end{problem}
\input{../../modules/inverse-functions/homework/inverse-functions-no-exponents-logarithms-1-solutions}
\begin{problem}
\input{../../modules/inverse-functions/homework/inverse-functions4}
\end{problem}
\subsection{Problems Involving Exponents, Logarithms}
\begin{problem}
\input{../../modules/inverse-functions/homework/inverse-functions1}
\end{problem}
\input{../../modules/inverse-functions/homework/inverse-functions1-solutions}
\section{Logarithms and Exponent Basics}\label{secMPSLogarithmsExponentsBasics}
\subsection{Exponents Basics}
\begin{problem}
\input{../../modules/exponential-functions/homework/exponent-simplify}
\end{problem}
\input{../../modules/exponential-functions/homework/exponent-simplify-solutions}
\subsection{Logarithm Basics}
\begin{problem}
\input{../../modules/logarithms/homework/logarithms-basic-no-properties-1}
\end{problem}
\input{../../modules/logarithms/homework/logarithms-basic-properties-1-solutions}
\begin{problem}
\input{../../modules/logarithms/homework/logarithms-combine-1}
\end{problem}
\input{../../modules/logarithms/homework/logarithms-combine-1-solutions}
\begin{problem}
\input{../../modules/logarithms/homework/logarithms-basic-properties-1}
\end{problem}
\input{../../modules/logarithms/homework/logarithms-basic-properties-1-solutions}
\subsection{Some Problems Involving Logarithms}
\begin{problem}
\input{../../modules/logarithms/homework/logarithms-equations}
\end{problem}
\input{../../modules/logarithms/homework/logarithms-equations-solutions}
\section{Derivatives}
\subsection{Derivatives and Function Graphs: basics}\label{secMPSderivativesFunGraphsBasics}
\begin{problem}
\input{../../modules/curve-sketching/homework/match-graph-to-derivative-graph-2}
\end{problem}
\begin{problem}
\input{../../modules/curve-sketching/homework/match-graph-to-derivative-graph-2-solution}
\end{problem}
\begin{problem}
\input{../../modules/curve-sketching/homework/match-graph-to-derivative-graph}
\end{problem}
\subsection{Product and Quotient Rules}\label{secMPSproductQuotientRules}
\begin{problem}
\input{../../modules/derivatives/homework/derivatives-product-quotient-rule-1}
\end{problem}
\input{../../modules/derivatives/homework/derivatives-product-quotient-rule-1-solutions}
\begin{problem}
\input{../../modules/derivatives/homework/derivatives-product-quotient-rule-2}
\end{problem}
\input{../../modules/derivatives/homework/derivatives-product-quotient-rule-2-solutions}
\begin{problem}
\input{../../modules/derivatives/homework/derivatives-product-quotient-rule-3}
\end{problem}
\input{../../modules/derivatives/homework/derivatives-product-quotient-rule-3-solutions}
\subsection{Basic Trigonometric Derivatives}\label{secMPStrigDerivatives}
\begin{problem}
\input{../../modules/derivatives/homework/derivatives-trig}
\end{problem}
\begin{problem}
\input{../../modules/derivatives/homework/derivatives-trig-2}
\end{problem}
\subsection{Natural Exponent Derivatives}
\input{../../modules/exponential-functions/homework/exponent-derivative}
\subsection{The Chain Rule}\label{secMPSchainRule}
\begin{problem}
\input{../../modules/chain-rule/homework/chain-rule1}
\end{problem}
\input{../../modules/chain-rule/homework/chain-rule1-solutions}
\begin{problem}
\input{../../modules/chain-rule/homework/chain-rule2}
\end{problem}
\begin{problem}
\input{../../modules/chain-rule/homework/chain-rule3}
\end{problem}
\input{../../modules/chain-rule/homework/chain-rule3-solutions}
\begin{problem}
\input{../../modules/chain-rule/homework/chain-rule4}
\end{problem}
\begin{problem}
\input{../../modules/chain-rule/homework/chain-rule5}
\end{problem}
\subsection{Problem Collection All Techniques}
\begin{problem}
\input{../../modules/derivatives/homework/derivative-problem-collection-1}
\end{problem}
\input{../../modules/derivatives/homework/derivative-problem-collection-1-solutions}
\subsection{Implicit Differentiation}\label{secMPSImplicitDifferentiation}
\begin{problem}
\input{../../modules/implicit-differentiation/homework/implicitly-differentiate}
\end{problem}
\begin{problem}
\input{../../modules/implicit-differentiation/homework/implicit-tangent}
\end{problem}
\input{../../modules/implicit-differentiation/homework/implicit-tangent-solutions}
\subsection{Implicit Differentiation and Inverse Trigonometric Functions}
\begin{problem}
\input{../../modules/implicit-differentiation/homework/implicit-inverse-trig1}
\end{problem}
\begin{problem}
\input{../../modules/implicit-differentiation/homework/implicit-inverse-trig2}
\end{problem}
\subsection{Derivative of non-Constant Exponent with non-Constant Base}\label{secMPSDerivativeNonConstExponent}
\begin{problem}
\input{../../modules/logarithms/homework/derivatives-non-const-exponent-2}
\end{problem}
\input{../../modules/logarithms/homework/derivatives-non-const-exponent-2-solutions}
\begin{problem}
\input{../../modules/logarithms/homework/derivatives-non-const-exponent}
\end{problem}
\subsection{Related Rates}\label{secMPSrelatedRates}
\begin{problem}
\input{../../modules/related-rates/homework/related-rates-text-problems-2}
\end{problem}
\input{../../modules/related-rates/homework/related-rates-text-problems-2-solutions}
\begin{problem}
\input{../../modules/related-rates/homework/related-rates-text-problems}
\end{problem}
\input{../../modules/related-rates/homework/related-rates-text-problems-solutions}
\section{Graphical Behavior of Functions}
\subsection{Mean Value Theorem}\label{secMPS-MVT}
\begin{problem}
\input{../../modules/maxima-minima/homework/find-the-mvt-points-1}
\end{problem}
\begin{problem}
\input{../../modules/maxima-minima/homework/mvt-to-estimate-fmax-fmin-1}
\end{problem}
\begin{problem}
\input{../../modules/maxima-minima/homework/mvt-show-only-one-root-1}
\end{problem}
\input{../../modules/maxima-minima/homework/mvt-show-only-one-root-1-solutions}
\subsection{Maxima, Minima}\label{secMPSoneVariableMinMax}
\subsubsection{Closed Interval method}\label{secMPSclosedInterval}
\begin{problem}
\input{../../modules/maxima-minima/homework/maxima-minima}
\end{problem}
\begin{problem}
\input{../../modules/logarithms/homework/logarithm-physics}
\end{problem}
\subsubsection{Derivative tests}
\begin{problem}
\input{../../modules/maxima-minima/homework/local-maxima-minima-derivative-tests-1}
\end{problem}
\subsubsection{Optimization}\label{secMPSoptimization}
\begin{problem}
\input{../../modules/maxima-minima/homework/maxima-minima-text-problems-1}
\end{problem}
\begin{problem}
\input{../../modules/optimization/homework/optimization-problem-collection-1}
\end{problem}
\input{../../modules/optimization/homework/optimization-problem-collection-1-solutions}
\subsection{Function Graph Sketching}\label{secMPSfunctionGraphSketching}
\begin{problem}
\input{../../modules/curve-sketching/homework/sketch-graph-with-all-the-details}
\end{problem}
\input{../../modules/curve-sketching/homework/sketch-graph-with-all-the-details-solutions}
\begin{problem}
\input{../../modules/curve-sketching/homework/sketch-graph-with-all-the-details-2}
\end{problem}
\section{Linearizations and Differentials} \label{secMPSLinearizationAndDifferentials}
\begin{problem}
\input{../../modules/differentials/homework/linearization-problems-1}
\end{problem}
\input{../../modules/differentials/homework/linearization-problems-1-solutions}
\section{Integration Basics}
\subsection{Riemann Sums}\label{secMPSRiemannSums}
\begin{problem}
\input{../../modules/integration/homework/riemann-sum-problems-1}
\end{problem}
\input{../../modules/integration/homework/riemann-sum-problems-1-solutions}
\subsection{Antiderivatives}\label{secMPSantiderivatives}
\begin{problem}
\input{../../modules/antiderivatives/homework/antiderivatives-basic-integrals-1}
\end{problem}
\begin{problem}
\input{../../modules/antiderivatives/homework/antiderivatives-basic-integrals-initial-condition-1}
\end{problem}
\begin{problem}
\input{../../modules/antiderivatives/homework/antiderivatives-verify-by-differentiation-1}
\end{problem}
\subsection{Basic Definite Integrals} \label{secMPSBasicDefiniteIntegrals}
\begin{problem}
\input{../../modules/integration/homework/basic-definite-integrals-1}
\end{problem}
\input{../../modules/integration/homework/basic-definite-integrals-1-solutions}
\begin{problem}
\input{../../modules/integration/homework/basic-definite-integrals-2}
\end{problem}
\subsection{Fundamental Theorem of Calculus Part I}\label{secMPSFTCpart1}
\begin{problem}
\input{../../modules/integration/homework/FTC-part1-problems-1}
\end{problem}
\input{../../modules/integration/homework/FTC-part1-problems-1-solutions}
\subsection{Integration with The Substitution Rule}
\label{secMPSintegrationSubstitutionRule}
\subsubsection{Substitution in Indefinite Integrals}
\label{secMPSintegrationSubstitutionRuleIndefinite}
\begin{problem}
\input{../../modules/substitution-rule/homework/substitution-rule-1}
\end{problem}
\input{../../modules/substitution-rule/homework/substitution-rule-1-solutions}
\begin{problem}
\input{../../modules/substitution-rule/homework/substitution-rule-2}
\end{problem}
\input{../../modules/substitution-rule/homework/substitution-rule-2-solutions}
\subsubsection{Substitution in Definite Integrals}
\label{secMPSintegrationSubstitutionRuleDefinite}
\begin{problem}
\input{../../modules/substitution-rule/homework/substitution-rule-definite-1}
\end{problem}
\input{../../modules/substitution-rule/homework/substitution-rule-definite-1-solutions}
\section{First Applications of Integration}
\subsection{Area Between Curves}\label{secMPSareaBetweenCurves}
\begin{problem}
\input{../../modules/area-between-curves/homework/area-between-curves-problems-1}
\end{problem}
\input{../../modules/area-between-curves/homework/area-between-curves-problems-1-solutions}
\subsection{Volumes of Solids of Revolution}\label{secMPSvolumesSolidsRevolution}
\subsubsection{Problems Geared towards the Washer Method}\label{secMPSvolumesSolidsRevolutionWashers}
\begin{problem}
\input{../../modules/volumes/homework/solids-of-revolution-problems-1}
\end{problem}
\input{../../modules/volumes/homework/solids-of-revolution-problems-1-solutions}
\subsubsection{Problems Geared towards the Cylindrical Shells Method}\label{secMPSvolumesSolidsRevolutionShells}
\begin{problem}
\input{../../modules/volumes/homework/solids-of-revolution-cylindrical-shells-1}
\end{problem}
\end{document}
% Reciprocal BLAST
%
% Simple introduction to reciprocal best BLAST matches
\subsection{Reciprocal BLAST}
\begin{frame}
\frametitle{Reciprocal Best BLAST Hits (RBBH)}
\begin{itemize}
\item To compare our genecall proteins to the \texttt{NC\_004547.faa} reference set...
\item BLAST reference proteins against our proteins
\item BLAST our proteins against reference proteins
\item Pairs with each other as best BLAST Hit are called RBBH
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{One-way BLAST vs RBBH}
One-way BLAST includes many low-quality hits \\
\includegraphics[width=0.3\textwidth]{images/rbbh1}
\includegraphics[width=0.33\textwidth]{images/rbbh2}
\includegraphics[width=0.34\textwidth]{images/rbbh3}
\end{frame}
\begin{frame}
\frametitle{One-way BLAST vs RBBH}
Reciprocal best BLAST hits remove many low-quality matches \\
\includegraphics[width=0.3\textwidth]{images/rbbh4}
\includegraphics[width=0.34\textwidth]{images/rbbh5}
\includegraphics[width=0.34\textwidth]{images/rbbh6}
\end{frame}
\begin{frame}
\frametitle{Reciprocal Best BLAST Hits (RBBH)}
\begin{itemize}
\item Pairs with each other as best BLAST hit are called RBBH
\item Should filter on percentage identity and alignment length
\item RBBH pairs are candidate orthologues
\begin{itemize}
\item (most orthologues will be RBBH, but the relationship is complicated)
\item Outperforms OrthoMCL, etc. (beyond scope of course why and how$\ldots$) \\
\url{http://dx.doi.org/10.1093/gbe/evs100} \\
\url{http://dx.doi.org/10.1371/journal.pone.0018755}
\end{itemize}
\end{itemize}
(We have a tool for this on our in-house Galaxy server)
\end{frame}
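% The following frame is an added illustrative sketch of the RBBH idea in
% Python; it is not the in-house Galaxy tool, and the file names and column
% layout (-outfmt 6 tabular BLAST output) are assumptions.
\begin{frame}[fragile]
\frametitle{RBBH: a minimal sketch (illustrative only)}
A rough Python sketch of the idea, assuming two one-way BLAST runs saved in
tabular format (\texttt{-outfmt 6}); file names are placeholders:
{\scriptsize
\begin{verbatim}
def best_hits(path):
    # keep the highest-bitscore subject for every query
    best = {}
    for line in open(path):
        cols = line.rstrip("\n").split("\t")
        query, subject, bitscore = cols[0], cols[1], float(cols[-1])
        if query not in best or bitscore > best[query][1]:
            best[query] = (subject, bitscore)
    return best

fwd = best_hits("ref_vs_ours.tab")
rev = best_hits("ours_vs_ref.tab")
rbbh = [(q, s) for q, (s, _) in fwd.items() if rev.get(s, ("",))[0] == q]
\end{verbatim}
}
A real analysis should also filter on percentage identity and alignment length.
\end{frame}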
% Test data generated locally using FASTA protein files made with Artemis,
% and the underlying Python script for my Galaxy BLAST RBH tool available
% from https://github.com/peterjc/galaxy_blast/tree/master/tools/blast_rbh
% Here I *DID NOT* import minimum coverage or identity thresholds!
%
% /mnt/galaxy/galaxy_blast/tools/blast_rbh/blast_rbh.py NC_004547.faa chrA_prodigal.fasta -o rbbh_ref_vs_chrA_prodigal.tab -t blastp -a prot
% /mnt/galaxy/galaxy_blast/tools/blast_rbh/blast_rbh.py NC_004547.faa chrB_prodigal.fasta -o rbbh_ref_vs_chrB_prodigal.tab -t blastp -a prot
% /mnt/galaxy/galaxy_blast/tools/blast_rbh/blast_rbh.py NC_004547.faa chrC_prodigal.fasta -o rbbh_ref_vs_chrC_prodigal.tab -t blastp -a prot
% /mnt/galaxy/galaxy_blast/tools/blast_rbh/blast_rbh.py NC_004547.faa chrD_prodigal.fasta -o rbbh_ref_vs_chrD_prodigal.tab -t blastp -a prot
\subsection{Defining Image and Kernel}
\subsubsection{Image}
Of a function: the set of vectors in the codomain that are actually hit by the function.
The image of a function $f:\;\R^m\rightarrow \R^n$ is:
\[\boxed{
\mathrm{Im}(f)=\{\tb{y}\in \R^n | \exists\;\tb{x}\in \R^m\text{ s.t. }f(\tb{x})=\tb{y}\}
}\]
Similar in concept to the range of a function in a non-linear context.
\subsubsection{Kernel}
Set of vectors in the domain that are mapped to $\tb{0}$ in the codomain.
Kernel of a function $f:\;\R^m\rightarrow \R^n$ is:
\[\boxed{
\mathrm{Ker}(f)=\{\tb{x}\in \R^m | f(\tb{x})=\tb{0}\}
}\]
Analogous to the roots/zeros of a polynomial.
\subsection{Examples}
The image of the following linear transformation is all of $\R^2$:
\[T(\tb{x})=\left(\begin{array}{cc}
1 & 1 \\
-1 & 1
\end{array}\right) \tb{x}\]
The image of the following linear transformation is a plane in $\R^3$ (a
two-dimensional subspace isomorphic to $\R^2$):
\[T(\tb{x})=\left(\begin{array}{cc}
1 & -1 \\
0 & 2 \\
2 & 2
\end{array}\right) \tb{x}\]
\subsection{Span}
If $A$ is an $n\times m$ matrix, then the image of $T(\tb{x})=A\tb{x}$ is the set of all vectors in $\R^n$
that are linear combinations of the column vectors of $A$.\\
\noindent
Thus, the span of a set of $n$ vectors is all linear combinations of those vectors:
\[\boxed{
\mathrm{span}(\tb{v}_1,\tb{v}_2,\cdots,\tb{v}_n)=\{\sum c_i\tb{v}_i | c_i\in\R\}
}\]
So the span of column vectors of $A$ is the image of the associated linear transformation.
\subsection{Kernel}
Finding the kernel amounts to finding the solutions of $A\tb{x}=\tb{0}$. Kernels are closed under linear combinations.
The kernel can never be the empty set, since it always holds that $T(\tb{0})=\tb{0}$.
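As a small worked example (using the $3\times 2$ matrix from the image examples above), solving $A\tb{x}=\tb{0}$ row by row gives
\[
\left(\begin{array}{cc}
1 & -1 \\
0 & 2 \\
2 & 2
\end{array}\right)\left(\begin{array}{c} x \\ y \end{array}\right)=\tb{0}
\;\Longrightarrow\;
x-y=0,\;\; 2y=0,\;\; 2x+2y=0
\;\Longrightarrow\; x=y=0,
\]
so the kernel is the trivial one, $\{\tb{0}\}$, and the map is one-to-one.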
\subsection{Invertible Linear Transformations}
Main conclusions about image and kernel:
\begin{itemize}
    \item The kernel is always (trivially) $\{\tb{0}\}$: a non-zero kernel vector would imply linear dependence among the columns and therefore singularity of the associated matrix $A$,
    so $\boxed{\mathrm{ker}(T)=\{\tb{0}\}}$
    \item The image is always the whole space $\R^n$ if the associated (invertible) matrix $A$ is $n\times n$, so $\boxed{\mathrm{Im}(T)=\R^n}$
\end{itemize}
%------------------------------------------------------------------------------
\chapter{Verification \& Validation}
\label{cha:validation}
%------------------------------------------------------------------------------
%------------------------------------------------------------------------------
\section{Overview}
%------------------------------------------------------------------------------
In order to show that the implementation of \neptune{} was successful, performing a verification and validation process is essential. In this context,
verification always refers to comparing a method against another, comparable one. In contrast, validation means that the results are compared to a \textit{truth}.
A validation process was set up for the state vector propagation, using freely available \gls{acr:poe} data for distinct satellites. The selected satellites
are:
\begin{itemize}
\item Jason-1: Oceanography satellite.
\item \acrshort{acr:gps}: 32 satellites of the \gls{acr:gps} constellation
\item \acrshort{acr:ers}-1 and \acrshort{acr:ers}-2: Earth observation satellites
\item \acrshort{acr:icesat}: Earth observation satellite
\end{itemize}
The validation process was performed in two steps. The first step consists of using a \gls{acr:poe} of the satellite under consideration as the initial state
vector for the numerical propagation in \neptune{}. The second step consists of taking multiple \gls{acr:poe}s and assuming them to be observations in order to
perform a differential correction to find an initial state vector, which is expected to provide more accurate results. The following settings were used for
\neptune{}:
\begin{itemize}
\item Cannon-ball model for each satellite
\item \acrshort{acr:egm}96, \num{70}$\times$\num{70} geopotential
\item \gls{acr:eop}
\item \acrshort{acr:nrl}\acrshort{acr:msis}E-00 atmosphere model
\item Lunisolar gravitational perturbations
\item Solar radiation pressure
\item Solid Earth and ocean tides
\item Earth radiation pressure
\end{itemize}
Alternatively, the state vector propagation can also be verified against another validated numerical integration tool. The main advantage here would be that all relevant orbit regions can be analysed, while using \glspl{acr:poe} is only possible for distinct orbits.
For the propagation of the covariance matrix and the evaluation of the noise matrix, a \gls{acr:mc} approach was used to verify the routines,
as a validation was not possible due to a lack of available data.
\section{State vector propagation validation}
%\begin{itemize}
% \item Post-processing often achieves accuracies of 2-10 cm radial position \cite{vallado2007}
% \item \gls{acr:tle} achieves km-level accuracy, can be at about 400 m, or even 50-100 m in post processing using numerical methods \cite{vallado2007}
% \item First possibility: Take single state vector from a \gls{acr:poe} and propagate without modification, as \gls{acr:poe} is assumed to be within an
%accuracy of 2-10 cm (radial position).
% \item Second possibility: Slightly perturb satellite state (1 m) and compare against \gls{acr:poe}
% \item Propagation span shall be about 7-14 days, which is a typical span for operational decisions.
% \item In the drag regime:
% \item Optical properties, e.g. for TOPEX, LAGEOS, etc. can be obtained via \cite{knocke1989}.
% \item Credit to University of Texas \gls{acr:csr}
% \end{itemize}
% \subsection{Laser Geodynamics Satellites (LAGEOS)}
% \gls{acr:lageos}
% \begin{itemize}
% \item Promising candidate for small perturbing forces (SRP, ERP, Tides) due to simple geometry and low level of drag
% \item Satellite properties in \cite{knocke1989}, p.43
% \end{itemize}
\subsection{Jason-1}
The oceanography mission Jason-1 was launched in December 2001 as a successor to the TOPEX/Poseidon mission. With an initial mass of \SI{500}{\kilogram} it reached its
first orbit about \SIrange{10}{15}{\kilo\metre} below the target orbit of TOPEX/Poseidon at \SI{1337}{\kilo\metre} \cite{nasaJason2014}. It then moved to the same orbit with an along-track
separation to TOPEX/Poseidon of about \SI{500}{\kilo\metre}. It is thus reasonable to assume that the mass of Jason-1 was lower than \SI{500}{\kilogram} after it reached its final orbit.
In \cite{vallado2007} the mass was set to \SI{475}{\kilogram}. The other required satellite parameters, which were used in the propagation with \neptune{}, are shown in
\tab{tab:val-jason-data}.
\begin{table}[h!]
\centering
\caption{Satellite parameters used for Jason-1.\label{tab:val-jason-data}}
\begin{tabular}{lS}
\toprule
\textbf{Parameter} & \textbf{Value} \\
\cmidrule{1-2}
Mass / \si{\kilogram} & 475.0 \\
Cross-section / \si{\metre\squared} & 10.0 \\
Drag coefficient & 2.2 \\
\acrshort{acr:srp} coefficient & 1.2 \\
\bottomrule
\end{tabular}
\end{table}
While the satellite mass, drag coefficient and \gls{acr:srp} coefficient were equal to the values used in \cite{vallado2007}, for the cross-section a value of
\SI{10.0}{\metre\squared} was estimated, based on the fact that the area of the solar panels is about \SI{9.5}{\metre\squared} and the box-shaped satellite has dimensions of
\SI{954}{\milli\metre}$\times$\SI{954}{\milli\metre}$\times$\SI{2218}{\milli\metre}\footnote{\url{http://www.eoportal.org/directory/pres_Jason1AltimetryMission.html}, accessed on 2014-03-03.}.
The \gls{acr:poe} data was obtained from the \acrshort{acr:legos}-\acrshort{acr:cls} via the \gls{acr:ids}
website\footnote{\url{http://ids-doris.org/welcome.html}, accessed on 2014-03-03.} in the \gls{acr:sp3} format for a period between March 20, 2002 and April 21,
2002. That period was also used as the propagation span within \neptune{}.
The results for a 24-day propagation starting on March 24, 2002, are shown in \fig{fig:val-jas-01} \todo[TikZ conversion]{Provide Tikz image instead of GNUplot png}.
\begin{figure}[!h]
\centering
%\input{07-Tikz/valjas_1.tex}
\includegraphics[width=0.85\textwidth]{val_jas_01.png}
\caption{Comparison results for Jason-1.\label{fig:val-jas-01}}
\end{figure}
It can be seen that the error in the along-track component amounts to about one kilometre after three weeks of propagation.
A closer look at a smaller time interval of one day is shown in \fig{fig:val-jas-02} \todo[TikZ conversion]{Provide Tikz image instead of GNUplot png}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{val_jas_02.png}
\caption{Comparing results for Jason-1 for a one-day propagation.\label{fig:val-jas-02}}
\end{figure}
The error in the along-track component is at about \num{14} metres after one day. The quite irregular looking evolution of the along-track error is basically due to
the Solar and Earth radiation pressure contributions, as Jason-1 was modelled as a spherically shaped object. This becomes more obvious, when looking at the
error evolution after switching these two perturbations off subsequently. In \fig{fig:val-jas-03} \todo[TikZ conversion]{Provide Tikz image instead of GNUplot png} the Earth radiation pressure (short-wave albedo and long-wave
infrared) was switched off.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{val_jas_03.png}
\caption{Comparing results for Jason-1 for a one-day propagation without Earth radiation pressure.\label{fig:val-jas-03}}
\end{figure}
The error increases to about 25 metres after one day, when Earth's radiation pressure is not considered. When solar radiation pressure is also switched off,
the along-track error increases to about 65 metres after one day as shown in \fig{fig:val-jas-04}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{val_jas_04.png}
\caption{Comparing results for Jason-1 for a one-day propagation without Solar and Earth radiation pressure.\label{fig:val-jas-04}}
\end{figure}
\subsection{Global Positioning System (GPS)}
The validation using \gls{acr:gps} precision ephemerides is based on data available from the \gls{acr:nga} website \cite{nga2013}. The data was downloaded from
the \gls{acr:nga} \acrshort{acr:ftp} server\footnote{\url{ftp://ftp.nga.mil/pub2/gps/pedata/}} for the year 2013.
For the validation it is essential to use a time span free of any orbital maneuvers for the satellites under consideration. Therefore the so-called
\gls{acr:nanu} messages were used, which are provided by the U.S. Coast Guard Navigation Center\footnote{\url{http://navcen.uscg.gov/?pageName=gpsAlmanacs}}
for the \gls{acr:gps} satellites, containing information on which satellite is expected to have forecast outages for which period of time. A good
overview on the available \gls{acr:nanu} messages for an individual year can also be found at the Celestrak website \cite{celestrak-nanu-2014}.
Extracting all the required information from the \gls{acr:nanu}s for 2013, one gets a good picture of which time frames are adequate to perform a validation
for the complete \gls{acr:gps} constellation. The result of such an analysis is shown in \fig{fig:val-gps-nanu-2013}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{nanu_summary_2013.png}
\caption{Unusable ephemerides for all GPS satellites in 2013. The time span between the two vertical lines (blue) can
be used in the validation analysis.\label{fig:val-gps-nanu-2013}}
\end{figure}
It can be seen that there is a window of approximately one month between day 131 and day 162, as indicated by the vertical blue lines, corresponding to the time frame between May 11, 2013 and June 11, 2013.
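How such a common maneuver-free window can be extracted from the \gls{acr:nanu}-derived outage intervals is sketched below. This is an illustrative Python snippet with placeholder data, not the actual analysis script:
\begin{verbatim}
# Sketch: find days of year 2013 on which no GPS satellite has a forecast
# outage; the outage intervals per PRN are assumed to come from the NANUs.
outages = {                       # placeholder data for illustration only
    "PRN03": [(120, 125), (170, 172)],
    "PRN11": [(100, 130)],
}

blocked = set()
for prn, intervals in outages.items():
    for start, end in intervals:
        blocked.update(range(start, end + 1))

free_days = [d for d in range(1, 366) if d not in blocked]
# consecutive runs within free_days are candidate validation windows,
# e.g. the interval between day 131 and day 162 used here
\end{verbatim}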
The currently operational \gls{acr:gps} constellation consists of 32 satellites (as of February 2014) from the \textit{Block II}
generation only \cite{gps2014} with satellites of the
\textit{Block III} generation being under production. For the operational satellites a further subdivision is possible \cite{navcen2014}:
\begin{itemize}
\item \textbf{Block IIA:} 8 operational, second generation, ``Advanced''
\item \textbf{Block IIR:} 12 operational, ``Replenishment''
\item \textbf{Block IIR(M):} 8 operational, ``Modernized''
\item \textbf{Block IIF:} 4 operational, ``Follow-on''
\end{itemize}
It is important to consider the different types of Block II satellites in order to have an appropriate estimate of the cross-sectional area and the mass. In \tab{tab:val-gps-sats} the currently operational satellites are shown together with their \gls{acr:prn} code, the dimensions\footnote{Width across solar panels}
and the mass, which were also used
for the validation.
\begin{table}[h!]
\centering
\caption{Current GPS constellation as of February 26, 2014 \citep{navcen2014, about.com2014}.\label{tab:val-gps-sats}}
\begin{tabular}{lp{5cm}lS}
\toprule
\textbf{Type} & \textbf{PRNs} & \textbf{Dim. W$\times$D$\times$H / \si{\milli\metre}} & \textbf{Mass / \si{\kilogram}} \\
\cmidrule{1-4}
Block IIA & 3, 4, 6, 8, 9, 10, 26, 32 & 5300 $\times$ 3400 $\times$ 3400 & 840 \\
Block IIR & 2, 11, 13, 14, 16, 18-23, 28 & 11\ 400 $\times$ 1956 $\times$ 2210 & 1127 \\
Block IIR(M) & 5, 7, 12, 15, 17, 29, 30$^*$, 31 & 11\ 400 $\times$ 1956 $\times$ 2210 & 1127 \\
Block IIF & 1, 24, 25, 27$^{**}$ & 11\ 820 $\times$ 2032 $\times$ 2235 & 1465 \\
\bottomrule
\multicolumn{1}{r}{\footnotesize $^*$} & \multicolumn{3}{l}{\footnotesize PRN30 was launched on Feb 21, 2014 \citepalias{nanu2014018} and is not considered in
the validation.} \\
\multicolumn{1}{r}{\footnotesize $^{**}$} & \multicolumn{3}{p{12cm}}{\footnotesize PRN27 was launched on May 15, 2013 \citepalias{nanu2013031} and is not considered
in the validation.} \\
\end{tabular}
\end{table}
Of the 32 active satellites in the constellation, only 30 could be used for the selected validation time frame. \acrshort{acr:prn}27 was launched
within that time span on May 15, 2013 \citepalias{nanu2013031} and was fully operational from June 21, 2013 \citepalias{nanu2013035}, so that it could not be used in the validation. Also, \acrshort{acr:prn}30 was launched on February 21, 2014 \citepalias{nanu2014018} and is currently in its testing phase.
For the simulations by \neptune{} it is required to estimate the cross-section of the GPS satellites based on the dimensions as specified in different sources for the different types of Block II objects. Since for satellites in the GPS constellation the effective cross-section relative to the Sun's direction is the significant one, it was assumed that the full area of the solar panels contributes to the total cross-section. For the main body, the average of the three different surfaces of the rectangular cuboid (box) was computed. The total cross-section as well as the drag and \gls{acr:srp} parameters, which were used in the validation, are shown in \tab{tab:val-gps-sim-data}.
\begin{table}[h!]
\centering
\caption{Cross-section, \gls{sym:c_d} and \gls{sym:c_srp} of \acrshort{acr:gps} satellites used in the validation.\label{tab:val-gps-sim-data}}
\begin{tabular}{lSSS}
\toprule
\textbf{Type} & \textbf{Cross-section / \si{\metre\squared}} & \textbf{\gls{sym:c_d}} & \textbf{\gls{sym:c_srp}} \\
\cmidrule{1-4}
Block IIA & 18.02 & 2.2 & 1.3\\
Block IIR & 22.63 & 2.2 & 1.3\\
Block IIR(M) & 22.63 & 2.2 & 1.3\\
Block IIF & 24.29 & 2.2 & 1.3\\
\bottomrule
\end{tabular}
\end{table}
The \gls{acr:poe} data for \gls{acr:gps} satellites is obtained in the SP3 format \citep{remondi1989}, where the Cartesian state vectors are given at five-minute intervals in the \acrshort{acr:wgs}84 frame. In \neptune{} the \gls{acr:itrf} is used; however, \acrshort{acr:wgs}84 is closely aligned with the
\gls{acr:itrf} \citep{tapley2004}, so that no coordinate transformation was required at this point. The ephemerides were then converted to the \gls{acr:gcrf} in order to be compared with the trajectory as propagated by \neptune{}.
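Reading the position records of an SP3 file is straightforward; the following sketch is for illustration only (it is not part of \neptune{} and assumes an SP3-c style layout in which the satellite identifier directly follows the `P' flag and coordinates are given in kilometres):
\begin{verbatim}
# Sketch: read epochs and positions (km) from an SP3 file.
def read_sp3(path):
    ephemeris = {}              # satellite id -> list of (epoch, (x, y, z))
    epoch = None
    with open(path) as f:
        for line in f:
            if line.startswith("*"):
                # year, month, day, hour, minute, second
                epoch = tuple(float(v) for v in line.split()[1:7])
            elif line.startswith("P") and epoch is not None:
                sat = line[1:4].strip()
                x, y, z = (float(v) for v in line.split()[1:4])
                ephemeris.setdefault(sat, []).append((epoch, (x, y, z)))
    return ephemeris
\end{verbatim}
Some exemplary results are shown in the following.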
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{val_gps_prn04.png}
\caption{Comparison results for PRN04.\label{fig:val-gps-prn04}}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{val_gps_prn11.png}
\caption{Comparison results for PRN11.\label{fig:val-gps-prn11}}
\end{figure}
In \fig{fig:val-gps-prn04} the result for \acrshort{acr:prn}04 is shown. The error in the along-track component is about \SI{2.5}{\kilo\metre} after 21 days. However, there were also examples, where the error was significantly lower, as for example in the case of \acrshort{acr:prn}11, where the along-track error is at about \SI{700}{\metre} after the same period of 21 days. It can also be seen that the radial error for \acrshort{acr:prn}11 is lower than for \acrshort{acr:prn}04, which, of
course, correlates with the evolution of the along-track component.
A third example is shown in \fig{fig:val-gps-prn09} for \acrshort{acr:prn}09. Here, the along-track error increases to about \SI{3.5}{\kilo\metre} after 21 days, while the radial error is almost at \SI{500}{\metre}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{val_gps_prn09.png}
\caption{Comparison results for PRN09.\label{fig:val-gps-prn09}}
\end{figure}
The errors in all examples show a fluctuation with a period of one orbit, which is typical for a model misalignment. This was expected, as the first \gls{acr:poe} was selected as the initial state for \neptune{}, which then propagates with a different model than the one used for the \gls{acr:poe} generation.
\subsection{European Remote Sensing Satellites (ERS-1 and -2)}
The first mission \acrshort{acr:ers}-1 was launched on July 17, 1991 and was accompanied by \acrshort{acr:ers}-2 on April 21, 1995 with both objects put into a sun-synchronous orbit between \SIrange{782}{785}{\kilo\metre}
\footnote{\url{https://earth.esa.int/web/guest/missions/esa-operational-eo-missions/ers/satellite}\label{foot:esa}}. From \acrshort{acr:esa}'s
website$^{\ref{foot:esa}}$ an initial estimate for the satellite parameters was derived, shown in \tab{tab:val-ers-data}. Missing parameters, being the drag and \gls{acr:srp} coefficients, were taken from \cite{vallado2007}.
\begin{table}[h!]
\centering
\caption{Satellite parameters used for \gls{acr:ers}-1 and \gls{acr:ers}-2.\label{tab:val-ers-data}}
\begin{tabular}{lS}
\toprule
\textbf{Parameter} & \textbf{Value} \\
\cmidrule{1-2}
Mass / \si{\kilogram} & 2300 \\
Cross-section / \si{\metre\squared} & 35.0 \\
Drag coefficient & 2.5 \\
\acrshort{acr:srp} coefficient & 1.0 \\
\bottomrule
\end{tabular}
\end{table}
For \acrshort{acr:ers}-1 the first propagation was performed starting on August 4, 1991. The result is shown in \fig{fig:val-ers1-01}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{valers1_1.png}
\caption{Comparison results for ERS-1 for a one-week propagation.\label{fig:val-ers1-01}}
\end{figure}
After a one-week propagation, the accumulated error in the along-track component is about eight kilometers.
For a shorter time frame of about six hours starting from the initial epoch, the error is shown in \fig{fig:val-ers1-02}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{valers1_2.png}
\caption{Comparison results for ERS-1 for a six-hour propagation.\label{fig:val-ers1-02}}
\end{figure}
The radial error increases after each revolution by about one meter which leads to the build-up in along-track error. This indicates, as the radial error is positive, that \acrshort{acr:ers}-1 was decaying faster than the propagated orbit of \neptune{}, as in general the error was computed via:
\begin{equation}
 \Delta \vec{r} = \vec{r}_{\mathrm{NEPTUNE}} - \vec{r}_{\mathrm{ERS}}
\end{equation}
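The difference vector is resolved into the radial and along-track components discussed above (plus the cross-track component completing the orbit frame). A sketch of this decomposition, illustrative only and assuming the reference position and velocity are available as NumPy arrays, is:
\begin{verbatim}
import numpy as np

def rsw_error(r_ref, v_ref, r_prop):
    """Resolve the position difference into radial, along-track and
    cross-track components of the reference orbit frame."""
    u_r = r_ref / np.linalg.norm(r_ref)        # radial unit vector
    u_w = np.cross(r_ref, v_ref)               # orbit normal (cross-track)
    u_w = u_w / np.linalg.norm(u_w)
    u_s = np.cross(u_w, u_r)                   # along-track completes the triad
    d = r_prop - r_ref                         # e.g. NEPTUNE minus POE
    return np.dot(d, u_r), np.dot(d, u_s), np.dot(d, u_w)
\end{verbatim}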
While the considered epoch for \acrshort{acr:ers}-1 was within a period of increased solar activity, for \acrshort{acr:ers}-2 it was investigated whether the situation changes during low solar activity. Therefore, the initial epoch was selected to be September 8, 1995. Also, the cross-section was adapted to \SI{11}{\metre\squared} and the mass set to \SI{2200}{\kilogram} to make the results comparable to the ones obtained in \cite{vallado2007}. The errors for a propagation span of one week are shown
in \fig{fig:val-ers2-01}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{valers2_1.png}
\caption{Comparison results for ERS-2 for a one-week propagation.\label{fig:val-ers2-01}}
\end{figure}
It can be seen that the resulting error after seven days in the along-track component is about \SI{1.2}{\kilo\metre}. It was at about \SI{10}{\kilo\metre} when using the satellite parameters given in \tab{tab:val-ers-data}, which were used for \acrshort{acr:ers}-1. However, using another set of parameters for \acrshort{acr:ers}-1, for example the one used for \acrshort{acr:ers}-2 in \fig{fig:val-ers2-01}, did not provide better results. It can thus be argued that solar activity has a significant influence on the prediction accuracy, even though the solar and geomagnetic activity data used in the simulation were observed data for the considered
propagation spans.
\subsection{Ice, Cloud, and land Elevation Satellite (ICESat)}
\gls{acr:icesat} was part of \acrshort{acr:nasa}'s Earth observation system between 2003 and 2009 and its mission was to measure the ice sheet mass balance, cloud
and aerosol heights, as well as land topography and vegetation characteristics\footnote{\url{http://icesat.gsfc.nasa.gov/}}. The
\gls{acr:poe}s for \gls{acr:icesat} were obtained from the \acrshort{acr:ftp} server of the \gls{acr:csr} at the University of Texas \citep{csr2014} for a time frame between September 24, 2003 and November 18, 2003.
As \gls{acr:icesat} was operated in an orbit at about \SI{600}{\kilo\metre} altitude, atmospheric drag provided a significant contribution to the orbital evolution.
It is therefore essential to have a good estimate of the ballistic parameter. In a similar study by \cite{vallado2007}, an initial mass of \SI{970}{\kilogram} was reduced to about \SI{950}{\kilogram} for the simulation of February 2003, assuming that some maneuvering fuel had been spent.
For the \neptune{} validation and the time frame between September and November 2003, a value of \SI{950}{\kilogram} was used as a starting value. Also, in
\cite{vallado2007} further required values are given, which were used in the following analyses and are shown in \tab{tab:val-ice-data}.
\begin{table}[h!]
\centering
\caption{Satellite parameters used for \gls{acr:icesat}.\label{tab:val-ice-data}}
\begin{tabular}{lS}
\toprule
\textbf{Parameter} & \textbf{Value} \\
\cmidrule{1-2}
Cross-section / \si{\metre\squared} & 2.0 \\
Drag coefficient & 2.2 \\
\acrshort{acr:srp} coefficient & 1.0 \\
\bottomrule
\end{tabular}
\end{table}
The first analysis was done for a propagation span of a few days, starting at September 25, 2003, 21:00 \acrshort{acr:utc}. The results can be seen in
\fig{fig:val-ice-01}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{valice01.png}
\caption{Comparison results for ICESat showing a maneuver on September 26, 2003.\label{fig:val-ice-01}}
\end{figure}
While the errors in all three components are initially on the order of \SIrange{10}{100}{\metre}, with the along-track component showing the expected drift, a maneuver can be identified on September 26, 2003, which manifests itself through a sudden change in slope of the along-track error and a sudden increase in the
mean radial error.
For the next analysis, the initial epoch was set to September 27, 2003, 21:00 \gls{acr:utc}, which is about one and a half days beyond the previous maneuver.
As can be seen in \fig{fig:val-ice-02}, for a period free of maneuvers, the along-track error amounts to about one kilometer after about three days of
propagation.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{valice02.png}
\caption{Comparison results for ICESat for about four days after September 27, 2003, 21:00 UTC.\label{fig:val-ice-02}}
\end{figure}
This behaviour is expected in the presence of atmospheric drag, which becomes even more obvious when looking at the radial component alone as shown in
\fig{fig:val-ice-03}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{valice03.png}
\caption{Comparison results for ICESat: Radial error after September 27, 2003, 21:00 UTC.\label{fig:val-ice-03}}
\end{figure}
Here, the radial error shows a drift, which means that \gls{acr:icesat} was descending at a faster rate than \neptune{} predicted, as the error, in
general, was computed by:
\begin{equation}
\Delta \vec{r} = \vec{r}_{NEPTUNE} - \vec{r}_{ICESat}
\end{equation}
A faster descent would suggest that \gls{acr:icesat} had a lower mass (or a higher cross-section) than was initially assumed. Due to the maneuver on September
26, it is possible that the mass after that maneuver was below \SI{950}{\kilogram}. This seems even more reasonable considering that the mass
estimate is based on \cite{vallado2007}, where the analysis was done for February 2003, with a few possible additional maneuvers until September 2003. Therefore, another validation run was performed after decreasing the mass to \SI{900}{\kilogram}; the result is shown in \fig{fig:val-ice-04}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{valice04.png}
\caption{Comparison results for ICESat: Radial error for a different initial mass.\label{fig:val-ice-04}}
\end{figure}
It can be seen that a reduced mass does not affect the result significantly.
In \cite{vallado2007} it is mentioned that \gls{acr:icesat} may fly in a ``sailboat'' orientation, where the solar panels contribute most of the cross-section in the velocity direction. A value of \SI{5.21}{\metre\squared} is given in \cite{vallado2007} together with an
adapted drag coefficient of \gls{sym:c_d}=\num{2.52}. The result for this simulation is shown in \fig{fig:val-ice-05}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{valice05.png}
\caption{Comparison results for ICESat: Error in ``sailboat'' orientation mode.\label{fig:val-ice-05}}
\end{figure}
It can be seen that the along-track error in this case is positive, which means that the orbit propagated by \neptune{} is trailed by \gls{acr:icesat}, i.e.\ \neptune{} now predicts a decay that is too fast. As the error is at about one kilometer after two days, compared to the same along-track error after
three days in \fig{fig:val-ice-02}, a ``sailboat'' orientation seems unlikely for this period. However, an orientation somewhere between the two analysed
cases should provide a better solution, which is shown exemplarily in \fig{fig:val-ice-06} for a cross-section of \SI{3.0}{\metre\squared}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\textwidth]{valice06.png}
\caption{Comparison results for ICESat: Error for a cross-section of \SI{3.0}{\metre\squared}.\label{fig:val-ice-06}}
\end{figure}
\section{Covariance matrix propagation verification}
The verification of the covariance matrix propagation in \neptune{} was done by a \gls{acr:mc} analysis. For this purpose, a dedicated tool was developed: \cover{} (Covariance Verification). Its task, as shown in \fig{fig:val-cover-scheme}, is to receive the initial covariance matrix used by \neptune{} and to generate an object cloud representing the statistics underlying the uncertainties in the state vector.
\begin{figure}[!h]
\centering
\setlength{\abovecaptionskip}{0.5cm}
\begin{tikzpicture}[scale=0.7, transform shape, node distance=0.5cm, >=latex]
\node (cover) [redbox] {\textbf{COVER}};
\node (input) [bluebox, below =of cover] {Read input file: cover.inp};
\node (inpfile) [speicher, right=of input] {Input};
\node (cloud) [bluebox, below =of input] {Generate object cloud based on variance/covariance matrix};
\node (loop_time) [branch, below=of cloud] {\textbf{Loop:} Time};
\node (time) [bluebox, right=of loop_time] {Increment time};
\node (parent) [bluebox, below=of time] {Propagate parent's state and covariance};
\node (loop_obj) [branch, below=of parent] {\textbf{Loop:} Objects};
\node (objects) [bluebox, below right=0.5cm and 1cm of loop_obj] {Propagate object's state};
\node (end1) [branch, below=1cm of loop_obj] {\textbf{End?}};
\node (uvw) [bluebox, below=of end1] {Convert states to UVW frame};
\node (stats) [bluebox, below=of uvw] {Process statistics};
\node (output1) [bluebox, below =of stats] {Write line to output: $<$RUNID$>$.cov};
\node (output2) [bluebox, below =of output1] {Write line to output: $<$RUNID$>$.prop.cov};
\node (prop_file) [speicher, right=of output2] {Propagated covariance matrix};
\node (cov_file) [speicher, right=of output1] {Object cloud statistics};
\node (end2) [branch, below=13cm of loop_time] {\textbf{End?}};
\node (end) [redbox, below=of end2] {\textbf{End}};
\draw [->] (cover) to (input);
\draw [->] (inpfile) to (input);
\draw [->] (input) to (cloud);
\draw [->] (cloud) to (loop_time);
\draw [->] (loop_time) to (time);
\draw [->] (time) to (parent);
\draw [->] (parent) to (loop_obj);
\draw [->] (loop_obj) -| (objects);
\draw [->] (end1) to (loop_obj);
\draw [->] (end1) to (uvw);
\draw [->] (uvw) to (stats);
\draw [->] (stats) to (output1);
\draw [->] (output1) to (cov_file);
\draw [->] (output1) to (output2);
\draw [->] (output2) to (prop_file);
\draw [->] (end2) to (loop_time);
\draw [->] (end2) to (end);
%
\draw [->] (objects.south) |- (end1.east);
\draw [->] (output2.south) |- (end2.east);
\end{tikzpicture}
\caption{Schematic overview of the \cover{} tool.}
\label{fig:val-cover-scheme}
\end{figure}
As the position (radius) and velocity component errors present in the variance/covariance matrix are correlated, a Cholesky decomposition has to be performed. For the
covariance matrix \gls{sym:P} the Cholesky decomposition is defined as:
\begin{equation}
\gls{sym:P} = \gls{sym:L}\cdot\gls{sym:L}^T
\end{equation}
It is assumed that the initial errors in the state vector are normally distributed, so that random variables \gls{sym:Z} are computed according to a standard
normal distribution:
\begin{equation}
\gls{sym:Z} \sim \mathcal{N}\left(0,1\right)
\end{equation}
Then, for each object, the state vector can be sampled using six random numbers (for position and velocity), given via the vector $\gls{sym:z}_i =
\left[\gls{sym:Z}_1,\ldots,\gls{sym:Z}_6\right]$ and the mean $\overline{\gls{sym:state}}$, which is the associated state vector:
\begin{equation}
\gls{sym:state}_i = \gls{sym:L}\cdot\gls{sym:z}_i + \overline{\gls{sym:state}}
\end{equation}
For each generated object, an individual \neptune{} propagation run is performed for the sampled state vector up to a certain reference epoch. At this
epoch, the object cloud is processed to extract its statistics and to re-form the covariance matrix for that point in time. The result is then compared to the numerical
integration of the covariance matrix as implemented in \neptune{}.
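To illustrate the sampling step described above, the following Python sketch draws an object cloud from a given covariance matrix via its Cholesky factor. The variable names, the use of NumPy and the example numbers are assumptions for illustration only and do not reflect the actual \cover{} implementation.
\begin{verbatim}
import numpy as np

def sample_object_cloud(x_mean, P, n_objects=499, seed=1):
    """Draw state vectors x_i = L z_i + x_mean with z_i ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(P)            # P = L L^T; P must be positive definite
    z = rng.standard_normal((n_objects, 6))
    return x_mean + z @ L.T              # each row is one sampled state vector

# Illustrative reference state (km, km/s) and covariance (km^2, km^2/s^2):
# 1 m standard deviation in the radial position, tiny values elsewhere so
# that the matrix stays positive definite for the Cholesky factorisation.
x_mean = np.array([6678.136, 0.0, 0.0, 0.0, 7.726, 0.0])
P = np.diag([1.0e-6] + [1.0e-12] * 5)
cloud = sample_object_cloud(x_mean, P)
print(cloud.shape)                       # (499, 6)
\end{verbatim}
Each row of the resulting array would then be handed to an individual propagation run before the cloud statistics are evaluated at the reference epoch.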
The verification process of the covariance matrix propagation comprised different scenarios with varying initial covariance matrices, as defined in the following
list:
\begin{enumerate}
\item Propagation of object cloud considering only the central body force.
\item Scenario 1 + geopotential with order $\gls{sym:geo_n}=2$.
\item Scenario 1 + geopotential with order $\gls{sym:geo_n}>2$.
\item Scenario 1 + atmospheric drag.
\end{enumerate}
\paragraph{Scenario 1: Central body force}
In the first scenario, only the central body force was used for the object cloud, consisting of one parent object (identical to the state vector corresponding
to the covariance matrix) and 499 objects generated according to the uncertainty in the state.
The first example for an initial radial error of \SI{1}{\metre}, and no error in the other components, is shown in \fig{fig:val-cov-scen1-01} for a circular orbit at \SI{300}{\kilo\metre} altitude and \ang{50;;} inclination.
\begin{figure}[h!]
\centering
\subfigure[Position]{\label{fig:val-cov-scen1-01a} \includegraphics[width=0.4\textwidth]{2B_1_rad.png}}
\hspace{1cm}
\subfigure[Velocity]{\label{fig:val-cov-scen1-01b} \includegraphics[width=0.4\textwidth]{2B_1_vel.png}}
\caption{Comparing MC analysis results to \neptune{} covariance matrix propagation using a \SI{60}{\second} integration step size for an initial radial error of \SI{1}{\metre}.\label{fig:val-cov-scen1-01}}
\end{figure}
An initial error in the radial position alone leads to a drift in the along-track error, as the orbital period is directly affected by the radial component. The radial error itself shows an oscillation, with the minimum at \SI{1}{\metre} and the maximum reaching \SI{3}{\metre} after half an orbit. It can also be observed that there is no error in the
orbit normal component, as should be expected for a two-body scenario.
It can be seen that there is almost no difference between the numerical propagation in \neptune{} as compared to the results from the \gls{acr:mc} analysis. In \fig{fig:val-cov-scen1-01a} and \fig{fig:val-cov-scen1-01b}, the correlation between the along-track position error and the radial velocity error, as well as the correlation between the radial position and along-track velocity component, is clearly visible.
The next example in \fig{fig:val-cov-scen1-02} shows the standard deviation evolution for an initial error of \SI{1}{\metre\per\second} for the radial velocity only for the same orbit at \SI{300}{\kilo\metre} altitude and \ang{50;;} inclination.
\begin{figure}[h!]
\centering
\subfigure[Position]{\label{fig:val-cov-scen1-02a} \includegraphics[width=0.4\textwidth]{2B_2_rad.png}}
\hspace{1cm}
\subfigure[Velocity]{\label{fig:val-cov-scen1-02b} \includegraphics[width=0.4\textwidth]{2B_2_vel.png}}
\caption{Comparing MC analysis results to \neptune{} covariance matrix propagation using a \SI{60}{\second} integration step size for an initial velocity error of \SI{1}{\metre\per\second}.\label{fig:val-cov-scen1-02}}
\end{figure}
Once again the results show a close alignment between the \gls{acr:mc} approach and \neptune{}'s integration. However, the along-track position error difference at about half an orbit is about \SI{80}{\kilo\metre}, while the error in the radial velocity at the same time is on the order of \SI{0.1}{\metre\per\second}.
This, however, does not indicate an erroneous implementation but is due to the fact that only 500 objects have been used. If the number is increased to 2000 objects, for example, the difference after half an orbit reduces to about \SI{63}{\kilo\metre} in along-track position error and about \SI{0.06}{\metre\per\second} in radial velocity error.
Increasing the number even further to \num{10000} objects in the \gls{acr:mc} approach leads to a further decrease, with the along-track position error difference
at about \SI{27}{\kilo\metre} and the radial velocity error difference at \SI{0.025}{\metre\per\second}. Thus, the results obtained by the \gls{acr:mc} analysis converge towards the
results from the numerical integration by \neptune{} for an increasing number of objects, as should be expected from the \textit{law of large numbers}.
\paragraph{Scenario 2: Geopotential with order $\gls{sym:geo_n}=2$}
In \sect{sec:propagation-covariance-set-integration-geopotential} the model for the integration of the state error transition matrix under consideration of the
geopotential is described. This allows \neptune{} to consider, besides the two-body contribution, the evolution of the variances and covariances due to the non-sphericity
of the Earth. Especially the Earth's flattening, which is described by the second zonal harmonic coefficient, shows a distinct characteristic,
as is shown in \fig{fig:val-cov-scen2-01}.
\begin{figure}[h!]
\centering
\subfigure[Position]{\label{fig:val-cov-scen2-01a} \includegraphics[width=0.4\textwidth]{2BJ2_1_rad.png}}
\hspace{1cm}
\subfigure[Velocity]{\label{fig:val-cov-scen2-01b} \includegraphics[width=0.4\textwidth]{2BJ2_1_vel.png}}
\caption{Comparing MC analysis results to \neptune{} covariance matrix propagation using a \SI{60}{\second} integration step size for an initial radial error of \SI{1}{\metre} considering Earth's flattening. \label{fig:val-cov-scen2-01}}
\end{figure}
While the general behaviour in the radial and the along-track component is similar to the one already seen for the two-body case, including Earth's flattening introduces a drift in the orbit normal component. For the normal velocity component, the standard deviation is in the sub-mm/s region after two orbits, while for the normal position component the standard deviation is in the centimeter regime. Note that the simulation was again based on an initial error of \SI{1}{\metre} in the
radial component.
In \fig{fig:val-cov-scen2-01} the results of \neptune{} closely align with the \gls{acr:mc} analysis. This gives evidence for the correct implementation of the partial derivatives as described in \sect{sec:propagation-covariance-set-integration-geopotential}. One could also look at what happens if the flattened Earth is once again used for the \gls{acr:mc} analysis, while in \neptune{} the covariance matrix is integrated using the two-body scenario only. The result is shown in \fig{fig:val-cov-scen2-02}.
\begin{figure}[h!]
\centering
\subfigure[Position]{\label{fig:val-cov-scen2-02a} \includegraphics[width=0.4\textwidth]{2BJ2_2_rad.png}}
\hspace{1cm}
\subfigure[Velocity]{\label{fig:val-cov-scen2-02b} \includegraphics[width=0.4\textwidth]{2BJ2_2_vel.png}}
\caption{Comparing MC analysis results (incl. Earth's flattening) to \neptune{} covariance matrix propagation using a \SI{60}{\second} integration step size for an initial radial error of \SI{1}{\metre} without Earth's flattening. \label{fig:val-cov-scen2-02}}
\end{figure}
It can be seen that there are significant differences in the radial and the cross-track component of the position error. The normal component even shows a phase shift of half an orbit. One question in this context is why the \neptune{} results for the two-body propagation look different from what was seen in
\fig{fig:val-cov-scen1-01}. This is due to the fact that the state vector is propagated for a flattened Earth, while the covariance matrix is not. Thus, the state vector is different at each point in time when compared to the two-body propagation, which directly affects the integration of the covariance matrix, as it depends on the current radius vector for each integration step.
\paragraph{Scenario 3: Geopotential with order $\gls{sym:geo_n}>2$}
In \fig{fig:val-cov-scen3-01} the results are shown for a 36 $\times$ 36 geopotential for the same orbit and initial error of \SI{1}{\metre} in the radial position component as already in the previous examples.
\begin{figure}[h!]
\centering
\subfigure[Position]{\label{fig:val-cov-scen3-01a} \includegraphics[width=0.4\textwidth]{2BGEOP_1_rad.png}}
\hspace{1cm}
\subfigure[Velocity]{\label{fig:val-cov-scen3-01b} \includegraphics[width=0.4\textwidth]{2BGEOP_1_vel.png}}
\caption{Comparing MC analysis results to \neptune{} covariance matrix propagation using a \SI{60}{\second} integration step size for an initial radial error of \SI{1}{\metre} considering a 36 $\times$ 36 geopotential. \label{fig:val-cov-scen3-01}}
\end{figure}
The situation looks very similar to the one for a second order geopotential as shown in \fig{fig:val-cov-scen2-01}. The differences are very small, so the
same simulation was performed for different orders of the geopotential to have a closer look at the normal component in
\fig{fig:val-cov-scen3-02a} and the radial component in \fig{fig:val-cov-scen3-02b}.
\begin{figure}[h!]
\centering
\subfigure[Normal component]{\label{fig:val-cov-scen3-02a} \includegraphics[width=0.4\textwidth]{2BGEOP_2_nor.png}}
\hspace{1cm}
\subfigure[Radial component]{\label{fig:val-cov-scen3-02b} \includegraphics[width=0.4\textwidth]{2BGEOP_2_rad.png}}
\caption{Sensitivity in covariance matrix propagation for varying order in the geopotential for an initial radial error of \SI{1}{\metre}. \label{fig:val-cov-scen3-02}}
\end{figure}
It can be seen that the error in the standard deviation for the normal component is on the order of about \SI{1}{\milli\metre} when comparing a second order
geopotential to higher orders. This corresponds to a variation of about \SI{5}{\percent} in this example, as the second order geopotential accounts for \SI{2}{\centi\metre} after one and a half orbits. For the radial component, the variation when using different geopotential orders is about \SI{5}{\milli\metre}. Note, however, that these numbers are only an example for that particular orbit (\SI{300}{\kilo\metre}, \ang{50;;}) over a short time frame beyond the initial epoch.
In the same context, it is also interesting to look at orbits experiencing resonance effects due to the geopotential. An example shall be given for a \gls{acr:geo}.\todo[Covariance GEO example]{Provide example for GEO resonance effects in covariance matrix propagation.}
\paragraph{Scenario 4: Atmospheric drag}
For orbits experiencing significant drag perturbations, it is possible to use additional terms in the variational equations to account for the resulting errors
in the covariance matrix propagation. An example for a \SI{200}{\kilo\metre}, \ang{54;;} orbit with an initial error of about \SI{3}{\metre} in the along-track position component is shown in \fig{fig:val-cov-scen4-01}, with \neptune{} integrating only the two-body part for the covariance matrix.
\begin{figure}[h!]
\centering
\subfigure[Position]{\label{fig:val-cov-scen4-01a} \includegraphics[width=0.4\textwidth]{DRAG_2_rad.png}}
\hspace{1cm}
\subfigure[Velocity]{\label{fig:val-cov-scen4-01b} \includegraphics[width=0.4\textwidth]{DRAG_2_vel.png}}
\caption{Drag perturbed orbit (\SI{200}{\kilo\metre}, \ang{54;;}) with \neptune{} not accounting for drag in the variational equations for an initial along-track position
error of \SI{3}{\metre}. \label{fig:val-cov-scen4-01}}
\end{figure}
For the position error in \fig{fig:val-cov-scen4-01a} it can be seen that the error in the along-track component shows a drift of about \SI{10}{\centi\metre} per orbit. The
increase is not accounted for by \neptune{}. Also, the radial component shows a significant difference after just a few orbits.
A similar behaviour can be observed for the velocity errors in \fig{fig:val-cov-scen4-01b}. The radial component shows a secular increase, while the
along-track component decreases. Again, both trends are not followed by \neptune{}, as expected.\todo[Explain drag deviations]{For the propagation of the covariance matrix, explain deviation with respect to the MC simulation}
"alphanum_fraction": 0.7444382715,
"avg_line_length": 77.2369402985,
"ext": "tex",
"hexsha": "c5346f440f115df4e2e012c4723ad98d3f7a84e3",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2022-01-06T15:13:35.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-06-10T05:30:20.000Z",
"max_forks_repo_head_hexsha": "6c170d0df7b12fbfa1e92b15337ca8e8df3b48b0",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "mmoeckel/neptune",
"max_forks_repo_path": "documentation/01-NEPTUNE/validation.tex",
"max_issues_count": 7,
"max_issues_repo_head_hexsha": "6c170d0df7b12fbfa1e92b15337ca8e8df3b48b0",
"max_issues_repo_issues_event_max_datetime": "2022-03-11T12:34:22.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-06-11T03:36:48.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "mmoeckel/neptune",
"max_issues_repo_path": "documentation/01-NEPTUNE/validation.tex",
"max_line_length": 674,
"max_stars_count": 10,
"max_stars_repo_head_hexsha": "6c170d0df7b12fbfa1e92b15337ca8e8df3b48b0",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "mmoeckel/neptune",
"max_stars_repo_path": "documentation/01-NEPTUNE/validation.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-11T10:48:20.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-03-30T08:42:47.000Z",
"num_tokens": 11886,
"size": 41399
} |
\section*{Chapter 5: The Emergence of Modern Number Theory}
\paragraph{Exercise 5.1}
Prove that if $n > 4$ is composite, then $(n-1)!$ is a multiple of $n$.
\begin{proof}
Let $n > 4$ be a composite integer with prime factorization
$n = p_1^{\alpha_1} \dots p_k^{\alpha_k}$. First note that, if $d$ is a
proper divisor of $n$, then $\Divides{d}{(n-1)!}$. Indeed, $d \leq n-1$,
and so $(n-1)! = (n-1) \cdot (n-2) \cdots d \cdot (d-1) \cdots 1$. \\
Suppose that $k > 1$. Then, $\Divides{p_i^{\alpha_i}}{(n-1)!}$,
$1 \leq i \leq k$. Since $p_1^{\alpha_1}, \dots, p_k^{\alpha_k}$ are
pairwise coprime, we have that
$\Divides{n = p_1^{\alpha_1} \dots p_k^{\alpha_k}}{(n-1)!}$.\\
Now, suppose that $k = 1$. Since
$n$ is composite, $\alpha_1 \geq 2$; and since $n = p_1^{\alpha_1} > 4$, either $p_1 > 2$ or
otherwise $\alpha_1 > 2$. In the latter case,
$(n-1)! = (n-1) \cdot (n-2) \cdots p_1^{\alpha_1 - 1} \cdots p_1 \cdots 1$,
and so $\Divides{n = p_1^{\alpha_1}}{(n-1)!}$. Otherwise, if $\alpha_1 = 2$,
$2 \cdot p_1 < p_1^2 = n$, and so
$(n-1)! = (n-1) \cdot (n-2) \cdots 2p_1 \cdots p_1 \cdots 1$, which means
that $\Divides{n = p_1^{2}}{(n-1)!}$.
\end{proof}
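As a quick check of the two subcases considered above: for $n = 8 = 2^3$ (where $\alpha_1 = 3 > 2$), $7! = 5040 = 8 \cdot 630$; and for $n = 9 = 3^2$ (where $\alpha_1 = 2$ and $p_1 = 3 > 2$), $2 \cdot 3 < 9$ and $8! = 40320 = 9 \cdot 4480$.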
\documentclass[]{book}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage[margin=1in]{geometry}
\usepackage{hyperref}
\hypersetup{unicode=true,
pdftitle={A brief introduction to QGIS},
pdfauthor={Ivan Viveros Santos},
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{natbib}
\bibliographystyle{apalike}
\usepackage{longtable,booktabs}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
%%% Change title format to be more compact
\usepackage{titling}
% Create subtitle command for use in maketitle
\newcommand{\subtitle}[1]{
\posttitle{
\begin{center}\large#1\end{center}
}
}
\setlength{\droptitle}{-2em}
\title{A brief introduction to QGIS}
\pretitle{\vspace{\droptitle}\centering\huge}
\posttitle{\par}
\author{Ivan Viveros Santos}
\preauthor{\centering\large\emph}
\postauthor{\par}
\predate{\centering\large\emph}
\postdate{\par}
\date{2019-09-23}
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
\thm@preskip=8pt plus 2pt minus 4pt
\thm@postskip=\thm@preskip
}
\makeatother
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}[chapter]
\newtheorem{lemma}{Lemma}[chapter]
\theoremstyle{definition}
\newtheorem{definition}{Definition}[chapter]
\newtheorem{corollary}{Corollary}[chapter]
\newtheorem{proposition}{Proposition}[chapter]
\theoremstyle{definition}
\newtheorem{example}{Example}[chapter]
\theoremstyle{definition}
\newtheorem{exercise}{Exercise}[chapter]
\theoremstyle{remark}
\newtheorem*{remark}{Remark}
\newtheorem*{solution}{Solution}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\chapter{Introduction}\label{introduction}
\href{https://qgis.org/en/site/}{QGIS} is an Open Source Geographic
Information System (GIS) licensed under the GNU General Public License.
The aim of this document is to present some recipes about importing and
exporting vectors, and treatment and exploration of spatial data. I plan
to add more recipes on vector styling and map composition.
This is a brief introduction to QGIS; as such, it does not replace in
any manner the \href{https://docs.qgis.org/3.4/en/docs/user_manual/}{QGIS
User Guide}. I also recommend visiting the
\href{https://gis.stackexchange.com/}{StackExchange} channel, where you
can get support on questions related to QGIS.
The data used to illustrate the recipes presented in this document come
from the \href{http://donnees.ville.montreal.qc.ca/}{Portail données
ouvertes Montréal}; particularly, the selected data from this site is
under the license
\href{https://creativecommons.org/licenses/by/4.0/}{Creative Commons
Attribution 4.0 International}.
To follow the recipes presented in this document, please download the
accompanying files from my
\href{https://github.com/iviveros/QGIS_Data}{GitHub page}.
\chapter{QGIS installation and Graphical User
Interface}\label{installation}
\section{QGIS installation}\label{qgis-installation}
It is possible to install QGIS on Windows, Mac OS X, Linux, BSD, and
Android operating systems. The installers can be downloaded from
\href{https://qgis.org/en/site/forusers/download.html}{this website},
and \href{https://qgis.org/en/site/forusers/alldownloads.html\#}{this
link} provides detailed installation instructions.
At the time of writing this document, the latest version of QGIS is
\texttt{3.8.2\ Zanzibar}, which was released on October 16, 2019.
\section{QGIS Graphical User
Interface}\label{qgis-graphical-user-interface}
The QGIS graphical user interface (GUI) is composed of:
\begin{itemize}
\tightlist
\item
\textbf{Menu bar}: gives access to the main functionalities of QGIS.
\item
\textbf{Toolbars}: give a quick access to QGIS functionalities.
\item
\textbf{Panels}: they provide several functionalities, for instance
managing layers, and browsing spatial data.
\item
\textbf{Map display}: shows the spatial data of the current project.
\end{itemize}
\begin{figure}
{\centering \includegraphics[width=16.19in]{figures/QGIS_interface}
}
\caption{QGIS GUI}\label{fig:unnamed-chunk-1}
\end{figure}
It is possible to customize the appearance of QGIS by navigating to
\textbf{View} from the \textbf{Menu bar}. From here, you can select the
panels and toolbars you want to display. I would recommend toggling the
\textbf{Processing Toolbox} panel because it allows quick
access to the main operations available in QGIS.
\chapter{Loading and Exporting data}\label{loading-and-exporting-data}
The quickest way to load data into QGIS is drag-and-drop.
It only requires locating the data you want to import into QGIS, selecting
the file, and dropping it on the \textbf{Map display} or on the
\textbf{layer panel}.
Another option for importing data is to use the browser panel. Navigate
through the browser, locate the layer or data to be imported,
right-click on it, and finally select Add Layer to Project. However,
using the \textbf{Manage Layers Toolbar} or \textbf{Add Layer} from the
\textbf{Layer} menu gives more control over the import.
\section{Importing data from text
files}\label{importing-data-from-text-files}
This recipe is illustrated with the
\texttt{coordonnees-stations-rsqa.csv} file, which reports the locations
of the stations of the Air Quality Monitoring Network (RSQA) set up in
Montreal. The aim of the
\href{https://ville.montreal.qc.ca/rapportmontrealdurable/en/air-quality.php}{RSQA}
is to monitor the atmospheric concentration of
\href{https://www.epa.gov/criteria-air-pollutants}{criteria pollutants}.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Navigate to the \textbf{Layer} menu and select \textbf{Add delimited
text layer}. The following dialog will pop up:
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=15.21in]{figures/Delimited_Text_Dialog}
}
\caption{Add delimited text layer window}\label{fig:unnamed-chunk-2}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
In the \textbf{File Name} field, indicate the path to the
\texttt{coordonnees-stations-rsqa.csv} file.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=15.38in]{figures/Delimited_Text_Dialog_2}
}
\caption{Indicating the path to the text file}\label{fig:unnamed-chunk-3}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\item
In the \textbf{Geometry Definition} section, indicate that point
coordinates are being imported. Then indicate the corresponding fields
for longitude and latitude. According to the sources of the layer
being imported, the coordinate system is WGS 84, so we don't need to
change it. When you want to import an attribute table, you can select
the option \textbf{No geometry}.
\item
Click on \textbf{Add}. You will see the following set of points,
probably not in the same colour. Optionally, you can add a base map or
another layer to give some spatial context, but for now we have
successfully imported the layer corresponding to the stations of the
RSQA network.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=11.49in]{figures/Stations_RSQA}
}
\caption{Montreal's Air Quality Monitoring Network}\label{fig:unnamed-chunk-4}
\end{figure}
\begin{quote}
In section \ref{basemaps} we describe how to add a basemap to add
spatial context.
\end{quote}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{4}
\tightlist
\item
\textbf{Optional step}: save the imported layer as a new shapefile
(Section \ref{saveLayer}).
\end{enumerate}
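The same import can be scripted from the QGIS Python console using the \texttt{delimitedtext} data provider. The snippet below is a minimal sketch: the file path is a placeholder, and the field names \texttt{longitude} and \texttt{latitude} are assumptions that must match the actual column headers of the CSV file.
\begin{verbatim}
# Run from the QGIS Python console
from qgis.core import QgsVectorLayer, QgsProject

uri = ("file:///path/to/coordonnees-stations-rsqa.csv"
       "?delimiter=,&xField=longitude&yField=latitude&crs=EPSG:4326")
layer = QgsVectorLayer(uri, "stations_rsqa", "delimitedtext")
if layer.isValid():
    QgsProject.instance().addMapLayer(layer)
else:
    print("Layer failed to load - check the path and the field names")
\end{verbatim}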
\section{Importing data from KML
files}\label{importing-data-from-kml-files}
QGIS supports importing
\href{https://developers.google.com/kml/}{Keyhole Markup Language} (KML)
files. KML is the format used by Google Earth to show spatial data.
The importation of a KML is illustrated with the \texttt{grandparcs.kml}
file which contains polygons corresponding to the
\href{http://donnees.ville.montreal.qc.ca/dataset/grands-parcs}{parks of
Montreal}.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Navigate to the \textbf{Layer} menu and select \textbf{Add vector
layer}. The following dialog will pop up:
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=14.32in]{figures/Import_KML}
}
\caption{Add vector layer window}\label{fig:unnamed-chunk-5}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
In the \textbf{File Name} field, indicate the path to the
\texttt{grandparcs.kml} file. The following layer will be displayed in
QGIS:
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=9.64in]{figures/Import_KML_2}
}
\caption{Montreal's parks}\label{fig:unnamed-chunk-6}
\end{figure}
\section{Importing GeoJSON files}\label{importing-geojson-files}
\href{https://geojson.org/}{GeoJSON} is another frequently used format
for storing and representing geographical attributes. This format is
based on the JavaScript Object Notation (JSON).
The importation of a GeoJSON layer is illustrated with the
\texttt{ilotschaleur.json} file. This file contains the
\href{http://donnees.ville.montreal.qc.ca/dataset/schema-environnement-milieux-naturels/resource/8cd8d34a-cfdd-4acf-a363-d4adaeff18c0}{urban
heat islands (UHI) of Montreal}. UHI correspond to urban areas
characterized by higher summer temperatures than the immediate
environment with differences between 5 and 10°C.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Navigate to the \textbf{Layer} menu and select \textbf{Add vector
layer}. The following dialog will pop up:
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=12.12in]{figures/Import_geojson}
}
\caption{Add vector layer window}\label{fig:unnamed-chunk-7}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
In the \textbf{File Name} field, indicate the path to the
\texttt{ilotschaleur.json} file. The following layer will be displayed
in QGIS:
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=9.46in]{figures/Import_geojson_2}
}
\caption{Urban heat islands (UHI) of Montreal}\label{fig:unnamed-chunk-8}
\end{figure}
\section{Saving a layer}\label{saveLayer}
We use the previously imported \texttt{coordonnees-stations-rsqa.csv}
file to illustrate how to export a layer in a different format.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Right-click on the name of the layer, select \textbf{Export}, then
\textbf{Save features As\ldots{}} The following dialog window will pop
up.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=9.78in]{figures/Save_Layer}
}
\caption{Save features as... window}\label{fig:unnamed-chunk-9}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\item
Select the format in which you want to export the layer. In this case,
we have selected the widely used ESRI shapefile, as we will use this
layer in future recipes. Another commonly used format is GeoJSON, since
it is compatible with web-based applications.
\item
Indicate the path where the layer will be stored and give it a name.
Indicate whether you want to add the saved file to the current
project, and finally click on OK.
\end{enumerate}
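For scripted workflows, a layer can also be exported from the Python console. The sketch below uses the long-standing \texttt{QgsVectorFileWriter.writeAsVectorFormat} call; newer QGIS 3 releases provide replacement methods with different signatures, so treat this as an illustrative example rather than the canonical API for your version. The output path is a placeholder.
\begin{verbatim}
from qgis.core import QgsVectorFileWriter
from qgis.utils import iface

layer = iface.activeLayer()                 # e.g. the imported stations layer
QgsVectorFileWriter.writeAsVectorFormat(
    layer,
    "/path/to/stations_rsqa.shp",           # placeholder output path
    "UTF-8",                                # file encoding
    layer.crs(),                            # keep the current CRS
    "ESRI Shapefile")                       # output driver
\end{verbatim}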
\section{Reprojecting a layer}\label{reprojecting-a-layer}
Most of the time, the layers are not in the CRS that is most convenient
for the operation at hand. QGIS offers on-the-fly reprojection for
rendering the layers. However, when executing operations like spatial
analysis, all layers are required to be in the same CRS. This recipe
is illustrated with the \texttt{LIMADMIN.shp} file, which corresponds to
the administrative limits of Montreal's boroughs.
Before executing a spatial analysis, it is recommended to reproject the
layers to the most convenient CRS.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Right-click on the name of the layer, select \textbf{Export}, then
\textbf{Save features As\ldots{}}. The following dialog window will
pop up.
\item
Indicate the path where the layer will be stored and give it a name.
If you tick the box \textbf{Add saved file to map}, the recently
exported layer will be saved and added to the current project.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=9.64in]{figures/Reproject_Layer}
}
\caption{Reprojecting a layer}\label{fig:unnamed-chunk-10}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Click on the little globe to select the CRS in which the new layer
will be projected. The following window will pop up:
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=9.88in]{figures/Reproject_Layer_CRS}
}
\caption{Selecting another CRS}\label{fig:unnamed-chunk-11}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
You can filter the CRS. In this case we will export the new layer in
\textbf{EPSG:6622}, since it is the most accurate for
\href{https://epsg.io/6622}{Quebec}. Click \textbf{OK} to confirm the
selection of the CRS and click again \textbf{OK} to save the layer.
\end{enumerate}
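The reprojection can also be run through the Processing framework from the Python console. The sketch below calls the \texttt{native:reprojectlayer} algorithm; the input and output paths are placeholders.
\begin{verbatim}
import processing  # available from the QGIS Python console

processing.run("native:reprojectlayer", {
    "INPUT": "/path/to/LIMADMIN.shp",        # layer to reproject
    "TARGET_CRS": "EPSG:6622",               # NAD83(CSRS) / Quebec Lambert
    "OUTPUT": "/path/to/LIMADMIN_6622.shp"   # reprojected copy
})
\end{verbatim}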
\chapter{Data treatment}\label{data-treatment}
\section{Joining a layer data}\label{joining-a-layer-data}
Another frequent task before executing spatial analysis with QGIS is to
join an attribute table to a layer.
To illustrate this task, we refer to the \texttt{stations\_rsqa.shp}
file generated in section 1 and the
\texttt{pollutants\_average\_12\_31\_2019\_13H.csv} file. When we load
the shapefile into QGIS we identify that this layer only contains
information on the identification and location of stations that compose
the Air Quality Monitoring Network (RSQA).
\begin{figure}
{\centering \includegraphics[width=15.43in]{figures/Change_Encoding}
}
\caption{Attribute table of stations}\label{fig:unnamed-chunk-12}
\end{figure}
We notice another problem: the name (nom) and the address of the
stations are not correctly displayed. In order to fix it, right-click on
the name of the layer and select \textbf{Properties}. A window will pop up. Go
to \textbf{Source} and change the \textbf{Data Source Encoding} to
\textbf{UTF-8}. Finally, click on \textbf{Apply} to accept the changes.
\begin{figure}
{\centering \includegraphics[width=12.79in]{figures/Change_Encoding_2}
}
\caption{Changing encoding}\label{fig:unnamed-chunk-13}
\end{figure}
Now, import the \texttt{pollutants\_average\_12\_31\_2019\_13H.csv}
file. One quick method is to drag and drop the file from the file browser into
QGIS. This works fine for this file; however, you can also use
\textbf{Add Delimited Text Layer} to have more control over the
import.
The \texttt{pollutants\_average\_12\_31\_2019\_13H.csv} file reports the
average concentration of criteria pollutants from December 23, 2013, at
12h. The units of concentration are indicated in the following table.
\begin{longtable}[]{@{}ll@{}}
\toprule
Pollutant & Unit\tabularnewline
\midrule
\endhead
CO & ppm\tabularnewline
H2S & ppb\tabularnewline
NO & ppb\tabularnewline
NO2 & ppb\tabularnewline
SO2 & \(\mu g/m^3\)\tabularnewline
PM10 & \(\mu g/m^3\)\tabularnewline
PM2.5 & \(\mu g/m^3\)\tabularnewline
O3 & ppb\tabularnewline
\bottomrule
\end{longtable}
In order to join the \texttt{pollutants\_average\_12\_31\_2019\_13H}
attribute table to the \texttt{stations\_rsqa} layer, follow these
steps:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Right-click on the name of \texttt{stations\_rsqa} layer, select
\textbf{Properties}, then \textbf{Joins} from the dialog window.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=13.24in]{figures/Joins_Dialog_Box}
}
\caption{Layer properties window}\label{fig:unnamed-chunk-14}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Click on the green \textbf{+} sign. The following window will pop up:
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=7.26in]{figures/Joins_Dialog_Box_2}
}
\caption{Joining a layer table}\label{fig:unnamed-chunk-15}
\end{figure}
In this case, since we have imported only one attribute table, QGIS has
already selected the Join layer.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\item
Specify the \textbf{Join field} and the \textbf{Target field}, which
correspond to the keys that relate the shapefile layer and the data
layer. In this example, \textbf{NO\_POSTE} is the identifier of the
stations in the data layer and \textbf{Numéro st} is the identifier of
the stations in the shapefile layer. Furthermore, it is possible to
select the fields that will be joined, and the prefix that will be
used. Since there are no repeated columns, we just deleted the default
prefix, which corresponds to the name of the layer.
\item
Click on \textbf{OK}, then on \textbf{Apply} to finish the joining.
\end{enumerate}
\begin{quote}
To verify if the join has worked, you can open the attribute table of
the shapefile. To make this change permanent, you need to export the
layer. After the join, the layer was exported as
\texttt{stations\_rsqa\_12\_31\_2013.shp}.
\end{quote}
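The same join can be defined programmatically with a \texttt{QgsVectorLayerJoinInfo} object, as sketched below. The layer names passed to \texttt{mapLayersByName} are assumptions and must match the names shown in the Layers panel.
\begin{verbatim}
from qgis.core import QgsProject, QgsVectorLayerJoinInfo

# Layer names as they appear in the Layers panel (assumed names)
stations_layer = QgsProject.instance().mapLayersByName("stations_rsqa")[0]
pollutants_layer = QgsProject.instance().mapLayersByName(
    "pollutants_average_12_31_2019_13H")[0]

join = QgsVectorLayerJoinInfo()
join.setJoinLayer(pollutants_layer)
join.setJoinFieldName("NO_POSTE")     # key field in the data layer
join.setTargetFieldName("Numéro st")  # key field in the shapefile layer
join.setPrefix("")                    # drop the default prefix, as in the dialog
stations_layer.addJoin(join)
\end{verbatim}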
\section{Cleaning up the attribute
table}\label{cleaning-up-the-attribute-table}
Sometimes, data imported into QGIS is not in the correct format, the
column names are not self-explanatory, or we simply want to discard
the columns that will not be used during the task at hand.
The \textbf{Refactor fields} algorithm simplifies removing, renaming and
converting the format of dbf tables in QGIS. This algorithm can be
accessed through the \textbf{Processing Toolbox}. The use of
\textbf{Refactor fields} is illustrated with the
\texttt{stations\_rsqa\_12\_31\_2013.shp} file that was generated in the
previous section.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Import \texttt{stations\_rsqa\_12\_31\_2013.shp} file
\item
Launch the \textbf{Refactor fields} algorithm. It will detect the available layer
in the current project. In this window we can identify the column
names, the type of information they store, and their length and
precision. The columns corresponding to concentrations, such as CO,
H2S, and NO, are currently stored as text (data type is string). This
is not convenient, since we cannot do arithmetic with text.
Furthermore, we may want to change the name of the \textbf{Numéro st}
field to \textbf{Station}.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=15.72in]{figures/Refactor_Dialog}
}
\caption{Refactor field window}\label{fig:unnamed-chunk-16}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
We change the name of the first three columns according to the
following figure, and set the type of the columns corresponding to
concentrations to \textbf{Double} (real number) with a length
of 23 and a precision of 3. Click on \textbf{Run} to generate a new
layer.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=15.32in]{figures/Refactor_Dialog_Settings}
}
\caption{Using the refactor field algorithm}\label{fig:unnamed-chunk-17}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
The generated layer is named \texttt{Refactored} by default; of course, you can
change its name at your convenience.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=8.82in]{figures/Refactor_Dialog_Settings_2}
}
\caption{Saving the refactored layer}\label{fig:unnamed-chunk-18}
\end{figure}
\chapter{Data preprocessing steps}\label{data-preprocessing-steps}
\section{Clipping vectors}\label{clipping-vectors}
In some cases, it is necessary that a layer covers only an area of
interest. For this purpose, we can use another layer that sets the extent we
want to keep for a section of the first layer. To accomplish this task, we use
\textbf{Clip} from the \textbf{Geoprocessing Tools}.
To illustrate the operation of clipping vectors, we will use
the \texttt{terre.shp} and \texttt{LIMADMIN.shp} files.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Load both layers into QGIS. \texttt{LIMADMIN.shp} corresponds to the
administrative limits of Montreal's boroughs (last update in 2013);
however, the polygons extend beyond the terrestrial limits, whereas
the \texttt{terre.shp} file corresponds to the terrestrial limits of
Montreal Island.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=10.25in]{figures/Clipping_Vectors}
}
\caption{Montreal’s boroughs}\label{fig:unnamed-chunk-19}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Navigate to \textbf{Vector}, then to \textbf{Geoprocessing Tools}, and
select \textbf{Clip}. The following window will pop up. The
\texttt{LIMADMIN.shp} layer corresponds to the Input layer, since it
is the layer we want to cut, whereas the \texttt{terre.shp} file
is the Overlay layer, the layer we will use to set the limits we want
to keep.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=10.6in]{figures/Clipping_Vectors_Dialog}
}
\caption{Using the clip algorithm}\label{fig:unnamed-chunk-20}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
Click on Run to generate a new layer. Save the clipped layer as
\texttt{limadmin\_clipped.shp}.
\end{enumerate}
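The clip operation is also exposed as a Processing algorithm, so the steps above can be reproduced from the Python console as sketched below (all paths are placeholders).
\begin{verbatim}
import processing

processing.run("native:clip", {
    "INPUT": "/path/to/LIMADMIN.shp",           # layer to be cut
    "OVERLAY": "/path/to/terre.shp",            # layer defining the extent to keep
    "OUTPUT": "/path/to/limadmin_clipped.shp"   # clipped result
})
\end{verbatim}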
\section{Intersecting vectors}\label{IntersectionVectors}
The \textbf{Intersection} algorithm extracts the properties from the
input layer that overlap features in the overlay layer.
To illustrate this recipe, consider the \texttt{terre.shp} and
\texttt{LIMADMIN.shp} files. In this case, we will create a layer
resulting from the intersection between \texttt{terre.shp} and
\texttt{LIMADMIN.shp}. Therefore, the \texttt{terre.shp} layer
corresponds to the \textbf{Input Layer}, whereas \texttt{LIMADMIN.shp}
to the \textbf{Overlay Layer}. We can add an overlay index to identify
the features from the overlay layer that were intersected in the
generated Intersection layer.
\begin{figure}
{\centering \includegraphics[width=10.5in]{figures/Intersecting_Vectors_Dialog}
}
\caption{Intersection of vectors}\label{fig:unnamed-chunk-21}
\end{figure}
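A minimal scripted equivalent using the \texttt{native:intersection} algorithm could look as follows (paths are placeholders).
\begin{verbatim}
import processing

processing.run("native:intersection", {
    "INPUT": "/path/to/terre.shp",            # input layer
    "OVERLAY": "/path/to/LIMADMIN.shp",       # overlay layer
    "OUTPUT": "/path/to/intersection.shp"     # features common to both layers
})
\end{verbatim}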
\section{Check validity and fix
geometries}\label{check-validity-and-fix-geometries}
In some cases, when clipping and intersecting vectors, errors may arise
because of invalid geometries. Fortunately, QGIS allows us to check the
validity of layers, and even more importantly to fix them. The following
algorithms can be accessed through the \textbf{Processing Toolbox}:
\begin{itemize}
\tightlist
\item
\textbf{Check validity}: The algorithm performs a validity check on
the geometries of a vector layer. The geometries are classified in
three groups (valid, invalid and error).
\item
\textbf{Fix geometries}: The algorithm attempts to create a valid
representation of an invalid geometry without losing any of the input
vertices.
\end{itemize}
\begin{figure}
{\centering \includegraphics[width=3.94in]{figures/Validity_Fix_Geometry}
}
\caption{Check validity and fix geometries}\label{fig:unnamed-chunk-22}
\end{figure}
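Both checks can be chained from the Python console. The sketch below first repairs the geometries with \texttt{native:fixgeometries} and then counts how many features remain invalid; the paths and the layer name are placeholders.
\begin{verbatim}
import processing
from qgis.core import QgsVectorLayer

fixed_path = processing.run("native:fixgeometries", {
    "INPUT": "/path/to/ilotschaleur.shp",
    "OUTPUT": "/path/to/ilotschaleur_fixed.shp"
})["OUTPUT"]

# Count features whose geometry is still invalid after the repair
layer = QgsVectorLayer(fixed_path, "ilotschaleur_fixed", "ogr")
invalid = sum(1 for f in layer.getFeatures()
              if not f.geometry().isGeosValid())
print(invalid, "invalid geometries left")
\end{verbatim}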
\chapter{Data exploration}\label{data-exploration}
\section{Listing unique values in a
column}\label{listing-unique-values-in-a-column}
The \textbf{List Unique values} algorithm, from the \textbf{Processing
Toolbox}, generates a table and an HTML report of the unique values of a
given layer's field.
To illustrate the use of the \textbf{List Unique values} algorithm, we
will import the \texttt{grandparcs.kml} file. The field
\textbf{Generique2} stores the type of parks located in Montreal, and we
would like to know the unique values without having to open the
attribute table.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Launch the \textbf{List Unique values} algorithm from the
\textbf{Processing Toolbox}. It will identify the available layer in
the current project.
\item
Click on \textbf{\ldots{}} from the \textbf{Target Field(s)} and
select \textbf{Generique2}. Click on Run to generate a temporary layer
and an HTML report.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=14.79in]{figures/List_Unique_Values}
}
\caption{List unique values window}\label{fig:unnamed-chunk-23}
\end{figure}
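If you prefer the Python console, the unique values of a field can also be collected directly from the layer, as in the short sketch below; the layer name \texttt{grandparcs} is an assumption and must match the name in the Layers panel.
\begin{verbatim}
from qgis.core import QgsProject

layer = QgsProject.instance().mapLayersByName("grandparcs")[0]
idx = layer.fields().indexOf("Generique2")
print(layer.uniqueValues(idx))   # set of distinct values in the field
\end{verbatim}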
\section{Loading BaseMaps}\label{basemaps}
When we import layers into QGIS, sometimes it is difficult to identify
what the points, lines or polygons of a layer correspond to. In these
situations, it is very helpful to add a base map to give some spatial
context.
One plugin that comes in handy to add spatial context is
\textbf{QuickMapServices}. If it is not already available in your QGIS
Desktop, navigate to \textbf{Plugins}, then to \textbf{Manage and
Install Plugins}. In the window that is displayed, search for
\textbf{QuickMapServices} and click on \textbf{Install Plugin}.
\begin{figure}
{\centering \includegraphics[width=14.35in]{figures/QuickMapServices}
}
\caption{Manage and install plugins in QGIS}\label{fig:unnamed-chunk-24}
\end{figure}
To illustrate the use of \textbf{QuickMapServices}, load the
\texttt{stations\_rsqa\_12\_31\_2013.shp}. Navigate to \textbf{Web} from
the menu bar, select \textbf{QuickMapServices}, go to \textbf{OSM}, and
select \textbf{OSM\_Standard}. You will see the following set of air
quality monitoring stations located in Montreal.
\begin{figure}
{\centering \includegraphics[width=11.42in]{figures/BaseMaps_Example}
}
\caption{Montreal's Air Quality Monitoring Network}\label{fig:unnamed-chunk-25}
\end{figure}
\chapter{Exercises}\label{exercises}
\section{Exercise 1: Determine the area fraction of urban heat islands
by boroughs of
Montreal}\label{exercise-1-determine-the-area-fraction-of-urban-heat-islands-by-boroughs-of-montreal}
The aim of this exercise is to generate a
\href{https://en.wikipedia.org/wiki/Choropleth_map}{choropleth map}
showing the area fraction of urban heat islands by boroughs of Montreal.
\begin{figure}
{\centering \includegraphics[width=18.1in]{figures/Choropleth_UHI}
}
\caption{Area fraction of UHI by boroughs}\label{fig:unnamed-chunk-26}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Import \texttt{ilotschaleur.json} and \texttt{limadmim\_clipped.shp}.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=15.1in]{figures/UHA_Montreal}
}
\caption{UHI and boroughs of Montreal}\label{fig:unnamed-chunk-27}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\item
Export \texttt{ilotschaleur.json} as a shapefile and save it as
\texttt{ilotschaleur.shp} (Section \ref{saveLayer}).
\item
Intersect the recently created layer \texttt{ilotschaleur.shp} with
\texttt{limadmim\_clipped.shp}. Navigate to \textbf{Vector}, select
\textbf{Geoprocessing Tools}, and then \textbf{Intersection} (Section
\ref{IntersectionVectors}). Set the following parameters:
\end{enumerate}
\begin{itemize}
\tightlist
\item
\textbf{Input layer}: \texttt{ilotschaleur.shp}
\item
\textbf{Overlay layer}: \texttt{limadmim\_clipped.shp}
\end{itemize}
The aim is to generate a new layer in which the urban heat islands are
divided according to Montreal's boroughs.
\begin{figure}
{\centering \includegraphics[width=10.64in]{figures/Intersection_UHA}
}
\caption{Intersection of layers}\label{fig:unnamed-chunk-28}
\end{figure}
Click Run to execute the algorithm. However, it will stop since there
are some invalid geometries in the input layer.
\begin{figure}
{\centering \includegraphics[width=10.54in]{figures/Intersection_UHA_Error}
}
\caption{Error: Intersection of layers}\label{fig:unnamed-chunk-29}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
We will use the \textbf{Fix Geometries} algorithm from the
\textbf{Processing Toolbox} to fix the geometries. Run the algorithm
according to the following settings.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=10.46in]{figures/Fix_Geometry_UHA}
}
\caption{Fix geometries algorithm}\label{fig:unnamed-chunk-30}
\end{figure}
A temporary layer will be generated after running the algorithm. Right-click
on the layer \texttt{Fixed\ geometries} and rename it
\texttt{ilotschaleur\_fixed}.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{4}
\tightlist
\item
Run again the \textbf{Intersection} algorithm, but this time use the
fixed layer of urban heat islands. Set the following parameters:
\end{enumerate}
\begin{itemize}
\tightlist
\item
\textbf{Input layer}: \texttt{ilotschaleur\_fixed.shp}
\item
\textbf{Overlay layer}: \texttt{limadmim\_clipped.shp}
\end{itemize}
\begin{figure}
{\centering \includegraphics[width=10.53in]{figures/Intersection_UHA_Fixed}
}
\caption{Intersection of layers}\label{fig:unnamed-chunk-31}
\end{figure}
A temporary layer \texttt{Intersection} will be generated after running
the algorithm. To verify that the algorithm worked properly, open the
attribute table: the \texttt{Intersection} layer has 584 features,
whereas \texttt{ilotschaleur\_fixed} has 498.
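For readers who prefer scripting, the same two steps (Fix Geometries, then Intersection) can also be run from the QGIS Python console. The snippet below is only a sketch: it assumes the two layers are already loaded under the names shown in the Layers panel and uses the standard QGIS~3 Processing algorithm identifiers.
\begin{verbatim}
# Sketch (QGIS 3 Python console): fix invalid geometries, then intersect.
# Assumes the layer names match those shown in the Layers panel.
from qgis.core import QgsProject
import processing

project = QgsProject.instance()
uhi = project.mapLayersByName("ilotschaleur")[0]
boroughs = project.mapLayersByName("limadmim_clipped")[0]

fixed = processing.run("native:fixgeometries",
                       {"INPUT": uhi,
                        "OUTPUT": "memory:ilotschaleur_fixed"})["OUTPUT"]

intersection = processing.run("native:intersection",
                              {"INPUT": fixed,
                               "OVERLAY": boroughs,
                               "OUTPUT": "memory:Intersection"})["OUTPUT"]

project.addMapLayer(intersection)
print(intersection.featureCount())   # 584 features expected for this dataset
\end{verbatim}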
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{5}
\tightlist
\item
Dissolve the \texttt{Intersection} layer by NOM. The aim is to combine
the polygons corresponding to the same borough, which is given by the
NOM field. In the \textbf{Dissolve field(s)}, click on the \ldots{}
and select NOM.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=10.6in]{figures/Dissolve_UHA}
}
\caption{Dissolve Intersection layer by NOM}\label{fig:unnamed-chunk-32}
\end{figure}
After running the \textbf{Dissolve} algorithm, a temporary layer
\texttt{Dissolved} will be generated. Open the attribute table of this
layer and verify that it has 33 features, whereas the
\texttt{Intersection} layer has 584. The task of combining the polygons
has thus been accomplished.
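The Dissolve step can be scripted in the same way; the following sketch assumes the \texttt{Intersection} layer is currently selected in the Layers panel.
\begin{verbatim}
# Sketch (QGIS 3 Python console): dissolve the Intersection layer by NOM.
from qgis.utils import iface
import processing

intersection = iface.activeLayer()   # select the Intersection layer first
dissolved = processing.run("native:dissolve",
                           {"INPUT": intersection,
                            "FIELD": ["NOM"],     # one polygon per borough
                            "OUTPUT": "memory:Dissolved"})["OUTPUT"]
print(dissolved.featureCount())      # 33 boroughs expected
\end{verbatim}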
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{6}
\tightlist
\item
Calculate the area of the urban heat islands by boroughs.
\end{enumerate}
First, we will delete the fields of the \texttt{Dissolved} layer
that will not be used. Open the attribute table, then click on the
pencil shown in the left corner (\textbf{Toggle editing mode}) to enable
editing of the attribute table. Click on \textbf{Delete field} and select
the fields shown in the following figure:
\begin{figure}
{\centering \includegraphics[width=13.75in]{figures/Delete_Fields}
}
\caption{Delete some fields of Dissolved layer}\label{fig:unnamed-chunk-33}
\end{figure}
We will now calculate the area of the urban heat islands by borough.
Click on \textbf{New field}. In the window that appears, select
\textbf{Create a new field}, enter \emph{AREA} as the \textbf{Output
field name}, and set \emph{Decimal number (real)} as the \textbf{Output
field type}. Double-click on \emph{\$area} in the center panel.
\begin{figure}
{\centering \includegraphics[width=12.49in]{figures/Calculate_Area}
}
\caption{Calculate the area of UHI}\label{fig:unnamed-chunk-34}
\end{figure}
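A scripted alternative for adding the AREA field is sketched below. Note that it computes planar areas with PyQGIS, whereas the \$area expression may use ellipsoidal measurements depending on the project settings, so small numerical differences are possible.
\begin{verbatim}
# Sketch (QGIS 3 Python console): add an AREA field to the selected layer.
from qgis.core import QgsField, edit
from qgis.utils import iface
from qgis.PyQt.QtCore import QVariant

layer = iface.activeLayer()   # select the Dissolved layer first
layer.dataProvider().addAttributes([QgsField("AREA", QVariant.Double)])
layer.updateFields()

with edit(layer):
    for feature in layer.getFeatures():
        feature["AREA"] = feature.geometry().area()  # planar area, layer units
        layer.updateFeature(feature)
\end{verbatim}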
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{7}
\tightlist
\item
Join the AREA of the urban heat islands (\texttt{Dissolved} layer) to
the respective borough in the \texttt{limadmim\_clipped.shp} layer.
The prefix \textbf{UHA\_} was set to distinguish the fields coming from
the joined layer.
\end{enumerate}
\begin{figure}
{\centering \includegraphics[width=14.26in]{figures/Join_UHA}
}
\caption{Join layers}\label{fig:unnamed-chunk-35}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{8}
\tightlist
\item
Calculate the area fraction of urban heat islands (UHI) by borough.
\end{enumerate}
First, calculate the AREA of each borough from the
\texttt{limadmim\_clipped.shp} layer. Click on \textbf{New field}. In
the window that appears, select \textbf{Create a new field}, enter AREA
as the Output field name, and set Decimal number (real) as the Output
field type. Double-click on \$area in the center panel.
\begin{figure}
{\centering \includegraphics[width=13.42in]{figures/Calculate_Area_Montreal}
}
\caption{Calculate the area of Montreal's boroughs}\label{fig:unnamed-chunk-36}
\end{figure}
Finally, calculate the area fraction of UHI. In the center panel, go to
\textbf{Fields and Values} and double-click the fields involved in the
computation of the new field. In this case, the area fraction is given
by the expression: FRAC\_UHA = UHA\_AREA/AREA
\begin{quote}
Note: UHI was misspelled in the field names, so UHA stands for UHI.
\end{quote}
\begin{figure}
{\centering \includegraphics[width=14.1in]{figures/Calculate_Area_Fraction}
}
\caption{Calculate the area fraction of UHI}\label{fig:unnamed-chunk-37}
\end{figure}
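The same ratio can also be computed from the QGIS Python console; the sketch below assumes the joined field ended up named UHA\_AREA, as in the figure above.
\begin{verbatim}
# Sketch (QGIS 3 Python console): FRAC_UHA = UHA_AREA / AREA on the boroughs.
from qgis.core import QgsField, edit
from qgis.utils import iface
from qgis.PyQt.QtCore import QVariant

boroughs = iface.activeLayer()   # select limadmim_clipped first
boroughs.dataProvider().addAttributes([QgsField("FRAC_UHA", QVariant.Double)])
boroughs.updateFields()

with edit(boroughs):
    for f in boroughs.getFeatures():
        if f["AREA"]:                       # guard against division by zero
            f["FRAC_UHA"] = f["UHA_AREA"] / f["AREA"]
            boroughs.updateFeature(f)
\end{verbatim}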
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{9}
\tightlist
\item
Style the \texttt{limadmim\_clipped.shp} layer so that it shows the
area fraction of UHI in four classes of equal interval.
\end{enumerate}
Right click on \texttt{limadmim\_clipped.shp}, select
\textbf{Properties}, then \textbf{Symbology}, and set the parameters
according to the following figure.
\begin{figure}
{\centering \includegraphics[width=14.17in]{figures/Area_Fraction_UHA}
}
\caption{Changing the symbology of a layer}\label{fig:unnamed-chunk-38}
\end{figure}
\chapter{References}\label{references}
\begin{longtable}[]{@{}lll@{}}
\toprule
\begin{minipage}[b]{0.08\columnwidth}\raggedright\strut
Data\strut
\end{minipage} & \begin{minipage}[b]{0.15\columnwidth}\raggedright\strut
File name\strut
\end{minipage} & \begin{minipage}[b]{0.10\columnwidth}\raggedright\strut
Source\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.08\columnwidth}\raggedright\strut
RSQA - liste des stations\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright\strut
coordonnees-stations-rsqa.csv\strut
\end{minipage} & \begin{minipage}[t]{0.10\columnwidth}\raggedright\strut
\href{http://donnees.ville.montreal.qc.ca/dataset/rsqa-liste-des-stations/resource/29db5545-89a4-4e4a-9e95-05aa6dc2fd80}{Portail
données ouvertes Montréal}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright\strut
RSQA - polluants gazeux 2013-07-01 à 2013-12-31\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright\strut
pollutants\_average\_12\_31\_2019\_13H.csv\strut
\end{minipage} & \begin{minipage}[t]{0.10\columnwidth}\raggedright\strut
\href{http://donnees.ville.montreal.qc.ca/dataset/rsqa-polluants-gazeux/resource/26ddbd0b-47f6-4039-98b2-b32568ed01b1}{Portail
données ouvertes Montréal}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright\strut
Grands parcs\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright\strut
grandparcs.kml\strut
\end{minipage} & \begin{minipage}[t]{0.10\columnwidth}\raggedright\strut
\href{http://donnees.ville.montreal.qc.ca/dataset/grands-parcs}{Portail
données ouvertes Montréal}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright\strut
Îlots de chaleur\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright\strut
ilotschaleur.json\strut
\end{minipage} & \begin{minipage}[t]{0.10\columnwidth}\raggedright\strut
\href{http://donnees.ville.montreal.qc.ca/dataset/schema-environnement-milieux-naturels/resource/8cd8d34a-cfdd-4acf-a363-d4adaeff18c0}{Portail
données ouvertes Montréal}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright\strut
Limite administrative de l'agglomération de Montréal (Arrondissement et
Ville liée)\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright\strut
LIMADMIN.shp\strut
\end{minipage} & \begin{minipage}[t]{0.10\columnwidth}\raggedright\strut
\href{http://donnees.ville.montreal.qc.ca/dataset/polygones-arrondissements}{Portail
données ouvertes Montréal}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright\strut
Limite terrestre\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright\strut
terre.shp\strut
\end{minipage} & \begin{minipage}[t]{0.10\columnwidth}\raggedright\strut
\href{http://donnees.ville.montreal.qc.ca/dataset/limites-terrestres/resource/674c2a18-e013-42ca-bfa6-3bb7358e820d}{Portail
données ouvertes Montréal}\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\bibliography{book.bib,packages.bib}
\end{document}
%In this section we provide details of the three datasets that we evaluate in this paper, as well as the methodology used.
%\subsection{Dataset description}
%\subsubsection{Twitter example (Detection of natural disasters)}
\textcolor{red}{The scenario we have chosen to evaluate our algorithms is the detection of natural disasters discussed in a collection of tweets. We used the Twitter data described in \cite{Iman2017}, which was curated collaboratively by four students as follows: (1) the dataset was restricted to users located within the US; (2) non-English tweets were filtered out; (3) only tweets related to 12 natural disasters were kept -- tweets related to other natural disasters were removed. These natural disasters are temporally and geographically disjoint -- a storm, a hurricane, a drought, two floods, two earthquakes, two tornadoes, and three blizzards. Finally, (4) false-positive tweets were intentionally included -- tweets that mention keywords specific to natural disasters but are not related to a particular natural disaster, e.g., \textit{\textquotedblleft I'm flooded with questions from followers\textquotedblright{}}. The final dataset contains 39,486 tweets, 5,075 of which are marked as relevant (in this context, a relevant tweet is a tweet related to a natural disaster).
}
\textcolor{red}{In addition to comparing our algorithms to the optimal solution, we also propose to use X-Means \cite{Pelleg2000} as a baseline method for clustering. X-Means is an extension of K-Means that tries to determine the number of clusters automatically: starting with a single cluster, it intervenes after each run of K-Means, making local decisions about which of the current centroids should be split in order to better fit the data. The distance metric we use for X-Means is a linear combination of (i) the Euclidean distance in time, (ii) the Euclidean distance in location, and (iii) the cosine distance of the textual content. This distance metric is defined as follows:}
\begin{equation}
d(i,j) = \alpha\times\textrm{[time dist.]}+\beta\times \textrm{[location dist.]}+\gamma\times \textrm{[text cosine dist.]}
\end{equation}
\textcolor{red}{\noindent where $\alpha$, $\beta$, and $\gamma$ are weights that sum to 1; they were all set to 1 in the off-line evaluation and to 0.1, 0.8, and 0.1, respectively, in the user study -- values manually tuned by our four students. Finally, we used EF1 to rank the clusters returned by X-Means. }
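\noindent A minimal sketch of this combined distance is given below; the tweet representation (a scalar timestamp, a planar location pair, and a term-frequency vector) and all variable names are our own illustration rather than the implementation used in the paper.
\begin{verbatim}
# Sketch of the weighted distance d(i, j) used for X-Means clustering.
import numpy as np

def combined_distance(t1, t2, alpha=0.1, beta=0.8, gamma=0.1):
    """t1, t2: dicts with 'time' (float), 'loc' (2-vector), 'text' (tf vector)."""
    time_dist = abs(t1["time"] - t2["time"])
    loc_dist = np.linalg.norm(np.asarray(t1["loc"]) - np.asarray(t2["loc"]))
    v1 = np.asarray(t1["text"], dtype=float)
    v2 = np.asarray(t2["text"], dtype=float)
    cos_dist = 1.0 - v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return alpha * time_dist + beta * loc_dist + gamma * cos_dist
\end{verbatim}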
%We assume having an agent monitoring tweets, a topical tweets classifier (e.g., \cite{Iman2017}), and a display showing the locations of the tweets (See Figure \ref{Fig:TwitterData}). We used the 2.5 TB of Twitter data described in \cite{Iman2017}, for which we restricted our analysis to the 9M tweets of January 2014.
%\begin{figure}[t]
%\begin{centering}
%\subfigure[Twitter Networks.]{\includegraphics[width=2.8cm]{imgs/twitter_example_3}\label{Fig:TwitterData}}\subfigure[Enron Networks.]{\includegraphics[width=2.8cm]{imgs/enron_net_4}\label{Fig:EnronData}}\subfigure[Reddit Networks.]{\includegraphics[width=2.8cm]{imgs/srforum3}\label{Fig:RedditData}}
%\par\end{centering}
%\caption{Layouts used for the different datasets.}
%\end{figure}
\documentclass{article}
\usepackage{leonine,amsmath,amssymb,amsthm,graphicx}
\setkeys{Gin}{width=\linewidth,totalheight=\textheight,keepaspectratio}
\graphicspath{{graphics/}}
% Prints a trailing space in a smart way.
\usepackage{xspace}
% Inserts a blank page
\newcommand{\blankpage}{\newpage\hbox{}\thispagestyle{empty}\newpage}
% \usepackage{units}
% Typesets the font size, leading, and measure in the form of 10/12x26 pc.
\newcommand{\measure}[3]{#1/#2$\times$\unit[#3]{pc}}
\theoremstyle{definition}
\newtheorem{pred}[thm]{Prediction}
\title{Askesis: Negative Pathway 2} \author{Eric Purdy}
\begin{document}
\maketitle
\section{Overview}
The purpose of the negative pathway is to filter out potential
movements produced by the positive pathway, leaving only the desired
movements as output from the deep nuclear cells. Our theory of the
negative pathway is very similar to the perceptron model of Albus,
although we modify the input to each perceptron slightly.
\section{Purkinje Cells and Basket Cells}
\label{sec-purkinje}
Let $G_i(t)$ be the firing of the $i$-th granular cell at time $t$.
We model the basket cell as
$$B_j(t) = \sigma \left(\sum_i W^-_{ij} G_i(t) +\theta^-_j \right).$$
We model the Purkinje cell as
$$P_j(t) = \sigma \left(\sum_i W^+_{ij} G_i(t) +\theta^+_j - \alpha B_j(t) \right).$$
We will use a simpler model for the combined basket cell-Purkinje cell
pair:
\begin{align*}
P_j(t) &= \sigma \left(\sum_i (W^+_{ij}-W^-_{ij}) G_i(t) + (\theta^+_j
- \theta^-_j)\right)\\ &= \sigma \left(\sum_i W_{ij} G_i(t) +\theta_j \right),
\end{align*}
where the $W_{ij}$ and $\theta_j$ are free to take on both positive
and negative values.
Let $y_j(t)$ be $1$ if the $j$-th inferior olive cell fires at time
$t$, and $-1$ otherwise. We assume that the inferior olive cells are
in one-to-one correspondence to the Purkinje cells, which is a slight
simplification.
In order to best set the weights $W_{ij}$ and bias $\theta_j$ of the
$j$-th Purkinje cell, we minimize the following function:
$$L(\{W_{ij}\}, \{\theta_j\}) = \sum_t y_j(t) \left(\sum_i W_{ij} G_i(t) +
  \theta_j\right) + \lambda \sum_i W_{ij}^2. $$
This rewards us for having the Purkinje cell activation ($\sum_i W_{ij}
G_i(t) + \theta_j$) low when the climbing fiber is active
($y_j(t)=1$), and for having the Purkinje cell activation high when
the climbing fiber is inactive ($y_j(t)=-1$). Since the Purkinje cell
suppresses the corresponding deep nuclear cell, and the climbing fiber
encodes the information that we want the corresponding deep nuclear
cell to fire, this is the desired behavior.
The term $\lambda \sum_i W_{ij}^2$ is necessary to prevent the weights
from increasing without bound. It favors parameter settings with
smaller weights, which are thought of as being ``simpler'' in the
machine learning literature; favoring smaller weights is thus a form
of Occam's Razor. The constant multiplier $\lambda$ controls the
tradeoff between this term and the other term. The larger $\lambda$
is, the more the algorithm will favor model simplicity (small weights)
over fitting the data well.
This can be compared with the support vector machine (SVM), an
algorithm that minimizes the following function:
$$L(\{W_{ij}\}, \{\theta_j\}) = \sum_t \left[ 1+y(t)\left(\sum_i
W_{ij} G_i(t) + \theta_j\right)\right]_+ + \lambda \sum_i
W_{ij}^2,$$ where $[a]_+$ is zero if $a$ is negative, and
equal to $a$ otherwise. The difference between the two is that
the SVM uses the ``hinge loss'' while we are simply using a linear
loss function. This difference means that we get extra credit for
being more certain that we are right; with the hinge loss, we are
penalized a lot when we are certain but wrong, but not rewarded for
being more certain when we are right. These loss functions, as well as
the step loss function, are shown in Figure \ref{fig-hinge}; the hinge
loss is a sort of combination of the step loss function and the linear
loss function.
\begin{figure}
\includegraphics[width=0.3\linewidth]{step_loss.png}
\includegraphics[width=0.3\linewidth]{linear_loss.png}
\includegraphics[width=0.3\linewidth]{hinge_loss.png}
\caption{Various loss functions: the step loss function, the linear loss function, and the hinge loss function.}
\label{fig-hinge}
\end{figure}
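To make this comparison concrete, the two per-timestep penalties can be written out directly; the snippet below is only an illustrative sketch with arbitrary activation values.
\begin{verbatim}
# Sketch: per-timestep linear loss vs. hinge loss for one Purkinje cell.
def linear_loss(y, activation):
    # y is +1 when the climbing fiber fires, -1 otherwise.
    return y * activation

def hinge_loss(y, activation):
    return max(0.0, 1.0 + y * activation)

# Being "more certain and right" (y = -1, larger activation) keeps lowering
# the linear loss, while the hinge loss bottoms out at zero.
for activation in (0.5, 2.0, 5.0):
    print(activation, linear_loss(-1, activation), hinge_loss(-1, activation))
\end{verbatim}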
The partial derivatives are:
\begin{align*}
\frac{\partial L(W)}{\partial W_{ij}} &= \sum_t y_j(t) G_i(t) + 2\lambda W_{ij}\\
\frac{\partial L(W)}{\partial \theta_j} &= \sum_t y_j(t).\\
\end{align*}
Using stochastic gradient descent (since we want to minimize
$L(\{W_{ij}\}, \{\theta_j\})$), this leads to the update rules
\begin{align*}
\Delta W_{ij} &= -\eta y_j(t) G_i(t) - \frac{2\eta\lambda}{T} W_{ij}\\
\Delta \theta_j &= -\eta y_j(t). \\
\end{align*}
We apply this to the actual synapse weights and cell biases by adding
$\frac{1}{2}\Delta W_{ij}$ to the weight $W^+_{ij}$ and
$-\frac{1}{2}\Delta W_{ij}$ to the weight $W^-_{ij}$, and adding
$\frac{1}{2}\Delta \theta_j$ to $\theta^+_j$ and adding
$-\frac{1}{2}\Delta \theta_j$ to $\theta^-_j$. This is consistent with
the observed learning behavior at the parallel fiber-Purkinje cell
synapse and the parallel fiber-basket cell synapse:
\begin{itemize}
\item LTD at the parallel fiber-Purkinje cell synapse when the
inferior olive cell fires at the same time as the parallel fiber
($y_j(t)=1, G_i(t)=1$)
\item LTP at the parallel fiber-Purkinje cell synapse when the
parallel fiber fires but the inferior olive cell does not
($y_j(t)=-1, G_i(t)=1$)
\item LTP at the parallel fiber-basket cell synapse when the inferior
olive cell fires at the same time as the parallel fiber ($y_j(t)=1,
G_i(t)=1$)
\item LTD at the parallel fiber-basket cell synapse when the parallel
fiber fires but the inferior olive cell does not ($y_j(t)=-1,
G_i(t)=1$)
\item No change when the parallel fiber is inactive ($G_i(t)=0$)
\end{itemize}
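A minimal sketch of these update rules, including the even split between the two synapse types described above (the learning rate, the constants $\lambda$ and $T$, and all variable names are placeholders), is the following:
\begin{verbatim}
# Sketch of the stochastic-gradient update split across the two synapse types.
import numpy as np

def update(W_plus, W_minus, theta_plus, theta_minus, G, y,
           eta=0.01, lam=0.1, T=1000):
    """G: granular-cell activities at time t; y: +1 if the olive cell fired."""
    dW = -eta * y * G - (2.0 * eta * lam / T) * (W_plus - W_minus)
    dtheta = -eta * y
    W_plus += 0.5 * dW        # LTD/LTP at the parallel fiber-Purkinje synapse
    W_minus -= 0.5 * dW       # opposite change at the parallel fiber-basket synapse
    theta_plus += 0.5 * dtheta
    theta_minus -= 0.5 * dtheta
    return W_plus, W_minus, theta_plus, theta_minus
\end{verbatim}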
We also predict an exponential decay of the weights at both types of
synapse, as well as changes in the intrinsic excitability of the
basket cells and Purkinje cells corresponding to the change in
$\theta^-_j$ and $\theta^+_j$, respectively. The exponential decay
would contribute to memories in the negative pathway being short-lived
relative to memories in the positive pathway, which seems to be the
case.
We have phrased these as if the climbing fiber and parallel fiber
activations should be synchronized, but learning is observed to be
maximized when there is a delay on the order of 100 milliseconds
between the activation of the parallel fiber and the activation of the
climbing fiber. This makes sense: the climbing fiber is activated by
slower, non-cerebellar networks, so its input will always arrive
delayed relative to the relevant stimuli reaching the Purkinje and
basket cells.
\subsection{Symmetry-breaking mechanism for the Purkinje cells}
Each Purkinje cell receives as input, in addition to its input from
the parallel fibers, collaterals from several nearby granular
cells. This input is weighted more highly than that at the parallel
fiber-Purkinje cell synapse. We posit that these collaterals exist to
break the symmetry between Purkinje cells that project to the same
deep nuclear cell, so that we can learn multiple different
classifiers, each of which is capable of suppressing the deep nuclear
cell. Otherwise, adjacent Purkinje cells would receive the same input.
(Recall that nearby inferior olive cells tend to be coupled with gap
junctions, so that the input from the inferior olive would also be the
same for each Purkinje cell.)
\end{document}
\section{Challenges}
\subsection{Relational Modeling}
A large portion of the time leading to the completion of the project was spent on understanding how the subjects should be linked. This process was judged to be difficult since some domain requirements can be ambiguous. To resolve this issue, an experiment was conducted using the three main subjects of the domain: agents (search algorithms), puzzles (graphs), and nodes (board states).
\\
\\
(i) Agents use puzzles and nodes.
\\
(ii) Puzzles use agents and nodes.
\\
(iii) Each agent uses puzzles and nodes.
\\
(iv) Each puzzle uses agents and nodes.
\\
\\
When the models were compared on the same data set, it was observed that the average run-time for models (i) to (iv) was 2 minutes, 66 ms, 102 ms, and 18 ms, respectively. Moreover, the results of the experiment led to two important findings that helped us classify the domain requirements of the project.\\
The first finding resolves the question of whether the agent or the puzzle should be the primary subject of the system; for example, should (i/iii) be chosen over (ii/iv)? If the agent were the primary subject, then each agent must hold a reference to a puzzle and rely on the puzzle to provide it with a new state for every step of a traversal. The issue with this is that each agent had a similar implementation for traversing the puzzle, which affected the overall performance of our system because the traversal behaviour was duplicated. To resolve this issue, we took the responsibility of traversing nodes away from the agent class and assigned it to the puzzle class. As a result of this change, the agent's responsibility was now limited to computing heuristic values. Overall, this finding suggests against using (i) and (iii) as a schema for modelling the system.\\
The first finding also suggests that agents should not have direct access to the node class. Yet, having access to this object is important for the system to function properly, since it stores the values of each heuristic. A first approach to this problem was to include an accessor method in the puzzle class. However, this increased the time complexity of state searching by a linear factor and also contradicts the schema proposed in (ii) and (iv). Therefore, the puzzle needs to reference the agent.\\
\begin{figure}[H]
\includegraphics[width=0.75\linewidth]{assets/functional.png}
\caption{Sequence Diagram of Indonesian Dot Solver} \label{fig1}
\end{figure}
This leads to the second finding: a puzzle and an agent cannot have a many-to-many relational type, because the agent no longer has a reference to a puzzle. Moreover, many nodes are used by one puzzle. Therefore, the combination of the agent and the node identifies the puzzle that it belongs to. This makes sense since each node points to its predecessor, with the root node uniquely linked to the puzzle it belongs to (assuming there are no duplicate lines in the test file). Therefore, the second finding suggests that the logical architecture should be modelled after statement (iv).\\
Overall, by performing this experiment and discovering why the logical architecture performed well under different scenarios, we were able to create a plan for classifying each domain requirement as belonging to agents, puzzles, or nodes. This experimentation also helped us maximize the performance of the overall system.
\begin{figure}[H]
\includegraphics[width=0.75\linewidth]{assets/schema.png}
\caption{Entity Relationship Diagram of Indonesian Dot Solver} \label{fig2}
\end{figure}
\subsection{Touch Redundancy}
Every state of the Indonesian Dot puzzle has a number of actions equal to the size of the board. For example, if the board size is 25 (or 5x5), then there are 25 actions to choose from. Originally, whenever a traversal was made, the puzzle object would touch every action of a node and ignore the action if it had been visited. Yet checking is an expensive operation, especially as the board size grows. The touch redundancy issue refers to the issue of having to check $\sum_{i=0}^{n} {{n!}\over{(n-i)!}} $ actions in the worst case, where $n$ is the total size of the board. This is depicted in the following Figure:
\begin{figure}[H]
\includegraphics[width=0.75\linewidth]{assets/touch_redundancy.png}
\caption{Average Run Time of DFS, BFS, and A* on 7 Puzzles with Touch Redundancy Issue} \label{fig3}
\end{figure}
To resolve this issue, we discovered that the order of the actions leading to the goal state does not matter. More formally, the solution path (in actions) can be any combination of the solution set. This also implies that the system would check $\sum_{i=0}^{n} {n\choose{i}} = 2^n$ actions in the worst case. This discovery greatly helped reduce the run time of the system:
\begin{figure}[H]
\includegraphics[width=0.75\linewidth]{assets/touch_redundancy_2.png}
\caption{Average Run Time of DFS, BFS, and A* on 7 Puzzles without Touch Redundancy Issue} \label{fig4}
\end{figure}
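For a sense of scale, the two worst-case counts can be compared numerically; the snippet below is a plain Python sketch using the 5x5 board from the example above.
\begin{verbatim}
# Sketch: worst-case action checks with and without the touch-redundancy issue.
from math import comb, perm

n = 25  # 5x5 board
with_redundancy = sum(perm(n, i) for i in range(n + 1))     # sum of n!/(n-i)!
without_redundancy = sum(comb(n, i) for i in range(n + 1))  # equals 2**n

print(f"{with_redundancy:.3e} vs {without_redundancy:.3e}")
\end{verbatim}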
"alphanum_fraction": 0.7847826087,
"avg_line_length": 88.7719298246,
"ext": "tex",
"hexsha": "abc1b0926b49e185ead2b6f1a8b48a4798ba3141",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-03-18T15:23:24.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-03-18T15:23:24.000Z",
"max_forks_repo_head_hexsha": "2baf507d23816b686f046f89d4c833728b25f2dc",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Ra-Ni/Indonesian-Dot-Solver",
"max_forks_repo_path": "documentation/sections/difficulties.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2baf507d23816b686f046f89d4c833728b25f2dc",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Ra-Ni/Indonesian-Dot-Solver",
"max_issues_repo_path": "documentation/sections/difficulties.tex",
"max_line_length": 896,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2baf507d23816b686f046f89d4c833728b25f2dc",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Ra-Ni/Indonesian-Dot-Solver",
"max_stars_repo_path": "documentation/sections/difficulties.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1142,
"size": 5060
} |
%!TEX root = ..\..\dissertation.tex
\chapter{Research Approach}\label{chp:Approach}
A central aspect of this thesis is the creation of new knowledge on manufacturing system platforms with both theoretical uses and practical applications.
With the limited existing research in the field and continuing difficulties in developing and utilising manufacturing system platforms in companies, new theories, concepts, methods, models, and tools contributing to the state-of-the-art are needed.
In particular, the steps and tools required in going from realising platforms, modules, and changeable manufacturing to identifying, developing, and documenting these for future use are needed.
Thus, considering the above and the state-of-the-art presented in \cref{chp:Introduction}, the overall objective of this thesis can be formulated as below.
\begin{objective}
Create and apply methods and tools for identifying, developing, and documenting manufacturing system platforms through commonality and standardisation of assets
%, thereby supporting manufacturers in creating changeable manufacturing systems
\end{objective}
The focus on creation of new knowledge and its application in companies calls for a research approach centred around this.
It should be an approach that can be employed for research on manufacturing systems, products, and systems engineering as a whole, in order to account for future co-development and the necessary alignment between these departments in a company.
In the following sections, the applied design science research approach is outlined, the industrial case presented, and finally the research objective is framed by individual research questions and sub-questions.
% The interplay of design science research with these theories and concepts forms the contextual foundation of this research, as illustrated on \cref{fig:contextFound}.
% \begin{figure}[tb]
% \centering
% \missingfigure[figwidth=\textwidth]{contextual foundation}
% \caption{Contextual foundation for this research, showing the main theories and works on which it is based, as well as their relation.}\label{fig:contextFound}
% \end{figure}
\input{mainmatter/approach/designScienceResearch}
\input{mainmatter/approach/case}
\section{Research Questions}\label{sec:RQ}
In order to further frame this thesis and elaborate on the research objective listed in the beginning of this chapter, three overall research questions have been formulated.
Each research question is addressed through one or more sub-questions from six research tasks, documented in the six appended papers.
While the overall research approach is the design science research framework described in \cref{sec:DSR}, a number of other methods have also been used for the individual appended papers.
For each sub-question, the paper it originates from is listed along with the selected research methods.
\begin{resq}\label{resq1}
How can manufacturing system platforms be developed and documented using well-known concepts from software systems engineering and architecture, and which challenges arise during this process?
\begin{enumerate}[leftmargin=3em, label=RQ\arabic{resq}.\arabic*]
\item How can production platforms be developed and documented with the aid of concepts and constructs from the field of software systems architecture? (Paper~\ref{paper:MCPC2017}; multi case study)
\item Which challenges arise during manufacturing system platform development, and how can these be addressed? (Paper~\ref{paper:MCPC2017}; multi case study)
\end{enumerate}
\end{resq}
\begin{resq}\label{resq2}
How can commonality in processes across manufacturing systems be classified and used to identify candidates for manufacturing system platforms?
\begin{enumerate}[leftmargin=3em, label=RQ\arabic{resq}.\arabic*]
\item How can processes during production of discrete products be classified independently of the means facilitating the process? (Paper~\ref{paper:CMS2018}; literature review, iterative search and consolidation)
\item What are the essential aspects of manufacturing systems that must be captured in order to classify them? (Paper~\ref{paper:clsfCoding}; design science research, classification, analysis, development and refinement of artefacts and knowledge)
\item What is the best form/structure of a coding scheme that captures and classifies essential manufacturing system aspects/characteristics? (Paper~\ref{paper:clsfCoding}; design science research, classification, analysis, development and refinement of artefacts and knowledge)
\item How can a production system classification coding scheme be used to identify candidates for a manufacturing system platform? (Paper~\ref{paper:APMS2019}; design science research, single case study, application in environment)
\end{enumerate}
\end{resq}
\begin{resq}\label{resq3}
How can manufacturing system platforms be developed in a brownfield approach taking into account a manufacturer's existing production landscape and which challenges arise over time as platform development progresses?
\begin{enumerate}[leftmargin=3em, label=RQ\arabic{resq}.\arabic*]
\item Which challenges do mature manufacturers face over time, when developing manufacturing system platforms? (Paper~\ref{paper:APMS2018}; design science research, evolving case study)
\item What steps should a manufacturer take to develop platforms of standardised assets based on existing manufacturing systems and environments? (Paper~\ref{paper:CMS2019}; conceptual research, evolving case study)
\end{enumerate}
\end{resq}
LM estimation starts with the collection of n-grams and their frequency counters. Then,
smoothing parameters are estimated for each n-gram level; infrequent n-grams are
possibly pruned and, finally, a LM file is created containing n-grams with probabilities and
back-off weights. This procedure can be very demanding in terms of memory and
time if it is applied to huge corpora. We provide here a way to split LM training into smaller and independent steps that can easily be distributed among independent processes. The
procedure relies on a training script that makes little use of computer RAM and implements
the Witten-Bell smoothing method in an exact way.
\noindent
Before starting, let us create a working directory under {\tt examples}, as many files will be created:
\begin{verbatim}
$> mkdir stat
\end{verbatim}
The script to generate the LM is:
\begin{verbatim}
$> build-lm.sh -i "gunzip -c train.gz" -n 3 -o train.ilm.gz -k 5
\end{verbatim}
where the available options are:
\begin{verbatim}
-i Input training file e.g. 'gunzip -c train.gz'
-o Output gzipped LM, e.g. lm.gz
-k Number of splits (default 5)
-n Order of language model (default 3)
-t Directory for temporary files (default ./stat)
-p Prune singleton n-grams (default false)
-s Smoothing: witten-bell (default), kneser-ney, improved-kneser-ney
-b Include sentence boundary n-grams (optional)
-d Define subdictionary for n-grams (optional)
-v Verbose
\end{verbatim}
\noindent
The script splits the estimation procedure into 5 distinct jobs, which are explained in
the following section. Other options are available; for instance, we recommend pruning singletons to obtain smaller LM files.
Notice that {\tt build-lm.sh} produces a LM file {\tt train.ilm.gz} that is NOT in the final ARPA format, but in
an intermediate format called {\tt iARPA}, that is recognized by the {\tt compile-lm}
command and by the Moses SMT decoder running with {\IRSTLM}.
To convert the file into the standard ARPA format you can use the command:
\begin{verbatim}
$> compile-lm train.ilm.gz train.lm --text=yes
\end{verbatim}
this will create the proper ARPA file {\tt train.lm}.
To create a gzipped file you might also use:
\begin{verbatim}
$> compile-lm train.ilm.gz /dev/stdout --text=yes | gzip -c > train.lm.gz
\end{verbatim}
\noindent
In the following sections, we will discuss on LM file formats, on compiling LMs into a
more compact and efficient binary format, and on querying LMs.
\subsection{Estimating a LM with a Partial Dictionary}
A sub-dictionary can be defined by taking only words occurring more than 5 times ({\tt -pf=5})
and keeping at most the 5000 most frequent words ({\tt -pr=5000}):
\begin{verbatim}
$>dict -i="gunzip -c train.gz" -o=sdict -pr=5000 -pf=5
\end{verbatim}
\noindent
The LM can be restricted to the defined sub-dictionary with the
command {\tt build-lm.sh} by using the option {\tt -d}:
\begin{verbatim}
$> build-lm.sh -i "gunzip -c train.gz" -n 3 -o train.ilm.gz -k 5 -p -d sdict
\end{verbatim}
\noindent
Notice that all words outside the sub-dictionary will be mapped into the {\tt <unk>}
class, the probability of which will be directly estimated from the corpus statistics.
A preferable alternative to this approach is to estimate a large LM and then to filter
it according to a list of words (see Filtering a LM).
\chapter{Multiple Integrals}
\input{./multipleIntegrals/doubleIntegrals}
\input{./multipleIntegrals/tripleIntegras}
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{geometry}
\usepackage{enumerate}
\usepackage{natbib}
\usepackage{float}% stabilize figure placement
\usepackage{graphicx}% figures
\usepackage[english]{babel}
\usepackage{a4wide}
\usepackage{indentfirst}% indent first paragraph
\usepackage{enumerate}% numbered lists
\usepackage{multirow}% merge table rows
\title{\large UM-SJTU JOINT INSTITUTE\\Advanced Lasers and Optics Laboratory\\(VE438)\\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\
Post Lab Assignment\\\ \\\ LAB 1\\\ Polarization, Total Internal
Reflection and Single Slit Diffraction \\\ \\\ \\\ \\\ \\\ }
\author{Name: Pan Chongdan \\ID: 516370910121}
\date{Date: \today}
\begin{document}
\maketitle
\newpage
\section{PART A: Polarization}
\subsection{Measurement data}
\begin{table}[H]
\centering
\begin{tabular}{|c|c||c|c|}
\hline
rotation angle $\theta$ &measured voltage (mV)&rotation angle $\theta$ &measured voltage (mV) \\ \hline
$0^o$ &369.4 &$90^o$ &5.3 \\ \hline
$5^o$ &368.7 &$95^o$ &67.7 \\ \hline
$10^o$ &367.6 &$100^o$ &150.3 \\ \hline
$15^o$ &364.4 &$105^o$ &205.4 \\ \hline
$20^o$ &359 &$110^o$ &239.5 \\ \hline
$25^o$ &355.1 &$115^o$ &267.0 \\ \hline
$30^o$ &354 &$120^o$ &286.6 \\ \hline
$35^o$ &351.9 &$125^o$ &301.9 \\ \hline
$40^o$ &342.7 &$130^o$ &316.1 \\ \hline
$45^o$ &333.0 &$135^o$ &327.0 \\ \hline
$50^o$ &318.4 &$140^o$ &334.0 \\ \hline
$55^o$ &304.2 &$145^o$ &347.4 \\ \hline
$60^o$ &286.5 &$150^o$ &352.0 \\ \hline
$65^o$ &264.4 &$155^o$ &357.5 \\ \hline
$70^o$ &239.6 &$160^o$ &362.2 \\ \hline
$75^o$ &196 &$165^o$ &365 \\ \hline
$80^o$ &147.4 &$170^o$ &366.1 \\ \hline
$85^o$ &66.8 &$175^o$ &369.0 \\ \hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{P1.jpg}
\caption{Relation between voltage (light intensity) and rotation angle}
\end{figure}
\subsection{Answers for Post Lab Questions}
Since the emitted light is linearly polarized, it did not change much when we set the polarizer to the zero point, but after we started to rotate the polarizer the light became darker, and we could hardly see it after rotating by 90 degrees. The light then came back during the subsequent rotation. In other words, the light intensity first decreased and then grew back as we rotated the polarizer. The relation between the intensity and the rotation angle follows the plot.
\par However, even when the rotation was 90 degrees and the light intensity was very small, there was still some reading, which means the laser beam cannot be totally blocked.
\par According to Malus's Law, $I_{light}=I_{light,0}\cos^2\theta$, where $\theta$ is the angle between the transmission axis of the polarizer and the direction of polarization of the light. Since the measured $I_{light}$ never reaches 0, even at $\theta=90^o$, the emitted light is not totally linearly polarized.
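As a quick numerical cross-check, a few readings from the measurement table can be compared against the ideal $\cos^2$ curve normalised to the $0^o$ reading; the short script below is only a sketch of that comparison.
\begin{verbatim}
# Sketch: compare selected readings with the ideal Malus's-law prediction.
import numpy as np

theta_deg = np.array([0, 30, 45, 60, 90])                   # rotation angles
measured_mV = np.array([369.4, 354.0, 333.0, 286.5, 5.3])   # from the table

ideal_mV = measured_mV[0] * np.cos(np.radians(theta_deg)) ** 2
for th, m, p in zip(theta_deg, measured_mV, ideal_mV):
    print(f"{th:3d} deg   measured {m:6.1f} mV   ideal {p:6.1f} mV")
\end{verbatim}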
\section{PART B: Total Internal Reflection}
\subsection{Measured Data}
We find the critical angle for total internal reflection when the incident angle is $219.5^o-180^o=39.5^o$, so the critical angle is $\theta_c=39.5^o$. By Snell's law, $$n_o\sin\theta_i=n_i\sin\theta_o,$$ where $n_o$ is the refractive index of the prism and $n_i=1$ is that of air. At the critical angle $\sin\theta_o=1$, so
$n_o=\frac{1}{\sin 39.5^o}=1.57$.
\par So the refractive index of the prism is 1.57.
\subsection{Answers for Post Lab Questions}
First, we put the prism at the center of rotation stage so that the incident light was perpendicular to the long edge of the prism. Then we started to rotate the rotation stage until the refracted light was parallel to the long edge of the prism, which means the refraction angle is $90^o$. At the time, the angle we rotated is also the incident angle, which is both $39.5^o$
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{P2.jpg}
\caption{Experiment Scheme}
\end{figure}
\par According to the Fresnel equations, $$t=\frac{2n_i\cos\theta_i}{n_i\cos\theta_i+n_t\cos\theta_t},$$
so at the critical angle, where $\cos\theta_t=0$, the transmission coefficient for the light is $t=\frac{2n_i\cos39.5^o}{n_i\cos39.5^o+0}=2$.
\section{PART C: Diffraction}
\subsection{Measured Data}
The distance between the slit and the screen is $35\,cm$. The distance between the central bright maximum and the $\pm1$ dark fringes is $0.5\,cm$. I guess the width of the slit is less than or slightly larger than $632.8\,nm$.
\subsection{Answers for Post Lab Questions}
According to $$y_m=x\frac{m\lambda}{a},\qquad a=x\frac{m\lambda}{y_m}=35\times10^{-2}\times\frac{632.8\times10^{-9}}{0.5\times10^{-2}}\approx4.4\times10^{-5}\,m\approx0.044\,mm$$
\end{document}
%!TEX root = ../Report.tex
\chapter{Acknowledgements}
I would like to acknowledge the support of the management at Picksoft Ltd, who gave me generous access to our data analysis reports for our client, and particularly of Ömer Faruk Demir, Chief Technology Officer and my tutor, who allowed me to analyze at length and encouraged me to pursue this topic. I am also grateful to him for spending extra time helping me achieve a clearer structure.
"alphanum_fraction": 0.801369863,
"avg_line_length": 146,
"ext": "tex",
"hexsha": "e15b5c83793b52ea342c4bc237a5e328149b4769",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4a4355e42ed8fbe9fecc80df21224b85ea6a4a0f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "YlmRdm/report-flink",
"max_forks_repo_path": "frontmatter/Acknowledgements.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4a4355e42ed8fbe9fecc80df21224b85ea6a4a0f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "YlmRdm/report-flink",
"max_issues_repo_path": "frontmatter/Acknowledgements.tex",
"max_line_length": 384,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4a4355e42ed8fbe9fecc80df21224b85ea6a4a0f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "YlmRdm/report-flink",
"max_stars_repo_path": "frontmatter/Acknowledgements.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 91,
"size": 438
} |
%% ----------------------------------------------------------------
%% Conclusions.tex
%% ----------------------------------------------------------------
\chapter{Conclusions} \label{Chapter: Conclusions}
It works.
%!TEX root = ../main.tex
\objective{Utilize standard angle values and circular symmetry}
There are not many angles we could draw for which we could precisely name
the displacement involved. Two shapes we know every such thing about are the
square and the equilateral triangle. We might begin by drawing a square with
sides of length 1, and then drawing a diagonal.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=2]
\draw (0,0) -- (1,0) -- (1,1) -- (0,1) -- (0,0) -- (1,1);
\draw (0,0.9) -- (.1,.9) -- (.1,1);
\node at (0.5,0) [anchor=north] {1};
\node at (0,0.5) [anchor=east] {1};
\end{tikzpicture}
\begin{tikzpicture}[scale=2]
\draw (0,0) -- (2,0) -- (1, 1.732050808) -- (1,0) -- (0,0) -- (1,1.732050808);
\draw (1,.1) -- (1.1,.1) -- (1.1,0);
\node at (1.5,0) [anchor=north] {1};
\node at (1.7,.85) [anchor=south]{2};
\end{tikzpicture}
\caption{Special triangles are halves of thoroughly understood shapes.}
\end{center}
\end{figure}
According to the Pythagorean Theorem, what must the length of the
hypotenuse be? For either one of the triangles, what are the angles and
the ratios of the sides?
Next, consider an equilateral triangle with sides of length 2. If all the angles
are the same, what must they be? Now we draw the perpendicular bisector,
which is also the altitude. How long are the two halves of the side we
bisected? What are the angles we split in two? What would Pythagoras
tell you the length of the altitude is?
\begin{derivation}{Special Triangles}
The special triangles are derived from
squares and equilateral triangles.
A $45^\circ$-$45^\circ$-$90^\circ$ triangle has sides with ratios $1:1:\sqrt{2}$.
A $30^\circ$-$60^\circ$-$90^\circ$ triangle has sides with ratios $1:\sqrt{3}:2$.
\end{derivation}
Armed with such triangles, we can answer what sine, cosine, and tangent are
for $30^\circ$, $45^\circ$, and $60^\circ$, nine facts you should memorize.
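For reference, the nine values implied by those side ratios are collected in the following table.
\begin{center}
\begin{tabular}{c|ccc}
$\theta$ & $\sin\theta$ & $\cos\theta$ & $\tan\theta$\\ \hline
$30^\circ$ & $\frac{1}{2}$ & $\frac{\sqrt{3}}{2}$ & $\frac{\sqrt{3}}{3}$\\
$45^\circ$ & $\frac{\sqrt{2}}{2}$ & $\frac{\sqrt{2}}{2}$ & $1$\\
$60^\circ$ & $\frac{\sqrt{3}}{2}$ & $\frac{1}{2}$ & $\sqrt{3}$
\end{tabular}
\end{center}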
\subsection{Reflection and Symmetry}
In the previous section of this chapter, we computed reference angles without
explaining why. We will rectify that now. Consider that there is a point $(x,y)$
somewhere in the first quadrant. Draw a ray from the origin through this point,
and make it the terminal side of an angle. Call that angle $\theta$.
\begin{figure}[h]
\begin{centering}
\begin{tikzpicture}[scale=0.5]
\draw[<->, thick] (-4,0) -- (4,0);
\draw[<->, thick] (0,-4) -- (0,4);
\draw[fill] (55:3) circle (0.1) node[anchor=south] {$(x,y)$};
\draw (3,0) arc(0:55:3);
\node at (1,0.7) {$\theta$};
\draw (0,0) -- (55:3);
\end{tikzpicture}
\caption{Turning angle $\theta$ results in going to coordinates $(x,y)$.}
\end{centering}
\end{figure}
What will be at the reference angle in the fourth quadrant? If we turn
down from $0^\circ$ to $360^\circ-\theta$, we will be taken to $(x,-y)$.
In the second quadrant, the reference angle is $180^\circ-\theta$, which
ends in the coordinates $(-x,y)$. Finally, the third quadrant reference
angle of $180^\circ+\theta$ points to $(-x,-y)$.
Through the use of symmetry and reference angle, every fact we
learn in the first quadrant instantly reveals three other facts around
the grid. Full knowledge of the first quadrant entails full knowledge
of the entire Cartesian plane.
\begin{figure}[h]
\begin{centering}
\begin{tikzpicture}[scale=0.5]
\draw[<->, thick] (-4,0) -- (4,0);
\draw[<->, thick] (0,-4) -- (0,4);
\draw[fill] (55:3) circle (0.1) node[anchor=south] {$(x,y)$};
\draw (3,0) arc(0:55:3);
\node at (1,0.7) {$\theta$};
\draw (0,0) -- (55:3);
\draw[fill] (125:3) circle (0.1) node[anchor=south] {$(-x,y)$};
\draw (0,0) ++ (125:3) arc(125:180:3);
\node at (-1,0.7) {$\theta$};
\draw (0,0) -- (125:3);
\draw[fill] (235:3) circle (0.1) node[anchor=north] {$(-x,-y)$};
\draw (0,0) ++ (180:3) arc(180:235:3);
\node at (-1,-0.7) {$\theta$};
\draw (0,0) -- (235:3);
\draw[fill] (-55:3) circle (0.1) node[anchor=north] {$(x,-y)$};
\draw (3,0) arc(0:-55:3);
\node at (1,-0.7) {$\theta$};
\draw (0,0) -- (-55:3);
\end{tikzpicture}
\caption{Reference angles are examples of horizontal and/or vertical symmetry. }
\end{centering}
\end{figure}
\subsection{$x$ and $y$ and slope}
Even the infinite number of points in the first quadrant can be scaled
down somewhat. Let us consider only the points 1 unit away from
the origin. Such points make a circle, called the Unit Circle.
If you think of sine as opposite over hypotenuse, and the hypotenuse is
always 1, then on the unit circle, sine equals the opposite leg
of the reference triangle. On the unit circle, this is always the $y$
value. Sine \emph{is} $y$.
Again, if cosine is adjacent over hypotenuse and the hypotenuse is
always one, then on the unit circle, cosine equals the adjacent leg
of the reference triangle. On the unit circle, this is always the $x$
value. Cosine \emph{is} $x$.
Lastly, if tangent is opposite over adjacent, then on the unit circle
that is $y$ over $x$, rise over run, a.k.a. slope. Tangent \emph{is}
slope. And for any radius greater or less than 1, we need only
multiply by that radius.
When we overlay this way of thinking about trigonometric functions
(cosine is $x$, sine is $y$, tangent is slope) onto the grid, we can see
that all the first quadrant information we might gather can be
applied to any other quadrant with only sign changes. Sine (as $y$)
will be positive in the first and second quadrant, and negative in
the third and fourth. Cosine (as $x$) will be positive in the first
and fourth quadrants, and negative in the second and third.
Tangent (as $m$) will be positive in the first and third, and negative
in the second and fourth.
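The sign pattern described in this last paragraph can be summarized compactly:
\begin{center}
\begin{tabular}{c|ccc}
Quadrant & $\sin\theta$ ($y$) & $\cos\theta$ ($x$) & $\tan\theta$ ($m$)\\ \hline
I & $+$ & $+$ & $+$\\
II & $+$ & $-$ & $-$\\
III & $-$ & $-$ & $+$\\
IV & $-$ & $+$ & $-$
\end{tabular}
\end{center}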
%!TEX program = xelatex
\documentclass[12pt,a4paper]{article}
\usepackage{xeCJK}
\usepackage{amsmath}
\setmainfont{Times New Roman}
\usepackage{setspace}
\usepackage{caption}
\usepackage{graphicx, subfig}
\usepackage{float}
\usepackage{listings}
\usepackage{booktabs}
\usepackage{setspace}% spacing package (for line spacing)
\usepackage{mathtools}
\usepackage{amsfonts}
\begin{document}
\title{Homework 6}
\author{11611118 郭思源}
\begin{spacing}{1.5}% set line spacing to 1.5
\section{Question 1}
Prove that a discrete-state-space stochastic process which satisfies the Markov property has
\[
P^{(n+m)} = P^{(n)}P^{(m)}
\]
and thus
\[
P^{(n)} = P^{(n-1)}P = P\times\cdots\times P = P^n,
\]
where $P^{(n)}$ denotes the matrix of $n$-step transition probabilities and $P^n$ denotes the $n$th power of the matrix $P$.
\subsection{The $n$-step transition probabilities}
Let the $n$-step transition probabilities be denoted by
$p_{ij}^{(n)}$: the probability that a process currently in state $i$ will be in state $j$ after $n$ additional transitions. That is,
\[
p_{ij}^{(n)}
=
P\{X_{n+m}=j \mid X_m=i\}, \qquad n \geq 0,\ i, j \geq 0.
\]
Assuming the chain is time-homogeneous (as is implicit in the statement), this probability does not depend on $m$; in particular, the one-step probabilities are
\[p_{ij}^{(1)} = p_{ij}.\]
\newpage
\subsection{Proof}
\begin{equation*}
\begin{aligned}
p_{ij}^{(n+m)}
&= P\{X_{n+m}=j \mid X_0=i\} \\
&= \sum_{k=0}^{\infty}
   P\{X_{n+m}=j, X_{n}=k \mid X_0=i\}
   \qquad \text{(law of total probability)}\\
&= \sum_{k=0}^{\infty}
   P\{X_{n+m}=j \mid X_{n}=k, X_0=i\}\, P\{X_{n}=k \mid X_{0}=i\}
   \qquad \text{(chain rule for conditional probability)}\\
&= \sum_{k=0}^{\infty}
   P\{X_{n+m}=j \mid X_{n}=k\}\, P\{X_{n}=k \mid X_{0}=i\}
   \qquad \text{(Markov property)}\\
&= \sum_{k=0}^{\infty}
   p_{ik}^{(n)}\, p_{kj}^{(m)}
   \qquad \text{(time-homogeneity)} .
\end{aligned}
\end{equation*}
In matrix form, we have
\[P^{(n+m)} = P^{(n)}P^{(m)},\]
and thus
\[P^{(n)} = P^{(n-1)}P = P\times \dots \times P = P^n,\]
where $P^{(n)}$ denotes the matrix of $n$-step transition probabilities and $P^n$ denotes the $n$th power of the matrix $P$.
\[
p_{ij}^{(n+m)}
= \sum_{k=0}^{\infty} p_{ik}^{(n)}\,p_{kj}^{(m)}
\qquad \text{for all } n,m \geq 0 \text{ and all } i,j \geq 0.
\]
Above, we have assumed, without loss of generality, that the state space $E$ is $\{0, 1, 2, \dots\}$. If we do not assume this and instead let the state space be an arbitrary countable set $E$, then the equation should be written as
\[
p_{ij}^{(n+m)}
= \sum_{k\in E} p_{ik}^{(n)}\,p_{kj}^{(m)}
\qquad \text{for all } n,m \geq 0 \text{ and all } i,j \in E.
\]
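
As an optional numerical sanity check (a minimal sketch, assuming NumPy
is available; the two-state matrix below is an arbitrary example, not
taken from the problem), we can compare $P^{(n+m)}$ with
$P^{(n)}P^{(m)}$ computed as matrix powers:
\begin{lstlisting}[language=Python]
import numpy as np

# Arbitrary (hypothetical) one-step transition matrix of a two-state chain.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

n, m = 3, 5
lhs = np.linalg.matrix_power(P, n + m)          # P^(n+m)
rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, m)
print(np.allclose(lhs, rhs))                    # expected output: True
\end{lstlisting}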
\newpage
\section{Question 2}
Consider adding a pizza delivery service as an alternative to the dining halls. \\Table~\ref{T:hw-6-1} gives the transition percentages based on a student survey. Determine the long-term percentages eating at each place. Try several different starting values. Is equilibrium achieved in each case? If so, what is the final distribution of students in each case?
\begin{table}[htb]
\centering
\begin{tabular}{ccccc}
&
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{Next state}
& \multicolumn{1}{c}{}
\\ \cline{2-5} \multicolumn{1}{c|}{}
& \multicolumn{1}{c|}{}
& \multicolumn{1}{c}
{\begin{tabular}[c]{@{}c@{}}Grease\\Dining Hall\end{tabular}}
& \multicolumn{1}{c}
{\begin{tabular}[c]{@{}c@{}}Sweet\\ Dining Hall\end{tabular}}
& \multicolumn{1}{c|}
{\begin{tabular}[c]{@{}c@{}}Pizza\\ delivery\end{tabular}}
\\ \cline{2-5} \multicolumn{1}{c|}{}
& \multicolumn{1}{c|}{Grease Dining Hall}
& 0.25
& 0.25
& \multicolumn{1}{c|}{0.50}
\\ \multicolumn{1}{c|}{Present state}
& \multicolumn{1}{c|}{Sweet Dining Hall}
& 0.10
& 0.30
& \multicolumn{1}{c|}{0.60}
\\ \multicolumn{1}{c|}{}
& \multicolumn{1}{c|}{Pizza delivery}
& 0.05
& 0.15
& \multicolumn{1}{c|}{0.80}
\\ \cline{2-5}
\end{tabular}
\caption{Survey of dining at College USA}\label{T:hw-6-1}
\end{table}
The transition matrix read off from Table~\ref{T:hw-6-1} is
\[ P =
\begin{bmatrix}
0.25 & 0.25 & 0.50
\\[8pt]
0.10 & 0.30 & 0.60
\\[8pt]
0.05 & 0.15 & 0.80
\end{bmatrix}. \]
Solving $qP = q$ together with $q_1 + q_2 + q_3 = 1$ for the equilibrium (stationary) distribution $q$, we obtain
\[ q =
\begin{bmatrix}
0.0741 & 0.1852 & 0.7407
\end{bmatrix}. \]
Iterating the chain from several different starting distributions (see the sketch below), the state vector converges to this same $q$ every time, so equilibrium is achieved in each case. If the total number of customers is $n$, then at equilibrium Grease Dining Hall serves about $0.0741n$ customers, Sweet Dining Hall about $0.1852n$, and Pizza delivery about $0.7407n$.
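
The following minimal sketch (assuming NumPy; the starting
distributions are arbitrary choices, not specified in the problem)
iterates $q \leftarrow qP$ from three different initial distributions
and shows that each run settles to the same equilibrium:
\begin{lstlisting}[language=Python]
import numpy as np

P = np.array([[0.25, 0.25, 0.50],
              [0.10, 0.30, 0.60],
              [0.05, 0.15, 0.80]])

# Three arbitrary starting distributions (each sums to 1).
starts = [np.array([1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0]),
          np.array([1/3, 1/3, 1/3])]

for q in starts:
    for _ in range(100):      # iterate q <- qP until numerically stationary
        q = q @ P
    print(np.round(q, 4))     # each run prints [0.0741 0.1852 0.7407]
\end{lstlisting}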
\end{spacing}
\end{document}
"alphanum_fraction": 0.5701921047,
"avg_line_length": 31.7919463087,
"ext": "tex",
"hexsha": "3f8b10f459a52dd438c1c82dfb9ebf8daa0efd8c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "c235gsy/Sustech_Mathematical-Modeling",
"max_forks_repo_path": "Latex/HW6/HW6.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "c235gsy/Sustech_Mathematical-Modeling",
"max_issues_repo_path": "Latex/HW6/HW6.tex",
"max_line_length": 361,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "c235gsy/Sustech_Mathematical-Modeling",
"max_stars_repo_path": "Latex/HW6/HW6.tex",
"max_stars_repo_stars_event_max_datetime": "2019-11-30T11:32:36.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-11-30T11:32:36.000Z",
"num_tokens": 1768,
"size": 4737
} |