%============================================================================
% ATOMIC STRUCTURE DEFINITIONS AND CONSTANTS
%============================================================================
This section describes the atomic structure information required for the form
factor calculations: the physical constants and definitions, the atomic energy
levels and the atomic wave functions.
%============================================================================
% CONSTANTS
%============================================================================
\section{Constants and Definitions}
We first need to define the usual symbols which are common in atomic physics.
\begin{center}
\begin{tabular}{|c|l|} \hline
Symbol & Meaning \\ \hline \hline
$c$ & Speed of Light \\
$e$ & Electron Charge \\
$m$ & Electron Mass \\
$a_0$ & Bohr Radius \\
$\alpha$ & Fine Structure Constant \\
$\hbar$ & Planck's Constant/$2\pi$ \\
$r_0$ & Classical Electron Radius \\
$Z$ & Atomic Number \\
$Y_{lm}(\theta,\phi)$ & Spherical Harmonics \\
\hline
\end{tabular}
\end{center}
We will use $\mb{k} = (k_x,k_y,k_z)$ to represent the photon wave propagation
vector, and in the case of photoionisation, we will use $\mb{k'} =
(k'_x,k'_y,k'_z)$ for the wave-vector of the ejected electron.
Electronic coordinates with respect to the nucleus will be denoted by
$\mb{r} = (r,\theta,\phi)$ whereas the ejection angles of a photo-electron will
be denoted by $\Omega = (\Theta,\Phi)$.
In order to avoid confusion with the fine structure constant $\alpha$,
the Dirac alpha matrix will be denoted by $\Alpha$ where
\begin{equation}
\Alpha =
\left(
\begin{array}{cc}
0 & \mbox{\boldmath $\sigma$} \\
\mbox{\boldmath $\sigma$} & 0
\end{array}
\right)
\end{equation}
and where $\mbox{\boldmath $\sigma$}$ are the Pauli spin matrices.
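For reference, in the standard representation the Pauli spin matrices are
\[
\sigma_x =
\left(
    \begin{array}{cc}
        0 & 1 \\
        1 & 0
    \end{array}
\right) , \quad
\sigma_y =
\left(
    \begin{array}{cc}
        0 & -i \\
        i & 0
    \end{array}
\right) , \quad
\sigma_z =
\left(
    \begin{array}{cc}
        1 & 0 \\
        0 & -1
    \end{array}
\right) ,
\]
so that $\Alpha$ is a $4 \times 4$ matrix written here in $2 \times 2$ block form.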
When dealing with four-component spinors, the following definitions have been
adopted.
\begin{itemize}
\item $\ket{i}$ denotes the $i^{\mathrm{th}}$ state of an atom. For example,
$\ket{0}$ denotes the ground state, $\ket{1}$ denotes the first
excited state etc. $\ket{c}$ denotes the continuum or free electron
state. The state $\ket{i}$ may also be written as $\psi_i$.
\item $\psi_{ij}$ denotes the $j^{\mathrm{th}}$ component of the
$i^{\mathrm{th}}$ atomic state.
\item $g_i(r)$ and $h_i(r)$ denote the large and small radial components of the
      $i^{\mathrm{th}}$ atomic state respectively.
\item $A_{ij}(\theta,\phi)$ denotes the $j^{\mathrm{th}}$ angular component of the
$i^{\mathrm{th}}$ atomic state.
\end{itemize}
The following constants, which appear repeatedly in the relativistic
wave-functions given later in this section, are also defined here for
convenience.
\begin{equation}
\begin{array}{cc}
\gamma_1 = \sqrt{1 - (\alpha Z)^2} & \gamma_2 = \sqrt{4 - (\alpha Z)^2}
\end{array}
\end{equation}
\begin{equation}
\begin{array}{cccc}
    N_1 = 1 ;& N_2 = \sqrt{2(1 + \gamma_1)} ;& N_3 = 2
    ;&
    \sigma_i = \frac{2Z}{N_i a_0} \;\; (i = 1, 2, 3)
\end{array}
\end{equation}
\begin{equation}
\begin{array}{ccc}
\epsilon_1 =
\left[
1 + \left( \frac{\alpha Z}{\gamma_1} \right)^2
\right]^{-1/2}
&
\epsilon_2 =
\left[
1 + \left( \frac{\alpha Z}{1+\gamma_1} \right)^2
\right]^{-1/2}
&
\epsilon_3 =
\left[
1 + \left( \frac{\alpha Z}{\gamma_2} \right)^2
\right]^{-1/2}
\end{array}
\end{equation}
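To indicate the size of the relativistic corrections involved, note that for
hydrogen ($Z=1$) these constants differ from their non-relativistic limits only
at the level of $\alpha^2 \approx 5 \times 10^{-5}$; for example
\[
\gamma_1 = \sqrt{1 - \alpha^2} \approx 0.99997 , \qquad
\gamma_2 = \sqrt{4 - \alpha^2} \approx 1.99999 ,
\]
although the corrections grow as $(\alpha Z)^2$ and become appreciable for
heavy hydrogenic ions.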
%============================================================================
% SCHRODINGER EQUATION
%============================================================================
\section{The Schr\"odinger Equation and Non-Relativistic Wave-functions}
The Schr\"odinger equation for an electron in a Coulomb potential describes
the non-relativistic structure of a hydrogen atom.
Ignoring fine structure corrections, the equation can be written as:
\begin{equation}
\left[
    -\frac{\hbar^2}{2m} \nabla^2 - \frac{Ze^2}{4\pi\epsilon_0 r}
\right] \psi(\mb{r}) = E \psi(\mb{r})
\end{equation}
Solutions to this equation~\cite{Bransden-Joachain} give the energy levels
of the hydrogen atom as a function of the principal quantum number $n$
\begin{equation}
E_n = - \frac{e^2}{4\pi\epsilon_0 a_0} \frac{Z^2}{2n^2}
\end{equation}
and wave functions as functions of the three space variables
$r$, $\theta$ and $\phi$ and the three quantum numbers $n$, $l$ and $m$.
\begin{equation}
\psi_{nlm}(r,\theta,\phi) = R_{nl}(r) Y_{lm}(\theta,\phi)
=
- \sqrt{
\left(
\frac{2Z}{n a_0}
\right)^3
\frac{(n - l - 1)!}{2n [(n+l)!]^3}
}
e^{-\rho/2} \rho^l L^{2l+1}_{n+l}(\rho) Y_{lm}(\theta,\phi)
\end{equation}
where $\rho = 2Zr/na_0$, $a_0 = 4\pi\epsilon_0\hbar^2 / e^2 m$,
$Y_{lm}(\theta,\phi)$ are the spherical harmonics and
$L^{2l+1}_{n+l}(\rho)$ are the associated Laguerre
polynomials~\cite{Bransden-Joachain}.
For example, the ground state wave-function of hydrogen
($\psi_{1s}$ or $\psi_{100}$) is defined as:
\begin{equation} \label{eq:schrodinger-ground}
\psi_{1s}(r,\theta,\phi) = \frac{1}{\sqrt{\pi}}
\left( \frac{Z}{a_0} \right)^{3/2}
e^{-Zr/a_0} .
\end{equation}
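As a simple numerical check of the energy formula above, setting $Z=1$ and
$n=1$ gives the familiar ground state energy of hydrogen,
\[
E_1 = - \frac{e^2}{4\pi\epsilon_0 a_0} \frac{1}{2} \approx -13.6 \;\mathrm{eV} ,
\]
with the excited levels scaling as $1/n^2$, so that $E_2 \approx -3.4$~eV and
$E_3 \approx -1.5$~eV.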
%============================================================================
% DIRAC EQUATION
%============================================================================
\section{The Dirac Equation and Relativistic Wave-functions}
In order to obtain more accurate energy levels and wave-functions for
hydrogen, the effects of special relativity, which give rise to the fine
structure in the hydrogen spectrum, need to be considered, and as such
the solutions to the Dirac equation for an electron in a central Coulomb
field need to be used.
The Dirac equation is shown here (where $\psi$ is a four component spinor
wave-function) together with its energy eigenvalue $E_{nj}$
for hydrogenic atoms.
\begin{equation} \label{eq:dirac-equation}
\begin{array}{lcr}
\left[
    c \Alpha \cdot \mb{p} + \beta mc^2 - \frac{Ze^2}{r}
\right] \psi = E_{nj} \psi
&
\; ; \;
&
    E_{nj} = \frac{mc^2}{
                \sqrt{
                    1 + \frac{(Z\alpha)^2}{\left( n-j-\frac{1}{2} +
                    \sqrt{(j+\frac{1}{2})^2 - (Z\alpha)^2} \right)^2 }
                }
            }
\end{array}
\end{equation}
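It is instructive to expand this eigenvalue in powers of $Z\alpha$,
\[
E_{nj} \approx mc^2 \left[ 1 - \frac{(Z\alpha)^2}{2n^2}
- \frac{(Z\alpha)^4}{2n^4} \left( \frac{n}{j+\frac{1}{2}} - \frac{3}{4} \right)
+ \ldots \right] ,
\]
where the first term is the rest mass energy, the second reproduces the
non-relativistic energy levels of the previous section and the third gives the
fine structure splitting between states of the same $n$ but different $j$.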
Solutions to the Dirac equation for hydrogenic atoms are written in terms of
four component spinors, separated into radial and angular components as
defined in~\cite{Bethe-Salpeter}.
There are two general spinor solutions: $\psi_a$, corresponding to the case
$j=l+1/2$, and $\psi_b$, for the case $j=l-1/2$.
Note also that the large and small components of the radial wave-functions are
denoted by $g(r)$ and $h(r)$ instead of the more traditional $g(r)$ and $f(r)$,
so as to avoid confusion with the form factor, which uses the symbol $f$.
\begin{equation}
\begin{array}{lr}
\psi_a =
\left(
\begin{array}{c}
g(r) \sqrt{\frac{l+m+1/2}{2l+1}} Y_{l,m-1/2}(\theta,\phi) \\
-g(r) \sqrt{\frac{l-m+1/2}{2l+1}} Y_{l,m+1/2}(\theta,\phi) \\
-ih(r) \sqrt{\frac{l-m+3/2}{2l+3}} Y_{l+1,m-1/2}(\theta,\phi) \\
-ih(r) \sqrt{\frac{l+m+3/2}{2l+3}} Y_{l+1,m+1/2}(\theta,\phi)
\end{array}
\right)
&
\psi_b =
\left(
\begin{array}{c}
g(r) \sqrt{\frac{l-m+1/2}{2l+1}} Y_{l,m-1/2}(\theta,\phi) \\
g(r) \sqrt{\frac{l+m+1/2}{2l+1}} Y_{l,m+1/2}(\theta,\phi) \\
-ih(r) \sqrt{\frac{l+m-1/2}{2l-1}} Y_{l-1,m-1/2}(\theta,\phi) \\
ih(r) \sqrt{\frac{l-m-1/2}{2l-1}} Y_{l-1,m+1/2}(\theta,\phi)
\end{array}
\right)
\end{array}
\end{equation}
The radial components $g(r)$ and $h(r)$ of the spinors can be written in
general in terms of the confluent hypergeometric function $F(a,b,x)$.
% g(r) radial component
\begin{equation}
\begin{split}
g(r) = - \frac{\sqrt{\Gamma(2\gamma + n' + 1)}}{\Gamma(2\gamma+1)\sqrt{n'!}}
\sqrt{\frac{1+\epsilon}{4N(N-x)}}
\left( \frac{2Z}{Na_0} \right)^{3/2}
       e^{-\frac{Zr}{Na_0}}
\left( \frac{2Zr}{Na_0} \right)^{\gamma-1}
\\
\times
\left[
-n' F(-n'+1,2\gamma+1,\frac{2Zr}{Na_0})
+
(N-x) F(-n',2\gamma+1,\frac{2Zr}{Na_0})
\right]
\end{split}
\end{equation}
% h(r) radial component
\begin{equation}
\begin{split}
h(r) = - \frac{\sqrt{\Gamma(2\gamma+n'+1)}}{\Gamma(2\gamma+1)\sqrt{n'!}}
\sqrt{\frac{1-\epsilon}{4N(N-x)}}
\left( \frac{2Z}{Na_0} \right)^{3/2}
e^{-\frac{Zr}{Na_0}}
\left( \frac{2Zr}{Na_0} \right)^{\gamma-1}
\\
\times
\left[
n' F(-n'+1,2\gamma+1,\frac{2Zr}{Na_0})
+ (N-x) F(-n',2\gamma+1,\frac{2Zr}{Na_0})
\right]
\end{split}
\end{equation}
where we have $x = -(j+1/2)=-(l+1)$ if $j=l+1/2$, $x=j+1/2=+l$ if $j=l-1/2$,
$\gamma = +\sqrt{x^2 - (Z\alpha)^2}$, $\epsilon = E/E_0$ (energy/rest mass
energy), $n' = \alpha Z \epsilon/\sqrt{1 - \epsilon^2} - \gamma$,
$N = \sqrt{n^2 - 2n'(k - \sqrt{k^2 - \alpha^2 Z^2})}$, $k = |x|$, and
$m = \pm(l+1/2)$~\footnote{See Bethe and Salpeter~\cite{Bethe-Salpeter} for a
detailed explanation of all these constants and definitions.}.
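As an illustration of these definitions, consider the ground state
($n=1$, $l=0$, $j=\frac{1}{2}$), for which $x=-1$, $k=1$,
$\gamma = \gamma_1 = \sqrt{1-(Z\alpha)^2}$ and
$\epsilon = \epsilon_1 = \gamma_1$. Then
\[
n' = \frac{\alpha Z \epsilon}{\sqrt{1 - \epsilon^2}} - \gamma
   = \frac{\alpha Z \gamma_1}{\alpha Z} - \gamma_1 = 0 ,
\qquad
N = \sqrt{1 - 0} = 1 = N_1 ,
\]
so the term proportional to $n'$ vanishes, $F(-n',2\gamma+1,\tfrac{2Zr}{Na_0})=1$,
and both $g(r)$ and $h(r)$ collapse to a single term proportional to
$r^{\gamma_1 - 1} e^{-Zr/a_0}$, which is precisely the form of the ground state
radial functions quoted in the next subsection.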
%========================================================================
% GROUND STATE
%========================================================================
\subsection{The Ground State}
The ground state (or $1S_{\frac{1}{2}}$) of the hydrogen atom corresponds to
the quantum numbers $n=1$, $l=0$ and $j=\frac{1}{2}$, and is therefore
described by the $j = l + \frac{1}{2}$ spinor $\psi_a$.
\begin{equation} \label{eq:dirac-ground}
\ket{1S_{\frac{1}{2}}} = \ket{0} =
\left(
\begin{array}{c}
A_{01}(\theta,\phi) \; g_0(r) \\
A_{02}(\theta,\phi) \; g_0(r) \\
A_{03}(\theta,\phi) \; h_0(r) \\
A_{04}(\theta,\phi) \; h_0(r)
\end{array}
\right)
=
\left(
\begin{array}{c}
Y_{00} g_0(r) \\
0 \\
-i \sqrt{\frac{1}{3}} Y_{10} h_0(r) \\
-i \sqrt{\frac{2}{3}} Y_{11} h_0(r)
\end{array}
\right)
\end{equation}
%======================================================================
\begin{equation} \label{eq:dirac-radial}
\begin{array}{cc}
g_0(r) = G_0 e^{-\frac{1}{2} \sigma_1 r} r^{\gamma_1 - 1} &
h_0(r) = H_0 g_0(r)
\\
\\
H_0 = - \sqrt{ \frac{1 - \epsilon_1}{1 + \epsilon_1} } &
G_0 = \left( \frac{2Z}{a_0} \right)^{3/2}
      \sqrt{\frac{1 + \epsilon_1}{2 \Gamma(2\gamma_1 + 1) }} \;
      \sigma_1^{\gamma_1 - 1}
\end{array}
\end{equation}
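It is also worth checking the non-relativistic limit $\alpha Z \to 0$, in which
$\gamma_1 \to 1$, $\epsilon_1 \to 1$ and $\Gamma(2\gamma_1 + 1) \to 2$. Then
$H_0 \to 0$, so the small component $h_0(r)$ vanishes, and
\[
g_0(r) \to 2 \left( \frac{Z}{a_0} \right)^{3/2} e^{-Zr/a_0} ,
\]
which is the familiar radial function $R_{10}(r)$, so that the large component
$Y_{00} \, g_0(r)$ reduces to the Schr\"odinger ground state wave-function of
equation~(\ref{eq:schrodinger-ground}).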
%======================================================================
% FIRST EXCITED STATE
%======================================================================
\subsection{The First Excited State}
\begin{equation}
\ket{2S_{1/2}} = \ket{1} =
\left(
\begin{array}{c}
A_{11}(\theta,\phi) g_1(r) \\
A_{12}(\theta,\phi) g_1(r) \\
A_{13}(\theta,\phi) h_1(r) \\
A_{14}(\theta,\phi) h_1(r)
\end{array}
\right)
=
\left(
\begin{array}{c}
Y_{00} g_1(r) \\
0 \\
-i \sqrt{\frac{1}{3}} Y_{10} h_1(r) \\
-i \sqrt{\frac{2}{3}} Y_{11} h_1(r)
\end{array}
\right)
\end{equation}
\begin{equation}
\begin{array}{cc}
g_1(r) = e^{-\frac{1}{2} \sigma_2 r} r^{\gamma_1}
\left( G'_{1} \frac{1}{r} - G''_{1} \right)
&
h_1(r) = H_1 \left( \frac{H'_1 - H''_1 r}{H'''_1 - H''''_1 r} \right) g_1(r)
\end{array}
\end{equation}
\begin{equation*}
\begin{array}{ccc}
G_1 = \left( \frac{2Z}{N_2 a_0} \right)^{3/2}
\sqrt{\frac{2\gamma_1 + 1}{\Gamma(2\gamma_1 + 1)}}
\sqrt{\frac{1 + \epsilon_2}{4N_2 (N_2 + 1)}}
\; ; &
G'_1 = N_2 G_1 \sigma_2^{\gamma_1 - 1}
\; ; &
G''_1 = \left( \frac{N_2 + 1}{2\gamma_1 + 1} \right) G_1 \sigma_2^{\gamma_1}
\end{array}
\end{equation*}
\begin{equation*}
\begin{split}
\begin{array}{ccc}
H_1 = -\sqrt{\frac{1 - \epsilon_2}{1 + \epsilon_2}}
\; ; &
H'_1 = (2\gamma_1 + 1)(N_2 + 2)
\; ; &
H''_1 = (N_2 + 1)\sigma_2
\end{array}
\\
\begin{array}{cc}
H'''_1 = (2\gamma_1 + 1)N_2
\; ; &
H''''_1 = (N_2 + 1)\sigma_2
\end{array}
\end{split}
\end{equation*}
%=================================================================
% The Second Excited State
%=================================================================
\subsection{The Second Excited State}
\begin{equation}
\ket{2P_{1/2}} = \ket{2} =
\left(
\begin{array}{c}
A_{21}(\theta,\phi) g_2(r) \\
A_{22}(\theta,\phi) g_2(r) \\
A_{23}(\theta,\phi) h_2(r) \\
A_{24}(\theta,\phi) h_2(r)
\end{array}
\right)
=
\left(
\begin{array}{c}
\sqrt{\frac{1}{3}} Y_{10} g_2(r) \\
\sqrt{\frac{2}{3}} Y_{11} g_2(r) \\
-i Y_{00} h_2(r) \\
0
\end{array}
\right)
\end{equation}
\begin{equation}
\begin{array}{cc}
g_2(r) = e^{-\frac{1}{2}\sigma_2 r} r^{\gamma_1}
\left(
G'_2 \frac{1}{r} - G''_2
\right)
\; ; &
h_2(r) = H_2
\left(
\frac{H'_2 - H''_2 r}{H'''_2 - H''''_2 r}
\right) g_2(r)
\end{array}
\end{equation}
\[
\begin{array}{ccc}
G_2 = \left( \frac{2Z}{N_2 a_0} \right)^{3/2}
\sqrt{\frac{2\gamma_1 + 1}{\Gamma(2\gamma_1 + 1)}}
\sqrt{\frac{1+\epsilon_2}{4N_2 (N_2 - 1)}}
\; ; &
G'_2 = (N_2 - 2) \sigma_2^{\gamma_1 - 1} G_2
\; ; &
G''_2 = \left( \frac{N_2 - 1}{2\gamma_1 + 1} \right) \sigma_2^{\gamma_1} G_2
\end{array}
\]
\[
\begin{array}{cc}
H_2 = -\sqrt{\frac{1-\epsilon_2}{1+\epsilon_2}}
&
H'_2 = (2\gamma_1 + 1)N_2
\end{array}
\]
\[
\begin{array}{ccc}
H''_2 = (N_2 - 1)\sigma_2
&
H'''_2 = (2\gamma_1 + 1)(N_2 - 2)
&
H''''_2 = (N_2 - 1)\sigma_2
\end{array}
\]
%=================================================================
% The Third Excited State
%=================================================================
\subsection{The Third Excited State}
\begin{equation}
\ket{2P_{3/2}} = \ket{3} =
\left(
\begin{array}{c}
A_{31}(\theta,\phi) g_3(r) \\
A_{32}(\theta,\phi) g_3(r) \\
A_{33}(\theta,\phi) h_3(r) \\
A_{34}(\theta,\phi) h_3(r)
\end{array}
\right)
=
\left(
\begin{array}{c}
\sqrt{\frac{2}{3}} Y_{10} g_3(r) \\
-\sqrt{\frac{1}{3}} Y_{11} g_3(r) \\
-i \sqrt{\frac{2}{5}} Y_{20} h_3(r) \\
-i \sqrt{\frac{3}{5}} Y_{21} h_3(r)
\end{array}
\right)
\end{equation}
\begin{equation}
\begin{array}{cc}
g_3(r) = G_3 e^{-\frac{1}{2}\sigma_3 r} r^{\gamma_2 - 1}
\; ; &
h_3(r) = H_3 g_3(r)
\end{array}
\end{equation}
\[
\begin{array}{cc}
G_3 = \left( \frac{Z}{a_0} \right)^{3/2}
\sqrt{ \frac{1 + \epsilon_3}{2\Gamma(2\gamma_2 + 1)} }
\sigma_3^{\gamma_2 - 1}
\; ; &
H_3 = -\sqrt{ \frac{1 - \epsilon_3}{1 + \epsilon_3} }
\end{array}
\]
\subsection{The Free Dirac Electron}
The continuum state is described by a free Dirac electron state with energy $E$
and momentum wave-vector $\mb{k'} = (k'_x,k'_y,k'_z)$.
Such a state can be represented by one of the two four-component spinors below.
\begin{equation}
\begin{array}{ccccc}
\ket{\psi_c}_{\uparrow} =
\left(
\begin{array}{c}
1 \\
0 \\
\xi k'_z \\
\xi (k'_x + ik'_y)
\end{array}
\right)
& ; &
\ket{\psi_c}_{\downarrow} =
\left(
\begin{array}{c}
0 \\
1 \\
\xi(k'_x - ik'_y) \\
-\xi k'_z
\end{array}
\right)
& ; &
\xi = \frac{c\hbar}{E + E_0}
\end{array}
\end{equation}
where $E$ and $E_0$ are the electron's kinetic and rest mass energies
respectively.
The spinor $\ket{\psi_c}_{\uparrow}$ represents the electron in the spin-up
state, whereas $\ket{\psi_c}_{\downarrow}$ represents the electron in the
spin-down state.
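For a non-relativistic electron the lower two components of these spinors are
strongly suppressed: since $\hbar \mb{k'}$ is the electron momentum,
\[
\left| \xi \mb{k'} \right| = \frac{c \hbar k'}{E + E_0} \sim \frac{v}{c} \ll 1 ,
\]
so in this limit the four-component continuum spinors reduce to the usual
two-component Pauli spinors for spin up and spin down.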
| {
"alphanum_fraction": 0.482544153,
"avg_line_length": 34.5,
"ext": "tex",
"hexsha": "e2195ad4e9b36f824b512307e4d40650bf7e70f0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mikepsn/atomic-form-factors-thesis",
"max_forks_repo_path": "intro_atomic.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mikepsn/atomic-form-factors-thesis",
"max_issues_repo_path": "intro_atomic.tex",
"max_line_length": 89,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mikepsn/atomic-form-factors-thesis",
"max_stars_repo_path": "intro_atomic.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5888,
"size": 17043
} |
\documentclass{article}
\usepackage{geometry}
\usepackage{harvard}
\usepackage{helvet}
\usepackage{setspace}
\renewcommand{\familydefault}{\sfdefault}
\geometry{a4paper}
\title{Dissertation Research Proposal, SP53004A}
\author{Student Registration No. 22164733}
\date{\today}
\begin{document}
\begin{doublespace}
\citationstyle{dcu}
\bibliographystyle{agsm}
\maketitle
\paragraph{}This research proposal addresses arguments within Political Theory and Political Economy about the nature of our contemporary political settlement. My dissertation will explore the politics and economics of contemporary western societies. I will consider the contents of and the relationships between the `master concepts' of liberalism and democracy and consider what opportunity there is for their renewal. I will put forward an ethnography of Free/Libre and Open Source and examine the political philosophy and political economy of this archetypical `networked society'. With an increasing prevalence and an apparent counter-system agenda, an analysis of Free/Libre and Open-Source in relationship with the liberal democratic and capitalist system that sustains it is important and of contemporary relevance.
\paragraph{}The meshing of the politics of the Hacker, the Artist and the Activist has developed a manifesto of Free/Libre and Open-Source ideas, which I will refer to as \emph{f/los}.\footnote{A more common acronym, FLOSS, demonstrates the genealogy of the term and its origins in computing. I will not refer to free/libre and open-source ideas with an additional \emph{s} as I am not directly concerned here with the software aspect of the definition which the letter denotes.} Increasing contact with f/los artefacts, technologies and economies and the wider dissemination of f/los concepts is raising popular consciousness of emerging redefinitions for master concepts such as freedom, property and ownership. Freedom and openness are two ideas central to the hacker culture and the movements it has influenced. My research will lead me to examine critically the political philosophy of f/los to reveal the content of these emerging definitions, how they differ from other conceptions, and what new understandings contemporary politics and economics could inflect from these ideas.
\paragraph{}I describe myself as a hacker\footnote{``To understand the full meaning of the word `hacker', it helps to examine the word's etymology over the years. [...] Once a vague item of obscure student jargon, the word `hacker' has become a linguistic billiard ball, subject to political spin and ethical nuances. Perhaps this is why so many hackers and journalists enjoy using it.'' \cite[Appendix B.]{Williams:2007uq}}. I participate in open-source projects and `gift' a significant proportion of my creative endeavors to `the community' and to the benefit of humanity as a whole. While I question the likelihood of someone benefiting from my individual openness and free gifting, the coordinated collaboration of individuals in various f/los communities has bestowed significant benefits on us all. The idea of `community' is frequently evoked in f/los discourse and most f/los activity takes place within a group, with disparate and dispersed members. I will examine the construction and maintenance of identity \cite{Appadurai:1996lp} and the definition of a community by f/los because I think that the social relationships that form these communities and the political culture that conditions the governance of them may provide useful examples for developments in contemporary society.
\paragraph{}To hackers closedness is an obstacle to development. These obstacles \emph{must} be dismantled, the process documented and the knowledge gained by the exercise shared with others. The imperative is critical to the philosophy: there is an obligation to fellow humans to share. Closedness is a lack of recognition of relationships. Closed devices will not inter-operate. Closed systems do not share information. Closed people relate to each other as means and not as ends.
\paragraph{}Closure characterizes the liberal-democratic political culture. The balance between these hyphenated concepts is delicate, leaving little but breathing space. Political liberalism's freedoms for the individual are curtailed by democratic mechanisms on behalf of the same free individuals, sovereignly constituted. Yet democracy's `people', liberated from being cogs, now appear as little more than figures watching the machine they have been freed from continue its work without them. What was perhaps once ideal now seems like a bad idea. Worryingly, it appears that these weakly linked concepts, liberal-democracy, are the only ideas left. I will critically examine the nature of liberalism and democracy, but most crucially I will examine the practical meaning of the hyphen that links them in our contemporary context. The discourse analytical approach of Laclau \cite{Laclau:2004dn} and other post-structuralists will help me rigorously demonstrate my assertion of `closure', a `closure' to anything prior to the acceptance of the liberal-democratic consensus \cite{Mouffe:2000fk}. To examine the implications of this perceived closure in liberal-democratic political systems I will draw on the work of \citeasnoun{Crick:2005il}, whose \emph{In Defence of Politics} is a powerful criticism of closure.
\paragraph{}Underpinning or central to - depending on your preference for ordering these things - liberal-democratic politics and the institutions they configure are economic activities and social relationships. In some societies these are still quite distinct from each other, in others the distinction is blurring. There is a tendency towards the subsuming of the social in the economic. With regard to the status of economic activities in contemporary society, it is clear that capitalism now towers above `the market', now large-scale, diversified and abstracted, and it moves on seeking new arenas where transactions take place to capitalize. What are the power relations that appear to enable capitalism to `Embrace, extend, extinguish'?\footnote{This term seems apt to describe the growth imperative, the \emph{modus operandi} of capitalism. It was used to describe the business practices of Microsoft, one of the world's largest businesses. Microsoft are dependent on the revenue they can generate from proprietary products which are protected by intellectual property rights in order to make a profit.} In what ways does f/los philosophy and political economy represent a challenge to these power relationships? To what extent is f/los part of the same political economy as the neo-liberal doctrine that f/los appears, in many aspects, to be the antithesis of?
\paragraph{}As I intend to demonstrate of liberal-democracy, so I seek also to demonstrate the closure of capitalism as an economic system. Capitalism considers anything external to itself as raw materials, as means towards the ends of primitive accumulation. Here of course Marx and Luxemburg have written plenty, although my own understanding of capitalism as `closed' has been developed by a contemporary analysis of and introduction to `fictitious capital' by \citeasnoun{Goldner:2007pb}.
\paragraph{}The ``contemporary proliferation of political spaces and the multiplicity of democratic demands'' \cite[p.17]{Mouffe:2000fk} needs a new politics, a politics that works at the global and the local, the universal and the particular. New relationships of interconnectedness between people, communities, identities, artifacts, locations and events need to be reflected in the political culture of our institutions. What are the dynamics of these relationships? Where are these new political spaces emerging in such a tightly closed political culture?
\paragraph{}Anarchic? Communitarian? Autonomous? A future f/los society of free individuals freely participating in communities socially sharing with others without the motivation of profit? My first chapter will be an ethnography of the politics and economics of f/los, revealing its radical ideas of community. I will draw upon the writing of alpha-hackers such as Eric \citeasnoun{Raymond:2000lr} and Richard \citeasnoun{Stallman:2002xy} as well as interviews, hacker-folklore and personal experience. My approach to an evaluation and criticism of f/los will be discourse analytical. I have been reading \citeasnoun{Terranova:2004ly} and \citeasnoun{Appadurai:1996lp} and I will draw on their arguments in considering the invention of hacker identity in a networked society. The role of `the community' in educating and inculcating a good f/los civic citizen will also be examined.
\paragraph{}From f/los I move on to the contemporary western political settlement for my second chapter. Here I will examine the nature of modern political liberalism and democracy, considering the constituency of `the people' with regards to the principle of popular sovereignty and the diminishing of demos through universalized rights. I will examine the tension in the hyphen between liberal-democracy using the discourse analytical method of \citeasnoun{Mouffe:2000fk}. Drawing on her work in the \emph{The Democratic Paradox} with that of \citeasnoun{Bobbio:2005vn} and also \possessivecite{Keenan:2003tw} \emph{Democracy in Question} I will look for opportunities for radical `democratic openness in a time of political closure'. \citeasnoun{Dahl:1961fj} asked `Who Governs?' and found a gradual shift from oligarchy to pluralism. \possessivecite{Barry:2001ff} \emph{Political Machines} was concerned with `governing in a technological society'\ldots This chapter, then, develops both those lines of enquiry and can be conceived as `Who Governs in a Networked Society'. The concept of \emph{multitude} \cite{Hardt:2005zt} may be useful to me here in my attempt at re-conceptualizing the popular sovereignty of the people.
\paragraph{}I continue with my focus on contemporary politics in my third chapter with a consideration of the political economy of capitalism. What are the typologies of power conditioned by neo-liberal democratic capitalism? What challenges, if any, does f/los political economy present to the dominant system of exchange? What happens when the empty signifiers of `free' and `open' are co-opted and adapted by commercial enterprise; when `free as in speech' becomes `free as in beer', as demonstrated in `web 2.0's' social sharing spaces, do questions of access now replace questions of ownership of means of (re-)production? \citeaffixed{Hartzog:2007qm,Wright:2005wb}{e.g.}
\paragraph{}The political economy of f/los is based on an appeal to the rule of law, constitutionalized in licenses and institutionalized in the methods for debate and revision of the licenses. Here f/los perhaps demonstrates a neo-liberal - understood as the conflation of political liberalism and its economic variant, or perhaps more accurately, the subsuming of political liberalism by economic liberalism - character. How does this character square with f/los's first principle, freedom? This potential disjuncture between intention and effect caused by licensing will be explored using the works of \citeasnoun{Hardie:2005px} as contra to \citeasnoun{Lessig:2004pi}. I will also reflect on the apparent universalized American legal-constitutionalism in f/los and its role in the project of Empire \cite{Hardt:2001jl}. Is f/los political economy essentially a neo-imperial non-tariff barrier to trade and development? How does f/los respond to the charge of being neo-liberalism's handmaiden? Is `the mirror going to steal the soul' of f/los? \cite{Prug:2006xr}
\paragraph{}I will draw together the ethnography of f/los and my analysis of contemporary politics in my final chapter which will contain my search for a new Politics of Openness. Here I will examine f/los philosophy in transmission, reflecting on what economic, social and cultural scenarios inspired by f/los can be envisaged. What hope is there for a radical democracy and a radical political economy? Will we see the emergence of a f/los society and state? \footnote{I would certainly not be the first to consider such ideas as an `open source state' \citeaffixed{Claude:2005jx,Rushkoff:2003wu}{e.g.}; nor will I simplistically imagine such a society from the clumsy, techno-utopian application of a software development and distribution methodology to the enormous complexity of the real world.} I am in search of a strategy for a revival of the left \cite{Laclau:2001rt,Laclau:1990rc} against the hegemony of neo-liberal political and economic order, so in this chapter I will discuss what aspects of f/los are compatible with a new radical politics and economy while at the same time, no doubt, posing questions for further exploration.
\paragraph{}Word count: 1795
\end{doublespace}
\newpage
\bibliography{/Users/robdyke/Documents/GoldsmithsCourses/bibliography/globalbib}
\end{document} | {
"alphanum_fraction": 0.8156717569,
"avg_line_length": 270.125,
"ext": "tex",
"hexsha": "45fbeaf9e23c6343fefc6871d1cced03cf6c702a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "56af3cedb274d16cfc2773eab167f36884cbb345",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "robdyke/goldsmiths",
"max_forks_repo_path": "Dissertation Project/researchproposal.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "56af3cedb274d16cfc2773eab167f36884cbb345",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "robdyke/goldsmiths",
"max_issues_repo_path": "Dissertation Project/researchproposal.tex",
"max_line_length": 1362,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "56af3cedb274d16cfc2773eab167f36884cbb345",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "robdyke/goldsmiths",
"max_stars_repo_path": "Dissertation Project/researchproposal.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2783,
"size": 12966
} |
%% UzK - A BEAMER THEME FOR THE UNIVERSITY OF COLOGNE
%% http://solstice.github.com/uzk-theme/
\documentclass{beamer}
%\usefonttheme{serif}
\usefonttheme{professionalfonts}% don't change fonts inside beamer
%\usepackage[ngerman]{babel}
%\usepackage[latin1]{inputenc}
%\usepackage[T1]{fontenc}
\usepackage{polyglossia}
\setmainlanguage{english}
\usepackage{fontspec}
\usepackage{xeCJK}
\usepackage{unicode-math}
\setmathfont{Latin Modern Math} % default
%\setmathfont[range=\mathalpha]{Asana Math}
%\setmathfont{Asana Math}[range={\mathbin}] %\mathord
\setmathfont{STIX Math}[range={"02609}] % ☉
%\setmathfont{Latin Modern Math}[range={"1D608-"1D63B}, sans-style=italic]
%\setmathfont{Latin Modern Math}[range={"00391-"003A9, "003B1-"003F5}, sans-style=upright]
\usepackage{siunitx}
%% If you want \sections, \subsections etc. to be displayed, you can load
%% e.g. the infolines theme. It is important, however, that other
%% themes are loaded _before_ the UzK theme.
%\useoutertheme{infolines}
%% If none of the options determining the footer are passed, %%
%% all faculty colours are used. ---------------------------------- %%
\usetheme[%
%wiso, %% Faculty of Management, Economics and Social Sciences
%jura, %% Faculty of Law
%medizin, %% Faculty of Medicine
%philo, %% Faculty of Arts and Humanities
%matnat, %% Faculty of Mathematics and Natural Sciences
%human, %% Faculty of Human Sciences
%verw, %% University administration
%nav, %% Enables the navigation symbols
latexfonts, %% Uses the standard latex-beamer font
%colorful, %% Coloured bars in the infolines theme
%squares, %% Rectangular bullet points
%nologo, %% No logo in the page background
]{UzK}
%\title{Beamer theme \texttt{UzK} for presentations in the corporate
% design of the University of Cologne}
\title%[Short Paper Title] % (optional, use only with long paper titles)
{Black Hole Thermodynamics}
\subtitle{And Hawking Radiation}
\author[Arenas, Schmitz, Wang] % (optional, use only with lots of authors)
{Sebastiàn Arenas \and Tim Schmitz \and Yi-Fan Wang 王一帆}
\date{Advanced Seminar on Relativity and Cosmology}
%\author[David Kusterer \and Bernd Weiß]%
%{David Kusterer\thanks{ \href{mailto:[email protected]}{[email protected]} }%
% \and%
% Bernd Weiß\thanks{\href{mailto:[email protected]}{[email protected]}}}
%\institute[Forschungsinstitut für Soziologie]{%
%Forschungsinstitut für Soziologie \\
%Greinstraße 2\\
%50939 Köln}
%\institute[Uni Köln]{Universität zu Köln}
\usepackage[citestyle=alphabetic,doi=false,isbn=false,url=false]%
{biblatex}
\addbibresource{black-holes.bib}
\usepackage{tikz}
\usetikzlibrary{decorations.pathmorphing}
\usetikzlibrary{calc}
\usepackage{cleveref}
\usepackage{braket}
% Math symbols and user-defined extensions
% some unicode characters
% ≙ for equal with hat
% Mathematical constants
\newcommand{\ii}{{\Bbbi}}
\newcommand{\ee}{{\Bbbe}}
\newcommand{\pp}{{\Bbbpi}}
% Bracket-like
\newcommand{\rbr}[1]{{\left(#1\right)}}
\newcommand{\sbr}[1]{{\left[#1\right]}}
\newcommand{\cbr}[1]{{\left\{#1\right\}}}
\newcommand{\abr}[1]{{\left<#1\right>}}
\newcommand{\vbr}[1]{{\left|#1\right|}}
\newcommand{\fat}[2]{{\left.#1\right|_{#2}}}
% Functions; note the space between the name and the bracket!
\newcommand{\rfun}[2]{{#1}\mathopen{}\left(#2\right)\mathclose{}}
\newcommand{\sfun}[2]{{#1}\mathopen{}\left[#2\right]\mathclose{}}
\newcommand{\cfun}[2]{{#1}\mathopen{}\left\{#2\right\}\mathclose{}}
\newcommand{\afun}[2]{{#1}\mathopen{}\left<#2\right>\mathclose{}}
\newcommand{\vfun}[2]{{#1}\mathopen{}\left|#2\right|\mathclose{}}
% Differentials
\newcommand{\Dif}{\BbbD}
\newcommand{\Diff}{\,\BbbD}
\newcommand{\dif}{\Bbbd}
\newcommand{\diff}{\,\Bbbd}
\newcommand{\dva}{\mupdelta} % no better way?!
\newcommand{\dvar}{\,\mupdelta}
% Fraction-like
\newcommand{\frde}[2]{{\frac{\dif{#1}}{\dif{#2}}}}
\newcommand{\frDe}[2]{{\frac{\Dif{#1}}{\Dif{#2}}}}
\newcommand{\frpa}[2]{{\frac{\partial{#1}}{\partial{#2}}}}
% Equal marks
\newcommand{\eeq}{{\overset{!}{=}}}
\newcommand{\lls}{{\overset{!}{<}}}
\newcommand{\ggt}{{\overset{!}{>}}}
\newcommand{\lle}{{\overset{!}{\le}}}
\newcommand{\gge}{{\overset{!}{\ge}}}
% overline-like marks
\newcommand{\ol}[1]{{\overline{{#1}}}}
\newcommand{\ul}[1]{{\underline{{#1}}}}
\newcommand{\tld}[1]{{\widetilde{{#1}}}}
\newcommand{\ora}[1]{{\overrightarrow{#1}}}
\newcommand{\ola}[1]{{\overleftarrow{#1}}}
\newcommand{\td}[1]{{\widetilde{#1}}}
\newcommand{\what}[1]{{\widehat{#1}}}
%\newcommand{\prm}{{\symbol{"2032}}}
% Math operators
% Why does \DeclareMathOperator not work?
\DeclareMathOperator{\sgn}{sgn}
\DeclareMathOperator{\grad}{grad}
\DeclareMathOperator{\curl}{curl}
\DeclareMathOperator{\rot}{rot}
\DeclareMathOperator{\opdiv}{div}
\DeclareMathOperator{\opdeg}{deg}
\DeclareMathOperator{\sech}{sech}
\DeclareMathOperator{\csch}{csch}
\DeclareMathOperator{\diag}{diag}
\DeclareMathOperator{\tr}{tr}
\DeclareMathOperator{\ad}{ad}
\DeclareMathOperator{\expi}{expi}
% Group and Algebras
\newcommand{\SO}{\mathsf{SO}\,}
\newcommand{\SU}{\mathsf{SU}\,}
\newcommand{\so}{\mathfrak{so}\,}
\newcommand{\su}{\mathfrak{su}\,}
\begin{document}
\begin{frame}%[titlepage]
\titlepage
\end{frame}
\begin{frame}{Outline}
% \tableofcontents
% You might wish to add the option [pausesections]
\begin{enumerate}
\item {\makebox[4cm][l]{Introduction} Tim}
\item {Laws of black hole mechanics and thermodynamics}
\begin{enumerate}
\setcounter{enumii}{-1}
\item {\makebox[3cm][l]{The \alert{second} law} Tim}
\item {\makebox[3cm][l]{The \alert{zeroth} law} Sebastiàn}
\item {\makebox[3cm][l]{The \alert{first} law} Sebastiàn}
\item {\makebox[3cm][l]{The third law} Sebastiàn}
\end{enumerate}
\item {\makebox[4cm][l]{Hawking radiation} Yi-Fan}
\end{enumerate}
\end{frame}
\begin{frame}{Outline}
\tableofcontents
% You might wish to add the option [pausesections]
\end{frame}
\section{Motivations}
\begin{frame}{Motivations for the Radiation}%
{Developments in classical physics}
\begin{itemize}
\item (Speculative) entropy and temperature of black holes
\cite{Bekenstein1973,Bekenstein1980}
\item Penrose process and super-radiance
\end{itemize}
\end{frame}
\begin{frame}{Motivations for the Radiation}%
{Developments in quantum physics}
\begin{itemize}
\item Arguments from fundamental principles
\begin{itemize}
\item Uncertainty principle (black holes ``might split up''
\cite{susskind2008black})
\item Vacuum polarisation (e.g.\ Schwinger Effect, where electron-positron
pairs are pulled out from vacuum in strong external electric field), see
e.g.\ \cite{Frolov1998}
\end{itemize}
\item Arguments from field theory in curved space-time:
definition of particle ambiguous \cite{Fulling1973}, leading to particle
creation
\begin{itemize}
\item in dynamical universes, see e.g.\ \cite{parker2009quantum}
\item for accelerated observers \cite{Fulling1973,Davies1975,Unruh1976}
\end{itemize}
\end{itemize}
\end{frame}
\section{Particle creation in curved space-time}
\begin{frame}{Scalar Field in Special Relativity}{Warm-up}
\begin{itemize}
\item Diagonalising Hamiltonian: \emph{Fourier} kernel \& `\emph{momentum}' $p$
\[
\rfun{\tilde{\phi}}{p} \propto \int \dif x\,\ee^{-\ii p x} \rfun{\phi}{x},
\quad
\rfun{\tilde{\pi}}{p} \propto \int \dif x\,\ee^{+\ii p x} \rfun{\pi}{x}.
\]
\item Trsfing field between inertial obs.\ $A$ and $B$: Poincaré $\mscrP$
%\begin{alignat*}{3}
\begin{align*}
x &\to \mscrP x = \Lambda x + x_0,\quad p \to \mscrP p = p+p_0;
\\
\rfun{\tilde{\phi}_A}{p} &\to \rfun{\tilde{\phi}_A}{\mscrP^{-1}p}
\propto \rfun{\tilde{\phi}_B}{p'},\quad\text{etc.}
%\qquad &\rfun{\tilde{\pi}}{p} &\to \rfun{\tilde{\pi}}{\mscrP^{-1}p}.
\end{align*}
%\end{alignat*}
\item Canonical quantisation: ladder operators
\[ \rfun{\what{a}_\phi}{p} \propto
E_p^{+\frac{1}{2}}\rfun{\what{\tilde{\phi}}}{p}
+ \ii E_p^{-\frac{1}{2}}\rfun{\what{\tilde{\pi}}^\dagger}{p},
\quad \text{etc.}\]
\item Trsfing ladder optrs btw.\ inertial obs.: particle \alert{remains}
p.\
\[
\rfun{\what{b}}{p'}
= \what{U}_\mscrP\rfun{\what{a}}{p}\what{U}^\dagger_\mscrP
\propto \rfun{\what{a}}{\mscrP p},\quad\text{etc.}
\]
\end{itemize}
\end{frame}
\begin{frame}{Scalar Field in General Relativity}{The technical advance
we just mentioned}
\begin{itemize}
\item Diagonalising Hamiltonian: \emph{general} kernel \& \emph{parameter} $k$
\[
\rfun{\tilde{\phi}}{k} \propto \int \dif x\,\rfun{K}{k;x} \rfun{\phi}{x},
\quad
\rfun{\tilde{\pi}}{k} \propto \int \dif x\,\rfun{K^*}{k;x} \rfun{\pi}{x}
\]
\item Trsfing field between (time-like) obs.: general diffeom.\ $\mscrD$
%\begin{alignat*}{3}
\begin{align*}
x &\to \chi = \rfun{f}{x},\quad k \to \kappa = \rfun{g}{k};
\\
\rfun{\tilde{\phi}_A}{k} &\to \rfun{\tilde{\phi}_A}{\rfun{g^{-1}}{\kappa}}
=\int \dif \kappa\,\rfun{T}{k;\kappa}\rfun{\tilde{\phi}_B}{\kappa},
\quad \text{etc.}
%&\rfun{\tilde{\pi}}{k} &\to \rfun{\tilde{\pi}}{\rfun{g^{-1}}{\kappa}}.
\end{align*}
%\end{alignat*}
\item Canonical quantisation: ladder operators
\[ \rfun{\what{a}_\phi}{k} \propto
E_k^{+\frac{1}{2}}\rfun{\what{\tilde{\phi}}}{k}
\pm \ii E_k^{-\frac{1}{2}}\rfun{\what{\tilde{\pi}}^\dagger}{k},
\quad \text{etc.}\]
\item Transforming ladder optrs: \alert{Bogolyubov}, \alert{mixing} p.\ \& anti-p.\
\[
\rfun{\what{b}}{\kappa}
= \what{U}_\mscrD \rfun{\what{a}}{k}\what{U}^\dagger_\mscrD
\propto \int \dif k\,\rfun{\alpha}{\kappa;k} \rfun{\what{a}}{k}
+ \rfun{\beta}{\kappa;k} \rfun{\what{a}^\dagger}{k},
\quad\text{etc.}
\]
\end{itemize}
\end{frame}
\begin{frame}{Scalar Field in Special and General Relativity}{Particle
`creation'}
\begin{itemize}
\item Defining the $\what{a}$-vacuum as being annihilated by all $\hat{a}$'s
\[\rfun{\what{a}}{p\text{ or }k}\Ket{\Omega} \equiv 0\]
\item Number operators transform as
\begin{align*}
\rfun{\what{a}^\dagger}{p} \rfun{\what{a}}{p}
&\to \rfun{\what{b}^\dagger}{p'} \rfun{\what{b}}{p'}
&\phantom{}&\rfun{\what{a}^\dagger}{k} \rfun{\what{a}}{k}
\to \rfun{\what{b}^\dagger}{\kappa} \rfun{\what{b}}{\kappa}\\
&\propto \rfun{\what{a}^\dagger}{\mscrP p} \rfun{\what{a}}{\mscrP p};
&=& \ldots \what{a}^\dagger \what{a} + \ldots \what{a} \what{a}
+ \ldots \what{a}^\dagger \what{a}^\dagger \\
&&+& \int \dif k\, \vbr{\rfun{\beta}{\kappa;k}}^2
\rfun{\what{a}}{k} \rfun{\what{a}^\dagger}{k}.
\end{align*}
\item $\abr{\what{n}_a}_\Omega$'s transform as
\begin{align*}
\abr{\rfun{\what{a}^\dagger}{p} \rfun{\what{a}}{p}}
&\to
\abr{\rfun{\what{b}^\dagger}{p'} \rfun{\what{b}}{p'}}
&
\abr{\rfun{\what{a}^\dagger}{k} \rfun{\what{a}}{k}}
&\to
\abr{\rfun{\what{b}^\dagger}{\kappa} \rfun{\what{b}}{\kappa}}
\\
&= 0; &&= \int \dif k\, \vbr{\rfun{\beta}{\kappa;k}}^2.
\end{align*}
\end{itemize}
\end{frame}
\section{Gravitational collapse}
\begin{frame}{Spherical gravitational collapse}%
{The model and its quantisation}
\begin{columns}
\begin{column}{.61\textwidth}
\begin{itemize}
\item Massless neutral scalar field on the given background space-time w/
in- and out-going eigen-modes
\begin{align*}
f_{\omega'lm} &\propto r^{-1}F_{\omega'}\ee^{\ii\omega' v}
\rfun{Y_{lm}}{\theta,\phi}, \\
p_{\omega lm} &\propto r^{-1}P_{\omega}\ee^{\ii\omega u}
\rfun{Y_{lm}}{\theta,\phi}
\end{align*}
\item Quantising at Cauchy surfaces
\begin{enumerate}
\item Early time: $\what{a}$'s defined on $\mscrI^-$
\item Late time: $\what{b}$'s on $\mscrI^+$, $\what{c}$'s on $\mscrH^+$
\end{enumerate}
\item \emph{Physical} vacuum: $\what{a}\Ket{H} \equiv 0$
\item Concerned with $\Braket{H | \what{n}_b | H}$
\end{itemize}
\end{column}
\begin{column}[c]{.38\textwidth}
\begin{tikzpicture}%[scale=2]
\pgfmathsetmacro\myunit{3}
\pgfmathsetmacro\sc{1.414213562}
\draw [dashed] (0,0)
-- ++(-90: \sc * \myunit)
node [below] {$i^-$}
coordinate (a);
\draw (a)
-- ++(45: 1.5 * \myunit)
node [right] {$i^0$}
node [pos = .5, below right] {$\mscrI^-$}
-- ++(135: 0.5 * \myunit)
node [above] {$i^+$}
node [pos = .5, above right] {$\mscrI^+$}
coordinate (b)
-- ++(-135: \myunit)
node [pos = .5, above left] {$\mscrH^+$};
\draw [decorate, decoration=zigzag] (b)
-- (0,0)
node [pos = .5, above] {bh sing.};
\draw [densely dotted, out = 67.5, in = -80, thick, ->] (a)
to (\sc/4* \myunit, 0);
\end{tikzpicture}
\end{column}
\end{columns}
\end{frame}
\begin{frame}{Spherical gravitational collapse}{Hawking's construction
\cite{HAWKING1974,Hawking1975}}
\begin{columns}
\begin{column}{.61\textwidth}
\begin{itemize}
\item Consider $p_\omega$ on $\mscrI^+$ propagating backwardly
to $\mscrI^-$, ending up with
\begin{itemize}
\item The same freq.\ $p_\omega^{(1)}$: \alert{only} gives a
$\rfun{\delta}{\omega-\omega'}$ term in $\rfun{\alpha}{\omega;\omega'}$
\item The rest $p_\omega^{(2)}$: contributing to $\beta$ as well;
\alert{of interest}
\end{itemize}
\item On $\mscrI^-$, approximately $p_\omega^{(2)} \propto$
\[
\begin{cases}
0, & v > v_0; \\
r^{-1}P_\omega \rbr{\frac{v_0 - v}{CD}}^{-\ii\frac{\omega}{\kappa_\text{S}}},
& v \lesssim v_0
\end{cases}
\]
where $v_0$ is the latest time a null geodesic could leave $\mscrI^-$,
$C$ and $D$ const.\
% \begin{enumerate}
% \item
% \item $\beta$ comes mainly from the high-freq.\ modes (b/c of grav.\
% blue-shift) with $v_0-v \gtrsim 0$
% \end{enumerate}
\end{itemize}
\end{column}
\begin{column}{.38\textwidth}
\begin{tikzpicture}%[scale=2]
\pgfmathsetmacro\myunit{3}
\pgfmathsetmacro\sc{1.414213562}
\draw [dashed] (0,0)
-- ++(-90: \sc * \myunit)
node [below] {$i^-$}
coordinate (a);
\draw (a)
-- ++(45: 1.5 * \myunit)
node [right] {$i^0$}
node [pos = .3333333333, below right] {$v_0$}
node [pos = .1666666666, below right] {$\mscrI^-_\text{e}$}
node [pos = .6666666666, below right] {$\mscrI^-_\text{l}$}
-- ++(135: .5 * \myunit)
node [above] {$i^+$}
node [pos = .5, above right] {$\mscrI^+$}
coordinate (b)
-- ++(-135: \myunit)
node [pos = .5, above left] {$\mscrH^+$}
coordinate (B);
\draw [decorate, decoration=zigzag] (b)
-- (0,0)
node [pos = .5, above] {bh sing.};
\draw [densely dotted, out = 67.5, in = -80, thick, ->] (a)
to (\sc/4* \myunit, 0);
\draw [red, dashed, thick, <-] (B) --++(-45: .5*\myunit);
\draw [red, dashed, thick, ->] (B) --++(+45: \myunit);
\end{tikzpicture}
\end{column}
\end{columns}
\end{frame}
\begin{frame}{Results and Interpretation}{Hawking temperature}
\begin{itemize}
\item Expectation value of particle number
\begin{equation}
\abr{\rfun{\what{n}_b}{\omega}}_H
= \int\dif\omega'\,\vbr{\rfun{\beta}{\omega;\omega'}}^2
\approx \Gamma_\omega\rbr{\ee^\frac{2\pp\omega}{\kappa_\text{S}}-1}^{-1}
\label{eq:exp-number}
\end{equation}
\item Comparing \cref{eq:exp-number} with Bose--Einstein dist.\ (\emph{black}
body) $\abr{\rfun{\what{n}}{\omega}}_\text{BE} = \rbr{\ee^\frac{\omega}{T} -
1}^{-1}$, one may conclude that \alert<2>{the physical system concerned} is a
\emph{grey} body, with a temperature of
\[
T_\text{H} = \frac{\kappa_\text{S}}{2\pp}
\approx \rbr{\frac{M_\astrosun}{M}} \cdot \SI{6.169e-8}{\kelvin}.
\label{eq:Hawking-temp}
\]
\item<2> $\what{\rho}_H = \Ket{H}\Bra{H}$ \alert{pure},
while the thermal $\what{\rho}_\text{BE} = \frac{1}{Z} \ee^{-\what{H}/T}$
\alert{mixed}?
\end{itemize}
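\medskip
As an illustrative cross-check (with $G$ and $k_\text{B}$ denoting Newton's and
Boltzmann's constants), the same temperature written in SI units is
\[
T_\text{H} = \frac{\hbar c^3}{8 \pp G k_\text{B} M} ,
\]
which reproduces the numerical value quoted above for $M = M_\astrosun$.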
\end{frame}
\section{Eternal black hole}
\begin{frame}{Eternal Schwarzschild black hole}{
\only<1>{The conformal diagram}
\only<2,4->{\cite{Candelas1980,Frolov1998}}
\only<3>{The conformal diagram, with cut}}
\only<1>{
\begin{center}
\begin{tikzpicture}%[scale=2]
\pgfmathsetmacro\myunit{3}
\draw (0,0)
% node [left] {$i^0$}
--++(45:\myunit)
% node [above]{$i^+$}
coordinate (a)
node [below = 1.8 cm] {$P$}
% node [pos = .5, above left] {$\mscrI^+$}
% node [pos=.5, below, sloped] {$\bar{u}=\infty$}
--++(-45:2*\myunit)
% node [pos = .25, above right] {$\mscrH^+$}
node [pos = .75, above right] {$\mscrH^-$} %, sloped]{$r = \rSch$}%x_+ \to -\infty$}
coordinate (d)
node [below]{$i^-$}
--++(45:\myunit)
node [pos = .5, below right] {$\mscrI^-$} % , sloped]{$x_- \to -\infty$}
node [right] {$i^0$}
--++(135:\myunit)
node [pos = .5, above right] {$\mscrI^+$} % , sloped]{$x_+ \to +\infty$}
node [above] {$i^+$}
node [below = 1.8 cm] {$U$}
coordinate (b)
--++(-135:2*\myunit)
coordinate (c)
% node [pos = .25, below, sloped] {$r = \rSch$}%x_- \to +\infty$}
% node [pos = .75, below right] {$\mscrH^-$}
node [pos = .25, below right] {$\mscrH^+$}
% node [below] {$i^-$}
--cycle
% node [pos = .5, below left] {$\mscrI^-$}
;
\draw [decorate, decoration=zigzag]
(a) --
node [above] {bh singularity} % =6pt
node [below = .75 cm] {$B$}
(b)
(c) --
node [below] {wh singularity}
node [above = .75 cm] {$W$}
(d);
\end{tikzpicture}
\end{center}}
\only<2,4->{
\begin{itemize}
\item<4-> Comparing the space-time with that of the collapsing body:
\alert<4>{cut} along the dots and \alert<4>{stick} the right to an
interior solution
\item Clean calculation; $\abr{\what{\phi}^2}_\text{ren}$,
$\abr{\what{T}_{\mu\nu}}_\text{ren}$ etc.\ obtainable
\item Different `vacua' can be defined
\begin{itemize}
\item \alert<2>{Boulware}: No flux, $\abr{\what{\phi}^2}_\text{ren}$
blows up at $\mscrH^-\cup\mscrH^+$
\begin{itemize}
\item<2> Miracle near the horizons
\end{itemize}
\item \alert<2>{Israel--Hartle--Hawking}: I/O flux,
$\abr{\what{\phi}^2}_\text{ren}$ finite at $\mscrH^-\cup\mscrH^+$
\begin{itemize}
\item<2> Black hole in a heat bath
\item<2> \alert<2>{Mathematically} more desirable
\end{itemize}
\item \alert<2,4>{Unruh}: O flux, $\abr{\what{\phi}^2}_\text{ren}$
blows up at $\mscrH^-$
\begin{itemize}
\item<4> $\mscrH^-$ and the divergence therein can be hidden
\item<4> Similar to the (physical) collapsing case
\end{itemize}
\end{itemize}
\end{itemize}}
\only<3>{
\begin{center}
\begin{tikzpicture}%[scale=2]
\pgfmathsetmacro\myunit{3}
\draw (0,0)
% node [left] {$i^0$}
--++(45:\myunit)
% node [above]{$i^+$}
coordinate (a)
node [below = 1.8 cm] {$P$}
% node [pos = .5, above left] {$\mscrI^+$}
% node [pos=.5, below, sloped] {$\bar{u}=\infty$}
--++(-45:2*\myunit)
% node [pos = .25, above right] {$\mscrH^+$}
node [pos = .75, above right] {$\mscrH^-$} %, sloped]{$r = \rSch$}%x_+ \to -\infty$}
coordinate (d)
node [below]{$i^-$}
--++(45:\myunit)
node [pos = .5, below right] {$\mscrI^-$} % , sloped]{$x_- \to -\infty$}
node [right] {$i^0$}
--++(135:\myunit)
node [pos = .5, above right] {$\mscrI^+$} % , sloped]{$x_+ \to +\infty$}
node [above] {$i^+$}
node [below = 1.8 cm] {$U$}
coordinate (b)
--++(-135:2*\myunit)
coordinate (c)
% node [pos = .25, below, sloped] {$r = \rSch$}%x_- \to +\infty$}
% node [pos = .75, below right] {$\mscrH^-$}
node [pos = .25, below right] {$\mscrH^+$}
% node [below] {$i^-$}
--cycle
% node [pos = .5, below left] {$\mscrI^-$}
;
\draw [decorate, decoration=zigzag]
(a) --
node [above] {bh singularity} % =6pt
node [below = .75 cm] {$B$}
(b)
(c) --
node [below] {wh singularity}
node [above = .75 cm] {$W$}
(d);
\draw [thick, densely dotted, out = 112.5, in = -100] (d)
to ($(a)!0.75!(b)$);
\end{tikzpicture}
\end{center}}
\end{frame}
%\item Wave-functional formalisms
% \begin{itemize}
% \item Painlevé and Lemaître coordinates
% \item Same physics as in ...
% \end{itemize}
%\item Canonical quantum gravitation
%\begin{frame}{Canonical quantum gravitation}{In toy gravitation models}
%\cite{Callan1992}
%\begin{itemize}
% \item QFT in CST recovered in Born-Oppenheimer expansion \cite{Demers1996}
%\end{itemize}
%\end{frame}
\section{Information loss problem}
\begin{frame}{Information loss problem}{See e.g.\
\cite{Mathur2009,Mann2015}}%{Grey-body radiation from black holes}
\begin{columns}
\begin{column}{.61\textwidth}
\begin{itemize}
\item Minkowski: particles from $\mscrI^-$ to $\mscrI^+$
\item<2-> Collapsing body: can also go to $\mscrH^+$
\item<2-> Obs.\ able to set up \alert<2>{input} at $\mscrI^-$ and
collect \alert<2>{output} at $\mscrI^+$, but \alert<2>{not} at $\mscrH^+$
\item<3-> $\Ket{\alpha}$ on $\mscrI^-$ evolves to $\Ket{\beta}$ on $\mscrH^+
\cup \mscrI^+$
\item<3-> \alert<3>{Pure} $\what{\rho}_\text{I} = \Ket{\alpha}\Bra{\alpha}$
turns \alert<3>{mixed} $\what{\rho}_\text{O} = \mathrm{tr}_{\mscrH^+}
\Ket{\beta}\Bra{\beta}$! Something is lost
\item<4-> Non-conservation arguments \cite{Hawking1976}
\item<4-> Conservation arguments \cite{Page1993}
\end{itemize}
\end{column}
\begin{column}[c]{.38\textwidth}
\only<1>{
\begin{tikzpicture}%[scale=2]
\pgfmathsetmacro\myunit{3}
\draw (0,0)
node [above] {$i^+$}
--++(-45:\myunit)
node [right] {$i^0$}
node [pos = .5, above right] {$\mscrI^+$}
--++(-135:\myunit)
coordinate (b)
node [below] {$i^-$}
node [pos = .5, below right] {$\mscrI^-$};
\draw [dashed] (b) -- (0,0);
\end{tikzpicture}}
\only<2->{
\begin{tikzpicture}%[scale=2]
\pgfmathsetmacro\myunit{3}
\pgfmathsetmacro\sc{1.414213562}
\draw [dashed] (0,0)
-- ++(-90: \sc * \myunit)
node [below] {$i^-$}
coordinate (a);
\draw (a)
-- ++(45: 1.5 * \myunit)
node [right] {$i^0$}
node [pos = .5, below right] {$\mscrI^-$}
-- ++(135: 0.5 * \myunit)
node [above] {$i^+$}
node [pos = .5, above right] {$\mscrI^+$}
coordinate (b)
-- ++(-135: \myunit)
node [pos = .5, above left] {$\mscrH^+$};
\draw [decorate, decoration=zigzag] (b)
-- (0,0)
node [pos = .5, above] {bh sing.};
\draw [densely dotted, out = 67.5, in = -80, thick, ->] (a)
to (\sc/4* \myunit, 0);
\end{tikzpicture}}
\end{column}
\end{columns}
\end{frame}
%\begin{frame}{Interpretations}%{Grey-body radiation from black holes}
%\begin{itemize}
%\item Thermal-like radiation
% \begin{itemize}
% \item 123
% \end{itemize}
%\item Entropies and temperatures
%\item Progressive evaporation
%\item Final fate
%\item Information
% \begin{itemize}
% \item Non-conservation \cite{Hawking1976}
% \item Conservation \cite{Page1993}
% \end{itemize}
%\item Violation of unitarity
%\end{itemize}
%\end{frame}
%\section{Questions}
%\begin{frame}{Questions}
%\begin{itemize}
%\item Field, not gravitation itself
%\item Thermality of the radiation
%\end{itemize}
%\end{frame}
\section*{Summary}
\begin{frame}{Summary}
% Keep the summary *very short*.
\begin{itemize}
\item
\alert{Robust} calculation of outgoing particle flux for the space-time
of collapsing body
\item
\alert{Controversial} interpretations and extrapolations
%\item
%Perhaps a \alert{third message}, but not more than that.
\end{itemize}
\vskip0pt plus.5fill
\begin{itemize}
\item Back-reaction to the metric: no time
\end{itemize}
% The following outlook is optional.
\vskip0pt plus.5fill
\begin{itemize}
\item
Seeks combination with
\begin{itemize}
\item Quantum gravitation
\item Quantum information
%\item Super-string theory
\end{itemize}
\end{itemize}
\end{frame}
% All of the following is optional and typically not needed.
\appendix
%\section<presentation>*{\appendixname}
%\subsection<presentation>*{For Further Reading}
\begin{frame}[allowframebreaks]
\frametitle<presentation>{References}
%\printbibliography
% \begin{thebibliography}{10}
\beamertemplatebookbibitems
% Start with overview books.
\printbibliography[type=book]
% \bibitem{Author1990}
% A.~Author.
% \newblock {\em Handbook of Everything}.
% \newblock Some Press, 1990.
\beamertemplatearticlebibitems
% Followed by interesting articles. Keep the list short.
\printbibliography[nottype=book]
% \bibitem{Someone2000}
% S.~Someone.
% \newblock On this and that.
% \newblock {\em Journal of This and That}, 2(1):50--100,
% 2000.
% \end{thebibliography}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.6486304823,
"avg_line_length": 30.5964010283,
"ext": "tex",
"hexsha": "f2d8a70556ec661de5b357d7aec3011c06a47159",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f44b3dbe8db64e4464f27c92af967bc610fb49de",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "cmp0xff/Presentations",
"max_forks_repo_path": "20170110/UzK-Example.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f44b3dbe8db64e4464f27c92af967bc610fb49de",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "cmp0xff/Presentations",
"max_issues_repo_path": "20170110/UzK-Example.tex",
"max_line_length": 97,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f44b3dbe8db64e4464f27c92af967bc610fb49de",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "cmp0xff/Presentations",
"max_stars_repo_path": "20170110/UzK-Example.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 9184,
"size": 23804
} |
\documentclass{beamer}
\usepackage[utf8]{inputenc}
\usepackage{utopia} %font utopia imported
\usepackage{verbatim}
\usetheme{Madrid}
\usecolortheme{default}
\usepackage{graphicx}
%------------------------------------------------------------
%This block of code defines the information to appear in the
%Title page
\title[Wrangell] %optional
{Wrangell}
\subtitle{A DSL for Data-Wrangelling}
\author[Dana Iltis, Kenan Nalbant, Donald Pinckney] % (optional)
{Dana Iltis \and Kenan Nalbant \and Donald Pinckney}
%End of title page configuration block
%------------------------------------------------------------
%------------------------------------------------------------
%The next block of commands puts the table of contents at the
%beginning of each section and highlights the current section:
\begin{comment}
\AtBeginSection[]
{
\begin{frame}
\frametitle{Table of Contents}
\tableofcontents[currentsection]
\end{frame}
}
\end{comment}
%------------------------------------------------------------
\begin{document}
%The next statement creates the title page.
\frame{\titlepage}
\begin{comment}
%---------------------------------------------------------
%This block of code is for the table of contents after
%the title page
\begin{frame}
\frametitle{Table of Contents}
\tableofcontents
\end{frame}
%---------------------------------------------------------
\end{comment}
%---------------------------------------------------------
%Changing visibility of the text
\begin{frame}
\frametitle{Data Wrangling: a time-consuming and tedious job}
Why do we need a DSL for data-wrangling?
\begin{itemize}
\item<1-> Wide variety of file formats
\item<1-> Difficult to accomplish for users with minimal programming experience
\item<1-> Often requires manual labor to create ad-hoc solutions
%\item<2-> Text visible on slide 2
%\item<3> Text visible on slides 3
%\item<4-> Text visible on slide 4
\end{itemize}
\end{frame}
%---------------------------------------------------------
\section{Second section}
\begin{frame}
\frametitle{A Motivating Example}
Perhaps we want to remove sensitive information from a database
\includegraphics[scale=0.35]{screen}
\end{frame}
\section{Third Section}
\begin{frame}
\frametitle{Solution: Create a DSL}
Why create \textit{another} DSL?
\begin{itemize}
\item<1-> We want ease of usage when it comes to this particular problem domain while still allowing for a lot of flexibility
\item<1-> Make commonplace tasks easier to accomplish and eliminate the tedious boilerplate that writing an ad-hoc solution in another language would require
%\item<2-> Text visible on slide 2
%\item<3> Text visible on slides 3
%\item<4-> Text visible on slide 4
\end{itemize}
\end{frame}
\section{Fourth Section}
\begin{frame}
\frametitle{Introducing Wrangell}
Wrangell is a DSL which eliminates much of the boilerplate code that typically arises when data-munging.
\begin{itemize}
\item<1> Based on a familiar Lisp-like syntax which, while simple, is very expressive.
\item<1> Very extensible: arbitrary new file formats can be supported for input and output as long as functions which map to and from Wrangell's internal data representation are provided.
\item<1> We have a working interpreter implemented in Haskell
\end{itemize}
\end{frame}
\section{Fifth Section}
\begin{frame}
\frametitle{Wrangell}
\begin{itemize}
    \item<1> Has a dynamic type system, as with Lisp, but a much stricter one, with fewer implicit type coercions
    \item<1> Additionally, Wrangell supports polymorphic expressions: if the functions used in an expression can accept many different types, so can the compound expression.
\end{itemize}
\end{frame}
\section{Sixth Section}
\begin{frame}
\frametitle{Wrangell}
\begin{itemize}
\item<1> Wrangell also requires columns of the data set to have types.
\item<1> All column transformations are type checked to guarantee type-soundness.
\end{itemize}
\end{frame}
\section{Seventh Section}
\begin{frame}
\frametitle{Why Haskell}
\begin{itemize}
    \item<1> Because Haskell is a functional language, we thought that data-wrangling, with its emphasis on data transformations, would be well suited to it, and that the project would be an instructive experience in building large-scale functional programs
\end{itemize}
\end{frame}
\section{Eighth Section}
\begin{frame}
\frametitle{Future Work}
\begin{itemize}
    \item<1> Currently the only supported input and output file formats are character-delimited (e.g.\ CSV or TSV), so work can be done to support more file formats
    \item<1> The current implementation of many of the internal data transformations has a degree of inefficiency that can absolutely be remedied
\end{itemize}
\end{frame}
\section{Last Section}
\begin{frame}
\frametitle{Conclusion}
\begin{itemize}
\item<1> Wrangell is a novel language which provides many primitives to easily deal with data-preprocessing tasks
\item<1> Wrangell additionally provides facilities to easily extend to new file types.
    \item<1> The type system on both Wrangell expressions and internal data tables allows for both expressiveness and safety
\end{itemize}
\end{frame}
\end{document}
"alphanum_fraction": 0.6927033721,
"avg_line_length": 30.6959064327,
"ext": "tex",
"hexsha": "45d186678c1073720295ca57ce4bb68ffdcf7b8b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8391d7cc175a4166770a487d6fcc9499c1e4e512",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "knalbant/wrangell",
"max_forks_repo_path": "Presentation/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8391d7cc175a4166770a487d6fcc9499c1e4e512",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "knalbant/wrangell",
"max_issues_repo_path": "Presentation/main.tex",
"max_line_length": 191,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "8391d7cc175a4166770a487d6fcc9499c1e4e512",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "knalbant/wrangell",
"max_stars_repo_path": "Presentation/main.tex",
"max_stars_repo_stars_event_max_datetime": "2017-03-29T22:48:15.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-03-23T17:11:47.000Z",
"num_tokens": 1289,
"size": 5249
} |
% !TEX TS-program = pdflatex
% !TEX encoding = UTF-8 Unicode
% !TEX spellcheck = es_CL
% !TEX root = ../Plantilla_PELN.tex
\chapter*{Nomenclature}
\phantomsection
\addcontentsline{toc}{chapter}{Nomenclature}
%\subsection*{Matrices}
%\begin{longtable}[l]{lcl}%{p{50pt} p{50pt} p{200pt}}
%
%\end{longtable}
\section{Conclusion}\label{sec:conclusion}
% In the conclusion say that it was fun, you think the result is good even if
% the application was not convincing and put a great citation about it in it.
% Say that for someone new to the graphics programming it was difficult
% Say that 16 weeks ago you knew not much about the pipeline
Although the result of our implementation did not produce convincing realistic
water, it confirmed our intuitions. The Gerstner water model has to be combined
with another one or completely discarded. Water rendering is a complex process
involving many physical properties, each having to be approximated as discussed
in \autoref{sec:water_models}. Every one of them has to be carefully selected in
order to avoid a major slowdown during rendering. Building an application from
scratch is not recommended, as too many concepts, such as the scene graph, need
to be programmed.
Since eighteen weeks ago we knew very little about the rendering pipeline and
practically nothing about its programming, we are confident that we can obtain
good results in our further work. We will try to implement a hybrid approach
which handles shallow and deep water in \textit{0 A.D.}, using an enhanced
projected grid like the one from \autocite{kryachko2016sea}. For the shallow
water we plan to use one precomputed Gerstner wave adapted to the curvatures of
the beaches and stored as a displacement map. The fast Fourier transform from
\autocite{tessendorf2001simulating} is rather appealing to use for deep water.
This will be the main focus of our further work. If there is time left, we will
add the other elements discussed below.
We think that blending the foam in and out based on the water surface height is
amply sufficient. Interactions between objects and the water surface will also
have to be placed in a displacement texture and computed with an adapted form of
the \textit{Wave Particles} model.
\input{../header_function}
%---------- start document ---------- %
\section{arith1 - miscellaneous arithmetic functions}\linkedzero{arith1}
\subsection{floorsqrt -- floor of square root}\linkedone{arith1}{floorsqrt}
\func{floorsqrt}{\hiki{a}{integer/\linkingone{rational}{Rational}}}{\out{integer}}\\
\spacing
% document of basi document
\quad Return the floor of square root of \param{a}.\\
%\spacing
% input, output document
%\quad Input number \param{a} must be integer or \linkingone{rational}{Rational}.\\
%
\subsection{floorpowerroot -- floor of some power root}\linkedone{arith1}{floorpowerroot}
\func{floorpowerroot}{\hiki{n}{integer},\ \hiki{k}{integer}}{\out{integer}}\\
\spacing
% document of basi document
\quad Return the floor of \param{k}-th power root of \param{n}.\\
%\spacing
% input, output document
%\quad Input numbers \param{n}, \param{k} must be integer.\\
%
\subsection{legendre - Legendre(Jacobi) Symbol}\linkedone{arith1}{legendre}
\func{legendre}{\hiki{a}{integer},\ \hiki{m}{integer}}{\out{integer}}\\
\spacing
% document of basi document
\quad Return the Legendre symbol or Jacobi symbol $\DS \Bigl(\frac{\param{a}}{\param{m}}\Bigr)$.\\
%\spacing
% input, output document
%\quad Input numbers \param{a}, \param{m} must be integer.\\
%
\subsection{modsqrt -- square root of $a$ for modulo $p$}\linkedone{arith1}{modsqrt}
\func{modsqrt}{\hiki{a}{integer}, \, \hiki{p}{integer}}{\out{integer}}\\
\spacing
% document of basi document
\quad Return one of the square roots of \param{a} modulo \param{p} if square roots exist; otherwise raise ValueError.\\
\spacing
% add document
%\spacing
% input, output ducument
\quad \param{p} must be a prime number.\\
%
\subsection{expand -- $m$-adic expansion}\linkedone{arith1}{expand}
\func{expand}{\hiki{n}{integer}, \, \hiki{m}{integer}}{\out{list}}\\
\spacing
% document of basi document
\quad Return the \param{m}-adic expansion of \param{n}.\\
\spacing
% input, output document
\quad \param{n} must be nonnegative integer. \param{m} must be greater than or equal to $2$. The output is a list of expansion coefficients in ascending order.\\
%
\subsection{inverse -- inverse}\linkedone{arith1}{inverse}
\func{inverse}{\hiki{x}{integer}, \, \hiki{n}{integer}}{\out{integer}}\\
\spacing
% document of basi document
\quad Return the inverse of \param{x} for modulo \param{n}.\\
\spacing
% input, output document
\quad \param{n} must be coprime to \param{x}.\\
%
\subsection{CRT -- Chinese Remainder Theorem}\linkedone{arith1}{CRT}
\func{CRT}{\hiki{nlist}{list}}{\out{integer}}\\
\spacing
% document of basi document
\quad Return the uniquely determined integer satisfying all modulus
conditions given by \param{nlist}.\\
\spacing
% input, output document
\quad Input list \param{nlist} must be a list of lists, each consisting of two elements:
the first element is the remainder and the second is the divisor.
Both must be integers.\\
%
\subsection{AGM -- Arithmetic Geometric Mean}\linkedone{arith1}{AGM}
\func{AGM}{\hiki{a}{integer},\ \hiki{b}{integer}}{\out{float}}\\
\spacing
% document of basi document
\quad Return the Arithmetic-Geometric Mean of \param{a} and \param{b}.\\
%\spacing
% input, output document
%\quad Input number \param{a}, \param{b} must be integer.\\
%
%\subsection{\_BhaskaraBrouncker}\linkedone{arith1}{\_BhaskaraBrouncker}
%\func{\_BhaskaraBrouncker}{\hiki{n}{integer}}{\out{integer}}\\
%\spacing
% document of basi document
%\quad Return the minimum tuple \param{p}, \param{q} such that, $\param{p}^2
%- \param{n} \param{q}^2 = \pm 1$.\\
%\spacing
% input, output document
%\quad Input number \param{n} must be positive integer.
%
\subsection{vp -- $p$-adic valuation}\linkedone{arith1}{vp}
\func{vp}{\hiki{n}{integer},\ \hiki{p}{integer}, \hikiopt{k}{integer}{0}}{\out{tuple}}\\
\spacing
% document of basi document
\quad Return the \param{p}-adic valuation and the remaining part of \param{n}.\\
\spacing
% added document
\quad \negok If $k$ is given, return the valuation and the other part for $\param{n}p^\param{k}$.\\
% input, output document
%\quad Input number \param{n}, \param{p} must be int, long or \linkingone{rational}{Integer}.
%
\subsection{issquare - Is it square?}\linkedone{arith1}{issquare}
\func{issquare}{\hiki{n}{integer}}{\out{integer}}\\
\spacing
% document of basi document
\quad Check if \param{n} is a square number and return square root
of \param{n} if \param{n} is a square.
Otherwise, return \(0\).\\
%\spacing
% input, output document
%\quad Input number \param{n} must be int, long or \linkingone{rational}{Integer}.
%
\subsection{log -- integer part of logarithm}\linkedone{arith1}{log}
\func{log}{\hiki{n}{integer},\ \hikiopt{base}{integer}{2}}{\out{integer}}\\
\spacing
% document of basi document
\quad Return the integer part of logarithm of \param{n} to the \param{base}.\\
%\spacing
% input, output document
%\quad Input number \param{n}, \param{base} must be int, long or \linkingone{rational}{Integer}.
%
\subsection{product -- product of some numbers}\linkedone{arith1}{product}
\func{product}{\hiki{iterable}{list},\ \hikiopt{init}{object}{None}}{\out{\hiki{prod}{object}}}\\
\spacing
% document of basic document
\quad Return the product of all elements in \param{iterable}. \\
\spacing
% added document
\quad If \param{init} is given, the multiplication starts with \param{init} instead of the first element in \param{iterable}.\\
\spacing
% input, output document
\quad Input list \param{iterable} must be list of mathematical objects which support multiplication.\\
The type of output \param{prod} is determined by the types of elements of \param{iterable} and \param{init}.\\
If the \param{iterable} is empty, then \param{init} (if given) or \(1\) (otherwise) will be returned.\\
%
\begin{ex}
>>> arith1.AGM(10, 15)
12.373402181181522
>>> arith1.CRT([[2, 5], [3, 7]])
17
>>> arith1.CRT([[2, 5], [3, 7], [5, 11]])
192
>>> arith1.expand(194, 5)
[4, 3, 2, 1]
>>> arith1.vp(54, 3)
(3, 2)
>>> arith1.product([1.5, 2, 2.5])
7.5
>>> arith1.product([3, 4], 2)
24
>>> arith1.product([])
1
\end{ex}
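\spacing
% additional examples (illustrative only)
\quad The following additional examples show values implied by the definitions
above; they are illustrative only and have not been verified against the library.
\begin{ex}
>>> arith1.floorsqrt(10)
3
>>> arith1.floorpowerroot(100, 3)
4
>>> arith1.legendre(2, 7)
1
>>> arith1.issquare(49)
7
>>> arith1.log(100, 10)
2
\end{ex}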
%---------- end document ---------- %
\input{../footer}
%!TEX root = ../dissertation.tex
\chapter{Appendix chapter}
\label{appendix:appendix_chapter}
Your new appendix chapter's content goes here
\section{Observational Design Study}
\label{sec:abs_study}
In this observational design study, we wanted to observe how groups of people with varying levels of expertise in art would work together on a drawing task, and see how collaboration would be affected by usage of abstraction strategies. While the experts we observed used abstract composition sketches to create the ``undersketch'' of their eventual drawing, we anticipated that most of our participants would not know how to do this.
\subsection{Method}
\subsubsection{Participants}
Six groups of two to four people ($n=$ 19, 11 female, average age $=$ 28 years) participated in a collaborative drawing task. Participants were recruited from a technology company and local art school through an email advertisement and signed up for scheduled time slots. Though group members were not required to know each other, at least two people from each group had an existing working relationship. The average self-reported drawing experience level was 1.95 ($SD=$ 0.78) out of 4, with 1 being no experience and 4 being expert level. We randomly assigned groups to two conditions: three to \textit{abstraction} and three to \textit{freeform}. Each condition comprised one group of size two, one group of three, and one group of four to explore how communication potentially changes as function of group size. While most groups were co-located, one group in each condition consisted of distributed participants using an online shared canvas (Google Jamboard) and video conferencing (BlueJeans). Observing these distributed groups gave insight on the impact of abstraction on co-located versus remote collaboration. Participants received a \$25 USD gift card for their time. All sessions were video and audio recorded with the experimenter taking note of critical moments during each session.
\subsubsection{Procedure}
For \textit{abstraction} groups, we first presented a short tutorial of how to use abstraction blocks (Figure \ref{fig:blocks}). Our tutorial explained that participants could use the blocks to represent where a sketch element would be in a drawing. They were free to use text, sketches, or the blocks as they pleased. Groups each made a collective drawing in response to the prompt, ``A scene with a time-traveling tourist.'' We chose a fictional storytelling prompt and drawing task because it does not require any domain knowledge and enables collaborative ideation and interaction \cite{Davis2017}. The task was 20 minutes long and broken into three segments to control for time spent in each part of the process across conditions: five minutes for brainstorming, five minutes for planning and drafting, and ten minutes for drawing. Groups could use paper, pens, pencils, and colored pencils throughout the task. We asked each group to deliver their drawing on one sheet of paper by the end of the study session.
During brainstorming, participants discussed ideas without worrying about drawing yet. Before the planning phase, we gave a short explanation of basic composition concepts and instructed groups to focus on drawing composition during planning. \textit{Abstraction} groups were given standard size and small sticky notes to use and presented with the blocking strategy at this time. Google Jamboard's sticky note feature and pen tools simulated using physical sticky notes and regular analog sketching in the remote sessions. \textit{Freeform} groups were able to plan their drawing in any manner. Following planning, groups collaboratively worked on their drawing for the remainder of the task. Directly after the drawing task, the experimenter replayed recorded critical moments on a computer screen for groups (and through the sharescreen feature for digital groups), asked them to reflect on these clips, and reviewed any additional moments mentioned during the discussion.
\subsection{Results}
To find emergent themes of how groups interacted with drawing artifacts and each other, we noted interesting behaviors and critical moments in each of the sessions, looking for commonalities across groups and between conditions. We found that groups that used abstraction blocks engaged in more conceptual discussion and worked together in a more flexible manner.
\subsubsection{Abstraction Enabled Integrated Collaboration}
Compared to freeform sketching, abstraction blocks seemed to help groups form a strong shared understanding of each others' ideas. To represent drawing elements, \textit{abstraction} groups placed sticky notes on their draft paper to create a blueprint of their composition. For example, Group 2 (\textit{abstraction}) put three sticky notes together to represent a focal point (a train) of their drawing while using the smaller sticky notes to represent smaller elements (individual people). These sticky notes were labeled with a brief written description or a small sketch showing what the block was supposed to represent.
\begin{figure*}
\centering
\vspace{-0.2in}
\includegraphics[width=\textwidth]{abstraction/figures/abs.jpg}
% \includegraphics[width=\textwidth]{abstraction/figures/freeform.jpg}
\vspace{-0.3in}
\caption{\textit{Abstraction} groups developed composition plans from the abstract blocks that formed the basis of their final drawings.}
~\label{fig:abs_drawings}
\vspace{-0.2in}
\end{figure*}
\begin{figure*}
\centering
\vspace{-0.2in}
\includegraphics[width=\textwidth]{abstraction/figures/freeform.jpg}
\vspace{-0.3in}
\caption{\textit{Freeform} groups tended to sketch individual details before deciding on a final composition.}
~\label{fig:freeform_drawings}
\vspace{-0.2in}
\end{figure*}
Similar to the thumbnail sketches created by the experts we observed, abstraction blocks helped anchor discussion by providing shared representations of what members of the group were thinking about: \textit{``I think once someone gave us a visual view of what [they were] thinking, I feel like we all were like ‘Yeah! Perfect'.''} (P5, \textit{abstraction}). P6 (\textit{abstraction}) similarly said in the post-interview that \textit{``everyone has different ways of placing things on paper so once we laid out that this goes here, I think it was helpful for all of us to just visualize the layout of all the things we were talking about.''}
In contrast, Group 1 was the only \textit{freeform} group to make a lightly sketched blueprint of their drawing due to one participant's prior domain knowledge in photography and compositions.
In other \textit{freeform} groups, participants sketched their own individual drawing elements on separate sheets of paper or divided up their shared drawing sheet into areas ``owned'' by each member of the group (Figure \ref{fig:freeform_drawings}). For example, Group 4 members each sketched their own version of the Eiffel Tower on their scratch paper.
For \textit{freeform} groups, planning seemed to be oriented around making decisions about specific details of the drawing rather than exploring options for the drawing's overall composition.
These differences in how groups approached planning their drawing also affected how they allotted drawing work among individuals. \textit{Abstraction} groups tended to use the blocks they had created as a way to define who should draw what. For example, Group 2 (\textit{abstraction}) assigned each person to draw one of the block elements on their composition plan. They physically checked off the blocks on their plan as they were drawing to mark which ones they completed:
\begin{quote}
P3: \textit{``I can take `colonial' [people].''}\\
P5: \textit{``We'll take the `today' people on this side.''}\\
--- Group 2 (\textit{abstraction})
\end{quote}
\textit{Freeform} groups instead focused on smaller, individual elements and the details of those elements. For example, Group 1 (\textit{freeform}) split up their work by individual characters: \textit{``So do you want to start drafting the person and play around with outfits, and I'll do the dinosaurs?''} (P1, \textit{freeform}). However, because \textit{freeform} participants had individually planned portions of the drawing separately from others in their group, they encountered conflicts in their understanding of the drawing's composition as sketching progressed.
One \textit{freeform} participant in Group 5 asked another group member to redraw and switch the orientation of another element during drawing: \textit{``Can you draw the portal on the right because I drew the mammoth from right to left, and this will be very complicated for me to draw it from left to right.''} (P16 \textit{freeform}). There was also sometimes confusion about what each person should work on during drawing. In the post-interview, P13 (\textit{freeform}) said, \textit{``I didn't quite know what I was doing, everyone seem to have their own parts...and I was like 'what should I do?'''}. P14 (\textit{freeform}) also mentioned that he wished his group had done more concrete outlining beforehand so \textit{``people can work more independently instead of having to wait for another part to be done.''}
\subsubsection{Abstraction Groups Focused on High-Level Decisions}
Groups that used abstract blocks seemed to discuss higher-level concepts like general theme and layouts throughout the drawing process. For example, when talking about how to depict a person looking at a time travel app on his phone, Group 6 (\textit{abstraction}) included light sketches on their blocks (Figure \ref{fig:abs_drawings}) to capture and discuss concepts like point of view, background focus, and scale during planning:
\begin{quote}
P19: \textit{``Maybe we're still in his point of view, like he's looking at his phone.''}\\
P17: \textit{``The scale can be small because we probably don't want to get too detailed, but the idea is just to convey that it's a travel app.''}\\
P18: \textit{``And maybe we can have a thing where it's like zoomed in to show this is what he's holding.''}\\
--- Group 6 (\textit{abstraction})
\end{quote}
Group 3 (\textit{abstraction}), the digital group, spent their planning time setting high-level goals for their drawing. They made blocks with questions for what they wanted to achieve in their drawing rather than making individual sketches: \textit{``Do you wanna have the main question on the top, and then like three vignettes below?''} (P8, \textit{abstraction})
\begin{figure}[b!]
\centering
\vspace{-0.25in}
\includegraphics[width=.9\columnwidth]{abstraction/figures/planning.png}
\caption{(a) \textit{Freeform} groups discussed details early without making concrete sketches until their final drawing. (b) \textit{Abstraction} groups instead used abstract blocks to concretely discuss and sketch a drawing composition.}~\label{fig:tangible}
\vspace{-0.2in}
\end{figure}
\textit{Freeform} groups instead had a tendency to discuss lower-level details during planning, corroborating findings from prior work \cite{jansson1991design, Little2010, Yu2016}. In Group 5 (\textit{freeform}), participants discussed incremental details for how specific elements should look. For example, when discussing how to show a time traveller coming out of a time travel portal:
\begin{quote}
P15: \textit{``We could have different futures in the portal, like maybe one where the world is flooded.''}\\
P14: \textit{``Looks like [P15] has gone in pretty hard on [drawing] the portal...so [P13] should draw the traveler.''}\\
P15: \textit{``And then [P14] will add lasers to it.''}\\
P14: \textit{``Yeah like laser accessories.''}\\
--- Group 5 (\textit{freeform})
\end{quote}
Discussions would often chain together incrementally, adding more details to existing ideas rather than offering new directions to explore. For example, Group 4 (\textit{freeform}) similarly attended to details and discussed the specifics of what their subjects would be doing in their drawing despite being in the planning stage of the study session:
\begin{quote}
P12: \textit{``What are they doing? Talking?''}\\
P9: \textit{``Eating?''}\\
P11: \textit{``Yeah, wine. Eating.''}\\
P12: \textit{``Coffee?''}\\
P11: \textit{``No, wine.''}\\
P9: \textit{``Cheese?''}\\
P11: \textit{``Yup, cheese. Macarons? Done.''}\\
--- G4 (\textit{freeform})
\end{quote}
In contrast to the expert group we observed, where improvisational discussion of details occurred during the final stages of drawing, \textit{freeform} groups discussed these details during the early stages of ideation despite being encouraged to plan. For \textit{abstraction} groups, structuring abstraction around blocks oriented discussion around the overall composition of their drawing rather than the specific details of how the drawing should eventually look.
\subsubsection{Abstraction Enabled Flexible Workflows}
Abstraction blocks enabled \textit{abstraction} groups to easily explore alternative layouts and ideas.
\textit{Abstraction} groups described drawings as iterative and malleable, using their planning process to ask new ``what if'' scenarios. For example, while discussing how to draw a train station for time traveling, Group 6 (\textit{abstraction}) moved blocks to reflect their changing concept and even incorporated the blocks in their final drawing so they could continue to iterate and adapt (Figure \ref{fig:abs_drawings}):
\begin{quote}
P17: \textit{``What if it's like multiple doors to multiple time periods?''}\\
P19: \textit{``Oh and like this [sticky note] can be like the screen that shows all the departures.''}\\
P18: \textit{``What if it was like this?''} (Takes a sticky note and moves it to another part of the planning canvas)\\
--- Group 6 (\textit{abstraction})
\end{quote}
Abstraction blocks were useful for planning, but also for making changes during drawing. For instance, during drawing, Group 6 originally drew a screen in the middle of their page, but because they drew the screen on a block, they moved it further to the right side of the page. When asked their reasoning behind this change, P19 (\textit{abstraction}) said, \textit{``There was a moment [when] we realized that we didn't have enough space so that was a moment where we switched things around''}. Abstraction blocks helped the group easily adapt their drawing plan even in the later stages of drawing.
P17 (\textit{abstraction}) said the blocks felt impermanent: \textit{``When you draw something, it's permanent, but you can move [blocks] around, so you're not really committed.''}. P7 (\textit{abstraction}) also said direct manipulation of blocks helped in getting started: \textit{``Being able to take a sticky note and drag it to where you want it to go, it's so much easier to get started.''} (Figure \ref{fig:tangible}). Some participants thought the abstract blocks resembled other low-fidelity prototyping methods in helping groups view the drawing as piecemeal rather than a fixed whole. In particular, tangible pieces provided structure in building drawing plans: \textit{``It's about being tangible versus intangible so when we see [a block], we grasp onto the idea of that being something''} P18 (\textit{abstraction}). The flexibility in creating malleable chunks framed the drawing as impermanent, helping groups easily explore different ideas and concepts even during the final drawing phase.
P7 (\textit{abstraction}) also said the abstraction blocks helped in getting started on the drawing: \textit{``it helped in [overcoming] that sort of blank page...and you're so daunted that you don't know how to get started...Then when you have a plan you can just get going on the drawing. There wasn't much discussion about what to do in the final stage because it had already been planned out.''} (P7, \textit{abstraction}).
In contrast, \textit{freeform} groups worked linearly and were often more hesitant to commit to and revise ideas.
Sketching, even at a draft stage, was viewed as representing a permanent decision. In the post-interview, P12 (\textit{freeform}) mentioned expressing many ideas verbally, but being hesitant to start their drawing: \textit{``Nothing was concrete on the paper. So it was like we're just spitballing, but once you actually put ink to paper then it's like oh now we need to actually do it.''} Group 4 (\textit{freeform}) iterated on specific drawing elements individually (Figure \ref{fig:freeform_drawings}) because sketches on a shared sheet felt more permanent and committal. They aimed to form verbal consensus rather than exploring ideas the way \textit{abstract} groups did (Figure \ref{fig:tangible}). Here they discuss drawing a picture showing the time traveller posing with a historical figure:
\begin{quote}
P9: \textit{``Do we want to fill the whole page or do we want to do a frame?''}\\
P11: \textit{``Ooh let's frame it.''}\\
P9: \textit{``So we agreed on a picture, yes?''}\\
P10: \textit{``A picture within a picture? So is this like a postcard?''}\\
P9: \textit{``So someone is like holding the postcard-''}\\
P12: \textit{``Hands. Yes. I like that.''}\\
--- Group 4 (\textit{freeform})
\end{quote}
Despite the fact that sketching is often useful as an abstraction tool \cite{Buxton2007,Tversky2009}, \textit{freeform} participants in our study still viewed sketching as costly and permanent, akin to final drawing. By providing \textit{abstraction} participants with a way to move their work around, abstract blocks instead made sketching together a more flexible and less daunting process.
\chapter{Coordinators}\label{sec:coordinators}
Coordinators are responsible for upholding the values of the Collective as defined in Article III of the Constitution. They are elected twice a year as per Bylaw~\ref{sec:elections}, and each position is open at only one of those two elections.
If they fail to uphold the Collective's values, a warning may be issued by a majority vote of the coordinators. Three warnings cause the coordinator to be removed from their position.
A coordinator may voluntarily vacate their position by notifying the other coordinators.
Vacant positions may be temporarily filled by any member through a majority vote of the current coordinators. Temporary coordinators abide by the same rules and guidelines as other coordinators.
There are no limits on the number of terms a coordinator may serve.
What follows is a complete list of coordinator positions for the Collective. The election in which the position is open is indicated as ``Summer'' or ``Winter''.
\section[Steward]{Steward (Summer)}\label{sec:steward}
Responsible for long term vision, Collective community, and managing coordinators.
\section[Operations]{Operations (Winter)}\label{sec:operations}
Responsible for internal affairs, event management, and assisting coordinators.
\section[Treasurer]{Treasurer (Summer)}\label{sec:treasurer}
Responsible for budgeting, accounting, taxes.
\section[Fundraising]{Fundraising (Winter)}\label{sec:fundraising}
Responsible for pursuing grants, donations, and partners.
\section[Librarian]{Librarian (Summer)}\label{sec:librarian}
Responsible for organizing Collective information, skills, and resources.
\section[Historian]{Historian (Winter)}\label{sec:historian}
Responsible for running general meeting, taking attendance, and recording meeting notes.
\section[Diversity and Accessibility]{Diversity and Accessibility (Summer)}\label{sec:accessability}
Responsible for working to improve access and responding to any related issues, internal or external.
\section[Publicist]{Publicist (Winter)}\label{sec:publicist}
Responsible for public communication, managing the contact list, and ensuring transparency.
\chapter{Further work}
\label{ch:further}
\section{UBX limitations}
The UBX framework allows blocks to be written with a well-defined separation of concerns, but does not actually \emph{require} it: a coder could, for example, make a c-block store data in its private memory space, or make an i-block process the data with a computational behavior.\\
Another limitation of UBX is that it still does not support block composition: it should be possible to define sub-nodes to be used as regular blocks.\\
A new library named AB5C is at a very early stage of development and should replace UBX and address these problems.
\section{Missing models}
There is still a lack of modeling in this work, partly because of the lack of tools to do so. More techniques are under development, especially for data; the goal is to have (someday) development frameworks able to auto-generate the code from models. Although there already are tool-chains able to declare (not define!) functions from models, the road to tools able to implement components from mathematical models is still long. For now, the more feasible goal is to have a centralized repository of data models (e.g. positions, rotations, etc.) to conform to, and at least a data-model transformation tool able to generate ``adapter'' components that trivially connect components not originally developed to be connected.\\
\section{Open-source contributions}
In my personal opinion, a large user base is essential to reach ambitious goals, and the open-source software development model has proved to work once a ``critical mass'' of developers is reached; as an example, take the ROS project: an environment very similar to UBX that is succeeding thanks to wide adoption in the robotics community.\\
Moreover, universities represent a great source of workforce, especially for the complex job of modeling, and should take the initiative in order to encourage wider (industrial, scientific, recreational, commercial) adoption.
\chapter{Results}
\label{appendix:results}
\input{avg_fitness}
\input{avg_size}
\input{avg_time}
\input{std_fitness}
\input{std_size}
\input{std_time}
\clearpage
\input{perfects_found}
\section{Mapping DNNs to systolic array}
Figure \ref{fig:convolution} depicts an example mapping of a 2D filter over a 2D feature map onto the proposed systolic array. The weights are unrolled and fed to each column, where they are kept stationary in the PEs. The inputs are streamed across the rows. The output partial sums are streamed along the columns and are accumulated in the accumulator buffer.
There are three key points that need to be considered:
\begin{itemize}
	\item The addresses to be fetched for weights and input feature maps are provided via the ISA, but the partial sums need to be stored and accumulated across channels. The addressing of these partial sums needs to be handled internally, as it is not exposed to the front-end or the control processor.
	\item The global buffer needs to support a bandwidth equal to the number of rows (nRows) in the systolic array, since there are nRows inputs to the systolic array every cycle. There are two ways to handle this problem: (1) perform im2col outside the systolic array, have as many global-buffer banks as the number of rows, and load the buffers with replicated values; or (2) have a replication unit that exploits the fact that many values across rows are the same, so a value can be fetched once and replicated across rows. However, in this approach, edge cases need to be taken care of. The latter approach is akin to having im2col logic inside the accelerator subsystem, and it is the approach we have taken in our design, although extensive verification needs to be performed to make sure that all edge cases are met. A software sketch of this mapping is given after this list.
	\item There is also a need to re-use the tensor-ALU for the partial-sum reduction, since the module is expected not to be busy while a convolution operation is being performed, as there are no other operations such as normalization or pooling to perform (unless we need to support fused operators; currently it is not clear how to support these).
\end{itemize}
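As a rough illustration (not part of the actual hardware design), the following Python sketch emulates the im2col-plus-weight-stationary mapping described above: filter weights are unrolled into stationary columns, input patches are streamed across the rows, and partial sums accumulate per column. The function names and shapes here are assumptions for illustration only.
\begin{verbatim}
# Illustrative software sketch only -- not the actual RTL.
import numpy as np

def im2col(fmap, kh, kw):
    """Unroll kh x kw patches of a 2D feature map into rows (stride 1)."""
    h, w = fmap.shape
    patches = [fmap[i:i + kh, j:j + kw].ravel()
               for i in range(h - kh + 1)
               for j in range(w - kw + 1)]
    return np.array(patches)                     # (num_patches, kh * kw)

def weight_stationary_conv(fmap, filters):
    """filters: (num_filters, kh, kw); one unrolled filter per PE column."""
    kh, kw = filters.shape[1:]
    rows = im2col(fmap, kh, kw)                  # inputs streamed across rows
    cols = filters.reshape(len(filters), -1).T   # stationary weights, one per column
    return rows @ cols                           # partial sums accumulate per column

out = weight_stationary_conv(np.arange(16.0).reshape(4, 4), np.ones((2, 3, 3)))
print(out.shape)                                 # (4, 2): 4 output positions x 2 filters
\end{verbatim}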
\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{images/convolution.png}
\caption{Example of mapping 2D filters and a 2D feature map on our systolic array supporting weight-stationary}
\label{fig:convolution}
\end{figure}
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
english,
man,floatsintext]{apa6}
\usepackage{amsmath,amssymb}
\usepackage{lmodern}
\usepackage{ifxetex,ifluatex}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={Comparing probabilistic accounts of probability judgments},
pdfauthor={Derek Powell1},
pdflang={en-EN},
pdfkeywords={probability judgments, Bayesian cognitive science, heuristics and biases},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
% Make \paragraph and \subparagraph free-standing
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
% Manuscript styling
\usepackage{upgreek}
\captionsetup{font=singlespacing,justification=justified}
% Table formatting
\usepackage{longtable}
\usepackage{lscape}
% \usepackage[counterclockwise]{rotating} % Landscape page setup for large tables
\usepackage{multirow} % Table styling
\usepackage{tabularx} % Control Column width
\usepackage[flushleft]{threeparttable} % Allows for three part tables with a specified notes section
\usepackage{threeparttablex} % Lets threeparttable work with longtable
% Create new environments so endfloat can handle them
% \newenvironment{ltable}
% {\begin{landscape}\centering\begin{threeparttable}}
% {\end{threeparttable}\end{landscape}}
\newenvironment{lltable}{\begin{landscape}\centering\begin{ThreePartTable}}{\end{ThreePartTable}\end{landscape}}
% Enables adjusting longtable caption width to table width
% Solution found at http://golatex.de/longtable-mit-caption-so-breit-wie-die-tabelle-t15767.html
\makeatletter
\newcommand\LastLTentrywidth{1em}
\newlength\longtablewidth
\setlength{\longtablewidth}{1in}
\newcommand{\getlongtablewidth}{\begingroup \ifcsname LT@\roman{LT@tables}\endcsname \global\longtablewidth=0pt \renewcommand{\LT@entry}[2]{\global\advance\longtablewidth by ##2\relax\gdef\LastLTentrywidth{##2}}\@nameuse{LT@\roman{LT@tables}} \fi \endgroup}
% \setlength{\parindent}{0.5in}
% \setlength{\parskip}{0pt plus 0pt minus 0pt}
% \usepackage{etoolbox}
\makeatletter
\patchcmd{\HyOrg@maketitle}
{\section{\normalfont\normalsize\abstractname}}
{\section*{\normalfont\normalsize\abstractname}}
{}{\typeout{Failed to patch abstract.}}
\patchcmd{\HyOrg@maketitle}
{\section{\protect\normalfont{\@title}}}
{\section*{\protect\normalfont{\@title}}}
{}{\typeout{Failed to patch title.}}
\makeatother
\shorttitle{Comparing probability judgment accounts}
\keywords{probability judgments, Bayesian cognitive science, heuristics and biases\newline\indent Word count: 4826}
\usepackage{csquotes}
\usepackage{tikz}
\usetikzlibrary{bayesnet}
\mathchardef\hyph="2D
\ifxetex
% Load polyglossia as late as possible: uses bidi with RTL langages (e.g. Hebrew, Arabic)
\usepackage{polyglossia}
\setmainlanguage[]{english}
\else
\usepackage[main=english]{babel}
% get rid of language-specific shorthands (see #6817):
\let\LanguageShortHands\languageshorthands
\def\languageshorthands#1{}
\fi
\ifluatex
\usepackage{selnolig} % disable illegal ligatures
\fi
\usepackage[style=apa,]{biblatex}
\addbibresource{references.bib}
\title{Comparing probabilistic accounts of probability judgments}
\author{Derek Powell\textsuperscript{1}}
\date{}
\authornote{
This manuscript has not yet been peer-reviewed. This study reports secondary data analyses and was not preregistered.
}
\affiliation{\vspace{0.5cm}\textsuperscript{1} Arizona State University, School of Social and Behavioral Sciences}
\abstract{
Bayesian theories of cognitive science hold that cognition is fundamentally probabilistic, but people's explicit probability judgments often violate the laws of probability. Two recent proposals, the ``Probability Theory plus Noise'' \autocite{costello.watts2014} and ``Bayesian Sampler'' \autocite{zhu.etal2020} theories of probability judgments, both seek to account for these biases while maintaining that mental credences are fundamentally probabilistic. These theories fit quite differently into the larger project of Bayesian cognitive science, but their many similarities complicate comparisons of their predictive accuracy. In particular, comparing the models demands a careful accounting of model complexity. Here, I cast these theories into a Bayesian data analysis framework that supports principled model comparison using information criteria. Comparing the fits of both models on data collected by Zhu and colleagues \autocite*{zhu.etal2020} I find the data are best explained by a modified version of the Bayesian Sampler model under which people may hold informative priors about probabilities.
}
\begin{document}
\maketitle
Bayesian theories of cognition offer a unified formal framework for cognitive science \autocite{tenenbaum.etal2011} that has had remarkable explanatory successes across domains, including in perception \autocite[e.g.][]{kersten.etal2004}, memory \autocite[e.g.][]{anderson1991}, language \autocite[e.g.][]{xu.tenenbaum2007}, and reasoning \autocite[e.g.][]{lu.etal2012}. At the heart of the Bayesian project is the idea that cognition is fundamentally probabilistic: that people reason according to subjective degrees of belief which follow the laws of probability and, in particular, that they are revised in light of evidence according to Bayes' Rule. It is somewhat embarrassing then, that these theories have often been accused of failing to describe human ``beliefs'' of the simple and everyday sort, such as beliefs like ``it will rain tomorrow'', ``vaccines are safe,'' or ``this politician is trustworthy'' \autocite{chater.etal2020}.
Trouble starts as soon as we attempt to measure beliefs. According to Bayesian theories of cognition and epistemology \autocite{jaynes2003}, the degree to which people believe in various propositions, or their credences, should reflect subjective mental probabilities. So asking people to express beliefs in terms of probability seems only natural.
Unfortunately, people's explicit probability judgments routinely violate the most basic axioms of probability theory. For example, human probability judgments often exhibit the ``conjunction fallacy'': people will often judge the conjunction of two events (e.g.~``Tom Brady likes football and miniature horses'') as being more probable than one of the events in isolation (e.g.~``Tom Brady likes miniature horses''), a plain and flagrant violation of probability theory \autocite{tversky.kahneman1983}. Other demonstrations of the incoherence of probability judgments include disjunction fallacies, subadditivity or ``unpacking'' effects \autocite{tversky.koehler1994}, and a variety of other findings illustrating the incoherence of human probability judgments \autocite[for an accessible review, see][]{kahneman2013}. Altogether these findings have led many researchers to abandon the notion that degrees of belief are represented as probabilities.
Recently however, two groups of researchers have proposed theories of human probability judgments that account for biases in these judgments while maintaining that mental credences are fundamentally probabilistic \autocite{costello.watts2014,zhu.etal2020}. Both of these theories build on the increasingly popular notion that a variety of human reasoning tasks are accomplished by a limited process of mental ``sampling'' from a probabilistic mental model \autocites[see also][]{chater.etal2020,dasgupta.etal2017}.
\hypertarget{two-probabilistic-theories-of-probability-judgment}{%
\subsection{Two probabilistic theories of probability judgment}\label{two-probabilistic-theories-of-probability-judgment}}
Costello and Watts \autocite*{costello.watts2014,costello.watts2016,costello.watts2018} have proposed a theory of probability judgment they call the ``Probability Theory plus Noise'' theory (PT+N). In the PT+N model, mental ``samples'' are drawn from a probabilistic mental model of events and are then ``read'' with noise, so that some positive examples will be read as negative and some negative examples read as positive with some probability \(d\). The results are probability judgments reflecting probabilistic credences perturbed by noise. In their model, the probability that a mental sample for an event \(A\) is read as \(A\) is the probability that the sample truly is \(A\), \(P(A)\), and that it is correctly read, \((1-d)\), plus the probability that the sample is not \(A\), \(1-P(A)\), and that it is incorrectly read, \(d\):
\begin{align*}
P(\text{read as A}) &= (1-d)P(A) + d(1-P(A)) \\
&= (1-2d)P(A) + d
\end{align*}
Thus under the simplest form of the PT+N model, the expected value of probability judgments is:
\[E[\hat{P}_{PT+N}(A)] = (1-2d)P(A) + d \]
By assumption, a maximum of 50\% of samples can be misread, so that \(d\) is a number in the range \([0, 1/2]\). The PT+N theory provides a unified account for a wide variety of biases in probability judgment that were previously attributed to different types of heuristics, as well as novel biases identified based on the model's predictions \autocite{costello.watts2014,costello.watts2016,costello.watts2017,costello.watts2018}.
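To make the reading-noise mechanism concrete, here is a minimal simulation sketch (illustrative only, not the authors' code; the function and variable names are my own) that draws noisy-read mental samples and checks that the average judgment matches the closed-form expectation \((1-2d)P(A) + d\).
\begin{verbatim}
import numpy as np

def ptn_judgment(p_true, d, n_samples, rng):
    """Simulate one PT+N judgment for an event with true probability p_true."""
    samples = rng.random(n_samples) < p_true       # mental samples: True if event A
    flip = rng.random(n_samples) < d               # each sample misread w.p. d
    read_as_a = np.where(flip, ~samples, samples)  # misread samples are inverted
    return read_as_a.mean()                        # proportion of samples read as A

rng = np.random.default_rng(0)
p_true, d = 0.2, 0.1
judgments = [ptn_judgment(p_true, d, 10, rng) for _ in range(50_000)]
print(np.mean(judgments))   # approx. (1 - 2*d)*p_true + d = 0.26
\end{verbatim}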
Meanwhile, Zhu, Sanborn, \& Chater \autocite*{zhu.etal2020} have proposed a Bayesian model of probability judgment they call the ``Bayesian Sampler.'' Under this model, probability judgment is itself seen as a process of Bayesian inference. To judge the probability of an event, a limited number of samples are again drawn from a mental model of the event. Then, those ``observed'' samples are integrated with a prior over probabilities to produce a probability judgment. This prior takes the form of a symmetric Beta distribution, \(Beta(\beta, \beta)\). After observing \(S(A)\) successes and \(N - S(A)\) failures, the posterior over probabilities is distributed \(Beta(\beta + S(A), \beta + N - S(A))\). Zhu and colleagues \autocite*{zhu.etal2020} assume that people report the mean of their posterior probability estimates. For any Beta distribution \(x \sim Beta(a,b)\), \(E[x] = \frac{a}{a+b}\). So, the resulting probability estimate is a linear function of \(S(A)\), \(N\), and \(\beta\):
\[\hat{P}_{BS}(A) = \frac{S(A)}{N+2\beta} + \frac{\beta}{N+2\beta}\]
The expected value of the estimate can then be written in terms of the expected number of successes, or \(P(A) \cdot N\). Under the simplest version of the Bayesian Sampler model, this gives the following formula:
\[E[\hat{P}_{BS}(A)] = \frac{N}{N+2\beta}P(A) + \frac{\beta}{N+2\beta}\]
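As a further illustration (again a sketch of my own, not the original model code), the Bayesian Sampler estimate is just the posterior mean of a Beta--Binomial update, and its expectation over sampling takes the linear form above.
\begin{verbatim}
def bayesian_sampler_judgment(successes, n, beta):
    """Posterior mean of Beta(beta + successes, beta + n - successes)."""
    return (successes + beta) / (n + 2 * beta)

def expected_bs_judgment(p_true, n, beta):
    """Expected judgment when successes ~ Binomial(n, p_true)."""
    return (n / (n + 2 * beta)) * p_true + beta / (n + 2 * beta)

print(bayesian_sampler_judgment(successes=2, n=2, beta=1))  # 0.75, not 1.0
print(expected_bs_judgment(p_true=0.2, n=10, beta=1))       # 0.25
\end{verbatim}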
Like the PT+N model, the Bayesian Sampler model accounts for a wide array of biases in probability judgments, including the novel biases identified by Costello and Watts \autocite{costello.watts2014,costello.watts2016}. In fact, important equivalencies can be drawn between the two models. Zhu and colleagues \autocite*{zhu.etal2020} show that the \(N\) and \(\beta\) parameters of their model can be related to the \(d\) parameter of the PT+N model via the following bridging formula:
\[d = \frac{\beta}{N+2\beta}\]
Thus, the effect of a Bayesian prior is nearly identical to the effect of noise in the PT+N model. But, rather than merely perturbing people's probability judgments, this prior can be seen as regularizing these judgments away from extreme values. Zhu and colleagues \autocite*{zhu.etal2020} argue that such regularization can be adaptive in cases where only a small number of mental samples can be drawn. For instance, consider someone estimating the probability that they can swim across a lake, outrun an animal, or win a hand of poker: if a mental simulation of these events produces two samples indicating success, one might conclude these are all certain victories and thereby be too willing to assume risk. A regularizing prior pushes these estimates away from extremes, thereby promoting better decision-making when mental samples are sparse. However, this hedging comes at the cost of systematic incoherence and biases.
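To make the bridging relation concrete, the following small check (illustrative only, not the supplemental code) confirms that the Bayesian Sampler's expected judgment coincides with the PT+N form once \(d\) is set to \(\beta/(N + 2\beta)\).
\begin{verbatim}
def expected_ptn(p_true, d):
    return (1 - 2 * d) * p_true + d

def expected_bs(p_true, n, beta):
    return (n / (n + 2 * beta)) * p_true + beta / (n + 2 * beta)

n, beta = 10, 1.0
d_implied = beta / (n + 2 * beta)   # the bridging condition
for p in (0.1, 0.5, 0.9):
    assert abs(expected_bs(p, n, beta) - expected_ptn(p, d_implied)) < 1e-12
\end{verbatim}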
\hypertarget{two-accounts-of-conditional-probability-judgments}{%
\subsubsection{Two accounts of conditional probability judgments}\label{two-accounts-of-conditional-probability-judgments}}
By explaining the incoherence of human probability judgments using coherent mental probabilities, both models have the potential to rescue the larger project of Bayesian cognitive science as applied to everyday beliefs \autocite{chater.etal2020}. However, the two models diverge substantially in their treatment of conditional probability judgments. Bayesian cognitive theories are fundamentally theories of inductive reasoning: Bayes' rule describes how existing beliefs should be updated conditional on the observation of different kinds of evidence. So, treatment of the conditioning of beliefs is at the heart of these theories.
According to the Bayesian sampler model, conditioning is something that happens in the mental model of the events, not as part of the process of rendering probability judgments. By not assigning any special status to conditional probability judgments, the Bayesian Sampler theory fits neatly into the larger Bayesian project of cognitive science: probability judgments are simply another judgment process applied to the outputs of other (ideally Bayesian) mental models \autocite{chater.etal2020}.
In contrast, the PT+N model presents a constructive account of conditional probability judgments that is fundamentally non-Bayesian \autocite{costello.watts2016}. According to the PT+N model, conditional probabilities \(P(A|B)\) are estimated by a two-stage sampling procedure: first both events \(A\) and \(B\) are sampled with noise, and then a second noisy process computes the ratio of the events read as \(A\) and \(B\) over events read as \(B\). The PT+N model predicts conditional probability estimates using the following equation:
\[P_e(A|B) = \frac{(1-2d)^2P(A \land B) + d(1-2d)\big(P(A)+P(B)\big)+d^2}{(1-2d)P(B)+d}\]
This non-Bayesian account of conditional probability judgments separates the PT+N theory quite fundamentally from the Bayesian Sampler and the larger project of Bayesian cognitive science.
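A small helper implementing this equation (a sketch of my own, not the original code) makes its behavior easy to probe: with \(d = 0\) it reduces to the ordinary ratio \(P(A \land B)/P(B)\), while for \(d > 0\) it departs from Bayesian conditioning on the underlying probabilities.
\begin{verbatim}
def ptn_conditional(p_a, p_b, p_ab, d):
    """Expected PT+N estimate of P(A|B) under the two-stage noisy account."""
    numer = (1 - 2*d)**2 * p_ab + d*(1 - 2*d)*(p_a + p_b) + d**2
    denom = (1 - 2*d) * p_b + d
    return numer / denom

print(ptn_conditional(p_a=0.4, p_b=0.5, p_ab=0.2, d=0.0))  # 0.4 = P(A and B)/P(B)
print(ptn_conditional(p_a=0.4, p_b=0.5, p_ab=0.2, d=0.1))  # 0.42, pulled toward 0.5
\end{verbatim}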
\hypertarget{comparing-the-models}{%
\subsection{Comparing the models}\label{comparing-the-models}}
Zhu, Sanborn, and Chater \autocite*{zhu.etal2020} compared their Bayesian Sampler model against Costello \& Watts' (2014; 2016; 2017; 2018) PT+N model as explanations for human probability judgments in two experiments. Unfortunately, their results were somewhat equivocal. They fit both ``simple'' and ``complex'' versions of each model, where the complex versions of these models introduce additional parameters \(d'\) and \(N'\) that allow for different patterns of judgments for conjunctive and disjunctive judgments as compared with simple probability judgments. These additional parameters are crucial to both models' explanations of conjunction and disjunction fallacies---key findings in the probability judgment literature \autocite{costello.watts2017,zhu.etal2020}. They compared the fits of these models to data via Bayesian Information Criterion (BIC) scores. Table 1 below presents the total BIC scores computed for each model as originally fit, using the authors' original code and saved model outputs \autocite[Supplementary materials]{zhu.etal2020}.
\begin{table}
\caption{\label{tab:table1}Original model fitting results with best-fitting model in bold face.}
\centering
\begin{tabular}[t]{lrr}
\toprule
Model & Exp. 1 & Exp. 2\\
\midrule
\textbf{Bayesian Sampler simple} & \textbf{956.9} & \textbf{-5371.9}\\
Bayesian Sampler complex & 1174.4 & -5099.3\\
PT+N simple & 1257.5 & -4901.1\\
PT+N complex & 1039.7 & -5159.6\\
Bayesian Sampler avg. & 1065.6 & -5235.6\\
\addlinespace
PT+N avg. & 1148.6 & -5030.4\\
\bottomrule
\end{tabular}
\end{table}
In both experiments, the simple version of the Bayesian Sampler scores best, but the complex version of the PT+N model comes in second (lower BIC scores are better). It is not obvious what conclusions should be drawn from these results. Of course, the simple Bayesian Sampler model appears to win by the numbers. But conjunction and disjunction fallacies are extremely robust empirical findings \autocite{mellers.etal2001,sides.etal2002} and are clearly present in the data collected by Zhu and colleagues \autocite*{zhu.etal2020}. So we might justifiably rule out the simple variants of the models on the grounds that they will fail to capture important qualitative features of the data, which could instead favor the PT+N theory. For their part, Zhu and colleagues \autocite*{zhu.etal2020} chose to average the simple and complex model scores together. This approach somewhat favors the Bayesian Sampler theory overall, though they are cautious about drawing strong conclusions in favor of their theory.
\hypertarget{predictive-generalization-and-model-complexity}{%
\subsubsection{Predictive generalization and model complexity}\label{predictive-generalization-and-model-complexity}}
A chief goal of scientific models is to make accurate predictions about future observations. When comparing and selecting models based on their predictive accuracy, it is crucial to consider how the models will perform on new, as-yet-unseen data. The traditional approach to this issue is to compute the fit of the model to the observed data, but to then correct for the potential for these models to ``overfit'' these data. Typically this correction comes in the form of a ``complexity penalty,'' penalizing models in proportion to their flexibility for accommodating different patterns of data \autocite{gelman.etal2014}. Zhu and colleagues \autocite*{zhu.etal2020} warn that BIC, which penalizes models based solely on the number of parameters in the model, cannot fully account for the differences in the competing models' complexity \autocite[also see][]{piantadosi2018}.
There are at least three challenges to accounting for model complexity in the comparison of the PT+N and Bayesian Sampler models. First, the models differ not only in the number of parameters but in the domain of those parameters. Zhu and colleagues \autocite*{zhu.etal2020} assume that the Bayesian Sampler model's prior distribution should reflect ignorance or a lack of information. A uniform prior, Beta(1, 1), is the most obvious choice in this case, but theoretical arguments can also be made for Beta(0.5, 0.5) (Jeffreys' prior), Beta(0, 0) (Haldane's prior), or perhaps other symmetric beta distributions Beta(\(\beta\), \(\beta\)) with \(\beta \in [0, 1]\) \autocite{jaynes2003}. Via the bridging conditions, they show that assuming \(\beta \in [0, 1]\) restricts the ``noise level'' for the Bayesian Sampler, represented as an implied \(d\) under the PT+N model, to fall within \([0, 1/3]\) (approaching \(1/3\) as \(N\) approaches 1), whereas the PT+N model permits noise values in \([0, 1/2]\).
Second, it is not immediately clear how the models' structural differences impact their flexibility. Zhu and colleagues' \autocite*{zhu.etal2020} bridging condition makes it clear that the PT+N model is more flexible when it comes to predicting unconditional probabilities. But, what impact does the PT+N model's treatment of conditional probabilities have on model complexity? Is this component of the model a sort of Ptolemaic epicycle, adding complexity to the model that should be penalized? Or, does it constitute a commitment to novel predictions that thereby constrain its flexibility? Determining model complexity a priori is not always straightforward when the models being compared differ structurally.
Third, one potential explanation for the relative weakness of the ``complex'' model variants is Zhu and colleagues' \autocite*{zhu.etal2020} use of ``unpooled'' models, where parameters are estimated independently for each individual participant. In contrast, a comparison of fully pooled variants of the simple and complex models (where a single population parameter is used for all participants) would require adding only one extra parameter to the penalty term. If there is limited heterogeneity across individuals, then adding a parameter for each participant may effectively over-penalize the complex variants relative to the simple variants. Partial pooling is a solution that balances between these extreme approaches, allowing for an accounting of heterogeneity without over-penalizing in cases where heterogeneity is low.
Finally, it is worth recognizing that formal measures of model complexity cannot be expected to perfectly track notions of simplicity or elegance in scientific explanation \autocites[for some related discussions, see][]{kuhn1977,sober2002,piantadosi2018}. For instance, even if the PT+N model's account of conditional probability judgments constrains its flexibility empirically, it seems clear this added component makes it more complex as a putative scientific explanation.
\hypertarget{the-present-work}{%
\subsubsection{The present work}\label{the-present-work}}
Here, I cast both the Bayesian Sampler and PT+N models into a Bayesian data analysis framework that may permit a more decisive comparison. First, Bayesian data analysis allows issues of model complexity to be addressed through comparisons of model fit based on newer information criteria, such as the widely applicable information criterion (\({\text{WAIC}}\)) and Pareto smoothed importance sampling approximate leave-one-out cross validation (\(\text{PSIS-LOO}\)). In particular, rather than estimating model fit and then penalizing for model complexity, \(\text{PSIS-LOO}\) estimates out-of-sample prediction performance directly by estimating the expected log predictive density (\(\widehat{\text{elpd}}\)) of the model, or the expected probability of new unseen data \autocite{gelman.etal2014,vehtari.etal2017}. From these calculations, an estimate of model complexity (\(\hat{p}_{\text{LOO}}\)) can also be derived.
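In practice these quantities can be computed from posterior draws with standard tooling. The sketch below assumes ArviZ \texttt{InferenceData} objects containing pointwise log-likelihoods (the object names are placeholders, not the analysis code itself).
\begin{verbatim}
import arviz as az

# idata_bs, idata_ptn: InferenceData objects with posterior draws and
# pointwise log-likelihoods for two fitted models (placeholder names)
print(az.loo(idata_bs))   # reports elpd_loo, p_loo, and Pareto-k diagnostics

# Rank several models by estimated out-of-sample predictive density
print(az.compare({"bayesian_sampler": idata_bs, "ptn": idata_ptn}, ic="loo"))
\end{verbatim}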
In addition, the Bayesian framework supports straightforward implementation of hierarchical versions of these models with partial pooling. This allows for information about model parameters to be shared across participants, resulting in potential improvements to out-of-sample prediction, reductions in model complexity, and a more realistic test of the models.
\hypertarget{methods}{%
\section{Methods}\label{methods}}
\hypertarget{data-selection}{%
\subsection{Data selection}\label{data-selection}}
Zhu, Sanborn, \& Chater \autocite*{zhu.etal2020} conducted two experiments to compare the PT+N and Bayesian Sampler theories. These experiments asked participants to judge the probability of different events in various combinations. Following prior work by Costello and Watts \autocites*[e.g.][]{costello.watts2016,costello.watts2018}, both experiments focused on the everyday events of different kinds of weather.
Experiment 1 asked about the events {[}icy, frosty{]} and {[}normal, typical{]} (e.g.~``what is the probability that the weather in London is normal and not typical?''). The authors' goal was to ask about highly correlated events, but the events used are perhaps nearly perfectly correlated. Because the terms used to describe these events are nearly synonymous, there is a concern about the interpretation of the statements evaluated in this experiment. This is especially clear, as the authors note, for disjunctive query trials such as ``normal or typical,'' where ``or typical'' might not be read as a disjunction but rather an elaborative clause. In light of these concerns, I excluded the disjunctive trials from Experiment 1 from my analyses.
Experiment 2 focused on more moderately correlated events, {[}cold, rainy{]} and {[}windy, cloudy{]}, that do not admit these misinterpretations. In addition, a third experimental condition asking about {[}warm, snowy{]} was also included in the experiment, but was dropped from the analyses reported in the paper. Exploring the raw responses from this condition reveals a substantial fraction of ``zero'' and ``one'' responses for certain trials. This may reflect a different response process than was intended. For instance, some participants may have engaged in deductive reasoning to judge that it is not possible for the weather to at once be warm and snowy, and therefore responded with zero---failing to properly consider that it is possible (at least logically) for it to be warm and snowy at different times within the same day. Given these potentially aberrant responses, I followed Zhu and colleagues \autocite*{zhu.etal2020} in ignoring data from this condition.
\hypertarget{modeling}{%
\subsection{Modeling}\label{modeling}}
I implement several variants of the Bayesian Sampler and PT+N models in a Bayesian framework. These models were implemented in the probabilistic programming language Numpyro. All code and results are available as supplemental materials (\url{https://github.com/derekpowell/bayesian-sampler}).
\hypertarget{bayesian-implementations-of-the-models}{%
\subsubsection{Bayesian implementations of the models}\label{bayesian-implementations-of-the-models}}
The PT+N model defines expected probability judgments (\(P_e\)) as:
\begin{align*}
P_{e}(A) &= (1-2d)P(A) + d \\
P_e(A\land B) &= (1-2d^\prime)P(A \land B)+d^\prime \\
P_e(A\lor B) &= (1-2d^\prime)P(A \lor B)+d^\prime \\
P_e(A|B) &= \frac{(1-2d)^2P(A \land B) + d(1-2d)\big(P(A)+P(B)\big)+d^2}{(1-2d)P(B)+d}
\end{align*}
In contrast, the Bayesian Sampler model defines expected probability judgments as:
\begin{align*}
P_{e}(A) &= \frac{N}{N + 2 \beta}P(A) + \frac{\beta}{N+2 \beta} \\
P_{e}(A \land B) &= \frac{N'}{N' + 2 \beta}P(A \land B) + \frac{\beta}{N'+2 \beta} \\
P_{e}(A \lor B) &= \frac{N'}{N' + 2 \beta}P(A \lor B) + \frac{\beta}{N'+2 \beta} \\
P_{e}(A|B) &= \frac{N}{N + 2 \beta}P(A|B) + \frac{\beta}{N+2 \beta}
\end{align*}
Setting \(d = d'\) (or \(N = N'\)) yields the ``simple'' variant of each model, which treats conjunctive and disjunctive probability judgments identically to simple probability judgments.
Notice that for each model the probability judgments depend on underlying subjective probabilities, derived from a mental sampling process. These subjective probabilities are unobserved, and must be estimated as latent variables. Here, they are represented with a four-dimensional Dirichlet distribution for each subject, representing the probability of the elementary events (\(A \land B\), \(\neg A \land B\), \(A \land \neg B\), \(\neg A \land \neg B\)).
Zhu, Sanborn, and Chater \autocite*{zhu.etal2020} implement completely unpooled models with separate \(d\), \(d'\), \(N\), \(N'\), and \(\beta\) parameters for each participant. Although hierarchical models with partial pooling might be expected to better account for the data and offer a better test of the models, for consistency and comparison with Zhu and colleagues' \autocite*{zhu.etal2020} analyses, I first estimated implementations of these unpooled models. Figure 1 displays the translation of the PT+N model into the Bayesian framework, along with a plate diagram representing the dependencies among parameters.
\begin{figure}
\begin{minipage}{.5\textwidth}
\flushright
\input{ptn_platediagram}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\flushleft
\small
\begin{align*}
y_{ijk} &\sim Beta\big(\mu_{ijk} \kappa, (1-\mu_{ijk}) \kappa \big) \\
\mu_{ijk} &= f_{PT+N}(\overrightarrow{p_{jk}}, d_j, d_j') \\
d_j &= \frac{1}{2} \ \text{logistic}(\delta_j) \\
d_j' &= \frac{1}{2} \ \text{logistic}\big(\delta_j + \exp(\Delta\delta_j)\big) \\
\overrightarrow{p_{jk}} &\sim Dirichlet(\overrightarrow{1}) \\
\delta_j &\sim Normal(0,1) \\
\Delta\delta_j &\sim Normal(0,1) \\
\kappa &\sim \mathit{Half \hyph Cauchy}(20)
\end{align*}
\end{minipage}
\caption{Complex unpooled PT+N model diagram and formula specifications. Circular nodes are parameters, shaded nodes are observations, and squared nodes are deterministic functions of parameters. Plates signify values defined for $i$ trials, $j$ participants, and $k$ conditions.}
\end{figure}
The function \(f_{PT+N}\) computes the expected probability estimate using the underlying subjective probability \(p\), the noise parameters \(d\) and \(d'\), and the relevant equation as defined by the PT+N theory (see supplemental materials for implementation details). Prior predictive checks were conducted for all models to select priors that would be uninformative or minimally informative on the scale of the model parameters \(d\) and \(d'\).
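For readers unfamiliar with Numpyro, the sketch below shows the general shape of such an unpooled likelihood. It is illustrative only, not the supplemental code: it handles only simple (unconditional) queries about a single event \(A\), and the variable names (e.g., \texttt{subj\_idx}) are my own.
\begin{verbatim}
import jax.numpy as jnp
from jax.scipy.special import expit   # the logistic function
import numpyro
import numpyro.distributions as dist

def ptn_unpooled_simple(subj_idx, y=None, n_subj=1):
    """Unpooled PT+N likelihood for simple 'probability of A' queries only."""
    kappa = numpyro.sample("kappa", dist.HalfCauchy(20.0))
    with numpyro.plate("subject", n_subj):
        delta = numpyro.sample("delta", dist.Normal(0.0, 1.0))
        # latent joint probabilities, order: (A&B, notA&B, A&notB, notA&notB)
        p = numpyro.sample("p", dist.Dirichlet(jnp.ones(4)))
    d = 0.5 * expit(delta)             # implied noise level in [0, 1/2]
    p_a = p[:, 0] + p[:, 2]            # latent subjective P(A)
    mu = (1 - 2 * d[subj_idx]) * p_a[subj_idx] + d[subj_idx]
    with numpyro.plate("obs", subj_idx.shape[0]):
        numpyro.sample("y", dist.Beta(mu * kappa, (1 - mu) * kappa), obs=y)
\end{verbatim}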
Recall that Zhu and colleagues \autocite*{zhu.etal2020} identified a bridging condition relating \(\beta\) and \(N\) in the Bayesian Sampler model to the \(d\) parameter of the PT+N model. To support direct comparisons of the models, I parameterize the Bayesian Sampler model according to the implied \(d\) and \(d'\), rather than directly according to its \(\beta\), \(N\), and \(N'\) parameters.\footnote{Strictly speaking, under the original form of the Bayesian sampler model, \(N\) and \(N'\) are discrete parameters representing the number of distinct independent samples drawn. Given a particular implied \(d\), this could create constraints on the possible values of \(d'\), assuming \(\beta\) is held constant. However, Zhu and colleagues \autocite*{zhu.etal2020} also consider the possibility that people draw non-independent mental samples, in which case \(N\) and \(N'\) would represent the \emph{effective number of samples}, accounting for their autocorrelation. In this case, we could treat this effective number of samples as a continuous quantity, and therefore imagine there are no clear constraints on \(d\) and \(d'\) except the stipulation that \(d \leq d'\).} I constrain \(d\) to \([0, 1/3]\) for the Bayesian Sampler model to reflect the assumption that \(\beta \in [0, 1]\). This allows the same priors to be used for the corresponding Bayesian Sampler and PT+N models, simplifying their comparison.
The Bayesian Sampler model is therefore identical to the PT+N model save for the changes to \(\mu_{ijk}\), \(d\), and \(d'\) shown below:
\begin{align*}
\mu_{ijk} &= f_{BS}(\overrightarrow{p_{jk}}, d_j, d_j') \\
d_j &= \frac{1}{3} \ \text{logistic}(\delta_j) \\
d_j' &= \frac{1}{3} \ \text{logistic}\big(\delta_j + \exp(\Delta\delta_j)\big)
\end{align*}
Where the function \(f_{BS}\) computes the expected probability estimate as prescribed by the Bayesian Sampler theory.
\hypertarget{hierarchical-implementations-of-the-models}{%
\subsubsection{Hierarchical implementations of the models}\label{hierarchical-implementations-of-the-models}}
Both of these models can also be implemented as hierarchical models with partial pooling for the \(d\) and \(d'\) parameters (implicitly, for \(N\) and \(N'\) in the case of the Bayesian Sampler). This partial pooling can help to regularize parameter estimates and improve out-of-sample predictive performance. In addition, partial pooling effectively reduces model complexity, and could support more realistic comparison between the ``simple'' and ``complex'' variants of the models.
The hierarchical implementation adds parameters for the population-level \(d\) and \(d'\) as well as a parameter controlling the standard deviation of the distribution for the subject-level effects. For ease of interpretation, the centered parameterization is shown below, although the actual models used a non-centered parameterization to improve sampling efficiency \autocite{papaspiliopoulos.etal2007}. Figure 2 displays the translation of a hierarchical implementation of the Bayesian Sampler model into the Bayesian framework, along with a plate diagram representing the dependencies among parameters.
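For concreteness, here is a sketch (with hypothetical variable names, not the actual model code) of the non-centered trick for the subject-level \(\delta\) parameters: sampling standardized offsets and scaling them by a population-level location and scale is algebraically equivalent to the centered \(\delta_j \sim Normal(\mu_\delta, \sigma_\delta)\), but typically samples more efficiently.
\begin{verbatim}
import numpyro
import numpyro.distributions as dist

def subject_deltas(n_subj):
    mu_delta = numpyro.sample("mu_delta", dist.Normal(0.0, 1.0))       # population mean
    sigma_delta = numpyro.sample("sigma_delta", dist.HalfNormal(1.0))  # population sd
    with numpyro.plate("subject", n_subj):
        z = numpyro.sample("z_delta", dist.Normal(0.0, 1.0))           # standardized offsets
    # equivalent to delta_j ~ Normal(mu_delta, sigma_delta)
    return mu_delta + sigma_delta * z
\end{verbatim}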
\begin{figure}
\begin{minipage}{.5\textwidth}
\flushright
\input{bs_platediagram_mlm}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\flushleft
\small
\input{bs_equation_mlm}
\end{minipage}
\caption{Hierarchical complex Bayesian Sampler model diagram and formula specifications. Circular nodes are parameters, shaded nodes are observations, and squared nodes are deterministic functions of parameters. Plates signify values defined for $i$ trials, $j$ participants, and $k$ conditions.}
\end{figure}
Finally, I also explored fitting a hierarchical version of the Bayesian Sampler model that allowed values of \(\beta > 1\). Restricting \(\beta\) to \([0, 1]\) restricts the prior distribution of the Bayesian Sampler to the class of ``ignorance priors'' \autocite{zhu.etal2020}. However, it is also possible that people bring informative priors to the probability judgment task. Indeed, Zhu and colleagues \autocite*{zhu.etal2020} acknowledge there are situations where an informative prior may be warranted \autocite[see e.g.,][]{fennell.baddeley2012}. If \(\beta\) is unrestricted, allowed to fall in the domain \([0, \infty)\), then the Bayesian Sampler model becomes more flexible, allowing for equivalent ``noise'' levels in the same \([0, 1/2]\) range as the PT+N model. That is, through the bridging condition, the implied \(d\) approaches \(1/2\) in the limit as \(N \to 1\) and \(\beta \to \infty\). Though it would seem a more fundamental change, this same model may also be seen as a version of the PT+N theory that jettisons its constructive account of conditional probability judgment. Thus, fitting this additional unrestricted model allows for a complete comparison of the models along both of their differing dimensions.
\hypertarget{results}{%
\section{Results}\label{results}}
I fit each of the models specified above to data from Zhu et al.'s \autocite*{zhu.etal2020} Experiments 1 and 2 and estimated the expected log predictive density with PSIS-LOO (\(\widehat{\text{elpd}}_{\text{LOO}}\)) for each combination. Compared with BIC, \(\widehat{\text{elpd}}_{\text{LOO}}\) offers a more sophisticated account of model complexity and is more appropriate in the ``\(\mathcal{M}\)-open'' case: situations where we do not know whether any of the models being compared is the ``true'' model \autocite{vehtari.etal2019}. Model posteriors were estimated using the Numpyro \autocite{phan.etal2019} implementation of the No-U-Turn Hamiltonian Markov chain Monte Carlo (MCMC) sampler. For each model, four MCMC chains of 2000 iterations were sampled after 2000 iterations of warmup, and all passed convergence tests according to \(\hat{R}\) \autocite[see][]{gelman.etal2014a}. Figure 3 below displays the estimated differences in \(\widehat{\text{elpd}}_{\text{LOO}}\) scores for each of the models as compared to the best-scoring model.
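As an illustration of this estimation pipeline (a sketch only; the model function and data arrays are placeholders rather than the actual analysis code), a Numpyro model can be fit with NUTS and checked for convergence roughly as follows.
\begin{verbatim}
import jax
import arviz as az
from numpyro.infer import MCMC, NUTS

# `model` is a Numpyro model function (e.g., the PT+N sketch above);
# `subj_idx` and `y` are arrays of subject indices and observed judgments.
mcmc = MCMC(NUTS(model), num_warmup=2000, num_samples=2000, num_chains=4)
mcmc.run(jax.random.PRNGKey(0), subj_idx=subj_idx, y=y, n_subj=n_subj)

idata = az.from_numpyro(mcmc)
print(az.summary(idata, var_names=["kappa"]))  # r_hat column flags convergence problems
\end{verbatim}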
\begin{figure}[ht]
\centering
\includegraphics[width=6in]{plot_compare.png}
\caption[]{Model comparison results for data from Experiments 1 and 2. Error bars indicate two standard errors of the estimates. Typically, a difference of greater than two standard errors is taken as clear evidence for the superiority of the better-scoring model \autocite{sivula.etal2020}.}
\end{figure}
Data from Experiment 1 favor ``complex'' variants of the Bayesian Sampler model compared with the ``simple'' variants and all versions of the PT+N model. However, there is no clear single winner. As shown in Figure 3, the best-scoring model is an unrestricted variant of the Bayesian Sampler that allows people to bring informative priors to the probability judgment task, that is, a Bayesian Sampler model allowing \(\beta \in [0, \infty)\) (greater values of \(\widehat{\text{elpd}}_{\text{LOO}}\) are better). However, the complex and complex hierarchical implementations of the Bayesian Sampler assuming uninformative priors (i.e., restricting \(\beta \in [0,1]\)) have \(\widehat{\text{elpd}}_{\text{LOO}}\) scores within two standard errors of the difference, indicating that these models are also plausible \autocite{sivula.etal2020}.
Data from Experiment 2 more decisively reveal a single winning model: the hierarchical ``unrestricted'' implementation of the Bayesian Sampler model allowing for informative priors.
Figure 4 shows the posterior distributions of the population-level \(d\) and \(d'\) parameters inferred from the unrestricted Bayesian Sampler model. In Experiment 2, population-level estimates of \(d'\) are greater than \(1/3\), as are a substantial number of participant-level estimates for \(d\) (37 of 83). These values fall outside the range implied by the assumption of ``ignorance priors'' in the Bayesian Sampler model. Parameters fit to the data from Experiment 1 are more consistent with this assumption, although a substantial proportion of individual participants' \(d\) and \(d'\) estimates also lie outside this range (11 of 59 for \(d\), 18 of 59 for \(d'\)). The finding that there are clear differences in \(d\) and \(d'\) estimated across experiments suggests that the mental sampling processes producing estimates vary in the different conditions, either in terms of the number of samples that are drawn, the noise in reading those samples, or the form of the prior distribution assumed by participants in each context.
\begin{figure}[ht]
\centering
\includegraphics[width=4in]{plot_params.png}
\caption{Posterior density of population-level $d$ and $d'$ parameters estimated from the unrestricted hierarchical Bayesian Sampler model for data from Experiments 1 and 2. Dashed line indicates theoretical maximum values for Bayesian Sampler model with uninformative priors.}
\end{figure}
Recall, the unrestricted Bayesian Sampler model may also be seen as a version of the PT+N theory that excises its constructive account of conditional probability judgments. So, comparing the PT+N models to the unrestricted Bayesian Sampler model offers a test of the PT+N's constructive account of conditional probability judgments. Comparing predictions from the unrestricted Bayesian Sampler model and the best-fitting PT+N model, we see that the Bayesian Sampler model better captures these judgments from both experiments (see Table 2). And comparing their predictions specifically for conditional probability judgments, the unrestricted Bayesian Sampler again provides a better fit, especially for the modestly-correlated events of Experiment 2 (Exp 1: response-level \(r\) = .91 vs.~.89 (\emph{p} \textless{} .05); query-level \(r\) = .98 vs.~.97; Exp 2: response-level \(r\) = .72 vs.~.67 (\emph{p} \textless{} .01); query-level \(r\) = .92 vs.~.85). These findings suggest that conditioning is better seen as part of a mental model of the events than as part of the probability judgment process.
Table 2 also presents estimates of \(\hat{p}_{\text{LOO}}\), the estimate of the effective number of parameters for each model. Compared to the Bayesian Sampler model, the PT+N model with its constructive account of conditional probability judgments has a similar penalty term estimate when fit to Experiment 1, but has a smaller penalty term when fit to Experiment 2, despite having the same parameterization in terms of \(d\) and \(d'\). Although its special treatment of conditional probability judgments makes it more complex as a putative scientific explanation, this structural component appears to actually constrain its predictive flexibility. However, this constraint leads to a worse-fitting model.
Perhaps surprisingly, the Bayesian Sampler with unrestricted \(\beta\) actually receives a smaller penalty term than the more ``restricted'' version of the model for the data in Experiment 2. This is at first counterintuitive. However, model complexity depends not only on the model and priors, but also the observed data \autocite[see][]{gelman.etal2014}. To illustrate, Gelman and colleagues consider a case where a parameter is constrained to be positive and its value is then estimated from data \autocite*{gelman.etal2014}. If the estimated value is some very large positive number, then the constraint won't have been very informative. But, if the estimated value is very close to zero, then the constraint that the parameter is positive will provide substantial information and the model's penalty term will therefore be smaller. Here, it seems reasonable to conjecture that because the implied \(d\) and \(d'\) estimated from Experiment 2's data are both very near \(1/3\) under this model, the restriction results in posterior estimates of the linear parameters that are relatively far from the prior, which can result in a greater penalty.
The dependence of complexity penalties on observed data may strike some as an undesirable feature of model comparison through information criteria like \(\text{PSIS-LOO}\). Indeed, it is worth acknowledging that principled model comparison is still an area of active inquiry, with differing perspectives \autocites[e.g.][]{gronau.wagenmakers2019,vehtari.etal2019}. Fortunately, in this case, comparisons between the models do not rest solely on differences in estimated model complexity.
\begin{table}
\caption{\label{tab:table2}Bayesian model comparison results with best scoring model in bold face.}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}[t]{lrrrrrrrr}
\toprule
\multicolumn{1}{c}{ } & \multicolumn{4}{c}{Experiment 1} & \multicolumn{4}{c}{Experiment 2} \\
\cmidrule(l{3pt}r{3pt}){2-5} \cmidrule(l{3pt}r{3pt}){6-9}
Model & $\widehat{\text{elpd}}_{\text{LOO}}$ & $\hat{p}_{\text{LOO}}$ & $r_{resp}$ & $r_{query}$ & $\widehat{\text{elpd}}_{\text{LOO}}$ & $\hat{p}_{\text{LOO}}$ & $r_{resp}$ & $r_{query}$\\
\midrule
\textbf{Unrestricted BS hier.} & \textbf{1118.7} & \textbf{259.3} & \textbf{0.884} & \textbf{0.964} & \textbf{1978.6} & \textbf{366.4} & \textbf{0.688} & \textbf{0.878}\\
Bayesian Sampler complex hier. & 1088.3 & 269.6 & 0.883 & 0.960 & 1912.3 & 395.0 & 0.675 & 0.852\\
Bayesian Sampler complex & 1087.8 & 264.6 & 0.883 & 0.961 & 1876.9 & 443.4 & 0.679 & 0.848\\
Bayesian Sampler simple & 1045.6 & 253.8 & 0.874 & 0.952 & 1861.2 & 419.1 & 0.667 & 0.831\\
Bayesian Sampler simple hier. & 1039.6 & 259.1 & 0.874 & 0.953 & 1900.7 & 377.8 & 0.667 & 0.839\\
\addlinespace
PT+N complex hier. & 993.0 & 268.9 & 0.867 & 0.946 & 1902.4 & 351.5 & 0.658 & 0.835\\
PT+N complex & 966.1 & 259.8 & 0.864 & 0.941 & 1886.9 & 395.5 & 0.667 & 0.840\\
PT+N simple hier. & 864.0 & 250.0 & 0.839 & 0.919 & 1821.9 & 305.3 & 0.617 & 0.772\\
PT+N simple & 863.5 & 245.5 & 0.838 & 0.918 & 1821.5 & 319.5 & 0.619 & 0.777\\
Relative Freq. & 649.3 & 289.4 & 0.820 & 0.875 & 643.8 & 424.6 & 0.516 & 0.639\\
\bottomrule
\end{tabular}}
\end{table}
Finally, it is worth noting that the best of these models provide quite strong overall fits to the data, not just for the query averages, but also for individual participants' responses, as seen from the correlations between predicted and observed responses in Table 2.
\hypertarget{discussion}{%
\section{Discussion}\label{discussion}}
By a fair margin, the models best accounting for the experimental data from Zhu and colleagues \autocite*{zhu.etal2020} were implementations of the Bayesian Sampler theory, with the best single model being a version of the Bayesian Sampler model without restriction on the range of its \(\beta\) parameters. Alternatively, this model can also be seen as a variant of the PT+N model that removes its account of conditional probability judgments. Thus, what these findings indicate most clearly is that the Bayesian Sampler theory provides a superior account of conditional probability judgments in this task. In keeping with the larger theoretical framework of Bayesian cognitive science, the Bayesian Sampler theory assumes that subjective probabilities underlie people's probability judgments, and that conditional probability judgments are produced by Bayesian conditioning occurring in their mental models of the events in question, rather than as arising from the probability judgment process \autocite{chater.etal2020,zhu.etal2020}.
Zhu and colleagues' findings do cast some doubt on their proposal that the priors of the Bayesian Sampler model should reflect ``ignorance priors,'' symmetric Beta distributions with \(\beta \in [0, 1]\). As a generic prior that would be used across contexts, this class of uninformative priors has an appealing rational basis. Nevertheless, many participants' estimated \(d\) and \(d'\) parameters fell outside the \([0, 1/3]\) range implied by ``ignorance'' priors. Given the different values of \(d\) and \(d'\) across Experiments 1 and 2, it seems likely that people bring informative, domain-specific priors to some judgment tasks, meaning that the most appropriate priors might be dictated by the context in which they make their judgments.
In addition, the implied \(d\) and \(d'\) noise parameters also varied across individuals. Of note, some participants' implied \(d\) and \(d'\) parameters \emph{were} consistent with the class of ignorance priors, even in Experiment 2 (see supplemental materials). Further research should explore this heterogeneity within and across individuals. Unfortunately, pinning down specific components of the Bayesian Sampler model is a challenge, as the \(\beta\) and \(N\) parameters are not uniquely identifiable from judgment data of the sort examined here.
One thing this model comparison has not decided, and likely \emph{cannot} decide, is whether distortions of probability judgments are products of mental noise or of further reasoning processes. Given the two models' tight connections via the bridging condition \autocite{zhu.etal2020}, it may not be possible to draw decisive conclusions here. Moreover, the theories may not be in any real competition over this point: Zhu and colleagues consider that ``noise'' might give an algorithmic-level solution to the computation-level task defined by the Bayesian Sampler \autocite*{zhu.etal2020}.
Finally, some of the most interesting implications of these models go well beyond the probability judgment task itself: the models both support a probabilistic account of beliefs \autocite{chater.etal2020}. Indeed, by representing the ``true'' subjective probabilities as a latent variable, Bayesian data analysis allows those underlying credences to be inferred. Examining the model posteriors here reveals these estimates often come with considerable uncertainty, but at least for some participants they can be estimated with useful levels of precision. Of course, Zhu and colleagues' \autocite*{zhu.etal2020} experiments were never designed for this purpose. Future research could explore how estimates of people's credences might be made more reliable, and how inferences about these mental probabilities might be integrated with other Bayesian models of reasoning \autocites[e.g.][]{franke.etal2016,jern.etal2014,griffiths.tenenbaum2006}. One particularly promising direction could be to integrate these models with formal models of belief revision, which might then shed new light on these fundamental cognitive processes \autocites[e.g.][]{cook.lewandowsky2016,jern.etal2014,powell.etal2018}.
\printbibliography
\end{document}
\documentclass{article}
\usepackage[letterpaper,top=2cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{ulem}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage[colorlinks=true, allcolors=blue]{hyperref}
\title{Expectation Maximization for Mixture Model}
\author{dw}
\begin{document}
\maketitle
\section{Kernel Method}
Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the ``kernel trick''.
For models which are based on a fixed nonlinear feature space mapping $\phi(\mathbf{x})$, the kernel function is given by the relation
\begin{equation}
k(\mathbf{x},\mathbf{x}') = \phi(\mathbf{x})^T \phi(\mathbf{x}')
\label{eq: kernel}
\end{equation}
The simplest example of a kernel function is obtained by considering the identity
mapping for the feature space in Eq. (\ref{eq: kernel}) so that $\phi(\mathbf{x}) = \mathbf{x}$, in which case $k(\mathbf{x}, \mathbf{x}') = \mathbf{x}^T \mathbf{x}'$. We shall refer to this as the linear kernel.
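As a small illustration of the kernel trick (a sketch added for exposition, not part of the derivation above), the quadratic kernel $k(\mathbf{x},\mathbf{x}') = (\mathbf{x}^T \mathbf{x}')^2$ on two-dimensional inputs can be evaluated directly from inner products, and agrees with the inner product of an explicit degree-2 feature map:
\begin{lstlisting}[language=Python, columns=flexible, breaklines=true, frame=tb]
import numpy as np

def quadratic_kernel(x, z):
    # k(x, z) = (x . z)^2, computed without building the feature space
    return np.dot(x, z) ** 2

def phi(x):
    # explicit degree-2 feature map for 2-D inputs: (x1^2, x2^2, sqrt(2)*x1*x2)
    return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

x, z = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(quadratic_kernel(x, z))    # 1.0
print(np.dot(phi(x), phi(z)))    # same value, via the explicit mapping
\end{lstlisting}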
For a hyperplane $(\mathbf{w},b)$, the functional margin of a sample point $(\mathbf{x}_i, y_i)$ is $\hat{\gamma}_i = y_i (\mathbf{w}\cdot \mathbf{x}_i + b)$, and the geometric margin is obtained by normalizing by $\lVert\mathbf{w}\rVert$:
\begin{equation}
\gamma_i = y_i \left(\frac{\mathbf{w}}{\lVert{\mathbf{w}}\rVert}\cdot \mathbf{x}_i + \frac{b}{\lVert{\mathbf{w}}\rVert}\right) = \frac{\hat{\gamma}_i}{\lVert\mathbf{w}\rVert}
\end{equation}
The maximum-margin hyperplane maximizes the geometric margin $\gamma$ over the training set:
\begin{equation}
\max_{\mathbf{w},b} \ \gamma
\end{equation}
\begin{equation}
\text{s.t.}\quad y_i \left(\frac{\mathbf{w}}{\lVert{\mathbf{w}}\rVert}\cdot \mathbf{x}_i + \frac{b}{\lVert{\mathbf{w}}\rVert}\right) \geq \gamma \quad \text{for all } i
\end{equation}
Equivalently, in terms of the functional margin, this is $\max_{\mathbf{w},b} \hat{\gamma}/\lVert\mathbf{w}\rVert$ subject to $y_i (\mathbf{w}\cdot \mathbf{x}_i + b) \geq \hat{\gamma}$ for all $i$.
\end{document}
\begin{figure}[ht]
\includegraphics[scale=0.31]{imgs/scenario.png}
\caption{Network Scenario}
\label{fig:networkscenario}
\end{figure}
\begin{figure*}[h]
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\textwidth]{imgs/DoS_attack.png}
\caption{DoS attack scenario} \label{fig:DoS}
\end{subfigure}
\hspace*{\fill} % separation between the subfigures
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\textwidth]{imgs/DDoS_attack.png}
\caption{DDoS attack scenario} \label{fig:DDoS}
\end{subfigure}
\caption{Attacks scenarios}
\label{fig:atks}
\end{figure*}
\section{Project structure and implementation}
\label{sec:projstruct}
Our project \cite{proj_repo} and its source code are freely downloadable from \href{https://github.com/CristianTuretta/DDoS-Network-Flow-Forensics-Analyser-.git}{Github}. Here, we focus on the UDP flood D(D)oS analysis of \texttt{.pcap} records: the goal is to point out good and evil users given a \texttt{.pcap} network sniff file converted to \texttt{.csv} format. The project is a multi-layered tool which primarily consists of two executable Python 3 scripts:
\subsection{DDoSAnalysis}
\textit{DDoSAnalysis.py} can be used in several different modes, depending on the command-line parameters used:
\begin{itemize}
\item \textit{-g dataset\_name n\_members n\_lines n\_attackers atk\_volume atk\_duration} \\Generates a random, bogus dataset in the current working directory with the name specified in the second argument, along with the number of normal network users specified in the \textit{n\_members} argument, the dimension of the dataset (in lines), the number of infected machines, the attack volume (per packet) and its duration. At the end of the generation process, it copies the dataset into the Hadoop File System. We assume that Hadoop is installed and that a folder tree under \path{hdfs://user/your_user/project/input} exists.
\item \textit{-a dataset\_name} \\ Begins the analysis of the dataset \textit{dataset\_name} using a Pig script. In order to work, the dataset must have been previously copied into the Hadoop input folder, which happens automatically if the dataset is generated using the \textit{-g} option. It saves the elaborated dataset under \path{outputs/dataset_name} together with an image consisting of a plot of every agent's average velocity (Mbps), an attack volume graph and a statistical image which shows the squared margin to the mean velocity.
\item \textit{-anp dataset\_name} \\ Behaves like the \textit{-a} option, but does not start the Pig analysis. It is used when a Pig analysis has already been completed and we only want to aggregate the data and obtain the plots.
\item \textit{-ga dataset\_name n\_members n\_lines n\_attackers atk\_volume atk\_duration} \\ Launches both the generation and the analysis.
\item \textit{-sga dataset\_name n\_members n\_lines n\_attackers atk\_volume atk\_duration} \\ Launches both the generation and the analysis. After generation but before the Pig analysis, it prints an estimation of the dataset dimension, letting the user choose whether to proceed or not.
\end{itemize}
The script also automatically records information about performance timing under \path{PerformanceHistory.csv}.
\subsection{PerformanceAnalyser}
\textit{PerformanceAnalyser.py} is used to automatically plot all the information stored under \path{PerformanceHistory.csv}. It supports two modes:
\begin{itemize}
\item \textit{-a img\_name} \\ Stores a plot of analysis statistics (history of datasets analyzed and time elapsed) named \textit{img\_name} under the current working directory.
\item \textit{-g img\_name} \\ Stores a plot of dataset generation statistics (history of datasets generated and time elapsed) named \textit{img\_name} under the current working directory.
\end{itemize}
\textit{PerformanceAnalyser.py} also exposes a method used as a wrapper to call the generation and analysis routines, using the \textbf{cProfile} Python module to gather timing statistics.
\bigskip
The two scripts mentioned above form the user interface of our tool. However, there are other core scripts which make the generation and analysis possible:
\subsection{DatasetGenerator}
\textit{DatasetGenerator.py} contains the core dataset generation routine. It generates a pool of innocent IPs and a pool of attackers' IPs, then fills the dataset line by line with random, bogus information, extracting random users and attackers. It saves a CSV file in the format: \textit{id, time, source\_ip, dest\_ip, protocol, packet\_size, payload}
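A minimal sketch of such a generator is shown below (illustrative only: this is not the actual \textit{DatasetGenerator.py} code, and the helper names and attacker ratio are invented).
\begin{lstlisting}[language=Python, columns=flexible, breaklines=true, frame=tb]
import csv, random, time

def random_ip():
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

def generate_dataset(path, n_members, n_lines, n_attackers):
    users = [random_ip() for _ in range(n_members)]
    attackers = [random_ip() for _ in range(n_attackers)]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for i in range(n_lines):
            # roughly 30% of lines come from infected machines (arbitrary choice)
            src = random.choice(attackers if random.random() < 0.3 else users)
            writer.writerow([i, time.time(), src, random_ip(),
                             "UDP", random.randint(60, 1500), "payload"])

generate_dataset("bogus.csv", n_members=50, n_lines=1000, n_attackers=5)
\end{lstlisting}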
\subsection{Evaluator}
\textit{Evaluator.py} exposes the main routine which processes the Pig script output. It computes the mean velocity of all users, and then produces a plot consisting of the velocity of every single user, represented with a blue line, previously calculated by the Pig script (\textit{udpfloodpcap.pig}), and the mean velocity of all users, represented with a red line. The data scientist can distinguish between evil and good users just by looking at the deviation from the average. It also plots a volume comparison graph between users and the data's margin to the mean.
\subsection{Pig script}
\textit{Udpfloodpcap.pig} calculates the mean velocity of every machine given an input dataset.
This is the core of the analysis tool: it uses the Pig Latin MapReduce paradigm.
The script loads the dataset, filters the records having the UDP protocol, and groups them by the \textit{(source\_ip, destination\_ip)} tuple.
At this point we have to handle a list of \textit{map(tuple, bag)} entries, each containing all the corresponding packets sent by a machine.
We can then calculate the number of packets (counting the elements in the bag), the total volume exchanged by a particular machine (summing up the corresponding data for each element of the bag) and the mean velocity in bps (dividing the total volume by the difference between the maximum and minimum timestamps).
This is the crucial factor we use to discriminate between good and evil users. Below is part of the code we wrote to prepare the dataset for detecting UDP flood attacks:
\begin{lstlisting}[numbers=right, columns=flexible, breaklines=true, frame=tb, caption={\textit{Udpfloodpcap.pig} script}, label={lst:udpfloodscript}]
udpdataset = FILTER dataset BY protocol == 'UDP';
udpgroup = GROUP udpdataset BY (sourceip, destip);
udpmap = FOREACH udpgroup GENERATE group, ordereddataset.(pkt_n,time,dim), MIN(ordereddataset.time) AS min_ts, MAX(ordereddataset.time) AS max_ts, COUNT(ordereddataset.pkt_n) AS n_packets, SUM(ordereddataset.dim) AS total_volume;
result = FOREACH udpmap GENERATE group, min_ts, max_ts, n_packets,total_volume, (float)(max_ts-min_ts) AS time_difference, (float)(total_volume/(max_ts-min_ts)) AS ratio_vol_td;
\end{lstlisting}
The first line filters the loaded dataset so that only UDP packets remain, while the second line groups the filtered dataset by source IP and destination IP.
This grouping is used in line four to find, for each pair of source and destination IPs, the following features: the timestamps of the first and the last packet exchanged, the number of packets exchanged, and the total volume of data exchanged, obtained by summing over the packet length feature.
In line six we calculate an approximate time window of communication between the source and destination addresses, and the average speed, by dividing the total amount of data exchanged over the time window.
At this point we have extracted all the features necessary to carry out our analysis by applying the mathematical model illustrated previously.
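For reference, the same aggregation can be sketched outside Hadoop with pandas (illustrative only, not part of the project; the column names follow the CSV format described above).
\begin{lstlisting}[language=Python, columns=flexible, breaklines=true, frame=tb]
import pandas as pd

cols = ["id", "time", "source_ip", "dest_ip", "protocol", "packet_size", "payload"]
df = pd.read_csv("bogus.csv", names=cols)

udp = df[df["protocol"] == "UDP"]
stats = udp.groupby(["source_ip", "dest_ip"]).agg(
    min_ts=("time", "min"), max_ts=("time", "max"),
    n_packets=("id", "count"), total_volume=("packet_size", "sum"))
stats["ratio_vol_td"] = stats["total_volume"] / (stats["max_ts"] - stats["min_ts"])
print(stats.sort_values("ratio_vol_td", ascending=False).head())
\end{lstlisting}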
%!TEX root = forallxcam.tex
\part{Natural deduction for FOL}
\label{ch.NDFOL}
\chapter{Basic rules for FOL}\label{s:BasicFOL}
The language of FOL makes use of all of the connectives of TFL. So proofs in FOL will use all of the basic and derived rules from chapter \ref{ch.NDTFL}. We shall also use the proof-theoretic notions (particularly, the symbol `$\proves$') introduced in that chapter. However, we will also need some new basic rules to govern the quantifiers, and to govern the identity sign.
\section{Universal elimination}
From the claim that everything is F, you can infer that any particular thing is F. You name it; it's F. So the following should be fine:
\begin{proof}
\hypo{a}{\forall xRxxd}
\have{c}{Raad} \Ae{a}
\end{proof}
We obtained line 2 by dropping the universal quantifier and replacing every instance of `$x$' with `$a$'. Equally, the following should be allowed:
\begin{proof}
\hypo{a}{\forall xRxxd}
\have{c}{Rddd} \Ae{a}
\end{proof}
We obtained line 2 here by dropping the universal quantifier and replacing every instance of `$x$' with `$d$'. We could have done the same with any other name we wanted.
This motivates the universal elimination rule ($\forall$E):
\factoidbox{
\begin{proof}
\have[m]{a}{\forall \meta{x}\meta{A}(\ldots \meta{x} \ldots \meta{x}\ldots)}
\have[\ ]{c}{\meta{A}(\ldots \meta{c} \ldots \meta{c}\ldots)} \Ae{a}
\end{proof}}
The notation here was introduced in \S\ref{s:TruthFOL}. The point is that you can obtain any \emph{substitution instance} of a universally quantified formula: replace every instance of the quantified variable with any name you like.
I should emphasise that (as with every elimination rule) you can only apply the $\forall$E rule when the universal quantifier is the main logical operator. So the following is \emph{banned}:
\begin{proof}
\hypo{a}{\forall x Bx \eif Bk}
\have{c}{Bb \eif Bk}\by{naughtily attempting to invoke $\forall$E}{a}
\end{proof}
This is illegitimate, since `$\forall x$' is not the main logical operator in line 1. (If you need a reminder as to why this sort of inference should be banned, reread \S\ref{s:MoreMonadic}.)
\section{Existential introduction}
From the claim that some particular thing is an F, you can infer that something is an F. So we ought to allow:
\begin{proof}
\hypo{a}{Raad}
\have{b}{\exists x Raax} \Ei{a}
\end{proof}
Here, we have replaced the name `$d$' with a variable `$x$', and then existentially quantified over it. Equally, we would have allowed:
\begin{proof}
\hypo{a}{Raad}
\have{c}{\exists x Rxxd} \Ei{a}
\end{proof}
Here we have replaced both instances of the name `$a$' with a variable, and then existentially generalised. But we do not need to replace \emph{both} instances of a name with a variable: if Narcissus loves himself, then there is someone who loves Narcissus. So we also allow:
\begin{proof}
\hypo{a}{Raad}
\have{d}{\exists x Rxad} \Ei{a}
\end{proof}
Here we have replaced \emph{one} instance of the name `$a$' with a variable, and then existentially generalised. These observations motivate our introduction rule, although to explain it, we shall need to introduce some new notation.
Where $\meta{A}$ is a sentence containing the name $\meta{c}$, we can emphasise this by writing `$\meta{A}(\ldots \meta{c} \ldots \meta{c}\ldots)$'. We shall write `$\meta{A}(\ldots \meta{x} \ldots \meta{c}\ldots)$' to indicate any formula obtained by replacing \emph{some or all} of the instances of the name \meta{c} with the variable \meta{x}. Armed with this, our introduction rule is:
\factoidbox{
\begin{proof}
\have[m]{a}{\meta{A}(\ldots \meta{c} \ldots \meta{c}\ldots)}
\have[\ ]{c}{\exists \meta{x}\meta{A}(\ldots \meta{x} \ldots \meta{c}\ldots)} \Ei{a}
\end{proof}
\meta{x} must not occur in $\meta{A}(\ldots \meta{c} \ldots \meta{c}\ldots)$}
The constraint is included to guarantee that any application of the rule yields a sentence of FOL. Thus the following is allowed:
\begin{proof}
\hypo{a}{Raad}
\have{d}{\exists x Rxad} \Ei{a}
\have{e}{\exists y \exists x Rxyd} \Ei{d}
\end{proof}
But this is banned:
\begin{proof}
\hypo{a}{Raad}
\have{d}{\exists x Rxad} \Ei{a}
\have{e}{\exists x \exists x Rxxd}\by{naughtily attempting to invoke $\exists$I}{d}
\end{proof}
since the expression on line 3 contains clashing variables, and so is not a sentence of FOL.
\section{Empty domains}
The following proof combines our two new rules for quantifiers:
\begin{proof}
\hypo{a}{\forall x Fx}
\have{in}{Fa}\Ae{a}
\have{e}{\exists x Fx}\Ei{in}
\end{proof}
Could this be a bad proof? If anything exists at all, then certainly we can infer that something is F, from the fact that everything is F. But what if \emph{nothing} exists at all? Then it is surely vacuously true that everything is F; however, it does not follow that something is F, for there is nothing to \emph{be} F. So if we claim that, as a matter of logic alone, `$\exists x Fx$' follows from `$\forall x Fx$', then we are claiming that, as a matter of \emph{logic alone}, there is something rather than nothing. This might strike us as a bit odd.
Actually, we are already committed to this oddity. In \S\ref{s:FOLBuildingBlocks}, we stipulated that domains in FOL must have at least one member. We then defined a logical truth (of FOL) as a sentence which is true in every interpretation. Since `$\exists x\ x=x$' will be true in every interpretation, this \emph{also} had the effect of stipulating that it is a matter of logic that there is something rather than nothing.
Since it is far from clear that logic should tell us that there must be something rather than nothing, we might well be cheating a bit here.
If we refuse to cheat, though, then we pay a high cost. Here are three things that we want to hold on to:
\begin{ebullet}
\item $\forall x Fx \proves Fa$: after all, that was $\forall$E.
\item $Fa \proves \exists x Fx$: after all, that was $\exists$I.
\item the ability to copy-and-paste proofs together: after all, reasoning works by putting lots of little steps together into rather big chains.
\end{ebullet}
If we get what we want on all three counts, then we have to countenance that $\forall xFx \proves \exists x Fx$. So, if we get what we want on all three counts, the proof system alone tells us that there is something rather than nothing. And if we refuse to accept that, then we have to surrender one of the three things that we want to hold on to!
Before we start thinking about which to surrender, we might want to ask how \emph{much} of a cheat this is. Granted, it may make it harder to engage in theological debates about why there is something rather than nothing. But the rest of the time, we will get along just fine. So maybe we should just regard our proof system (and FOL, more generally) as having a very slightly limited purview. If we ever want to allow for the possibility of \emph{nothing}, then we shall have to cast around for a more complicated proof system. But for as long as we are content to ignore that possibility, our proof system is perfectly in order. (As, similarly, is the stipulation that every domain must contain at least one object.)
\section{Universal introduction}
Suppose you had shown of each particular thing that it is F (and that there are no other things to consider). Then you would be justified in claiming that everything is F. This would motivate the following proof rule. If you had established each and every single substitution instance of `$\forall x Fx$', then you can infer `$\forall x Fx$'.
Unfortunately, that rule would be utterly unusable. To establish each and every single substitution instance would require proving `$Fa$', `$Fb$', $\ldots$, `$Fj_2$', $\ldots$, `$Fr_{79002}$', $\ldots$, and so on. Indeed, since there are infinitely many names in FOL, this process would never come to an end. So we could never apply that rule. We need to be a bit more cunning in coming up with our rule for introducing universal quantification.
Our cunning thought will be inspired by considering:
$$\forall x Fx \therefore \forall y Fy$$
This argument should \emph{obviously} be valid. After all, alphabetical variation ought to be a matter of taste, and of no logical consequence. But how might our proof system reflect this? Suppose we begin a proof thus:
\begin{proof}
\hypo{x}{\forall x Fx}
\have{a}{Fa} \Ae{x}
\end{proof}
We have proved `$Fa$'. And, of course, nothing stops us from using the same justification to prove `$Fb$', `$Fc$', $\ldots$, `$Fj_2$', $\ldots$, `$Fr_{79002}$', $\ldots$, and so on until we run out of space, time, or patience. But reflecting on this, we see that there is a way to prove $F\meta{c}$, for any name \meta{c}. And if we can do it for \emph{any} thing, we should surely be able to say that `$F$' is true of \emph{everything}. This therefore justifies us in inferring `$\forall y Fy$', thus:
\begin{proof}
\hypo{x}{\forall x Fx}
\have{a}{Fa} \Ae{x}
\have{y}{\forall y Fy} \Ai{a}
\end{proof}
The crucial thought here is that `$a$' was just some \emph{arbitrary} name. There was nothing special about it---we might have chosen any other name---and still the proof would be fine. And this crucial thought motivates the universal introduction rule ($\forall$I):
\factoidbox{
\begin{proof}
\have[m]{a}{\meta{A}(\ldots \meta{c} \ldots \meta{c}\ldots)}
\have[\ ]{c}{\forall \meta{x}\meta{A}(\ldots \meta{x} \ldots \meta{x}\ldots)} \Ai{a}
\end{proof}
\meta{c} must not occur in any undischarged assumption\\
\meta{x} must not occur in $\meta{A}(\ldots \meta{c} \ldots \meta{c}\ldots)$}
A crucial aspect of this rule, though, is bound up in the first constraint. This constraint ensures that we are always reasoning at a sufficiently general level.\footnote{Recall from \S\ref{s:BasicTFL} that we are treating `$\ered$' as a canonical contradiction. But if we treated the canonical contradiction as involving some particular \emph{constant}, it might interfere with the constraint mentioned here. To avoid such problems, we shall treat `$\ered$' as a canonical contradiction \emph{that involves no particular names}.} To see the constraint in action, consider this terrible argument:
\begin{quote}
Everyone loves Kylie Minogue; therefore everyone loves themselves.
\end{quote}
We might symbolise this obviously invalid inference pattern as:
$$\forall x Lxk \therefore \forall x Lxx$$
Now, suppose we tried to offer a proof that vindicates this argument:
\begin{proof}
\hypo{x}{\forall x Lxk}
\have{a}{Lkk} \Ae{x}
\have{y}{\forall x Lxx} \by{naughtily attempting to invoke $\forall$I}{a}
\end{proof}\noindent
This is not allowed, because `$k$' occurred already in an undischarged assumption, namely, on line 1. The crucial point is that, if we have made any assumptions about the object we are working with, then we are not reasoning generally enough to license $\forall$I.
Although the name may not occur in any \emph{undischarged} assumption, it may occur in a \emph{discharged} assumption. That is, it may occur in a subproof that we have already closed. For example, this is just fine:
\begin{proof}
\open
\hypo{f1}{Gd}
\have{f2}{Gd}\by{R}{f1}
\close
\have{ff}{Gd \eif Gd}\ci{f1-f2}
\have{zz}{\forall z(Gz \eif Gz)}\Ai{ff}
\end{proof}
This tells us that `$\forall z (Gz \eif Gz)$' is a \emph{theorem}. And that is as it should be.
I should emphasise one last point. As per the conventions of \S\ref{s:MainLogicalOperatorQuantifier}, the use of $\forall$I requires that we are replacing \emph{every} instance of the name \meta{c} in $\meta{A}(\ldots \meta{c}\ldots\meta{c}\ldots)$ with the variable \meta{x}. If we only replace \emph{some} names and not others, we end up `proving' silly things. For example, consider the argument:
\begin{quote}
Everyone is as old as themselves; so everyone is as old as Judi Dench.
\end{quote}
We might symbolise this as follows:
$$\forall x Oxx \therefore \forall x Oxd$$
But now suppose we tried to \emph{vindicate} this terrible argument with the following:
\begin{proof}
\hypo{x}{\forall x Oxx}
\have{a}{Odd}\Ae{x}
\have{y}{\forall x Oxd}\by{naughtily attempting to invoke $\forall$I}{a}
\end{proof}
Fortunately, our rules do not allow for us to do this: the attempted proof is banned, since it doesn't replace \emph{every} occurrence of `$d$' in line $2$ with an `$x$'.
\section{Existential elimination}
Suppose we know that \emph{something} is F. The problem is that simply knowing this does not tell us which thing is F. So it would seem that from `$\exists x Fx$' we cannot immediately conclude `$Fa$', `$Fe_{23}$', or any other substitution instance of the sentence. What can we do?
Suppose we know that something is F, and that everything which is F is G. In (almost) natural English, we might reason thus:
\begin{quote}
Since something is F, there is some particular thing which is an F. We do not know anything about it, other than that it's an F, but for convenience, let's call it `obbie'. So: obbie is F. Since everything which is F is G, it follows that obbie is G. But since obbie is G, it follows that something is G. And nothing depended on which object, exactly, obbie was. So, something is G.
\end{quote}
We might try to capture this reasoning pattern in a proof as follows:
\begin{proof}
\hypo{es}{\exists x Fx}
\hypo{ast}{\forall x(Fx \eif Gx)}
\open
\hypo{s}{Fo}
\have{st}{Fo \eif Go}\Ae{ast}
\have{t}{Go} \ce{st, s}
\have{et1}{\exists x Gx}\Ei{t}
\close
\have{et2}{\exists x Gx}\Ee{es,s-et1}
\end{proof}\noindent
Breaking this down: we started by writing down our assumptions. At line 3, we made an additional assumption: `$Fo$'. This was just a substitution instance of `$\exists x Fx$'. On this assumption, we established `$\exists x Gx$'. But note that we had made no \emph{special} assumptions about the object named by `$o$'; we had \emph{only} assumed that it satisfies `$Fx$'. So nothing depends upon which object it is. And line 1 told us that \emph{something} satisfies `$Fx$'. So our reasoning pattern was perfectly general. We can discharge the specific assumption `$Fo$', and simply infer `$\exists x Gx$' on its own.
Putting this together, we obtain the existential elimination rule ($\exists$E):
\factoidbox{
\begin{proof}
\have[m]{a}{\exists \meta{x}\meta{A}(\ldots \meta{x} \ldots \meta{x}\ldots)}
\open
\hypo[i]{b}{\meta{A}(\ldots \meta{c} \ldots \meta{c}\ldots)}
\have[j]{c}{\meta{B}}
\close
\have[\ ]{d}{\meta{B}} \Ee{a,b-c}
\end{proof}
\meta{c} must not occur in any assumption undischarged before line $i$\\
\meta{c} must not occur in $\exists \meta{x}\meta{A}(\ldots \meta{x} \ldots \meta{x}\ldots)$\\
\meta{c} must not occur in \meta{B}}
As with universal introduction, the constraints are extremely important. To see why, consider the following terrible argument:
\begin{quote}
Tim Button is a lecturer. Someone is not a lecturer. So Tim Button is both a lecturer and not a lecturer.
\end{quote}
We might symbolise this obviously invalid inference pattern as follows:
$$Lb, \exists x \enot Lx \therefore Lb \eand \enot Lb$$
Now, suppose we tried to offer a proof that vindicates this argument:
\begin{proof}
\hypo{f}{Lb}
\hypo{nf}{\exists x \enot Lx}
\open
\hypo{na}{\enot Lb}
\have{con}{Lb \eand \enot Lb}\ae{f, na}
\close
\have{econ1}{Lb \eand \enot Lb}\by{naughtily attempting to invoke $\exists$E }{nf, na-con}
\end{proof}
The last line of the proof is not allowed. The name that we used in our substitution instance for `$\exists x \enot Lx$' on line 3, namely `$b$', occurs in line 4. And this would be no better:
\begin{proof}
\hypo{f}{Lb}
\hypo{nf}{\exists x \enot Lx}
\open
\hypo{na}{\enot Lb}
\have{con}{Lb \eand \enot Lb}\ae{f, na}
\have{con1}{\exists x (Lx \eand \enot Lx)}\Ei{con}
\close
\have{econ1}{\exists x (Lx \eand \enot Lx)}\by{naughtily attempting to invoke $\exists$E }{nf, na-con1}
\end{proof}
The last line is still not allowed. For the name that we used in our substitution instance for `$\exists x \enot Lx$', namely `$b$', occurs in an undischarged assumption, namely line 1.
The moral of the story is this. \emph{If you want to squeeze information out of an existential quantifier, choose a new name for your substitution instance.} That way, you can guarantee that you meet all the constraints on the rule for $\exists$E.
\practiceproblems
\problempart
Explain why these two `proofs' are incorrect. Also, provide interpretations which invalidate the fallacious argument forms that the `proofs' enshrine:
\begin{multicols}{2}
\begin{proof}
\hypo{Rxx}{\forall x Rxx}
\have{Raa}{Raa}\Ae{Rxx}
\have{Ray}{\forall y Ray}\Ai{Raa}
\have{Rxy}{\forall x \forall y Rxy}\Ai{Ray}
\end{proof}
\begin{proof}
\hypo{AE}{\forall x \exists y Rxy}
\have{E}{\exists y Ray}\Ae{AE}
\open
\hypo{ass}{Raa}
\have{Ex}{\exists x Rxx}\Ei{ass}
\close
\have{con}{\exists x Rxx}\Ee{E, ass-Ex}
\end{proof}
\end{multicols}
\problempart
\label{pr.justifyFOLproof}
The following three proofs are missing their citations (rule and line numbers). Add them, to turn them into bona fide proofs.
\begin{proof}
\hypo{p1}{\forall x\exists y(Rxy \eor Ryx)}
\hypo{p2}{\forall x\enot Rmx}
\have{3}{\exists y(Rmy \eor Rym)}{}
\open
\hypo{a1}{Rma \eor Ram}
\have{a2}{\enot Rma}{}
\have{a3}{Ram}{}
\have{a4}{\exists x Rxm}{}
\close
\have{n}{\exists x Rxm} {}
\end{proof}
\begin{multicols}{2}
\begin{proof}
\hypo{1}{\forall x(\exists yLxy \eif \forall zLzx)}
\hypo{2}{Lab}
\have{3}{\exists y Lay \eif \forall zLza}{}
\have{4}{\exists y Lay} {}
\have{5}{\forall z Lza} {}
\have{6}{Lca}{}
\have{7}{\exists y Lcy \eif \forall zLzc}{}
\have{8}{\exists y Lcy}{}
\have{9}{\forall z Lzc}{}
\have{10}{Lcc}{}
\have{11}{\forall x Lxx}{}
\end{proof}
\begin{proof}
\hypo{a}{\forall x(Jx \eif Kx)}
\hypo{b}{\exists x\forall y Lxy}
\hypo{c}{\forall x Jx}
\open
\hypo{2}{\forall y Lay}
\have{3}{Laa}{}
\have{d}{Ja}{}
\have{e}{Ja \eif Ka}{}
\have{f}{Ka}{}
\have{4}{Ka \eand Laa}{}
\have{5}{\exists x(Kx \eand Lxx)}{}
\close
\have{j}{\exists x(Kx \eand Lxx)}{}
\end{proof}
\end{multicols}
\problempart
\label{pr.BarbaraEtc.proof1}
In \S\ref{s:MoreMonadic} problem part A, we considered fifteen syllogistic figures of Aristotelian logic. Provide proofs for each of the argument forms. NB: You will find it \emph{much} easier if you symbolise (for example) `No F is G' as `$\forall x (Fx \eif \enot Gx)$'.
\
\problempart
\label{pr.BarbaraEtc.proof2}
Aristotle and his successors identified other syllogistic forms which depended upon `existential import'. Symbolise each of these argument forms in FOL and offer proofs.
\begin{ebullet}
\item \textbf{Barbari.} Something is H. All G are F. All H are G. So: Some H is F.
\item \textbf{Celaront.} Something is H. No G are F. All H are G. So: Some H is not F.
\item \textbf{Cesaro.} Something is H. No F are G. All H are G. So: Some H is not F.
\item \textbf{Camestros.} Something is H. All F are G. No H are G. So: Some H is not F.
\item \textbf{Felapton.} Something is G. No G are F. All G are H. So: Some H is not F.
\item \textbf{Darapti.} Something is G. All G are F. All G are H. So: Some H is F.
\item \textbf{Calemos.} Something is H. All F are G. No G are H. So: Some H is not F.
\item \textbf{Fesapo.} Something is G. No F is G. All G are H. So: Some H is not F.
\item \textbf{Bamalip.} Something is F. All F are G. All G are H. So: Some H are F.
\end{ebullet}
\problempart
\label{pr.someFOLproofs}
Provide a proof of each claim.
\begin{earg}
\item $\proves \forall x Fx \eor \enot \forall x Fx$
\item $\proves\forall z (Pz \eor \enot Pz)$
\item $\forall x(Ax\eif Bx), \exists x Ax \proves \exists x Bx$
\item $\forall x(Mx \eiff Nx), Ma\eand\exists x Rxa\proves \exists x Nx$
\item $\forall x \forall y Gxy\proves\exists x Gxx$
\item $\proves\forall x Rxx\eif \exists x \exists y Rxy$
\item $\proves\forall y \exists x (Qy \eif Qx)$
\item $Na \eif \forall x(Mx \eiff Ma), Ma, \enot Mb\proves \enot Na$
\item $\forall x \forall y (Gxy \eif Gyx) \proves \forall x\forall y (Gxy \eiff Gyx)$
\item $\forall x(\enot Mx \eor Ljx), \forall x(Bx\eif Ljx), \forall x(Mx\eor Bx)\proves \forall xLjx$
\end{earg}
\problempart
\label{pr.likes}
Write a symbolisation key for the following argument, symbolise it, and prove it:
\begin{quote}
There is someone who likes everyone who likes everyone that she likes. Therefore, there is someone who likes herself.
\end{quote}
\problempart
\label{pr.FOLequivornot}
For each of the following pairs of sentences: If they are provably equivalent, give proofs to show this. If they are not, construct an interpretation to show that they are not logically equivalent.
\begin{earg}
\item $\forall x Px \eif Qc, \forall x (Px \eif Qc)$
\item $\forall x\forall y \forall z Bxyz, \forall x Bxxx$
\item $\forall x\forall y Dxy, \forall y\forall x Dxy$
\item $\exists x\forall y Dxy, \forall y\exists x Dxy$
\item $\forall x (Rca \eiff Rxa), Rca \eiff \forall x Rxa$
\end{earg}
\problempart
\label{pr.FOLvalidornot}
For each of the following arguments: If it is valid in FOL, give a proof. If it is invalid, construct an interpretation to show that it is invalid.
\begin{earg}
\item $\exists y\forall x Rxy \therefore \forall x\exists y Rxy$
\item $\exists x(Px \eand \enot Qx) \therefore \forall x(Px \eif \enot Qx)$
\item $\forall x(Sx \eif Ta), Sd \therefore Ta$
\item $\forall x(Ax\eif Bx), \forall x(Bx \eif Cx) \therefore \forall x(Ax \eif Cx)$
\item $\exists x(Dx \eor Ex), \forall x(Dx \eif Fx) \therefore \exists x(Dx \eand Fx)$
\item $\forall x\forall y(Rxy \eor Ryx) \therefore Rjj$
\item $\exists x\exists y(Rxy \eor Ryx) \therefore Rjj$
\item $\forall x Px \eif \forall x Qx, \exists x \enot Px \therefore \exists x \enot Qx$
\end{earg}
\chapter{Conversion of quantifiers}\label{s:CQ}
In this section, we shall add some additional rules to the basic rules of the previous section. These govern the interaction of quantifiers and negation.
In \S\ref{s:FOLBuildingBlocks}, we noted that $\enot\exists x\meta{A}$ is logically equivalent to $\forall x \enot\meta{A}$. We shall add some rules to our proof system that govern this. In particular, we add:
\factoidbox{
\begin{proof}
\have[m]{a}{\forall \meta{x} \enot\meta{A}}
\have[\ ]{con}{\enot \exists \meta{x} \meta{A}}\cq{a}
\end{proof}}
and
\factoidbox{
\begin{proof}
\have[m]{a}{ \enot \exists \meta{x} \meta{A}}
\have[\ ]{con}{\forall \meta{x} \enot \meta{A}}\cq{a}
\end{proof}}
Equally, we add:
\factoidbox{
\begin{proof}
\have[m]{a}{\exists \meta{x}\enot \meta{A}}
\have[\ ]{con}{\enot \forall \meta{x} \meta{A}}\cq{a}
\end{proof}}
and
\factoidbox{
\begin{proof}
\have[m]{a}{\enot \forall \meta{x} \meta{A}}
\have[\ ]{con}{\exists \meta{x} \enot \meta{A}}\cq{a}
\end{proof}}
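To see one of these rules in action, here is a short proof, offered purely as an illustration, which chains a CQ rule with $\forall$E to show that $\enot \exists x Fx \proves \enot Fa$:
\begin{proof}
\hypo{a}{\enot \exists x Fx}
\have{b}{\forall x \enot Fx}\cq{a}
\have{c}{\enot Fa}\Ae{b}
\end{proof}
The CQ step simply converts the negated existential into a universally quantified negation, which we can then instantiate as usual.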
\practiceproblems
\problempart
Show that the following are jointly contrary:
\begin{earg}
\item $Sa\eif Tm, Tm \eif Sa, Tm \eand \enot Sa$
\item $\enot\exists x Rxa, \forall x \forall y Ryx$
\item $\enot\exists x \exists y Lxy, Laa$
\item $\forall x(Px \eif Qx), \forall z(Pz \eif Rz), \forall y Py, \enot Qa \eand \enot Rb$
\end{earg}
\problempart
Show that each pair of sentences is provably equivalent:
\begin{earg}
\item $\forall x (Ax\eif \enot Bx), \enot\exists x(Ax \eand Bx)$
\item $\forall x (\enot Ax\eif Bd), \forall x Ax \eor Bd$
\end{earg}
\problempart
In \S\ref{s:MoreMonadic}, I considered what happens when we move quantifiers `across' various logical operators. Show that each pair of sentences is provably equivalent:
\begin{earg}
\item $\forall x (Fx \eand Ga), \forall x Fx \eand Ga$
\item $\exists x (Fx \eor Ga), \exists x Fx \eor Ga$
\item $\forall x(Ga \eif Fx), Ga \eif \forall x Fx$
\item $\forall x(Fx \eif Ga), \exists x Fx \eif Ga$
\item $\exists x(Ga \eif Fx), Ga \eif \exists x Fx$
\item $\exists x(Fx \eif Ga), \forall x Fx \eif Ga$
\end{earg}
NB: the variable `$x$' does not occur in `$Ga$'. When all the quantifiers occur at the beginning of a sentence, that sentence is said to be in \emph{prenex normal form}. These equivalences are sometimes called \emph{prenexing rules}, since they give us a means for putting any sentence into prenex normal form.
\chapter{Rules for identity}
In \S\ref{s:Interpretations}, I mentioned the philosophically contentious thesis of the \emph{identity of indiscernibles}. This is the claim that objects which are indiscernible in every way are, in fact, identical to each other. I also mentioned that we will not subscribe to this thesis. It follows that, no matter how much you tell me about two objects, I cannot prove that they are identical. Unless, of course, you tell me that the two objects are, in fact, identical. But then the proof will hardly be very illuminating.
The general point, though, is that \emph{no sentences} which do not already contain the identity predicate could justify an inference to `$a=b$'. So our identity introduction rule cannot allow us to infer to an identity claim containing two \emph{different} names.
However, every object is identical to itself. No premises, then, are required in order to conclude that something is identical to itself. So this will be the identity introduction rule:
\factoidbox{
\begin{proof}
\have[\ \,\,\,]{x}{\meta{c}=\meta{c}} \by{=I}{}
\end{proof}}
Notice that this rule does not require referring to any prior lines of the proof. For any name \meta{c}, you can write $\meta{c}=\meta{c}$ at any point, with only the {=}I rule as justification.
Our elimination rule is more fun. If you have established `$a=b$', then anything that is true of the object named by `$a$' must also be true of the object named by `$b$'. For any sentence with `$a$' in it, you can replace some or all of the occurrences of `$a$' with `$b$' and produce an equivalent sentence. For example, from `$Raa$' and `$a = b$', you are justified in inferring `$Rab$', `$Rba$' or `$Rbb$'. More generally:
\factoidbox{\begin{proof}
\have[m]{e}{\meta{a}=\meta{b}}
\have[n]{a}{\meta{A}(\ldots \meta{a} \ldots \meta{a}\ldots)}
\have[\ ]{ea1}{\meta{A}(\ldots \meta{b} \ldots \meta{a}\ldots)} \by{=E}{e,a}
\end{proof}}
The notation here is as for $\exists$I. So $\meta{A}(\ldots \meta{a} \ldots \meta{a}\ldots)$ is a formula containing the name $\meta{a}$, and $\meta{A}(\ldots \meta{b} \ldots \meta{a}\ldots)$ is a formula obtained by replacing one or more instances of the name $\meta{a}$ with the name $\meta{b}$. Lines $m$ and $n$ can occur in either order, and do not need to be adjacent, but we always cite the statement of identity first. Symmetrically, we allow:
\factoidbox{\begin{proof}
\have[m]{e}{\meta{a}=\meta{b}}
\have[n]{a}{\meta{A}(\ldots \meta{b} \ldots \meta{b}\ldots)}
\have[\ ]{ea2}{\meta{A}(\ldots \meta{a} \ldots \meta{b}\ldots)} \by{=E}{e,a}
\end{proof}}
This rule is sometimes called \emph{Leibniz's Law}, after Gottfried Leibniz.
To see the rules in action, we shall prove some quick results. First, we shall prove that identity is \emph{symmetric}:
\begin{proof}
\open
\hypo{ab}{a = b}
\have{aa}{a = a}\by{=I}{}
\have{ba}{b = a}\by{=E}{ab, aa}
\close
\have{abba}{a = b \eif b =a}\ci{ab-ba}
\have{ayya}{\forall y (a = y \eif y = a)}\Ai{abba}
\have{xyyx}{\forall x \forall y (x = y \eif y = x)}\Ai{ayya}
\end{proof}
We obtain line 3 by replacing one instance of `$a$' in line 2 with an instance of `$b$'; this is justified given `$a= b$'.
Second, we shall prove that identity is \emph{transitive}:
\begin{proof}
\open
\hypo{abc}{a = b \eand b = c}
\have{ab}{a = b}\ae{abc}
\have{bc}{b = c}\ae{abc}
\have{ac}{a = c}\by{=E}{ab, bc}
\close
\have{con}{(a = b \eand b =c) \eif a = c}\ci{abc-ac}
\have{conz}{\forall z((a = b \eand b = z) \eif a = z)}\Ai{con}
\have{cony}{\forall y\forall z((a = y \eand y = z) \eif a = z)}\Ai{conz}
\have{conx}{\forall x \forall y \forall z((x = y \eand y = z) \eif x = z)}\Ai{cony}
\end{proof}
We obtain line 4 by replacing `$b$' in line 3 with `$a$'; this is justified given `$a= b$'.
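Finally, as a quick further illustration of {=}I and $\forall$I working together, we can note that identity is \emph{reflexive} in the fully general sense, since `$\forall x\ x = x$' is a theorem:
\begin{proof}
\have{aa}{a = a}\by{=I}{}
\have{xx}{\forall x\ x = x}\Ai{aa}
\end{proof}
Since the proof rests on no assumptions at all, the name `$a$' occurs in no undischarged assumption, and so the use of $\forall$I is legitimate.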
\practiceproblems
\problempart
\label{pr.identity}
Provide a proof of each claim.
\begin{earg}
\item $Pa \eor Qb, Qb \eif b=c, \enot Pa \proves Qc$
\item $m=n \eor n=o, An \proves Am \eor Ao$
\item $\forall x\ x=m, Rma\proves \exists x Rxx$
\item $\forall x\forall y(Rxy \eif x=y)\proves Rab \eif Rba$
\item $\enot \exists x\enot x = m \proves \forall x\forall y (Px \eif Py)$
\item $\exists x Jx, \exists x \enot Jx\proves \exists x \exists y\ \enot x = y$
\item $\forall x(x=n \eiff Mx), \forall x(Ox \eor \enot Mx)\proves On$
\item $\exists x Dx, \forall x(x=p \eiff Dx)\proves Dp$
\item $\exists x\bigl[(Kx \eand \forall y(Ky \eif x=y)) \eand Bx\bigr], Kd\proves Bd$
\item $\proves Pa \eif \forall x(Px \eor \enot x = a)$
\end{earg}
\problempart
Show that the following are provably equivalent:
\begin{ebullet}
\item $\exists x \bigl([Fx \eand \forall y (Fy \eif x = y)] \eand x = n\bigr)$
\item $Fn \eand \forall y (Fy \eif n= y)$
\end{ebullet}
And hence that both have a decent claim to symbolise the English sentence `Nick is the F'.
\
\problempart
In \S\ref{sec.identity}, I claimed that the following are logically equivalent symbolisations of the English sentence `there is exactly one F':
\begin{ebullet}
\item $\exists x Fx \eand \forall x \forall y \bigl[(Fx \eand Fy) \eif x = y\bigr]$
\item $\exists x \bigl[Fx \eand \forall y (Fy \eif x = y)\bigr]$
\item $\exists x \forall y (Fy \eiff x = y)$
\end{ebullet}
Show that they are all provably equivalent. (\emph{Hint}: to show that three claims are provably equivalent, it suffices to show that the first proves the second, the second proves the third and the third proves the first; think about why.)
\
\problempart
Symbolise the following argument
\begin{quote}
There is exactly one F. There is exactly one G. Nothing is both F and G. So: there are exactly two things that are either F or G.
\end{quote}
And offer a proof of it.
%\begin{ebullet}
%\item $\exists x \bigl[Fx \eand \forall y (Fy \eif x = y)\bigr], \exists x \bigl[Gx \eand \forall y ( Gy \eif x = y)\bigr], \forall x (\enot Fx \eor \enot Gx) \proves \exists x \exists y \bigl[\enot x = y \eand \forall z ((Fz \eor Gz) \eif (x = y \eor x = z))\bigr]$
%\end{ebullet}
\chapter{Derived rules}\label{s:DerivedFOL}
As in the case of TFL, I first introduced some rules for FOL as basic (in \S\ref{s:BasicFOL}), and then added some further rules for conversion of quantifiers (in \S\ref{s:CQ}). In fact, the CQ rules should be regarded as \emph{derived} rules, for they can be derived from the \emph{basic} rules of \S\ref{s:BasicFOL}. (The point here is as in \S\ref{s:Derived}.) Here is a justification for the first CQ rule:
\begin{proof}
\hypo[m]{An}{\forall x \enot A x}
\open
\hypo[k]{E}{\exists x Ax}
\open
\hypo{c}{Ac}%\by{for $\exists$E}{}
\have{nc}{\enot Ac}\Ae{An}
\have{red}{\ered}\ri{c,nc}
\close
\have{red2}{\ered}\Ee{E,c-red}
\close
\have{dada}{\enot \exists x Ax}\ni{E-red2}
\end{proof}
%You will note that on line 3 I have written `for $\exists$E'. This is not technically a part of the proof. It is just a reminder---to me and to you---of why I have bothered to introduce `$\enot Ac$' out of the blue. You might find it helpful to add similar annotations to assumptions when performing proofs. But do not add annotations on lines other than assumptions: the proof requires its own citation, and your annotations will clutter it.
Here is a justification of the second CQ rule:
\begin{proof}
\hypo[m]{nEna}{\exists x \enot Ax}
\open
\hypo[k]{Aa}{\forall x Ax}
\open
\hypo{nac}{\enot Ac}%\by{for $\exists$E}{}
\have{a}{Ac}\Ae{Aa}
\have{con}{\ered}\ri{a,nac}
\close
\have{con1}{\ered}\Ee{nEna, nac-con}
\close
\have{dada}{\enot \forall x Ax}\ni{Aa-con1}
\end{proof}
This explains why the CQ rules can be treated as derived. Similar justifications can be offered for the other two CQ rules.
\practiceproblems
\problempart
Offer proofs which justify the addition of the third and fourth CQ rules as derived rules.
\chapter{Proof-theoretic concepts and semantic concepts}
We have used two different turnstiles in this book. This:
$$\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n \proves \meta{C}$$
means that there is some proof which ends with $\meta{C}$ and whose only undischarged assumptions are among $\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n$. This is a \emph{proof-theoretic notion}. By contrast, this:
$$\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n \entails \meta{C}$$
means that no valuation (or interpretation) makes all of $\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n$ true and $\meta{C}$ false. This concerns assignments of truth and falsity to sentences. It is a \emph{semantic notion}.
I cannot emphasise enough that these are different notions. But I can emphasise it a bit more: \emph{They are different notions.}
At the risk of repetition: \emph{They are different notions.}
Once you have fully internalised this point, continue reading.
Although our semantic and proof-theoretic notions are different, there is a deep connection between them. To explain this connection, I shall start by considering the relationship between logical truths and theorems.
To show that a sentence is a theorem, you need only perform a proof. Granted, it may be hard to produce a twenty line proof, but it is not so hard to check each line of the proof and confirm that it is legitimate; and if each line of the proof individually is legitimate, then the whole proof is legitimate. Showing that a sentence is a logical truth, though, requires reasoning about all possible interpretations. Given a choice between showing that a sentence is a theorem and showing that it is a logical truth, it would be easier to show that it is a theorem.
Contrariwise, to show that a sentence is \emph{not} a theorem is hard. We would need to reason about all (possible) proofs. That is very difficult. But to show that a sentence is not a logical truth, you need only construct an interpretation in which the sentence is false. Granted, it may be hard to come up with the interpretation; but once you have done so, it is relatively straightforward to check what truth value it assigns to a sentence. Given a choice between showing that a sentence is not a theorem and showing that it is not a logical truth, it would be easier to show that it is not a logical truth.
Fortunately, \emph{a sentence is a theorem if and only if it is a logical truth}. As a result, if we provide a proof of $\meta{A}$ on no assumptions, and thus show that $\meta{A}$ is a theorem, i.e.\ ${}\proves \meta{A}$, we can legitimately infer that $\meta{A}$ is a logical truth, i.e., $\entails\meta{A}$. Similarly, if we construct a model in which \meta{A} is false and thus show that it is not a logical truth, i.e.\ $\nentails \meta{A}$, it follows that \meta{A} is not a theorem, i.e.\ $\nproves \meta{A}$.
More generally, we have the following powerful result:
$$\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n \proves\meta{B} \textbf{ iff }\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n \entails\meta{B}$$
This shows that, whilst provability and entailment are \emph{different} notions, they are extensionally equivalent. As such:
\begin{ebullet}
\item An argument is \emph{valid} iff \emph{the conclusion can be proved from the premises}.
\item Two sentences are \emph{logically equivalent} iff they are \emph{provably equivalent}.
\item Sentences are \emph{jointly consistent} iff they are \emph{not jointly contrary}.
\end{ebullet}
For this reason, you can pick and choose when to think in terms of proofs and when to think in terms of valuations/interpretations, doing whichever is easier for a given task. The table on the next page summarises which is (usually) easier.
It is intuitive that provability and semantic entailment should agree. But---let me repeat this---do not be fooled by the similarity of the symbols `$\entails$' and `$\proves$'. These two symbols have very different meanings. And the fact that provability and semantic entailment agree is not an easy result to come by.
In fact, demonstrating that provability and semantic entailment agree is, very decisively, the point at which introductory logic becomes intermediate logic. Agreement, in the case of TFL, is covered in a little sequel to this book, \texttt{Metatheory}. Agreement, in the case of FOL, is one of the first big results in mathematical logic.
\begin{sidewaystable}
\begin{center}
\begin{tabular*}{\textwidth}{p{.25\textheight}p{.325\textheight}p{.325\textheight}}
& \textbf{Yes} & \textbf{No}\\
\\
Is \meta{A} a \textbf{logical truth}?
& give a proof which shows $\proves\meta{A}$
& give an interpretation in which \meta{A} is false\\
\\
Is \meta{A} a \textbf{contradiction}? &
give a proof which shows $\proves\enot\meta{A}$ &
give an interpretation in which \meta{A} is true\\
\\
%Is \meta{A} contingent? &
%give two interpretations, one in which \meta{A} is true and another in which \meta{A} is false & give a proof which either shows $\proves\meta{A}$ or $\proves\enot\meta{A}$\\
%\\
Are \meta{A} and \meta{B} \textbf{equivalent}? &
give two proofs, one for $\meta{A}\proves\meta{B}$ and one for $\meta{B}\proves\meta{A}$
& give an interpretation in which \meta{A} and \meta{B} have different truth values\\
\\
Are $\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n$ \textbf{jointly consistent}?
& give an interpretation in which all of $\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n$ are true
& prove a contradiction from assumptions $\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n$\\
\\
Is $\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n \therefore \meta{C}$ \textbf{valid}?
& give a proof with assumptions $\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n$ and concluding with \meta{C}
& give an interpretation in which each of $\meta{A}_1, \meta{A}_2, \ldots, \meta{A}_n$ is true and \meta{C} is false\\
\end{tabular*}
\end{center}
\end{sidewaystable}
\chapter{Probability}
\section{Probability space}
\index{probability measure}%
\index{probability!measure}%
\index{measure!probability}%
A \emph{probability measure} \(\Pr : 2^\Omega \to \Real\) on a set \(\Omega\) is a measure where \(\Pr(\Omega) = 1\).
\index{probability space}%
\index{probability!space}%
\index{space!probability}%
A \emph{probability space} is a measure space \((\Omega,\Pr)\) where
\(\Omega\) is a set called the
\index{sample space}%
\index{sample!space}%
\index{space!sample}%
\emph{sample space}
and \(\Pr : 2^\Omega \to \Real\) is a probability measure.
\index{event}%
An \emph{event} is a subset of a sample space.
Relationship between two events \(A\) and \(B\):
They are
\index{events!equiprobable}%
\index{equiprobable events}%
\emph{equiprobable} iff \(\Pr(A) = \Pr(B)\),
\index{mutually exclusive events}%
\index{events!mutually exclusive}%
\emph{mutually exclusive} iff \(A \cap B = \emptyset\),
\index{independent events}%
\index{events!independent}%
\emph{independent} iff \(\Pr(A \cap B) = \Pr(A) \cdot \Pr(B)\).
\paragraph{Example}
A fair six-faced die roll can be modeled with
\(\Omega = \{1,2,3,4,5,6\}\) and \(\Pr(\{1\}) = \Pr(\{2\}) = \Pr(\{3\}) = \Pr(\{4\}) = \Pr(\{5\}) = \Pr(\{6\}) = 1/6\).
The probability of rolling an odd face is \(\Pr(\{1,3,5\}) = \Pr(\{1\}) + \Pr(\{3\}) + \Pr(\{5\}) = 3/6\).
\section{Density}
\index{density of a measure}%
The \emph{density of the measure \(m : 2^U \to \Real\)}
is the \(f : U \to \Real\) where \(\forall A \subset U : \int_A f = m(A)\).
\index{probability density function}%
A \emph{probability density function} (pdf) is a function \(f : \Real \to \Real\) where \(\int_\Real f = 1\).
Let \((\Omega,\Pr)\) be a probability space.
The density of \(\Pr\) is a pdf.
The density of that space is the density of its measure.
\section{Distribution}
\index{distribution function}%
\index{cumulative distribution function}%
The \emph{distribution function} of \(\Pr:2^\Real\to\Real\) is \(F(a) = \Pr(\{ x~|~x \le a\})\),
which is also called the \emph{cumulative distribution function} of \(\Pr\).
Relationship between density and distribution:
If \(\Pr : 2^\Real \to \Real\),
\(f\) is the density of \(\Pr\), and
\(F\) is the distribution of \(\Pr\),
then \(F(a) = \int_{(-\infty,a]} f\).
We say that \(f\) is the pdf of \(F\).
\section{Conditional probability}
\index{conditional probability}%
\index{probability!conditional}%
The \emph{conditional probability of \(R\) given \(C\)}
is \(\Pr(R|C) = \Pr(R \cap C) / \Pr(C)\).
\(C\)-conditionalization maps \((\Omega,\Pr)\) to \((C,Q)\)
where \(Q(R) = \Pr(R|C)\).
Let \(f\) be the density of \(\Pr\).
Recall that \(\Pr(A) = \int_A f\).
Therefore \(\Pr(R|C) = \Pr(R \cap C)/\Pr(C) = \int_{R \cap C} f / \int_C f\).
\index{conditional density}%
\index{density!conditional}%
The conditional density \(f(r|C)\) satisfies \(\int_R (r \to f(r|C)) = \Pr(R|C) = Q(R) = \int_{R \cap C} f / \int_C f\).
The right side of the vertical bar is a set.
\index{conditional distribution}%
\index{distribution!conditional}%
The conditional distribution
\(F(r \le a|C) = \int_{(-\infty,a]} (r \to f(r|C))\).
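\paragraph{Example}
As a quick check of the definition, revisit the fair die above.
Let \(C = \{4,5,6\}\) (the roll is at least 4) and \(R = \{1,3,5\}\) (the roll is odd).
Then \(\Pr(R|C) = \Pr(R \cap C) / \Pr(C) = \Pr(\{5\}) / \Pr(\{4,5,6\}) = (1/6)/(3/6) = 1/3\).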
\section{Random variable}
\index{random variable}%
\index{random variable!distribution of}%
\index{distribution!of random variable}%
A \emph{random variable} \(X : \Omega \to V\)
is a measurable function.
\index{real random variable}%
\index{random variable!real}%
Iff \(V = \Real\), then \(X\) is a \emph{real random variable}.
\index{probability space!of random variable}%
Such \(X\) maps
\((\Omega,\Pr)\) to \((V,Q)\)
where \(Q(A) = \Pr(\{\omega~|~X(\omega)\in A\}) = \Pr(X\in A)\).
Note that \(\Pr(X \in A)\) is an abuse of notation.
The \emph{distribution of \(X\)} is the distribution of \(Q\),
that is \(F(a) = \Pr(X \le a)\).
\(X \sim D\) means that \emph{the distribution of \(X\) is \(D\)}.
\index{probability mass function}%
A \emph{probability mass function} (pmf) \(f : \Omega \to \Real\)
is a function where \(\forall A \subseteq \Omega : \sum_{x \in A} f(x) = \Pr(A)\).
\index{probability density function}%
A \emph{probability density function} (pdf) \(f : \Omega \to \Real\)
is a function where \(\forall A \subseteq \Omega : \int_A f = \Pr(A)\).
A statement about pdfs can usually be translated into
a statement about pmfs by replacing integral with summation.
If \(A \subseteq \Real\), then a pmf \(f : A \to \Real\)
can be turned into a pdf \(g : \Real \to \Real\)
where
\(g(x) = \sum_{c \in A} f(c) \cdot \delta(x-c)\)
where
\index{Dirac delta function}%
\index{function!Dirac delta}%
\(\delta\) is the \emph{Dirac delta function}
where
\( \int_X \delta = [0 \in X] \)
where \([F]\) is 0 iff \(F\) is false and 1 iff \(F\) is true.
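For example, the pmf of a fair coin, \(f(0) = f(1) = 1/2\) on \(A = \{0,1\}\), becomes the pdf \(g(x) = \frac{1}{2}\delta(x) + \frac{1}{2}\delta(x-1)\).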
\paragraph{Iid}
Two random variables \(X\) and \(Y\) are
\index{random variables!independent}%
\index{independent random variables}%
\emph{independent} iff \(\forall A, B : \Pr(X \in A \wedge Y \in B) = \Pr(X \in A) \cdot \Pr(Y \in B)\),
\index{random variables!identically distributed}%
\index{identically-distributed random variables}%
\emph{identically distributed} iff
\(\forall A : \Pr(X \in A) = \Pr(Y \in A)\),
\index{random variables!independent and identically-distributed}%
\index{independent and identically-distributed random variables}%
\index{random variables!iid}%
\index{iid!random variables}%
\emph{iid} iff they are independent and identically-distributed.
\paragraph{Example}
If \(\Omega\) is the set of all people on Earth,
then \(X(\omega)\) might be the age of the person \(\omega\) in years.
If \(\Omega\) is the set of all cookies produced by a machine,
then \(X(\omega)\) might be the sugar amount in the cookie \(\omega\) in grams.
If \(\Omega\) is the set of all tasks,
then \(X(\omega)\) might be the time required to finish the task \(\omega\) in seconds.
\section{Joint and marginal}
\index{joint probability space}%
\index{probability space!joint}%
\index{probability!joint}%
Let there be \(n\) probability spaces \(S_k = (\Omega_k,\Pr_k)\).
The \emph{joint probability space} of \(S_1,\ldots,S_n\) is
\(S = (\Omega,\Pr)\)
where \(\Omega = \prod_{k=1}^n \Omega_k\).
Define the set \(W_k(A) = \{ \omega ~|~ \omega \in \Omega, ~ \omega_k \in A \}\).
Relationship: \(\Pr_k(A) = \Pr(W_k(A))\).
We say that \(\Pr_k\) is obtained by \emph{retaining} the \(k\)th component of \(S\).
Let \(N = \{1,\ldots,n\}\) be the set of the indexes of \(S\)
and let \(K \subseteq N\) be the set of the indexes we want to retain.
The result of
\index{retain}%
\emph{retaining} \(K\) in \(S\)
is \((\Omega,\Pr_K)\) where
\(\Pr_K(A) = \Pr\left(\bigcap_{k \in K} W_k(A_k)\right)\)
where \(A_k = \{ a_k ~|~ a \in A \} \).
\index{marginalize out}%
To \emph{marginalize out} \(K\) from \(S\) is to retain \(N-K\) in \(S\).
Note the relationship among ``retain'', ``marginalize out'', and ``all but'':
To retain \(x\) is to marginalize out all but \(x\),
and to marginalize out \(x\) is to retain all but \(x\).
\index{marginal probability space}%
\index{probability space!marginal}%
\index{probability!marginal}%
Marginal probability space?
Example?
Many random variables:
The notation \(\Pr(X_1\in A_1 \wedge \ldots \wedge X_n\in A_n)\) means
\(\Pr(\{(\omega_1,\ldots,\omega_n) ~|~ X_1(\omega_1) \in A_1, ~ \ldots, ~ X_n(\omega_n) \in A_n\})\)
where \(\Omega = \prod_{k=1}^n \Omega_k\)
and \(\forall k (X_k : \Omega_k \to V_k)\).
Such \((\Omega,\Pr)\) is a \emph{joint probability space}.
\section{Central tendency}
\index{average!weighted}%
\index{weighted average}%
The \emph{weighted average} of \(f:\Real\to\Real\) is
\(w(f) = \int_\Real (x \to f(x) \cdot x) / \int_\Real f\)
where \(\int_\Real f\) is the
\index{normalizing constant}%
\index{constant!normalizing}%
\emph{normalizing constant},
which is 1 iff \(f\) is a pdf.
\index{mean!of a pdf}%
The \emph{mean} of a pdf \(f\) is \(\mu(f) = w(f)\),
\index{mean!of a distribution}%
of a distribution \(D\) whose pdf is \(f\) is \(\mu(D) = \mu(f)\),
\index{mean!of a probability measure}%
of a probability measure \(\Pr\) whose distribution is \(D\) is
\(\mu(\Pr) = \mu(D)\),
\index{mean!of a probability space}%
of a probability space \(S = (\Real,\Pr)\) is \(\mu(S) = \mu(\Pr)\).
\index{expectation}%
The \emph{expectation} of a random variable \(X \sim D\) is \(\Expect{X} = \mu(D)\).
\index{linearity of expectation}%
\index{expectation!linearity of}%
Expectation is linear:
\(\Expect{aX} = a \Expect{X}\) and
\(\Expect{X+Y} = \Expect{X} + \Expect{Y}\).
\index{moment}%
The \emph{\(k\)th moment of \(X\)} is \(m_k(X) = \Expect{X^k}\)
where superscript means power and not composition.
Expectation is the first moment.
\section{Dispersion tendency}
\index{central moment}%
The \emph{\(k\)th central moment of \(X\)} is \(\mu_k(X) = \Expect{(X - \Expect{X})^k}\).
\index{variance}%
\emph{Variance} is the second central moment.
The variance of \(X\) is \(\sigma_X^2 = \mu_2(X)\).
\index{standard deviation}%
\emph{Standard deviation} is square root of variance.
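A useful identity follows from the linearity of expectation (together with the fact that the expectation of a constant is that constant):
\(\sigma_X^2 = \Expect{(X - \Expect{X})^2} = \Expect{X^2} - 2(\Expect{X})^2 + (\Expect{X})^2 = \Expect{X^2} - (\Expect{X})^2 = m_2(X) - m_1(X)^2\).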
\index{skewness}%
\emph{Skewness} is the third central moment divided by the cube of the standard deviation, that is \(\mu_3(X) / \sigma_X^3\).
\section{Mixed moments}
Covariance is \(Cov(X,Y) = \Expect{(X-\Expect{X})(Y-\Expect{Y})}\).
Correlation is \(Cor(X,Y) = Cov(X,Y) / (\sigma_X \sigma_Y)\).
\section{Bernoulli trial}
\index{Bernoulli trial}%
A \emph{Bernoulli trial}
is a probability space \((\Omega,\Pr)\) where \(\Omega = \{0,1\}\),
\(\Pr(\{0\}) = 1 - p\), and \(\Pr(\{1\}) = p\).
\section{Bernoulli distribution}
\index{distribution!Bernoulli}%
\index{Bernoulli distribution}%
The pmf of \(\Bernoulli(p)\) (the \emph{Bernoulli distribution with parameter \(p\)})
is \(f(0) = 1 - p\) and \(f(1) = p\),
which can also be written \(f(k) = (1-p)^{1-k} p^k\) where \(k \in \{0,1\}\).
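For example, if \(X \sim \Bernoulli(p)\), then \(\Expect{X} = 0 \cdot (1-p) + 1 \cdot p = p\),
and \(\sigma_X^2 = (0-p)^2 (1-p) + (1-p)^2 p = p(1-p)(p + 1 - p) = p(1-p)\).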
\section{Binomial distribution}
The Bernoulli distribution is the special case \(n = 1\) of the
\emph{binomial distribution} \(\Binomial(n,p)\) whose pmf is
\(f(k) = \binom{n}{k} p^k (1-p)^{n-k}\) where \(k : \Nat, k \le n\).
\(\Binomial(n,p)\) is the distribution of the number of successes in \(n\) iid Bernoulli trials with parameter \(p\).
The \emph{binomial coefficient} is \(\binom{n}{k} = \frac{n!}{k!(n-k)!}\).
\(\binom{n}{k}\) is the number of all \(k\)-sized subsets of an \(n\)-sized set.
The cardinality of \(\{ S ~|~ S \subseteq A, |S| = k \}\) is \(\binom{n}{k}\)
where \(|A| = n\).
The binomial coefficient is involved in the binomial theorem \((x+y)^n = \sum_{k=0}^n \binom{n}{k} x^k y^{n-k}\).
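For example, the probability of exactly 2 successes in \(n = 3\) trials with \(p = 1/2\) (say, exactly two heads in three fair coin flips) is \(f(2) = \binom{3}{2} (1/2)^2 (1/2)^1 = 3/8\).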
\section{Multivariate random variable}
\index{random variable!multivariate}%
\index{multivariate random variable}%
\index{random vector}%
\index{random!vector}%
\index{vector!random}%
A \emph{multivariate random variable} or \emph{random vector}
is a vector of random variables,
or a vector-valued random variable,
or a random variable whose sample space is a vector space.
Define \(\{ X < a \}\) as \(\{ x ~|~ x \in \Omega, ~ X~x < a \}\).
If \(f\) is an increasing bijection and \(Y = f~X\),
then \(P~\{f~X < a\} = P~\{X < f^{-1}~a\}\).
Algebra on random variables:
If \(k : a\) and \(X : RV~a\) then \(k X : RV~a\).
If \(X : RV~a\) and \(Y : RV~a\) then \(X+Y : RV~a\).
\index{population}%
\index{population!to sample a}%
\index{population!sample space of}%
\index{population!probability space}%
\index{sample (noun)}%
\index{sample (verb)!a population}%
\index{observation}%
\index{unit!statistical}%
A \emph{population} is a probability space.
Let \(p\) be a population and \(s\) be its sample space.
A \emph{sample} of \(p\) is a subset of \(s\).
To \emph{sample} \(p\) is to pick a subset of \(s\).
An \emph{observation} is a member of \(s\).
A \emph{unit} is an observation.
\section{What are these rules for?}
\index{chain rule}%
\index{probability!chain rule}%
\index{conditional probability!chain rule}%
The \emph{chain rule} is \(\Pr(a \cap b) = \Pr(b|a) \cdot \Pr(a) = \Pr(a|b) \cdot \Pr(b)\).
\index{law of total probability}%
\index{probability!law of total probability}%
\index{conditional probability!law of total probability}%
The \emph{law of total probability} is \(\Pr(b) = \sum_{k=1}^n \Pr(b|a_k) \cdot \Pr(a_k) \)
where \(a_1,\ldots,a_n\) is a partitioning of the sample space \(\Omega\).
\index{Bayes rule}%
The \emph{Bayes rule} is \(\Pr(a|b) = \Pr(b|a) \cdot \Pr(a) / \Pr(b)\);
by the law of total probability, the denominator can be expanded as \(\Pr(b) = \sum_{k=1}^n \Pr(b|a_k) \cdot \Pr(a_k)\) where \(a_1,\ldots,a_n\) is a partitioning of \(\Omega\).
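\paragraph{Example}
Using the fair die again, let \(a = \{1,3,5\}\) (odd) and \(b = \{4,5,6\}\) (at least 4), so \(\Pr(a) = \Pr(b) = 1/2\) and \(\Pr(b|a) = \Pr(\{5\})/\Pr(a) = 1/3\).
The law of total probability with the partitioning \(a, \{2,4,6\}\) of \(\Omega\) gives \(\Pr(b) = (1/3)(1/2) + (2/3)(1/2) = 1/2\),
and the Bayes rule gives \(\Pr(a|b) = (1/3)(1/2)/(1/2) = 1/3\), which agrees with computing \(\Pr(a \cap b)/\Pr(b) = (1/6)/(1/2)\) directly.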
\section{Materials looking for place to belong}
\index{Gaussian distribution}%
\index{normal distribution}%
\index{distribution!Gaussian}%
\index{distribution!normal}%
\index{Gaussian probability density function}%
\index{normal probability density function}%
\index{probability density function!Gaussian}%
\index{probability density function!normal}%
The \emph{Gaussian pdf} or the \emph{normal pdf}
with mean \(\mu\) and variance \(\sigma^2\) is
\(\frac{1}{\sqrt{2\sigma^2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)\).
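For example, with \(\mu = 0\) and \(\sigma^2 = 1\) this reduces to the standard normal pdf \(\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right)\).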
\index{Poisson distribution}%
\index{Chi-square distribution}%
\index{Cauchy distribution}%
\index{Beta distribution}%
The \emph{Poisson distribution}...
The \emph{Chi-square distribution}...
The \emph{Cauchy distribution}...
The \emph{Beta distribution}...
\section{Random process}
The type of a \emph{random process} is \(\Nat \to \Omega \to R\).
\emph{Random process} is also called \emph{stochastic process}.
\section{Wiener process}
\subsection{The Virtual Soundstage}
The abstracted virtual space will be a two-dimensional space with orthogonal basis $(\hat{x},\hat{y})$. For ease of visualization, $\hat{x}$ will be considered as left-pointing, and $\hat{y}$ will be considered as pointing forward and bisecting the soundstage. Thus, the usable virtual soundstage area will be the area within the range of positive $y$; the positive $x$ direction will be perceptual left, and the negative $x$ direction will be perceptual right.
\subsection{The Sound Source}
\subsubsection{The propagation of sound from real sound sources}
The first immediate barrier to modeling the propagation of sound from a sound source is the nature of any given sound source. Few acoustic resonators approach being perfect point sources with equal spherical radiation of sound across the entire audible frequency spectrum. It is, in theory, possible to model the emission of sound using frequency-dependent statistical directivity data derived from various instruments. This model could be applied to the algorithm if the facing of the sound source was known. However, chasing down this trail raises questions into the propagation of sound towards any given microphone, the effect of the physical characteristics of the sound stage, and the effect of height components in the positioning of the source and microphones.
Trying to account for all of these components would eventually necessitate venturing into the realm of physical modeling. While such an undertaking is a worthwhile avenue of research, it breaks this processor's goals of ease of use and intuition of abstraction, as well as violating the goal of remaining computationally efficient enough to use as a general DAW plug-in.
A review of literature regarding stereophonic recording practice tends to focus on the directional characteristics of sound sources as considerations for the height of microphones in the array, with the aim of obtaining the desired blend of each source’s characteristic frequency components, as well as informing the optimal placement of spot/area microphones. The effect that a sound source's directivity can have on localization does not seem to be a remarkable characteristic of stereophonic recording arrays.
\subsubsection{The virtual sound source}
Based on the typical assumptions of stereophonic recording praxis, the virtual sound source can be considered as an ideal sound radiator which radiates sound equally towards every measured virtual point. The virtual sound source, $S$, has its location at $(r, \theta)$ and does not have any other properties -- it does not inherently apply any frequency/amplitude transfer function onto the pre-encoded signal.
The useable range of $\theta_S$, in keeping with the bounds of the defined virtual soundstage, is $[0, \pi]$. The radial distance $r_S$ is an abstraction of a real distance in meters.
Since the virtual sound source only imparts positional data to the sound, the actual recorded sound to be encoded can also be represented by $S$; where $S$ is the recorded sound, which is emitted into the virtual space at point $(r, \theta)$\footnote{The choice of polar coordinates to represent the position of $S$ is semi-arbitrary, and does owe some part to the specifics of the implementation. An origin shift will be necessary during the processing which will require the translation of the polar coordinates into cartesian, however there will be a second translation back to polar coordinates again. In this sense, the source is always conceptually considered based on its angle and distance from some reference point. Thus, it seems natural to define $S$ in those terms.}
\subsection{Virtual Microphones}
The virtual stereophonic array can be represented as a set of virtual microphones, $M \: | \: M = \{m_1, \ldots, m_n\}$. Each virtual microphone has its placement within the virtual space at coordinates $(x,y)$, and has orientation $\theta$, with polar directivity factor $p$.
The orientation, $\theta$, of the virtual microphone represents its direction of facing into the soundstage. Like $\theta_S$, the range of usable values is $[0, \pi]$, where a rotation of $0$ indicates a facing directly to the left, a facing of $\pi$ is directly to the right, and an orientation of $\frac{\pi}{2}$ points the virtual microphone directly forward -- parallel to the $y$-axis with a lateral offset of $x_m$.
The polar directivity value, $p$, is a real number in the range $[0,1]$ where $p=0$ yields a non-directional polar directivity pattern, and $p=1$ yields a bidirectional polar directivity pattern. The resulting polar attenuation pattern at angle of arrival (relative to the ``front'' of the virtual microphone) $\phi$ for any value of $p$ can be found as:
\begin{equation}\label{polarPatternPlot}
m(p,\phi) = (1-p) + p\cos(\phi)
\end{equation}
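To illustrate Equation \ref{polarPatternPlot}: setting $p = 0$ yields $m = 1$ at every angle of arrival (omnidirectional), $p = 1$ yields $m = \cos(\phi)$ (bidirectional), and $p = 0.5$ yields the cardioid response $m(0.5, 0) = 1$ on-axis, $m(0.5, \frac{\pi}{2}) = 0.5$ (roughly $-6$ dB) at $90^\circ$, and $m(0.5, \pi) = 0$ at the rear null.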
\subsubsection{Limitations in abstracting ideal microphones}
It should be noted that real microphones' polar patterns are frequency-dependent, with cardioid microphones ($0 < p < 1$) becoming increasingly less directional at lower frequencies. At higher frequencies, the physical body of the microphone and the diameter of the microphone diaphragm also begin to affect the polar response. Additionally, cardioid microphones -- due to the physical mechanics of generating a cardioid polar pattern -- tend to exhibit a marked attenuation in their frequency response at lower frequencies.
Often, the specific color that a specific model of microphone imparts on the recorded sound is carefully chosen, with the choice of microphone being considered one of the major creative decisions in recording and the design of many microphones being intentional to support specific colorations and yield various creative options. It would be impractical to try and model the specific responses of common microphones used in main arrays, and the processing needed to apply the frequency-domain modeling would break the CPU-efficiency goal. Additionally, an attempt to reproduce the frequency-domain aspects of a stereophonic array would have the effect of distorting the (likely intentionally-chosen) frequency-domain aspects of the source recording.
Thus, using virtual microphones with ideal polar patterns is likely the optimal use-case for most users. This will be with the understanding that the virtual microphone array will impart localization cues that approach that of a real microphone array, but -- due to limitations in modeling real-world physics -- will not fully replicate the cues of a real-world microphone array. However, the processor will provide localization cues that are closer than level-difference-only panning methodologies.
There is one case that is worth considering. As previously noted, directional microphones tend to exhibit a marked attenuation at low-frequencies. There are some recording techniques that take advantage of this tendency, especially since the human prioritization of localization cues is frequency-dependent. It may be desirable to include some simple high-pass and low-pass filter options within the implementation to allow the user to emulate some of the key frequency-dependent behavior.
\subsection{The User Interface}
In the implementation of the processor, the relevant controls for defining the desired virtual objects and their parameters need to be exposed to the user. For the virtual sound source, this means exposing controls for $\theta_S$ and $r_S$. Then the number of virtual microphones in set $M$ needs to be defined, and the relevant values for $p_m$, $(x_m,y_m)$, and $\theta_m$ need to be exposed.
\subsubsection{Orienting the user within the virtual space} The first concern is presenting the orientation of the virtual space to the user in a way that allows them to be intuitively oriented within it and able to intuitively understand the distances and placements of virtual objects within it. Thus, establishing an intuitive reference for the placement of the origin of the space is of primary importance.
Following the specific facings of the basis $(\hat{x},\hat{y})$, the most logical place to orient $O$ would be at the central point of the microphone array -- which is also what the user will intuit as the ``perceptual center''. This also gives a consistent and predictable basis for the user to take real-world measurements and translate them into the virtual abstraction used by the processor.
\subsubsection{Orienting the virtual sound source} The typical mixing engineer is unlikely to naturally think of angular rotation in terms of radians. Similarly, considering the amount of angular rotation from the $x$-axis is not reflective of the ``front''-oriented human experience. Thus, the user interface is better presented with $\theta_S$ translated as degrees of deflection from forward. Additionally, there is a bias (at least within Western culture) that a numerical increase in amount be translated as right-ward motion. Due to this, the number of degrees needs to decrease with counter-clockwise rotation.
Thus, if $\phi$ is the value, in degrees of deflection from forward, then:
\begin{equation}
\theta_S = (-\phi + 90) * \frac{\pi}{180} \; | \; \phi = [-90,90]
\end{equation}
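For example, a source dead ahead ($\phi = 0$) maps to $\theta_S = \frac{\pi}{2}$, a source deflected $45^\circ$ ($\phi = 45$) maps to $\theta_S = \frac{\pi}{4}$, and the extreme settings $\phi = 90$ and $\phi = -90$ map to the soundstage bounds $\theta_S = 0$ and $\theta_S = \pi$ respectively.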
Since $r$ is abstracted as meters, the user can specify the distance $r$ from point $O$ directly -- understanding it as the distance from the center of the real (or imagined) microphone array.
\subsubsection{Constructing the microphone array}
The set of microphones in the virtual array, $M$, is some number of virtual microphones. While it is possible to define an arbitrary number of virtual microphones, each at any conceivable position within the virtual space, and to define the algorithm in a way that accommodates this, usual stereophonic recording practice, again, gives some practical limits that can be used to refine and focus the amount of controls exposed to the user and enhance the quality of the user experience.
The fundamental praxis of stereophonic recording is based on the use of a microphone pair that translates to the raw left and right audio channels. This microphone pair is laterally spaced at some distance and has an angular splay of some angle. This primary pair can be designated the ``main'' stereo pair, $m_\text{mains}$, with the properties of its lateral distance, $d$, and angular splay, $\phi$.
As discussed in the overview, it is also common to use a second pair of microphones in tandem with the first that occupies a wider spacing. These additional microphones can be designated as the ``flanks'', $m_\text{flanks}$. The flank pair will have the same properties, $d$ and $\phi$, as the main pair.
Finally, there is a long tradition of using a center microphone, $m_\text{center}$. Typically, the center microphone does not rotate in the 2D plane, so it will only have a distance, $d$. However, for a center microphone the displacement is forward rather than lateral; following this, any value for $d_\text{center}$ will lie along the $y$-axis.
This gives a maximum size of $M$ of $n=5$, which is a manageable number. These are presented to the user as two pairs and the single center microphone. The preferred ordering of the microphones within $M$ will follow the order of expected frequency of use (mains, flanks, center), and the audio-industry preference for left-right ordering of channels:
\begin{equation}
M = \{m_\text{mL}, m_\text{mR}, m_\text{fL}, m_\text{fR}, m_\text{c}\}
\end{equation}
The two pairs and center microphone are each switchable, so that the user can freely enable/disable any combination of them. Each microphone group also has a control exposed that applies an amplitude scalar to the signal of that group, to allow different proportions of that group's signal in the output\footnote{Ideally, the plug-in would be able to output the individual processing of each microphone; however the format of the competition that the plug-in was designed for limits the output channels to two.}. This scalar, $g$, has an adjustment range in decibels: $g \in [-20,0]$.
\subsubsection{Identifying virtual microphone distance limits}
Identifying practical limits for the $d$ value for the two pairs will require a bit of arbitrary demarcation. The main pair must allow for a minimum of $d_\text{mains}=0$ to accommodate coincident microphone techniques. Likewise, for flexibility and simplicity there is little reason not to allow $d_\text{flanks} = 0$ (though it would be an odd use-case for this value to be used). For the center microphone, while it is common in many cases for it to be displaced forward, there are also a number of cases that require it to remain in line with the adjacent pairs; so a minimum value of $d_\text{center} = 0$ is also necessary to accommodate all reasonable use-cases.
For maximal allowed values, an examination of the literature shows that main microphone pairs usually occur in spans smaller than 100\,cm; however, triplets can employ rather wide distances, with ``typical'' Decca-tree configurations spanning 2\,m. The current implementation allows for a maximal span of 3\,m to accommodate some extra working room for three-microphone expanded configurations. Additionally, due to the common use of $d < 1$\,m, the values for $d_\text{mains}$ are presented to the user in centimeters; this gives $0 \leq d_\text{mains} \leq 300$ as the total range for the mains' lateral distance.
After an informal poll and discussion with several sound recordists who specialize in acoustic ensemble recording, an upper limit of 10\,m was chosen for the flanking pair. This gives $0 \leq d_\text{flanks} \leq 10$ as the total range for the flanks' lateral distance, and this is presented to the user in meters.
Finally, the maximum amount of forward displacement for the center microphone was set to $d_\text{center} = 100$, in centimeters. Common uses of a forward-set center microphone are under this threshold, and the 100\,cm point allows for a round number as the limit and offers some extra distance as a buffer to catch reasonable outliers.
\subsubsection{Microphone UI parameter conversion to values for calculation}
A key aspect of many stereophonic recording arrays is bilateral symmetry. Hence the focus on pairs of microphones and defining them in terms of their mutual distance and angular splay. Because of this, it can be assumed that for any of the paired microphones:
\begin{equation}
d_{\vec{mO}} = \frac{d_m}{2}
\end{equation}
Since the standard abstracted unit of measure in the virtual space is the meter, the distances for the center microphone and the main pair will need to be converted from the centimeter values presented to the user. Since every microphone only moves along one axis and does not cross the origin, its position along the other axis will remain fixed at $0$. For the pairs of microphones, the symmetry about the $y$-axis means that the left microphone of each pair will remain in the span $x_L \leq 0$ whereas the right microphone in each pair will be within the span $x_R \geq 0$.
Thus, for the pairs of microphones:
\begin{equation}
\begin{bmatrix} x_L \\ x_R \end{bmatrix} = \frac{C}{2} \begin{bmatrix} -d_m \\ +d_m \end{bmatrix}
\end{equation}
where $C$ is the unit conversion constant: $C_\text{mains} = 0.01$ and $C_\text{flanks} = 1$.
The value of $d_c$ can be directly converted into meters, and used as the $y$-coordinate value for the center microphone:
\begin{equation}
y_c = 0.01\,d_c
\end{equation}
For the angular splay values of the microphone pairs a similar conversion can be used. Since the angular splay value is the angle between the microphones, the deflection from forward can be found by halving the angular splay. Since perceptual forward is at an angular rotation of $\frac{\pi}{2}$, the left and right microphones' rotation values can be found by adding or subtracting the deflection value (converted to radians) from $\frac{\pi}{2}$:
\begin{equation}
\begin{bmatrix} \theta_L \\ \theta_R \end{bmatrix} = \frac{\pi}{2}\,J_{2,1} + \frac{\phi\pi}{360} \begin{bmatrix} +1 \\ -1 \end{bmatrix}
\end{equation}
where $\phi$ is the angular splay value in degrees and $J_{2,1}$ is a $2 \times 1$ matrix of ones.
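As an illustrative sketch only (the helper names are assumptions, not code from the plug-in), the complete conversion from the user-facing pair parameters to per-microphone coordinates could be written as:
\begin{verbatim}
import math

def pair_to_coordinates(d, phi_deg, unit_scale):
    # d:          lateral spacing in UI units (cm for mains, m for flanks)
    # phi_deg:    angular splay between the two microphones, in degrees
    # unit_scale: C in the equations above (0.01 for cm, 1.0 for m)
    half_spacing = 0.5 * unit_scale * d
    deflection = math.radians(phi_deg / 2.0)
    left = (-half_spacing, 0.0, math.pi / 2.0 + deflection)
    right = (half_spacing, 0.0, math.pi / 2.0 - deflection)
    return left, right          # (x, y, theta) per microphone

def center_to_coordinates(d_cm):
    # The center microphone is displaced forward along the y-axis (cm in the UI).
    return (0.0, 0.01 * d_cm, math.pi / 2.0)

# Example: a main pair spaced 17 cm with a 110-degree splay (ORTF-like values).
main_left, main_right = pair_to_coordinates(17.0, 110.0, 0.01)
\end{verbatim}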
\documentclass[11pt,a4paper, final]{moderncv}
\usepackage{color}
\usepackage{fontspec}
\moderncvtheme{classic}
% the (custom) color which will be used in the cv
\definecolor{color1}{RGB}{1, 52, 64}
% scale the page layout
\usepackage[scale=0.75]{geometry}
% change width of the column with the dates
\setlength{\hintscolumnwidth}{2.5cm}
% required when changing page layout lengths
\AtBeginDocument{\recomputelengths}
\usepackage{xunicode}
\usepackage{xltxtra}
\usepackage[utf8]{inputenc}
% german word break/hyphenation rules
\usepackage[ngerman]{babel}
% insert dummy text (used in the letter)
\usepackage{lipsum}
% used for \begin{comment}...\end{comment}
\usepackage{verbatim}
% use guillemets in bibliography
\usepackage[babel,german=guillemets]{csquotes}
\usepackage[
sorting=none,
minbibnames=8,
maxbibnames=9,
block=space
]{biblatex}
\bibliography{publications}
% get rid of the number-labels ([1], [2], etc.) in front of publications
\defbibenvironment{midbib}
{\list
{}
{
\setlength{\leftmargin}{0mm}
\setlength{\itemindent}{-\leftmargin}
\setlength{\itemsep}{\bibitemsep}
\setlength{\parsep}{\bibparsep}}
}
{\endlist}
{\item}
% add [DOI] and [PDF] fields at the end of each publication entry
\DeclareSourcemap{
\maps[datatype=bibtex]{
% the bibtex entry 'mydoi' gets mapped to 'usera'
\map{
\step[fieldsource=mydoi]
\step[fieldset=usera, origfieldval]
}
	% the bibtex entry 'mypdf' gets mapped to 'userb'
\map{
\step[fieldsource=mypdf]
\step[fieldset=userb, origfieldval]
}
}
}
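% For reference, a hypothetical entry in publications.bib using these custom
% fields could look like this (placeholder data, not a real reference):
%
% @article{example-entry,
%   author       = {Asimov, Isaac},
%   title        = {An Example Article},
%   journaltitle = {Some Journal},
%   year         = {1950},
%   keywords     = {academic},
%   mydoi        = {https://doi.org/10.0000/example},
%   mypdf        = {https://example.com/example.pdf},
% }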
% [DOI] entries in publication
\DeclareFieldFormat{usera}{\color{color1}[\href{#1}{\textsc{doi}}]}
\AtEveryBibitem{
% put [DOI] stuff at the end of a publication entry
\csappto{blx@bbx@\thefield{entrytype}}{%
\iffieldundef{usera}{
% this gets invoked, once nothing is supplied
% via the mypdf or mydoi value.
% you could e.g. display a default thing here.
}{\space\printfield{usera}}}
}
% [PDF] entries in publication
\DeclareFieldFormat{userb}{\color{color1}[\href{#1}{\textsc{pdf}}]}
\AtEveryBibitem{
	% put [PDF] stuff at the end of a publication entry
\csappto{blx@bbx@\thefield{entrytype}}{\iffieldundef{userb}{}{\printfield{userb}}}
}
\renewcommand*{\mkbibnamegiven}[1]{%
\ifitemannotation{highlight}
{\textbf{#1}}
{#1}}
\renewcommand*{\mkbibnamefamily}[1]{%
\ifitemannotation{highlight}
{\textbf{#1}}
{#1}
}
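% To have your own name printed in bold in the publication list, annotate it
% in the .bib entry using biblatex data annotations, e.g. (hypothetical entry):
%   author    = {Doe, Jane and Asimov, Isaac},
%   author+an = {2=highlight},
% which attaches the 'highlight' annotation (used above) to the second author.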
% Minion Pro is used as the main font, if you don't
% have it installed, comment out this line or choose another pretty font
\setmainfont[Numbers=OldStyle]{Minion Pro}
% the moderncv package will populate a lot of the pdf meta-information.
% this can be used to change some of these infos.
\AfterPreamble{\hypersetup{
pdfcreator={XeLaTeX},
pdftitle={Fictional CV of Isaac Asimov}
}}
% for the icons (telephone, globe). i found the icons provided by the
% fontawesome package prettier than the standard moderncv icons.
\usepackage{fontawesome}
% personal data
\firstname{Isaac}
\familyname{Asimov}
\address{Random St. 21}{9873 New York}
% i use the extrainfo command for additional information, since i
% want to use custom icons and have finer control over spacing.
\extrainfo{
\faPhone\hspace{0.3em}+49 137 3717321\\
{\small\faEnvelope}\hspace{0.3em}[email protected]\\
\faGlobe\hspace{0.3em}http://isaac.asimov.com
}
% picture, resized to a height of 84pt
\photo[84pt]{picture/photo}
% spacing before (sub)sections
\newcommand{\spacesection}{\vspace{0.4cm}}
\newcommand{\spacesubsection}{\vspace{0.2cm}}
%===========================
\begin{document}
\maketitle
\section{Personal Data}
\cvline{Name}{Isaac Asimov}
\cvline{Geburtsdatum}{20.12.1919}
\cvline{Geburtsort}{Petrovichi, Russia}
\spacesection
\section{Education}
\cventry{\textsc{1948}}{Ph.D. Chemistry}{University Extension}{New York}{\emph{United States}}{}
\cventry{\textsc{1939--1941}}{M.Sc. Chemistry}{University Extension}{New York}{\emph{United States}}{}
\cventry{\textsc{1935--1939}}{B.Sc. Chemistry}{University Extension}{New York}{\emph{United States}}{}
\cventry{\textsc{1930--1935}}{High School}{Boys High School}{Brooklyn}{\emph{United States}}{}
\spacesection
\section{Experience}
\subsection{Academic Services}
\cventry{\textsc{1958--present}}{Advisor}{Boston University School of Medicine}{}{}{
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus
elit, vestibulum ut, placerat ac, adipiscing vitae, felis.
}
\cventry{\textsc{1951--1958}}{Advisor}{Boston University School of Medicine}{}{}{
Pellentesque cursus luctus mauris. Nulla malesuada porttitor diam.
Donec felis erat, congue non, volutpat at, tincidunt tristique, libero.
}
\spacesubsection
\subsection{Industry Work Experience}
\cventry{\textsc{1958--present}}{Advisor}{Boston University School of Medicine}{}{}{
Vivamus viverra fermentum felis. Donec nonummy pellentesque ante.
Phasellus adipiscing semper elit. Proin fermentum massa ac quam.
}
\newpage
\spacesection
\section{Expertise}
\subsection{Chemistry}
\cvcomputer{Vitae}{Adipiscing, Vitae, Felis}
{Placerat}{Vitae, Adipiscing, Felis}
\cvcomputer{Consectetuer}{Felis, Adipiscing, Vitae}
{Donec}{Felis, Erat, Congue non}
\spacesubsection
\subsection{Computer Science}
\cvcomputer{Lugla}{Adipiscing, Vitae, Felis}
{Tincident}{Tristique, Libero}
\cvcomputer{Eleifend}{Morbi blandit, Ligula}
{Accumsan}{Nam ipsum, suscipit}
\spacesection
\section{Interests}
\cvline{Adipiscing}{
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus
elit, vestibulum ut, placerat ac, adipiscing vitae, felis.
}
\cvline{Vestibulum}{
Maecenas lacinia. Nam ipsum ligula, eleifend at, accumsan nec, suscipit a,
ipsum. Morbi blandit ligula.
}
\spacesection
\section{Miscellaneous}
\cventry{\textsc{1958--1963}}{Adipiscing Elit}{New York}{}{}{
Vivamus viverra fermentum felis. Donec nonummy pellentesque ante. Phasellus
adipiscing semper elit. Pellentesque cursus luctus mauris.
}
\spacesection
\section{Publications}
{\color{color1}\textsc{Academic Writings}}
\nocite{*}
\printbibliography[heading=none, env=midbib, keyword=academic]
\ \\
{\color{color1}\textsc{Fictional Writings}}
\printbibliography[heading=none, env=midbib, keyword=fiction]
%=============================
% this part is a simple cover letter
\clearpage
\recipient{Human Resources}{Some Company Ltd.\\Some Street 123\\12345 Some City}
\date{\today}
\opening{Dear Sir or Madam,}
\closing{Sincerely yours,}
\enclosure[Attached]{curriculum vit\ae{}}
\makelettertitle
\lipsum[1-3]
\makeletterclosing
\end{document}
\documentclass[letterpaper]{turabian-researchpaper}
\usepackage[utf8]{inputenc}
\usepackage[autostyle, english = american]{csquotes}
%\MakeOuterQuote{"} % uncomment this line if you want to have 'smart' quotes like in word
\usepackage[pass, letterpaper]{geometry}
%\usepackage[american]{babel}
\usepackage[backend=biber,style=sbl]{biblatex} % Can also include 'annotation' within the '[]'
% bibleref is used to quote the bible
\usepackage{bibleref}
% bibleref also supports different ways of citing the bible:
% default - 2 Corinthians 12:1-5
% jerusalem - 2 Co 12:1-5
% anglosaxon - II Cor. XII.1-5
% JEH - 2 Cor. xii. 1-5
% NTG - 2 Cor xii, 1-5
% MLA - 2 Cor. xii.1-5
% Chicago - 2 Cor. xii: 1-5
% text - Second Epistle to the Corinthians, chapter twelve, verse one to five
% You can set the style by addding:
%\biblerefstyle{<style>}
\addbibresource{ref.bib} % Add our bibliography file
\usepackage{hyperref} %Allows links to be clicked
\usepackage{wrapfig}
\usepackage{setspace} %\doublespace
\usepackage{fancyhdr} % allows the setting of a custom header
\pagestyle{fancy} %fancyhdr package
\fancyhf{} %clears out prev header/footer
\rhead{\hfill \thepage
\\ \hfill John Doe
\\ \hfill THEO200 }
\renewcommand\headrulewidth{0pt} %gets rid of the bar > sets it to 0pt
% this is an example of how to define a custom command, in this case,
% we are defining a new command '\ccparencite' that can be used to parenthetically cite sources.
\newcommand{\ccparencite}[1]{(\citeauthor{#1})}
% here is a command that just prints "Valparaiso University"
\newcommand{\vu}{Valparaiso University}
\title{My Very Important Paper (Due Saturday)}
\subtitle{}
\author{John Doe}
\course{THEO200}
\date{\today}
\begin{document}
\maketitle
\doublespacing
\begin{center}
\textbf{My Very Important Paper (Due Saturday)}
\end{center}
This is an example document to showcase how to do different things in \LaTeX.
Let's make a footnote\footnote{It shows up!}. Here is a parenthetical citation\parencite{kant}.
That is \textit{technically} the correct way to cite things parenthetically in Chicago style, so let's
do a footnote style citation\autocite[34]{akut}. Now that looks \textbf{better}.
\section{Here's a section heading!}
Blah blah blah.
Here's how to cite with your own custom citation command\ccparencite{kant}.
Here I am going to cite a bible verse (\bibleverse{ICor}(2:1-14)).
\biblerefstyle{jerusalem}
Why not cite the new testament with the jerusalem style (\bibleverse{John}(1:1-5)).
%\biblerefstyle{default} % return the style back to default
\subsection{Here's a subsection\ldots}
\subsubsection{And a sub-subsection...}
Etc. Etc. \vu\vu\vu
Let's cite Akutagawa again\autocite{akut}, because the citation will be different.
Now let's try a parenthetical citation with a page number\parencite[54]{mill}.
This is how you write ``a quote''\parencite*[84]{akut}.
Now it's time to sign the honor code. So here's an example of
how to ``sign'' your paper.
\noindent\rule{\textwidth}{1pt} % makes a line that goes across the page.
\begin{center}
\textit{I have not given or received nor have I tolerated others' use of unauthorized aid.}
\newline
\textit{- John Doe}
\end{center}
\newpage
%\nocite{*} % Removing the comment will print out everything in the .bib file
\printbibliography
\end{document}
% mainfile: ../../Refinement.tex
\begin{definition}[Process syntax]
\label{def_process_syntax}
The syntax of a \picalc{} process \textit{P} is defined by:
\begin{align*}
P & \syntdef \procsum \ebnf \procpar{P_1}{P_2} \ebnf \procres{\vec{y}}{P} \ebnf \proccall{A}{\vec{v}}
\end{align*}
where:
\begin{itemize}
\item $\procsum$ is the guarded sum.
\item $\procpar{P_1}{P_2}$ is the parallel composition of processes.
\item $\procres{\vec{y}}{P}$ is the restriction of the scope of the names $\vec{y}$ to the process $P$
\item $\proccall{A}{\vec{v}}$ is a process call.
\end{itemize}
\end{definition}
\subsubsection{\findex[guarded sum]{Guarded sum}:} The guarded sum is the choice between multiple guarded processes. If the guard of one process takes place, the other guarded processes will be discarded. For example, the process $\procchoice{\inp{x}{}.P_1}{\inp{y}{}.P_2}$ will evolve to the process $P_1$ if the guard $\inp{x}{}$ occurs.
Furthermore, the process $\proczero$, called the \findex[process!stop]{stop process}, stands for the process that can do nothing. It can be omitted.
\subsubsection{\findex[guard]{Guard}:} The guard is also called \findex[process!action prefix]{action prefix} and denoted by $\pi$. Its syntax is defined by:
\begin{definition}[Action prefix syntax]
\label{def_prefix_syntax}
\begin{align*}
\pi \syntdef \out{x}{\vec{y}} \ebnf \inp{x}{\vec{y}} \ebnf \tau
\end{align*}
where:
\begin{itemize}
\item $\out{x}{\vec{y}}$\footnote{$\out{x}{{}}$ means: send a signal via $x$. $\out{x}{{y}}$ means: send the name $y$ via $x$. $\out{x}{{\vec{y}}}$ means: send the sequence $\vec{y}$ via $x$.} represents the action: send $\vec{y}$ via the channel $x$.
\item $\inp{x}{\vec{y}}$\footnote{$\inp{x}{{}}$ means: receive a signal via $x$. $\inp{x}{{y}}$ means: receive any name $y$ via $x$. $\inp{x}{{\vec{y}}}$ means: receive any sequence $\vec{y}$ via $x$. ``$y$ here plays the role of parameter''} represents the action: receive $\vec{y}$ via the channel $x$.
\item $\tau$ represents an internal non-observable action.
\end{itemize}
\end{definition}
We use the symbols $\alpha$ or $\beta$ to denote an action. Furthermore, we call $x$ the subject and $\vec{y}$ the object of an action $\out{x}{\vec{y}}$ or $\inp{x}{\vec{y}}$. The functions $\sub{\alpha}$ and $\obj{\alpha}$ return the subject and the object of an action $\alpha$, respectively. More formally:
\[\sub{\alpha} \text{: returns the channel name through which the exchange occurs.}\]
\[\obj{\alpha} \text{: returns the exchanged names across the channel.}\]
The set of all actions is defined as $\actions\define\outA\cup\inA\cup\set{\tau}$, where:
\begin{itemize}
\item $\outA$ is the set of output actions, defined as $\outA\define\set[x,y_1,\dots,y_n\in\names]{\out{x}{\vec{y}}}$.
\item $\inA$ is the set of input actions, defined as $\inA\define\set[x,y_1,\dots,y_n\in\names]{\inp{x}{\vec{y}}}$.
\end{itemize}
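For example, for the output action $\alpha = \out{x}{y}$ we have $\sub{\alpha} = x$ and $\obj{\alpha} = y$.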
\subsubsection{\findex[parallel composition]{Parallel composition}:}
The parallel composition operator $\procpar{}{}$ represents the concept of concurrency in the \picalc{}, where two processes can evolve concurrently. It represents an interleaving behavior of the concurrency.
For example let: $P\define\procpar{P_1}{(\procpar{P_2}{P_3})}$ where: $P_1\define\inp{x}{y}.Q_1$, $P_2\define\out{x}{y}.Q_2$ and $P_3\define\inp{x}{y}.Q_3$. So $P\define\procpar{\inp{x}{y}.Q_1}{(\procpar{\out{x}{y}.Q_2}{\inp{x}{y}.Q_3})}$.
Possible evolution cases of $P$ are:
\begin{itemize}
\item $\procpar{P_1}{(\procpar{Q_2}{Q_3})}$. $P_2$ sends $y$ via $x$ to $P_3$.
\item $\procpar{Q_1}{(\procpar{Q_2}{P_3})}$. $P_2$ sends $y$ via $x$ to $P_1$.
\end{itemize}
The example above illustrates the privacy nature of the parallel operator in the \picalc{}. A process can communicate via a channel with only one process per time, i.e., a binary synchronization occurs via the channel. $P_2$ cannot communicate with both $P_1$, $P_3$ at the same time, while in \gls{CSP} a process can communicate with multiple processes at the same time via the same channel by sending multiple copies of the same message, i.e., in CSP a multiple synchronization occurs via the channel.
\subsubsection{\findex[restriction]{Restriction}:}
The expression $\procres{\vec{y}}{P}$ binds the names $\vec{y}$ to the process $P$. In other words: the visibility scope of the names $\vec{y}$ is restricted to the process $P$. It is similar to declaring a private variable in programming languages. Thus, the names $\vec{y}$ are not visible outside $P$ and $P$ cannot use them to communicate with the outside. For example, let $P\define\procpar{P_1}{P_2}$ where: $P_1\define\procres{y}{\out{y}{z}.Q_1}$ and $P_2\define\inp{y}{z}.Q_2$. The process $P$ cannot evolve to $\procpar{Q_1}{Q_2}$, since the name $y$ in $P_1$ is only visible inside $P_1$, i.e., from the $P_2$'s point of view $P_1$ does not have a channel called $y$. This takes us to the definition of the
\findex[name!bound]{bound names} and \findex[free!bound]{free names}.
\begin{definition}[Bound names]
\label{def_bound_names}
The bound names are all the restricted names in a process.
\end{definition}
\begin{definition}[Free names]
\label{def_free_names}
The free names are all the names that occur in a process except the bound names.
\end{definition}
For example, let $P_1\define\procres{x}{(\out{x}{y}.P_2)}$ where $P_2\define\procres{z}{(\out{x}{z}.P_3)}$. The name $x$ is bound in $P_1$ but free in $P_2$.
\subsubsection{\findex[process!call]{Process call}:}
\label{subsubsection_process_call}
Let $P$ be a process and let $A$ be a process identifier. To be able to use the process $P$ recursively, we use the process identifier $A$ as follows: $\procdef{A}{\vec{w}}\define{}P$. Thus, when we write $\proccall{A}{\vec{v}}$ we are using the identifier $A$ to call the process $P$, replacing the names $\vec{w}$ in $P$ with the names $\vec{v}$. This replacement is called $\alpha$-conversion.
For example, let $\procdef{A}{w}\define\out{w}{y}.\procdef{A}{w}$, then the behavior of $\proccall{A}{v}$ is equivalent to $\out{v}{y}.\proccall{A}{v}$.
\chapter{The Frobenius element}
Throughout this chapter $K/\QQ$ is a Galois extension with Galois group $G$,
$p$ is an \emph{unramified} rational prime in $K$, and $\kp$ is a prime above it.
Picture:
\begin{diagram}
K & \supset & \OO_K & \kp & & \OO_K / \kp \cong \FF_{p^f} \\
\dLine && \dLine & \dLine & & \dLine \\
\QQ & \supset & \ZZ & (p) & & \FF_p
\end{diagram}
If $p$ is unramified, then one can show there
is a unique $\sigma \in \Gal(K/\QQ)$ such that
$\sigma(\alpha) \equiv \alpha^p \pmod{\kp}$ for every $\alpha \in \OO_K$.
\section{Frobenius elements}
\prototype{$\Frob_\kp$ in $\ZZ[i]$ depends on $p \pmod 4$.}
Here is the theorem statement again:
\begin{theorem}[The Frobenius element]
Assume $K/\QQ$ is Galois with Galois group $G$.
Let $p$ be a rational prime unramified in $K$, and $\kp$ a prime above it.
There is a \emph{unique} element $\Frob_\kp \in G$
with the property that
\[ \Frob_\kp(\alpha) \equiv \alpha^{p} \pmod{\kp} \quad\text{for all } \alpha \in \OO_K. \]
It is called the \vocab{Frobenius element} at $\kp$, and has order $f$.
\end{theorem}
The \emph{uniqueness} part is pretty important:
it allows us to show that a given $\sigma \in \Gal(K/\QQ)$
is the Frobenius element by just observing that it satisfies
the above functional equation.
Let's see an example of this:
\begin{example}[Frobenius elements of the Gaussian integers]
Let's actually compute some Frobenius elements for $K = \QQ(i)$,
which has $\OO_K = \ZZ[i]$.
This is a Galois extension, with $G = \Zm2$,
corresponding to the identity and complex conjugation.
If $p$ is an odd prime with $\kp$ above it,
then $\Frob_\kp$ is the unique element such that
\[ (a+bi)^p \equiv \Frob_\kp(a+bi) \pmod{\kp} \]
in $\ZZ[i]$. In particular,
\[ \Frob_\kp(i) = i^p =
\begin{cases}
i & p \equiv 1 \pmod 4 \\
-i & p \equiv 3 \pmod 4.
\end{cases}
\]
From this we see that $\Frob_\kp$ is the identity when $p \equiv 1 \pmod 4$
and $\Frob_\kp$ is complex conjugation when $p \equiv 3 \pmod 4$.
\end{example}
Note that we really only needed to compute $\Frob_\kp$ on $i$.
If this seems too good to be true,
a philosophical reason is ``freshman's dream''
where $(x+y)^p \equiv x^p + y^p \pmod{p}$ (and hence mod $\kp$).
So if $\sigma$ satisfies the functional equation on generators,
it satisfies the functional equation everywhere.
We also have an important lemma:
\begin{lemma}
[Order of the Frobenius element]
Let $\Frob_\kp$ be a Frobenius element from an extension $K/\QQ$.
Then the order of $\Frob_\kp$ is equal to the inertial degree $f_\kp$.
In particular, $(p)$ splits completely in $\OO_K$
if and only if $\Frob_\kp = \id$.
\end{lemma}
\begin{exercise}
Prove this lemma by using the fact that $\OO_K / \kp$
is the finite field of order $f_\kp$,
and the Frobenius element is just $x \mapsto x^p$ on this field.
\end{exercise}
Let us now prove the main theorem.
This will only make sense in the context of decomposition groups,
so readers who skipped that part may skip this proof.
\begin{proof}
[Proof of existence of Frobenius element]
The entire theorem is just a rephrasing of the fact
that the map $\theta$ defined in the last section
is an isomorphism when $p$ is unramified.
Picture:
\begin{center}
\begin{asy}
size(12cm);
filldraw( (-4,-2)--(-4,2)--(1.5,2)--(1.5,-2)--cycle, lightblue+opacity(0.2), black);
label("$G = \operatorname{Gal}(K/\mathbb Q)$", (-1,2), dir(90));
dot( (-1.1,-1.2) );
dot( (-1.4,0.9) );
dot( (-2,1.4) );
dot( (-2.7,-0.4) );
dot( (-3.1,0.2) );
dot( (-3.4,-1.6) );
filldraw(scale(0.8,1.8)*unitcircle, lightcyan+opacity(0.4), black);
label("$D_{\mathfrak p}$", (0.8,2), dir(-90));
for (real y=-1.5; y<2; ++y) { dot( (0,y) ); }
label("$\operatorname{Frob}_{\mathfrak p}$", (0,-1.5), dir(90));
filldraw(shift(5,0)*scale(0.8,1.8)*unitcircle, lightcyan+opacity(0.4), black);
for (real y=0.5; y<2; ++y) { dot( (5,y) ); }
dot("$T$", (5,-1.5), dir(45));
dot("$T^2$", (5,-0.5), dir(45));
draw( (1,0)--(4,0), Arrows );
label("$\left<T \mid T^f=1\right>$", (5,1.8), dir(90));
draw( (0.2,-1.5)--(4.8,-1.5), dashed, EndArrow);
label("$\theta(\operatorname{Frob}_{\mathfrak p}) = T$", (2.8,-1.5), dir(-90));
label("$\theta$", (2.5,0), dir(90));
label("$\cong$", (2.5,0), dir(-90));
\end{asy}
\end{center}
In here we can restrict our attention to $D_\kp$
since we need to have $\sigma(\alpha) \equiv 0 \pmod \kp$
when $\alpha \equiv 0 \pmod \kp$.
Thus we have the isomorphism
\[ D_\kp \taking\theta \Gal\left( (\OO_K/\kp) / \FF_p \right). \]
But we already know $\Gal\left( (\OO_K/\kp)/\FF_p \right)$,
according to the string of isomorphisms
\[
\Gal\left( (\OO_K/\kp) / \FF_p \right)
\cong \Gal\left( \FF_{p^f} / \FF_p \right)
\cong \left< T = x \mapsto x^p \right>
\cong \Zc{f} .
\]
So the unique such element is the pre-image of $T$ under $\theta$.
\end{proof}
\section{Conjugacy classes}
Now suppose $\kp_1$ and $\kp_2$ are \emph{two} primes above an unramified rational prime $p$.
Then we can define $\Frob_{\kp_1}$ and $\Frob_{\kp_2}$.
Since the Galois group acts transitively,
we can select $\sigma \in \Gal(K/\QQ)$ be such that
\[ \sigma(\kp_1) = \kp_2. \]
We claim that
\[
\Frob_{\kp_2} = \sigma \circ \Frob_{\kp_1} \circ \sigma\inv.
\]
Note that this is an equation in $G$.
\begin{ques}
Prove this.
\end{ques}
More generally, for a given unramified rational prime $p$, we obtain:
\begin{theorem}
[Conjugacy classes in Galois groups]
The set
\[ \left\{ \Frob_\kp \mid \kp \text{ above } p \right\} \]
is one of the conjugacy classes of $G$.
\end{theorem}
\begin{proof}
We've used the fact that $G = \Gal(K/\QQ)$ is transitive
to show that $\Frob_{\kp_1}$ and $\Frob_{\kp_2}$ are conjugate
if they both lie above $p$; hence it's \emph{contained} in some
conjugacy class.
So it remains to check that for any $\kp$, $\sigma$,
we have $\sigma \circ \Frob_\kp \circ \sigma\inv = \Frob_{\kp'}$
for some $\kp'$. For this, just take $\kp' = \sigma\kp$.
Hence the set is indeed a conjugacy class.
\end{proof}
%We denote the conjugacy class by the \vocab{Frobenius symbol}
%\[ \left( \frac{K/\QQ}{p} \right). \]
In summary,
\begin{moral}
$\Frob_{\kp}$ is determined up to conjugation by the prime $p$
from which $\kp$ arises.
\end{moral}
So even though the Gothic letters look scary, the content of $\Frob_{\kp}$
really just comes from the more friendly-looking rational prime $p$.
\begin{example}
[Frobenius elements in $\QQ(\cbrt2,\omega)$]
With those remarks, here is a more involved example of a Frobenius map.
Let $K = \QQ(\cbrt2, \omega)$ be the splitting field of
\[ t^3-2 = (t-\cbrt2)(t-\omega\cbrt2)(t-\omega^2\cbrt2). \]
Thus $K/\QQ$ is Galois.
We've seen in an earlier example that
\[ \OO_K = \ZZ[\eps] \quad\text{where}\quad \eps \text { is a root of } t^6+3t^5-5t^3+3t+1. \]
Let's consider the prime $5$ which factors (trust me here) as
\[ (5) = (5, \eps^2+\eps+2)(5, \eps^2+3\eps+3)(5, \eps^2+4\eps+1)
= \kp_1 \kp_2 \kp_3. \]
Note that all the prime ideals have inertial degree $2$.
Thus $\Frob_{\kp_i}$ will have order $2$ for each $i$.
Note that
\[ \Gal(K/\QQ) =
\text{permutations of } \{\cbrt2,\omega\cbrt2,\omega^2\cbrt2\}
\cong S_3. \]
In this $S_3$ there are $3$ elements of order two:
each fixes one root and swaps the other two.
These correspond to each of $\Frob_{\kp_1}$, $\Frob_{\kp_2}$, $\Frob_{\kp_3}$.
In conclusion, the conjugacy class
$\left\{ \Frob_{\kp_1}, \Frob_{\kp_2}, \Frob_{\kp_3} \right\}$
associated to $(5)$ is the
cycle type $(\bullet)(\bullet \; \bullet)$ in $S_3$.
\end{example}
\section{Chebotarev density theorem}
Natural question: can we represent every conjugacy class in this way?
In other words, is every element of $G$ equal to $\Frob_\kp$ for some $\kp$?
Miraculously, not only is the answer ``yes'', but in fact it does so in the nicest way possible:
the $\Frob_\kp$'s are ``equally distributed'' when we pick a random $\kp$.
\begin{theorem}
[Chebotarev density theorem over $\QQ$]
Let $C$ be a conjugacy class of $G = \Gal(K/\QQ)$.
The density of (unramified) primes $p$ such that $\{ \Frob_\kp \mid \kp \text{ above } p \} = C$
%\[ \left( \frac{K/\QQ}{p} \right) = C \]
is exactly $\left\lvert C \right\rvert / \left\lvert G \right\rvert$.
In particular, for any $\sigma \in G$ there are infinitely many rational primes $p$
with $\kp$ above $p$ so that $\Frob_{\kp} = \sigma$.
\end{theorem}
By density, I mean that the proportion of primes $p \le x$ that work
approaches $\frac{\left\lvert C \right\rvert}{\left\lvert G \right\rvert}$ as $x \to \infty$.
Note that I'm throwing out the primes that ramify in $K$.
This is no issue, since the only primes that ramify are those dividing $\Delta_K$,
of which there are only finitely many.
In other words, if I pick a random prime $p$ and look at the resulting conjugacy class,
it's a lot like throwing a dart at $G$:
the probability of hitting any conjugacy class depends just on the size of the class.
\begin{center}
\begin{asy}
size(8cm);
bigbox("$G$");
pen b = lightcyan + opacity(0.4);
pen k = black;
filldraw( (-2.6,2.5)--(0.6,2.5)--(0.6,0.5)--(-2.6,0.5)--cycle, b, k);
filldraw( (-2.6,-2.5)--(0.6,-2.5)--(0.6,-0.5)--(-2.6,-0.5)--cycle, b, k);
filldraw( (2,0)--(3.5,0)--(3.5,2.5)--(2,2.5)--cycle, b, k);
filldraw( (2,-1)--(3.5,-1)--(3.5,-2)--(2,-2)--cycle, b, k);
for (real x = -2; x < 1; ++x) {
dot( (x, 1.9) );
dot( (x, 1.1) );
dot( (x, -1.9) );
dot( (x, -1.1) );
}
label("$37.5\%$", (-2.6, 0.5), dir(140));
label("$37.5\%$", (-2.6,-2.5), dir(140));
label("$C_1$", (-2.6, 2.5), dir(225));
label("$C_2$", (-2.6, -.5), dir(225));
dot( (2.75, 2.0) );
dot( (2.75, 1.25) );
dot( (2.75, 0.50) );
dot( (2.75, -1.50) );
label("$C_3$", (2, 0), dir(-90));
label("$18.75\%$", (3, 0), dir(-75));
label("$C_4$", (2, -2), dir(-90));
label("$6.25\%$", (3, -2), dir(-75));
\end{asy}
\end{center}
\begin{remark}
Happily, this theorem (and preceding discussion)
also works if we replace $K/\QQ$ with any Galois extension $K/F$;
in that case we replace ``$\kp$ over $p$'' with ``$\kP$ over $\kp$''.
In that case, we use $\Norm(\kp) \le x$ rather than $p \le x$ as the way
to define density.
\end{remark}
\section{Example: Frobenius elements of cyclotomic fields}
Let $q$ be a prime, and consider $L = \QQ(\zeta_q)$,
with $\zeta_q$ a primitive $q$th root of unity.
You should recall from various starred problems that
\begin{itemize}
\ii $\Delta_L = \pm q^{q-2}$,
\ii $\OO_L = \ZZ[\zeta_q]$, and
\ii The map \[ \sigma_n : L \to L \quad\text{by}\quad \zeta_q \mapsto \zeta_q^n \]
is an automorphism of $L$ whenever $\gcd(n,q)=1$,
and depends only on $n \pmod q$.
In other words, the automorphisms of $L/\QQ$ just shuffle around the $q$th roots of unity.
In fact the Galois group consists exactly of the elements $\{\sigma_n\}$, namely
\[ \Gal(L/\QQ) = \{ \sigma_n \mid n \not\equiv 0 \pmod q \}. \]
As a group, \[ \Gal(L/\QQ) = \Zm q \cong \Zcc{q-1}. \]
\end{itemize}
This is surprisingly nice,
because \textbf{elements of $\Gal(L/\QQ)$ look a lot
like Frobenius elements already}.
Specifically:
\begin{lemma}[Cyclotomic Frobenius elements]
\label{lem:cyclo_frob}
In the cyclotomic setting $L = \QQ(\zeta_q)$,
let $p$ be a rational unramified prime
and $\kp$ above it. Then \[ \Frob_\kp = \sigma_p. \]
\end{lemma}
\begin{proof}
Observe that $\sigma_p$ satisfies the functional equation
(check on generators).
Done by uniqueness.
% We know $\Frob_\kp(\alpha) \equiv \alpha^p \pmod{\kp}$ by definition,
% but also that $\Frob_\kp = \sigma_n$ for some $n$
% We want $n=p$; since $\sigma_n(\zeta_q)^n = \zeta_q^n$ by definition
% it would be very weird if this wasn't true!
%
% Given $\zeta_q^n \equiv \zeta_q^p \pmod{\kp}$, it suffices to
% prove that the $q$th roots of unity are distinct mod $\kp$.
% Look at the polynomial $F(x) = x^q-1$ in $\ZZ[\zeta_p]/\kp \cong \FF_{p^f}$.
% Its derivative is \[ F'(x) = qx^{q-1} \not\equiv 0 \pmod{\kp} \]
% (since $\FF_{p^f}$ has characteristic $p \nmid q$).
% The only root of $F'$ is zero, hence $F$ has no double roots mod $\kp$.
\end{proof}
\begin{ques}
Conclude that a rational prime $p$
splits completely in $\OO_L$ if and only if $p \equiv 1 \pmod q$.
\end{ques}
\section{Frobenius elements behave well with restriction}
Let $L/\QQ$ and $K/\QQ$ be Galois extensions, and consider the setup
\begin{diagram}
L & \supseteq & \kP & \rDotted & \Frob_{\kP} \in \Gal(L/\QQ)\\
\dLine && \dLine && \\
K & \supseteq & \kp & \rDotted & \Frob_\kp \in \Gal(K/\QQ) \\
\dLine && \dLine && \\
\QQ & \supseteq & (p) &&
\end{diagram}
Here $\kp$ is above $(p)$ and $\kP$ is above $\kp$.
We may define
\[ \Frob_\kp : K \to K
\quad\text{and}\quad
\Frob_{\kP} : L \to L \]
and want to know how these are related.
\begin{theorem}
[Restrictions of Frobenius elements]
Assume $L/\QQ$ and $K/\QQ$ are both Galois.
Let $\kP$ and $\kp$ be unramified as above.
Then $\Frob_{\kP} \restrict{K} = \Frob_{\kp}$,
i.e.\ for every $\alpha \in K$,
\[ \Frob_\kp(\alpha) = \Frob_{\kP}(\alpha). \]
\end{theorem}
\begin{proof}
We know
\[ \Frob_{\kP}(\alpha) \equiv \alpha^p \pmod{\kP}
\quad \forall \alpha \in \OO_L \]
from the definition.
\begin{ques}
Deduce that
\[ \Frob_{\kP}(\alpha) \equiv \alpha^p \pmod{\kp}
\quad \forall \alpha \in \OO_K. \]
(This is weaker than the previous statement in two ways!)
\end{ques}
Thus $\Frob_{\kP}$ restricted to $\OO_K$ satisfies the
characterizing property of $\Frob_\kp$.
\end{proof}
In short, the point of this section is that
\begin{moral}
Frobenius elements upstairs restrict to Frobenius elements downstairs.
\end{moral}
\section{Application: Quadratic reciprocity}
We now aim to prove:
\begin{theorem}
[Quadratic reciprocity]
Let $p$ and $q$ be distinct odd primes.
Then
\[ \left( \frac pq \right)\left( \frac qp \right)
= (-1)^{\frac{p-1}{2} \cdot \frac{q-1}{2}}. \]
\end{theorem}
(See, e.g. \cite{ref:holden} for an exposition on quadratic reciprocity,
if you're not familiar with it.)
\subsection{Step 1: Setup}
For this proof, we first define
\[ L = \QQ(\zeta_q) \]
where $\zeta_q$ is a primitive $q$th root of unity.
Then $L/\QQ$ is Galois, with Galois group $G$.
\begin{ques}
Show that $G$ has a unique subgroup $H$ of index two.
\end{ques}
In fact, we can describe it exactly: viewing $G \cong \Zm q$, we have
\[ H = \left\{ \sigma_n \mid \text{$n$ quadratic residue mod $q$} \right\}. \]
By the fundamental theorem of Galois Theory, there ought to be a degree $2$
extension of $\QQ$ inside $\QQ(\zeta_q)$ (that is, a quadratic field).
Call it $\QQ(\sqrt{q^\ast})$, for $q^\ast$ squarefree:
\begin{diagram}
L = \QQ(\zeta_q) & \rIsom & \{1\} \\
\dLine^{\frac{q-1}{2}} && \dLine \\
K = \QQ(\sqrt{q^\ast}) & \rIsom & H \\
\dLine^2 && \dLine \\
\QQ & \rIsom & G \\
\end{diagram}
\begin{exercise}
Note that if a rational prime $\ell$ ramifies in $K$,
then it ramifies in $L$.
Use this to show that
\[ q^\ast = \pm q \text{ and } q^\ast \equiv 1 \pmod 4. \]
Together these determine the value of $q^\ast$.
\end{exercise}
(Actually, it is true in general
$\Delta_K$ divides $\Delta_L$ in a tower $L/K/\QQ$.)
\subsection{Step 2: Reformulation}
Now we are going to prove:
\begin{theorem}
[Quadratic reciprocity, equivalent formulation]
For distinct odd primes $p$, $q$ we have
\[ \left( \frac pq \right) = \left( \frac{q^\ast}{p} \right). \]
\end{theorem}
\begin{exercise}
Using the fact that $\left( \frac{-1}{p} \right) = (-1)^{\frac{p-1}{2}}$,
show that this is equivalent to quadratic reciprocity as we know it.
\end{exercise}
We look at the rational prime $p$ in $\ZZ$.
Either it splits into two in $K$ or is inert; either way let $\kp$ be a prime factor
in the resulting decomposition (so $\kp$ is either $p \cdot \OO_K$ in the inert case,
or one of the primes in the split case).
Then let ${\kP}$ be above $\kp$.
The prime $\kp$ could possibly split further in $L$; the picture looks like
\begin{diagram}
	\OO_L = \ZZ[\zeta_q] & \supset {\kP} & \rDotted & \ZZ[\zeta_q]/{\kP} \cong \FF_{p^f} \\
\OO_K = \ZZ[\frac{1+\sqrt{q^\ast}}{2}] & \supseteq \kp & \rDotted & \FF_p \text{ or } \FF_{p^2} \\
\ZZ & \supseteq (p) & \rDotted & \FF_p
\end{diagram}
\begin{ques}
Why is $p$ not ramified in either $K$ or $L$?
\end{ques}
\subsection{Step 3: Introducing the Frobenius}
Now, we take the Frobenius
\[ \sigma_p = \Frob_{\kP} \in \Gal(L/\QQ). \]
We claim that
\[ \Frob_{\kP} \in H \iff \text{$p$ splits in $K$}. \]
To see this, note that $\Frob_{\kP}$ is in $H$ if and only if it acts
as the identity on $K$.
But $\Frob_{\kP} \restrict{K}$ is $\Frob_\kp$!
So \[ \Frob_{\kP} \in H \iff \Frob_\kp = \id_K. \]
Finally note that $\Frob_\kp$ has order $1$ if $p$ splits
($\kp$ has inertial degree $1$)
and order $2$ if $p$ is inert.
This completes the proof of the claim.
\subsection{Finishing up}
We already know by \Cref{lem:cyclo_frob} that $\Frob_{\kP} = \sigma_p \in H$
if and only if $p$ is a quadratic residue.
On the other hand,
\begin{exercise}
Show that $p$ splits in $\OO_K = \ZZ[\frac12(1+\sqrt{q^\ast})]$
if and only if $\left( \frac{q^\ast}{p} \right) = 1$.
(Use the factoring algorithm. You need the fact that $p \neq 2$ here.)
\end{exercise}
In other words
\[ \left( \frac pq \right) = 1
\iff \sigma_p \in H \iff \text{$p$ splits in $\ZZ\left[ \tfrac12(1+\sqrt{q^\ast}) \right]$}
\iff \left( \frac{q^\ast}{p} \right) = 1.
\]
This completes the proof.
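As a quick numerical sanity check (illustrative only, not part of the proof), one can verify the statement for small primes with a computer algebra system; the snippet below assumes SymPy is available.
\begin{verbatim}
from sympy.ntheory import legendre_symbol

def reciprocity_holds(p, q):
    # Check (p/q)(q/p) == (-1)^((p-1)/2 * (q-1)/2) for distinct odd primes.
    lhs = legendre_symbol(p, q) * legendre_symbol(q, p)
    rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
    return lhs == rhs

odd_primes = [3, 5, 7, 11, 13, 17, 19]
assert all(reciprocity_holds(p, q)
           for p in odd_primes for q in odd_primes if p != q)
\end{verbatim}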
\section{Frobenius elements control factorization}
\prototype{$\Frob_\kp$ controlled the splitting of $p$ in the proof of quadratic reciprocity;
the same holds in general.}
In the proof of quadratic reciprocity, we used the fact that Frobenius elements behaved
well with restriction in order to relate the splitting of $p$ with properties of $\Frob_\kp$.
In fact, there is a much stronger statement for
any intermediate field $\QQ \subseteq E \subseteq K$
which works even if $E/\QQ$ is not Galois.
It relies on the notion of a \emph{factorization pattern}.
Here is how it goes.
Set $n = [E:\QQ]$, and let $p$ be a rational prime unramified in $K$.
Then $p$ can be broken in $E$ as
\[ p \cdot \OO_E = \kp_1 \kp_2 \dots \kp_g \]
with inertial degrees $f_1$, \dots, $f_g$:
(these inertial degrees might be different since $E/\QQ$ isn't Galois).
The numbers $f_1 + \dots + f_g = n$ form a partition of the number $n$.
For example, in the quadratic reciprocity proof we had $n = 2$,
with possible partitions $1 + 1$ (if $p$ split) and $2$ (if $p$ was inert).
We call this the \vocab{factorization pattern} of $p$ in $E$.
Next, we introduce a Frobenius $\Frob_{\kP}$ above $(p)$, all the way in $K$;
this is an element of $G = \Gal(K/\QQ)$.
Then let $H$ be the group corresponding to the field $E$.
Diagram:
\begin{diagram}
K & \rIsom & \{1\} && \Frob_{\kP} & \\
\dLine && \dLine &&& \\
E & \rIsom & H && \kp_1 \dots \kp_g & \qquad f_1 + \dots + f_g = n \\
\dLine^n && \dLine_n && \dLine & \\
\QQ & \rIsom & G && (p) &
\end{diagram}
Then $\Frob_{\kP}$ induces a \emph{permutation}
of the $n$ left cosets $gH$ by left multiplication
(after all, $\Frob_{\kP}$ is an element of $G$ too!).
Just as with any permutation, we may look at the resulting cycle decomposition,
which has a natural ``cycle structure'': a partition of $n$.
\begin{center}
\begin{asy}
size(8cm);
pen tg = heavyred; // "times g"
pen pointpen = lightblue;
pair A = Drawing("g_1H", dir(80), dir(80), pointpen);
pair B = Drawing("g_2H", A*dir(120), A*dir(120), pointpen);
pair C = Drawing("g_3H", A*dir(240), A*dir(240), pointpen);
draw(A--B, dashed + pointpen, EndArrow, Margin(2,2));
draw(B--C, dashed + pointpen, EndArrow, Margin(2,2));
draw(C--A, dashed + pointpen, EndArrow, Margin(2,2));
label("$\times g$", midpoint(A--B), A+B, tg);
label("$\times g$", midpoint(B--C), B+C, tg);
label("$\times g$", midpoint(C--A), C+A, tg);
label("$3$", origin, origin, pointpen);
add(shift( (-3.2,0.1) ) * CC());
label("$g = \operatorname{Frob}_{\mathfrak P}$", (-1.7,1.7), origin, tg);
pointpen = heavygreen;
pair W = Drawing("g_4H", dir(50), dir(50), pointpen);
pair X = Drawing("g_5H", W*dir(90), W*dir(90), pointpen);
pair Y = Drawing("g_6H", W*dir(180), W*dir(180), pointpen);
pair Z = Drawing("g_7H", W*dir(270), W*dir(270), pointpen);
draw(W--X, dashed + pointpen, EndArrow, Margin(2,2));
draw(X--Y, dashed + pointpen, EndArrow, Margin(2,2));
draw(Y--Z, dashed + pointpen, EndArrow, Margin(2,2));
draw(Z--W, dashed + pointpen, EndArrow, Margin(2,2));
defaultpen(red);
label("$\times g$", W--X, W+X, tg);
label("$\times g$", X--Y, X+Y, tg);
label("$\times g$", Y--Z, Y+Z, tg);
label("$\times g$", Z--W, Z+W, tg);
label("$4$", origin, origin, pointpen);
label("$\boxed{n = 7 = 3+4}$", (-2,-1.8), origin, black);
\end{asy}
\end{center}
The theorem is that these coincide:
\begin{theorem}
[Frobenius elements control decomposition]
\label{thm:frob_control_decomp}
Let $\QQ \subseteq E \subseteq K$ an extension of number fields
and assume $K/\QQ$ is Galois (though $E/\QQ$ need not be).
Pick an unramified rational prime $p$; let $G = \Gal(K/\QQ)$
and $H$ the corresponding intermediate subgroup.
Finally, let $\kP$ be a prime above $p$ in $K$.
Then the \emph{factorization pattern} of $p$ in $E$ is given by
the \emph{cycle structure} of $\Frob_{\kP}$ acting on the left cosets of $H$.
\end{theorem}
Often, we take $E = K$, in which case this is just asserting
that the decomposition of the prime $p$ is controlled by a Frobenius element over it.
An important special case is when $E = \QQ(\alpha)$,
because, as we will see, it lets us determine how the minimal
polynomial of $\alpha$ factors modulo $p$.
To motivate this, let's go back a few chapters
and think about the Factoring Algorithm.
Let $\alpha$ be an algebraic integer and $f$ its minimal polynomial (of degree $n$).
Set $E = \QQ(\alpha)$ (which has degree $n$ over $\QQ$).
Suppose we're lucky enough that $\OO_E = \ZZ[\alpha]$,
i.e.\ that $E$ is monogenic.
Then we know by the Factoring Algorithm,
to factor any $p$ in $E$, all we have to do is factor $f$ modulo $p$,
since if $f = f_1^{e_1} \dots f_g^{e_g} \pmod p$ then we have
\[ (p) = \prod_i \kp_i = \prod_i (f_i(\alpha), p)^{e_i}. \]
This gives us complete information about the ramification indices and inertial degrees;
the $e_i$ are the ramification indices, and $\deg f_i$ are the inertial degrees
(since $\OO_E / \kp_i \cong \FF_p[X] / (f_i(X))$).
In particular, if $p$ is unramified then all the $e_i$ are equal to $1$, and we get
\[ n = \deg f = \deg f_1 + \deg f_2 + \dots + \deg f_g. \]
Once again we have a partition of $n$;
we call this the \vocab{factorization pattern} of $f$ modulo $p$.
So, to see the factorization pattern of an unramified $p$ in $\OO_E$,
we just have to know the factorization pattern of the $f \pmod p$.
Turning this on its head, if we want to know the factorization pattern of $f \pmod p$,
we just need to know how $p$ decomposes.
And it turns out these coincide even without the assumption that $E$ is monogenic.
\begin{theorem}[Frobenius controls polynomial factorization]
\label{thm:factor_poly_frob}
Let $\alpha$ be an algebraic integer with minimal polynomial $f$,
and let $E = \QQ(\alpha)$.
Then for any prime $p$ unramified in the splitting field $K$ of $f$,
the following coincide:
\begin{enumerate}[(i)]
\ii The factorization pattern of $p$ in $E$.
\ii The factorization pattern of $f \pmod p$.
\ii The cycle structure associated to the action
of $\Frob_{\kP} \in \Gal(K/\QQ)$ on the roots of $f$,
where $\kP$ is above $p$ in $K$.
\end{enumerate}
\end{theorem}
\begin{example}[Factoring $x^3-2 \pmod 5$]
Let $\alpha = \cbrt2$ and $f = x^3-2$, so $E = \QQ(\cbrt2)$.
Set $p=5$ and, finally, let $K = \QQ(\cbrt2, \omega)$ be the splitting field.
Setup:
\begin{diagram}
K = \QQ(\cbrt2, \omega) && \kP && x^3-2 = (x-\cbrt2)(x-\cbrt2\omega)(x-\cbrt2\omega^2) \\
\dLine~2 && \dLine \\
E = \QQ(\sqrt[3]{2}) && \kp && x^3-2 = (x-\cbrt2)(x^2+\cbrt2x+\cbrt4) \\
\dLine~3 && \dLine \\
\QQ && (5) && x^3-2 \text{ irreducible over } \QQ
\end{diagram}
The three claimed objects now all have shape $2+1$:
\begin{enumerate}[(i)]
\ii By the Factoring Algorithm, we have
$(5) = (5, \cbrt2-3)(5, 9+3\cbrt2+\cbrt4)$.
\ii We have $x^3-2 \equiv (x-3)(x^2+3x+9) \pmod 5$.
\ii We saw before that $\Frob_{\kP} = (\bullet)(\bullet \; \bullet)$.
\end{enumerate}
\end{example}
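The factorization pattern in (ii) is also easy to check by computer; the following snippet (illustrative only, assuming SymPy is available) factors $x^3-2$ over $\FF_5$ and exhibits the $2+1$ pattern.
\begin{verbatim}
from sympy import symbols, factor

x = symbols('x')
# Factor x^3 - 2 over F_5: the result is a linear factor times an
# irreducible quadratic, matching the 2 + 1 pattern described above.
print(factor(x**3 - 2, modulus=5))
\end{verbatim}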
\begin{proof}[Sketch of Proof]
Let $n = \deg f$, and let $H$ be the subgroup of $G = \Gal(K/\QQ)$ corresponding to $E$, so $[G:H] = n$.
Pictorially, we have
\begin{diagram}
K && \{1\} && \kP \\
\dLine && \dLine && \dLine \\
E = \QQ(\alpha) && H && \kp \\
\dLine && \dLine && \dLine \\
\QQ && G && (p)
\end{diagram}
We claim that (i), (ii), (iii) are all equivalent to
\begin{center}
	(iv) The pattern of the action of $\Frob_{\kP}$ on the cosets $G/H$.
\end{center}
In other words we claim the cosets correspond to the $n$ roots of $f$ in $K$.
Indeed $H$ is just the set of $\tau \in G$ such that $\tau(\alpha)=\alpha$,
so there's a bijection between the roots and the cosets $G/H$
by $\tau H \mapsto \tau(\alpha)$.
Think of it this way: if $G = S_n$, and $H = \{\tau : \tau(1) = 1\}$,
then $G/H$ has order $n! / (n-1)! = n$ and corresponds to the elements $\{1, \dots, n\}$.
So there is a natural bijection from (iii) to (iv).
The fact that (i) is in bijection to (iv) was the previous theorem,
\Cref{thm:frob_control_decomp}.
The correspondence (i) $\iff$ (ii) is a fact of Galois theory,
so we omit the proof here.
\end{proof}
All this can be done in general with $\QQ$ replaced by $F$;
for example, in \cite{ref:lenstra_chebotarev}.
\section{Example application: IMO 2003 problem 6}
As an example of the power we now have at our disposal, let's prove:
\begin{center}
\begin{minipage}{4.5cm}
\includegraphics[width=4cm]{media/IMO-2003-logo.png}
\end{minipage}%
\begin{minipage}{10cm}
\textbf{Problem 6}.
Let $p$ be a prime number.
Prove that there exists a prime number $q$ such that for every integer $n$,
the number $n^p-p$ is not divisible by $q$.
\end{minipage}
\end{center}
We will show, much more strongly, that there exist infinitely many primes $q$
such that $X^p-p$ is irreducible modulo $q$.
\begin{proof}[Solution]
Okay! First, we draw the tower of fields
\[ \QQ \subseteq \QQ(\sqrt[p]{p}) \subseteq K \]
where $K$ is the splitting field of $f(x) = x^p-p$.
Let $E = \QQ(\sqrt[p]{p})$ for brevity and note it has degree $[E:\QQ] = p$.
Let $G = \Gal(K/\QQ)$.
\begin{ques}
Show that $p$ divides the order of $G$. (Look at $E$.)
\end{ques}
Hence by Cauchy's theorem (\Cref{thm:cauchy_group}, which is a purely group-theoretic fact)
we can find a $\sigma \in G$ of order $p$.
By Chebotarev, there exist infinitely many rational (unramified) primes $q \neq p$
and primes $\kQ \subseteq \OO_K$ above $q$
such that $\Frob_\kQ = \sigma$.
(Yes, that's an uppercase Gothic $Q$. Sorry.)
We claim that all these $q$ work.
By \Cref{thm:factor_poly_frob}, the factorization of $f \pmod q$ is
controlled by the action of $\sigma = \Frob_\kQ$ on the roots of $f$.
But $\sigma$ has prime order $p$ in $G$!
So all the lengths in the cycle structure have to divide $p$.
Thus the possible factorization patterns of $f$ are
\[ p = \underbrace{1 + 1 + \dots + 1}_{\text{$p$ times}}
\quad\text{or}\quad p = p. \]
So we just need to rule out the $p = 1 + \dots + 1$ case now:
this only happens if $f$ breaks into linear factors mod $q$.
Intuitively this edge case seems highly unlikely (are we really so unlucky
that $f$ factors into \emph{linear} factors when we want it to be irreducible?).
And indeed this is easy to see: this means that $\sigma$ fixes all
of the roots of $f$ in $K$, but that means $\sigma$ fixes $K$ altogether,
and hence is the identity of $G$, contradiction.
\end{proof}
\begin{remark}
In fact $K = \QQ(\sqrt[p]{p}, \zeta_p)$, and $\left\lvert G \right\rvert = p(p-1)$.
With a little more group theory, we can show that in fact the density of
primes $q$ that work is $\frac 1p$.
\end{remark}
\section\problemhead
\begin{problem}
Show that for an odd prime $p$, \[ \left( \frac 2p \right) = (-1)^{\frac 18(p^2-1)}. \]
\begin{hint}
Modify the end of the proof of quadratic reciprocity.
\end{hint}
\begin{sol}
It is still true that
\[ \left( \frac 2q \right) = 1
\iff \sigma_2 \in H \iff \text{$2$ splits in $\ZZ\left[ \tfrac12(1+\sqrt{q^\ast}) \right]$}. \]
Now, $2$ splits in the ring if and only if $t^2 - t + \tfrac14(1-q^\ast)$
factors mod $2$. This happens if and only if $q^\ast \equiv 1 \pmod 8$.
One can check this is exactly if $q \equiv \pm 1 \pmod 8$, which gives the conclusion.
\end{sol}
\end{problem}
\begin{problem}
Let $f$ be a nonconstant polynomial with integer coefficients.
Suppose $f \pmod p$ splits completely into linear factors
for all sufficiently large primes $p$.
Show that $f$ splits completely into linear factors.
\end{problem}
\begin{dproblem}
[Dirichlet's theorem on arithmetic progressions]
Let $a$ and $m$ be relatively prime positive integers.
Show that the density of primes $p \equiv a \pmod m$ is exactly $\frac{1}{\phi(m)}$.
\begin{hint}
Chebotarev Density on $\QQ(\zeta_m)$.
\end{hint}
\begin{sol}
Let $K = \QQ(\zeta_m)$.
One can show that $\Gal(K/\QQ) \cong \Zm m$ exactly as before.
In particular, $\Gal(K/\QQ)$ is abelian and therefore its conjugacy classes
are singleton sets; there are $\phi(m)$ of them.
As long as $p$ is sufficiently large, it is unramified
and $\sigma_p = \Frob_\kp$ for any $\kp$ above $p$
(as $m$th roots of unity will be distinct modulo $p$;
differentiate $x^m-1$ mod $p$ again).
\end{sol}
\end{dproblem}
\begin{problem}
Let $n$ be an odd integer which is not a prime power.
Show that the $n$th cyclotomic polynomial is not
irreducible modulo \emph{any} rational prime.
% http://mathoverflow.net/questions/12366/how-many-primes-stay-inert-in-a-finite-non-cyclic-extension-of-number-fields
\end{problem}
\begin{problem}
[Putnam 2012 B6]
\yod
Let $p$ be an odd prime such that $p \equiv 2 \pmod 3$.
Let $\pi$ be the permutation of $\FF_p$ defined by $\pi(x) = x^3 \pmod p$.
Show that $\pi$ is even if and only if $p \equiv 3 \pmod 4$.
\begin{hint}
By primitive roots, it's the same as the action of $\times 3$ on $\Zcc{p-1}$.
Let $\zeta$ be a $(p-1)$st root of unity.
Take $d = \prod_{i < j} (\zeta^i - \zeta^j)$, think about $\QQ(d)$,
and figure out how to act on it by $x \mapsto x^3$.
\end{hint}
\begin{sol}
This solution is by David Corwin.
By primitive roots, it's the same as the action of $\times 3$ on $\Zcc{p-1}$.
Let $\zeta$ be a $(p-1)$st root of unity.
Consider
\[ d = \prod_{0 \le i < j < p-1} (\zeta^i - \zeta^j). \]
This is the square root of the discriminant of
the polynomial $X^{p-1}-1$; in other words $d^2 \in \ZZ$.
In fact, by elementary methods one can compute
\[ (-1)^{\binom{p-1}{2}} d^2 = -(p-1)^{p-1} \]
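(As a sanity check of this identity, not needed for the argument: for $p = 5$ the six pairwise differences of the fourth roots of unity multiply to $\pm 16i$, so $d^2 = -256 = -(p-1)^{p-1}$, matching the right-hand side since $\binom{p-1}{2} = 6$ is even.)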
Now take the extension $K = \QQ(d)$, noting that
\begin{itemize}
\ii If $p \equiv 3 \pmod 4$, then $d = (p-1)^{\half(p-1)}$, so $K = \QQ$.
\ii If $p \equiv 1 \pmod 4$, then $d = i(p-1)^{\half(p-1)}$, so $K = \QQ(i)$.
\end{itemize}
Either way, in $\OO_K$, let $\kp$ be a prime ideal above $(3) \subseteq \OO_K$.
Let $\sigma = \Frob_\kp$ then be the unique element such that
$\sigma(x) = x^3 \pmod{\kp}$ for all $x$.
Then, we observe that
\[
\sigma(d) \equiv \prod_{0 \le i < j < p-1} (\zeta^{3i} - \zeta^{3j})
\equiv \begin{cases}
+d & \text{if $\pi$ is even} \\
-d & \text{if $\pi$ is odd}
\end{cases} \pmod{\kp}.
\]
Now if $K = \QQ$, then $\sigma$ is the identity, thus $\pi$ is even.
Conversely, if $K = \QQ(i)$, then $3$ does not split, so $\sigma(d) = -d$
(actually $\sigma$ is complex conjugation) thus $\pi$ is odd.
Note the condition that $p \equiv 2 \pmod 3$ is used only
to guarantee that $\pi$ is actually a permutation (and thus $d \neq 0$);
it does not play any substantial role in the solution.
\end{sol}
\end{problem}
%++++++++++++++++++++++++++++++++++++++++
\documentclass[letterpaper,12pt]{article}
\usepackage{tabularx}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[margin=1in,letterpaper]{geometry}
\usepackage{cite}
\usepackage[final]{hyperref}
\hypersetup{
colorlinks=true,
linkcolor=blue,
citecolor=blue,
filecolor=magenta,
urlcolor=blue
}
%++++++++++++++++++++++++++++++++++++++++
\begin{document}
\title{A Brief Exploration of the NBA Draft}
\author{Mike "dirty-mike" Neuder}
\date{\today}
\maketitle
\begin{abstract}
This quick analysis of the NBA draft was inspired by and written for Phil Derbesy and Michael Chiappini of the exceptional podcast, ``The Basketball Guyaries''. During Episode 4, they explore the complexity of the current NBA draft and ask for a bit of clarification on the probabilities that determine the order of picks. With the 2017 Draft having just occurred, it seemed appropriate to take a glance at the process and learn more about how 14 ping-pong balls can change everything.
\end{abstract}
\section{Current Lottery System}
The current weighted lottery system was implemented in 1990 and reformed in 1993. The system was changed because teams were allegedly losing games intentionally near the end of the season in order to secure the first pick in the draft, since the team with the worst record of the season got the first pick. In the new weighted lottery system, the teams with the worst records still have the highest chance to pick early in the draft; however, an element of randomness is introduced in order to make it less appealing to be the worst team in the NBA. Currently, with 30 teams in the NBA, 16 qualify for the playoffs and the remaining 14 are eligible to participate in the lottery. Since 1987, only the first three picks have been determined by the lottery, and the remaining picks are made in reverse record order. Under the current system, the worst team of the previous season picks no later than fourth in the draft, the second worst team picks no later than fifth, and so forth. Now that the stage is set, let's look at how it works.
\section{A Bit of Math}
The first three picks are determined by a random selection of ping-pong balls numbered one through fourteen. These balls are put into a lottery machine, and four are selected at random. Because order doesn't matter (i.e. $5,6,7,8$ is the same draw as $5,7,6,8$ or $5,8,7,6$), the total number of combinations is 1001. We can calculate this by recalling our Algebra 2 formula for combinations, where $n$ is the total number of items and $r$ is how many we want to select while ignoring order.
$$ \binom{n}{r} = \frac{n!}{r!\cdot(n-r)!}$$ \newpage
In this context, we say we have 14 choose 4 combinations of ping-pong balls and by using our handy formula we can conclude that the number of combinations is,
$$\binom{14}{4} = \frac{14!}{4!\cdot(14-4)!} = \frac{87178291200}{24 \cdot 3628800} = 1001.$$
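For readers who would rather let a computer do this arithmetic, here is a small illustrative C++ snippet (my own, not part of any official tooling) that evaluates the same binomial coefficient without forming the huge factorials; the division at each step is exact because the running value is always a smaller binomial coefficient.
\begin{verbatim}
#include <cstdio>

// Multiplicative form of C(n, r): after step i the running value
// equals C(n - r + i, i), an integer, so the division is exact.
long long choose(long long n, long long r) {
    long long result = 1;
    for (long long i = 1; i <= r; ++i)
        result = result * (n - r + i) / i;
    return result;
}

int main() {
    std::printf("C(14, 4) = %lld\n", choose(14, 4));  // prints 1001
    return 0;
}
\end{verbatim}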
Out of all these combinations, only one isn't assigned to a team ($11,12,13,14$). The remaining 1000 combinations are distributed among the 14 non-playoff teams based on their regular season record according to the table below. Note that the rank column is in reverse order (worst team is ranked 1 and so forth).
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
rank & number of combinations & probability of being drawn \\
\hline
1 & 250 & 0.2498 \\
2 & 199 & 0.1988 \\
3 & 156 & 0.1558 \\
4 & 119 & 0.1189 \\
5 & 88 & 0.0879 \\
6 & 63 & 0.0629 \\
7 & 43 & 0.0429 \\
8 & 28 & 0.0280 \\
9 & 17 & 0.0170 \\
10 & 11 & 0.0110 \\
11 & 8 & 0.0080 \\
12 & 7 & 0.0070 \\
13 & 6 & 0.0060 \\
14 & 5 & 0.0050 \\
\hline
\end{tabular}
\end{center}
It is clear that the better teams in the lottery have a pretty slim chance at landing a top pick. Another interesting thing to note is that if the 14th worst team in the league doesn't get one of the first three picks, they are locked into receiving the 14th pick, because after the first three picks the selections are given in reverse record order. One last thing to examine is the probability of each team earning a specific pick. Note that in the table below, a cell with a '-' means that the team cannot receive that pick.
\begin{center}
\includegraphics[scale=0.6]{../images/table.png}
\end{center} \newpage
In order to better visualize these probabilities I created a couple of plots, displayed below. The first plot shows a given team's probability of receiving each of the first fourteen draft slots.
\begin{center}
\includegraphics[scale=0.5]{../images/plot1.png}
\end{center}
I found it interesting that as you move farther into the draft, the teams have more certainty about which pick they will receive. The next plot I created is quite similar, but instead of plotting the probability of a team receiving a pick, it displays the probability that a team will have picked by the $n^{th}$ slot of the draft. For example, team 1 has a .250 chance of having picked after round 1, and after round two they have a $.250 + .215 = .465$ chance of having picked.
\begin{center}
\includegraphics[scale=0.5]{../images/plot2.png}
\end{center} \newpage
As expected, one team per round reaches a cumulative probability of 1 of having picked. This is because the worst slot the rank 1 team can be assigned is fourth, and the analogous result follows for the remaining teams.
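If you would like to reproduce the numbers behind the table and both plots rather than trust my transcription, the short C++ program below (written for this note; it assumes that a drawn combination belonging to no team, or to a team already selected, is simply redrawn) computes each seed's chances at the first three picks directly from the combination counts. The remaining slots then follow from reverse record order as described above.
\begin{verbatim}
#include <cstdio>

int main() {
    // Combination counts (out of the 1000 assigned combinations)
    // for lottery seeds 1..14, as in the table above.
    const double c[14] = {250, 199, 156, 119, 88, 63, 43, 28,
                          17, 11, 8, 7, 6, 5};
    double p1[14] = {0}, p2[14] = {0}, p3[14] = {0};
    for (int a = 0; a < 14; ++a) {
        p1[a] = c[a] / 1000.0;                       // first pick
        for (int b = 0; b < 14; ++b) {
            if (b == a) continue;
            double pab = p1[a] * c[b] / (1000.0 - c[a]);
            p2[b] += pab;                            // second pick
            for (int d = 0; d < 14; ++d) {
                if (d == a || d == b) continue;
                p3[d] += pab * c[d] / (1000.0 - c[a] - c[b]);
            }
        }
    }
    for (int i = 0; i < 14; ++i)
        std::printf("seed %2d: P(1st)=%.4f  P(2nd)=%.4f  P(3rd)=%.4f\n",
                    i + 1, p1[i], p2[i], p3[i]);
    return 0;
}
\end{verbatim}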
\section{Conclusions}
In a sport dominated by superstars, securing strong draft picks is hugely important for NBA teams. Though the process is complicated and long, it is nice to know that in a world where control is paramount, a little bit of randomness can turn expectation on its head. Thanks for reading.\footnote{If you are interested in seeing how I collected and plotted the data, or want to play around with any of the files, they are being hosted \href{https://github.com/michaelneuder/nba_draft}{here}. If you find any errors or have any questions, feel free to get in touch.} \\ \\
-dirty mike
\newpage
\end{document}
\documentclass[prd,aps,floats,showkeys,nofootinbib,notitlepage]{revtex4}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{float}
\restylefloat{table}
%\usepackage{natbib}
\usepackage{graphicx}
\usepackage{epsfig}
\usepackage{epstopdf}
\usepackage{color}
\usepackage[colorlinks=true,linkcolor=red]{hyperref}
\usepackage{listings}
\usepackage{xcolor} % for setting colors
\usepackage{hyperref}
\usepackage{multirow}
\allowdisplaybreaks % for a pagebreak inside an aligned equation
% Usual (decimal) numbering
\renewcommand{\thesection}{\arabic{section}}
\renewcommand{\thesubsection}{\thesection.\arabic{subsection}}
\renewcommand{\thesubsubsection}{\thesubsection.\arabic{subsubsection}}
\begin{document}
\begin{titlepage}
%\vspace*{3cm}
\begin{center}
\Huge {Documentation of the GiNaCDE Library for Solving Differential Equations}
\end{center}
\vspace{3cm}
\begin{center}
\Large{GiNaCDE (V1.0.0)}
\end{center}
\vspace{2cm}
\begin{center}
Mithun Bairagi \\[3pt]
\textit{Department of Physics, The University of Burdwan, Golapbag 713104, West Bengal, India} \\ [1cm]
%\textit{Address} \\ [1cm]
\today
\end{center}
\end{titlepage}
\clearpage
\tableofcontents
\section{Introduction}
The GiNaCDE library, written in the C++ programming language, is a complete implementation of three different methods: the F-expansion \cite{fexpn024,fexpn024_1,fexpn123,0246,234}, modified F-expansion (in short mF-expansion) \cite{modfexpn}, and first integral (in short FIM) \cite{fim0,fim,mirza,complexTwt1} methods. GiNaCDE can output exact analytical traveling-wave solutions of nonlinear partial differential equations (NLPDEs) automatically.
GiNaCDE depends entirely on the GiNaC library \cite{ginac} for its algebraic manipulations.
%In this short tutorial, we have described the following:
%In Sec. \ref{sec:compile}, the compiling and installing procedures of the GiNaCDE library have been explained. In Sec. \ref{sec:command}, we present some useful commands which need to solve NLPDEs by the GiNaCDE library. In Sec. \ref{sec:examples}, GiNaCDE library has been applied on some examples to derive their solutions, and we can know the techniques to use GiNaCDE with C++ code. In Sec. \ref{sec:furexam} we have applied the library on different types of NLPDEs to provide the usage description of the library in detail. In Sec. \ref{sec:furexam} we have also shown some new exact solutions obtained in some NLPDEs. In Sec. \ref{gui}, we give a brief discussion about the GUI version of GiNaCDE.
\section{Problem}
\par Let us consider an NLPDE with independent variables $t,x_1,x_2, \ldots x_m=\mathbf{X}$ and dependent variable $u$ in the following general form
\begin{equation}\label{geneq}
F\left( {\alpha_i,u,u_t,u_{x_1},u_{x_2}, \ldots ,u_{x_m},u_{tt},u_{t{x_1}},u_{t{x_2}}, \ldots ,u_{t{x_m}},u_{{x_1}{x_1}},u_{{x_1}{x_2}}, \ldots ,u_{{x_1}{x_m}}, \ldots } \right) = 0,
\end{equation}
where $\alpha_i(i=1,\ldots,l)$ are the parameters, $u=u(t,x_1,x_2, \ldots x_m)$ and $F$ is a polynomial in $u$ and its derivatives. Equation \eqref{geneq} does not explicitly depend on the independent variables $t,x_1,x_2, \ldots x_m$. The current package GiNaCDE, with the three algorithms of F-expansion, mF-expansion and FIM described below, can solve NLPDEs of the form \eqref{geneq}. However, there is no guarantee that the code always gives the complete solutions of every NLPDE of the form \eqref{geneq}, and sometimes the code may fail to give solutions due to the complexity of the problem. Here, it should be noted that our algorithms are also applicable to Eq. \eqref{geneq} without the parameters $\alpha_i$. Equation \eqref{geneq} is called the input-NLPDE throughout this documentation. In the next three sections, following the research works \cite{fexpn024,fexpn024_1,fexpn123,0246,234,modfexpn,fim0,fim,mirza,complexTwt1}, we present the algorithms for the F-expansion, mF-expansion and FIM methods respectively.
\section{Algorithm of F-expansion method}\label{sec:Fexpn}
This section explains the algorithm for the F-expansion method following \cite{fexpn024,fexpn024_1,fexpn123,0246,234} to obtain closed-form traveling-wave solutions of NLPDEs automatically. However, in the proposed algorithms, we have to guide the solution process initially by providing some initial data. In that sense, the algorithms are not fully automated, but this makes them more powerful, and we can apply them to a wide variety of NLPDEs with different types of initial data.
We divide the algorithm into five main steps (labeled F1-F5).
\textbf{Step F1}(Transform the NLPDE into an NLODE): First, in Eq. \eqref{geneq} we take the transformation
\begin{equation}\label{twtrans}
u = U(\xi )e^{I\theta},
\end{equation}
where
\begin{align}
\label{twco}& \xi = k_0t + k_1{x_1} + {k_2}{x_2} + \ldots + {k_m}{x_m}=\boldsymbol{K}\cdot\mathbf{X},\nonumber\\
\intertext{and}& \theta = {p_0}t + {p_1}{x_1} + {p_2}{x_2} + \ldots + {p_m}{x_m}=\boldsymbol{P}\cdot\mathbf{X}.
\end{align}
Here $\xi$ is called the traveling-wave coordinate and $U(\xi )$ is the traveling-wave part of the solutions. The second part of Eq. \eqref{twtrans} is $e^{I\theta}$, which is called the phase part of the solutions, and $\theta$ is called the phase coordinate. Usually, this part is present when differential equation \eqref{geneq} has an imaginary part. In our program library we must retain this phase part when Eq. \eqref{geneq} is complex; otherwise we only retain the traveling-wave part. Taking proper forms or values of the constant coefficients ($k_i\;(i=0\ldots m)$) of the traveling-wave coordinate $\xi$ and the phase angle constants $p_i\;(i=0\ldots m)$, we can transform the NLPDE \eqref{geneq} into a nonlinear ordinary differential equation (NLODE) with single independent variable $\xi$ and dependent variable $U(\xi)$. Repeatedly applying the chain rule
\begin{equation}\label{twtrans2}
\frac{\mathrm{\partial} \bullet }{\mathrm{\partial} \mathbf{X}}=\left ( \boldsymbol{K}\frac{\mathrm{d} }{\mathrm{d} \xi}+I\boldsymbol{P} \right )\bullet
\end{equation}
on Eq. \eqref{twtrans}, Eq. \eqref{geneq} is transformed into NLODE. The form of NLODE in general form is given by
\begin{equation}\label{tweq}
G\left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \ldots \right) = 0,
\end{equation}
where $U^{(n)} (n=1,2\ldots)$ denotes the $n$-th derivative with respect to $\xi$. All three algorithms (F-expansion, mF-expansion, FIM) work only when Eq. \eqref{tweq} is a polynomial in the variable $U(\xi)$ and its derivatives, and does not explicitly depend on the independent variable $\xi$. Our algorithm then checks the integrability of Eq. \eqref{tweq}. If Eq. \eqref{tweq} is integrable, our method tries to integrate it, and if the integration is successful then one can assign a value of one's choice to each integrating constant ($ic_i,\;i=1,2,\ldots,\eta$, where $\eta$ is the number of integrations). In the case of a complex NLPDE, if both real and imaginary parts are present in the transformed NLODE, one part is selected to be solved. To select one part from the complex NLODE, we follow \cite{complexTwt,complexTwt1}. In this context, the following strategies are followed:
\begin{itemize}
\item[i.] Assume all the parameters $(\alpha_i,k_i,p_i)$ are real.
\item[ii.] Express the NLODE \eqref{tweq} in the form
\begin{equation}\label{tweq3}
G\left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \ldots \right) = Re\left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \ldots \right)+Im\left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \ldots \right)*I=0,
\end{equation}
where $Re\left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \ldots \right)$ is real part and $Im\left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \ldots \right)$ is imaginary part of NLODE \eqref{tweq3}, and $I=\sqrt{-1}$.
\item[iii.] Check in which part ($Re$ or $Im$) the dependent variable $U(\xi)$ is not present but at least one parameter is present. Obtain the constraint on the parameters contained in this part. Take the other part (which may be $Re$ or $Im$) as the NLODE whose solutions are to be found.
\par Suppose in the real part $Re$, the dependent variable $U(\xi)$ is not present and there is only present the parameters $\alpha_i,k_i,p_i$. Then Eq. \eqref{tweq3} can be expressed by
\begin{equation}
Re\left( {\alpha_i,k_i,p_i} \right)+Im\left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \ldots \right)*I=0,
\end{equation}
and we have to solve the NLODE $Im\left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \ldots \right)=0$ subject to the constraint $Re\left( {\alpha_i,k_i,p_i}\right)=0$.
\item[iv.] If the above step fails, compare the expressions of $Re$ and $Im$ to check whether they become the same NLODE under some constraint on the parameters of $Re$ or $Im$. If they are the same NLODE, take either part as the NLODE whose solutions are to be found.
\par Suppose for the constraint
\begin{equation}\label{consf}
f(\alpha_i,k_i,p_i)=0,
\end{equation}
we get
\begin{equation}\label{reEqIm}
Re\left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \ldots \right)=Im\left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \ldots \right)
\end{equation}
from Eq. \eqref{tweq3}, then the solutions can be determined either from the NLODE $Re$ or $Im$ subject to the constraint relation \eqref{consf}.
\end{itemize}
If none of the above criteria is satisfied, the complex input-NLPDE cannot be transformed into a single NLODE in our algorithm and the method does not work
successfully.\\
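As an illustration of step F1 (a standard textbook reduction, chosen by hand and not produced by the software), consider the real Korteweg--de Vries (KdV) equation $u_t + 6uu_{x_1} + u_{x_1x_1x_1} = 0$. Since the equation is real, we take $u = U(\xi)$ with $\xi = k_0t + k_1x_1$ and no phase part; the chain rule \eqref{twtrans2} then gives the NLODE
\begin{equation*}
k_0U^{(1)} + 6k_1UU^{(1)} + k_1^3U^{(3)} = 0,
\end{equation*}
which can be integrated once with respect to $\xi$ to give $k_0U + 3k_1U^2 + k_1^3U^{(2)} = ic_1$ with integrating constant $ic_1$.\\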
\textbf{Step F2}(Determine the highest power $N$ of finite power series):
Now according to the F-expansion method, the solution of the NLODE \eqref{tweq} can be expressed as the finite power series, which is
\begin{equation}\label{soluseries}
U = \sum\limits_{i = 0}^N {a_i{F^i}(\xi )}+\sum\limits_{i = 1}^N {\frac{{{b_i}}}{{F^i}(\xi )}},
\end{equation}
where $a_i(i=0\ldots N)$ and $b_i(i=1\ldots N)$ are constants to be determined later.
In Eq. \eqref{soluseries} the first term is the positive part and the second term is the negative part of the solution. The value of $N$ (a positive integer or positive noninteger) in Eq. \eqref{soluseries} can be determined by considering a homogeneous balance between the highest order nonlinear term and the highest order derivative of $U(\xi)$ in Eq. \eqref{tweq}.
However, to automate this process, we have employed a method described in \cite{rath}. At first, Eq. \eqref{tweq} is expanded in the sum of product (SOP) form. Our aim is to determine the highest possible value of $N$, and so it is sufficient to replace $U$ with $U^N$. Assuming degree of $U(\xi)$ is $D[U(\xi)]=N$, we replace $U$ by $U^N$ and collect the degrees of each term appearing in Eq. \eqref{tweq} by a variable, say {\em E}. To determine the degree of an expression, we use the relations
\begin{equation}\label{degRltn}
D\left[ {\frac{{{d^n}U(\xi )}}{{d{\xi ^n}}}} \right] = N + n,\,\,D\left[ {{U^n}{{\left( {\frac{{{d^n}U(\xi )}}{{d{\xi ^n}}}} \right)}^m}} \right] = Nn + m(N + n).
\end{equation}
Generally in the ({\em max(E)},$N$) plane we get a turning point, and the value of $N$ is taken at this point. The whole procedure is automated in the following order:
\begin{itemize}
\item[i.] Substitute $U$ by $U^N$ and simplify.
\item[ii.] Expand and express in SOP form.
\item[iii.] Collect the degree of each term appearing in SOP form and store them in a list $ E$.
\item[iv.] Replace $N$ in $E$ by a sequence of numbers whose first number is 0. In GiNaCDE, we take three number sequences with common differences $1/2,1/3,1/4$, up to the last number 11.
\item[v.] For each number in sequence, calculate $ max( E)$.
\item[vi.] Calculate differences between the value of $ max( E)$ for successive numbers in the sequence.
\item[vii.] To get the highest power $N$, take the number in the sequence at which the differences stop being the same as the previous one. More precisely, if $i$ is the current number in the sequence and $ ({max(E_i)} - {max(E_{i - 1})}) \ne ({max(E_{i - 1})} - {max(E_{i - 2})})$, then the highest power is $N=i-1$.
\end{itemize}
It is clear that for a larger value of $N$ the complexity of the mathematical operations increases. To avoid such complexity in the derivations, one should set a maximum allowed value of $N$. However, in some cases the auto-evaluation of $N$ may fail. \\
\textbf{Step F3}(Derive the system of Nonlinear Algebraic Equations for the coefficients of $F(\xi)$):
$F(\xi)$ satisfy the first-order nonlinear ODE (also called auxiliary equation (A.E.))
\begin{equation}\label{1stnlode}
{F'\left( \xi \right)} = \mathcal{F}(F(\xi)),
\end{equation}
where $\mathcal{F}(F(\xi))$ is some known functions of $F(\xi)$. The prime over $F(\xi)$ represents differentiation with respect to $\xi$. In case of F-expansion method
\begin{equation} \label{aeF-exp}
\mathcal{F}(F(\xi)) = \sqrt{{A_0} + {A_1}F + {A_2}{F^2} + \ldots + {A_\delta }{F^\delta }},
\end{equation}
where $\delta$ is a positive integer and $A_i(i=0,1,\ldots \delta)$ are coefficients of A.E. Here one can use any functional form of Eq. \eqref{aeF-exp} by choosing any positive integer value of $\delta$ and real values of $A_i$. As a result the solutions of first-order NLODE \eqref{aeF-exp} can be expressed in terms of a large variety of functions such as polynomial, exponential, trigonometric, hyperbolic, rational, Jacobi elliptic etc..
However, taking some well-known functional forms of $\mathcal{F}$ in Eq. \eqref{aeF-exp}, we have shown some solutions of $F(\xi)$ in the Appendix: \ref{sec:appenC}, \ref{sec:appenD}. The higher derivatives of $F(\xi)$ using Eq. \eqref{1stnlode} can be expressed by
\begin{equation}\label{1stnlodeD}
F^{\prime\prime}= \frac{\mathrm{d} \mathcal{F} }{\mathrm{d}F}\mathcal{F},\;\;F^{\prime\prime\prime}=\left(\frac{\mathrm{d} \mathcal{F} }{\mathrm{d}F}\mathcal{F}\right)^2+\mathcal{F}^2\frac{\mathrm{d^2} \mathcal{F} }{\mathrm{d}F^2}\;\; \text{and so on.}
\end{equation}
Now substituting Eq. \eqref{soluseries} into Eq. \eqref{tweq} and using Eq. \eqref{1stnlodeD} with \eqref{aeF-exp} we get an expression whose numerator contains $F(\xi)^j{\mathcal{F}}^k\;(j=0,1,2,\ldots;k=0,1)$ terms. Setting each coefficient of $F(\xi)^j{\mathcal{F}}^k$ to zero, an overdetermined system of nonlinear algebraic equations is obtained in which the constant parameters $a_i(i=0\ldots N)$, $b_i(i=1\ldots N)$, $k_i(i=0\ldots m)$, $p_i(i=0\ldots m)$, $A_i(i=0,1,\ldots \delta)$, the parameters $\alpha_i(i=1\ldots l)$ appearing in the input-NLPDE and, if \eqref{tweq} is integrable, the integrating constants $ic_i(i=1\ldots \eta)$ are present.
The nonlinear system of equations is a set of simultaneous equations in which the unknowns (the constant parameters) appear as variables of a polynomial of degree one or higher than one.
Suppose the system of algebraic equations is solved for all parameters that are present in the system of algebraic equations. In that case, it takes a larger time to get solutions for the system of equations, even sometimes solutions are not obtained for a complicated system. To reduce the calculating time and the complexity in derivations, we categorize all the parameters into two different types. They are external parameters and internal parameters. External parameters are $\alpha_i(i=1\ldots l)$, $A_i(i=0,1,\ldots \delta)$ which are present in input-NLPDE equation and auxiliary equation respectively. When input-NLPDE equations are integrable, the generated integrating constant(s) $ic_i(i=1\ldots \eta)$ is also external parameter. On the other hand all the remaining parameters, such as $a_i(i=0\ldots N)$, $b_i(i=1\ldots N)$, $k_i(i=0\ldots m)$, $p_i(i=0\ldots m)$ are called internal parameters as they generate internally. Nonlinear algebraic systems are always solved for internal parameters. But one can have control over external parameters to choose the parameters for which nonlinear algebraic system is to be solved. For this purposes, the programming variables {\em ASolve} and {\em paraInDiffSolve} (detailed descriptions are given in user manual) are used to choose the unknowns from the external parameters in our choice, and it will reduce the calculating time and can handle more complicated algebraic expressions. At the same time, the exact solutions are determined subject to the conditions on chosen external parameters. \\
\textbf{Step F4}(Solve the system of Nonlinear Algebraic Equations): Analysing and solving the nonlinear algebraic system is a vital and challenging step among all steps of the method. In fact, the number of exact solutions of NLPDE derived by F-expansion, modified F-expansion, or first integral methods is entirely depending on how many solutions are obtained from a nonlinear algebraic system. The executing time of the software mainly depends on this step. Many methods are available to solve nonlinear algebraic systems such that Gröbner basis methods \cite{grobner}, the Ritt-Wu characteristic sets method implemented by Wang \cite{RittWu,RittWu1}, and the Reduced Involutive Form (Rif) code by Wittkopf and Reid \cite{rif}. D. Baldwin et al. \cite{baldwin} have employed a simple algorithm to design a powerful nonlinear solver in Mathematica. We have followed their algorithm to create a nonlinear solver in GiNaC symbolic system. The nonlinear solver implemented in \cite{baldwin} solves the entire system in an automated way using the built-in Mathematica function \textit{Reduce}. Their solver can solve polynomial and non-polynomial systems both. The nonlinear solver implemented by us can solve only the polynomial system required in this application, and its own C++ function solves the polynomial equations.
The steps used in this algorithm are very much like the steps used to solve a nonlinear algebraic system by hand. In this method, the simplest equation is solved for a suitably sorted unknown parameter. Then the solutions are substituted into the remaining equations. Such solving and substitution procedures are repeated until the system is completely solved. We operate the whole procedure in the following order (a small illustration of the complexity-based sorting in step iii is sketched after this list):
\begin{itemize}
\item[i.] Check whether each equation is polynomial in unknowns.
\item[ii.] Factor and simplify each equation.
\item[iii.] Measure the complexity of each equation by the number of add containers, the unknown parameters, and the degree of the unknowns. Then, sort the system based on complexity. If more than one equation has the same complexity, in GiNaCDE they are sorted according to the GiNaC in-built comparison predicate {\em ex\_is\_less}.
\item[iv.] Sort the unknown parameters contained in the simplest equation by their degree.
\item[v.] Solve the simplest equation for the lowest degree unknown. If the number of unknowns of the lowest degree is greater than one, then GiNaCDE uses the comparison predicate {\em ex\_is\_less} to choose the unknown. If solutions are absent, solve for the unknown of the next higher degree.
\item[vi.] Substitute the solutions into the remaining equations and simplify.
\item[vii.] Repeat the steps i-vi until all the equations are reduced to zero.
\item[viii.] Substitute all the unknowns which are present in the computed solutions with the help of other solutions.
\item[ix.] Test the solutions by substituting them into each equation.
\item[x.] Finally, collect all solutions branches.
\end{itemize}
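The following short program is an illustrative sketch written for this documentation (it is not an excerpt of GiNaCDE's internal solver): using plain GiNaC it assigns each equation a rough complexity score in the spirit of step iii and sorts the system so that the simplest equation is attacked first, falling back on GiNaC's {\em ex\_is\_less} ordering for ties.
\begin{verbatim}
// Illustrative only: a rough "sort by complexity" pass over a small
// polynomial system, using plain GiNaC.
#include <algorithm>
#include <iostream>
#include <vector>
#include <ginac/ginac.h>
using namespace GiNaC;

// Rough complexity: number of top-level terms plus the degrees in the unknowns.
static int complexity(const ex &eq, const lst &unknowns)
{
    ex e = eq.expand();
    int score = static_cast<int>(e.nops());
    for (const ex &u : unknowns)
        score += e.degree(u);
    return score;
}

int main()
{
    symbol a0("a0"), a1("a1"), k("k");
    lst unknowns = {a0, a1, k};

    std::vector<ex> eqs = {
        pow(a1, 3)*k - 2*a1*k + a0*a1,   // more complex
        a1*k - 1,                        // simplest: solve this one first
        pow(a0, 2) + a0*a1*k - a1
    };

    std::sort(eqs.begin(), eqs.end(),
              [&](const ex &lhs, const ex &rhs) {
                  int cl = complexity(lhs, unknowns);
                  int cr = complexity(rhs, unknowns);
                  if (cl != cr) return cl < cr;
                  return ex_is_less()(lhs, rhs);  // GiNaC's canonical ordering
              });

    for (const ex &eq : eqs)
        std::cout << eq << " == 0" << std::endl;
    return 0;
}
\end{verbatim}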
Our solver is powerful and can easily handle nonlinear equations (of course, polynomial in the unknowns) with multiple parameters. Sometimes, there are risks of missing some solutions when numerous parameters are present in the system or when the system is of high degree. In this solver, the unknowns are chosen from all parameters appearing in the system in order of complexity. Then the solutions for these unknowns are expressed in terms of the other parameters, which are to be regarded as arbitrary parameters. Sometimes it is observed that the solutions become simpler when these arbitrary parameters are instead taken as unknowns.\\
\textbf{Step F5}(Build solutions with calculating steps): Substitute the solutions obtained in step F4 into Eq. \eqref{aeF-exp} and obtain the solutions of $F$ using the Appendix: \ref{sec:appenC}, \ref{sec:appenD}. Then, to obtain traveling-wave solutions of Eq. \eqref{tweq}, substitute $F$ and the solutions obtained in step F4 into Eq. \eqref{soluseries}. Finally the explicit solutions in original variables are obtained using Eqs. \eqref{twtrans}, \eqref{twco}.\\
\section{Algorithm of modified F-expansion method}\label{sec:mF}
In this section, we present the algorithm of modified F-expansion method following \cite{modfexpn}.
The algorithm for the modified F-expansion method \cite{modfexpn} is very similar to the algorithm of the F-expansion method. This algorithm also has five main steps (labeled MF1-MF5), and it has only one difference from the F-expansion method: a different form of the A.E. is taken in step MF3 in comparison to step F3. Therefore, one can look for new exact solutions of an NLPDE by applying both methods (the F-expansion method and the modified F-expansion method) to the same NLPDE with different forms of the A.E. Details of all steps are described below:\\
\textbf{Step MF1}(Transform the NLPDE into an NLODE): Same as step F1.
\textbf{Step MF2}(Determine the highest power $N$ of finite power series): Same as step F2.
\textbf{Step MF3}(Derive the system of Nonlinear Algebraic Equations for the coefficients of $F(\xi)$):
In the modified F-expansion method, the solution of the NLODE \eqref{tweq} is also expressed by a finite series like Eq. \eqref{soluseries}. In this method, we have generalized the modified F-expansion method \cite{modfexpn} taking the A.E. in more general form
\begin{equation}\label{1stnlode2}
F'\left( \xi \right) = {A_0} + {A_1}F + {A_2}{F^2} + \ldots + {A_\delta }{F^\delta },
\end{equation}
where $\delta$ is a positive integer and $A_i(i=0,1,\ldots \delta)$ are the coefficients of the A.E. One can choose any functional form of Eq. \eqref{1stnlode2} using any positive integer value of $\delta$ and any real values of $A_i$. Here interestingly, we note that by choosing various functional forms of Eq. \eqref{1stnlode2}, one can get the final solutions of the input-NLPDE in terms of a large variety of functions. For example, some well-known equations can be obtained from Eq. \eqref{1stnlode2}, such as the Riccati equation with $\delta=2$ and the Bernoulli equation with $A_i=0,\;(i\ne 1\; \text{and}\; i \ne \delta)$. The exact solutions of the Riccati and Bernoulli equations are known and are given in Appendix: \ref{sec:appenA} and Appendix: \ref{sec:appenB} respectively.
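As a concrete illustration (a standard computation, independent of the software): taking $\delta = 3$ with $A_0 = A_2 = 0$ in Eq. \eqref{1stnlode2} gives the Bernoulli equation $F' = A_1F + A_3F^3$; the substitution $v = F^{-2}$ turns it into the linear equation $v' = -2A_1v - 2A_3$, so that, for $A_1 \neq 0$, $F(\xi) = \left(Ce^{-2A_1\xi} - A_3/A_1\right)^{-1/2}$ with an arbitrary constant $C$.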
Now substituting Eq. \eqref{soluseries} into Eq. \eqref{tweq} and using Eq. \eqref{1stnlode2} we get an expression whose numerator contains the terms $F(\xi)^j\;(j=0,1,2,\ldots)$. This expression must vanish identically. Hence, to generate a nonlinear algebraic system, equate to zero the coefficients of the powers of $F$.
\textbf{Step MF4}(Solve the system of Nonlinear Algebraic Equations): Follow a similar strategy as in step F4.
\textbf{Step MF5}(Build solutions with calculating steps): Substitute the solutions of step MF4 into Eq. \eqref{1stnlode2}. Obtain the solutions of $F$ using Appendix: \ref{sec:appenA}, \ref{sec:appenB}. Then, substitute $F$ along with the solutions of step MF4 into Eq. \eqref{soluseries}. To get the explicit solutions in original variables, Eqs. \eqref{twtrans}, \eqref{twco} are used.
\section{Algorithm of first integral method}\label{sec:fim}
In this section, we present the algorithm of first integral method following \cite{fim0,fim,mirza,complexTwt1}.
In the first integral method \cite{fim0,fim,mirza,complexTwt1}, one important advantage over the F-expansion and modified F-expansion methods is that one does not have to choose an A.E. to solve NLPDEs; instead, the input-NLPDE is automatically reduced to a suitable first-order NLODE whose solutions have to be calculated.
The algorithm for the automated first integral method has eight main steps (labeled FIM1-FIM8). Now we give an outline of every step as follows:\\
\textbf{Step FIM1}(Transform the NLPDEs into NLODEs): Same as step F1. The condition for applying first integral method to the Eq. \eqref{tweq} is that Eq. \eqref{tweq} must be a second-order NLODE. Therefore Eq. \eqref{tweq} is expressed in the form
\begin{equation}\label{tweq2}
\mathbf{G} \left( {\alpha_i,k_i,p_i,U,U^{(1)},U^{(2)}} \right) = 0.
\end{equation}
\textbf{Step FIM2}(Convert into a system of NLODEs): We assume that $U(\xi)=X(\xi)$ and introduce a new dependent variable $Y(\xi) = X_{\xi}(\xi)$, so that Eq. \eqref{tweq2} can be rewritten as a system of NLODEs \cite{fim0}
\begin{subequations}\label{sysode}
\begin{align}
&X_{\xi}(\xi)= Y(\xi),\label{sysode1}\\
&Y_{\xi}(\xi)= \frac{P(X(\xi),Y(\xi))}{H(X)} = \frac{1}{H(X)}\left(K_0(X)+K_1(X)Y+\ldots+K_d(X)Y^d\right)\label{sysode2}.
\end{align}
\end{subequations}
We have expressed $P(X(\xi),Y(\xi))$ as a polynomial in variable $Y(\xi)$ with degree $d$ and $H(X),K_i(X)(i=0,1\ldots d)$ are polynomials in variable $X$.
$H(X)$ is the coefficient of the highest derivative term in Eq. \eqref{tweq2}.\\
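For example (again purely illustrative), the second-order NLODE $k_1^2U^{(2)} - U + U^3 = 0$ becomes, with $X=U$ and $Y=X_\xi$, the system $X_\xi = Y$, $Y_\xi = (X - X^3)/k_1^2$; here $H(X) = k_1^2$, $K_0(X) = X - X^3$ and $K_1(X) = K_2(X) = 0$.\\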
\textbf{Step FIM3}(Apply Division Theorem): If $X(\xi),Y(\xi)$ are nontrivial solutions of Eq. \eqref{sysode}, then applying the Division Theorem \cite{fim0} there exists an irreducible polynomial in the complex domain $C[X,Y]$ such that
\begin{equation}\label{irred}
q(X(\xi),Y(\xi))=\sum\limits_{i = 0}^N {{a_i}(X){Y^i}}=0,
\end{equation}
where $a_i(i=0\ldots N)$ are polynomials of $X$ and $a_N\neq 0$. Equation \eqref{irred} is called the first integral to Eqs. \eqref{sysode1} and \eqref{sysode2}. Using Division Theorem there exists a polynomial $(g(X)+h(X)Y)$ such that
\begin{equation}\label{qform}
\frac{{dq}}{{d\xi }} = \frac{{\partial q}}{{\partial X}}\frac{{dX}}{{d\xi }} + \frac{{\partial q}}{{\partial Y}}\frac{{dY}}{{d\xi }} = \left( {g(X) + h(X)Y} \right)\sum\limits_{i = 0}^N {{a_i}(X){Y^i}}.
\end{equation}
Using Eqs. \eqref{sysode}, \eqref{irred} in the Eq. \eqref{qform}, we get
\begin{equation}\label{qred}
\begin{gathered}
\sum\limits_{i = 0}^N {{{\dot a}_i}(X){Y^{i + 1}} + } \sum\limits_{i = 0}^N {i{a_i}(X){Y^{i - 1}}\frac{1}{H(X)}\left(K_0(X)+K_1(X)Y+\ldots+K_d(X)Y^d\right)} \\
= \left( {g(X) + h(X)Y} \right)\sum\limits_{i = 0}^N {{a_i}(X){Y^i}}.
\end{gathered}
\end{equation}
Dot over $a_i(X)$ denotes derivative with respect to $X$.
The degree in variable $Y$ of left hand side (L.H.S) in Eq. \eqref{qred} is $i+d-1$ and the degree in variable $Y$ of right hand side (R.H.S) in Eq. \eqref{qred} is $i+1$. Balancing degrees between both sides we get $i+d-1=i+1$, hence $d=2$. So the method is applicable when the degree of Eq. \eqref{sysode2} in variable $Y$ is less than or equal to 2. Taking the maximum degree 2, Eq. \eqref{qred} can be rewritten as
\begin{equation}\label{qred2}
\sum\limits_{i = 0}^N {{{\dot a}_i}(X){Y^{i + 1}} + } \sum\limits_{i = 0}^N {i{a_i}(X){Y^{i - 1}}\frac{1}{H(X)}\left(K_0(X)+K_1(X)Y+K_2(X)Y^2\right)} = \left( {g(X) + h(X)Y} \right)\sum\limits_{i = 0}^N {{a_i}(X){Y^i}}.
\end{equation}
\textbf{Step FIM4}(Derive the Algebraic System of equations for coefficients of $Y^i$):
Comparing coefficients of $Y^i\;(i=N+1,N,\ldots,1,0)$ on both sides of \eqref{qred2}, and for $H(X)\neq 0$ canceling $H(X)$ in denominator from both sides we obtain
\begin{subequations}\label{Yeq}
\begin{align}
\label{Yeq1}&{Y^{N + 1}}:\,\,H(X){{\dot a}_N}(X)+Na_N(X)K_2(X) = H(X)h(X){a_N}(X),\\
\label{Yeq2_1}&{Y^N}\,\,\,\,:\,H(X){{\dot a}_{N - 1}}(X) + NK_1(X){a_N}(X) + (N + 1)K_0(X){a_{N + 1}}(X) = H(X)g(X){a_N}(X) + H(X)h(X){a_{N - 1}}(X),\\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \vdots\nonumber\\
\label{Yeq3}&{Y^1}\,\,\,\,\,:\,H(X){{\dot a}_0}(X) + K_1(X){a_1}(X) + 2K_0(X){a_2}(X) = H(X)g(X){a_1}(X) + H(X)h(X){a_0}(X),\\
\label{Yeq4}&{Y^0}\,\,\,\,:\,K_0(X){a_1}(X) = H(X)g(X){a_0}(X),
\end{align}
\end{subequations}
where $a_i(X)=0$ for $i<0$ and $i>N$.
In the next step FIM5 we take $a_N=1$ to derive the polynomial forms of $h(X),g(X),a_i\;(i=0,\ldots,N-1)$. For $a_N=1$ from Eq. \eqref{Yeq1} we obtain
\begin{equation}\label{hform}
h(X)=\frac{NK_2(X)}{H(X)}.
\end{equation}
If $H(X)$ is not a constant and at the same time the degree of $P(X(\xi),Y(\xi))$ in the variable $Y$ is 2, then it is clear from Eqs. \eqref{Yeq1}, \eqref{hform} that $h(X)$ will not be a polynomial in $X$. In this case we avoid such a non-polynomial form of $h(X)$ by making the transformation \cite{mirza}
\begin{equation}\label{trans}
d\xi=H(X)d\eta,
\end{equation}
in Eq. \eqref{sysode} temporarily. Applying the transformation in Eq. \eqref{sysode} we get
\begin{subequations}\label{sysodetrans}
\begin{align}
\label{sysodetrans1}&X_{\eta}(\eta)= H(X)Y,\\
\label{sysodetrans2}&Y_{\eta}(\eta)=K_0(X)+K_1(X)Y+\ldots+K_d(X)Y^d.
\end{align}
\end{subequations}
Consequently the nonlinear algebraic system becomes
\begin{subequations}\label{Yeq2}
\begin{align}
\label{Yeq12}&{Y^{N + 1}}:\,\,H(X){{\dot a}_N}(X)+Na_N(X)K_2(X) = h(X){a_N}(X),\\
\label{Yeq22}&{Y^N}\,\,\,\,:\,H(X){{\dot a}_{N - 1}}(X) + NK_1(X){a_N}(X) + (N + 1)K_0(X){a_{N + 1}}(X) = g(X){a_N}(X) + h(X){a_{N - 1}}(X),\\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\nonumber\\
\label{Yeq32}&{Y^1}\,\,\,\,\,:\,H(X){{\dot a}_0}(X) + K_1(X){a_1}(X) + 2K_0(X){a_2}(X) = g(X){a_1}(X) + h(X){a_0}(X),\\
\label{Yeq42}&{Y^0}\,\,\,\,:\,K_0(X){a_1}(X) = g(X){a_0}(X).
\end{align}
\end{subequations}
Now for $a_N=1$, from Eq. \eqref{Yeq12} we get $h(X)=NK_2(X)$ which is polynomial in $X$. In the following steps, we explain all the procedures with the help of Eq. \eqref{Yeq}, because the same procedures are applicable when Eq. \eqref{Yeq2} is considered for the non-polynomial case of $h(X)$.
\textbf{Step FIM5}(Determine degrees of $h (X), g (X), a_i (X)(i=0\ldots N)$ and express them in polynomial forms): For simplicity substitute $a_N=1$ in Eq. \eqref{Yeq1} (in Eq. \eqref{Yeq12} for non-polynomial case of $h(X)$) and obtain polynomial form of $h(X)$.
To determine polynomial forms of $g,a_i(i<N)$, the degrees of $X$ between L.H.S and R.H.S in each equation of Eqs. \eqref{Yeq} (in Eqs. \eqref{Yeq2} for non-polynomial case of $h(X)$) are balanced. The balancing process is implemented in the following order:
\begin{itemize}
\item[i.] Replace $g\rightarrow X^{d_g}$ and $a_i\rightarrow X^{d_{a_{i}}}\,(i<N)$, with $h(X)$ kept in the polynomial form already obtained, in each equation from \eqref{Yeq2_1} to \eqref{Yeq4}. It should be noted here that if the degree of $P(X(\xi),Y(\xi))$ in the variable $Y$ is 2 and $H(X)$ is not a constant (i.e. the non-polynomial case of $h(X)$), then the replacements are done in Eqs. \eqref{Yeq2} instead of Eqs. \eqref{Yeq}.
\item[ii.] $d_g,d_{a_{i}}$ are replaced by the sequence $0,1,2,\ldots$ of non-negative integers arranged in increasing order. To avoid an infinite loop, the sequences are taken up to a certain maximum number; in our library, the maximum number is 5. Sometimes it is impossible to balance the equations for any positive integer.
\item[iii.] Take the numbers from sequences at which L.H.S and R.H.S of each equation are balanced in the degree of $X$.
\end{itemize}
After balancing, if $d_{g}, d_{a_{i}}\;(i<N)$ are degrees of $g,a_{i}\;(i<N)$ respectively, then $g,a_{i}$ are expressed by
\begin{subequations}\label{parapoly}
\begin{align}
\label{gpoly}&g=g_0+g_1X+\ldots+g_{d_g}X^{d_g},\\
\label{a0poly}&a_{0}=a_{00}+a_{01}X+\ldots+a_{0d_{a_{0}}}X^{d_{a_{0}}},\\
\label{a1poly}&a_{1}=a_{10}+a_{11}X+\ldots+a_{1d_{a_{1}}}X^{d_{a_{1}}},\\
&\vdots=\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\nonumber\\
\label{aNminus1poly}&a_{N-1}=a_{(N-1)0}+a_{(N-1)1}X+\ldots+a_{(N-1)d_{a_{(N-1)}}}X^{d_{a_{(N-1)}}}.
\end{align}
\end{subequations}
where $g_i(i=0,1,\ldots,d_g),a_{ij}(i=0,1,\ldots,N-1,j=0,1,\ldots,d_{a_{i}})$ are arbitrary constants.
In GiNaCDE, all possible combinations of balanced degrees (whose values are $<$ 6) for $g,a_i$ are calculated, and for each combination, the solutions of input-NLPDE are derived.\\
\textbf{Step FIM6}(Derive the Nonlinear Algebraic System for the parameters $g_i,a_{ij}$): Substitute $a_N=1$ and \eqref{parapoly} into each equation of \eqref{Yeq}. The coefficients of the power in $X,Y$ in each equation of \eqref{Yeq} must vanish. Collect the coefficients and generate a nonlinear algebraic system of equations parametrized by $g_i,a_{ij},\boldsymbol{K},\boldsymbol{P}$, integrating constants (for integrable NLPDE) and parameters appearing in input-NLPDE.\\
\textbf{Step FIM7}(Solve the Nonlinear Parameterized Algebraic System): Here, the external parameters are parameters appearing in input-NLPDE and integrating constants. Internal parameters are $g_i,a_{ij},\boldsymbol{K},\boldsymbol{P}$. The nonlinear algebraic system is solved following a similar process in step F4. Like step F4, here also, the runtime of this algorithm mainly depends on this step.\\
\textbf{Step FIM8}(Build solutions): The solutions in step FIM7 are substituted in Eq. \eqref{irred} and using $Y(\xi)=U_\xi(\xi)$, Eq. \eqref{irred} converts into first-order NLODE called first integral form of Eq. \eqref{tweq2}. Some well-known forms of first-order NLODE with solutions have been listed in Appendix: \ref{sec:appenA}, \ref{sec:appenB}, \ref{sec:appenC}, \ref{sec:appenD}. If first integral form matches with any form of listed NLODE in Appendix, solutions are shown, otherwise the program shows only the first integral forms. Combining Eqs. \eqref{twtrans}, \eqref{twco}, we obtain final solutions of input-NLPDE in original variables.\\
\section{Software Implementation}\label{sec:imple}
We have implemented the algorithms described in Secs. \ref{sec:Fexpn}, \ref{sec:mF} and \ref{sec:fim}, into GiNaCDE. GiNaCDE is a C++ library that is built on a pure C++ symbolic library GiNaC \cite{ginac}. Besides this library version, we have also developed a GUI version of GiNaCDE called GiNaCDE GUI. When we solve differential equations using GiNaCDE GUI, we do not have to write any C++ code, and compilation of any code is not required. This GUI version guides us in each step to obtain the output results. However, a complete guide of GiNaCDE GUI has been provided in the GiNaCDE \href{https://github.com/mithun218/GiNaCDE}{repository}. In both versions of GiNaCDE (library and GUI), the output results are saved in a text file with calculating steps. Output results can be saved in Maple or Mathematica programming language by assigning the C++ macros {\em Maple} or {\em Mathematica} to the programming variable {\em {output}}.
The F-expansion and mF-expansion methods can be applied to higher-order NLPDEs, but the FIM method is applicable to an NLPDE only when its transformed NLODE \eqref{tweq2} is of second order. There is no rule for knowing in advance which of the three methods is appropriate for solving a given NLPDE.
In GiNaCDE, to derive the solutions of NLPDEs we assume all the constant parameters ($g_i,a_{ij},\boldsymbol{K},\boldsymbol{P}$ etc.) are strictly real, positive, and the rules like $\sqrt{a^2}\rightarrow a,\sqrt{-a^2}\rightarrow Ia, \sqrt{-a}\rightarrow I\sqrt{a}$ are successively applied. Such simplification rules are used in {Maple} CAS with {\em simplify} routine along with {\em assume=positive} or {\em symbolic} option. One demerit of such rules is that the solutions which exist for negative values of the parameters may be missed.
In order to start a solution process for a given NLPDE or NLODE in GiNaCDE, we require some initial data (the options and parameters specified by the user). Some programming variables such as \textit{twcPhase, positivePart, negativePart, NValue, degAcoeff, ASolve, paraInDiffSolve} (detailed descriptions of these variables are given in the Sec. \ref{sec:command}) are available, which can be initially set up by the users in their own choices for getting better results for a given NLPDE before starting the solution process in the GiNaCDE software. There is no rule to know in advance a specific type of initialization to get better results.
In all three methods, if the input NLPDE or NLODE is complex, then the software tries to separate the real and imaginary parts following the step F1.
In all three methods, if the input NLPDE or NLODE is integrable, the software tries to integrate them after starting the solution process. If the integration is successful, then the software gives an option to us to assign a numerical or symbolic value to the integrating constant(s) $ic_i\;(i=1,2,..)$ in our choices. All these options make the software more powerful and flexible, enhancing its ability to find many new exact solutions to huge variants of NLPDEs.
Now we shall discuss some implementation details of each method separately:
\textbf{F-expansion and mF-expansion methods:} These methods have been implemented in \textit{F\_expns\_methd.cpp} and \textit{F\_expns\_methd.h} files.
The F-expansion and mF-expansion methods are chosen by the C++ macros {\em F\_expansion, mF\_expansion} respectively.
One can use the coordinates $\boldsymbol{K},\boldsymbol{P}$ of one's own choice with the help of the programming variable \textit{twcPhase}.
We can take any one or both parts in the solutions \eqref{soluseries} with the help of the programming variables \textit{positivePart}, \textit{negativePart}. In GiNaCDE, we set the maximum allowed value of $N$ at $10$. However, sometimes in some cases, the auto-evaluated value of $N$ exceeds 10. Then this step fails to find $N$.
In this case, we can check the solutions of input-NLPDE by specifying the value of $N$ in our choice lower than $10$ with the help of the programming variable {\em NValue}. If we do not assign any value to the variable {\em NValue}, the value of $N$ is auto-evaluated following the criteria in step F2.
In these methods, we initially have to input A.E. to start the solution process in GiNaCDE manually. One can choose the parameters $A_i(i=0,1,\ldots \delta)$ in own choices with the help of the programming variable \textit{degAcoeff}. There are no rules to know in advance what type of A.E. can give exact solutions for a given NLPDE. In this context, we have shown the solutions of some well-known A.E. in the Appendix: \ref{sec:appenA}, \ref{sec:appenB}, \ref{sec:appenC}, \ref{sec:appenD}. If our chosen A.E. matches with any form of listed A.E. in Appendix, solutions are shown, otherwise the program shows only the A.E.
The variable {\em ASolve} confirms whether the nonlinear algebraic system will be solved for the parameters contained in the A.E. (i.e., the parameters $A_i$ where $i=0,1,\ldots \delta$) along with the other parameters. The parameters appearing in the input-NLPDE (these belong to the external parameters) are supplied in the programming variable {\em paraInDiffSolve} to solve the nonlinear algebraic system for those parameters also.
This determines the conditions on those external parameters under which exact solutions are obtained.
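To make these initializations concrete, the following is a minimal setup sketch in C++ that uses only the programming variables and commands documented in this manual. The include path, the symbol names and the example equation are my own choices, and the call that actually launches the chosen method is deliberately omitted; the sketch only shows how {\em twcPhase}, {\em degAcoeff}, {\em NValue} and the related flags fit together.
\begin{verbatim}
// Illustrative setup only; the header path and the omitted driver call
// are assumptions, not part of the documented interface.
#include <GiNaCDE/GiNaCDE.h>
using namespace GiNaC;

int main()
{
    // Independent variables and the dependent variable:
    symbol t("t"), x("x"), u("u");
    depend(u, lst{t, x});                 // u = u(t, x)

    // An illustrative real NLPDE, u_t + 6 u u_x + u_xxx = 0,
    // written with the unevaluated-derivative container Diff:
    ex pde = Diff(u, t, 1) + 6*u*Diff(u, x, 1) + Diff(u, x, 3);

    // Traveling-wave and phase coefficients (the phase part is not
    // used for a real equation):
    symbol k0("k_0"), k1("k_1"), p0("p_0"), p1("p_1");
    twcPhase = lst{lst{k0, k1}, lst{p0, p1}};

    // Auxiliary equation F' = sqrt(A_0 + A_2 F^2 + A_4 F^4), i.e. delta = 4:
    symbol A0("A_0"), A2("A_2"), A4("A_4");
    degAcoeff = lst{4, A0, 0, A2, 0, A4};

    positivePart = true;
    negativePart = false;
    ASolve = false;      // keep the A_i arbitrary
    NValue = 2;          // or leave unset to auto-evaluate N

    // ... the F-expansion driver would be invoked on `pde` here;
    // that call is omitted from this sketch.
    return 0;
}
\end{verbatim}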
\textbf{FIM:} This method has been implemented in \textit{fim.cpp} and \textit{fim.h} files. This method is chosen by the C++ macro {\em FIM}. For the first integral method, we do not have to input A.E. Here, for initializations, we have only three programming variables \textit{twcPhase, NValue, paraInDiffSolve}. The variables \textit{twcPhase, paraInDiffSolve} have been discussed above. Like F-expansion method the value of $N$ is also assigned by the variable {\em NValue} and the default value is $N=1$. However, in our library, the allowed values of {\em NValue} are 1 and 2.
%After starting the solution process, there is also scope to assign a value to the integrating constant generated from an integrable NLPDE.
\section{Compiling and installing}\label{sec:compile}
We need to make sure that we already have a few tools installed, including pkg-config (>= 0.29.2), CMake (>= 3.1), CLN (>= 1.3.4), GiNaC (>= 1.7.6). We also need a decent ISO C++11 compiler; we suggest using the C++ compiler from the GNU compiler collection, GCC >= 4.9. The GTK+ 3.xx libraries (optional) are required to build the graphical user interface (GUI) of the GiNaCDE library.
The source files are compiled using CMake build system by executing the following commands with Make and the GCC compiler on the command-line:
\begin{verbatim}
1. mkdir build-dir # generate a separate directory
2. cd build-dir
3. cmake -DGINACDE_GUI_BUILD=on <path-to-source> # generate Makefiles
4. make
5. make install
\end{verbatim}
On line 3, the option {\em -DGINACDE\_GUI\_BUILD=on} is used to build the GUI of the GiNaCDE library, which requires the GTK+ 3.xx libraries. If the GTK+ 3.xx libraries are absent from the system, one can omit this option. The path to the source directory is used in place of {\em <path-to-source>}. We have checked that the source files compile successfully on the Windows platform using MSYS2 (https://www.msys2.org) when the command on line 3 is simply replaced by
\begin{verbatim}
3. cmake -G "MSYS Makefiles" -DGINACDE_GUI_BUILD=on <path-to-source>
\end{verbatim}
If you have just installed the library, it is sometimes necessary to run
\begin{verbatim}
1. ldconfig
\end{verbatim}
which creates the necessary links and cache to the most recent shared libraries installed in the system.
To test GiNaCDE library after building it, simply type the following
\begin{verbatim}
1. ctest
\end{verbatim}
\section{Arithmetic operators and some supported functions in library}
In writing algebraic expressions GiNaCDE follows the GiNaC rules. Like GiNaC, GiNaCDE supports all the arithmetic operators $*$ (multiplication), $+$ (addition), $-$ (subtraction), $/$ (division). GiNaCDE's C++ library version does not support the operator $ ^{\wedge}$. For this purpose, an object of class {\em power} (in short {\em pow}) plays the role of the $ ^{\wedge}$ operator; this {\em power} object has two slots, one for the basis and one for the exponent, so that $x^5$ is written as $pow(x,5)$. Here one should note that GiNaCDE's GUI version supports the operator $ ^{\wedge}$ as well.
The imaginary unit in GiNaCDE has been predefined in GiNaC as a numeric object with the name $I$. GiNaCDE supports the function {\em conjugate()} which finds the complex conjugate of mathematical expressions, such as complex conjugate of $x$ is determined by {\em conjugate(x)}. GiNaCDE also supports the function {\em sqrt()} which finds square root.
\par The relational operators, which express a relation between two {\em ex} objects, signal equality, inequality, and so on between them. These relations are created simply by using the C++ operators $==, !=, <, <=, >$ and $>=$ between two expressions.
\section{The parameters of the different methods used in library}
Many parameters are used in the three different methods. All parameters have been well described in Secs. \ref{sec:Fexpn}, \ref{sec:mF} and \ref{sec:fim}. In the software, we have used the same parameter names as in the above descriptions of the algorithms; only in the case of the first integral method are some parameter names $(g,h)$ and algebraic symbol names $(X,Y)$ followed by an underscore in the software, to avoid conflicts with user-provided names. These parameters are $g\_,h\_$ and the symbols are $X\_,Y\_$. During the derivations, the software always generates some parameters whose names are $a_i(i=0,1,2,\ldots),b_i(i=0,1,2,\ldots),a_{ij}(i=0,1,2,\ldots,j=0,1,2,\ldots),g_i(i=0,1,2,\ldots),N,d_g,d_{a_i}(i=0,1,2,\ldots)$. When using GiNaCDE, we should always choose parameter names that do not conflict with these parameters generated by the software.
\section{Some useful commands in library}\label{sec:command}
In the following we have dealt with some useful commands of GiNaCDE library required to write C++ code. In particular, for more details about the commands {\em ex, lst}, we refer to the tutorial of GiNaC library \cite{ginac}.
\begin{itemize}
\item[] {\em \textbf{ex:}} Any algebraic expressions or numbers are handled by the GiNaC class {\em ex} \cite{ginac}.
\item[] {\em \textbf{lst:}} The GiNaC class {\em lst} is a container which can stores list of arbitrary expressions \cite{ginac}.
\item[] {\em \textbf{NValue:}} {\em NValue} is an {\em extern ex} variable. The value of $N$ is assigned by the variable {\em NValue}. In the case of F-expansion and mF-expansion methods, if {\em NValue} is not assigned by the user, it is auto-evaluated following the algorithm in step F2. In the case of FIM, the default value of {\em NValue} is 1, i.e., if {\em NValue} is not assigned by the user, its value is 1.
\item[] {\em \textbf{positivePart, negativePart:}} Due to the presence of a huge number of parameters in the problem, sometimes it is tough to get solutions. To overcome such a problem as far as possible, we have introduced these {\em boolean} variables. In F-expansion and mF-expansion methods, the power series \eqref{soluseries}
%\begin{equation}
%U = \sum\limits_{i = 0}^N {a_i{F^i}(\xi )}+\sum\limits_{i = 1}^N {\frac{{{b_i}}}{{F^i}(\xi )}},
%\end{equation}
has two parts, first term is positive part and second term is negative part. We can take any one or both parts in the solutions with the help of these {\em boolean} variables. The definitions {\em positivePart = true; negativePart = false;} will take the positive part. The definitions {\em positivePart = false; negativePart = true;} will take negative part, and the both parts are taken with the definitions {\em positivePart = true; negativePart = true;}. By default the last definition is active. These {\em boolean} variables are only available in F-expansion and mF-expansion methods.
\item[] {\em \textbf{ASolve:}} Sometimes, due to the presence of a significant number of parameters in the nonlinear algebraic system, the solver may fail to solve the system. To remove such a problem as far as possible, we have also introduced another boolean variable {\em ASolve}. It confirms whether the nonlinear algebraic system will be solved for the parameters contained in A.E. \eqref{aeF-exp} or \eqref{1stnlode2} ( i.e., the parameters $A_i$ where $i=0,1,\ldots \delta$) along with other parameters. When the definition {\em ASolve = false} is taken, the nonlinear algebraic system will not be solved for $A_i$, and $A_i$ becomes arbitrary. With the definition {\em ASolve = true} (by default), the algebraic system will be solved for $A_i$ along with other parameters and also determine the conditions among $A_i$ to get exact solutions. It takes more solving time than before. This {\em boolean} variable is only available in F-expansion and mF-expansion methods.
\item[] {\em \textbf{paraInDiffSolve:}} {\em paraInDiffSolve} is an {\em extern lst} variable. The parameters appearing in the input-NLPDE (these parameters belong to the external parameters) are supplied in the {\em lst} variable {\em paraInDiffSolve} so that the nonlinear algebraic system is also solved for those parameters. This determines the conditions on those external parameters under which exact solutions are obtained.
\item[] {\em \textbf{twcPhase:}} {\em twcPhase} is an {\em extern lst} variable. The parameters $\boldsymbol{K}, \boldsymbol{P}$ are provided through this variable in the following manner
\begin{equation}
twcPhase = \{lst\{k_0,k_1,\ldots,k_m\},lst\{p_0,p_1,\ldots,p_m\}\};
\end{equation}
\item[] {\em \textbf{degAcoeff:}} {\em degAcoeff} is an {\em extern lst} variable. The positive integer ($\delta$) in the A.E. and the parameters contained in the A.E. ($A_i,\;i=0,1,\ldots,\delta$) are provided through this variable as follows
\begin{equation}
degAcoeff=\{\delta,A_0,A_1,\ldots,A_\delta\};
\end{equation}
This variable is only available in F-expansion and mF-expansion methods.
\item[] {\em \textbf{ex reader(string ``algbExpr''):}} We can create algebraic expressions from the {\em string} {\em ``algbExpr''} using the command {\em reader}. The generated expressions are {\em ex } objects.
\item[] {\em \textbf{ex simplify(string ``algbExpr''):}} The {\em simplify} command applies simplification rules (such as the power rule ($(x^m)^n\rightarrow x^{mn}$), the combine rule ($x^mx^n\rightarrow x^{m+n}$), factoring, normalization, etc.) to an algebraic expression. Generally, when an algebraic symbol (say $a$) is real, the algebraic rule $\sqrt{a^2}\rightarrow |a|$ can be applied. However, in our case, to derive the solutions of NLPDEs we assume that all the constant parameters ($g_i,a_{ij},\boldsymbol{K},\boldsymbol{P}$ etc.) are strictly real and positive, and rules like $\sqrt{a^2}\rightarrow a,\sqrt{-a^2}\rightarrow Ia, \sqrt{-a}\rightarrow I\sqrt{a}$ are applied successively. In the {Maple} CAS, the {\em simplify} routine with the {\em assume=positive} or {\em symbolic} option applies similar rules. One drawback of such rules is that solutions which exist for negative values of the parameters may be missed.
\item[] {\em \textbf{depend(ex f1, lst f2):}} This sets up a dependency of a variable in the first argument {\em f1} on the list of algebraic variables {\em f2}. The dependency of {\em f1} on a particular variable can be removed by the command {\em depend.clear(f1, variable)} and the dependencies on all variables are removed by the command {\em depend.clear(f1)}.
One should maintain the same order of independent variables and the corresponding coefficients when they are placed in list {\em f2} and in the programming variables {\em twcPhase} respectively.
\item[] {\em \textbf{ex pdiff(ex expression, ex variable, ex order):}} The {\em pdiff} command computes the partial derivative of {\em expression} with respect to {\em variable}; {\em order} is the number of times the derivative is taken.
\item[] {\em \textbf{ex Diff(ex expression, ex variable, ex order):}} The {\em Diff} command returns the partial derivative in unevaluated form.
\item[] {\em \textbf{ex integrate(ex expression, ex variable):}} The {\em integrate} command computes the integral of {\em expression} with respect to {\em variable}.
\item[] {\em \textbf{ex Integrate(ex expression, ex variable, ex order):}} The {\em Integrate} command returns the integral in unevaluated form.
\item[] {\em \textbf{ex evaluate(ex expression):}} The evaluation of {\em Diff} and {\em Integrate} expressions is carried out by the {\em evaluate} function.
\item[] {\em \textbf{exsetlst solve(lst equations, lst variables):}} The list of equations in the 1st argument is solved for the list of variables in the 2nd argument. The equations must be polynomials in the variables for which they are solved. It returns the solutions in an {\em exsetlst} container; {\em exsetlst} is a {\em typedef} of ${ std::set< lst,ex\_is\_less>}$. (A short sketch combining several of these commands is given at the end of this list.)
\item[] {\em \textbf{desolve( ex differential\_equation, lst dependent\_variable, int method):}} The {\em differential\_equation} is solved for the {\em dependent\_variable}. To choose the solution method, one has to assign a {\em macro} to the 3rd argument. There are three {\em macros} -- {\em F\_expansion, mF\_expansion, FIM} -- and the corresponding solution methods are respectively the F-expansion, modified F-expansion and first integral methods.
\item[] {\em \textbf{output:}} Output results can be saved in the Maple or Mathematica programming language by assigning the {\em Maple} or {\em Mathematica} {\em macro} to this variable. This option gives us the additional advantage of being able to carry out further calculations on the output results in commercially available software such as {Maple} or {Mathematica}.
\item[] {\em \textbf{filename:}} The file name in a {\em string} is assigned to this variable. Output results are saved in this file.
\end{itemize}
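To give a feel for how several of these commands fit together, the following minimal sketch shows them in a small stand-alone program. It is not one of the shipped GiNaCDE examples; the file name, symbols, expression and equations are purely illustrative, and only the command signatures documented above are used.
\begin{verbatim}
// commandDemo.cpp -- illustrative sketch only
#include <GiNaCDE/GiNaCDE.h>
#include <iostream>
using namespace std;
int main()
{
    // symbols and an expression handled by the GiNaC class ex
    const ex x=reader("x"), a=reader("a"), b=reader("b");
    const ex f = a*pow(x,2) + b*x;

    ex df   = pdiff(f, x, 1);   // partial derivative of f with respect to x
    ex intf = integrate(f, x);  // integral of f with respect to x

    // solve a small polynomial system for a and b
    exsetlst sols = solve(lst{a+b==3, a-b==1}, lst{a, b});

    cout << "df = " << df << endl;
    cout << "intf = " << intf << endl;
    for (const auto &s : sols) cout << "solution: " << s << endl;
    return 0;
}
\end{verbatim}
Such a file can be compiled against the library in the same way as the examples of Sec. \ref{sec:examples}, e.g. {\em g++ commandDemo.cpp -o commandDemo -lcln -lginac -lGiNaCDE}.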
\section{Input and Output of the library}
In order to solve NLPDEs with the help of GiNaCDE, three tools are available: the GiNaCDE library (programming interface), the GiNaCDE GUI (graphical user interface) and gtools (console interface). In this tutorial we describe the GiNaCDE library. Short tutorials on the GiNaCDE GUI and gtools are given in separate manuals. To solve an NLPDE, we have to input it in the proper syntax. For example, we consider the following NLPDEs
\begin{align}
&{u_{xt}} - 4{u_x}{u_{xy}} - 2{u_y}{u_{xx}} + {u_{xxxy}} = 0;\\
&I{u_t} + {u_{xx}} + 2{\left( {{{\left| u \right|}^2}} \right)_x}u + {\left| u \right|^4}u = 0;
\end{align}
and the input syntax for the above NLPDEs is
\begin{verbatim}
Diff(Diff(u,x,1),t,1) - 4*Diff(u,x,1)*Diff(Diff(u,x,1),y,1) - 2*Diff(u,y,1)*Diff(u,x,2)
+ Diff(Diff(u,x,3),y,1)==0;
I*Diff(u,t,1) + Diff(u,x,2) + 2*u*Diff(u*conjugate(u),x,1) +
u*u*conjugate(u)*conjugate(u)*u==0;
\end{verbatim}
\par The software determines exact solutions along with the calculating steps, and the results are saved in an output file.
Besides this, the solutions of the NLPDE are collected in a variable {\em solutionClt}. The variable {\em solutionClt} is a container of type {\em vector<lst>}.
{\em solutionClt} holds the lists of solutions. Invariably, the first element in each list is the set of solutions of the nonlinear algebraic system, and the remaining elements are the exact solutions of the given NLPDE.
Now, we exemplify the usage of the variable {\em solutionClt}. Let us consider that {\em solutionClt} has collected four sets of solutions, which can be obtained by calling {\em solutionClt[0]}, {\em solutionClt[1]}, {\em solutionClt[2]} and {\em solutionClt[3]}. Let us assume {\em solutionClt[0]} has stored the following solutions
\begin{verbatim}
{{a_0==f_1(k_0,k_1),a_1==f_2(k_0,k_1)},u==g_1(k_0,k_1),{u==g_3(k_0,k_1),h(k_0,k_1)==0}}
\end{verbatim}
In the above example, the first element {\em solutionClt[0][0]} is a GiNaC {\em lst} container which is the list of solutions of the nonlinear algebraic system. The remaining elements {\em solutionClt[0][1]} and {\em solutionClt[0][2]} represent the exact solutions of the NLPDE with dependent variable $u$. Here, it should be noted that the solution stored in {\em solutionClt[0][2]} is a GiNaC {\em lst} container. In this container, the first element {\em solutionClt[0][2][0]} is the exact solution for $u$ and the second element {\em solutionClt[0][2][1]} is the condition under which this solution exists.
In a similar way, the other sets of solutions are collected by {\em solutionClt[1]}, {\em solutionClt[2]} and so on.
There is also another variable named {\em constraints}, which is a GiNaC {\em lst} container. {\em constraints} contains the conditions on the parameters under which all the exact solutions for $u$ are determined and stored in the variable {\em solutionClt}.
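To illustrate how these objects can be used inside a program, the following fragment (a sketch only, assuming, as described above, that {\em solutionClt} and {\em constraints} are accessible in the program after {\em desolve} has returned) prints every collected solution:
\begin{verbatim}
// after a call such as desolve(pde, {u}, F_expansion);
for (size_t i = 0; i < solutionClt.size(); ++i)
{
    const lst &sol = solutionClt[i];   // one set of solutions
    // sol.op(0) corresponds to solutionClt[i][0] above:
    // the solutions of the nonlinear algebraic system
    cout << "algebraic-system solutions: " << sol.op(0) << endl;
    // the remaining entries are the exact solutions (with conditions)
    for (size_t j = 1; j < sol.nops(); ++j)
        cout << "exact solution: " << sol.op(j) << endl;
}
cout << "constraints: " << constraints << endl;
\end{verbatim}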
\subsection{Additional Note}
Here one should note an important property of GiNaC that differentiates it from other computer algebra programs one may have used: GiNaC assigns a unique (hidden) serial number to each newly created symbol object, and GiNaC uses this unique serial number instead of the symbol's name for algebraic manipulations. The serial number for the same symbol name may change in each running session of a GiNaC program. As a result, the symbols within the same algebraic expressions in the output files may be ordered differently in each running session of the GiNaCDE program. This happens because, to order the symbols of an algebraic expression, GiNaC internally uses a comparison predicate called {\em ex\_is\_less}, which uses an internal symbol id counter.
\section{Writing programs using the library}\label{sec:examples}
In this section, we explain how to write C++ programs using the GiNaCDE library for solving NLPDEs. We use two examples (NLS and KdVB equations) to illustrate each line of the C++ program.
\subsection{NLS Equation}
%{\em \textbf{Example 1:}}
For solving one dimensional cubic nonlinear Schrödinger (NLS) equation %\cite{nlse}
\begin{equation}\label{nls}
Iu_t-pu_{xx}+q{|u|}^2u=0,
\end{equation}
by the {\em F\_expansion} and {\em FIM} methods using the GiNaCDE library, the C++ code is given below (here $p,q$ are non-zero real constants and $u(x,t)$ is a complex-valued function depending on the variables $t,x$).\\
\begin{verbatim}
// NLS.cpp
#include <GiNaCDE/GiNaCDE.h>
int main()
{
1. const ex u=reader("u"), t=reader("t"), x=reader("x"), p=reader("p"), q=reader("q"),
k_0=reader("k_0"),k_1=reader("k_1"),p_0=reader("p_0"),p_1=reader("p_1"),
A_0=reader("A_0"),A_2=reader("A_2");
2. depend(u, {t, x});
3. ex pde = I*Diff(u,t,1)-p*Diff(u,x,2)+q*pow(u,2)*conjugate(u);
4. output = maple;
5. twcPhase={lst{k_0,k_1},lst{p_0,p_1}};
6. degAcoeff = {2,A_0,0,A_2};
7. ASolve=false;
8. positivePart = true;
9. negativePart = true;
10. paraInDiffSolve={};
11. filename = "NLS_Fexp.txt";
12. desolve(pde,{u},F_expansion);
13. output = mathematica;
14. filename = "NLS_FIM.txt";
15. desolve(pde, {u}, FIM);
16. return 0;
}
\end{verbatim}
To compile the above code into an executable, we run the following command from a terminal:
\begin{verbatim}
g++ -Wall -g NLS.cpp -o NLS -lcln -lginac -lGiNaCDE
\end{verbatim}
Next we explain the commands in each line of the above program.
In line 1 we define some mathematical objects that are handled by the GiNaC class {\em ex} \cite{ginac}. In line 2 we declare the dependency of {\em u} on the independent variables {\em t, x}. In line 3 we store Eq. \eqref{nls} in a variable {\em pde}, which is a GiNaC {\em ex} object. We can save all the output results in {Maple} (line 4) or {Mathematica} (line 13) format. We now transform the NLPDE \eqref{nls} into an NLODE using the relation
\begin{equation}\label{nls_twtrans}
u(t,x)=U(\xi)e^{Ip_0t+Ip_1x},\;\;\;\text{where}\;\;\xi=k_0t+k_1x.
\end{equation}
The traveling-wave coordinate $(\xi)$ is related to the independent variables by the relation \eqref{nls_twtrans}, and we set the constant coefficients of the independent variables in the 1st list of the {\em lst} variable {\em twcPhase} in line 5. Since Eq. \eqref{nls} is a complex NLPDE, we define the constant coefficients of the independent variables in the phase part in the 2nd list of the {\em lst} variable {\em twcPhase} in line 5. It is worth noting that the order of the constant coefficients in line 5 must match the corresponding order of the independent variables, which is here $\{t,x\}$ in line 2.
We have taken the A.E. in the form
\begin{equation}\label{nlsAE}
F^{\prime}=\sqrt{A_0+A_2F^2},
\end{equation}
which is set by the {\em lst} variable {\em degAcoeff} in line 6. The first element of the variable {\em degAcoeff} is the positive integer $\delta=2$ of the A.E., and the rest of the elements are the constant coefficients $A_0,0,A_2$ of $F^i\;(i=0,\ldots,2)$, respectively. In line 7, {\em ASolve} is assigned {\em false}, and so the nonlinear algebraic system is not solved for the parameters $A_0,A_2$ here. The definitions used in lines 8 and 9 ask the library to retain both the positive and the negative parts in the solutions. In line 10, we have not supplied the parameters $p,q$ appearing in the input-NLPDE \eqref{nls}.
In line 11, we give a name of a file in which all the output results with calculating steps are saved.
Finally, the equation is solved using the F-expansion method by the command {\em desolve} in line 12, and all the output results with calculating steps are saved in the file \emph{NLS{\_}Fexp.txt}.
\par The command in line 15 solves Eq. \eqref{nls} by the {\em FIM} method.
Output results with calculating steps are saved in the file \emph{NLS\_FIM.txt}.
We have supplied the output files \emph{NLS{\_}Fexp.txt} and \emph{NLS\_FIM.txt} separately in the examples/ folder of the GiNaCDE \href{https://github.com/mithun218/GiNaCDE}{repository}.
\subsection{KdVB Equation}
%{\em \textbf{Example 2:}}
For solving the well-known KdVB equation %\cite{kdvb}
\begin{equation}\label{KdVB}
{u_t} + u{u_x} - p{u_{xx}} + q{u_{xxx}} = 0,
\end{equation}
by the modified F-expansion method, the C++ code is given below (here $p,q$ are non-zero real constants).\\
\begin{verbatim}
// KdVB.cpp
#include <GiNaCDE/GiNaCDE.h>
int main()
{
1. const ex u=reader("u"), t=reader("t"), x=reader("x"),p=reader("p"), q=reader("q"),
k_0=reader("k_0"), k_1=reader("k_1"), A_0=reader("A_0"), A_1=reader("A_1"),
A_2=reader("A_2");
2. depend(u, {t,x});
3. ex pde =Diff(u,t,1)+Diff(u,x,1)*u-p*Diff(u,x,2)+q*Diff(u,x,3);
4. output = maple;
5. twcPhase = {lst{k_0,k_1},lst{}};
6. degAcoeff = {2,1,1,A_2};
7. ASolve=true;
8. positivePart = true;
9. negativePart = true;
10. paraInDiffSolve={};
11. filename = "KdVB_mF.txt";
12. desolve(pde, {u}, mF_expansion);
13. return 0;
}
\end{verbatim}
The input-NLPDE \eqref{KdVB} is assigned to the variable {\em pde} in line 3, and it is transformed into an NLODE with the traveling-wave coordinate ($\xi$) using the transformation
\begin{equation}\label{KdVB_twtrans}
u(t,x)=U(\xi)\;\;\text{where}\;\;\xi=k_0t+k_1x,
\end{equation}
which is provided in line 5.
Our library then checks the integrability of the transformed NLODE. The transformed NLODE is integrated once, and the generated integrating constants are assigned values of our choice after running the program. After compiling and running the program, the generated output screen with our supplied value for the integrating constant is
\begin{verbatim}
The Diff. Equ. is integrable;
Do you assign a value to integrating constant (ic_1)? y
ic_1: 0
\end{verbatim}
In the above output screen, the integrating constant ($ic\_1$) has been assigned the value $0$.
Next, the A.E. in the form
\begin{equation}\label{KdVBAE}
F^{\prime}=1+F+A_2F^2
\end{equation}
has been provided by the command in line 6. The command in line 7 instructs the library to also solve the nonlinear algebraic system for the variable $A_2$.
Here, both the positive and negative parts are retained in the solutions using the definitions in lines 8 and 9. In line 10, we have not supplied the parameters $p,q$ appearing in the input-NLPDE \eqref{KdVB}.
The command in line 12 solves {\em pde} by the modified F-expansion method.
The solutions with the calculating steps are saved in {\em KdVB\_mF.txt} file.
We have supplied the output file {\em KdVB\_mF.txt} in the examples/ folder of the GiNaCDE \href{https://github.com/mithun218/GiNaCDE}{repository}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Complex and Integrable NLPDE case}
In the example below, we demonstrate how GiNaCDE handles a complex and integrable NLPDE.
We consider the perturbed NLS equation with Kerr law nonlinearity %\cite{nlsewtkerr,nlsewtkerr1,nlsewtkerr2,fim4,complexTwt1}
\begin{equation}\label{nlseWtKerr}
I{u_t} + {u_{2x}} + A|u{|^2}u + I\left( {{G_1}{u_{3x}} + {G_2}|u{|^2}{u_x} + {G_3}{{\left( {|u{|^2}} \right)}_x}u} \right) = 0,
\end{equation}
where $u(t,x)$ represents the complex function, the parameters $G_1,G_2$ and $G_3$ are respectively the higher-order dispersion coefficient, the coefficient of Raman scattering, and the coefficient of the nonlinear dispersion term, while $A$ represents fiber loss.
The model equation \eqref{nlseWtKerr} has important applications in various fields, such as semiconductor materials, optical fiber communications, plasma physics, and fluid and solid mechanics.
We run the GiNaCDE software, applying all three available methods to Eq. \eqref{nlseWtKerr}. Here we have used the following initializations:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{F-expansion:}& \textbf{mF-expansion:}& \textbf{FIM:} \\
\hline
\begin{minipage}{2.3in}
\small
\begin{verbatim}
twcPhase=
lst{lst{k_0,k_1},lst{p_0,p_1}};
degAcoeff=
lst{4,0,0,A_2,A_3,A_4};
ASolve=true;
positivePart=true;
negativePart=true;
paraInDiffSolve=lst{};
filename="kerrNLS_Fexp.txt";
\end{verbatim}
\end{minipage}&
\begin{minipage}{2.3in}
\small
\begin{verbatim}
twcPhase=
lst{lst{k_0,k_1},lst{p_0,p_1}};
degAcoeff=lst{2,A_0,A_1,A_2};
ASolve=false;
positivePart=true;
negativePart=true;
paraInDiffSolve=lst{};
filename="kerrNLS_mF.txt";
\end{verbatim}
\end{minipage}&
\begin{minipage}{2.3in}
\small
\begin{verbatim}
twcPhase=
lst{lst{k_0,k_1},lst{p_0,p_1}};
paraInDiffSolve=lst{};
filename="kerrNLS_FIM.txt";
\end{verbatim}
\end{minipage}\\
\hline
\end{tabular}
\end{center}
The complete C++ code is given in the examples/ folder of the GiNaCDE \href{https://github.com/mithun218/GiNaCDE}{repository} with the file name \emph{kerrNLS.cpp}.
In all the methods, the software substitutes the traveling-wave solution \eqref{nls_twtrans} into Eq. \eqref{nlseWtKerr} and separates the real part and the imaginary part following step F1. The imaginary part is integrated once, and the software assigns the constant of integration to zero. Then the algebraic expressions of the real part and the imaginary part are compared, and the software detects that they are the same equation subject to the following conditions
\begin{equation}\label{nlseWtKerrCond}
\frac{{{k_0} + 2{p_1}{k_1} - 3{G_1}{k_1}p_1^2}}{{ - p_1^2 - {p_0} + {G_1}p_1^3}} = \frac{{{G_1}k_1^3}}{{ - 3{G_1}{p_1}k_1^2 + k_1^2}} = \frac{{2{k_1}{G_3} + {k_1}{G_2}}}{{3A - 3{p_1}{G_2}}}.
\end{equation}
The same conditions were obtained in \cite{complexTwt1}. Therefore GiNaCDE evaluates the exact analytical solutions of the imaginary part only.
We obtain exact solutions of the NLPDE \eqref{nlseWtKerr} with the F-expansion and mF-expansion methods. FIM is unable to solve the NLPDE \eqref{nlseWtKerr}, because the NLPDE \eqref{nlseWtKerr} in terms of the traveling-wave coordinate $\xi$ is not a second-order NLODE. The output results obtained from running the above program are provided under the examples/ folder of the GiNaCDE \href{https://github.com/mithun218/GiNaCDE}{repository} with the file names \textit{kerrNLS\_Fexp.txt}, \textit{kerrNLS\_mF.txt} and \textit{kerrNLS\_FIM.txt}.
\section{Non-Polynomial case in FIM}
In the following example, we show how GiNaCDE handles the non-polynomial case in FIM. We consider the following well-known Kudryashov–Sinelshchikov equation: %proposed in \cite{ks,ks1}:
\begin{equation}\label{Ks}
{u_{3x}} + gu{u_x} - n{u_{2x}} - \left( {u{u_{2x}} + u_x^2} \right)d - k{u_x}{u_{2x}} - e\left( {u{u_{3x}} + {u_x}{u_{2x}}} \right) + {u_t}=0,
\end{equation}
where $g,n,k,d$ and $e$ are real parameters. Equation \eqref{Ks} models pressure waves in a mixture of liquid and gas bubbles when the viscosity of the liquid and the heat transfer are both taken into account. We have employed the first integral method (FIM) to find exact traveling-wave solutions of Eq. \eqref{Ks} with the help of GiNaCDE using the following initializations:
\begin{verbatim}
//KS.cpp
#include <GiNaCDE/GiNaCDE.h>
int main()
{
const ex u=reader("u"), t=reader("t"), x=reader("x"), k_0=reader("k_0"), k_1=reader("k_1"),
A_0=reader("A_0"), A_1=reader("A_1"), A_2=reader("A_2"), A_3=reader("A_3"), g=reader("g"),
n=reader("n"), d=reader("d"), e=reader("e"), k=reader("k");
const ex pde = Diff(u,t,1)+g*u*Diff(u,x,1)+Diff(u,x,3)-e*Diff(u*Diff(u,x,2),x,1)
-k*Diff(u,x,1)*Diff(u,x,2)-n*Diff(u,x,2)-d*Diff(u*Diff(u,x,1),x,1);
depend(u, {t, x});
twcPhase=lst{lst{k_0,k_1},lst{0,0}};
paraInDiffSolve=lst{};
filename="KS_FIM.txt";
desolve(pde, {u}, FIM);
return 0;
}
\end{verbatim}
GiNaCDE applies the traveling-wave transformation \eqref{KdVB_twtrans}
to Eq. \eqref{Ks} and then integrates the transformed NLODE once. We assign the integrating constant $ic_1$ the value $0$. The software automatically detects that $h(X)$ is not a polynomial in $X$. To avoid such a non-polynomial form of $h(X)$, we have implemented the procedure explained in \cite{mirza} in step FIM4 of our algorithm. Following step FIM4, GiNaCDE performs a transformation to avoid the singularity temporarily, and the corresponding part of the output where GiNaCDE makes the transformation is:
\begin{verbatim}
We make the transformation, d xi = (-1+e*X_)*d eta to avoid singularity -1+e*X_ = 0 temporarily.
Let U = X_, Diff(U,eta, 1) = Y_*(-1+e*X_), then we get
Diff(X_,eta, 1) = Y_*(-1+e*X_),
Diff(Y_,eta, 1) = -1/2*k*Y_^2-(d*k_1^(-1)*X_+n*k_1^(-1))*Y_+1/2*g*k_1^(-2)*X_^2
+k_0*k_1^(-3)*X_,
\end{verbatim}
After that transformation, assuming $a_1=1$, the software evaluates $h=-\frac{k}{2}$. Then the degrees of $a_0,g$ are auto-evaluated following the strategy in step FIM5, and it finds two sets of balanced degrees, which are $deg(a_0,g)=(2,0),(1,1)$. The second set of balanced degrees was obtained in \cite{mirza}.
%For first balanced degrees set $(2,0)$, the software finds the exact solutions of Eq. \eqref{Ks} only for $k_0=0,k_1=0$. Interestingly all our results derived by GiNaCDE using FIM match with \cite{mirza} if we replace $k_0,k_1$ with $-c,1$ respectively.
The output results obtained from running the above program are provided under the examples/ folder of the GiNaCDE \href{https://github.com/mithun218/GiNaCDE}{repository} with the file name \textit{KS\_FIM.txt}.
\section{GUI of the library}\label{gui}
In Sec. \ref{sec:compile} we have mentioned that the source code can be compiled with the option {\em -DGINACDE\_GUI\_BUILD=on} to build the GiNaCDE GUI. The GTK+3 library is required to build the GUI version of the GiNaCDE library. The GiNaCDE GUI can be used interactively without any programming knowledge, and no compilation is needed each time, which saves time. When we solve differential equations using the GiNaCDE GUI, the GUI version guides us through each step to obtain the output results. For more details about the GiNaCDE GUI, we refer to the tutorial \textit{GiNaCDE\_guiTutorial.pdf}, which is available with the source code.
\appendix
\renewcommand{\theequation}{A-\arabic{equation}}
\section{Solutions of Riccati equation}\label{sec:appenA}
In the case of the Riccati equation, Eq. \eqref{1stnlode2} takes the form
\begin{equation}\label{riccati}
F'\left( \xi \right) = {A_0} + {A_1}F + {A_2}{F^2}.
\end{equation}
The solutions of the equation \eqref{riccati} are \cite{yang}
\begin{subequations}\label{riccati_solu}
\begin{align}
\label{riccati_solu1}F\left( \xi \right) &= - \frac{{{A_1}}}{{2{A_2}}} - \frac{S}{{2{A_2}}}\tanh \left( {\frac{S}{2}\xi + C} \right) \;\;(\text{If } A_2 \neq 0 \text{ and } A_1 \text{ or } A_0 \neq 0),\\
F\left( \xi \right) &= - \frac{{{A_1}}}{{2{A_2}}} - \frac{S}{{2{A_2}}}\coth \left( {\frac{S}{2}\xi + C} \right) \;\;(\text{If } A_2 \neq 0 \text{ and } A_1 \text{ or } A_0 \neq 0),\\
F\left( \xi \right) &= {\left( { - \frac{{{A_1}}}{{2{A_0}}} + \frac{S}{{2{A_0}}}\tanh \left( {\frac{S}{2}\xi + C} \right)} \right)^{ - 1}} \;\;(\text{If }A_0 \neq 0 \text{ and } A_1 \text{ or } A_2 \neq 0),\\
F\left( \xi \right) &= {\left( { - \frac{{{A_1}}}{{2{A_0}}} + \frac{S}{{2{A_0}}}\coth \left( {\frac{S}{2}\xi + C} \right)} \right)^{ - 1}} \;\;(\text{If }A_0 \neq 0 \text{ and } A_1 \text{ or } A_2 \neq 0),\\
\label{riccati_solu2e}F\left( \xi \right) &= - \frac{{{A_1}}}{{2{A_2}}} - \frac{S }{{2{A_2}}}\tanh \left( {\frac{{S \xi }}{2}+C} \right) + \frac{{\operatorname{sech} \left( {\frac{{S \xi }}{2}+C} \right)}}{{{C}\cosh \left( {\frac{{S \xi }}{2}+C} \right) - \frac{{2{A_2}}}{S }\sinh \left( {\frac{{S \xi }}{2}+C} \right)}}\\
& \qquad\qquad\qquad\;(\text{If } A_2 \neq 0 \text{ and } A_1 \text{ or } A_0 \neq 0),\nonumber\\
\label{riccati_soluA2f}F\left( \xi \right) &= {\text{ - }}\frac{{{A_0}}}{{{A_1}}} + C{e^{ {{A_1}\xi }}},\;\; (\text{If } A_2=0 \text{ and } A_1\neq 0),\\
F\left( \xi \right) &= A_0\xi +C\;\;(\text{If } A_2=A_1=0 \text{ and } A_0\neq 0 ),
\end{align}
\end{subequations}
where $S = \sqrt{A_1^2-4A_0A_2}$ and $C$ is an auxiliary constant. In the above solutions, from Eq. \eqref{riccati_solu1} to \eqref{riccati_solu2e}, the condition $A_1^2-4A_0A_2>0$ must be satisfied. In the following solutions, unless mentioned otherwise, $C$ is to be taken as an auxiliary constant.
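As a quick consistency check (this verification is added here and is not part of \cite{yang}), substituting the solution \eqref{riccati_soluA2f} into Eq. \eqref{riccati} with $A_2=0$ gives
\begin{equation*}
F'\left( \xi \right) = A_1 C e^{A_1\xi}
= A_0 + A_1\left( -\frac{A_0}{A_1} + C e^{A_1\xi} \right)
= A_0 + A_1 F\left( \xi \right),
\end{equation*}
so that solution indeed satisfies the equation.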
\renewcommand{\theequation}{B-\arabic{equation}}
\section{Solutions of Bernoulli equation}\label{sec:appenB}
In the case of the Bernoulli equation, Eq. \eqref{1stnlode2} reduces to
\begin{equation}\label{bernouli}
F'\left( \xi \right) = {A_1}F + {A_\delta}F^{\delta}.
\end{equation}
The solutions of the equation \eqref{bernouli} are \cite{yang}
\begin{subequations}\label{bernouli_solu}
\begin{align}
F\left( \xi \right) &= {\left( {\frac{{{A_1} \left( {\cosh \left( {{A_1}\left( {\delta - 1} \right) \xi + {C} {A_1}} \right) + \sinh \left( {{A_1} \left( {\delta - 1} \right)\xi + {C} {A_1}} \right)} \right)}}{{1 - {A_\delta} \cosh \left( {{A_1}\left( {\delta - 1} \right) \xi + {C} {A_1}} \right) - {A_\delta} \sinh \left( {{A_1}\left( {\delta - 1} \right) \xi + {C} {A_1}} \right)}}} \right)^{{{\left( {\delta - 1} \right)}^{ - 1}}}} \;\;(\text{If } A_1\neq 0\text{ and }\delta \neq 1),\\
F\left( \xi \right) &= {\left( { - \frac{{{A_\delta }}}{{{A_1}}} + C{e^{{A_1}(1 - \delta )\xi }}} \right)^{ \frac{1}{{1 - \delta }}}}\;\;(\text{If } A_1\neq 0\text{ and }\delta \neq 1), \\
F\left( \xi \right) &= {\left( { - \frac{{{A_1}}}{{2{A_\delta }}} - \frac{{{A_1}}}{{2{A_\delta }}}\tanh \left( {\frac{{(\delta - 1){A_1}}}{2}\xi + C} \right)} \right)^{\frac{1}{{\delta - 1}}}}\;\;(\text{If } A_1\neq 0\text{ and }\delta \neq 1),\\
F\left( \xi \right) &= {\left( { - \frac{{{A_1}}}{{2{A_\delta }}} - \frac{{{A_1}}}{{2{A_\delta }}}\coth \left( {\frac{{(\delta - 1){A_1}}}{2}\xi + C} \right)} \right)^{\frac{1}{{\delta - 1}}}}\;\;(\text{If } A_1\neq 0\text{ and }\delta \neq 1), \\
F\left( \xi \right) &= {\left( {{{\left( {{A_\delta} \xi(1 - \delta) + {C}} \right)}^{{{\left( {\delta - 1} \right)}^{ - 1}}}}} \right)^{ - 1}} \;\;(\text{If } A_1=0 \text{ and } \delta \neq 1),\\
F\left( \xi \right) &= C e^{({A_1} + {A_\delta })\xi }\;\; (\text{If }\delta = 1).
\end{align}
\end{subequations}
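As a similar quick check (added here), in the last case $\delta = 1$ Eq. \eqref{bernouli} reads $F' = (A_1+A_\delta)F$, and indeed
\begin{equation*}
F\left( \xi \right) = C e^{(A_1+A_\delta)\xi}
\quad\Rightarrow\quad
F'\left( \xi \right) = (A_1+A_\delta)\, C e^{(A_1+A_\delta)\xi} = A_1 F + A_\delta F.
\end{equation*}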
\renewcommand{\theequation}{C-\arabic{equation}}
\section{First-order NLODEs related to Jacobi Elliptic Functions}\label{sec:appenC}
The incomplete elliptic integral of the first kind is defined by
\begin{equation}\label{ellipInt}
u(\phi,m) = \int\limits_0^\phi {\frac{{d\theta }}{{\sqrt {1 - {m^2}{{\sin }^2}\theta } }}},
\end{equation}
where $m$ is the elliptic modulus, and $\phi=\text{JacobiAM}(u,m)$ is the Jacobi amplitude, which is the inverse of the elliptic integral \eqref{ellipInt}. The three principal elliptic functions are denoted $\text{JacobiSN}(u,m),\text{JacobiCN}(u,m),\text{JacobiDN}(u,m)$, which are in turn defined in terms of the amplitude function JacobiAM and satisfy
\begin{align}
&\text{JacobiSN}(u,m)=\sin\left(\text{JacobiAM}(u,m)\right),\\
&\text{JacobiCN}(u,m)=\cos\left(\text{JacobiAM}(u,m)\right),\\
&\text{JacobiDN}(u,m)=\frac{\partial}{\partial u}\text{JacobiAM}(u,m)=\sqrt{1-m^2\text{JacobiSN}(u,m)^2}.
\end{align}
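For reference, we recall the standard degenerate limits of these functions at $m=0$ and $m=1$ (these limiting values are added here for orientation; they explain why Jacobi-elliptic solutions reduce to trigonometric or hyperbolic solutions in special cases):
\begin{align*}
&\text{JacobiSN}(u,0)=\sin u, &&\text{JacobiCN}(u,0)=\cos u, &&\text{JacobiDN}(u,0)=1,\\
&\text{JacobiSN}(u,1)=\tanh u, &&\text{JacobiCN}(u,1)=\operatorname{sech} u, &&\text{JacobiDN}(u,1)=\operatorname{sech} u.
\end{align*}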
There are in total twelve Jacobi functions, which can be expressed in general by the name $\text{JacobiXY}$ and obey the identities $\text{JacobiXY}=\frac{1}{\text{JacobiYX}}=\frac{\text{JacobiPR}}{\text{JacobiQR}}$. Here $X,Y,R$ are any three of $S,C,N,D$. Following these notational rules, the other nine subsidiary Jacobi elliptic functions can be defined in terms of the three functions JacobiSN, JacobiCN, JacobiDN by the following identities:
\begin{align}
& {\text{JacobiNS}}(u,m) = {({\text{JacobiSN}}(u,m))^{ - 1}},{\text{JacobiND}}(u,m) = {({\text{JacobiDN}}(u,m))^{ - 1}},{\text{JacobiSC}}(u,m) = {({\text{JacobiCS}}(u,m))^{ - 1}}, \hfill\nonumber \\
& {\text{JacobiSD}}(u,m) = {({\text{JacobiDS}}(u,m))^{ - 1}},{\text{JacobiDC}}(u,m) = {({\text{JacobiCD}}(u,m))^{ - 1}}, {\text{JacobiNC}}(u,m) = {({\text{JacobiCN}}(u,m))^{ - 1}},\nonumber\\
&{\text{JacobiCS}}(u,m) = \frac{{{\text{JacobiCN}}(u,m)}}{{{\text{JacobiSN}}(u,m)}},{\text{JacobiDS}}(u,m) = \frac{{{\text{JacobiDN}}(u,m)}}{{{\text{JacobiSN}}(u,m)}},{\text{JacobiCD}}(u,m) = \frac{{{\text{JacobiCN}}(u,m)}}{{{\text{JacobiDN}}(u,m)}}.
\end{align}
Jacobi elliptic functions can also be defined as solutions to the differential equation \cite{fexpn024,fexpn024_1,Schwalm}
\begin{equation}\label{024}
F'(\xi)=\sqrt{A_0+A_2F^2+A_4F^4}.
\end{equation}
Solutions of Eq. \eqref{024} in terms of Jacobi elliptic functions are listed below (with the condition that all the algebraic expressions under square roots must be greater than $0$):
\begin{subequations}\label{024_solu}
\begin{align}
&F(\xi)= \frac{{\sqrt {2{A_2} {A_4} \left( { - {A_2}^2 + \sqrt {{A_2}^2S} } \right)} }}{{2{A_2} {A_4}}}{\text{JacobiSN}}\left( {\frac{{\sqrt {2{A_2} \left( { - {A_2}^2 - \sqrt {{A_2}^2S} } \right)} \xi }}{{2{A_2}}},\frac{{\sqrt {2{A_0} {A_4} \left( { - 2 {A_0} {A_4} + {A_2}^2 - \sqrt {{A_2}^2S} } \right)} }}{{2{A_0} {A_4}}}} \right), \hfill \\
&F(\xi)= \frac{{\sqrt {2{A_2} {A_4} \left( { - {A_2}^2 + \sqrt {{A_2}^2S} } \right)} }}{{2{A_2} {A_4}}}{\text{JacobiCN}}\left( {\frac{{\sqrt { - {A_2} \sqrt {{A_2}^2S} } \xi }}{{{A_2}}},\frac{{\sqrt { - 2S\left( {\sqrt {{A_2}^2S} - S} \right)} }}{{2S}}} \right), \hfill \\
&F(\xi)= \frac{{\sqrt {2{A_2} {A_4} \left( { - {A_2}^2 - \sqrt {{A_2}^2S} } \right)} }}{{2{A_2} {A_4}}}{\text{JacobiDN}}\left( { \frac{{\sqrt {2{A_2} \left( {{A_2}^2 + \sqrt {{A_2}^2S} } \right)} \xi }}{{2{A_2}}}, \frac{{\sqrt {2{A_0} {A_4} \left( {\sqrt {{A_2}^2S} - S} \right)} }}{{2{A_0} {A_4}}}} \right), \hfill \\
&F(\xi)= \frac{{\sqrt {2{A_4} \left( { - {A_2} + \sqrt S } \right)} }}{{2{A_4}}}{\text{JacobiNS}}\left( {1/2 \sqrt { - 2 {A_2} + 2 \sqrt S } \xi , \frac{{\sqrt {2{A_0} {A_4} \left( {{A_2} \sqrt S - 2 {A_0} {A_4} + {A_2}^2} \right)} }}{{2{A_0} {A_4}}}} \right), \hfill \\
&F(\xi)= \frac{{\sqrt { - 2{A_2} - 2\sqrt S } }}{{2\sqrt {{A_4}} }}{\text{JacobiNC}}\left( {\sqrt { - \sqrt S } \xi ,\frac{{\sqrt { - 2 {A_0} {A_4}} }}{{\sqrt {{A_2} \sqrt S + S} }}} \right),\\
&F(\xi)= \frac{{\sqrt { - 2{A_2} + 2\sqrt S } }}{{2\sqrt {{A_4}} }}{\text{JacobiND}}\left( {\frac{{\sqrt { - 2 {A_0} {A_4}} \xi }}{{\sqrt { - {A_2} + \sqrt S } }},\frac{{\sqrt {2{A_2} \sqrt S - 2S} }}{{2\sqrt {{A_0} {A_4}} }}} \right), \hfill \\
&F(\xi)= - \frac{{\sqrt 2 \sqrt {{A_0}} }}{{\sqrt {{A_2} + \sqrt S } }}{\text{JacobiSC}}\left( {\frac{{\sqrt {{A_2} \sqrt S - 2 {A_0} {A_4} + {A_2}^2} \xi }}{{\sqrt {{A_2} + \sqrt S } }},\frac{{\sqrt {{A_2} \sqrt S + S} }}{{\sqrt {{A_2} \sqrt S - 2 {A_0} {A_4} + {A_2}^2} }}} \right), \hfill \\
&F(\xi)= \frac{{\sqrt 2 \sqrt {{A_0}} \sqrt { - {A_2} \sqrt S + 2 {A_0} {A_4} - {A_2}^2} }}{{\sqrt {{A_2} + \sqrt S } \sqrt { - {A_2} \sqrt S - S} }}{\text{JacobiSD}}\left( {\frac{{\sqrt {{A_2} \sqrt S + S} \xi }}{{\sqrt {{A_2} + \sqrt S } }},\frac{{\sqrt { - {A_2} \sqrt S + 2 {A_0} {A_4} - {A_2}^2} }}{{\sqrt { - {A_2} \sqrt S - S} }}} \right), \hfill \\
&F(\xi)= \frac{{\sqrt 2 \sqrt {{A_0}} }}{{\sqrt {{A_2} + \sqrt S } }}{\text{JacobiCS}}\left( {\frac{{ \sqrt {2{A_0} {A_4}} \xi }}{{\sqrt {{A_2} + \sqrt S } }},\frac{{\sqrt { - {A_2} \sqrt S - S} }}{{\sqrt {2{A_0} {A_4}} }}} \right), \hfill \\
&F(\xi)= \frac{{\sqrt {{A_2} \sqrt S - S} }}{{\sqrt {{A_2} {A_4} - {A_4} \sqrt S } }}{\text{JacobiDS}}\left( {\frac{{\sqrt {{A_2} \sqrt S - S} \xi }}{{\sqrt {{A_2} - \sqrt S } }},\frac{{\sqrt {2{A_0} {A_4}} }}{{\sqrt {{A_2} \sqrt S - S} }}} \right),
\end{align}
\end{subequations}
where $S = \left( { - 4 {A_0} {A_4} + {A_2}^2} \right)$.
The solutions of another first-order NLODE \cite{fexpn123,Schwalm}
\begin{equation}\label{123}
F'(\xi)=\sqrt{A_1F+A_2F^2+A_3F^3},
\end{equation}
can also be expressed in terms of Jacobi elliptic functions. The solutions of Eq. \eqref{123} are (with the condition that all the algebraic expressions under square roots must be greater than $0$)
\begin{subequations}\label{123_solu}
\begin{align}
&F(\xi)= \frac{{ - {A_2} + \sqrt S }}{{2{A_3}}}{\text{JacobiS}}{{\text{N}}^2}\left( { \frac{{\sqrt {2{A_1} {A_3}} \xi }}{{2\sqrt { - {A_2} + \sqrt S } }}, \frac{{\sqrt {{A_2}^2 - {A_2} \sqrt S - 2 {A_1} {A_3}} }}{{\sqrt {2{A_1} {A_3}} }}} \right), \hfill \\
&F(\xi)= - \frac{{2{A_1}}}{{{A_2} - \sqrt S }}{\text{JacobiC}}{{\text{N}}^2}\left( { \frac{{\sqrt[4]{S}\xi }}{2}, \frac{{\sqrt {{A_2} + \sqrt S } }}{{\sqrt 2 \sqrt[4]{S}}}} \right), \hfill \\
&F(\xi)= - \frac{{2{A_1}}}{{{A_2} - \sqrt S }}{\text{JacobiD}}{{\text{N}}^2}\left( {\frac{{\sqrt {2{A_1} {A_3}} \xi }}{{2\sqrt {{A_2} - \sqrt S } }}, \frac{{\sqrt {4 {A_1} {A_3} + \sqrt S {A_2} - {A_2}^2} }}{{\sqrt {2{A_1} {A_3}} }}} \right), \hfill \\
&F(\xi)= - \frac{{2{A_1}}}{{{A_2} + \sqrt S }}{\text{JacobiN}}{{\text{S}}^2}\left( { \frac{{\sqrt 2 }}{4}\sqrt { - {A_2} + \sqrt S } \xi ,\frac{{\sqrt {{A_2} + \sqrt S } }}{{\sqrt {{A_2} - \sqrt S } }}} \right), \hfill \\
&F(\xi)= - \frac{{2{A_1}}}{{{A_2} + \sqrt S }}{\text{JacobiN}}{{\text{C}}^2}\left( { \frac{{\sqrt[4]{S}\xi }}{2},\frac{{\sqrt {{A_2} + \sqrt S } }}{{\sqrt 2 \sqrt[4]{S}}}} \right), \hfill \\
&F(\xi)= - \frac{{2{A_1}}}{{{A_2} + \sqrt S }}{\text{JacobiN}}{{\text{D}}^2}\left( { \frac{{\sqrt 2 \sqrt {{A_2} + \sqrt S } \xi }}{4},\frac{{\sqrt 2 \sqrt[4]{S}}}{{\sqrt {{A_2} + \sqrt S } }}} \right), \hfill \\
&F(\xi)= \frac{{2{A_1}}}{{{A_2} + \sqrt S }}{\text{JacobiS}}{{\text{C}}^2}\left( {\frac{{\sqrt {\sqrt S {A_2} - 2 {A_1} {A_3} + {A_2}^2} \xi }}{{2\sqrt {{A_2} + \sqrt S } }},\frac{{\sqrt {\sqrt S {A_2} + S} }}{{\sqrt {\sqrt S {A_2} - 2 {A_1} {A_3} + {A_2}^2} }}} \right), \hfill \\
&F(\xi)= \frac{{2{A_1} \left( { - 2 {A_1} {A_3} + \sqrt S {A_2} + {A_2}^2} \right)}}{{\left( {{A_2} + \sqrt S } \right)\left( {\sqrt S {A_2} + S} \right)}}{\text{JacobiS}}{{\text{D}}^2}\left( {\frac{{\sqrt {\sqrt S {A_2} + S} \xi }}{{2\sqrt {{A_2} + \sqrt S } }},\frac{{\sqrt {2 {A_1} {A_3} - \sqrt S {A_2} - {A_2}^2} }}{{\sqrt { - \sqrt S {A_2} - S} }}} \right), \hfill \\
&F(\xi)= \frac{{2{A_1}}}{{{A_2} - \sqrt S }}{\text{JacobiC}}{{\text{S}}^2}\left( {\frac{{\sqrt {{A_1} {A_3}} \xi }}{{\sqrt 2 \sqrt {{A_2} - \sqrt S } }}, \frac{{\sqrt {\sqrt S {A_2} - S} }}{{\sqrt {2{A_1} {A_3}} }}} \right), \hfill \\
&F(\xi)= \frac{{ - \sqrt S {A_2} - S}}{{{A_3}\left( {{A_2} + \sqrt S } \right)}}{\text{JacobiD}}{{\text{S}}^2}\left( {\frac{{\sqrt { - \sqrt S {A_2} - S} \xi }}{{2\sqrt {{A_2} + \sqrt S } }},\frac{{ \sqrt {2{A_1} {A_3}} }}{{\sqrt { - \sqrt S {A_2} - S} }}} \right),
\end{align}
\end{subequations}
where
\begin{align}
&S = - 4 {A_1} {A_3} + {A_2}^2.
\end{align}
\renewcommand{\theequation}{D-\arabic{equation}}
\section{Solutions of some more types of first-order NLODEs}\label{sec:appenD}
In this section, we discuss the solutions of some more types of first-order NLODEs. Among them, we can choose an auxiliary equation suitable for the input-NLPDE.\\
\textbf{Type-1 first-order NLODE:}
\par Let us first consider the first-order NLODE as follows
\begin{equation}\label{0246}
F'(\xi ) = \sqrt{{A_0} + {A_2}{F^2} + {A_4}{F^4} + {A_6}{F^6}}.
\end{equation}
The above equation admits the following special hyperbolic solutions \cite{0246}:
If ${A_0} = \frac{{8A_2^2}}{{27{A_4}}}$ and ${A_6} = \frac{{A_4^2}}{{4{A_2}}}$, then it has a bell-profile solution
\begin{subequations}\label{0246_solu}
\begin{align}
F(\xi) &= {\left( { - \frac{{8{A_2} {{\left( {\tanh \left( { \sqrt { - \frac{{{A_2}}}{3}} \xi + C} \right)} \right)}^2}}}{{3{A_4} \left( {3 + {{\left( {\tanh \left( {\sqrt { - \frac{{{A_2}}}{3}} \xi + C} \right)} \right)}^2}} \right)}}} \right)^{\frac{1}{2}}}, \hfill \\
\intertext{and a singular solution}
F(\xi) &= {\left( { - \frac{{8{A_2} {{\left( {\coth \left( { \sqrt { - \frac{{{A_2}}}{3}} \xi + C} \right)} \right)}^2}}}{{3{A_4} \left( {3 + {{\left( {\coth \left( {\sqrt { - \frac{{{A_2}}}{3}} \xi + C} \right)} \right)}^2}} \right)}}} \right)^{\frac{1}{2}}}\,.
\end{align}
\end{subequations}
\\
\textbf{Type-2 first-order NLODE:}
\par For $A_0=0$ Eq. \eqref{0246} is reduced to
\begin{equation}\label{246}
F'(\xi ) = \sqrt{{A_2}{F^2} + {A_4}{F^4} + {A_6}{F^6}}.
\end{equation}
Equation \eqref{246} has a triangular periodic solution \cite{0246}
\begin{subequations}\label{246_solu1}
\begin{equation}\label{246_1}
F(\xi ) = {\left( {\frac{{2{A_2}{{\operatorname{sech} }^2}(\sqrt {{A_2}} \xi + C)}}{{2\sqrt {A_4^2 - 4{A_2}{A_6}} - \left( {\sqrt {A_4^2 - 4{A_2}{A_6}} + {A_4}} \right){{\operatorname{sech} }^2}(\sqrt {{A_2}} \xi + C)}}} \right)^{\frac{1}{2}}},
\end{equation}
and a singular triangular periodic solution
\begin{equation}\label{246_2}
F(\xi ) = {\left( {\frac{{2{A_2}{{\operatorname{csch} }^2}(\sqrt {{A_2}} \xi + C)}}{{2\sqrt {A_4^2 - 4{A_2}{A_6}} + \left( {\sqrt {A_4^2 - 4{A_2}{A_6}} - {A_4}} \right){{\operatorname{csch} }^2}(\sqrt {{A_2}} \xi + C)}}} \right)^{\frac{1}{2}}}.
\end{equation}
\end{subequations}
If ${A_6} = \frac{{A_4^2}}{{4{A_2}}}$, Eq. \eqref{246} also admits a kink profile solution
\begin{subequations}\label{246_solu2}
\begin{equation}\label{246_3}
F(\xi)={\left( { - \frac{{{A_2}}}{{{A_4}}} \left( {1 + \tanh \left( {\sqrt {{A_2}} \xi + C} \right)} \right)} \right)^{\frac{1}{2}}},
\end{equation}
and a singular solution
\begin{equation}\label{246_4}
F(\xi)={\left( { - \frac{{{A_2}}}{{{A_4}}} \left( {1 + \coth \left( {\sqrt {{A_2}} \xi + C} \right)} \right)} \right)^{\frac{1}{2}}}.
\end{equation}
\end{subequations}
When $A_4=0$ Eq. \eqref{246} is reduced to
\begin{equation}\label{26}
F'(\xi ) = \sqrt{{A_2}{F^2} + {A_6}{F^6}}.
\end{equation}
It is clear that the Eqs. \eqref{246_1}, \eqref{246_2} with $A_4=0$ are also solutions of Eq. \eqref{26}, but Eqs. \eqref{246_3}, \eqref{246_4} are not solutions of Eq. \eqref{26} as they are undefined at $A_4=0$.\\
\textbf{Type-3 first-order NLODE:}
\par Another type of first-order NLODE is
\begin{equation}\label{234}
F'(\xi ) = \sqrt{{A_2}{F^2} + {A_3}{F^3} + {A_4}{F^4}},
\end{equation}
which has the following types of solitary wave solutions \cite{234}
\begin{subequations}\label{234_solu}
\begin{align}
\label{234_1} F(\xi) &= - \frac{{{A_2} {A_3} {{\left( {{\text{sech}}\left( { \frac{{\sqrt {{A_2}} }}{2}\xi } \right)} \right)}^2}}}{{{A_3}^2 - {A_2} {A_4} {{\left( {1 - \tanh \left( {\frac{{\sqrt {{A_2}} }}{2}\xi } \right)} \right)}^2}}}, \; F(\xi) = \frac{{2{A_2} {\text{sech}}\left( {\sqrt {{A_2}} \xi } \right)}}{{\sqrt { - 4 {A_2} {A_4} + {A_3}^2} - {A_3} {\text{sech}}\left( {\sqrt {{A_2}} \xi } \right)}}.
\end{align}
\end{subequations}
For $A_4=0$ Eq. \eqref{234} is reduced to
\begin{equation}\label{23}
F'(\xi ) = \sqrt{{A_2}{F^2} + {A_3}{F^3}}.
\end{equation}
One can get the solutions of the above equation by substituting $A_4=0$ in the solutions \eqref{234_solu}. If $A_3=0$, Eq. \eqref{234} simplifies to
\begin{equation}\label{24}
F'(\xi ) = \sqrt{{A_2}{F^2} + {A_4}{F^4}}.
\end{equation}
The solutions \eqref{234_solu} with $A_3=0$ also exist for Eq. \eqref{24}. Besides these solutions, Eq. \eqref{24} has the following two extra solutions:
\begin{subequations}\label{24_solu}
\begin{align}
F(\xi) &= \frac{{4{A_2} {{\text{e}}^{\left( {\xi + C} \right)\sqrt {{A_2}} }}}}{{ - 4 {A_2} {A_4} {{\text{e}}^{2 \sqrt {{A_2}} \xi }} + {{\text{e}}^{2 C \sqrt {{A_2}} }}}}, \;
F(\xi) = \frac{{4{A_2} {{\text{e}}^{\left( {\xi + C} \right)\sqrt {{A_2}} }}}}{{ - 4 {A_2} {A_4} {{\text{e}}^{2 C \sqrt {{A_2}} }}{\text{ + }}{{\text{e}}^{2 \sqrt {{A_2}} \xi }}}}.
\end{align}
\end{subequations}
\textbf{Type-4 first-order NLODE:}
\par We will now consider one more simplified first-order NLODE
\begin{equation}\label{02}
F'(\xi ) = \sqrt{{A_0} + {A_2}{F^2}}.
\end{equation}
We find some exact solutions of Eq. \eqref{02} containing exponential and hyperbolic functions, which are listed below
\begin{subequations}\label{02_solu}
\begin{align}
F(\xi) &= \frac{{\left( { - {A_0} {{\text{e}}^{2 \xi \sqrt {{A_2}} }} + {{\text{e}}^{2 C\sqrt {{A_2}} }}} \right){{\text{e}}^{ - \left( {C + \xi } \right)\sqrt {{A_2}} }}}}{{2\sqrt {{A_2}} }}, \;
F(\xi) = \frac{{\left( { - {{\text{e}}^{2 \xi \sqrt {{A_2}} }} + {A_0}{{\text{e}}^{2 C\sqrt {{A_2}} }}} \right){{\text{e}}^{ - \left( {C + \xi } \right)\sqrt {{A_2}} }}}}{{2\sqrt {{A_2}} }}, \hfill \\
F(\xi) &= \pm \sqrt {\frac{{ - {A_0}}}{{{A_2}}}} \cosh \left( {\xi \sqrt {{A_2}} + C} \right), \;
F(\xi) = \pm \sqrt {\frac{{{A_0}}}{{{A_2}}}} \sinh \left( {\xi \sqrt {{A_2}} + C} \right).
\end{align}
\end{subequations}
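As an illustrative check of the last solution (added here; take the upper sign and assume $A_0,A_2>0$): for $F(\xi)=\sqrt{A_0/A_2}\,\sinh\left(\xi\sqrt{A_2}+C\right)$ one finds
\begin{equation*}
F'(\xi) = \sqrt{A_0}\,\cosh\left( \xi\sqrt{A_2} + C \right)
= \sqrt{A_0\left( 1+\sinh^2\left( \xi\sqrt{A_2} + C \right) \right)}
= \sqrt{A_0 + A_2 F^2},
\end{equation*}
so Eq. \eqref{02} is satisfied.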
\begin{thebibliography}{00}
%\bibitem{extfexpn4} Y.M. Zhao, F-Expansion Method and Its Application for Finding New Exact Solutions to the Kudryashov-Sinelshchikov Equation. Journal of Applied Mathematics. 2013 (2013) 7. doi:10.1155/2013/895760.
\bibitem{fexpn024} Y. Zhou, M. Wang, and Y. Wang, Periodic wave solutions to a coupled KdV equations with variable coefficients. Physics Letters A. 308 (2003) 31–36.
\bibitem{fexpn024_1} A.H. Bhrawy, M.A. Abdelkawy, S. Kumar and A. Biswas, Solitons and other solutions to Kadomtsev-Petviashvili equation of B-type, Rom. Journ. Phys. 58, (2013) 729–748.
\bibitem{fexpn123} Y. He, New Jacobi Elliptic Function Solutions for the Kudryashov-Sinelshchikov Equation Using Improved F-Expansion Method, Mathematical Problems in Engineering, 2013, (2013) 104894.
\bibitem{0246} D.J. Huang, D.S. Li, H.G. Zhang, Explicit and exact travelling wave solutions for the generalized derivative Schr\"odinger equation. Chaos, Solitons and Fractals. 31 (2007) 586–593.
\bibitem{234} Sirendaoreji, New exact travelling wave solutions for the Kawahara and modified Kawahara equations. Chaos, Solitons and Fractals 19 (2004) 147–150.
\bibitem{modfexpn} G. Cai, Q. Wang, J. Huang, A Modified F-expansion Method for Solving Breaking Soliton Equation. International Journal of Nonlinear Science. 2 (2006) 122.
\bibitem{fim0} Z. Feng, On Explicit Exact Solutions to the Compound Burgers-KdV Equation. Physics Letters A. 293 (2002) 57-66. doi:10.1016/S0375-9601(01)00825-8.
\bibitem{fim} Z. Feng, The first-integral method to study the Burgers-Korteweg-de Vries equation. Journal of Physics A: Mathematical and General. 35 (2002) 343-349.
\bibitem{mirza} M. Mirzazadeh, M. Eslami, Exact solutions of the Kudryashov–Sinelshchikov equation and nonlinear telegraph equation via the first integral method. Nonlinear Analysis: Modelling and Control. 17 (2012) 481–488.
\bibitem{complexTwt1} Z.Y. Zhang, Z.H. Liu, X.J. Miao, Y.Z. Chen, New exact solutions to the perturbed nonlinear Schr\"odinger’s equation with Kerr law nonlinearity, Applied Mathematics and Computation. 216 (2010) 3064-3072.
\bibitem{ginac} C. W. Bauer, A. Frink and R. Kreckel, Introduction to the GiNaC Framework for Symbolic Computation within the C++ Programming Language. arXiv:cs/0004015; the GiNaC library is available at http://www.ginac.de.
\bibitem{complexTwt} N. Taghizadeh, M. Mirzazadeh, F. Farahrooz, Exact solutions of the nonlinear Schrödinger equation by the first integral method. Journal of Mathematical Analysis and Applications. 374 (2011) 549-553. https://doi.org/10.1016/j.jmaa.2010.08.050
\bibitem{rath} Zhi-Bin Li \& Yin-Ping Liu, RATH: A Maple package for finding travelling solitary wave solutions to nonlinear evolution equations. Comput. Phys. Comm. {148} (2002) 256-266.
\bibitem{grobner} T. Becker, V. Weispfenning, Gr\"obner Bases: A Computational Approach to Commutative Algebra. Springer-Verlag, Berlin, 1993.
\bibitem{RittWu} D. Wang, Elimination Methods. Springer-Verlag, New York, 2001.
\bibitem{RittWu1} D. Wang. A generalized algorithm for computing characteristic sets. World Scientific Publishing Company, Singapore. 165–174 (2001).
\bibitem{rif} A. Wittkopf, G. Reid, Introduction to the Rif Package Version 1.1. See: http://www.cecm.sfu.ca/~wittkopf/Rif.html, 2003.
\bibitem{baldwin} D. Baldwin, \"U. G\"okta\c{s}, W. Hereman, Symbolic computation of hyperbolic tangent solutions for nonlinear differential–difference equations. Comput. Phys. Comm. 162 (2004) 203–217.
\bibitem{yang} X. F. Yang, Z. C. Deng, Y. Wei. Riccati-Bernoulli sub-ODE method for nonlinear partial differential equations and its application. Advances in Difference Equations. 2015 (2015) 117.
\bibitem{Schwalm}W. A. Schwalm, Lectures on Selected Topics in Mathematical Physics: Elliptic Functions and Elliptic Integrals; Morgan \& Claypool Publishers, (2015).
\end{thebibliography}
\end{document}
\SetAPI{J-C}
\section{PermissionRule}
\label{feature:PermissionRule}
\ClearAPI
\TODO
\documentclass[12pt]{scrartcl}
\input{preamble}
\makeatletter
\title{Hack 12.0}\let\Title\@title
\subtitle{Computer Science I -- Java\\
Recursion \& Memoization\\
{\small
\vskip1cm
Department of Computer Science \& Engineering \\
University of Nebraska--Lincoln}
\vskip-3cm}
%\author{Dr.\ Chris Bourke}
\date{~}
\makeatother
\begin{document}
\maketitle
\hrule
\input{instructions.tex}
\section*{Problem Statement}
A binomial coefficient, ``$n$ choose $k$'' is a number that corresponds
to the number of ways to \emph{choose} $k$ items from a set of $n$ distinct
items. You may be familiar with some of the notations, $C(n,k)$ or $C_n^k$
or ${}_{n}C_k $, but most commonly this is written as
$${n \choose k}$$
and read as ``$n$ choose $k$''. There is an easy to compute formula involving
factorials:
$${n \choose k} = \frac{n!}{(n-k)!k!}$$
For example, if we have $n = 4$ items, say $\{a, b, c, d\}$ and want to choose
$k=2$ of them, then there are
$${4 \choose 2} = \frac{4!}{(4-2)!2!} = 6$$
ways of doing this. The six ways are:
$$\{a, b\}, \{a, c\}, \{a, d\}, \{b, c\}, \{b, d\}, \{c, d\}$$
There are a lot of other interpretations and applications for binomial
coefficients, but this hack will focus on computing their value using
a different formula, Pascal's Rule\footnote{Which can be used to generate
Pascal's Triangle, \url{https://en.wikipedia.org/wiki/Pascals_triangle}}:
$${n \choose k} = {n-1 \choose k} + {n-1 \choose k-1}$$
which is a recursive formula. The base cases for Pascal's Rule are when
$k = 0$ and $n = k$. In both cases, the value is 1. When $k = 0$, we are
not choosing any elements and so there is only one way of doing that (i.e.\
choose nothing). When $n = k$ we are choosing every element, and again there
is only one way of doing that.
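For example, applying Pascal's Rule repeatedly to ${4 \choose 2}$ and stopping
at the base cases reproduces the value 6 computed earlier:
$${4 \choose 2} = {3 \choose 2} + {3 \choose 1}
= \left({2 \choose 2} + {2 \choose 1}\right) + \left({2 \choose 1} + {2 \choose 0}\right)
= (1 + 2) + (2 + 1) = 6,$$
since ${2 \choose 1} = {1 \choose 1} + {1 \choose 0} = 2$ and the remaining
terms are base cases.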
\subsection*{Writing a Naive Recursion}
Create a class named \mintinline{text}{Binomial} and implement and
test the following method \emph{using a recursive} solution:
\mintinline{java}{public static long choose(int n, int k)}
which takes $n$ and $k$ and computes ${n\choose k}$ using Pascal's Rule.
Note that the return type is a \mintinline{java}{long} which is a 64-bit
integer allowing you to compute values up to
$$2^{63}-1 = 9,223,372,036,854,775,807$$
(a little over 9 quintillion). Write a \mintinline{java}{main} method
that takes $n$ and $k$ as command line arguments and outputs the result
to the standard output so you can easily test it.
\subsection*{Benchmarking}
Run your program on values of $n, k$ in Table \ref{table:easyValues}
and time (roughly) how long it takes your program to execute. You
can check your solutions with an online tool such as
\url{https://www.wolframalpha.com/}.
\begin{table}[ht]
\centering
\begin{tabular}{c|c}
$n$ & $k$ \\
\hline\hline
4 & 2 \\
10 & 5 \\
32 & 16 \\ %5 seconds
34 & 17 \\ %15 seconds
36 & 18 \\ %60 seconds
\end{tabular}
\caption{Test Values}
\label{table:easyValues}
\end{table}
Now formulate an estimate of how long your program would take to
execute with larger values.  You can make a \emph{rough} estimate of
how many method calls are made using the binomial value itself.
That is, computing ${n \choose k}$ using Pascal's Rule makes
\emph{about} ${n \choose k}$ method calls.
Use the running time of your program on the test values to
estimate how long your program would run for the values in
Table \ref{table:hardValues} (a sample estimate of this kind is
worked out after the table).
\begin{table}[ht]
\centering
\begin{tabular}{c|r}
${n \choose k}$ & value \\
\hline\hline
${54 \choose 27}$ & = 1,946,939,425,648,112 \\
${56 \choose 28}$ & = 7,648,690,600,760,440 \\
${58 \choose 29}$ & = 30,067,266,499,541,040 \\
${60 \choose 30}$ & = 118,264,581,564,861,424 \\
${62 \choose 31}$ & = 465,428,353,255,261,088 \\
${64 \choose 32}$ & = 1,832,624,140,942,590,534 \\
${66 \choose 33}$ & = 7,219,428,434,016,265,740 \\
\end{tabular}
\caption{Larger Values}
\label{table:hardValues}
\end{table}
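For instance, if the $n=36$ run took about one minute on your machine
(your timing will differ), then since ${36 \choose 18} \approx 9.1 \times 10^{9}$
and ${54 \choose 27} \approx 1.9 \times 10^{15}$, the naive recursion would need roughly
$$60\;\mathrm{s} \times \frac{1.9 \times 10^{15}}{9.1 \times 10^{9}} \approx 1.3 \times 10^{7}\;\mathrm{s},$$
which is on the order of 150 days.  Estimates like this are what motivate
the memoization technique in the next subsection.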
\subsection*{Improving Performance with Memoization}
You'll now improve your program's performance using memoization
to avoid unnecessary repeated recursive calls.
\begin{enumerate}
\item First, change your return type to use Java's \mintinline{java}{BigInteger}
class. This is an arbitrary precision number class meaning that it can
represent arbitrarily large integer values. You won't be able to use
the normal arithmetic operators however. Instead, you'll need to RTM and
use the class's methods to add and perform other operations. See the
documentation here: \url{https://docs.oracle.com/en/java/javase/14/docs/api/java.base/java/math/BigInteger.html}
\item To ``cache'' values so that you are not continually repeating the
same calculations over and over, you \emph{could} use a table, but let's
use a \emph{smart data structure}: a \mintinline{java}{Map}.
This map will be used to map a pair of input values, $(n,k)$ to the
value of the binomial ${n \choose k}$. The problem is that we want
to use the combination of two values as a single key. To do so, we've
provided a \mintinline{java}{Pair} class that allows you to pair two
objects together to use as a key.
Create and instantiate a static map of the following type:
\mintinline{java}{Map<Pair<Integer, Integer>, BigInteger>}
\item Modify your \mintinline{java}{binomial} method to use this map
to store and use values to avoid unnecessary repeated recursive calls.
Your method should have the following signature:
\mintinline{java}{public static BigInteger binomial(int n, int k)}
When the method needs to compute ${n \choose k}$ it checks the map
first: if the value has already been computed (is not \mintinline{java}{null}) then it
returns that value. Otherwise, it performs the recursive computation.
Before returning the value, however, it should store it (\emph{cache}
it) in the map so that subsequent computations avoid the recursion.
\item Rerun your program with the values in Tables \ref{table:easyValues}
and \ref{table:hardValues} to verify they work and note the difference
in running time.
\end{enumerate}
\section*{Instructions}
\begin{itemize}
\item All your code should be in the class file,
\mintinline{text}{Binomial.java} along with full documentation.
\item You are encouraged to collaborate with any number of students
before, during, and after your scheduled hack session.
\item Include the name(s) of everyone who worked together on
this activity in your source file's header.
\item Turn in all of your files via webhandin, making sure that
it runs and executes correctly in the webgrader. Each individual
student will need to hand in their own copy and will receive
their own individual grade.
\end{itemize}
\end{document}
\subsection{What is Rse-ops?}
The role of Research Software Engineer (RSE) has been emerging in the last decade, and due to the hybrid nature of working environments across cloud and HPC, DevOps practices are logically being adopted. So if DevOps is the intersection of ``Developer'' and ``Operations,'' then how does this concept map to this new space, where high performance computing, or more generally, Research Software Engineering is at the forefront?
Inspired by DevOps, we can define a similar term for the Research Software Engineering community to also inspire collaboration and champion best practices -- RSE-ops. Research Software Engineers (RSEs) \cite{rse-history} are those individuals that write code for scientific software, and more generally support researchers to use codes on high performance computing systems, cloud services, and lab computers.
Akin to traditional Software Engineers at major tech companies, they are responsible not just for software development, but also for deployment of analysis pipelines and general services.
It can be noted that we are not calling the new term RseDevOps (dropping ``Dev''), and this is done intentionally as the term ``Research Software Engineering'' encompasses this ``Development'' portion. RSE-ops, then, appropriately refers to best practices for ensuring the same reliability, scale, collaboration, and software engineering for research codes. We may not always be running a scaled web service, but we might be running scaled jobs on a manager, profiling performance, or testing software in development.
Thus, RSE-ops is the intersection of Research Software Engineering and Operations, and generally refers to best practices for development and operations of scientific software.
Arguably, the RSE community has just as much to gain by building community and putting structure around these practices.
It's important to note that while high performance computing (HPC) has traditionally been a large part of scientific computation, researchers have extended their tools to also use cloud services and other non-HPC tools, so HPC is considered only a subset of Research Software Engineering and thus of RSE-ops. Many modern applications are web-based and extend beyond HPC, and so it is important to consider this set as part of the larger scientific or research software engineering universe. However, the dual need to run or deploy applications across environments presents greater challenges for the community.
% We will want to make sure the document is clear on distinguishing between HPC and RSE - many RSEs use HPC, but not all, and HPC does not necessarily have research software engineers (they may be called developers or something else)
\chapter{Preface}
Computer-aided design (CAD) is a powerful tool for
spatial investigation and manipulation. Like many
powerful tools, its usage can be broken down into
simple steps. I hope to take you through those steps
the right way the first time, so that you build good
habits from day one.
\section{Who is this book for?}
This book assumes no foreknowledge of CAD or engineering
concepts. We'll start from square one. This book is for
individuals who want to manipulate 2-D and 3-D objects.
User groups that may find this work useful are:
\begin{itemize}
\item Budding mechanical engineering students. Though I've spent
very little time in my engineering career as a designer,
I still CAD regularly.
\item Hobbyists and artists. I've used CAD to help churn through ideas on personal projects where pencil-on-paper
sketches would be sufficient but slow.
\item Parents who, due to the Covid-19 pandemic, can now
add ``teacher'' to their resume. I wish the lessons in
this book had been a part of my early schooling.
\end{itemize}
\section{Why am I writing this book?}
I'm writing this book because:
\begin{itemize}
\item I, like many, was taught to CAD through word of
mouth. I was blessed to have strong mentors, but I
believe one can build a solid foundation with
well-designed lessons. Surprisingly, the resource I
envision doesn't seem to exist.
\item Instruction tends toward details, advanced theory,
and excess talk while missing fundamental concepts and,
most important, practice. I aim to create a book that,
like a good coach,
encourages practice, critical thought, and strong
fundamentals.
\item I believe that learning the skills well from
Day 1 can shorten the learning curve by months. Learning
them haphazardly can build poor habits that later need to
be remodeled.
\item I've consumed my fair share of do-it-yourself
books. I hope to give back to that collection of knowledge.
\end{itemize}
\section{Using this Book}
This book is intended as a series of work-along exercises.
CAD, like most cognitive skills, is best learned with a
bit of listening and a lot of doing. I recommend these
steps to make the most of this book.
\begin{description}
\item[Mouse] Get yourself a mouse with at least two
buttons and a scroll wheel. I use one with a cord.
\item[This Book] Keep this book open as you work. This is
not a ``read-a-chapter-then-do'' type of book.
\item[Monitors] If you've got two monitors, great! I
don't. Find a way to easily view both Onshape and this book, preferably at the same time.
\end{description}
\section{Using Onshape}
This book uses Onshape because it
is free (as in beer) and therefore accessible. You
will need to make an account at \url{onshape.com}.
Onshape's free accounts include storage space for public
documents, which means other users can view and copy
but not edit your documents. Onshape's free subscription
is not the place to develop highly confidential designs.
Perhaps the world needs fewer highly confidential designs.
\section{CAD Efficiency}
I find the following allow me to smoothly develop and manipulate my models. I recommend building these habits, and only shunning them once you've determined they don't work for you.
\begin{description}
\item[Hotkeys] I'll call out relevant hotkeys when appropriate. Because keystrokes are more practiced and require less precision than mouse clicks, they are faster for repetitive tasks such as defining a sketch. Where a hotkey appears as (\keystroke{shift} \keystroke{s}), this is translated as ``While holding \keystroke{shift}, press \keystroke{s}''.
\end{description}
% Default to the notebook output style
% Inherit from the specified cell style.
\documentclass[11pt]{article}
\usepackage[T1]{fontenc}
% Nicer default font (+ math font) than Computer Modern for most use cases
\usepackage{mathpazo}
% Basic figure setup, for now with no caption control since it's done
% automatically by Pandoc (which extracts  syntax from Markdown).
\usepackage{graphicx}
% We will generate all images so they have a width \maxwidth. This means
% that they will get their normal width if they fit onto the page, but
% are scaled down if they would overflow the margins.
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth
\else\Gin@nat@width\fi}
\makeatother
\let\Oldincludegraphics\includegraphics
% Set max figure width to be 80% of text width, for now hardcoded.
\renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}}
% Ensure that by default, figures have no caption (until we provide a
% proper Figure object with a Caption API and a way to capture that
% in the conversion process - todo).
\usepackage{caption}
\DeclareCaptionLabelFormat{nolabel}{}
\captionsetup{labelformat=nolabel}
\usepackage{adjustbox} % Used to constrain images to a maximum size
\usepackage{xcolor} % Allow colors to be defined
\usepackage{enumerate} % Needed for markdown enumerations to work
\usepackage{geometry} % Used to adjust the document margins
\usepackage{amsmath} % Equations
\usepackage{amssymb} % Equations
\usepackage{textcomp} % defines textquotesingle
% Hack from http://tex.stackexchange.com/a/47451/13684:
\AtBeginDocument{%
\def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code
}
\usepackage{upquote} % Upright quotes for verbatim code
\usepackage{eurosym} % defines \euro
\usepackage[mathletters]{ucs} % Extended unicode (utf-8) support
\usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document
\usepackage{fancyvrb} % verbatim replacement that allows latex
\usepackage{grffile} % extends the file name processing of package graphics
% to support a larger range
% The hyperref package gives us a pdf with properly built
% internal navigation ('pdf bookmarks' for the table of contents,
% internal cross-reference links, web links for URLs, etc.)
\usepackage{hyperref}
\usepackage{longtable} % longtable support required by pandoc >1.10
\usepackage{booktabs} % table support for pandoc > 1.12.2
\usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment)
\usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout)
% normalem makes italics be italics, not underlines
% Colors for the hyperref package
\definecolor{urlcolor}{rgb}{0,.145,.698}
\definecolor{linkcolor}{rgb}{.71,0.21,0.01}
\definecolor{citecolor}{rgb}{.12,.54,.11}
% ANSI colors
\definecolor{ansi-black}{HTML}{3E424D}
\definecolor{ansi-black-intense}{HTML}{282C36}
\definecolor{ansi-red}{HTML}{E75C58}
\definecolor{ansi-red-intense}{HTML}{B22B31}
\definecolor{ansi-green}{HTML}{00A250}
\definecolor{ansi-green-intense}{HTML}{007427}
\definecolor{ansi-yellow}{HTML}{DDB62B}
\definecolor{ansi-yellow-intense}{HTML}{B27D12}
\definecolor{ansi-blue}{HTML}{208FFB}
\definecolor{ansi-blue-intense}{HTML}{0065CA}
\definecolor{ansi-magenta}{HTML}{D160C4}
\definecolor{ansi-magenta-intense}{HTML}{A03196}
\definecolor{ansi-cyan}{HTML}{60C6C8}
\definecolor{ansi-cyan-intense}{HTML}{258F8F}
\definecolor{ansi-white}{HTML}{C5C1B4}
\definecolor{ansi-white-intense}{HTML}{A1A6B2}
% commands and environments needed by pandoc snippets
% extracted from the output of `pandoc -s`
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\newenvironment{Shaded}{}{}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}}
\newcommand{\RegionMarkerTok}[1]{{#1}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\NormalTok}[1]{{#1}}
% Additional commands for more recent versions of Pandoc
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}}
\newcommand{\ImportTok}[1]{{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}}
\newcommand{\BuiltInTok}[1]{{#1}}
\newcommand{\ExtensionTok}[1]{{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
% Define a nice break command that doesn't care if a line doesn't already
% exist.
\def\br{\hspace*{\fill} \\* }
% Math Jax compatability definitions
\def\gt{>}
\def\lt{<}
% Document parameters
\title{EP\_Exact+Numerical}
% Pygments definitions
\makeatletter
\def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax%
\let\PY@ul=\relax \let\PY@tc=\relax%
\let\PY@bc=\relax \let\PY@ff=\relax}
\def\PY@tok#1{\csname PY@tok@#1\endcsname}
\def\PY@toks#1+{\ifx\relax#1\empty\else%
\PY@tok{#1}\expandafter\PY@toks\fi}
\def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{%
\PY@it{\PY@bf{\PY@ff{#1}}}}}}}
\def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}}
\expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}}
\expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}}
\expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}}
\expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}}
\expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}}
\expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}}
\expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}}
\expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit}
\expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf}
\expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}}
\expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}}
\expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}}
\expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\def\PYZbs{\char`\\}
\def\PYZus{\char`\_}
\def\PYZob{\char`\{}
\def\PYZcb{\char`\}}
\def\PYZca{\char`\^}
\def\PYZam{\char`\&}
\def\PYZlt{\char`\<}
\def\PYZgt{\char`\>}
\def\PYZsh{\char`\#}
\def\PYZpc{\char`\%}
\def\PYZdl{\char`\$}
\def\PYZhy{\char`\-}
\def\PYZsq{\char`\'}
\def\PYZdq{\char`\"}
\def\PYZti{\char`\~}
% for compatibility with earlier versions
\def\PYZat{@}
\def\PYZlb{[}
\def\PYZrb{]}
\makeatother
% Exact colors from NB
\definecolor{incolor}{rgb}{0.0, 0.0, 0.5}
\definecolor{outcolor}{rgb}{0.545, 0.0, 0.0}
% Prevent overflowing lines due to hard-to-break entities
\sloppy
% Setup hyperref package
\hypersetup{
breaklinks=true, % so long urls are correctly broken across lines
colorlinks=true,
urlcolor=urlcolor,
linkcolor=linkcolor,
citecolor=citecolor,
}
% Slightly bigger margins than the latex defaults
\geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in}
\begin{document}
\maketitle
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}1}]:} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} inline
\PY{k+kn}{from} \PY{n+nn}{pylab} \PY{k}{import} \PY{o}{*}
\PY{n}{style}\PY{o}{.}\PY{n}{use}\PY{p}{(}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{fivethirtyeight}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{./00\PYZus{}mplrc}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{k+kn}{import} \PY{n+nn}{matplotlib}
\PY{n}{matplotlib}\PY{o}{.}\PY{n}{rcParams}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{figure.figsize}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{12}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{]}
\PY{n}{matplotlib}\PY{o}{.}\PY{n}{rcParams}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{axes.labelsize}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{x\PYZhy{}large}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{matplotlib}\PY{o}{.}\PY{n}{rcParams}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{xtick.labelsize}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{large}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{matplotlib}\PY{o}{.}\PY{n}{rcParams}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ytick.labelsize}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{large}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{matplotlib}\PY{o}{.}\PY{n}{rcParams}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{figure.facecolor}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{FCFFFF}\PY{l+s+s1}{\PYZsq{}}
\end{Verbatim}
\section{Exact Integration for an EP SDOF
System}\label{exact-integration-for-an-ep-sdof-system}
We want to compute the response, using the constant acceleration
algorithm plus the modified Newton-Raphson (MNR) procedure, of an
Elasto-Plastic (EP) system... but how can we confirm or reject our results?
It turns out that computing the exact response of an EP system with a
single degree of freedom is relatively simple.
Here we discuss a program that computes the analytical solution of our
problem.
The main building blocks of the program will be two functions that
compute, for the elastic phase and for the plastic phase, the analytical
functions that give the displacement and the velocity as functions of
time.
\subsection{Elastic response}\label{elastic-response}
We are defining a function that, for a linear dynamic system, returns
not the displacement or the velocity at a given time, but rather a
pair of functions of time that we can use afterwards to compute
displacements and velocities at any time of interest.
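As a minimal illustration of this pattern (this sketch is not part of the
solution itself), a factory function can return closures that capture the
computed constants:
\begin{verbatim}
# illustration only: a factory returning two closures over fixed constants
def make_linear(x0, v0):
    def x(t):
        return x0 + v0*t      # displacement at time t
    def v(t):
        return v0             # velocity at time t
    return x, v

x, v = make_linear(0.0, 2.0)
print(x(1.5), v(1.5))         # 3.0 2.0
\end{verbatim}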
The response depends on the parameters of the dynamic system \(m,c,k,\)
on the initial conditions \(x_0, v_0,\) and on the characteristics of
the external load.
Here the external load is limited to a linear combination of a
cosine-modulated term, a sine-modulated term (both with the same
frequency \(\omega\)) and a constant force,
\(P(t) = c_C \cos\omega t + c_S \sin\omega t + F,\)
but that is all that is needed for the present problem.
The particular integral is
\(\xi(t) = S \cos\omega t + R \sin\omega t + D;\)
substituting it into the equation of motion and equating the
corresponding terms gives the undetermined coefficients in \(\xi(t)\),
and evaluating the general integral and its time derivative at
\(t=0\) then gives the constants in the homogeneous part of the
solution.
The final step is to define the displacement and the velocity functions,
according to the constants we have determined, and to return these two
functions to the caller.
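For reference, substituting \(\xi(t)\) into
\(m\ddot x + c\dot x + kx = c_C\cos\omega t + c_S\sin\omega t + F\) and
matching the sine, cosine and constant terms gives, with
\(\beta=\omega/\omega_n\) and damping ratio \(\zeta\), the coefficients
computed in the cell below:
\[
D=\frac{F}{k},\qquad
R=\frac{(1-\beta^2)\,c_S+2\zeta\beta\,c_C}{k\left[(1-\beta^2)^2+(2\zeta\beta)^2\right]},\qquad
S=\frac{(1-\beta^2)\,c_C-2\zeta\beta\,c_S}{k\left[(1-\beta^2)^2+(2\zeta\beta)^2\right]}.
\]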
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}3}]:} \PY{k}{def} \PY{n+nf}{resp\PYZus{}elas}\PY{p}{(}\PY{n}{m}\PY{p}{,}\PY{n}{c}\PY{p}{,}\PY{n}{k}\PY{p}{,} \PY{n}{cC}\PY{p}{,}\PY{n}{cS}\PY{p}{,}\PY{n}{w}\PY{p}{,} \PY{n}{F}\PY{p}{,} \PY{n}{x0}\PY{p}{,}\PY{n}{v0}\PY{p}{)}\PY{p}{:}
\PY{n}{wn2} \PY{o}{=} \PY{n}{k}\PY{o}{/}\PY{n}{m} \PY{p}{;} \PY{n}{wn} \PY{o}{=} \PY{n}{sqrt}\PY{p}{(}\PY{n}{wn2}\PY{p}{)} \PY{p}{;} \PY{n}{beta} \PY{o}{=} \PY{n}{w}\PY{o}{/}\PY{n}{wn}
\PY{n}{z} \PY{o}{=} \PY{n}{c}\PY{o}{/}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{m}\PY{o}{*}\PY{n}{wn}\PY{p}{)}
\PY{n}{wd} \PY{o}{=} \PY{n}{wn}\PY{o}{*}\PY{n}{sqrt}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{\PYZhy{}}\PY{n}{z}\PY{o}{*}\PY{n}{z}\PY{p}{)}
\PY{c+c1}{\PYZsh{} xi(t) = R sin(w t) + S cos(w t) + D}
\PY{n}{det} \PY{o}{=} \PY{p}{(}\PY{l+m+mf}{1.}\PY{o}{\PYZhy{}}\PY{n}{beta}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{+}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{beta}\PY{o}{*}\PY{n}{z}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}
\PY{n}{R} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{\PYZhy{}}\PY{n}{beta}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{*}\PY{n}{cS} \PY{o}{+} \PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{beta}\PY{o}{*}\PY{n}{z}\PY{p}{)}\PY{o}{*}\PY{n}{cC}\PY{p}{)}\PY{o}{/}\PY{n}{det}\PY{o}{/}\PY{n}{k}
\PY{n}{S} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{\PYZhy{}}\PY{n}{beta}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{*}\PY{n}{cC} \PY{o}{\PYZhy{}} \PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{beta}\PY{o}{*}\PY{n}{z}\PY{p}{)}\PY{o}{*}\PY{n}{cS}\PY{p}{)}\PY{o}{/}\PY{n}{det}\PY{o}{/}\PY{n}{k}
\PY{n}{D} \PY{o}{=} \PY{n}{F}\PY{o}{/}\PY{n}{k}
\PY{n}{A} \PY{o}{=} \PY{n}{x0}\PY{o}{\PYZhy{}}\PY{n}{S}\PY{o}{\PYZhy{}}\PY{n}{D}
\PY{n}{B} \PY{o}{=} \PY{p}{(}\PY{n}{v0}\PY{o}{+}\PY{n}{z}\PY{o}{*}\PY{n}{wn}\PY{o}{*}\PY{n}{A}\PY{o}{\PYZhy{}}\PY{n}{w}\PY{o}{*}\PY{n}{R}\PY{p}{)}\PY{o}{/}\PY{n}{wd}
\PY{k}{def} \PY{n+nf}{x}\PY{p}{(}\PY{n}{t}\PY{p}{)}\PY{p}{:}
\PY{k}{return} \PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{z}\PY{o}{*}\PY{n}{wn}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{*}\PY{p}{(}\PY{n}{A}\PY{o}{*}\PY{n}{cos}\PY{p}{(}\PY{n}{wd}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{+}\PY{n}{B}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{wd}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{p}{)}\PY{o}{+}\PY{n}{R}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{+}\PY{n}{S}\PY{o}{*}\PY{n}{cos}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{+}\PY{n}{D}
\PY{k}{def} \PY{n+nf}{v}\PY{p}{(}\PY{n}{t}\PY{p}{)}\PY{p}{:}
\PY{k}{return} \PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{z}\PY{o}{*}\PY{n}{wn}\PY{o}{*}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{z}\PY{o}{*}\PY{n}{wn}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{*}\PY{p}{(}\PY{n}{A}\PY{o}{*}\PY{n}{cos}\PY{p}{(}\PY{n}{wd}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{+}\PY{n}{B}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{wd}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{p}{)}
\PY{o}{+}\PY{n}{wd}\PY{o}{*}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{z}\PY{o}{*}\PY{n}{wn}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{*}\PY{p}{(}\PY{n}{B}\PY{o}{*}\PY{n}{cos}\PY{p}{(}\PY{n}{wd}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{A}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{wd}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{p}{)}
\PY{o}{+}\PY{n}{w}\PY{o}{*}\PY{p}{(}\PY{n}{R}\PY{o}{*}\PY{n}{cos}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{S}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{k}{return} \PY{n}{x}\PY{p}{,}\PY{n}{v}
\end{Verbatim}
\subsection{Plastic response}\label{plastic-response}
In this case the equation of motion is
\(m\ddot x + c \dot x = P(t),\)
the homogeneous response is
\(x(t)=A\exp(-\frac{c}{m}t)+B,\)
and the particular integral, for a load described as in the previous
case, is (slightly different...)
\(\xi(t) = S \cos\omega t + R \sin\omega t + Dt.\)
Having computed \(R, S,\) and \(D\) by substituting \(\xi\) into the
equation of motion, and \(A\) and \(B\) by imposing the initial
conditions, we can define the displacement and velocity functions and,
finally, return these two functions to the caller.
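For reference, substituting \(\xi(t)\) into
\(m\ddot x + c\dot x = c_C\cos\omega t + c_S\sin\omega t + F\) and
matching terms gives the coefficients computed in the cell below
(where \(D\) is called \texttt{Q}):
\[
D=\frac{F}{c},\qquad
R=\frac{\omega c\,c_C-m\omega^2\,c_S}{\omega^2\left(c^2+m^2\omega^2\right)},\qquad
S=-\frac{\omega c\,c_S+m\omega^2\,c_C}{\omega^2\left(c^2+m^2\omega^2\right)}.
\]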
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}4}]:} \PY{k}{def} \PY{n+nf}{resp\PYZus{}yield}\PY{p}{(}\PY{n}{m}\PY{p}{,}\PY{n}{c}\PY{p}{,} \PY{n}{cC}\PY{p}{,}\PY{n}{cS}\PY{p}{,}\PY{n}{w}\PY{p}{,} \PY{n}{F}\PY{p}{,} \PY{n}{x0}\PY{p}{,}\PY{n}{v0}\PY{p}{)}\PY{p}{:}
\PY{c+c1}{\PYZsh{} csi(t) = R sin(w t) + S cos(w t) + Q t}
\PY{n}{Q} \PY{o}{=} \PY{n}{F}\PY{o}{/}\PY{n}{c}
\PY{n}{det} \PY{o}{=} \PY{n}{w}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{p}{(}\PY{n}{c}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{+}\PY{n}{w}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{m}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{R} \PY{o}{=} \PY{p}{(}\PY{o}{+}\PY{n}{w}\PY{o}{*}\PY{n}{c}\PY{o}{*}\PY{n}{cC}\PY{o}{\PYZhy{}}\PY{n}{w}\PY{o}{*}\PY{n}{w}\PY{o}{*}\PY{n}{m}\PY{o}{*}\PY{n}{cS}\PY{p}{)}\PY{o}{/}\PY{n}{det}
\PY{n}{S} \PY{o}{=} \PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{w}\PY{o}{*}\PY{n}{c}\PY{o}{*}\PY{n}{cS}\PY{o}{\PYZhy{}}\PY{n}{w}\PY{o}{*}\PY{n}{w}\PY{o}{*}\PY{n}{m}\PY{o}{*}\PY{n}{cC}\PY{p}{)}\PY{o}{/}\PY{n}{det}
\PY{c+c1}{\PYZsh{} x(t) = A exp(\PYZhy{}c t/m) + B + R sin(w t) + S cos(w t) + Q t}
\PY{c+c1}{\PYZsh{} v(t) = \PYZhy{} c A/m exp(\PYZhy{}c t/m) + w R cos(w t) \PYZhy{} w S sin(w t) + Q}
\PY{c+c1}{\PYZsh{}}
\PY{c+c1}{\PYZsh{} v(0) = \PYZhy{}c A / m + w R + Q = v0}
\PY{n}{A} \PY{o}{=} \PY{n}{m}\PY{o}{*}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{R} \PY{o}{+} \PY{n}{Q} \PY{o}{\PYZhy{}} \PY{n}{v0}\PY{p}{)}\PY{o}{/}\PY{n}{c}
\PY{c+c1}{\PYZsh{} x(0) = A + B + S = x0}
\PY{n}{B} \PY{o}{=} \PY{n}{x0} \PY{o}{\PYZhy{}} \PY{n}{A} \PY{o}{\PYZhy{}} \PY{n}{S}
\PY{k}{def} \PY{n+nf}{x}\PY{p}{(}\PY{n}{t}\PY{p}{)}\PY{p}{:}
\PY{k}{return} \PY{n}{A}\PY{o}{*}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{c}\PY{o}{*}\PY{n}{t}\PY{o}{/}\PY{n}{m}\PY{p}{)}\PY{o}{+}\PY{n}{B}\PY{o}{+}\PY{n}{R}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{+}\PY{n}{S}\PY{o}{*}\PY{n}{cos}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{+}\PY{n}{Q}\PY{o}{*}\PY{n}{t}
\PY{k}{def} \PY{n+nf}{v}\PY{p}{(}\PY{n}{t}\PY{p}{)}\PY{p}{:}
\PY{k}{return} \PY{o}{\PYZhy{}}\PY{n}{c}\PY{o}{*}\PY{n}{A}\PY{o}{*}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{c}\PY{o}{*}\PY{n}{t}\PY{o}{/}\PY{n}{m}\PY{p}{)}\PY{o}{/}\PY{n}{m}\PY{o}{+}\PY{n}{w}\PY{o}{*}\PY{n}{R}\PY{o}{*}\PY{n}{cos}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{w}\PY{o}{*}\PY{n}{S}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{t}\PY{p}{)}\PY{o}{+}\PY{n}{Q}
\PY{k}{return} \PY{n}{x}\PY{p}{,}\PY{n}{v}
\end{Verbatim}
\subsubsection{A utility function}\label{an-utility-function}
We need to find when
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
the spring yields
\item
the velocity is zero
\end{enumerate}
to identify the three ranges of different behaviour
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
elastic
\item
plastic
\item
elastic, with permanent deformation.
\end{enumerate}
We can use the simple and robust \emph{bisection} algorithm to find
the roots of
\(x_{el}(t)=x_y \text{ and } \dot{x}_{ep}(t)=0\).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}5}]:} \PY{k}{def} \PY{n+nf}{bisect}\PY{p}{(}\PY{n}{f}\PY{p}{,}\PY{n}{val}\PY{p}{,}\PY{n}{x0}\PY{p}{,}\PY{n}{x1}\PY{p}{)}\PY{p}{:}
\PY{n}{h} \PY{o}{=} \PY{p}{(}\PY{n}{x0}\PY{o}{+}\PY{n}{x1}\PY{p}{)}\PY{o}{/}\PY{l+m+mf}{2.0}
\PY{n}{fh} \PY{o}{=} \PY{n}{f}\PY{p}{(}\PY{n}{h}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{val}
\PY{k}{if} \PY{n+nb}{abs}\PY{p}{(}\PY{n}{fh}\PY{p}{)}\PY{o}{\PYZlt{}}\PY{l+m+mf}{1e\PYZhy{}8} \PY{p}{:} \PY{k}{return} \PY{n}{h}
\PY{n}{f0} \PY{o}{=} \PY{n}{f}\PY{p}{(}\PY{n}{x0}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{val}
\PY{k}{if} \PY{n}{f0}\PY{o}{*}\PY{n}{fh} \PY{o}{\PYZgt{}} \PY{l+m+mi}{0} \PY{p}{:}
\PY{k}{return} \PY{n}{bisect}\PY{p}{(}\PY{n}{f}\PY{p}{,} \PY{n}{val}\PY{p}{,} \PY{n}{h}\PY{p}{,} \PY{n}{x1}\PY{p}{)}
\PY{k}{else}\PY{p}{:}
\PY{k}{return} \PY{n}{bisect}\PY{p}{(}\PY{n}{f}\PY{p}{,} \PY{n}{val}\PY{p}{,} \PY{n}{x0}\PY{p}{,} \PY{n}{h}\PY{p}{)}
\end{Verbatim}
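As a quick sanity check (illustration only, not part of the original
notebook), the routine can be exercised on a simple equation:
\begin{verbatim}
# illustration only: solve t**2 = 2 on the bracket [0, 3]
print(bisect(lambda t: t**2, 2.0, 0.0, 3.0))   # approximately 1.4142136
\end{verbatim}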
\subsection{The system parameters}\label{the-system-parameters}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}7}]:} \PY{n}{mass} \PY{o}{=} \PY{l+m+mf}{1000.} \PY{c+c1}{\PYZsh{} kg}
\PY{n}{k} \PY{o}{=} \PY{l+m+mf}{40000.} \PY{c+c1}{\PYZsh{} N/m}
\PY{n}{zeta} \PY{o}{=} \PY{l+m+mf}{0.03} \PY{c+c1}{\PYZsh{} damping ratio}
\PY{n}{fy} \PY{o}{=} \PY{l+m+mf}{2500.} \PY{c+c1}{\PYZsh{} N}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Limit displacement Uy =}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{fy}\PY{o}{*}\PY{l+m+mi}{1000}\PY{o}{/}\PY{n}{k}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mm}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Limit displacement Uy = 62.5 mm
\end{Verbatim}
\subsection{Derived quantities}\label{derived-quantities}
The damping coefficient \(c\) and the first yielding displacement,
\(x_y\).
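Explicitly, matching the cell below, \(c = 2\zeta\sqrt{k\,m}\) and
\(x_y = f_y/k\).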
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}11}]:} \PY{n}{damp} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{zeta}\PY{o}{*}\PY{n}{sqrt}\PY{p}{(}\PY{n}{k}\PY{o}{*}\PY{n}{mass}\PY{p}{)}
\PY{n}{xy} \PY{o}{=} \PY{n}{fy}\PY{o}{/}\PY{n}{k} \PY{c+c1}{\PYZsh{} m}
\end{Verbatim}
\subsection{Load definition}\label{load-definition}
Our load is a half-sine impulse
\[
p(t)=\begin{cases}p_0\sin\left(\frac{\pi t}{t_1}\right)&0\leq t\leq t_1,\\
0&\text{otherwise.}\end{cases}
\]
In our exercise
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}13}]:} \PY{n}{t1} \PY{o}{=} \PY{l+m+mf}{0.3} \PY{c+c1}{\PYZsh{} s}
\PY{n}{w} \PY{o}{=} \PY{n}{pi}\PY{o}{/}\PY{n}{t1} \PY{c+c1}{\PYZsh{} rad/s}
\PY{n}{Po} \PY{o}{=} \PY{l+m+mf}{6000.} \PY{c+c1}{\PYZsh{} N}
\end{Verbatim}
\subsection{The actual computations}\label{the-actual-computations}
\subsubsection{Elastic, initial conditions, get system
functions}\label{elastic-initial-conditions-get-system-functions}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}10}]:} \PY{n}{x0}\PY{o}{=}\PY{l+m+mf}{0.0} \PY{c+c1}{\PYZsh{} m}
\PY{n}{v0}\PY{o}{=}\PY{l+m+mf}{0.0} \PY{c+c1}{\PYZsh{} m/s}
\PY{n}{x\PYZus{}next}\PY{p}{,} \PY{n}{v\PYZus{}next} \PY{o}{=} \PY{n}{resp\PYZus{}elas}\PY{p}{(}\PY{n}{mass}\PY{p}{,}\PY{n}{damp}\PY{p}{,}\PY{n}{k}\PY{p}{,} \PY{l+m+mf}{0.0}\PY{p}{,}\PY{n}{Po}\PY{p}{,}\PY{n}{w}\PY{p}{,} \PY{l+m+mf}{0.0}\PY{p}{,} \PY{n}{x0}\PY{p}{,}\PY{n}{v0}\PY{p}{)}
\end{Verbatim}
\subsubsection{Yielding time is}\label{yielding-time-is}
The time of yielding is found by solving the equation
\(x_\text{next}(t) = x_y\).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}12}]:} \PY{n}{t\PYZus{}yield} \PY{o}{=} \PY{n}{bisect}\PY{p}{(}\PY{n}{x\PYZus{}next}\PY{p}{,} \PY{n}{xy}\PY{p}{,} \PY{l+m+mf}{0.0}\PY{p}{,} \PY{n}{t1}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{t\PYZus{}yield}\PY{p}{,} \PY{n}{x\PYZus{}next}\PY{p}{(}\PY{n}{t\PYZus{}yield}\PY{p}{)}\PY{o}{*}\PY{n}{k}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
0.2032657027244568 2500.0000921877736
\end{Verbatim}
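As an optional cross-check (assuming SciPy is available; this is not part
of the original notebook), a library root finder gives the same yielding
time on the same bracket:
\begin{verbatim}
# optional cross-check of the home-made bisection with SciPy's brentq
from scipy.optimize import brentq
t_check = brentq(lambda t: x_next(t) - xy, 0.0, t1)
print(t_check)   # should agree with t_yield within the tolerances
\end{verbatim}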
\subsubsection{Forced response in elastic range
is}\label{forced-response-in-elastic-range-is}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}14}]:} \PY{n}{t\PYZus{}el} \PY{o}{=} \PY{n}{linspace}\PY{p}{(} \PY{l+m+mf}{0.0}\PY{p}{,} \PY{n}{t\PYZus{}yield}\PY{p}{,} \PY{l+m+mi}{201}\PY{p}{)}
\PY{n}{x\PYZus{}el} \PY{o}{=} \PY{n}{vectorize}\PY{p}{(}\PY{n}{x\PYZus{}next}\PY{p}{)}\PY{p}{(}\PY{n}{t\PYZus{}el}\PY{p}{)}
\PY{n}{v\PYZus{}el} \PY{o}{=} \PY{n}{vectorize}\PY{p}{(}\PY{n}{v\PYZus{}next}\PY{p}{)}\PY{p}{(}\PY{n}{t\PYZus{}el}\PY{p}{)}
\PY{c+c1}{\PYZsh{} \PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}}
\PY{n}{figure}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{plot}\PY{p}{(}\PY{n}{t\PYZus{}el}\PY{p}{,}\PY{n}{x\PYZus{}el}\PY{p}{,}
\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mf}{0.25}\PY{p}{)}\PY{p}{,}\PY{p}{(}\PY{n}{xy}\PY{p}{,}\PY{n}{xy}\PY{p}{)}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}\PYZhy{}b}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{p}{(}\PY{n}{t\PYZus{}yield}\PY{p}{,}\PY{n}{t\PYZus{}yield}\PY{p}{)}\PY{p}{,}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mf}{0.0699}\PY{p}{)}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}\PYZhy{}b}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{title}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}x\PYZus{}}\PY{l+s+si}{\PYZob{}el\PYZcb{}}\PY{l+s+s2}{(t)\PYZdl{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Time, s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Displacement, m}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{c+c1}{\PYZsh{} \PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}}
\PY{n}{figure}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{plot}\PY{p}{(}\PY{n}{t\PYZus{}el}\PY{p}{,}\PY{n}{v\PYZus{}el}\PY{p}{)}
\PY{n}{title}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{dot x\PYZus{}}\PY{l+s+si}{\PYZob{}el\PYZcb{}}\PY{l+s+s2}{(t)\PYZdl{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Time, s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Velocity, m/s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}14}]:} Text(0,0.5,'Velocity, m/s')
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{EP_Exact+Numerical_files/EP_Exact+Numerical_19_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{EP_Exact+Numerical_files/EP_Exact+Numerical_19_2.png}
\end{center}
{ \hspace*{\fill} \\}
\subsubsection{Preparing for EP
response}\label{preparing-for-ep-response}
First, the system state at \(t_y\) is the initial condition for the EP
response
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}15}]:} \PY{n}{x0}\PY{o}{=}\PY{n}{x\PYZus{}next}\PY{p}{(}\PY{n}{t\PYZus{}yield}\PY{p}{)}
\PY{n}{v0}\PY{o}{=}\PY{n}{v\PYZus{}next}\PY{p}{(}\PY{n}{t\PYZus{}yield}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{x0}\PY{p}{,} \PY{n}{v0}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
0.06250000230469434 0.7097432496991541
\end{Verbatim}
Now, the load must be expressed as a function of a \emph{restarted time},
\[
\tau=t-t_y\;\rightarrow\;t=\tau+t_y\;\rightarrow\;\sin(\omega t)=\sin(\omega\tau+\omega t_y)
\]
\[
\rightarrow\;\sin(\omega t)=\sin(\omega\tau)\cos(\omega t_y)+\cos(\omega\tau)\sin(\omega t_y).
\]
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}16}]:} \PY{n}{cS} \PY{o}{=} \PY{n}{Po}\PY{o}{*}\PY{n}{cos}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{t\PYZus{}yield}\PY{p}{)}
\PY{n}{cC} \PY{o}{=} \PY{n}{Po}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{n}{t\PYZus{}yield}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{Po}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{l+m+mf}{0.55}\PY{p}{)}\PY{p}{,} \PY{n}{cS}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{p}{(}\PY{l+m+mf}{0.55}\PY{o}{\PYZhy{}}\PY{n}{t\PYZus{}yield}\PY{p}{)}\PY{p}{)}\PY{o}{+}\PY{n}{cC}\PY{o}{*}\PY{n}{cos}\PY{p}{(}\PY{n}{w}\PY{o}{*}\PY{p}{(}\PY{l+m+mf}{0.55}\PY{o}{\PYZhy{}}\PY{n}{t\PYZus{}yield}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
-2999.999999999998 -2999.999999999996
\end{Verbatim}
Now we generate the displacement and velocity functions for the yielded
phase. Note that the yielded spring still exerts a constant force
\(f_y\) on the mass, and this fact must be (and is) taken into
account.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}17}]:} \PY{n}{x\PYZus{}next}\PY{p}{,} \PY{n}{v\PYZus{}next} \PY{o}{=} \PY{n}{resp\PYZus{}yield}\PY{p}{(}\PY{n}{mass}\PY{p}{,} \PY{n}{damp}\PY{p}{,} \PY{n}{cC}\PY{p}{,}\PY{n}{cS}\PY{p}{,}\PY{n}{w}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{n}{fy}\PY{p}{,} \PY{n}{x0}\PY{p}{,}\PY{n}{v0}\PY{p}{)}
\end{Verbatim}
At this point I must confess that I have already peeked at the numerical
solution, so I know that the velocity at \(t=t_1\) is still greater
than zero and that the current solution is valid in the interval
\(t_y\leq t\leq t_1\).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}18}]:} \PY{n}{t\PYZus{}y1} \PY{o}{=} \PY{n}{linspace}\PY{p}{(}\PY{n}{t\PYZus{}yield}\PY{p}{,} \PY{n}{t1}\PY{p}{,} \PY{l+m+mi}{101}\PY{p}{)}
\PY{n}{x\PYZus{}y1} \PY{o}{=} \PY{n}{vectorize}\PY{p}{(}\PY{n}{x\PYZus{}next}\PY{p}{)}\PY{p}{(}\PY{n}{t\PYZus{}y1}\PY{o}{\PYZhy{}}\PY{n}{t\PYZus{}yield}\PY{p}{)}
\PY{n}{v\PYZus{}y1} \PY{o}{=} \PY{n}{vectorize}\PY{p}{(}\PY{n}{v\PYZus{}next}\PY{p}{)}\PY{p}{(}\PY{n}{t\PYZus{}y1}\PY{o}{\PYZhy{}}\PY{n}{t\PYZus{}yield}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}19}]:} \PY{n}{figure}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}
\PY{n}{plot}\PY{p}{(}\PY{n}{t\PYZus{}el}\PY{p}{,}\PY{n}{x\PYZus{}el}\PY{p}{,} \PY{n}{t\PYZus{}y1}\PY{p}{,}\PY{n}{x\PYZus{}y1}\PY{p}{,}
\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mf}{0.25}\PY{p}{)}\PY{p}{,}\PY{p}{(}\PY{n}{xy}\PY{p}{,}\PY{n}{xy}\PY{p}{)}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}\PYZhy{}b}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{p}{(}\PY{n}{t\PYZus{}yield}\PY{p}{,}\PY{n}{t\PYZus{}yield}\PY{p}{)}\PY{p}{,}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mf}{0.0699}\PY{p}{)}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}\PYZhy{}b}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Time, s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Displacement, m}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{c+c1}{\PYZsh{} \PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}}
\PY{n}{figure}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}
\PY{n}{plot}\PY{p}{(}\PY{n}{t\PYZus{}el}\PY{p}{,} \PY{n}{v\PYZus{}el}\PY{p}{,} \PY{n}{t\PYZus{}y1}\PY{p}{,} \PY{n}{v\PYZus{}y1}\PY{p}{)}
\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Time, s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Velocity, m/s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}19}]:} Text(0,0.5,'Velocity, m/s')
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{EP_Exact+Numerical_files/EP_Exact+Numerical_28_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{EP_Exact+Numerical_files/EP_Exact+Numerical_28_2.png}
\end{center}
{ \hspace*{\fill} \\}
In the next phase \(\dot x> 0\) still holds, so the spring is still
yielding, but now \(p(t)=0\); we must therefore compute two new state
functions, starting as usual from the initial conditions (note that the
yielding force is still applied).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}20}]:} \PY{n}{x0} \PY{o}{=} \PY{n}{x\PYZus{}next}\PY{p}{(}\PY{n}{t1}\PY{o}{\PYZhy{}}\PY{n}{t\PYZus{}yield}\PY{p}{)}
\PY{n}{v0} \PY{o}{=} \PY{n}{v\PYZus{}next}\PY{p}{(}\PY{n}{t1}\PY{o}{\PYZhy{}}\PY{n}{t\PYZus{}yield}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{x0}\PY{p}{,} \PY{n}{v0}\PY{p}{)}
\PY{n}{x\PYZus{}next}\PY{p}{,} \PY{n}{v\PYZus{}next} \PY{o}{=} \PY{n}{resp\PYZus{}yield}\PY{p}{(}\PY{n}{mass}\PY{p}{,} \PY{n}{damp}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{n}{w}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{n}{fy}\PY{p}{,} \PY{n}{x0}\PY{p}{,} \PY{n}{v0}\PY{p}{)}
\PY{n}{t2} \PY{o}{=} \PY{n}{t1} \PY{o}{+} \PY{n}{bisect}\PY{p}{(} \PY{n}{v\PYZus{}next}\PY{p}{,} \PY{l+m+mf}{0.0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mf}{0.3}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{t2}\PY{p}{)}
\PY{n}{t\PYZus{}y2} \PY{o}{=} \PY{n}{linspace}\PY{p}{(} \PY{n}{t1}\PY{p}{,} \PY{n}{t2}\PY{p}{,} \PY{l+m+mi}{101}\PY{p}{)}
\PY{n}{x\PYZus{}y2} \PY{o}{=} \PY{n}{vectorize}\PY{p}{(}\PY{n}{x\PYZus{}next}\PY{p}{)}\PY{p}{(}\PY{n}{t\PYZus{}y2}\PY{o}{\PYZhy{}}\PY{n}{t1}\PY{p}{)}
\PY{n}{v\PYZus{}y2} \PY{o}{=} \PY{n}{vectorize}\PY{p}{(}\PY{n}{v\PYZus{}next}\PY{p}{)}\PY{p}{(}\PY{n}{t\PYZus{}y2}\PY{o}{\PYZhy{}}\PY{n}{t1}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{x\PYZus{}next}\PY{p}{(}\PY{n}{t2}\PY{o}{\PYZhy{}}\PY{n}{t1}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
0.13520933022261195 0.7099968785770994
0.5697131395339965
0.22932407805445898
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}21}]:} \PY{n}{figure}\PY{p}{(}\PY{l+m+mi}{5}\PY{p}{)}
\PY{n}{plot}\PY{p}{(}\PY{n}{t\PYZus{}el}\PY{p}{,}\PY{n}{x\PYZus{}el}\PY{p}{,} \PY{n}{t\PYZus{}y1}\PY{p}{,}\PY{n}{x\PYZus{}y1}\PY{p}{,} \PY{n}{t\PYZus{}y2}\PY{p}{,} \PY{n}{x\PYZus{}y2}\PY{p}{,}
\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mf}{0.25}\PY{p}{)}\PY{p}{,}\PY{p}{(}\PY{n}{xy}\PY{p}{,}\PY{n}{xy}\PY{p}{)}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}\PYZhy{}b}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{p}{(}\PY{n}{t\PYZus{}yield}\PY{p}{,}\PY{n}{t\PYZus{}yield}\PY{p}{)}\PY{p}{,}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mf}{0.0699}\PY{p}{)}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}\PYZhy{}b}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Time, s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Displacement, m}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{c+c1}{\PYZsh{} \PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}}
\PY{n}{figure}\PY{p}{(}\PY{l+m+mi}{6}\PY{p}{)}
\PY{n}{plot}\PY{p}{(}\PY{n}{t\PYZus{}el}\PY{p}{,} \PY{n}{v\PYZus{}el}\PY{p}{,} \PY{n}{t\PYZus{}y1}\PY{p}{,} \PY{n}{v\PYZus{}y1}\PY{p}{,} \PY{n}{t\PYZus{}y2}\PY{p}{,} \PY{n}{v\PYZus{}y2}\PY{p}{)}
\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Time, s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Velocity, m/s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}21}]:} Text(0,0.5,'Velocity, m/s')
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{EP_Exact+Numerical_files/EP_Exact+Numerical_31_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{EP_Exact+Numerical_files/EP_Exact+Numerical_31_2.png}
\end{center}
{ \hspace*{\fill} \\}
\subsubsection{Elastic unloading}\label{elastic-unloading}
The only point worth commenting on is the constant force that we apply to
our system.
The force-displacement relationship for an EP spring is
\(f_\text{E} = k(x-x_\text{pl})= k x - k (x_\text{max}-x_y);\)
moving the constant part of the last expression to the right-hand side
of the equation of equilibrium leaves a constant forcing term.
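Explicitly, since \(k\,x_y=f_y\) and the displacement at \(t_2\) is the
maximum displacement \(x_\text{max}\), the constant forcing term passed to
\texttt{resp\_elas} in the cell below is
\[
F = k\,(x_\text{max}-x_y) = k\,x_\text{max}-f_y .
\]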
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}22}]:} \PY{n}{x0} \PY{o}{=} \PY{n}{x\PYZus{}next}\PY{p}{(}\PY{n}{t2}\PY{o}{\PYZhy{}}\PY{n}{t1}\PY{p}{)} \PY{p}{;} \PY{n}{v0} \PY{o}{=} \PY{l+m+mf}{0.0}
\PY{n}{x\PYZus{}next}\PY{p}{,} \PY{n}{v\PYZus{}next} \PY{o}{=} \PY{n}{resp\PYZus{}elas}\PY{p}{(}\PY{n}{mass}\PY{p}{,}\PY{n}{damp}\PY{p}{,}\PY{n}{k}\PY{p}{,} \PY{l+m+mf}{0.0}\PY{p}{,}\PY{l+m+mf}{0.0}\PY{p}{,}\PY{n}{w}\PY{p}{,} \PY{n}{k}\PY{o}{*}\PY{n}{x0}\PY{o}{\PYZhy{}}\PY{n}{fy}\PY{p}{,} \PY{n}{x0}\PY{p}{,}\PY{n}{v0}\PY{p}{)}
\PY{n}{t\PYZus{}e2} \PY{o}{=} \PY{n}{linspace}\PY{p}{(}\PY{n}{t2}\PY{p}{,}\PY{l+m+mf}{4.0}\PY{p}{,}\PY{l+m+mi}{201}\PY{p}{)}
\PY{n}{x\PYZus{}e2} \PY{o}{=} \PY{n}{vectorize}\PY{p}{(}\PY{n}{x\PYZus{}next}\PY{p}{)}\PY{p}{(}\PY{n}{t\PYZus{}e2}\PY{o}{\PYZhy{}}\PY{n}{t2}\PY{p}{)}
\PY{n}{v\PYZus{}e2} \PY{o}{=} \PY{n}{vectorize}\PY{p}{(}\PY{n}{v\PYZus{}next}\PY{p}{)}\PY{p}{(}\PY{n}{t\PYZus{}e2}\PY{o}{\PYZhy{}}\PY{n}{t2}\PY{p}{)}
\end{Verbatim}
Now we are ready to plot the whole response.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}23}]:} \PY{c+c1}{\PYZsh{} \PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}}
\PY{n}{figure}\PY{p}{(}\PY{l+m+mi}{7}\PY{p}{)} \PY{p}{;}
\PY{n}{plot}\PY{p}{(}\PY{n}{t\PYZus{}el}\PY{p}{,} \PY{n}{x\PYZus{}el}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}b}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{n}{t\PYZus{}y1}\PY{p}{,} \PY{n}{x\PYZus{}y1}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}r}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{n}{t\PYZus{}y2}\PY{p}{,} \PY{n}{x\PYZus{}y2}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}r}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{n}{t\PYZus{}e2}\PY{p}{,} \PY{n}{x\PYZus{}e2}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}b}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{p}{(}\PY{l+m+mf}{0.6}\PY{p}{,} \PY{l+m+mf}{4.0}\PY{p}{)}\PY{p}{,} \PY{p}{(}\PY{n}{x0}\PY{o}{\PYZhy{}}\PY{n}{xy}\PY{p}{,} \PY{n}{x0}\PY{o}{\PYZhy{}}\PY{n}{xy}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}\PYZhy{}y}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{title}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{In blue: elastic phases.}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}}\PY{o}{+}
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{In red: yielding phases.}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}}\PY{o}{+}
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Dashed: permanent plastic deformation.}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Time, s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Displacement, m}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}23}]:} Text(0,0.5,'Displacement, m')
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{EP_Exact+Numerical_files/EP_Exact+Numerical_35_1.png}
\end{center}
{ \hspace*{\fill} \\}
\subsection{Numerical solution}\label{numerical-solution}
First, we need the load function.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}24}]:} \PY{k}{def} \PY{n+nf}{make\PYZus{}p}\PY{p}{(}\PY{n}{p0}\PY{p}{,}\PY{n}{t1}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{}make\PYZus{}p(p0,t1) returns a 1/2 sine impulse load function, p(t)\PYZdq{}\PYZdq{}\PYZdq{}}
\PY{k}{def} \PY{n+nf}{p}\PY{p}{(}\PY{n}{t}\PY{p}{)}\PY{p}{:}
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdq{}}
\PY{k}{if} \PY{n}{t}\PY{o}{\PYZlt{}}\PY{n}{t1}\PY{p}{:}
\PY{k}{return} \PY{n}{p0}\PY{o}{*}\PY{n}{sin}\PY{p}{(}\PY{n}{t}\PY{o}{*}\PY{n}{pi}\PY{o}{/}\PY{n}{t1}\PY{p}{)}
\PY{k}{else}\PY{p}{:}
\PY{k}{return} \PY{l+m+mf}{0.0}
\PY{k}{return} \PY{n}{p}
\end{Verbatim}
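A quick check of the load function (illustration only; \texttt{sin} and
\texttt{pi} come from the \texttt{pylab} import at the top of the
notebook):
\begin{verbatim}
# illustration only: the half-sine impulse peaks at t1/2 and vanishes after t1
p = make_p(6000.0, 0.3)
print(p(0.15))   # approximately 6000.0
print(p(0.50))   # 0.0
\end{verbatim}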
We also need a function that, given the displacement, the velocity and the
total plastic deformation, returns the tangent stiffness and the updated
plastic deformation; this function is defined in terms of the initial
stiffness and the yielding load.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}25}]:} \PY{k}{def} \PY{n+nf}{make\PYZus{}kt}\PY{p}{(}\PY{n}{k}\PY{p}{,}\PY{n}{fy}\PY{p}{)}\PY{p}{:}
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{make\PYZus{}kt(k,fy) returns a function kt(u,v,up) returning kt, up}\PY{l+s+s2}{\PYZdq{}}
\PY{k}{def} \PY{n+nf}{kt}\PY{p}{(}\PY{n}{u}\PY{p}{,}\PY{n}{v}\PY{p}{,}\PY{n}{up}\PY{p}{)}\PY{p}{:}
\PY{n}{f}\PY{o}{=}\PY{n}{k}\PY{o}{*}\PY{p}{(}\PY{n}{u}\PY{o}{\PYZhy{}}\PY{n}{up}\PY{p}{)}
\PY{k}{if} \PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{fy}\PY{p}{)}\PY{o}{\PYZlt{}}\PY{n}{f}\PY{o}{\PYZlt{}}\PY{n}{fy}\PY{p}{:} \PY{k}{return} \PY{n}{k}\PY{p}{,}\PY{n}{up}
\PY{k}{if} \PY{n}{fy}\PY{o}{\PYZlt{}}\PY{o}{=}\PY{n}{f} \PY{o+ow}{and} \PY{n}{v}\PY{o}{\PYZgt{}}\PY{l+m+mi}{0}\PY{p}{:} \PY{n}{up}\PY{o}{=}\PY{n}{u}\PY{o}{\PYZhy{}}\PY{n}{uy}\PY{p}{;}\PY{k}{return} \PY{l+m+mi}{0}\PY{p}{,}\PY{n}{up}
\PY{k}{if} \PY{n}{fy}\PY{o}{\PYZlt{}}\PY{o}{=}\PY{n}{f} \PY{o+ow}{and} \PY{n}{v}\PY{o}{\PYZlt{}}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{:} \PY{n}{up}\PY{o}{=}\PY{n}{u}\PY{o}{\PYZhy{}}\PY{n}{uy}\PY{p}{;}\PY{k}{return} \PY{n}{k}\PY{p}{,}\PY{n}{up}
\PY{k}{if} \PY{n}{f}\PY{o}{\PYZlt{}}\PY{o}{=}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{fy}\PY{p}{)} \PY{o+ow}{and} \PY{n}{v}\PY{o}{\PYZlt{}}\PY{l+m+mi}{0}\PY{p}{:} \PY{n}{up}\PY{o}{=}\PY{n}{u}\PY{o}{+}\PY{n}{uy}\PY{p}{;}\PY{k}{return} \PY{l+m+mi}{0}\PY{p}{,}\PY{n}{up}
\PY{k}{else}\PY{p}{:} \PY{n}{up}\PY{o}{=}\PY{n}{u}\PY{o}{+}\PY{n}{uy}\PY{p}{;}\PY{k}{return} \PY{n}{k}\PY{p}{,}\PY{n}{up}
\PY{k}{return} \PY{n}{kt}
\end{Verbatim}
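A quick illustration of the state function (illustration only; note that
\texttt{kt} reads the global yielding displacement \texttt{uy}, which is
defined in the problem-data cell below as \(f_y/k\)):
\begin{verbatim}
# illustration only: kt() relies on the global uy = fy/k = 0.0625 m
uy = 2500.0/40000.0
kt = make_kt(40000.0, 2500.0)
print(kt(0.010, 0.5, 0.0))   # elastic range:      (40000.0, 0.0)
print(kt(0.100, 0.5, 0.0))   # yielding, loading:  (0, ~0.0375)
\end{verbatim}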
\subsubsection{Problem data}\label{problem-data}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}26}]:} \PY{c+c1}{\PYZsh{} Exercise from lesson 04}
\PY{c+c1}{\PYZsh{} }
\PY{n}{mass} \PY{o}{=} \PY{l+m+mf}{1000.00} \PY{c+c1}{\PYZsh{} kilograms}
\PY{n}{k} \PY{o}{=} \PY{l+m+mf}{40000.00} \PY{c+c1}{\PYZsh{} Newtons per metre}
\PY{n}{zeta} \PY{o}{=} \PY{l+m+mf}{0.03} \PY{c+c1}{\PYZsh{} zeta is the damping ratio}
\PY{n}{fy} \PY{o}{=} \PY{l+m+mf}{2500.00} \PY{c+c1}{\PYZsh{} yelding force, Newtons}
\PY{n}{t1} \PY{o}{=} \PY{l+m+mf}{0.30} \PY{c+c1}{\PYZsh{} half\PYZhy{}sine impulse duration, seconds}
\PY{n}{p0} \PY{o}{=} \PY{l+m+mf}{6000.00} \PY{c+c1}{\PYZsh{} half\PYZhy{}sine impulse peak value, Newtons}
\PY{n}{uy} \PY{o}{=} \PY{n}{fy}\PY{o}{/}\PY{n}{k} \PY{c+c1}{\PYZsh{} yelding displacement, metres}
\end{Verbatim}
\subsubsection{Initialize the algorithm}\label{initialize-the-algorithm}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
compute the functions that return the load and the tangent stiffness
together with the plastic deformation
\item
compute the damping constant
\item
for a given time step, compute all the relevant algorithmic constants,
with \(\gamma=\frac12\) and \(\beta=\frac14\), as written out below
\end{enumerate}
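In formulas, the algorithmic constants computed in the cell below, and the
modified stiffness and load increment used later in the iteration, are
\[
\hat k = k_0 + \frac{\gamma}{\beta}\frac{c}{h} + \frac{m}{\beta h^2},\qquad
a = \frac{m}{\beta h} + \frac{\gamma}{\beta}\,c,\qquad
b = \frac{m}{2\beta} + h\left(\frac{\gamma}{2\beta}-1\right)c,\qquad
\Delta\hat p = \Delta p + a\,v_0 + b\,a_0 .
\]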
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}27}]:} \PY{c+c1}{\PYZsh{} using the above constants, define the loading function}
\PY{n}{p}\PY{o}{=}\PY{n}{make\PYZus{}p}\PY{p}{(}\PY{n}{p0}\PY{p}{,}\PY{n}{t1}\PY{p}{)}
\PY{c+c1}{\PYZsh{} the following function, given the final displacement, the final}
\PY{c+c1}{\PYZsh{} velocity and the initial plastic deformation returns a) the tangent}
\PY{c+c1}{\PYZsh{} stiffness b) the final plastic deformation}
\PY{n}{kt}\PY{o}{=}\PY{n}{make\PYZus{}kt}\PY{p}{(}\PY{n}{k}\PY{p}{,}\PY{n}{fy}\PY{p}{)}
\PY{c+c1}{\PYZsh{} we need the damping coefficient \PYZdq{}c\PYZdq{}, to compute its value from the}
\PY{c+c1}{\PYZsh{} damping ratio we must first compute the undamped natural frequency }
\PY{n}{wn}\PY{o}{=}\PY{n}{sqrt}\PY{p}{(}\PY{n}{k}\PY{o}{/}\PY{n}{mass}\PY{p}{)} \PY{c+c1}{\PYZsh{} natural frequency of the undamped system}
\PY{n}{damp}\PY{o}{=}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{mass}\PY{o}{*}\PY{n}{wn}\PY{o}{*}\PY{n}{zeta} \PY{c+c1}{\PYZsh{} the damping coefficient}
\PY{c+c1}{\PYZsh{} the time step}
\PY{n}{h}\PY{o}{=}\PY{l+m+mf}{0.005}
\PY{c+c1}{\PYZsh{} required duration for the response}
\PY{n}{t\PYZus{}end} \PY{o}{=} \PY{l+m+mf}{4.0}
\PY{c+c1}{\PYZsh{} the number of time steps to arrive at t\PYZus{}end}
\PY{n}{nsteps}\PY{o}{=}\PY{n+nb}{int}\PY{p}{(}\PY{p}{(}\PY{n}{t\PYZus{}end}\PY{o}{+}\PY{n}{h}\PY{o}{/}\PY{l+m+mi}{100}\PY{p}{)}\PY{o}{/}\PY{n}{h}\PY{p}{)}\PY{o}{+}\PY{l+m+mi}{1}
\PY{c+c1}{\PYZsh{} the maximum number of iterations in the Newton\PYZhy{}Raphson procedure}
\PY{n}{maxiters} \PY{o}{=} \PY{l+m+mi}{30}
\PY{c+c1}{\PYZsh{} using the constant acceleration algorithm}
\PY{c+c1}{\PYZsh{} below we define the relevant algorithmic constants}
\PY{n}{gamma}\PY{o}{=}\PY{l+m+mf}{0.5}
\PY{n}{beta}\PY{o}{=}\PY{l+m+mf}{1.}\PY{o}{/}\PY{l+m+mf}{4.}
\PY{n}{gb}\PY{o}{=}\PY{n}{gamma}\PY{o}{/}\PY{n}{beta}
\PY{n}{a}\PY{o}{=}\PY{n}{mass}\PY{o}{/}\PY{p}{(}\PY{n}{beta}\PY{o}{*}\PY{n}{h}\PY{p}{)}\PY{o}{+}\PY{n}{damp}\PY{o}{*}\PY{n}{gb}
\PY{n}{b}\PY{o}{=}\PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{mass}\PY{o}{/}\PY{n}{beta}\PY{o}{+}\PY{n}{h}\PY{o}{*}\PY{n}{damp}\PY{o}{*}\PY{p}{(}\PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{gb}\PY{o}{\PYZhy{}}\PY{l+m+mf}{1.0}\PY{p}{)}
\end{Verbatim}
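The factories \texttt{make\_p} and \texttt{make\_kt} were defined earlier in the notebook. As a reminder, a minimal sketch of the load factory (assuming only the half-sine impulse of peak \texttt{p0} and duration \texttt{t1} given in the problem data) could look like this:
\begin{Verbatim}[commandchars=\\\{\}]
from math import sin, pi

def make_p(p0, t1):
    # illustrative sketch, not the notebook's original definition
    # half-sine impulse: p0*sin(pi*t/t1) for 0 <= t <= t1, zero afterwards
    def p(t):
        return p0*sin(pi*t/t1) if 0.0 <= t <= t1 else 0.0
    return p
\end{Verbatim}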
\subsubsection{System state
initialization}\label{system-state-initialization}
and a bit more: in particular, we create two empty vectors to hold the
computation results
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}28}]:} \PY{n}{t0}\PY{o}{=}\PY{l+m+mf}{0.0}
\PY{n}{u0}\PY{o}{=}\PY{l+m+mf}{0.0}
\PY{n}{up}\PY{o}{=}\PY{l+m+mf}{0.0}
\PY{n}{v0}\PY{o}{=}\PY{l+m+mf}{0.0}
\PY{n}{p0}\PY{o}{=}\PY{n}{p}\PY{p}{(}\PY{n}{t0}\PY{p}{)}
\PY{p}{(}\PY{n}{k0}\PY{p}{,} \PY{n}{up}\PY{p}{)}\PY{o}{=}\PY{n}{kt}\PY{p}{(}\PY{n}{u0}\PY{p}{,}\PY{n}{v0}\PY{p}{,}\PY{n}{up}\PY{p}{)}
\PY{n}{a0}\PY{o}{=}\PY{p}{(}\PY{n}{p0}\PY{o}{\PYZhy{}}\PY{n}{damp}\PY{o}{*}\PY{n}{v0}\PY{o}{\PYZhy{}}\PY{n}{k0}\PY{o}{*}\PY{p}{(}\PY{n}{u0}\PY{o}{\PYZhy{}}\PY{n}{up}\PY{p}{)}\PY{p}{)}\PY{o}{/}\PY{n}{mass}
\PY{n}{time} \PY{o}{=} \PY{p}{[}\PY{p}{]}\PY{p}{;} \PY{n}{disp} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\end{Verbatim}
\subsubsection{Iteration}\label{iteration}
We iterate over time and, whenever there is a state change, we iterate within the single
time step to equilibrate the unbalanced loads.
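In the code below, each equilibrium iteration uses the effective stiffness and displacement increment
\[
\hat k = k_0 + \frac{\gamma}{\beta h}\,c + \frac{m}{\beta h^2},
\qquad
\Delta u = \frac{\Delta R}{\hat k},
\]
where the residual \(\Delta R\) starts from the modified load increment and is reduced at every pass by the part of the incremental force that has already been equilibrated.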
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}29}]:} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{nsteps}\PY{p}{)}\PY{p}{:}
\PY{n}{time}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{t0}\PY{p}{)}\PY{p}{;} \PY{n}{disp}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{u0}\PY{p}{)}
\PY{c+c1}{\PYZsh{} advance time, next external load value, etc}
\PY{n}{t1} \PY{o}{=} \PY{n}{t0} \PY{o}{+} \PY{n}{h}
\PY{n}{p1} \PY{o}{=} \PY{n}{p}\PY{p}{(}\PY{n}{t1}\PY{p}{)}
\PY{n}{Dp} \PY{o}{=} \PY{n}{p1} \PY{o}{\PYZhy{}} \PY{n}{p0}
\PY{n}{Dp\PYZus{}}\PY{o}{=} \PY{n}{Dp} \PY{o}{+} \PY{n}{a}\PY{o}{*}\PY{n}{v0} \PY{o}{+} \PY{n}{b}\PY{o}{*}\PY{n}{a0}
\PY{n}{k\PYZus{}} \PY{o}{=} \PY{n}{k0} \PY{o}{+} \PY{n}{gb}\PY{o}{*}\PY{n}{damp}\PY{o}{/}\PY{n}{h} \PY{o}{+} \PY{n}{mass}\PY{o}{/}\PY{p}{(}\PY{n}{beta}\PY{o}{*}\PY{n}{h}\PY{o}{*}\PY{n}{h}\PY{p}{)}
\PY{c+c1}{\PYZsh{} we prepare the machinery for the modified Newton\PYZhy{}Raphson}
\PY{c+c1}{\PYZsh{} algorithm. if we have no state change in the time step, then the}
\PY{c+c1}{\PYZsh{} N\PYZhy{}R algorithm is equivalent to the standard procedure}
\PY{n}{u\PYZus{}init}\PY{o}{=}\PY{n}{u0}\PY{p}{;} \PY{n}{v\PYZus{}init}\PY{o}{=}\PY{n}{v0} \PY{c+c1}{\PYZsh{} initial state}
\PY{n}{f\PYZus{}spring}\PY{o}{=}\PY{n}{k}\PY{o}{*}\PY{p}{(}\PY{n}{u0}\PY{o}{\PYZhy{}}\PY{n}{up}\PY{p}{)} \PY{c+c1}{\PYZsh{} the force in the spring}
\PY{n}{DR}\PY{o}{=}\PY{n}{Dp\PYZus{}} \PY{c+c1}{\PYZsh{} the unbalanced force, initially equal to the}
\PY{c+c1}{\PYZsh{} external load increment}
\PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{maxiters}\PY{p}{)}\PY{p}{:}
\PY{n}{Du}\PY{o}{=}\PY{n}{DR}\PY{o}{/}\PY{n}{k\PYZus{}} \PY{c+c1}{\PYZsh{} the disp increment according to the initial stiffness}
\PY{n}{u\PYZus{}next} \PY{o}{=} \PY{n}{u\PYZus{}init} \PY{o}{+} \PY{n}{Du}
\PY{n}{v\PYZus{}next} \PY{o}{=} \PY{n}{v\PYZus{}init} \PY{o}{+} \PY{n}{gb}\PY{o}{*}\PY{n}{Du}\PY{o}{/}\PY{n}{h} \PY{o}{\PYZhy{}} \PY{n}{gb}\PY{o}{*}\PY{n}{v\PYZus{}init} \PY{o}{+} \PY{n}{h}\PY{o}{*}\PY{p}{(}\PY{l+m+mf}{1.0}\PY{o}{\PYZhy{}}\PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{gb}\PY{p}{)}\PY{o}{*}\PY{n}{a0}
\PY{c+c1}{\PYZsh{} we are interested in the total plastic elongation}
\PY{n}{oops}\PY{p}{,}\PY{n}{up}\PY{o}{=}\PY{n}{kt}\PY{p}{(}\PY{n}{u\PYZus{}next}\PY{p}{,}\PY{n}{v\PYZus{}next}\PY{p}{,}\PY{n}{up}\PY{p}{)}
\PY{c+c1}{\PYZsh{} because we need the spring force at the end}
\PY{c+c1}{\PYZsh{} of the time step}
\PY{n}{f\PYZus{}spring\PYZus{}next}\PY{o}{=}\PY{n}{k}\PY{o}{*}\PY{p}{(}\PY{n}{u\PYZus{}next}\PY{o}{\PYZhy{}}\PY{n}{up}\PY{p}{)}
\PY{c+c1}{\PYZsh{} so that we can compute the fraction of the}
\PY{c+c1}{\PYZsh{} incremental force that\PYZsq{}s equilibrated at the}
\PY{c+c1}{\PYZsh{} end of the time step}
\PY{n}{df}\PY{o}{=}\PY{n}{f\PYZus{}spring\PYZus{}next}\PY{o}{\PYZhy{}}\PY{n}{f\PYZus{}spring}\PY{o}{+}\PY{p}{(}\PY{n}{k\PYZus{}}\PY{o}{\PYZhy{}}\PY{n}{k0}\PY{p}{)}\PY{o}{*}\PY{n}{Du}
\PY{c+c1}{\PYZsh{} and finally the incremental forces unbalanced}
\PY{c+c1}{\PYZsh{} at the end of the time step}
\PY{n}{DR}\PY{o}{=}\PY{n}{DR}\PY{o}{\PYZhy{}}\PY{n}{df}
\PY{c+c1}{\PYZsh{} finish updating the system state}
\PY{n}{u\PYZus{}init}\PY{o}{=}\PY{n}{u\PYZus{}next}\PY{p}{;} \PY{n}{v\PYZus{}init}\PY{o}{=}\PY{n}{v\PYZus{}next}\PY{p}{;} \PY{n}{f\PYZus{}spring}\PY{o}{=}\PY{n}{f\PYZus{}spring\PYZus{}next}
\PY{c+c1}{\PYZsh{} if the unbalanced load is small enough (the}
\PY{c+c1}{\PYZsh{} criteria used in practical programs are}
\PY{c+c1}{\PYZsh{} energy based) exit the loop \PYZhy{} note that we}
\PY{c+c1}{\PYZsh{} have no plasticization/unloading DR==0 at the}
\PY{c+c1}{\PYZsh{} end of the first iteration}
\PY{k}{if} \PY{n+nb}{abs}\PY{p}{(}\PY{n}{DR}\PY{p}{)}\PY{o}{\PYZlt{}}\PY{n}{fy}\PY{o}{*}\PY{l+m+mf}{1E\PYZhy{}6}\PY{p}{:} \PY{k}{break}
\PY{c+c1}{\PYZsh{} now the load increment is balanced by the spring force and}
\PY{c+c1}{\PYZsh{} increments in inertial and damping forces, we need to compute the}
\PY{c+c1}{\PYZsh{} full state at the end of the time step, and to change all}
\PY{c+c1}{\PYZsh{} denominations to reflect the fact that we are starting a new time step.}
\PY{n}{Du}\PY{o}{=}\PY{n}{u\PYZus{}init}\PY{o}{\PYZhy{}}\PY{n}{u0}
\PY{n}{Dv}\PY{o}{=}\PY{n}{gamma}\PY{o}{*}\PY{n}{Du}\PY{o}{/}\PY{p}{(}\PY{n}{beta}\PY{o}{*}\PY{n}{h}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{gamma}\PY{o}{*}\PY{n}{v0}\PY{o}{/}\PY{n}{beta}\PY{o}{+}\PY{n}{h}\PY{o}{*}\PY{p}{(}\PY{l+m+mf}{1.0}\PY{o}{\PYZhy{}}\PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{gamma}\PY{o}{/}\PY{n}{beta}\PY{p}{)}\PY{o}{*}\PY{n}{a0}
\PY{n}{u1}\PY{o}{=}\PY{n}{u0}\PY{o}{+}\PY{n}{Du} \PY{p}{;} \PY{n}{v1}\PY{o}{=}\PY{n}{v0}\PY{o}{+}\PY{n}{Dv}
\PY{n}{k1}\PY{p}{,}\PY{n}{up}\PY{o}{=}\PY{n}{kt}\PY{p}{(}\PY{n}{u1}\PY{p}{,}\PY{n}{v1}\PY{p}{,}\PY{n}{up}\PY{p}{)}
\PY{n}{a1}\PY{o}{=}\PY{p}{(}\PY{n}{p}\PY{p}{(}\PY{n}{t1}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{damp}\PY{o}{*}\PY{n}{v1}\PY{o}{\PYZhy{}}\PY{n}{k}\PY{o}{*}\PY{p}{(}\PY{n}{u1}\PY{o}{\PYZhy{}}\PY{n}{up}\PY{p}{)}\PY{p}{)}\PY{o}{/}\PY{n}{mass}
\PY{n}{t0}\PY{o}{=}\PY{n}{t1}\PY{p}{;} \PY{n}{v0}\PY{o}{=}\PY{n}{v1}\PY{p}{;} \PY{n}{u0}\PY{o}{=}\PY{n}{u1} \PY{p}{;} \PY{n}{a0}\PY{o}{=}\PY{n}{a1} \PY{p}{;} \PY{n}{k0}\PY{o}{=}\PY{n}{k1} \PY{p}{;} \PY{n}{p0}\PY{o}{=}\PY{n}{p1}
\end{Verbatim}
\subsubsection{Plotting our results}\label{plotting-our-results}
we plot red crosses for the numerically computed response and a
continuous line for the results of the analytical integration of the
equation of motion.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}31}]:} \PY{n}{figure}\PY{p}{(}\PY{l+m+mi}{8}\PY{p}{)}
\PY{n}{plot}\PY{p}{(}\PY{n}{time}\PY{p}{[}\PY{p}{:}\PY{p}{:}\PY{l+m+mi}{4}\PY{p}{]}\PY{p}{,}\PY{n}{disp}\PY{p}{[}\PY{p}{:}\PY{p}{:}\PY{l+m+mi}{4}\PY{p}{]}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{xr}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{plot}\PY{p}{(}\PY{n}{t\PYZus{}el}\PY{p}{,} \PY{n}{x\PYZus{}el}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}b}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{n}{t\PYZus{}y1}\PY{p}{,} \PY{n}{x\PYZus{}y1}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}r}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{n}{t\PYZus{}y2}\PY{p}{,} \PY{n}{x\PYZus{}y2}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}r}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{n}{t\PYZus{}e2}\PY{p}{,} \PY{n}{x\PYZus{}e2}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}b}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{p}{(}\PY{l+m+mf}{0.6}\PY{p}{,} \PY{l+m+mf}{4.0}\PY{p}{)}\PY{p}{,} \PY{p}{(}\PY{n}{x0}\PY{o}{\PYZhy{}}\PY{n}{xy}\PY{p}{,} \PY{n}{x0}\PY{o}{\PYZhy{}}\PY{n}{xy}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}\PYZhy{}y}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{title}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Continuous line: exact response.}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}}\PY{o}{+}
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Red crosses: constant acceleration + MNR.}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Time, s}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Displacement, m}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{;}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{EP_Exact+Numerical_files/EP_Exact+Numerical_49_0.png}
\end{center}
{ \hspace*{\fill} \\}
% Add a bibliography block to the postdoc
\end{document}
| {
"alphanum_fraction": 0.5518339297,
"avg_line_length": 65.136550308,
"ext": "tex",
"hexsha": "da70f37b99334fdaacba823d74378e52ad908f6b",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2021-08-15T18:33:55.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-06-23T12:32:39.000Z",
"max_forks_repo_head_hexsha": "365f16d047fb2dbfc21a2874790f8bef563e0947",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "shishitao/boffi_dynamics",
"max_forks_repo_path": "dati_2020/04/EP_Exact+Numerical.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "365f16d047fb2dbfc21a2874790f8bef563e0947",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "shishitao/boffi_dynamics",
"max_issues_repo_path": "dati_2020/04/EP_Exact+Numerical.tex",
"max_line_length": 492,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "365f16d047fb2dbfc21a2874790f8bef563e0947",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "shishitao/boffi_dynamics",
"max_stars_repo_path": "dati_2020/04/EP_Exact+Numerical.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 27935,
"size": 63443
} |
\documentclass[conference]{IEEEtran}
\IEEEoverridecommandlockouts
% The preceding line is only needed to identify funding in the first footnote. If that is unneeded, please comment it out.
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{algorithmic}
\usepackage{textcomp}
\usepackage{xcolor}
\usepackage{nicefrac} % nicer inline fractions
\usepackage{tensor} % allows fancy indices
\usepackage{siunitx} % easy handling of value + unit (e.g. \SI{10}{\pF})
% \sisetup{} % configure siunitx (e.g. locale = DE)
\sisetup{output-complex-root=\ensuremath{\mathrm{j}}, complex-root-position = before-number} % configures SI format 10 + j5 for complex numbers (instead of 10 + 5i)
\usepackage{listings}
\usepackage{enumerate}
\usepackage{booktabs} % nicer tables (e.g. \toprule)
\usepackage{verbatim} % inline code (\verb||)
\usepackage[european, siunitx, RPvoltages]{circuitikz} % draw circuit diagrams
\usepackage{enumitem}
\setlist[itemize]{label=\rule[0.5ex]{0.6ex}{0.6ex}} % black squares for itemize
\usepackage{graphicx}
\graphicspath{{./figures/}}
\usepackage{csquotes} % removes biber warning
\usepackage[ % ieee style citations (e.g. [1])
backend = biber,
maxbibnames = 99,
autocite = footnote,
style = ieee,
citestyle = numeric-comp,
doi=false, isbn=false
]{biblatex}
\addbibresource{bibliography.bib}
\usepackage[nobiblatex]{xurl} % line breaks in URLs
% last imports
\usepackage[bookmarksopen,colorlinks,citecolor=black,linkcolor=black, urlcolor = black]{hyperref}
% after hyperref!
\usepackage[noabbrev, nameinlink]{cleveref}
% e.g. \cref{label} or \Cref(label) for capital letter
% configure cleveref not to use brackets around equation references
\creflabelformat{equation}{#2\textup{#1}#3} % Equation references without parentheses
\AtBeginEnvironment{appendices}{\crefalias{section}{appendix}} % Appendix referencing (for cref "Appendix A" instead of "Section A")
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\lstdefinestyle{mystyle}{
captionpos=b,
tabsize=4
}
\lstset{style=mystyle}
\pagenumbering{arabic}
\begin{document}
\title{A Little Riscy: Implementation of a Simple Superscalar RISC-V Processor\\
{
\thanks{Submitted \today}
}
}
\date{\today}
\author{\IEEEauthorblockN{Severin Jäger}
[email protected]\\
Mat.Nr. 01613004\\
\and
\IEEEauthorblockN{Max Tamussino}
[email protected]\\
Mat.Nr. 01611815\\
}
\maketitle
\begin{abstract}
\emph{A Little Riscy} is a minimalistic superscalar RISC-V processor featuring parallel execution of ALU and load/store instructions. Due to its four-stage pipeline it is able to handle data hazards efficiently; however, its performance decreases drastically when branching is involved. This shows that superscalar processors need not only parallel execution units, but also compiler support and branch prediction with speculative execution to achieve a high IPC outside artificial mixes of instructions.
\end{abstract}
\section{Introduction}
The open RISC-V instruction set architecture \cite{risc-v} has gained significant popularity both in academia and in industry. Within the \emph{Advanced Computer Architecture} course at TU Wien, a minimalistic superscalar RISC-V processor called \emph{A Little Riscy} has been designed. It implements the RV32I instruction set and allows parallel execution of ALU and load/store instructions with in-order dual issue.
Due to the limited scope of this project, several instruction level parallelism techniques were not implemented. These include branch prediction and speculative execution as well as out-of-order issuing.
This report covers the implemented architecture of the processor in Section~\ref{sec:implementation}, some evaluation results in Section~\ref{sec:eval} and a discussion of the design including potential improvements in Section~\ref{sec:conclusion}.
\begin{figure}[h]
\centering
\includegraphics[width=7cm]{basic_architecture_2.PNG}
\caption{Basic structure of the implemented processor \cite{Hamacher}}
\label{fig:basic_arch}
\end{figure}
\section{Implementation} \label{sec:implementation}
The implemented processor follows the architecture depicted in Figure~\ref{fig:basic_arch}. It features a Harvard architecture with a four-stage pipeline (Fetch, Dispatch, Execute, Write Back) and an ALU and a load/store unit operating in parallel in the execution stage. In order to utilise the parallel execution units, two instructions are fetched, enqueued, dequeued and dispatched per cycle. The whole design was implemented using the Chisel hardware description language.
As the evaluation of the processor was conducted with rather simple single-threaded programs (cf. Section~\ref{sec:eval}), the following instructions were not implemented and are thus interpreted as NOP: FENCE, ECALL, EBREAK. Otherwise, except for some memory limitations (cf. Section~\ref{sec:memory}), \emph{A Little Riscy} is compliant with the RV32I instruction set.
\subsection{Fetch Unit} \label{sec:fetch}
The fetch unit loads two instructions per cycle from the instruction memory and places them into the instruction queue. Additionally, it manages the program counter (PC). In case the queue is full, the whole fetching process is stalled.
Furthermore, it implements all control flow transfer instructions. This implies the following:
\begin{itemize}
\item As there is no ALU in a previous pipeline stage, additional hardware is required for address calculations.
\item To ensure the correct register values are present while calculating addresses, the fetch unit has to wait for all previous instructions to take effect before the calculation. This implies that no instructions can be queued until the whole pipeline is idle.
\end{itemize}
The latter can in principle be mitigated using branch prediction and speculative execution techniques; however, this was outside the scope of this project.
In case a jump or branch instruction is dequeued, the fetch unit performs the following steps.
\begin{itemize}
\item Stall the pipeline (by scheduling NOPs)\footnote{The instruction loaded from the instruction memory is still dispatched in case its PC is lower than the one of the branch instruction.}.
\item Wait for the dispatcher to report an empty pipeline (i.e. all register values are written).
\item Decode the branch instruction, calculate the target and determine whether the branch is taken.
\item Enqueue an ADDI instruction writing the address of the instruction after the jump (only applicable for jumps).
\item Continue with regular fetch operation.
\end{itemize}
\subsection{Instruction Queue}
The instruction queue is a simple register based FIFO (cf. the \verb|RegFifo| in \cite{schoeberl}) with a width of $96$~bits ($2\times32$~bits instruction, $32$~bits PC) and a depth of $4$. This brings the limitation that only the two instructions being enqueued together can be dequeued at the same time. In case of structural hazards (i.e. only one of the loaded instructions can be issued), this leads to a serialised processing.
\subsection{Dispatcher} \label{sec:dispatch}
\begin{figure}
\centering
\includegraphics[width=5cm]{dispatcher.png}
\caption{High-level schematic of the implemented dispatcher}
\label{fig:dispatcher}
\end{figure}
The dispatcher dequeues two instructions and issues them to the execution units (ALU and load/store). It is of utmost importance that structural and data hazards are resolved beforehand. This is done by means of operand forwarding and stalling of conflicting instructions.
Structural hazards are treated as follows: in case two instructions for the same execution unit or with identical destination registers are dequeued, the first one is issued while the second is held in special stall registers (see Figure~\ref{fig:dispatcher}). In the next cycle, no new instructions are dequeued and the stalled instruction is issued.
Afterwards, the instructions are decoded (in parallel for ALU and load/store), and then data hazards can be treated. This is done by comparing the operand register addresses with the destination registers of instructions in the current and the last cycle. In case there is a data hazard related to an instruction dispatched in the last cycle, operand forwarding can be used to resolve this issue without stalling the pipeline\footnote{This is not the case for the standard five-stage RISC pipeline proposed in~\cite{HP}.}. In this case, the dispatcher only has to calculate the forwarding signals shown in Figure~\ref{fig:dispatcher}.
However, in case instructions decoded during the same cycle impose some ordering constraints, the instruction with the higher PC has to be stalled. This is done in the same fashion as the instruction stalling for structural hazards and delays instructions for one cycle.
In principle, all aforementioned hazards can be resolved by implementing a dynamic scheduling approach like Tomasulo's algorithm. However, this is not suitable for the chosen architecture. This is due to the common data bus, which broadcasts the results of all calculations to the reservation stations and the register bank. This bus needs an arbitration policy to prevent two execution units from broadcasting their results simultaneously. However, in the case of \emph{A Little Riscy} each execution unit generates one result per cycle\footnote{This is not the case for floating point units, where individual instructions usually take multiple cycles.}. Thus, the bandwidth of the basic single-word bus is not sufficient, which would lead to a major bottleneck for the whole pipeline.
\subsection{Arithmetic Logic Unit}
This execution unit implements all instructions listed in Section~$2.4$ of the RISC-V Specification~\cite{risc-v}. The LUI and the AUIPC instruction as well as the operand forwarding described in Section~\ref{sec:dispatch} require a dynamic allocation of the operands. This is achieved by the means of multiplexers with select signals created by the dispatcher. This is depicted in Figure~\ref{fig:alu}.
\begin{figure}
\centering
\includegraphics[width=7cm]{alu_mux.png}
\caption{ALU with input multiplexers for operand forwarding}
\label{fig:alu}
\end{figure}
\subsection{Load/Store Unit} \label{sec:load_store}
The load/store unit handles all accesses to the data memory. It implements all respective instructions in the RV32I instruction set, the only limitations are discussed in the next section. Similar to the ALU, its inputs feature multiplexers to allow for the use of forwarded values.
\subsection{Registers} \label{sec:registers}
In compliance with the RISC-V reference manual, the register bank consists of 32 general-purpose registers (x0 is always set to zero) and the program counter. The architecture demands six read ports (two for the ALU, two for the load/store unit and two for the fetch unit) and two write ports (one each for the ALU and the load/store unit).
\subsection{Memory System} \label{sec:memory}
As \emph{A Little Riscy} implements a Harvard architecture, instruction and data memory are separated. The instruction memory is read-only, holds $128$ words (and thus instructions), and is only word-addressable. Due to their small size, both memories can be implemented as registers. In contrast, the $256$-word data memory allows byte, halfword and word addressing; however, it assumes aligned halfwords and words both for reads and writes.
\subsection{Performance Considerations} \label{sec:performance}
From the previous sections, the following statements about the theoretically achievable performance can be made:
\begin{itemize}
	\item As the data memory is implemented as registers, there is no memory delay, so load/store instructions take as long as ALU instructions.
	\item In the absence of pipelining hazards, two instructions per cycle can be executed, thus an IPC of $2$ is the upper bound for this processor.
\item Instructions with data hazards can be executed without any additional delay as long as they are not fetched simultaneously.
\item Branching requires completion of all previous instructions and thus introduces significant idle time.
	\item The instruction queue adds latency to the pipeline; this is unfavourable at the beginning and during branches.
\end{itemize}
Thus, efficient code for \emph{A Little Riscy} consists of interleaved load/store and ALU instructions and minimises the number of control flow transfer instructions. The following section investigates these observations empirically.
\section{Evaluation} \label{sec:eval}
\subsection{Benchmarks}
In order to evaluate the performance of the implemented processor, some benchmarks were conducted. To this end, short C programs were compiled using the \href{https://github.com/riscv/riscv-gnu-toolchain}{RISC-V GNU Compiler Toolchain}. All experiments were conducted in a Chisel Tester based simulation environment.
The first benchmark aims at evaluating the performance of the processor under a specialised ALU load. Listing~\ref{code:alu} shows the respective C code, which was compiled with the \verb|-O3| flag. Afterwards, the instructions loading the values of the global variables used as input\footnote{This is done in order to exploit the compiler's optimiser without losing the actual implementation.} from memory were replaced by simpler load-immediate instructions.
\begin{lstlisting}[language=C, caption=Code for the ALU benchmark, label=code:alu]
int A = 13;
int B = 2;
int C = 11;
int D = 7;
int main() {
int a = A;
int b = B;
int c = C;
int d = D;
int r = ((((a+5-b) << 3) >> b) -
((c | a) - 9)) ^ (d+2);
}
\end{lstlisting}
The code for the second benchmark is depicted in Listing~\ref{code:loop}. It consists of two loops: one sets the elements of an array to \verb|i+1| and the second sums up its elements. This is compiled with the \verb|-O1| flag. The resulting assembly code contains numerous branch instructions due to the implemented loops. Thus, frequent control flow transfers occur.
\begin{figure}
\centering
\includegraphics[width=8.7cm]{vcd_forwarding.png}
\caption{Dispatcher signals for operand forwarding in the ALU benchmark}
\label{fig:vcd_alu}
\end{figure}
\begin{lstlisting}[language=C, caption=Code for the loop benchmark, label=code:loop]
int N = 5;
int main() {
int n = N;
int a[5] = {};
int i = 0;
for(;i<n;i++) {
a[i] = i+1;
}
int r = 0;
for(i=0;i<n;i++) {
r += a[i];
}
return r;
}
\end{lstlisting}
The third benchmark uses the code from Listing~\ref{code:loop} again; however, it was compiled with the \verb|-O3| flag, which resulted in an unrolled loop. Some minor rearrangements have been made in order to ensure efficient utilisation by interleaving ALU and load/store instructions.
\begin{figure*}
\centering
\includegraphics[width=0.65\textwidth]{vcd_loop.png}
\caption{Dispatcher signals in the loop benchmark. No instructions are scheduled during branches, which leads to significant idle time.}
\label{fig:vcd_loop}
\end{figure*}
The benchmarking results are summed up in Table~\ref{tab:results}. Note that the latency of the pipeline is subtracted from the measured number of cycles in order to measure independently of the length of the code. Thus, the maximum theoretically achievable IPC is $2$.
\begin{table} [h]
\caption{Benchmarking results}
\centering
\begin{tabular}{l c c c}
Benchmark & Instructions & Cycles & IPC \\
\midrule
ALU & 13 & 13 & 1 \\
Loop & 52 & 108 & 0.48 \\
Loop Unrolled & 27 & 21 & 1.29
\end{tabular}
\label{tab:results}
\end{table}
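The IPC values in Table~\ref{tab:results} follow directly from these counts; for the loop benchmark, for instance,
\[
\mathrm{IPC} = \frac{52}{108} \approx 0.48 .
\]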
The ALU benchmark, which consists only of ALU instructions, leads to an IPC of $1$. Note that the ALU instructions have to be serialised because there is only one ALU available; the load/store unit is idle during the whole benchmark. Thus, more instructions per cycle are not achievable. This maximum could only be reached thanks to the implementation of operand forwarding, which enables handling of the occurring data hazards without introducing any delay. This is shown exemplarily in Figure~\ref{fig:vcd_alu}.
The simple loop benchmark shows the weak spots of the processor. The IPC drops significantly below $1$; this is mainly due to the negative effects of branching (cf. Section~\ref{sec:fetch}). The waveforms in Figure~\ref{fig:vcd_loop} show one iteration of the first loop and indicate that branch instructions force the pipeline to be idle for significant time spans.
The unrolled loop is able to cope with this problem. As branch instructions were removed, the performance is limited by the distribution between ALU and load/store unit instructions. As long as one load/store and one ALU instruction is issued per cycle, the IPC is $2$; it only drops during phases of sequential ALU or load/store instructions or when destination register conflicts occur. This is shown in Figure~\ref{fig:vcd_loop_unrolled}.
\begin{figure}
\centering
\includegraphics[width=8.7cm]{vcd_loop_unrolled.png}
\caption{Dispatcher signals in the unrolled loop benchmark. The first two instructions are both load/store instructions; they are therefore executed sequentially. Afterwards, ALU and load/store instructions are interleaved and the design is exploited efficiently.}
\label{fig:vcd_loop_unrolled}
\end{figure}
\subsection{FPGA Evaluation}
\begin{table}
\caption{Resource consumption of the different units on the target FPGA}
\centering
\begin{tabular}{l c c c}
Entity & LUTs & Registers & Logic Cells \\
\midrule
ALU & 1929 & 38 & 1967 \\
Dispatcher & 357 & 191 & 548 \\
Fetch Unit & 501 & 55 & 556 \\
Instruction Memory & 121 & 0 & 121 \\
Load/Store Unit & 864 & 69 & 933 \\
Data Memory & 2431 & 8192 & 10623 \\
Instruction Queue & 12 & 342 & 354 \\
Register & 1125 & 1024 & 2149 \\
\midrule
Total & 7340 & 9825 & 17165 \\
\end{tabular}
\label{tab:resources}
\end{table}
An Altera DE2-115 evaluation board featuring an Altera Cyclone IV FPGA was used as hardware platform. The resource usage of the implemented processor is shown in Table~\ref{tab:resources}.
The notably large amount of logic required for the register bank is due to the high number of read and write ports (cf. Section~\ref{sec:registers}).
The maximum available clock frequency for this design is \SI{29.1}{\MHz}. The critical path runs through the dispatcher (between the input and output of the stall registers, see Figure~\ref{fig:dispatcher}). This is hardly surprising, as the dispatcher has a rather complex task. Finer pipelining, i.e.\ splitting the dispatcher into two stages, might be a solution to increase the clock frequency.
\section{Conclusion and Outlook} \label{sec:conclusion}
During this project, a simple superscalar RISC-V processor with one ALU and one load/store unit in parallel was implemented. It is capable of executing up to two instructions per cycle; however, benchmarks have shown that this requires a rather artificial mix of instructions. Branching in particular causes significant performance drops, as neither branch prediction nor speculative execution was implemented.
In general, it can be concluded that instruction level parallelism is an interesting way to increase the performance of a processor; however, there are two requirements that have to be satisfied in order to reach an IPC significantly above $1$:
\begin{itemize}
\item The compiler has to optimise with the architecture in mind and make sure that instructions are ordered in a suitable way for superscalar machines. This mainly involves suitable loop unrolling and interleaving of different instruction types.
	\item Going superscalar does not only require parallel execution units, but an all-in approach covering several modern ILP techniques (like the ones presented in~\cite{HP}). This includes branch prediction and speculative execution in order to reduce the branch penalty, as well as out-of-order issuing. However, these come at the cost of additional transistors as well as significantly increased design complexity.
\end{itemize}
\printbibliography
\end{document} | {
"alphanum_fraction": 0.7825396825,
"avg_line_length": 64.8231511254,
"ext": "tex",
"hexsha": "6b7d8661d8a384706210a832fbc534846ae95027",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cf36eba4964b3be94d9740371342182249c3a9f7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "sevjaeg/a-little-riscy",
"max_forks_repo_path": "doc/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cf36eba4964b3be94d9740371342182249c3a9f7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "sevjaeg/a-little-riscy",
"max_issues_repo_path": "doc/report.tex",
"max_line_length": 795,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "cf36eba4964b3be94d9740371342182249c3a9f7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "sevjaeg/a-little-riscy",
"max_stars_repo_path": "doc/report.tex",
"max_stars_repo_stars_event_max_datetime": "2021-01-31T21:57:54.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-31T21:57:54.000Z",
"num_tokens": 4814,
"size": 20160
} |
\documentclass[letterpaper]{article}
\usepackage[utf8]{inputenc}
\usepackage{fullpage}
\usepackage{nopageno}
\usepackage{amsmath}
\usepackage{amssymb}
\allowdisplaybreaks
\newcommand{\abs}[1]{\left\lvert #1 \right\rvert}
\begin{document}
\title{Notes}
\date{January 21, 2015}
\maketitle
$m*(A\cup B)\le m*A+m*B$
\subsubsection*{we want}
if $A_i\cap A_j=\emptyset$ for $i\ne j$
$m*(\bigcup_{i=1}^\infty A_i)=\sum\limits_{i=1}^\infty{m*(A_i)}$
we don't always get this (sum of parts bigger than the whole)
this is a huge problem
\subsubsection*{measureable}
a set $E$ is measurable if for any set $A$, $m*(A)=m*(A\cap E)+m*(A\cap E^C)$
if we can chop $A$ in half using $E$ and the size of the pieces together is the same as the size of the pieces apart, then $E$ is measurable. This gives us what we want above.
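as a quick sanity check of the definition, take $E=\emptyset$: for any $A$, $m*(A\cap\emptyset)+m*(A\cap\emptyset^C)=m*(\emptyset)+m*(A)=0+m*(A)=m*(A)$, so $\emptyset$ is measurable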
\subsection*{two facts}
\begin{enumerate}
\item
if $E$ is measurable, then so is $E^C$ -- definition is symmetric with respect to complements
\item
$E$ is measurable if and only if $m*A\ge m*(A\cap E)+m*(A\cap E^C)$ for any $A$
\subsubsection*{proof}
$A=(A\cap E)\cup(A\cap E^C)\to m*(A)\le m*(A\cap E)+m*(A\cap E^C)$ (less than comes free, only need to show the greater than part)
\end{enumerate}
$E$ measurable means that $m*A=m*(A\cap E)+m*(A\cap E^C)$ for any $A$
\begin{enumerate}
\item
if $m*(E)=0$ then $E$ is measurable ($\emptyset$, singletons, Cantor set)
\subsubsection*{proof}
let $A$ be a set. $A\cap E\subseteq E\to m*(A\cap E)\le m*E=0 $ and $A\cap E^C\subset A\to m*(A\cap E^C)\le m*(A)$ and so $m*(A\cap E)+m*(A\cap E^C)\le 0+m*(A)$
\end{enumerate}
\section*{intersections and unions}
\subsection*{proposition}
if $E_1,E_2$ are measurable, then so is $E_1\cup E_2$.
\subsubsection*{proof}
$m*(A)=m*(A\cap E_1)+m*(A\cap E_1^C)$
$m*(A)=m*(A\cap E_2)+m*(A\cap E_2^C)$
we want
$m*(A)\ge m*(A\cap (E_1\cup E_2))+m*(A\cap (E_1\cup E_2)^C)$
\begin{align*}
A\cap(E_1\cup E_2)&=(A\cap E_1)\cup (A\cap E_2\cap E_1^C)\\
\to m*(A\cap(E_1\cup E_2))&\le m*(A\cap E_1)+m*(A\cap E_2\cap E_1^C)\\
A\cap(E_1\cup E_2)^C&=A\cap E_1^C\cap E_2^C\\
m*(A\cap E_1^C)&=m*((A\cap E_1^C)\cap E_2)+m*((A\cap E_1^C)\cap E_2^C)\\
\to m*(A\cap(E_1\cup E_2))+m*(A\cap(E_1\cup E_2)^C)&\le m*(A\cap E_1)+m*(A\cap E_2\cap E_1^C)+m*((A\cap E_1^C)\cap E_2^C)\\
&=m*(A\cap E_1)+m*(A\cap E_1^C)=m*(A)
\end{align*}
\subsection*{proposition}
if $E_1, E_2$ are measurable, so is $E_1\cap E_2$.
$E_1\cap E_2=(E_1^C)^C\cap(E_2^C)^C$
De Morgan's law says $E_1\cap E_2=(E_1^C\cup E_2^C)^C$; since $E_1^C$ and $E_2^C$ are measurable, so is $E_1^C\cup E_2^C$, and hence so is its complement $E_1\cap E_2$
\subsection*{proposition}
if $A$ is a set and $E_1, E_2$ are measurable and $E_1\cap E_2=\emptyset$ then $m*(A\cap(E_1\cup E_2))=m*(A\cap E_1)+m*(A\cap E_2)$
\subsubsection*{proof}
$m*(A)=m*(A\cap E_1)+m*(A\cap E_1^C)$
$m*(A\cap(E_1\cup E_2))=m*(A\cap(E_1\cup E_2)\cap E_1)+m*(A\cap(E_1\cup E_2)\cap E_1^C)=m*(A\cap E_1)+m*(A\cap E_2)$
\end{document}
| {
"alphanum_fraction": 0.6572934974,
"avg_line_length": 32.7011494253,
"ext": "tex",
"hexsha": "fc86d70a84659db1bc34a74bb837dea6d0b289ca",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "ylixir/school",
"max_forks_repo_path": "ra2/ra2-notes-2015-01-21.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "ylixir/school",
"max_issues_repo_path": "ra2/ra2-notes-2015-01-21.tex",
"max_line_length": 164,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "ylixir/school",
"max_stars_repo_path": "ra2/ra2-notes-2015-01-21.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1279,
"size": 2845
} |
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{book}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={An analysis of Cryptocurrency},
pdfauthor={Timothy McCormack, Carolyn Herrera, Wei Chun Tien,},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage{color}
\usepackage{fancyvrb}
\newcommand{\VerbBar}{|}
\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\usepackage{framed}
\definecolor{shadecolor}{RGB}{248,248,248}
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\BuiltInTok}[1]{#1}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}}
\newcommand{\ExtensionTok}[1]{#1}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ImportTok}[1]{#1}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\NormalTok}[1]{#1}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\RegionMarkerTok}[1]{#1}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\usepackage{longtable,booktabs}
% Correct order of tables after \paragraph or \subparagraph
\usepackage{etoolbox}
\makeatletter
\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
\makeatother
% Allow footnotes in longtable head/foot
\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
\makesavenoteenv{longtable}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
\thm@preskip=8pt plus 2pt minus 4pt
\thm@postskip=\thm@preskip
}
\makeatother
\usepackage{booktabs}
\usepackage{longtable}
\usepackage{array}
\usepackage{multirow}
\usepackage{wrapfig}
\usepackage{float}
\usepackage{colortbl}
\usepackage{pdflscape}
\usepackage{tabu}
\usepackage{threeparttable}
\usepackage{threeparttablex}
\usepackage[normalem]{ulem}
\usepackage{makecell}
\usepackage[]{natbib}
\bibliographystyle{apalike}
\title{An analysis of Cryptocurrency}
\author{Timothy McCormack, Carolyn Herrera, Wei Chun Tien,}
\date{2020-11-09}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\hypertarget{preface}{%
\chapter*{Preface}\label{preface}}
\addcontentsline{toc}{chapter}{Preface}
\hypertarget{intro}{%
\chapter{Introduction}\label{intro}}
\hypertarget{data-import-and-clean}{%
\chapter{Data Import and Clean}\label{data-import-and-clean}}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(tidyverse)}
\KeywordTok{library}\NormalTok{(tidytext)}
\KeywordTok{library}\NormalTok{(textclean)}
\KeywordTok{library}\NormalTok{(dplyr)}
\KeywordTok{library}\NormalTok{(stringr)}
\KeywordTok{library}\NormalTok{(knitr)}
\KeywordTok{library}\NormalTok{(wordcloud)}
\KeywordTok{library}\NormalTok{(kableExtra)}
\KeywordTok{library}\NormalTok{(DT)}
\KeywordTok{library}\NormalTok{(tidygraph)}
\KeywordTok{library}\NormalTok{(ggraph)}
\KeywordTok{library}\NormalTok{(tm)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{knitr}\OperatorTok{::}\NormalTok{opts_chunk}\OperatorTok{$}\KeywordTok{set}\NormalTok{(}\DataTypeTok{message =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{warning =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{echo =} \OtherTok{TRUE}\NormalTok{)}
\CommentTok{# set to TRUE to run this on only one reference file}
\NormalTok{debugging <-}\StringTok{ }\OtherTok{FALSE}
\CommentTok{# this will expect the file or files to be in a subdirectory with the following name}
\NormalTok{refsource <-}\StringTok{ "MungingProj2"}
\NormalTok{dataDir <-}\StringTok{ "Proj2Data"}
\NormalTok{workingDir <-}\StringTok{ }\NormalTok{refsource}
\CommentTok{# prefixes for all File reads and writes}
\CommentTok{# titles for tables}
\NormalTok{titletext <-}\StringTok{ "RedditCrypto"}
\NormalTok{srs =}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"CryptoCurrency"}\NormalTok{,}\StringTok{"CryptoMarkets"}\NormalTok{)}
\StringTok{`}\DataTypeTok{%notin%}\StringTok{`}\NormalTok{ <-}\StringTok{ }\KeywordTok{Negate}\NormalTok{(}\StringTok{`}\DataTypeTok{%in%}\StringTok{`}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\hypertarget{import-data}{%
\section{Import Data}\label{import-data}}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{subm_fnames <-}\StringTok{ }\KeywordTok{list.files}\NormalTok{(dataDir, }\DataTypeTok{pattern =} \StringTok{"*_submissions.csv"}\NormalTok{, }\DataTypeTok{full.names =} \OtherTok{TRUE}\NormalTok{)}
\NormalTok{comm_fnames <-}\StringTok{ }\KeywordTok{list.files}\NormalTok{(dataDir, }\DataTypeTok{pattern =} \StringTok{"*_comments.csv"}\NormalTok{, }\DataTypeTok{full.names =} \OtherTok{TRUE}\NormalTok{)}
\NormalTok{subr_fnames <-}\StringTok{ }\KeywordTok{list.files}\NormalTok{(dataDir, }\DataTypeTok{pattern =} \StringTok{"*_subreddit.csv"}\NormalTok{, }\DataTypeTok{full.names =} \OtherTok{TRUE}\NormalTok{)}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\KeywordTok{length}\NormalTok{(subm_fnames)) }
\KeywordTok{assign}\NormalTok{(srs[i], }\KeywordTok{read.csv}\NormalTok{(subm_fnames[i]))}
\NormalTok{SubmData<-}\StringTok{ }\KeywordTok{rbind}\NormalTok{(CryptoCurrency, CryptoMarkets)}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\KeywordTok{length}\NormalTok{(comm_fnames)) }
\KeywordTok{assign}\NormalTok{(srs[i], }\KeywordTok{read.csv}\NormalTok{(comm_fnames[i]))}
\NormalTok{CommData<-}\StringTok{ }\KeywordTok{rbind}\NormalTok{(CryptoCurrency, CryptoMarkets)}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\KeywordTok{length}\NormalTok{(subr_fnames)) }
\KeywordTok{assign}\NormalTok{(srs[i], }\KeywordTok{read.csv}\NormalTok{(subr_fnames[i]))}
\NormalTok{SubrData<-}\StringTok{ }\KeywordTok{rbind}\NormalTok{(CryptoCurrency, CryptoMarkets)}
\end{Highlighting}
\end{Shaded}
\hypertarget{fix-names}{%
\subsection{Fix Names}\label{fix-names}}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{srs <-}\StringTok{ }\KeywordTok{unique}\NormalTok{(SubmData}\OperatorTok{$}\NormalTok{subreddit)}
\NormalTok{SubmData <-}\StringTok{ }\NormalTok{SubmData }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{mutate}\NormalTok{(}
\DataTypeTok{subreddit =} \KeywordTok{case_when}\NormalTok{(}
\NormalTok{ .}\OperatorTok{$}\NormalTok{subreddit }\OperatorTok{==}\StringTok{ }\NormalTok{srs[}\DecValTok{1}\NormalTok{] }\OperatorTok{~}\StringTok{ "r/CryptoCurrency"}\NormalTok{,}
\NormalTok{ .}\OperatorTok{$}\NormalTok{subreddit }\OperatorTok{==}\StringTok{ }\NormalTok{srs[}\DecValTok{2}\NormalTok{] }\OperatorTok{~}\StringTok{ "r/CryptoMarkets"}\NormalTok{)}
\NormalTok{ ) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\hypertarget{data-collection}{%
\section{Data Collection}\label{data-collection}}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{submnums <-}\StringTok{ }\KeywordTok{table}\NormalTok{(SubmData}\OperatorTok{$}\NormalTok{subreddit)}
\NormalTok{SubmNums <-}\StringTok{ }\KeywordTok{as.data.frame}\NormalTok{(submnums, }\DataTypeTok{.name_repair =} \StringTok{"minimal"}\NormalTok{)}
\KeywordTok{colnames}\NormalTok{(SubmNums)[}\DecValTok{1}\NormalTok{] <-}\StringTok{ "Subreddit"}
\KeywordTok{ggplot}\NormalTok{(SubmNums, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ Subreddit, }\DataTypeTok{y =}\NormalTok{ Freq, }\DataTypeTok{fill =}\NormalTok{ Subreddit)) }\OperatorTok{+}\StringTok{ }\KeywordTok{geom_bar}\NormalTok{(}\DataTypeTok{stat =} \StringTok{"identity"}\NormalTok{) }\OperatorTok{+}\StringTok{ }\KeywordTok{scale_y_continuous}\NormalTok{(}\DataTypeTok{name=}\StringTok{"# of Submissions by Subreddit"}\NormalTok{, }\DataTypeTok{labels =}\NormalTok{ scales}\OperatorTok{::}\NormalTok{comma)}
\end{Highlighting}
\end{Shaded}
\includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-5-1.pdf}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{c <-}\StringTok{ }\NormalTok{CommData}
\NormalTok{commnums <-}\StringTok{ }\KeywordTok{table}\NormalTok{(c}\OperatorTok{$}\NormalTok{subreddit)}
\NormalTok{CommNums <-}\StringTok{ }\KeywordTok{as.data.frame}\NormalTok{(commnums, }\DataTypeTok{.name_repair =} \StringTok{"minimal"}\NormalTok{)}
\KeywordTok{colnames}\NormalTok{(CommNums)[}\DecValTok{1}\NormalTok{] <-}\StringTok{ "Subreddit"}
\KeywordTok{ggplot}\NormalTok{(CommNums, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ Subreddit, }\DataTypeTok{y =}\NormalTok{ Freq, }\DataTypeTok{fill =}\NormalTok{ Subreddit)) }\OperatorTok{+}\StringTok{ }\KeywordTok{geom_bar}\NormalTok{(}\DataTypeTok{stat =} \StringTok{"identity"}\NormalTok{) }\OperatorTok{+}\StringTok{ }\KeywordTok{scale_y_continuous}\NormalTok{(}\DataTypeTok{name=}\StringTok{"# of Comments by Subreddit (from 300 posts each)"}\NormalTok{, }\DataTypeTok{labels =}\NormalTok{ scales}\OperatorTok{::}\NormalTok{comma)}
\end{Highlighting}
\end{Shaded}
\includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-6-1.pdf}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{kable}\NormalTok{(SubrData) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{kable_styling}\NormalTok{(}\StringTok{"striped"}\NormalTok{, }\DataTypeTok{full_width =}\NormalTok{ F)}
\end{Highlighting}
\end{Shaded}
\begin{table}[H]
\centering
\begin{tabular}{r|l|r|l|l}
\hline
X & title & subscribers & created & public\_description\\
\hline
0 & Cryptocurrency News \& Discussion & 1117855 & 2013-03-11 17:51:50 & The official source for CryptoCurrency News, Discussion \& Analysis.\\
\hline
0 & r/CryptoMarkets & 236835 & 2013-11-12 18:50:17 & FOREX community for cryptocurrencies.
Tags: mt gox bitcoin, long term potential, open source exchange, low inflation rate, demand and price, technical analysis, fundamentals, Bitcoin, Ethereum, Litecoin, Monero, Dash, Augur, token, volume, oscillator, RSI, stochastic, trend, sentiment, strategy, scam, coin, coinmarketcap, altcoin, Peercoin, script, blockchain, PoW, PoS, Proof of Work,\\
\hline
\end{tabular}
\end{table}
\emph{Submissions} from each subreddit:
\emph{Comments} in each subreddit:
\hypertarget{clean}{%
\section{Clean}\label{clean}}
To ensure that our data accurately represent activity within each community, we want each observation to be a unique instance of engagement. Reposts tend to be common on reddit, so using a \texttt{distinct()} function on the text column will remove any duplicate posts. Additionally, any posts that are removed from the subreddits return an \texttt{NA} value in the \texttt{user} column, so we can remove deleted posts by keeping only rows where \texttt{user} is not \texttt{NA}.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#Clean}
\KeywordTok{Encoding}\NormalTok{(SubmData}\OperatorTok{$}\NormalTok{text) <-}\StringTok{ "UTF-8"}
\NormalTok{Submissions <-}\StringTok{ }\NormalTok{SubmData }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(user) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{filter}\NormalTok{(}\OperatorTok{!}\KeywordTok{is.na}\NormalTok{(user)) }\OperatorTok{%>%}\StringTok{ }\CommentTok{# Take out deleted comments}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{() }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{distinct}\NormalTok{(text, }\DataTypeTok{.keep_all =} \OtherTok{TRUE}\NormalTok{) }\CommentTok{#remove duplicate submissions}
\KeywordTok{paste}\NormalTok{(}\StringTok{"Removed"}\NormalTok{, }\KeywordTok{nrow}\NormalTok{(SubmData) }\OperatorTok{-}\StringTok{ }\KeywordTok{nrow}\NormalTok{(Submissions),}\StringTok{"submissions."}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "Removed 14 submissions."
\end{verbatim}
\hypertarget{submissions}{%
\subsection{Submissions}\label{submissions}}
The analysis of language begins by quantifying the presence of words in each submission, through the process of \emph{tokenization}. \emph{Tokens} are discrete strings of words or characters that can be isolated as n-grams, with \emph{n} pertaining to the number of words in each token. Tokens are pulled from the body of text that is most informative for the purposes of analysis. For submissions, the informative text is the \emph{title} of the submission, which contains information on topics; whereas for comments, the informative text is the \texttt{comment} itself.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{data}\NormalTok{(stop_words)}
\NormalTok{SubmissionTkns <-}\StringTok{ }\NormalTok{Submissions }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(subreddit) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{unnest_tokens}\NormalTok{(word, text) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{()}
\CommentTok{#G2Subm <- Submissions %>%}
\CommentTok{# group_by(subreddit) %>%}
\CommentTok{# unnest_tokens(bigram, text, token = "ngrams", n = 2, n_min = #2) %>% #repeat above as bigram}
\CommentTok{# ungroup()}
\end{Highlighting}
\end{Shaded}
As part of the tokenization process for submissions, we remove stop words (e.g.~``and'', ``a'', ``the'') because we are interested in using our tokens to identify prevalent topics of discussion and attitudes in the two cryptocurrency subreddits. In contrast, stop words are not removed from text data in comments because they are useful in the analysis of linguistic sentiment and behavior.
\hypertarget{onegrams-clean}{%
\subsubsection{1-grams (Clean)}\label{onegrams-clean}}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#Create object for numbers, so that we can remove them from the data}
\NormalTok{nums <-}\StringTok{ }\NormalTok{SubmissionTkns }\OperatorTok{%>%}\StringTok{ }\KeywordTok{filter}\NormalTok{(}\KeywordTok{str_detect}\NormalTok{(word, }\StringTok{"^[0-9]"}\NormalTok{)) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{select}\NormalTok{(word) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{unique}\NormalTok{() }\CommentTok{#Source: https://richpauloo.github.io/2017-12-29-Using-tidytext-to-make-word-clouds/}
\NormalTok{SubmissionTkns <-}\StringTok{ }\NormalTok{SubmissionTkns }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{anti_join}\NormalTok{(stop_words, }\DataTypeTok{by =} \StringTok{"word"}\NormalTok{) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{anti_join}\NormalTok{(nums, }\DataTypeTok{by =} \StringTok{"word"}\NormalTok{) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{filter}\NormalTok{(}\OperatorTok{!}\KeywordTok{grepl}\NormalTok{(}\StringTok{"_"}\NormalTok{, .}\OperatorTok{$}\NormalTok{word))}
\NormalTok{G1Subm <-}\StringTok{ }\NormalTok{SubmissionTkns}
\end{Highlighting}
\end{Shaded}
\hypertarget{grams-clean}{%
\subsubsection{2-grams (Clean)}\label{grams-clean}}
The 2-gram code is not used in the analysis due to an unresolved cleaning error.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#G2Subm_separated <- G2Subm %>% #Separate bigrams to remove stopwords and transitional phrases}
\CommentTok{# separate(bigram, c("word1", "word2"), sep = " ") #run boolean functions for cleaning on `word1` and `word2`}
\CommentTok{#G2Subm_filtered <- G2Subm_separated %>% }
\CommentTok{# filter(!word1 %in% stop_words$word) %>% #remove stop words}
\CommentTok{# filter(!word2 %in% stop_words$word) %>%}
\CommentTok{# filter(!word1 == "NA") %>% #remove anomylous strings}
\CommentTok{# filter(!word2 == "NA") %>%}
\CommentTok{# filter(!word1 %in% nums) %>% #remove numbers}
\CommentTok{# filter(!word2 %in% nums)}
\CommentTok{#G2Subm <- G2Subm_filtered %>% }
\CommentTok{# unite(bigram, word1, word2, sep = " ")#reunify cleaned word columns}
\CommentTok{#G2Subm <- G2Subm[!grepl("_", G2Subm$bigram),]}
\CommentTok{#rm(G2Subm_separated)}
\CommentTok{#rm(G2Subm_filtered)}
\end{Highlighting}
\end{Shaded}
\hypertarget{count}{%
\subsection{Count}\label{count}}
Following tokenization, the presence of each n-gram is tallied within each subreddit. The tallies are then sorted to show the most frequently occurring words in each category.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{G1SubmCount <-}\StringTok{ }\NormalTok{G1Subm }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(subreddit) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#group words by affiliation label}
\StringTok{ }\KeywordTok{count}\NormalTok{(word, }\DataTypeTok{sort =} \OtherTok{FALSE}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#count and create column 'n'}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{()}
\CommentTok{#G2SubmCount <- G2Subm %>% #repeat for bigrams}
\CommentTok{# group_by(subreddit) %>%}
\CommentTok{# count(bigram, sort = FALSE) %>%}
\CommentTok{# ungroup() }
\KeywordTok{rm}\NormalTok{(G1Subm)}
\CommentTok{#rm(G2Subm)}
\end{Highlighting}
\end{Shaded}
\hypertarget{comments}{%
\section{Comments}\label{comments}}
The cleaning and tokenization process for comments is similar to that of submissions; however, a few extra steps must be taken considering the casual and conversational nature of reddit comments.
\textbf{NEED TO DO:} Additionally, the number of comments on each submission varies, so it will be important to standardize word counts across subreddits.
\hypertarget{clean-1}{%
\subsection{Clean}\label{clean-1}}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{Encoding}\NormalTok{(CommData}\OperatorTok{$}\NormalTok{text) <-}\StringTok{ "UTF-8"}
\NormalTok{Comments <-}\StringTok{ }\NormalTok{CommData }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(user) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{filter}\NormalTok{(}\OperatorTok{!}\KeywordTok{is.na}\NormalTok{(user)) }\OperatorTok{%>%}\StringTok{ }\CommentTok{# Take out deleted comments}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{() }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{distinct}\NormalTok{(text, }\DataTypeTok{.keep_all =} \OtherTok{TRUE}\NormalTok{) }\CommentTok{#remove duplicate comments}
\KeywordTok{paste}\NormalTok{(}\StringTok{"Removed"}\NormalTok{, }\KeywordTok{nrow}\NormalTok{(CommData) }\OperatorTok{-}\StringTok{ }\KeywordTok{nrow}\NormalTok{(Comments),}\StringTok{"comments"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "Removed 12771 comments"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{url_regex <-}\StringTok{ "http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*}\CharTok{\textbackslash{}\textbackslash{}}\StringTok{(}\CharTok{\textbackslash{}\textbackslash{}}\StringTok{),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+"}
\NormalTok{Comments}\OperatorTok{$}\NormalTok{text <-stringi}\OperatorTok{::}\KeywordTok{stri_trans_general}\NormalTok{(Comments}\OperatorTok{$}\NormalTok{text, }\StringTok{"latin-ascii"}\NormalTok{) }\CommentTok{#converts text encoding, otherwise the tokenizer won't retain contractions}
\NormalTok{Comments}\OperatorTok{$}\NormalTok{text <-}\StringTok{ }\KeywordTok{str_remove_all}\NormalTok{(Comments}\OperatorTok{$}\NormalTok{text, url_regex)}
\end{Highlighting}
\end{Shaded}
\hypertarget{tokenize}{%
\subsection{Tokenize}\label{tokenize}}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#tokenize}
\NormalTok{G1Comm <-}\StringTok{ }\NormalTok{Comments }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{unnest_tokens}\NormalTok{(word, text) }\CommentTok{# %>%}
\CommentTok{#ungroup()}
\CommentTok{#G2Comm <- Comments %>%}
\CommentTok{# #group_by(subreddit) %>%}
\CommentTok{# unnest_tokens(bigram, text, token = #"ngrams", n = 2, n_min = 2) #%>% #repeat above as bigram}
\CommentTok{#ungroup()}
\end{Highlighting}
\end{Shaded}
\hypertarget{grams}{%
\subsubsection{1-grams}\label{grams}}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#Create object for numbers, so that we can remove them from the data}
\NormalTok{nums <-}\StringTok{ }\NormalTok{G1Comm }\OperatorTok{%>%}\StringTok{ }\KeywordTok{filter}\NormalTok{(}\KeywordTok{str_detect}\NormalTok{(word, }\StringTok{"^[0-9]"}\NormalTok{)) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{select}\NormalTok{(word) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{unique}\NormalTok{() }\CommentTok{#Source: https://richpauloo.github.io/2017-12-29-Using-tidytext-to-make-word-clouds/}
\NormalTok{Bots <-}\StringTok{ }\KeywordTok{filter}\NormalTok{(G1Comm, word }\OperatorTok{==}\StringTok{ "bot"}\NormalTok{)}
\KeywordTok{data}\NormalTok{(stop_words)}
\NormalTok{CommTkns <-}\StringTok{ }\NormalTok{G1Comm }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{anti_join}\NormalTok{(stop_words) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#take out stop words (i.e. 'a', 'an', 'of', 'the')}
\StringTok{ }\KeywordTok{anti_join}\NormalTok{(nums, }\DataTypeTok{by =} \StringTok{"word"}\NormalTok{) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{filter}\NormalTok{(user }\OperatorTok{%notin%}\StringTok{ }\NormalTok{Bots}\OperatorTok{$}\NormalTok{user) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{filter}\NormalTok{(}\OperatorTok{!}\KeywordTok{grepl}\NormalTok{(}\StringTok{"_"}\NormalTok{, .}\OperatorTok{$}\NormalTok{word))}
\CommentTok{#ungroup()}
\NormalTok{G1Comm <-}\StringTok{ }\NormalTok{CommTkns}
\KeywordTok{paste}\NormalTok{(}\StringTok{"Removed"}\NormalTok{, }\KeywordTok{nrow}\NormalTok{(CommData) }\OperatorTok{-}\StringTok{ }\KeywordTok{nrow}\NormalTok{(Comments),}\StringTok{"tokens"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "Removed 12771 tokens"
\end{verbatim}
\hypertarget{grams-clean-1}{%
\subsubsection{2-grams (Clean)}\label{grams-clean-1}}
The 2-gram code is not used in the analysis due to an unsolved cleaning error.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#G2Comm_separated <- G2Comm %>% #Separate bigrams to remove stopwords and transitional phrases}
\CommentTok{# separate(bigram, c("word1", "word2"), sep = " ") #run boolean functions for cleaning on `word1` and `word2`}
\CommentTok{#G2Comm_filtered <- G2Comm_separated %>% }
\CommentTok{# filter(!word1 %in% stop_words$word) %>% #remove stop words}
\CommentTok{# filter(!word2 %in% stop_words$word) %>%}
\CommentTok{# filter(!word1 == "NA") %>% #remove anomalous strings}
\CommentTok{# filter(!word2 == "NA") %>%}
\CommentTok{# filter(!word1 %in% nums) %>% #remove numbers}
\CommentTok{# filter(!word2 %in% nums)}
\CommentTok{#G2Comm <- G2Comm_filtered %>% }
\CommentTok{# unite(bigram, word1, word2, sep = " ")#reunify cleaned word columns}
\CommentTok{#G2Comm <- G2Comm[!grepl("_", G2Comm$bigram),]}
\CommentTok{#rm(G2Comm_separated)}
\CommentTok{#rm(G2Comm_filtered)}
\end{Highlighting}
\end{Shaded}
\hypertarget{count-1}{%
\subsection{Count}\label{count-1}}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{G1CommCount <-}\StringTok{ }\NormalTok{G1Comm }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(subreddit) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#group words by affiliation label}
\StringTok{ }\KeywordTok{count}\NormalTok{(subreddit, word, }\DataTypeTok{sort =} \OtherTok{FALSE}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#count and create column 'n'}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{()}
\CommentTok{#G2CommCount <- G2Comm %>% #repeat for bigrams}
\CommentTok{# group_by(subreddit) %>%}
\CommentTok{# count(bigram, sort = FALSE) %>%}
\CommentTok{# ungroup() }
\KeywordTok{rm}\NormalTok{(G1Comm)}
\CommentTok{#rm(G2Comm)}
\end{Highlighting}
\end{Shaded}
\hypertarget{export}{%
\section{Export}\label{export}}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#write.csv(G1SubmCount, "clean_data/1GSubm.csv", row.names = FALSE)}
\CommentTok{#write.csv(G2SubmCount, "clean_data/2GSubm.csv", row.names = FALSE)}
\KeywordTok{write.csv}\NormalTok{(SubmissionTkns, }\KeywordTok{paste0}\NormalTok{(dataDir, }\StringTok{"/SubmTkns.csv"}\NormalTok{), }\DataTypeTok{row.names =} \OtherTok{FALSE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{write.csv}\NormalTok{(CommTkns, }\KeywordTok{paste0}\NormalTok{(dataDir, }\StringTok{"/CommTkns.csv"}\NormalTok{), }\DataTypeTok{row.names =} \OtherTok{FALSE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\hypertarget{works-cited}{%
\section{Works Cited}\label{works-cited}}
Boe B (2014). PRAW: The Python Reddit API Wrapper. 2012-, \url{https://github.com/praw-dev/praw/} {[}Online; accessed 2017-09-29{]}.
Silge J, Robinson D (2016). ``tidytext: Text Mining and Analysis Using Tidy Data Principles in R.'' JOSS, 1(3). doi: 10.21105/joss.00037, \url{http://dx.doi.org/10.21105/joss.00037}.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{dim}\NormalTok{(Comments)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 164800 8
\end{verbatim}
\hypertarget{token-analysis}{%
\chapter{Token Analysis}\label{token-analysis}}
{[}In-Progress{]}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{knitr}\OperatorTok{::}\NormalTok{opts_chunk}\OperatorTok{$}\KeywordTok{set}\NormalTok{(}\DataTypeTok{message =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{warning =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{echo =} \OtherTok{TRUE}\NormalTok{)}
\CommentTok{# set to TRUE to run this on only one reference file}
\NormalTok{debugging <-}\StringTok{ }\OtherTok{FALSE}
\CommentTok{# this will expect the file or files to be in a subdirectory with the following name}
\NormalTok{refsource <-}\StringTok{ "MungingProj2"}
\NormalTok{dataDir <-}\StringTok{ "Proj2Data"}
\NormalTok{workingDir <-}\StringTok{ }\NormalTok{refsource}
\CommentTok{# prefixes for all File reads and writes}
\CommentTok{# titles for tables}
\NormalTok{titletext <-}\StringTok{ "RedditCrypto"}
\NormalTok{srs =}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"CryptoCurrency"}\NormalTok{,}\StringTok{"CryptoMarkets"}\NormalTok{)}
\StringTok{`}\DataTypeTok{%notin%}\StringTok{`}\NormalTok{ <-}\StringTok{ }\KeywordTok{Negate}\NormalTok{(}\StringTok{`}\DataTypeTok{%in%}\StringTok{`}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\hypertarget{import}{%
\section{Import}\label{import}}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#Submissions}
\CommentTok{#Subms1G<- read.csv(paste0(dataDir, "/clean_data/1GComms.csv")) #token counts by pol_group}
\CommentTok{#Subms2G<- read.csv("clean_data/2GSubm.csv") #token counts by }
\CommentTok{#TknsS<- read.csv(paste0(dataDir, "/SubmTkns.csv")) #token counts (full dataset)}
\CommentTok{#Comments}
\CommentTok{#Comms1G<- read.csv(paste0(dataDir, "/clean_data/1GComms.csv"))}
\CommentTok{#Comms2G <- read.csv("clean_data/2GComms.csv")}
\NormalTok{TknsC<-}\StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\KeywordTok{paste0}\NormalTok{(dataDir, }\StringTok{"/CommTkns.csv"}\NormalTok{))}
\CommentTok{#Matrices}
\CommentTok{#MtrxS <- read.csv("clean_data/SubMtrx.csv")}
\CommentTok{#MtrxC <- read.csv(paste0(dataDir,"/CommMtrx.csv"))}
\CommentTok{#MtrxU <- read.csv(paste0(dataDir, "/UserMtrx.csv"))}
\end{Highlighting}
\end{Shaded}
\hypertarget{token-counts}{%
\section{Token Counts}\label{token-counts}}
Token counts are done at several different levels. The first is word frequency within each subreddit, which allows for a comparison of common words across communitas. The second is word frequency within each subreddit page (i.e.~``Top'', ``Controversial'', ``Hot''), which allows for a comparison of common words across strata within each communitas; a sketch of this second level follows the chunk below.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{allCount <-TknsC }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(subreddit) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#group words by affiliation label}
\StringTok{ }\KeywordTok{count}\NormalTok{(word, }\DataTypeTok{sort =} \OtherTok{TRUE}\NormalTok{) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
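As a sketch of the second level described above (not part of the rendered analysis), the chunk below additionally groups by the page a comment was scraped from; \texttt{page} is an assumed column name for the listing label.
\begin{verbatim}
# Hedged sketch: word frequency within each subreddit page
# ("Top", "Controversial", "Hot"); `page` is an assumed column name.
pageCount <- TknsC %>%
  group_by(subreddit, page) %>%   # group by community and listing page
  count(word, sort = TRUE) %>%    # tally each word into column `n`
  ungroup()
\end{verbatim}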
\hypertarget{most-frequent-tokens}{%
\subsection{Most Frequent Tokens}\label{most-frequent-tokens}}
The following code shows the top 25 most frequently occurring words within each subreddit.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{sr_all_n25 <-}\StringTok{ }\NormalTok{allCount }\OperatorTok{%>%}\StringTok{ }
\StringTok{ }\KeywordTok{group_by}\NormalTok{(subreddit) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{top_n}\NormalTok{(}\DecValTok{25}\NormalTok{) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\hypertarget{wordclouds}{%
\section{Wordclouds}\label{wordclouds}}
Wordclouds are then constructed from the most frequently occurring words; the chunk below keeps the top 100 tokens by count and scales each count relative to the most frequent token.
\hypertarget{all-words-submissions-and-comments}{%
\subsection{All words (Submissions and Comments)}\label{all-words-submissions-and-comments}}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#sr_allCount <- allCount %>% filter(!subreddit == "r/all [control]")}
\NormalTok{sr_allWC <-}\StringTok{ }\NormalTok{allCount }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{top_n}\NormalTok{(}\DecValTok{100}\NormalTok{) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{mutate}\NormalTok{(}\DataTypeTok{prop =}\NormalTok{ n }\OperatorTok{/}\StringTok{ }\KeywordTok{max}\NormalTok{(n))}
\NormalTok{color=}\KeywordTok{diverge_hcl}\NormalTok{(}\KeywordTok{length}\NormalTok{(sr_allWC}\OperatorTok{$}\NormalTok{prop))[}\KeywordTok{rank}\NormalTok{(sr_allWC}\OperatorTok{$}\NormalTok{prop)]}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{set.seed}\NormalTok{(}\DecValTok{29}\NormalTok{)}
\KeywordTok{ggplot}\NormalTok{(sr_allWC, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{label =}\NormalTok{ word, }\DataTypeTok{size =}\NormalTok{ prop, }\DataTypeTok{color =}\NormalTok{ prop)) }\OperatorTok{+}
\StringTok{ }\KeywordTok{geom_text_wordcloud_area}\NormalTok{(}\DataTypeTok{shape =} \StringTok{'circle'}\NormalTok{, }\DataTypeTok{rm_outside =} \OtherTok{TRUE}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\KeywordTok{scale_size_area}\NormalTok{(}\DataTypeTok{max_size =} \DecValTok{30}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\CommentTok{#scale_colour_gradient2(low = "black", mid = "red4", high = "red", space = "Lab", aesthetics = "color") +}
\StringTok{ }\KeywordTok{theme_minimal}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\includegraphics{bookdown-demo_files/figure-latex/wordcloud1-1.pdf}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ggsave}\NormalTok{(}\StringTok{"CC_wordcloud.pdf"}\NormalTok{, }\DataTypeTok{device =} \StringTok{"pdf"}\NormalTok{, }\DataTypeTok{path =} \StringTok{"CCViz"}\NormalTok{, }\DataTypeTok{height =} \DecValTok{10}\NormalTok{, }\DataTypeTok{width =} \DecValTok{16}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
TF-IDF is a tool used to show distinct word usage across groups by weighting the term frequency \texttt{tf} by the inverse document frequency \texttt{idf}, ultimately revealing the tokens that are most distinctive of each subreddit.
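In the form implemented by \texttt{bind\_tf\_idf}, with each subreddit treated as a document, the statistic is the product
\[
\mathrm{tf}(t,d) \times \mathrm{idf}(t) \;=\; \frac{n_{t,d}}{\sum_{t'} n_{t',d}} \times \ln\!\left(\frac{N}{N_t}\right),
\]
where $n_{t,d}$ is the count of token $t$ in subreddit $d$, $N$ is the number of subreddits, and $N_t$ is the number of subreddits in which $t$ appears; tokens that appear in every subreddit receive an idf of zero and drop out.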
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#Create TF-IDF matrix for Controversial Submissions}
\NormalTok{allIDF <-}\StringTok{ }\NormalTok{allCount }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{bind_tf_idf}\NormalTok{(word, subreddit, n) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#construct tf_idf by affiliation label}
\StringTok{ }\KeywordTok{arrange}\NormalTok{(}\KeywordTok{desc}\NormalTok{(tf_idf)) }\OperatorTok{%>%}\StringTok{ }
\StringTok{ }\KeywordTok{mutate}\NormalTok{(}\DataTypeTok{word =} \KeywordTok{factor}\NormalTok{(word, }\DataTypeTok{levels =} \KeywordTok{rev}\NormalTok{(}\KeywordTok{unique}\NormalTok{(word))))}
\end{Highlighting}
\end{Shaded}
\hypertarget{tf-idf}{%
\section{TF-IDF}\label{tf-idf}}
The figure below shows that each communitas can be identified through the symbols presented in its distinguishing tokens. This methodology is useful for exploring the divergent symbolic identities of collective consciousnesses. The tf-idf analysis below presents us with sets of names, concepts, and phrases that reference the symbolic systems of each community's identity.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{allIDF }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(subreddit) }\OperatorTok{%>%}\StringTok{ }
\StringTok{ }\KeywordTok{top_n}\NormalTok{(}\DecValTok{10}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }
\StringTok{ }\KeywordTok{ungroup}\NormalTok{()}\OperatorTok{%>%}
\StringTok{ }\KeywordTok{ggplot}\NormalTok{(}\KeywordTok{aes}\NormalTok{(word, tf_idf, }\DataTypeTok{fill =}\NormalTok{ subreddit)) }\OperatorTok{+}\StringTok{ }\KeywordTok{geom_col}\NormalTok{(}\DataTypeTok{show.legend =} \OtherTok{FALSE}\NormalTok{) }\OperatorTok{+}\StringTok{ }\KeywordTok{labs}\NormalTok{(}\DataTypeTok{x =} \OtherTok{NULL}\NormalTok{, }\DataTypeTok{y =} \StringTok{"tf-idf"}\NormalTok{) }\OperatorTok{+}\StringTok{ }\KeywordTok{facet_wrap}\NormalTok{(}\OperatorTok{~}\NormalTok{subreddit, }\DataTypeTok{ncol =} \DecValTok{2}\NormalTok{, }\DataTypeTok{scales =} \StringTok{"free_y"}\NormalTok{) }\OperatorTok{+}\StringTok{ }\KeywordTok{coord_flip}\NormalTok{() }\OperatorTok{+}\StringTok{ }\KeywordTok{theme}\NormalTok{(}\DataTypeTok{axis.text.x =} \KeywordTok{element_text}\NormalTok{(}\DataTypeTok{angle =} \DecValTok{45}\NormalTok{, }\DataTypeTok{hjust =} \DecValTok{1}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
\includegraphics{bookdown-demo_files/figure-latex/all tf_idf-1.pdf}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ggsave}\NormalTok{(}\StringTok{"tf_idf.pdf"}\NormalTok{, }\DataTypeTok{device =} \StringTok{"pdf"}\NormalTok{, }\DataTypeTok{path =} \StringTok{"CCViz"}\NormalTok{, }\DataTypeTok{height =} \DecValTok{8}\NormalTok{, }\DataTypeTok{width =} \DecValTok{6}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\hypertarget{posts-over-time}{%
\section{Posts over time}\label{posts-over-time}}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{TknsByDate <-}\StringTok{ }\NormalTok{TknsC }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{separate}\NormalTok{(created, }\KeywordTok{c}\NormalTok{(}\StringTok{"created"}\NormalTok{, }\StringTok{"time"}\NormalTok{), }\StringTok{" "}\NormalTok{) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{mutate}\NormalTok{(}\DataTypeTok{created =} \KeywordTok{ymd}\NormalTok{(created)) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{mutate_at}\NormalTok{(}\KeywordTok{vars}\NormalTok{(created), }\KeywordTok{funs}\NormalTok{(year, month, day))}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{TknsByMonth <-TknsByDate }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{filter}\NormalTok{(year }\OperatorTok{>}\StringTok{ }\DecValTok{2015}\NormalTok{) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{mutate}\NormalTok{(}\DataTypeTok{Month =} \KeywordTok{make_date}\NormalTok{(year, month))}
\NormalTok{monthCount <-}\StringTok{ }\NormalTok{TknsByMonth }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(subreddit, month, year) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#group words by affiliation label}
\StringTok{ }\KeywordTok{count}\NormalTok{(word, }\DataTypeTok{sort =} \OtherTok{TRUE}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#count and create column 'n'}
\StringTok{ }\KeywordTok{top_n}\NormalTok{(}\DecValTok{10}\NormalTok{, n) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{()}
\NormalTok{monthCount <-}\StringTok{ }\NormalTok{monthCount }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(subreddit, month, year) }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{mutate}\NormalTok{(}\DataTypeTok{prop =}\NormalTok{ n }\OperatorTok{/}\StringTok{ }\KeywordTok{max}\NormalTok{(n))}
\KeywordTok{table}\NormalTok{(monthCount}\OperatorTok{$}\NormalTok{subreddit)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
##
## CryptoCurrency CryptoMarkets
## 396 435
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ggplot}\NormalTok{(monthCount, }\KeywordTok{aes}\NormalTok{(}
\DataTypeTok{label =}\NormalTok{ word,}
\DataTypeTok{size =}\NormalTok{ prop,}
\DataTypeTok{color =}\NormalTok{ prop}
\NormalTok{)) }\OperatorTok{+}
\StringTok{ }\KeywordTok{geom_text_wordcloud_area}\NormalTok{(}\DataTypeTok{rm_outside =} \OtherTok{TRUE}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\KeywordTok{scale_size_area}\NormalTok{(}\DataTypeTok{max_size =} \DecValTok{20}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\CommentTok{#scale_colour_gradient2(low = "black", mid = "red4", high = "red", space = "Lab", aesthetics = "color") +}
\StringTok{ }\KeywordTok{theme_minimal}\NormalTok{() }\OperatorTok{+}
\StringTok{ }\KeywordTok{facet_grid}\NormalTok{(}\KeywordTok{vars}\NormalTok{(month), }\KeywordTok{vars}\NormalTok{(year))}
\end{Highlighting}
\end{Shaded}
\includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-20-1.pdf}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ggsave}\NormalTok{(}
\StringTok{"CC_timeline.pdf"}\NormalTok{,}
\DataTypeTok{device =} \StringTok{"pdf"}\NormalTok{,}
\DataTypeTok{path =} \StringTok{"CCViz"}\NormalTok{,}
\DataTypeTok{height =} \DecValTok{30}\NormalTok{,}
\DataTypeTok{width =} \DecValTok{20}
\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\hypertarget{currencies}{%
\section{Currencies}\label{currencies}}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{BTC <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"Bitcoin"}\NormalTok{, }\StringTok{"bitcoin"}\NormalTok{, }\StringTok{"BTC"}\NormalTok{, }\StringTok{"btc"}\NormalTok{)}
\NormalTok{ETH <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"Ethereum"}\NormalTok{, }\StringTok{"ethereum"}\NormalTok{, }\StringTok{"ETH"}\NormalTok{, }\StringTok{"eth"}\NormalTok{)}
\NormalTok{XRP <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"Ripple"}\NormalTok{, }\StringTok{"ripple"}\NormalTok{, }\StringTok{"XRP"}\NormalTok{, }\StringTok{"xrp"}\NormalTok{)}
\NormalTok{LTC <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"Litecoin"}\NormalTok{, }\StringTok{"litecoin"}\NormalTok{, }\StringTok{"LTC"}\NormalTok{, }\StringTok{"ltc"}\NormalTok{)}
\NormalTok{LINK <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"Chainlink"}\NormalTok{, }\StringTok{"chainlink"}\NormalTok{, }\StringTok{"LINK"}\NormalTok{, }\StringTok{"link"}\NormalTok{)}
\NormalTok{currencies <-}\StringTok{ }\KeywordTok{c}\NormalTok{(BTC, ETH, XRP, LTC, LINK)}
\CommentTok{# For frequency analysis}
\NormalTok{CurTkns <-}\StringTok{ }\NormalTok{TknsByDate }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{filter}\NormalTok{(word }\OperatorTok{%in%}\StringTok{ }\NormalTok{currencies)}
\CommentTok{# For sentiment analysis}
\NormalTok{CurComms <-}\StringTok{ }\NormalTok{CommData }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{filter}\NormalTok{(comm_id }\OperatorTok{%in%}\StringTok{ }\NormalTok{CurTkns}\OperatorTok{$}\NormalTok{comm_id)}
\end{Highlighting}
\end{Shaded}
\hypertarget{assigning-identifiers}{%
\subsection{Assigning Identifiers}\label{assigning-identifiers}}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{CurTkns <-}\StringTok{ }\NormalTok{CurTkns }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{mutate}\NormalTok{(}\DataTypeTok{Coin =} \KeywordTok{case_when}\NormalTok{(}
\NormalTok{ .}\OperatorTok{$}\NormalTok{word }\OperatorTok{%in%}\StringTok{ }\NormalTok{BTC }\OperatorTok{~}\StringTok{ "BTC"}\NormalTok{,}
\NormalTok{ .}\OperatorTok{$}\NormalTok{word }\OperatorTok{%in%}\StringTok{ }\NormalTok{ETH }\OperatorTok{~}\StringTok{ "ETH"}\NormalTok{,}
\NormalTok{ .}\OperatorTok{$}\NormalTok{word }\OperatorTok{%in%}\StringTok{ }\NormalTok{XRP }\OperatorTok{~}\StringTok{ "XRP"}\NormalTok{,}
\NormalTok{ .}\OperatorTok{$}\NormalTok{word }\OperatorTok{%in%}\StringTok{ }\NormalTok{LTC }\OperatorTok{~}\StringTok{ "LTC"}\NormalTok{,}
\NormalTok{ .}\OperatorTok{$}\NormalTok{word }\OperatorTok{%in%}\StringTok{ }\NormalTok{LINK }\OperatorTok{~}\StringTok{ "LINK"}
\NormalTok{ ))}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{curTknsByMonth <-CurTkns }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{mutate}\NormalTok{(}\DataTypeTok{Month =} \KeywordTok{make_date}\NormalTok{(year, month))}
\NormalTok{curCounts <-}\StringTok{ }\NormalTok{curTknsByMonth }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(Coin) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#group words by affiliation label}
\StringTok{ }\KeywordTok{count}\NormalTok{(word, }\DataTypeTok{sort =} \OtherTok{TRUE}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#count and create column 'n'}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{()}
\NormalTok{curCountsbyMonth <-}\StringTok{ }\NormalTok{curTknsByMonth }\OperatorTok{%>%}
\StringTok{ }\KeywordTok{group_by}\NormalTok{(Coin, Month) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#group words by affiliation label}
\StringTok{ }\KeywordTok{count}\NormalTok{(word, }\DataTypeTok{sort =} \OtherTok{TRUE}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\CommentTok{#count and create column 'n'}
\StringTok{ }\KeywordTok{ungroup}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ggplot}\NormalTok{(curCounts) }\OperatorTok{+}
\StringTok{ }\KeywordTok{geom_bar}\NormalTok{(}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ Coin, }\DataTypeTok{y =}\NormalTok{ n, }\DataTypeTok{fill =}\NormalTok{ Coin), }\DataTypeTok{stat =} \StringTok{"identity"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-24-1.pdf}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ggplot}\NormalTok{(curCountsbyMonth) }\OperatorTok{+}
\StringTok{ }\KeywordTok{geom_bar}\NormalTok{(}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ Coin, }\DataTypeTok{y =}\NormalTok{ n, }\DataTypeTok{fill =}\NormalTok{ Coin), }\DataTypeTok{stat =} \StringTok{"identity"}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\KeywordTok{facet_wrap}\NormalTok{(}\OperatorTok{~}\NormalTok{Month)}
\end{Highlighting}
\end{Shaded}
\includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-25-1.pdf}
\bibliography{book.bib,packages.bib}
\end{document}
\chapter{Description of Alternative 5-D Initial Density}\label{ext:pde-5d-initial}
\begin{figure}
\centering
\includegraphics[width=0.675\linewidth]{figures/pde-highd/pde-highd-alt_initial_D5_m100.png}
\caption{
Knowledge of the behavior of $g$ at the boundaries allows for more than half of each of the three remaining intervals in five dimensions to be ruled out as infeasible regions when we look at high-probability samples from the 2-D SIP solution.
}
\label{fig:pde-highd-5d-study}
\end{figure}
In Figure~\ref{fig:pde-highd-5d-study}, we plot the parameter samples whose relative ratios exceeded $1/1000$ and note that they sweep out a family of curves that can be used to estimate bounds not only on $\lambda_2$ and $\lambda_4$ (the previous $\lambda_1$ and $\lambda_2$ from the 2-D problem), which exhibit correlation structure, but also on the remaining three knots.
To form the intervals shown in orange in Fig.~\ref{fig:pde-highd-5d-study}, we take the upper and lower bounds of the curves passing through the vertical lines drawn at the three new knot values.
To be conservative, we multiply our lower bound by $1.2$ and the upper by $0.8$.
With these choices, we are still more than halving the interval-length in each direction as compared to the previous 5-D problem.
One could establish a lower tolerance for accepting likely samples and avoid the multiplication factor, or make any number of other choices for a refined initial density.
However, a thorough exploration of how to best leverage the ratio of observed to predicted densities is left to future work, and will always be highly problem-dependent.
% \section{Capturing Correlation Structure from 2-D SIP Solution}
For the two remaining directions, we want to capture the correlation structure that we were able to visually identify in Fig.~\ref{fig:pde-highd-2d-updated} and impose a uniform density over the support of the set.
To achieve this desired refinement of an initial density, we perform a singular-value decomposition on the likely samples from the scalar-valued 2-D solution, since there are so many more samples\footnote{ The scalar-valued contour was found to better characterize the direction of the equivalence class, suggesting perhaps a justifiable use for solving the problem with it. We could have formed an estimate of the updated density from using the vector-valued QoI and sampled from that instead. Many such approaches can be looked into in the future and are briefly discussed in the last section of this chapter.}.
The singular vectors are used to transform the vector-valued samples, and a uniform sampling is performed over the rectangular bounding box for these points, shown in the center of Figure~\ref{fig:pde-highd-2d-study}.
These generated samples, however, leave $\pspace$ when transformed back to their native space, as seen in the left panel.
To ameliorate this problem, we instead perform sampling in a while-loop, sampling from this uniform box and rejecting any that would get mapped back outside $\pspace$, until we reach our desired thousand samples.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\linewidth]{figures/pde-highd/pde-highd-alt_initial_D2_m100}
\caption{
Generating proposal samples for the two directions informed by solving the 2-D inverse problem, aided by singular-value decompositions and an ad-hoc sampling procedure.
}
\label{fig:pde-highd-2d-study}
\end{figure}
In effect, we set out to define a computational open cover for the relatively high-probability samples.
In the center panel of Fig.~\ref{fig:pde-highd-2d-study}, there are also corner regions of the space on which we want to avoid wasting samples, so we reject samples whose squared two-norm distance from their nearest vector-valued sample exceeds $0.05$.
This sampling procedure produces the set shown in orange on the right side of the figure (ten thousand shown to demonstrate coverage).
One thousand of these samples are kept at random and joined with the samples generated from the three other directions.
\documentclass[12pt,english]{article}
%pandoc --bibliography=references201804.bib -o DRAFT20190831_IER_submit.docx DRAFT20190831_IER_submit.tex
%%%%%%%%%%%%%%%%%%%%%%%%%
%% SEC: PACKAGEs MAIN %%
%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{mathptmx}
\usepackage{microtype}
\everypar{\looseness=-1}
\linepenalty=1000
\usepackage{amsmath}
\usepackage{bbm}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{babel}
\usepackage{setspace}
\usepackage{breakurl}
\usepackage{graphicx}
\usepackage{booktabs,dcolumn}
\usepackage{array}
\usepackage{multirow}
\usepackage[font=singlespacing, skip=3pt]{caption}
\usepackage[usenames,dvipsnames,svgnames,table]{xcolor}
\usepackage{float}
\usepackage[export]{adjustbox}[2011/08/13]
\usepackage{enumitem}
\usepackage{tikz}
\usepackage{subfig}
\usepackage{float}
%\usepackage[nolists, tablesfirst, nomarkers]{endfloat}
\usepackage[colorlinks=true, linkcolor=blue, citecolor=blue, plainpages=false, pdfpagelabels=true, urlcolor=blue]{hyperref}
%\usepackage{geometry}
\usepackage[text={16cm,24cm}]{geometry}
\usepackage{ragged2e}
\usepackage{hyperref}
%%%%%%%%%%%%%%%%%%%%%%%%%
%% SEC: Indentation %%
%%%%%%%%%%%%%%%%%%%%%%%%%
\geometry{
a4paper,
noheadfoot=true,
left=1.0in,
right=1.0in,
top=1.0in,
bottom=1.0in,
}
\makeatletter
%\date{January 2, 2018}
\date{\today}
%\date{}
%%%%%%%%%%%%%%%%%%%%%%%%%
%% SEC: new commands %%
%%%%%%%%%%%%%%%%%%%%%%%%%
%\exhyphenpenalty=10000\hyphenpenalty=10000
\newcommand\invisiblesection[1]{%
\refstepcounter{section}%
\addcontentsline{toc}{section}{\protect\numberline{\thesection}#1}%
\sectionmark{#1}}
\newcommand{\rowgroup}[1]{\hspace{-0.5em}#1}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcommand*\samethanks[1][\value{footnote}]{\footnotemark[#1]}
\newcommand{\sym}[1]{\ifmmode^{#1}\else\(^{#1}\)\fi}
%%%%%%%%%%%%%%%%%%%%%%%%
%% SEC: Bibliography %%
%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage[authordate,
backend=biber,
doi=false,
isbn=false,
sorting=nyt,
maxbibnames=10,
sortcites=False]{biblatex-chicago}
\bibliography{references201804} % or
\AtEveryBibitem{\clearlist{issn}} % clears issn
\AtEveryBibitem{%
\ifentrytype{online}{%
\clearfield{urlyear}\clearfield{urlmonth}\clearfield{urlday}
}{%
\clearfield{url}%
\clearfield{urlyear}\clearfield{urlmonth}\clearfield{urlday}
}
}%
\renewcommand*{\bibfont}{\small}
%%%%%%%%%%%%%%%%%%%%%%%%
%% SEC: Footer %%
%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{calc}
\setlength{\footskip}{\paperheight
-(1in+\voffset+\topmargin+\headheight+\headsep+\textheight)
-0.75in}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% SEC: Image and Table Sizing %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\imgscaleindiff}{0.85}
\newcommand{\imgscale}{0.25}
\newcommand{\imgscaletwopanel}{0.80}
\newcommand{\imgscalefourpanel}{0.60}
\newcommand{\imgscalematlab}{0.52}
\newcommand{\imgscalematlablarge}{0.70}
\newcommand{\imgscalestata}{0.25}
\newcommand{\imgscalestatathree}{0.23}
\newcommand{\imgscalestatasmall}{0.20}
\newcommand{\imgscalestatalarger}{0.25}
\newcommand{\imgscalestatalargest}{0.33}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Reduce Spacing for Title
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage[compact]{titlesec}
\titlespacing\section{0pt}{0pt}{0pt}
\titlespacing\subsection{0pt}{0pt}{0pt}
\titlespacing\subsubsection{0pt}{0pt}{0pt}
%\titlespacing\paragraph{0pt}{15pt}{1pt}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Remove the spacing between paragraphs and have a small paragraph indentation
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\setlength{\parskip}{0cm}
\setlength{\parindent}{15pt}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Reduce Spacing for Math Equations
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\makeatletter
\g@addto@macro\normalsize{%
\setlength\abovedisplayskip{1.5pt}
\setlength\belowdisplayskip{1.5pt}
\setlength\abovedisplayshortskip{1.5pt}
\setlength\belowdisplayshortskip{1.5pt}
}
\makeatother
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Reduce Spacing for Figures
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\setlength{\textfloatsep}{2pt}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Line Spacing for Main Text and Footnotes
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Using the setspace package:
%\setstretch{1.5}
\doublespacing
%\onehalfspacing
%Change footnote
\usepackage[hang]{footmisc}
% separation between foonotes
\addtolength{\footnotesep}{1mm}
% within footnote separation
\renewcommand{\footnotelayout}{\setstretch{1.3}}%
\setlength{\footnotemargin}{4mm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% SEC: Abstract Width %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\renewenvironment{abstract}
%{\small
% \begin{center}
% \bfseries \abstractname\vspace{-.5em}\vspace{0pt}
% \end{center}
% \list{}{%
% \setlength{\leftmargin}{0mm}% <---------- CHANGE HERE
% \setlength{\rightmargin}{\leftmargin}%
% }%
% \item\relax}
%{\endlist}
\begin{document}\fontsize{12}{14}\rm
%---------------------------Aspect Frobenius-----------------------------
\section{Aspect Frobenius\label{s:tri-aspect-Frobenius}}
The aspect Frobenius is the sum of the squared edge lengths divided by $4\sqrt{3}$ times
the area, so that a unit equilateral triangle has a value of $1$.
\[
q = \frac{{\normvec{{L_0}}}^{2} +
{\normvec{{L_1}}}^{2} +
{\normvec{{L_2}}}^{2}}{4A\sqrt{3}}
\]
Note that in earlier versions of \verd{}, this metric was
called the triangle aspect ratio.
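As a quick illustration of the normalization (a worked example, not part of the original manual): for an isosceles right triangle with legs of length $1$, the squared edge lengths sum to $1 + 1 + 2 = 4$ and the area is $1/2$, so
\[
q = \frac{4}{4 \cdot \frac{1}{2} \cdot \sqrt{3}} = \frac{2}{\sqrt{3}} \approx 1.155,
\]
which still lies within the acceptable range given below.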
\trimetrictable{aspect Frobenius}%
{$1$}% Dimension
{$[1,1.3]$}% Acceptable range
{$[1,DBL\_MAX]$}% Normal range
{$[1,DBL\_MAX]$}% Full range
{$1$}% Unit equilateral triangle value
{\cite{pebay:03}}% Reference(s)
{v\_tri\_aspect\_frobenius}% Verdict function name
\documentclass[../../main.tex]{subfiles}
\begin{document}
% START WRITING HERE
\section{Stock-out Prediction}
* Illustration of network
* Single-echelon, multi-location, continuous review (s,S), with partial pooling.
* Emphasize why we want to be close to the customer
* Two warehouses, Oslo and Bergen. They serve Norway's national demand, but only the warehouse in Bergen can fulfill customer orders the same day. However, we can transfer inventory from Oslo to Bergen and then fulfill customer orders within one day. The warehouses receive inventory from vendors. Retail setting. Continuous review order policy (s,S).
* Assumption: Oslo and Bergen together hold all the inventory customers demand, hence the only difference is that the Oslo warehouse can fulfill same-day to Oslo customers.
% https://www.tandfonline.com/doi/abs/10.1080/09593960903498284
% END WRITING HERE
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Arsclassica Article
% LaTeX Template
% Version 1.1 (1/8/17)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original author:
% Lorenzo Pantieri (http://www.lorenzopantieri.net) with extensive modifications by:
% Vel ([email protected])
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass[
12pt, % Main document font size
a4paper, % Paper type, use 'letterpaper' for US Letter paper
oneside, % One page layout (no page indentation)
%twoside, % Two page layout (page indentation for binding and different headers)
headinclude,footinclude, % Extra spacing for the header and footer
BCOR5mm, % Binding correction
]{scrartcl}
\usepackage[english]{babel}
\usepackage{url}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{float}
\usepackage{epigraph}
\usepackage{mathcomp}
\usepackage{textcomp}
\usepackage{amsmath}
\usepackage{url}
\usepackage{sectsty}
\usepackage[dvipsnames,table,xcdraw,svgnames]{xcolor}
\usepackage{listings}
\lstset{language=R,
basicstyle=\small\ttfamily,
stringstyle=\color{DarkGreen},
otherkeywords={0,1,2,3,4,5,6,7,8,9},
morekeywords={TRUE,FALSE},
deletekeywords={data,frame,length,as,character},
keywordstyle=\color{blue},
commentstyle=\color{DarkGreen},
}
\usepackage[pdftex,
pdfauthor={Marzio De Corato},
pdftitle={A statistical analysis on factors that contributed/slowed down the spread of COVID-19},
pdfsubject={Project for the exam: Machine learning, statistical learning, deep learning and artificial intelligence},
pdfproducer={Latex},
pdfcreator={pdflatex}]{hyperref}
\input{structure.tex} % Include the structure.tex file which specified the document structure and layout
\sloppy
\hyphenation{Fortran hy-phen-ation} % Specify custom hyphenation points in words with dashes where you would like hyphenation to occur, or alternatively, don't put any dashes in a word to stop hyphenation altogether
%----------------------------------------------------------------------------------------
% TITLE AND AUTHOR(S)
%----------------------------------------------------------------------------------------
\begin{document}
\title{\normalfont{A statistical analysis on factors that contributed/slowed down the spread of COVID-19}} % The article title
\subtitle{Project for the exam: Machine learning, statistical learning, deep learning and artificial intelligence} % Uncomment to display a subtitle
\author{Marzio De Corato} % The article author(s) - author affiliations need to be specified in the AUTHOR AFFILIATIONS block
\date{\today} % An optional date to appear under the author(s)
%----------------------------------------------------------------------------------------
%----------------------------------------------------------------------------------------
% HEADERS
%----------------------------------------------------------------------------------------
\renewcommand{\sectionmark}[1]{\markright{\spacedlowsmallcaps{#1}}} % The header for all pages (oneside) or for even pages (twoside)
%\renewcommand{\subsectionmark}[1]{\markright{\thesubsection~#1}} % Uncomment when using the twoside option - this modifies the header on odd pages
\lehead{\mbox{\llap{\small\thepage\kern1em\color{halfgray} \vline}\color{halfgray}\hspace{0.5em}\rightmark\hfil}} % The header style
\pagestyle{scrheadings} % Enable the headers specified in this block
%----------------------------------------------------------------------------------------
% TABLE OF CONTENTS & LISTS OF FIGURES AND TABLES
%----------------------------------------------------------------------------------------
\maketitle % Print the title/author/date block
\newpage
\setlength\epigraphwidth{.7\textwidth}
\epigraph{ "Haec ratio quondam morborum et mortifer aestus \\
finibus in Cecropis funestos reddidit agros \\
vastavitque vias, exhausit civibus urbem. \\
nam penitus veniens Aegypti finibus ortus, \\
aeëra permensus multum camposque natantis, \\
incubuit tandem populo Pandionis omni." \\
Lucretius, De Rerum Natura VI \\
\vspace{5mm}
Twas such a manner of disease, 'twas such \\
Mortal miasma in Cecropian lands \\
Whilom reduced the plains to dead men's bones,\\
Unpeopled the highways, drained of citizens \\
The Athenian town. For coming from afar,\\
Rising in lands of Aegypt, traversing \\
Reaches of air and floating fields of foam, \\
At last on all Pandion's folk it swooped \\
(As translated by William Ellery Leonard)
}
\newpage
%----------------------------------------------------------------------------------------
% ABSTRACT
%----------------------------------------------------------------------------------------
\section*{Abstract} % This section will not appear in the table of contents due to the star (\section*)
The anisotropic spread of the virus \textit{SARS-CoV-2} raised a scientific challenge about the features that prevent or instead increase its diffusion. In order to get a bird's-eye view on this issue for the first wave (up to 24/08/2020), principal component analysis (PCA) was performed on the following clusters: the Italian provinces, the Italian regions, and a selected set of 150 countries. Such an inspection allowed the disease diffusion to be investigated not only for cluster sets of different sizes, but also with different features, since the indices available for a set of provinces differ in principle from those available for a set of countries. It was found that some features, such as the mean income, the GDP and the public transport, are positively correlated with a higher number of COVID-19 cases, because they increase the connectivity of the agents, while others that limit this connectivity, such as unemployment, slow the diffusion. Moreover, we found that, concerning the health service, the most important factors negatively correlated with COVID-19 cases are the normalized number of general practitioners and the normalized number of their visits, while a less important role is played by the health quality indices.
\nocite{*}
\setcounter{tocdepth}{2} % Set the depth of the table of contents to show sections and subsections only
\tableofcontents % Print the table of contents
%\listoffigures % Print the list of figures
%\listoftables % Print the list of tables
\newpage % Start the article content on the second page, remove this if you have a longer abstract that goes onto the second page
\section{Introduction} \label{introduction}
The recent pandemic diffusion of the virus \textit{SARS-CoV-2}, connected to the \textit{COronaVIrus Disease 19}, raised a scientific question about the different spread rates among population clusters such as cities, regions and states. This behaviour can be ascribed to the fact that the features of these clusters, such as the densities, the public transport as well as the individual mean incomes, are very heterogeneous \cite{sebhatu2020explaining,skorka2020macroecology}. Furthermore, the individual policies undertaken by the decision makers of these clusters have a large effect on the spread of the virus \cite{block2020social}. A good tool for analysing the correlations of the spread of the virus with the cluster features is Principal Component Analysis (PCA): this approach allows the main statistical correlations of a data set to be visualized in a single 2D plot. It is worth noting that this type of analysis represents only the first step towards the identification of causal relations between the features: the findings obtained via PCA should be validated with a model; however, this second step is outside the aim of this paper \footnote{But may be considered in a future one}. In this work three different types of cluster were considered: the Italian provinces (\textit{province}), the Italian regions (\textit{regioni}), and 150 countries. The choice of the Italian provinces was motivated by the fact that this is the smallest cluster for which the daily cases of COVID-19 are easily available; on the other hand, the regions provide aggregate data (in particular about the health system) that are not easily retrievable for the provinces. Finally, the inclusion of the states is aimed at seeing whether the statistical correlations obtained for the smaller clusters are still valid on a macroscopic scale \footnote{For this last cluster the lock-down policy and its effect was not considered, because collecting the policy data for each of the 150 countries was too time consuming; however this analysis may be an outlook for a future update of this work}.
\section{Data description} \label{Data_description}
In this section a brief overview of the data analysed is provided: in particular, for each cluster, the data gathering and assembling are briefly described.
\subsection{Temporal interval}
In this work, for Italy, only the first wave of contagion was considered: this choice was motivated by the fact that the two waves of contagion (the first one running from February to the end of August, and the second one starting at the end of September) may have different spread distributions due to the different locations of the initial cases. Moreover, the second wave, up to the date of this work, is still active, therefore a complete analysis of it should be postponed to its end \footnote{This does not exclude, in principle, that the statistical correlations obtained in this work can be the same for the second wave}. For the other countries a better approach would have followed the same procedure; however, for many of them the end of the first wave and the start of the second one are not as clear as in Italy. On this basis, at the state level no distinction between waves was made \footnote{But will be considered in future development}.
\subsection{Provinces}
For this cluster the cumulative number of COVID-19 cases up to 24/08/2020 was obtained from the GitHub website of \textit{Protezione Civile} \cite{github-protezionecivile}. The overall number of cases was then divided (normalized) by the population of the corresponding province. The population figures (2019) were retrieved from Istituto Nazionale di Statistica (ISTAT) \cite{ISTAT}. The size of each province was also considered in order to evaluate the density. Since the diffusion of a virus is causally correlated with population connectivity \cite{lloyd2001viruses,kraemer2020effect}, we also considered a set of economic indices in order to evaluate the industrialization and the wealth of each province: the mean income per person (2019), the public transport (2012, measured as demand per resident), the private transport (2012, measured as cars per resident), the pollution and the unemployment (2019). The first one was taken from the \textit{Ministero dell'Economia e delle Finanze} as reported on the following website \cite{MEF}, while the others were obtained from the ISTAT databases \cite{ISTAT}. Such indices can be considered a good approximation of population connectivity.
\subsection{Regions}
As for the provinces, the cumulative number of COVID-19 cases for this cluster was taken from \cite{github-protezionecivile} up to 24/08/2020. In this case the same source provides the overall number of tests. The unemployment rate as well as the mean income were also available from ISTAT \cite{ISTAT} (2019). Since the Italian constitution partially delegates the management of health services to the regions, new features connected to the latter are available. Here the following ones were considered: the number of resident citizens per general practitioner (\textit{medico di base}), the normalized number of structures for hospitalization, the normalized number of medical guards (\textit{guardie mediche}) multiplied by $10^{5}$, and the normalized number of visits they performed multiplied by $10^{5}$. These data were retrieved from the statistical yearbook of the national health service 2017 (\textit{Annuario Statistico del sistema sanitario nazionale 2017}) published by the Italian Ministry of Health \cite{AnnuarioSSN2017}. Finally, the marks for the essential assistance levels (\textit{livelli essenziali di assistenza}) were also included, as provided by the Italian Ministry of Health for 2017 \cite{LeaRank}. These represent an evaluation of the health services of each region according to the Italian Government. A full description of the features analysed can be found in \cite{LeaDesc}.
\subsection{Countries}
For this type of cluster the overall number of cases and deaths was taken from the World Health Organization \cite{whoCases} up to 27/08/2020. These numbers were normalized with the data on the total population of each state provided by the World Bank \cite{worldBank} up to 2019. The GDP (up to 2019) and the Universal Health Coverage Index \cite{whoUHC} (up to 2017) for each country were also taken from the latter source \cite{worldBank} and then normalized. Finally, the National Health Expenditure (NHA) (normalized) up to 2017, the Traffic Mortality (normalized and scaled by a factor $10^{5}$) up to 2016, the Pollution Mortality (normalized) and the PM 2.5 concentration ($\mu g/m^{3}$) up to 2016 were taken from the WHO databases \cite{whoDb}. It is worth noting that the countries considered in this work were the ones for which all these indicators were available.
\section{Theoretical Background} \label{Theoretical Background}
As pointed out by MacKay \cite{mackay2003information}, the basic idea of unsupervised learning is to mimic the human behaviour of finding regularities in data and grouping them. Among the different techniques of unsupervised learning \cite{james2013introduction}, here we considered PCA: in this section the basic ideas of this tool will be outlined following the approach proposed by James et al. \cite{james2013introduction}. One of the main aims of PCA is to plot $n$ observations with $p$ features in a single 2D plot, with the least possible loss of information, instead of $\binom{p}{2}$ 2D plots. To do so, PCA chooses, among the possible axes formed by normalized linear combinations of the features, the two associated with the largest variance, given that these two axes are orthogonal. Thus, supposing that the means of the features are null, the problem is to find the coefficients (usually called loadings) $\phi_{ij}$ that solve the following optimization problem \cite{james2013introduction}:
\begin{equation}
max\left\lbrace \dfrac{1}{n} \sum_{i=1}^{n} \left(\sum^{p} _{j=1} \phi_{j1}x_{ij} \right)^{2} \right\rbrace
\end{equation}
with the constraint \cite{james2013introduction}
\begin{equation}
\sum_{j=1}^{p}\phi_{j1}^{2}=1
\end{equation}
The loadings define the principal components (that have to be orthogonal) \cite{james2013introduction}:
\begin{equation}
z_{i1}=\phi_{11}x_{i1}+\phi_{21}x_{i2}+...+\phi_{p1}x_{ip}
\end{equation}
If all the principal components are taken into account no information is lost; however, the complexity due to the high dimensionality is the same as that of the starting representation. If we instead sacrifice the $p-2$ components that have lower variance than the first two, the complexity is dramatically reduced to a 2D plot. In this case the toll paid is the loss of the information contained in those $p-2$ components. From a computational point of view this task can be performed with the standard techniques used for solving an eigenvalue problem. Once the two principal components are found, the data can be plotted in the new coordinate set; the advantage of this new representation lies in the fact that, if the first two principal component loading vectors are plotted, a bird's-eye view of the statistical correlations between the features is obtained: basically, the cosine of the angle between the loadings approximates the statistical correlation, while the position of a data point with respect to the loadings reflects its features. In this way an intuitive representation of the data, their features and the correlations among them is provided. Finally, this can be accompanied by a plot of the correlation matrix, as was done in this work.
\section{Results and discussion}
For each cluster the following procedure was followed: first the correlation matrix was calculated together with its heat-map, then the PCA was performed on the scaled data. For this procedure a histogram of the importance of the components, a 2D plot of the loading vectors normalized on the unitary circle, and the full 2D PCA plot with the data are reported.
\subsection{Provinces} \label{sub_prov}
From the inspection of the plots \ref{Province_corr_matrix}, \ref{Province_FULL_Variables-PCA} and \ref{Provinces_PCA_FULL} it is possible to point out that the public transport, the density, the normalized cumulative cases and the mean income are positively correlated. These statistical correlations are causally validated by several previous publications \cite{neiderud2015urbanization,world2010hidden,gangemi2020rich,weyers2008low}. The argument that explains this behaviour is that a higher density, a higher public transport demand and a higher income increase the rate of contact between individuals: the first because people are closer, so a contact between them has a higher probability than in a low density area (such as a rural context); the second because individuals using public transport are very close to each other (indeed there is no correlation between the cases and private transport); the third because richer people can spend more money on social events or on travel. This is corroborated by the negative statistical correlation between cases and unemployment (in this case one must also consider that a worker has a higher mobility, since he/she has to reach the place of work). One may at first be confused by the fact that the air quality has a strong correlation with private transport while the public transport points in a different direction: this can be ascribed to the fact that in rural provinces, where the air quality is higher and the public transport services are reduced, people are forced to own a private vehicle, while in urban provinces they can consider using only the public transport. Finally, note that, looking at the plot \ref{Province_FULL_Variances}, the PCA reported in plots \ref{Province_FULL_Variables-PCA} and \ref{Provinces_PCA_FULL} captures almost 60 $\%$ of the variance (i.e.\ almost 40 $\%$ is lost).
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/Province_FULL_CorrMatrix.pdf}
\caption{The Correlation matrix for the Italian provinces accompanied by its heat-map, as generated by the following dataset: local unemployment (2019), local private transport (2012, number of cars per 1000 residents), the air quality (2012), the local public transport (2012, measured as the demand per resident), the density (2019, measured as residents per km$^{2}$), the cumulative cases up to 26/08/2020 and the mean income (2019, measured in kEUR). The colors were assigned in the following descending order: red, blue, green. Note that the public transport, the density, the cumulative cases and the mean income are highly correlated. On the other hand the unemployment is negatively correlated to this first cluster.}
\label{Province_corr_matrix}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/Province_FULL_Variances.pdf}
\caption{Percentage of variance for each component of the dataset concerning the provinces. Note that using the first two, almost 60 $\%$ of the variance is captured; as a consequence the information lost is almost 40 $\%$.}
\label{Province_FULL_Variances}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/Province_FULL_Variables-PCA.pdf}
\caption{2D plot of the loading vectors for the dataset of provinces, normalized on the unitary circle. The features, the date of their collection and the units of measure are the same as in Fig. \ref{Province_corr_matrix}}
\label{Province_FULL_Variables-PCA}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/Provinces_PCA_FULL.pdf}
\caption{The dataset of provinces plotted on the two principal components, which capture almost 60 $\%$ of the variance. The provinces are labelled with their vehicle registration abbreviation. The features, the date of their collection and the units of measure are the same as in Fig. \ref{Province_corr_matrix} }
\label{Provinces_PCA_FULL}
\end{center}
\end{figure}
\clearpage
\subsection{Regions}
Looking at Figs. \ref{Regions_CorrMatrix}, \ref{Regioni_PCA_loadings} and \ref{Regions_FULL_PCA} we find, as for the provinces, a positive correlation between the mean income and the cases. This relation has the same explanation given in the previous paragraph. Furthermore here, since the overall number of tests is available as well as the number of deaths for COVID19, we see that these two normalized quantities are positively correlated with the cases. The causal connection for these three quantities is simple: in order to confirm that a patient is infected a test is required, and a part of the infected patients die of COVID19. The correlation matrix also highlights a positive correlation between the number of people per general practitioner and the number of cases; moreover a higher number of medical guard visits is weakly negatively correlated with the number of cases and the mortality. The causal reason for this statistical correlation is currently under discussion, but an interesting perspective, at a journalistic level, was offered in an Italian newspaper \cite{24plus}: the idea of the author is that the general practitioners play the role of sensors/sentinels for the COVID19 cases; when their number and their presence in the cities is reduced, the health service is less able to find new cases and the diffusion of the virus is favoured. On the other side, the quality index LEA, the normalized number of medical guards and the normalized number of public structures seem to play a weaker role. Finally, it is worth noting that, as shown in Fig. \ref{Region_variances}, in the 2D plots \ref{Regioni_PCA_loadings} and \ref{Regions_FULL_PCA} only 26.7 $\%$ of the information is lost.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/Regions_CorrMatrix.pdf}
\caption{The Correlation matrix for the Italian regions accompanied by its heat-map, as generated by the following dataset: the normalized number of medical guards (2017), the normalized number of structures for hospitalization (2017), the normalized number of visits for medical guards (2017), the unemployment ratio (2019), the essential assistance levels (LEA, 2017), the number of normalized tests (24/08/2020), the mean income (2019), the population per general practitioner (2017), the normalized deaths for COVID19 (24/08/2020), and the normalized number of cases (24/08/2020). The colors were assigned in the following descending order: red, blue, green. }
\label{Regions_CorrMatrix}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/Region_variances.pdf}
\caption{Percentage of variance for each component of the dataset concerning the regions. Note that using the first two, 73.3 $\%$ of the variance is captured; as a consequence the information lost is 26.7 $\%$.}
\label{Region_variances}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/Regioni_PCA_loadings.pdf}
\caption{2D plot of the loading vectors for the dataset of regions, normalized on the unitary circle. The features, the date of their collection and the units of measure are the same as in Fig. \ref{Regions_CorrMatrix}}
\label{Regioni_PCA_loadings}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/Regions_FULL_PCA.pdf}
\caption{The dataset of regions plotted on the two principal components, which capture 73.3 $\%$ of the variance. The features, the date of their collection and the units of measure are the same as in Fig. \ref{Regions_CorrMatrix}}
\label{Regions_FULL_PCA}
\end{center}
\end{figure}
\clearpage
\subsection{Countries}
For this dataset it is possible to see, from the inspection of Figs. \ref{World_CorrMatrix}, \ref{World_PCA_loadings} and \ref{World_FULL_PCA}, that among the different features considered for the countries only the GDP per capita seems to be statistically correlated with the cases and the deaths. This fact can be explained with the same argument discussed in section \ref{sub_prov}. In principle a possible objection to this link is that only the richest countries can invest money in the tests needed to confirm a COVID19 case: however, it can be noted from the plots that the normalized health expenditure as well as the UHC index are basically orthogonal with respect to the normalized COVID19 cases and deaths. Moreover it is surprising, on the basis of recent papers, to find that the COVID cases and deaths are orthogonal to the pollution: this may indicate that a smaller cluster should be considered; it is worth noting, however, that in the literature a strong causal relation is far from being validated \cite{contini2020does}. Finally, from the inspection of Fig. \ref{World_variances}, it is possible to point out that in the 2D plots \ref{World_PCA_loadings} and \ref{World_FULL_PCA} the percentage of variance captured is almost 58 $\%$, while 42 $\%$ is lost.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/CorrMatrix_WORLD.pdf}
\caption{The Correlation matrix for a selected set of countries accompanied by its heat-map, as generated by the following dataset: the PM 2.5 concentration ($\mu g/m^{3}$, 2016), the normalized traffic mortality (2016), the normalized pollution mortality, the GDP per capita (2019), the normalized deaths for COVID19 (27/08/2020), the overall number of cases (27/08/2020), the density (2019), the normalised health expenditure (2017), and the Universal Health Coverage Index (2017). The colors were assigned in the following descending order: red, blue, green.}
\label{World_CorrMatrix}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/Variances-PCA_WORLD.pdf}
\caption{Percentage of variance for each component of the dataset concerning the selected countries. Note that using the first two, 57.9 $\%$ of the variance is captured; as a consequence the information lost is 42.1 $\%$.}
\label{World_variances}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/PCA-Loadings_WORLD.pdf}
\caption{2D plot of the loading vectors for the dataset of the selected countries, normalized on the unitary circle. The features, the date of their collection and the units of measure are the same as in Fig. \ref{World_CorrMatrix}}
\label{World_PCA_loadings}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Pic/World_FULLPCA.pdf}
\caption{The dataset of countries plotted on the two principal components, which capture 57.9 $\%$ of the variance. The features, the date of their collection and the units of measure are the same as in Fig. \ref{World_CorrMatrix}}
\label{World_FULL_PCA}
\end{center}
\end{figure}
\clearpage
\section{Conclusions}
The analysis of the different clusters pinpointed the following findings:
\begin{itemize}
\item The normalized numbers of cases and deaths for COVID19 are, for provinces and regions, positively correlated with the mean income, while for the countries the same argument holds for the GDP per capita. As validated by the previous literature (see subsection \ref{sub_prov}), this relation is causal, due to the fact that a higher income increases the connectivity of the agents.
\item The unemployment is negatively correlated with the number of cases and deaths: this effect is entangled with the mean income; thus, for a deeper analysis of the causal relationship, an econometric study that disentangles the mean income and the employment is required.
\item The spread of COVID19 does not seem to be negatively correlated with the quality and quantity of hospitals (LEA, UHC, number of public structures); conversely, the number of general practitioners is negatively correlated with the number of cases and deaths (a weaker negative correlation was also found for the normalized medical guards). This can be due to the fact that a large normalized number of general practitioners is able to detect and isolate the infected agents, and this slows the spread of a disease. However, a deeper analysis of this statistical correlation is required in order to support this argument.
\item The public transport is positively correlated with the normalized new cases: the causal reason is that during travel the agents increase their interactions with other agents, thus the probability of being infected is higher.
\end{itemize}
\section{R code}
\definecolor{light-gray}{gray}{0.95}
\lstset{ columns=fullflexible, basicstyle=\ttfamily, backgroundcolor=\color{light-gray},xleftmargin=0.5cm,frame=lr,framesep=8pt,framerule=0pt,frame=single,breaklines=true, postbreak=\mbox{\textcolor{red}{$\hookrightarrow$}\space}}
\lstinputlisting[language=R]{R_code_unsup.R}
%------------------------------------------------
%----------------------------------------------------------------------------------------
% BIBLIOGRAPHY
%----------------------------------------------------------------------------------------
\renewcommand{\refname}{\spacedlowsmallcaps{References}} % For modifying the bibliography heading
\bibliographystyle{unsrt}
\bibliography{bibliography.bib} % The file containing the bibliography
%----------------------------------------------------------------------------------------
\end{document} | {
"alphanum_fraction": 0.7580538902,
"avg_line_length": 87.2356321839,
"ext": "tex",
"hexsha": "f80951d8ea68a75fc95f17bc2667e43331bd56a3",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "7f037d3b9e6cc9169a40bf5a92e96d4bbb0a49ab",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "marzione00/COVID19_Factor_Analyis-",
"max_forks_repo_path": "Manuscript/Manuscript.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7f037d3b9e6cc9169a40bf5a92e96d4bbb0a49ab",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "marzione00/COVID19_Factor_Analyis-",
"max_issues_repo_path": "Manuscript/Manuscript.tex",
"max_line_length": 2095,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "7f037d3b9e6cc9169a40bf5a92e96d4bbb0a49ab",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "marzione00/COVID19_Factor_Analyis-",
"max_stars_repo_path": "Manuscript/Manuscript.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-23T20:13:52.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-04-23T20:13:52.000Z",
"num_tokens": 6820,
"size": 30358
} |
%\section{1.10 Exercise 10. An open mapping theorem}
\textit{Suppose that X and Y are topological vector spaces,
%
$\dim Y < \infty$,
%
$\Lambda : X \to Y$ is linear, and $\Lambda(X) = Y$.
%
\begin{enumerate}
\item{
Prove that $\Lambda$ is an open mapping.}
\item{
Assume, in addition, that the null space of $\Lambda$ is closed,
and prove that $\Lambda$ is continuous.
}
\end{enumerate}
%
}
%
\begin{proof}
%We discard the trivial case $\dim Y = 0$ then henceforth assume that $\dim Y$
%has positive dimension $n$. \\\\
%
Discard the trivial case $\Lambda = 0$ then assume that %
$\dim Y = n$ %
for some positive $n$. %
Let $e$ range over a base $B$ of $Y$. %
Pick an arbitrary neighborhood $W$ of the origin: %
there then exists $V$, %
a balanced neighborhood of the origin, such that
%
\begin{align}
\label{definition of v}
\underset{
\text{Put } V \text{ exactly } n \text{ time(s)}
}{
\underbrace{V+\cdots +V}
}\subset W,
\end{align}
%
since addition is continuous. %
Moreover, for each $e$,
there exists $x_e$ in $X$ such that
%
$\Lambda(x_e)=e$,
%
simply because $\Lambda$ is onto. So,
%
\begin{align}\label{1_10_sum}
y = \sum_{e} y_e \cdot \Lambda x_e,
\end{align}
%
given any element $y=\sum_{e}y_e\cdot e$ of $Y$. %
As a finite set, $\set{x_e}{e\in B}$ is bounded: %
In particular, there exists a positive scalar $s$ such that
%
\begin{align}
\forall e\in B, x_e \in s \cdot V.
\end{align}
%
Combining this with (\ref{1_10_sum}) shows that
%
\begin{align}
\label{y in sum of lambda V}
y \in \sum_e y_e \cdot s \cdot \Lambda (V).
\end{align}
%
We now come back to (\ref{definition of v}) and so conclude that %
%
\begin{align}
y \in \sum_e \Lambda (V) \subset \Lambda(W)
\end{align}
%
whenever $\magnitude{y_e} < 1/s$ for every $e$ (here we use that $V$ is balanced). Since the set of such $y$ is a neighborhood of the origin in $Y$, this proves (a).\\\\
%
%
To prove (b), assume that the null space
%
$\singleton{\Lambda = 0}$ %
%
is closed and let $f, \pi$ be as in Exercise 1.9, %
%
$\singleton{\Lambda = 0}$ %
%
playing the role of $N$.
%%
% Isomorphism:
% \begin{align}
% & X \to X/N \to Y . \\
% & x \overset{\pi}{\mapsto} \pi x \overset{f}{\mapsto} \Lambda x \nonumber
% \end{align}
%
Since $\Lambda$ is onto, the first isomorphism theorem (see Exercise 1.9)
asserts that
%
$f$ is an isomorphism of $X/N$ onto $Y$.
%
Consequently,
%
\begin{align}
\dim X/N= n.
\end{align}
%
$f$ is then a homeomorphism of
%
$X/N\equiv \C^{n}$
%
onto $Y$; see \citeresultFA{1.21}.
We have thus established that $f$ is continuous: So is $\Lambda = f\circ \pi$.
\end{proof} | {
"alphanum_fraction": 0.6075471698,
"avg_line_length": 24.3119266055,
"ext": "tex",
"hexsha": "c6ebdfa138fccea5dd60b4244bf9a772f50bee71",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4d54af9cab1ce2bf512341cc1f2a0c81d7097754",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "gitcordier/FunctionalAnalysis",
"max_forks_repo_path": "chapter_1/1_10.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4d54af9cab1ce2bf512341cc1f2a0c81d7097754",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "gitcordier/FunctionalAnalysis",
"max_issues_repo_path": "chapter_1/1_10.tex",
"max_line_length": 79,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4d54af9cab1ce2bf512341cc1f2a0c81d7097754",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "gitcordier/FunctionalAnalysis",
"max_stars_repo_path": "chapter_1/1_10.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 898,
"size": 2650
} |
\section{Planar Graphs}
\label{title}
\subsection{Bipartite Graphs}
\label{Bipartite}
\subsection{Hamiltonian Cycles}
\label{ham-cycles}
\subsection{Planar Graphs}
\label{planar}
Euler’s Relationship for any connected planar graph
\[
n - m + r = 2
\]
(Th10.2, p190)
If $G$ is planar with order (i.e.\ number of vertices) $n \ge 3$ and size (number of edges) $m$, then $m \le 3n - 6$.
The boundary of every region must have at least 3 edges.
The claim is obvious for $n = 3$, so assume $n \ge 4$ and $m \ge 3$.
Suppose $G$ is planar and has $r$ regions.
\[ n - m + r = 2 \]
Let $N$ be the total number of arc-region incidences, i.e.\ the sum over all regions of the number of arcs on their boundaries. Since the boundary of every region has at least 3 arcs,
\[ 3r \le N \]
Each arc can only touch two regions, so
\begin{align}
N &\le 2m \\
2 &= n - m + r \\
6 &= 3n - 3m + 3r
\end{align}
but
\[
3r \le 2m
\]
So
\begin{align}
6 &\le 3n - 3m + 2m \\
6 &\le 3n - m \\
m &\le 3n - 6
\end{align}
We can go on to say that if $G$ is a graph of order $n$ at least 3 and size $m$ with $m > 3n - 6$, then $G$ is nonplanar. For $K_5$ that test works; for $K_{3,3}$ it does not.
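Writing out the counts explicitly (a worked check; the numbers below are computed here, not part of the original notes):
\[
K_5:\ n = 5,\ m = 10 > 3n - 6 = 9 \Rightarrow \text{nonplanar};
\qquad
K_{3,3}:\ n = 6,\ m = 9 \le 3n - 6 = 12 \Rightarrow \text{no conclusion}.
\]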
Proof that $K_{3,3}$ is not planar:
Using Euler's relationship with $m = 9$ arcs and $n = 6$ vertices, the number of regions must be $r = 5$.
Pick a region: its boundary must be a cycle and, since the graph is bipartite, it must be an even cycle of length at least 4 (perhaps 6).
With $5$ regions, each bounded by at least $4$ arcs, and each arc separating at most $2$ regions, at least $(5 \cdot 4)/2 = 10$ arcs are required \ldots but we only have 9.
Contradiction.
\subsection{Ore's Theorem}
Let $G$ be a (finite and simple) graph with $n \ge 3$ vertices. We denote by $deg_v$ the degree of a vertex $v$ in $G$, i.e. the number of incident edges in $G$ to $v$. Then, Ore's theorem states that if
$deg_v + deg_w \ge n$ for every pair of distinct non-adjacent vertices $v$ and $w$ of $G$
then $G$ is Hamiltonian.
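As an illustration (this example is an addition, not part of the original notes): in the complete bipartite graph $K_{m,m}$ on $n = 2m \ge 4$ vertices, any two distinct non-adjacent vertices lie in the same part and each has degree $m$, so $deg_v + deg_w = 2m = n$. Ore's theorem therefore guarantees a Hamiltonian cycle, which indeed exists by alternating between the two parts.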
\subsubsection{Proof}
It is equivalent to show that every non-Hamiltonian graph G does not obey the condition.
Accordingly, let $G$ be a graph on $n \ge 3$ vertices that is not Hamiltonian, and let $H$ be formed from $G$ by adding edges one at a time that do not create a Hamiltonian cycle, until no more edges can be added.
Let $x$ and $y$ be any two non-adjacent vertices in $H$. Then adding edge $xy$ to $H$ would create at least one new Hamiltonian cycle, and the edges other than $xy$ in such a cycle must form a Hamiltonian path $v_1 v_2 ... v_n$ in $H$ with $x = v_1$ and $y = v_n$.
For each index $i$ in the range $2 \le i \le n$, consider the two possible edges in $H$ from $v_1$ to $v_i$ and from $v_{i-1}$ to $v_n$. At most one of these two edges can be present in $H$, for otherwise the cycle $v_1 v_2 ... v_{i-1} v_n v_{n-1} ... v_i$ would be a Hamiltonian cycle.
Thus, the total number of edges incident to either $v_1$ or $v_n$ is at most equal to the number of choices of $i$, which is $n-1$. Therefore, $H$ does not obey the property, which requires that this total number of edges $(deg_{v_1} + deg_{v_n})$ be greater than or equal to $n$.
Since the vertex degrees in $G$ are at most equal to the degrees in $H$, it follows that $G$ also does not obey the property.
\footnote{Based on the proof outlined in \url{https://en.wikipedia.org/wiki/Ore\%27s_theorem}.}
| {
"alphanum_fraction": 0.668719611,
"avg_line_length": 33.9010989011,
"ext": "tex",
"hexsha": "215181ab6127e5f844b17d6d0e4c6ed2a56a555a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5b979a11b1161b0612cf964d93363c5896dce39e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "st-pauls-school/discrete-mathematics",
"max_forks_repo_path": "dm4-planar-content.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5b979a11b1161b0612cf964d93363c5896dce39e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "st-pauls-school/discrete-mathematics",
"max_issues_repo_path": "dm4-planar-content.tex",
"max_line_length": 287,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5b979a11b1161b0612cf964d93363c5896dce39e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "st-pauls-school/discrete-mathematics",
"max_stars_repo_path": "dm4-planar-content.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1058,
"size": 3085
} |
\section{Overview}
An automated regression test suite for any \boxlib\ code
exists in {\tt BoxLib/Tools/RegressionTesting/} as {\tt regtest.py}.
The test suite consists of a set of problem definitions (e.g., the
build directory, inputs files, etc.) and source locations for the
building blocks of the application.
When the suite is run the first time, the plotfiles created at the end
of each problem's execution are stored as a benchmark. After this
initialization, each subsequent run of the test suite compares the
current output of the code, level-by-level and zone-by-zone to the
stored benchmarks (using the {\tt fcompare.f90} routine in {\tt
BoxLib/Tools/Postprocessing/F\_Src/}). Any differences are flagged as errors.
A web page report is generated by the test suite and provides a
history of the regression testing. Many different types of tests
are supported, including:
\begin{itemize}
\item Single-processor and parallel tests (MPI and/or OpenMP)
\item compilation-only tests
\item self-tests (problems that determine by themselves whether they ran successfully)
\item testing the checkpoint / restart capability
\item debug-mode tests
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=5.0in]{testsuite}
\caption{\label{fig:test_suite_main} Main test suite results page. Each
row indicates a single test suite run, arranged by date, and each column
indicates a different test problem. }
\end{figure}
\section{Test Suite Inputs File}
The inputs file for the test suite separates the problems into blocks.
The header of a problem block has the form {\tt [problem-name]}.
Beneath each problem block, there are a number of options set for each
problem. A separate heading, {\tt [main]}, is used for the suite-wide
options.
\subsection{Suite-wide options}
An example of the {\tt [main]} block from {\tt Maestro-tests.ini} is:
\begin{lstlisting}
[main]
boxLibDir = /home/zingale/testing/BoxLib/
sourceDir = /home/zingale/testing/MAESTRO/
testTopDir = /home/zingale/testing/
webTopDir = /home/www/Maestro/test-suite/
sourceTree = F_Src
FCOMP = gfortran
numMakeJobs = 4
suiteName = Maestro
reportActiveTestsOnly = 1
extSrcDir = /home/zingale/testing/AstroDev/
extSrcCompString = ASTRODEV_DIR
extraBuildDir = /home/zingale/testing/MAESTRO_Exec/
extraBuildDirCompString = MAESTRO_HOME
# MPIcommand should use the placeholders:
# @host@ to indicate where to put the hostname to run on
# @nprocs@ to indicate where to put the number of processors
# @command@ to indicate where to put the command to run
#
# only tests with useMPI = 1 will run in parallel
# nprocs is problem dependent and specified in the individual problem
# sections.
MPIcommand = mpiexec -n @nprocs@ @command@
# force double precision plotfiles regardless of what the inputs file says
globalAddToExecString = --single_prec_plotfiles F
\end{lstlisting}
The first group of options define the necessary paths:
\begin{itemize}
\item {\tt boxLibDir}: the location of the \boxlib\ source
\item {\tt sourceDir}: the top-level source directory (for \maestro\ in this case)
\item {\tt testTopDir}: the directory that the suite should use as its
root directory for running the tests
\item {\tt webTopDir}: the directory in which to place the web output.
\end{itemize}
The source tree for \boxlib\ (Fortran or C++) and compilers to use are set
as:
\begin{itemize}
\item {\tt sourceTree}: set to {\tt F\_Src} for the Fortran
\boxlib\ framework (appropriate for the \maestro\ example here). For
a C++ \boxlib\ code, we'd set it to {\tt C\_Src}.
\item {\tt FCOMP}: sets the Fortran compiler to use---this will
override what is listed in any {\tt GNUmakefile} to ensure that the
compiler stays consistent in the tests.
\item {\tt COMP}: sets the C++ compiler to use---only needed for C++
\boxlib\ codes
\item {\tt numMakeJobs}: tells the suite how many parallel build
jobs to do at once (through the {\tt make -j N} command)
\end{itemize}
For output, some descriptive options come next:
\begin{itemize}
\item {\tt suiteName}: the prefix to use on output directories---it
does not need to match the program name
\item {\tt reportActiveTestsOnly}: whether to show all the tests on
the web output known to the suite throughout its history, or
only the ones that are currently defined in the input file.
\end{itemize}
Some codes use a collection of git repos to manage the code, and
require additional source repos or build directories. These
options allow for that:
\begin{itemize}
\item {\tt extSrcDir}: an additional git repo containing source code
that should be compiled
\item {\tt extSrcCompString}: the name of an environment variable to
set on the make line to point to the source. Here, by setting
this to {\tt ASTRODEV\_DIR}, the {\tt make} command will have
{\tt ASTRODEV\_DIR=\itshape{extSrcDir}} included, substituting
in the value of {\tt extSrcDir}
\item {\tt extraBuildDir}: an additional git repo that has problem
directories we will compile in
\item {\tt extraBuildDirCompString}: since we are no longer compiling
in the {\tt sourceDir}, we need to tell {\tt Make} where to find the
original source. We set this to the environment variable the {\tt
GNUmakefile} uses to refer to the main source directory. Here,
setting this to {\tt MAESTRO\_HOME} will add {\tt
MAESTRO\_HOME=\itshape{sourceDir}} to the {\tt make} command
\end{itemize}
Finally, there are several parameters that describe how to run the code
\begin{itemize}
\item {\tt MPIcommand}: specify the generic manner in which to run an
MPI program on the target system. If present, the string {\tt
@host@} in the {\tt MPIcommand} will be substituted by the {\tt
MPIhost} string by the test suite. Similarly the {\tt @nprocs@}
string will be substituted by the number of processors, which is set
on a problem-by-problem basis. Finally, the {\tt MPIcommand} should
include the string {\tt @command@}, which is where the
\maestro\ executable and inputs file will be substituted. For
  single processor runs, these options are ignored (an expanded example is shown after this list).
\item {\tt globalAddToExecString}: an option to append to the
  executable string after the executable name and inputs file. For
  example, here we override the parameter {\tt single\_prec\_plotfiles},
  which may be set in the problem's input file, to ensure double
  precision output
\end{itemize}
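For example, with the {\tt MPIcommand} shown above and a test that sets {\tt numprocs = 4}, the placeholders would be expanded into a command of the form below, where {\tt <executable>} and {\tt <inputs file>} simply stand for whatever the suite substitutes for {\tt @command@}:
\begin{verbatim}
mpiexec -n 4 <executable> <inputs file>
\end{verbatim}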
There are a few additional options, including those for sending
e-mail upon test completion---see {\tt Castro-tests.ini} for these.
\subsection{Problem-specific options}
Each problem to be run by the test suite gets its own block. For
example, a test listing for the {\tt test2} problem might look like:
\begin{lstlisting}
[test2]
buildDir = TEST_PROBLEMS/test2/
inputFile = inputs_2d
aux1File = model.hse.cool.coulomb
link1File = helm_table.dat
dim = 2
doVis = 1
visVar = "tfromp"
compileTest = 0
restartTest = 0
useMPI = 1
numprocs = 4
\end{lstlisting}
Here {\tt test2} contained inside the {\tt []} is the name of the
problem, as the test suite will refer to it. The options
for this test are then:
\begin{itemize}
\item {\tt buildDir}: the path beneath {\tt sourceDir} where the {\tt
make} command should be executed.
If instead we are building in the {\tt extraBuildDir}, then we would
set {\tt useExtraBuildDir = 1}, and {\tt buildDir} will be relative
to the extra build source directory.
\item {\tt inputFile}: the inputs file to use with the executable
\item {\tt aux1File}, {\tt aux2File}, {\tt aux3File}: any necessary
  auxiliary files that need to be in the run directory (e.g., for C++
\boxlib, the {\tt probin} file).
Note that {\tt inputFile} and {\tt aux?File} are specified relative
to the {\tt buildDir}.
\item {\tt dim}: the problem's dimensionality.
\item {\tt link1File}, {\tt link2File}, {\tt link3File}: these work
like the {\tt aux?File}s, but instead of copying the specified file
into the run directory, a symbolic link is made.
\item {\tt useMPI}: set to {\tt 1} if we are running parallel with
MPI. In this case, we also need to set {\tt numprocs} to the number
of processors.
\item {\tt useOMP}: set to {\tt 1} if we are running parallel with
OpenMP. In this case, we also need to set {\tt numthreads} to the
number of OpenMP threads
\end{itemize}
Less commonly used options include:
\begin{itemize}
\item {\tt compileTest}: set to {\tt 1} if we only want to test compiling
the code, not running it.
\item {\tt restartTest}: set to {\tt 1} if we wish to test the
checkpoint/restart capability. In this case, we also set {\tt
restartFileNum} to the number of the checkpoint file to restart
from. The suite will run the problem as usual and then restart from
the specified checkpoint and run to completion again. The output
from the initial run will then be compared to the output from the
  restart. In a restart test, there is no stored benchmark. (An illustrative configuration for a compile-only test and a restart test is sketched after this list.)
\item {\tt selfTest}: Some problems don't output plotfiles, but
instead check internally whether they are successful. For these,
set {\tt selfTest} to {\tt 1}, and set {\tt stSuccessString} to the
string output to look for at the end of the test to determine if it
was successful.
\item {\tt doVis}: set to {\tt 1}, this enables simple visualization
to the test suite webpage. You need to set {\tt visVar} to the name
of the plotfile variable to visualize. An image of that field from
the last plotfile will be appended to the problem's test webpage.
\item {\tt compareFile}: Ordinarily, the test suite uses the last
plotfile output to compare to. To force the comparison to a
specific file, set {\tt compareFile} to the name of the file to
compare to.
\item {\tt addToCompileString}: provides a means to override some of
the options in the problem's {\tt GNUmakefile} (e.g.\ use a
different reaction network). Set {\tt addToCompileString} to the
string to add to the {\tt make} line that compiles the problem.
\end{itemize}
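For illustration only, a hypothetical pair of test blocks exercising the {\tt compileTest} and {\tt restartTest} options described above might look as follows (the problem names, build directories and file numbers are invented for this example):
\begin{lstlisting}
# illustrative only -- adapt the names and paths to your own problems
[test3-compile]
buildDir = TEST_PROBLEMS/test3/
inputFile = inputs_3d
dim = 3
compileTest = 1

[test2-restart]
buildDir = TEST_PROBLEMS/test2/
inputFile = inputs_2d
dim = 2
restartTest = 1
restartFileNum = 20
\end{lstlisting}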
\section{Initializing the Test Suite}
The first time you run the test suite there are no benchmark files to
compare to. Once you generate an inputs file, as described above, you
would simply run the suite as:
\begin{verbatim}
./regtest.py --make_benchmarks "initial run" ./Maestro_tests.ini
\end{verbatim}
The string following {\tt --make\_benchmarks} is simply a comment that
will be added to the web report. This command creates three output
directories, using the {\tt suiteName} as the prefix.
\begin{itemize}
\item {\tt suiteName-tests/} is where the tests are run. Each time the
test suite is run, a subdirectory, based on the date, is created,
with a subdirectory for each test. All the files necessary to run
the test are copied into the test subdirectory.
\item {\tt suiteName-web/} is where the web-based reports for the test
are generated. The master webpage is {\tt
suiteName-web/index.html}.
Note: if {\tt webTopDir} is set in the {\tt [main]} option block, we
use that instead
\item {\tt suiteName-benchmarks} is where the test benchmark files are
  stored. These are used for comparison with the current output.
\end{itemize}
\section{Regular Use}
Once the initial benchmarks are created, you can compare the current
version of the code to the stored results by simply doing:
\begin{verbatim}
./regtest.py ./Maestro_tests.ini
\end{verbatim}
This will do a git update in all the source repos (including \boxlib),
generate {\tt ChangeLog} files listing all of the git comments for the
code, build the test comparison tools, and then loop over each test,
building and running the executable and comparing the output to the
benchmarks as required.
Upon completion of all the runs, a web page for this invocation of the
test suite will be generated as well as pages showing the details for
each of the problems run. Test failures indicate that the current
output does not match the stored benchmarks.
There are additional runtime options to run only tests of a certain
dimensionality, those that previously failed, etc. Typing {\tt
./regtest.py} with no options will give a description of these
options.
\section{Updating Benchmarks}
A test failure means that the current version of the code gives a
different answer than the stored benchmark. A test can fail either
because a bug was introduced into the code or a bug was fixed or new
feature introduced.
If a bug was introduced into the code recently, then by examining the
test history you can determine the time period in which the bug was
introduced. The {\tt ChangeLog}s linked to on each test date's webpage
will list all the changes committed to git up to that point, which is
useful for tracking down the bug. Once the bug is fixed, rerunning
the suite should generate a `pass'.
If a bug was fixed or a new feature was introduced, and you are
confident that the latest output is correct, then you can tell the
test suite to update the benchmarks. If you want to do this for all
the test problems, you would do:
\begin{verbatim}
./regtest.py --make_benchmarks "X bug fixed" ./Maestro_tests.ini
\end{verbatim}
where the string after ``{\tt --make\_benchmarks}'' is a note that is listed
on the regression suite web page describing the reason for the benchmark
update. Subsequent runs of the test suite will use the new benchmarks.
If you only want to update the benchmarks of a single test, then you
can use the ``{\tt --single\_test test}'' flag on the commandline, where
{\tt test} is the name of the test to update.
Finally, if you just want to rerun the tests that previously failed,
you can use the {\tt --redo\_failed} option.
| {
"alphanum_fraction": 0.753147877,
"avg_line_length": 40.8982035928,
"ext": "tex",
"hexsha": "5922cea8e6707f48824f1ac854188bb057153dfa",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-08-06T13:09:09.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-06T13:09:09.000Z",
"max_forks_repo_head_hexsha": "d35c11aaa85eed03baa04317c346921a349fdfcf",
"max_forks_repo_licenses": [
"BSD-3-Clause-LBNL"
],
"max_forks_repo_name": "dappelha/BoxLib-omp4-Hackathon2016",
"max_forks_repo_path": "Docs/Regression/test_suite.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d35c11aaa85eed03baa04317c346921a349fdfcf",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause-LBNL"
],
"max_issues_repo_name": "dappelha/BoxLib-omp4-Hackathon2016",
"max_issues_repo_path": "Docs/Regression/test_suite.tex",
"max_line_length": 87,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d35c11aaa85eed03baa04317c346921a349fdfcf",
"max_stars_repo_licenses": [
"BSD-3-Clause-LBNL"
],
"max_stars_repo_name": "dappelha/BoxLib-omp4-Hackathon2016",
"max_stars_repo_path": "Docs/Regression/test_suite.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3520,
"size": 13660
} |
\subsubsection{\stid{2.10} PROTEAS-TUNE - FLACC and MLIR: Creating and Maintaining OpenACC in LLVM/Flang}\label{s:flacc}
\paragraph{Overview}
Heterogeneous and manycore processors (e.g., multicore CPUs, GPUs,
Xeon Phi, etc.) are becoming the de facto architectures for current
HPC platforms and future Exascale platforms. These architectures are
drastically diverse in functionality, performance, programmability,
and scalability, significantly increasing the complexity that ECP app
developers face as they attempt to fully utilize available hardware.
A key enabling technology pursued as part of PROTEAS is OpenACC.
While OpenMP has historically focused on shared-memory multi-core,
OpenACC was launched in 2010 as a portable programming model for
heterogeneous accelerators. Championed by institutions like NVIDIA,
PGI, and ORNL, OpenACC has evolved into one of the most portable and
well recognized programming models for accelerators today.
Despite the importance of OpenACC, the only non-academic open-source OpenACC
compiler cited by the OpenACC website is GCC \cite{openaccOrgTools}.
However, GCC has lagged behind commercial compilers, such as PGI's, in
providing production-quality support for the latest OpenACC specifications
\cite{openACCValidationSuite}. Moreover, GCC is known within the compiler
community to be challenging to extend and, especially within the DOE, is
losing favor to Clang and LLVM for new compiler research and development
efforts.
Recent efforts to build a Fortran counter-part to Clang in LLVM project have
been accelerated and big chunk of the Flang have been upstreamed.
Directive-based programming model are heavily used in Fortran applications ported
to accelerators. Unlike C and C++, Fortran does not have many alternatives.
FLACC proposes to develop a prototype OpenACC 3.0 implementation in Flang based
on MLIR to fill this gap. As the implementation of the OpenMP target offload
feature in Flang does not have a clear path, this work will help in this regard
by sharing code in the MLIR dialects and lowering sections with the Flang and
SOLLVE projects.
\paragraph{Key Challenges}
\begin{enumerate}
\item \textbf{OpenACC Support:} Developing production-quality,
standards-conforming OpenACC compiler and runtime support is a large
undertaking. Complicating that undertaking further is the need for
optimization strategies that are competitive with existing commercial
compilers, such as PGI's, which have been developed over many years
since before the conception of the OpenACC standard.
\item \textbf{MLIR:} Flang and the OpenACC support for it rely on the MLIR
project for the IR. MLIR was upstreamed to
the core LLVM project in early 2020 and is still actively under
development. Flang will be the first core frontend relying on MLIR.
\item \textbf{Runtime:} LLVM does not include an OpenACC runtime but only one
for OpenMP at the moment. This runtime can be generalized to support
missing OpenACC features. This generalization needs to be accepted by the
current OpenMP community.
\item \textbf{OpenMP Stability:} As we plan to generalize the OpenMP runtime to
      support OpenACC, we will also rely on various parts of the runtime that
      already exist. There has been some concern about the stability of the
      current OpenMP runtime implementation, and especially of
      \textit{libomptarget}, which is responsible for the target offload part.
\end{enumerate}
\paragraph{Solution Strategy}
~
\vspace{-1em}
\begin{enumerate}
\item The Flacc design follows an approach similar to that of the OpenMP implementation for
Flang. This design includes the following aspects:
\begin{enumerate}
\item An OpenACC MLIR dialect part of the core MLIR project.
\item A lowering from the Flang AST to a mix of FIR and OpenACC MLIR dialect.
\item A progressive lowering from MLIR to LLVM IR with runtime call.
\end{enumerate}
\end{enumerate}
\paragraph{Recent Progress}
\begin{enumerate}
\item The OpenACC 3.0 parser has been upstreamed to the Flang front-end. It covers the
full specification and also implements the unparsing feature of Flang.
\item Semantic checking for OpenACC 3.0 has also been upstreamed in Flang. While
      implementing this part, a new TableGen backend for directive-based languages
      was contributed upstream. This is used by OpenACC and OpenMP for
both Clang and Flang.
\item Discussed Flacc at various ECP, HPC, and LLVM venues.
\end{enumerate}
\paragraph{Next Steps}
\begin{enumerate}
\item Continue the definition of the OpenACC MLIR dialect and complete the
lowering to it.
\item Integration of Flacc runtime support with OpenMPIRBuilder being developed in the broader ECP Project.
\item Develop mapping of OpenACC runtime calls to OpenMP based on early work in Clacc.
\item Continue adding support for lowering from MLIR to LLVM IR and runtime calls.
\end{enumerate}
| {
"alphanum_fraction": 0.7897830935,
"avg_line_length": 46.1028037383,
"ext": "tex",
"hexsha": "a19f607409bc85c51aee529b34807dd66f69a0fd",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2f7ace1cbbb5ed42e17ece61a10dea5bcce105c9",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "rsth/ECP-ST-CAR-PUBLIC",
"max_forks_repo_path": "projects/2.3.2-Tools/2.3.2.10-PROTEAS-YTUNE/2.3.2.10-FLACC-MLIR.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2f7ace1cbbb5ed42e17ece61a10dea5bcce105c9",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "rsth/ECP-ST-CAR-PUBLIC",
"max_issues_repo_path": "projects/2.3.2-Tools/2.3.2.10-PROTEAS-YTUNE/2.3.2.10-FLACC-MLIR.tex",
"max_line_length": 120,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2f7ace1cbbb5ed42e17ece61a10dea5bcce105c9",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "rsth/ECP-ST-CAR-PUBLIC",
"max_stars_repo_path": "projects/2.3.2-Tools/2.3.2.10-PROTEAS-YTUNE/2.3.2.10-FLACC-MLIR.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1119,
"size": 4933
} |
\documentclass[a4paper,10pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{fullpage}
\usepackage[T1]{fontenc}
\usepackage{textcomp}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\input{inputs/definitions.tex}
\usepackage{natbib}
\usepackage{listings}
\usepackage[title,titletoc,toc]{appendix}
\usepackage[table]{xcolor}
\usepackage[breaklinks=true]{hyperref}
\usepackage{rotating} % To include side ways figures
\usepackage[nottoc,notlof,notlot]{tocbibind} % Put the bibliography in the ToC
\usepackage{booktabs} % for much better looking tables
\usepackage[font={rm,md}]{caption, subfig}
\captionsetup[figure]{labelfont=bf,textfont=it}
\captionsetup[table]{labelfont=bf,textfont=it,width=0.9\linewidth}
\newcommand{\mail}[1]{{\href{mailto:#1}{#1}}}
\newcommand{\httplink}[1]{{\href{#1}{#1}}}
\newcommand{\todo}[1]{\textbf{{\color{red}#1}}}
%%%% FOR FLOW CHARTS
\usepackage{tikz}
\usetikzlibrary{shapes,arrows}
%%%%%%%%%%%%%%%
\usepackage{wrapfig}
\usepackage{graphicx}
\graphicspath{{images/}}
\usepackage{float}%permet de gerer les figures et tableaux [H]--> position fixe
\lstset{ %
% language=C++, % choose the language of the code
% basicstyle=\footnotesize, % the size of the fonts that are used for the code
% numbers=left, % where to put the line-numbers
% numberstyle=\footnotesize, % the size of the fonts that are used for the line-numbers
% stepnumber=1, % the step between two line-numbers. If it is 1 each line will be numbered
% numbersep=5pt, % how far the line-numbers are from the code
% backgroundcolor=\color{white}, % choose the background color. You must add \usepackage{color}
% showspaces=false, % show spaces adding particular underscores
% showstringspaces=false, % underline spaces within strings
% showtabs=false, % show tabs within strings adding particular underscores
frame=single, % adds a frame around the code
% tabsize=2, % sets default tabsize to 2 spaces
% captionpos=b, % sets the caption-position to bottom
% breaklines=true, % sets automatic line breaking
% breakatwhitespace=false, % sets if automatic breaks should only happen at whitespace
% escapeinside={\%*}{*)} % if you want to add a comment within your code
}
%opening
\title{\textbf{SALSA: StrAy Light SimulAtor v0.7}\\\emph{Documentation}
\vspace{2cm}\\
\includegraphics[width=0.6\linewidth]{700_50_mag_125_SAA}
\includegraphics[width=0.2\linewidth]{instrument_large_rot.png}
\vspace{2cm}\\
}
\author{Thibault Kuntzer\footnote{\mail{[email protected]}}\ \ \& Andrea Fortier\footnote{\mail{[email protected]}}}
\date{Version 1 Revision 0 -- \today}
\begin{document}
\maketitle
\clearpage
\emph{Cover image: computer rendering of CHEOPS instrument pointing towards a cumulative observation map for faint stars.}
\vspace{2em}
\tableofcontents
\section{Scope} \label{sec:scope}
Solar photons travel to the Earth's surface and may be reflected off the surface of the Earth back to space. A space-borne observatory bound to an orbit around the Earth will be contaminated by this \emph{Earth stray light} radiation. The amount of such radiation received by an optical detector onboard the observatory will vary with time according to several criteria that correlate notably with the chosen orbit, the period of the year, the design of the telescope itself and the pointing of the satellite.
\paragraph{}
The goal of this code is to get a first-order approximation of the Earth stray light contamination for space-borne instrument design in the first steps of a mission. This tool was -- and still is -- developed to assess whether the design of an instrument could enable a photometric mission to observe faint stars ($M_V\sim12$) for a long period of time and over a spatially large field.
More information on this exoplanet mission, CHEOPS, is available in \S\ref{sec:CHEOPS}. This code and the analysis tools provided are therefore influenced by the tasks at hand. It is designed in a very modular way, which enables the user to develop new modules that fit their needs.
In its current version, the scope of the code extends from the simulation of the stray light contamination to its analysis in terms of ``observable'' sky. The definition of observable is given by criteria on the available time and the deepest measurable magnitude.
\paragraph{}
This software is not designed as a stand-alone analysis tool for this particular aspect of the mission. It needs to be fed with the trajectory of the satellite throughout the epoch (hence enabling complex orbits) as well as the position of the Sun during this period. Those files can be obtained with dedicated software.
The CHEOPS orbit was generated using the commercial \verb=STK= software from the company AGI. A satellite orbiting the Earth -- especially in a Low-Earth Orbit (LEO) -- can only observe some regions of the sky due to the presence of the Earth itself and is also limited by the presence of the Moon or other bright objects in the field. The ``allowed'' regions (or absolute or raw observation maps) must also be computed beforehand.
In the case of CHEOPS, this was done using an ad hoc \verb=MATLAB= script. The transmission of a given photon entering the telescope off-axis is characterised by an axisymmetric function about the line of sight: the Point Source Transmittance (PST). This important function depends on the exact design of the telescope and requires particular attention. Several commercial software packages are available to compute the PST of a given design.
A satellite may have periods during which it is unable to observe -- due for example to the presence of the South Atlantic Anomaly (SAA) -- hence another table describing shutdown phases must be provided. The European Space Agency (ESA) built a tool -- Spenvis -- that can create such tables.
As already mentioned, this must be seen as a first-approximation software for a worst case scenario, because of a simplification: the albedo of the Earth. The exact amount of photons reflected off the surface of the Earth depends on the type of material making up -- or covering -- the surface as well as on the weather. Around the poles, the albedo is higher due to the ice and the snow (around 0.9) while oceans or continents have a much lower albedo (down to 0.1) \citep{Pater2001}. In the code, the Earth is considered as a perfect sphere with an albedo of 1. This choice was made because (i) it greatly simplifies the code, (ii) the weather is impossible to include in this simulation and (iii) a worst case estimate is wanted here. Other limitations are that only \emph{Earth} stray light is included; there is no consideration of the radiation due to the Moon or any other body. The cycle of the Sun is not included in the software, but this is far from being a first-order or even a second-order effect.
\paragraph{}
The outputs of this code are as follows:
\begin{itemize}
\item A file per minute per orbit with the table of observable regions in the sky and the corresponding stray light values;
\item Statistics files about the simulations;
\item Error analysis of adaptive orbit step (see \ref{sec:Usage:Methods});
\item Analysis of the mean and the maximum stray light flux;
\item Movie of the evolution of stray light and corresponding deepest magnitude observable on short time scale;
\item Movie of the available zones in the sky versus a minimum stray light exclusion angle (see \ref{sec:Usage:Methods});
\item Observation capabilities on the sky summed over the whole epoch according to observation rules;
\item Prediction tables of observability of given target in the sky (beta version);
\item Planet transit detection probabilities over the whole epoch.
\end{itemize}
\paragraph{}
The purpose of this document is to explain the usage of SALSA. \S\ref{sec:Installation} describes the installation process, \S\ref{sec:Usage} shows how to use the present code, which was used for a LEO exoplanet transit observation mission -- CHEOPS. The results of this study are summarised in \S\ref{sec:CHEOPS}. \S\ref{sec:Troubleshooting} is an attempt at troubleshooting problems the user could encounter while \S\ref{sec:KnownIssues} lists the known issues of the current version.
\section{Installation} \label{sec:Installation}
\subsection{Download}
SALSA can be downloaded from its GitHub repository: \httplink{https://github.com/kuntzer/SALSA-public}.
\subsection{External Dependencies}
The following software and packages are required for the code to run smoothly, so make sure they are installed before starting the simulations or the analysis. The executables for these external dependencies must all be located in your \verb=PATH=.
\begin{itemize}
\item Python
\begin{itemize}
\item Numpy (version $\geq 1.7$) \& SciPy (\httplink{http://www.scipy.org/})
\item MatPlotLib (\httplink{http://matplotlib.org/})
\item Basemap (\httplink{http://matplotlib.org/basemap/})
\end{itemize}
\item \LaTeX\ (including \verb=texlive-latex-base= and \verb=texlive-fonts-recommended=)
\item \verb=gfortran=
\item \verb=epstopdf= for fancy plots.
\item \verb=pdfcrop= for fancy plots.
\end{itemize}
\subsection{Required Data}
As discussed in \S\ref{sec:scope}, the code is not intended as a stand-alone tool. It requires data on the satellite and on its orbit to work. The following lists the required files as well as their formats and how they can be generated. In the following, \verb=<id>= is an integer indexing the trajectory of the satellite and \verb=<k>= is a free integer variable describing an indexation.
\begin{description}
\item [PIPELINE]
\item [p1 - Trajectory of the satellite] \verb=orbit_<id>.dat=
\\ Description: Table containing the position of the satellite with respect to the centre of the Earth.
\\ Format: $t\ x_\text{sat}\ y_\text{sat}\ z_\text{sat}$ (blank space separators)
\\ Units: $t$ [minutes] from the beginning of the epoch, $x_\text{sat}\ y_\text{sat}\ z_\text{sat}$ [km].
\\ Generation: \verb=STK= (\httplink{http://www.agi.com/})
\item [p2 - Position of the Sun] \verb=sun_<id>.dat=
\\ Description: Table containing the position of the Sun with respect to the centre of the satellite.
\\ Format: $t\ x_\odot^{(\text{sat})}\ y_\odot^{(\text{sat})}\ z_\odot^{(\text{sat})}$ (blank space separators)
\\ Units: $t$ [minutes] from the beginning of the epoch, $x_\odot^{(\text{sat})}\ y_\odot^{(\text{sat})}\ z_\odot^{(\text{sat})}$ [km].
\\ Generation: \verb=STK= (\httplink{http://www.agi.com/})
\item [p3 - Raw Observable Maps] \verb=raw_maps_<id>/orbit_<k>.dat=
\\ Description: Tables containing the allowed observation regions (cells of a grid) in the sky per one orbit and per minute. $k$ runs on all orbits.
\\ Format: $t,\alpha,\delta$
\\ Units: $t$ [minutes] from the beginning of the epoch, $\alpha$ (right ascension), $\delta$ (declination) [rad].
\\ Generation: Ad hoc software that takes into account the constraints (SL exclusion angle, Moon exclusion angle, Solar system bodies, \dots) on the observations.
\item [p4 - PST] \verb=pst.dat=
\\ Description: Table of the PST values.
\\ Format: $\theta\quad PST$
\\ Units: $\theta$ [degree], $PST$ [-]
\\ Generation: Ad hoc software such as \verb=Zeemax= or \verb=TracePro=.
\item [ANALYSIS TOOLS]
\item [a1 - Index of the Orbits] \verb=minute_table_<id>.dat=
\\ Description: Contains a conversion table between orbit $k$ and start and stop time stamps. The keyword \verb=rel= means time since the beginning of the current orbit while \verb=abs= since the beginning of the epoch. Hence $t_\text{rel}$ runs from 0 to one period.
\\ Format: $k_\text{orbit},t_\text{start,rel},t_\text{start,abs}$ [new line] $k_\text{orbit},t_\text{stop,rel},t_\text{stop,abs}$
\\ Units: $k_\text{orbit}$ [-], $t$ [min]
\\ Generation: Ad hoc script that analyses \verb=orbit_<id>.dat=.
\item [a2 - Shutdown Events] \verb=SAA_table_<id>.dat=
\\ Description: Describes when the observations are not possible (notably when over the South Atlantic Anomaly -- SAA).
\\ Format: $t_\text{start},1$[new line]$t_\text{stop},0$
\\ Units: $t_\text{start}, t_\text{stop}$ [min]
\\ Generation: ESA tool Spenvis \httplink{http://www.spenvis.oma.be/}
\item [a3 - Position of the Moon] \verb=moon_<id>.dat=
\\ Description: Table containing the position of the Moon with respect to the centre of the satellite.
\\ Format: $t\ x_m^{(\text{sat})}\ y_m^{(\text{sat})}\ z_m^{(\text{sat})}$ (blank space separators)
\\ Units: $t$ [minutes] from the beginning of the epoch, $x_m^{(\text{sat})}\ y_m^{(\text{sat})}\ z_m^{(\text{sat})}$ [km].
\\ Generation: \verb=STK= (\httplink{http://www.agi.com/})
\end{description}
\subsection{Setup} \label{sec:setup}
The complete code can be seen as two separate codes. The first part, called the \emph{pipeline} in the following, computes the stray light flux on the detector. The second part is the analysis (\emph{Analysis Tools}) of the simulated stray light flux. They can only be run sequentially; this separation increases modularity at the cost of increasing the required time. The pipeline can therefore be set up separately from the analysis tools. Both require some common libraries as well as several strictly-organised pieces of software. The version downloaded should already contain a working folder tree.
\subsubsection{Pipeline}
Several instances of the pipeline can be run simultaneously. Therefore, for the pipeline only, a variable \verb=part/p= is added.
The situation can be summarised as follows (with \verb=id= the orbit id, \verb=SL= the SL exclusion angle, \verb=p= the CPU number):
\input{inputs/folder-tree-pipeline.tex}
In the \verb=resources/= folder, the following files are required:
\begin{itemize}
\item \verb=minute_table_<id>.dat=
\item \verb=sun_<id>.dat=
\item \verb=moon_<id>.dat=
\item \verb=orbit_<id>.dat=
\item \verb=saa_table_<id>.dat=
\end{itemize}
To adjust the wavelength range of the instrument of the satellite, the Solar constant in this band must be modified in \verb=constants=, while the pixel size and the minimal SL exclusion angles must be given in \verb=parameters=. Other characteristics of the detector must be entered in \verb=parameters.py= in the root directory. Those include:
\begin{itemize}
\item \verb=radius_psf= The size of the PSF in pixels
\item \verb=radius_tele= The radius of the telescope in metres
\item \verb=dlambda_over_lambda= The factor $\frac{\delta\lambda}{\lambda}$ (see \S\ref{sec:numerics-functions} for more information).
\end{itemize}
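As a minimal illustration, such a \verb=parameters.py= entry could look as follows (the numerical values are placeholders, not the actual CHEOPS settings):
\begin{lstlisting}
# parameters.py -- detector/telescope characteristics (illustrative values only)
radius_psf = 10.            # size of the modelled top-hat PSF [pixels]
radius_tele = 0.15          # radius of the telescope aperture [m]
dlambda_over_lambda = 0.7   # relative bandwidth (delta lambda / lambda)
\end{lstlisting}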
One important parameter must be set up in the \verb=straylight_<id>_<p>/CODE/parameter= file:
\begin{itemize}
\item \verb=pixel= which is the pixel size in cm$^2$.
\end{itemize}
Further parameters control debugging functions and should be self-explanatory. A grid representing the Earth of at least $1001\times 501$ cells is required for the simulations to converge to a good estimate of the flux.
\subsubsection{Analysis Tools}
Here, at least the following three folders must exist:
\begin{itemize}
\item \verb=<id>_figures/= contains all the figures generated
\item \verb=<id>_flux/= contains all the flux data generated by the pipeline
\item \verb=<id>_misc/= contains all data generated by the analysis
\end{itemize}
In addition, the file \verb=parameters.py= must contain the following information:
\begin{itemize}
\item \verb=last_orbits= a Python dictionary \verb=<id>:<last orbit>=
\item \verb=ppm_threshold= the minimum SNR acceptable in ppm.
\item \verb=resx,resy= the resolution of the rectangular sky grid.
\item \verb=mirror_efficiency= the efficiency of the optical system, i.e.\ the ratio of the number of photons reaching the detector to the number of photons entering the optical system.
\item \verb=SL_post_treat_reduction= efficiency of the stray light removal algorithm in the data reduction pipeline.
\end{itemize}
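The following hypothetical \verb=parameters.py= excerpt illustrates these entries (all values and the id are placeholders):
\begin{lstlisting}
# parameters.py -- analysis tools section (illustrative values only)
last_orbits = {'620': 5283}       # <id>: <last orbit computed>
ppm_threshold = 100               # minimum acceptable SNR [ppm]
resx, resy = 40, 20               # resolution of the rectangular sky grid
mirror_efficiency = 0.7           # photons at detector / photons entering optics
SL_post_treat_reduction = 0.9     # efficiency of the stray light removal algorithm
\end{lstlisting}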
The remaining options are optional and self-explanatory. Further setup of the analysis tools is described in each module. In \verb=resources/constants.py=, there are definitions of the wavelengths used:
\begin{itemize}
\item \verb=wavelenght_visual=
\item \verb=Fv=
\item \verb=Jy=
\end{itemize}
For more information about the variables, refer to \S\ref{sec:numerics-functions}.
\section{Numerical Methods}\label{sec:Usage:Methods}
In this section, the algorithms and the software used and developed are presented. The presentation starts by discussing the calculation of the visible area, which was coded by Luzius Kronig from EPFL/Swiss Space Center, continues by describing the stray light code of Andrea Fortier at Unibe and Thibault Kuntzer, and ends with the description of the various algorithms and techniques developed to derive stray light maps and tables. A flow chart summarises the numerical codes used in Fig. \ref{fig:data-flow}.
\input{inputs/data-flow.tex}
\subsection{Observable Area} \label{sec:numerics-obs}
A visible region in the sky is an area that the satellite can observe at a given time. The geometry of this zone is primarily defined by the position of the satellite relative to objects that could contaminate the image either by direct imaging or by diffuse light such as stray or zodiacal light. Most of the constraints are therefore derived from the pointing direction -- the line of sight -- of the telescope. This paragraph briefly describes the steps leading to the computation of the observable sky.
\paragraph{Constraints} \label{sec:numerics-visibility-constraints}
\begin{wrapfigure}{r}{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{exclusion-angles}
\caption{\emph{Top}. Exclusion angles from the line of sight (LOS) of the telescope. \emph{Bottom.} Close-up for different cases for observations above the dark side of the Earth. All values refer to the CHEOPS mission.}
\vspace{-5em}
\label{fig:exclusion-angles}
\end{center}
\end{wrapfigure}
The first constraints on this area are that it is forbidden to look directly at or close to bright objects, namely the Sun, the Earth and the Moon. Moreover, to reduce the contamination by diffuse light from the Sun, the Earth and the Moon, there are conditions on the minimum angles between the line of sight (LOS) of the telescope and the limb of those objects.
\begin{description}
\item [Earth Occultation] The telescope shall not point to a target whose projected altitude from the surface of the Earth is less than 100 km (thickness of the Earth's atmosphere);
\item [Earth stray light exclusion angle] In order to limit stray light contamination, the minimum angle allowed between the line of sight and any (visible) illuminated part of the Earth limb, the so-called Earth stray light exclusion angle shall be $\alpha_\oplus$;
\item [Sun exclusion half-angle] The Sun must be outside the cone around the line of sight (LOS) of the telescope having a half-angle, the so-called sun exclusion angle, of $\alpha_\odot$;
\item [Moon exclusion half-angle] The bright Moon must not be inside a cone around the line-of-sight of the
telescope having a half-angle, the so-called moon exclusion angle, of $\alpha_m$.
\end{description}
The constraints shape the observable regions; figure \ref{fig:exclusion-angles} depicts those exclusion angles for CHEOPS.
The South Atlantic Anomaly (SAA) also plays an important role, as crossing it requires an instrument shutdown.
The maps of the visible regions in the sky are generated without taking the SAA into account and the restriction is applied afterwards during the analysis. If the SAA were already included in the observability maps, it would be impossible to perform the adaptive orbit step described in the next section. Indeed, the crossing of the SAA does not always occur at the same time in the orbit, and the time spent over the SAA changes as well.
\paragraph{Coordinate System.} The observable region in the sky must be expressed with reference to the satellite. To describe the system (Sun, Earth, Moon, satellite) a common coordinate system must be chosen. The chosen frame is the International Celestial Reference Frame (ICRF). In order to perform the computations, the centre of this coordinate system is translated to the satellite. Therefore, all positions are given relative to the satellite and not the Earth.
\paragraph{Discretisation in Space.}
The sky is simulated by a grid of points described by the following relationships:
\begin{eqnarray}
%\begin{array}{l}
\alpha_g(i) = \frac{\Delta \alpha}{2}+i\Delta\alpha,\ i=\{0,\dots,n-1\} \nonumber\\ % \in [0,2\pi]
\delta_g(j) = -\frac{\pi}{2} + \frac{\Delta \delta}{2}+j\Delta\delta,\ j=\{0,\dots,m-1\} % \in [-\frac{\pi}{2},\frac{\pi}{2}]
%\end{array}
\label{eq:grid}
\end{eqnarray}
where $n,m$ describe the resolution of the grid. This grid yields cells on the sky of $\Delta \alpha \times \Delta \delta$, or $9^\circ \times 9^\circ$. Fairly large cells have to be used because the computation time of the different steps scales at least as $\mathcal{O}(n\cdot m)$.
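A minimal sketch of this grid construction, directly transcribing equation \ref{eq:grid} (assuming \verb=numpy= is available; names are illustrative):
\begin{lstlisting}
import numpy as np

def sky_grid(n, m):
    """Cell centres of the right ascension/declination grid of eq. (grid)."""
    dalpha = 2. * np.pi / n
    ddelta = np.pi / m
    alpha_g = dalpha / 2. + np.arange(n) * dalpha                 # in [0, 2 pi)
    delta_g = -np.pi / 2. + ddelta / 2. + np.arange(m) * ddelta   # in (-pi/2, pi/2)
    return alpha_g, delta_g

# 9 deg x 9 deg cells correspond to n = 40, m = 20
alpha_g, delta_g = sky_grid(40, 20)
\end{lstlisting}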
\paragraph{Discretisation in Time.}
The orbital period of the satellite depends on the altitude through Kepler's third law:
$$ P = 2\pi \sqrt{\frac{a^3}{Gm_\oplus}} $$
The observability maps will be used to plan the observations and therefore it is better to use the time in minutes rather than the position in a given orbit. The time $t_0=0$ min is the beginning of the epoch.
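For instance, the period of a circular orbit can be evaluated directly from this relation (a sketch; the altitude and the physical constants are indicative values):
\begin{lstlisting}
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
m_earth = 5.972e24   # mass of the Earth [kg]
r_earth = 6371.e3    # mean radius of the Earth [m]

def orbital_period(altitude_km):
    """Period of a circular orbit at the given altitude, in minutes."""
    a = r_earth + altitude_km * 1e3
    return 2. * np.pi * np.sqrt(a**3 / (G * m_earth)) / 60.

print(orbital_period(700.))   # roughly 98-99 minutes
\end{lstlisting}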
\paragraph{Outputs}
The output files must be a list of the right ascension $\alpha$ and the declination $\delta$ of the grid points which are observable. In order to reduce the number of files generated, the visible points are all grouped in one single file per orbit. Hence, the final output files are given in the following format: \verb=t, ra, dec= in, respectively, minutes and radians.
\subsection{Stray Light Code} \label{numercis:stray_light.f}
To compute the Earth stray light that enters the telescope and hits the detector, a dedicated code in \verb=Fortran= was designed by Andrea Fortier and Thibault Kuntzer. This program, named \verb=stray_light.f=, can compute, independently of the observability map code, the stray light contamination at any time and in any direction.
\paragraph{The Steps to Stray Light Evaluation.}
The stray light computation is divided into several steps:
\begin{enumerate}
\item Compute the illuminated part of the Earth;
\item Compute the gradient of the illumination;
\item Compute which parts of the illuminated Earth are visible by the telescope;
\item For every visible region or ``target'', do the following:
\begin{enumerate}
\item Compute the flux of light reflecting off Earth's surface that reaches the telescope;
\item Convolve this signal by the PST to get the flux of Earth stray light at the detector.
\end{enumerate}
\end{enumerate}
Those steps are explained in further detail below. Note that the description of the process is very linear here and does not exactly mirror the implementation, in which numerical optimisations were included.
\paragraph{}
In the software, a few simplifications are made: Firstly, only the \emph{Earth} stray light is computed. Contamination from the Moon is not considered as it is assumed that its exclusion angle is sufficient. Secondly, the Earth is considered to be a perfect sphere, thus neglecting its bulge and hence slightly miscalculating the off-axis angle of an incoming photon.
The surface of this sphere is supposed to be uniform: no distinction is made between oceans and land, the weather is not simulated (therefore thunderstorms and aurorae are not considered) and cities are not modelled. To get an upper estimate of the stray light, the albedo of the Earth is set to 1. The albedo is a multiplicative factor in the calculation of the flux (see \S\ref{sec:numerics-functions} for more details). The value of the averaged albedo ($\sim$ 0.3 -- 0.5) of the Earth could therefore be used.
However, it was chosen to set the albedo to 1 to compute a worst case scenario; this makes little difference compared to a finely tuned value because of the logarithmic behaviour of the final output of the project (as discussed in \S\ref{sec:KnownIssues}).
\paragraph{Illuminated Earth.}
The first step is to describe with enough precision the illuminated part of the Earth. The only interesting part of the Earth at this point is the one in daylight. Indeed, it is considered that the half of the Earth which is in the night does not contribute to stray light. There are thunderstorms, aurorae or even city lights that could contaminate the signal if the LOS were too close to the limb of the Earth.
Those effects, which are extremely complex to simulate, are not integrated in the code, but the constraint to point above the atmosphere with a given minimal altitude is used to ensure that there is no contamination.
The surface of the Earth is represented by a grid for which every cell is determined to be either in the day or in the night depending on the dot product of its normal and the direction of the Sun. The minimum grid that can achieve sufficient accuracy, \emph{i.e.} such that the flux of stray light has converged, is quite large with 1000 by 500 cells.
\paragraph{Illumination Gradient.}
For the cells that are in the sunlight, the intensity $I$ of the light is computed by determining the cosine of the angle at which the light impacts the cell $c$. This depends upon the relative position of the cell and the Sun and can be computed using the following formula:
\begin{equation}
I_c \propto \cos \varphi
\end{equation}
where $\varphi$ is the angle between the direction of the Sun and the normal vector of the cell (See \S\ref{app:great-circle-distance}).
The amount of radiation that arrives at a particular cell is also given by the energy received from the Sun at the distance of the Earth's orbit. Here, the common value of about 1360 W/m$^2$ is not used as it takes the whole spectrum into account. The solar constant must be reduced to the applicable band window. For example, the CHEOPS instrument is sensitive to the range of wavelengths from 400 to 1100 nm and the solar ``constant'' for this interval can be computed to be 880 W/m$^2$.
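A sketch of this illumination computation for one cell (names are illustrative; the value of 880 W/m$^2$ is the band-limited solar constant quoted above):
\begin{lstlisting}
import numpy as np

S_BAND = 880.   # solar constant restricted to the instrument band [W m^-2]

def cell_illumination(normal, sun_direction):
    """Irradiance of a surface cell; zero if the cell is on the night side."""
    n = normal / np.linalg.norm(normal)
    s = sun_direction / np.linalg.norm(sun_direction)
    cos_phi = np.dot(n, s)          # cosine of the Sun incidence angle
    return S_BAND * cos_phi if cos_phi > 0. else 0.
\end{lstlisting}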
\paragraph{Changing Viewpoints.}
The two previous parts can be carried out without knowledge of the orbit at that particular time. Indeed, the satellite plays no role there. This step determines which illuminated cells can be seen by the satellite at any given time. Assuming a position for the satellite, the code finds all cells within a cone defined by all visible tangent directions on the surface of the Earth and flags them as relevant to the stray light computation. See figure \ref{fig:numerics-illuminated-earth} for an example.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.6\linewidth]{map-2}
\caption{Mollweide projection of the illuminated Earth. The circle on top of the maximal illuminated region is the projection of the position of the Sun, the white star-shaped marker symbolises the satellite and the green region is the illuminated Earth seen by the satellite.\label{fig:numerics-illuminated-earth}}
\end{center}
\end{figure}
\paragraph{Flux of Photons at the Telescope.} The flux of photons at the entrance of the telescope (not at the detector or somewhere else in the optical path) depends upon the direction of the line of sight. Let $\theta$ be the angle between the LOS and the photon path. The target star is denoted by $\pmb{r}_\star$, the position of the satellite by $\pmb{r}_s$ and the cell position by $\pmb{r}_c$, all with reference to the centre of the Earth. The direction in which a photon bounces off the cell towards the satellite is defined by the vector $\pmb{v} = \pmb{r}_s-\pmb{r}_c$. The sign of $\cos\theta$ determines whether photons reflected by the cell can hit the telescope:
\begin{eqnarray}
\pmb{r}_\star\cdot \pmb{v} = |\pmb{r}_\star|\cdot |\pmb{v}|\cos\theta \\
\nonumber\\
\cos \theta = \begin{cases} >0&\text{No hit} \\ \leq 0 & \text{photon hit} \end{cases}
\end{eqnarray}
To obtain the value of the total flux of photons, all that is left is to integrate over all possible directions. The surface of the Earth is assumed to be Lambertian\footnote{A Lambertian surface is a surface whose radiance (power per unit solid angle per unit projected source area) is the same independently of the angle of view. Its apparent brightness is the same from any direction.}. This is justified as the albedo is assumed to be constant and because the radiant intensity (power per unit solid angle) depends upon the cosine of the angle of the direction of the radiation leaving the point. The same cosine factor applies to the projected area of the emitting surface and therefore the two cancel out.
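A schematic version of the geometric test and of the sum over the visible cells could read as follows (illustrative only; the actual \verb=Fortran= implementation is organised differently and includes the full radiometric factors):
\begin{lstlisting}
import numpy as np

def flux_at_telescope(r_star, r_sat, cells, intensities, d_omega):
    """Sum the contributions of all illuminated, visible cells.

    r_star, r_sat : positions w.r.t. the centre of the Earth
    cells         : list of cell positions r_c
    intensities   : illumination of each cell (0 on the night side)
    d_omega       : solid angle element subtended by one cell
    """
    total = 0.
    for r_c, I_c in zip(cells, intensities):
        v = r_sat - r_c    # direction cell -> satellite
        cos_theta = np.dot(r_star, v) / (np.linalg.norm(r_star) * np.linalg.norm(v))
        if cos_theta <= 0.:          # photon hit, following the case equation above
            total += I_c * d_omega
    return total
\end{lstlisting}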
\paragraph{Accounting for the Photon Rejection.} Up until this point, the considerations were mostly geometrical. The Earth is assumed to be a perfect sphere which reflects light with the same radiance in any direction. The cells that can be seen by the satellite are defined by its orbit. Hence computing the angles from the LOS to them is easy.
So, at this point in the code, the number of photons reaching the pupil of the telescope is computed, each with an off-axis angle $\theta$. The transmission of the incoming photons through the telescope can be approximated by an axisymmetric function that depends upon the off-axis angle $\theta$ of the radiation: the point source transmission (PST) -- see for example \cite{Breault2010} or \cite{Pompea1995}.
The energy that arrives at the detector is multiplied by the value of the PST as a function of the off-axis angle of the photon.
An interpolation of the PST is performed to fill the gaps in the knowledge of this function, which is sampled with large steps. The flux is also converted to photons per second per pixel. As the stray light is scattered by different parts of the telescope and not transmitted through the optical path of the mirrors and focal planes, its signal is not convolved with the PSF and hits a single pixel.
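The interpolation of the tabulated PST can be sketched as follows (assuming \verb=numpy= and the two-column, whitespace-separated \verb=pst.dat= file described in item p4; the commented weighting line only indicates where the PST enters):
\begin{lstlisting}
import numpy as np

# Tabulated PST, sampled with large steps in the off-axis angle theta [deg]
theta_tab, pst_tab = np.loadtxt('pst.dat', unpack=True)

def pst(theta_deg):
    """Linearly interpolated point source transmission at a given off-axis angle."""
    return np.interp(theta_deg, theta_tab, pst_tab)

# Flux at the detector = flux at the pupil weighted by the PST of each photon:
# flux_detector = flux_pupil * pst(theta_offaxis)
\end{lstlisting}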
\paragraph{Practicalities.} The code was optimised to perform stray light computations for known observability maps and ephemerides for the Sun. It takes as input parameters a PST (\verb=theta [deg],PST=), the positions of the satellite (in the frame of reference of the Earth) and of the Sun in separate files containing \verb=x,y,z,ra,dec,r= with distances in km and angles in radians. Finally, two files describe the target list: one containing the number of targets (to optimise the \verb=Fortran= memory management and file handling) and one listing the targets in two columns: the right ascension and the declination, both in radians.
\subsection{A Pipeline to Compute the Stray Light Flux} \label{sec:numerics-pipeline}
To generate a stray light map of the observable sky at a given time, a \verb=Python= script looks for the corresponding observability map and creates the list of visible points, which are then given as inputs to the stray light calculator along with a few other variables (position of the Sun and the Earth, \dots) as described in \S\ref{numercis:stray_light.f}. To speed up the computation, an adaptive time step is implemented. An orbit is computed every minute; however, as there are around 15 orbits per day in LEO, consecutive orbits are very similar in most cases. Hence the capability of skipping orbits that are too similar. This is done by computing orbit $i$ and then orbit $i+\Delta i$ (with $\Delta i = 10$ by default). If the comparison of the two orbits $(i, i+\Delta i)$ reveals a difference of more than 5\% in stray light flux, then the orbit $i+\Delta i/2$ is computed. Those parameters can be changed.
The algorithm always tries to increase the step size and doubles it if (1) it is smaller than the optimum and (2) the error with respect to the previous orbit is less than the threshold. The maximum bearable error to the next step can be chosen such that it ensures that the \emph{maximal} error in the orbit, in terms of equivalent magnitude, stays below a given value (see \S\ref{sec:numerics-functions}).
This step of generating stray light maps for the whole epoch is the lengthiest one. The time needed to compute a whole year with the adaptive orbit step described above depends on the altitude, as the stray light is less prominent at high altitude. The outputs are one file per minute containing the following characteristics: time, grid point position in radians (right ascension, declination) and the flux of stray light in ph px$^{-1}$ s$^{-1}$. The relatively high number of output files ($\sim 5$--$10\cdot 10^5$ for a whole year) weighs around 1--2 GB and will be analysed in the next steps.
The temporal resolution is chosen to be the same as the observability maps.
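A simplified sketch of the adaptive orbit step logic is given below; \verb=compute_orbit()= is a hypothetical wrapper around the pipeline and \verb=compare_two_orbits()= is described in \S\ref{sec:numerics-functions}. The control flow is illustrative and does not reproduce all details of the actual pipeline.
\begin{lstlisting}
def adaptive_orbit_steps(first, last, error_threshold, max_step=10):
    """Compute orbits, halving the step when the orbit-to-orbit error is too large."""
    k = first
    step = max_step
    compute_orbit(k)                          # always compute the first orbit
    while k < last:
        k_next = min(k + step, last)
        compute_orbit(k_next)
        error = compare_two_orbits(k, k_next)  # maximal relative difference in flux
        if error > error_threshold and step > 1:
            step = max(step // 2, 1)           # refine: the intermediate orbit is next
        else:
            k = k_next
            if error <= error_threshold and step < max_step:
                step = min(step * 2, max_step)  # try to increase the step again
\end{lstlisting}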
\subsection{Analysis of Stray Light Data} \label{sec:numerics-analysis}
To interpret the data, several analysis tools were developed. The scripts range from the very simple counting of computed orbits to a much more elaborate estimation of the probability of observing a transit. A description of some of those tools is provided here.
\paragraph{Computed Orbits.} This is the first logical step -- compulsory in order to proceed to the other functions: check which orbits have been computed. This seems redundant as this list can easily be retrieved from the logging of the previous step when computing the stray light maps. However, in order to speed up the computations, most of the stray light data can be generated with several CPUs: not in parallel, but working on different periods in the epoch. To avoid any mistake when merging back the results of the different CPUs, this script tests whether minutes 0, 20 and 60 were computed. As the SAA (shutdown periods) is not considered at this stage, each of those minutes had to be computed. If any is missing, the orbit is considered not computed.
\paragraph{Analysis of the Error Evolution.} The second step is to check that the error from one orbit to the next is not larger than the limit or, if it is, that every orbit was computed. This operation is rather slow as the code must compare the minutes pairwise in each orbit and, due to the non-integer period, several combinations must be tried to find the best fit.
\paragraph{Analysis of the Stray Light Flux.} From this point forward, the data is considered robust if the two previous steps did not point out erroneous behaviour. The stray light flux depends upon the number of illuminated Earth cells seen and their illumination. In order to characterise those changes, the stray light flux is analysed through four different quantities:
\begin{enumerate}
\item The \emph{maximal value}: the maximal value of the flux for a given orbit;
\item The \emph{maximal mean}: taking the maximum value of the flux for one minute, repeating for every minute in the orbit and averaging;
\item The \emph{twice-averaged mean}: taking the average of the flux at a particular moment in the orbit, repeating for every minute in the orbit and taking the average on those averages per minute;
\item The \emph{evolution in one orbit} of the stray light in one direction.
% \item The \emph{direction of maximal flux}: finding out which where is the .
\end{enumerate}
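For a flux array with one row per minute of the orbit and one column per visible grid point, the first three quantities can be sketched as follows (assuming \verb=numpy=; the data layout is illustrative):
\begin{lstlisting}
import numpy as np

def orbit_statistics(flux):
    """flux[t, p]: stray light flux at minute t for visible grid point p."""
    maximal_value = flux.max()                  # maximum over the whole orbit
    maximal_mean = flux.max(axis=1).mean()      # per-minute maxima, then averaged
    twice_averaged = flux.mean(axis=1).mean()   # per-minute means, then averaged
    return maximal_value, maximal_mean, twice_averaged
\end{lstlisting}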
\paragraph{Stray Light Flux Maps.} The list of the visible targets is completed by the stray light flux. Therefore, a projection of the targets can be made on a map with a colour assigned to the stray light. A conversion to limiting magnitude can be made to represent how faint the observed stars can be given a certain SNR (see \S\ref{sec:numerics-functions}). Those maps have the advantage -- besides being aesthetically pleasing -- of showing the varying visible regions in the sky and the effect of the relative alignment of the Earth and the Sun.
\paragraph{Cumulative Observation Maps.} The cumulative observation time is a value that is computed over a long period of time. There is a minimum observation time below which an observation is not considered worthwhile. On top of this minimal observation time, the telescope takes time to align its LOS with the star: the acquisition time.
Those maps give information about which regions of the sky are invisible, which can be observed the longest and the effects of the constraints and the SAA. Those maps are relatively long to establish as the appearance and disappearance times -- which occur many times over the course of the year -- of every grid point must be computed.
The time of observation can be understood in different ways. We propose the following definitions:
\begin{description}
\item [Cumulative observation time] describes how much time the satellite can observe in one orbit, whose start and end are arbitrarily defined. Interruptions in the cumulative observation time are acceptable. In this document, it is often shortened to ``observation time''.
\item [Consecutive observation time] describes how much time the satellite can observe without interruptions in the observations. If needed, arbitrary start and end times of the orbits can be defined.
\item [Accumulated observation time] is the total amount of either cumulative or consecutive time.
\item [Period of observation] represents the number of days a target is observable. It does not describe the observable time, but rather how many days the target can be followed.
\end{description}
Observation maps are produced for the two definitions (cumulative and consecutive observation time). Subtractions of two maps, to see the difference between two observational strategies, are also possible.
\paragraph {Probability of transit detection} SALSA was developed for a mission whose goal is to characterise planets by observing their transits. This tool aims at estimating the probability of transit detection given the absolute observation regions and the stray light contamination. It is limited to comparing the transit duration with the effective observation possibility; it does not evaluate the possibility of detecting the transit on the basis of its depth. SALSA looks for specific transit patterns. They represent certain families of objects transiting their star. Those families have typical periods of rotation and transit times \citep{redbook}. Their ingress and egress times are supposed to be unknown. SALSA assumes that the ingress time is described by a uniform probability distribution. The next step is to find how many of those transit times per rotation around the star are visible. Depending upon the observational conditions, certain objects can be observed for more than the time between two transits and thus will have probabilities of more than one. If an object can be observed non-consecutively and the total observation time is still larger than the time between two transits, the probability is still considered more than one.
\paragraph {Almanac of catalogs} Using a similar procedure and providing a catalog of objects, their rise and set times in the effective observational region are computed. Such data can be used for the planning of the observations. This module can then easily be turned into an observation scheduler.
\subsection{Computational Details}\label{sec:numerics-functions}
\paragraph{From Stray Light Flux to Maximum Magnitude Visible.}
The question asked here is: given the stray light flux, what is the faintest target visible? The function \verb=flux2mag= (and its reciprocal \verb=mag2flux=) yields this limiting magnitude (or, inversely, the flux) of the target star. The noise contributed by the stray light relative to the flux of the star must be at most $T$. The flux is related to the magnitude $m$ by:
\begin{equation}
F = 10^{-2/5\cdot m}
\end{equation}
which has units of ergs s$^{-1}$ cm$^{-2}$. The flux of stray light is given in units of ph s$^{-1}$ px$^{-1}$. Moreover, only photons within a certain energy range matter and not the whole spectrum of light. Therefore the flux is now given by:
\begin{equation}
F_\text{sl}(m_V) = \underbrace{F_V J_V\frac{\Delta \lambda}{\lambda}}_{\equiv A}\cdot\underbrace{\left(\frac{R_\text{tel}}{R_\text{PSF}}\right)^2}_{\equiv B}\cdot10^{-2/5\cdot m_V}
\end{equation}
% \todo{see: \httplink{http://astro.wku.edu/strolger/UNITS.txt}}
where $F_V = 3640$ Jy cm$^2$/ergs converts from ergs to Jansky for the band $V$, $J_V$ transforms from energy to the flux of photons for the band $V$: $J_V=1.51\cdot10^7$ photons s$^{-1}$ m$^{-2}$ / Jy. $\frac{\Delta \lambda}{\lambda}$ is the bandwidth of the instrument. Those three parameters, grouped in $A$, depend upon the choice of the band. $R_\text{tel}$ is the radius of the aperture in metres. $R_\text{PSF}$ is the radius of the PSF in pixels. The PSF is modelled here as an axisymmetric step function (top-hat function) such that
\begin{equation} \label{eq:modelled-PSF}
\text{PSF}(r) = \begin{cases} 1 & r\leq R_\text{PSF}\\
0 & r>R_\text{PSF}
\end{cases}
\end{equation}
Those two radii are geometrical parameters; their ratio squared transforms from m$^2$ to pixels and is denoted $B$ for convenience.
The SNR for the stray light is given by $SNR=1/T$ and therefore, the maximum stray light flux tolerable for a given magnitude is:
\begin{equation} \label{eq:num-mag2flux}
F_\text{sl}(m_V) = T\cdot AB\cdot10^{-2/5\cdot m_V}=\frac{AB\cdot10^{-2/5\cdot m_V}}{SNR}
\end{equation}
This computation is implemented in \verb=mag2flux=. The reciprocal function uses equation \ref{eq:num-mag2flux} to express $m_V(F_\text{sl})$, which involves a base-10 logarithm. In order to avoid numerical errors while performing this computation, the stray light flux is clipped to a minimal value of $10^{-40}$ ph s$^{-1}$ px$^{-1}$.
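A sketch of the two conversions, directly transcribing equation \ref{eq:num-mag2flux} (the constants $A$ and $B$ are as defined above; all numerical values are illustrative placeholders, except the clipping value quoted in the text):
\begin{lstlisting}
import numpy as np

F_V = 3640.          # ergs -> Jy conversion for the V band
J_V = 1.51e7         # photons s^-1 m^-2 per Jy in the V band
DL_OVER_L = 0.7      # instrument bandwidth (illustrative value)
R_TEL = 0.15         # telescope aperture radius [m] (illustrative value)
R_PSF = 10.          # top-hat PSF radius [pixels] (illustrative value)
T = 1.e-4            # noise threshold, SNR = 1/T

A = F_V * J_V * DL_OVER_L
B = (R_TEL / R_PSF)**2

def mag2flux(m_V):
    """Maximum tolerable stray light flux [ph s^-1 px^-1] for a star of magnitude m_V."""
    return T * A * B * 10.**(-2. / 5. * m_V)

def flux2mag(F_sl):
    """Faintest target magnitude observable for a given stray light flux."""
    F_sl = np.clip(F_sl, 1.e-40, None)   # avoid taking the log of zero
    return -5. / 2. * np.log10(F_sl / (T * A * B))
\end{lstlisting}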
\paragraph{Comparing Two Orbits.} This function is important as it decides whether the orbit step size in the calculations of the stray light flux must be reduced. It must, therefore, read the data from the two orbits and compare them. This function is quite conservative in the sense that it returns the \emph{maximal} difference in the orbit and not the \emph{mean}. The two orbits (the \verb=reference= and the \verb=current= as defined in the code) are compared pairwise: minute by minute. As mentioned before, this is a non-trivial problem as the orbital period is not an integer number of minutes. Therefore, three combinations are tested using a shift $s=\{0,1,2\}$ in time between the reference and the current orbits.
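A schematic version of the comparison (the helper below is hypothetical; the actual implementation also handles file reading and the exact pairing of minutes):
\begin{lstlisting}
import numpy as np

def compare_two_orbits(reference, current, shifts=(0, 1, 2)):
    """Return the maximal relative difference between two orbits.

    reference, current: 1-d arrays of a per-minute flux statistic.
    Several time shifts are tried because the orbital period is not an
    integer number of minutes; the best (smallest) result is kept.
    """
    best = np.inf
    for s in shifts:
        n = min(len(reference), len(current) - s)
        diff = np.abs(current[s:s + n] - reference[:n]) / (np.abs(reference[:n]) + 1e-40)
        best = min(best, diff.max())    # conservative: maximal difference in the orbit
    return best
\end{lstlisting}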
\paragraph{Great Circle Distance.} \label{app:great-circle-distance}
The angular separation of two points is required several times in the different steps of this project. This separation must be computed reliably and robustly, otherwise several numerical issues arise. Those numerical issues are due to two things: (1) singularities of the inverse trigonometric functions at multiples of $\pi/2$ and (2) the inverse cosine, which handles two close-by points badly \citep{Vincenty1975}. Therefore, to compute the angular distance between two points 1 and 2, the following formula was used:
\begin{equation}
\begin{aligned}
&\varphi =\arctan\left(\frac{\sqrt{a^2+b^2}}{\sin\delta_1\sin\delta_2+\cos\delta_1\cos\delta_2\cos\Delta\alpha}\right) \\
&\text{where}\ a = \cos\delta_2\sin\Delta\alpha, \\
& \ \ \ \ \qquad b = \cos\delta_1\sin\delta_2-\sin\delta_1\cos\delta_2\cos\Delta\alpha
\end{aligned}
\end{equation}
Similarly, the angular separation of two position vectors is not computed simply from the scalar product, but from the equivalent, numerically robust expression using the cross product:
\begin{equation}
\varphi =\text{arctan}\left( \frac{\left| \pmb{r}_1\times \pmb{r}_2 \right|}{\pmb{r}_1\cdot \pmb{r}_2} \right)
\end{equation}
where $\pmb{r}_i$ are vectors that point to the position of the two coordinates.
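Both formulas translate directly into code, for example as follows (assuming \verb=numpy=; the quadrant-aware \verb=arctan2= keeps the result in $[0,\pi]$):
\begin{lstlisting}
import numpy as np

def great_circle_distance(alpha1, delta1, alpha2, delta2):
    """Angular separation of two points given in right ascension/declination [rad]."""
    dalpha = alpha2 - alpha1
    a = np.cos(delta2) * np.sin(dalpha)
    b = np.cos(delta1) * np.sin(delta2) - np.sin(delta1) * np.cos(delta2) * np.cos(dalpha)
    c = np.sin(delta1) * np.sin(delta2) + np.cos(delta1) * np.cos(delta2) * np.cos(dalpha)
    return np.arctan2(np.hypot(a, b), c)

def angular_separation(r1, r2):
    """Angular separation of two position vectors, robust for close-by points."""
    return np.arctan2(np.linalg.norm(np.cross(r1, r2)), np.dot(r1, r2))
\end{lstlisting}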
\section{Usage} \label{sec:Usage}
\subsection{Running the Pipeline}
If the setup was done correctly (\S\ref{sec:setup}), then running the pipeline should be relatively easy. During this part of the simulations, the code can be run in parallel.
However, there is no built-in parallelism management system; therefore, the user should duplicate all the files and folders that are indexed by \verb=<p>= in Fig. \ref{fig:folder-tree-pipeline}.
The file \verb=1-compute_MASTER.py= must be copied, renamed and parametrised to reflect the number of CPUs in use and the number of orbits that the code should simulate.
To run the code simply type:
\begin{lstlisting}
python 1-compute_<id>.py
\end{lstlisting}
After a few moments, the screen should look like Fig. \ref{fig:ex-pipeline}. The amount of time necessary to compute an orbit depends on the complexity of the calculation and, to a lesser extent, on the number of cells that are observable. An orbit very close to the surface of the Earth will take a large amount of time to simulate.
The likelihood that two orbits separated by one step are similar to some degree is also lower than for high orbits, as the flux of stray light radiation is much larger. With the current version of the code, for a circular orbit at 700--800 km and a CHEOPS-like instrument, the code needs of the order of 20 hours to complete the simulation for one year.
Thanks to the adaptive orbital step approach, the number of orbits computed is reduced. However, for the same example and using a maximum step size of 10, the number of resulting files is still of the order of $\sim50\cdot 10^3$. While this is still very manageable, lower orbits or a worse PST may generate up to $\sim500\cdot 10^3$ files, which slows down the analysis.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.7\linewidth]{ex-pipeline}
\end{center}
\caption{Typical screen output of the pipeline\label{fig:ex-pipeline}}
\end{figure}
\subsection{Running the Analysis Tools}
Once the simulations of the SL contamination are finished, the data must be copied to the folder \verb=<id>_flux/=. The only mandatory module to run is the first one (\verb=2-orbits_computed.py=) as it builds the index of the \emph{effectively} computed orbits. The order proposed in the original pack is of increasing complexity of the analysis. It first focuses on the parameters of the simulation and on testing the robustness of the data. The last few modules analyse the data from the point of view of observation capabilities and prepare the planning of the observations.
All the provided modules contain documentation at the beginning of the file explaining the purpose of the file, the parameters, requirements, \dots\ Some modules require additional folders to store the output figures, maps or data. The amount of time required for a given module can be quite long and depends on the length of the epoch and usually on the number of orbits computed during the pipeline phase.
\section{Troubleshooting} \label{sec:Troubleshooting}
\subsection{Pipeline}
\begin{description}
\item [The error is always greater than the threshold] There is a parameter that could influence the error made from one orbit to the next: the parameter \verb=magnitude_max= in \verb=parameters.py=, which is used to clip low values of the SL flux. If this number is set too high, the comparison becomes highly sensitive.
\item [The pipeline refuses to compile the code] There is a problem of permissions for the file\\ \verb=stray_light_<id>/CODE/compile=. Make sure that the permissions of this file are set to 777 (via \verb=chmod=). The compilation of the Fortran code requires the \verb=gfortran= compiler. The automatic compilation can be turned off in \verb=1-compute-MASTER.py=.
\end{description}
\subsection{Analysis Tools}
\begin{description}
\item [Python cannot write data file] This error typically appears when running the error analysis. The installed version of Numpy may be lower than 1.7.0. If this is the case, we recommend installing the latest version (downloaded from the Numpy website) manually. If this does not fix the issue, make sure that the output directory exists.
\item [Replacing flux files] If some part of the simulations was not done correctly, it may be necessary to replace some of the data. Manipulating the flux files by hand can prove very tedious. A script was designed to replace a portion of the data. Running
\begin{lstlisting}
python b-replace_flux.py
\end{lstlisting}
will move the current data used in the analysis to a trash folder, which must \emph{exist} prior to running the code. The new data are copied from the pipeline folder. See the file itself for the parameters.
\end{description}
\section{Known Issues} \label{sec:KnownIssues}
In this section, the known weaknesses of the code are presented. Users who notice other issues are welcome to send their comments to the authors.
\begin{itemize}
\item A constant albedo is used, which does not model correctly the surface of the Earth, the weather or thunderstorms. In the stray light code, the albedo $A$ is taken to be one to yield a worst case scenario. This factor is a multiplicative factor in the computation of the stray light:
\begin{equation}
F_\text{sl} = S_V\int_{\Omega({\text{i,s,}\oplus})} A\cos\vartheta\cos\varphi\text{d}\Omega
\end{equation}
where $\Omega({\text{i,s,}\oplus})$ is the illuminated Earth visible from the satellite, $S_V$ the Solar flux in the relevant band, $\vartheta$ the angle between the LOS and the illuminated surface element, and $\varphi$ the angle between the direction of the Sun and the normal of the surface element. It can be computed that changing the albedo to an averaged value of 0.35 \citep{Pater2001} would push the limiting magnitude about 1.1 magnitudes fainter. Around the poles, however, the albedo is higher due to ice and snow (around 0.9) while oceans or continents have a much lower albedo (down to 0.1).
\item PSF, Aperture and Uncertainty. Changing the aperture of the telescope by 1 cm with a constant PSF increases the flux by $\sim13\%$, i.e.\ 0.13 mag brighter, in the CHEOPS case. To get a more precise value of the equivalent magnitude, a convolution of the signal with the actual PSF would have to be made. A change of $\pm 20\%$ in the equivalent top-hat PSF radius would lead to a change of $^{-0.20}_{+0.25}$, respectively, in the magnitude.
\item The function \verb=compare_two_orbits()= may cause some problems. The cause is that the time 0 in an orbit may not always be located at the same point in the orbit, as the period of the orbit is not an integer number of minutes. The code will try to find the best possible fit by adjusting a shift between the two orbits. It is also sensitive to the parameter \verb=ppm_threshold= set in \verb=parameters.py=.
\item When drawing images of the stray light on the sky (\verb=9-plot_flux.py=), the function \verb=draw_boundaries()= does not work properly if the observable zone is made up of more than one region.
\item The script that calculates the almanac of a single target does not work properly. In general, ephemerides calculation from a target catalogue should be considered a beta implementation.
\item There might be some unexpected differences (up to 1\% in the total sky coverage) in observability maps as shutdowns of the satellite are not taken into account. This can be explained by the way the observable cells in the sky maps are computed.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%% BIBLOGRAPHY %%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \newpage
\bibliographystyle{apj}
% \nocite{*}
\bibliography{documentation.bib}
%%%%%%%%%%%%%%%%%%%%%%%%% appendices %%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \newpage
\begin{appendices}
\section{List of Abbreviations} \label{app:abbreviations}
\begin{table*}[!h]
\begin{center}
\begin{tabular}{lp{13.5cm}} \toprule
% CCD & Charged Coupled Device \\
CHEOPS & CHaracterizing Exoplanets Satellite \\
EPFL & Ecole Polytechnique F\'ed\'erale de Lausanne \\
% ETHZ & Eidgen\"ossische Technische Hochschule Z\"urich \\
% EChO & Exoplanet Characterisation Observatory \\
ESA & European Space Agency \\
% ESO & European Southern Observatory \\
% FOV & Field Of View \\
% FWHM & Full Width at Half Maximum \\
% HARPS & High Accuracy Radial velocity Planetary Search \\
% HST & Hubble Space Telescope \\
% IAU & International Astronomical Union \\
% ICRF & International Celestial Reference Frame \\
INAF & Istituto Nazionale di Astrofisica / Italian national institute for astrophysics\\
LEO & Low Earth Orbit \\
LOS & Line Of Sight \\
% LTAN & Local Time of the Ascending Node\\
% MOA & Microlensing Observations in Astrophysics \\
% NASA & National Aeronautics and Space Administration \\
% OGLE & Optical Gravitational Lensing Experiment \\
% ppm & Parts per million \\
% PSF & Point Spread Function \\
PST & Point Source Transmittance function \\
% RAAN & Right Ascension of the Ascending Node \\
% RMS & Root mean square \\
% RV & Radial Velocity detection technique \\
SAA & South Atlantic Anomaly \\
% S/C & Spacecraft \\
% SDSS & Sloane Digital Sky Survey \\
SL & Stray Light \\
SNR & Signal-to-Noise Ratio \\
% SSO & Sun-Synchronous Orbit \\
% TAPS & Theoretical Astrophysical and Planetary Science Group at UniBe\\
% TESS & Transiting Exoplanet Survey Satellite\\
Unibe & University of Bern \\
% UniGe & University of Geneva \\
\bottomrule
\end{tabular}
\end{center}
\end{table*}
\end{appendices}
\end{document}
\section{\module{locale} ---
Internationalization services.}
\declaremodule{standard}{locale}
\modulesynopsis{Internationalization services.}
The \code{locale} module opens access to the \POSIX{} locale database
and functionality. The \POSIX{} locale mechanism allows programmers
to deal with certain cultural issues in an application, without
requiring the programmer to know all the specifics of each country
where the software is executed.
The \module{locale} module is implemented on top of the
\module{_locale}\refbimodindex{_locale} module, which in turn uses an
ANSI \C{} locale implementation if available.
The \module{locale} module defines the following exception and
functions:
\begin{funcdesc}{setlocale}{category\optional{, value}}
If \var{value} is specified, modifies the locale setting for the
\var{category}. The available categories are listed in the data
description below. The value is the name of a locale. An empty string
specifies the user's default settings. If the modification of the
locale fails, the exception \exception{Error} is
raised. If successful, the new locale setting is returned.
If no \var{value} is specified, the current setting for the
\var{category} is returned.
\function{setlocale()} is not thread safe on most systems. Applications
typically start with a call of
\begin{verbatim}
import locale
locale.setlocale(locale.LC_ALL,"")
\end{verbatim}
This sets the locale for all categories to the user's default setting
(typically specified in the \code{LANG} environment variable). If the
locale is not changed thereafter, using multithreading should not
cause problems.
\end{funcdesc}
\begin{excdesc}{Error}
Exception raised when \function{setlocale()} fails.
\end{excdesc}
\begin{funcdesc}{localeconv}{}
Returns the database of the local conventions as a dictionary. This
dictionary has the following strings as keys:
\begin{itemize}
\item \code{decimal_point} specifies the decimal point used in
floating point number representations for the \code{LC_NUMERIC}
category.
\item \code{grouping} is a sequence of numbers specifying at which
relative positions the \code{thousands_sep} is expected. If the
sequence is terminated with \code{locale.CHAR_MAX}, no further
grouping is performed. If the sequence terminates with a \code{0}, the last
group size is repeatedly used.
\item \code{thousands_sep} is the character used between groups.
\item \code{int_curr_symbol} specifies the international currency
symbol from the \code{LC_MONETARY} category.
\item \code{currency_symbol} is the local currency symbol.
\item \code{mon_decimal_point} is the decimal point used in monetary
values.
\item \code{mon_thousands_sep} is the separator for grouping of
monetary values.
\item \code{mon_grouping} has the same format as the \code{grouping}
key; it is used for monetary values.
\item \code{positive_sign} and \code{negative_sign} give the sign
used for positive and negative monetary quantities.
\item \code{int_frac_digits} and \code{frac_digits} specify the number
of fractional digits used in the international and local formatting
of monetary values.
\item \code{p_cs_precedes} and \code{n_cs_precedes} specify whether
the currency symbol precedes the value for positive or negative
values.
\item \code{p_sep_by_space} and \code{n_sep_by_space} specify
whether there is a space between the positive or negative value and
the currency symbol.
\item \code{p_sign_posn} and \code{n_sign_posn} indicate how the
sign should be placed for positive and negative monetary values.
\end{itemize}
The possible values for \code{p_sign_posn} and \code{n_sign_posn}
are given below.
\begin{tableii}{c|l}{code}{Value}{Explanation}
\lineii{0}{Currency and value are surrounded by parentheses.}
\lineii{1}{The sign should precede the value and currency symbol.}
\lineii{2}{The sign should follow the value and currency symbol.}
\lineii{3}{The sign should immediately precede the value.}
\lineii{4}{The sign should immediately follow the value.}
\lineii{LC_MAX}{Nothing is specified in this locale.}
\end{tableii}
\end{funcdesc}
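For example, the numeric conventions can be queried as follows (the locale name and the resulting values depend on the platform; an \samp{en_US} locale is assumed here):

\begin{verbatim}
>>> import locale
>>> loc = locale.setlocale(locale.LC_ALL, "en_US")  # assumes this locale exists
>>> conv = locale.localeconv()                      # dictionary of conventions
>>> conv["decimal_point"], conv["thousands_sep"]    # typically ('.', ',')
\end{verbatim}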
\begin{funcdesc}{strcoll}{string1,string2}
Compares two strings according to the current \constant{LC_COLLATE}
setting. Like any other comparison function, it returns a negative value, a
positive value, or \code{0}, depending on whether \var{string1}
collates before or after \var{string2} or is equal to it.
\end{funcdesc}
\begin{funcdesc}{strxfrm}{string}
Transforms a string to one that can be used for the built-in function
\function{cmp()}\bifuncindex{cmp}, and still returns locale-aware
results. This function can be used when the same string is compared
repeatedly, e.g. when collating a sequence of strings.
\end{funcdesc}
\begin{funcdesc}{format}{format, val, \optional{grouping\code{ = 0}}}
Formats a number \var{val} according to the current
\constant{LC_NUMERIC} setting. The format follows the conventions of
the \code{\%} operator. For floating point values, the decimal point
is modified if appropriate. If \var{grouping} is true, also takes the
grouping into account.
\end{funcdesc}
\begin{funcdesc}{str}{float}
Formats a floating point number using the same format as the built-in
function \code{str(\var{float})}, but takes the decimal point into
account.
\end{funcdesc}
\begin{funcdesc}{atof}{string}
Converts a string to a floating point number, following the
\constant{LC_NUMERIC} settings.
\end{funcdesc}
\begin{funcdesc}{atoi}{string}
Converts a string to an integer, following the \constant{LC_NUMERIC}
conventions.
\end{funcdesc}
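A short example of the locale-aware numeric functions (the exact formatted strings depend on the locale in use; a locale with comma grouping is assumed):

\begin{verbatim}
>>> import locale
>>> loc = locale.setlocale(locale.LC_NUMERIC, "en_US") # locale with grouping
>>> s = locale.format("%.2f", 1234567.8, 1)            # e.g. '1,234,567.80'
>>> locale.atof(s)                                      # back to 1234567.8
>>> locale.atoi("1,234,567")                            # locale-aware integer parsing
\end{verbatim}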
\begin{datadesc}{LC_CTYPE}
\refstmodindex{string}
Locale category for the character type functions. Depending on the
settings of this category, the functions of module \module{string}
dealing with case change their behaviour.
\end{datadesc}
\begin{datadesc}{LC_COLLATE}
Locale category for sorting strings. The functions
\function{strcoll()} and \function{strxfrm()} of the \module{locale}
module are affected.
\end{datadesc}
\begin{datadesc}{LC_TIME}
Locale category for the formatting of time. The function
\function{time.strftime()} follows these conventions.
\end{datadesc}
\begin{datadesc}{LC_MONETARY}
Locale category for formatting of monetary values. The available
options are available from the \function{localeconv()} function.
\end{datadesc}
\begin{datadesc}{LC_MESSAGES}
Locale category for message display. Python currently does not support
application specific locale-aware messages. Messages displayed by the
operating system, like those returned by \function{os.strerror()}
might be affected by this category.
\end{datadesc}
\begin{datadesc}{LC_NUMERIC}
Locale category for formatting numbers. The functions
\function{format()}, \function{atoi()}, \function{atof()} and
\function{str()} of the \module{locale} module are affected by that
category. All other numeric formatting operations are not affected.
\end{datadesc}
\begin{datadesc}{LC_ALL}
Combination of all locale settings. If this flag is used when the
locale is changed, setting the locale for all categories is
attempted. If that fails for any category, no category is changed at
all. When the locale is retrieved using this flag, a string indicating
the setting for all categories is returned. This string can be later
used to restore the settings.
\end{datadesc}
\begin{datadesc}{CHAR_MAX}
This is a symbolic constant used for different values returned by
\function{localeconv()}.
\end{datadesc}
Example:
\begin{verbatim}
>>> import locale
>>> loc = locale.setlocale(locale.LC_ALL) # get current locale
>>> locale.setlocale(locale.LC_ALL, "de") # use German locale
>>> locale.strcoll("f\344n", "foo") # compare a string containing an umlaut
>>> locale.setlocale(locale.LC_ALL, "") # use user's preferred locale
>>> locale.setlocale(locale.LC_ALL, "C") # use default (C) locale
>>> locale.setlocale(locale.LC_ALL, loc) # restore saved locale
\end{verbatim}
\subsection{Background, details, hints, tips and caveats}
The C standard defines the locale as a program-wide property that may
be relatively expensive to change. On top of that, some
implementations are broken in such a way that frequent locale changes
may cause core dumps. This makes the locale somewhat painful to use
correctly.
Initially, when a program is started, the locale is the \samp{C} locale, no
matter what the user's preferred locale is. The program must
explicitly say that it wants the user's preferred locale settings by
calling \code{setlocale(LC_ALL, "")}.
It is generally a bad idea to call \function{setlocale()} in some library
routine, since as a side effect it affects the entire program. Saving
and restoring it is almost as bad: it is expensive and affects other
threads that happen to run before the settings have been restored.
If, when coding a module for general use, you need a locale
independent version of an operation that is affected by the locale
(e.g. \function{string.lower()}, or certain formats used with
\function{time.strftime()}), you will have to find a way to do it
without using the standard library routine. Even better is convincing
yourself that using locale settings is okay. Only as a last resort
should you document that your module is not compatible with
non-\samp{C} locale settings.
The case conversion functions in the
\module{string}\refstmodindex{string} and
\module{strop}\refbimodindex{strop} modules are affected by the locale
settings. When a call to the \function{setlocale()} function changes
the \constant{LC_CTYPE} settings, the variables
\code{string.lowercase}, \code{string.uppercase} and
\code{string.letters} (and their counterparts in \module{strop}) are
recalculated. Note, however, that code that uses these variables through
`\keyword{from} ... \keyword{import} ...', e.g. \code{from string
import letters}, is not affected by subsequent \function{setlocale()}
calls.
The only way to perform numeric operations according to the locale
is to use the special functions defined by this module:
\function{atof()}, \function{atoi()}, \function{format()},
\function{str()}.
\subsection{For extension writers and programs that embed Python}
\label{embedding-locale}
Extension modules should never call \function{setlocale()}, except to
find out what the current locale is. But since the return value can
only be used portably to restore it, that is not very useful (except
perhaps to find out whether or not the locale is \samp{C}).
When Python is embedded in an application, if the application sets the
locale to something specific before initializing Python, that is
generally okay, and Python will use whatever locale is set,
\emph{except} that the \constant{LC_NUMERIC} locale should always be
\samp{C}.
The \function{setlocale()} function in the \module{locale} module
gives the Python programmer the impression that you can manipulate the
\constant{LC_NUMERIC} locale setting, but this is not the case at the \C{}
level: \C{} code will always find that the \constant{LC_NUMERIC} locale
setting is \samp{C}. This is because too much would break when the
decimal point character is set to something other than a period
(e.g. the Python parser would break). Caveat: threads that run
without holding Python's global interpreter lock may occasionally find
that the numeric locale setting differs; this is because the only
portable way to implement this feature is to set the numeric locale
settings to what the user requests, extract the relevant
characteristics, and then restore the \samp{C} numeric locale.
When Python code uses the \module{locale} module to change the locale,
this also affects the embedding application. If the embedding
application doesn't want this to happen, it should remove the
\module{_locale} extension module (which does all the work) from the
table of built-in modules in the \file{config.c} file, and make sure
that the \module{_locale} module is not accessible as a shared library.
\documentclass[UKenglish]{ifimaster} %% ... or USenglish or norsk or nynorsk
\usepackage[utf8]{inputenc} %% ... or latin1
\usepackage[T1]{fontenc,url}
\urlstyle{sf}
\usepackage{babel,textcomp,csquotes,duomasterforside,varioref,graphicx}
\usepackage[backend=biber,style=numeric-comp]{biblatex}
%\usepackage{subcaption}
\usepackage[hidelinks]{hyperref}
\usepackage{cleveref}
\usepackage{listings}
\usepackage{parskip}
\usepackage{alltt}
\usepackage{subfig}
\usepackage{todonotes}
\usepackage{enumitem}
\usepackage{textcomp}
\usepackage{epigraph}
\usepackage[page,toc]{appendix}
\setlength{\parindent}{2em}
\setcounter{secnumdepth}{4}
\colorlet{punct}{red!60!black}
\definecolor{background}{HTML}{EEEEEE}
\definecolor{delim}{RGB}{20,105,176}
\colorlet{numb}{magenta!60!black}
\lstset{
language=Java,
columns=flexible,
breaklines=true,
keywordstyle=\color{black}\bfseries,
showstringspaces=false,
numbers=left,
numberstyle=\tiny,
stepnumber=4,
numbersep=5pt,
tabsize=4,
}
\lstdefinelanguage{json}{
basicstyle=\normalfont\ttfamily,
numbers=left,
numberstyle=\scriptsize,
stepnumber=1,
numbersep=8pt,
showstringspaces=false,
breaklines=true,
frame=lines,
aboveskip=20pt,
}
\title{Nidra: An Extensible Android Application for Recording, Sharing and Analyzing \\ Breathing Data}
\subtitle{An Engineering Approach}
\author{Jagat Deep Singh}
\addbibresource{sources.bib} %% ... or whatever
\includeonly{
sections/introduction/index,
sections/background/index,
sections/relatedwork/index,
sections/design/index,
sections/implementation/index,
sections/experiments/index,
sections/conclusion/index,
sections/appendix/index
}
\begin{document}
\duoforside[dept={Department of Informatics}, %% ... or your department
program={Programming and Networks}, %% ... or your programme
long] %% ... or long
\frontmatter{}
\chapter*{Abstract}
The vast majority of medical examinations require the patient to be present at the hospital or laboratory. Statistics Norway \cite{ssb} reports that between 2017 and 2018 the cost of diagnosis, treatment, and rehabilitation in the Norwegian municipal health service increased by 7.3 percent. Likewise, the man-years for physicians in the municipal health service increased by 2.4 percent. As such, the growth in medical attendance results in more work and stress induced on the physicians and a higher demand for medical attention from the patients.
Mobile applications that focus on improving healthcare are known as mHealth applications \cite{kumar2013mobile}. An excellent example of an mHealth application is the CESAR project, which aims to use low-cost sensor kits to monitor physiological signals during sleep in order to provide early detection of obstructive sleep apnea (OSA) from home \cite{cesar}. The project facilitates tools that provide a common interface for sensor sources with different Link Layer technologies (e.g., Bluetooth, USB, and ANT+) and sensor-specific protocols designed by the manufacturer.
In this thesis, we extend the project by designing and implementing an Android application for users to record, share, and analyze breathing data collected over an extended period. We also extend one of the tools to support the Flow sensor, a respiratory belt for measuring breathing. The motivation is to collect breathing data that can aid in the analysis and early detection of sleep apnea; however, the application can also be used in other fields of study (e.g., physical activities). Additionally, we facilitate an extensible application that allows future developers to create modules that extend the functionality of the application or enrich the data from the user's records. The name of the application is Nidra---named after the Hindu goddess of sleep.
Experiments and observations show that the application is capable of collecting data over an extended period using the Flow sensor. More specifically, a 9-hour recording uses approximately 1395 mAh of battery capacity, which is well within the average battery capacity of mobile devices. The application is also able to reconnect with the sensor after disconnects; in the worst case observed in the experiments, no more than 40\% of the data was lost. Finally, other developers were successfully able to create modules that integrate with Nidra.
\chapter*{Acknowledgements}
First of all, I would like to thank my supervisor, Professor Dr. Thomas Peter Plagemann, for his guidance throughout the work in this thesis. The discussions between us have given me invaluable insights that have been helpful during the development and writing of this thesis. For that, I am genuinely grateful for the effort and dedication he put into helping me.
Next, I'd like to thank my friends that I've gained during the study, as well as my childhood friends. Their encouragement and motivation have kept me going, through thick and thin.
Above all, I'm truly grateful for my parents' unconditional love and care. Without the support and affection from my Mom, I would not be where I am today. To her, this degree means much more than it does to me.
\vspace*{\fill}
\epigraph{\hfill{\textit{A man is but the product of his thoughts; what he thinks, \\he becomes.}}}{\textbf{Mahatma Gandhi}}
\tableofcontents
\addcontentsline{toc}{chapter}{List of Tables}
\listoftables
\addcontentsline{toc}{chapter}{List of Figures}
\listoffigures
\mainmatter{}
\part{Introduction and Background}
\include{sections/introduction/index}
\include{sections/background/index}
\include{sections/relatedwork/index}
\part{Design and Implementation}
\include{sections/design/index}
\include{sections/implementation/index}
\part{Evaluation and Conclusion}
\include{sections/experiments/index}
\include{sections/conclusion/index}
\backmatter{}
\printbibliography
\begin{appendices}
\include{sections/appendix/index}
\end{appendices}
\end{document}
\documentclass[11pt,]{article}
\usepackage{float}
\usepackage[left=1in,top=1in,right=1in,bottom=1in]{geometry}
\newcommand*{\authorfont}{\fontfamily{phv}\selectfont}
\usepackage[]{mathptmx}
\usepackage[dvipsnames]{xcolor}
\usepackage{listings}
\lstset{
language=R,
basicstyle=\fontfamily{lmvtt}\selectfont\small\color{blue},
columns=fullflexible,
breaklines,
%numbers=left,
%stepnumber=1,
%numbersep=5pt,
% numberstyle=\tiny, %% adjust the size of numbers
morekeywords={assert}
%caption={A simple example},
%captionpos=b, %% sets the caption-position to bottom,
%xleftmargin=\parindent,
}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{abstract}
\renewcommand{\abstractname}{} % clear the title
\renewcommand{\absnamepos}{empty} % originally center
\renewenvironment{abstract}
{{%
\setlength{\leftmargin}{0mm}
\setlength{\rightmargin}{\leftmargin}%
}%
\relax}
{\endlist}
\makeatletter
\def\@maketitle{%
\newpage
% \null
% \vskip 2em%
% \begin{center}%
\let \footnote \thanks
{\fontsize{18}{20}\selectfont\raggedright \setlength{\parindent}{0pt} \@title \par}%
}
%\fi
\makeatother
\setcounter{secnumdepth}{0}
\usepackage{longtable,booktabs}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\title{The Effects of Heavy Drinking, Smoking, Anxiety and Depression on Stroke
Odds: A Cross-Sectional Study }
\author{\Large Robert Wardrup\vspace{0.05in} \newline\normalsize\emph{} \and \Large Elizabeth Murphy\vspace{0.05in} \newline\normalsize\emph{} \and \Large Natalie Requinez\vspace{0.05in} \newline\normalsize\emph{} \and \Large Joshua Averitt\vspace{0.05in} \newline\normalsize\emph{} }
\date{}
\usepackage{titlesec}
\titleformat*{\section}{\normalsize\bfseries}
\titleformat*{\subsection}{\normalsize\itshape}
\titleformat*{\subsubsection}{\normalsize\itshape}
\titleformat*{\paragraph}{\normalsize\itshape}
\titleformat*{\subparagraph}{\normalsize\itshape}
\usepackage{natbib}
\bibliographystyle{apa}
\usepackage[strings]{underscore} % protect underscores in most circumstances
\newtheorem{hypothesis}{Hypothesis}
\usepackage{setspace}
\makeatletter
\@ifpackageloaded{hyperref}{}{%
\ifxetex
\PassOptionsToPackage{hyphens}{url}\usepackage[setpagesize=false, % page size defined by xetex
unicode=false, % unicode breaks when used with xetex
xetex]{hyperref}
\else
\PassOptionsToPackage{hyphens}{url}\usepackage[unicode=true]{hyperref}
\fi
}
\@ifpackageloaded{color}{
\PassOptionsToPackage{usenames,dvipsnames}{color}
}{%
\usepackage[usenames,dvipsnames]{color}
}
\makeatother
\hypersetup{breaklinks=true,
bookmarks=true,
pdfauthor={Robert Wardrup () and Elizabeth Murphy () and Natalie Requinez () and Joshua Averitt ()},
pdfkeywords = {Stroke, Alcohol, Smoking, Anxiety, Depression},
pdftitle={The Effects of Heavy Drinking, Smoking, Anxiety and Depression on Stroke
Odds: A Cross-Sectional Study},
colorlinks=true,
citecolor=blue,
urlcolor=blue,
linkcolor=magenta,
pdfborder={0 0 0}}
\urlstyle{same} % don't use monospace font for urls
% set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
% add tightlist ----------
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\begin{document}
% \pagenumbering{arabic}% resets `page` counter to 1
%
% \maketitle
{% \usefont{T1}{pnc}{m}{n}
\setlength{\parindent}{0pt}
\thispagestyle{plain}
{\fontsize{18}{20}\selectfont\raggedright
\maketitle % title \par
}
{
\vskip 13.5pt\relax \normalsize\fontsize{11}{12}
\textbf{\authorfont Robert Wardrup} \hskip 15pt \emph{\small } \par \textbf{\authorfont Elizabeth Murphy} \hskip 15pt \emph{\small } \par \textbf{\authorfont Natalie Requinez} \hskip 15pt \emph{\small } \par \textbf{\authorfont Joshua Averitt} \hskip 15pt \emph{\small }
}
}
\begin{abstract}
\hbox{\vrule height .2pt width 39.14pc}
\vskip 8.5pt % \small
\noindent Though numerous studies have investigated the relationships between
lifestyle risk factors and stroke outcome, few have done so while
controlling for age and race/ethnicity. Utilizing quasi-binomial
logistic regression, we show that the odds of having had a stroke
increase with being a daily smoker, days of anxiety, and having a
depression diagnosis.
\vskip 8.5pt \noindent \emph{Keywords}: Stroke, Alcohol, Smoking, Anxiety, Depression \par
\hbox{\vrule height .2pt width 39.14pc}
\end{abstract}
\vskip 6.5pt
\doublespacing
\noindent \section{Introduction}\label{introduction}
Strokes affect nearly 800,000 people in the United States every year and
are the fifth leading cause of death in the United States
\citep{AmericanStrokeAssociation2017}. Despite increased efforts on
education and general public knowledge about stroke and other
cardiovascular risk diseases, the prevalence rates have shown little
improvement within the past ten years \citep{Mozaffarian2015}. Previous
research has shown that a variety of controllable risk factors are
strongly associated with stroke outcome. Although behaviors such as
smoking and poor diet have been well-known risk factors, other factors
such as stress, lack of sleep, depression, and anxiety still have
unclear associations
\citep{AmericanStrokeAssociation2017a}.
Stress, lack of sleep, and mental disorders such as depression and
anxiety are issues prominent throughout the United States. Anxiety is
the most common mental illness, affecting approximately 18\% of the U.S.
population, while depression affects nearly 7\% of adults
\citep{AnxietyandDepressionAssociationofAmerica2017}. Those who are
diagnosed with anxiety also have a 50\% chance of being diagnosed with
depression and vice versa
\citep{AnxietyandDepressionAssociationofAmerica2017}. Furthermore,
approximately 30\% of Americans suffer from lack of sleep
\citep{CentersforDiseaseControlandPrevention2015}. Although average
stress levels have decreased since the recession, the average stress
scores for American adults are still at moderate levels and are
higher in lower-income households
\citep{AmericanPsychologicalAssociation2015}. Each of these factors is
a considerable public health issue on its own. If
someone suffers from multiple risk factors, the risk of stroke
could be substantial and possibly result in death.
The goal of this study is to test whether there is an association between
these risk factors, particularly stress, lack of sleep,
depression, and anxiety, and stroke, using data from the Behavioral
Risk Factor Surveillance System (BRFSS). The BRFSS collects data from all 50
states, D.C., and the U.S. territories via landline and
cellphones. It is run in partnership with the CDC and has been useful for
providing large quantities of health-related data
\citep{CentersforDiseaseControlandPrevention2014}.
\section{Methods}\label{methods}
The BRFSS survey data were weighted using the raking method, which is a
two-part methodology to help ensure unbiased results by accounting for
noncoverage and nonresponse bias and forcing the total number of cases
to equal the population estimate of each state in the United States
\citep{CentersforDiseaseControlandPrevention2007a}. Raking works by
repeatedly adjusting weights across a set of selected variables until the
weights converge and the survey population totals are equal to the
census population totals for each selected variable \citep{Fricker1993}.
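As an illustration of the raking step, the sketch below implements a minimal iterative proportional fitting loop in Python. It is not the CDC's weighting procedure or the code used for this analysis; the data-frame layout, variable names, margins, and convergence tolerance are assumptions made for the example.
\begin{lstlisting}[language=Python]
import pandas as pd

def rake(df, weights, margins, max_iter=100, tol=1e-6):
    """Iteratively scale `weights` (a Series aligned with `df`) so that the
    weighted totals match the population `margins` for each raking variable.
    Illustrative sketch only."""
    w = weights.astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for var, targets in margins.items():   # e.g. {"sex": {...}, "age_group": {...}}
            current = w.groupby(df[var]).sum()
            for level, pop_total in targets.items():
                factor = pop_total / current[level]
                w[df[var] == level] *= factor
                max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:                   # margins (approximately) matched
            break
    return w
\end{lstlisting}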
\subsection{Dependent and Independent
Variables}\label{dependent-and-independent-variables}
The dependent variable, stroke outcome, was recoded to a binary variable
for use in a logistic regression model. Depression diagnosis and heavy
drinker status were also recoded into binary variables. For each variable used
in the analysis, \emph{don't know / unsure} and \emph{refused} responses
were dropped. Table 1 lists the variables included in the model.
\begin{longtable}[]{@{}ll@{}}
\caption{Dependent and Independent Variables Used in Logistic Regression
Model}\tabularnewline
\toprule
Variable Name & Variable Type\tabularnewline
\midrule
\endfirsthead
\toprule
Variable Name & Variable Type\tabularnewline
\midrule
\endhead
Stroke Diagnosis & Dichotomous (DV)\tabularnewline
Heavy Alcohol Drinker & Dichotomous\tabularnewline
Depression Diagnosis & Dichotomous\tabularnewline
Sex & Dichotomous\tabularnewline
Days Anxious & Continuous\tabularnewline
Daily Sleep Hours & Continuous\tabularnewline
Smoker Status & Categorical\tabularnewline
Race/Ethnicity & Categorical\tabularnewline
Age Group & Categorical\tabularnewline
\bottomrule
\end{longtable}
A subject was designated as a heavy drinker if they were either an adult
male who reported consuming more than 14 drinks per week or an adult
female who reported consuming more than 7 drinks per week. This value
was calculated after asking the subjects, ``during the past 30 days, on
the days when you drank, about how many drinks did you drink on
average?'' and ``during the past 30 days, how many days per week or
month did you have at least one drink of any alcoholic beverage such as
beer, wine, a malt beverage or liquor?''
\citep{CentersforDiseaseControlandPrevention2007}.
\subsection{Logistic Regression Model}\label{logistic-regression-model}
The first model used in calculating the logistic regression utilized the
standard binomial function to predict the log odds of binary outcome
\(k\). The binomial formula is given as
\[P(k)={n \choose k}p^{k}(1-p)^{n-k},\] where \(k\) represents a stroke
outcome, \(n\) represents the sample size, and \(p\) is the probability
that a stroke will occur. The results of the logistic regression model
were then exponentiated to produce the odds ratios (OR) reported within
this paper.
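As a small numerical illustration of the formula above (not a reproduction of the fitted model), the following Python snippet evaluates \(P(k)\) for arbitrary values of \(n\), \(k\), and \(p\), and shows how a log-odds coefficient is exponentiated into an odds ratio; the coefficient value is a placeholder, not an estimate from our data.
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.special import comb
from scipy.stats import binom

# P(k) = C(n, k) * p^k * (1 - p)^(n - k)
n, k, p = 100, 3, 0.025                    # hypothetical values
p_k_manual = comb(n, k) * p**k * (1 - p)**(n - k)
p_k_scipy = binom.pmf(k, n, p)             # same quantity via scipy
assert np.isclose(p_k_manual, p_k_scipy)

# A fitted log-odds coefficient is exponentiated to obtain an odds ratio.
beta_daily_smoker = 0.519                  # placeholder coefficient
odds_ratio = np.exp(beta_daily_smoker)     # roughly 1.68
print(p_k_manual, odds_ratio)
\end{lstlisting}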
\section{Results}\label{results}
The weighted number of strokes in the United States in 2016 was
8,020,080, representing approximately 2.5\% of the total 2016 United
States population. Among females, 3.20\% reported having had a stroke,
compared with 3.13\% of males.
\begin{longtable}[]{@{}lll@{}}
\caption{Proportion of Stroke Outcomes in Males and
Females}\tabularnewline
\toprule
Diagnosis & Male & Female\tabularnewline
\midrule
\endfirsthead
\toprule
Diagnosis & Male & Female\tabularnewline
\midrule
\endhead
No Stroke & 96.87\% & 96.80\%\tabularnewline
Stroke & 3.13\% & 3.20\%\tabularnewline
\bottomrule
\end{longtable}
The proportion of subjects who have suffered from a stroke by smoker
status is shown in Table 3, and visually in Figure 1. The percentage of
subjects who have had a stroke is highest for subjects who smoke
daily, followed by former smokers, followed by subjects
who smoke some days. A possible explanation for the lower percentage of
stroke in subjects who smoke some days compared with former
smokers is that stroke victims may simply have ceased smoking after
having a stroke.
\begin{longtable}[]{@{}lll@{}}
\caption{Proportion of Stroke Outcomes and Smoker Status}\tabularnewline
\toprule
Smoker Status & No Stroke & Stroke\tabularnewline
\midrule
\endfirsthead
\toprule
Smoker Status & No Stroke & Stroke\tabularnewline
\midrule
\endhead
Never Smoked & 97.8\% & 2.2\%\tabularnewline
Former Smoker & 95.4\% & 4.6\%\tabularnewline
Smokes Some Days & 96.2\% & 3.8\%\tabularnewline
Smokes Daily & 95.0\% & 5.0\%\tabularnewline
\bottomrule
\end{longtable}
The proportions shown in Table 3 are depicted as a mosaic plot in Figure
1. The mosaic plot is a graphical depiction of the proportions within each
table cell, shaded by the difference from the expected observation. Blue
depicts a higher than expected number of observations for that cell, and
red depicts a lower than expected number of observations for that cell.
As shown in Figure 1, there was a greater than expected number of
non-smokers who have never had a stroke, and a greater than expected
number of smokers and former smokers who have had a stroke.
\begin{figure}
\centering
\includegraphics{report_files/figure-latex/figWithCaption3-1.pdf}
\caption{Proportion of Stroke Outcomes and Smoker Status}
\end{figure}
A chi-squared test of independence revealed that there is statistically
significant evidence that stroke outcome and smoking status were not
independent from one another, \(\chi^2\) (3, \(N\)=486,303) = 2352.5,
\(p <\) 0.001.
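The chi-squared test of independence can be reproduced in outline with scipy; the contingency table below uses invented counts chosen only to mirror the structure of Table 3, not the weighted BRFSS data.
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.stats import chi2_contingency

# Rows: smoker status; columns: (no stroke, stroke).  Counts are illustrative only.
table = np.array([
    [195600, 4400],    # never smoked
    [133560, 6440],    # former smoker
    [28860,  1140],    # smokes some days
    [57000,  3000],    # smokes daily
])
chi2, p_value, dof, expected = chi2_contingency(table)
print(chi2, p_value, dof)
\end{lstlisting}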
The mean hours of sleep for those who had a stroke was 6.99 hours and
6.98 for those who did not have a stroke. Though the difference in
average hours of sleep was small, approximately 6 seconds, the
difference nearly achieved statistical significance at \(\alpha=0.05\)
with a p-value of 0.06.
Using the Tukey method, where observations greater or less than
the median plus or minus 1.5 times the IQR are marked as outliers, we
noted that there were a large number of outliers in the reported number
of alcoholic drinks weekly. There were 69,934 observations that were
marked as outliers, amounting to 18.1\% of the pre-weighted data. The
mean of the outliers was 1,465.8 drinks weekly. The mean number of
alcoholic drinks consumed weekly, including outliers, was 1,465.8 drinks
weekly, and 72.83 drinks weekly with the outliers removed. It is unclear
if the CDC accounted for these outliers when calculating the \emph{heavy
drinker} variable.
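A sketch of the outlier rule as described above (median plus or minus 1.5 times the IQR) is given below; note that the more common statement of Tukey's fences uses the first and third quartiles rather than the median. The column name and sample values are assumptions for illustration.
\begin{lstlisting}[language=Python]
import pandas as pd

def flag_outliers(values: pd.Series) -> pd.Series:
    """Flag values outside median +/- 1.5 * IQR, following the rule described
    in the text (the classical Tukey fences use Q1 and Q3 instead)."""
    q1, q3 = values.quantile(0.25), values.quantile(0.75)
    iqr = q3 - q1
    med = values.median()
    return (values < med - 1.5 * iqr) | (values > med + 1.5 * iqr)

# Hypothetical weekly drink counts.
drinks = pd.Series([0, 2, 5, 7, 14, 30, 900], name="drinks_weekly")
outlier = flag_outliers(drinks)
print(drinks[~outlier].mean())     # mean with outliers removed
\end{lstlisting}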
\begin{longtable}[]{@{}lll@{}}
\caption{Mean Values of Selected Risk Factors}\tabularnewline
\toprule
Factor & Outcome & Mean\tabularnewline
\midrule
\endfirsthead
\toprule
Factor & Outcome & Mean\tabularnewline
\midrule
\endhead
Days Anxious & No Stroke & 4.79\tabularnewline
& Stroke & 8.05\tabularnewline
Nightly Sleep Hours & No Stroke & 6.98\tabularnewline
& Stroke & 6.99\tabularnewline
Number Drinks Weekly, With Outliers & No Stroke & 314.12\tabularnewline
& Stroke & 267.97\tabularnewline
Number Drinks Weekly, Outliers Removed & No Stroke &
79.07\tabularnewline
& Stroke & 40.31\tabularnewline
\bottomrule
\end{longtable}
\subsection{Logistic Regression
Model}\label{logistic-regression-model-1}
The binomial logistic regression model resulted in a \(\phi\) value of
2.52, indicating that there is overdispersion in the estimates and the
quasi-binomial function should be used instead.
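The dispersion estimate \(\phi\) is the Pearson chi-squared statistic divided by the residual degrees of freedom of the binomial fit. The statsmodels sketch below illustrates the check on a small synthetic data set; the variable names and data are placeholders, not the BRFSS extract used in this study.
\begin{lstlisting}[language=Python]
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Tiny synthetic stand-in for the analysis data set (names are placeholders).
rng = np.random.default_rng(0)
n = 2000
data = pd.DataFrame({
    "stroke": rng.binomial(1, 0.03, n),
    "daily_smoker": rng.binomial(1, 0.15, n),
    "days_anxious": rng.integers(0, 31, n),
    "depression": rng.binomial(1, 0.20, n),
})

model = smf.glm("stroke ~ daily_smoker + days_anxious + depression",
                data=data, family=sm.families.Binomial())
fit = model.fit()
phi = fit.pearson_chi2 / fit.df_resid   # dispersion; values well above 1 suggest overdispersion
quasi_fit = model.fit(scale="X2")       # quasi-binomial: scale estimated from Pearson X^2
print(round(phi, 2))
print(np.exp(quasi_fit.params))         # odds ratios
\end{lstlisting}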
The results of the quasi-binomial logistic regression (Table 4) indicate
that, controlling for age, race/ethnicity, and sex, several of the
tested risk factors were significantly correlated with stroke outcome.
Being a daily smoker, monthly days with anxiety, and diagnosed
depression were all positively associated with greater odds of stroke.
Daily cigarette smokers had 1.68 times the odds of having had a stroke
compared with those who have never smoked cigarettes. For each additional day
heavily affected by anxiety, the odds of having had a stroke were 1.03
times higher, and for those who have been diagnosed with depression the
odds of stroke were 2.12 times higher. Of the control variables, the older
age groups were the only factors significantly associated
with stroke outcome, with an odds ratio that increased with age.
\begin{longtable}[]{@{}lll@{}}
\caption{Logistic Regression Model Estimating Effects of Risk Factors \&
Demographic Variables on Stroke Outcome Odds.}\tabularnewline
\toprule
Variable & OR (95\% CI) & P-Value\tabularnewline
\midrule
\endfirsthead
\toprule
Variable & OR (95\% CI) & P-Value\tabularnewline
\midrule
\endhead
\textbf{Risk Factors} & &\tabularnewline
~~~\emph{Drinking} & &\tabularnewline
~~~~~Non-Heavy Drinker & 1 (Baseline Factor) &\tabularnewline
~~~~~Heavy Drinker & 0.989 (0.41 - 2.38) & 0.98\tabularnewline
~~~\emph{Smoking} & &\tabularnewline
~~~~~Non-Smoker & 1 (Baseline Factor) &\tabularnewline
~~~~~Former Smoker & 1.24 (0.91 - 1.70) & 0.18\tabularnewline
~~~~~Smoker (Some days) & 1.65 (0.95 - 2.88) & 0.08\tabularnewline
~~~~~Smoker (Daily) & 1.68 (1.12 - 2.53) & 0.01\tabularnewline
~~~\emph{Mental Health} & &\tabularnewline
~~~~~Monthly Days Anxious & 1.03 (1.01 - 1.04) &
\textless{}0.01\tabularnewline
~~~~~No Depression Diagnosis & 1 (Baseline Factor) &\tabularnewline
~~~~~Depression Diagnosis & 2.12 (1.48 - 3.04) &
\textless{}0.01\tabularnewline
~~~~~Nightly Sleep Hours & 0.91 (0.82 - 1.00) & 0.06\tabularnewline
~~~\emph{Sex} & &\tabularnewline
~~~~~Male & 1 (Baseline Factor) &\tabularnewline
~~~~~Female & 0.77 (0.58 - 1.02) & 0.07\tabularnewline
\textbf{Race/Ethnicity} & &\tabularnewline
~~~Other Race / Non-Hispanic & 1 (Baseline Factor) &\tabularnewline
~~~Hispanic & 1.42 (0.43 - 4.70) & 0.57\tabularnewline
~~~Black Only / Non-Hispanic & 2.41 (0.88 - 6.62) & 0.09\tabularnewline
~~~White Only / Non-Hispanic & 1.87 (0.73 - 4.83) & 0.20\tabularnewline
~~~Multiracial / Non-Hispanic & 2.92 (0.90 - 9.50) & 0.08\tabularnewline
\textbf{Age Group} & &\tabularnewline
~~~Age 18 to 24 & 1 (Baseline Factor) &\tabularnewline
~~~Age 25 to 34 & 3.92 (0.80 - 19.66) & 0.10\tabularnewline
~~~Age 35 to 44 & 9.33 (2.00 - 43.51) & \textless{}0.01\tabularnewline
~~~Age 45 to 54 & 12.35 (2.79 - 54.71) & \textless{}0.01\tabularnewline
~~~Age 55 to 64 & 20.93 (4.80 - 91.34) & \textless{}0.01\tabularnewline
~~~Age 65 or Older & 44.87 (10.36 - 194.43) &
\textless{}0.01\tabularnewline
\bottomrule
\end{longtable}
The effect of each significant risk factor, stratified by depression
diagnosis, on the probability of having had a stroke is shown in Figure
2. The probability of a person having a stroke increases with the number
of days a person is significantly affected by anxiety. This probability
is multiplied if a person has been diagnosed with depression. For
non-smokers with no depression diagnosis who did not significantly
suffer from anxiety, the probability of having had a stroke approaches
zero. If the same subject was diagnosed with depression, the probability
of having had a stroke nears 5\%.
The probability of having had a stroke is multiplied in daily smokers,
beginning at approximately 5\% for daily smokers who have not been
diagnosed with depression nor have suffered from significant anxiety.
With a depression diagnosis, the probability originates at approximately
7\% for subjects with zero days of anxiety and increases to 15\% for
subjects who experienced thirty days of significant anxiety. This
relationship is similar for subjects who currently smoke some days.
\begin{figure}
\centering
\includegraphics{report_files/figure-latex/figWithCaption2-1.pdf}
\caption{Effects of Anxious Days on Stroke Outcome, Stratified by
Depression Diagnosis and Smoker Status}
\end{figure}
\section{Conclusion}\label{conclusion}
Our study assessed the association of heavy drinking, smoking, anxiety, and
depression with stroke odds. Using the 2016 Behavioral Risk Factor
Surveillance System dataset from the CDC, we determined that being a
daily smoker or having a depression diagnosis are associated with
increased odds of having had a stroke. Additionally, we found that the
number of days a person experienced significant anxiety was associated
with increased odds of having had a stroke. Neither the number of hours
per night a subject slept, nor being a heavy drinker was associated with
the odds of having had a stroke.
One limitation of this study is its cross-sectional design. Because
stroke prevalence was used, causality and risk cannot be inferred from
our data. Furthermore, we made the assumption that there was no pattern
behind the \emph{unsure} and \emph{refused} survey responses, when there
could have been an underlying reason for these answers. Although
removing these responses still afforded the authors adequate statistical
power, the use of multiple imputation to recover lost data could have
resulted in more robust results.
Another limitation in our study is the possibility of biases inherent in
any prevalence study. Subjects may not always be forthcoming when
answering questions about unhealthy behaviors such as, ``during the past
30 days, how many days per week or per month did you have at least one
drink of any alcoholic beverage\ldots{}'' or ``\ldots{} During the past
30 days, on the days when you drank, how many drinks did you drink on
the average?'' Questions involving mental health may also invoke in the
subject the same desire to ``save face.''
Finally, there were clearly outliers in the reported weekly number of
alcoholic drinks. It is unclear whether or not the CDC accounted for
these outliers in their calculation of heavy drinker status.
Reducing the incidence of stroke is an important goal of public health
that would result in a vast reduction of the health care burden. Our
findings provide researchers and policymakers a set of variables that
should be further investigated in order to elucidate causative
relationships that may occur between these risk factors and stroke
outcome. Future studies should look further at the effect of mental
health diagnosis on stroke odds, as well as how smoking can increase an
individual's odds of a stroke.
\newpage
\singlespacing
\bibliography{../Data/Misc/bios.bib}
\appendix
\section{Code}
%\fontsize{8}{12}\selectfont
%\subsection{Import Data}
%\lstinputlisting{/storage/Research/bios5300/Data/Misc/import.r}
%\rule{\textwidth}{.4pt}
%\subsection{Parse Data}
%\lstinputlisting{/storage/Research/bios5300/Data/Misc/transform.r}
%\rule{\textwidth}{.4pt}
%\subsection{Analyse Data}
%\lstinputlisting{/storage/Research/bios5300/Data/Misc/analyse.r}
\end{document}
"alphanum_fraction": 0.7702586011,
"avg_line_length": 38.9433628319,
"ext": "tex",
"hexsha": "74dde25e88bfe2cc7332b4bb2dda72c4f75c1982",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8030cd806a78d78844c98ad9def62c1df40ba28a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "minorsecond/bios5300",
"max_forks_repo_path": "Rmd/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8030cd806a78d78844c98ad9def62c1df40ba28a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "minorsecond/bios5300",
"max_issues_repo_path": "Rmd/report.tex",
"max_line_length": 291,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8030cd806a78d78844c98ad9def62c1df40ba28a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "minorsecond/bios5300",
"max_stars_repo_path": "Rmd/report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6006,
"size": 22003
} |
\section{Procedural Networks and Link Grammar}
\p{My goal in this section is to incorporate Link Grammar into
a phenomenological and Cognitive Grammar perspective, more
than to offer a neutral exposition of Link Grammar theory.
Therefore I will accept some terminology and exposition
not necessarily originating from the initial
Link Grammar projects (though influenced by
subsequent research, e.g. [expectation]). I also
want to wed Link Grammar to my own semantic intuitions,
set forth earlier, that word-meanings and morphosyntactic
interpretations should be grounded on pre- or para-linguistic
cognitive \q{scripts} that are activated (but not
structurally replicated, the way that according
to truth-theoretic semantics linguistic form
evokes-by-simulating propositional form) by linguistic
phenomena.
}
\p{Link Grammar is, depending on one's perspective, either related
to or a variant of Dependency Grammar, which in turn is contrasted
with \q{phrase structure} grammars (linguists tend to designate
competing schools with acronyms, like \q{DG} for Dependency Grammar
and \q{HPSG} for Head-Driven Phrase Structure Grammar).
Link and Dependency Grammars define syntactic structures in
terms of word-pairs; phrase structure may be implicit to
inter-word relations but is not explicitly modeled by
DG formalisms \mdash{} there is typically no representation
of \q{noun phrases} or \q{verb phrases}, for example.
Phrase structure is instead connoted via how relations
fit together \mdash{} in \q{rescued dogs were fed}, for instance,
the adjectival \i{rescued}-\i{dogs} relation interacts
with the \i{dogs}-\i{fed} (or \i{dogs}-\i{were} plus
\i{were}-\i{fed}) predication, an interaction that in a
phrase-structure paradigm is analyzed as the noun-phrase
\q{rescued dogs} subsuming the noun \q{dogs}. Dependency
analyses often seem more faithful to real-world semantics
because, in practice, phrases do not \i{entirely} subsume
their constituent parts. Linguistic structure is
actually multi-layered, where semantic and morphosyntactic
connections resonate between units within phrases separate and
apart from how linguistic structure is organized into
phrasal units themselves.
}
\p{Except for phrases that coalesce into pseudo-lexemes
or proper names (like \q{United Nations} or \q{Member
of Parliament}), or indeed become shortened to single
words (like \q{waterfall}), we perceive phrases both
as signifying units and as aggregate structures
whose detailed combinative rationale needs contextualization
and interpretation. In short, phrases are not \q{canned}
semantic units but instead are context-sensitive performances
that require interpretive understanding. This interpretive
dimension is arguably better conveyed by DG-style models
whose constituent units are word-relations, as opposed
to phrase-structure grammars which (even if only by notational
practice) give the impression that phrases conform
to predetermined, conventionalized gestalts.
}
\p{While Link and Dependency Grammars are both contrasted
   with phrase-structure grammars, Link Grammar is also
   distinguished from mainstream DG in terms of how
inter-word relations are conceived. Standard DG
recognizes an asymmetry between the elements in word-relations
\mdash{} one element (typically but not exclusively a word) is
treated as \q{dependent} on another. The most common case is where
one word carries greater information than a second, which
in turn adds nuance or detail \mdash{} say, in \q{rescued dogs}
the second word is more essential to the sentence's meaning.
This potentially raises questions about how we
can actually quantify the centrality of one word or another
\mdash{} in many cases, for instance, the conceptual
significance of an adjective is just as trenchant as the
noun which it modifies. In practice, however, the salient
aspect of \q{head} vs \q{dependent} asymmetry is that any
inter-word pair is \q{directed}, and one part of the relation
defined as dependent on another, however this
dependency is understood in a given case.
}
\p{By contrast, Link Grammar does not identify a head-dependent
   asymmetry within inter-word relations. Instead, words
   (along with other lexically significant units, like
certain morphemes, or punctuation/prosodic units) are
seen as forming pairs based on a kind of mutual
incompleteness \mdash{} each word supplying some structural
or signifying aspect that the other lacks. Words, then,
carry with them different varieties of \q{incompleteness}
which prime them to link up with other models. Semantic
and grammatical models then revolve around tracing
the \i{gaps} in information content, or syntactic
acceptability, which \q{pull} words into relation
with other words. This approach also integrates
semantic and syntactic details \mdash{} unlike frameworks
such as Combinatory Categorial Grammar, which
also treats certain words as \q{incomplete} but
identifies word connections only on
surface-level grammatical terms \mdash{} Link Grammar
invites us to explore how semantic and
syntactic \q{completion} intersects and overlaps.
}
\p{Words can be incomplete for different reasons
and in different ways. Both verbs and adjectives
generally need to pair up with nouns to form a
complete idea. On the other hand, nouns may be
incomplete as lexical spotlights on the
extra-linguistic situation: the important point
is not that people feed dogs in general, but that
\i{the rescued} dogs were fed prior to their rescue.
So \i{dogs} \q{needs} \i{rescue} for conceptual
specificity as much as \i{rescue} needs \i{dogs}
for anchoring \mdash{} while also
\i{dogs} needs \i{the} for \i{cognitive} specificity,
because the discourse is about some particular dogs
(presumed known to the addressee), signaled by
the definite article. In other cases,
incompleteness is measured in terms of syntactic
propriety, as in:
\begin{sentenceList}\sentenceItem{} We learned that people fed the rescued dogs.
\sentenceItem{} No-one seriously entertained the belief that
he would govern in a bipartisan manner.
\end{sentenceList}
In both cases the word \q{that} is needed because
a verb, insofar as it forms a complete predicate with
the proper subject and objects, cannot
always be inserted into an enclosing sentence.
\q{People fed the rescued dogs} is complete as a
sentence unto itself, but is not complete as a grammatical
unit when the speaker wishes to reference the
signified predicate as an epistemic object,
something believed, told, disputed, etc. A
connector like \q{that} transforms the
word or words it is paired with syntactically,
converting across part-of-speech boundaries
\mdash{} e.g. converting a proposition to
a noun \mdash{} so that the associated words can be
included into a larger aggregate.
}
\p{The interesting resonance between Link Grammar
and Cognitive Grammar is that this perspective allows
us to analyze how syntactic incompleteness
mirrors semantic incompleteness, and vice-versa.
\q{Incompleteness} can also often be characterized
as \i{expectation}: an adjective \q{expects} a noun
to produce a more tailored and situationally
refined noun (or nominal \q{idea}); a verb expects
a noun, to form a proposition.
Analogously, when we have in discourse an adjective or
a verb we expect a corresponding noun \mdash{} so
via syntactic norms language creates certain expectations
in us and then communicates larger ideas by
how these expectations are met. Is a noun-expectation
fulfilled by a single noun or a complex phrase?
The notion of semantic and syntactic expectations
also coordinates nicely with type-theoretic
semantics; for example, the verb \q{believe}
pairs with a semantic unit that can be
interpreted in epistemic terms \mdash{} not
any noun but a noun of a kind that can be the
subject of propositional attitudes (beliefs,
opinions, assertions, arguments, etc.).
}
\p{Kenneth Holmqvist,
whom I discussed earlier as integrating Conceptual Space Theory
with Cognitive Grammar, made a study of \q{expectation}
in this or a similar sense a central feature of his
doctoral dissertation \cite{HolmqvistDiss}; I'll point
out that there is an implicit resonance between \i{expectations}
and Link Grammar; so Holmqvist's research actually potentially
triangulates between Conceptual Space Theory, Cognitive Grammar,
and Link Grammar.
}
\p{Continuing analysis of \q{that} qua subordinator:
the syntactic incompleteness of propositional
phrases modified by \q{that} can therefore
be traced to the semantic expectations
raised by \q{believe}, and analogous
verbs (opine, argue, claim, testify).
The object of \i{testify}, say, is a statement
of potential fact which we know not
to take as necessarily true or honestly
made (part of the nature of testimony is that
it may be deliberately or accidentally
fallacious). But to properly pair with
\q{testify}, then, phrases must be
semantically reinterpreted as nominalizations
of propositions, rather than as mere linguistic
expression of propositional content via
complete sentences. The \q{epistemic}
context transforms sentential content into
nominal content available for further refinement:
\begin{sentenceList}\sentenceItem{} The Trump campaign colluded with Russia.
\sentenceItem{} Several witnesses testified that
the Trump campaign colluded with Russia.
\sentenceItem{} Reputable newspapers have reported that
the Trump campaign colluded with Russia.
\sentenceItem{} Most Democrats believe that
the Trump campaign colluded with Russia.
\end{sentenceList}
The grammatical stipulation that a modifier like
\i{that} is often necessary in such formulations correlates
with the semantic detail that the \q{claimed},
\q{testified}, or \q{believed} content is not being
directly asserted by the speaker as if in an
unadorned declarative expression, as in the
first sentence.
}
\p{Morphosyntactic transformation similarly models
the correlation between semantic and syntactic
expectation \mdash{} as can be demonstrated by a
variant of the \q{believe} forms, via the phrase
\q{believe in}:
\begin{sentenceList}\sentenceItem{} I believed in Father Christmas.
\sentenceItem{} I believed in Peace on Earth.
\sentenceItem{} I believed in Obama.
\sentenceItem{} I believed in lies.
\end{sentenceList}
Whereas \i{that} (after \i{believe})
\q{nominalizes} propositions, \i{in} reconceives
(type-theoretically we would say \q{coerces})
ordinary nouns into epistemic nouns (compatible
with propositional attitudes). Obama is not an
\i{idea}, but the connector \i{in} triggers an
interprtation where we have to read \q{Obama} as
something believed \mdash{} effectively a type-theoretic
tension resolved by understanding \i{Obama} in this
context to designate either his platform or his
ability to implement it. Interpretive \i{tension}
is a natural corollary to a mismatch in expectations:
\i{believe} expects something epistemic, but the
discourse gives us a proper name. Analogously
\i{budge} expects a brute physical entity in
its simplest meaning, but in \i{Obama wouldn't
budge on reproductive rights} we get a \q{sentient}
noun, and have to read \i{budge} metaphorically.
In short, \i{expectation}, \i{interpretive tension},
and \i{incompleteness} are interlocking facets of
semiotic primitives that gestate into discursive
maneuvers via which ideas are communicated
economically and context-sensitively.
}
\p{Link Grammar, proper, represents only the
most immediate (mostly grammatical) facet of
word links. For sake of discussion,
I will discuss links in general as markers of
\i{mutual dependency} between words, so a
\q{link grammar} is essentially a
\q{bi-directional} Dependency Grammar. Mutual
dependencies manifest syntactic norms and
contextual details that make individual
words inadequate signifying vehicles for
a particular communicating content. This overall
principle becomes concrete in one form via
grammatical relations, which is the layer
modeled by Link Grammar proper.
I have mentioned several ready examples \mdash{}
the syntactic dependency of verbs and adjectives
on nouns for them to enter correctly into
discourse (correlate with a semantic dependency
in the other direction, to shape noun-ideas to the
proper context and signifying intent); also
part-of-speech or \q{subtyping} dependencies
reflecting mandates that (in my examples)
propositional phrases are coerced to nouns or
nouns coerced to \q{epistemic} nouns. Technical
Link Grammars recognize a broad spectrum of
\q{link relationships} \mdash{} between 50 and 100
for different languages. Parsing a sentence
is then a matter of identifying
all of the mutual dependencies \mdash{} at least those
evident on a syntactic level \mdash{} that appear as
inter-word links in the sentence. Phrase
structure may be implicit in some links in
combination \mdash{} for example verb-subject plus
verb-object links generate propositional phrases
\mdash{} but the technical parse is a \q{graph} of
inter-word links rather than a \q{tree} of
phrases ordered hierarchically. The parse-graph
itself is only a provisional reading of the
sentence, and linguistic understanding
exists only insofar as its skeletal outline
is filled out with semantic and situational
details. But the graph layer articulated by
a Link Grammar still provides a useful
intermediate representation, showing
mutual dependencies in their syntactic manifestations
that then point toward their deeper semantic and
situational origins.
}
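\p{To make the graph-of-links picture concrete, the short Python sketch
   below holds a parse as a set of typed word-pair links; the link-type
   labels are invented for illustration and are not the official Link
   Grammar connector inventory.
}
\begin{verbatim}
# A parse as a set of typed word-pair links (labels are illustrative,
# not the official Link Grammar connector names).
words = ["the", "rescued", "dogs", "were", "fed"]

links = {
    (0, 2, "Det"),    # "the"     <-> "dogs"  (determiner link)
    (1, 2, "Adj"),    # "rescued" <-> "dogs"  (adjectival link)
    (2, 3, "Subj"),   # "dogs"    <-> "were"  (subject link)
    (3, 4, "Part"),   # "were"    <-> "fed"   (auxiliary/participle link)
}

def links_of(word_index):
    """Return the links incident to one word: a graph query, not a tree walk."""
    return [(i, j, t) for (i, j, t) in links if word_index in (i, j)]

print(links_of(2))    # "dogs" participates in three links
\end{verbatim}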
\p{For each \i{syntactic} bi-dependency, on this theory,
there is a concordant semantic and signifying
bi-dependency, partly conventionalized as a feature
of the language and partly hewn to the current
discourse context. To leverage Link Grammar
in an overall Cognitive Linguistic environment,
then, we need to examine the semantic relations
that drive syntactic bi-dependency: how the
grammatical structure of one word completing another
is a codification of \i{semantic} mutual dependency.
The \i{semantic} bi-dependencies operate on both
abstract and concrete levels. Abstractly,
it is obvious that an adjective or a verb depends
on a linked noun to complete an idea. This
abstract prototype of bi-dependency then takes
concrete forms in each specific discourse,
acquiring detail and specificity from
situational contexts.
}
\p{The crucial dimension in this theory is
neither abstract nor concrete bi-dependency
but the intersection of the two. The
conventionalized lexical, syntactic, and
morphosyntactic norms of a language present
abstract prototypes of mutual word
dependencies. The concrete instantiation of
these forms \mdash{} via word-pairs whose
surface presentation indicates the
presence of specific link relations
(often with the aid of morphology, agreement,
spoken inflection, and other
morphosyntactic cues) \mdash{} invites us
to consider how an abstract bi-dependency
becomes situationally concretized in the
present, momentary context. In essence,
abstract mutual dependencies are the raw
materials from which situational appraisals
are created. A pairing like \q{rescued dogs} uses
a certain abstract-bidependent prototype
\mdash{} here the double-refinement of a noun and adjective
grounding each other \mdash{} to trigger
the listener's awareness that the speaker's
discourse is centered on one specific aspect of the
dogs (that they were at some point rescued)
with its conceptual corollaries and unstated assumptions
(that, being in need of rescue, they were abandoned,
in danger, and so on). Similarly
the further link to the definite article
\mdash{} \q{\i{the} dogs} \mdash{} evokes the prototype of a
definite article grounding a noun, which
in turn communicates the speaker's beliefs
about the current state of the discourse.
}
\p{This last \q{bi-dependency} deserves further comments,
because nouns more often than not reveal some
dependency on an article: \i{some dog(s)},
\i{the dog(s)}, \i{a dog}, \i{many dogs}.
These pairings paint a picture both through the
choice of article and the presence or absence of a
plural. This picture is partly situational
\mdash{} obviously whether the speaker is talking
about one or multiple dogs is situationally
important \mdash{} but it is also meta-discursive:
selecting the definite article indicates the
speaker's belief that the listeners know which
dogs are on topic. The lexeme \q{the} thereby
signifies a meta-discursive stance as well
as a cognitive framing \mdash{} both that the
collection has enough coherence to function as
a conceptual unit in context, as \i{the} dogs
(and not something less specific like
\q{\i{some} dogs}) \i{and also} that the
speaker and listeners share compatible
cognitive pictures as a result of prior
course of the discourse. This also introduces
several avenues for future discursive evolution
\mdash{} the listeners can respond to the
speaker on both cognitive and meta-discursive
levels. A direct question like \i{which dogs?}
signifies that the first speaker's meta-discursive
assumptions were flawed \mdash{} the referent of \q{the dogs} has
not been properly settled by the discourse to that
point. Or questions for clarification
\mdash{} like \i{how many dogs are there?} \mdash{} indicate
the listeners' sense that all parties' respective
construal of the situation needs to be more neatly
aligned for the discourse to proceed optimally.
}
\p{The point I particularly want to emphasize here,
though, is that these discursive/cognitive effects
inhere not only in the word \q{the} but in its pairing
with other words, like \i{the dogs}. We tend to see
the lexical substratum of a language as the ground
level of its signifying potential, but we should
perhaps recognize bidependent prototypes as
equally originating. The communicative
content of \q{the dogs} is ultimately
traced not only to the lexical potentialities
of \q{the} and \q{dogs} as word-senses, but to the
abstract prototype of the definite-article
bidependence, which becomes concretized in the
\q{the dogs} pairing at the same time as the
individual words do.
}
\p{In order to bring this account full-circle, I
would then add that lexical units mutually
completed by an instantiated bidependence
can also be seen as a tie-in between
interpretive procedures. Lexical interpretive
scripts \mdash{} the cognitive processing solicited
by \q{the} and \q{dogs} in isolation
\mdash{} are themselves open-ended and un- or
incompletely grounded. We can speak metaphorically
of \q{words} being incomplete, or carrying expectations,
but it is really the cognitive scripts associated
with words that are lacking in detail. The resolution
of a merely schematic cognitive outline to a
reasonably complete situational construal
can be likened to a rough sketch filled out in color
\mdash{} but we have to imagine that a sketch can be
completed by pairing it with a second sketch, and
the content in each one, crossing over, allows a
completed picture to coalesce. \q{The} in itself
evokes an interpretive process that in itself
cannot be completed, and likewise \q{dogs}; but
each script supplies the content missing from
the other. In this sense the bidependent form
concretized by the pair is actually evoking
an interpretive phenomenon of mutual completion
\mdash{} the language structure here is guiding us
toward an interpretive interpenetration where
the two scripts tie each other's open ends.
Whereas lexical items can be associated with
single \q{scripts}, prototypes of mutual
dependency model patterns in how script-pairs
can become mutually complete. But unlike
lexemes, which are notated directly by
language, the instantiation of
bidependent script-pairs occurs indirectly.
Some paired-up words are of course adjacent,
but adjacency between words does not have the same
binding determinacy as sequencing among morphemes
in \i{one} word. Instead, word adjacency
is only one signal among many others suggesting
that some prototypical inter-word relation
applies between two words (which might be
widely separated in a sentence). Morphology
and syntax also point towards the pairings operative
in a sentence \mdash{} they are to bidependency prototypes
what word-choice is to the lexicon.
}
\spsubsectiontwoline{Link Grammar as the Syntax of
Procedural-Network Semantics}
\p{Thus far I have made an admittedly
\i{philosophical} and speculative case for
\q{interpretive mutual dependence} as
a constituent building block of linguistic
understanding. This theory will remain troublingly
incomplete if the more philosophical presentation
cannot be wed to a more rigorous formal
methodology. True, an essential core of this theory
is that interpretive \q{scripts} are largely
prelinguistic and so not covered by linguistic
formalisms in themselves. However, I have also
argued that formal linguistic structures \i{do}
govern how we identify which links apply to
which word-pairs and the general outlines of
how word-pairing coordinates cognitive processes associated
with single words \mdash{} the fully contextualized
synthesis of lexically triggered cognitive procedures
may involve extra-linguistic grounding, but
abstract prototypes of bidependent relations
are also prototypes of a synthesis between
cognitive/interpretive functions. It would
accordingly be reassuring if notions like
\q{bidependency} and \q{mutual completion}
could be employed as foundations for a formal
theory of grammar and/or semantics with a
degree of rigor comparable to, say, Link
Grammar in its computational form, or Type-Theoretic
Semantics in the sense of Zhaohui Luo,
James Pustejovsky, etc...
Such a theory \mdash{} and potentially concrete
technologies associated with it \mdash{} would also
then have a reasonable ground of comparison
to the Semantic Web and, in the context of
Phenomenology, to the formalizing influence
which Semantic Web paradigms have exerted on
projects to unite phenomenology with science
and with Analytic Philosophy.
}
\p{Given these considerations, I propose that
formal grammars with the same underlying
structure as Natural-Language Link Grammars
can indeed be used as a foundation for type-theoretic
and programming-language-design methodology. The
key step here is to generalize Link Grammar's
notion of a \q{connector} \mdash{} the aspect of a
word or lexeme that allows (or requires) completion
via another word \mdash{} to a generic data structure
where connections can be made between different
parts of a system on the basis of double
potentials that must be in some sense \q{compatible}
for the connection to be valid. One way to
visualize such a system is via graph theory:
imagine a form of graphs where nodes are
annotated with \q{potentials} or \q{half-edges};
a complete edge is then a union of two half-edges.
Half edges are also classified into different
families, and there are rules governing when a
half-edge of one family may link with a half-edge
of another. In the case of Link Grammar, these
classifications are based on surface language structure
\mdash{} head/dependent and left-to-right relations \mdash{}
from which a suite of links and connection rules
are defined (for instance abstractly a head/right
word must link to a dependent/left word, a rule that
then becomes manifest in specific syntactic rules,
like how a verb links to its subject). For a
more generic model, however, we can stipulate only
that there is \i{some} classification of connectors
governed by \i{some} linkage rules, to be specified
in different details for different modeling domains.
}
\p{Such a graph model expands upon the notion
of \i{labeled} graphs, where edges are annotated
with labels that characterize the kind of relation
modeled via the edge itself. A canonical example
is Semantic Web graphs: the edges in any
Semantic Web structure are labeled with \q{predicates},
defined in different Ontologies, specifying what sort
of relation exists between its adjacent nodes.
That is, in the Semantic Web, nodes are not
\q{abstractly} linked but rather exhibit
concrete relations: a person is a citizen
of a country, two persons are married, and so forth.
These structures are then concrete instances of Labeled
Graphs as abstract mathematical structures. Based on
Link Grammar, we can then refine this model
by splitting labels into two parts, and
allowing edges to be incomplete: a fully
formed edge is possible when the label-parts
on one side are compatible with the label-parts
on another. One valid class of graph transforms
is then a mapping where a graph is altered by unifying
two half-edges into a complete edge, subject to
the relevant linkage rules.
}
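\p{A minimal data-structure sketch of this idea follows (in Python); the
   connector families, directions, and the compatibility rule are
   placeholders invented for illustration, not a worked-out linkage grammar.
}
\begin{verbatim}
from dataclasses import dataclass, field

# Each node carries "half-edges": connector potentials classified by family
# and direction.  A full, labeled edge exists only when two compatible
# half-edges are unified.
@dataclass
class Node:
    name: str
    connectors: list = field(default_factory=list)   # e.g. [("S", "right")]

def compatible(c1, c2):
    # Illustrative linkage rule: same family, opposite directions.
    (fam1, dir1), (fam2, dir2) = c1, c2
    return fam1 == fam2 and {dir1, dir2} == {"left", "right"}

def unify(a, b):
    """Return the completed, labeled edges that nodes a and b can form."""
    return [(a.name, b.name, ca[0])
            for ca in a.connectors for cb in b.connectors
            if compatible(ca, cb)]

dogs = Node("dogs", [("S", "right"), ("D", "left")])
fed  = Node("fed",  [("S", "left")])
print(unify(dogs, fed))    # [('dogs', 'fed', 'S')]
\end{verbatim}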
\p{Another way of modeling this kind of structure is via
edge-annotations and a rule for unifying two edges into
an edge-annotation pairing. For sake of
discussion, I will express this in terms of
Directed Hypergraphs: assume that edges are
\q{hyperedges}, connecting \i{sets} of nodes.
In Hypergraph theory, the nodes incident to a hyperedge
are divided into a \q{head set} and \q{tail set};
these sets can then aggregate as \q{hypernodes}.
We can then define a kind of unification where
the \q{tail hypernode} of one hyperedge joins
with the \q{head hypernode} of another, producing
a new hyperedge whose head comes from the
first former hyperedge and whose tail comes from the
second. The merged hypernodes, in turn, form a new
hypernode which \q{annotates} the new hyperedge
(this new hypernode is not connected to the graph
via other nodes, but is indirectly \q{part} of
the graph through the hyperedge it annotates).
\i{Annotated} hyperedges therefore differ from
\q{non-annotated} hyperedges in that the former are the
result of a merger between two of the latter.
The rules governing when such merger is possible
\mdash{} and how to map a pair of hypernodes into a
single \q{annotative} hypernode (which
belongs to the graph through the aegis of
its annotated hyperedge) \mdash{} are not internal
to the graph theory, but are presumed to be
specified by the modeling environment where
implementations of such graphs are
technologically applied. Annotated Directed
Hypergraphs are then \q{complete} in a sense
if every \q{un-annotated} hyperedge has been
subsumed into an annotated hyperedge, via a
fusional process we can call an
\q{annotative-fusional transform}.
}
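\p{The following Python sketch encodes directed hyperedges with head and
   tail hypernodes and the fusion that produces an annotated hyperedge;
   all names, and the decision of when fusion is permitted, are assumptions
   made for illustration.
}
\begin{verbatim}
from dataclasses import dataclass
from typing import FrozenSet, Optional

Hypernode = FrozenSet[str]          # a set of node identifiers

@dataclass(frozen=True)
class Hyperedge:
    head: Hypernode
    tail: Hypernode
    annotation: Optional[Hypernode] = None    # present only after fusion

def fuse(e1: Hyperedge, e2: Hyperedge) -> Hyperedge:
    """Annotative fusion: join e1's tail (a tail pre-annotation) with e2's
    head (a head pre-annotation); the merged hypernode annotates the new
    hyperedge, whose head comes from e1 and whose tail comes from e2.
    Rules deciding whether a given fusion is allowed are assumed to be
    supplied by the modeling environment."""
    return Hyperedge(head=e1.head, tail=e2.tail,
                     annotation=e1.tail | e2.head)

caller = Hyperedge(head=frozenset({"f"}), tail=frozenset({"x", "y"}))
callee = Hyperedge(head=frozenset({"a", "b"}), tail=frozenset({"g"}))
print(fuse(caller, callee))
\end{verbatim}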
\p{Extending this model further, we can say that the
tail of an \i{un-annotated} hyperedge is a
\q{tail pre-annotation}, since it is poised to
be merged into an annotation. Analogously,
the head of an un-annotated hyperedge is a
\q{head pre-annotation}, and \q{annotative fusion}
is the synthesis of a head and a
tail pre-annotation (triggering a synthesis of
their incident hyperedges). Correlate to annotative
\i{fusion} we can define a notion of annotative
\i{partiality}, referring to the \q{incompleteness}
of pre-annotations which leaves room for their fusion.
}
\p{It turns out that annotative fusion and partiality
in this sense is a non-trivial
model for computation in general, and can be extended
to a form of type theory and process calculus.
The idea is that computational procedures can be
modeled as hypergraphs (computer source code
can certainly be modeled as hypergraphs which
are productions of a certain class of parsing
engines). Each \q{value} affected by a
procedure \mdash{} or more technically the source code symbols
and memory addresses that \q{carry} a value \mdash{} is
then modeled as a hypernode that can link with
other hypernodes in the scenario where one procedure
calls a different one. Annotative fusion is then
a phenomenon of values being transferred from
one excuction environment (associated with
the caller procedure) to a second one (associated with
the callee). The \q{annotations} themselves
are then in this context the full set of type
coercions, type checks, synchronization
(e.g. resource locks or thread blocks depending on
whether or not the caller waits for the callee to
finish), and any other validations to ensure that
the procedure call is appropriate. Annotative fusion
also provides a formal basis for developing the
intuition that \q{procedural networks} are rigorous
representations of information spaces \mdash{}
annotative fusions capture the precise details
of procedures linking (via caller/callee relations)
with other procedures.
}
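\p{A toy rendering of this reading of procedure calls is sketched below in
   Python; the argument names and the simple type check stand in for the
   full set of coercions, checks, and synchronizations mentioned above,
   and are assumptions made for illustration.
}
\begin{verbatim}
# A call site supplies argument values (the caller's tail pre-annotation);
# a signature supplies parameter types (the callee's head pre-annotation).
# "Fusing" the two performs the checks that annotate the completed call.

signature = {"dose_mg": float, "repeat": int}      # callee side (invented names)
call_site = {"dose_mg": 2.5, "repeat": 3}          # caller side (invented values)

def fuse_call(args, sig):
    annotations = {}
    for name, expected in sig.items():
        value = args[name]
        if not isinstance(value, expected):
            raise TypeError(f"{name}: expected {expected.__name__}")
        annotations[name] = (value, expected.__name__)   # value paired with its check
    return annotations

print(fuse_call(call_site, signature))
\end{verbatim}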
\p{The constituent units of procedural networks are
inter-procedure calls \mdash{} but procedural networks
also reveal dimensions of connectivity
and clustering characteristic of large, complex
networks (and the graphs that represent them) in general.
What appears as one function-call in source code can
actually represent many different inter-procedure connections,
a phenomenon reflecting \q{overloading} and \q{genericity}
in programming language theory. Functions are generic
in the sense that any one of their arguments can take
multiple types \mdash{} either because the function is
explicitly declared to take a \q{typeclass} or a single type
for that argument, or because an instance of a given
type may actually be at runtime an instance of some subtype.
The engine which actually implements inter-procedure calls
\mdash{} i.e., the programming-language implementation \mdash{}
needs to factor this genericity into runtime decisions, so
a single expression in one function body can branch
to many different called procedures. This is the
essential core of the \q{semantics} of programming
languages: data structures manipulated by computer code do
not \i{intrinsically} represent real-world,
non-digital phenomena, though they
are engineered to model external data when used properly.
However, a code base does \i{internally} possess a space
of implemented functions, and a symbol at one
place in source can match to some set of other
functions so as to effect a procedure call. This
\q{matching}, and the rules governing how
\q{overload resolution} occurs \mdash{} \q{overload}
meaning that a given notated procedure call can actually
branch to multiple functions, so runtime
information is needed to select the right one \mdash{} are
the essential formal principles governing the
semantics of computer code.
}
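\p{Overload resolution in this sense can be sketched as a runtime choice
   among candidate procedures keyed by argument type; the example below is
   a deliberately simplified single-argument dispatcher written for
   illustration, not any particular language's resolution algorithm.
}
\begin{verbatim}
# One textual call site ("describe(x)") can branch to several implementations;
# the runtime type of the argument selects which procedure is actually called.

registry = {}

def overload(arg_type):
    def register(fn):
        registry[arg_type] = fn
        return fn
    return register

@overload(int)
def describe_int(x):
    return f"integer {x}"

@overload(str)
def describe_str(x):
    return f"string of length {len(x)}"

def describe(x):
    # Walk the type's MRO so an instance of a subtype matches a supertype overload.
    for cls in type(x).__mro__:
        if cls in registry:
            return registry[cls](x)
    raise TypeError(f"no overload for {type(x).__name__}")

print(describe(7), "|", describe("dogs"))
\end{verbatim}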
\p{From this basis, all the same, computer code can model
a wide range of empirical phenomena. Generic code
represents generic patterns of functional organization,
allowing models to be built from varying layers of
abstraction. From this perspective, to
describe an empirical system it is necessary to
identify important behaviors and functional patterns
via which the system's observed behavior can be
notated and/or simulated. To the degree that
systems take on functional organizations that can be
abstractly described, similar to the functional
dynamics of other systems, their behavior
can be represented and/or simulated via generic
code. To the degree that there are particular
details of a system's behavior that are more
idiosyncratic to that system, and need to be modeled
precisely, procedures can be crafted specifically
for observing and emulating that exact behavior.
More generic and more exact procedural implementations
can coexist in a single code base, with generic
functions calling granular functions narrowed to
precise types, and vice-versa. The coexistence of
generalization and specificity is an essential feature
of code bases and, by extension, of procedural networks,
ensuring the flexibility of these networks
as tools to model information spaces.
}
\p{Unfortunately, this kind of \q{procedural} modeling
is hard to integrate with the more static techniques
represented by the Semantic Web and the current
\q{Big Data} fad. The latter paradigms tend to treat
data as a static repository to be mined for patterns
and insights, rather than a digital
simulation or encoding of dynamic real-world systems.
The Semantic Web, as a large, collaborative modeling
project, evolved largely apart from the technological
community concerned with computer simulations and
the programming techniques which emerged from there,
like Object-Orientation. This divergence is relevant
for linguistics and cognitive science, because I would
argue that the more \q{dynamic} paradigm is actually
more \q{Semantic} in a Natural Language sense.
In other words, our cognitive dispositions
when interpreting empirical phenomena \mdash{} and
matching these interpretations to linguistic
cues \mdash{} are more like procedural networks
capturing functional patterns and layers of genericity
in observed phenomena, rather than an accretion of
static data. The techniques of procedural data
modeling may therefore be relevant for Cognitive
Linguistics and Cognitive Phenomenology
because they aspire to something which, arguably,
the mind does instinctively: build
cognitive or computational models of
dynamic, functionally organized phenomena.
}
\p{As a corollary, the theoretical building-blocks
of Procedural Data Modeling \mdash{} how it leverages
type theory, programming language semantics, and
so forth \mdash{} can provide at least
analogs or case-studies for corresponding cognitive
phenomena. Here I would argue that generalizing
Link Grammar from Natural Language to formal
languages, type systems, and lambda calculi
yields added structures to type theory that
are useful toward a more rigorous \q{theory} of
Procedural Data Modeling \mdash{} a theory of natural
linguistics generalized to a theory of general data
representation which, in turn, may offer
insights onto the cognitive dynamics underlying
(prelinguistic) situational/perceptual
comportments and interpretation.
}
\p{Type-theoretically, annotative partiality \mdash{} which
recall is my terminology for the abstract
generalization of the mutual \q{incompleteness} in
Link Grammar connectors, driving their
link-fusion \mdash{} extends conventional applied type theory
(as in the Typed Lambda Calculus) in parallel
to partial-labels extending labeled graph theory.
It is paradigmatic in the theory of typed procedures
and of \q{effect systems} that the type of a procedure
is determined (up to certain equivalences that may discard
overly granular type distinctions) by the types of
all values affected by the procedure (including but
not necessarily limited to the types of input
and output parameters). We can then superimpose
on this model an account of annotative partiality.
Specifically, on the paradigm that procedure-calls
are structurally represented as annotative fusions over
Directed Hypergraphs, the values manipulated by
a procedure are pre-annotations: they are not
(in the \i{implementation} of a procedure,
as a formal object) single values but rather
typed spaces that can take on a spectrum of
possible values depending on the inhabitants of
their types. When a procedure is \i{called} these values
become concretized, but as a formal system procedural
networks model software in terms of its capabilities
and expected behavior, rather than the state at
any moment when the software is actually running.
Partiality therefore models how procedures
(as formal objects) deal with potentialities
\mdash{} we do not know what values will \i{actually}
be present at runtime (e.g. what specific
values passed to a procedure as arguments),
so procedural analysis is essentially characterized
by a partiality of information. When one procedure
calls another, the caller must build an
\i{expression} \mdash{} a gathering of values that provide all
the information the callee requires \mdash{} thereby
creating a case of mutual-completion: the caller has
values but not an algorithm
to operate on them; the callee has an algorithm
(that's what it implements) but needs concrete
values so as to produce concrete values.
This dual partiality allows the caller to
call the callee via an \i{expression} which is part
of the caller's implementation (represented as a
hypergraph) and which must in turn match the
callee's signature \mdash{} epigrammatically, we
can say that \q{expressions are annotative-fusional
duals of signatures}. The point here is that whereas
signatures are conventionally understood to be
type-declarations assigning types to procedures,
with annotative partiality we can more
precisely recognize signatures as stipulating
\i{pre-annotative} types. The values carried
within expressions also have pre-annotative types,
but there is a distinction between types in the context
of expressions and types in the context of signatures
\mdash{} and moreover this distinction is precisely the
manifestation of the abstract head-pre-annotation
and tail-pre-annotation contrast in the specific
context of procedural networks. Just as
signatures unify multiple types into one
profile, we can analogously define
\q{expression types} as the aggregate of all types
from values affecting the expression \mdash{} note that
this is different from the type of the expression's
calculated \i{result}, just as the type of
a function is different from the type of its
return value. Expression-types and signature-types
are almost exact duals (the complication being
default values for optional parameters, which
are not directly represented in expressions
\mdash{} obviously, since then they would not be
missing). The \q{duality} involved here
derives from partitioning a type system into
\q{expression annotative partials} and
\q{signature annotative partials}, a projection
of head/tail duality in an abstract theory
of Annotated Directed Hypergraphs
(and analogous to head/dependent
and left/right partiality in Link Grammar).
}
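\p{To make the contrast concrete, the minimal C sketch below (all names
invented) shows a signature that stipulates a type profile and a call
expression that gathers concrete values matching that profile; the
duality described above relates precisely these two sides.}
\begin{verbatim}
/* Signature: a pre-annotative type profile the callee exposes. */
double scaled_sum(double weight, int count);

/* Expression: the caller gathers values whose types must match the profile. */
double caller(void)
{
    double w = 0.5;
    int    n = 12;
    return scaled_sum(w, n);
}

double scaled_sum(double weight, int count)
{
    return weight * (double)count;
}
\end{verbatim}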
\p{}
\section{\sys\ Implementation}
\label{sec:clio:impl}
Apart from the challenges discussed in \S\ref{sec:clio:design}, our implementation of \sys\ also needs to overcome several practical challenges, for example, how different hardware components can work together most efficiently in \sysboard\ and how to minimize software overhead in \syslib.
This section describes how we implemented \sysboard\ and \syslib, focusing on the new techniques we designed to overcome these challenges.
Currently, \sys\ consists of 24.6K SLOC (excluding computation offloads and third-party IPs).
They include 5.6K SLOC in SpinalHDL~\cite{SpinalHDL} and 2K in C HLS for FPGA hardware, and 17K in C for \syslib\ and ARM software.
We use vendor-supplied interconnect and DDR IPs, and an open-source MAC and PHY network stack~\cite{Corundum-FCCM20}.
\ulinebfpara{\sysboard\ Prototyping.}~~
We prototyped \sysboard\ with a low-cost (\$2495 retail price) Xilinx MPSoC board~\cite{ZCU106} and built the hardware fast path (which is anticipated to be built in ASIC) with FPGA.
All \sys's FPGA modules run at 250\,MHz clock frequency and 512-bit data width.
They all %(network, pre-processor, core memory) are able to
achieve an {\em Initiation Interval} ({\em II}) of one
(II is the number of clock cycles between the start time
of consecutive loop iterations, and it decides the maximum
achievable bandwidth). Achieving II of one is not easy and
requires careful pipeline design in all the modules. With II one, our data path can
achieve a maximum of 128\Gbps\ throughput even with just the slower FPGA clock frequency and would be higher with real ASIC implementation.
Our prototyping board consists of a small FPGA with 504K logic cells (LUTs) and 4.75\MB\ FPGA memory (BRAM),
a quad-core ARM Cortex-A53 processor,
two 10\Gbps\ SFP+ ports connected to the FPGA,
and 2\GB\ of off-chip on-board memory.
This board has several differences from our anticipated real \sysboard:
its network port bandwidth and on-board memory size are both much lower than our target,
and like all FPGA prototypes, its clock frequency is much lower than real ASIC.
Unfortunately, no board on the market offers the combination of small FPGA/ARM (required for low cost)
and large memory and high-speed network ports. %high-speed network and/or large amounts of on-board DRAM with small processing units,
%but building one only requires board-level changes.
Nonetheless, certain features of this board are likely to exist in a real \sysboard,
and these features guide our implementation.
Its ARM processor and the FPGA connect through an interconnect that has high bandwidth (90\GB/s) but high delay (40\mus).
Although better interconnects could be built, crossing ARM and FPGA would inevitably incur non-trivial latency.
%that allows the FPGA to perform DMA operations to
%ARM's local memory (the ARM has \fixme{XXX} local DRAM). % (\fixme{XXX} RTT and \fixme{XXX} bandwidth).
With this board, the ARM's access to on-board DRAM is much slower than the FPGA's access because the ARM first has to physically cross the FPGA and then reach the DRAM.
%We envision each board to host 100\GB{}s of DRAM.
%the physical connection between software cores and ASIC/FPGA is likely to continue having high bandwidth but non-trivial delay,
A better design would connect the ARM directly to the DRAM,
but it will still be slower for the ARM to access on-board DRAM than its local on-chip memory.
To mitigate the problem of slow accesses to on-board DRAM from ARM,
we maintain shadow copies of metadata at ARM's local DRAM.
%to avoid the much higher (79\x\ in our experiment) cost of going to the on-board DRAM.
For example, we store a {\em shadow} version of the page table in ARM's local memory,
so that the control path can read page table content faster.
When the control path needs to perform a virtual memory space allocation, it reads the shadow page table to test if an address would cause an overflow (\S\ref{sec:clio:addr-trans}).
We keep the shadow page table in sync with the real page table by updating both tables when adding, removing, or updating the page table entries.
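The following sketch illustrates this shadow-table idea; the entry layout and function names are invented for illustration and are not the actual \sys\ code.
\begin{verbatim}
#include <stdint.h>
#include <stddef.h>

/* Illustrative page-table entry; the real layout differs. */
typedef struct { uint64_t vfn; uint64_t pfn; uint32_t flags; } pte_t;

static pte_t *onboard_pt;  /* authoritative copy, reached through the FPGA (slow) */
static pte_t *shadow_pt;   /* shadow copy in ARM-local DRAM (fast to read) */

/* Update both copies so control-path reads (e.g., overflow checks during
 * virtual memory space allocation) never need to cross to on-board DRAM. */
static void pt_update(size_t idx, pte_t entry)
{
    onboard_pt[idx] = entry;
    shadow_pt[idx]  = entry;
}
\end{verbatim}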
%
%first, we shift operations involving ARM off the performance-critical path.
In addition to maintaining shadow metadata, we employ an efficient polling mechanism for ARM/FPGA communication.
We dedicate one ARM core to busy poll an RX ring buffer between ARM and FPGA,
where the FPGA posts tasks for ARM.
This polling thread hands over tasks to other worker threads, which handle the tasks %that perform the tasks
and post responses to a TX ring buffer.
%We use DMA to implement the ring buffers,
%as DMA is the fastest communication methods we found among all the available ones.
%matching future high-bandwidth networks.
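The sketch below illustrates the polling loop; the ring-buffer layout and the \texttt{dispatch\_to\_worker} helper are invented for illustration.
\begin{verbatim}
#include <stdatomic.h>
#include <stdint.h>

#define RING_SLOTS 256
struct slot { _Atomic uint32_t ready; uint8_t payload[120]; };
struct ring { struct slot slots[RING_SLOTS]; uint32_t head; };

void dispatch_to_worker(struct slot *task);  /* placeholder: hand task to a worker */

/* Dedicated ARM core: spin on the RX ring the FPGA fills; workers later
 * post their responses to the TX ring (not shown). */
void poller_loop(struct ring *rx)
{
    for (;;) {
        struct slot *s = &rx->slots[rx->head % RING_SLOTS];
        if (atomic_load_explicit(&s->ready, memory_order_acquire)) {
            dispatch_to_worker(s);
            atomic_store_explicit(&s->ready, 0, memory_order_release);
            rx->head++;
        }
    }
}
\end{verbatim}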
\sysboard's network stack builds on top of standard, vendor-supplied Ethernet physical and link-layer IPs, with just an additional thin checksum-verify and ack-generation layer on top.
This layer uses much fewer resources compared to a normal RDMA-like stack (\S\ref{sec:clio:results-cost}).
%
We use lossless Ethernet with Priority Flow Control (PFC) for less packet loss and retransmission. Since PFC has issues like head-of-line blocking~\cite{DCQCN-sigcomm15,hpcc-sigcomm19,alibaba-rdma-nsdi21,IRN}, we rely on our congestion and incast control to avoid triggering PFC as much as possible.
Finally, to assist \sys\ users in building their applications, we implemented a simple software simulator
of \sysboard\ which works with \syslib\ for developers to test their code without the need to run an actual \sysboard.
\ulinebfpara{\syslib\ Implementation.}~~
Even though we optimize the performance of \sysboard, the end-to-end application performance can still be hugely impacted if the host software component (\syslib) is not as fast.
Thus, our \syslib\ implementation aims to provide low-latency performance by adopting several ideas (e.g., data inlining, doorbell batching) from recent low-latency I/O solutions~\cite{ERPC,Kalia14-RDMAKV,Kalia16-ATC,Tsai17-SOSP,Shinjuku,Shenango,demikernel-sosp21}.
We implemented \syslib\ in the user space.
It has three parts: a user-facing request ordering layer that performs dependency check and ordering of address-conflicting requests,
a transport layer that performs congestion/incast control and request-level retransmission,
and a low-level device driver layer that interacts with the NIC (similar to DPDK~\cite{DPDK} but simpler).
\syslib\ bypasses kernel and directly issues raw Ethernet requests to the NIC with zero memory copy.
%The transport layer implements our core logic. The shim layer is similar to DPDK but much simplified and customized for our own usage.
%We use per-thread inline polling.
For synchronous APIs, we let the requesting thread poll the NIC for receiving the response right after each request.
For asynchronous APIs, the application thread proceeds with other computations after issuing the request and only busy polls when the program calls \poll.
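The sketch below contrasts the two API styles; \texttt{clio\_post}, \texttt{clio\_poll\_nic}, and \texttt{clio\_status} are invented stand-ins, not the actual \syslib\ interface.
\begin{verbatim}
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helpers standing in for the real syslib internals. */
int clio_post(int op, void *buf, uint64_t addr, size_t len); /* returns request id */
int clio_poll_nic(int req);                                  /* 1 when response arrived */
int clio_status(int req);

/* Synchronous API: the requesting thread polls the NIC right after posting. */
int clio_read_sync(void *buf, uint64_t addr, size_t len)
{
    int req = clio_post(0 /* read */, buf, addr, len);
    while (!clio_poll_nic(req))
        ;
    return clio_status(req);
}

/* Asynchronous API: return at once; the caller only busy-polls when it
 * later calls the library's poll routine. */
int clio_read_async(void *buf, uint64_t addr, size_t len)
{
    return clio_post(0 /* read */, buf, addr, len);
}
\end{verbatim}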
"alphanum_fraction": 0.7945185823,
"avg_line_length": 84.987804878,
"ext": "tex",
"hexsha": "4abba2af83fad408ad5f2f49a4be7fac4a4ba5e1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "859886a5c8524aa73d7d0784d5d695ec60ff1634",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "lastweek/2022-UCSD-Thesis",
"max_forks_repo_path": "clio/impl.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "859886a5c8524aa73d7d0784d5d695ec60ff1634",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "lastweek/2022-UCSD-Thesis",
"max_issues_repo_path": "clio/impl.tex",
"max_line_length": 299,
"max_stars_count": 12,
"max_stars_repo_head_hexsha": "859886a5c8524aa73d7d0784d5d695ec60ff1634",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "lastweek/2022-UCSD-Thesis",
"max_stars_repo_path": "clio/impl.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-18T16:28:32.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-14T03:09:38.000Z",
"num_tokens": 1623,
"size": 6969
} |
\documentclass[DIV16,twocolumn,10pt]{scrreprt}
\usepackage{paralist}
\usepackage{graphicx}
\usepackage[final]{hcar}
%include polycode.fmt
\begin{document}
\begin{hcarentry}{Auto-parallelizing Pure Functional Language System}
\report{Kei Davis}
\status{active}
\participants{Dean Prichard, David Ringo, Loren Anderson, Jacob Marks}
\makeheader
%(WHAT IS IT?)
The main project goal is the demonstration of a light-weight, higher-order,
polymorphic, pure functional language implementation in which we can
experiment with automatic parallelization strategies, varying degrees of
default function and constructor strictness. A secondary goal is to
experiment with mechanisms for transparent fault tolerance.
%
We do not consider speculative or eager evaluation, or semantic strictness
inferred by program analysis, so potential parallelism is dictated by the
specified degree of default strictness and any strictness annotations.
Our approach is similar to that of the
\href{https://dl.acm.org/citation.cfm?id=2503779}{Intel Labs Haskell Research
Compiler}: to use GHC as a front-end to generate STG (or Core), then exit to
our own back-end compiler that is itself written in Haskell. As in their case
we do not attempt to use the GHC runtime. Our implementation is
\emph{light-weight} in that we are not attempting to support or recreate the
vast functionality of GHC and its runtime. This approach is also similar to
\href{http://www.cse.unsw.edu.au/~pls/thesis/dons-thesis.ps.gz}{Don Stewart's}
except that we generate C instead of Java.
%(WHAT IS ITS STATUS? / WHAT HAS HAPPENED SINCE LAST TIME?)
\subsubsection*{Current Status}
Currently we have a fully functioning serial implementation and a primitive
proof-of-design parallel implementation.
%(CAN OTHERS GET IT?)
%(WHAT ARE THE IMMEDIATE PLANS?)
\subsubsection*{Immediate Plans}
We are currently developing a more realistic parallel runtime.
%
Bridging the gap between GHC STG (or Core) to our STG representation will be
undertaken starting June 2016. An instrumentation framework will be developed
in summer 2016.
\FurtherReading
A \href{https://github.com/losalamos/APPFL}{project web site} is under construction.
\subsubsection*{Undergraduate/post-graduate Internships}
If you are a United States citizen or permanent resident alien studying
computer science or mathematics at the undergraduate level, or are a recent
graduate, with strong interests in Haskell programming, compiler/runtime
development, and pursuing a spring, fall, or summer internship at Los
Alamos National Laboratory, this could be for you.
We don't expect applicants to necessarily already be highly accomplished
Haskell programmers---such an internship is expected to be a combination of
further developing your programming/Haskell skills and putting them to good
use. If you're already a strong C hacker we could use that too.
\emph{\bfseries The application process requires a bit of work so don't leave
enquiries until the last day/month.}\\
\begin{tabular}{l|l}
Term & Application Deadline \\
\hline
Summer 2016 & Closed \\
Fall 2016 & May 31, 2016 \\
Spring 2017 & Approx. July 2016 \\
Summer 2017 & Approx. January 2017 \\
Fall 2017 & Approx. May 2017
\end{tabular}\\
\noindent Email me at kei (at) lanl (dot) gov if interested in more information, and
feel free to pass this along. \\
\end{hcarentry}
\end{document}
\section{Metric spaces}\label{sec:metric_spaces}
Metric spaces are \hyperref[def:uniform_space]{uniform} \hyperref[def:topological_space]{topological spaces} with a lot of desirable properties. Most of real analysis that does not rely on certain algebraic structures generalizes well to metric spaces.
\documentclass[12pt]{article}
\usepackage{amsfonts}
%\usepackage{mathptmx}
\usepackage[utf8]{inputenc}
\usepackage{comment}
%\usepackage{pgfplots}
%\pgfplotsset{width=10cm, compat=1.9}
%\documentclass[border=2mm,tikz]{standalone}
%\usetikzlibrary{datavisualization}
\usepackage[onehalfspacing]{setspace}
\usepackage{fancyhdr}
\usepackage{comment}
\usepackage[a4paper, top=2.5cm, bottom=2.5cm, left=2.5cm, right=2.5cm]%
{geometry}
\usepackage{times}
\usepackage{amsmath}
\usepackage{changepage}
\usepackage{amssymb}
\usepackage{graphicx}
\setcounter{MaxMatrixCols}{30}
\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\newenvironment{proof}[1][Proof]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\Z}{\mathbb{Z}}
\begin{document}
\title{MAT120: Integral Calculus and
Differential Equations \\
BRAC University}
\author{Syed Zuhair Hossain \\ St. ID - 19101573 \\ Section - 07 \\ Set-Q \\ Assignment - 04}
\date{\today}
\maketitle
%%%%%%%%%%%Starting Point%%%%%%%%%%%%%%%
%%%%%MATH 01%%%%%%%%%%%%
\section{Evaluate the integral \\[5mm]
$\int_{1}^{2} \int_{z}^{2} \int_{0}^{\sqrt{3}y} \frac{y}{x^2 + y^2} \ dx \ dy \ dz$
}
\textbf{Solution \ :}
\begin{align*}
Let,\\
&u = \frac{x}{y} & \Rightarrow x^2 = y^2 u^2\\
&du = \frac{1}{y} dx\\
\therefore \ &dx = y \ du
\end{align*}
\begin{align*}
\therefore \ \int \frac{y}{x^2 + y^2} \ dx &= \int \frac{y^2}{y^2 u^2 + y^2 } du \\
&= \int \frac{y^2}{u^2 + 1} du\\
&= \int \frac{1}{u^2 +1} du\\
&= tan^{-1}(u)\\
&= tan^{-1}\frac{x}{y}
\end{align*}
\begin{align*}
&\int_{1}^{2} \int_{z}^{2} \left(\frac{\pi}{3}\right) dy \ dz\\[3mm]
&= \int_{1}^{2} \frac{\pi}{3} \times [y]_{z}^{2} \ dz\\[3mm]
&= \int_{1}^{2} \frac{\pi}{3} (2-z) dz\\[3mm]
&= \int_{1}^{2} \frac{2\pi}{3} dz - \int_{1}^{2} \frac{\pi}{3} z dz\\[3mm]
&= \frac{2\pi}{3} \int_{1}^{2} dz - \frac{\pi}{3} \int_{1}^{2} z dz\\[3mm]
&= \frac{2\pi}{3} \int_{1}^{2} dz - \left[ \frac{\pi}{3} \times \frac{z^2}{2}\right]_{1}^{2}\\
&= \frac{2\pi}{3} \int_{1}^{2} dz + (- \frac{4 \pi}{6} + \frac{1 \cdot \pi }{6})\\[3mm]
&= \frac{2\pi}{3} \int_{1}^{2} dz - \frac{\pi}{2}\\[3mm]
&= \frac{2 \pi \times 2}{3} - \frac{2\pi \cdot 1}{3} - \frac{\pi}{2}\\[3mm]
&= \frac{2\pi}{3} - \frac{\pi}{2}\\[3mm]
&= \frac{4\pi-3\pi}{6}\\[3mm]
&= \frac{\pi}{6} &[Answer]
\end{align*}
%%%%%%%%%%%%%%%%%%%%%%%
\pagebreak
\section{Solve}
\centering $(x+1) \frac{dy}{dx} + y = lnx, y(1) = 10$
\renewcommand{\baselinestretch}{2.0}
\begin{align*}
\textbf{Converting into standard form:}&\\
& \frac{dy}{dx} + \left(\frac{1}{x+1}\right) y = \frac{ln(x)}{(x+1)} \ \ \ \ \ \ \ \ \ [\textit{dividing by (x+1) in both side}]\\
\textbf{according to the formula}&\\
&y{'} + \rho (t) y = g(t)\\
\therefore \rho(x) &= e^{\int \frac{1}{x+1}dx}\\
&= e^{ln(x+1)}\\
&= (x+1)\\\\
\raggedright\therefore (x+1) y &= \int ln(x)dx\\
\Rightarrow(x+1)y &= xln(x) - x + c\\
\therefore y &= \frac{x\,ln(x)-x+c}{x+1}\\
\textbf{apply the given condition,}\\
&y(1) = 10\qquad x=1\\
&\Rightarrow \frac{1\times ln(1)-1+c}{1+1} = 10\\
&\Rightarrow \frac{0-1+c}{2} = 10\\
&c = 21\\
&\therefore y = \frac{xln(x) - x + 21}{(x+1)}\qquad \textbf{[Answer]}
\end{align*}
\pagebreak
\raggedright\section{Evaluate \qquad $\int_{1}^{4} \int_{0}^{\sqrt{x}}\frac{3}{2} e^{\frac{y}{\sqrt{x}}} dy dx$}
\begin{align*}
&= \int_{1}^{4} \frac{3}{2} \left[\sqrt{x} \, e^{\frac{y}{\sqrt{x}}}\right]_{0}^{\sqrt{x}} dx\\
&= \int_{1}^{4} \frac{3}{2} \left[e^1 \cdot \sqrt{x} - e^0 \cdot \sqrt{x}\right] dx\\
&= \int_{1}^{4} \frac{3}{2} (e\sqrt{x} - \sqrt{x}) dx\\
&= \int_{1}^{4} \frac{3}{2} \times \sqrt{x} \times (e-1) dx\\
&= \frac{3}{2} \int_{1}^{4} \sqrt{x} (e-1) dx\\
&= \frac{3}{2} \times (e-1) \int_{1}^{4}\sqrt{x} dx\\
&= \frac{3}{2}(e-1) [\frac{2x^\frac{3}{2}}{3}]_{1}^{4}\\
&= (e-1) [x^{\frac{3}{2}}]_{1}^{4}\\
&= (e-1) [4^{\frac{3}{2}}-1]\\
&= 7(e-1)\\
&\approx 12.0279728 \hspace{100} \textbf{[Answer]}
\end{align*}
\pagebreak
\section{Solve the differential equation using variables separable method: \qquad $x^{2} \frac{dy}{dx} = y - xy$; \ $y(-1) = -1$.}
\begin{align*}
&\Rightarrow x^2 \frac{dy}{dx} = y(1-x)\\
&\Rightarrow \frac{1}{y} \frac{dy}{dx} = \frac{1-x}{x^2} = \frac{1}{x^2} - \frac{1}{x}\\
&\Rightarrow \int y^{-1} dy = \int \frac{(1-x)}{x^2} dx\\
\textit{Now, integrating both sides,}\\
&\int \frac{dy}{y} = \int \frac{1}{x^2}dx - \int \frac{1}{x} dx\\
&\Rightarrow ln(y) = - \frac{1}{x} - ln(x) + c\\\\
\textbf{Given that,}\\
&y=-1 \hspace{20} x=-1\\
&\therefore ln|-1| = \frac{-1}{-1} - ln|-1| + c\\
&\Rightarrow ln(1) = 1 - ln(1) + c\\
&\therefore c = -1\\\\
&\therefore log(y) = -\frac{1}{x} - log(x) - 1\\
&\Rightarrow log(y) + log(x) = - \frac{1}{x} - 1\\
&\Rightarrow log(yx) = - \frac{1}{x} -1\\
&\Rightarrow yx = e^{-\frac{1}{x} -1}\\
&\Rightarrow yx = e^{\frac{-1}{x}} e^{-1}\\
&\Rightarrow y = \frac{e^{-\frac{1}{x}}}{e \, x}\hspace{25pt} \textbf{[Answer]}
\end{align*}
\newpage
\section{Evaluate the integral: \qquad $\int_{0}^{\frac{\pi}{4}} \int_{0}^{1} \int_{0}^{x^2} x \ cosy \ dz \ dx \ dy$}
\begin{align*}
&= \int_{0}^{\frac{\pi}{4}} \int_{0}^{1} [x z cos y]_{0}^{x^2} dx dy\\
&= \int_{0}^{\frac{\pi}{4}} \int_{0}^{1} (x \cdot x^2 cosy - x \cdot 0 \cdot cosy) dx dy\\
&= \int_{0}^{\frac{\pi}{4}} \int_{0}^{1} x^3 cosy dx dy\\
&= \int_{0}^{\frac{\pi}{4}} cosy \int_{0}^{1} x^3 dx dy\\
&= \int_{0}^{\frac{\pi}{4}} \left[\frac{x^4}{4} cos(y)\right]_{0}^{1}dy\\
&= \int_{0}^{\frac{\pi}{4}} \left[\frac{1}{4} cos(y) - \frac{0}{4} cos(y) \right]dy\\
&= \int_{0}^{\frac{\pi}{4}} \frac{cosy}{4} dy\\
&= \frac{1}{4} \int_{0}^{\frac{\pi}{4}} cos \ y \ dy\\
&= \left[\frac{1}{4} \times sin y\right]_{0}^{\frac{\pi}{4}}\\
&=(\frac{1}{4} sin \frac{\pi}{4} - \frac{1}{4} sin(0))\\
&= \frac{1}{4} \times \frac{1}{\sqrt{2}} -0\\
&= \frac{1}{4\sqrt{2}}\\
\bigbreak
&\hspace{200} \textbf{[Answer]}
\end{align*}
\pagebreak
\section{Solve the system for x and y in terms of u and v then find the Jacobian $\frac{\partial(x,y)}{\partial(u,v)} .$}
\centering \textbf{$u = x-y$; \qquad $v = 2x+y$}
\begin{align*}
Here,\\
&u = x-y............(1)\\
&v = 2x+y...........(2)
\end{align*}
\begin{align*}
\textit{adding equation (1) and (2) we get,}&\\
& u + v = 3x\\
& x = \frac{u+v}{3}\\
& x = \frac{u}{3} + \frac{v}{3}\\
\end{align*}
\begin{align*}
\textit{Again, subtracting equation (1) from (2) we get,}&\\
& v-u = x + 2y\\
& 2y = v - u - x\\
& 2y = v - u - \frac{u}{3} - \frac{v}{3}\\
& y = \frac{3v-3u-u-v}{3 \times 2}\\
& y = \frac{2v-4u}{6}\\
& y = \frac{v}{3} - \frac{2u}{3}
\end{align*}
\pagebreak
\begin{align*}
&\frac{\partial x}{\partial u} =\frac{1}{3} & \frac{\partial y}{\partial u} = -\frac{2}{3}\\
&\frac{\partial x}{\partial v} =\frac{1}{3} & \frac{\partial y}{\partial v} = \frac{1}{3}\\
\end{align*}
\begin{equation*}
\textbf{Jacobian,J}
=\frac{\partial(x,y)}{\partial(u,v)}
= \begin{vmatrix}
\frac{\partial x}{\partial u} & \frac{\partial y}{\partial u}\\
\frac{\partial x}{\partial v} & \frac{\partial y}{\partial v}
\end{vmatrix}
=
\begin{vmatrix}
\frac{1}{3} & -\frac{2}{3}\\
\frac{1}{3} & \frac{1}{3}
\end{vmatrix}
= \frac{1}{3} \times \frac{1}{3} - \left(-\frac{2}{3} \right) \times \frac{1}{3}
=\frac{1}{9} + \frac{2}{9} = \frac{1}{3}
\end{equation*}
\bigbreak
\centering \textbf{[Answer]}
\end{document}
\documentclass[a4paper, 12pt]{article}
\usepackage{authblk}
\usepackage{amsmath}
\usepackage[usenames, dvipsnames]{xcolor}
\newcommand{\comment}[1]{\textcolor{RedOrange}{#1}}
\title{Summary of\\TTK4145 Real-Time Programming}
\date{\today}
\author{Morten Fyhn Amundsen}
\affil{NTNU}
\begin{document}
\maketitle
\tableofcontents
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Disclaimer}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
These are notes from some parts of the TTK4145 curriculum. None of this is really mine.
\comment{add something about two phase commit}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Reliability and fault tolerance}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\comment{something about process pairs}
Four sources of faults:
\begin{itemize}
\item Inadequate spec.
\item Software design errors.
\item Hardware failure.
\item Interference in communication.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Reliability, failure, and faults}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A fault that becomes active leads to an error which can lead to a failure.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Failure modes}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{itemize}
\item Value failure.
\item Time failure.
\item Arbitrary failure: Combination of the above.
\end{itemize}
Types of system failures:
\begin{itemize}
\item Fail uncontrolled.
\item Fail late.
\item Fail silent.
\item Fail stop.
\item Fail controlled.
\item Fail never.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Fault prevention and fault tolerance (2.3)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Fault prevention}
Fault avoidance (don't create faults) and fault removal (un-create the ones you've made).
\subsubsection{Fault tolerance}
\begin{itemize}
\item Full fault tolerance: No significant degradation.
\item Graceful degradation: System continues, but partially degraded during recovery.
\item Fail safe: Maintains integrity, temporarily halts.
\end{itemize}
\subsubsection{Redundancy}
\paragraph{Static redundancy: } Several identical components with voting.
\paragraph{Dynamic redundancy: } The module informs somehow whether its output is erroneous.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{\textit{N}-version programming (2.4)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\(N\) independent programs for the same task are made. A driver invokes all \(N\) programs, waits for them to finish, and compares outputs.
\subsubsection{Vote comparison}
Easy for exact values, tricky for `analog' values.
\subsubsection{Principal issues in \textit{N}-version programming}
\begin{itemize}
\item Initial spec: Will mess up all \(N\) if incorrect.
\item Independence of design effort: All is lost if the versions still give identical errors.
\item Budget: Super costly to develop and maintain \(N\) times as much software.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Software dynamic redundancy (2.5)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Error detection}
\begin{itemize}
\item Replication checks (N-version programming).
\item Timing checks (watchdog, deadline).
\item Reversal checks (calculate input from output and compare).
\item Coding checks (checksums).
\item Reasonableness checks (variable range, assertions).
\item Structural checks (integrity of data structures).
\item Dynamic reasonableness check (reasonable compared to prev. value).
\end{itemize}
\subsubsection{Damage confinement and assessment}
\paragraph{Modular decompositon:} Confine errors by modularising and good interfaces.
\paragraph{Atomic actions:} An activity that is completed with \emph{no} interaction to the system (also called transactions).
\subsubsection{Error recovery}
\paragraph{Forward error recovery} tries to continue by making selective corrections to the state. Pretty fast, often necessary for time-critical things.
\paragraph{Backward error recovery} restores the system to a safe, previous state (a \emph{recovery point}). Creating a recovery point is called \emph{checkpointing}. Problematic if something has happened in hardware since the last recovery point. For concurrent processes, one must use \emph{recovery lines}.
\paragraph{Domino effect:} If a thread discovers an error and rolls back past some communication, the other side of the communication must also roll back. This can cascade backwards, rolling back way too much. To avoid this, we need a consistent state across both processes that they can roll back to. Discussed more in Section \ref{sec:atomic}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The recovery block}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A normal programming language block, but with a recovery point at the entrance and an acceptance test at the exit. If the test fails, the block recovers and tries to execute an alternative module (i.e. do the thing some other way and pray that works).
% \lstinputlisting[language=Erlang]{recovery_block.txt}
\subsection{Comparison between \textit{N}-version and recovery blocks (2.7)}
\begin{itemize}
\item Static/dynamic redundancy: \(N\)-version uses static redundancy (errors don't affect execution).
\item Overhead: \(N\)-version requires a driver, recovery blocks are extra overhead by themselves.
\item Spec. errors: Both are vulnerable to bad spec.
\item Error detection: In difficult voting situations, acceptance testing may be more flexible.
\item Atomicity: \(N\)-version avoids damage to environment because errorneous results are discarded before use. Backward error recovery does not.
\end{itemize}
Lesson: Consider using both.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Dynamic redundancy and exceptions}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Exceptions and handling can be used to:
\begin{itemize}
\item cope with abnormal environment conditions;
\item enable toleration of program design faults;
\item provide general-purpose error detection and recovery.
\end{itemize}
\subsubsection{Ideal fault-tolerant system components}
A component that takes requests, and possibly makes further requests before yielding a response. Two possible error types:
\begin{itemize}
\item Interface exceptions: Illegal requests.
\item Internal malfunction.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Safety, reliability, and dependability}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Dependability}
\begin{itemize}
\item Threats: Circumstances causing non-dependability.
\item Means: Means to deliver a dependable service with the required confidence.
\item Attributes: The way the quality can be judged.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Shared variable-based synchronization and communication}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Shared variables:} Objects that are accessed by several tasks. Often reasonable when tasks use a common physical memory.
\paragraph{Message passing:} Explicit exchange of data between tasks. Can be good in systems with no common memory.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Mutual exclusion and condition synchronization (5.1)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Critical section:} A sequence of statements that must be executed indivisibly.
\paragraph{Mutual exclusion:} The synchronisation to allow it.
\paragraph{Condition synchronisation:} When a task must wait for another task to finish something.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Busy waiting (5.2)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
It's a spin lock. If the resource is busy, the process keeps checking until it is free. Works for condition sync., not so much for mutual exclusion (difficult/complex). Repeatedly testing a variable/flag can lead to livelock.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Suspend and resume (5.3)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Similar, but a process suspends (stops running) while it waits. Requires even more complicated code to avoid race conditions, and isn't really used on its own. Ada includes a safe version of suspend and resume.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Semaphores (5.4)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{\texttt{wait(S)}:} Wait until S is greater than zero, then decrement it by one and proceed.
\paragraph{\texttt{signal(S)}:} Increment S by one.
\subsubsection{Suspended tasks}
Busy waiting is bad because it wastes a lot of CPU time. We use suspension instead: \texttt{wait} on a zero semaphore invokes the RTSS\footnote{Run-time support system.}. The RTSS puts the task in a queue of tasks waiting on that semaphore. At some point, we will be allowed to execute (if the code is correct).
\subsubsection{Implementation}
The scheduler will make sure to make \texttt{wait} and \texttt{signal} \emph{non-preemtible}: Tasks may not be interrupted while executing these operations. The RTSS may disable interrupts briefly.
\subsubsection{Liveness}
\textbf{Liveness} is when a task does not suffer from any of the following:
\begin{itemize}
\item Deadlock: System stuck, cannot proceed.
\item Livelock: E.g. being stuck in a spin lock.
\item Starvation: A task is never executed because it is never scheduled.
\end{itemize}
\subsubsection{Binary and quantity semaphores}
A binary semaphore can be either 1 or 0. A quantity semaphore can be \(0 \dots N\) where \(N\) is some maximum value. The latter can be used to allow a precise maximum number of tasks concurrent access to a resource.
\subsubsection{Semaphores in Ada}
Ada does not directly support semaphores, but you can write your own pretty easily as an abstract structure.
\subsubsection{Semaphores in Java}
Java has several standard packages with concurrency things. Semaphores are included in one of them.
\subsubsection{Semaphores in C/POSIX}
The POSIX API provides counting semaphores.
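A minimal C/POSIX example (not from the curriculum text) using a counting semaphore, initialised to one, for mutual exclusion:
\begin{verbatim}
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t lock;
static int shared_counter;

static void *worker(void *arg)
{
    (void)arg;
    sem_wait(&lock);   /* wait(S): blocks while S == 0, then decrements */
    shared_counter++;  /* critical section */
    sem_post(&lock);   /* signal(S): increments S, possibly waking a waiter */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&lock, 0, 1);            /* binary use of a counting semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%d\n", shared_counter);   /* always 2 */
    sem_destroy(&lock);
    return 0;
}
\end{verbatim}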
\subsubsection{Criticism of semaphores}
Semaphores are error-prone. One error can mess up your whole program. Semaphores aren't really used now, at least not on their own.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Conditional critical regions}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A \emph{critical region} is some code that is guaranteed to be run under mutual exclusion. A \emph{conditional critical region} is a critical region with a guard. (It must both get inside the mutual exclusion and pass the guard.)
An example is a bounded buffer. A producer can only write if a) there is room in the buffer \emph{and} b) it gets mutually exclusive access.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Monitors}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
CCRs are kinda bad because they can be spread throughout the program.
A \emph{monitor} encapsulates all critical regions as procedures in a single module. All procedure calls are guaranteed to execute with mutual exclusion. The RTSS implements correct entry and exit protocols automatically. Condition synchronisation is still needed.
\paragraph{Condition variable:} A semaphore-like thing that also uses \texttt{wait} and \texttt{signal} operators. \texttt{wait} \emph{always} blocks. A blocked task then releases the mutually exclusive hold on the monitor. A \texttt{signal} will release one blocked task, if any. If there are none, \texttt{signal} does nothing.
\subsubsection{Criticism of monitors}
Good for mutual exclusion, but clumsy for condition synchronisation (must resort to low-level semaphore-like primitives). The internals of a monitor can be hard to read.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Mutexes and condition variables in C/POSIX}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Mutexes and condition variables can be combined to function as a monitor.
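A small sketch of the idea, using an invented one-slot buffer: the mutex gives mutual exclusion over the monitor state, the condition variable gives condition synchronisation, and the guard is re-checked in a loop after every wake-up.
\begin{verbatim}
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
static int slot, full;

void put(int v)                       /* "monitor procedure" */
{
    pthread_mutex_lock(&m);
    slot = v;
    full = 1;
    pthread_cond_signal(&nonempty);   /* wake one waiting consumer, if any */
    pthread_mutex_unlock(&m);
}

int get(void)                         /* "monitor procedure" with a guard */
{
    pthread_mutex_lock(&m);
    while (!full)                     /* wait releases the mutex while blocked */
        pthread_cond_wait(&nonempty, &m);
    full = 0;
    int v = slot;
    pthread_mutex_unlock(&m);
    return v;
}
\end{verbatim}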
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Protected objects in Ada}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A protected object encapsulates data and allows only mutually exclusive access. Similar to monitors and CCRs. The data can be accessed in three ways:
\begin{itemize}
\item \textbf{Procedures} give read/write access.
\item \textbf{Functions} give read-only access.
\item \textbf{Entries} are like procedures, but must pass a boolean guard before entering.
\end{itemize}
All these are guaranteed to operate in mutual exclusion. Many functions can do their reading concurrently, but not while a procedure or entry is running.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Synchronized methods in Java}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A method with the \texttt{synchronized} modifier is protected by a lock. As long as all functions on a module with concurrent access are synchronised, full mutual exclusion is guaranteed. If access is only needed in part of the function, \emph{block synchronisation} may be used.
\subsubsection{Waiting and notifying}
\begin{itemize}
\item \texttt{wait()} tells the calling thread to release the lock and sleep until it's awakened.
\item \texttt{notify()} wakes one waiting thread. It does not release the lock. The awokened thread keeps waiting until the lock is available.
\item \texttt{notifyAll()} is like \texttt{notify}, but wakes all sleeping threads.
\end{itemize}
It is safe, but a bit wasteful, to always check the waiting conditions and call \texttt{wait} in a loop, and always wake with \texttt{notifyAll}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Shared memory multiprocessors}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A perfect program would work equally well on one processor and on SMP (symmetric multiprocessor). In practice this is never the case, and programs that seem to be good on one CPU might be awful on SMP.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Atomic actions, concurrent tasks and reliability}\label{sec:atomic}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Atomic actions}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Definition:} An action that---so far as other tasks are concerned---is indivisible and instantaneous.
\subsubsection{Two-phase atomic actions}
\begin{enumerate}
\item The coordinator gathers votes on whether to commit.
\item The coordinator makes a decision and notifies.
\end{enumerate}
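A minimal sketch of the coordinator's decision logic; the two messaging functions are placeholders, not any particular API.
\begin{verbatim}
enum vote { VOTE_COMMIT, VOTE_ABORT };

enum vote collect_vote(int participant);             /* phase 1 (placeholder) */
void announce(int participant, enum vote decision);  /* phase 2 (placeholder) */

void two_phase_commit(int n_participants)
{
    enum vote decision = VOTE_COMMIT;

    /* Phase 1: a single abort vote (or, in practice, a timeout) aborts all. */
    for (int i = 0; i < n_participants; i++)
        if (collect_vote(i) == VOTE_ABORT)
            decision = VOTE_ABORT;

    /* Phase 2: every participant learns the same decision. */
    for (int i = 0; i < n_participants; i++)
        announce(i, decision);
}
\end{verbatim}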
\subsubsection{Atomic transactions}
An atomic action that may either succeed or fail (that is, it's okay to fail in a way it cannot recover from). In case of failure, it rolls back. A normal atomic action would make a mess in case of unrecoverable failure.
\subsubsection{Requirements for atomic actions}
\begin{itemize}
\item Well-defined boundaries: The start boundary is the location in each task where the action begins. The end boundary is equivalent for the end of the action. The side boundaries separate participants from non-participants.
\item Indivisibility: Allowing communication between participants and no one else.
\item Nesting: Nesting is okay as long as the inner action is fully contained by the outer.
\item Concurrency: Concurrent atomic actions should be possible.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Recoverable atomic actions}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Backward error recovery}
A \emph{conversation} is an atomic action that also has a recovery block:
\begin{itemize}
\item Upon entry, all tasks make recovery points.
\item As with atomic actions, no communication except between participants.
\item All tasks must pass an acceptance test to exit.
\item If any participant fails, all participants roll back to the recovery point.
\end{itemize}
In some definitions, all tasks must enter before any may leave. Alternatively, participation can be made non-compulsory. Then a task with a deadline can leave and do something else if it needs to.
\subsubsection{Forward error recovery}
Main difference: Uses exceptions. If an exception occurs, it is raised in all participants. (Asynchronous exceptions.)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Asynchronous notification}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Resumption:} (Also called event handling.) Behaves like a software interrupt. A handler responds to the asynchronous event. This changes the flow of control temporarily: The interrupted task continues when the handler is finished.
\paragraph{Termination:} Each task specifies a domain in which it can accept asynchronous notifications. A notification will terminate the domain. Called ATC: Asynchronous transfer of control.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Scheduling real-time systems}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Scheduling is either \emph{static} or \emph{dynamic}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The cyclic executive approach}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Given a fixed set of periodic tasks you can plan the schedule in advance. A number of fixed-duration minor cycles make up a major cycle. Repeatedly running the major cycle will give proper scheduling. This approach only works for simple systems.
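A sketch of the idea (task bodies and the timer routine are placeholders): a fixed table of minor cycles is executed forever, one iteration of the outer loop per major cycle.
\begin{verbatim}
void task_a(void); void task_b(void); void task_c(void);  /* periodic tasks (placeholders) */
void wait_for_minor_cycle_interrupt(void);                /* timer (placeholder) */

void cyclic_executive(void)
{
    for (;;) {                         /* one iteration == one major cycle */
        wait_for_minor_cycle_interrupt(); task_a(); task_b(); task_c();
        wait_for_minor_cycle_interrupt(); task_a(); task_b();
        wait_for_minor_cycle_interrupt(); task_a(); task_b(); task_c();
        wait_for_minor_cycle_interrupt(); task_a(); task_b();
    }
}
\end{verbatim}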
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Task-based scheduling}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Scheduling approaches}
\begin{itemize}
\item Fixed-priority scheduling
\item Earliest deadline first
\item Value-based scheduling
\end{itemize}
\subsubsection{Scheduling characteristics}
\begin{itemize}
\item \emph{Sufficiency:} The test can guarantee that all deadlines are met.
\item \emph{Necessity:} Failing the test will lead to missing a deadline.
\item \emph{Sustainability:} Improved circumstances do not make a previously passed test fail.
\end{itemize}
\subsubsection{(Non-)Preemption}
In a preemptive scheme, a higher-priority task will immediately be allowed to run. In a non-preemptive scheme, it must wait for the lower-priority task to finish.
\subsubsection{Simple task model}
For analysis, we assume the program follows a simple model:
\begin{itemize}
\item Fixed set of tasks.
\item Periodic tasks, known periods.
\item Independent tasks.
\item No overhead.
\item Deadlines equal to periods.
\item Fixed worst-case execution time.
\item No internal suspension points.
\item A single CPU.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Fixed-priority scheduling}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Each task is assigned a unique priority based on its period. (Frequent task gets high priority).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Utilisation-based schedulability tests for FPS}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
If the total utilisation for a task set is below a bound, all tasks will meet their deadlines. The bound decreases when the number of tasks increases, but no lower than \(69.3 \; \% \). The utilisation and bound is given by \( \sum_{i=1}^{N} \left( \frac{C_i}{T_i} \right) \leq N (2^{\frac{1}{N}} - 1) \), where \(C\) is the worst-case execution time, \(T\) is the period, and \(N\) is the number of tasks.
This test is sufficient, but not necessary.
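As a small worked example (task parameters invented for illustration), three tasks with \(C_1 = 10, T_1 = 40\), \(C_2 = 10, T_2 = 50\) and \(C_3 = 15, T_3 = 60\) give
\begin{equation*}
U = \frac{10}{40} + \frac{10}{50} + \frac{15}{60} = 0.70 \leq 3(2^{\frac{1}{3}} - 1) \approx 0.78,
\end{equation*}
so the test guarantees that all three tasks meet their deadlines.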
\subsubsection{Improved utilisation-based tests for FPS}
Group tasks into task families within which all tasks' periods are multiples of each other. The \(N\) from the equation above instead becomes the number of families. This is better, but not sustainable.
Another test is \( \prod_{i=1}^{N} \left( \frac{C_i}{T_i} + 1 \right) \leq 2 \).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Response time analsis for FPS}\label{sec:rta}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Starting with the highest-priority task, calculate worst-case response time of each task:
\begin{equation*}
w_i^{n+1} = C_i + \sum_{j \in hp(i)} \left\lceil \frac{w_i^n}{T_j} \right\rceil C_j
\end{equation*}
where \(C\) is the computation time, \(T\) is the period, \(hp(i)\) is the set of higher priority tasks, and \(w_i^0 = C_i\). If at some point \(w_i^n = w_i^{n+1}\), then that is the worst-case response time. \comment{mention stopping criteria}
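As a small worked example (numbers invented for illustration), take a high-priority task with \(C_1 = 1, T_1 = 4\) and a lower-priority task with \(C_2 = 3, T_2 = 10\). Then
\begin{equation*}
w_2^0 = 3, \qquad
w_2^1 = 3 + \left\lceil \frac{3}{4} \right\rceil \cdot 1 = 4, \qquad
w_2^2 = 3 + \left\lceil \frac{4}{4} \right\rceil \cdot 1 = 4 = w_2^1,
\end{equation*}
so the recurrence has converged and the worst-case response time of task 2 is \(R_2 = 4 \leq T_2\).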
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Sporadic and aperiodic tasks}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Assume \(T\) is the minimum interval of a task, and that the deadline \(D\) can be shorter than \(T\). The same method as in Section \ref{sec:rta} works here, but with stopping criterion \(w_i^{n+1} > D_i\).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Task systems with \emph{D} < \emph{T}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
For \(D < T\), \emph{deadline monotonic} priority ordering (DMPO) is optimal.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Task interactions and blocking}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Priority inversion} is when poor scheduling makes an important thread wait for less important threads. Example: A low-pri thread runs first, and gets a mutex. A mid-pri preempts it. A high-pri preempts that, and wants the mutex. It suspends because it must wait on the mutex, and the mid-pri gets to continue. When the mid-pri is done, the low-pri can run again, and finish with the mutex. Only after that will the high-pri be allowed to run again. (Here it would be better to prioritise finishing the thread that holds the required mutex, and running the mutex-independent mid-pri after the high-pri.)
\paragraph{Priority inheritance:} If a high-pri needs a lower-pri to do something first, the lower-pri inherits the priority of the high-pri, so it can finish quickly.
\comment{maybe something about max blocking time and usage and whatnot}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Priority ceiling protocols}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Running a priority ceiling protocol on one processor gives:
\begin{itemize}
\item A high-pri can be blocked no more than once by a lower-pri.
\item Deadlocks are prevented.
\item Transitive blocking is prevented.
\item Mutual exclusive access to resources is ensured.
\end{itemize}
\subsubsection{Original ceiling priority protocol}
\begin{enumerate}
\item Tasks have a static default priority.
\item Resources have a static ceiling value (equal to the maximum priority of any task accessing it).
\item Tasks have a dynamic priority that is the maximum of its static priority and any inherited priority (from blocking higher-pris).
\item A task can only lock a resource if its dynamic priority is higher than the ceiling of any currently locked resource.
\end{enumerate}
With this, a second resource can only be locked if there is no higher priority task that uses both resources. A high-priority task can only be blocked once by any lower-priority task.
\subsubsection{Immediate ceiling priority protocol}
\begin{enumerate}
\item Tasks have a static default priority.
\item Resources have a static ceiling value defined (equal to the maximum priority of any task accessing it).
\item Tasks have a dynamic priority that is the maximum of their static priority and the ceiling value of any resources they have locked.
\end{enumerate}
This way a task can only be blocked in the beginning of its execution.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{An extendible task model for FPS}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Release jitter}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Java transactions}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{What is a transaction?}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A transaction (or atomic transaction) have these properties:
\begin{itemize}
\item \emph{Atomicity:} Completes successfully or rolls back all its effects.
\item \emph{Consistency:} Produces consistent results.
\item \emph{Isolation:} Intermediate states are hidden. Transactions appear to happen serially.
\item \emph{Durability:} The effects of a committed transaction are never lost.
\end{itemize}
A \emph{coordinator} is associated with every transaction. Informs all participants about whether they should commit or rollback.
\subsection{Atomicity}
A two-phase commit protocol is usually used to ensure consensus and thus atomicity. If a participant replies with `abort' or does not reply, the whole action aborts. If all answer positive, the coordinator stores the decision and enters phase 2.
\end{document}
"alphanum_fraction": 0.6964803835,
"avg_line_length": 52.1513157895,
"ext": "tex",
"hexsha": "002dd89836a6bd9a4447d03e021ad8da296da43e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jakoblover/ntnu-course-summaries",
"max_forks_repo_path": "TTK4145 Real-time programming/TTK4145-Summary.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jakoblover/ntnu-course-summaries",
"max_issues_repo_path": "TTK4145 Real-time programming/TTK4145-Summary.tex",
"max_line_length": 613,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jakoblover/ntnu-course-summaries",
"max_stars_repo_path": "TTK4145 Real-time programming/TTK4145-Summary.tex",
"max_stars_repo_stars_event_max_datetime": "2019-04-11T02:42:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-05-30T09:19:22.000Z",
"num_tokens": 5290,
"size": 23781
} |
\section{Coin and Multi-Asset Token algebras}
\label{sec:coin-ma}
In this chapter we introduce the concept of a \emph{Token algebra},
which is an abstraction used to generalize this specification to two
different eras depending on the Token algebra chosen.
\begin{definition}[Token algebra]
A \emph{Token algebra} is a partially ordered commutative monoid
$T$, written additively, (i.e. a commutative monoid together with a
partial order, such that addition is monotonic in both variables)
together with the functions and properties as described in Figure
\ref{fig:TokenAlgebra}.
\end{definition}
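Spelled out, monotonicity of addition means that for all $u, v, w \in T$,
\begin{equation*}
  u \leq v ~\Rightarrow~ u + w \leq v + w,
\end{equation*}
and likewise in the other argument (which here is equivalent, since $T$ is commutative).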
%%
%% Figure ValMonoid and its Functions
%%
\begin{figure}[htb]
%
\emph{Functions}
%
\begin{align*}
& \fun{coin} ~\in~ T \to \Coin\\
& \text{Return the Ada contained inside the $\mathsf{ValType}$ element}
\nextdef
%
& \fun{inject} ~\in~ \Coin \to T\\
& \text{Create a $\mathsf{ValType}$ element containing only this amount of Ada}
\nextdef
%
& \fun{policies} ~\in~ T \to \powerset{\PolicyID} \\
& \text{The set of policy IDs required for minting}
\nextdef
%
& \fun{size} ~\in~ T \to \MemoryEstimate\\
& \text{Return the size, in words, of a $\mathsf{ValType}$ element}
\end{align*}
%
\emph{Properties}
%
\begin{align*}
& \fun{coin} \circ \fun{inject} = \fun{id}_{\Coin}
\end{align*}
\caption{Additional functions and properties required for a Token algebra}
\label{fig:TokenAlgebra}
\end{figure}
A Token algebra is precisely the structure required to generalize the
$\Coin$ type of the Shelley specification in transaction outputs for
this ledger. We can then describe the ShelleyMA
transaction processing rules without fixing a concrete Token algebra.
Depending on the Token algebra chosen, we obtain distinct ledgers. In
particular, we get
\begin{itemize}
\item the Allegra ledger rules with $\Coin$, and
\item the Mary ledger rules with $\Value$ (defined below).
\end{itemize}
When multi-asset support on the ledger is introduced, Ada ($\Coin$) will still be
the most common type of asset on the ledger, as the ledger rules enforce that
some quantity of it (specified via
the $\fun{coinsPerUTxOWord}$ protocol parameter) must
be contained in every UTxO on the ledger.
It is the only
type of asset used for all non-UTxO ledger accounting, including deposits,
fees, rewards, treasury, and the proof of stake protocol. For this reason, not
all occurrences of $\Coin$ inside a transaction or in the ledger state can or
should be replaced by the chosen Token algebra.
Below we give the definitions of all the functions that must be defined on
$\Coin$ and $\Value$ in order for them to have the structure of a Token algebra.
In Section \ref{sec:other-valmonoids} we give several other types for which the required
functions (addition, size, etc.) can be meaningfully defined, including
an optimized representation that more closely reflects the one used in the Haskell
implementation of the multi-asset type.
These types are convertible to and from the $\Value$ type, and appear in other
parts of the system as a multi-asset representation which is more suitable in particular
use cases.
\subsection{$\Coin$ as a Token algebra}
This section defines the Token algebra structure for the $\Coin$ type,
see Figure \ref{fig:coin}. The structure of a partially ordered monoid
is inherited from the (additive) integers.
No policies are associated with $\Coin$, since minting of Ada is not allowed.
%%
%% Figure Coin Functions
%%
\begin{figure}[htb]
\begin{align*}
& \fun{coin} = \fun{id}_{\Coin}
\nextdef
%
& \fun{inject} = \fun{id}_{\Coin}
\nextdef
%
& \fun{policies}~v = \emptyset
\nextdef
%
& \fun{size}~v = 0
\end{align*}
\caption{The Token algebra structure of $\Coin$}
\label{fig:coin}
\end{figure}
\subsection{Multi-assets and the Token algebra $\Value$}
Elements of $\Value$ represent heterogeneous collections of assets,
both user-defined and Ada. The Mary era ledger uses $\Value$ as its Token algebra
in order to support multi-assets.
$\Value$ and its Token algebra structure are given in Figure \ref{fig:defs:value}.
\begin{figure*}[t!]
\emph{Derived types}
%
\begin{equation*}
\begin{array}{r@{~\in~}l@{\qquad=\qquad}lr}
\var{aname} & \AssetName & \mathsf{ByteString} \\
\var{pid} & \PolicyID & \ScriptHash \\
\var{adaID} & \AdaIDType & \{~\mathsf{AdaID}~\} \\
\var{aid} & \AssetID & \AdaIDType \uniondistinct (\PolicyID \times \AssetName) \\
\var{quan} & \Quantity & \Z \\
\var{v}, \var{w} & \Value & \AssetID \mapsto_0 \Quantity
\end{array}
\end{equation*}
%
\emph{Token algebra structure of $\Value$}
%
\begin{align*}
& \fun{coin}~\var{v} = \var{v}~\mathsf{AdaID}
\nextdef
%
& \fun{inject}~c = \mathsf{AdaID}~\mapsto_0~\var{c}
\nextdef
%
& \fun{policies}~v = \{~\var{pid}~\vert~(\var{pid},~\wcard) \in \supp v~\}
\nextdef
%
& \fun{size} ~~~ \text{see Section \ref{sec:value-size}}
\end{align*}
\caption{$\Value$ and its Token algebra structure}
\label{fig:defs:value}
\end{figure*}
\begin{itemize}
\item $\PolicyID$ identifies monetary policies. A policy ID $\var{pid}$ is associated with a script
$s$ such that $\fun{hashScript}~s=pid$. When a transaction attempts to create or destroy assets
that fall under the policy ID $\var{pid}$,
$s$ verifies that the transaction
respects the restrictions that are imposed by the monetary policy.
See sections \ref{sec:transactions} and \ref{sec:utxo} for details.
\item $\AssetName$ is a byte string used to distinguish different assets with the same $\PolicyID$.
Each $\var{aname}$ identifies a particular kind of asset among all the assets under the
$\var{pid}$ policy (the same asset name may, however, also occur under other policies).
The maximum length of this
byte string is 32 bytes (this is not explicitly enforced in this specification).
\item $\AssetID$ is either $\mathsf{AdaID}$ or a pair of a policy ID and an asset name.
It is a unique and permanent
identifier of an asset. That is, there is no mechanism to change it, or
any part of it, for any asset.
Mary MA assets are fungible with each other if and only if they have the same $\AssetID$.
The reason the unique identifier is a pair (except for the non-mintable Ada case) is to allow
minting arbitrary collections of unique assets under a single policy.
\item $\mathsf{AdaID}$ is a special asset ID for Ada, different than all other asset IDs.
It is a term of the single-term type $\AdaIDType$.
It does not include a policy, so instead, the validation outcome in the presence
of Ada in the $\fun{mint}$ field of the transaction is specified in the UTXO
ledger rule. The rule disallows the $\fun{mint}$ field to contain Ada.
\item $\Quantity$ is an integer type that represents an amount of a specific $\AssetName$. We associate
a $q\in\Quantity$ with a specific asset to track how much of that asset is contained in a given asset value.
\item $\Value$ is the multi-asset type that is used to represent
a collection of assets, including Ada. This type is a finitely supported map.
If $\var{aid}$ is an $\AssetID$ and $v \in \Value$,
the quantity of assets with that asset ID contained in $v$ is $v~\var{aid}$.
Elements of $\Value$ are sometimes also referred to as
\emph{asset bundles}.
\end{itemize}
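As a quick sanity check, the property required in Figure~\ref{fig:TokenAlgebra} holds for this structure: unfolding the definitions in Figure~\ref{fig:defs:value}, for every $c \in \Coin$,
\begin{equation*}
  (\fun{coin} \circ \fun{inject})~c ~=~ (\mathsf{AdaID} \mapsto_0 c)~\mathsf{AdaID} ~=~ c.
\end{equation*}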
To give $\Value$ the structure of a partially ordered monoid, we define the
required operations
pointwise, in accordance with the usual definitions of these operations on
finitely supported maps. See Figure \ref{fig:pointwise}.
\begin{figure*}[t!]
\begin{align*}
v + w &= \{~ aid \mapsto v~\var{aid} + w~\var{aid} ~\vert~ \var{aid} \in \fun{dom}~v \cup \fun{dom}~w ~\} \\
v \leq w &\Leftrightarrow \forall~\var{aid} \in \AssetID, v~\var{aid} \leq w~\var{aid}
\end{align*}
\caption{Pointwise operations on Value}
\label{fig:pointwise}
\end{figure*}
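For illustration (the quantities are arbitrary), let $\var{pid}$ be a policy ID and $\var{aname}$ an asset name. Using the pointwise definitions above,
\begin{align*}
  v &= \{~\mathsf{AdaID} \mapsto 2,~ (\var{pid}, \var{aname}) \mapsto 1~\}, \\
  w &= \{~(\var{pid}, \var{aname}) \mapsto 3~\}, \\
  v + w &= \{~\mathsf{AdaID} \mapsto 2,~ (\var{pid}, \var{aname}) \mapsto 4~\},
\end{align*}
and $w \leq v + w$, whereas neither $v \leq w$ nor $w \leq v$ holds, since the comparison must hold for every asset ID.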
\subsection{Special Ada representation}
Although all assets are native on the Cardano ledger (i.e.\ the accounting and
transfer logic for them is done directly by the ledger), Ada is still treated in a
special way by the ledger rules, and is the most common type of asset on the ledger.
It can
be used for special purposes (such as fees) for which other assets cannot be used.
The underlying consensus algorithm relies on Ada in a way that
cannot be extended to user-defined assets.
Ada can also neither be minted nor burned.
Note that in the $\Value$ definition above, we pick a special asset ID for Ada, that
is not part of the type which represents the asset IDs for all other assets.
Combining the asset name and policy for Ada gives a type-level guarantee that there is exactly
one kind of asset that is associated with it in any way, rather than
deriving this guarantee as an emergent property of minting rules.
Additionally, not giving Ada an actual policy ID
(that could have a hash-associated policy) eliminates the possibility of
certain cryptographic attacks.
We sometimes refer to Ada as the primary or principal currency. Ada does not,
for the purposes of the Mary ledger specification, have a $\PolicyID$ or an $\AssetName$.
\subsection{Fixing a Token algebra}
For the remainder of this specification, let $\ValMonoid$ be an
arbitrary but fixed Token algebra. As described above, choosing
$\Coin$ results in the Allegra ledger, and choosing $\Value$ results
in the Mary ledger.
\documentclass[11pt, twoside, pdftex]{article}
% This includes all the settings that we should use for the document
\newcommand{\PDFTitle}{Using ROS* on DE-Series Boards}
\newcommand{\commonPath}{../../Common}
\input{\commonPath/Docs/defaulttext.tex}
\input{\commonPath/Docs/preamble.tex}
\usepackage{placeins}
%%%%%%%%%%%%%%%%%%%%%%%%%
% Add title
\newcommand{\doctitle}{Using ROS* on DE-Series Boards}
\newcommand{\dochead}{Using ROS* on DE-Series Boards}
% Usually no need to change these two lines
\title{\fontfamily{phv}\selectfont{\doctitle} }
\chead{ \small{\textsc{\bfseries \dochead} } }
% Customizations
%%%%%%%%%%%%%%%%%%%%%%%%%
% Allows multiple fig\refures per page
\renewcommand\floatpagefraction{.9}
\renewcommand\topfraction{.9}
\renewcommand\bottomfraction{.9}
\renewcommand\textfraction{.1}
\setcounter{totalnumber}{50}
\setcounter{topnumber}{50}
\setcounter{bottomnumber}{50}
\raggedbottom
\usepackage{listings}
\usepackage{accsupp}
\usepackage{color}
\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}
\newcommand{\noncopynumber}[1]{
\BeginAccSupp{ActualText={}}
#1
\EndAccSupp{}
}
\lstset{frame=tb,
language=C++,
aboveskip=3mm,
belowskip=3mm,
showstringspaces=false,
columns=flexible,
basicstyle={\small\ttfamily},
numbers=left,
numberstyle=\tiny\noncopynumber,
keywordstyle=\color{blue},
commentstyle=\color{dkgreen},
stringstyle=\color{mauve},
breaklines=true,
breakatwhitespace=true,
tabsize=4
}
%%%%%%%%%%%%%%%%%%
%%% DOCUMENT START
%\begin{document}
\begin{document}
\begin{table}
\centering
\begin{tabular}{p{5cm}p{4cm}}
\hspace{-3cm}
&
\raisebox{1\height}{\parbox[h]{0.5\textwidth}{\Large\fontfamily{phv}\selectfont{\textsf{\doctitle}}}}
\end{tabular}
\label{tab:logo}
\end{table}
\colorbox[rgb]{0,0.384,0.816}{\parbox[h]{\textwidth}{\color{white}\textsf{\textit{\textBar}}}}
\thispagestyle{plain}
\section{Introduction}
This tutorial describes the usage of Robot Operating System (ROS) with a release of Linux* which is available for a variety of embedded systems that feature an Intel\textsuperscript{\textregistered} Cyclone\textsuperscript{\textregistered} V SoC device. The content in this tutorial can be applied to the following development and education (DE-series) boards: DE1-SoC, DE10-Standard, and DE10-Nano. For this tutorial we will assume that the reader is using the DE10-Standard board. If you are using a different board, then some minor adjustments to the instructions given in the tutorial may be needed. We will make note of such cases in various sections of the tutorial.
ROS* is a framework made for facilitating communication, interaction, and control between various modular processes. ROS runs only on Unix-based platforms, so for this tutorial we will be running ROS using a custom Linaro-based platform that runs on the ARM* processor in the Cyclone~V SoC device. In this tutorial, we will cover creating a simple ROS network on the DE-Series board and also show how we can interface ROS with the hardware resources on the DE-series board. These resources include peripherals in the hard processor system (HPS), and custom hardware peripherals implemented within the FPGA in the SoC device.
{\bf Contents:}
\begin{itemize}
\item Getting Started with ROS on the board
\item Creating a Simple Publisher and Subscriber in ROS
\item Interacting with Hardware Components
\end{itemize}
{\bf Requirements:}
\begin{itemize}
\item One of the DE-series development and education boards mentioned above. These boards are
described on Intel's FPGA University Program website, and are available from the manufacturer
Terasic Technologies.
\item Host computer, running either Microsoft* Windows* (version 10 is recommended) or Linux
(Ubuntu, or a similar Linux distribution). The host computer would typically be either a
desktop computer or laptop. You will use the host computer to send commands to the Linux
running on the DE-series board, and to see the results produced by those commands.
\item Mini-USB cable, Ethernet cable, and/or WiFi USB adaptor for connecting the DE-series board
to the host computer
\item MicroSD card (8 GB or larger)
\end{itemize}
\section{Getting Started with ROS* on DE-Series Boards}
ROS is a meta operating system that is used as a functionality framework in many robotics applications. While ROS is typically run on traditional desktop CPUs, this tutorial gives an overview of using it on embedded systems such as the DE-Series boards.
%This tutorial expects users to already be familiar with the basics of using ROS. For an introduction to using ROS, you may wish to go through the ROS beginner tutorials on the ROS website (\url{http://wiki.ros.org/ROS/Tutorials}).
\subsection{ROS* on the Linux* Distribution Image}
This tutorial requires the use of the Intel FPGA University Program's custom Linux distribution on the DE-Series board. For the DE10-Standard board, this would be the DE10-Standard-UP-Linux-ROS distribution. To get this Linux image running on your board, please first follow the installation instructions found in the tutorial {\it Using Linux on DE-Series Boards}. You will also need to follow the steps in the aforementioned tutorial to connect your board to a host computer for control.
For convenience, we have preinstalled the embedded version of ROS on this Linux image. The version of ROS installed is ROS Hydro and detailed installation instructions can be found at \url{http://wiki.ros.org/hydro/Installation/UbuntuARM}. Note that, for compatibility reasons, we have preinstalled an old version of ROS, and so some of the newest features and packages may not be available. Nevertheless, the core functionality of ROS remains the same in this version, and this tutorial should be completely applicable even for current versions of ROS. For ease of use when working with the ROS framework, we have also preinstalled python-catkin-tools. Both of these tools, along with the various installed drivers and libraries for interacting with the hardware on the DE10-Standard board, should provide a strong starting foundation for working with ROS on this board.
\newpage
\subsubsection{Creating a Catkin Workspace}
ROS work is done under a catkin workspace. This is essentially a folder which is used to organize the source code, build process, and installation of software packages. Create a catkin workspace somewhere on the DE-Series board with the following commands (for this tutorial, we have placed the workspace under the directory \lstinline|~/ROS|):
\begin{lstlisting}
mkdir catkin_ws
cd catkin_ws
catkin init
\end{lstlisting}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{figures/create_catkin_ws.png}
\caption{Creation of a Catkin Workspace - Part I}
\label{fig:createcatkinws}
\end{center}
\end{figure}
\newpage
This creates and initializes a new catkin workspace in a new folder called {\sf catkin\_ws}. The newly created catkin workspace needs a source folder. Create a source folder using \lstinline|mkdir src| and then run \lstinline|catkin build| to have catkin automatically generate your workspace base. Notice that a build and a devel folder have automatically been created for you.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{figures/catkin_build_empty_ws.png}
\caption{Creation of a Catkin Workspace - Part II}
\label{fig:catkinbuildemptyws}
\end{center}
\end{figure}
\subsubsection{Creating a Package}
Under your first catkin workspace, you can create a catkin package by typing \lstinline|catkin create pkg --rosdistro HYDRO tutorial_pkg| in the source folder. This will create a default catkin package with the name {\sf tutorial\_pkg}.
We can build this package (which currently is empty) by again calling \lstinline|catkin build| in the root folder of this catkin workspace. By default this builds all packages in the \lstinline|src| folder and generates the necessary environment variable information for the workspace. This environment variable information can be brought into your current shell by running the command \lstinline|source devel/setup.bash| in the catkin workspace root. It is very important to source the setup.bash file after every build, as otherwise many ROS commands will not work properly.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{figures/catkin_build_tutorial_pkg_1.png}
\caption{Building your First Package}
\label{fig:catkinbuildtutorialpkg1}
\end{center}
\end{figure}
\section{Creating a Simple Publisher and Subscriber}
To add some functionality to our newly created package, we will create some ROS nodes. ROS nodes are essentially executables that are connected to the ROS network and serve as a basic building block of communications and data processing. Two of the most basic nodes of any ROS network are publisher and subscriber nodes. Publisher nodes distribute information to the ROS network by broadcasting a ROS message under a certain ROS topic name. Subscriber nodes can then subscribe to the ROS topic to read the information that the publisher node has published.
\subsection{Creating a Publisher}
We want to create a publisher node in our {\sf tutorial\_pkg} package. Let's begin by changing to the directory that our package is contained in and creating a .cpp file for our code.
\begin{lstlisting}
roscd tutorial_pkg/
mkdir src
touch src/tutorial_publisher.cpp
\end{lstlisting}
Next, open up the file in a text editor of your choice and insert the following code:
\begin{figure}[H]
\begin{lstlisting}
#include <ros/ros.h>
#include <std_msgs/UInt16.h>

void publishData(ros::Publisher pub)
{
    std_msgs::UInt16 msg;
    msg.data = 1;
    pub.publish(msg);
    ROS_INFO("Published: %d", msg.data);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "publisher");
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<std_msgs::UInt16>("published_data", 100);

    ros::Rate loop_rate(10); // 10Hz
    while(ros::ok())
    {
        publishData(pub);
        loop_rate.sleep();
    }
}
\end{lstlisting}
\caption{Code for the publisher}
\end{figure}
The code above creates a very simple publisher that advertises the topic "published\_data" on the ROS network and publishes data to the topic using the \lstinline|publishData()| function. There are three major sections to the code:
\begin{enumerate}
\item In lines 4-10, we define the \lstinline|publishData()| function. This function publishes the number 1 using the publisher it gets passed in as an argument. Lines 6-7 create a \lstinline|std_msgs::UInt16| message and set its data to the number 1. Line 8 publishes the created message, and line 9 calls \lstinline|ROS_INFO()| which prints a message to the console that indicates a successful publishing of the message.
\item In lines 14-16, we initialize the ROS node. Line 14 initializes the ROS framework using the command \lstinline|ros::init()| which gets passed \lstinline|argc|,~\lstinline|argv| and the name of the node to be created. Line 15 instantiates the \lstinline|ros::NodeHandle| object, which is used as an access point for all communications objects in ROS. Line 16 then uses the \lstinline|NodeHandle| object to create a \lstinline|ros::Publisher| that publishes to the ROS topic "published\_data" with a buffer of 100.
\item In lines 18-23, we do the actual work of the node. Line 18 and line 22 together create and use a \lstinline|ros::Rate| object to ensure that the \lstinline|while| loop runs at a frequency of 10 Hz. Each time the \lstinline|while| loop is called, it first checks \lstinline|ros::ok()| in line 19, which returns \lstinline|false| only if a SIGINT is received, this node is somehow shut down, the ROS network is somehow shut down, or no NodeHandle objects exist. Finally, in line 21 we call the function which will publish the desired data.
\end{enumerate}
\subsection{Creating a Subscriber}
Next, we want to create a subscriber node in our {\sf tutorial\_pkg} package. Go back to the directory that our package is contained in and create a .cpp file for our code.
\begin{lstlisting}
roscd tutorial_pkg/
touch src/tutorial_subscriber.cpp
\end{lstlisting}
Open up the file in a text editor of your choice and insert the following code:
\begin{figure}[H]
\begin{lstlisting}
#include <ros/ros.h>
#include <std_msgs/UInt16.h>

void subCallback(const std_msgs::UInt16 msg)
{
    ROS_INFO("Message reads: %d", msg.data);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "subscriber");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe<std_msgs::UInt16>("published_data", 100, subCallback);

    ros::Rate loop_rate(10); // 10Hz
    while(ros::ok())
    {
        ros::spinOnce();
        loop_rate.sleep();
    }
}
\end{lstlisting}
\caption{Code for the subscriber}
\end{figure}
The code above creates a very simple subscriber that connects to the topic "published\_data" on the ROS network and reads and processes data via the subCallback() function. Again, there are three major sections to the code:
\begin{enumerate}
\item Lines 4-7 define the subCallback() function that processes messages incoming into the subscriber node. Currently, we just print the data contained in the message to the console in line 6 with a \lstinline|ROS_INFO()| call.
\item Lines 11-13 function much the same as in the publisher node in that we are creating and initializing the ROS node. The major difference here is that in line 13 we are creating a \lstinline|ros::Subscriber|, which has an additional third parameter that we must pass to it to designate the callback function that is called when the subscriber gets a message. Here, we pass it the \lstinline|subCallback()| function. Note that the single parameter of the callback function automatically receives the incoming ROS message when the callback is invoked by the \lstinline|ros::Subscriber| object.
\item Lines 15-20 do the actual work of the node. Here, we again set a loop rate to run the node at 10 Hz, but instead of calling a publishing function in the \lstinline|while| loop as we did in the publisher, we instead call \lstinline|ros::spinOnce()|. This function tells the ROS node to go through all the messages that its \lstinline|ros::Subscriber| objects have received and call the corresponding callback function for each one. Without this function call, our node would just keep recording messages on the topic \lstinline|published_data| into a queue but never actually do anything!
\end{enumerate}
\subsection{Configuring Cmake}
Catkin uses cmake to manage, build, and install packages. When you created a package in the previous section, the package automatically comes with a CMakeLists.txt that tells cmake what to do with the package during the build process. Additionally, because the package was created using python-catkin-tools, the CMakeLists.txt file comes with some commented out lines that serve as a basic guide for configuring cmake.
Now that we have finished writing the code for our publisher and subscriber, we must get the catkin workspace to build the publisher with our project. Open up the CMakeLists.txt file in the tutorial package directory and navigate to the line that contains the \lstinline|find_package()| call. This command locates packages and sets cmake variables depending on if it was able to successfully locate those packages. In ROS, this is typically used to specify that your package depends on certain other packages given by this command. For this tutorial, we need to change the line to the following:
\begin{lstlisting}
find_package(catkin REQUIRED COMPONENTS roscpp std_msgs)
\end{lstlisting}
Next, navigate to the line that contains the \lstinline|add_executable()| call. This command adds a target to be built by cmake. The first parameter is the name of the target, and the second parameter is the location of the source file to be built. For our package, we have two targets: the publisher and the subscriber, so we need to modify the lines as follows:
\begin{lstlisting}
add_executable(publisher src/tutorial_publisher.cpp)
add_executable(subscriber src/tutorial_subscriber.cpp)
\end{lstlisting}
Then, navigate to the line that contains the \lstinline|add_dependencies| call that is located after the \lstinline|add_executable()| calls we just added. This command tells cmake what your targets depend on, so that it can determine the workspace build order. Here, we need to depend on \lstinline|${catkin_EXPORTED_TARGETS}|, which tells cmake to build all message or service headers before building any of our targets. For our package, we use \lstinline|std_msgs| and so require that to be built first. Therefore, let's add the following lines:
\begin{lstlisting}
add_dependencies(publisher ${catkin_EXPORTED_TARGETS})
add_dependencies(subscriber ${catkin_EXPORTED_TARGETS})
\end{lstlisting}
Finally, navigate to the line that contains the \lstinline|target_link_libraries()| call. This command tells cmake which libraries the linker should link our targets against. For now, we only need to link against \lstinline|${catkin_LIBRARIES}|, and so we should change the lines to the following:
\begin{lstlisting}
target_link_libraries(publisher
${catkin_LIBRARIES}
)
target_link_libraries(subscriber
${catkin_LIBRARIES}
)
\end{lstlisting}
There are many other catkin configuration parameters that will be useful and necessary when building certain other projects. We recommend that the user reference the ROS Hydro catkin documentation located at \url{docs.ros.org/hydro/api/catkin/html/index.html} for more information on configuring cmake with catkin.
\subsection{Configuring your Package Manifest}
You may have noticed that along with creating a CMakeLists.txt, the call to \lstinline|catkin create pkg| also created a package.xml file. This is a file that contains information about our package properties, including meta-data such as package name, author, etc., as well as dependency information on other packages. For the tutorial package, we only depend on the \lstinline|std_msgs| ROS package, and so we need to add the following line to our package.xml:
\begin{lstlisting}
<depend>std_msgs</depend>
\end{lstlisting}
Depending on what you do with your package and the contents inside, your package.xml may be much more complicated than in this example. It is highly recommended to browse the ROS wiki's package.xml page located at \url{http://wiki.ros.org/catkin/package.xml} for more information on writing the package.xml file.
\subsection{Running Your First ROS Node}\label{sec:rosrun}
Now that the build process of the tutorial package is properly configured, we can compile the project and run each ROS node. Change directories back to the root of our catkin workspace, build the catkin packages, and source the workspace. Because we actually have some code to build this time, the build may take some time to complete.
\begin{lstlisting}
cd ~/ROS/catkin_ws
catkin build
source devel/setup.bash
\end{lstlisting}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{figures/catkin_build_tutorial_pkg_2.png}
\caption{Building the Semi-complete Package}
\label{fig:catkinbuildtutorialpkg2}
\end{center}
\end{figure}
After the build finishes, open up three new terminals. In each terminal, first source the setup.bash file in our catkin workspace
\begin{lstlisting}
cd ~/ROS/catkin_ws
source devel/setup.bash
\end{lstlisting}
Then in the first terminal, run \lstinline|roscore|. \lstinline|roscore| is a set of ROS nodes that facilitate the base functionalities of the ROS network. You can think of this as "turning on" the ROS network. Additionally, \lstinline|roscore| must be run before starting any other nodes, and failure to do so will result in an error during the initialization of your node.
In the second terminal, run \lstinline|rosrun tutorial_pkg publisher|. This starts the ROS node that we wrote to publish information to the \lstinline|published_data| topic. You should see the \lstinline|ROS_INFO| printouts of the timestamp and published message.
In the third terminal, run \lstinline|rosrun tutorial_pkg subscriber|. This starts the ROS node that we wrote to subscribe to the \lstinline|published_data| topic. You should see the \lstinline|ROS_INFO| printouts of the timestamp and received message. Note that the received message is the same as the published message, but there is a difference in the timestamps that indicate the delay between sending and receiving the message.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{figures/rosrun_tutorial_pkg_1.png}
\caption{Running the Tutorial Package}
\label{fig:rosruntutorialpkg1}
\end{center}
\end{figure}
Once you are satisfied with the results, hit ctrl-C on each of the terminals to stop the ROS commands being run.
\section{Interacting with Hardware Components}
Now let's add some functionality to the publisher and subscriber. Our overall goal will be to create a system that allows us to control the LEDs by using the switches on the DE-Series board. Our simple ROS system will consist of a publisher that constantly polls the state of the switches. The state of the switches will be represented by a binary number whose $i$th bit is 1 when the $i$th switch is toggled to the on position. The publisher will place this binary number in a \lstinline|uint16| variable, which will then be packaged into a message that is broadcast onto the ROS network under the topic \lstinline|/published_data|. On the other hand, our subscriber will be actively monitoring this ROS topic and will set the LED lights based on the information in the messages it sees on the ROS topic.
\newpage
\subsection{Accessing the Prebuilt Device Drivers}
To access the switches and LED hardware components, we need to use the prebuilt libraries for hardware access that are included in the Linux image. These libraries are located at \lstinline|~/Linux_Libraries/C4DE/include|. To use these libraries with our project, we must include the header files in the C++ source code, tell CMake to link the intelfpgaup.so library when building the project, and make sure to insert the corresponding kernel modules at runtime. For the first step, we add the lines in Figures \ref{lst:publisherheader} and \ref{lst:subscriberheader} below to the beginning of tutorial\_publisher.cpp and tutorial\_subscriber.cpp respectively. Note that the libraries used to access the hardware components are built in C, while ROS runs on C++. This means that we must add the linkage-specification \lstinline|extern "C"| and the pre-processor directive \lstinline|#ifdef __cplusplus| when we include the header files to specify that these are C header files being used in C++ (for more information on why this is necessary, feel free to look up “name mangling” on the Internet).
{\it Note: on the DE10-Nano, the LEDR driver and its corresponding wrappers are named LED. For the DE0-Nano, please reference the files at \lstinline|~/Linux_Libraries/C4DE/include| for header name changes. Make sure to change the code in this section accordingly to match your board.}
\begin{figure}[H]
\begin{lstlisting}
#ifdef __cplusplus
extern "C" {
#endif
#include <intelfpgaup/SW.h>
#ifdef __cplusplus
}
#endif
\end{lstlisting}
\caption{Header include for publisher}
\label{lst:publisherheader}
\end{figure}
\begin{figure}[H]
\begin{lstlisting}
#ifdef __cplusplus
extern "C" {
#endif
#include <intelfpgaup/LEDR.h>
#ifdef __cplusplus
}
#endif
\end{lstlisting}
\caption{Header include for subscriber}
\label{lst:subscriberheader}
\end{figure}
Next, to link the intelfpgaup.so library, we first need to let cmake know where our library is located. Open up the CMakeLists.txt file that is in the root folder of our package and add the following line right after the \lstinline|catkin_package()| call.
\begin{lstlisting}
find_library(intelfpgaup_LIBRARIES NAMES libintelfpgaup.so PATHS /home/root/Linux_Libraries/C4DE)
\end{lstlisting}
The above call to \lstinline|find_library()| tells cmake to look in a specified path for our intelfpgaup.so library and assign the cmake variable \lstinline|intelfpgaup_LIBRARIES| accordingly. Now, to specify that we want to link these libraries to the target, we add the generated cmake variable to our \lstinline|target_link_libraries()| call for both our publisher and subscriber nodes. In the end, it should look something like this:
\begin{lstlisting}
target_link_libraries(publisher
${catkin_LIBRARIES}
${intelfpgaup_LIBRARIES}
)
target_link_libraries(subscriber
${catkin_LIBRARIES}
${intelfpgaup_LIBRARIES}
)
\end{lstlisting}
Note that while we must individually include the library header file for each hardware component that we are using (e.g., LEDR.h), we only have to link the intelfpgaup.so library once to each target in the CMakeLists.txt file.
Finally, to insert the kernel modules for each file, simply run the command \lstinline|insmod <path-to-module>| where <path-to-module> points to one of the kernel modules found in the folder \lstinline|~/Linux_Libraries/drivers|. For this tutorial, we will need to insert the SW.ko and LEDR.ko modules.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{figures/insmod_SW_LEDR.png}
\caption{Inserting the Kernel Modules}
\label{fig:insmodSWLEDR}
\end{center}
\end{figure}
\newpage
\subsection{Using the Prebuilt Device Drivers}
Now, let's update the contents of our publisher and subscriber. Modify the code of tutorial\_publisher.cpp and tutorial\_subscriber.cpp to match the code shown in the corresponding figures below.
\begin{figure}[H]
\begin{lstlisting}
#include <ros/ros.h>
#include <std_msgs/UInt16.h>

#ifdef __cplusplus
extern "C" {
#endif
#include <intelfpgaup/SW.h>
#ifdef __cplusplus
}
#endif

void publishData(ros::Publisher pub)
{
    std_msgs::UInt16 msg;
    int data;
    if(SW_read(&data))
    {
        msg.data = data;
        pub.publish(msg);
    }
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "publisher");
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<std_msgs::UInt16>("published_data", 100);

    ros::Rate loop_rate(10);
    if(SW_open() != 1) return -1;
    while(ros::ok())
    {
        publishData(pub);
        loop_rate.sleep();
    }
    SW_close();
}
\end{lstlisting}
\caption{Code for the Publisher}
\label{lst:publishData()}
\end{figure}
\begin{figure}[H]
\begin{lstlisting}
#include <ros/ros.h>
#include <std_msgs/UInt16.h>

#ifdef __cplusplus
extern "C" {
#endif
#include <intelfpgaup/LEDR.h>
#ifdef __cplusplus
}
#endif

void switchCallback(const std_msgs::UInt16::ConstPtr& msg)
{
    LEDR_set(msg->data);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "subscriber");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe<std_msgs::UInt16>("published_data", 100, switchCallback);

    ros::Rate loop_rate(10);
    if(LEDR_open() != 1) return -1;
    while(ros::ok())
    {
        ros::spinOnce();
        loop_rate.sleep();
    }
    LEDR_close();
}
\end{lstlisting}
\caption{Code for the Subscriber}
\label{lst:switchCallback()}
\end{figure}
The usage of the intelfpgaup.so library for hardware component access is fairly straightforward, and a copy of the header files with documentation can be found in Appendix B of this document. Briefly, each hardware component is controlled by first opening a connection to the hardware component using the \lstinline|open()| command. Then, we can read or write using the specific functions included in the header file for that hardware component. For example, in line 16 of Figure \ref{lst:publishData()} we read from the switches port by calling the \lstinline|SW_read()| function provided by the SW.h header file. Similarly, we write to the LEDR hardware component by calling the \lstinline|LEDR_set()| function in line 14 of Figure \ref{lst:switchCallback()}. Finally, when we are done using the hardware components, we must make sure to close the connection to them using the \lstinline|close()| command, as seen in line 36 of Figure \ref{lst:publishData()} and line 30 of Figure \ref{lst:switchCallback()}.
\subsection{Testing the Tutorial Package}
With all the code set, we can once again try running our package. In a new terminal window, change directories to the root of your catkin workspace and run the following commands:
\begin{lstlisting}
catkin clean --build
catkin build
source devel/setup.bash
roscore &
rosrun tutorial_pkg publisher &
rosrun tutorial_pkg subscriber &
\end{lstlisting}
This starts the three ROS processes mentioned in Section \ref{sec:rosrun} in a single terminal (which is useful for not cluttering up your screen). To make sure that the nodes are working correctly, you can open up a new terminal window, source the setup.bash file in your catkin workspace, and run \lstinline|rostopic echo /published_data|. This echoes the messages being published to \lstinline|/published_data| in your terminal. Try flipping some of the switches and seeing if the published messages change accordingly. You should see an output similar to the image in Figure \ref{fig:rosruntutorialpkg2}, and be able to observe the LEDs on your DE-Series board changing according to the position of the switches.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{figures/rosrun_tutorial_pkg_2.png}
\caption{Running the Tutorial Package with Hardware Interaction}
\label{fig:rosruntutorialpkg2}
\end{center}
\end{figure}
Once you are satisfied with the results, close the terminal window to terminate the ROS processes.
\subsection{Using Custom Hardware with ROS}
It is also possible to interact with custom hardware connected to the DE-Series board. While the interaction with ROS would be done in a similar way as described above, the user must also write a custom driver to access the hardware. Detailed instructions for writing custom drivers for the DE-Series boards can be found in the tutorial {\it Using Linux on DE-Series Boards} under the section {\it Device Drivers}. This tutorial is located on the Intel FPGA University Program Website.
\newpage
\section*{Appendix A~~~~ Using Intel\textsuperscript{\textregistered} FPGA Device Drivers}
\noindent In addition to being able to create your own kernel modules, as
discussed above, the \textit{DE1-SoC-UP} Linux distribution provides a number of pre-built
kernel modules that are available for communicating with hardware modules in the DE1-SoC
Computer. These pre-built modules are listed in Table 1.
Each of the kernel modules listed in Table 1 provides a {\it character device driver} for
accessing a port in the DE1-SoC Computer. This means that each module has a file-based
interface that can be used to read information from the driver, or write information to
it. The file-based interface is provided in the folder \texttt{/dev/IntelFPGAUP}. One way to
read from a driver is to use the Linux \texttt{cat} command. For example, to read the
states of the pushbutton switches you can type \texttt{cat /dev/IntelFPGAUP/KEY}. Usage
information for each driver can be found by writing \texttt{-$\,$-} to the driver. For example,
to see how to use the video driver you could type the Linux command
\texttt{echo -$\,$- > /dev/IntelFPGAUP/video}.
The drivers listed in Table 1 are not inserted into the Linux kernel during the boot
process. To insert a driver, navigate to the directory
\texttt{/home/root/Linux\_Libraries/drivers}. To insert a specific driver, you can use the
Linux command \texttt{insmod}. For example to insert the \texttt{KEY} driver you would
type \texttt{insmod KEY.ko}. To insert all available drivers, you can execute the script
\texttt{load\_drivers}. Similarly, you can then remove an individual driver by using the
command \texttt{rmmod}, or remove all of the drivers by executing the script
\texttt{remove\_drivers}.
For convenience, a set of {\it wrapper} functions is provided in the C language for use with
each character device driver. To use these functions in a program, you need
to provide the statement \texttt{\#include "IntelFPGAUP/{\it xxx}.h"} in your C code,
where {\it xxx} is the name of the driver from Table 1 (\texttt{KEY}, \texttt{SW}, $\ldots$).
In addition, you have to append the option \texttt{-lintelfpgaup} to the \texttt{gcc} command when
compiling your code. The contents of the files {\it xxx.h}, which list all of the
available wrapper functions, are shown in Appendix B. The wrapper source code
files and examples can be found in the directory \texttt{$\sim$/Linux\_Libraries/C4DE}.
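As a minimal illustration (the function names and return conventions follow the listings in the main body of this tutorial and the headers in Appendix B), the following C program reads the slide switches once; it would be compiled with \texttt{gcc example.c -lintelfpgaup}:
\begin{lstlisting}
#include <stdio.h>
#include "IntelFPGAUP/SW.h"

int main(void)
{
    int state;
    if (SW_open() != 1)             // open the slide switch device driver
        return -1;
    if (SW_read(&state))            // read the current switch settings
        printf("SW = 0x%x\n", state);
    SW_close();                     // release the device when done
    return 0;
}
\end{lstlisting}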
\begin{table}[H]
\centering
\begin{tabular}{l|p{8cm}}
{\bf Kernel module} & {\bf Description}\\\hline
\rule{0cm}{.5cm}\texttt{KEY} & Used to access the pushbutton KEY port\\
\texttt{SW} & Used to access the slide switch SW port\\
\texttt{LEDR} & Used to access the red light LEDR port\\
\texttt{HEX} & Used to access the seven-segment HEX display port\\
\texttt{video} & Used to access the VGA video-out port\\
\texttt{audio} & Used to access the digital audio port\\
\texttt{accel} & Used to access the 3-D accelerometer port\\
\end{tabular}
\center{Table 1. Pre-built kernel modules}
\label{tab:drivers}
\end{table}
\subsubsection*{Drivers for the DE10-Standard Board}
If you are using the DE10-Standard board, then all of the device drivers listed in Table 1
are available, as well as their corresponding wrappers. In addition, there is a device
driver called \texttt{LCD}, which controls the $128 \times 64$ LCD display on the
DE10-Standard board. The wrappers provided for the \texttt{LCD} driver are identical to
those of the video driver in Table 1.
\subsubsection*{Drivers for the DE10-Nano Board}
If you are using the DE10-Nano board, then all of the device drivers listed in Table 1 are
available, except for the \texttt{HEX} driver. Also, for this board the \texttt{LEDR}
driver, and corresponding wrappers, is named \texttt{LED}.
\newpage
\section*{Appendix B~~~~ Wrapper Functions for using Character Device Drivers with C Code}
\subsection*{Header File for Pushbutton KEY Device}
\lstset{language=C,numbers=none}
\lstinputlisting{C4DE/include/KEY.h}
\newpage
\subsection*{Header File for Slide Switch SW Device}
\lstinputlisting{C4DE/include/SW.h}
\newpage
\subsection*{Header File for Red Light LEDR Device}
\lstinputlisting{C4DE/include/LEDR.h}
\newpage
\subsection*{Header File for Seven-Segment HEX Device}
\lstinputlisting{C4DE/include/HEX.h}
\newpage
\subsection*{Header File for VGA Video Device}
\lstinputlisting{C4DE/include/video.h}
\newpage
\subsection*{Header File for Digital Audio Device}
\lstinputlisting{C4DE/include/audio.h}
\newpage
\subsection*{Header File for 3-D Accelerometer Device}
\lstinputlisting{C4DE/include/accel.h}
% Copyright and Trademark
\input{\commonPath/Docs/copyright.tex}
\end{document}
% $Id: aboutcryptool.tex 3714 2016-04-08 18:34:16Z esslinger $
% ............................................................................
% TEXT OF THE 2nd PAGE (Overview)
%
% ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\clearpage\phantomsection
\addcontentsline{toc}{chapter}{Overview of the Content of the CrypTool Book}
\chapter*{Overview of the Content of the CrypTool Book}
\parskip 4pt
%\vskip +12 pt
The rapid spread of the Internet has led to intensified research in the
technologies involved, especially within the area of cryptography where a good
deal of new knowledge has arisen.
In this {\em book accompanying the CrypTool programs} \index{CrypTool}
you will find predominantly mathematically oriented information on using
cryptographic procedures. Also included are many sample code pieces written in the
computer algebra system {\bf SageMath}\index{SageMath} (see appendix~\ref{s:appendix-using-sage}).
The main chapters have been written by various {\bf authors}
(see appendix~\ref{s:appendix-authors}) %\hyperlink{appendix-authors}{authors}
and are therefore independent from one another. At the end of most chapters
you will find references and web links.
The sections have been enriched with many {\em footnotes}. Within the footnotes
you can see where the described functions can be called in the different CrypTool
versions.
The \hyperlink{Kapitel_1}{first chapter} explains the principles of symmetric
and asymmetric {\bf encryption} and gives definitions of their resistance against attacks.
For didactic reasons the \hyperlink{Kapitel_PaperandPencil}
{second chapter} gives an exhaustive overview
of {\bf paper and pencil encryption methods}.
Large parts of this book are dedicated to the fascinating topic of
{\bf prime numbers} (chap. \ref{Label_Kapitel_Primes}).
%\hyperlink{Kapitel_2}{{\bf prime numbers}}.
Using numerous examples, {\bf modular arithmetic} and
{\bf elementary number theory} (chap. \ref{Chapter_ElementaryNT})
are introduced. The features of the {\bf RSA procedure} are a key aspect.
By reading chapter \ref{Chapter_ModernCryptography}
you'll gain an insight into the mathematical ideas and concepts behind
{\bf modern cryptography}.
Chapter \ref{Chapter_Hashes-and-Digital-Signatures} gives
an overview of the status of attacks against modern {\bf hash algorithms}
and then briefly covers {\bf digital signatures},
which are an essential component of e-business applications.
Chapter \ref{Chapter_EllipticCurves} describes {\bf elliptic curves}:
They can be used as an alternative to RSA and, in addition, are extremely
well suited for implementation on smartcards.
Chapter \ref{Chapter_BitCiphers} introduces {\bf Boolean algebra}.
Boolean algebra is the foundation for most modern, symmetric encryption algorithms
as these operate on bit streams and bit groups. Principal construction
methods are described and implemented in SageMath.
Chapter \ref{Chapter_HomomorphicCiphers} describes {\bf homomorphic crypto
functions}: They are a modern research topic which has received particular attention
with the rise of cloud computing.
The \hyperlink{Chapter_Crypto2020}{last chapter} {\bf Crypto2020}
discusses threats for currently used cryptographic methods and introduces
alternative research approaches to achieve long-term security
of cryptographic schemes.
Whereas the CrypTool \textit{e-learning programs}\index{e-learning} motivate
and teach you how to use cryptography in practice, the \textit{book} provides
those interested in the subject with a deeper understanding of the mathematical
algorithms used -- trying to do it in an instructive way.
Within the {\bf appendices}
\ref{s:appendix-menu-overview-CT1},
\ref{s:appendix-template-overview-CT2},
\ref{s:appendix-function-overview-JCT}, and
\ref{s:appendix-function-overview-CTO}
you can gain a fast overview about the functions delivered by the different
CrypTool variants\index{CrypTool 1}\index{CrypTool 2}\index{JCrypTool} via:
\begin{itemize}
\item the function list and
the \hyperlink{appendix-menu-overview-CT1}
{menu tree of CrypTool 1 (CT1)},
\item the function list and
the \hyperlink{appendix-template-overview-CT2}
{templates in CrypTool 2 (CT2)},
\item the \hyperlink{appendix-function-overview-JCT}
{function list of JCrypTool (JCT)}, and
\item the \hyperlink{appendix-function-overview-CTO}
{function list of CrypTool-Online (CTO)}.
\end{itemize}
% Bernhard Esslinger, Matthias B\"uger, Bartol Filipovic, Henrik Koy,
% Roger Oyono and J\"org Cornelius Schneider
The authors would like to take this opportunity to thank their colleagues
at their respective companies and at the universities of Bochum, Darmstadt,
Frankfurt, Gie\ss en, Karlsruhe and Siegen.
\enlargethispage{12pt}
As with the e-learning program CrypTool\index{CrypTool}, the quality of the
book is enhanced by your suggestions and ideas for improvement.
We look forward to your feedback.
% Local Variables:
% TeX-master: "../script-en.tex"
% End:
\section{$\alpha$}
\section{Extra section with $\epsilon$ warnings}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Friggeri Resume/CV for A4 paper format
% XeLaTeX Template
% Version 1.1
%
% A4 version author:
% Marvin Frommhold (depressiverobot.com)
% https://github.com/depressiveRobot/friggeri-cv-a4
%
% Original author:
% Adrien Friggeri ([email protected])
% https://github.com/afriggeri/CV
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
% Important notes:
% This template needs to be compiled with XeLaTeX and the bibliography, if used,
% needs to be compiled with biber rather than bibtex.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Options
% 'print': remove colors from this template for printing
% 'nocolors' to disable colors in section headers
\documentclass[]{friggeri-cv-a4}
\addbibresource{bibliography.bib} % Specify the bibliography file to include publications
\begin{document}
\header{Ricardo }{Kleinlein}{AI - machine learning researcher} % Your name and current job title/field
%----------------------------------------------------------------------------------------
% SIDEBAR SECTION
%----------------------------------------------------------------------------------------
\begin{aside} % In the aside, each new line forces a line break
\section{date of birth}
May - 24 - 1992
\section{contact}
Av. Alberto Alcocer 30
Madrid, Madrid, 28036,
Spain
~
+(0034) 600394426
skype: ricardo.kleinlein
~
{\scriptsize \href{mailto:[email protected]}{[email protected]} }
{\scriptsize \href{https://ricardokleinklein.github.io}{ricardokleinklein.github.io} }
{\small \href{https://www.linkedin.com/in/rfaundezcarrasco}{LinkedIn profile} }
\section{languages}
\textbf{ESP} mother tongue
\textbf{ENG} fluent (C1)
\textbf{JPN} basic (N5)
\section{programming}
Python
Java
C/C++
R
Matlab \& Simulink
Bash
CSS3 \& HTML5
\LaTeX
\section{skills}
Responsability
Teamwork
Communication
Fast learner
Mathematical skills
Computer Vision
Speech Processing
HPC
Kalman filters
State-Space models
\end{aside}
%----------------------------------------------------------------------------------------
% GOAL SECTION
%----------------------------------------------------------------------------------------
\section{Summary}
I am a physicist profoundly interested in Artificial Intelligence, with hands-on experience in Deep Learning for time-series analysis, both in speech processing and in medical data.
%----------------------------------------------------------------------------------------
% PUBLICATIONS SECTION
%----------------------------------------------------------------------------------------
\section{Publications}
\begin{itemize}
\item Kleinlein, R., García-Faura, Á., Luna Jiménez, C., Montero, J.M., Díaz-de-María, F. and Fernández-Martínez, F. Predicting Image Aesthetics for Intelligent Tourism Information Systems. Electronics 2019, 8, 671.
\item Kleinlein, R. and Riaño, D. Persistence of data-driven knowledge to predict breast cancer survival. International Journal of Medical Informatics (2019).~(Accepted)
\item Kleinlein, R., Luna-Jiménez, C., Montero, J. M., Callejas, Z. and Fernández-Martínez, F. Predicting Group-Level Skin Attention to Short Movies from Audio-Based LSTM-Mixture of Experts Models. Interspeech 2019~(Accepted)
\end{itemize}
%----------------------------------------------------------------------------------------
% EDUCATION SECTION
%----------------------------------------------------------------------------------------
\section{education}
\begin{entrylist}
%------------------------------------------------
\entry
{2017--Now}
{PhD. student}
{Universidad Politécnica de Madrid}
{\small{Human knowledge, or the representation we build of the world, comes from the joint understanding of all senses (touch, smell, taste, pain...). Traditional machine learning, and even the most recent work on multi-task learning, takes into account just one main task or input data type. We propose novel architectures that try to replicate, in a computational environment, the working principles of the human brain.}}
%------------------------------------------------
\entry
{2015--2017}
{M.Sc. {\normalfont in Automation and Robotics}}
{Universidad Politécnica de Madrid}
{\small{\emph{Deep Learning strategies for the enhancement of Automatic Speech Recognition architectures} }\\ This thesis explored different state-of-the-art techniques such as LSTM cells, word2vec embeddings and convolutional layers, and their effect on an HMM-DNN ASR system. Graded 10/10, candidate for Honors.}
%------------------------------------------------
\entry
{2016--2018}
{M.Sc. {\normalfont in Computational and Mathematical Engineering}}
{\\Universitat Rovira i Virgili \& Universitat Oberta de Catalunya}
{\small{\emph{Prediction of Breast Cancer survival rates}} \\ Thesis reporting on data mining and machine learning algorithms predicting the evolution of both survival rates and treatment effectiveness in breast cancer patients. Accepted for publication in the \emph{International Journal of Medical Informatics}. }
%------------------------------------------------
\entry
{2010--2015}
{Bachelor {\normalfont in Physics}}
{Universidad Autónoma de Madrid}
{\small{\emph{Isotropic-Nematic-Liquid crystal phase transition: a lattice model}} \\ This thesis reported on Monte-Carlo simulations of liquid crystal's lattices undergoing phase transitions due to temperature or shape modifications. Graded 8.8/10.}
%------------------------------------------------
\end{entrylist}
\newpage
%----------------------------------------------------------------------------------------
% WORK EXPERIENCE SECTION
%----------------------------------------------------------------------------------------
\section{experience}
\begin{entrylist}
%------------------------------------------------
\entry
{2017--2018}
{National Institute of Informatics - NII}
{Tokyo, Japan}
{\emph{Research Intern} \\
Working on multi-task models designed and trained so that they can simultaneously perform traditionally unrelated tasks, such as Speech Enhancement, Voice Conversion and Text-to-Speech Synthesis.}
\end{entrylist}
%------------------------------------------------
\begin{entrylist}
%------------------------------------------------
\entry
{2017}
{Escuela Técnica Superior de Ingenieros de Telecomunicación - UPM}
{Madrid, Spain}
{\emph{Research Assistant} \\
Focused on Automatic Speech Recognition, researching different Deep Learning approaches to improve the accuracy of the whole system, at both the acoustic and the language level.}
%------------------------------------------------
\end{entrylist}
%------------------------------------------------
\begin{entrylist}
%------------------------------------------------
\entry
{2015-2016}
{Medicsen}
{London, United Kingdom}
{\emph{Research \& Development} \\
Development of the first fully functional and automatic artificial pancreas for diabetic patients. In charge of building the algorithms and mathematical models of the disease from scratch. Main achievements:
\begin{itemize}
\item Predicted the 2-hours-ahead glycemic level with $85\%$ accuracy from inputs on meal intake, insulin dose and exercise.
\item \textbf{Patent}: MedicSen, 2016. Non-Invasive Artificial Pancreas, U.S. Application 50389, MED-001PR, filed January 2016.
\item Co-speaker with Eduardo Jorgensen (MIT Innovator 2017) in RE-WORK Deep Learning
in Healthcare Summit in London, April 2016.
\end{itemize}
}
\end{entrylist}
%------------------------------------------------
%------------------------------------------------
%\begin{entrylist}
%
%\entry
%{2015}
%{La Paz Hospital}
%{Madrid, Spain}
%{\emph{Intern} \\
%Worked within the Radio-therapy and Nuclear Medicine Departments. Dealt with cancer treatments using Monte-Carlo simulations and denoising of medical images.
%}
%------------------------------------------------
%\end{entrylist}
%----------------------------------------------------------------------------------------
% VOLUNTEERING SECTION
%----------------------------------------------------------------------------------------
\section{volunteering}
\begin{entrylist}
%------------------------------------------------
\entry
{2017}
{Collaborator in AILoveU}
{Madrid}
{Speaker at the Business Institute (IE) in AILoveU Vol.2: "Siri's hearing aid", on the limitations of current Deep Learning technologies and their potential.}
%------------------------------------------------
\entry
{2014}
{Collaborator in ESN-UAM}
{Erasmus Students Network}
{In charge of cultural city tours around the city of Madrid.}
%------------------------------------------------
\entry
{2012}
{Board Member in AEGEE-Madrid}
{European Students Forum, Brussels}
{Treasurer and Summer University organiser of this student organisation, which focuses on student exchanges and non-formal training.}
%------------------------------------------------
\entry
{2012}
{Board Member of Séptimo Arte UAM}
{Universidad Autónoma de Madrid}
{Co-founder and Vice-president of this cinema forum student association.}
%------------------------------------------------
\end{entrylist}
%----------------------------------------------------------------------------------------
% INTERESTS SECTION
%----------------------------------------------------------------------------------------
\section{interests}
\textbf{professional:} data science, neuroscience, quantum computing. \textbf{personal:} karate (1st Dan), meditation, basketball, travelling, beer tasting (awarded in a Prague brewery).
\end{document}
| {
"alphanum_fraction": 0.5905816018,
"avg_line_length": 35.4703703704,
"ext": "tex",
"hexsha": "513ea54a91436ebf2339c6676f852fbe12b7c249",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266326fc4965f7437ba8a1ace510a90e7fb72e78",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ricardokleinklein/resume_template",
"max_forks_repo_path": "cv.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "266326fc4965f7437ba8a1ace510a90e7fb72e78",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ricardokleinklein/resume_template",
"max_issues_repo_path": "cv.tex",
"max_line_length": 424,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266326fc4965f7437ba8a1ace510a90e7fb72e78",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ricardokleinklein/resume_template",
"max_stars_repo_path": "cv.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2026,
"size": 9577
} |
\chapter{Introduction}
This is a dissertation template.
Here is the introduction text.
\input{example_chapter/sections/section_1_example_text}
\input{example_chapter/sections/section_2_example_tables}
\input{example_chapter/sections/section_3_example_figures} | {
"alphanum_fraction": 0.8659003831,
"avg_line_length": 32.625,
"ext": "tex",
"hexsha": "a592378fdea0dd0cacaa67c78641954554e41654",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e8d6666f41cb02041758a5b35df804fd0d2027bd",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Orpheon/Pokerbot-Thesis",
"max_forks_repo_path": "example_chapter/example_chapter.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e8d6666f41cb02041758a5b35df804fd0d2027bd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Orpheon/Pokerbot-Thesis",
"max_issues_repo_path": "example_chapter/example_chapter.tex",
"max_line_length": 58,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e8d6666f41cb02041758a5b35df804fd0d2027bd",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Orpheon/Pokerbot-Thesis",
"max_stars_repo_path": "example_chapter/example_chapter.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 63,
"size": 261
} |
% !TEX root = ../main.tex
\newpage
\section{\theory The Theta Neuron Model} \label{TheThetaNeuronModel}
%\subsection{Canonical neuron models}
A number of neuron model families have been identified, and often there exists a continuous change of variables from models of the same family into a \textit{canonical} model that can represent the whole family \cite{Hoppensteadt2001CanonicalNM}. As the transformation is not required to be invertible, we can study the universal neurocomputational properties of the family in a low dimensional model.
It was Hodgkin \cite{Hodgkin1948} who classified neurons into two types based on their excitability, upon experimenting with the electrical stimulation of cells. Class 1 models begin to spike at an arbitrarily slow rate, and the spiking frequency increases when the applied current is increased. Class 2 models spike as soon as their internal threshold is exceeded and the spiking frequency stays relatively constant within a certain frequency band \cite{Hoppensteadt2001CanonicalNM}.
\subsection{Theta Neuron model description} \label{sec:TheThetaNeuronModelDescription}
In \cite{Ermentrout1986}, a Class 1 canonical phase model was proposed:
\begin{align}
\dot{\theta} = (1-\cos \theta)+(1+\cos \theta) \cdot I \qquad \theta \in \T \label{eq:thetaneuron}
\end{align}
with $I$ a bifurcation parameter on the supplied current. We can visualise the dynamics on the unit circle, like in Figure \ref{fig:thetaneuronbifurcationtikz}. The neuron produces a spike when $\theta$ surpasses $\pi$, upon which $\theta \leftarrow -\pi$.
\begin{figure}[H]
\minipage{0.33\linewidth}
\centering
\begin{tikzpicture}
\draw (0,0) circle [radius=1];
\draw (0,-1.2) node[below]{Excitable regime: $I < 0$};
\draw (-1,0) node[left]{$\pi$};
\draw[fill=black, black] (1,0) circle [radius=0.025];
\draw (1,0) node[right]{0};
\draw[fill=red, red] (-1,0) circle [radius=0.05];
\draw (-1,0) node[right]{spike};
\draw[black, ->] (0.866, 0.5)to[out=-60,in=90](1,0);
\draw[fill=white, draw=black] (0.866,0.5) circle [radius=0.1];
\draw (0.866,0.5) node[left]{\small{threshold}};
\draw[fill=black, draw=black] (0.866,-0.5) circle [radius=0.1];
\draw (0.866,-0.5) node[left]{\small{rest}};
\draw[black, ->] (0.5,0.866)to[out=150,in=0](0,1);
\draw[black, ->] (-0.5,-0.866)to[out=-30,in=180](0,-1);
\end{tikzpicture}
\endminipage
\minipage{0.33\linewidth}
\centering
\begin{tikzpicture}
\draw (0,0) circle [radius=1];
\draw (0,-1.2) node[below]{Bifurcation: $I = 0$};
\draw (1,0) node[right]{0};
\draw[fill=red, red] (-1,0) circle [radius=0.05];
\draw (-1,0) node[right]{spike};
\draw (-1,0) node[left]{$\pi$};
\draw[fill=gray, draw=black] (1,0) circle [radius=0.1];
\draw[black, ->] (0.5,0.866)to[out=150,in=0](0,1);
\draw[black, ->] (-0.5,-0.866)to[out=-30,in=180](0,-1);
\end{tikzpicture}
\endminipage
\minipage{0.33\linewidth}
\centering
\begin{tikzpicture}
\draw (0,0) circle [radius=1];
\draw (0,-1.2) node[below]{Periodic regime: $I > 0$};
\draw (-1,0) node[left]{$\pi$};
\draw (1,0) node[right]{0};
\draw[fill=black, black] (1,0) circle [radius=0.025];
\draw[fill=red, red] (-1,0) circle [radius=0.05];
\draw (-1,0) node[right]{spike};
\draw[black, dotted] (0,0)to(1,0);
\draw(0,0) node[above]{$\theta$};
\draw[black, dotted] (0,0)to(0.866,0.5);
\draw[black, ->] (0.5,0.866)to[out=150,in=0](0,1);
\draw[black, ->] (-0.5,-0.866)to[out=-30,in=180](0,-1);
\end{tikzpicture}
\endminipage
\caption{SNIC bifurcation of the theta neuron model. A spike occurs when $\theta = \pi$. For $I < 0$, the neuron is in a rest state but \textsl{excitable} and we observe one stable and one unstable equilibrium point. For $I > 0$, $\dot{\theta} > 0$ so that $\theta$ moves continuously around the circle and we can observe \textsl{periodic} sustained spiking. The saddle-node bifurcation occurs at $I = 0$, so that $\theta$ will spike when it is larger than 0.}
\label{fig:thetaneuronbifurcationtikz}
\end{figure}
We can recognise the features of the class 1 model in Figure \ref{fig:ThetaNeuronResponseToCurrent}. This makes \eqref{eq:thetaneuron} the normal form of the \textit{saddle-node-on-invariant-circle} ($\SNIC$) bifurcation \cite{Luke2013}, as it collapses $\R$ to $\T$.
\begin{figure}[H]
\centering
\includegraphics[width = \textwidth]{../Figures/ThetaNeuronResponseToCurrent.pdf}
\caption{Properties of the theta neuron model, with solutions of \eqref{eq:thetaneuron} in blue, spikes marked in dotted lines, and the current $I$ in red. Left: the spike frequency of $\theta$ increases as $I$ is increased over time, which is the distinguishing feature of class 1 canonical models. Middle: spikes occur within a finite time period when $I > 0$ and within infinite time when $I = 0$. Right: when $I$ is large, the neuron \textsl{bursts}.}
\label{fig:ThetaNeuronResponseToCurrent}
\end{figure}
Equilibria only exist for the \textsl{excitable} regime $I < 0$:
\begin{align*}
\dot{\theta} &= 1-\cos \theta+I+I \cdot \cos \theta = (I+1)+(I-1) \cdot \cos \theta \\
\theta^{\ast}_{1, 2} &= \pm \arccos \left(\frac{I+1}{1-I}\right)+2 \pi n
\end{align*}
We can find the stability of the equilibria through:
\begin{align*}
\frac{\mathop{d}}{\mathop{d \theta}}((1-\cos \theta)+(1+\cos \theta) \cdot I) &= \sin \theta-\sin \theta \cdot I = (1-I) \cdot \sin \theta
\end{align*}
In the equilibria this yields:
\begin{align*}
	\left. \frac{\mathop{d \dot{\theta}}}{\mathop{d \theta}} \right|_{\theta^{\ast}_{1, 2}} &= \pm(1-I) \cdot \sqrt{1 - \left( \frac{I+1}{1-I} \right)^2 } = \pm(1-I) \cdot \frac{2 \sqrt{-I}}{1-I} = \pm2 \sqrt{-I}
\end{align*}
We find that $\theta^{\ast}_{1}$ is an unstable equilibrium point, and that $\theta^{\ast}_{2}$ is stable. This means that as $\theta$ gets perturbed above $\theta^{\ast}_{1}$, a spike occurs and $\theta$ converges to $\theta^{\ast}_{2}$. This is demonstrated in Figure \ref{fig:ThetaModelEquilibriumPoints}.
\begin{figure}[H]
\centering
\includegraphics[width = \textwidth]{../Figures/ThetaModelEquilibriumPoints.pdf}
\caption{Equilibria $\theta^{\ast}$ for different values of $I < 0$. Left: $I = -1$ yields $\theta^{\ast}_{1,2} = \pm \frac{\pi}{2}$, one of the simulations is started exactly on the unstable equilibrium and stays there. Middle: $I = -0.5$, we see how a spike occurs when $\theta > \theta^{\ast}_{1}$ upon which $\theta \rightarrow \theta^{\ast}_{2}$. Right: bifurcation diagram of the \SNIC bifurcation, with the stable equilibria in blue, and the unstable in red. At $I = 0$ the equilibria merge into one.}
\label{fig:ThetaModelEquilibriumPoints}
\end{figure}
\subsection{Solutions for static currents} \label{sec:TheThetaNeuronModelSolutionPeriodics}
Gaining insight into \eqref{eq:thetaneuron} is hard, due to the difficulty of finding an analytical solution. However, it has been noted that there exists a simple transformation which yields (see \ref{app:TransformationToQIF}):
\begin{align}
V &\equiv \tan \left( \frac{\theta}{2} \right) \label{eq:QIFtransformation} \\
\dot{V} &= V^2 + I \label{eq:QIFmodel}
\end{align}
This model is called the \textsl{Quadratic Integrate and Fire model} (\QIF). \eqref{eq:QIFmodel} models the membrane potential of a neuron, which spikes at $V=\infty$ and resets to $V \leftarrow -\infty$. The transformation \eqref{eq:QIFtransformation} is continuous between spikes, so insights from a solution for $V$ can be transformed directly to $\theta$. The equilibria of the \QIF model are simply $\pm \sqrt{-I}$ (as $I < 0$) so that we can express $\theta^{\ast}_{1, 2} = 2 \arctan \left( \pm \sqrt{-I} \right)$ \cite{Gutkin2014}. \\
The solution for the excitable regime $I < 0$ is :
\begin{align}
V(t) = \frac{2 \sqrt{-I}}{1 - e^{2 t \sqrt{-I}}}-\sqrt{-I} \label{eq:ThetaNeuronModelSolutionPeriodicExcitable}
\end{align}
The solution at the bifurcation $I = 0$ is :
\begin{align}
V(t) = \frac{-1}{t} \label{eq:ThetaNeuronModelSolutionPeriodicBifurcation}
\end{align}
The solution for the periodic regime $I > 0$ is :
\begin{align}
V(t) = -\sqrt{I} \cdot \cot (t \sqrt{I}) \label{eq:ThetaNeuronModelSolutionPeriodic}
\end{align}
These equations assume that a spike has occurred at $t=0$. The steps required to find the solutions (\ref{eq:ThetaNeuronModelSolutionPeriodicExcitable}) to (\ref{eq:ThetaNeuronModelSolutionPeriodic}) are described in \ref{app:ThetaModelSolutions}. Solutions for $\theta$ are found by simply taking the inverse of the transformation in \eqref{eq:QIFtransformation}.
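As a quick sanity check, the closed-form expressions (\ref{eq:ThetaNeuronModelSolutionPeriodicExcitable}) to (\ref{eq:ThetaNeuronModelSolutionPeriodic}) can be verified against the \QIF vector field by finite differences. The short Python sketch below (illustrative only, not part of the implementation used in this work) does this for one value of $I$ per regime.
\begin{verbatim}
import numpy as np

def v_excitable(t, I):    # I < 0, spike at t = 0
    a = np.sqrt(-I)
    return 2.0 * a / (1.0 - np.exp(2.0 * t * a)) - a

def v_bifurcation(t, I):  # I = 0
    return -1.0 / t

def v_periodic(t, I):     # I > 0
    return -np.sqrt(I) / np.tan(t * np.sqrt(I))

def residual(v, t, I, h=1e-6):
    # |dV/dt - (V^2 + I)| should vanish if V solves the QIF equation
    dv = (v(t + h, I) - v(t - h, I)) / (2.0 * h)
    return abs(dv - (v(t, I) ** 2 + I))

for v, I in [(v_excitable, -0.5), (v_bifurcation, 0.0), (v_periodic, 0.5)]:
    print(v.__name__, residual(v, 1.0, I))   # all close to zero
\end{verbatim}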
If the \QIF model is so much simpler, then why bother using the Theta model? Simulating the \QIF model requires an artificial reset threshold, because we cannot expect a computer to represent infinity easily. Finite thresholds make the analytical solutions more difficult and convoluted. By using the Theta model the dynamics remain smooth and bounded on $\T$. %Since we take the cosine of the phase angle in \eqref{eq:thetaneuron}, we do not even
\subsection{Numerical solutions} \label{sec:TheThetaNeuronModelODE45}
When $I$ is not static, we need to revert to numerical solutions. A fixed-step, fourth-order Runge-Kutta method (a fixed-step variant of the Dormand-Prince scheme) was implemented to numerically solve all differential equations. A fixed-step algorithm makes it possible to finely tune the large memory demand of the systems presented in this work.
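For illustration, a minimal fixed-step integrator is sketched below in Python; it uses the classical fourth-order Runge-Kutta step (not the exact scheme of the implementation used in this work), wraps $\theta$ back by $2\pi$ after a spike, and compares the measured inter-spike interval with the analytical period $T = \pi/\sqrt{I}$.
\begin{verbatim}
import numpy as np

def f(theta, I):
    # theta neuron vector field
    return (1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I

def rk4_step(theta, I, dt):
    k1 = f(theta, I)
    k2 = f(theta + 0.5 * dt * k1, I)
    k3 = f(theta + 0.5 * dt * k2, I)
    k4 = f(theta + dt * k3, I)
    return theta + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0

I, dt, theta, t, spikes = 1.0, 1e-4, -np.pi, 0.0, []
while t < 20.0:
    theta_new = rk4_step(theta, I, dt)
    if theta_new >= np.pi:        # spike: register and reset
        spikes.append(t + dt)
        theta_new -= 2.0 * np.pi
    theta, t = theta_new, t + dt

print("measured period  :", np.diff(spikes).mean())
print("analytical period:", np.pi / np.sqrt(I))
\end{verbatim}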
\subsection{Frequency response} \label{sec:TheThetaNeuronModelFrequencyResponse}
\begin{wrapfigure}{r}{0.39\textwidth}
\centering
\vspace{-\baselineskip}
\includegraphics[width = \linewidth]{../Figures/ThetaNeuronfI.pdf}
\caption{Frequency response of the Theta model. For $I \leq 0$ the spike period is infinite, which is why we see the solutions to (\ref{eq:thetaneuron}) approach $\theta = 0$ for $I = 0$ in Figure \ref{fig:ThetaNeuronResponseToCurrent}. }
\label{fig:ThetaNeuronfI}
\end{wrapfigure}
As we already saw in Figure \ref{fig:ThetaNeuronResponseToCurrent}, an increasing current increases the spiking frequency. We can compute this relationship by measuring how long it takes for $V$ to reach a spike: we solve \eqref{eq:ThetaNeuronModelSolutionPeriodic} for $t$ at $V(t) = +\infty$ in \ref{app:ThetaModelFrequencyResponse}. This yields the oscillation period $T = \frac{\pi}{\sqrt{I}}$, which we can see in Figure \ref{fig:ThetaNeuronfI}.
We know that a spike occurs in the excitable regime when $\theta > \theta^{\ast}_{1}$, and in any case in the periodic regime. But the time that it takes to reach the spike can be arbitrarily long, depending on how far we are over $\theta^{\ast}_{1}$. So, spikes will occur, but after a delay that is dependent on the stimulus. Explicitly, if we perturb $\theta(0) = \theta^{\ast}_{1} + \varepsilon$ we obtain from \cite{Gutkin2014}:
\begin{align*}
	T_{\text {spike}} = \frac{-\tanh ^{-1}\left(1+\frac{\varepsilon}{\sqrt{I}}\right)}{\sqrt{I}}
\end{align*}
The delay to the spike blows up as $\varepsilon \rightarrow 0$, so that spikes may occur after a very large delay. \\
In most of our future work, $I$ will not be a static current. We ask ourselves: how sensitively does $T$ depend on $I$ when $I$ is perturbed? We can measure this as a \textsl{relative} perturbation using $dI/I$ and $dT/T$ \cite{IntroductionModelingDynamics}:
\begin{align*}
\left| \frac{dT}{dI} \frac{I}{T} \right| &= \left| \frac{dT / T}{dI / I}\right|
= \left|- \frac{\pi}{2} \left(\frac{1}{\sqrt{I}}\right)^3 \frac{I}{T} \right|
= \left| \frac{\pi}{2} \left(\frac{T}{\pi}\right)^3 \frac{I}{T} \right|
= \frac{1}{2} \left|\left(\frac{T}{\pi}\right)^2 \left(\frac{\pi}{T}\right)^2 \right| = \frac{1}{2}
\end{align*}
Hence, a 1\% change in $I$ will result in a 0.5\% change in the period.
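This factor of one half is easy to check numerically; the snippet below (illustrative sketch) perturbs $I$ by 1\% and confirms that the period shifts by roughly 0.5\%.
\begin{verbatim}
import numpy as np

T = lambda I: np.pi / np.sqrt(I)   # oscillation period
I = 2.0
dI = 0.01 * I                      # a 1% perturbation of the current
rel_dT = (T(I) - T(I + dI)) / T(I)
print(rel_dT / (dI / I))           # approximately 0.5
\end{verbatim}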
\subsection{Phase response} \label{sec:TheThetaNeuronModelPhaseResponse}
Perturbations on the period can also be understood from the perspective of the phase. Changes to the phase $\theta$ can delay or advance the event of a spike, and in general this depends on exactly when the stimulus occurs. The phase response curve (\PRC) gives us exactly that relation \cite{Perez2020, Gutkin2014}.
Let us define $\phi \in [0, T[$, which represents the time since the last spike event. When we add a small perturbation $\varepsilon$ to $\theta$ at time $\phi$, a spike will occur at $T_{\phi}$, and we have that $\theta(\phi_{\text{new}}) = \theta(\phi) + \varepsilon$. The time to the new spike is now $T_{\phi} = T + (\phi - \phi_{\text{new}})$. The \PRC can then be defined as:
\begin{align}
\PRC(\phi) = T - T_{\phi} \label{eq:PRC1}
\end{align}
The \PRC is thus the expected shift of the period as a function of the timing of the perturbation. This process has been visualised in Figure \ref{fig:ThetaNeuronPRC}, after \cite{Perez2020}. For infinitesimally small perturbations to the phase, we can find the \PRC as the \textsl{adjoint} of the solution \cite{Gutkin2014}:
\begin{align}
\PRC(\phi) = \frac{1}{d V(\phi) / d \phi} = \frac{1}{2 \sqrt{I}} \left(1-\cos \left( \frac{2 \pi}{T} \phi \right) \right) \label{eq:PRC2}
\end{align}
We can use $\phi \in [0, T[$ and $\theta \in \T$ to see that \eqref{eq:PRC2} can be expressed as:
\begin{align}
\PRC(\theta) \sim 1 + \cos \theta \label{eq:PRC3}
\end{align}
which is the magnitude with which $I$ excites the model, see \eqref{eq:thetaneuron} \cite{Ermentrout1996}. Analysis of the \PRC thus also allows us to study how an input of magnitude $I$ perturbs $\theta$ over the course of the period. The \PRC is always positive, which indicates that a positive perturbation will advance the spike in time, and vice versa. This has also been reported as a distinguishing feature of Class 1 models \cite{Ermentrout1996}. \\
As perturbations are not always small, we need to dig a little deeper. An exact formulation for $T_{\phi}$ can be obtained by integrating after the perturbation, see \ref{app:ThetaModelPhaseResponse}. The \PRC then becomes:
\begin{align}
	\PRC(\phi, \varepsilon) = \frac{1}{\sqrt{I}}\left(\frac{\pi}{2} + \arctan \left(\frac{\varepsilon}{\sqrt{I}} - \cot \left(\phi\sqrt{I}\right)\right)\right) - \phi \label{eq:PRCepsilon}
\end{align}
When plotting \eqref{eq:PRCepsilon} for different values of $\varepsilon$ in Figure \ref{fig:ThetaNeuronPRC}, we can see that larger perturbations yield larger shifts of the phase. For infinitesimally small perturbations the response of \eqref{eq:PRC2} is symmetric about the middle of the period. However, here we see that with increasing $\varepsilon$ the \PRC loses symmetry and skews to the left. For large perturbations, we can expect the greatest shift of the phase briefly after the action potential. This tendency continues to skew to the left for increasing $I$, and the \PRC converges to a value of $T$ at $\phi = 0$. A shift with magnitude $T$ is really the largest we can achieve with a single perturbation. The effect of a perturbation diminishes over the period: as $\theta$ approaches $\pi$, the ability to advance a spike in time disappears.\\
\begin{figure}[H]
\centering
\includegraphics[width = \textwidth]{../Figures/ThetaNeuronPRC.pdf}
\caption{Response of the Theta model to perturbations on the phase. Left: a perturbation $\varepsilon < 0$ at time $\phi$ perturbs $\theta(t)$ (in blue), which results in a delayed spike (trajectory in red). For $\varepsilon > 0$ spikes are advanced in time. Right: the \PRC, given by \eqref{eq:PRCepsilon}, plotted in orange for $\varepsilon$ ranging from 0.1 to 1 - a weaker perturbation is drawn more translucent. A solution for $\theta$ (in blue) shows when the model is the most susceptible to perturbations over the course of one period.}
\label{fig:ThetaNeuronPRC}
\end{figure}
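The skew of the \PRC for finite perturbations can be reproduced directly from \eqref{eq:PRCepsilon}. The sketch below (Python, illustrative only) evaluates it over one period for a few values of $\varepsilon$; the maximum of the curve moves towards earlier $\phi$ as $\varepsilon$ grows.
\begin{verbatim}
import numpy as np

def prc(phi, eps, I):
    # phase response to a finite perturbation eps applied at time phi
    sq = np.sqrt(I)
    return (np.pi/2 + np.arctan(eps/sq - 1.0/np.tan(phi*sq))) / sq - phi

I = 1.0
T = np.pi / np.sqrt(I)
phi = np.linspace(1e-3, T - 1e-3, 7)
for eps in (0.1, 0.5, 1.0):
    print(eps, np.round(prc(phi, eps, I), 3))
\end{verbatim}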
| {
"alphanum_fraction": 0.7171968826,
"avg_line_length": 79.8673469388,
"ext": "tex",
"hexsha": "e6afe5a1694e59ff926953c248eadfbce4ed2424",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "506a4e8aba392330f8a6ecc6b229e2c8322b8e83",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "simonaertssen/AdaptiveNeuronalNetworks",
"max_forks_repo_path": "Writing/Mainmatter/Theta Neuron Model.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "506a4e8aba392330f8a6ecc6b229e2c8322b8e83",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "simonaertssen/AdaptiveNeuronalNetworks",
"max_issues_repo_path": "Writing/Mainmatter/Theta Neuron Model.tex",
"max_line_length": 867,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "506a4e8aba392330f8a6ecc6b229e2c8322b8e83",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "simonaertssen/AdaptiveNeuronalNetworks",
"max_stars_repo_path": "Writing/Mainmatter/Theta Neuron Model.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4963,
"size": 15654
} |
\documentclass[10pt,twocolumn]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\graphicspath{ {img/} }
\usepackage{geometry}
\geometry{left=3cm,right=3cm,top=3cm,bottom=3cm}
\usepackage{enumitem}
\usepackage{natbib}
\usepackage[symbol]{footmisc}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\begin{document}
\title{CrankChain: Decentralized Asset Registration and Exchange}
\author{
{\normalsize Edward E. Kim}\\
\normalsize cranklin.com
}
\date{}
\maketitle
\subsection*{Abstract}
The author introduces CrankChain, a decentralized network for asset registration and exchange. CrankChain enables arbitrary asset registration on the network and can be used to establish one's ownership of the asset with the added benefit of tokenization; assets can be distributed and exchanged as first-class members on-chain without relying on smart contracts, a virtual machine or a state machine. Furthermore, metadata pertaining to an asset can be appended to the asset's history by its owner(s). CrankChain achieves this without sacrificing the simplicity or the security of a single-dimensional blockchain.
\section{Introduction}
Many of the complaints surrounding \textit{Bitcoin} and other single-dimensional blockchains point towards the suboptimal transaction speed and cost\cite{wiki:btcscalability}. While this makes it less than ideal for microtransactions, the benefits of this design are often overlooked. \textit{Brewer's (CAP) Theorem}\cite{computer:brewercap} does not cease to exist in a decentralized datastore. In fact, the boundaries are heightened. Just as different centralized databases coexist with different strengths and weaknesses, so shall it be in decentralized networks.\\
In this paper, we introduce CrankChain which utilizes the many strengths of Satoshi Nakamoto's original design\cite{conf:nakamoto}. By default, a decentralized network must be partition tolerant. For the purposes of asset registration, tokenization, and exchange, we cannot sacrifice availability. Consistency is important, but eventual consistency is sufficient. Waiting for confirmation blocks will signal that the transaction is synchronized across a majority of nodes. \\
CrankChain is designed under the philosophy that over-engineering opens the door to vulnerabilities. By reducing the software's intrinsic complexity, open-source contributors are less likely to encounter surprises and improve upon the software's reliability and security. For this reason, the software excludes a virtual machine, state machine, and support for smart contracts. Though there are legitimate use cases for smart contracts\cite{whitepaper:buterin}, the use cases here do not require state changes per contract via transaction receipts\cite{yellow:wood}. \\
The network strives to be truly decentralized. It exists as a protocol. The implementation can be used as a reference for future builds. There is neither a gatekeeper to a permissioned network nor a central coordinator to route transactions.
\section{Protocol}
\subsection{Asset Transactions}
The tokenization of an arbitrary asset establishes its existence on the chain as if it were its own genesis block creation. This is done by sending an \textit{ASSET\_CREATION} transaction. From then on, the token can be traded as a first-class member on-chain with \textit{STANDARD} transactions. The native asset, Cranky Coin, is an asset just like any other tokenized asset on the network. Its only distinguishing property as the native asset is the fact that transaction fees and \textit{COINBASE} transactions are dealt in Cranky Coin.
All transactions include an asset type which defaults to the SHA256 sum of the string "Cranky Coin". New assets can be registered by submitting a transaction with an \textit{ASSET\_CREATION} type enumerated value. Thereafter, the registrant owns that asset and may transact that asset, though fees remain in the native asset type - Cranky Coin.
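A minimal sketch of what such a transaction record could look like is given below (Python; the field names, fee handling and helper methods are illustrative assumptions, not the reference implementation).
\begin{verbatim}
import hashlib, json, time
from dataclasses import dataclass, asdict

# default asset type: SHA256 of the string "Cranky Coin"
NATIVE_ASSET = hashlib.sha256(b"Cranky Coin").hexdigest()

@dataclass
class Transaction:              # hypothetical field layout
    source: str
    destination: str
    amount: float
    fee: float                  # always paid in Cranky Coin
    tx_type: str = "STANDARD"   # or COINBASE, ASSET_CREATION,
                                # ASSET_ADDENDUM, ...
    asset: str = NATIVE_ASSET
    prev_hash: str = ""         # sender's previous tx hash
    timestamp: float = 0.0
    signature: str = ""

    def tx_hash(self):
        body = {k: v for k, v in asdict(self).items()
                if k != "signature"}
        blob = json.dumps(body, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

# registering a new asset: an ASSET_CREATION transaction
reg = Transaction("Maddr", "Maddr", 0, 0.1,
                  tx_type="ASSET_CREATION",
                  asset=hashlib.sha256(b"My Asset").hexdigest(),
                  timestamp=time.time())
print(reg.tx_hash())
\end{verbatim}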
\subsection{Metadata}
Arbitrary metadata can be added to the chain via an \textit{ASSET\_ADDENDUM} transaction.
Due to the nature of the blockchain's immutability and infinite retention policy, all metadata addenda are appended and the history remains intact. CrankChain does not enforce a protocol for the metadata, but the author recommends using a protocol compatible with GNU diff\cite{man:diff}.
\subsection{**Exchange}
Asset owners are able to post a buy/sell limit order via \textit{ORDER} transaction types. Another transaction type, \textit{FILL}, allows users to fill the posted limit order. The limit order may also be cancelled by the original poster if the \textit{CANCEL} transaction is mined before a fill transaction.
\footnotetext[7]{future implementation}
\subsection{Peer Discovery}
When a new node joins the network, it queries all known nodes for their lists of peers. The node iterates over the known list of peers and initiates a handshake with each peer. The handshake is initiated by requesting the peer's network configuration properties. If the properties are identical to its own, it proceeds by sending an HTTP POST request with a request body containing its own configuration properties. The recipient compares its own configuration properties with the ones it received. If they are identical and the recipient has not exceeded its maximum peer threshold, it adds the initiator to its peer list and returns an accepted status code. The initiator waits for an accepted status code before adding the new peer to its list. Minimum and maximum numbers of peers are configurable per node.\\
**The \textit{ZeroMQ} PUB/SUB model is applied. Hosts subscribe to peers that possess identical configuration properties
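From the initiator's side, the handshake might look like the following sketch (Python with the requests library; endpoint paths, property names and status codes are assumptions for illustration).
\begin{verbatim}
import requests

MY_CONFIG = {"network": "mainnet", "version": 1}  # example properties

def handshake(peer_host, my_host, timeout=5):
    # 1. ask the peer for its network configuration properties
    theirs = requests.get("http://%s/config" % peer_host,
                          timeout=timeout).json()
    if theirs != MY_CONFIG:
        return False               # incompatible network settings
    # 2. offer our own configuration; the peer replies with an
    #    accepted status code if it adds us to its peer list
    resp = requests.post("http://%s/peers" % peer_host,
                         json={"host": my_host,
                               "config": MY_CONFIG},
                         timeout=timeout)
    return resp.status_code in (200, 202)

# peers = [p for p in known_hosts if handshake(p, my_host)]
\end{verbatim}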
\subsection{Wallet}
Crankchain uses an Elliptic Curve Digital Signature Algorithm variant known as secp256k1, similar to \textit{Bitcoin}\cite{btcwiki:secp256k1}. Wallets can be generated with the included wallet client, which does not require a client to download the chain. Paper wallets can be generated by instructing the wallet client to reveal the private key. The full node and wallet configuration properties store the user's symmetrically encrypted private key. The supported symmetric cryptosystem is AES256\cite{pub:aes}. An AES256 encryption and decryption tool is provided with the node. An \textit{M} or \textit{T} wallet address prefix will be used to distinguish Mainnet from Testnet.
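A minimal key-handling sketch with the python-ecdsa package is shown below; the address derivation and the choice of hash are illustrative assumptions, only the curve and the sign/verify calls follow the text.
\begin{verbatim}
import hashlib
import ecdsa

# generate a secp256k1 key pair, as in Bitcoin
sk = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
vk = sk.get_verifying_key()

# hypothetical address: prefix ('M' mainnet, 'T' testnet)
# plus a truncated hash of the public key
address = "M" + hashlib.sha256(
    vk.to_string()).hexdigest()[:40]

# sign and verify a transaction hash
tx_hash = hashlib.sha256(b"example transaction").digest()
signature = sk.sign(tx_hash)
assert vk.verify(signature, tx_hash)
print(address)
\end{verbatim}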
\subsection{Uncle Blocks}
In the event that different blocks are received on a CrankChain node for the same height, both are stored. The first block will continue to build on the 0 branch while subsequent blocks will append to a separate branch. This alternate block is known as the \textit{uncle block}. \textit{Uncle} is the term used in the \textit{Ethereum} whitepaper to describe such blocks (\textit{Bitcoin} refers to these as orphan blocks)\cite{whitepaper:buterin}\cite{conf:nakamoto}. On a CrankChain node, branch numbers are determined by SQLite's default primary key sequence generator. This is preferred over \textit{autoincrement}, which prevents the reuse of a previously assigned key\cite{doc:sqliteautoinc}. This is also preferred over a code-generated sequence because the latter may be susceptible to race conditions given the multi-process nature of the node.\\
Though branch numbers are stored in the database, it is not a property of the block itself. All blocks (primary and/or uncle) will be stored \textbf{if and only if} there exists a persisted block with a hash equal to that of the new block's previous hash property. If a block of that height exists, the branch number is incremented by 1. Uncle blocks will be omitted (and pruned in the future) if the difference in height with the primary branch is greater than 6.
\subsection{Uncle Transactions}
Transactions are stored in a similar fashion to blocks. Primary keys are composite keys of the transaction hash and branch number, as this is the only guarantee for uniqueness since uncle blocks may reference the same transaction in different blocks. This means that transaction data will be unique per branch, but may be replicated across branches. The node relies on the database's constraints to prevent a potential double-spend.
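A sketch of how such a constraint-backed store could look with Python's built-in sqlite3 module follows (table and column names are illustrative assumptions).
\begin{verbatim}
import sqlite3

conn = sqlite3.connect("chain.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS transactions (
    hash       TEXT    NOT NULL,
    branch     INTEGER NOT NULL,
    block_hash TEXT    NOT NULL,
    prev_hash  TEXT    NOT NULL,
    amount     REAL    NOT NULL,
    PRIMARY KEY (hash, branch)  -- unique per branch only
);
""")

row = ("abc", 0, "blk1", "", 1.0)
conn.execute("INSERT INTO transactions VALUES (?,?,?,?,?)", row)
try:
    # same hash on the same branch: rejected by the constraint
    conn.execute("INSERT INTO transactions VALUES (?,?,?,?,?)",
                 ("abc", 0, "blk2", "", 1.0))
except sqlite3.IntegrityError as err:
    print("rejected:", err)
conn.commit()
\end{verbatim}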
\subsection{Longest Chain}
Locally, the node always considers the 0 branch to be the main branch. In the event that an uncle branch grows taller than the main branch, the branch numbers are swapped for blocks and their transactions. Transaction hashes are not necessarily unique in storage, as duplicates may be stored under a different branch number. When competing valid blocks come in, we accept both, but only one is authoritative (the one on the longest chain).\\
When an alternate branch outpaces the primary branch, the node must restructure its chains. If the node did not store uncle blocks, it would be a costly process to download and validate each block since the split. Chain restructuring is a critical part of eventual consistency and cannot be avoided. In order for CrankChain to restructure chains quickly without a performance hit, the author presents the following restructuring strategy:
\begin{enumerate}[noitemsep]
\item The branch number \textit{B} of the longest chain is identified
\item Beginning with the tallest block, \textit{B} is swapped with the 0 branch block at the same height. If a 0 branch block does not exist at that height, \textit{B} is just replaced with 0.
\item The previous hash of the swapped block is followed to the next tallest block in that branch. The branch number is then set to 0 while the 0 branch block at the same height is set to \textit{B}. This step is repeated until we arrive back at the main 0 branch and no more swaps are required.
\end{enumerate}
\begin{align}
&Let\ c' = primary\ chain\nonumber\\
&Let\ c'' = new\ tallest\ chain\nonumber\\
\nonumber\\
&PROMOTE\mbox{-}CHAIN(c',c'')\nonumber\\
&if\ height[c''] > height[c']\\
&\quad then\ n\leftarrow height[c'']\\
&\quad\quad b\leftarrow branch[c''[n]]\\
&\quad\quad while\ branch[c''[n]]\neq 0\\
&\quad\quad\quad do\ if\ c'[n]\neq NIL\\
&\quad\quad\quad\quad\quad then\ branch[c'[n]]\leftarrow b\\
&\quad\quad\quad\quad branch[c''[n]]\leftarrow 0\\
&\quad\quad\quad\quad n\leftarrow n-1
\end{align}
This results in a restructured chain without exceeding \textit{O(n)} complexity and without needing to validate the blocks and/or the transactions contained within. The diagram illustrates a highly unlikely scenario where the node has received 6 distinct valid blocks of the same height. We witness that branch 3 has outpaced the previously dominant branch 0 and must now be promoted to the authoritative branch. Notice \textit{B} was preserved throughout the swap process. The two chains, which spanned across branches 0, 1, and 2, are now fully contained within 0 and 3. This protocol allows the chain to remain thin and allows for easier pruning.
\includegraphics[width=1.00\linewidth]{branch_restructure3.png}
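Written out as a plain Python sketch over an in-memory dictionary keyed by (height, branch), the same procedure might read as follows; the storage layout is an assumption for illustration, and the real node performs these swaps as database updates.
\begin{verbatim}
def promote_chain(blocks, tip_hash, tip_height, tip_branch):
    # blocks: {(height, branch): {"hash": ..., "previous_hash": ...}}
    # Walk the new tallest chain down from its tip, swapping branch
    # numbers with branch 0 until the main chain is reached.
    b = tip_branch                 # B in the pseudocode above
    height, branch, h = tip_height, tip_branch, tip_hash
    while branch != 0:
        block = blocks.pop((height, branch))
        displaced = blocks.pop((height, 0), None)
        if displaced is not None:
            blocks[(height, b)] = displaced  # old block gets branch B
        blocks[(height, 0)] = block          # promoted block -> branch 0
        # follow the previous hash down to the parent block
        # (an indexed lookup in the actual database)
        h = block["previous_hash"]
        height -= 1
        branch = next(br for (ht, br), blk in blocks.items()
                      if ht == height and blk["hash"] == h)
\end{verbatim}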
\subsection{Dynamic Hash Difficulty}
Hash difficulty adjustment is performed based on a simple moving average. The network is configured so that target time \textit{v} is 10 minutes per block, difficulty adjustment span \textit{s} is 500, timestamp sample size \textit{n} is 2, and minimum hash difficulty \textit{m} is 1. \textit{t} denotes the block's timestamp at height \textit{h}. \textit{d} denotes the present difficulty:
\begin{align*}
&Let\ {f} : (t,h)\mapsto \frac{t_h + t_{h-1} + \dots + t_{h-n}}{n+1} \\\\
&Let\ \Delta{t} = {f(t_h,h)} - {f(t_{h-s},h-s)}\\
\end{align*}
\[
d_h =
\begin{cases}
d_{h-1} & (h-2\leq s) \vee (\Delta{t} = v\cdot s)\\
d_{h-1} + 1 & \Delta{t}<v\cdot s\\
d_{h-1} - 1 & \Delta{t}>v\cdot s\\
\end{cases}
\]
By taking this approach, the block's hash difficulty can be calculated deterministically without the need to persist it in the block header.\\
Single timestamps are not a reliable metric as miners can populate the value arbitrarily. Therefore, we take the arithmetic mean $\bar{t}$ of $t_h$ and its \textit{n} previous \textit{t}'s:
\begin{align*}
\bar{t} = \frac{1}{n+1}\sum_{i=h-n}^{h} t_i
\end{align*}
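A direct transcription of this adjustment rule is sketched below in Python; here \texttt{timestamps} is assumed to be the list of block timestamps indexed by height.
\begin{verbatim}
V_TARGET = 600   # v: 10 minutes per block, in seconds
SPAN     = 500   # s: difficulty adjustment span
SAMPLE   = 2     # n: timestamp sample size
MIN_DIFF = 1     # m: minimum hash difficulty

def mean_ts(timestamps, h, n=SAMPLE):
    # arithmetic mean of t_h and its n previous timestamps
    return sum(timestamps[h - n:h + 1]) / (n + 1)

def difficulty(timestamps, h, prev):
    # deterministic: recomputed from the chain's own timestamps
    if h - 2 <= SPAN:
        return max(prev, MIN_DIFF)
    delta = mean_ts(timestamps, h) - mean_ts(timestamps, h - SPAN)
    if delta < V_TARGET * SPAN:   # blocks too fast -> harder
        return prev + 1
    if delta > V_TARGET * SPAN:   # blocks too slow -> easier
        return max(prev - 1, MIN_DIFF)
    return prev
\end{verbatim}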
\subsection{Network}
CrankChain takes a unique approach in the full node design by implementing a RESTful API instead of remote procedure calls. REST allows the CRUD operations necessary for the network to operate and poses no disadvantages over the latter. In fact, interaction with the full nodes becomes simpler for web-based applications wishing to communicate with a node, since HTTP verbs are already supported by clients.
**Publish/Subscribe models are used to broadcast messages across peers using \textit{ZeroMQ}.
\footnotetext[7]{future implementation}
\subsection{Mining}
Currently, CrankChain operates on a \textit{Proof-of-Work}\cite{conf:nakamoto} system. In order to avoid centralization of mining, memory-hard slow hashes are preferred over pure computational fast hashes. \textit{Scrypt} was chosen as the hash algorithm. Though it has received criticism for not being ASIC-resistant, this can be corrected by using \textit{Scrypt} with different factors. Configurable \textit{Scrypt} factors according to \textit{(Scrypt---Wikipedia)}\cite{wiki:scrypt} include:
\begin{itemize}
\item \textit{Passphrase} - The string of characters to be hashed.
\item \textit{Salt} - A string of characters that modifies the hash to protect against Rainbow table attacks
\item \textit{N} - CPU/memory cost parameter.
\item \textit{p} - Parallelization parameter; a positive integer satisfying \textit{p} $\leq$ (2\textsuperscript{32}-1) * \textit{hLen / MFLen}.
\item \textit{dkLen} - Intended output length in octets of the derived key; a positive integer satisfying dkLen $\leq$ (2\textsuperscript{32}-1) * hLen.
\item \textit{r} - The blocksize parameter, which fine-tunes sequential memory read size and performance. 8 is commonly used.
\item \textit{hLen} - The length in octets of the hash function (32 for SHA256).
\item \textit{MFlen} - The length in octets of the output of the mixing function. Defined as \textit{r} * 128 in RFC7914.
\end{itemize}
CrankChain's Scrypt hash is configured with the following factors: \textit{N}=1024, \textit{r}=1, \textit{p}=1, \textit{dkLen}=32
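Python's standard library exposes these factors directly through hashlib.scrypt, so a block-hashing sketch with the configuration above could look as follows (the header layout and the difficulty test are illustrative assumptions).
\begin{verbatim}
import hashlib

def block_hash(header: bytes, nonce: int) -> str:
    # Scrypt with CrankChain's factors:
    # N=1024, r=1, p=1, dkLen=32
    data = header + nonce.to_bytes(8, "big")
    digest = hashlib.scrypt(data, salt=header,
                            n=1024, r=1, p=1, dklen=32)
    return digest.hex()

def satisfies(hash_hex: str, difficulty: int) -> bool:
    # e.g. require `difficulty` leading zero hex digits
    return hash_hex.startswith("0" * difficulty)

h = block_hash(b"example-header", nonce=42)
print(h, satisfies(h, 1))
\end{verbatim}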
\subsection{**Proof of Collaboration (proposal)}
\footnotetext[7]{future implementation}
\subsubsection{Pain point}
Proof of Work mining is essentially a race to find a nonce that yields the correct hash pattern.
There are no guaranteed rewards, and the likelihood that a miner using standard hardware is able to compete with large mining operations is very low. A clear advantage exists for individuals who a) have capital to invest in such operations and b) have access to free or inexpensive electricity. \\
Mining pools are still orchestrated by a single entity with mercenary computational resources.
This would not be a problem if the probability of being rewarded grew linearly with one's hash rate. Nevertheless, one’s probability grows exponentially as their ability to compute hashes grows. A slower machine's probability of discovering a valid block diminishes as the qualifying nonce reaches higher numbers. The likelihood of a low nonce decreases as the hash difficulty increases. Since the network rewards a miner for discovering blocks quicker than the rest of the network, it opens up susceptibility to 51\% attacks\cite{btcwiki:weaknesses} while discouraging common folks from mining.\\
In essence, the system designed around decentralization can be inadvertently centralized. There is a valid argument that the reward of hijacking a PoW network would not outweigh the cost\cite{btcwiki:weaknesses}. On the other hand, cost may not be a hindrance to particular entities that may feel threatened by the existence of such a network.
\subsubsection{Goals}
\begin{itemize}
\item wider, evenly distributed network
\item linear relationship between block rewards and capital investment
\item linear relationship between block rewards and uptime
\item zero slope relationship between block rewards and hash time
\item reward uptime rather than hash rate
\item no centralization
\item no censorship
\end{itemize}
\subsubsection{Proposal}
The author proposes an alternative system to \textit{Proof-of-Work} known as \textit{Proof-of-Collaboration}. The system adheres to the following protocol:
\paragraph{On-chain node registry}
\begin{enumerate}[noitemsep,nolistsep]
\item New node registers by transacting a fee to a collaborator registration address and \textit{REGISTRATION} transaction type. The transaction would include the host's address in the metadata.
\item Once the block containing the node registry transaction has reached finality, other collaborators acknowledge the new node.
\item If a collaborator is unable to reach another collaborator, it records downtime in its local peer database.
\item Collaborators are acknowledged in ascending order by the locally recorded downtime. The greatest advantage goes to the collaborators with the least recorded downtimes which are guaranteed to partake in the mining of every block
\end{enumerate}
\paragraph{Collaborative mining}
\footnotetext[1]{this step may not be necessary}
By following this protocol, the chain effectively eliminates miners competing for a block. This also means:
\begin{enumerate}[noitemsep]
\item Collaborators take turns initiating the mining process, in order of their registration transaction's block height. When it is a collaborator's turn to initiate the mining of a block, it starts by creating a block. *It may be the case that any collaborator can initiate the mining of a new block.
\item The collaborator calculates the merkle tree based on the included transaction hashes plus the coinbase transaction, and hashes it once (with the starting nonce, 0).
\item If the block pattern does not fit, it forwards the block header to a subset of registered collaborators sorted by ascending downtime.
\item The next set of collaborators adds its own coinbase transaction and repeats the previous two steps.
\item If the block pattern fits the chain, it broadcasts the block to all nodes.
\item If the block pattern does not fit, and all registered collaborators were exhausted, the nonce is incremented by 1 and the process repeated. *It may be the case that simply changing the order of contributing collaborators will provide enough permutations of the merkle tree that the block pattern may be satisfied.
\[{n}_{Pr} = \frac{n!}{(n-r)!}\]
\item Block rewards are divided among contributing collaborators.
\end{enumerate}
\begin{itemize}[noitemsep]
\item The block header must store additional information: transactions index and coinbase addresses.
\item Hash difficulty does not need to increase due to the nature of this system
\item Each block will involve a varying number of nodes, thus varying reward amounts
\item Registry transactions should cost a small fee in coins so that nodes don’t register and go offline without being penalized
\item Network stability is more important than computational power
\item Cost/reward is linearly correlated and encourages more users to contribute to the network
\end{itemize}
Coinbase transaction fees are calculated:
\[Let\ {C} = \{col_1,col_2,\dots,col_n\}\]
\[{fee}=\frac{reward}{|C|}\]
\paragraph{Attack vectors}
Sybil attacks are currently the greatest concern for Proof of Collaboration. A malicious actor may attempt to register a large number of collaborators and hash every permutation of the actors' own node addresses until the block hash satisfies the hash difficulty pattern. There are a few strategies to mitigate this to some degree:
\begin{enumerate}[noitemsep]
\item Restrict IPs based on CIDR block.
\item Disallow incrementing the nonce completely
\item Adjust hash difficulty based on the number of registrants
\item Enforce a certain order based on registration block height
\item Passing a bloom filter?
\item Chain of verifiable signatures?
\[S_{priv_C}(S_{priv_B}(S_{priv_A}(hash))) = {validhash}\]
\[V_{pub_A}(V_{pub_B}(V_{pub_C}(validhash))) = {hash}\]
\end{enumerate}
\section{System Overview}
\subsection{Scaling}
The network largely follows the traditional single dimensional blockchain design as transaction speed is less of a concern than durability and security. The author does not believe \textit{Bitcoin} to have a scaling issue\cite{wiki:btcscalability}, but a performance issue. The network's bandwidth is very limited, but the mempool acts as an effective rate limiter and priority queue based on transaction fees.\\
In order for CrankChain to be resilient to large volumes of transactions, each node must be able to withstand a reasonable amount of inbound requests. We achieve this by separating the resource heavy requests from the lighter requests and routing the heavier requests to a queue. Queue consumers are loosely coupled and can be scaled to a configurable number of processes optimal for that particular node's hardware. Meanwhile, light resource requests can fetch data directly or optionally via cache. Costly request endpoints are also permissioned to known connected peers.\\
The mempool transactions are persisted to physical store and deleted when they are mined. This offers a few advantages over traditional in-memory mempools. The most important advantage is the fact that separate processes can access the same data without relying on an interprocess pipe.\\
Miners are also a separate process which can be enabled/disabled independently from the rest of the node.
\includegraphics[width=1.00\linewidth]{node_design.png}
\subsection{Persistence}
CrankChain uses SQLite for persistence. SQLite possesses several properties that make it ideal as the underlying datastore. It is fast, process-safe, thread-safe, resilient to file corruption, and it does not require a separate standalone instance. In addition, it includes an advanced query language, context manager for transactions, and built-in locking. Python requires multiprocessing over threads to achieve performance gain. LevelDB was considered due to its superiority in performance, but it is not process-safe and susceptible to file corruption.
\subsection{Queueing}
CrankChain relies heavily on the queue to regulate the inflow of requests that require validation and database write operations. A light whitelist validation is all that occurs upfront to provide a synchronous acceptance or rejection response before the message is enqueued.\\
The queue process is a \textit{ZeroMQ} push/pull proxy bound to a local UNIX socket. \textit{ZeroMQ} is a brokerless messaging queue, but the proxy acts as a simple broker\cite{website:zeromqguide}.\\
**The notification process is a \textit{ZeroMQ} publisher that is TCP bound.\\
**The listener process is a \textit{ZeroMQ} subscriber that is also TCP bound.
\footnotetext[7]{future implementation}
\subsection{Processes and Threads}
A major architectural objective is horizontal scalability. Implementation in Python adds challenges in concurrent operations due to the nature of the global interpreter lock. Since Python is not especially known for its benchmarking performance, relying on threads to increase speed is less than ideal. A better approach is to spawn separate processes or deployments as needed. The downside to running multiple processes is the lack of a shared memory space; communication between processes must rely on an interprocess pipe or queue. CrankChain dedicates a process for the API layer, a process for the queue, and a configurable number of processes for the queue processors. The API layer is a thin \textit{Bottle} process which merges public and permissioned endpoints and routes them to the appropriate service or queue. The queue process independently runs the \textit{ZeroMQ} proxy which acts as a messaging broker. Each of the producers and consumers communicate to the broker via a local UNIX socket which makes it interprocess compatible. Consumer processes have the majority of the responsibilities as it validates, persists, and notifies. While the consumer processes can be scaled to an arbitrarily high number, node operators should be cautious as to not overwhelm the database with excessive write operations as SQLite does not allow concurrent writes.\\
Finally, the miner is a separate module that runs independently of the node. It communicates with the queue and the database. It should use the optional C compiled module for a more desirable hash rate.\\
**The notification process runs a \textit{ZeroMQ} publishing server for broadcasting outbound notifications. Consumers of this queue are external.\\
**The listener process runs a \textit{ZeroMQ} subscriber for listening to its external peer subscriptions for inbound notifications.
\footnotetext[7]{future implementation}
\subsection{Caching}
Caching is optional but encouraged. CrankChain does not natively provide caching, but is trivial to integrate any number of available standalone caching services such as Memcached or Redis.
\section{Attack Vectors}
\subsection{Double-spend Attack}
CrankChain nodes will calculate account balances (when queried) rather than seek the last known transaction and return the UTXO (\textit{Bitcoin}). CrankChain has the luxury of doing this because it uses a less primitive data store with indexed public addresses for an \textit{O(1)} random access lookup. UTXO does, however, serve another purpose---preventing double-spend attacks\cite{conf:nakamoto}. During each transaction, the entire balance of the wallet is spent before it is returned upon completion. Concurrent transactions cannot occur. \textit{Ethereum} follows a different strategy to mitigate double-spend attacks---include an incrementing nonce with each transaction\cite{whitepaper:buterin}. A transaction that does not include a nonce with the correct sequence is invalid.\\
CrankChain follows neither strategy and requires transactions to include a hash of the previous transaction, similar to blocks. Effectively, each public address forms a mini-chain of transactions which prove undeniably, that one transaction follows the next. Since transaction hashes are guaranteed to be unique (per branch), duplicate transaction hashes, ie transactions of the same amount, to the same recipient, with the exact same timestamp are \textbf{not} allowed.
\subsection{DDOS Attack}
While no public service is completely resistant to DDOS attacks, certain measures are taken. In order to mitigate DDOS attacks from unknown hosts, POST transaction endpoints (from light clients) and light GET request endpoints will be the only endpoints available to the public. Costlier calls will be restricted to known peers or dropped with a 403. Known peers must go through a prior handshake/verification. In order to mitigate DDOS attacks from known peers, a few measures are taken. First, a maximum number of peers is allowed and configurable per node. Second, costly POST requests will be asynchronously placed on a queue before validation/processing take place on one of the queue consumer processes which is also configurable per node. Optionally, a caching layer can be added behind. The Python implementation of CrankChain uses \textit{Bottle}\cite{website:bottledocs} which ships with adapters to common WSGI servers. It may also be beneficial for each node to operate behind an NGINX reverse proxy with rate limiting enabled.
\paragraph{Spam Attack}
Transaction fees and the previous transaction hash requirement prevent a user from spamming the network with an excessive number of transactions. It is relatively cheap for a node to filter out invalid transactions from entering the mempool. A random access \textit{O(1)} lookup is all that is necessary to drop such transactions. If a transaction isn't finalized, its hash does not qualify as the subsequent transaction's previous hash.\\
Proof-of-Work prevents a miner from spamming the network with an excessive number of blocks. Such an attack would require a hash rate much higher than the rest of the network. If the attacker invested in the resources capable of performing a spam attack, it would be short-lived due to the adjustable hash rate.
\paragraph{Chain Death Spiral}
A \textit{Chain Death Spiral} occurs when a network faces a sudden loss of mining power, causing the targeted block rate to be missed. This weakness was witnessed on \textit{Bitcoin} Cash when its blocktime reached 15 hours, prompting it to hard fork and apply an \textit{Emergency Difficulty Adjustment} to relax the difficulty sooner than scheduled\cite{website:btcandblockchain}.
If an attacker with a dominant hash rate decides to cease mining CrankChain, a \textit{Chain Death Spiral} should not pose a threat due to CrankChain's \textit{dynamic hash difficulty adjustment}. The hash difficulty is recalculated every block in a deterministic fashion using moving averages and may adjust either way.
\subsection{Replay Attack}
Like many other blockchains, replay attacks are prevented by prefixed wallet addresses\cite{btcwiki:addressprefixes}. CrankChain mainnet addresses will be prefixed with \textit{m} while testnet addresses will be prefixed with \textit{t}.
\subsection{Sybil Attack}
Sybil attacks are also mitigated in a similar fashion to other networks. By restricting IPs based on CIDR block, CrankChain makes it difficult to perform this attack. \textit{Bitcoin} also takes this approach by restricting nodes from establishing an outbound connection to more than one IP address per /16 (x.y.0.0)\cite{btcwiki:weaknesses}.
\subsection{Quantum Attack}
If quantum computing resources become available and threaten the current cryptosystems in play, CrankChain can simply upgrade to quantum-resistant cryptosystems and allow users to migrate their wallets.
\section{Conclusion}
In conclusion, the author has presented implementation details surrounding CrankChain. In doing so, it was shown that decentralized asset registration, tokenization, and exchange can be achieved without added complexity, smart contracts, or compromised decentralization. Furthermore, a scalable implementation was built using Python, a language not known for performance. \\
The author has also proposed a new mining mechanism known as \textit{Proof of Collaboration} or \textit{PoC} with hopes that the protocol will mature and eventually be implemented into CrankChain and other future blockchains.\\\\
\textit{CrankChain was first introduced as Cranky Coin on cranklin.com, 2017}\cite{website:cranklincrankycoin}
\clearpage
\bibliography{references}
\bibliographystyle{plain}
\end{document} | {
"alphanum_fraction": 0.8006679327,
"avg_line_length": 131.0815450644,
"ext": "tex",
"hexsha": "0dc6863ccd008403993d636cbdea2cfe3ddeba92",
"lang": "TeX",
"max_forks_count": 63,
"max_forks_repo_forks_event_max_datetime": "2022-01-09T21:49:38.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-07-16T13:35:14.000Z",
"max_forks_repo_head_hexsha": "41c7ac6226842fb867b5eaa554c3604e998507c2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Rajatkalsotra/crankycoin",
"max_forks_repo_path": "docs/crankycoin.tex",
"max_issues_count": 14,
"max_issues_repo_head_hexsha": "41c7ac6226842fb867b5eaa554c3604e998507c2",
"max_issues_repo_issues_event_max_datetime": "2021-06-05T13:44:10.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-09-05T08:38:55.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Rajatkalsotra/crankycoin",
"max_issues_repo_path": "docs/crankycoin.tex",
"max_line_length": 1378,
"max_stars_count": 162,
"max_stars_repo_head_hexsha": "41c7ac6226842fb867b5eaa554c3604e998507c2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Rajatkalsotra/crankycoin",
"max_stars_repo_path": "docs/crankycoin.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-17T14:24:12.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-07-11T11:27:10.000Z",
"num_tokens": 6871,
"size": 30542
} |
\section{More on sequences}
%
% John Minter's LaTeX Document Template
%
\documentclass[12pt]{article}
% \bibliographystyle{plain}
\usepackage[utf8]{inputenc} % set input encoding (not needed with XeLaTeX)
\RequirePackage{filecontents}
% \usepackage[square,numbers]{natbib}
\usepackage[square,numbers]{natbib}
\bibliographystyle{plainnat}
\usepackage{geometry} % to change the page dimensions
\geometry{letterpaper} % or letterpaper (US) or a5paper or....
\geometry{margin=1in} % for example, change the margins to 2 inches all round
\usepackage{graphicx} % support the \includegraphics command and options
%%% PACKAGES
\usepackage{booktabs} % for much better looking tables
\usepackage{array} % for better arrays (eg matrices) in maths
\usepackage{paralist} % very flexible & customisable lists (eg. enumerate/itemize, etc.)
\usepackage{verbatim} % adds environment for commenting out blocks of text & for better verbatim
\usepackage{subfig} % make it possible to include more than one captioned figure/table in a single float
% These packages are all incorporated in the memoir class to one degree or another...
% \usepackage[comma,authoryear]{natbib}
\usepackage{natbib}
%%% HEADERS & FOOTERS
\usepackage{fancyhdr} % This should be set AFTER setting up the page geometry
\pagestyle{fancy} % options: empty , plain , fancy
\renewcommand{\headrulewidth}{0pt} % customise the layout...
% \lhead{J. R. Minter} % Your name
% \rhead{Report Title} % Your report title
%%% SECTION TITLE APPEARANCE
\makeatletter
\renewcommand{\@biblabel}[1]{[#1]\hfill}
\makeatother
\usepackage{sectsty}
\allsectionsfont{\sffamily\mdseries\upshape} % (See the fntguide.pdf for font help)
% (This matches ConTeXt defaults)
%%% ToC (table of contents) APPEARANCE
\usepackage[nottoc,notlof,notlot]{tocbibind} % Put the bibliography in the ToC
\usepackage[titles,subfigure]{tocloft} % Alter the style of the Table of Contents
\renewcommand{\cftsecfont}{\rmfamily\mdseries\upshape}
\renewcommand{\cftsecpagefont}{\rmfamily\mdseries\upshape} % No bold!
\title{The Report Title}
\author{J. R. Minter}
\date{2021-05-03}
\begin{document}
\maketitle
\nocite{*}
\section{First section}
This document is an example of bibliography management using an external Bib\TeX\ file.
Three items are cited: \textit{The \LaTeX\ Companion}
book \cite{latexcompanion}, the Einstein journal paper \cite{einstein}, and
Donald Knuth's website \cite{knuthwebsite}. The \LaTeX\ related items are
\cite{latexcompanion,knuthwebsite}.
\medskip
% this left-justifies the bibliography
\raggedright
\bibliography{rep-w-external-bib}
\end{document}
"alphanum_fraction": 0.758463294,
"avg_line_length": 33.7051282051,
"ext": "tex",
"hexsha": "9a23e0b10f02463a6e491018fda4e1cfeed80083",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5dd3412b808476bf0354cd0a201a0ac3a101dcf9",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jrminter/jrm-latex-examples",
"max_forks_repo_path": "rept-w-external-bib/rept-w-external-bib.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5dd3412b808476bf0354cd0a201a0ac3a101dcf9",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jrminter/jrm-latex-examples",
"max_issues_repo_path": "rept-w-external-bib/rept-w-external-bib.tex",
"max_line_length": 104,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5dd3412b808476bf0354cd0a201a0ac3a101dcf9",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jrminter/jrm-latex-examples",
"max_stars_repo_path": "rept-w-external-bib/rept-w-external-bib.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 753,
"size": 2629
} |
\subsubsection{~Wave model modules} \label{sec:wave_mod}
\vsssub
At the core of the wave model are the wave model initialization module and the
wave model module.
\vspace{\baselineskip} \noindent
Main wave model initialization module \hfill {\file w3initmd.ftn}
\begin{flisti}
\fit{w3init}{The initialization routine {\F w3init}, which
prepares the wave model for computations (internal).}
\fit{w3mpii}{\mpi\ initialization (internal).}
\fit{w3mpio}{\mpi\ initialization for I/O (internal).}
\fit{w3mpip}{\mpi\ initialization for I/O (internal, point output only).}
\end{flisti}
\noindent
Main wave model module \hfill {\file w3wavemd.ftn}
\begin{flisti}
\fit{w3wave}{The actual wave model {\F w3wave}.}
\fit{w3gath}{Data transpose to gather data for spatial propagation in
a single array (internal).}
\fit{w3scat}{Corresponding scatter operation (internal).}
\fit{w3nmin}{Calculate minimum number of sea points per processor
(internal).}
\end{flisti}
\noindent
The main wave model routines and all other subroutines require a data
structure to exist. The data structure is contained in the following modules.
\vspace{\baselineskip} \noindent
Define model grids and parameter settings \hfill {\file w3gdatmd.ftn}
\begin{flisti}
\fit{w3nmod}{Set number of grids to be considered.}
\fit{w3dimx}{Set dimensions for spatial grid and allocate storage.}
\fit{w3dims}{Set dimensions for spectral grid and allocate storage.}
\fit{w3setg}{Set pointers to selected grid.}
\fit{w3dimug}{Set dimensions for arrays specific to the triangle-based grids (grid connectivity ...).}
\fit{w3gntx}{Develop unstructured grid structures.}
\end{flisti}
\noindent
Dynamic wave data describing sea state \hfill {\file w3wdatmd.ftn}
\begin{flisti}
\fit{w3ndat}{Set number of grids to be considered.}
\fit{w3dimw}{Set dimensions and allocate storage.}
\fit{w3setw}{Set pointers to selected grid.}
\end{flisti}
\noindent
Auxiliary storage \hfill {\file w3adatmd.ftn}
\begin{flisti}
\fit{w3naux}{Set number of grids to be considered.}
\fit{w3dima, w3xdma, w3dmnl}{}
\fit{ }{Set dimensions and allocate storage.}
\fit{w3seta, w3xeta}{}
\fit{ }{Set pointers to selected grid.}
\end{flisti}
\pb \noindent
Model output \hfill {\file w3odatmd.ftn}
\begin{flisti}
\fit{w3nout}{Set number of grids to be considered.}
\fit{w3dmo2, w3dmo3, w3dmo5}{}
\fit{ }{Set dimensions and allocate storage.}
\fit{w3seto}{Set pointers to selected grid.}
\end{flisti}
\noindent
Model input \hfill {\file w3idatmd.ftn}
\begin{flisti}
\fit{w3ninp}{Set number of grids to be considered.}
\fit{w3dimi}{Set dimensions and allocate storage.}
\fit{w3seti}{Set pointers to selected grid.}
\end{flisti}
\noindent
The configuration files such as traditional form (.inp) and namelist form (.nml)
are processed differently in the program. The traditional configuration file is
directly read in the main program. The namelist configuration file is read by the
subroutines in the following modules.
\vspace{\baselineskip} \noindent
Grid preprocessor namelist module \hfill {\file w3nmlgridmd.ftn}
\begin{flisti}
\fit{w3nmlgrid}{Read and report all the namelists.}
\fit{read\_spectrum\_nml}{Init and save values to a derived type.}
\fit{read\_run\_nml}{Init and save values to a derived type.}
\fit{read\_timesteps\_nml}{Init and save values to a derived type.}
\fit{read\_grid\_nml}{Init and save values to a derived type.}
\fit{read\_rect\_nml}{Init and save values to a derived type.}
\fit{read\_curv\_nml}{Init and save values to a derived type.}
\fit{read\_unst\_nml}{Init and save values to a derived type.}
\fit{read\_smc\_nml}{Init and save values to a derived type.}
\fit{read\_mask\_nml}{Init and save values to a derived type.}
\fit{read\_obst\_nml}{Init and save values to a derived type.}
\fit{read\_slope\_nml}{Init and save values to a derived type.}
\fit{read\_sed\_nml}{Init and save values to a derived type.}
\fit{read\_inbound\_nml}{Init and save values to a derived type.}
\fit{read\_excluded\_nml}{Init and save values to a derived type.}
\fit{read\_outbound\_nml}{Init and save values to a derived type.}
\fit{report\_spectrum\_nml}{Output default and user-defined values.}
\fit{report\_run\_nml}{Output default and user-defined values.}
\fit{report\_timesteps\_nml}{Output default and user-defined values.}
\fit{report\_grid\_nml}{Output default and user-defined values.}
\fit{report\_rect\_nml}{Output default and user-defined values.}
\fit{report\_curv\_nml}{Output default and user-defined values.}
\fit{report\_smc\_nml}{Output default and user-defined values.}
\fit{report\_depth\_nml}{Output default and user-defined values.}
\fit{report\_mask\_nml}{Output default and user-defined values.}
\fit{report\_obst\_nml}{Output default and user-defined values.}
\fit{report\_slope\_nml}{Output default and user-defined values.}
\fit{report\_sed\_nml}{Output default and user-defined values.}
\fit{report\_inbound\_nml}{Output default and user-defined values.}
\fit{report\_excluded\_nml}{Output default and user-defined values.}
\fit{report\_outbound\_nml}{Output default and user-defined values.}
\end{flisti}
\vspace{\baselineskip} \noindent
NetCDF boundary conditions namelist module \hfill {\file w3nmlbouncmd.ftn}
\begin{flisti}
\fit{w3nmlbounc}{Read and report all the namelists.}
\fit{read\_bound\_nml}{Init and save values to a derived type.}
\fit{report\_bound\_nml}{Output default and user-defined values.}
\end{flisti}
\vspace{\baselineskip} \noindent
NetCDF input field preprocessor namelist module \hfill {\file w3nmlprncmd.ftn}
\begin{flisti}
\fit{w3nmlprnc}{Read and report all the namelists.}
\fit{read\_forcing\_nml}{Init and save values to a derived type.}
\fit{read\_file\_nml}{Init and save values to a derived type.}
\fit{report\_forcing\_nml}{Output default and user-defined values.}
\fit{report\_file\_nml}{Output default and user-defined values.}
\end{flisti}
\vspace{\baselineskip} \noindent
Generic shell namelist module \hfill {\file w3nmlshelmd.ftn}
\begin{flisti}
\fit{w3nmlshel}{Read and report all the namelists.}
\fit{read\_domain\_nml}{Init and save values to a derived type.}
\fit{read\_input\_nml}{Init and save values to a derived type.}
\fit{read\_output\_type\_nml}{Init and save values to a derived type.}
\fit{read\_output\_date\_nml}{Init and save values to a derived type.}
\fit{read\_homogeneous\_nml}{Init and save values to a derived type.}
\fit{report\_domain\_nml}{Output default and user-defined values.}
\fit{report\_input\_nml}{Output default and user-defined values.}
\fit{report\_output\_type\_nml}{Output default and user-defined values.}
\fit{report\_output\_date\_nml}{Output default and user-defined values.}
\fit{report\_homogeneous\_nml}{Output default and user-defined values.}
\end{flisti}
\vspace{\baselineskip} \noindent
Multi-grid shell namelist module \hfill {\file w3nmlmultimd.ftn}
\begin{flisti}
\fit{w3nmlmultidef}{Set the number of model and forcing grids.}
\fit{w3nmlmulticonf}{Read and report all the namelists.}
\fit{read\_domain\_nml}{Init and save values to a derived type.}
\fit{read\_input\_grid\_nml}{Init and save values to a derived type.}
\fit{read\_model\_grid\_nml}{Init and save values to a derived type.}
\fit{read\_output\_type\_nml}{Init and save values to a derived type.}
\fit{read\_output\_date\_nml}{Init and save values to a derived type.}
\fit{read\_homogeneous\_nml}{Init and save values to a derived type.}
\fit{report\_domain\_nml}{Output default and user-defined values.}
\fit{report\_input\_grid\_nml}{Output default and user-defined values.}
\fit{report\_model\_grid\_nml}{Output default and user-defined values.}
\fit{report\_output\_type\_nml}{Output default and user-defined values.}
\fit{report\_output\_date\_nml}{Output default and user-defined values.}
\fit{report\_homogeneous\_nml}{Output default and user-defined values.}
\end{flisti}
\vspace{\baselineskip} \noindent
Gridded output NetCDF post-processor namelist module \hfill {\file w3nmlounfmd.ftn}
\begin{flisti}
\fit{w3nmlounf}{Read and report all the namelists.}
\fit{read\_field\_nml}{Init and save values to a derived type.}
\fit{read\_file\_nml}{Init and save values to a derived type.}
\fit{read\_smc\_nml}{Init and save values to a derived type.}
\fit{report\_field\_nml}{Output default and user-defined values.}
\fit{report\_file\_nml}{Output default and user-defined values.}
\fit{report\_smc\_nml}{Output default and user-defined values.}
\end{flisti}
\vspace{\baselineskip} \noindent
Point output NetCDF post-processor namelist module \hfill {\file w3nmlounpmd.ftn}
\begin{flisti}
\fit{w3nmlounp}{Read and report all the namelists.}
\fit{read\_point\_nml}{Init and save values to a derived type.}
\fit{read\_file\_nml}{Init and save values to a derived type.}
\fit{read\_spectra\_nml}{Init and save values to a derived type.}
\fit{read\_param\_nml}{Init and save values to a derived type.}
\fit{read\_source\_nml}{Init and save values to a derived type.}
\fit{report\_point\_nml}{Output default and user-defined values.}
\fit{report\_file\_nml}{Output default and user-defined values.}
\fit{report\_spectra\_nml}{Output default and user-defined values.}
\fit{report\_param\_nml}{Output default and user-defined values.}
\fit{report\_source\_nml}{Output default and user-defined values.}
\end{flisti}
\vspace{\baselineskip} \noindent
Track output NetCDF post-processor namelist module \hfill {\file w3nmltrncmd.ftn}
\begin{flisti}
\fit{w3nmltrnc}{Read and report all the namelists.}
\fit{read\_track\_nml}{Init and save values to a derived type.}
\fit{read\_file\_nml}{Init and save values to a derived type.}
\fit{report\_track\_nml}{Output default and user-defined values.}
\fit{report\_file\_nml}{Output default and user-defined values.}
\end{flisti}
\noindent
The input fields such as winds and currents are transferred to the model
through the parameter list of {\F w3wave}. The information is processed within
{\F w3wave} by the routines in the following module.
\vspace{\baselineskip} \noindent
Input update module \hfill {\file w3updtmd.ftn}
\begin{flisti}
\fit{w3ucur}{Interpolation in time of current fields.}
\fit{w3uwnd}{Interpolation in time of wind fields.}
\fit{w3uini}{Generate initial conditions from the initial
wind field.}
\fit{w3ubpt}{Updating of boundary conditions in nested runs.}
\fit{w3uice}{Updating of the ice coverage.}
\fit{w3ulev}{Updating of water levels.}
\fit{w3utrn}{Updating grid box transparencies.}
\fit{w3ddxy}{Calculation of spatial derivatives of the water depth.}
\fit{w3dcxy}{Calculation of spatial derivatives of the currents.}
\end{flisti}
\noindent
There are seven types of \ws\ data files (other than the preprocessed input
fields, which are part of the program shell rather than the actual wave
model). The corresponding routines are gathered in the following modules.
\vspace{\baselineskip} \noindent
I/O module ({\file mod\_def.ww3}) \hfill {\file w3iogrmd.ftn}
\begin{flisti}
\fit{w3iogr}{Reading and writing of {\file mod\_def.ww3}.}
\end{flisti}
\noindent
I/O module ({\file out\_grd.ww3}) \hfill {\file w3iogomd.ftn}
\begin{flisti}
\fit{w3outg}{Calculation of gridded output parameters.}
\fit{w3iogo}{Reading and writing of {\file out\_grd.ww3}.}
\end{flisti}
\noindent
I/O module ({\file out\_pnt.ww3}) \hfill {\file w3iopomd.ftn}
\begin{flisti}
\fit{w3iopp}{Processing of requests for point output.}
\fit{w3iope}{Calculating point output data.}
\fit{w3iopo}{Reading and writing of {\file out\_pnt.ww3}.}
\end{flisti}
\noindent
I/O module ({\file track\_o.ww3}) \hfill {\file w3iotrmd.ftn}
\begin{flisti}
\fit{w3iotr}{Generate track output in {\file track\_o.ww3}.}
\end{flisti}
\noindent
I/O module ({\file restart.ww3}) \hfill {\file w3iorsmd.ftn}
\begin{flisti}
\fit{w3iors}{Reading and writing of {\file restart{\sl{n}}.ww3}.}
\end{flisti}
\noindent
I/O module ({\file nest.ww3}) \hfill {\file w3iobcmd.ftn}
\begin{flisti}
\fit{w3iobc}{Reading and writing of {\file nest{\sl{n}}.ww3}.}
\end{flisti}
\noindent
I/O module ({\file partition.ww3}) \hfill {\file w3iofsmd.ftn}
\begin{flisti}
\fit{w3iofs}{Writing of {\file partition.ww3}.}
\end{flisti}
\noindent
There are presently several propagation schemes and GSE alleviation techniques
available for rectangular and curvilinear grids, as well as a 'slot' for a
user supplied propagation routine, and there are four schemes for
triangle-based grids. The propagation schemes are packaged in the following
modules.
\vspace{\baselineskip} \noindent
Propagation module (first order, no GSE alleviation) \hfill {\file w3pro1md.ftn}
\begin{flisti}
\fit{w3map1}{Generation of auxiliary maps.}
\fit{w3xyp1}{Propagation in physical space.}
\fit{w3ktp1}{Propagation in spectral space.}
\end{flisti}
\noindent
Propagation module (higher order scheme with GSE diffusion) \hfill {\file
w3pro2md.ftn}
\begin{flisti}
\fit{w3map2}{Generation of auxiliary maps.}
\fit{w3xyp2}{Propagation in physical space.}
\fit{w3ktp2}{Propagation in spectral space.}
\end{flisti}
\noindent
Propagation module (higher order scheme with GSE averaging) \hfill {\file
w3pro3md.ftn}
\begin{flisti}
\fit{w3map3}{Generation of auxiliary maps.}
\fit{w3mapt}{Generation of transparency maps.}
\fit{w3xyp3}{Propagation in physical space.}
\fit{w3ktp3}{Propagation in spectral space.}
\end{flisti}
\noindent
Propagation module (slot for user supplied routines) \hfill {\file
w3proxmd.ftn}
\begin{flisti}
\fit{w3xypx}{Propagation in physical space.}
\fit{w3ktpx}{Propagation in spectral space.}
\end{flisti}
\noindent
Propagation module (generic UQ) \hfill {\file w3uqckmd.ftn}
\begin{flisti}
\fit{w3qck{\it{n}}}{Routines performing \uq\ scheme in arbitrary
                  spaces (1: regular grid; 2: irregular grid;
                  3: regular grid with obstructions).}
\end{flisti}
\noindent
Propagation module (generic UNO) \hfill {\file w3uqckmd.ftn}
\begin{flisti}
\fit{w3uno, w3unor, w3unos}{}
\fit{}{Like UQ schemes above.}
\end{flisti}
\noindent
SMC grid routines \hfill {\file w3psmcmd.ftn}
\begin{flisti}
\fit{W3PSMC }{Spatial propagation on SMC grid.}
\fit{W3KSMC }{Spectral modification by GCT and refraction.}
\fit{SMCxUNO2/3 }{Irregular grid mid-flux on U-faces by UNO2/3.}
\fit{SMCyUNO2/3 }{Irregular grid mid-flux on V-faces by UNO2/3.}
\fit{SMCxUNO2r/3r}{Regular grid mid-flux on U-faces by UNO2/3.}
\fit{SMCyUNO2r/3r}{Regular grid mid-flux on V-faces by UNO2/3.}
\fit{SMCkUNO2 }{Shift in k-space due to refraction by UNO2.}
\fit{SMCGradn }{Evaluate field gradient at cell centre.}
\fit{SMCAverg }{1-2-1 weighted average for centre field.}
\fit{SMCGtCrfr}{Refraction and GCT rotation in theta.}
\fit{SMCDHXY }{Evaluate depth gradient and refraction limiter.}
\fit{SMCDCXY }{Evaluate current velocity gradient.}
\fit{W3GATHSMC W3SCATSMC}{}
\fit{ }{Gather and scatter spectral components.}
\end{flisti}
\noindent
Triangle-based propagation schemes \hfill {\file w3profsmd.ftn}
\begin{flisti}
\fit{w3xypug}{Interface to the unstructured propagation schemes.}
\fit{w3cflug}{Computes the maximum CFL number for spatial propagation.}
\fit{w3xypfsn2}{N-scheme.}
\fit{w3xypfspsi2}{PSI-scheme.}
\fit{w3xypfsnimp}{Implicit version of the N-scheme.}
\fit{w3xypfsfct2}{FCT-scheme.}
\fit{bcgstab}{Part of the iterative SPARSKIT solver, used for the implicit scheme.}
\end{flisti}
\vspace{\baselineskip} \noindent
The source term calculation and integration is contained in several
modules. The module {\file w3srcemd.ftn} manages the general calculation and
integration. Additional modules contain the actual source term options.
\vspace{\baselineskip} \noindent
Source term integration module \hfill {\file w3srcemd.ftn}
\begin{flisti}
\fit{w3srce}{Integration of source terms.}
\end{flisti}
\noindent
Flux (stress) module (Wu, 1980) \hfill {\file w3flx1md.ftn}
\begin{flisti}
\fit{w3flx1}{Calculation of stresses.}
\end{flisti}
\noindent
Flux (stress) module (Tolman and Chalikov) \hfill {\file w3flx2md.ftn}
\begin{flisti}
\fit{w3flx2}{Calculation of stresses.}
\end{flisti}
\noindent
Flux (stress) module (Tolman and Chalikov, capped) \hfill {\file w3flx3md.ftn}
\begin{flisti}
\fit{w3flx3}{Calculation of stresses.}
\end{flisti}
\noindent
Flux (stress) module (slot for user supplied routines) \hfill {\file
w3flxxmd.ftn}
\begin{flisti}
\fit{w3flxx}{Calculation of stresses.}
\fit{inflxx}{Initialization routine.}
\end{flisti}
\noindent
Linear input (Cavaleri and Malanotte Rizzoli) \hfill {\file w3sln1md.ftn}
\begin{flisti}
\fit{w3sln1}{Calculation $S_{lin}$.}
\end{flisti}
\pb \noindent
Linear input (slot for user supplied routines) \hfill {\file w3slnxmd.ftn}
\begin{flisti}
\fit{w3slnx}{Calculation $S_{lin}$.}
\fit{inslnx}{Corresponding initialization routine.}
\end{flisti}
\noindent
Input and dissipation module (dummy version) \hfill {\file w3src0md.ftn}
\begin{flisti}
\fit{w3spr0}{Calculation of mean wave parameters (single grid point).}
\end{flisti}
\noindent
Input and dissipation module (\wam-3) \hfill {\file w3src1md.ftn}
\begin{flisti}
\fit{w3spr1}{Calculation of mean wave parameters (single grid point).}
\fit{w3sin1}{Calculation of $S_{in}$.}
\fit{w3sds1}{Calculation of $S_{ds}$.}
\end{flisti}
\noindent
Input and dissipation module Tolman and Chalikov 1996 \hfill {\file w3src2md.ftn}
\begin{flisti}
\fit{w3spr2}{Calculation of mean wave parameters (single grid point).}
\fit{w3sin2}{Calculation of $S_{in}$.}
\fit{w3sds2}{Calculation of $S_{ds}$.}
\fit{inptab}{Generation of the interpolation table for $\beta$.}
\fit{w3beta}{Function to calculate $\beta$ (internal).}
\end{flisti}
\noindent
Input and dissipation module \wam-4 and ECWAM. \hfill {\file w3src3md.ftn}
\begin{flisti}
\fit{w3spr3}{Calculation of mean wave parameters (single grid point).}
\fit{w3sin3}{Calculation of $S_{in}$.}
\fit{w3sds3}{Calculation of $S_{ds}$.}
\fit{tabu\_stress}{Tabulation of wind stress as a function of $U_{10}$ and $\tau_w$.}
\fit{tabu\_tauhf}{Tabulation of the short waves-supported stress.}
\fit{calc\_ustar}{Computes friction velocity using stress table.}
\end{flisti}
\noindent
Input and dissipation module Ardhuin et al. 2010 \hfill {\file w3src4md.ftn}
\begin{flisti}
\fit{w3spr4}{Calculation of mean wave parameters (single grid point).}
\fit{w3sin4}{Calculation of $S_{in}$.}
\fit{w3sds4}{Calculation of $S_{ds}$.}
\fit{tabu\_stress}{Tabulation of wind stress as a function of $U_{10}$ and $\tau_w$.}
\fit{tabu\_tauhf}{Tabulation of the short waves-supported stress.}
\fit{tabu\_tauhf2}{Tabulation of the short waves-supported stress with sheltering.}
\fit{tabu\_swellft}{Tabulation of oscillatory friction factor for negative part of $S_{in}$.}
\fit{calc\_ustar}{Computes friction velocity using stress table.}
\end{flisti}
\noindent
Input and dissipation module BYDRZ \hfill {\file w3src6md.ftn}
\begin{flisti}
\fit{w3spr6 }{Integral parameter calculation following {\F st1}.}
\fit{w3sin6 }{Observation-based wind input.}
\fit{w3sds6 }{Observation-based dissipation.}
\fit{irange }{Generate a sequence of integer values.}
\fit{lfactor }{Calculate reduction factor for Sin.}
\fit{tauwinds}{Normal stress calculation for Sin.}
\fit{polyfit2}{Quadratic fit using least-squares.}
\end{flisti}
\noindent
Input and dissipation module (slot for user supplied routines) \hfill {\file
w3srcxmd.ftn}
\begin{flisti}
\fit{w3sinx}{Calculation of $S_{in}$.}
\fit{w3sdsx}{Calculation of $S_{ds}$.}
\end{flisti}
\noindent
Swell dissipation module \hfill {\file w3swldmd.ftn}
\begin{flist}
\fit{w3swl4}{Ardhuin et al (2010+) swell dissipation.}
\fit{w3swl6}{Babanin (2011) swell dissipation.}
\fit{irange}{Generate a sequence of integer values.}
\end{flist}
\noindent
Nonlinear interaction module (\dia) \hfill {\file w3snl1md.ftn}
\begin{flisti}
\fit{w3snl1}{Calculation of $S_{nl}$.}
\fit{insnl1}{Initialization for $S_{nl}$.}
\end{flisti}
\noindent
Nonlinear interaction module (\xnl) \hfill {\file w3snl2md.ftn}
\begin{flisti}
\fit{w3snl2}{Interface routine for $S_{nl}$.}
\fit{insnl2}{Initialization for $S_{nl}$.}
\end{flisti}
These routines provide the interface to the \xnl\ routines. The \xnl\ routines
are provided in the files {\file mod\_constants.f90}, {\file mod\_fileio.f90},
{\file mod\_xnl4v4.f90}, and {\file serv\_xnl4v4.f90}. For details on these
files, see \cite{rep:vVl02b}.
\vspace{\baselineskip}
\noindent
Nonlinear interaction module (GMD) \hfill {\file w3snl3md.ftn}
\begin{flisti}
\fit{w3snl3}{Calculation of $S_{nl}$.}
\fit{expand}{Expand spectral space.}
\fit{expan2}{Map form expanded to original spectral space.}
\fit{insnl3}{Initialization for $S_{nl}$.}
\end{flisti}
\vspace{\baselineskip}
\noindent
Nonlinear interaction module (slot for user supplied routines) \hfill {\file
w3snlxmd.ftn}
\begin{flisti}
\fit{w3snlx}{Calculation of $S_{nl}$.}
\fit{insnlx}{Initialization for $S_{nl}$.}
\end{flisti}
\vspace{\baselineskip}
\noindent
Nonlinear high-frequency filter \hfill {\file w3snlsmd.ftn}
\begin{flisti}
\fit{w3snls}{Calculation of filter.}
\fit{expand}{Expand spectral space.}
\fit{insnls}{Initialization for filter.}
\end{flisti}
\noindent
Bottom friction module (\js) \hfill {\file w3sbt1md.ftn}
\begin{flisti}
\fit{w3bt1}{Calculation of $S_{bot}$.}
\end{flisti}
\noindent
Bottom friction module (\showex) \hfill {\file w3sbt4md.ftn}
\begin{flisti}
\fit{insbt4}{Initialization of $S_{bot}$.}
\fit{tabu\_erf}{Table of error function.}
\fit{w3sbt4}{Calculation of $S_{bot}$, and energy and momentum fluxes to the
bottom boundary layer.}
\end{flisti}
\noindent
Fluid mud dissipation \citep{art:DL78} \hfill {\file w3sbt8md.ftn}
\begin{flisti}
\fit{w3sbt8}{Source term.}
\end{flisti}
\pb \noindent
Fluid mud dissipation \citep{art:Ng00} \hfill {\file w3sbt9md.ftn}
\begin{flisti}
\fit{w3sbt9}{Source term.}
\end{flisti}
\noindent
Bottom friction module (slot for user supplied routines) \hfill {\file
w3sbtxmd.ftn}
\begin{flisti}
\fit{w3sbtx}{Calculation of $S_{bot}$.}
\fit{insbtx}{Initialization of $S_{bot}$.}
\end{flisti}
\noindent
Depth induced breaking module (Battjes-Janssen) \hfill {\file w3sdb1md.ftn}
\begin{flisti}
\fit{w3sdb1}{Calculation of $S_{db}$.}
\end{flisti}
\noindent
Depth induced breaking module (slot for user supplied routines) \hfill {\file
w3sdbxmd.ftn}
\begin{flisti}
\fit{w3sdbx}{Calculation of $S_{db}$.}
\fit{insdbx}{Initialization of $S_{db}$.}
\end{flisti}
\noindent
Triad interactions module (LTA) \hfill {\file w3str1md.ftn}
\begin{flisti}
\fit{w3str1}{Calculation of $S_{tr}$.}
\end{flisti}
\noindent
Triad interactions module (slot for user supplied routines) \hfill {\file
w3strxmd.ftn}
\begin{flisti}
\fit{w3strx}{Calculation of $S_{tr}$.}
\fit{instrx}{Initialization of $S_{tr}$.}
\end{flisti}
\noindent
Bottom scattering module \hfill {\file w3sbs1md.ftn}
\begin{flisti}
\fit{w3sbs1}{Calculation of $S_{bs}$ and associated momentum flux to the
bottom.}
\fit{insbs1}{Initialization of $S_{bs}$.}
\end{flisti}
\noindent
Bottom scattering module (slot for user supplied routines) \hfill {\file
w3sbsxmd.ftn}
\begin{flisti}
\fit{w3sbsx}{Calculation of $S_{bs}$.}
\fit{insbsx}{Initialization of $S_{bs}$.}
\end{flisti}
\pb \noindent
Wave-ice interactions (simple) \hfill {\file w3sic1md.ftn}
\begin{flisti}
\fit{w3sic1}{Calculation of $S_{id}$.}
\end{flisti}
\noindent
Wave-ice interactions (Liu et al.) \hfill {\file w3sic2md.ftn}
\begin{flisti}
\fit{w3sic2}{Calculation of $S_{id}$.}
\fit{ }{Interpolation tables.}
\end{flisti}
\noindent
Wave-ice interactions \cite{art:WS10} \hfill {\file w3sic3md.ftn}
\begin{flisti}
\fit{w3sic3}{Calculation of $S_{id}$.}
\fit{bsdet }{Calculate the determinant for the dispersion relation.}
\fit{wn\_complex}{Calculate complex wavenumber in ice.}
\fit{cmplx\_root\_muller}{Find root for complex numbers.}
\fit{fun\_zhao}{Wrapper for functions below.}
\fit{func0\_zhao, finc1\_zhao}{}
\end{flisti}
\noindent
Waves scattering in ice and ice break-up \hfill {\file w3sis2md.ftn}
\begin{flisti}
\fit{w3sis2}{Calculation of $S_{is}$.}
\end{flisti}
\noindent
Shoreline reflection \hfill {\file w3ref1md.ftn}
\begin{flisti}
\fit{w3ref1}{Calculation of $S_{ref}$.}
\end{flisti}
\noindent
Module for unclassified source term (slot for user supplied routines) \hfill
{\file w3sxxxmd.ftn}
\begin{flisti}
\fit{w3sxxx}{Calculation of $S_{xx}$.}
\fit{insxxx}{Initialization of $S_{xx}$.}
\end{flisti}
\noindent
To complete the basic wave model, several additional modules are needed. For
the actual contents of the service modules see the documentation in the source
code files.
\begin{flist}
\fit{constants.ftn}{Physical and mathematical constants and Kelvin functions.}
%\begin{flisti}
%\fit{kzeone}{Pre-calculation for Kelvin functions}
%\fit{kerkei}{Calculation of Kelvin functions}
%\end{flisti}
\fit{w3arrymd.ftn}{Array manipulation routines including
'print plot' routines.}
\fit{w3bullmd.ftn}{Perform bulletin style output for output points.}
\fit{w3cspcmd.ftn}{Conversion of spectral discretization.}
\fit{w3dispmd.ftn}{Routines to solve the Laplace dispersion relation (linear waves, flat bottom, no ice),
including interpolation tables. Includes also ice corrections in liu\_foreward\_dispersion and liu\_inverse\_dispersion.}
\fit{w3gsrumd.ftn}{Regridding utilities.}
\fit{w3partmd.ftn}{Perform spectral partitioning for a single spectrum.}
\fit{w3servmd.ftn}{General service routines.}
\fit{w3timemd.ftn}{Time management routines.}
\fit{w3triamd.ftn}{Basic routines for triangle-based grids: reading,
interpolation, definition of miscellaneous arrays, determination of boundary
points.}
\end{flist}
\noindent
This completes the description of the basic wave model routines. The relation
between the initialization routine and other routines is illustrated in
Fig.~\ref{fig:w3init}. % (page~\pageref{fig:w3init}).
A similar relational
diagram for the wave model routine is presented in Fig.~\ref{fig:w3wave}.
% (page~\pageref{fig:w3wave}).
% fig:w3init
% fig:w3wave
\input{sys/fig_w3init}
\input{sys/fig_w3wave}
\section{Introduction}
Ristogo is an application that can be used by customers and restaurateurs.
Customers can book a table and later modify their reservations.
Restaurateurs can see the list of daily reservations to organize the work.
The application can be launched with a Graphical User Interface (default) or
with a Command Line Interface. This manual mainly describes the operations
that can be performed using the GUI.
\chapter{Stellar Feedback}
\label{ch:feedback}
\marginnote{
\textbf{Suggested background reading:}
\begin{itemize}
\item \href{http://adsabs.harvard.edu/abs/2014prpl.conf..243K}{Krumholz, M.~R., et al. 2014, in ``Protostars and Planets VI", ed.~H.~Beuther et al., pp.~243-266} \nocite{krumholz14e}
\end{itemize}
\textbf{Suggested literature:}
\begin{itemize}
\item \href{http://adsabs.harvard.edu/abs/2010ApJ...709..191M}{Murray, N., Quataert, E., \& Thompson, T.~A. 2010, ApJ, 709, 191} \nocite{murray10a}
\item \href{http://adsabs.harvard.edu/abs/2014MNRAS.442..694D}{Dale, J.~E., Ngoumou, J., Ercolano, B., \& Bonnell, I.~A. 2014, MNRAS, 442, 694} \nocite{dale14a}
\end{itemize}
}
The final piece of physics we will cover before moving on to the star formation process itself is the interaction of stellar radiation, winds, and other forms of feedback with the interstellar medium. Our goal in this chapter is to develop a general formalism for describing the various forms of feedback that stars can exert on their environments, and to make an inventory of the most important processes.
\section{General Formalism}
\subsection{IMF-Averaged Yields}
In most cases when considering feedback, we will be averaging over many, many stars. Consequently, it makes sense to focus not on individual stars, but on the collective properties of stellar populations. For this reason, a very useful first step is to consider budgets of mass, momentum, and energy.
We have already encountered a formalism of this sort in our discussion of galactic star formation rate indicators in Chapter \ref{ch:obsstars}, and the idea is similar here. To begin, let us fix the IMF
\begin{equation}
\xi(m) \equiv \frac{dn}{d\ln m},
\end{equation}
with the normalization chosen so that $\int \xi(m) \, dm = 1$. Note that $\xi(m)$ is defined per unit logarithm mass rather than per unit mass, so that $\xi(m)\,d\ln m$ is the number of stars in a mass range from $\ln m$ to $\ln m + d\ln m$. However, this function also has a second interpretation, since $dn/d\ln m = m (dn/dm)$: the quantity $\xi(m)\,dm$ is the total stellar mass found in stars with masses between $m$ and $m+dm$. Consequently, the mean stellar mass is
\begin{equation}
\overline{m} = \frac{\int_{-\infty}^\infty m \xi(m) \, d\ln m}{\int_{-\infty}^\infty \xi(m) \, d\ln m} = \frac{1}{\int_{-\infty}^\infty \xi(m) \, d\ln m},
\end{equation}
where the second step follows from our choice of normalization. The numerator here represents the total mass of the stars, and the denominator is the number of stars. Note that $\xi(m)$ is presumably zero outside some finite interval in mass -- we are writing the limits of integration as $-\infty$ to $\infty$ only for convenience.
We will further assume that, from stellar evolution, we know the rate $q$ at which stars produce some quantity $Q$ as a function of their starting mass and age, where $\dot{Q} = q$. For example if the quantity $Q$ we are concerned with is total radiant energy $E$, then $q$ is the bolometric luminosity $L(m,t)$ of a star of mass $m$ and age $t$. Now consider a population of stars that forms in a single burst at time 0. The instantaneous production rate for these stars is
\begin{equation}
q(t) = M \int_{-\infty}^{\infty} d\ln m \, \xi(m) q(m,t).
\end{equation}
We use this equation to define the IMF-averaged production rate,
\begin{equation}
\left\langle \frac{q}{M}\right\rangle = \int_{-\infty}^{\infty} d\ln m \, \xi(m) q(m,t).
\end{equation}
Note that this rate is a function of the age of the stellar population $t$. We can also define a lifetime-averaged yield. Integrating over all time, the total amount of the quantity produced is
\begin{equation}
Q = M \int_{-\infty}^{\infty} d\ln m \, \xi(m) \int_0^\infty dt\, q(m,t).
\end{equation}
We therefore define the IMF-averaged yield
\begin{equation}
\left\langle \frac{Q}{M} \right\rangle = \int_{-\infty}^{\infty} d\ln m \, \xi(m) \int_0^\infty dt\, q(m,t).
\end{equation}
The meaning of these quantities is that $\langle q/M\rangle$ is the instantaneous rate at which the stars are producing $Q$ per unit stellar mass, and $\langle Q/M\rangle$ is the total amount produced per unit mass of stars formed over the stars' entire lifetimes.
In practice we cannot really integrate to infinity for most quantities, since the lifetimes of some stars may be very, very long compared to what we are interested in. For example the luminous output of a stellar population will have a large contribution for $\sim 5$ Myr coming from massive stars, which is mostly what is of interest. However, if we integrate for $1000$ Gyr, we will find that the luminous output is dominated by the vast numbers of $\sim 0.2$ $\msun$ stars near the peak of the IMF that are fully convective and thus are able to burn all of their hydrogen to He. In reality, though, this is longer than the age of the Universe. In practice, therefore, we must define our lifetime averages as cutting off after some finite time.
It can also be useful to define a different IMF average. The quantities we have discussed thus far are yields per unit mass that goes into stars. Sometimes we are instead interested in the yield per unit mass that stays locked in stellar remnants for a long time, rather than the mass that goes into stars for $\sim 3-4$ Myr and then comes back out in supernovae. Let us define the mass of the remnant that a star of mass $m$ leaves as $w(m)$. If the star survives for a long time, $w(m) = m$. In this case, the mass that is ejected back into the ISM is
\begin{equation}
M_{\rm return} = M \int_{-\infty}^{\infty} d\ln m \, \xi(m) [m - w(m)] \equiv R M,
\end{equation}
where we define $R$ as the return fraction. The mass fraction that stays locked in remnants is $1-R$.
Of course ``long time'' here is a vague term. By convention (defined by \citealt{tinsley80a}), we choose to take $w(m) = m$ for $m<1$ $\msun$. We take $w(m) = 0.7$ $\msun$ for $m=1-8$ $\msun$ and $w(m) = 1.4$ $\msun$ for $m>8$ $\msun$, i.e., we assume that stars from $1-8$ $\msun$ leave behind $0.7$ $\msun$ white dwarfs, and stars larger than that mass form $1.4$ $\msun$ neutron stars. If one puts this in for a \citet{chabrier05a} IMF, the result is $R=0.46$, meaning that yields per unit mass locked in remnants are larger than yields per unit mass formed by a factor of $1/0.54 \approx 1.9$.
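As a concrete illustration of these IMF averages, the short Python sketch below evaluates $\overline{m}$ and the return fraction $R$ by direct numerical integration. The lognormal-plus-power-law parameters and the $0.08-120$ $\msun$ mass limits are assumptions chosen to resemble a \citet{chabrier05a} IMF, so the sketch should land near, though not necessarily exactly at, $R = 0.46$.
\begin{verbatim}
# Sketch: IMF-averaged mean mass and return fraction R.
# The Chabrier-like IMF parameters and mass limits are illustrative
# assumptions, not definitive values.
import numpy as np

def xi(m):
    """dn/dln m (un-normalized): lognormal below 1 Msun, power law above."""
    lognormal = np.exp(-(np.log10(m) - np.log10(0.2))**2 / (2 * 0.55**2))
    match = np.exp(-(0.0 - np.log10(0.2))**2 / (2 * 0.55**2))
    return np.where(m < 1.0, lognormal, match * m**(-1.35))

def w_remnant(m):
    """Remnant mass: m below 1 Msun, 0.7 (WD) for 1-8 Msun, 1.4 (NS) above."""
    return np.where(m < 1.0, m, np.where(m < 8.0, 0.7, 1.4))

lnm = np.linspace(np.log(0.08), np.log(120.0), 20000)
m, dlnm = np.exp(lnm), lnm[1] - lnm[0]

number = np.sum(xi(m)) * dlnm                       # total number of stars
mass = np.sum(m * xi(m)) * dlnm                     # total mass formed
returned = np.sum((m - w_remnant(m)) * xi(m)) * dlnm

print("mean mass =", mass / number)                 # of order 0.5 Msun
print("R         =", returned / mass)               # should be near 0.46
\end{verbatim}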
Given this formalism, it is straightforward to use a set of stellar evolutionary tracks plus an IMF to compute $\langle q/M\rangle$ or $\langle Q/M\rangle$ for any quantity of interest. Indeed, this is effectively what starburst99 \citep{leitherer99a} and programs like it do. The quantities of greatest concern for massive star feedback are the bolometric output, ionizing photon output, wind momentum and energy output, and supernova output.
\subsection{Energy- versus Momentum-Driven Feedback}
Before discussing individual feedback mechanisms in detail, it is also helpful to lay out two general categories that can be used to understand them. Let us consider a population of stars surrounded by initially-uniform interstellar gas. Those stars eject both photons and baryons (in the form of stellar winds and supernova ejecta) into the surrounding gas, and these photons and baryons carry both momentum and energy. We want to characterize how the ISM will respond.
One important consideration is that, as we have already shown, it is very hard to raise the temperature of molecular gas (or even dense atomic gas) because it is able to radiate so efficiently. A factor of $\sim 10$ increase in the radiative heating rate might yield only a tens of percent increase in temperature. This is true as long as the gas is cold and dense, but at sufficiently high temperatures or if the gas is continuously illuminated then the cooling rate begins to drop off, and it is possible for gas to remain hot.
A critical distinction is therefore between mechanisms that are able to keep the gas hot for a time that is long enough to be significant (generally of order the crossing time of the cloud or longer), and those where the cooling time is much shorter. In the latter case, the energy delivered by the photons and baryons will not matter, only the momentum delivered will. The momentum cannot be radiated away. We refer to feedback mechanisms where the energy is lost rapidly as momentum-driven feedback, and to the opposite case where the energy is retained for at least some time as energy-driven, or explosive, feedback.
To understand why the distinction between the two is important, let us consider two extreme limiting cases. We place a cluster of stars at the origin and surround it by a uniform region of gas with density $\rho$. At time $t=0$, the stars ``turn on'' and begin emitting energy and momentum, which is then absorbed by the surrounding gas. Let the momentum and energy injection rates be $\dot{p}_w$ and $\dot{E}_w$; it does not matter if the energy and momentum are carried by photons or baryons, so long as the mass swept up is significantly greater than the mass carried by the wind.
The wind runs into the surrounding gas and causes it to begin moving radially outward, which in turn piles up material that is further away, leading to an expanding shell of gas. Now let us compute the properties of that shell in the two extreme limits of all the energy being radiated away, and all the energy being kept. If all the energy is radiated away, then at any time the radial momentum of the shell must match the radial momentum injected up to that time, i.e.,
\begin{equation}
p_{\rm sh} = M_{\rm sh} v_{\rm sh} = \dot{p}_w t.
\end{equation}
The kinetic energy of the shell is
\begin{equation}
E = \frac{p_{\rm sh}^2}{2 M_{\rm sh}} = \frac{1}{2} v_{\rm sh} \dot{p}_w t.
\end{equation}
For comparison, if none of the energy is radiated away, the energy is simply
\begin{equation}
E = \dot{E}_w t.
\end{equation}
Thus the energy in the energy-conserving case is larger by a factor of
\begin{equation}
\frac{1}{v_{\rm sh}} \cdot \frac{2\dot{E}_w}{\dot{p}_w}.
\end{equation}
If the energy injected by the stars is carried by a wind of baryons, then $2\dot{E}_w/\dot{p}_w$ is simply the speed of that wind, while if it is carried by photons, then $2\dot{E}_w/\dot{p}_w = 2 c$. Thus the energy in the energy-conserving case is larger by a factor of $2c/v_{\rm sh}$ for a photon wind, and $v_w/v_{\rm sh}$ for a baryon wind. These are not small factors: observed expanding shells typically have velocities of at most a few tens of km s$^{-1}$, while wind speeds from massive stars, for example, can be thousands of km s$^{-1}$. Thus it matters a great deal where a particular feedback mechanism lies between the energy- and momentum-conserving limits.
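To put rough numbers on this comparison (the particular values below are illustrative, drawn from the typical scales just quoted): for a shell expanding at $v_{\rm sh} = 10$ km s$^{-1}$ driven by radiation, the energy retained in the energy-conserving limit exceeds the momentum-conserving value by
\begin{equation}
\frac{2c}{v_{\rm sh}} = \frac{2 \left(3\times 10^{5}\mbox{ km s}^{-1}\right)}{10\mbox{ km s}^{-1}} = 6\times 10^{4},
\end{equation}
while for a stellar wind with $v_w = 2000$ km s$^{-1}$ the corresponding factor is $v_w/v_{\rm sh} = 200$. Either way the difference is very large, which is why establishing whether the shocked gas can cool is the first step in evaluating any feedback mechanism.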
\section{Momentum-Driven Feedback Mechanisms}
We are now ready to consider individual mechanisms by which stars can deliver energy and momentum to the gas around them. Our goal is to understand what forms of feedback are significant and to estimate their relative budgets of momentum and energy.
\subsection{Radiation Pressure and Radiatively-Driven Winds}
The simplest form of feedback to consider is radiation pressure. Since the majority of the radiant energy deposited in the ISM will be re-radiated immediately, radiation pressure is (probably) a momentum-driven feedback. To evaluate the momentum it deposits, one need merely evaluate the integrals over the IMF we have written down using the bolometric luminosities of stars. \citet{murray10b} find
\begin{equation}
\left\langle \frac{L}{M}\right \rangle = 1140 \,\lsun\,\msun^{-1} = 2200\mbox{ erg s}^{-1}\mbox{ g}^{-1},
\end{equation}
and the corresponding momentum injection rate is
\begin{equation}
\left\langle \frac{\dot{p}_{\rm rad}}{M}\right \rangle = \frac{1}{c} \left\langle \frac{L}{M}\right \rangle = 7.3\times 10^{-8}\mbox{ cm s}^{-2} = 23\mbox{ km s}^{-1}\mbox{ Myr}^{-1}
\end{equation}
The physical meaning of this expression is that for every gram of matter that goes into stars, those stars produce enough light over 1 Myr to accelerate another gram of matter to a speed of 23 km s$^{-1}$. For very massive stars, radiation pressure also accelerates winds off the stars' surfaces; for such stars, the wind carries a bit under half the momentum of the radiation field. Including this factor raises the estimate by a few tens of percent. However, these winds may also be energy conserving, a topic we will approach momentarily.
Integrated over the lifetimes of the stars out to 100 Myr, the total energy production is
\begin{equation}
\left\langle \frac{E_{\rm rad}}{M}\right\rangle = 1.1\times 10^{51}\mbox{ erg}\,\msun^{-1}
\end{equation}
The majority of this energy is produced in the first $\sim 5$ Myr of a stellar population's life, when the massive stars live and die.
It is common to quote the energy budget in units of $c^2$, which gives a dimensionless efficiency with which stars convert mass into radiation. Doing so gives
\begin{equation}
\epsilon = \frac{1}{c^2} \left\langle \frac{E_{\rm rad}}{M}\right\rangle = 6.2\times 10^{-4}.
\end{equation}
The radiation momentum budget is simply this over $c$,
\begin{equation}
\left\langle \frac{p_{\rm rad,tot}}{M}\right\rangle = 190\mbox{ km s}^{-1}.
\end{equation}
This is an interesting number, since it is not all that different from the circular velocity of a spiral galaxy like the Milky Way. It suggests that the radiant momentum output by stars may be important for pushing matter around in galaxies -- probably not by itself, but perhaps in conjunction with other effects.
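The numbers above follow directly from the quoted $\langle L/M\rangle$; the short Python sketch below simply redoes the unit conversions in cgs as a check on the arithmetic.
\begin{verbatim}
# Check of the radiation budget numbers quoted above (cgs units).
Lsun, Msun = 3.846e33, 1.989e33   # erg/s, g
c = 2.998e10                      # cm/s
Myr, km = 3.156e13, 1.0e5         # s, cm

L_over_M = 1140.0 * Lsun / Msun   # ~2200 erg/s/g
pdot_over_M = L_over_M / c        # ~7.3e-8 cm/s^2
print(L_over_M, pdot_over_M)
print(pdot_over_M * Myr / km)     # ~23 km/s per Myr

E_over_M = 1.1e51 / Msun          # erg/g, from the lifetime yield
print(E_over_M / c**2)            # epsilon ~ 6.2e-4
print(E_over_M / (c * km))        # ~190 km/s of momentum per unit mass
\end{verbatim}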
\subsection{Protostellar Winds}
A second momentum-driven mechanism, that we will discuss in more detail in Chapters \ref{ch:disks_obs} and \ref{ch:disks_theory}, is protostellar jets. All accretion disks appear to produce some sort of wind that carries away some of the mass and angular momentum, and protostars are no exception. The winds from young stars carry a mass flux of order a few tens of percent of the mass coming into the stars, and eject it with a velocity of order the Keplerian speed at the stellar surface. Note that these winds are distinct from the radiatively-driven ones that come from main sequence O stars. They are very different in both their driving mechanism and physical characteristics.
Why do we expect protostellar winds to be a momentum-driven feedback mechanism instead of an energy-driven one? The key lies in their characteristic speeds. Consider a star of mass $M_*$ and radius $R_*$. Its wind will move at a speed of order
\begin{equation}
v_w \sim \sqrt{\frac{GM_*}{R_*}} = 250\mbox{ km s}^{-1}\left(\frac{M_*}{M_\odot}\right)^{1/2} \left(\frac{R_*}{3R_\odot}\right)^{-1/2},
\end{equation}
where the scalings are for typical protostellar masses and radii. The kinetic energy per unit mass carried by the wind is $v_w^2/2$, and when the wind hits the surrounding ISM it will shock and this kinetic energy will be converted to thermal energy. We can therefore find the post-shock temperature from energy conservation. The thermal energy per unit mass is
$(3/2) k_B T/\mu m_{\rm H}$, where $\mu$ is the mean particle mass in H masses. Thus the post-shock temperature will be
\begin{equation}
T = \frac{\mu m_{\rm H} v_w^2}{3 k_B} \sim 1.5 \times 10^6\mbox{ K}
\end{equation}
for the fiducial speed above, where we have used $\mu=0.61$ for fully ionized gas. This is low enough that gas at this temperature will be able to cool fairly rapidly, leaving us in the momentum-conserving limit.
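For reference, this is just an evaluation of the expression above with $\mu = 0.61$ and the fiducial $v_w = 250$ km s$^{-1}$:
\begin{equation}
T = \frac{(0.61)(1.67\times 10^{-24}\mbox{ g})(2.5\times 10^{7}\mbox{ cm s}^{-1})^2}{3 (1.38\times 10^{-16}\mbox{ erg K}^{-1})} \approx 1.5\times 10^{6}\mbox{ K},
\end{equation}
consistent with the value quoted above.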
So how much momentum can we extract? To answer that, we will use our formalism for IMF averaging. Let us consider stars forming over some timescale $t_{\rm form}$. This can be a function of mass if we wish. Similarly, let us assume for simplicity that the accretion rate during the formation stage is constant; again, this assumption actually makes no difference to the result, it just makes the calculation easier. Thus a star of mass $m$ accretes at a rate $\dot{m} = m/t_{\rm form}$ over a time $t_{\rm form}$, and during this time it produces a wind with a mass flux $f \dot{m}$ that is launched with a speed $v_K$. The IMF-averaged yield of wind momentum is then
\begin{equation}
\left\langle\frac{p_w}{M}\right\rangle = \int_{-\infty}^{\infty} d\ln m \, \xi(m) \, \int_0^{t_{\rm form}} dt \, \frac{f m v_K}{t_{\rm form}}.
\end{equation}
In reality $v_K$, $f$, and the accretion rate probably vary over the formation time of a star, but to get a rough answer we can assume that they are constant, in which case the integral is trivial and evaluates to
\begin{equation}
\left\langle\frac{p_w}{M}\right\rangle = f v_K \int_{-\infty}^{\infty} d\ln m \, \xi(m) m = f v_K
\end{equation}
where the second step follows from the normalization of the IMF. Thus we learn that winds supply momentum to the ISM at a rate of order $f v_K$. Depending on the exact choices of $f$ and $v_K$, this amounts to a momentum supply of a few tens of km s$^{-1}$ per unit mass of stars formed.
Thus, in terms of the momentum budget, protostellar winds carry, over the full lifetimes of the stars that produce them, about as much momentum as the radiation carries each Myr. If one integrates over the full lifetime of even a very massive, short-lived star, it therefore puts out much more momentum in the form of radiation than in the form of outflows. So why worry about outflows at all, in this case?
There are two reasons. First, because the radiative luminosities of stars increase steeply with stellar mass, the luminosity of a stellar population is dominated by its few most massive members. In small star-forming regions with few or no massive stars, the radiation pressure will be much less than our estimate, which is based on assuming full sampling of the IMF, suggests. On the other hand, protostellar winds produce about the same amount of momentum per unit mass accreted no matter what stars are doing the accreting -- this is just because $v_K$ is not a very strong function of stellar mass. (This is a bit of an oversimplification, but it is true enough for this purpose.) This means that winds will be significant even in regions that lack massive stars, because they can be produced by low-mass stars too.
Second, while outflows carry less momentum integrated over stars' lifetimes, when they are on they are much more powerful. Typical formation times, we shall see, are of order a few times $10^5$ yr, so the instantaneous production rate of outflow momentum is typically $\sim 100$ km s$^{-1}$ Myr$^{-1}$, a factor of several higher than radiation pressure. Thus winds can dominate over radiation pressure significantly during the short phase when they are on.
\section{(Partly) Energy-Driven Feedback Mechanisms}
\subsection{Ionizing Radiation}
Massive stars produce significant amounts of ionizing radiation. From \citet{murray10b}, the yield of ionizing photons from a zero-age population is
\begin{equation}
\left\langle\frac{S}{M}\right\rangle = 6.3\times 10^{46}\mbox{ photons s}^{-1}\,M_\odot^{-1}.
\end{equation}
The corresponding lifetime-averaged production of ionizing photons is
\begin{equation}
\left\langle \frac{S_{\rm tot}}{M}\right\rangle = 4.2\times 10^{60}\mbox{ photons}\,\msun^{-1}.
\end{equation}
\paragraph{H~\textsc{ii} Region Expansion}
We will not go into tremendous detail on how these photons interact with the ISM, but to summarize: photons capable of ionizing hydrogen will be absorbed with a very short mean free path, producing a bubble of fully ionized gas within which all the photons are absorbed. The size of this bubble can be found by equating the hydrogen recombination rate with the ionizing photon production rate, giving
\begin{equation}
S = \frac{4}{3} \pi r_i^3 n_e n_p \alphab,
\end{equation}
where $r_i$ is the radius of the ionized region, $n_e$ and $n_p$ are the number densities of electrons and protons, and $\alphab$ is the case B recombination rate coefficient, which has a value of roughly $3\times 10^{-13}$ cm$^3$ s$^{-1}$. Cases A and B, what they mean, and how this quantity is computed, are all topics discussed at length in standard ISM references such as \citet{osterbrock06a} and \citet{draine11a}, and here we will simply take $\alphab$ as a known constant.
The radius of the ionized bubble is known as the Str\"omgren radius after Bengt Str\"omgren, the person who first calculated it. If we let $\mu\approx 1.4$ be the mean mass per hydrogen nucleus in the gas in units of $m_{\rm H}$, and $\rho_0$ be the initial density before the photoionizing stars turn on, then $n_p = \rho_0/\mu m_{\rm H}$ and $n_e = 1.1 \rho_0/\mu m_{\rm H}$, with the factor of 1.1 coming from assuming that He is singly ionized (since its ionization potential is not that different from hydrogen's) and from an abundance of 1 He nucleus per 10 H nuclei. Inserting these factors and solving for $r_i$, we obtain the Str\"omgren radius, the equilibrium radius of a sphere of gas ionized by a central source:
\begin{equation}
r_S = \left(\frac{3 S \mu^2 m_{\rm H}^2}{4(1.1) \pi \alphab \rho_0^2}\right)^{1/3} = 2.8 S_{49}^{1/3} n_2^{-2/3}\mbox{ pc},
\end{equation}
where $S_{49} = S/10^{49}$ s$^{-1}$, $n_2 = (\rho_0/\mu m_{\rm H})/100$ cm$^{-3}$, and we have used $\alphab = 3.46\times 10^{-13}$ cm$^3$ s$^{-1}$, the value for a gas at a temperature of $10^4$ K.
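As a quick numerical check on the coefficient in this expression (pure arithmetic with the quantities already defined):
\begin{verbatim}
# Stromgren radius for S_49 = n_2 = 1 (cgs units).
import numpy as np

S = 1.0e49                        # ionizing photons per second
mu, m_H = 1.4, 1.67e-24
rho0 = 100.0 * mu * m_H           # n_H = 100 cm^-3
alphaB = 3.46e-13                 # cm^3/s at 10^4 K
pc = 3.086e18                     # cm

r_S = (3 * S * mu**2 * m_H**2 /
       (4 * 1.1 * np.pi * alphaB * rho0**2))**(1.0 / 3.0)
print(r_S / pc)                   # ~2.8 pc
\end{verbatim}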
The photoionized gas will be heated to $\approx 10^4$ K by the energy deposited by the ionizing photons. The corresponding sound speed in the ionized gas will be
\begin{equation}
c_i = \sqrt{2.2 \frac{k_B T_i}{\mu m_{\rm H}}} = 11 T_{i,4}^{1/2}\mbox{ km s}^{-1},
\end{equation}
where $T_{i,4} = T_i/10^4$ K, and the factor of $2.2$ arises because there are 2.2 free particles per H nucleus (0.1 He per H, and 1.1 electrons per H). The pressure in the ionized region is $\rho_0 c_i^2$, which is generally much larger than the pressure $\rho_0 c_0^2$ outside the ionized region, where $c_0$ is the sound speed in the neutral gas. As a result, the ionized region is hugely over-pressured compared to the neutral gas around it. The gas in this region will therefore begin to expand dynamically.
The time to reach ionization balance is short compared to dynamical timescales, so we can assume that ionization balance is always maintained as the expansion occurs. Consequently, when the ionized region has reached a radius $r_i$, the density inside the ionized region must obey
\begin{equation}
\rho_i = \left[\frac{3 S \mu^2 m_{\rm H}^2}{4(1.1)\pi \alphab r_i^3}\right]^{1/2}.
\end{equation}
At the start of expansion $\rho_i = \rho_0$, but we see here that the density drops as $r_i^{-3/2}$ as expansion proceeds. Since the expansion is highly supersonic with respect to the external gas (as we will see shortly), there is no time for sound waves to propagate away from the ionization front and pre-accelerate the neutral gas. Instead, this gas must be swept up by the expanding H~\textsc{ii} region. However, since $\rho_i \ll \rho_0$, the mass that is swept up as the gas expands must reside not in the ionized region interior, but in a dense neutral shell at its edges. At late times, when $r_i \gg r_S$, we can neglect the mass in the shell interior in comparison to that in the shell, and simply set the shell mass equal to the total mass swept up. We therefore have a shell mass
\begin{equation}
M_{\rm sh} = \frac{4}{3} \pi \rho_0 r_i^3.
\end{equation}
We can write down the equation of motion for this shell. If we neglect the small ambient pressure, then the only force acting on the shell is the pressure $\rho_i c_i^2$ exerted by ionized gas in the H~\textsc{ii} region interior. Conservation of momentum therefore requires that
\begin{equation}
\frac{d}{dt} \left(M_{\rm sh} \dot{r}_i\right) = 4\pi r_i^2 \rho_i c_i^2.
\end{equation}
Rewriting everything in terms of $r_i$, we arrive at an ordinary differential equation for $r_i$:
\begin{equation}
\frac{d}{dt} \left(\frac{1}{3} r_i^3 \dot{r}_i\right) = c_i^2 r_i^2 \left(\frac{r_i}{r_S}\right)^{-3/2},
\end{equation}
where we have used the scaling $\rho_i = \rho_0 (r_i/r_S)^{-3/2}$.
This ODE is straightforward to solve numerically, but if we focus on late times when $r_i \gg r_S$, we can solve it analytically. For $r_i \gg r_S$, we can take $r_i \approx 0$ as $t\rightarrow 0$, and with this boundary condition the ODE cries out for a similarity solution. As a trial, consider $r_i = f r_S (t/t_S)^\eta$, where
\begin{equation}
t_S = \frac{r_S}{c_i} = 240 S_{49}^{1/3} n_2^{-2/3} T_{i,4}^{-1/2}\mbox{ kyr}
\end{equation}
and $f$ is a dimensionless constant. Substituting this trial solution in, there are numerous cancellations, and in the end we obtain
\begin{equation}
\frac{1}{3} \eta (4\eta-1) f^4 \left(\frac{t}{t_S}\right)^{4\eta-2} = f^{1/2} \left(\frac{t}{t_S}\right)^{\eta/2}.
\end{equation}
Clearly we can obtain a solution only if $4\eta-2 = \eta/2$, which requires $\eta = 4/7$. Solving for $f$ gives $f=(49/12)^{2/7}$. We therefore have a solution
\begin{equation}
r_i = r_S \left(\frac{7 t}{2\sqrt{3} t_S}\right)^{4/7} = 9.4 S_{49}^{1/7} n_2^{-2/7} T_{i,4}^{2/7} t_6^{4/7}\mbox{ pc}
\end{equation}
at late times, where $t_6 = t/1$ Myr.
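As a check on the similarity solution (and on the statement above that the ODE is straightforward to solve numerically), the following short Python sketch (an illustrative aside, not part of these notes; it assumes \texttt{numpy} and \texttt{scipy} are available) integrates the equation of motion in units where $r_S = t_S = c_i = 1$, starting from rest at $r_i = r_S$, and compares the result with the similarity solution, which it approaches at late times:
\begin{verbatim}
# Illustrative sketch: integrate d/dt[(1/3) r^3 dr/dt] = r^(1/2)
# in units where r_S = t_S = c_i = 1, starting from rest at r = r_S,
# and compare with the similarity solution r = (7t / (2 sqrt(3)))^(4/7).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    r, u = y                      # u = (1/3) r^3 dr/dt
    return [3.0 * u / r**3, np.sqrt(r)]

sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

for t in (1.0, 3.0, 10.0, 30.0):
    r_num = sol.sol(t)[0]
    r_sim = (7.0 * t / (2.0 * np.sqrt(3.0)))**(4.0 / 7.0)
    print(f"t/t_S = {t:5.1f}  r_num/r_S = {r_num:5.2f}  r_sim/r_S = {r_sim:5.2f}")
\end{verbatim}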
\paragraph{Feedback Effects of H~\textsc{ii} Regions}
Given this result, what can we say about the effects of an expanding H~\textsc{ii} region? There are several possible effects: ionization can eject mass, drive turbulent motions, and possibly even disrupt clouds entirely. First consider mass ejection. In our simple calculation, we have taken the ionized gas to be trapped inside the interior of a spherical H~\textsc{ii} region. In reality, though, once the H~\textsc{ii} region expands to the point where it encounters a low density region at a cloud edge, it will turn into a ``blister'' type region, and the ionized gas will freely escape into the low density medium.\footnote{This is a case where the less pleasant nomenclature has won out. Such flows are sometimes also called ``champagne'' flows, since the ionized gas bubbles out of the dense molecular cloud like champagne escaping from a bottle neck. However, the more common term in the literature these days appears to be blister. What this says about the preferences and priorities of the astronomical community is left as an exercise for the reader.} The mass flux carried in this ionized wind will be roughly
\begin{equation}
\dot{M} = 4\pi r_i^2 \rho_i c_i,
\end{equation}
i.e., the area from which the wind flows times the characteristic density of the gas at the base of the wind times the characteristic speed of the wind. Substituting in our similarity solution, we have
\begin{equation}
\dot{M} = 4\pi r_S^2 \rho_0 c_i \left(\frac{7t}{2\sqrt{3} t_S}\right)^{2/7} = 7.2\times 10^{-3} t_6^{2/7} S_{49}^{4/7} n_2^{-1/7} T_{i,4}^{1/7}\,\msun\mbox{ yr}^{-1}.
\end{equation}
We therefore see that, over the roughly $3-4$ Myr lifetime of an O star, it can eject $\sim 10^3 - 10^4$ $\msun$ of mass from its parent cloud, provided that cloud is at a relatively low density (i.e., $n_2$ is not too big). Thus massive stars can eject many times their own mass from a molecular cloud. In fact, some authors have used this effect to make an estimate of the star formation efficiency in GMCs \citep[e.g.,][]{matzner02a}.
We can also estimate the energy contained in the expanding shell. This is
\begin{eqnarray}
E_{\rm sh} & = & \frac{1}{2} M_{\rm sh} \dot{r}_i^2 = \frac{32}{147}\pi \rho_0 \frac{r_S^5}{t_S^2} \left(\frac{7t}{2\sqrt{3} t_S}\right)^{6/7}
\nonumber \\
& = & 8.1\times 10^{47} t_6^{6/7} S_{49}^{5/7} n_2^{-10/7} T_{i,4}^{10/7}\mbox{ erg}.
\end{eqnarray}
For comparison, the gravitational binding energy of a $10^5$ $\msun$ GMC with a surface density of $0.03$ g cm$^{-2}$ is $\sim 10^{50}$ erg. Thus a single O star's H~\textsc{ii} region provides considerably less energy than this. On the other hand, the collective effects of $\sim 10^2$ O stars, with a combined ionizing luminosity of $10^{51}$ s$^{-1}$ or so, can begin to produce H~\textsc{ii} regions whose energies rival the binding energies of individual GMCs. This means that H~\textsc{ii} region shells may sometimes be able to unbind GMCs entirely. Even if they cannot, they may be able to drive significant turbulent motions within GMCs.
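To see where this figure comes from (a rough plug-in in cgs units, ignoring order-unity structure factors), note that a cloud of mass $M = 10^5$ $\msun$ and surface density $\Sigma = 0.03$ g cm$^{-2}$ has radius $R = (M/\pi\Sigma)^{1/2} \approx 4.6\times 10^{19}$ cm $\approx 15$ pc, so its binding energy is
\begin{equation}
\frac{G M^2}{R} \approx \frac{(6.7\times 10^{-8})(2\times 10^{38}\mbox{ g})^2}{4.6\times 10^{19}\mbox{ cm}} \approx 6\times 10^{49}\mbox{ erg},
\end{equation}
indeed of order $10^{50}$ erg.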
We can also compute the momentum of the shell, for comparison to the other forms of feedback we discussed previously. This is
\begin{equation}
p_{\rm sh} = M_{\rm sh} \dot{r}_i = 1.1\times 10^5 n_2^{-1/7} T_{i,4}^{8/7} S_{49}^{4/7} t_6^{9/7} \, M_\odot\mbox{ km s}^{-1}.
\end{equation}
Since this is non-linear in $S_{49}$ and in time, the effects of H~\textsc{ii} regions will depend on how the stars are clustered together, and how long they live. To get a rough estimate, though, we can take the typical cluster to have an ionizing luminosity around $10^{49}$ s$^{-1}$, since by number most clusters are small, and we can adopt an age of 4 Myr. This means that (also using $n_2 = 1$ and $T_{i,4} = 1$) the momentum injected per $10^{49}$ photons s$^{-1}$ of luminosity is $p = 3-5\times 10^5$ $M_\odot$ km s$^{-1}$. Recalling that we get $6.3\times 10^{46}$ photons s$^{-1}$ $M_\odot^{-1}$ for a zero-age population, this means that the momentum injected per unit stellar mass for H~\textsc{ii} regions is roughly
\begin{equation}
\left\langle\frac{p_{\rm HII}}{M}\right\rangle \sim 3\times 10^3\mbox{ km s}^{-1}.
\end{equation}
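For concreteness, this figure is just the product of the two numbers above, with no additional physics:
\begin{equation}
\left\langle\frac{p_{\rm HII}}{M}\right\rangle \approx \left(3-5\times 10^5\, M_\odot\mbox{ km s}^{-1}\right) \times \frac{6.3\times 10^{46}\mbox{ s}^{-1}\, M_\odot^{-1}}{10^{49}\mbox{ s}^{-1}} \approx 2-3\times 10^3\mbox{ km s}^{-1}.
\end{equation}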
This is obviously a very rough calculation, and it can be done with much more sophistication, but this analysis suggests that H~\textsc{ii} regions are likely the dominant feedback mechanism compared to winds and radiation pressure.
There is one important caveat to make, though. Although in the similarity solution we formally have $v_i \rightarrow \infty$ as $r_i \rightarrow 0$, in reality the ionized region cannot expand faster than roughly the ionized gas sound speed: one cannot drive a 100 km s$^{-1}$ expansion using gas with a sound speed of 10 km s$^{-1}$. As a result, all of these effects will not work in any cluster for which the escape speed or the virial velocity exceeds $\sim 10$ km s$^{-1}$. This is not a trivial limitation, since for very massive star clusters the escape speed can exceed this value. An example is the R136 cluster in the LMC, which has a present-day stellar mass of $5.5\times 10^4$ $\msun$ inside a radius of $1$ pc \citep{hunter96a}. The escape speed from the stars alone is roughly 20 km s$^{-1}$. Assuming there was gas in the past when the cluster formed, the escape speed must have been even higher. For a region like this, H~\textsc{ii} regions cannot be important.
\subsection{Hot Stellar Winds}
Next let us consider the effects of stellar winds. As we alluded to earlier, O stars launch winds with velocities of $v_w \sim 1000-2500$ km s$^{-1}$ and mass fluxes of $\dot{M}_w \sim 10^{-7}$ $\msun$ yr$^{-1}$. We have already seen that the momentum carried by these winds is fairly unimportant in comparison to the momentum of the protostellar outflows or the radiation field, let alone the momentum provided by H~\textsc{ii} regions. However, because of the high wind velocities, repeating the analysis we performed for protostellar jets yields a characteristic post-shock temperature that is closer to $10^8$ K than $10^6$ K. Gas at such high temperatures has a very long cooling time, so we might end up with an energy-driven feedback. We therefore consider that case next.
Since the winds are radiatively driven, they tend to carry momenta comparable to that carried by the stellar radiation field. The observed correlation between stellar luminosity and wind momentum \citep[e.g.,][]{repolust04a} is that
\begin{equation}
\dot{M}_w v_w \approx 0.5 \frac{L_*}{c},
\end{equation}
where $L_*$ is the stellar luminosity. This implies that the mechanical luminosity of the wind is
\begin{equation}
L_w = \frac{1}{2} \dot{M}_w v_w^2 = \frac{L_*^2}{8 \dot{M}_w c^2} = 850 L_{*,5}^2 \dot{M}_{w,-7}^{-1} \, \lsun.
\end{equation}
This is not much compared to the star's radiant luminosity, but that radiation will mostly not go into pushing the ISM around. The wind, on the other hand, might. Also notice that the wind's energy output integrated over time is
\begin{equation}
E_w = L_w t = 1.0\times 10^{50} L_{*,5}^2 \dot{M}_{w,-7}^{-1} t_6\mbox{ erg},
\end{equation}
so over the $\sim 4$ Myr lifetime of a very massive star, one with $L_{*,5}\sim 3$, the total mechanical energy carried by the wind is not much less than the amount of energy released when the star goes supernova.
If energy is conserved, and we assume that about half the available energy goes into the kinetic energy of the shell and half is in the hot gas left in the shell interior,\footnote{This assumption is not quite right. See \citet{castor75a} and \citet{weaver77a} for a better similarity solution. However, for an order of magnitude estimate, which is of interest to us, this simple assumption suffices.} conservation of energy then requires that
\begin{equation}
\frac{d}{dt} \left(\frac{2}{3}\pi \rho_0 r_b^3 \dot{r}_b^2\right) \approx \frac{1}{2} L_w.
\end{equation}
As with the H~\textsc{ii} region case, this cries out for a similarity solution. Letting $r_b = A t^\eta$, we have
\begin{equation}
\frac{4}{3} \pi \eta^2 (5\eta-2) \rho_0 A^5 t^{5\eta-3} \approx L_w.
\end{equation}
Clearly we must have $\eta=3/5$ and $A=[25 L_w/(12\pi \rho_0)]^{1/5}$. Putting in some numbers,
\begin{equation}
r_b = 16 L_{*,5}^{2/5} \dot{M}_{w,-7}^{-1/5} n_2^{-1/5} t_6^{3/5}\mbox{ pc}.
\end{equation}
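As a check on this coefficient (an illustrative plug-in, using $L_w \approx 3.3\times 10^{36}$ erg s$^{-1}$ and $\rho_0 \approx 2.3\times 10^{-22}$ g cm$^{-3}$, the values corresponding to $L_{*,5} = \dot{M}_{w,-7} = n_2 = 1$ above),
\begin{equation}
A = \left(\frac{25 L_w}{12\pi \rho_0}\right)^{1/5} \approx 3.9\times 10^{11}\mbox{ cm s}^{-3/5},
\qquad
r_b = A \, (3.2\times 10^{13}\mbox{ s})^{3/5} \approx 5\times 10^{19}\mbox{ cm} \approx 16\mbox{ pc}
\end{equation}
at $t = 1$ Myr, in agreement with the expression above.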
Note that this is greater than the radius of the comparable H~\textsc{ii} region, so the wind will initially move faster and drive the H~\textsc{ii} region into a thin ionized layer between the hot wind gas and the outer cool shell -- {\it if the energy-driven limit is correct}. A corollary of this is that the wind would be even more effective than the ionized gas at ejecting mass from the cloud.
However, this may not be correct, because this solution assumes that the energy carried by the wind will stay confined within a closed shell. This may not be the case: the hot gas may instead break out and escape, imparting relatively little momentum. Whether this happens or not is difficult to determine theoretically, but can be addressed by observations. In particular, if the shocked wind gas is trapped inside the shell, it should produce observable X-ray emission. We can quantify how much X-ray emission we should see with a straightforward argument. It is easiest to phrase this argument in terms of the pressure of the X-ray emitting gas, which is essentially what an X-ray observation measures.
Consider an expanding shell of matter that began its expansion a time $t$ ago. In the energy-driven case, the total energy within that shell is, up to factors of order unity, $E_w = L_w t$. The pressure is simply $2/3$ of the energy density (since the gas is monatomic at these temperatures). Thus,
\begin{equation}
P_X = \frac{2 E_w}{3 [(4/3)\pi r^3]} = \frac{L_*^2 t}{16\pi \dot{M}_w c^2 r^3}.
\end{equation}
It is useful to compute the ratio of this to the pressure exerted by the radiation, which is simply twice that exerted by the wind in the momentum-driven limit. This is
\begin{equation}
P_{\rm rad} = \frac{L_*}{4\pi r^2 c}.
\end{equation}
We define this ratio as the trapping factor:
\begin{equation}
f_{\rm trap} = \frac{P_X}{P_{\rm rad}} = \frac{L_* t}{4\dot{M}_w c r} \approx \frac{L_*}{4\dot{M}_w c v},
\end{equation}
where in the last step we used $v \approx r/t$, where $v$ is the expansion velocity of the shell. If we now use the relation $\dot{M}_w v_w \approx (1/2)L_*/c$, we finally arrive at
\begin{equation}
f_{\rm trap} \approx \frac{v_w}{2v}.
\end{equation}
Thus if shells expand in the energy-driven limit due to winds, the pressure of the hot gas within them should exceed the direct radiation pressure by a factor of roughly $v_w/v$, where $v$ is the shell expansion velocity and $v_w$ is the wind launch velocity. In contrast, the momentum-driven limit gives $P_X / P_{\rm rad} \sim 1/2$, since the hot gas exerts a force that is determined by the wind momentum, which is roughly half the momentum carried by the stellar radiation field.
\citet{lopez11a} observed the 30 Doradus H~\textsc{ii} region, which is expanding with $v\approx 20$ km s$^{-1}$, giving a predicted $f_{\rm trap} = 20$ for a conservative $v_w = 1000$ km s$^{-1}$. They then measured the hot gas pressure from the X-rays and the direct radiation pressure from the stars' optical emission. The result is that $f_{\rm trap}$ is much closer to 0.5 than 20 for 30 Doradus, indicating that the momentum-driven solution is closer to reality there. \citet{harper-clark09a} reached a similar conclusion about the Carina Nebula.
\subsection{Supernovae}
We can think of the energy and momentum budget from supernovae as simply representing a special case of the lifetime budgets we've computed. In this case, we can simply think of $q(M,t)$ as being a $\delta$ function: all the energy and momentum of the supernova is released in a single burst at a time $t=t_l(m)$, where $t_l(m)$ is the lifetime of the star in question. We normally assume that the energy yield per star is $10^{51}$ erg, and have to make some estimate of the minimum mass at which a SN will occur, which is roughly 8 $\msun$. We can also, if we want, imagine mass ranges where other things happen, for example direct collapse to a black hole, pair instability supernovae that produce more energy, or something more exotic. These choices usually do not make much difference, though, because they affect very massive stars, and since the supernova energy yield (unlike the luminosity) is not a sharp function of mass, the relative rarity of massive stars means they make a small contribution. Thus it is usually safe to ignore these effects.
Given this preamble, we can write the approximate supernova energy yield per unit mass as
\begin{equation}
\left\langle \frac{E_{\rm SN}}{M}\right\rangle = E_{\rm SN} \int_{m_{\rm min}}^\infty d\ln m\, \xi(m) \equiv E_{\rm SN} \left\langle \frac{N_{\rm SN}}{M}\right\rangle,
\end{equation}
where $E_{\rm SN} = 10^{51}$ erg is the constant energy per SN, and $m_{\rm min} = 8$ $\msun$ is the minimum mass to have a supernova. Note that the integral, which we have named $\langle N_{\rm SN}/M\rangle$, is simply the number of stars above $m_{\rm min}$ per unit mass in stars total, which is just the expected number of supernovae per unit mass of stars. For a Chabrier IMF from $0.01-120$ $\msun$, we have
\begin{equation}
\left\langle \frac{N_{\rm SN}}{M}\right\rangle = 0.011\, \msun^{-1}
\quad
\left\langle \frac{E_{\rm SN}}{M}\right\rangle = 1.1\times 10^{49}\mbox{ erg}\,\msun^{-1}=6.1\times 10^{-6} c^2.
\end{equation}
A more detailed calculation from starburst99 agrees very well with this crude estimate. Note that this, plus the Milky Way's SFR of $\sim 1$ $\msun$ yr$^{-1}$, is the basis of the oft-quoted result that we expect $\sim 1$ supernova per century in the Milky Way.
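To illustrate how this number can be obtained, the short Python sketch below (an aside, not part of these notes) evaluates the integral for one common Chabrier-like parameterization of the IMF, a lognormal centred at $0.2$ $\msun$ with a width of $0.55$ dex joined to a slope of $-1.35$ in $dn/d\log m$ above $1$ $\msun$; these parameters are assumptions of the example, but they reproduce a value close to the $0.011\, \msun^{-1}$ quoted above:
\begin{verbatim}
# Illustrative sketch: <N_SN/M> for a Chabrier-like IMF (parameters assumed).
import numpy as np
from scipy.integrate import quad

SIGMA, M_C, SLOPE = 0.55, 0.2, -1.35   # width (dex), centre (Msun), slope
LOG_LO, LOG_HI = np.log10(0.01), np.log10(120.0)
LOG_SN = np.log10(8.0)                 # minimum mass for a supernova

def xi(logm):
    # dn/dlog m, continuous at 1 Msun, arbitrary normalisation
    if logm <= 0.0:
        return np.exp(-(logm - np.log10(M_C))**2 / (2 * SIGMA**2))
    # power law matched to the lognormal at m = 1 Msun
    return np.exp(-np.log10(M_C)**2 / (2 * SIGMA**2)) * 10.0**(SLOPE * logm)

mass, _ = quad(lambda x: 10.0**x * xi(x), LOG_LO, LOG_HI)
n_sn, _ = quad(xi, LOG_SN, LOG_HI)
print(f"<N_SN/M> = {n_sn / mass:.4f} per Msun")   # roughly 0.011
\end{verbatim}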
The momentum yield from SNe can be computed in the same way. This is slightly more uncertain, because it is easier to measure the SN energy than its momentum -- the latter requires the ability to measure the velocity or mass of the ejecta before they are mixed with significant amounts of ISM. However, roughly speaking the ejection velocity is $v_{\rm ej} \approx 10^9$ cm s$^{-1}$, which means that the momentum is $p_{\rm SN} = 2 E_{\rm SN}/v_{\rm ej}$. Adopting this value, we have
\begin{equation}
\left\langle \frac{p_{\rm SN}}{M}\right\rangle = \frac{2}{v_{\rm ej}} \left\langle \frac{E_{\rm SN}}{M}\right\rangle = 55 v_{\rm ej,9}^{-1} \mbox{ km s}^{-1}.
\end{equation}
Physically, this means that every $\msun$ of matter that goes into stars provides enough momentum to raise another $\msun$ of matter to a speed of 55 km s$^{-1}$. This is not very much compared to other feedbacks, but of course supernovae, like stellar winds, may have an energy-conserving phase where their momentum deposition grows. We will discuss the question of supernova momentum deposition more in Chapter \ref{ch:sflaw_th} in the context of models for regulation of the star formation rate.
"alphanum_fraction": 0.7461521135,
"avg_line_length": 122.1926605505,
"ext": "tex",
"hexsha": "0f8aa832cc95b6da113c20281a18e7cc0b9d33b2",
"lang": "TeX",
"max_forks_count": 14,
"max_forks_repo_forks_event_max_datetime": "2020-05-08T15:58:05.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-05-22T17:47:29.000Z",
"max_forks_repo_head_hexsha": "d1c8a10f84fc1676b492ddb4f3bd8b73455b5d07",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "keflavich/star_formation_notes",
"max_forks_repo_path": "chapters/chapter7.tex",
"max_issues_count": 9,
"max_issues_repo_head_hexsha": "d1c8a10f84fc1676b492ddb4f3bd8b73455b5d07",
"max_issues_repo_issues_event_max_datetime": "2022-01-31T02:07:47.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-05-31T17:15:19.000Z",
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "keflavich/star_formation_notes",
"max_issues_repo_path": "chapters/chapter7.tex",
"max_line_length": 1112,
"max_stars_count": 67,
"max_stars_repo_head_hexsha": "d1c8a10f84fc1676b492ddb4f3bd8b73455b5d07",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "Open-Astrophysics-Bookshelf/star_formation_notes",
"max_stars_repo_path": "chapters/chapter7.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-02T02:02:57.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-05-05T22:43:39.000Z",
"num_tokens": 11333,
"size": 39957
} |
\documentclass{lmcs}
\pdfoutput=1
\keywords{\textpi-calculus, linear logic, session types, non-determinism, deadlock freedom}
\usepackage[british]{babel}
\usepackage{graphicx}
\DeclareGraphicsRule{.ai}{pdf}{.ai}{}
\usepackage{lscape}
\usepackage{amsmath,amssymb}
\usepackage{scalerel,rotating}
\usepackage{textgreek,textcomp,stmaryrd}
\usepackage{centernot}
\usepackage{xspace}
\xspaceaddexceptions{]\}}
\arraycolsep=1ex
\usepackage[all]{foreign}
%% Names
\newcommand{\Ami}{Ami\xspace}
\newcommand{\Boe}{Bo\'{e}\xspace}
\newcommand{\Cat}{Cat\xspace}
%% Emoji
\newcommand{\emoji}[2][1em]{\ensuremath{\vcenter{%
\hbox{\includegraphics[width=#1]{#2}}}}\xspace}
\newcommand{\twemoji}[2][1em]{\emoji[#1]{twemoji/2/assets/#2.ai}}
\newcommand{\ami}[1][1em]{\twemoji[#1]{1f9d1-1f3fd}}%{1f466-1f3fe}}
\newcommand{\boe}[1][1em]{\twemoji[#1]{1f469-1f3fd}}%{1f469-1f3fb}}
\newcommand{\cat}[1][1em]{\twemoji[#1]{1f467-1f3fd}}
\newcommand{\sliceofcake}[1][1em]{\twemoji[#1]{1f370}}
\newcommand{\doughnut}[1][1em]{\twemoji[#1]{1f369}}
\newcommand{\cake}[1][1em]{\twemoji[#1]{1f382}}
\newcommand{\nope}[1][1em]{\twemoji[#1]{1f64c-1f3fd}}%{1f64c}}
\newcommand{\money}[1][1em]{\twemoji[#1]{1f4b0}}
\newcommand{\bill}[1][1em]{\twemoji[#1]{1f4b7}}
\newcommand{\bank}[1][1em]{\twemoji[#1]{1f3e6}}
\newcommand{\store}[1][1em]{\twemoji[#1]{1f3ea}}
\newcommand{\bob}[1][1em]{\twemoji[#1]{1f474-1f3fc}}
%% Terms, types, and constraints
\definecolor{tmcolor}{HTML}{C02F1D}
\definecolor{tycolor}{HTML}{1496BB}
\providecommand{\tm}[1]{\textcolor{tmcolor}{\ensuremath{\normalfont#1}}}
\providecommand{\ty}[1]{\textcolor{tycolor}{\ensuremath{\normalfont#1}}}
%% Proof trees
\usepackage{bussproofs}
\EnableBpAbbreviations
\providecommand{\seq}[2][]{\ensuremath{\tm{#1}\;\vdash\;\ty{#2}}}
\providecommand{\tmty}[2]{\ensuremath{\tm{#1}\colon\!\ty{#2}}}
\providecommand{\NOM}[1]{\RightLabel{\textsc{#1}}}
\providecommand{\SYM}[1]{\RightLabel{\ensuremath{#1}}}
\newenvironment{prooftree*}{\leavevmode\hbox\bgroup}{\DisplayProof\egroup}
\providecommand{\hole}{\ensuremath{\square}}
\providecommand{\fv}[1]{\ensuremath{\text{fv}(#1)}}
\providecommand{\freeIn}[2]{\tm{#1}\in\tm{#2}}
\providecommand{\notFreeIn}[2]{\tm{#1}\not\in\tm{#2}}
\providecommand{\reducesto}[3][]{\ensuremath{\tm{#2}\overset{#1}{\Longrightarrow}\tm{#3}}}
%% System names
\providecommand{\dhcp}{\ensuremath{\text{DHCP}}\xspace}
\providecommand{\hcp}{\ensuremath{\text{HCP}}\xspace}
\providecommand{\nodcap}{\ensuremath{\hcp_{\text{ND}}}\xspace}
\providecommand{\piDILL}{\textpi DILL\xspace}
\providecommand{\SILL}{$\text{SILL}$\xspace}
\providecommand{\SILLS}{$\text{\SILL}_{S}$\xspace}
\providecommand{\cp}{CP\xspace}
%% PI - Terms
\providecommand{\ppar}{\ensuremath{\parallel}}
\providecommand{\piCalc}[0]{\textpi-calculus\xspace}
\providecommand{\piSend}[3]{\ensuremath{#1[ #2 ].#3}}
\providecommand{\piUSend}[3]{\ensuremath{#1\langle #2 \rangle.#3}}
\providecommand{\piRecv}[3]{\ensuremath{#1( #2 ).#3}}
\providecommand{\piPar}[2]{\ensuremath{#1 \ppar #2}}
\providecommand{\piNew}[3]{\ensuremath{(\nu #1#2)#3}}
\providecommand{\piRepl}[1]{\ensuremath{!#1}}
\providecommand{\piHalt}[0]{\ensuremath{0}}
\providecommand{\piSub}[3]{\ensuremath{#3\{#1/#2\}}}
%% CP - Terms
\providecommand{\cpLink}[2]{\ensuremath{#1{\leftrightarrow}#2}}
\providecommand{\cpCut}[3]{\ensuremath{\piNew{#1}({\piPar{#2}{#3}})}}
\providecommand{\cpSend}[4]{\ensuremath{#1[#2].(\piPar{#3}{#4})}}
\providecommand{\cpRecv}[3]{\ensuremath{#1(#2).#3}}
\providecommand{\cpWait}[2]{\ensuremath{#1().#2}}
\providecommand{\cpHalt}[1]{\ensuremath{#1[].0}}
\providecommand{\cpInl}[2]{\ensuremath{#1\triangleleft\texttt{inl}.#2}}
\providecommand{\cpInr}[2]{\ensuremath{#1\triangleleft\texttt{inr}.#2}}
\providecommand{\cpCase}[3]{\ensuremath{#1\triangleright\{\texttt{inl}:#2;\texttt{inr}:#3\}}}
\providecommand{\cpAbsurd}[1]{\ensuremath{#1\triangleright\{\}}}
\providecommand{\cpSub}[3]{\ensuremath{\piSub{#1}{#2}{#3}}}
\providecommand{\cpPlug}[2]{\ensuremath{{#1}[{#2}]}}
%% CP - Types
\newcommand*{\queue}[1]{\ensuremath{\text{queue} \; #1}}%
\newcommand*{\acquire}[1]{\ensuremath{{\uparrow^{S}_{L}}#1}}%
\newcommand*{\release}[1]{\ensuremath{{\downarrow^{S}_{L}}#1}}%
\providecommand{\parr}{\mathbin{\bindnasrepma}}
\providecommand{\with}{\mathbin{\binampersand}}
\providecommand{\plus}{\ensuremath{\oplus}}
\providecommand{\tens}{\ensuremath{\otimes}}
\providecommand{\one}{\ensuremath{\mathbf{1}}}
\providecommand{\nil}{\ensuremath{\mathbf{0}}}
\providecommand{\emptycontext}{\ensuremath{\,\cdot\,}}
\providecommand{\hsep}{\ensuremath{\parallel}}
\providecommand{\emptyhypercontext}{\varnothing}
%% HCP - Structural rules
\providecommand{\hcpInfAx}{%
\begin{prooftree*}
\AXC{$\vphantom{\seq[ Q ]{ \Delta, \tmty{y}{A^\bot} }}$}
\NOM{Ax}
\UIC{$\seq[ \cpLink{x}{y} ]{ \tmty{x}{A}, \tmty{y}{A^\bot} }$}
\end{prooftree*}}
\providecommand{\hcpInfCut}{%
\begin{prooftree*}
\AXC{$\seq[{ P }]{
\mathcal{G}\hsep
\Gamma, \tmty{x}{A} \hsep
\Delta, \tmty{x'}{A^\bot}
}$}
\NOM{Cut}
\UIC{$\seq[{ \piNew{x}{x'}{P} }]{
\mathcal{G}\hsep
\ty{\Gamma}, \ty{\Delta}
}$}
\end{prooftree*}}
\providecommand{\hcpInfMix}{%
\begin{prooftree*}
\AXC{$\seq[ P ]{\mathcal{G} }$}
\AXC{$\seq[ Q ]{\mathcal{H} }$}
\NOM{H-Mix}
\BIC{$\seq[ \piPar{P}{Q} ]{
\mathcal{G} \hsep \mathcal{H} }$}
\end{prooftree*}}
\providecommand{\hcpInfHalt}{%
\begin{prooftree*}
\AXC{$\vphantom{\seq[ Q ]{ \Delta, \tmty{y}{A^\bot} }}$}
\NOM{H-Mix$_0$}
\UIC{$\seq[{ \piHalt }]{ \emptyhypercontext }$}
\end{prooftree*}}
%% HCP - Logical rules
\providecommand{\hcpInfBoundTens}{%
\begin{prooftree*}
\AXC{$\seq[{ P }]{
\ty{\Gamma}, \tmty{y}{A} \hsep \ty{\Delta}, \tmty{x}{B}
}$}
\SYM{\tens}
\UIC{$\seq[{ \piSend{x}{y}{P} }]{
\ty{\Gamma}, \ty{\Delta}, \tmty{x}{A \tens B}
}$}
\end{prooftree*}}
\providecommand{\hcpInfUnboundTens}{%
\begin{prooftree*}
\AXC{$\seq[{ P }]{
\ty{\Gamma}, \tmty{x}{B}
}$}
\UIC{$\seq[{ \piUSend{x}{y}{P} }]{
\ty{\Gamma}, \tmty{x}{A \tens B}, \tmty{y}{A^\bot}
}$}
\end{prooftree*}}
\providecommand{\hcpInfParr}{%
\begin{prooftree*}
\AXC{$\seq[ P ]{%
\Gamma , \tmty{y}{A} , \tmty{x}{B} }$}
\SYM{(\parr)}
\UIC{$\seq[ \cpRecv{x}{y}{P} ]{
\Gamma , \tmty{x}{A \parr B} }$}
\end{prooftree*}}
\providecommand{\hcpInfOne}{%
\begin{prooftree*}
\AXC{$\seq[{ P }]{\emptyhypercontext}$}
\SYM{\one}
\UIC{$\seq[{ \piSend{x}{}{P} }]{
\tmty{x}{\one}
}$}
\end{prooftree*}}
\providecommand{\hcpInfBot}{%
\begin{prooftree*}
\AXC{$\seq[ P ]{
\Gamma }$}
\SYM{(\bot)}
\UIC{$\seq[ \cpWait{x}{P} ]{
\Gamma , \tmty{x}{\bot} }$}
\end{prooftree*}}
\providecommand{\hcpInfPlus}[1]{%
\ifdim#1pt=1pt
\begin{prooftree*}
\AXC{$\seq[ P ]{
\Gamma , \tmty{x}{A} }$}
\SYM{(\plus_1)}
\UIC{$\seq[{ \cpInl{x}{P} }]{
\Gamma , \tmty{x}{A \plus B} }$}
\end{prooftree*}
\else%
\ifdim#1pt=2pt
\begin{prooftree*}
\AXC{$\seq[ P ]{
\Gamma , \tmty{x}{B} }$}
\SYM{(\plus_2)}
\UIC{$\seq[ \cpInr{x}{P} ]{
\Gamma , \tmty{x}{A \plus B} }$}
\end{prooftree*}
\else%
\fi%
\fi%
}
\providecommand{\hcpInfWith}{%
\begin{prooftree*}
\AXC{$\seq[ P ]{
\Gamma , \tmty{x}{A} }$}
\AXC{$\seq[ Q ]{
\Gamma, \tmty{x}{B} }$}
\SYM{(\with)}
\BIC{$\seq[ \cpCase{x}{P}{Q} ]{
\Gamma , \tmty{x}{A \with B} }$}
\end{prooftree*}}
\providecommand{\hcpInfTop}{%
\begin{prooftree*}
\AXC{}
\SYM{(\top)}
\UIC{$\seq[ \cpAbsurd{x} ]{ \Gamma, \tmty{x}{\top} }$}
\end{prooftree*}}
\providecommand{\hcpInfNil}{%
(no rule for \ty{\nil})}
%% CPND - Terms
\providecommand{\ncHalt}{\ensuremath{0}}
\providecommand{\ncExpn}[3]{\ensuremath{#1\uparrow #2.#3}}
\providecommand{\ncIntl}[3]{\ensuremath{#1\downarrow #2.#3}}
\providecommand{\ncSrv}[3]{\ensuremath{{\star}{#1}(#2).#3}}
\providecommand{\ncCnt}[3]{\ensuremath{{\star}{#1}[#2].#3}}
\providecommand{\ncPool}[2]{\ensuremath{(\piPar{#1}{#2})}}
\providecommand{\ncCont}[3]{\ensuremath{\cpSub{#1}{#2}{#3}}}
%% CPND - Types
\providecommand{\give}[2][]{\ensuremath{{ ? }_{#1}{#2}}}
\providecommand{\take}[2][]{\ensuremath{{ ! }_{#1}{#2}}}
%% CPND - Reduction rule names
\providecommand{\ncRedBetaStar}[1]{\textsc{E-Request}\xspace}
%% CPND - Typing judgement
\providecommand{\ncInfTake}[1]{%
\begin{prooftree*}
\AXC{$\seq[{ P }]{
\Gamma, \tmty{y}{A} }$}
\SYM{(\take[1]{})}
\UIC{$\seq[{ \ncCnt{x}{y}{P} }]{
\Gamma, \tmty{x}{\take[1]{A}} }$}
\end{prooftree*}}
\providecommand{\ncInfGive}[1]{%
\begin{prooftree*}
\AXC{$\seq[{ P }]{
\Gamma, \tmty{y}{A} }$}
\SYM{(\give[1]{})}
\UIC{$\seq[{ \ncSrv{x}{y}{P} }]{
\Gamma, \tmty{x}{\give[1]{A}} }$}
\end{prooftree*}}
\providecommand{\ncInfCont}{%
\begin{prooftree*}
\AXC{$\seq[{ P }]{
\mathcal{G} \hsep
\Gamma, \tmty{x}{\give[m]{A}} , \tmty{x'}{\give[n]{A}} }$}
\SYM{\textsc{Cont}_{?}}
\UIC{$\seq[{ \cpSub{x}{x'}{P} }]{
\mathcal{G} \hsep
\Gamma, \tmty{x}{\give[m+n]{A}} }$}
\end{prooftree*}}
\providecommand{\ncInfPool}{%
\begin{prooftree*}
\AXC{$\seq[{ P }]{
\mathcal{G} \hsep
\Gamma, \tmty{x}{\take[m]{A}} \hsep
\Delta, \tmty{x'}{\take[n]{A}} }$}
\SYM{\textsc{Cont}_{!}}
\UIC{$\seq[{ \cpSub{x}{x'}{P} }]{
\mathcal{G} \hsep
\Gamma, \Delta, \tmty{x}{\take[m+n]{A}} }$}
\end{prooftree*}}
%% HCP - Names of rules
\providecommand{\hcpEquivLinkComm}{\textsc{SC-LinkSwap}\xspace}
\providecommand{\hcpEquivMixAss}{\textsc{SC-ParAssoc}\xspace}
\providecommand{\hcpEquivMixComm}{\textsc{SC-ParComm}\xspace}
\providecommand{\hcpEquivMixHalt}{\textsc{SC-ParNil}\xspace}
\providecommand{\hcpEquivScopeExt}{\textsc{SC-ResExt}\xspace}
\providecommand{\hcpEquivNewHalt}{\textsc{SC-ResNil}\xspace}
\providecommand{\hcpEquivNewComm}{\textsc{SC-ResComm}\xspace}
\providecommand{\hcpRedAxCut}[1]{\textsc{E-Link}\xspace}%
\providecommand{\hcpRedBetaTensParr}{\textsc{E-Send}\xspace}%
\providecommand{\hcpRedBetaOneBot}{\textsc{E-Close}\xspace}%
\providecommand{\hcpRedBetaPlusWith}[1]{\textsc{E-Sel}\textsubscript{#1}\xspace}%
\providecommand{\hcpRedGammaNew}{\textsc{E-LiftRes}\xspace}%
\providecommand{\hcpRedGammaMix}{\textsc{E-LiftPar}\xspace}%
\providecommand{\hcpRedGammaEquiv}{\textsc{E-LiftSC}\xspace}%
\usepackage{hyperref}
\usepackage{cleveref}
\crefname{thm}{theorem}{theorems}
\Crefname{thm}{Theorem}{Theorems}
\crefname{lem}{lemma}{lemmas}
\Crefname{lem}{Lemma}{Lemmas}
\crefname{defi}{definition}{definitions}
\Crefname{defi}{Definition}{Definitions}
\newtheorem*{remark}{Remark}
\usepackage{doi}
\usepackage[strings]{underscore}
\usepackage{upgreek}
\begin{document}
\title{Towards Races in Linear Logic}
\author[W.~Kokke]{Wen Kokke}
\address{University of Edinburgh, Edinburgh, UK}
\email{[email protected]}
\author[G.~Morris]{J.\ Garrett Morris}
\address{University of Kansas, Lawrence, KS, USA}
\email{[email protected]}
\author[P.~Wadler]{Philip Wadler}
\address{University of Edinburgh, Edinburgh, UK}
\email{[email protected]}
\begin{abstract}
Process calculi based in logic, such as \piDILL and CP, provide a foundation for deadlock-free concurrent programming, but exclude non-determinism and races. \hcp is a reformulation of CP which addresses a fundamental shortcoming: the fundamental operator for parallel composition from the \textpi-calculus does not correspond to any rule of linear logic, and therefore not to any term construct in CP.
We introduce \nodcap, which extends \hcp with a novel account of non-determinism. Our approach draws on bounded linear logic to provide a strongly-typed account of standard process calculus expressions of non-determinism. We show that our extension is expressive enough to capture many uses of non-determinism in untyped calculi, such as non-deterministic choice, while preserving \hcp's meta-theoretic properties, including deadlock freedom.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:introduction}
Consider the following scenario:
\begin{quote}
\Ami and \Boe are working from home one morning when they each get a craving for a slice of cake. Being denizens of the web, they quickly find the nearest store which does home deliveries. Unfortunately for them, they both order their cake at the \emph{same} store, which has only one slice left. After that, all it can deliver is disappointment.
\end{quote}
This is an example of a \emph{race condition}. We can model this scenario in the \textpi-calculus, where \ami, \boe and \store are processes modelling \Ami, \Boe and the store, and \sliceofcake and \nope are channels giving access to a slice of cake and disappointment, respectively. This process has two possible outcomes: either \Ami gets the cake, and \Boe gets disappointment, or vice versa.
\begin{center}
\(
\begin{array}{c}
\tm{(\piPar%
{\piPar{\piRecv{x}{y}{\ami}}{\piRecv{x}{z}{\boe}}}
{\piUSend{x}{\sliceofcake}{\piUSend{x}{\nope}{\store}}}
)}
\\[1ex]
\rotatebox[origin=c]{270}{$\Longrightarrow^{\star}$}
\\[1ex]
\tm{(\piPar%
{\piPar
{\piSub{\sliceofcake}{y}{\ami}}
{\piSub{\nope}{z}{\boe}}
}
{\store}
)}
\quad
\text{or}
\quad
\tm{(\piPar{\store}{\piPar{\piSub{\nope}{y}{\ami}}{%
\piSub{\sliceofcake}{z}{\boe}}})}
\end{array}
\)
\end{center}
While \Ami or \Boe may not like all of the outcomes, it is the store which is responsible for implementing the online delivery service, and the store is happy with either outcome. Thus, the above is an interaction we would like to be able to model.
Now consider another scenario, which takes place \emph{after} \Ami has already bought the cake:
\begin{quote}
\Boe is \emph{really} disappointed when she finds out the cake has sold out. \Ami, always looking to make some money, offers to sell the slice to her for a profit. \Boe agrees to engage in a little bit of back-alley cake resale, but sadly there is no trust between the two. \Ami demands payment first. \Boe would rather get her slice of cake before she gives \Ami the money.
\end{quote}
This is an example of a \emph{deadlock}. We can also model this scenario in the \textpi-calculus, where \bill\ is a channel giving access to some adequate amount of money.
\begin{center}
\(
\begin{array}{c}
\tm{(\piPar{%
\piRecv{x}{z}{\piUSend{y}{\sliceofcake}{\ami}}
}{%
\piRecv{y}{w}{\piUSend{x}{\bill}{\boe}}
})}
\quad
\centernot\Longrightarrow^{\star}
\end{array}
\)
\end{center}
The above process does not reduce. As both \Ami and \Boe would prefer the exchange to be made, this interaction is desired by \emph{neither}. Thus, the above is an interaction we would like to exclude.
Session types~\cite{honda1993} statically guarantee that concurrent programs, such as those above, respect communication protocols. Session-typed calculi with logical foundations, such as \piDILL~\cite{caires2010} and CP~\cite{wadler2012}, obtain deadlock freedom as a result of a close correspondence with logic. These systems, however, also rule out non-determinism and race conditions. In this paper, we demonstrate that logic-inspired type systems need not rule out races.
We present \nodcap, an extension of \hcp with a novel account of non-determinism and races. Inspired by bounded linear logic~\cite{girard1992}, we introduce a form of shared channels in which the type of a shared channel tracks how many times it is reused. As in the untyped \textpi-calculus, sharing introduces the potential for non-determinism. We show that our approach is sufficient to capture practical examples of races, such as an online store, as well as other formal characterizations of non-determinism, such as non-deterministic choice. However, \nodcap does not lose the meta-theoretical benefits of \hcp: we show that it enjoys termination and deadlock-freedom.
An important limitation of our work is that types in \nodcap explicitly count the potential races on a channel. It works fine when there are two or three races, but not for an arbitrary number $n$ of races. The latter case is obviously important, and we see the main value of our work as a stepping stone to this more general case.
\nodcap is based on \hcp~\cite{kokke2018tlla,kokke2019pacmpl}. \hcp is a reformulation of CP which addresses a fundamental shortcoming: the fundamental operator for parallel composition from the \textpi-calculus does not correspond to any rule of linear logic, and therefore not to any term construct in CP.
This paper proceeds as follows. In \cref{sec:local-choice}, we discuss recent approaches to non-determinism in logic-inspired session-typed process calculi. In \cref{sec:cp-revisited}, we introduce a variant of \cp and prove progress and preservation. In \cref{sec:cpnd}, we introduce \nodcap. In \cref{sec:leftovers}, we discuss cuts with leftovers. In \cref{sec:manifest}, we discuss the relation to manifest sharing~\cite{balzer2017}. Finally, in \cref{sec:conclusion}, we conclude with a discussion of the work done in this paper and potential avenues for future work.
\begin{remark}[Variants of \hcp]
There are two variants of \hcp: a~version with delayed actions, introduced by Kokke, Montesi, and Peressotti~\cite{kokke2019pacmpl}, and a version without delayed actions, introduced by Kokke, Montesi, and Peressotti~\cite{kokke2018tlla}.
Delayed actions are not an essential part of HCP, but significantly complicate the theory. Therefore, we base our work on the variant without delayed actions. For typographical simplicity, we will refer to the system without delayed actions as \hcp, instead of $\hcp^{-}$. Should we need to refer to the system with delayed actions, we will use \dhcp.
\end{remark}
\section{Non-determinism, Logic, and Session Types}\label{sec:local-choice}
Recent work extended \piDILL and \cp with operators for non-deterministic behaviour~\cite{atkey2016,caires2014,caires2017}. These extensions all implement an operator known as non-deterministic local choice. (This operator is written as \tm{P+Q}, but should not be confused with input-guarded choice from the \textpi-calculus~\cite{milner1992b}.) Non-deterministic local choice can be summarised by the following typing and reduction rules:
\begin{center}
\begin{prooftree*}
\AXC{$\seq[{ P }]{ \Gamma }$}
\AXC{$\seq[{ Q }]{ \Gamma }$}
\BIC{$\seq[{ P + Q }]{ \Gamma }$}
\end{prooftree*}
\hspace*{2cm}
\(
\begin{array}{c}
\reducesto{P + Q}{P}\\
\reducesto{P + Q}{Q}
\end{array}
\)
\end{center}
Local choice introduces non-determinism explicitly, by listing all possible choices. This is unlike the \textpi-calculus, where non-determinism arises due to multiple processes communicating on shared channels. We can easily implement local choice in the \textpi-calculus, using a nullary communication:
\begin{center}
\(
\begin{array}{c}
\tm{( \piPar{\piPar{\piUSend{x}{}{\piHalt}}{\piRecv{x}{}{P}}}{\piRecv{x}{}{Q}} )}
\\[1ex]
\rotatebox[origin=c]{270}{$\Longrightarrow^{\star}$}
\\[1ex]
\tm{( \piPar{P}{\piRecv{x}{}{Q}} )}
\quad
\text{or}
\quad
\tm{( \piPar{\piRecv{x}{}{P}}{Q} )}
\end{array}
\)
\end{center}
In this implementation, the process \tm{\piUSend{x}{}{0}} will ``unlock'' either \tm{P} or \tm{Q}, leaving the other process deadlocked. Or we could use input-guarded choice:
\begin{center}
\(
\tm{( \piPar{\piUSend{x}{}{\piHalt}}{( \piRecv{x}{}{P} + \piRecv{x}{}{Q} )} )}
\)
\end{center}
However, there are many non-deterministic processes in the \textpi-calculus that are awkward to encode using non-deterministic local choice. Let us recall our example:
\begin{center}
\(
\begin{array}{c}
\tm{(\piPar{%
\piUSend{x}{\sliceofcake}{\piUSend{x}{\nope}{\store}}
}{%
\piPar{\piRecv{x}{y}{\ami}}{\piRecv{x}{z}{\boe}}
})}
\\[1ex]
\rotatebox[origin=c]{270}{$\Longrightarrow^{\star}$}
\\[1ex]
\tm{(\piPar{\store}{\piPar{\piSub{\sliceofcake}{y}{\ami}}{
\piSub{\nope}{z}{\boe}}})}
\quad
\text{or}
\quad
\tm{(\piPar{\store}{\piPar{\piSub{\nope}{y}{\ami}}{\piSub{
\sliceofcake}{z}{\boe}}})}
\end{array}
\)
\end{center}
This non-deterministic interaction involves communication. If we wanted to write down a process which exhibited the same behaviour using non-deterministic local choice, we would have to write the following process:
\begin{center}
\(
\begin{array}{c}
\tm{
(\piPar{%
\piUSend{x}{\sliceofcake}{\piUSend{y}{\nope}{\store}}
}{%
\piPar{\piRecv{x}{z}{\ami}}{\piRecv{y}{w}{\boe}}
})
+
(\piPar{%
\piUSend{y}{\sliceofcake}{\piUSend{x}{\nope}{\store}}
}{%
\piPar{\piRecv{x}{z}{\ami}}{\piRecv{y}{w}{\boe}}
})
}
\\[1ex]
\rotatebox[origin=c]{270}{$\Longrightarrow^{\star}$}
\\[1ex]
\tm{(\piPar{\store}{\piPar{\piSub{\sliceofcake}{y}{\ami}}{\piSub{\nope}{z}{\boe}}})}
\quad
\text{or}
\quad
\tm{(\piPar{\store}{\piPar{\piSub{\nope}{y}{\ami}}{\piSub{\sliceofcake}{z}{\boe}}})}
\end{array}
\)
\end{center}
In essence, instead of modelling a non-deterministic interaction, we are enumerating the resulting deterministic interactions. This means non-deterministic local choice cannot model non-determinism in the way the \textpi-calculus does.
%
Enumerating all possible outcomes becomes worse the more processes are involved in an interaction. Imagine the following scenario:
\begin{quote}
Three customers, \Ami, \Boe, and \Cat, have a craving for cake. Should cake be sold out, however, well... a doughnut will do. They prepare to order their goods via an online store. Unfortunately, they all decide to use the same \emph{shockingly} under-stocked store, which has only one slice of cake, and a single doughnut. After that, all it can deliver is disappointment.
\end{quote}
We can model this scenario in the \textpi-calculus, where \ami, \boe, \cat, and \store are four processes modelling \Ami, \Boe, \Cat, and the store, and \sliceofcake, \doughnut, and \nope are three channels giving access to a slice of cake, a so-so doughnut, and disappointment, respectively.
\begin{center}
\makebox[\textwidth][c]{\ensuremath{
\begin{array}{c}
\tm{(\piPar{%
\piUSend{x}{\sliceofcake}{\piUSend{x}{\doughnut}{\piUSend{x}{\nope}{\store}}}
}{%
\piPar{\piRecv{x}{y}{\ami}}{\piPar{\piRecv{x}{z}{\boe}}{\piRecv{x}{w}{\cat}}
})}}
\\[1ex]
\rotatebox[origin=c]{270}{$\Longrightarrow^{\star}$}
\\[1ex]
\tm{(\piPar{\store}{\piPar{\piSub{\sliceofcake}{y}{\ami}}{\piPar{\piSub{\doughnut}{z}{\boe}}{\piSub{\nope}{w}{\cat}}}})}
\;\text{or}\;
\tm{(\piPar{\store}{\piPar{\piSub{\sliceofcake}{y}{\ami}}{\piPar{\piSub{\nope}{z}{\boe}}{\piSub{\doughnut}{w}{\cat}}}})}
\\[1ex]
\tm{(\piPar{\store}{\piPar{\piSub{\doughnut}{y}{\ami}}{\piPar{\piSub{\nope}{z}{\boe}}{\piSub{\sliceofcake}{w}{\cat}}}})}
\;\text{or}\;
\tm{(\piPar{\store}{\piPar{\piSub{\doughnut}{y}{\ami}}{\piPar{\piSub{\sliceofcake}{z}{\boe}}{\piSub{\nope}{w}{\cat}}}})}
\\[1ex]
\tm{(\piPar{\store}{\piPar{\piSub{\nope}{y}{\ami}}{\piPar{\piSub{\sliceofcake}{z}{\boe}}{\piSub{\doughnut}{w}{\cat}}}})}
\;\text{or}\;
\tm{(\piPar{\store}{\piPar{\piSub{\nope}{y}{\ami}}{\piPar{\piSub{\doughnut}{z}{\boe}}{\piSub{\sliceofcake}{w}{\cat}}}})}
\\[1ex]
\end{array}
}}
\end{center}
With the addition of one process, modelling \Cat, we have increased the number of possible outcomes enormously! In general, the number of outcomes for these types of scenarios is $n!$, where $n$ is the number of processes. This means that if we wish to translate any non-deterministic process to one using non-deterministic local choice, we can expect a factorial growth in the size of the term.
\section{Hypersequent Classical Processes}\label{sec:cp-revisited}
In this section, we introduce \hcp~\cite{kokke2018tlla}, the basis for our calculus \nodcap.
%
The term language for \hcp is a variant of the \textpi-calculus~\cite{milner1992b}. In HCP, processes ($\tm{P}$, $\tm{Q}$, $\tm{R}$) communicate using names ($\tm{x}$, $\tm{y}$, $\tm{z}$, \dots). Each name is one of the two endpoints of a bidirectional communication channel~\cite{vasconcelos2012}. A channel is formed by connecting two endpoints using name restriction. This is in contrast to~\cref{sec:introduction,sec:local-choice}, where we used names to represent channels.
\begin{defi}[Terms]\label{def:hcp-terms}
\[
\begin{array}{lrll}
\tm{P}, \tm{Q}, \tm{R}
& ::=& \tm{\cpLink{x}{y}} &\text{link}
\\&\mid& \tm{\piHalt} &\text{terminated process}
\\&\mid& \tm{\piNew{x}{x'}{P}} &\text{name restriction, ``cut''}
\\&\mid& \tm{( \piPar{P}{Q} )} &\text{parallel composition, ``mix''}
\\&\mid& \tm{\piSend{x}{y}{P}} &\text{output}
\\&\mid& \tm{\piRecv{x}{y}{P}} &\text{input}
\\&\mid& \tm{\piSend{x}{}{P}} &\text{halt}
\\&\mid& \tm{\cpWait{x}{}{P}} &\text{wait}
\\&\mid& \tm{\cpInl{x}{P}} &\text{select left choice}
\\&\mid& \tm{\cpInr{x}{P}} &\text{select right choice}
\\&\mid& \tm{\cpCase{x}{P}{Q}} &\text{offer binary choice}
\\&\mid& \tm{\cpAbsurd{x}} &\text{offer nullary choice}
\end{array}
\]
\end{defi}\noindent
The variables $\tm{x}$, $\tm{y}$, $\tm{z}$, $\tm{u}$, $\tm{v}$, and $\tm{w}$ range over channel endpoints. Occasionally, we use $\tm{a}$, $\tm{b}$, and $\tm{c}$ to range over \emph{free} endpoints, \ie those which are not connected to another endpoint. The construct $\tm{\cpLink{x}{y}}$ links two endpoints~\cite{sangiorgi1996,boreale1998}, forwarding messages received on \tm{x} to \tm{y} and vice versa. The construct $\tm{\piNew{x}{x'}{P}}$ creates a new channel by connecting endpoints $\tm{x}$ and $\tm{x'}$. By convention, we name dual endpoints using primes, \eg $\tm{x}$ and $\tm{x'}$. However, the primes are merely a naming convention, and do not denote co-names, \eg $\tm{x}$ and $\tm{x'}$ are not inherently dual, only under a $\nu$-binder $\tm{\piNew{x}{x'}{}}$. The construct $\tm{\piPar{P}{Q}}$ composes two processes in parallel. In \tm{\piRecv{x}{y}{P}} and \tm{\piSend{x}{y}{P}}, round brackets denote input, square brackets denote output. We use bound output~\cite{sangiorgi1996}, meaning that both input and output bind a new name.
Terms in \hcp are identified up to structural congruence.
\begin{defi}[Structural congruence]\label{def:hcp-equiv}
The structural congruence $\equiv$ is the congruence closure over terms which satisfies the following additional axioms:
\[
\setlength{\arraycolsep}{3pt}
\begin{array}{llcllllcll}
\hcpEquivLinkComm
& \tm{\cpLink{x}{y}}
& \equiv
& \tm{\cpLink{y}{x}}
\\ \hcpEquivMixComm
& \tm{\piPar{P}{Q}}
& \equiv
& \tm{\piPar{Q}{P}}
\\ \hcpEquivMixAss
& \tm{\piPar{P}{( \piPar{Q}{R} )}}
& \equiv
& \tm{\piPar{( \piPar{P}{Q} )}{R}}
\\ \hcpEquivMixHalt
& \tm{\piPar{P}{\piHalt}}
& \equiv
& \tm{P}
\\ \hcpEquivNewHalt
& \tm{\piNew{x}{x'}{\piHalt}}
& \equiv
& \tm{\piHalt}
\\ \hcpEquivNewComm
& \tm{\piNew{x}{x'}{\piNew{y}{y'}{P}}}
& \equiv
& \tm{\piNew{y}{y'}{\piNew{x}{x'}{P}}}
\\ \hcpEquivScopeExt
& \tm{\piNew{x}{x'}{( \piPar{P}{Q} )}}
& \equiv
& \tm{\piPar{P}{\piNew{x}{x'}{Q}}}
& \text{if }\tm{x},\tm{x'}\not\in\tm{P}
\end{array}
\]
\end{defi}
Channels in \hcp are typed using a session type system which is a conservative extension of linear logic.
\begin{defi}[Types]\label{def:cp-types}
\[
\begin{array}{lrlllrll}
\ty{A}, \ty{B}, \ty{C}
& ::=& \; \ty{A \tens B} &\text{independent channels}
& &\mid& \; \ty{\one} &\text{unit for} \; {\tens}
\\&\mid& \; \ty{A \parr B} &\text{interdependent channels}
& &\mid& \; \ty{\bot} &\text{unit for} \; {\parr}
\\&\mid& \; \ty{A \plus B} &\text{internal choice}
& &\mid& \; \ty{\nil} &\text{unit for} \; {\plus}
\\&\mid& \; \ty{A \with B} &\text{external choice}
& &\mid& \; \ty{\top} &\text{unit for} \; {\with}
\end{array}
\]
\end{defi}
Duality plays a crucial role in both linear logic and session types. In \hcp, the two endpoints of a channel are assigned dual types. This ensures that, for instance, whenever a process \emph{sends} across a channel, the process on the other end of that channel is waiting to \emph{receive}. Each type \ty{A} has a dual, written \ty{A^\bot}. Duality (\ty{\cdot^\bot}) is an involutive function on types.
\begin{defi}[Duality]\label{def:cp-negation}
\[
\setlength{\arraycolsep}{3pt}
\begin{array}{lclclcllclclcl}
\ty{(A \tens B)^\bot} &=& \ty{A^\bot \parr B^\bot}
&\quad \ty{\one^\bot} &=& \ty{\bot}
&\quad \ty{(A \parr B)^\bot} &=& \ty{A^\bot \tens B^\bot}
&\quad \ty{\bot^\bot} &=& \ty{\one}
\\ \ty{(A \plus B)^\bot} &=& \ty{A^\bot \with B^\bot}
&\quad \ty{\nil^\bot} &=& \ty{\top}
&\quad \ty{(A \with B)^\bot} &=& \ty{A^\bot \plus B^\bot}
&\quad \ty{\top^\bot} &=& \ty{\nil}
\end{array}
\]
\end{defi}
Environments associate channels with types. Names in environments must be unique, and environments \ty{\Gamma} and \ty{\Delta} can only be combined ($\ty{\Gamma}, \ty{\Delta}$) if $\text{cn}(\ty{\Gamma}) \cap \text{cn}(\ty{\Delta}) = \varnothing$, where $\text{cn}{(\ty{\Gamma})}$ denotes the set of channel names in $\ty{\Gamma}$.
\begin{defi}[Environments]\label{def:cp-environments}
$\ty{\Gamma}, \ty{\Delta}, \ty{\Theta} ::= \tmty{x_1}{A_1}\dots\tmty{x_n}{A_n}$
\end{defi}
\hcp registers parallelism using hyper-environments. A hyper-environment is a multiset of environments. While names within environments must be unique, names may be shared between multiple environments in a hyper-environment. We write $\ty{\mathcal{G} \hsep \mathcal{H}}$ to combine two hyper-environments.
\begin{defi}[Hyper-environments]\label{def:hcp-hyper-environment}
$\ty{\mathcal{G}}, \ty{\mathcal{H}} ::= \; \ty{\emptyhypercontext} \; \mid \; \ty{\mathcal{G} \hsep \Gamma}$
\end{defi}
Typing judgements associate processes with collections of typed channels.
\begin{defi}[Typing judgements]\label{def:cp-typing-judgement}
A typing judgement $\seq[P]{\Gamma_1 \hsep \dots \hsep \Gamma_n}$ denotes that the process $\tm{P}$ consists of $n$ independent, but potentially entangled processes, each of which communicates according to its own protocol $\Gamma_i$. Typing judgements can be constructed using the inference rules below.
\\[0.5\baselineskip]
{Structural rules}
\begin{center}
\hcpInfAx
\hcpInfCut
\end{center}
\begin{center}
\hcpInfMix
\hcpInfHalt
\end{center}
{Logical rules}
\begin{center}
\hcpInfBoundTens
\hcpInfParr
\end{center}
\begin{center}
\hcpInfOne
\hcpInfBot
\end{center}
\begin{center}
\hcpInfPlus1
\hcpInfPlus2
\end{center}
\begin{center}
\hcpInfWith
\end{center}
\begin{center}
\hcpInfNil
\hcpInfTop
\end{center}
\end{defi}
\paragraph{Alternative syntax.}
In $(\one)$, the only well-typed continuation $\tm{P}$ is the terminated process $\tm{\piHalt}$. We could use an alternative formulation of the rule, which combines $(\one)$ and $\textsc{H-Mix}_0$. However, as $\textsc{H-Mix}_0$ is used on its own, and not just in combination with $(\one)$, we chose the present formulation to avoid having multiple different representations of the terminated process in the language.
Reductions relate processes with their reduced forms.
\begin{defi}[Reduction]\label{def:hcp-reduction}
Reductions are described by the smallest relation $\Longrightarrow$ on process
terms closed under the rules below:
\begin{gather*}
\begin{array}{llcll}
\hcpRedAxCut1
& \tm{\piNew{x}{x'}{(\piPar{\cpLink{w}{x}}{P})}}
& \Longrightarrow
& \tm{\cpSub{w}{x'}{P}}
\\
\hcpRedBetaTensParr
& \tm{\piNew{x}{x'}{(\piPar{\piSend{x}{y}{P}}{\piRecv{x'}{y'}{R}})}}
& \Longrightarrow
& \tm{\piNew{x}{x'}{\piNew{y}{y'}{(\piPar{P}{R})}}}
\\
\hcpRedBetaOneBot
& \tm{\piNew{x}{x'}{(\piPar{\piSend{x}{}{P}}{\piRecv{x'}{}{Q}})}}
& \Longrightarrow
& \tm{\piPar{P}{Q}}
\\
\hcpRedBetaPlusWith1
& \tm{\piNew{x}{x'}{(\piPar{\cpInl{x}{P}}{\cpCase{x'}{Q}{R}})}}
& \Longrightarrow
& \tm{\piNew{x}{x'}{(\piPar{P}{Q})}}
\\
\hcpRedBetaPlusWith2
& \tm{\piNew{x}{x'}{(\piPar{\cpInr{x}{P}}{\cpCase{x'}{Q}{R}})}}
& \Longrightarrow
& \tm{\piNew{x}{x'}{(\piPar{P}{R})}}
\end{array}
\end{gather*}
\vspace*{0.5\baselineskip}
\begin{center}
\begin{prooftree*}
\AXC{$\reducesto{P}{P^\prime}$}
\SYM{\hcpRedGammaNew}
\UIC{$\reducesto{\piNew{x}{x'}{P}}{\piNew{x}{x'}{P^\prime}}$}
\end{prooftree*}
\begin{prooftree*}
\AXC{$\reducesto{P}{P^\prime}$}
\SYM{\hcpRedGammaMix}
\UIC{$\reducesto{\piPar{P}{Q}}{\piPar{P^\prime}{Q}}$}
\end{prooftree*}
\end{center}
\begin{prooftree}
\AXC{$\tm{P}\equiv\tm{Q}$}
\AXC{$\reducesto{Q}{Q^\prime}$}
\AXC{$\tm{Q^\prime}\equiv\tm{P^\prime}$}
\SYM{\hcpRedGammaEquiv}
\TIC{$\reducesto{P}{P^\prime}$}
\end{prooftree}
\end{defi}
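As a small worked example (ours, for illustration), a channel on which one endpoint closes and the other waits reduces away completely, using \hcpRedBetaOneBot followed by \hcpEquivMixHalt:
\[
\reducesto{\piNew{x}{x'}{(\piPar{\piSend{x}{}{\piHalt}}{\piRecv{x'}{}{\piHalt}})}}{\piPar{\piHalt}{\piHalt}}
\equiv \tm{\piHalt}.
\]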
We define unbound output in terms of bound output and link~\cite{lindley2015semantics}:
\[
\begin{array}{c}
\tm{\piUSend{x}{y}{P}} \triangleq \tm{\cpSend{x}{z}{\cpLink{y}{z}}{P}}
\\
\\
\hcpInfUnboundTens
\qquad
\begin{array}{l}
\tm{\piNew{x}{x'}{(\piPar{\piUSend{x}{y}{P}}{\piRecv{x'}{y'}{Q}})}}
\\
\qquad\qquad\Longrightarrow
\tm{\piNew{x}{x'}{(\piPar{P}{\cpSub{y}{y'}{Q}})}}
\end{array}
\end{array}
\]
\subsection{Example}
\label{sec:hcp-example}
\hcp uses hyper-sequents to structure communication, and it is this structure which rules out deadlocked interactions. Let us go back to our example of a deadlocked interaction from \cref{sec:introduction}. If we want to type this interaction in \hcp, we run into a problem: to connect $\tm{x}$ and $\tm{y}$, and $\tm{z}$ and $\tm{w}$, such that we get a deadlock, we need to construct the following term:
\begin{center}
\(
\tm{\piNew{x}{x'}{\piNew{y}{y'}{(\piPar
{\piRecv{x}{u}{\piUSend{y}{\sliceofcake}{\ami}}}
{\piRecv{y'}{v}{\piUSend{x'}{\bill}{\boe}}})}}}.
\)
\end{center}
However, there is no typing derivation for this term. We can construct a typing derivation down to the sequent below, but we cannot introduce \emph{both} name restrictions: the \textsc{Cut} rule eliminates a hypersequent separator, which ensures that it only ever connects two independent processes, but the sequent below only has \emph{one}.
\[
\tm{
\piRecv{x}{z}{\piUSend{y}{\sliceofcake}{\ami}}
\ppar
\piRecv{y'}{w}{\piUSend{x'}{\bill}{\boe}}
}
\vdash
\setlength{\arraycolsep}{0pt}
\begin{array}{lclcllclcllcl}
\tm{x} &\; : &\; \ty{\money^\bot} & \ty{\parr} & \ty{\bot},
&\;\tm{y} &\; : &\; \ty{\cake} & \ty{\tens} & \ty{\one},
&\;\tm{\sliceofcake} &\; : &\; \ty{\cake^\bot} \hsep
\\ \tm{x'} &\; : &\; \ty{\money} & \ty{\tens} & \ty{\one},
&\;\tm{y'} &\; : &\; \ty{\cake^\bot} & \ty{\parr} & \ty{\bot},
&\;\tm{\bill} &\; : &\; \ty{\money^\bot}
\end{array}
\]
\subsection{Metatheory}
\label{sec:hcp-metatheory}
\hcp enjoys subject reduction, termination, and progress~\cite{kokke2018tlla}.
\begin{lem}[Preservation for $\equiv$]\label{lem:hcp-preservation-equiv}
If $\tm{P}\equiv\tm{Q}$, then $\seq[P]{\mathcal{G}}$ iff $\seq[Q]{\mathcal{G}}$.
\end{lem}
\begin{proof}
By induction on the derivation of $\tm{P}\equiv\tm{Q}$.
\end{proof}
\begin{thm}[Preservation]\label{thm:hcp-preservation}
If $\seq[P]{\mathcal{G}}$ and $\reducesto{P}{Q}$, then $\seq[Q]{\mathcal{G}}$.
\end{thm}
\begin{proof}
By induction on the derivation of $\reducesto{P}{Q}$.
\end{proof}
\begin{defi}[Actions]
A process $\tm{P}$ acts on $\tm{x}$ whenever $\tm{x}$ is free in the outermost
term constructor of $\tm{P}$, \eg, $\tm{\piSend{x}{y}{P}}$ acts on $\tm{x}$
but not on $\tm{y}$, and $\tm{\cpLink{x}{y}}$ acts on both $\tm{x}$ and $\tm{y}$.
A process $\tm{P}$ is an action if it acts on some channel $\tm{x}$.
\end{defi}
\begin{defi}[Canonical forms]\label{def:hcp-canonical-forms}
A process $\tm{P}$ is in canonical form if
\[
\tm{P} \equiv \tm{\piNew{x_1}{x'_1}{\dots\piNew{x_n}{x'_n}{(P_1 \mid \dots \mid P_{n+m+1})}}},
\]
such that: no process $\tm{P_i}$ is a cut or a mix; no process $\tm{P_i}$ is a link acting on a bound channel $\tm{x_i}$ or $\tm{x'_i}$; and no two processes $\tm{P_i}$ and $\tm{P_j}$ are acting on dual endpoints $\tm{x_i}$ and $\tm{x'_i}$ of the same channel.
\end{defi}
\begin{lem}
If a well-typed process $\tm{P}$ is in canonical form, then it is blocked on
an external communication, \ie,
$\tm{P}\equiv\tm{\piNew{x_1}{x'_1}{\dots\piNew{x_n}{x'_n}{(P_1\mid\dots\mid P_{n+m+1})}}}$
such that at least one process $\tm{P_i}$ acts on a free name.
\end{lem}
\begin{proof}
We have
\(
\tm{P} \equiv \tm{\piNew{x_1}{x'_1}{\dots\piNew{x_n}{x'_n}{(P_1 \ppar \dots \ppar P_{n+m+1})}}},
\)
such that no $\tm{P_i}$ is a cut or a link acting on a bound channel, and no two processes $\tm{P_i}$ and $\tm{P_j}$ are acting on the endpoints of the same channel. The prefix of cuts and mixes introduces $n$ channels. Each application of cut requires an application of mix, so the prefix introduces $n+m+1$ processes. Therefore, at least $m+1$ of the processes $\tm{P_i}$ must be acting on a free channel, i.e., blocked on an external communication.
\end{proof}
\begin{thm}[Progress]\label{thm:hcp-progress}
If $\seq[P]{\mathcal{G}}$, then either $\tm{P}$ is in canonical form, or there exists a process $\tm{Q}$ such that $\tm{P}\Longrightarrow\tm{Q}$.
\end{thm}
\begin{proof}
We consider the maximum prefix of cuts and mixes of $\tm{P}$ such that
\[
\tm{P} \equiv \tm{\piNew{x_1}{x'_1}{\dots\piNew{x_n}{x'_n}{(P_1 \ppar \dots \ppar P_{n+m+1})}}},
\]
and no $\tm{P_i}$ is a cut. If any process $\tm{P_i}$ is a link, we reduce by $(\cpLink{}{})$. If any two processes $\tm{P_i}$ and $\tm{P_j}$ are acting on dual endpoints $\tm{x_i}$ and $\tm{x'_i}$ of the same channel, we rewrite by $\equiv$ and reduce by the appropriate $\beta$-rule. Otherwise, $\tm{P}$ is in canonical form.
\end{proof}
\begin{thm}[Termination]\label{thm:hcp-termination}
If $\seq[P]{\mathcal{G}}$, then there are no infinite $\Longrightarrow$-reduction sequences.
\end{thm}
\begin{proof}
Every reduction reduces a single cut to zero, one or two cuts. However, each of these cuts is smaller, measured in the size of the cut formula. Furthermore, each instance of the structural congruence preserves the size of the cut. Therefore, there cannot be an infinite $\Longrightarrow$-reduction sequence.
\end{proof}
\subsection{Erratum for HCP}\label{sec:erratum}
The typing rules for HCP presented here are more restrictive than those in earlier publications~\cite{kokke2018tlla}. Progress does not hold for the earlier version. For instance, the following process is stuck, yet typeable:
\begin{prooftree}
\AXC{}
\UIC{$\seq[\piHalt]{\emptyhypercontext}$}
\UIC{$\seq[\piSend{y}{}{\piHalt}]{\tmty{y}{\one}}$}
\UIC{$\seq[\piSend{x}{}{\piSend{y}{}{\piHalt}}]{\tmty{x}{\one}\hsep\tmty{y}{\one}}$}
\AXC{}
\UIC{$\seq[\cpLink{x}{z}]{\tmty{x}{\bot},\tmty{z}{\one}}$}
\UIC{$\seq[\piRecv{y}{}{\cpLink{x}{z}}]{\tmty{x}{\bot},\tmty{y}{\bot},\tmty{z}{\one}}$}
\BIC{$\seq%
[\piSend{x}{}{\piSend{y}{}{\piHalt}}\ppar\piRecv{y}{}{\cpLink{x}{z}}]
{\tmty{x}{\one}\hsep\tmty{y}{\one}\hsep\tmty{x}{\bot},\tmty{y}{\bot},\tmty{z}{\one}}$}
\UIC{$\seq%
[(\nu{y})({\piSend{x}{}{\piSend{y}{}{\piHalt}}\ppar\piRecv{y}{}{\cpLink{x}{z}}})]
{\tmty{x}{\one}\hsep\tmty{x}{\bot},\tmty{z}{\one}}$}
\UIC{$\seq%
[(\nu{x})(\nu{y})({\piSend{x}{}{\piSend{y}{}{\piHalt}}\ppar\piRecv{y}{}{\cpLink{x}{z}}})]
{\tmty{z}{\one}}$}
\end{prooftree}
The earlier typing rules failed to guarantee a crucial property: each typing environment should correspond to one top-level action. The rules presented in this paper fix the problem by disallowing hyper-environments in logical rules.
The move from channel names to endpoint names is not essential to the fix, but significantly streamlines the presentation. Otherwise, the type system must guarantee that each channel name occurs at most twice in the hypersequent, and if twice, then with dual types. Using endpoint names, it is sufficient to require that all names be distinct.
\section{Shared Channels and Non-determinism}\label{sec:cpnd}
In this section, we will discuss our main contribution: an extension of \hcp which allows for races while still excluding deadlocks. We have seen in \cref{sec:hcp-example} how \hcp excludes deadlocks, but how exactly does \hcp exclude races? Let us return to our example in \textpi-calculus from \cref{sec:introduction}, to the interaction between \Ami, \Boe and the store.
\begin{center}
\(
\begin{array}{c}
\tm{(\piPar{%
\piUSend{x}{\sliceofcake}{\piUSend{x}{\nope}{\store}}
}{%
\piPar{\piRecv{x}{y}{\ami}}{\piRecv{x}{z}{\boe}}
})}
\\[1ex]
\rotatebox[origin=c]{270}{$\Longrightarrow^{\star}$}
\\[1ex]
\tm{(\piPar{\store}{\piPar{\piSub{\sliceofcake}{y}{\ami}}{\piSub{\nope}{z}{\boe}}})}
\quad
\text{or}
\quad
\tm{(\piPar{\store}{\piPar{\piSub{\nope}{y}{\ami}}{\piSub{\sliceofcake}{z}{\boe}}})}
\end{array}
\)
\end{center}
Races occur when more than two processes attempt to communicate simultaneously over the \emph{same} channel. However, the \textsc{Cut} rule of \hcp requires that \emph{exactly two} processes communicate over each channel:
\begin{center}
\hcpInfCut
\end{center}
We could attempt to write down a protocol for our example, stating that the store
has a pair of channels $\tm{x}, \tm{y} : \ty{\cake}$ with which it communicates
with \Ami and \Boe, taking \cake to be the type of interactions in which cake
\emph{may} be obtained, i.e.\ of both \sliceofcake and \nope, and stating that the
store communicates with \Ami \emph{and} \Boe over a channel of type \ty{\cake
\parr \cake}.
However, this \emph{only} models interactions such as the following:
\begin{prooftree}
\AXC{$\seq[{ \ami }]{ \Gamma, \tmty{y}{\cake^\bot} }$}
\AXC{$\seq[{ \boe }]{ \Delta, \tmty{x}{\cake^\bot} }$}
\NOM{H-Mix}
\BIC{$\seq[{ (\piPar{\ami}{\boe}) }]{
\Gamma, \tmty{y}{\cake^\bot} \hsep \Delta, \tmty{x}{\cake^\bot} }$}
\SYM{(\tens)}
\UIC{$\seq[{ \cpSend{x}{y}{\ami}{\boe} }]{
\Gamma, \Delta, \tmty{x}{\cake^\bot \tens \cake^\bot} }$}
\AXC{$\seq[{ \store }]{ \Theta, \tmty{y'}{\cake}, \tmty{x'}{\cake} }$}
\SYM{(\parr)}
\UIC{$\seq[{ \cpRecv{x'}{y'}{\store} }]{
\Theta, \tmty{x'}{\cake \parr \cake} }$}
\NOM{H-Mix}
\BIC{$\seq[{ (\piPar{\cpSend{x}{y}{\ami}{\boe}}{\cpRecv{x'}{y'}{\store}}) }]{
\Gamma, \Delta, \tmty{x}{\cake^\bot \tens \cake^\bot}
\hsep
\Theta, \tmty{x'}{\cake \parr \cake} }$}
\NOM{Cut}
\UIC{$\seq[{ \piNew{x}{x'}{(\piPar{\cpSend{x}{y}{\ami}{\boe}}{\cpRecv{x'}{y'}{\store}})} }]{
\Gamma, \Delta, \Theta }$}
\end{prooftree}
In this interaction, \Ami will get whatever the store decides to send on \tm{y}, and \Boe will get whatever the store decides to send on \tm{x}. This means that this interaction gives the choice of who receives what \emph{to the store}. This is not an accurate model of our original example, where the choice of who receives the cake is non-deterministic and depends on factors outside of any of the participants' control!
Modelling racy behaviour, such as that in our example, is essential to describing the interactions that take place in realistic concurrent systems. We would like to extend \hcp to allow such races in a way which mirrors the way in which the \textpi-calculus handles non-determinism. Let us return to our example:
\begin{center}
\(
\tm{(\piPar{%
\piUSend{x}{\sliceofcake}{\piUSend{x}{\nope}{\store}}
}{%
\piPar{\piRecv{x}{y}{\ami}}{\piRecv{x}{z}{\boe}}
})}
\)
\end{center}
In this interaction, we see that the channel \tm{x} is only used as a way to connect the various clients, \Ami and \Boe, to the store. The \emph{real} communication, sending the slice of cake and disappointment, takes place on the channels \tm{\sliceofcake}, \tm{\nope}, \tm{y} and \tm{z}. Inspired by this, we add two new constructs to the term language of \hcp for sending and receiving on a \emph{shared} channel. These actions are marked with a \tm{\star} to distinguish them from ordinary sending and receiving.
\begin{defi}[Terms]\label{def:nc-terms}
We extend \cref{def:hcp-terms} as follows:
\[\!
\begin{aligned}
\tm{P}, \tm{Q}, \tm{R}
::=& \; \dots
\\ \mid& \; \tm{\ncCnt{x}{y}{P}} &&\text{client creation}
\\ \mid& \; \tm{\ncSrv{x}{y}{P}} &&\text{server interaction}
\end{aligned}
\]
\end{defi}
As before, round brackets denote input, square brackets denote output. Note that $\tm{\ncCnt{x}{y}{P}}$, much like $\tm{\piSend{x}{y}{P}}$, is a bound output: both client creation and server interaction bind a new name.
The structural congruence, which identifies certain terms, is the same as \cref{def:hcp-equiv}.
In any deadlock-free interaction between a server and some clients,
there must be \emph{exactly} as many clients as there are server interactions.
Therefore, we add two new \emph{dual} types for client pools and servers, which
track how many clients or server interactions they represent.
\begin{defi}[Types]\label{def:nc-types}
We extend \cref{def:cp-types} as follows:
\[
\begin{array}{lrll}
\ty{A}, \ty{B}, \ty{C}
& ::=& \; \dots
\\&\mid& \; \ty{\take[n]{A}} &\text{pool of} \; n \; \text{clients}
\\&\mid& \; \ty{\give[n]{A}} &n \; \text{server interactions}
\end{array}
\]
\end{defi}
The types $\ty{\take[n]{A}}$ and $\ty{\give[n]{A^{\bot}}}$ are dual. (The subscripts must be identical, \ie $\ty{\take[n]{A}}$ is not dual to $\ty{\give[m]{A^{\bot}}}$ if ${n}\neq{m}$.)
Duality remains an involutive function.
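Written out as equations, duality on the new type constructors reads
\[
\ty{(\take[n]{A})^{\bot}} = \ty{\give[n]{A^{\bot}}}
\qquad\text{and}\qquad
\ty{(\give[n]{A})^{\bot}} = \ty{\take[n]{A^{\bot}}}.
\]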
We have to add typing rules to associate our new client and server interactions
with their types.
The definition for environments will remain unchanged, but we will extend the
definition for the typing judgement.
To determine the new typing rules, we essentially answer the question
``What typing constructs do we need to complete the following proof?''
\begin{prooftree}
\AXC{$\seq[{ \ami }]{ \Gamma, \tmty{x'}{\cake^\bot} }$}
\noLine\UIC{$\smash{\vdots}\vphantom{\vdash}$}
\AXC{$\seq[{ \boe }]{ \Delta, \tmty{y'}{\cake^\bot} }$}
\noLine\UIC{$\smash{\vdots}\vphantom{\vdash}$}
\AXC{$\seq[{ \store }]{ \Theta, \tmty{z}{\cake}, \tmty{z'}{\cake} }$}
\noLine\UIC{$\smash{\vdots}\vphantom{\vdash}$}
\noLine\TIC{$\seq[{
\piNew{x}{x'}{(\piPar{\ncPool{\ncCnt{x}{z}{\ami}}{\ncCnt{x}{z'}{\boe}}}{
\ncSrv{x'}{w}{\ncSrv{x'}{w'}{\store}}})} }]{
\Gamma, \Delta, \Theta }$}
\end{prooftree}
The constructs $\tm{\ncCnt{x}{y}{P}}$ and $\tm{\ncSrv{x}{y}{P}}$ introduce a single client or server action, respectively---hence, channels of type $\ty{\take[1]{}}$ and $\ty{\give[1]{}}$. However, when we cut, we want to cut on both interactions simultaneously. We need rules for the \emph{contraction} of shared channel names.
\subsection{Clients and Pooling}\label{sec:clients-and-pooling}
A client pool represents a number of independent processes, each wanting to interact with the same server. Examples of such a pool include \Ami and \Boe from our example, customers for online stores in general, and any number of processes which interact with a single, centralised server.
We introduce two new rules: one to construct clients, and one to pool them together. The first rule, $(\take[1]{})$, marks an interaction over a channel as a client interaction. It does this by receiving a channel $\tm{y}$ over a \emph{shared} channel $\tm{x}$. The channel $\tm{y}$ is the channel across which the actual interaction will eventually take place. The second rule, $\textsc{Cont}_{!}$, allows us to contract shared channel names with the same type. When used together with \textsc{H-Mix}, this allows us to pool clients together.
\begin{center}
\ncInfTake1
\ncInfPool
\end{center}%
Using these rules, we can derive the left-hand side of our proof by marking \Ami and \Boe as clients, and pooling them together.
\begin{prooftree}
\AXC{$\seq[{ \ami }]{ \Gamma, \tmty{z}{\cake^\bot} }$}
\SYM{(\take[1]{})}
\UIC{$\seq[{ \ncCnt{x}{z}{\ami} }]{ \Gamma, \tmty{x}{\take[1]{\cake^\bot}} }$}
\AXC{$\seq[{ \boe }]{ \Delta, \tmty{z'}{\cake^\bot} }$}
\SYM{(\take[1]{})}
\UIC{$\seq[{ \ncCnt{x'}{z'}{\boe} }]{ \Delta, \tmty{x'}{\take[1]{\cake^\bot}} }$}
\NOM{H-Mix}
\BIC{$\seq[{ \ncPool{\ncCnt{x}{z}{\ami}}{\ncCnt{x'}{z'}{\boe}} }]{
\Gamma, \tmty{x}{\take[1]{\cake^\bot}} \hsep
\Delta, \tmty{x'}{\take[1]{\cake^\bot}} }$}
\SYM{\textsc{Cont}_{!}}
\UIC{$\seq[{ \ncPool{\ncCnt{x}{z}{\ami}}{\ncCnt{x}{z'}{\boe}} }]{
\Gamma, \Delta, \tmty{x}{\take[2]{\cake^\bot}} }$}
\end{prooftree}
\subsection{Servers and Sequencing}\label{sec:servers-and-sequencing}
Dual to a pool of $n$ clients in parallel is a server with $n$ actions in sequence. Our interpretation of a server is a process which offers some number of interdependent interactions of the same type. Examples include the store from our example, which gives out slices of cake and disappointment, online stores in general, and any central server which interacts with some number of client processes.
We introduce two new rules to construct servers. The first rule, $(\give[1]{})$, marks an interaction over some channel as a server interaction. It does this by sending a channel $\tm{y}$ over a \emph{shared} channel $\tm{x}$. The channel $\tm{y}$ is the channel across which the actual interaction will take place. The second rule, $\textsc{Cont}_{?}$, allows us to merge two (possibly interleaved) sequences of server interactions. This allows us to construct a server which has multiple interactions of the same type, across the same shared channel.
\begin{center}
\ncInfGive1
\ncInfCont
\end{center}
Using these rules, we can derive the right-hand side of our proof, by marking each of the store's interactions as server interactions, and then contracting them.
\begin{prooftree}
\AXC{$\seq[{ \store }]{ \Theta, \tmty{w}{\cake}, \tmty{w'}{\cake} }$}
\SYM{(\give[1]{})}
\UIC{$\seq[{ \ncSrv{y'}{w'}{\store} }]{
\Theta, \tmty{w}{\cake}, \tmty{y'}{\give[1]{\cake}} }$}
\SYM{(\give[1]{})}
\UIC{$\seq[{ \ncSrv{y}{w}{\ncSrv{y'}{w'}{\store}} }]{
\Theta, \tmty{y}{\give[1]{\cake}}, \tmty{y'}{\give[1]{\cake}} }$}
\SYM{\textsc{Cont}_{?}}
\UIC{$\seq[{ \ncSrv{y}{w}{\ncSrv{y}{w'}{\store}} }]{
\Theta, \tmty{y}{\give[2]{\cake}} }$}
\end{prooftree}
Thus, we complete the typing derivation of our example.
\begin{defi}[Typing judgements]\label{def:nc-typing-judgement}
We extend \cref{def:cp-typing-judgement} as follows:
{\normalfont
\begin{center} \ncInfTake1 \ncInfGive1 \end{center}
\begin{center} \ncInfPool \ncInfCont \end{center}
}
\end{defi}
\subsection{Running Clients and Servers}\label{sec:nc-running-clients-and-servers}
Finally, we need to extend the reduction rules to allow for the reduction of client and server processes. The reduction rule we add is a variant of $\hcpRedBetaTensParr$.
\begin{defi}[Reduction]\label{def:nc-reduction}
We extend \cref{def:hcp-reduction} as follows:
\[
\begin{array}{llll}
\ncRedBetaStar{}
& \tm{\piNew{x}{x'}{(\piPar{(\piPar{\ncCnt{x}{y}{P}}{\ncSrv{x'}{y'}{Q}})}{R})}}
& \Longrightarrow \;
& \tm{\piNew{x}{x'}{(\piPar{\piNew{y}{y'}{(\piPar{P}{Q})}}{R})}}
\end{array}
\]
\end{defi}
The difference between $\ncRedBetaStar{}$ and $\hcpRedBetaTensParr$ is that the former allows reduction to happen in the presence of an unrelated process $\tm{R}$, which is passed along unchanged. This is necessary, as there may be other clients waiting to interact with the server on the shared channel $\tm{x}$, which cannot be moved out of scope of the name restriction $\piNew{x}{}$. When there is no unrelated process $\tm{R}$, \ie, when there is only a single client, we can rewrite by $\hcpEquivMixHalt$ before and after applying $\ncRedBetaStar{}$.
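As an illustration, the single-client case can be sketched as follows, assuming that $\hcpEquivMixHalt$ allows us to introduce and later discard a halted parallel component $\tm{\piHalt}$:
\[
\begin{array}{cl}
& \tm{\piNew{x}{x'}{(\piPar{\ncCnt{x}{y}{P}}{\ncSrv{x'}{y'}{Q}})}} \\
\equiv & \tm{\piNew{x}{x'}{(\piPar{(\piPar{\ncCnt{x}{y}{P}}{\ncSrv{x'}{y'}{Q}})}{\piHalt})}} \\
\Longrightarrow & \tm{\piNew{x}{x'}{(\piPar{\piNew{y}{y'}{(\piPar{P}{Q})}}{\piHalt})}} \\
\equiv & \tm{\piNew{x}{x'}{\piNew{y}{y'}{(\piPar{P}{Q})}}}
\end{array}
\]
where the middle step is an instance of $\ncRedBetaStar{}$ with $\tm{R} = \tm{\piHalt}$.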
So where does the non-determinism in \nodcap come from? Let us say we have a term of the following form:
\[
\tm{
\piNew{x}{x'}{(\piPar
{\ncPool{\ncCnt{x}{y_1}{P_1}}{\dots \mid \ncCnt{x}{y_n}{P_n}}}
{\ncSrv{x'}{y'_1}{\dots\ncSrv{x'}{y'_n}{Q}}})}
}
\]
As parallel composition is commutative and associative, we can rewrite this term to pair any client in the pool with the server before applying $\ncRedBetaStar{}$. Thus, like in the \textpi-calculus, the non-determinism is introduced by the structural congruence.
Does this mean that, for an arbitrary client pool $\tm{P}$ in $\tm{\piNew{x}{y}{(\piPar{P}{\ncSrv{y}{w}{Q}})}}$, every client in that pool is competing for the server interaction on $\tm{x}$? Not necessarily, as some of the clients may be blocked on an external communication. For instance, in the term below, clients $\tm{\ncCnt{x}{y_{n+1}}{P_{n+1}}} \dots \tm{\ncCnt{x}{y_m}{P_m}}$ are blocked on a communication on the external channel $\tm{a}$:
\[
\arraycolsep=0pt
\tm{
\begin{array}{lrl}
\piNew{x}{x'}{}
& ((&\; \ncPool{\ncCnt{x}{y_1}{P_1}}{\dots\mid\ncCnt{x}{y_n}{P_n}}\\
&\mid&\; \cpWait{a}{\ncPool{\ncCnt{x}{y_{n+1}}{P_{n+1}}}{\dots \mid \ncCnt{x}{y_m}{P_m}}}\;)\\
&\mid&\; \ncSrv{x'}{y'_1}{\dots\ncSrv{x'}{y'_m}{Q}}\;)
\end{array}}
\]
If we reduce this term, then only the clients $\tm{\ncCnt{x}{y_1}{P_1}} \dots \tm{\ncCnt{x}{y_n}{P_n}}$ will be assigned server interactions, and we end up with the following canonical form:
\[
\arraycolsep=0pt
\tm{
\begin{array}{lrl}
\piNew{x}{x'}{}
& (&\; \cpWait{a}{\ncPool{\ncCnt{x}{y_{n+1}}{P_{n+1}}}{\dots\mid\ncCnt{x}{y_m}{P_m}}}\\
&\mid&\; \ncSrv{x'}{y'_{n+1}}{\dots\ncSrv{x'}{y'_m}{Q}}\;)
\end{array}}
\]
This matches our intuition and the behaviour of the \textpi-calculus. For instance, we can now encode our example, where \Ami and \Boe both send a request for cake to the store, and the store sends back either a cake or nothing:
\[
\tm{\piNew{x}{x'}{}
\left(
\begin{array}{l}
\ncCnt{x}{x}{\piRecv{x}{y}{\ami}}\ppar
\\
\ncCnt{x}{x}{\piRecv{x}{z}{\boe}}\ppar
\\
\ncSrv{x'}{x'_1}{\piUSend{x'_1}{\sliceofcake}{\ncSrv{x'}{x'_2}{\piUSend{x'_2}{\nope}{\store}}}}
\end{array}
\right)
}
\Longrightarrow^\star
\begin{array}{c}
\tm{(\cpSub{\sliceofcake}{y}{\ami}\ppar\cpSub{\nope}{z}{\boe}\ppar\store)}
\\
\text{or}
\\
\tm{(\cpSub{\nope}{y}{\ami}\ppar\cpSub{\sliceofcake}{z}{\boe}\ppar\store)}
\end{array}
\]
The encoding presented above is slightly more complex than necessary: after the store receives a request as $\tm{x'_1}$, it could simply perform the cake interaction over that channel, and similarly for $\tm{x'_2}$. However, we include these actions for clarity.
\paragraph{Alternative syntax.}
If we choose to reuse the terms $\tm{\piSend{x}{y}{P}}$ and $\tm{\piRecv{x}{y}{P}}$ for shared channels, we could replace $\hcpRedBetaTensParr$ with $\ncRedBetaStar{}$, using the latter rule for both cases.
\subsection{Metatheory}
\label{sec:nc-metatheory}
\nodcap enjoys subject reduction, termination, and progress.
\begin{lem}[Preservation for $\equiv$]\label{lem:nc-preservation-equiv}
If $\tm{P}\equiv\tm{Q}$ and $\seq[P]{\mathcal{G}}$, then $\seq[Q]{\mathcal{G}}$.
\end{lem}
\begin{proof}
By induction on the derivation of $\tm{P}\equiv\tm{Q}$.
\end{proof}
\begin{thm}[Preservation]\label{thm:nc-preservation}
If $\seq[P]{\mathcal{G}}$ and $\reducesto{P}{Q}$, then $\seq[Q]{\mathcal{G}}$.
\end{thm}
\begin{proof}
By induction on the derivation of $\reducesto{P}{Q}$.
\end{proof}
\begin{defi}[Actions]
A process $\tm{P}$ acts on $\tm{x}$ whenever $\tm{x}$ is free in the outermost term constructor of $\tm{P}$, \eg, $\tm{\ncSrv{x}{y}{P}}$ acts on $\tm{x}$ but not on $\tm{y}$, and $\tm{\cpLink{x}{y}}$ acts on both $\tm{x}$ and $\tm{y}$. A process $\tm{P}$ is an action if it acts on some channel $\tm{x}$.% Two actions are dual when they introduce dual type constructors, \eg, $\tm{\piSend{x}{y}{P}}$ is dual to $\tm{\piRecv{x'}{y'}{Q}}$, but $\tm{\cpLink{x}{y}}$ is not dual to any action.
\end{defi}
\begin{defi}[Canonical forms]\label{def:nc-canonical-forms}
A process $\tm{P}$ is in canonical form if
\[
\tm{P} \equiv \tm{\piNew{x_1}{x'_1}{\dots\piNew{x_n}{x'_n}{(P_1 \mid \dots \mid P_{n+m+1})}}},
\]
such that: no process $\tm{P_i}$ is a cut or a mix; no process $\tm{P_i}$ is a link acting on a bound channel $\tm{x_i}$ or $\tm{x'_i}$; and no two processes $\tm{P_i}$ and $\tm{P_j}$ are acting on dual endpoints $\tm{x_i}$ and $\tm{x'_i}$ of the same channel.
\end{defi}
\begin{lem}
If a well-typed process $\tm{P}$ is in canonical form, then it is blocked on
an external communication, \ie,
$\tm{P}\equiv\tm{\piNew{x_1}{x'_1}{\dots\piNew{x_n}{x'_n}{(P_1\mid\dots\mid P_{n+m+1})}}}$
such that at least one process $\tm{P_i}$ acts on a free name.
\end{lem}
\begin{proof}
We have
\(
\tm{P} \equiv \tm{\piNew{x_1}{x'_1}{\dots\piNew{x_n}{x'_n}{(P_1 \ppar \dots \ppar P_{n+m+1})}}},
\)
such that no $\tm{P_i}$ is a cut or a link acting on a bound channel, and no two processes $\tm{P_i}$ and $\tm{P_j}$ are acting on the same bound channel with dual actions. The prefix of cuts and mixes introduces $n$ channels. Each application of cut requires an application of mix, so the prefix introduces $n+m+1$ processes. Each application of $\textsc{Cont}_{!}$ requires an application of mix, so there are at most $m$ clients acting on the same bound channel. Therefore, at least \emph{one} of the processes $\tm{P_i}$ must be acting on a free channel, i.e., blocked on an external communication.
\end{proof}
\begin{thm}[Progress]\label{thm:nodcap-progress}
If $\seq[P]{\mathcal{G}}$, then either $\tm{P}$ is in canonical form, or there exists a process $\tm{Q}$ such that $\tm{P}\Longrightarrow\tm{Q}$.
\end{thm}
\begin{proof}
We consider the maximum prefix of cuts and mixes of $\tm{P}$ such that
\[
\tm{P} \equiv \tm{\piNew{x_1}{x'_1}{\dots\piNew{x_n}{x'_n}{(P_1 \ppar \dots \ppar P_{n+m+1})}}},
\]
and no $\tm{P_i}$ is a cut. If any process $\tm{P_i}$ is a link, we reduce by $(\cpLink{}{})$. If any two processes $\tm{P_i}$ and $\tm{P_j}$ are acting on dual endpoints $\tm{x_i}$ and $\tm{x'_i}$ of the same channel, we rewrite by $\equiv$ and reduce by the appropriate $\beta$-rule. Otherwise, $\tm{P}$ is in canonical form.
\end{proof}
\begin{thm}[Termination]\label{thm:nodcap-termination}
If $\seq[P]{\mathcal{G}}$, then there are no infinite $\Longrightarrow$-reduction sequences.
\end{thm}
\begin{proof}
Every reduction replaces a single cut with zero, one, or two cuts, and each of these cuts is smaller, measured by the size of the cut formula. Furthermore, each instance of the structural congruence preserves the sizes of the cuts. Therefore, there cannot be an infinite $\Longrightarrow$-reduction sequence.
\end{proof}
\subsection{\nodcap and Non-deterministic Local Choice}\label{sec:nc-local-choice}
In \cref{sec:local-choice}, we discussed the non-deterministic local choice operator, which is used in several extensions of \piDILL and \cp~\cite{atkey2016,caires2014,caires2017}. This operator is admissible in \nodcap. We can derive the non-deterministic choice \tm{P+Q} by constructing the following term:
\[%
\arraycolsep=0pt
\tm{
\begin{array}{lrlrl}
\piNew{x}{x'}{}
&(( & \; \ncCnt{x}{y}{\cpInl{y}{\cpHalt{y}}} \\
&\mid& \; \ncCnt{x}{z}{\cpInr{z}{\cpHalt{z}}} \; )\\
&\mid& \; \ncSrv{x'}{y'}{\ncSrv{x'}{z'}{y'\:{\triangleright}}}\\
& & \quad
\begin{array}{rl}
\{ & \texttt{inl}: \; \piNew{w}{w'}{(\piPar{\cpCase{z'}{\cpWait{z'}{\cpHalt{w}}}{
\cpWait{z'}{\cpHalt{w}}}}{\cpWait{w'}{P}})}
\\
; & \texttt{inr}: \; \piNew{w}{w'}{(\piPar{\cpCase{z'}{\cpWait{z'}{\cpHalt{w}}}{
\cpWait{z'}{\cpHalt{w}}}}{\cpWait{w'}{Q}})} \; \})
\end{array}
\end{array}
}
\]
This term is a cut between two processes.
\begin{itemize}
\item
On the left-hand side, we have a pool of two processes, $\tm{\ncCnt{x}{y}{\cpInl{y}{\cpHalt{y}}}}$ and $\tm{\ncCnt{x}{z}{\cpInr{z}{\cpHalt{z}}}}$. Each makes a choice: the first sends $\tm{\texttt{inl}}$, and the second sends $\tm{\texttt{inr}}$.
\item
On the right-hand side, we have a server with both $\tm{P}$ and $\tm{Q}$. This server has two channels on which a choice is offered, $\tm{y'}$ and $\tm{z'}$. The choice on $\tm{y'}$ selects between $\tm{P}$ and $\tm{Q}$. The choice on $\tm{z'}$ does not affect the outcome of the process at all. Instead, it is discarded.
\end{itemize}
When these clients and the server are put together, the choices offered by the server will be non-deterministically lined up with the clients which make choices, and either $\tm{P}$ or $\tm{Q}$ will run.
While there is a certain amount of overhead involved in this encoding, it scales linearly in terms of the number of processes. The reverse---encoding the non-determinism present in \nodcap using non-deterministic local choice---scales exponentially, see, \eg, the examples in \cref{sec:local-choice}.
\section{Cuts with Leftovers}\label{sec:leftovers}
So far, our account of non-determinism in client/server interactions only allows for interactions between equal numbers of clients and server interactions. A natural question is whether or not we can deal with the scenario in which there are more client interactions than server interactions, or vice versa, \ie, whether or not the following rules are derivable:
\begin{center}
\begin{prooftree*}
\AXC{$\seq{ \Gamma, \ty{\take[n+m]{A}} }$}
\AXC{$\seq{ \Delta, \ty{\give[n]{A^\bot}} }$}
\BIC{$\seq{ \Gamma, \Delta, \ty{\take[m]{A}} }$}
\end{prooftree*}
\begin{prooftree*}
\AXC{$\seq{ \Gamma, \ty{\take[n]{A}} }$}
\AXC{$\seq{ \Delta, \ty{\give[n+m]{A^\bot}} }$}
\BIC{$\seq{ \Gamma, \Delta, \ty{\give[m]{A^\bot}} }$}
\end{prooftree*}
\end{center}
These rules are derivable using a link. For instance, we can derive the rule for the case in which there are more clients than servers as follows:
\begin{prooftree}
\AXC{$\seq[{ P }]{ \Gamma, \tmty{x}{\take[n+m]{A}} }$}
\AXC{$\seq[{ Q }]{ \Delta, \tmty{x'}{\give[n]{A^\bot}} }$}
\AXC{$\seq[{ \cpLink{x''}{w} }]{
\tmty{x''}{\give[m]{A^{\bot}}}, \tmty{w}{\take[m]{A}} }$}
\NOM{H-Mix}
\BIC{$\seq[{ (\piPar{Q}{\cpLink{x''}{w}}) }]{
\Delta, \tmty{x'}{\give[n]{A^\bot}} \hsep
\tmty{x''}{\give[m]{A^{\bot}}}, \tmty{w}{\take[m]{A}} }$}
\SYM{\textsc{Cont}_{!}}
\UIC{$\seq[{ {(\piPar{Q}{\cpLink{x'}{w}})} }]{
\Delta, \tmty{x'}{\give[n+m]{A^\bot}}, \tmty{w}{\take[m]{A}} }$}
\NOM{H-Mix}
\BIC{$\seq[{ (\piPar{P}{(\piPar{Q}{\cpLink{x'}{w}})}) }]{
\Gamma, \tmty{x}{\take[n+m]{A}} \hsep
\Delta, \tmty{x'}{\give[n+m]{A^\bot}}, \tmty{w}{\take[m]{A}} }$}
\NOM{Cut}
\UIC{$\seq[{ \piNew{x}{x'}{(\piPar{P}{(\piPar{Q}{\cpLink{x'}{w}})})} }]{
\Gamma, \Delta, \tmty{w}{\take[m]{A}} }$}
\end{prooftree}
\section{Relation to Manifest Sharing}\label{sec:manifest}
In \cref{sec:local-choice}, we mentioned related work which extends \piDILL and \cp with non-deterministic local choice~\cite{atkey2016,caires2014,caires2017}, and contrasted these approaches with ours.
In this section, we will contrast our work with the more recent work on manifest sharing~\cite{balzer2017}.
Manifest sharing extends the session-typed language \SILL with two connectives, $\ty{\acquire{A}}$ and $\ty{\release{A}}$, which represent the places in a protocol where a shared resource is acquired and released, respectively. In the resulting language, \SILLS, we can define a type for, \eg, shared queues (using the notation for types introduced in this paper):
\[
\ty{\queue{A}} ::=
\ty{\acquire{
(\; A^\bot \parr \release{(\queue{A})} \;)
\with
(\; (A \plus \bot) \parr \release{(\queue{A})} \;)
}}
\]
The type $\ty{\queue{A}}$ types a shared channel which, after we acquire exclusive access, gives us the choice between enqueuing a value ($\ty{A^\bot}$) and releasing the queue, or dequeuing a value if there is any ($\ty{A \plus \bot}$) and releasing the queue.
The language \SILLS is much more expressive than \nodcap, as it has support for both shared channels and recursion.
In fact, Balzer, Pfenning, and Toninho~\cite{balzer2018} show that \SILLS is expressive enough to embed the untyped asynchronous \textpi-calculus. This expressiveness comes with a cost, as \SILLS processes are not guaranteed to be deadlock free, though recent work addresses this issue~\cite{balzer2019}.
Despite the difference in expressiveness, there are some similarities between \nodcap and \SILLS. In the former, shared channels represent (length-indexed) streams of interactions of the same type. In the latter, it is necessary for type preservation that shared channels are always released at the same type at which they were acquired, meaning that shared channels also represent (possibly infinite) streams of interactions of the same type. In fact, in \nodcap, the type for queues (with $n$ interactions) can be written as $\ty{\take[n]{(A^\bot \with (A \plus \bot))}}$.
One key difference between \nodcap and \SILLS is that in \SILLS a server must finish interacting with one client before interacting with another, whereas in \nodcap the server may interact with multiple clients simultaneously.
\section{Discussion and Future Work}\label{sec:conclusion}
We presented \nodcap, an extension of \hcp which permits non-deterministic communication without losing the strong connection to logic. We gave proofs for preservation, progress, and termination for the term reduction system of \nodcap. We showed that we can define non-deterministic local choice in \nodcap.
Our formalism so far has only captured servers that provide for a fixed number of clients. More realistically, we would want to define servers that provide for arbitrary numbers of clients. This poses two problems: how would we define arbitrarily-interacting stateful processes, and how would we extend the typing discipline of \nodcap to account for them without losing its static guarantees.
One approach to defining server processes would be to combine \nodcap with structural recursion and corecursion, following the $\mu\text{CP}$ extension of Lindley and Morris~\cite{lindley2016}. Their approach can express processes which produce streams of \ty{A} channels. Such a process would expose a channel with the co-recursive type \ty{\nu X. A \parr (1 \plus X)}. Given such a process, it is possible to produce a channel of type \ty{A \parr A \parr \cdots \parr A} for any number of \ty{A}s, allowing us to satisfy the type \ty{\give[n]{A}} for an arbitrary $n$.
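As a rough sketch of the unrolling involved, one unfolding of the fixed point gives
\[
\ty{\nu X. A \parr (1 \plus X)}
\;\cong\;
\ty{A \parr (1 \plus (\nu X. A \parr (1 \plus X)))},
\]
so after $n$ unfoldings the type exposes $n$ occurrences of \ty{A}, each guarded by the choice of whether to continue the stream; it is from these occurrences that a channel of type \ty{A \parr A \parr \cdots \parr A} can be obtained.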
We would also need to extend the typing discipline to capture arbitrary use of shared channels. One approach would be to introduce resource variables and quantification. Following this approach, in addition to having types \ty{\give[n] A} and \ty{\take[n] A} for concrete $n$, we would also have types \ty{\give[x] A} and \ty{\take[x] A} for resource variables $x$. These variables would be introduced by quantifiers \ty{\forall x A} and \ty{\exists x A}. Defining terms corresponding to \ty{\forall x A}, and relating them to structural recursion, presents an interesting area of further work.
Our account of \hcp did not include the exponentials \ty{\give A} and \ty{\take A}. The type \ty{\take A} denotes arbitrarily many independent instances of \ty{A}, while the type \ty{\give A} denotes a concrete (if unspecified) number of potentially-dependent instances of \ty{A}. Existing interpretations of linear logic as session types have taken \ty{\take A} to denote \ty{A}-servers, while \ty{\give A} denotes \ty{A}-clients. However, the analogy is imperfect: while we expect servers to provide arbitrarily many instances of their behaviour, we also expect those instances to be interdependent.
With quantification over resource variables, we can give precise accounts of both \cp's exponentials and idealised servers and clients. \cp exponentials could be embedded into this framework using the definitions $\ty{\take{A}} ::= \ty{\forall{n}\take[n]{A}}$ and $\ty{\give{A}} ::= \ty{\exists{n}{\give[n]{A}}}$. We would also have types that precisely matched our intuitions for server and client behavior: an \ty{A} server is of type \ty{\forall{n}{\give[n] A}}, as it serves an unbounded number of requests with the requests being interdependent, while a collection of \ty{A} clients is of type \ty{\exists{n}{\take[n] A}}, as we have a specific number of clients with each client being independent.
\bibliographystyle{alpha}
\bibliography{main}
\end{document}
%%% Local Variables:
%%% TeX-master: "main"
%%% End:
\documentclass[twoside,openright,titlepage,fleqn,
headinclude,11pt,a4paper,BCOR5mm,footinclude ]{scrbook}
%--------------------------------------------------------------
\newcommand{\myTitle}{Analisi di reti metaboliche basata su
propriet\`a di connessione\xspace}
% use the right myDegree option
\newcommand{\myDegree}{Corso di Laurea in Informatica\xspace}
%\newcommand{\myDegree}{
%Corso di Laurea Specialistica in Scienze e Tecnologie
%dell'Informazione\xspace}
\newcommand{\myName}{Massimo Nocentini\xspace}
\newcommand{\myProf}{Pierluigi Crescenzi\xspace}
\newcommand{\myOtherProf}{Nome Cognome\xspace}
\newcommand{\mySupervisor}{Nome Cognome\xspace}
\newcommand{\myFaculty}{
Facolt\`a di Scienze Matematiche, Fisiche e Naturali\xspace}
\newcommand{\myDepartment}{
Dipartimento di Sistemi e Informatica\xspace}
\newcommand{\myUni}{\protect{
Universit\`a degli Studi di Firenze}\xspace}
\newcommand{\myLocation}{Firenze\xspace}
\newcommand{\myTime}{Anno Accademico 2010-2011\xspace}
\newcommand{\myVersion}{Version 0.1\xspace}
%--------------------------------------------------------------
\usepackage[latin1]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[square,numbers]{natbib}
\usepackage[fleqn]{amsmath}
\usepackage[english]{babel}
%--------------------------------------------------------------
\usepackage{dia-classicthesis-ldpkg}
%--------------------------------------------------------------
% Options for classicthesis.sty:
% tocaligned eulerchapternumbers drafting linedheaders
% listsseparated subfig nochapters beramono eulermath parts
% minionpro pdfspacing
\usepackage[eulerchapternumbers,subfig,beramono,eulermath,
parts]{classicthesis}
%--------------------------------------------------------------
\newlength{\abcd} % for ab..z string length calculation
% how all the floats will be aligned
\newcommand{\myfloatalign}{\centering}
\setlength{\extrarowheight}{3pt} % increase table row height
\captionsetup{format=hang,font=small}
%--------------------------------------------------------------
% Layout setting
%--------------------------------------------------------------
\usepackage{geometry}
\geometry{
a4paper,
ignoremp,
bindingoffset = 1cm,
textwidth = 13.5cm,
textheight = 21.5cm,
lmargin = 3.5cm, % left margin
tmargin = 4cm % top margin
}
%--------------------------------------------------------------
\usepackage{listings}
\usepackage{hyperref}
% My Theorem
\newtheorem{oss}{Observation}[section]
\newtheorem{exercise}{Exercise}[section]
\newtheorem{thm}{Theorem}[section]
\newtheorem{cor}[thm]{Corollary}
\newtheorem{lem}[thm]{Lemma}
\newcommand{\vect}[1]{\boldsymbol{#1}}
% this command is for the corrections that the professor
% may make, if desired.
\newcommand{\prof}[1]{\boldsymbol{#1}}
% instead of boldsymbol I can use the arrow above the letter with
%\newcommand{\vect}[1]{\vec{#1}}
% page settings
% \pagestyle{headings}
%--------------------------------------------------------------
\begin{document}
\frenchspacing
\raggedbottom
\pagenumbering{roman}
\pagestyle{plain}
%--------------------------------------------------------------
% Frontmatter
%--------------------------------------------------------------
%\include{titlePage}
\pagestyle{scrheadings}
%--------------------------------------------------------------
% Mainmatter
%--------------------------------------------------------------
\pagenumbering{arabic}
% settings for the lstlisting environment
\lstset{
language = java
, numbers = left
, basicstyle=\sffamily%\footnotesize
%, frame=single
, tabsize=2
, captionpos=b
, breaklines=true
, showspaces=false
, showstringspaces=false
}
\chapter*{Report of activity: ``Architetture e metodi per la
cooperazione applicativa'' }
Florence, \today\\\\
In this document we describe the state of the project under the scholarship
``Architetture e metodi per la cooperazione applicativa'', started on
$25/06/13$. The work is supervised by Prof. Enrico Vicario; the study of the
fluid dynamics equations and optimizations is due to Fabio Tarani, and the
implementation is by Massimo Nocentini, using the $C\#$ programming language.
\\\\
Problem: write a library that, consuming a fluid dynamic network specification
containing topological connections and physical data such as loads and
setup pressures on nodes, produces for each node the corresponding pressure and for
each edge the corresponding flow.
\\\\
In the initial stage of the project there was a phase of study, carried out along
two distinct branches: $(i)$ study of the existing product ``GeoNet'' supplied by
Terranova Software, $(ii)$ analysis of a suitable object-oriented implementation
of the network concept. These branches were then joined by the study of the
relevant fluid dynamics equations. Since that point the implementation phase has
dominated the others and is still active, although the code base is by now quite stable.
We describe the implementation step by step, from the architecture
toward input/output, observing the main points of interest:
\begin{description}
\item[architecture] the problem of describing the input network is solved using
a ``language'' based on the idea of inductive set construction.
%We provide an
%example regarding the definition of nodes set:
%\begin{itemize}
% \item a topological node $t$ with identifier $id$ and height $h$ is a node;
% \item if $n$ is a node then a node $m$ decorating $n$ with a gadget $g$
% (which in turn is defined by induction), say with setup pressure $p$,
% is a node.
%\end{itemize}
Using this approach to define \emph{every} object of the network
allows us to tackle the problem from a functional point of view. In fact, given a
network specified using the above schema, it is possible to create dedicated
sub-domains, each relating to an aspect of interest, extracting only the
necessary data. For example, it
is possible to generate a domain for the visual representation of the network, or
to generate a domain for the Newton-Raphson algorithm in order to compute
pressures and flows, and so on.
\item[Visitor pattern]
the process of sub-domain creation (i.e.\ a function over a recursively defined
type) is the counterpart of a proof by structural
induction in mathematics. This creation process is implemented using the
\emph{Visitor} design pattern (a sketch is given after this list), which has the
following pros:
\begin{itemize}
\item it allows the implementation of a domain creation to be written in a
single class;
\item it decouples the type's variants from the operations which can be
defined over them, keeping the variants' code clean.
\end{itemize}
We have experienced one con of this approach, concerning the addition of a new
variant: we had to resort to a downcast to make our implementation type check.
\item[input specification] it is possible to specify the input instance using
two main formalisms:
\begin{itemize}
\item \emph{RDF triples}: this formalism is based on the W3C
recommendation for data description in the semantic web. It is a
verbose input specification but allows a user-friendly way to
describe the network;
\item \emph{GeoNet format}: this formalism is used by the existing
engine and is a ``quick and dirty'' way to describe the network,
without the overhead of the RDF triple notation, but it is less
readable.
\end{itemize}
\item[Newton-Raphson algorithm] we implemented this algorithm to compute
pressures on nodes and flows on edges by successive iterations.
In fact, we coded a variant which uses
$(i)$ a theoretical idea by Fabio Tarani about a smart construction of the
Jacobian matrix and $(ii)$ a relaxation factor in order to cut down the number of
iterations caused by the oscillation of some variables (a sketch of the damped
iteration is given after this list).
\item[pure $C\#$ library] the entire code base has been implemented using
\emph{only} $C\#$ code, paying attention to the ``message passing''
paradigm which is the basis of true object orientation. We promote delegation
over inheritance and do not use ``exotic'' features of $C\#$
such as coroutines, threads and external process invocation, in favor of a
minimal core. We have divided the solution into many small projects in order to be
modular and, possibly, to allow reuse in other projects. Currently
Terranova Software's projects are completely implemented in $C\#$; however,
the GeoNet engine, written in $C++$, is invoked as an external
process: using our implementation it is possible to have a complete solution running
on the .NET platform without resorting to foreign calls.
\item[simulations] we have exercised the library against some test networks, some
of which are simple ``toy'' networks used to
check the consistency of the computation. In the last month Fabio Tarani found a
network which abstracts the real setting of a Polish town: running the analysis on
that network we obtain satisfactory results, both in execution time and in the
correctness of the computed pressures and flows. Compared with GeoNet, our
implementation computes smaller values for the pressures.
\end{description}
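The two sketches below illustrate the points marked above; both are minimal
sketches with illustrative names only and do not reproduce the actual types or
methods of the library. The first shows the Visitor-based creation of
sub-domains over the inductively defined nodes of the network.
\begin{lstlisting}
// Sketch only: illustrative names, not the actual library API.
public interface INetworkVisitor<T>
{
    T VisitTopologicalNode(TopologicalNode node);
    T VisitDecoratedNode(DecoratedNode node);
}

public interface INode
{
    T Accept<T>(INetworkVisitor<T> visitor);
}

// Base case of the inductive definition: a bare topological node.
public sealed class TopologicalNode : INode
{
    public string Id { get; set; }
    public double Height { get; set; }
    public T Accept<T>(INetworkVisitor<T> visitor)
    {
        return visitor.VisitTopologicalNode(this);
    }
}

// Inductive case: a node decorating another node with a gadget,
// here abstracted as a setup pressure.
public sealed class DecoratedNode : INode
{
    public INode Inner { get; set; }
    public double SetupPressure { get; set; }
    public T Accept<T>(INetworkVisitor<T> visitor)
    {
        return visitor.VisitDecoratedNode(this);
    }
}
\end{lstlisting}
A concrete visitor, for example one extracting the data needed by the visual
representation or by the Newton-Raphson computation, implements the visitor
interface in a single class, with one method per variant. The second sketch
shows the damped Newton-Raphson iteration with a relaxation factor; the
residual, the (smart) Jacobian construction and the linear solver are supplied
by the caller as delegates.
\begin{lstlisting}
using System;

public static class RelaxedNewtonRaphson
{
    // Sketch only. residual computes F(p), the nodal balance errors;
    // jacobian computes J(p); linearSolve solves J * dp = f for dp;
    // relaxation is a damping factor in (0, 1].
    public static double[] Solve(
        Func<double[], double[]> residual,
        Func<double[], double[,]> jacobian,
        Func<double[,], double[], double[]> linearSolve,
        double[] pressures,
        double relaxation,
        double tolerance,
        int maxIterations)
    {
        for (int iteration = 0; iteration < maxIterations; iteration++)
        {
            double[] f = residual(pressures);

            // Convergence test on the Euclidean norm of the residual.
            double squaredNorm = 0.0;
            for (int i = 0; i < f.Length; i++) squaredNorm += f[i] * f[i];
            if (Math.Sqrt(squaredNorm) < tolerance) break;

            // Newton step followed by a damped update: the relaxation
            // factor shortens each step to curb oscillating variables.
            double[] dp = linearSolve(jacobian(pressures), f);
            for (int i = 0; i < pressures.Length; i++)
                pressures[i] -= relaxation * dp[i];
        }
        return pressures;
    }
}
\end{lstlisting}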
\begin{tabular}{ l c l }
written by Massimo Nocentini, & & approved by Enrico Vicario, \\
Dr. in Computer Science & & Full Professor \\
\url{[email protected]} & & \url{[email protected]} \\
\end{tabular}
\end{document}
\documentclass[output=paper]{langsci/langscibook}
\ChapterDOI{10.5281/zenodo.5675847}
\author{Tabea Reiner\affiliation{LMU München}}
\title{Recursion and paradigms}
\abstract{This paper sketches the current status of morphology and paradigms in linguistic theorising. In particular, it is shown that from a constructionist as well as from a compositional perspective, morphology including paradigms tends to dissolve. The former might be less obvious; however the paper argues that a constructional deconstruction of paradigms and morphology follows directly from \citegen{Haspelmath2011} take on \citet{Booij2010} and related approaches in the realm of \textup{Construction Morphology (CxM)}. The latter is more obvious; in particular, proponents of \textup{Distributed Morphology (DM)} regularly emphasise that morphology is but an interface and paradigms are epiphenomenal. Throughout the paper I assume some familiarity with Construction Morphology and Construction Grammar more generally whereas I introduce DM specifically. However, the paper is not intended as a thorough discussion of the approaches presented (nor do I take sides); rather it is their shared detachment from paradigms that is at stake here. Consequently, also what is sometimes called \textup{Autonomous Morphology} is addressed in the paper: a rather recent approach that advocates morphology as an irreducible level of description and upholds the paradigm as a format of description in its own right. The balance of the paper is rather pessimistic for morphology and paradigms but eventually I come up with a presumably new argument in favour of regarding paradigms as fundamental: restrictions on inflectional recursion fall out naturally from them.}
\begin{document}
\multicolsep=.25\baselineskip
\maketitle
\section{Introduction} \label{reiner:1}
In the last decades, morphology as an independent level of description has come under pressure from two sides. In construction-oriented approaches it runs the risk to dissolve somewhere in the middle of the lexicon-syntax continuum: “it’s constructions all the way down” \citep[18]{Goldberg2006}. In certain composition-oriented approaches, in turn, its traditional tasks have been transferred to syntax: it’s “syntactic hierarchical structure all the way down” \citep[3]{HarleyHarley1999}. The last quotation is from Distributed Morphology, which does have \textit{Morphology} in its name but refers to a mere interface between syntax and phonology by this (\citealt[114]{HalleHalle1993}, \citealt[1]{Bobaljik2017}).
Together with morphology, a certain format of description appears to be discarded, which has been central to at least what is traditionally termed inflectional morphology:\footnote{This is the kind of morphology on which the present paper focuses; however, most points before \sectref{reiner:4} carry over, mutatis mutandis, to derivational morphology: word formation can be modelled using paradigms \citep{HathoutNamer2019} but it can probably be modelled just as well using constructions or rules, including special rules. On a related note, let me add a word on terminology: in the present paper, the root \textit{deriv}{}- is sometimes used meaning ʻrelated to the formation of new wordsʼ and sometimes used meaning ʻrelated to compositionʼ (especially when presenting DM); I assume that in each case the context suffices for disambiguation.} the paradigm. Throughout this paper, \textit{paradigm} is to be understood in the following sense, unless noted otherwise:
\begin{quote}
[…] a set of \textsc{cells}; each such cell is the pairing of a word form with the set of morphosyntactic properties which that word form realizes. \citep[147]{Stump2002}
\end{quote}
I consider this definition quite general, since, in principle, it does not exclude an incremental instead of a realisational approach (for an overview of morphological theories cf. \citealt{Stewart2016}): after completion, the string can be regarded as a whole. Please note, however, that the definition given here differs from \citegen{Politt2019} contemporary concept of paradigm in two respects. First, it is narrower in that it refers to merely word forms (the notion of word is a point to be discussed below); second it is broader in that it does not take a stance on whether paradigms are part of native speaker knowledge (another point to be discussed below).
To give an impression of how the discussion in the present paper is going to proceed, I pick out two quotations from the literature, which coincide in rendering paradigms superfluous eventually. First, consider \citet{Haspelmath2011} on constructions:
\begin{quote}
Clearly, the form–meaning relationship is often straightforward and compositional, but it is also often more complex. For the latter cases, morphologists have used paradigms and realisation-based rules, and syntacticians have used constructional idioms. The similarity between realisation-based morphology and construction-based syntax has recently been emphasized especially by \citet{Gurevich2006} and \citet{Booij2010}. As far as I have been able to determine, the differences between them mostly derive from different traditions, not from any substantive differences. \citep[59]{Haspelmath2011}
\end{quote}
I conclude from this quotation that paradigms can be rewritten as constructions. Incidentally, the reverse is also true, provided that the definition of paradigm is relaxed to include all sorts of paradigmatic relations in a structuralist sense: imagine a micro-paradigm like \textit{be all ears} vs. \textit{be all eyes}. However, from a constructionist perspective, constructions are needed anyway, so it is the paradigm that one can do without. In practice, the rewriting of paradigms as constructions can be thought of as conceptualising every single cell from a paradigm as the small semiotic entity that it is. For example, \tabref{tab:reiner:1} from German can be rewritten in the manner of \figref{fig:reiner:1}.
\begin{table}
\caption{Present indicative of German \textit{sein} ʻbeʼ\label{tab:reiner:1}}
\begin{tabular}{lll}
\lsptoprule
& Singular & Plural\\\midrule
1\textsuperscript{st} person & bin & sind\\
2\textsuperscript{nd} person & bist & seid\\
3\textsuperscript{rd} person & ist & sind\\
\lspbottomrule
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width=\textwidth]{figures/Reiner1.pdf}
\caption{Constructions instead of a paradigm\label{fig:reiner:1}}
\end{figure}
The rewriting can also be accomplished in a less space-consuming way (cf. \sectref{reiner:2.1}.); however I use a partly Saussurean way of presentation here in order to highlight the fact that every cell is a sign. Needless to say, these signs (=\,constructions) may form networks among themselves, which in turn might equal paradigms. Crucially however, paradigms are no more than emergent from this perspective.
This coincides surprisingly well with the view on paradigms following from an otherwise very different theoretical perspective, i.e. from the perspective of Distributed Morphology (henceforth referred to as \textit{DM}). Consider the following footnote in a recent paper.
\begin{quote}
[…], we use paradigms only for representational issues. As is well-known, in DM, paradigms are epiphenomenal. […] \citep[473]{PominoPomino2019}
\end{quote}
Thus, paradigms are not rejected as such but they are rejected as parts of mental grammar (which is the only area of interest in DM). The most detailed argument to that effect might be found in \citet{Bobaljik2002}. More recently, David \citet{Embick2015} even managed to write a book-long introduction to DM without even using the term \textit{paradigm}(\textit{atic}) beyond the bibliography and one footnote \citep[232]{Embick2015}. These researchers’ view on paradigms derives directly from the architecture of grammar assumed in DM, which will be laid out in \sectref{reiner:2.2} of the present paper.
Summarizing for now, the theoretical significance of paradigms may be seriously called into question from otherwise very different theoretical perspectives. The present contribution aims to portray these positions in some depth (\sectref{reiner:2}), explore a common defence strategy applied by Autonomous Morphology (\sectref{reiner:3}), and eventually come up with one task in linguistic theorising that only paradigms appear to fulfil directly, i.e. delimiting recursion (\sectref{reiner:4}). Hence, the key term \textit{recursion} will not reappear until the last section.
\section{No morphology, no paradigms} \label{reiner:2}
\subsection{From a construction-oriented perspective} \label{reiner:2.1}
If the introductory example had not been from German but from Turkish or another language with considerably less fusion and suppletion than German (\citealt[4]{AikhenvaldDixon2017}, \citealt[334]{Dressler1985} but also cf. \citealt{Bacanlı2011}), both the paradigmatic as well as the constructional notation would have displayed forms that are systematically segmentable to a large extent. So the question arises whether such forms should be described by rules rather than by paradigms or constructions. The present section aims at answering this question from a constructionist perspective. To anticipate the answer, compositional word forms are to be described by constructions like compositional syntax is to be described by constructions.
\begin{sloppypar}
For a start, consider the Turkish example to which \citet[59]{Haspelmath2011} alludes, stemming from \citet[403]{Hankamer1989}: one Turkish verb root yields 1,830,248 different forms when counting (what would traditionally be called) inflection and (what would traditionally be called) derivation, not even allowing for iterations. Obviously, this is too much to write down in paradigms. And things get worse when iterations are eventually taken into account. \REF{ex:reiner:1} provides an example.
\end{sloppypar}\largerpage[-1]
\ea \label{ex:reiner:1}
{Turkish (\citealt[396, emphasis added]{Hankamer1989})}\\
\gll daya {}-n {}-ış \textit{{}-tır} \textit{{}-t} {}-ıl {}-a {}-mı {}-yabil {}-ecek {}-ti {}-k\\
prop\_up \textsc{rfl} \textsc{rcp} \textit{\textsc{caus}} \textit{\textsc{caus}} \textsc{pass} \textsc{pot}1 \textsc{neg} \textsc{pot}2 \textsc{asp} \textsc{tns} \textsc{agr}{\footnotemark}\\
\glt ‘we might not have been able to be made to make someone else practice mutual aid’
\z
\footnotetext{Abbreviations (in order of occurrence): \textsc{rfl} – reflexive, \textsc{rcp} – reciprocal, \textsc{caus} – causative, \textsc{pass} – passive, \textsc{pot1} – first potential, \textsc{neg} – negation, \textsc{pot2} – second potential, \textsc{asp} – aspect, \textsc{tns} – tense, \textsc{agr} – agreement.}
When, starting from \REF{ex:reiner:1}, one tries to imagine all other possible combinations of affixes, this example gives an impression of the enormous size and systematicity any Turkish verbal paradigm would have. So using paradigms just does not seem to make sense for Turkish verbs.\footnote{The same holds for other categories in the language. For example, when \citet[311--320]{Kornfilt1997} presents demonstrative and interrogative pronouns, she does spell out quite a few paradigms but only in order to illustrate the high degree of systematicity with which the pronominal items are followed by separate suffixes for number and case.} Are things different when counting exclusively inflection? Yes and no. Yes, when only counting those forms that are usually considered inflectional in the language (i.e. those expressing aspect, tense, passive, mood, agreement, and negation) then the total number amounts to 576 (\citealt{Kornfilt1997}, my count), which does exceed the number for, e.g., Latin easily (up to 120, \citealt[396, my count]{Matthews1972}) but still seems to be manageable. Now, writing down the (reduced) paradigm still appears to be pointless as the realisation of the cells is largely predictable, quite different from the situation in Latin. As an example, consider the future and past forms of Turkish \textit{yapmak} ʻdoʼ in \tabref{tab:reiner:2}.\largerpage[-2]
\begin{table}
\caption{Future and past forms of Turkish \textit{yapmak} ʻdoʼ (\citealt{Kornfilt1997}, Ch. 2.1.3)\label{tab:reiner:2}}
\begin{tabular}{lll}
\lsptoprule
& {\textsc{fut}} & {\textsc{pst}}\\\midrule
{\textsc{1sg}} & yap-acağ-ım & yap-tı-m\\
{\textsc{2sg}} & yap-acak-sın & yap-tı-n\\
{\textsc{3sg}} & yap-acak & yap-tı\\
{\textsc{1pl}} & yap-acağ-ız & yap-tı-k\\
{\textsc{2pl}} & yap-acak-sınız & yap-tı-nız\\
{\textsc{3pl}} & yap-acak-lar & yap-tı-lar\\
\lspbottomrule
\end{tabular}
\end{table}
The root (\textit{yap}) stays the same throughout, the tense suffixes (-\textit{AcAK} and -\textit{DI} respectively) merely undergo phonological alternations,\footnote{In particular, note that the change from <k> to <ğ>, i.e. from [k] to \textrm{∅} (with a lengthening of the previous vowel), appears to be purely phonologically conditioned \citep[13–14, 91]{Ketrez2012}.} and the agreement suffixes are clearly separable. In fact, the paradigm above may be replaced by two simple instructions, cf. \REF{ex:reiner:2}.
\ea \label{ex:reiner:2}
{Turkish}\\
\ea
\{root; \textsc{tns}:\textsc{fut}\} ${\Rightarrow}$ root-\textit{AcAK}{}-\textsc{agr}1 \\
\ex \{root; \textsc{tns}:\textsc{pst}\} ${\Rightarrow}$ root-\textit{DI}{}-\textsc{agr}2
\z \z
Read: for expressing (an instantiation of) the meaning on the left hand side in Turkish, use (an instantiation of) the form on the right hand side. The manner of notation is equivalent to the graphical way of representation in \figref{fig:reiner:1}. Note that the instructions can be read as either rules or constructions, which is, in fact, not a contradiction \citep[123--124]{Rostila2011}. However, since I am portraying constructionist thinking here, the natural choice is for constructions.
Admittedly, the existence of more than one set of agreement forms in Turkish introduces some irregularity into the picture, especially since, in total, four such sets are posited \citep[382]{Kornfilt1997}. Kornfilt refers to these sets as paradigms; in fact, this is one of the few places in the book where she uses the term at all. Although this usage is not in line with the definition adopted in the present paper – where paradigms are not about morphemes but about whole word forms – it already indicates that the purview of what we call paradigms is the moderately irregular. This becomes more tangible when we shift from Turkish to Latin. Consider \tabref{tab:reiner:3}, which is an (approximate) translation of \tabref{tab:reiner:2}.
\begin{table}
\caption{Future and past (“perfect”) forms of Latin \textit{facere} ʻdoʼ \citep[Ch. 7]{Panhuis2009}\label{tab:reiner:3}}
\begin{tabular}{lllllll}
\lsptoprule
& {\textsc{1sg}} & {\textsc{2sg}} & {\textsc{3sg}} & {\textsc{1pl}} & {\textsc{2pl}} & {\textsc{3pl}}\\
\midrule
{\textsc{fut}} & faciam & facies & faciet & faciemus & facietis & facient\\
{\textsc{pst}} & feci & fecisti & fecit & fecimus & fecistis & fecerunt\\
\lspbottomrule
\end{tabular}
\end{table}
Instead of two clearly distinguished tense suffixes we find merely one (-\textit{e} in the future) plus a root vowel change between the two tenses (\textit{fac}{}- > \textit{fec}{}-). Moreover, the future marker -\textit{e} becomes -\textit{a} in the 1\textsuperscript{st} person singular and in all future forms an -\textit{i} slips in between the root and tense. Now, this appears to be exactly the kind of situation for which paradigms have been invented in the first place: they provide an economic way to write down the unpredictable (e.g., the -\textit{i}) coupled directly with the predictable (e.g., 1\textsc{pl} -\textit{mus} but also \textit{f}V\textit{c}{}-). To this extent, paradigms are a convenient analytical tool and at the same time a concise format for instructed L2-acquisition.
However, paradigms are not the only option. In order to capture the unpredictable as well as the predictable in a very general fashion, Construction Grammarians have long developed other means, i.e. constructions. Crucially, these are not only meant for syntax but for the whole syntax-lexicon continuum, including words and even morphemes \citep[5]{Goldberg2006}. For example, the rules/constructions chosen in \REF{ex:reiner:2} above for the Turkish data from \tabref{tab:reiner:2} can be transferred to the Latin data from \tabref{tab:reiner:3}, cf. \REF{ex:reiner:3}.
\ea \label{ex:reiner:3}
{Latin}\\
\ea \{ʻdoʼ; \textsc{ps}:1, \textsc{num}:\textsc{sg}, \textsc{tns}:\textsc{fut}\} ${\Rightarrow}$ \textit{fac}{}-\textit{i}{}-\textit{a}{}-\textsc{agr}1
\ex \{ʻdoʼ; \textit{not} (\textsc{ps}:1 \& \textsc{num}:\textsc{sg}), \textsc{tns}:\textsc{fut}\} ${\Rightarrow}$ \textit{fac}{}-\textit{i}{}-\textit{e}{}-\textsc{agr}1
\ex \{ʻdoʼ; \textsc{tns}:\textsc{pst}\} ${\Rightarrow}$ \textit{fec}{}-\textsc{agr}2
\z\z
Admittedly, these are three lines for one verb instead of two lines for a whole range of verbs like in \REF{ex:reiner:2}. We cannot even generalise from \textit{facere} to all fifth-conjugation verbs, since not all of them show the root-vowel change, e.g., \textit{cupere} ʻdesireʼ, \textit{fugere} ʻfleeʼ, \textit{rapere} ʻplunder, seizeʼ \citep{BennettEdwin1918, GreenoughEtAl}.\footnote{Note that the fifth conjugation is also called third-\textit{io} conjugation.} However, in principle, the notation is possible. It is like listing idioms, complete with their schematic parts. In contrast, the same kind of notation for Turkish was more like stating syntactic rules or writing down highly abstract constructions. The crucial point is that the same kind of notation is apt for both types of data.
In this sense, constructions can replace paradigms: paradigm cells \textit{are} constructions. In order to elaborate on this idea, I am going to discuss additional examples in the following paragraphs. Most of the examples are adopted from the literature referenced in \citet[58--59]{Haspelmath2011}, i.e. from \citet{Spencer2001}, \citet{Gurevich2006}, and \citet{Booij2010}. Importantly, the first one of these authors, i.e. Andrew Spencer, does uphold the paradigm as a theoretically relevant notion. He belongs to a school of thought which not only holds that there are genuinely morphological phenomena (not reducible to something else, especially syntax) but also maintains that these phenomena can be described best by using paradigms. This school of thought seems to thrive especially within the Surrey Morphology Group and is called \textit{Autonomous Morphology} here.\footnote{Accordingly, I will call the practitioners \textit{Autonomous Morphologists}, accepting the bracketing paradox. The classic reference is \citet{Aronoff1994}; later publications include \citet{MaidenEtAl2011}. Also Stump’s Paradigm Function Morphology belongs here \citep{Stump2016}; however I will not treat this theory in any detail in the present paper since this would require another introduction (in addition to the one to DM).} In any case, what \citet{Spencer2001} presents are, at the same time, realisations of paradigm cells \textit{and} constructions. One of my main aims will be to demonstrate in detail how his examples can indeed be written down as constructions; a fact whose further theoretical consequences will be explored in \sectref{reiner:3}. By contrast, Olga Gurevich as well as Geert Booij explicitly opt for a purely constructionist approach to morphology with paradigms being merely emergent. As a whole, the following paragraphs may be read as a fleshing out of Haspelmath’s rather brief remarks on the equivalence of realisational morphology and constructionist syntax, partially quoted above in the introduction (\sectref{reiner:1}). Having said this, Haspelmath himself does not explicitly state that his observations render the traditional paradigm superfluous; rather this is the conclusion that I have drawn above (in particular with respect to the examples from Turkish and Latin) and that I will substantiate in the course of the following discussion.
First, consider some examples based on \citet{Spencer2001}, starting with an extended version of his example for cumulative exponence.
\ea \label{ex:reiner:4}
{Spanish (based on \citealt[285]{Spencer2001} and \citealt[170--172]{ButtEtAl2019})}\\
\ea \{ʻsingʼ; \textsc{ps}:1, \textsc{num}:\textsc{sg}, \textsc{tns}:\textsc{prs}, \textsc{mood}:\textsc{indic}\} ${\Rightarrow}$ \textit{canto} \label{ex:reiner:4a}
\ex \{ʻsingʼ; \textsc{ps}:1, \textsc{num}:\textsc{sg}, \textsc{tns}:\textsc{pst}, \textsc{mood}:\textsc{indic}\} ${\Rightarrow}$ \textit{canté} \label{ex:reiner:4b}
\ex \{ʻsingʼ; \textsc{ps}:1, \textsc{num}:\textsc{sg}, \textsc{tns}:\textsc{prs}, \textsc{mood}:\textsc{sbjv}\} ${\Rightarrow}$ \textit{cante} \label{ex:reiner:4c}
\ex \{ʻsingʼ; \textsc{ps}:1, \textsc{num}:\textsc{sg}, \textsc{tns}:\textsc{impf}, \textsc{mood}:\textsc{indic}\} ${\Rightarrow}$ \textit{cantaba} \label{ex:reiner:4d}
\ex \{ʻsingʼ; \textsc{ps}:2, \textsc{num}:\textsc{sg}, \textsc{tns}:\textsc{impf}, \textsc{mood}:\textsc{indic}\} ${\Rightarrow}$ \textit{cantabas} \label{ex:reiner:4e}
\z \z
Here, the predictable part is the structure [\textit{cant}-\,+\,X] and the unpredictable part is whether and how the respective feature values are expressed cumulatively, i.e. together in one morph. For example, the -\textit{o} in context \REF{ex:reiner:4a} \textit{canto} appears to realise 1\textsuperscript{st} person, singular, present, indicative (and active) all at once, while the -\textit{abas} in \REF{ex:reiner:4e} \textit{cantabas} might be split into a thematic vowel (\nobreakdash-\textit{a}\nobreakdash-), an imperfective past tense marker for the relevant inflectional class (\nobreakdash-\textit{ba}\nobreakdash-), and an exponent of agreement (\nobreakdash-\textit{s}). So while the latter form seems to be compositional and apt for a non-constructionist morpheme-by-morpheme description, the former escapes such a description (provided that we try to avoid null elements). Here, the holistic pairing of form and meaning, i.e. the conception as a construction, presents itself as the only option as opposed to an incremental approach. Accordingly, the format of presentation chosen in \REF{ex:reiner:4} is simply the same one as in \REF{ex:reiner:2} and \REF{ex:reiner:3} above: \{\textsc{meaning}\} ${\Rightarrow}$ \textsc{form}.
Since this format is equally apt for forms with a higher degree of compositionality (cf. the discussion above), it has been chosen in \REF{ex:reiner:4} throughout, even for \REF{ex:reiner:4e} \textit{cantabas}. However, please note that on closer inspection not even this form meets the agglutinating (Turkish-style) ideal: in contrast to the other feature values, indicative mood is not signalled by a dedicated suffix but has to be inferred from the fact that \textit{cant}{}- belongs to the -\textit{ar} inflectional class, which would have -\textit{e} as its thematic vowel in the present subjunctive. So this, like \REF{ex:reiner:4a} above, is a situation we would usually describe by putting the form as a whole into a paradigm cell – while it can be captured equally well by setting up a construction.
Next is \citegen{Spencer2001} example for extended exponence, again written down as a construction here.
\ea \label{ex:reiner:5}
{Spanish \citep[286]{Spencer2001}}\\
\glt \{ʻeatʼ; \textsc{ps}:1, \textsc{num}:\textsc{sg}, \textsc{tns}:\textsc{impf}, \textsc{mood}:\textsc{indic} \} ${\Rightarrow}$ \textit{comía}
\z
According to Spencer, the feature value imperfective is signalled twice within this word form: by the imperfective marker for verbs of the \nobreakdash-\textit{er} inflectional class, i.e. -\textit{í}, as well as by the \nobreakdash-\textit{a}, since the latter is a first person singular marker only in the imperfective (provided that the \nobreakdash-\textit{a} in the present subjunctive form \textit{coma} is a thematic vowel, not a person\slash number suffix). This is extended exponence: the marking of one meaning extends over more than one morph. Again, this is a situation that is a) hard to capture by an incremental approach, b) traditionally captured by drawing a paradigm, and c) equally well captured by writing down a construction.
The same is true for Spencer’s examples of zero exponence, e.g. \REF{ex:reiner:6}.
\ea \label{ex:reiner:6}
{Latvian (\citealt[286]{Spencer2001}, \citealt[542]{FennellGelsen1980})}\\ \ea
\label{ex:reiner:6a}
\textup{\{ʻtravel, driveʼ;} \textsc{ps}\textup{:2,} \textsc{num}\textup{:}\textsc{sg}, \textsc{tns}\textup{:}\textsc{prs}\textup{\} ${\Rightarrow}$} \textit{\textbf{brauc}}\\
\ex \label{ex:reiner:6b} \textup{\{ʻtravel, driveʼ;} \textsc{ps}\textup{:2,} \textsc{num}\textup{:}\textsc{sg}, \textsc{tns}\textup{:}\textsc{pst}\textup{\} ${\Rightarrow}$} \textit{brauci}\\
\ex \label{ex:reiner:6c} \{ʻtravel, driveʼ; \textsc{ps}:2, \textsc{num}:\textsc{pl}, \textsc{tns}:\textsc{pst}\} ${\Rightarrow}$ \textit{braucat}\\
\ex \label{ex:reiner:6d} \textup{\{ʻtravel, driveʼ;} \textsc{ps}\textup{:3,} \textsc{num}\textup{:}\textsc{sg}, \textsc{tns}\textup{:}\textsc{prs}\textup{\} ${\Rightarrow}$} \textit{\textbf{brauc}}\\
\ex \label{ex:reiner:6e} \{ʻtravel, driveʼ; \textsc{ps}:3, \textsc{num}:\textsc{sg}, \textsc{tns}:\textsc{pst}\} ${\Rightarrow}$ \textit{brauca}\\
\ex \label{ex:reiner:6f} \textup{\{ʻtravel, driveʼ;} \textsc{ps}\textup{:3,} \textsc{num}\textup{:}\textsc{pl}, \textsc{tns}\textup{:}\textsc{prs}\textup{\} ${\Rightarrow}$} \textit{\textbf{brauc}}\\
\ex \label{ex:reiner:6g} \{ʻtravel, driveʼ; \textsc{imp}\} ${\Rightarrow}$ \textbf{\textit{brauc}}\\
\z\z
What can be seen in \textit{brauc} is essentially what you have to expect for any construction due to its nature as a linguistic sign: polysemy. Compare \REF{ex:reiner:7} to \REF{ex:reiner:8}.
\begin{exe}
\ex \label{ex:reiner:7} \textit{brauc} ${\Rightarrow}$ \{ʻtravelʼ; (\textsc{ps}:2 \textit{or} 3, \textsc{num}:\textsc{sg}, \textsc{tns}:\textsc{prs}) \textit{or} (\textsc{ps}:3, \textsc{num}:\textsc{pl}, \textsc{tns}:\textsc{prs}) \textit{or} \textsc{imp}\}
\ex \label{ex:reiner:8} \textit{drive} ${\Rightarrow}$ \{ʻoperate a vehicle \textit{or} motivate the processʼ\}\\ \citep[38]{RumshiskyBatiukova2008}
\end{exe}
To be sure, ʻ[…] 2\textsuperscript{nd} \textit{or} 3\textsuperscript{rd} person […]ʼ and ʻoperate a vehicle \textit{or} motivate the processʼ represent very different kinds of meanings (in constructionist terms: a finite verb represents a partly schematic construction while a lexical entry represents a substantive construction, \citealt[2]{HoffmannHoffmann2013}). However, the underspecification in both examples can be neatly captured by writing them down as constructions. So, ironically, a constructionist analysis can do something that also DM strives for (albeit in a different way, cf. \sectref{reiner:2.2}): treating syncretisms\footnote{Syncretism = in a given context two feature values are not overtly distinguished although they are overtly distinguished in another context in the same language (adopted from \citealt[96]{Kramer2016}).\label{fn:reiner:7}} as cases of underspecification.
Please note that, strictly speaking, the constructions presented so far are only halves. To be full constructions, they would need a double arrow, signalling that not only the respective meaning triggers the respective form but also the other way round. However, against the background of syncretisms as treated above, it is clear that the back arrow would require a more complete picture of the languages at hand than can be given here. For example, after ensuring that through the entire verbal paradigm of Spanish the form \textit{canto} (without stress on the final vowel) is really only 1\textsuperscript{st} person, singular, present, indicative (and active), \REF{ex:reiner:4a} could be rewritten as \REF{ex:reiner:9}.
\ea \label{ex:reiner:9}
{Spanish (\citealt[170--172]{ButtEtAl2019})}\\
\textup{\{ʻsingʼ;} \textsc{ps}\textup{:1,} \textsc{num}\textup{:}\textsc{sg}, \textsc{tns}\textup{:}\textsc{prs}, \textsc{mood}\textup{:}\textsc{indic}\textup{\} $\leftrightarrow $} canto
\z
Let me add another word on modes of presentation. When I refrain from adding morpheme boundaries and associating the resulting units with individual meanings, I take the following passage from \citet{Spencer2001} seriously.
\begin{quote}
[…], in a sense it’s a mistake to speak of meanings being concentrated in one morph or spread across several morphs or realized by zero morphs. \citep[287]{Spencer2001}
\end{quote}
So in my running text wordings like “the \nobreakdash-\textit{o} in context \REF{ex:reiner:4} \textit{canto} appears to realise […]” should be taken with a grain of salt: from the constructionist perspective, which I am portraying here, it is not necessarily the \nobreakdash-\textit{o}, not even the \nobreakdash-\textit{o} in a certain context, but \textit{canto} as a whole that has some meaning to begin with. Similarly, if the instructions for Turkish in \REF{ex:reiner:2} above are read as constructions, the hyphens indicate what have turned out to be internal semantic entities (probably aligning with internal distributional entities); however neither Autonomous Morphology (based on paradigms) nor Construction Grammar (with emergent paradigms) depends on the presence of any semantic parts below the level of the word form.\footnote{This does not only hold for inflection but also for word formation, cf. \citet[428]{Booij2016}.}
Before turning to his main example – auxiliary structures in Slavic – \citet[287]{Spencer2001} mentions a last general group of examples in favour of the realisational approach to morphology: meaningless morphemes. More specifically, he judges Spanish \nobreakdash-\textit{ar} to be a case in point: in one class of verbs this element appears after the root in all three: the infinitive, the future, and the conditional. I am not sure whether we are dealing with a case of polysemy rather than meaning\textit{less}ness here. However, a more obvious example may be found in the diachrony of German vs. Dutch:
\ea \label{ex:reiner:10}
{From Middle High German to New High German \citep[200–201]{Roberge1985}}\\
leb\textit{e}te > lebte \textup{ʻlivedʼ}\\
rett\textit{e}te \textup{> *}rett\sout{t}e \textup{but} rett\textit{e}te \textup{ʻrescuedʼ}\\
\ex\label{ex:reiner:11}
{From Middle Dutch to Present-Day Dutch \citep[200–201]{Roberge1985}}\\
redd\textit{e}de > redd\sout{d}e \textup{ʻrescuedʼ}
\z
That is, German retains a thematic vowel where Dutch does not – and this vowel does not appear to have any meaning (anymore), not even indicating inflectional class. Admittedly, the \nobreakdash-\textit{e}\nobreakdash- does distinguish the 3\textsuperscript{rd} person singular past (\textit{rettete}, /ʁɛtətə/) from the 1\textsuperscript{st} person singular present (\textit{rette}, /ʁɛttə/ = /ʁɛtə/). However, distinguishing between meanings is not the same thing as having a meaning (recall the classical definition of phonemes vs. morphemes). The only “meaning” that could be assigned to the \nobreakdash-\textit{e}\nobreakdash- in \textit{rettete} would be ʻif you have a choice between a 3\textsuperscript{rd}+past reading and a 1\textsuperscript{st}+present reading, choose the formerʼ. This piece of information does not count as a meaning since, as far as I can see, it is not directly evoked by the \nobreakdash-\textit{e}\nobreakdash- in native speakers (let alone vice versa). As a result, we get a meaningless element in an otherwise more or less segmentable string, cf. \REF{ex:reiner:12}.
\ea \label{ex:reiner:12}
{German}\\
\gll rett-e-te\\
rescue-?-1/3\textsc{sg}.\textsc{pst}\\
\glt ʻrescuedʼ
\z
Thus, again a realisational notation appears to be practical. My point is that such a notation does not intrinsically need paradigms but solely constructions, cf. the construction in \REF{ex:reiner:13}.
\ea \label{ex:reiner:13}
{German}\\
\textup{\{ʻrescueʼ;} \textsc{ps}\textup{:1 \textit{or} 3,} \textsc{num}\textup{:}\textsc{sg}\textup{, ((}\textsc{tns}\textup{:}\textsc{pst}, \textsc{mood}\textup{:}\textsc{indic)} \textit{or} \textsc{mood}\textup{:}\textsc{irr}\textup{)\} $\leftrightarrow $} rettete
\z
As an aside, from example \REF{ex:reiner:7} onwards in this paper, one-to-many relations between form and meaning have been treated as cases of polysemy; however, a constructionist account is even able to distinguish between polysemy and homonymy.\footnote{Both of which can be special cases of syncretism as defined in the present paper, cf. fn. \ref{fn:reiner:7}.} Consider \tabref{tab:reiner:4}, of which the light shaded cells are presented as a case of polysemy in \REF{ex:reiner:14} and the dark shaded cells are presented as a case of homonymy in \REF{ex:reiner:15}. The motivation for drawing the distinction is that 1\textsuperscript{st} and 3\textsuperscript{rd} plural share a positive feature value (plural), while 3\textsuperscript{rd} singular and 2\textsuperscript{nd} plural do not; moreover, the former syncretism runs through all verbal forms in German (and extends to the present infinitive) while the latter dissolves in the past tense as well as in the present tense of umlaut verbs.
\begin{table}
\caption{Present indicative of German \textit{kaufen} ʻbuyʼ (\citealt[23]{HelbigHelbig2001})\label{tab:reiner:4}}
\begin{tabular}{lll}
\lsptoprule
& Singular & Plural\\\midrule
1\textsuperscript{st} person & kaufe & \cellcolor{lsLightGray}kaufen\\
2\textsuperscript{nd} person & kaufst & \cellcolor{lsGuidelinesGray}kauft\\
3\textsuperscript{rd} person & \cellcolor{lsGuidelinesGray}kauft & \cellcolor{lsLightGray}kaufen\\
\lspbottomrule
\end{tabular}
\end{table}
\ea \label{ex:reiner:14}
{German}\\
\{ʻbuyʼ; \textsc{ps}:1 \textit{or} 3, \textsc{num}:\textsc{pl}, \textsc{tns}:\textsc{prs}, \textsc{mood}:\textsc{indic}\} $\leftrightarrow $ kaufen\\
\ex \label{ex:reiner:15}
{German}\\
\ea \textup{\{ʻbuyʼ;} \textsc{ps}\textup{:3,} \textsc{num}\textup{:}\textsc{sg}, \textsc{tns}\textup{:}\textsc{prs}, \textsc{mood}\textup{:}\textsc{indic}\textup{\} $\leftrightarrow $} kauft \textit{and}\\
\ex \textup{\{ʻbuyʼ;} \textsc{ps}\textup{:2,} \textsc{num}\textup{:}\textsc{pl}, \textsc{tns}\textup{:}\textsc{prs}, \textsc{mood}\textup{:}\textsc{indic}\textup{\} $\leftrightarrow $} kauft
\z\z
To be sure, technically speaking, the dark shaded cells of \tabref{tab:reiner:4} could also be rendered as a case of polysemy, cf. \REF{ex:reiner:16}.
\ea \label{ex:reiner:16}
{German}\\
\textup{\{ʻbuyʼ; ((}\textsc{ps}\textup{:3,} \textsc{num}\textup{:}\textsc{sg}\textup{) \textit{or} (}\textsc{ps}\textup{:2,} \textsc{num}\textup{:}\textsc{pl})), \textsc{tns}\textup{:}\textsc{prs}, \textsc{mood}\textup{:}\textsc{indic}\textup{\} $\leftrightarrow $} kauft
\z
However, as argued above, in the specific case at hand, other facts from the language cast doubt on this latter analysis. So here I would opt for the former analysis in terms of homonymy rather than for the latter in terms of polysemy. In sum, while both analyses make use of underspecification, they are apt for different kinds of syncretisms.
\begin{sloppypar}
Turning now to Spencer’s main example, auxiliary structures in Slavic, it seems natural that his realisational treatment of not fully compositional word forms is transferred to not fully compositional strings of words that appear to realise similar meanings (e.g., tenses or aspects). It is especially this generalisation that is picked up by \citet[59]{Haspelmath2011} and, among other issues, contributes to Haspelmath’s overall thesis that any universal morphology/syntax boundary is elusive for the time being. Against this background, it will not come as a surprise that auxiliary structures (or periphrases), too, may be written down as constructions, so I will not go through this here. However, please note that \sectref{reiner:4} of the present paper will provide a fundamentally different look at such structures.
Summarising my fleshing out of \citegen{Haspelmath2011} reference to \citet{Spencer2001} for the moment, there are reasons to use realisational rules rather than incremental procedures; however, it does not seem to matter whether the feature specifications are conceived of as a grid, establishing paradigm cells to be realised, or simply as meanings of constructions (accordingly, the sounds/characters materialise a cell or provide the signifiant of a construction). The notation used in \citet{Spencer2001} is, in fact, similar to the one used above. Going one step further, there is a choice between one representational format that is needed anyway from a constructionist perspective, i.e. the construction, and another representational format that is not needed anyway (though it might emerge from applications of the former). Theoretical parsimony requires that we stick to the format needed anyway.
\end{sloppypar}
As a last reflection on \citet{Spencer2001} let me note that the author is perfectly aware of the connection between realisational and constructionist approaches. For example, consider the following quotation on auxiliary structures.
\begin{quote}
We are dealing here with constructional idioms much like phrasal verbs. \citep[283]{Spencer2001}
\end{quote}
In the same vein, consider the following paragraph, where he compares (in\-fer\-en\-tial-realisational) Word-and-Paradigm models (which, needless to say, he favours) to Item-and-Arrangement models.\footnote{Item-and-Arrangement (i.e. incremental), Item-and-Process as well as Word-and-Paradigm are families of morphological theories, for their characteristics cf. \citet{Stewart2016}.}
\begin{quote}
Inferential models can be contrasted with lexical models in which the mapping between form and meaning is specified in the lexical entry of a morpheme. On such a theory, the \nobreakdash-\textit{z}{}-suffix contributes its own [Num:Pl] ʻmeaningʼ to the word form by being combined with the (numberless) form \textit{dog}. In this respect, the plural suffix is a Saussurean sign, a pre-compiled pairing of form and meaning. It is precisely that conception that is denied in paradigmatic and inferential-realisational approaches to morphology. Inferential-realisational models by their nature cannot involve lexically listed mor\-phemes-as-signs. As a consequence, inflectional formatives turn out to be simply morphophonological ʻmarkersʼ on stems, signalling (realising) some subset of the feature set to be expressed. In the simplest cases there is a one:one mapping observable between form and function and the formative then has the appearance of a classical morpheme, but it’s important to realise that this is just one extreme of the form-function mapping (and in inflection a rather rare occurrence at that). (\citealt[280--281]{Spencer2001})
\end{quote}
\begin{sloppypar}
Note the affinity of this approach to morphology with constructionist approaches to syntax: the primary signifiant is the expression as a whole; its subparts have their function within the expression but they do not necessarily have the same function in other environments as well (cf. the reflection on Spanish theme vowels above). A remaining difference seems to be that inferential-realisational morphology still refers to superordinate rules for realisation, e.g.: if there is no dedicated expression for a given combination of feature values then use the default form (\citealt[280, 289]{Spencer2001}). As an alternative to such rules, in Construction Grammar there are, however, relationships between constructions \citep[109]{Goldberg1995}. In fact, also realisational rules can be captured by such relationships, e.g.: given a set of meaningful expressions (i.e. constructions) organised in an inheritance hierarchy, use the one that maximally specifies the meaning you want to express (and the maximally specific construction available might turn out to be rather unspecific). Such a conception, including further developments, is run-of-the-mill in Sign-Based Construction Grammar (\citealt[9--14]{SagEtAl2012}).
\end{sloppypar}\largerpage
In sum, the inspection of \citet{Spencer2001} shows that constructionist thinking is influential in morphology and, from an outside perspective, that constructions might even fully replace paradigms. I can be briefer about the two other works mentioned in \citet[59]{Haspelmath2011} and pursued here, i.e. \citet{Gurevich2006} and \citet{Booij2010}. For what had to be worked out carefully with regard to \citet{Spencer2001} is utterly explicit in these works: Construction Grammar applies to morphology just as it applies to syntax.\footnote{It does become explicit in \citet[84]{Spencer2004}: “Essentially, we need to think of inflected words as akin to constructional idioms”.} So I confine myself here to a central figure from \citet[170]{Gurevich2006} and an exemplary formula from \citet[241]{Booij2010}.
\citet{Gurevich2006} is a book-long treatment of the so-called version in Georgian. This is a pre-radical vowel with a whole range of functions, crucially depending on context (\citealt[6--13]{Gurevich2006}). For example, \nobreakdash-\textit{i}\nobreakdash- as a pre-radical vowel indicates that the indirect object is affected (in the presence of an overt 1\textsuperscript{st} or 2\textsuperscript{nd} person indirect object) or that the subject is affected as a beneficiary (in the absence of an overt indirect object) or that there is no affected participant (in the passive) or that the subject is 1\textsuperscript{st}/2\textsuperscript{nd} person (in the evidential perfect). In a way, this is a scaling-up of what was demonstrated above for thematic vowels in Spanish: a given element in a word form does make a specific contribution to the overall meaning of the word and clause, but in order to identify that contribution, the hearer has to have information far beyond the element as such.
Now, let’s see how \citet[170]{Gurevich2006} combines all these functions of version vowels (of which the example above was but a snippet) into a constructional network. Her presentation is reproduced here as \figref{fig:reiner:2}.
\begin{figure}
\includegraphics[width=.75\textwidth]{figures/Reiner2.pdf}
\caption{Constructional network for Georgian version vowels\linebreak\citep[170]{Gurevich2006}}
\label{fig:reiner:2}
\end{figure}
Syncretisms\footnote{I assume that all of the distinctions are marked overtly elsewhere in the language.} like the one involving \nobreakdash-\textit{i}\nobreakdash- can easily be read off the network.
Having addressed syncretisms (as well as other kinds of non-bijective form-meaning relations) several times in the present paper, I have covered some standard problems for classical Item-and-Arrangement approaches to morphology. Another standard problem from the wealth of non-bijective form-meaning relations, however, has not been stated explicitly so far, i.e. allomorphy. This is tackled by \citegen{Booij2010} last chapter. For example, he gives the following formula for capturing the ablaut in \textit{sing} – \textit{sang}:\largerpage
\begin{center}
[X i Y]\textsubscript{V, [–past]} ${\approx}$ [X a Y]\textsubscript{V, [+past]} \citep[241]{Booij2010}
\end{center}
\noindent Crucially, neither form is derived from the other or from some underlying form; rather, both are stored on an equal footing in the mental lexicon according to \citet[Ch. 10]{Booij2010}.\footnote{I concentrate on this last chapter since the other chapters deal with word formation rather than inflection, whereas the present paper focusses on the latter, to the extent that the difference can be upheld, cf. \citet{Haspelmath2011}.}
In principle, true suppletion \citep{Mel’čuk1994} may be captured the same way as far as I can see, e.g. for the partial paradigm given in \tabref{tab:reiner:1} (viewed synchronically):
\begin{center}
[bi Y]\textsubscript{V, [1/2\textsc{sg}]} ${\approx}$ [is Z]\textsubscript{V, [3\textsc{sg}]} ${\approx}$ [sind]\textsubscript{V, [1/3\textsc{pl}]} ${\approx}$ [sei Z]\textsubscript{V, [2\textsc{pl}]},
\end{center}
\noindent where Z is the regular agreement suffix (phonetically) and Y is either the regular agreement suffix or something else (\citealt[6--7]{Plank2016}). Taxonomically speaking, the formula involves four sister constructions (\citealt[111]{JackendoffAudring2020}).\footnote{Also note the relevance of networks in another recent constructionist approach, i.e. \citet{CrysmannBonami2017}.} The important point about these interconnected constructions is that they can, and do, serve to restate paradigms (\citealt[151--154]{JackendoffAudring2020}). In fact, \citet{JackendoffAudring2020} provide a very nice example of what is going on with paradigms in Construction Morphology (CxM): in a first step, they are acknowledged as central to inflection \citep[133--134]{JackendoffAudring2020} but in a second step, they are restated, one by one, as interconnected constructions (e.g., \citealt[151--154]{JackendoffAudring2020}). This is reminiscent of my argumentation above that cells are in fact constructions and paradigms are (specific) relations between constructions.
This extension of \citet{Booij2010} concludes my fleshing-out of \citegen[58--59]{Haspelmath2011} remark that, essentially, realisational morphology and constructionist syntax do the same thing. To summarise: against the background of strictly agglutinating languages, certain phenomena familiar from fusional languages (fusion as such, syncretism, allomorphy, …) suddenly appear fairly idiosyncratic – just as idioms appear quite idiosyncratic against the background of compositional syntax. For the latter case Construction Grammarians concluded that we had simply put the cart before the horse, i.e. that we have to start conceiving of the whole as primary to its parts – and now their solution is transferred to morphology. For \citet{Haspelmath2011}, this expansion of the constructionist idea raises, together with a range of other considerations, the question of where to draw the line between the two levels of description generally. In any case, traditional inflectional morphology’s favourite format, i.e. the paradigm, seems to be replaceable by constructions, integrated in networks.
We will see in the next section that DM tries to capture exactly the same problematic groups of phenomena but opts for a radically different solution, i.e. the strict separation of meaning and form (at the outset).
\subsection{From a composition-oriented perspective} \label{reiner:2.2}
\subsubsection{A short introduction to DM} \label{reiner:2.2.1}
Distributed Morphology (DM) is a strictly compositional approach to morphology, presupposing GB or minimalist syntax. Its origins lie in \citet{HalleHalle1993}; fairly recent overviews are \citet{Embick2015} and \citet{Bobaljik2017}; for a critical review of the former, cf. \citet{Spencer2019}. The following summary is mainly based on these works; however, occasionally other publications will be referred to as well.
The key feature of DM is not so much the distribution of the morphology as the distribution of the lexicon, which is divided into three lists \citep[28]{Bobaljik2017}. These lists are called A, B, and C below:
\begin{description}
\item[List A:] containing roots, i.e. abstractions over forms, e.g. ${\surd}$\textsc{buy}, as well as feature values, e.g. plural.
\item[List B:] containing vocabulary items, i.e. pairings of phonological form and information about where that form may be inserted (\citealt[4]{HarleyHarley1999}), e.g. /s/ $\leftrightarrow $ [3\textsc{sg}] or /ba\textsc{i}/ $\leftrightarrow $ V.
\item[List C:] containing non-grammatical concepts connected to the roots in list A (\citealt[209--210]{Embick2015}), e.g. there is a concept ʻget for moneyʼ in list C linked to the root ${\surd}$\textsc{buy} in list A. So the purview of list C is handling the idiosyncratic – even up to idioms. For example, if the aforementioned link does not fit the context, another link can be established like ʻaccept as trueʼ~–~${\surd}$\textsc{buy} (cf. \textit{I don’t buy your conclusion}).
\end{description}
These lists are the cornerstones of a derivational model for inflection and word formation. The derivation starts by selecting items from list A that attach to the terminal nodes of an appropriate syntactic representation, yielding an output like ${\surd}$\textsc{buy}-3\textsc{sg}. This output is then handed over to list B where matching phonological strings are inserted into the positions, e.g. /ba\textsc{i}/-/s/. For less straightforward derivations, certain processes directly before or after insertion may be appealed to in addition. One of these options, i.e. impoverishment, will be presented in more detail below. First, let’s visualise the basic architecture and say a few more words about the individual steps of the derivation, in particular about vocabulary insertion. For the latter objective it is suitable to pick a language with more agreement than English, so I chose German and the verb \textit{kaufen} ʻbuyʼ whose past indicative forms are given in \tabref{tab:reiner:5}. Needless to say, the paradigmatic presentation is only for convenience, the forms being represented differently in the visualisation to follow.\largerpage[1.5]
\begin{table}
\caption{Past indicative of German \textit{kaufen} ʻbuyʼ \citep[23]{HelbigHelbig2001}\label{tab:reiner:5}}
\begin{tabular}{lll}
\lsptoprule
& Singular & Plural\\\midrule
1\textsuperscript{st} person & kaufte & kauften\\
2\textsuperscript{nd} person & kauftest & kauftet\\
3\textsuperscript{rd} person & kaufte & kauften\\
\lspbottomrule
\end{tabular}
\end{table}
Now consider \figref{fig:reiner:3}, where \textit{kauften} \{ʻkaufenʼ, \textsc{past}, 3\textsc{pl}\} is derived.\footnote{Note that the corresponding present tense example would be more difficult to handle \citep[231]{Spencer2019}.}
\begin{figure}
% \begin{forest}
% [IP,name=IP
% [~]
% [I$'$
% [VP
% [~]
% [V$^0$
% [o, name=o1]
% ]
% ]
% [I$^0$
% [T$^0$
% [o, name=o2]
% ]
% [AGR$^0$
% [o, name=o3]
% ]
% ]
% ]
% ]
% \node(listA)[above=2cm of IP,xshift=-2cm]{List A: };
% \node(kauf)[right=0mm of listA,draw,dashed,rectangle,rounded corners,inner sep=1mm]{$\sqrt{\text{KAUF}}$};
% \node(rest)[right=0mm of kauf]{, $\sqrt{\text{LAUF}}$,$\sqrt{\text{HUND}}$,$\sqrt{\text{GUT}}$, \ldots\\};
% \node(tensepast)[below=2mm of kauf,draw,dashed,rectangle,rounded corners,inner sep=1mm]{\footnotesize [\textsc{tense}:\textsc{past}]};
% \node(person12)[right=0mm of tensepast]{\footnotesize , [\textsc{person}:1], [\textsc{person}:2], };
% \node(person3numpl)[right=0mm of person12,draw,dashed,rectangle,rounded corners,inner sep=1mm]{\footnotesize [\textsc{person}:3], [\textsc{num}:\textsc{pl}]};
% \node(genf)[right=0mm of person3numpl]{\footnotesize , [\textsc{gen}:\textsc{f}]};
% %
% \node(vkaufpast3pl)[below = 9cm of kauf]{$\sqrt{\text{KAUF}}$-\textsc{past}-3-\textsc{pl}};
% \node(listB)[below = 3cm of vkaufpast3pl,xshift=-2cm]{List B:};
% \node(kaOf)[right=0mm of listB]{/kaOf/,};
% \node(past-t@)[right=0mm of kaOf]{[\textsc{past}] $\leftrightarrow$ /t@/,};
% \node(2pl-t)[right=0mm of past-t@,text width=2.5cm,text align=center]{[2\textsc{pl}] $\leftrightarrow$ /t/, \dots};
% \node(listC)[right=0mm of 2pl-t]{List C: img img img};
% \node(remainder)[below=0mm of 2pl-t,text width=2cm]{[\textsc{pl}] $\leftrightarrow$ /n/\\
% {}[\textsc{2}] $\leftrightarrow$ /st/\\
% {}[] $\leftrightarrow$ /$\empty$/};
% \draw[->](o2)--(vkaufpast3pl);
% \draw(kaOf) -- node[above,sloped] {\footnotesize match} (vkaufpast3pl);
% \draw(past-t@)--node[above,sloped] {\footnotesize match} (vkaufpast3pl);
% \draw(2pl-t.north)--node[above,sloped] {\footnotesize 1st trial no match}(vkaufpast3pl);
% \draw(remainder.north)--node[below,sloped] {\footnotesize 2nd trial match}(vkaufpast3pl);
% %
% \path[dashed](o1)edge [bend left=70,in=40] (kauf.north);
% \path[dashed](o2)edge [bend left=70] (tensepast);
% \draw[dashed](o3.east)edge [bend right=40](person3numpl);
% \path[gray](listC.east) edge [out=30,in=30] (kauf.north);
% \end{forest}
\includegraphics[width=\textwidth]{figures/Reiner3.pdf}
\caption{Example derivation of \textit{kauften} \{ʻkaufenʼ, \textsc{past}, 3\textsc{pl}\}, showing the basic DM-architecture (\citealt[6]{Bobaljik2017}, \citealt[248--250]{AlbrightAlbright2012})}
\label{fig:reiner:3}
\end{figure}
In list A it can be seen that there are several roots, including ${\surd}$\textsc{kauf}, as well as several feature values, including past, 3\textsuperscript{rd} person, and plural. Crucially, the roots are neither concepts (list C) nor forms (list B). As to the feature values, I have taken the liberty to write them down as such, i.e. [\textsc{feature}:\textsc{value}], whereas it seems more common in DM to write the values only and refer to them as \textit{features}. In any case, this is the sole kind of structure (apart from the distinction between roots and feature values) that enters list A. Otherwise this list is meant to be an unstructured cloud. In particular, the individual items are not pre-arranged into paradigms.
The next step in \figref{fig:reiner:3} is the selection of a root and several features, all of which attach to the terminal nodes of the syntactic tree. From my perspective as an outsider to the theory, a mystery is connected with this step: what kind of mechanism, apart from communicative intent, determines which feature values are selected? For example, somewhere in the model there must be information on which contexts require which tense-aspect categories to be realised in the language at hand (cf. English \textit{He is eating} vs. German \textit{Er isst}\textsubscript{perfective-or-imperfective}). In any case, the root and the feature values are arranged by the syntax, independently from any phonological information. So the intermediate result is an abstract string like ${\surd}$\textsc{kauf}-\textsc{past}.3\textsc{pl}.
Now, vocabulary insertion can take place. The phonological form /kaɔf/ is one of the forms over which ${\surd}$\textsc{kauf} is an abstraction, so they match (at least this is what I have been able to conjecture; insertion for roots in simple cases is not extensively discussed in the DM-literature). Also for the second part of the abstract string, i.e. \textsc{past}, there is just one matching vocabulary item, i.e. [\textsc{past}] $\leftrightarrow $ /tə/, so this item is inserted. Things get more interesting and more difficult with respect to [3\textsc{pl}]. Several vocabulary items are available for realising AGR and, crucially, these are ordered from most to least specific. Now the mechanism runs through this ordered list and first tries the highest (= most specific) vocabulary item, i.e. [2\textsc{pl}] $\leftrightarrow $ /t/. However, 2\textsuperscript{nd} person does not match 3\textsc{pl}, so this item is discarded. The second highest item is [\textsc{pl}] $\leftrightarrow $ /n/, which does match 3\textsuperscript{rd} person plural (i.e.: there is no contradiction). So this item is inserted. Crucially, the same item matches 1\textsc{pl}, hence it would likewise be inserted in the derivation of \textit{kauften} \{ʻkaufenʼ, \textsc{past}, 1\textsc{pl}\}.
What can be seen here are two uses of underspecification at once. First, it serves to order the list in such a way that, by browsing the list downward, insertion may operate on the Paninian Principle (=~Elsewhere Condition~=~Subset Principle). That is: given two rules, where one is more specific than the other, application of the more specific one – here: insertion of the more specific item – blocks application of the less specific one (for an overview on the principle cf. \citealt[132]{Anderson1992}). Put differently: the most general rule – here: insertion of the least specific item – is the default and whenever there are more specific rules/items available, the most specific one of them will overwrite the default. The “only” job of the data-tackling linguist is to find the most economic list for the data set at hand. With respect to the German past indicative forms, I adopted the list from \citet[6]{Bobaljik2017}, although from a German Linguistics point of view it might be debatable whether the schwa belongs to the tense or agreement suffix (pro Bobaljik’s solution: the verb \textit{tu\sout{e}n} ʻdoʼ). Moreover, as far as I can see, the order of [\textsc{pl}] $\leftrightarrow $ /n/ and [2] $\leftrightarrow $ /st/ is arbitrary since neither of them is more specific than the other.
The second, ensuing, use of underspecification here is modelling syncretism. As mentioned above, the vocabulary item [\textsc{pl}] $\leftrightarrow $ /n/, matching 3\textsc{pl} as well as 1\textsc{pl,} may be inserted into both potential strings, ${\surd}$\textsc{kauf}-\textsc{past}.\textit{3}\textsc{pl} (cf. \figref{fig:reiner:3}) as well as ${\surd}$\textsc{kauf}-\textsc{past}.\textit{1}\textsc{pl}. Likewise, when we derive the realisation of ${\surd}$\textsc{kauf}-\textsc{past}.3\textit{\textsc{sg}} or ${\surd}$\textsc{kauf}-\textsc{past}.1\textit{\textsc{sg}}, the mechanism runs through the list, discards [2\textsc{pl}] $\leftrightarrow $ /t/, [\textsc{pl}] $\leftrightarrow $ /n/, and [2] $\leftrightarrow $ /st/ as not matching and ultimately reaches the least specific pairing [] $\leftrightarrow $ ∅. So for both persons, AGR is realised as null. Please note that this way systematic syncretisms may be captured without referring to paradigms.\footnote{The 1/3 conformity really is a syncretism in the sense of fn. \ref{fn:reiner:7} since the two person values are distinguished overtly elsewhere in the language, viz. in the singular of the present tense (\textit{kaufe} vs. \textit{kauft}).}
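To make the procedure more tangible, the ordered search just described can be simulated in a few lines of code. The following Python snippet is merely an illustration of the Subset Principle at work; the data structure, the feature names, and the helper function are my own ad hoc choices and not part of any DM formalism.
\begin{verbatim}
# Ad hoc sketch of specificity-ordered vocabulary insertion for the
# AGR slot of the German past indicative (item list adopted from the
# text; feature names and helper are illustrative only).

AGR_ITEMS = [
    # (features realised, exponent), ordered from most to least specific
    ({"ps": 2, "num": "pl"}, "t"),   # [2pl] <-> /t/
    ({"num": "pl"},          "n"),   # [pl]  <-> /n/
    ({"ps": 2},              "st"),  # [2]   <-> /st/
    ({},                     ""),    # []    <-> null (default)
]

def insert_agr(target):
    """Insert the first (= most specific) item whose feature
    specification is a subset of the target feature bundle."""
    for features, exponent in AGR_ITEMS:
        if all(target.get(k) == v for k, v in features.items()):
            return exponent
    raise ValueError("no matching vocabulary item")

# [pl] <-> /n/ matches 3pl and 1pl alike: syncretism by underspecification
print(insert_agr({"ps": 3, "num": "pl"}))  # -> 'n'   (kauft-e-n)
print(insert_agr({"ps": 1, "num": "pl"}))  # -> 'n'   (kauft-e-n)
# 1sg and 3sg fall through to the least specific pairing [] <-> null
print(insert_agr({"ps": 3, "num": "sg"}))  # -> ''    (kauft-e)
print(insert_agr({"ps": 2, "num": "sg"}))  # -> 'st'  (kauft-e-st)
\end{verbatim}
\noindent The returned exponent is then concatenated with the exponents already inserted for the root and for \textsc{past}, yielding /kaɔf/-/tə/-/n/ in the case at hand.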
However, underspecification is not the only way to model syncretism in DM (although the most desirable one according to \citealt[253]{Harley2008}). In particular, syncretisms that appear to be even more systematic than the ones treated above may be modelled by a process called impoverishment. This is the deletion of features/feature values in the abstract string that has been issued from the syntax and awaits vocabulary insertion \citep[139]{Embick2015}. For example, German determiners distinguish three gender values in the nominative singular (nominative: \textit{der}, \textit{die}, \textit{das}) but neutralise this distinction in the nominative plural in favour of the feminine form (nominative: \textit{die}),\footnote{Cf. \citet[214]{HelbigHelbig2001}. Note that the following analysis works only if \textit{die} is viewed as feminine per se (\citealt{Meinunger2017}, but cf. \citealt[291--292]{Leiss1994} on why this feature value has little to do with femininity as such). For relevant considerations cf. also \citet[184]{Kramer2019}.} in accordance with Greenberg’s well-known universal no. 37. So all three output strings, ${\surd}$\textsc{d}-\textsc{nom}.\textit{\textsc{masc}}.\textsc{pl}, ${\surd}$\textsc{d}-\textsc{nom}.\textit{\textsc{fem}}.\textsc{pl}, and ${\surd}$\textsc{d}-\textsc{nom}.\textit{\textsc{neutr}}.\textsc{pl} have to yield \textit{die} in the end; likewise for the other case values (\textit{derer}/\textit{n}, \textit{den}, \textit{die}). In this situation it seems reasonable to delete the gender value from the string altogether. A rule expressing this is shown in \REF{ex:reiner:17} for the example at hand.
\ea \label{ex:reiner:17}
[+\textsc{masc}/\textsc{fem}/\textsc{neutr}] ${\Rightarrow}$ \textsc{fem}/\_\textsc{pl}\\
read: any gender value is set to the feminine under the condition of a plural environment
\z
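Again purely for illustration, the effect of rule \REF{ex:reiner:17} can be sketched as a rewrite of the feature bundle before vocabulary insertion; the little function below is my own ad hoc rendering, with invented feature names, and not a piece of DM machinery.
\begin{verbatim}
# Ad hoc sketch of impoverishment as a rewrite of the feature bundle
# prior to vocabulary insertion, following rule (17): any gender value
# is set to feminine in a plural environment.

def impoverish(node):
    """Neutralise gender on a plural determiner node before insertion."""
    if node.get("num") == "pl" and "gen" in node:
        node = dict(node, gen="fem")   # [+masc/fem/neutr] => fem / _ pl
    return node

print(impoverish({"root": "d", "case": "nom", "gen": "masc", "num": "pl"}))
# -> {'root': 'd', 'case': 'nom', 'gen': 'fem', 'num': 'pl'}
\end{verbatim}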
Having addressed syncretism, which is handled by underspecification or impoverishment in DM, I have to touch upon allomorphy, too, since these two types of phenomena – syncretism and allomorphy – appear to be the standard testing ground for any morphological theory. Allomorphy then can be handled by adding conditions on vocabulary items \citep[169]{Embick2015}. For example, the German vocabulary item [\textsc{past}] $\leftrightarrow $ /tə/ works well in a derivation like the one in \figref{fig:reiner:3} above but would yield ungrammatical forms for strong verbs, e.g. the past 3\textsuperscript{rd} person plural of \textit{gehen} ʻwalkʼ would be wrongly predicted as */gehtən/. In order to derive the correct form /g\textsc{i}ŋən/ we may use the adopted vocabulary item in \REF{ex:reiner:18}.
\ea \label{ex:reiner:18}
/g\textsc{i}ŋ/ $\leftrightarrow $ ${\surd}$\textsc{geh}/\_\textsc{past}\\
read: /g\textsc{i}ŋ/ is the form to be inserted for ${\surd}$\textsc{geh} under the condition of a past environment
\z
Additionally, the insertion of [\textsc{past}] $\leftrightarrow $ /tə/ has to be suspended in some way or other. A remaining problem of this analysis is how to supply the schwa of /g\textsc{i}ŋən/ when assuming \citegen{Bobaljik2017} segmentation.
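For illustration once more, the conditioned vocabulary item in \REF{ex:reiner:18} can be sketched as a root-insertion list in which a contextually restricted item takes precedence over an elsewhere item. The sketch below is my own ad hoc rendering; in particular, the assumption that /geh/ is the elsewhere form of the root is mine and not made explicit in the sources cited above.
\begin{verbatim}
# Ad hoc sketch of contextual allomorphy as in (18): the exponent of a
# root may be conditioned on its environment. Item format and names are
# illustrative only.

ROOT_ITEMS = [
    # (root, context condition or None, exponent), most specific first
    ("GEH",  "past", "ging"),   # /ging/ <-> root GEH / _ PAST, as in (18)
    ("GEH",  None,   "geh"),    # assumed elsewhere form of the root
    ("KAUF", None,   "kauf"),
]

def insert_root(root, context):
    """Return the root exponent, preferring contextually conditioned items."""
    for r, condition, exponent in ROOT_ITEMS:
        if r == root and (condition is None or condition == context):
            return exponent

print(insert_root("KAUF", "past"))  # -> 'kauf'
print(insert_root("GEH",  "past"))  # -> 'ging'
print(insert_root("GEH",  "prs"))   # -> 'geh'
\end{verbatim}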
At this point I have presented a snapshot of DM in action, not even mentioning further processes like fusion under locality conditions, fission, or readjustment (also note that linearisation is not standardly predictable from syntax).\footnote{Fusion = combination of two sister nodes into one, which retains the features of both input nodes but has no internal structure \citep[15]{Bobaljik2017}; mnemonic: fusional morphology in the typological sense. Fission = splitting of a single node in the syntax into two nodes in the morphological representation \citep[19]{Bobaljik2017}; mnemonic: multiple exponence. Readjustment = phonological alternation after vocabulary insertion \citep[7]{Bobaljik2017}; mnemonic: remedy for everything else that leaves derivation ill-formed.} To conclude for the moment, DM’s slogan that it’s “syntactic hierarchical structure all the way down” \citep[3]{HarleyHarley1999} is to be taken with a grain of salt: it’s syntactic hierarchical structure all the way down until vocabulary insertion and its satellite processes. That is, many things can happen on the way from syntax to final spell-out and it is this many-faceted interface that deserves the name \textit{morphology} in DM. Crucially, however, DM strives to make even these processes as predictable as possible.
In any case, the architecture as a whole permits reading paradigms off individual derivations (e.g., \tabref{tab:reiner:5} may be read off the derivation depicted in \figref{fig:reiner:3}) but there are no pre-designed paradigms in the model. That is: to the extent that DM provides a model of language representation and processing, paradigms do not have any psychological reality in the theory (cf. also \citealt[53]{Bobaljik2002}). This negative attitude towards paradigms will be summarised and put into perspective in the following section (\sectref{reiner:2.2.2}).
\subsubsection{The status of paradigms in DM} \label{reiner:2.2.2}
This section is not a real section but a convenient synopsis for those readers who skipped the introduction to DM. So the main point of the section is summarising what was laid out in \sectref{reiner:2.2.1} with respect to paradigms in DM. There are at least two places in the architecture at which paradigmatic structures would be expected by the average linguist but in fact do not play a role.
First, feature values (list A) are not organised into paradigms. Obviously, they could be (though with a placeholder for roots/stems); however, according to the theory this is just not necessary for deriving correct forms. And unnecessary pre-syntactic structure is to be avoided in a strictly compositional approach \citep[17]{Embick2015}.\largerpage[1.5]
Second, vocabulary items (list B) are indeed ordered, yet not according to intersecting feature values (as in paradigms) but according to specificity. It is striking that DM is all the same able to capture syncretism and allomorphy – phenomena that are otherwise thought to be inextricably linked with paradigms (cf. also \citealt[54]{Bobaljik2002}).
All of this does not mean, however, that practitioners of DM do not use paradigms for presentation. To pick a random example, \citet{Harley2008} is full of paradigms. The important thing is that they do not have a primary status in the architecture. To use the food metaphor from \citegen{Trommer2016} title: from a DM perspective, compiling paradigms is like pre-sorting the ingredients in your kitchen cupboard according to nutrients – useful for some purposes but not necessary (and hence not desirable) for cooking tasty meals.
I conclude this micro-section with a synoptic quotation, even more explicit than the corresponding one in \sectref{reiner:1}:
\begin{quote}
Importantly, paradigms are epiphenomenal in DM. They have no theoretical status and they are never referred to by morphological operations. \citep[97]{Kramer2016}\footnote{One reviewer remarks: “Interestingly, they [= paradigms] seem to organise those morphological operations into meaningfully related sets, which would place them above those operations. In this sense, they are not epiphenomenal but even more abstract than the abstract operations”. From a DM-perspective, this is not a contradiction: the meaningfully related sets might be constructed by the linguist or the L2-teacher; however they are not part of any L1-speaker’s mental grammar. This position gains some plausibility from anecdotal evidence: it appears hard to write down L1-paradigms if one is asked to do so for the very first time.}
\end{quote}
\subsection{Comparison} \label{reiner:2.3}\largerpage[1.5]
This section serves to compare the two approaches regarding their shared detachment from paradigms and their shared tendency to reduce inflectional morphology to something else (constructions or the syntax/phonology interface). To this end I will first address their respective standard data and then delve into a more theoretical discussion. For completeness, Autonomous Morphology, too, is occasionally integrated into the picture before it takes centre stage in \sectref{reiner:3}.
While \citet[58--59]{Haspelmath2011} refers to Turkish to show that morphology can look like syntax, proponents of DM adduce Swahili for the same reason. However, Swahili inflection not only displays a high degree of compositionality but also purely positional contrasts – something that we tend to expect from syntax exclusively. For example, consider the pair of examples in \REF{ex:reiner:19a} and \REF{ex:reiner:19b}, taken from \citet[18]{Trommer2001}.\footnote{In a similar vein cf. \citet{Crippen2019}.}
\ea \label{ex:reiner:19}
Swahili (Atlantic-Congo, Tanzania et al.)\footnote{\url{https://glottolog.org/resource/languoid/id/swah1253}}
\begin{multicols}{2}\raggedcolumns
\ea
\gll \label{ex:reiner:19a}ni-wa-penda\\
1\textsc{sg}{}-3\textsc{pl}{}-like\\
\glt ʻI like them.ʼ\columnbreak
\ex \label{ex:reiner:19b}
\gll wa-ni-penda\\
3\textsc{pl}{}-1\textsc{sg}{}-like\\
\glt ʻThey like me.ʼ
\z
\end{multicols}
\z
To be sure, this does not work equally well for all forms, cf. \tabref{tab:reiner:6}.
\begin{table}
\caption{Swahili person forms \citep[15, 102; only M-/WA-class]{AlmasiAlmasi2014}\label{tab:reiner:6}}
\begin{tabular}{lllll}
\lsptoprule
& \multicolumn{2}{c}{subject} & \multicolumn{2}{c}{object}\\\cmidrule(lr){2-3}\cmidrule(lr){4-5}
& \textsc{sg} & \textsc{pl} & \textsc{sg} & \textsc{pl}\\\midrule
{1\textsuperscript{st}} {person} & ni & tu & ni & tu\\
{2\textsuperscript{nd}} {person} & u & m & ku & wa\\
{3\textsuperscript{rd}} {person} & a & wa & m(w) & wa\\
\lspbottomrule
\end{tabular}
\end{table}
\begin{table}
\begin{floatrow}
\captionsetup{margin=.05\linewidth}
\ttabbox{\begin{tabular}{lll}
\lsptoprule
& {Subject} & {Object}\\\midrule
{Singular} & ni & ni\\
{Plural} & tu & tu\\
\lspbottomrule
\end{tabular}}
{\caption{Swahili person forms extracted from \tabref{tab:reiner:6}\label{tab:reiner:7}}}
\ttabbox{\begin{tabular}{lll}
\lsptoprule
& {Subject} & {Object}\\
\midrule
{Singular} & car & car\\
{Plural} & cars & cars\\
\lspbottomrule
\end{tabular}}
{\caption{English nominal case forms\label{tab:reiner:8}}}
\end{floatrow}
\end{table}
Still, writing down the first line of the paradigm appears to make as much sense as writing down nominal case forms for English; compare \tabref{tab:reiner:7} to \tabref{tab:reiner:8}.
Obviously, here it is not the paradigm that tells the language user which form is the subject or object – it is the combinatorics (here: surface linear order). Switching to a constructionist perspective taken to the extreme, even the second and third line of the Swahili paradigm may be treated very much like the first line: it is still the relative position (slot) of a given form that determines its syntactic function – and makes it vary in idiosyncratic ways (e.g. \textit{u} ${\Rightarrow}$ \textit{ku}).
So to some extent both kinds of data – Turkish-style and Swahili-style – support the idea that paradigms might be quite a parochial format: apt for Latin-type languages but hardly beyond.
To repeat, with regard to the constructionist perspective, the above conclusion is my own; neither \citet[58--59]{Haspelmath2011} nor related works plainly argue against paradigms. On the contrary, \citet{Haspelmath2000} even allows periphrases as paradigm cells and \citet{Booij2016} reconceptualises paradigms as “second order schemas”. However, let’s have a closer look at these two conceptions.\largerpage[2]
\citet{Haspelmath2000} argues against a gap-filling account of periphrasis and in favour of a grammaticalisation-based account.\footnote{Some of \citegen{Haspelmath2000} arguments against a gap-filling account may be countered by the criterion of feature intersection (\citealt[250--252]{BrownBrown2012}, \citealt{Reiner2020}); however, this is beyond the scope of the present paper.} Crucially, he allows not only periphrases as paradigm cells but, by extension, also entire clauses:
\begin{quote}
However, it is not difficult to find syntactic phenomena that provide a striking analog of inflectional paradigms, gaps, and periphrasis in morphology. Again, a good example comes from English, where only a small subclass of verbs can occur without complications in interrogative and negative clauses. In (16), this well-known pattern is represented in such a way that the similarities with morphological suppletive periphrasis become apparent. […] Clearly, “periphrastic \textit{do}” is periphrastic in much the same way as the cases of morphological periphrasis, but the filled gaps in (16) are not morphological monolectic forms. \textit{Did you see} is a syntactic phrase which replaces the impossible syntactic phrase *\textit{saw you}. \citep[662]{Haspelmath2000}
\end{quote}
Here is Haspelmath’s example number (16), reproduced as \tabref{tab:reiner:e20}.
% \ea \label{ex:reiner:20}
% {English \citep[662]{Haspelmath2000}}\\
% \glll decl., affirm. interrogative \hphantom{text} \hphantom{text} \hphantom{text} \hphantom{text} negative \\
% \textit{You are here} \textit{Are you here?} \hphantom{text} \textit{You are not here}\\
% \textit{You saw her} [\textit{Did you see her?}] [\textit{You did not see her}]\\
% (*\textit{Saw you her?}) (*\textit{You saw not her})
% \z
\begin{table}
\caption{English periphrastic \textit{do} \citep[662]{Haspelmath2000}\label{tab:reiner:e20}}
\begin{tabular}{lll}
\lsptoprule
decl., affirm. & interrogative & negative\\
\midrule
\textit{You are here} & \textit{Are you here?} & \textit{You are not here}\\
\textit{You saw her} & [\textit{Did you see her?}] & [\textit{You did not see her}]\\
& (*\textit{Saw you her?}) & (*\textit{You saw not her})\\
\lspbottomrule
\end{tabular}
\end{table}
This extension seems simply logical; however, it raises the question of whether there is anything at all that can\textit{not} be described by paradigms in this sense, i.e. by oppositions. In fact, a strictly constructionist perspective mandates that any set of clauses (transparent or not) is viewed as a set of constructions between which the language user may choose, hence as a paradigm in the above sense. This notion of paradigm, then, is so abstract that it becomes vacuous: if everything is paradigmatic, it is pointless to state that such and such linguistic phenomenon (e.g., inflection) is, indeed, organised paradigmatically.
Turning now to Booij’s CxM, as summarised in \citet{Booij2016}, the first thing to note is that inflection as well as word formation and phrasal idioms are captured by constructional schemas. \REF{ex:reiner:21} is an example for inflection.
\ea \label{ex:reiner:21}
{English \citep[440, number (37) there]{Booij2016}}\\
〈(x\textsubscript{i})\textsubscript{ω-j} $\leftrightarrow $ [N\textsubscript{i}, +sg]\textsubscript{j} $\leftrightarrow $ [SG [SEM\textsubscript{i}]]\textsubscript{j}〉
\z
Read: x\textsubscript{i} constitutes a phonological word (ω), which is associated with a certain morphosyntactic structure, which is associated with a certain semantic structure; the indices show identity relations.\footnote{Inflectional class information can be integrated into the morphosyntactic structure or modelled by second order schemas (to be introduced below).}
Schemas that relate to each other (i.e.: share at least one element) constitute second order schemas, e.g. \REF{ex:reiner:22}.
\ea
\label{ex:reiner:22}
{English \citep[440, number (39) there]{Booij2016}}\\
\hphantom{${\approx}$} 〈(x\textsubscript{i})\textsubscript{ω{}-j} $\leftrightarrow $ [N\textsubscript{i}, +sg]\textsubscript{j} $\leftrightarrow $ [SG [SEM\textsubscript{i}]]\textsubscript{j}〉 \\
${\approx}$ 〈(x\textsubscript{i}{}-z)\textsubscript{ω{}-j} $\leftrightarrow $ [N\textsubscript{i}, +pl]\textsubscript{j} $\leftrightarrow $ [PL [SEM\textsubscript{i}]]\textsubscript{j}〉
\z
Such schemas correspond to traditional paradigms in an obvious way and, crucially, they are said to organise language in their own special manner (for a lucid example from Saami cf. \citealt[442]{Booij2016}). However, paradigms in this sense, central as they are, do not constitute the fundamental building blocks of morphological theory as Stump or Spencer would have it. Rather, they emerge from a more general organisational principle, which is the constructional schema: as soon as two or more schemas share one or more elements, they constitute a second order schema and in this sense a paradigm. To put it in a nutshell: paradigms are relevant but not basic (also cf. \citealt[239–240, 257]{MarziEtAL2020}).
To conclude on \citet{Haspelmath2000} and \citet{Booij2016}, while they do use paradigms in some sense, I argued that, implicitly, they deprive paradigms of any fundamental theoretical status: the former extends the notion to such a degree that it becomes void and the latter’s approach allows reducing paradigms to a mere consequence of the fact that there are shared elements between constructional schemas.
\begin{sloppypar}
Note that both takes on paradigms just presented differ from the position of Autonomous Morphology. There, periphrases are allowed into paradigms, too; however, only under a very restricted notion of periphrases, which ensures that they are mere surrogates for true word forms \citep{BrownBrown2012}. When, moreover, \citet[147--148]{Stump2002} speaks of “syntactic paradigms”, he means something different: word forms seen as instantiations of a lexeme, among which a given syntactic context may choose the appropriate one.
\end{sloppypar}
Coming back to CxM, with regard to the non-fundamental status of paradigms it is surprisingly close to DM, except that the demotion is explicit in DM. However, in another regard, it is DM that implements considerations otherwise basic to CxM as well as to Autonomous Morphology. Prima facie non-compositional phenomena like syncretism, allomorphy (including suppletion) or polyfunctionality seem to call for a constructionist account and/or might constitute the irreducibly morphological in language (more on this relation below). Yet DM, too, has developed means to deal with such phenomena: some are handled directly by underspecification, others may require processes like impoverishment, fusion, fission, or readjustment (cf. \sectref{reiner:2.2.1}). Although even these processes are designed to be as predictable as possible, one cannot deny that this is much more than just syntactic derivation plus vocabulary insertion. Put differently, if morphology is only an interface between syntax and phonology (as DM says), then it is quite a rich one. To be fair, not all processes (or rules) mentioned above are embraced equally by all proponents of DM. For example, Trommer takes a very critical stance on the accumulation of rule types in DM, summarised in \citet[61]{Trommer2016}. In the same volume, \citet{Haugen2016} argues explicitly against the use of readjustment rules, in particular when modelling stem allomorphy. Other practitioners of DM likewise try to restrict the purview of readjustment rules (e.g. \citealt[475]{PominoPomino2019}). In fact, my example of /g\textsc{i}ŋ/ above has been designed in this spirit.
As an interim summary, I compared CxM and DM with respect to their take on paradigms (implicit vs. explicit demotion) as well as with respect to their conception of morphology vis-à-vis other levels of description (explicit vs. implicit acknowledgment, but see below). In the rest of this section, I will extend the comparison to three further dimensions, ordered in growing distance from the actual topic of paradigms and independent morphology: the notion of word, the question of psychological reality, and the kind of restrictiveness found in the respective approach.
\textit{The notion of word} is discussed at length in \citet{Haspelmath2011}, from which I have cited but a snippet up to now. In brief, his overall discussion concludes that criteria for a universal definition of word remain elusive\footnote{But cf. \citet{Gil2020} for a recent proposal.} – and so does a definite border between morphology and syntax. This fits well with the general constructionist idea of a lexicon-syntax continuum. However, from my perspective as an outsider to the theory, any commitment to this idea is in conflict with acknowledging morphology as a level of description qualitatively different from both, syntax and the lexicon: how can we draw solid lines in the middle of a continuum? Thus, reflecting the notion of word casts some doubt on the extent to which constructionist accounts are able to acknowledge “morphology by itself” (to allude to \citeauthor{Aronoff1994}'s \citeyear{Aronoff1994} title). This may be a dividing line between CxM (Cx“M”?) and Autonomous Morphology.
\begin{sloppypar}
Concerning their own definition of word, Autonomous Morphologists are hard to pin down, though. Working in the tradition of Word-and-Paradigm approaches, they seem to accept that for any given string we can tell whether it constitutes a word or not – no matter if we are looking at data from Latin, Turkish, Swahili or West Greenlandic. The only explicit pertinent discussion I am aware of is in a footnote in \citet[127–128]{SpencerLuis2013}, where they elegantly delegate the task to Canonical Typology (more on this relationship in Sections~\ref{reiner:3} and~\ref{reiner:5}).
\end{sloppypar}
Turning now to DM, the theory is not particularly explicit about its attitude towards a universal notion of word. Certainly, it is universalist in spirit and its hallmark is the (far-reaching) structural isomorphism between word structure and phrase/clause structure, which seems to leave no room for words as different from phrases/clauses. However, even in DM there is an endpoint of the derivation, e.g. /ba\textsc{i}z/; and other terminals in the same clause structure tree host their own derivations, e.g. /hi:/. In this sense, there are words as opposed to larger phrases. So, latently, DM does recognise words.
\textit{The question of psychological reality} can be phrased as: do formats of description (possibly including paradigms) have a role to play in our mental representation of language? This question is the least obviously answered in Autonomous Morphology. Again, the topic does not appear to receive much attention in the literature. My impression is that Autonomous Morphologists silently follow a weak version of a Language-as-an-abstract-object approach \citep{Katz1981}, i.e. it is simply not their intention to describe any mental representation of language as a system (Chomsky’s competence or Saussure’s langue), rather they focus on abstractions over the parole. For example, this position is suggested by \citet[197--198]{Aronoff2016}, where the high usefulness of paradigms in language description is enough to justify their role as a central tool for the linguist. Note that being useful for the linguist and being represented in the language user’s mind are not necessarily the same thing \citep[92, fn. 7]{Haspelmath2018}.\footnote{This is one of the few aspects of Haspelmath’s comparative concepts that I embrace, cf. \citet{ReinerInPress}.}
This is in sharp contrast to DM, which inherits the demand for providing a psychologically real model from its background in GB/Minimalist syntax. The demand concerns every aspect of the model, including the irrelevance of paradigms. However, pertinent psycholinguistic evidence seems to be scarce. Even \citet{BarnerBarner2002}, who do collect many pieces of psycholinguistic evidence from the literature, only cover one aspect of the model (viz. roots lack syntactic category information). There is a need for more comprehensive, custom-tailored experiments, in particular for ones checking whether or not speakers draw on ready-made paradigms in language production and/or comprehension.\largerpage
{\interfootnotelinepenalty=10000 In comparison to DM, the psycholinguistic evidence adduced in favour of CxM seems to be much more encompassing. In particular, it has been shown that word forms can be both, computable as well as holistically stored, without contradiction (\citealt{Zwitserlood2018}, \citealt[7–8]{MasiniAudring2019}, \citealt[Ch. 7]{JackendoffAudring2020}).\footnote{Typological coverage might be better; however this is a problem of current psycholinguistic research more generally, which may be overcome in the future (cf., e.g., \url{http://www.llf.cnrs.fr/labex-efl} (as of 24.03.2020)).} The crucial point from the perspective of CxM, though, is not that word forms \textit{can} be stored but that they \textit{are} in fact stored (different from DM, cf. \citealt[392, fn. 3]{McGinnis-Archibald2016}). If stored forms relate to each other, they may constitute paradigms in \citegen{Booij2016} sense.}
\textit{The attitude towards restrictiveness} seems to differ vastly between the theories discussed here. The difference is often said to lie in striving for maximal empirical coverage (CxM\,+\,Autonomous Morphology) vs. striving for testable predictions (DM). For example, \citet{Kramer2016} draws this line. However, I will argue that the difference is rather about \textit{where} to formulate restrictions.\largerpage
Surveying work in Autonomous Morphology, it would be utterly wrong to say that these researchers do not formulate restrictions: every generalisation over data is a falsifiable prediction to the effect that new data are hypothesised to comply with the generalisation. For example, \citegen[220–221]{BaermanBaerman2005} types of syncretism restrict the range of syncretisms we expect to find in the languages of the world. In short, there \textit{are} restrictions and they reside in generalisations over (descriptions of) data, with paradigms being used as a central tool for description. Another – more obvious and more recent – example of restrictiveness is \citet{Herce2019}.
Similarly, practitioners of CxM or Construction Grammar more generally use the construction as a maximally flexible tool of description \textit{before} they start looking for cross-linguistic tendencies in the data thus described (e.g., \citealt[Ch. 7–9]{Goldberg2006}). The tendencies are then to be explained by general cognitive principles \citep{Goldberg2006}. Crucially, cognitive principles restrict the range of what we expect to find in human (linguistic) behaviour but there is no need to build the restrictions into one’s descriptive tools.
This is different from DM, which, as a generative theory, intends to model language as a specific competence, identified by specific restrictions, which, consequently, have to be part of the model. Thus, anything that cannot be derived by the model is predicted not to be accepted by native speakers and vice versa (derivable $\leftrightarrow $ accepted).
So both kinds of approach acknowledge restrictions (and I am agnostic as to which way of doing so is the better one). Incidentally, both also embrace the liberties of language, at least to a certain extent. For DM, this might be not so obvious; however recall that they have list C at their disposal, although this part of the architecture seems to be the one that is worked out least of all.
Taking stock of this section, it appears that, surprisingly, Autonomous Morphology never patterns with CxM, cf. \tabref{tab:reiner:9}.
\begin{table}
\caption{Summary of theory comparison\label{tab:reiner:9}}
\begin{tabularx}{\textwidth}{QQQQ}
\lsptoprule
& {CxM} & {Autonomous Morphology} & {DM}\\
\midrule
{universal} {notion} {of} {word?} & rather no & latently yes & latently yes\\\tablevspace
{paradigms} {psychologically} {real} {and} {useful?} & maybe psychologically real, not fundamentally useful & useful & not psychologically real, occasionally useful\\\tablevspace
{morphology} {qualitatively} {distinct} {from} {syntax/lexicon?} & ? & yes & the little that there is: yes\\
\lspbottomrule
\end{tabularx}
\end{table}
\section{Defending morphology and paradigms} \label{reiner:3}
\begin{sloppypar}
Recalling \tabref{tab:reiner:9}, one might expect that Autonomous Morphologists defend their approach against CxM much more forcefully than they defend it against the various incarnations of DM. However, the opposite is true. The present section will elaborate on this remarkable situation and evaluate tentatively whether the Autonomists’ defence of morphology by itself – including paradigms – succeeds.
\end{sloppypar}
Virtually everyone who advocates the idea of Autonomous Morphology includes in their overview publications some words on why DM fails. For example, \citet[19–29]{BrownBrown2012} present DM as an alternative but inferior approach. Other examples include \citet[73–89]{Spencer2004}, \citet[194–195]{Aronoff2016} and, avant la lettre, \citet[82–85]{Aronoff1994}. Additionally, \citegen{Spencer2019} review of \citet{Embick2015} represents a recent argument against DM from an Autonomist’s perspective.
One of the central arguments is that there are phenomena that cannot be described in purely syntactic or lexical terms, i.e. so called morphomes (for a recent overview of the notion cf. \citealt[160–166]{Enger2019}). Crucially, the most prominent examples of morphomes directly refer to paradigms: inflectional classes and patterns of stem allomorphy \citep{Maiden2009}. For substantiation, consider an instance of the latter case, viz. \citegen{Maiden2009} L-pattern in Romance verb morphology. \tabref{tab:reiner:10} represents the pattern abstractly, \tabref{tab:reiner:11} gives an example.
\begin{table}
\captionsetup{margin=.05\linewidth}
\begin{floatrow}
\ttabbox{\begin{tabular}{lcc}
\lsptoprule
& \textsc{prs.ind} & \textsc{prs.sbjv}\\\midrule
{1\textsc{sg}} & A- & A-\\
{2\textsc{sg}} & B- & A-\\
{3\textsc{sg}} & B- & A-\\
{1\textsc{pl}} & B- & A-\\
{2\textsc{pl}} & B- & A-\\
{3\textsc{pl}} & B- & A-\\
\lspbottomrule
\end{tabular}}
{\caption{L-pattern \citep{Maiden2009}\label{tab:reiner:10}}}
\ttabbox{\begin{tabular}{lll}
\lsptoprule
& \textsc{prs.ind} & \textsc{prs.sbjv}\\\midrule
{1\textsc{sg}} & digo & diga\\
{2\textsc{sg}} & dices & digas\\
{3\textsc{sg}} & dice & diga\\
{1\textsc{pl}} & decimos & digamos\\
{2\textsc{pl}} & decís & digáis\\
{3\textsc{pl}} & dicen & digan\\
\lspbottomrule
\end{tabular}}
{\caption{Spanish example for L-pattern \citep{Maiden2009}\label{tab:reiner:11}}}
\end{floatrow}
\end{table}
Crucially, this pattern, among two additional ones, pervades irregular verbal morphology in Romance languages and comes without any obvious lexical or syntactic core: why should certain verbs show the same stem for exactly these feature value combinations and not for others? In particular, it is hard to imagine a semantic or syntactic property that is shared by, e.g. \textit{digo} and \textit{digan} but not \textit{digo} and \textit{dicen}. What the pattern does refer to, however, are the pairings of word forms and feature values they realise, i.e. the cells of a paradigm: it is always the same set of cells that shares a stem. So while the stem as such is not predictable, its distribution is and, crucially, this distribution refers to paradigm set-up. In short, morphomes seem to provide a very clear indication that, after all, there is something genuinely morphological about language, more precisely something that must be captured by paradigms in Stump’s sense (cf. \sectref{reiner:1}).\footnote{For a typological survey of morphomic structures cf. \citet{Herce2020}.}
In a way, this line of research is continued by \citet{AckermanAckerman2013}, who adduce evidence that it is the complexity of paradigm structure rather than the complexity of individual distinctions and realisations that predicts learnability (so here also the question of psychological reality is touched upon). However, their results have recently been called into question (\citealt{JohnsonEtAlPreprint}).
Moreover, recall that also in DM there is more between syntax and spell-out than just vocabulary insertion (\sectref{reiner:2.2}, \sectref{reiner:2.3}). Against this background, it does not come as a surprise that by now also proponents of DM have started to embrace morphomes. \citet{Trommer2016} provides a “postsyntactic morphome cookbook”, not even using the full DM-machinery. To be prudent, Trommer’s paper is the only DM-treatment of morphomes that I am currently aware of. Future research will show whether Autonomous Morphologists will be beaten at their own game.
A more urgent, but largely unrecognised threat comes from the “friendly take\-over” by constructionist accounts (\sectref{reiner:2.1} and \sectref{reiner:2.3}), especially when considering that even morphomic patterns might be conceived of as second order schemas involving homonymy. While, e.g., \textcites[]{Spencer2001}[84--86]{Spencer2004} welcomes constructionist thinking, he does not seem to recognise that this family of theories tears down the very boundaries he tries to defend: as indicated in \sectref{reiner:2.3} above, Construction Grammar does not recognise any qualitative difference between syntax and the lexicon, let alone a discrete level in between to be justly called morphology (cf. also \cites[5]{Goldberg2006}[17]{Goldberg2013}[1]{HoffmannHoffmann2013}).
As a consequence, it does not appear to be a coincidence that precisely those morphologists who adopt a constructionist perspective most consistently and most explicitly (e.g., \citealt{Booij2010}) are those who are \textit{not} at the same time Autonomists. Moreover, while (Autonomist) Network Morphology (\citealt{BrownBrown2012}) stands side-by-side with Canonical Typology in the Surrey Morphology Group,\footnote{Even literally so: \url{https://www.smg.surrey.ac.uk/approaches/} (as of 24.03.2020).} the latter theory is free to develop an integrative perspective on autonomy. That is, canonical (ideal) morphology might be regarded as \textit{non}{}-autonomous, reducible to either syntax or the lexicon or both, whereas deviations from this ideal represent autonomous morphology – with the deviations often presenting themselves as patterns in paradigms. To the extent that it is the deviations rather than the canonical ideal that we expect to find in real languages, Canonical Typology and Network Morphology fit well together indeed.\footnote{These deliberations are what I understand from \citet{GagliaGaglia2016}, \citet{Hippisley2017} and \citet{Herce2020b}.} This relationship, however, does not prevent Autonomous Morphology, including its commitment to paradigms, from being largely absorbed by constructionist approaches, as described above.
In sum, so far the attempts at defending morphology (as a distinct level of description) and defending paradigms (as defined in the introduction of this paper) have not ultimately succeeded – or the attempts are missing altogether. Does this mean that paradigms have nothing to offer for any linguistic theory? To my mind, one use of paradigms remains in any case. This is the topic of the next section.
\section{A remaining use for paradigms: Restricting recursion} \label{reiner:4}
The marginalisation of paradigms as fundamental organisational units has been noted and criticised recently by \citet{Diewald2020a}, focussing on constructionist approaches. Arguing from a diachronic perspective, she reminds us that [+\,par\-a\-dig\-mat\-i\-city] and [\textminus\,paradigmatic variability] are two of \citegen[132]{Lehmann2015} six famous criteria for grammaticalisation (and that speaking of grammaticalisation is pointless without an analytical distinction between grammar and non-grammar). Thus, in Diachronic Construction Grammar paradigms are both: marginalised as well as fundamentally needed. To escape from this dilemma, she suggests the following concept of paradigms: as constructions-of-constructions they are complex signs, which may constitute nodes in a constructional network. She notes that this is similar to \citet{Booij2016}; however, as far as I can see, \citet[297--301]{Diewald2020a} goes one step further in that she explicitly excludes open class paradigmatic relations from the notion of paradigm. This makes, I would argue, a huge difference, in particular from a diachronic perspective. For example, consider German \textit{R}/\textit{richtung} ʻdirection/toʼ in \REF{ex:reiner:23} vs. German \textit{R}/\textit{riesen} ʻgiantʼ in \REF{ex:reiner:24}.
\ea \label{ex:reiner:23}
{German (constructed)}\\
\gll Ich geh Richtung Bahnhof.\\
I walk direction/to station\\
\glt ʻI am walking towards the station.ʼ
\ex \label{ex:reiner:24}
{German (constructed)}\\
\ea Dort ist eine Riesen-Statue.\\
there is a giant-statue\\
\glt ʻThere is a giant statue.ʼ
\ex Dort ist eine riesen Statue.\\
there is a giant statue\\
\glt ʻThere is a giant statue.ʼ
\z \z
The former would qualify as grammaticalised on \citegen{Diewald2020a} account since it enters the closed class of prepositions. Note that I am assuming here that this class may be described by pairs of feature values. The latter, by contrast, would not qualify as grammaticalised on \citegen{Diewald2020a} account since there is no closed class of either “grading initial parts of compounds” or adjectives.
Thus, \citegen{Diewald2020a} conception of paradigms might reintroduce them into Construction Grammar as fundamental organisational units. Crucially, however, as nodes in constructional networks they have to be psychologically real by themselves (not merely being a consequence of some other psychologically real entities and processes). Here, \citet[306--310]{Diewald2020a} provides a rough design for a psycholinguistic study as well as a core linguistic argumentation to the effect that certain grammaticalisation processes are hardly conceivable without mental representations of paradigms in her sense.
So again, the question whether paradigms (in whichever sense) are psychologically real turns out to be pivotal. To the extent that this remains a tricky question, I suggest that anyone arguing in favour of a fundamental status of paradigms should look for arguments independent of this question, i.e. for a use of paradigms that is just that: fundamental to the linguist but not necessarily reflected in the language user’s mind (very much in the spirit of \citealt[197--198]{Aronoff2016}, quoted above). In the rest of this section, I will present such a use.
There is one kind of data that neither CxM nor DM is good at handling, i.e. limits on recursion. The following paragraphs provide a snapshot of such data, explain why they are problematic for the two approaches and, finally, present the paradigm in Stump’s sense as a solution.
Basically, CxM and Construction Grammar more generally share with Autonomous Morphology the strategy to seek restrictions in generalisations over data rather than building the restrictions directly into their descriptive tools (cf. \sectref{reiner:2.3}). This strategy adheres to \citegen{Haspelmath2004} postulate of a sharp distinction between description and explanation. At the same time, this strategy requires the descriptive tools to be as flexible as possible: they must be able to capture whatever may be found in a language. So the often heard accusation that “everything is a construction” (e.g., \citealt[236]{VanValin2007}) misses the point since here lack of restrictiveness in descriptive tools is a virtue, not a weakness.
Against this background, it is clear that CxM can handle every kind of data well: all we need is an association between a form and a meaning, i.e. a construction. This means that individual morphs like, e.g., thematic vowels are not forced to have any meaning in isolation. It is the verb form as a whole that has a meaning to begin with and this association of meaning and form constitutes a construction. Likewise it is the clause as a whole that has a meaning and again, this association constitutes a construction. Incidentally, a construction may have constituent constructions and in this sense there is internal structure; however the internal structure is not generated bottom-up via valency or subcategorisation.
So verb forms are constructions, clauses are constructions, and the rare case of a monofunctional affix (that is, rare in SAE) is a construction, too. Ideally, we obtain a comprehensive network of constructions for each language. These networks, I hold, are again organised in a network to the extent that they share certain properties.\footnote{For a pertinent but rather non-constructionist proposal, cf. \citealt{ReinerInPress}.} Crucially, these properties are abstractions over individual minds rather than having any psychological reality in their pure form. In principle, the most encompassing of networks might be read as set of generalisations about language. As argued above (\sectref{reiner:2.3}.), generalisations \textit{are} predictions, including predictions about what there is \textit{not}. To take an example from Autonomous Morphology, if there is no language to be found with syncretism of 1\textsc{sg}, 2\textsc{du}, and 3\textsc{pl} subject agreement,\footnote{Surrey Person Syncretism Database, \url{https://www.smg.surrey.ac.uk/personsyncretism/}.} we may predict that this kind of syncretism is impossible. The next step is gathering more data (from the same as well as other languages) in order to see whether the prediction of absence holds. However, most practitioners of CxM and Construction Grammar more generally do not seem to be concerned too much with that “dark matter” (I am borrowing the metaphor from \citealt{Werner2018} on word formation here). To pick a recent example, the ongoing project “FrameNet \& Konstruktion des Deutschen” focuses, according to its self-description, on the wealth of what there is, not on finding systematic gaps.\footnote{\url{https://gsw.phil.hhu.de/}{;} the project relates to parallel projects for other languages.} This might seem to be a mere matter of emphasis – but if we do not even care for the potential absence of certain phenomena we will not be able to explain (in any sense of the word) that absence.
Beside a certain type of syncretism, another example for a potentially non-occurring kind of phenomenon is the one I am concerned with in the present section: meaningful iteration of affixes (presuming here that we know in each case what counts as an affix as opposed to a clitic or function word and that we can always decide whether the affix is inflectional or derivational). Consider \REF{ex:reiner:25} based on \REF{ex:reiner:26}.
\ea \label{ex:reiner:25}
{Turkish (p.c., Seda Yilmaz Wörfel)}\\
\gll yap-tı-m\\
do-\textsc{pst}{}-1\textsc{sg}\\
\glt ʻI didʼ
\ex \label{ex:reiner:26}
{Turkish (p.c., Seda Yilmaz Wörfel)}\\
\gll \textup{*} yap-tı-tı-m\\
{} do-\textsc{pst}{}-\textsc{pst}{}-1\textsc{sg}\\
\glt intended: ʻI had doneʼ
\z
Judging from the “syntax-like” systematicity of Turkish verbal forms and from the possibility to iterate derivational (causative) affixes (cf. \sectref{reiner:2.1}), the unavailability of a systematic iteration in \REF{ex:reiner:26} is surprising. Why not have a past form (\textit{yaptım}) as the input to another past formation (-\textit{tı}), yielding a past-of-past meaning? In other words: if we define recursion as the application of a given structural-semantic operation to an output of some former application of the same operation, example \REF{ex:reiner:26} represents the limits of recursion. In fact, I am not aware of any language that allows for recursion of inflection (or what is usually described under the rubric of inflection).\footnote{But cf. \citet{Voort2016} for potential examples. Importantly, it has to be the whole structural-semantic operation that is applied twice. Hence, for instance reduplicative plurals do not count, since here we are dealing with one structural operation having one semantic effect.} So the limitation seen in \REF{ex:reiner:26} might be quite general. For present purposes, however, the important point is that there is a limitation at all.
Thus, this is a kind of language fact that CxM will capture implicitly but not care for. What about DM? Striving for a restrictive model, this approach is expected to predict the ungrammaticality of \REF{ex:reiner:26} explicitly. At first sight, this is true: as long as there is only one T-node in the syntax, the pattern in \REF{ex:reiner:26} is indeed excluded, since there is simply no terminal for the second -\textit{tı} to attach to.
However, in DM the problem resurrects at a later stage of the derivation: after vocabulary insertion has selected the most specific candidate available, how can we stop the mechanism from starting over again, yielding for example *\textit{play}{}-\textit{ed}{}-\textit{ed} \citep[97]{Embick2015}? Note that this is not just an empty iteration of forms, since every vocabulary item, including [\textsc{past}] $\leftrightarrow $ /əd/, comes with a function. More precisely, a vocabulary item is the pairing of a phonological form and information on where this form may be inserted (cf. \sectref{reiner:2.2.1}.), with this information consisting of morphosyntactic feature values. So when [\textsc{past}] $\leftrightarrow $ /əd/ attaches to the single T-node multiple times (in principle, ad infinitum), it adds [\textsc{past}] every time. Embick’s solution to this problem is a stipulation called \textit{uniqueness}: “In a derivation, only one Vocabulary Item may apply to a morpheme” \citep[98]{Embick2015}. So DM can handle data like \REF{ex:reiner:26} but only by means of a stipulation.
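To make the role of this stipulation more tangible, consider the following toy sketch. It is invented for illustration only (the function name, the feature representation and the data are mine and not part of any DM implementation): it spells out a single T-node and shows what unrestricted re-application of the same vocabulary item would produce.
\begin{verbatim}
# Toy sketch of vocabulary insertion at a single T-node.
# All names and data are invented for illustration.
def spell_out(features, max_applications=1):
    """Insert the item [PAST] <-> /-ed/ into one T-node.

    max_applications=1 mimics the uniqueness stipulation; a larger
    value shows what unrestricted re-application would produce.
    """
    form = "play"
    item_features, exponent = {"PAST"}, "-ed"
    applications = 0
    while item_features <= features and applications < max_applications:
        form += exponent
        applications += 1
    return form

print(spell_out({"PAST"}))                      # play-ed
print(spell_out({"PAST"}, max_applications=2))  # play-ed-ed
\end{verbatim}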
According to \citegen{Spencer2019} review of \citet{Embick2015}, the need for the uniqueness stipulation arises from the “attempt to derive word structure (directly) from syntactic structure” \citep[218]{Spencer2019}. I tried to show above that syntax is not the problem here. However, I agree with \citet{Spencer2019} that paradigm-based theories are not affected by the problem of recursion. The reason is that the paradigm as such restricts recursion: since rows and columns represent different pieces of information, iteration is excluded automatically. For example, since \textsc{past} will not appear in both, a row and a column, no cell can contain a realisation of \textsc{past}{}-\textsc{past}. Consider the partial paradigm for Turkish in \tabref{tab:reiner:12}.
\begin{table}
\caption{Partial paradigm for Turkish, gathered from \citet[Ch. 2.1.3]{Kornfilt1997}.\label{tab:reiner:12}}
\begin{tabular}{llll}
\lsptoprule
& {\textsc{fut}} & {\textsc{rep pst}} & {\textsc{pst}}\\\midrule
{\textsc{1sg}} & {}-(y)AcAK{}-Im & {}-mIs-Im & {}-DI-m\\
{\textsc{2sg}} & {}-(y)AcAK{}-sIn & {}-mIs-sIn & {}-DI-n \\
{\textsc{3sg}} & {}-(y)AcAK{}-∅ & {}-mIs-∅ & {}-DI-∅\\
{\textsc{1pl}} & {}-(y)AcAK{}-Iz & {}-mIs-Iz & {}-DI-k\\
{\textsc{2pl}} & {}-(y)AcAK{}-sInIz & {}-mIs-sInIz & {}-DI-nIz\\
{\textsc{3pl}} & {}-(y)AcAK{}-lAr & {}-mIs-lAr & {}-DI-lAr\\
\lspbottomrule
\end{tabular}
\end{table}
Thus, the paradigm can do what Construction Grammar fails to do and what DM needs a stipulation for: modelling the limits of recursion. Another advantage of paradigms is that no one needs to postulate them. They do have a merely secondary status in CxM as well as in DM (cf. \sectref{reiner:2}); however, they are there.
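For readers who prefer a procedural rendering of this point, the following toy sketch (the labels and the ASCII rendering of the Turkish suffixes are invented for illustration) shows that once word forms are indexed by exactly one TAM value and one person/number value, there simply is no cell for a \textsc{pst}{}-\textsc{pst} form to occupy.
\begin{verbatim}
# Toy sketch: a paradigm as a table indexed by one TAM value
# and one person/number value (invented illustration).
TAM = ["FUT", "REP_PST", "PST"]
PERS_NUM = ["1SG", "2SG", "3SG", "1PL", "2PL", "3PL"]

# Every cell address is a pair (tam, pers_num); by construction no
# address can contain PST twice, so a PST-of-PST form has no cell
# to live in -- iteration is excluded without further stipulations.
paradigm = {(t, p): None for t in TAM for p in PERS_NUM}
paradigm[("PST", "1SG")] = "yap-ti-m"

print(("PST", "1SG") in paradigm)          # True
print(("PST", "PST", "1SG") in paradigm)   # False: no such cell
\end{verbatim}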
Before concluding the paper, let me draw your attention to a limitation of my reflections: I focussed solely on phenomena that are usually considered to be inflectional in a narrow sense. Though presenting theories that can as well cope with periphrasis and word formation, I did not have much to say about these phenomena. Indeed, these appear to be inherently different from inflection when it comes to recursion: they do permit it, to a certain extent. For recursion in periphrasis cf. \citet{Rothstein2012, Rothstein2013, Rothstein2013b} on double futures and the references therein to the wealth of works on double perfects. For recursion in word formation cf., e.g., \citet{BratticoEtAl2007}.
\section{Conclusion} \label{reiner:5}\largerpage
The “Morphome Debate” (\citealt{LuísEtAl2016}) is far from settled and it remains to be seen whether morphomes provide an ultimate argument for morphology as a distinct level of description, organised by paradigms in Stump’s sense. In the present paper I only presented an outline of this debate, not even elaborating on inflectional classes as morphomes. The main point of the paper was showing that even if paradigms are becoming secondary in current theorising, the traditional paradigm in Stump’s sense still serves an apparently unique function: it provides an economic way of modelling restrictions on recursion in inflection. More precisely, the habit of having the rows and columns host different features (and a fortiori different feature values) prevents any given feature value from operating on some former application of itself like in *[\textit{play}{}-\textit{ed}]-\textit{ed}.
However, recall that paradigms in Stump’s sense are pairings of word forms and the morphosyntactic properties they realise. Thus, an old problem raises its ugly head again, i.e. the very problem with which \citet{Haspelmath2011} is concerned: how can we tell what constitutes a word (form) in the first place? I doubt that the delegation of this task to Canonical Typology (cf. \sectref{reiner:2.3}) is sufficient. The only thing that, in this respect, Canonical Typology can tell us about a given string in a given language is this: it is a canonical word (form) to such and such a degree. However, in order to decide whether the string can realise a paradigm cell, we need to know whether it \textit{is} a word or not, categorically.
\section*{Abbreviations}
\begin{sloppypar}
Please note that abbreviations in examples adopted from other authors are spelled out in footnotes throughout.
\end{sloppypar}
\begin{multicols}{2}
\begin{tabbing}
\textsc{neutr} \hspace{1ex} \= neuter \kill
1 \> first person \\
2 \> second person \\
3 \> third person \\
\textsc{f} \> feminine \\
\textsc{du} \> dual \\
\textsc{gb} \> Government and Binding \\ \> Theory (Principles and \\ \> Parameters) \\
\textsc{imp} \> imperative \\
\textsc{impf} \> imperfective past tense \\
\textsc{ind} \> indicative \\
\textsc{irr} \> irrealis \\
\textsc{l2} \> second language \\
\textsc{masc} \> masculine \\
\textsc{neutr} \> neuter \\
\textsc{nom} \> nominative \\
\textsc{num} \> number \\
\textsc{pl} \> plural \\
\textsc{prs} \> present \\
\textsc{ps} \> person \\
\textsc{pst} \> past \\
\textsc{ptcp} \> participle \\
\textsc{sae} \> Standard Average European \\ \> \citep{Haspelmath2001} \\
\textsc{sbjv} \> subjunctive \\
\textsc{sg} \> singular \\
\textsc{tns} \> tense \\
\textsc{v} \> verb; vowel
\end{tabbing}
\end{multicols}
{\sloppy\printbibliography[heading=subbibliography,notkeyword=this]}
\end{document}
| {
"alphanum_fraction": 0.7726935987,
"avg_line_length": 136.8423028786,
"ext": "tex",
"hexsha": "846348611e922e13f1909401bb8278caebef4f94",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0ae6afe8eca4fe5feb3db4f0650d6f144758b46e",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "langsci/318",
"max_forks_repo_path": "chapters/05_reiner.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0ae6afe8eca4fe5feb3db4f0650d6f144758b46e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "langsci/318",
"max_issues_repo_path": "chapters/05_reiner.tex",
"max_line_length": 2376,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0ae6afe8eca4fe5feb3db4f0650d6f144758b46e",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "langsci/318",
"max_stars_repo_path": "chapters/05_reiner.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 29755,
"size": 109337
} |
\section{Trace and Determinant}
\begin{definition}
Let $A\in M_n(F)=M_{n,n}(F)$ be a square $n\times n$ matrix.
The trace of $A$ is defined to be
$$\operatorname{tr}A=\sum_{i=1}^nA_{ii}$$
\end{definition}
\begin{remark}
The map sending a matrix to its trace is a linear form.
\end{remark}
\begin{lemma}
$\operatorname{tr}(AB)=\operatorname{tr}(BA)$.
\end{lemma}
\begin{proof}
Both sides equal $\sum_{i=1}^n\sum_{j=1}^nA_{ij}B_{ji}$: indeed, $\operatorname{tr}(AB)=\sum_i(AB)_{ii}=\sum_i\sum_jA_{ij}B_{ji}$, and swapping the roles of the two indices gives the same expression for $\operatorname{tr}(BA)$.
\end{proof}
\begin{corollary}
Similar matrices have the same trace.
\end{corollary}
\begin{proof}
$\operatorname{tr}(P^{-1}AP)=\operatorname{tr}(APP^{-1})=\operatorname{tr}(A)$.
\end{proof}
\begin{definition}
If $\alpha:V\to V$ is linear, then $\operatorname{tr}\alpha=\operatorname{tr}[\alpha]_B$ for any choice of basis $B$ of $V$.
It is well-defined by the preceding corollary.
\end{definition}
\begin{lemma}
Let $\alpha:V\to V$ be linear and $\alpha^\ast:V^\ast\to V^\ast$ be the dual map, then $\operatorname{tr}\alpha=\operatorname{tr}\alpha^\ast$.
\end{lemma}
\begin{proof}
Choose any basis $B$ of $V$, then
$$\operatorname{tr}\alpha=\operatorname{tr}[\alpha]_B=\operatorname{tr}[\alpha]_B^\top=\operatorname{tr}[\alpha^\ast]_{B^\ast}=\operatorname{tr}\alpha^\ast$$
as desired.
\end{proof}
Recall that we can decompose any permutation $\sigma\in S_n$ into a product of transpositions.
\begin{definition}
The signature of a permutation is the (necessarily unique) homomorphism $\epsilon:S_n\to\{1,-1\}$ that sends any transposition to $-1$.
\end{definition}
This map $\epsilon$ is well-defined since the parity of the number of transpositions building up a permutation does not depend on the chosen decomposition.
\begin{definition}
Let $A=(a_{ij})\in M_n(F)$.
We define the determinant of $A$ as
$$\det A=\sum_{\sigma\in S_n}\epsilon(\sigma)a_{\sigma(1)1}a_{\sigma(2)2}\cdots a_{\sigma(n)n}$$
\end{definition}
\begin{example}
For $n=2$, we have
$$\det\begin{pmatrix}
a_{11}&a_{12}\\
a_{21}&a_{22}
\end{pmatrix}=a_{11}a_{22}-a_{12}a_{21}$$
\end{example}
\begin{lemma}
If $A=(a_{ij})$ is an upper (resp. lower) triangular matrix, i.e. $a_{ij}=0$ for $i>j$ (resp. $i<j$), then $\det A=a_{11}a_{22}\cdots a_{nn}$.
\end{lemma}
\begin{proof}
The only permutation $\sigma$ such that $\sigma(j)\le j$ (resp. $\sigma(j)\ge j$) for all $j$ is the identity, so the only term in the defining sum that can be non-zero is the one for $\sigma=\operatorname{id}$, namely $a_{11}a_{22}\cdots a_{nn}$.
\end{proof}
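As an added illustration (not needed for the proof), take $n=3$ and $A$ upper triangular: every non-identity permutation picks at least one entry below the diagonal, which vanishes, so
$$\det\begin{pmatrix}
a_{11}&a_{12}&a_{13}\\
0&a_{22}&a_{23}\\
0&0&a_{33}
\end{pmatrix}=a_{11}a_{22}a_{33}$$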
\begin{lemma}
$\det A=\det A^\top$.
\end{lemma}
\begin{proof}
For any $\sigma\in S_n$ we know that $\epsilon(\sigma)=\epsilon(\sigma^{-1})$, and reindexing the product gives $\prod_j(A^\top)_{\sigma(j)j}=\prod_ja_{j\sigma(j)}=\prod_ia_{\sigma^{-1}(i)i}$; since summing over $\sigma$ is the same as summing over $\sigma^{-1}$, both sums agree.
\end{proof}
\begin{definition}
A volume form $d$ in $F^n$ is a function $(F^n)^n\to F$ such that:\\
1. It is multilinear:
For any $i\in\{1,\ldots,n\}$ and $v_1,\ldots,v_{i-1},v_{i+1},\ldots,v_n\in F^n$, the map
$$v\mapsto d(v_1,\ldots,v_{i-1},v,v_{i+1},\ldots,v_n)$$
is linear.\\
2. It is an alternating form:
If $v_i=v_j$ for some $i\neq j$, then $d(v_1,\ldots,v_n)=0$.
\end{definition}
What we want to prove is that there is only one volume form (up to a multiplicative constant).
If this is true, then it necessarily equals $\det$ (up to that constant) in the following way:
\begin{lemma}
$\det$ is a volume form via the obvious identification $M_n(F)=(F^n)^n$ by grouping the $n$ column vectors as a tuple.
\end{lemma}
\begin{proof}
$\det$ is multilinear since each summand contains exactly one entry from each column, hence is linear in each column separately.
It is an alternating form: if two columns are equal, the terms for $\sigma$ and for $\sigma$ composed with the transposition of those two columns are equal products with opposite signs (as $\epsilon$ sends any transposition to $-1$), so they cancel in pairs.
\end{proof}
\begin{lemma}
Let $d$ be a volume form, then swapping two entries changes the sign.
\end{lemma}
\begin{proof}
For any $i\neq j$, expanding by multilinearity and dropping the repeated-entry terms (which vanish by the alternating property) gives $d(v_1,\ldots,v_i,\ldots,v_j,\ldots,v_n)+ d(v_1,\ldots,v_j,\ldots,v_i,\ldots,v_n)= d(v_1,\ldots,v_i+v_j,\ldots,v_i+v_j,\ldots,v_n)=0$.
\end{proof}
\begin{corollary}
For any $\sigma\in S_n$ and volume form $d$,
$$d(v_{\sigma(1)},\ldots,v_{\sigma(n)})=\epsilon(\sigma)d(v_1,\ldots,v_n)$$
\end{corollary}
\begin{proof}
Just decompose $\sigma$ into transpositions.
\end{proof}
\begin{theorem}
Let $A\in M_n(F)$ and let $A^{(i)}$ be the $i^{th}$ column of $A$.
For any volume form $d$, we have
$$d(A^{(1)},\ldots,A^{(n)})=\det(A)d(e_1,\ldots,e_n)$$
where $(e_i)_j=\delta_{ij}$.
\end{theorem}
This is what we wanted.
\begin{proof}
Write $A^{(j)}=\sum_ia_{ij}e_i$ and expand $d(A^{(1)},\ldots,A^{(n)})$ by multilinearity into
$$\sum_{k:\{1,\ldots,n\}\to\{1,\ldots,n\}}\prod_{j=1}^na_{k(j)j}\,d(e_{k(1)},\ldots,e_{k(n)})$$
The terms where $k$ is not injective vanish as $d$ is alternating; for the remaining terms $k=\sigma\in S_n$ the preceding corollary gives $d(e_{\sigma(1)},\ldots,e_{\sigma(n)})=\epsilon(\sigma)d(e_1,\ldots,e_n)$, and collecting the terms yields $\det(A)d(e_1,\ldots,e_n)$.
\end{proof}
\begin{corollary}
$\det$ is the unique volume form that maps $(e_1,\ldots,e_n)$ to $1$.
\end{corollary} | {
"alphanum_fraction": 0.6439530686,
"avg_line_length": 41.8113207547,
"ext": "tex",
"hexsha": "842f7570547b2f7d72eddd5b23d32994bb922979",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5a499f7ed33ef0110facb27323e13f42883aa0c5",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "david-bai-notes/IB-Linear-Algebra",
"max_forks_repo_path": "11/det.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5a499f7ed33ef0110facb27323e13f42883aa0c5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "david-bai-notes/IB-Linear-Algebra",
"max_issues_repo_path": "11/det.tex",
"max_line_length": 162,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5a499f7ed33ef0110facb27323e13f42883aa0c5",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "david-bai-notes/IB-Linear-Algebra",
"max_stars_repo_path": "11/det.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1535,
"size": 4432
} |
\documentclass{beamer}
\usefonttheme[onlymath]{serif}
\usepackage[english]{babel} %For internationalization
\usepackage[utf8]{inputenc} %For character encoding
\usepackage{amsmath} %For mathematical typesetting
\usepackage{amssymb} %For mathematical typesetting
\usepackage{graphicx} %For handling graphics
\newcommand{\be}{\begin{equation}}
\newcommand{\ben}[1]{\begin{equation}\label{#1}}
\newcommand{\ee}{\end{equation}}
\title
{An Introduction to Discontinuous Galerkin Methods}
\subtitle{Module 1: What is DG?}
\author[Bevan] % (optional, for multiple authors)
{J.~Bevan}
\institute[UMass Lowell] % (optional)
{
Department of Mechanical Engineering, Grad Student\\
University of Massachusetts at Lowell
}
\date[Fall 2014] % (optional)
{}
\subject{Discontinuous Galerkin}
\begin{document}
\frame{\titlepage}
\frame{\frametitle{Module 1: What is DG?}\tableofcontents}
%NEW SECTION
\section{Overall Content Structure}
\subsection{Assumed Prerequisite Knowledge}
\frame{\frametitle{\textbf{\secname}: \subsecname}
\begin{itemize}
\item It is assumed the interested viewer is an advanced undergrad or graduate student with the typical STEM background of Calculus, Linear Algebra, and ODEs/PDEs.
\item Additionally, it is assumed the viewer has at least a basic background in a programming language of their choice (Matlab etc.)
\item Finally, it is assumed the viewer has taken a general Numerical Methods course as well as a Solution of PDEs course.
\item Not intended to teach common underlying techniques (interpolation etc.), but we may [Recall] important features of them.
\end{itemize}
}
\subsection{Numerical Methods Prerequisites}
\frame[shrink]{\frametitle{\subsecname}
\begin{itemize}
\item Linear algebra
\begin{itemize}
\item Vector spaces, bases, properties, etc.
\item Orthogonality
\item maybe some useful spaces (Hilbert, square integrable, etc)?
\end{itemize}
\item Polynomial interpolation(1 and 2D)
\begin{itemize}
\item Lagrange, Hermite
\item monomial basis (and ill-conditioned nature)
\item Orthogonal basis (Legendre, Chebyshev)
\item L2 projection
\item choice of interpolation points (equispaced, GL, LGL, etc)
\item Runge phenomenon
\item Vandermonde matrix (transformation from modal to nodal spaces)
\end{itemize}
\item Quadrature (1 and 2D)
\begin{itemize}
\item Newton-Cotes
\item Gauss/Hermite(Legendre)
\item relation to interpolation
\end{itemize}
\item Solution of ODEs
\begin{itemize}
\item Forward Euler
\item RK4
\item Implicit schemes (e.g. Backward Euler)
\item Stability, Convergence
\end{itemize}
\end{itemize}
}
\subsection{Solution of PDEs Prerequisites}
\frame[shrink]{\frametitle{\subsecname}
\begin{itemize}
\item Domain representation
\begin{itemize}
\item meshing
\item BCs (Neumann and Dirichlet)
\end{itemize}
\item Finite difference methods (FDM)
\begin{itemize}
\item Pointwise spatial derivatives
\item Computational vs Physical domains
\item Basic mapping (bilinear)
\end{itemize}
\item Finite volume methods (FVM)
\begin{itemize}
\item Flux functions
\item Artificial viscosity
\item Linear vs nonlinear fluxes
\end{itemize}
\item Finite element methods (FEM)
\begin{itemize}
\item Weak and strong form formulation
\item Piecewise linear solution approximation
\item Galerkin style test functions
\item local support
\end{itemize}
\end{itemize}
}
\subsection{Lecture Goals}
\frame[shrink]{\frametitle{\subsecname}
\begin{itemize}
\item Understand DG spatial discretization (advective)
\begin{itemize}
\item DG weak form (test function to minimize residual or test function orthogonal)
\item solution approximation (and initial conditions)
\item mapping physical to computation domain (for curvilinear domains)
\item DG Galerkin formulation
\item Integration by parts $\rightarrow$ flux functions (solution smoothness requirements): differences from FEM
\item linear vs non-linear flux: ramifications for semi-discrete system
\item hyperbolic vs parabolic
\item applying BCs (include periodic BCs)
\end{itemize}
\item Understand time discretization
\begin{itemize}
\item Method of lines style semi-discrete form
\item Types: e.g. Forward Euler, RK4
\item CFL condition and stability
\end{itemize}
\item Learn how to apply DG to arbitrary PDEs and realm of applicability
\begin{itemize}
\item intuitive understanding of methodology
\item conceptualization of process (not tied down to specific examples)
\item understand pros/cons
\item understand how DG “simplifies” to FVM and FEM
\end{itemize}
\item Generate runnable code of your own
\begin{itemize}
\item Self-contained set of knowledge and algorithms to be able to write a full solver
\end{itemize}
\end{itemize}
}
\subsection{Topics Layout}
\frame{\frametitle{\subsecname}
\textbf{Module 1: What is DG?}\\
DG motivation (why vs FEM, FVM, FDM)\\
Scalar conservation law (linear) PDE\\
Weak form derivation\\
Global domain vs local element\\
Multiple-valued element boundaries\\
Recall: Flux functions\\
}
\frame{\frametitle{\subsecname (cont.)}
\textbf{Module 2: A Simple 1D DG Solver}\\
Linear solution approximation\\
Test function choice (Galerkin)\\
Upwind flux\\
Mass Matrix\\
Stiffness Matrix\\
Putting it all together (linear system)\\
Semi-discrete system\\
Forward Euler\\
Investigate h-convergence\\
Investigate t-convergence\\
Investigate stability (CFL)\\
}
\frame{\frametitle{\subsecname (cont.)}
\textbf{Module 3: To Higher-Orders (nodal) }\\
\textbf{3A: Sol’n Approximation}\\
Revisit weak form\\
-Approx. space\\
-L2 Projection minimizes residual norm\\
-Test space $\rightarrow$ orthogonal\\
Monomial basis?\\
Ill-conditioning of monomials\\
Recall: Lagrange interpolation (code)\\
Derive Lagrange spatial approximation\\
Equispaced interp points?\\
Runge phenomenon\\
Why: Bernstein/Markov inequality\\
Roots of Leg instead\\
}
\frame{\frametitle{\subsecname (cont.)}
\textbf{Module 3: To Higher-Orders (nodal) }\\
\textbf{3B: Discrete System}\\
Numerical Quadrature (Gauss)\\
Hermite interpolation (2N+1 quad)\\
Truncation error/exact quadrature\\
GL Lagrange orthogonality\\
Local Mapping Fun\\
Mass Integral $\rightarrow$ diagonal/inversion\\
Log differentiation\\
Flux interpolation\\
Stiffness Integral\\
Numerical Flux (interpolated)\\
Assembly of system\\
RK4 time discretization\\
Investigate p-convergence (smoothness reqs)\\
}
\subsection{A Pedagogical Comment}
\frame{\frametitle{\subsecname}
\begin{itemize}
\item Take advantage of format: replay, pause, speed up, slow down
\item Each section may have subsections, but the overall section is intended to be a self-contained concept. The first slide of a new section has the title format \textbf{Section}: Subsection
\item Easy to "zone-out", before the start of a new section try and put what you learned into action. Make a code snippet to test your understanding or verify a claimed result etc.
\item Each Module has a larger self-contained concept. You should be able to put together a script that accomplishes something substantial.
\end{itemize}
}
%NEW SECTION
\section{DG Motivation: Why DG?}
\frame{\frametitle{\textbf{\secname}}
\begin{itemize}
\item Overall purpose: PDE models physical system, solve PDE numerically
\item Compare to common techniques FDM, FEM, FVM:
\item Increasing order of solution can be more efficient, especially for smooth functions. Most comm. packages are low order (LO)
\item FDM not explicitly conservative, extended stencil for high order (HO) a problem for unstructured grids
\item FEM not good for hyperbolic problems, discontinuities in sol'n troublesome
\item FVM constant solution of volume necessitates extended stencil for HO, ruins flexibility for unstructured grids
\item DG is explicitly conservative, well-suited for hyperbolic problems, able to handle discontinuities, and can still use unstructured grids at HO
\item Local nature of solution permits good parallelizability
\item Can use numerical flux functions to capture physical behavior
\end{itemize}
}
%NEW SECTION
\section{Example PDE: Scalar Conservation Law}
\subsection{A brief motivating example}
\frame[shrink]{\frametitle{\textbf{\secname}: \subsecname}
Imagine some scalar quantity that is subject to a conservation law
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{square.PNG}
\end{figure}
}
\subsection{General Approach}
\frame{\frametitle{\subsecname}
\begin{itemize}
\item As worked out, PDE for system in 1D is
\be \frac{\partial q}{\partial t} + \frac{\partial f(q)}{\partial x} = S(x) \ee
\item We will need to decide how we solve this system, and we will need to discretize in space and time
\item The first term will be discretized with a Forward Euler approach, the second term will be handled by DG
\item For simplicity we assume no sources, and a linear flux with an externally prescribed velocity field
\end{itemize}
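With these assumptions (and, for this added illustration, a constant velocity $c$ so that $f(q)=cq$), the model problem reduces to the linear advection equation
\be \frac{\partial q}{\partial t} + c\,\frac{\partial q}{\partial x} = 0 \ee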
}
%NEW SECTION
\section{The Weak Form of the PDE}
\frame{\frametitle{\textbf{\secname}}
\begin{itemize}
\item We will discuss the finer points of this in Module 3; for now permit the following: the weak form solution of the PDE is the integral (over the global domain $\Omega$) of our solution times some test function $\phi$:
\be \int_{\Omega} \frac{\partial q}{\partial t} \phi \,dx + \int_{\Omega} \frac{\partial f(q)}{\partial x} \phi \,dx = 0 \ee
\item In DG the global domain is split into K elements, where the local sol'n is defined for a particular element. The global sol'n is then the direct sum of each of these local polynomials, a piecewise polynomial
\item This is similar to a FEM approach, however continuity is not enforced across elements
\end{itemize}
}
%NEW SECTION
\section{Domain Decomposition: Global vs Local}
\frame{\frametitle{\textbf{\secname}}
\begin{itemize}
\item The integration domain is now over the element instead of the whole domain such that $\{x|x \in k, x_L \leq x \leq x_R\}$
\item To reduce smoothness requirements on the flux we integrate the second term by parts to get
\be \int_k\! \frac{\partial q}{\partial t} \,\phi \,dx +
[f(q)\phi] \Big\rvert_{x_L}^{x_{R}} -
\int_k\! f(q) \,\frac{d\phi}{dx} \,dx = 0 \ee
\item Notice that we include the endpoints of our domain $k$
\item Endpoints of neighboring elements are coincident, so $q$ is multiply defined, what ramifications does this have? Also, we now have K independent local solutions, how do we recover the global solution?
\end{itemize}
}
%NEW SECTION
\section{Element Boundaries: Multiply Defined?}
\frame{\frametitle{\textbf{\secname}}
\begin{itemize}
\item If $q$ is multiply defined at endpoints, what should $f(q)$ be?
\item In order to ensure conservation, the flux between elements should be equal: $-f_{k-1}(q(x_R)) = f_{k}(q(x_L))$
\item Take a FVM approach and permit a \textit{numerical flux} denoted $\hat{f}(q)$
\item The numerical flux function permits communication between elements, allowing recovery of global sol'n
\end{itemize}
}
%NEW SECTION
\section{[Recall] Flux Functions}
\frame{\frametitle{\textbf{\secname}}
\begin{itemize}
\item Flux functions describe the "flow" of some quantity depending on the quantity itself and possibly other factors
\item Permits insertion of problem specific knowledge into an otherwise agnostic PDE
\item Many choices, Lax-Friedrichs, Richtmyer, Godunov, Osher, Roe, etc.
\item For simplicity we will use the upwind flux where $\hat{f}(q^+,q^-)=cq^-$ if $c>0$ and $cq^+$ if $c<0$
\end{itemize}
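Equivalently (a compact rewriting of the two cases above), the upwind flux for this linear case can be written as
\be \hat{f}(q^+,q^-) = \frac{c}{2}\left(q^- + q^+\right) + \frac{|c|}{2}\left(q^- - q^+\right) \ee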
\begin{figure}
\centering
\includegraphics[width=.35\textwidth]{qBoundary.PNG}
\end{figure}
}
\end{document} | {
"alphanum_fraction": 0.7670923465,
"avg_line_length": 38.1158940397,
"ext": "tex",
"hexsha": "a839c7aba464a35eb20b3fdbdce1320019ffd47d",
"lang": "TeX",
"max_forks_count": 22,
"max_forks_repo_forks_event_max_datetime": "2021-12-14T09:03:21.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-04-30T00:20:42.000Z",
"max_forks_repo_head_hexsha": "34527e2fa77a193a58bd2f47f781833d70c59315",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "userjjb/22_6xx-DG-for-PDEs",
"max_forks_repo_path": "Module 1/22_6xx-Module1.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "e9463ab6faee98af26102949e04ab8be75405154",
"max_issues_repo_issues_event_max_datetime": "2016-03-20T08:48:32.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-03-15T11:50:32.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "LuciaZhang9/22_6xx-DG-for-PDEs",
"max_issues_repo_path": "Module 1/22_6xx-Module1.tex",
"max_line_length": 222,
"max_stars_count": 31,
"max_stars_repo_head_hexsha": "34527e2fa77a193a58bd2f47f781833d70c59315",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "sconde/22_6xx-DG-for-PDEs",
"max_stars_repo_path": "Module 1/22_6xx-Module1.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-20T00:41:34.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-03-15T11:40:20.000Z",
"num_tokens": 3180,
"size": 11511
} |
\chapter{Appendix}
\begin{longtable}{ | P{6cm} | P{6cm} | }
\hline
\textbf{Multi-layer perceptron} & \textbf{Radial Basis Function} \newline \\ \hline
Internal representations are distributed representations \newline &
Hidden unit responses are local with respect to the input space \newline \\ \hline
Multi-layer architecture \newline &
Three layer architecture \newline \\ \hline
Weights are determined simultaneously\newline &
Weights are determined by a clustering algorithm while the output layer is trained in a supervised manner \newline \\ \hline
Requires fewer hidden-layer units &
Requires more hidden-layer units \newline \\ \hline
\end{longtable} | {
"alphanum_fraction": 0.7124277457,
"avg_line_length": 40.7058823529,
"ext": "tex",
"hexsha": "c68650ad4274b8fb8f2d73b60cd0e5573e3424f0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fb7298936a3abafd9fe2d1063f7ef8c324b28a48",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Andyccs/neural-network-summary",
"max_forks_repo_path": "etc.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fb7298936a3abafd9fe2d1063f7ef8c324b28a48",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Andyccs/neural-network-summary",
"max_issues_repo_path": "etc.tex",
"max_line_length": 102,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "fb7298936a3abafd9fe2d1063f7ef8c324b28a48",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Andyccs/neural-network-summary",
"max_stars_repo_path": "etc.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 165,
"size": 692
} |
% !TeX root = ../report-phd-first-year.tex
% !TeX encoding = UTF-8
% !TeX spellcheck = en_GB
\section*{Research plan for the next year}
One of the main topics that will be further investigated during the next year is the analysis of assembly lines. At the moment, the work produced in \cite{biagi2017inspection} sets a good base for the investigation of analysis techniques for assembly lines, but lacks several features needed for it to be usable with real assembly lines. The first restriction that should be lifted is the lack of buffers between workstations. In real assembly lines there is usually a buffer of fixed size, so one of the first investigations should be towards the introduction of buffering capacity, considering the same capacity for every workstation or even different sizes. More performance measures should also be introduced, such as the time until a specific product is completed by the whole assembly line, or the time until the next $N$ products will be completed by the assembly line. These kinds of measures would be more complex than the ones already proposed in \cite{biagi2017inspection}, but would also allow a more accurate analysis of the assembly line. Evaluation of these additional measures should also be derived in a compositional fashion, as done already in \cite{biagi2017inspection}, in order to keep the technique computationally feasible. Lastly, a better upper bound for the measures proposed should be derived, similarly to the lower bound derived in \cite{biagi2017inspection}.
Another topic that will be investigated is that of smart drugs restocking for the \acr{LINFA} project. The work conducted at the moment is still far from being usable in a real ward scenario. This is mainly due to the problem of state space explosion: the problem to be modelled is quite complex, and the many aspects that would need to be modelled cause the state space to explode, thus rendering the model infeasible to analyse. One of the first directions is thus that of optimising the prediction model itself, in order to model more aspects while keeping the number of generated states feasible. The PRISM model checker used up until now doesn't provide any solution for state space optimisation, so alternative modelling tools might be investigated, such as the Storm tool \cite{dehnert2017storm}. Then, the restrictions currently present in the model should be lifted one by one. An example would be to introduce different healthcare protocols for different kinds of patients, including different phases with different drug consumption rates, thus further personalising the sojourn of patients in the ward. This could be done by employing techniques for process mining \cite{van2004workflow}, which allow models to be generated automatically from historical data, such as a dataset of patients from a real ward.
Lastly, the topic of \ac{AR}, especially in the scenario of \ac{AAL}, will be further investigated. The idea is to expand on the work of \cite{biagi2016stochastic,carnevali2015continuous} and on the investigation conducted in Jaén in order to enhance the \ac{AR} model. Among the possible enhancements there is the possibility of adding support for continuous sensors, such as a thermometer or an accelerometer, which is now lacking in the current model. Another important research activity for this topic is that of finding good datasets for \ac{AAL} \ac{AR} in order to apply techniques of process mining more accurately and refining the existing model, possibly following the recommendations shown in \cite{patara2015recommendations}.
\newpage
\begin{figure}[t]
\centering
\vspace{0.25cm}
\includegraphics[width=0.65\columnwidth]{corridor_ovc.png}
\caption{View from the left camera of an Open Vision
Computer. Detected AprilTags are shown in color.}
\label{fig:corridor_ovc}
\end{figure}
\section{Application Example}
\label{sec:application}
We illustrate the versatility of TagSLAM by showing how it can be used
to achieve loop closure for VIO. The synchronized images and IMU data
that serve as input for the VIO are collected with an Open Vision
Computer \cite{quigley2018}. Over the course of 13 minutes, a total of 15595
stereo frames are recorded at 20Hz along the 630m long trajectory
through the rooms of an abandoned chemical
laboratory. Fig.\ \ref{fig:corridor_ovc} shows an example image of
some of the 57 tags that are strategically placed along the
corridor. Their poses are determined from the wall orientations and
from laser distance measurements with a Leica Disto D3a. The odometry
is computed with the stereo VIO algorithm as described in
Ref. \cite{sun2018} but, running offline with abundant CPU resources
available, we use a larger number of visual feature points to reduce
drift.
Fig.\ \ref{fig:loop_closure} shows the trajectories for VIO (cyan),
loop-closed TagSLAM (magenta), and stereo ORB-SLAM2
\cite{murartal2017} (yellow). The tag locations are visible in the map
as well. All trajectories start at the same point at the bottom of
the map, but only the TagSLAM trajectory returns correctly to the
starting point. Both VIO and ORB-SLAM2 exhibit drift, and evidently
ORB-SLAM2 does not achieve loop closure. This is not surprising since
the hallway images look very different while returning. By combining
tag projection factors from the camera images with relative pose
factors from the odometry, TagSLAM by design closes the loop.
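The following minimal two-dimensional sketch illustrates this idea numerically; it is not TagSLAM's formulation (which estimates full $SE(3)$ poses with GTSAM/iSAM2), and the trajectory, noise levels, and tag positions are invented for illustration.
\begin{verbatim}
# 2-D illustration: combine noisy relative-pose (odometry) factors with
# observations of tags at known positions, and solve by least squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
T = 40
ang = np.linspace(0, 2 * np.pi, T)
true = np.column_stack([np.cos(ang), np.sin(ang)])     # loop trajectory
odom = np.diff(true, axis=0) + rng.normal(0, 0.03, (T - 1, 2)) + 0.01
tags = np.array([[1.0, 0.0], [-1.0, 0.0]])             # known tag poses
obs = [(t, k, tags[k] - true[t] + rng.normal(0, 0.01, 2))
       for t in range(0, T, 5) for k in range(len(tags))]

def residuals(flat):
    p = flat.reshape(T, 2)
    r = [(p[1:] - p[:-1] - odom).ravel()]              # odometry factors
    r += [(tags[k] - p[t] - z) / 0.01 for t, k, z in obs]  # tag factors
    r.append(p[0] - true[0])                           # anchor first pose
    return np.concatenate(r)

dead_reckoning = np.vstack([true[0], true[0] + np.cumsum(odom, axis=0)])
est = least_squares(residuals, dead_reckoning.ravel()).x.reshape(T, 2)
print("odometry-only end error:",
      np.linalg.norm(dead_reckoning[-1] - true[-1]))
print("with tag factors:       ", np.linalg.norm(est[-1] - true[-1]))
\end{verbatim}
Because the tag observations anchor the trajectory to fixed landmarks, the end-point error drops even though the odometry alone drifts.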
\begin{figure}[ht]
\centering
\includegraphics[angle=90, width=0.5\columnwidth]{loop_closure_building_227.png}
\caption{Trajectory using {\color{cyan} VIO}, {\color{magenta} TagSLAM},
and {\color{yellow} ORB-SLAM2}. The grid cell size is 1m.}
\label{fig:loop_closure}
\end{figure}
Creating and incrementally optimizing the graph upon factor insertion
takes 188s on an Intel i9-9900K 3.6GHz CPU, which is an average
performance of about 12ms per frame. A final full (non-incremental)
graph optimization adds another 4.3s to the total processing
time. While 12ms per frame seems to indicate the possibility of
running in real time, individual frames can take longer to process,
depending on iSAM2 relinearization. As the graph grows over time, so
does the CPU load, and individual frames can take as much as
260ms. However, for situations where there already is a trusted map of
tag poses available, TagSLAM can be configured to retain only the last
two time steps in the graph, making it suitable for real-time
operation.
% $ Id: conclusion.tex $
% !TEX root = main.tex
%%
\section{Conclusion}
\label{sec:conclusion}
CollabIDE is a development environment prototype for distributed teams, incorporating automated
versioning and enabling the generation of multiple software products according to the code's version
base.
CollabIDE is developed with the goal of increasing the productivity of software development teams by
reducing the time developers must dedicate to tasks other than coding; specifically targeting the
versioning and product variant deployment tasks of the software development process. CollabIDE is
based on \ac{COP} to tackle both tasks. Situations in which the development tasks take place (\eg user,
time, code extract of the modification) are defined as contexts, which can be used directly to represent
and define the versions of the system. Extraction, definition, and usage of such contexts/versions takes
place autonomously, so developers need not be distracted by these activities.
\ac{COP} enables flexible composition of software systems while they execute. Using this feature it is
possible to compose different versions as part of a fully working program variant. Composition of such
variants requires developers to select which of the contexts they want to use. Available contexts in the
project are accessible to developers within the tool itself, in order to minimize the disruption and time
spent in generating the variants.
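As a purely illustrative sketch of this composition idea (written in Python, which is not CollabIDE's implementation language, and with invented context names and behaviours), one can register per-context variations of a method and compose a product variant by selecting which contexts to activate.
\begin{verbatim}
# Generic illustration of composing a product variant from contexts.
class ContextualMethod:
    def __init__(self, base):
        self.base = base            # default behaviour
        self.variations = {}        # context name -> replacement behaviour

    def in_context(self, context):
        """Register a variation of this method for a given context."""
        def register(fn):
            self.variations[context] = fn
            return fn
        return register

    def compose(self, active_contexts):
        """Return the behaviour for the selected product variant."""
        for ctx in active_contexts:     # first active match wins
            if ctx in self.variations:
                return self.variations[ctx]
        return self.base

greet = ContextualMethod(lambda name: "Hello, " + name)

@greet.in_context("alice-localised")
def _(name):
    return "Bonjour, " + name

@greet.in_context("bob-formal")
def _(name):
    return "Good day, " + name

print(greet.compose(["alice-localised"])("world"))  # Bonjour, world
print(greet.compose(["bob-formal"])("world"))       # Good day, world
print(greet.compose([])("world"))                   # Hello, world
\end{verbatim}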
The validation of CollabIDE evidences its effectiveness in reducing the overhead of dealing with version
control or product variant generation systems, when working in a distributed software development team
or when employing the product lines development model.
Although the scale of the experiment is small, it already shows the potential that an approach like
CollabIDE has. Thanks to the flexibility of the technology used to create CollabIDE, it can easily be
extended with features that are common in other IDEs like code linting, auto-completion, support for
other programming languages and debugging among others. One important extension that could be
made is the possibility of managing various files and folders as a software development project rarely
relies on only one file. Furthermore, there are many ways the existing features can be tweaked to make
them better at approaching the problem they intend to solve. Some examples would be introducing
additional code highlighting, a code history feature that lets users navigate to a specific change made
or even letting different product variants be merged in a similar fashion to how it is done in git. Another
limitation of CollabIDE worth discussing is the maximum number of developers in a project. Currently CollabIDE supports
up to 8 developers. This is due to the need to identify each one with an easily distinguishable color.
If the number of developers were higher, it would become increasingly difficult to choose colors that can be distinguished from one another.
This limitation is worth approaching for the case of open-source projects
where there can be a high number of developers lending their contributions in single or multiple files.
As future work, we plan to conduct a more extensive case study to evaluate the relevance of CollabIDE
in larger development teams. Additionally, we see the value of the tool for code reviews, therefore we
want to extend CollabIDE with code productivity metrics (\eg Lines of Code, code clones) and analysis
facilities~\cite{lienhard12}. Such extension would enable code reviewers to better evaluate the
productivity of particular developers. CollabIDE can be particularly useful in academic/teaching
environments to quickly differentiate how much a particular student participated in an assignment, or
when the majority of work took place.
\endinput
\subsubsection{\stid{1.16} SICM}
\paragraph{Overview} \leavevmode \\
The goal of this project is to create a universal interface for discovering, managing and sharing memory within complex memory hierarchies. The result will be a memory API and a software library which implements it. These will allow operating system, runtime and application developers and vendors to access emerging memory technologies. The impact of the project will be immediate and potentially wide reaching, as developers in all areas are struggling to add support for the new memory technologies, each of which offers its own programming interface.
\paragraph{Key Challenges} \leavevmode \\
The challenge SICM addresses is how to program the deluge of existing and emerging complex memory technologies on HPC systems. This includes the High Bandwidth Memory (HBM), MCDRAM (on Intel Knights Landing), NV-DIMM, PCI-E NVM, SATA NVM, PCM, memristor, and 3Dxpoint. Also, near node technologies, such as disaggregated memory or network attached memories, have been proposed in exascale memory designs. Current practice depends on ad hoc solutions rather than a uniform API that provides the needed specificity and portability. This approach is already insufficient and future memory technologies will only exacerbate the problem by adding additional proprietary APIs.
\paragraph{Solution Strategy} \leavevmode \\
The SICM solution is to provide a unified two-tier node-level complex memory API. The target for the low-level interface are system and runtime developers, as well as expert application developers that prefer full control of what memory types the application is using.
The SICM high-level interface employs application profiling and analysis to direct data management across the complex memory hierarchy. The earlier approach is based on offline profiling and is called MemBrain. The team extended the SICM high-level interface with facilities to profile, analyze, and migrate application data within the same program run. This online approach, Figure~\ref{fig:sicm-online}, reduces user burden and achieves similar performance to the earlier offline approach after a short initial startup period.
The low-level interface is primarily an engineering and implementation project. The solution it provides is urgently needed by the HPC community; as developers work independently to support these novel memory technologies, time and effort is wasted on redundant solutions and overlapping implementations. Adoption of the software is focused on absorption into existing open source projects such as hwloc, KOKKOS, Umpire, CLANG/LLVM, OpenMP, and Jemalloc.
Additionally, SICM is developing Metall, a persistent memory allocator designed to provide developers with an API to allocate custom C++ data structures in both block-storage and byte-addressable persistent memories (e.g., NVMe and Intel Optane DC Persistent Memory) beyond a single process lifetime. Metall relies on a file-backed mmap mechanism to map a file in a filesystem into the virtual memory of an application, allowing the application to access the mapped region as if it were regular memory, which can be larger than the physical main memory of the system.
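The following minimal sketch illustrates only the file-backed mmap mechanism that Metall builds on, using Python's mmap module; Metall itself is a C++ allocator with a different interface, and the file name below is invented.
\begin{verbatim}
# Data written through a file-backed memory mapping persists in the file
# and can be mapped again by a later process.
import mmap, os, struct

PATH, SIZE = "datastore.bin", 4096

with open(PATH, "wb") as f:          # reserve space in the backing file
    f.truncate(SIZE)

with open(PATH, "r+b") as f:         # "first process": write a value
    with mmap.mmap(f.fileno(), SIZE) as m:
        m[:8] = struct.pack("<q", 42)
        m.flush()                    # push dirty pages to the file

with open(PATH, "r+b") as f:         # "second process": data still there
    with mmap.mmap(f.fileno(), SIZE) as m:
        print(struct.unpack("<q", m[:8])[0])   # -> 42

os.remove(PATH)
\end{verbatim}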
\paragraph{Recent Progress} \leavevmode
\begin{itemize}
\item Low-Level Interface: Finished the refactor of the low-level interface supporting memory arenas on different memory types. Added support for Umpire, OpenMP and KOKKOS. Investigating features needed to fully support these runtimes, currently KOKKOS and the Nalu-wind ECP application. SICM now supports Intel Optane memory, the first NVM memory that can be used as an extension of traditional DRAM memory.
Pull requests have been developed for OpenMP/CLANG/LLVM, KOKKOS and Umpire. The patches to Clang/LLVM/OpenMP turn OpenMP memory spaces in OpenMP 5.x into SICM library calls in the LLVM/OpenMP runtime. The same codepath that supports the memkind library was refactored to support multiple custom memory allocators, which is more general than just SICM support.
SICM currently supports ``pragma openmp allocate'' with memory types: omp\_ (default, large\_cap, const, high\_bw, low\_lat ) \_mem\_spaces and supports KNL, Optane, testing on Sierra/Summit. SICM continues to target the early access systems for Frontier and Aurora.
%Recently, SICM has been ported, compiled and tested on the early access system for Frontier and Aurora.
\item High-Level API: SICM has employed application profiling and analysis to direct data management across the complex memory hierarchy. The team extended the SICM high-level interface with application-directed data tiering based on the MemBrain approach, which is more effective than an unguided first-touch policy. The impact of using different data features to steer hot program data into capacity-constrained device tiers was modeled.
\item High-Level Graph Interface: For the Metall high-level interface focusing on graph applications, work has used miniVite, an ECP graph proxy application. miniVite has been modified to store and reuse graph data using Metall. Graph generation has been a bottleneck in this application. Metall can use mmap and UMap (a user-level mmap library in the Argo PowerSteering project) underneath to enhance its performance and capability.
\end{itemize}
\paragraph{Next Steps} \leavevmode
\begin{itemize}
\item Low-Level Interface: Focus on the performance of support for runtimes, adding features requested to support Umpire, OpenMP, KOKKOS, and MPI, and addressing the slow move pages implementation in the Linux kernel. Initial efforts to accelerate move pages have not been successful and progress has been hampered by the current work environment. The Linux kernel modifications for page migration are in collaboration with ECP project Argo 2.3.5.05 and the RIKEN research center in Japan. Start collaborating with applications to enable use of heterogeneous memory on ECP target platforms. Additionally, the team needs to reconnect with the hwloc team on memory topology discovery.
\item For the high-level interface analysis work, the team will adapt their approach and toolset to remove dependence on source code analysis and recompilation of the target application. The goal is to enable guided data management for arbitrary processes, even if source code is not available. The new tools will also integrate with the Linux memory manager in order to direct data management for multiple processes running on the same platform simultaneously.
\item Metall will continue its collaboration with the ExaGraph team to integrate Metall into an ExaGraph application or benchmark to enable persistent and external-memory data structure support.
\end{itemize}
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{projects/2.3.1-PMR/2.3.1.16-SICM/sicm-online-diagram.pdf}
\caption{
Data tiering with online application guidance. (a) Users first compile the application with a custom pass to insert annotations at each allocation call site, (b) Program execution proceeds inside the SICM runtime layer, which automatically profiles memory usage behavior, converts it into tier recommendations for each allocation site, and enforces these recommendations during program execution. In (b), interactions and operations drawn with dashed lines only occur at regular, timer-based intervals, while the solid lines correspond to activities that can occur throughout the program execution~\cite{olson2021online}.
\label{fig:sicm-online}
}
\end{figure}
% Created 2016-02-27 Sat 15:28
% -----------------------------*- LaTeX -*------------------------------
\documentclass[12pt]{report}
\usepackage{scribe_hgen486}
\begin{document}
\scribe{Nicholas Knoblauch} % required
\lecturenumber{11} % required, must be a number
\lecturedate{February 9} % required, omit year
\lecturer{John Novembre}
\maketitle
% please leave this comment
\framebox[.95\textwidth]{\parbox{.93\textwidth}{ {{\bf Note:}} These
lecture notes are still rough, and have only have been mildly
proofread. }}
\vspace*{.1in}
% feel free to delete content below this line
% ----------------------------------------------------------------------
% please leave this comment
\tolerance=1000
\providecommand{\alert}[1]{\textbf{#1}}
\title{HMMLec}
\author{nwknoblauch}
\date{\today}
\section{Continuous Time Markov Chains}
\label{sec-1}
\subsection{Differential equations that lead to the poisson distribution}
\label{sec-1-1}
We denote the number of events that have occurred by time $t$ as $N_t$, and write the probability as:
$P(N_t=i)=P_i(t)$
With the rate parameter $\lambda$, the probability that $N_t=i$ at $t+h$ is given by:
$P_i(t+h)=P_{i-1}(t)(\lambda h + o(h)) + P_i(t)(1-\lambda h + o(h))$
Here we are summing over the probability of two scenarios. The first represents that $N_t=i-1$ and that there was another event in time $h$. The second is that $N_t=i$ and that there was no event in time $h$. For small values of $h$ we can ignore the probability of two or more events in time $h$.
Subtracting $P_i(t)$, dividing by $h$, and taking the limit as $h$ goes to 0 gives:
$\frac{d}{dt}P_i(t) = \lim_{h \to 0}\frac{P_i(t+h)-P_i(t)}{h} = \lambda P_{i-1}(t) - \lambda P_i(t)$
For $i=0$ there is no lower state to arrive from, so $P_0(t+h)=P_0(t)(1-\lambda h + o(h))$, which gives
$\frac{d P_0(t)}{dt}= -\lambda P_0(t)$
The solution to this differential equation, with the initial condition $P_0(0)=1$, is:
$P_0(t)= e^{-\lambda t}$
For $i=1$ the same equation reads
$\frac{d P_1(t)}{dt}=\lambda P_0(t) -\lambda P_1(t)=\lambda e^{-\lambda t} -\lambda P_1(t)$
To get the solution for $P_1$ we use the integrating factor $e^{\lambda t}$:
$\frac{d P_1(t)}{dt} + \lambda P_1(t) = \lambda e^{-\lambda t}$
$\int_0^T e^{\lambda t} \frac{d P_1(t)}{dt} + \lambda e^{\lambda t} P_1(t) \, dt = \int_0^T \lambda \, dt$
Remembering the product rule, $(fg)'=f'g+g'f$, the left-hand side is the integral of a total derivative:
$$ \int_0^T \frac{d}{dt} \left( e^{\lambda t} P_1(t) \right) dt = \int_0^T \lambda \, dt
\quad\Rightarrow\quad e^{\lambda T} P_1(T) = \lambda T
\quad\Rightarrow\quad P_1(t)=\lambda t \, e^{-\lambda t} $$
(using $P_1(0)=0$).
We can keep iterating:
$P_2(t)= \frac{(\lambda t)^2}{2} e^{-\lambda t}$
And eventually we have
$P_n(t) = \frac{(\lambda t)^n}{n!} e^{-\lambda t}$
This is the Poisson process.
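As a quick sanity check of this result, one can simulate exponential inter-event times with rate $\lambda$ and compare the empirical distribution of $N_t$ with the Poisson probabilities (the values of $\lambda$ and $t$ below are arbitrary):
\begin{verbatim}
import math, random

lam, t, reps = 2.0, 3.0, 200000
rng = random.Random(0)
counts = {}
for _ in range(reps):
    s, n = 0.0, 0
    while True:
        s += rng.expovariate(lam)    # exponential inter-event time
        if s > t:
            break
        n += 1
    counts[n] = counts.get(n, 0) + 1

for n in range(5):
    empirical = counts.get(n, 0) / reps
    pmf = (lam * t) ** n / math.factorial(n) * math.exp(-lam * t)
    print(n, round(empirical, 4), round(pmf, 4))
\end{verbatim}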
\subsection{A More General Process, the Pure Birth Process}
\label{sec-1-2}
More general than the Poisson process is a ``birth process'', or ``pure birth process''.
In a pure birth process there is a state dependent rate of arrival:
$\frac{d}{dt} P_i(t) = \lambda_{i-1} P_{i-1}(t) -\lambda_i P_i(t)$
The Poisson process is a special case where $\lambda_i$ is constant.
Another example is the \textbf{linear birth process}, where $\lambda_i=i \lambda$ (this is also known as a Yule process).
Starting from $k$ individuals, $P_j(t) = {{j-1}\choose{k-1}} e^{-k \lambda t} (1-e^{-\lambda t})^{j-k}$. This is a negative binomial distribution (and is a transient solution).
Another major class is the birth-death process. We have a set of birth rates and a set of death rates ($\lambda_i$ for $i$ from 0 to infinity, and $\mu_i$ for $i$ from 1 to infinity).
$\frac{d}{dt} P_i(t) = \lambda_{i-1} P_{i-1}(t) + \mu_{i+1}P_{i+1}(t) - (\lambda_i + \mu_i)P_i(t)$
There is a linear birth-death process, with $\lambda_i=i \lambda$ and $\mu_i = i\mu$.
There is also linear birth-death with immigration
$\lambda_i = i \lambda + \theta$
$\mu_i = i \mu$
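These rates translate directly into a simple stochastic (Gillespie-style) simulation: in state $i$ the process waits an exponential time with rate $\lambda_i + \mu_i$ and then moves up or down with the corresponding probabilities. A minimal sketch with arbitrary parameter values:
\begin{verbatim}
import random

def simulate(t_max, i0=0, lam=0.5, mu=0.7, theta=1.0, seed=0):
    """Linear birth-death with immigration: lambda_i = i*lam + theta,
    mu_i = i*mu.  Returns the population size at time t_max."""
    rng = random.Random(seed)
    t, i = 0.0, i0
    while True:
        birth, death = i * lam + theta, i * mu
        t += rng.expovariate(birth + death)   # waiting time in state i
        if t > t_max:
            return i
        i += 1 if rng.random() < birth / (birth + death) else -1

print([simulate(50.0, seed=s) for s in range(5)])
\end{verbatim}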
\subsection{Continuous version of the Markov property}
\label{sec-1-3}
The future is independent of the past, given the present state:
$P(X(t+s)=j|X(s)=i,X_u=x(u),0 \leq u < s ) = P(X(t+s)=j|X(s)=i)$
\subsection{Rate matrix $Q$}
\label{sec-1-4}
We have been thinking about going from 0 to 1, but we can think in general about going from $i$ to $j$
Defining the generator matrix or Rate Matrix $Q$
$P_{i,j}(h) = q_{ij} h + o(h)$
$P_{i,i}(h)= 1- v_i h + o(h)$, where $v_i = \sum_{j \neq i} q_{ij}$
If we define $q_{ii}=-v_i$, then $Q = (q_{ij})$.
Inter-event times depend on $i$ and they are exponential with rate $v_i$
$P_{ij}=\frac{q_{ij}}{v_i}$
This looks like a discrete markov chain. There is an idea that we have a discrete markov chain embedded in a continuous markov chain
\subsubsection{$Q$ matrix for the Poisson process}
\label{sec-1-4-1}
For the Poisson process, $Q$ is infinite in both directions.
The diagonal entries are $-\lambda$. The entry one to the right of the diagonal is $\lambda$, the rate at which we arrive at the next state. (There is no return to previous states in the Poisson process.)
\subsubsection{$Q$ matrix for the birth-death process}
\label{sec-1-4-2}
The matrix goes on infinitely in both directions. Along the diagonal we have $-\lambda_0$ at $(0,0)$, followed by $-(\mu_k+\lambda_k)$ at $(k,k)$; at $(k,k-1)$ we have $\mu_k$ and at $(k,k+1)$ we have $\lambda_k$.
For $P$, we have $\frac{\mu_k}{\lambda_k+\mu_k}$ at $(k,k-1)$, at $(k,k+1)$ we have $\frac{\lambda_k}{\lambda_k+\mu_k}$, and on the diagonal we have 1 minus the two off-diagonal entries (which is 0).
$\frac{d P_{ij}(t)}{dt} = -v_j P_{ij}(t) + \sum_{k \neq j} P_{ik}(t) q_{kj}$. The sum is the probability flow of going from $i$ to some intermediate state $k$ and then jumping to $j$, and the first term is the flow out of state $j$.
$\frac{d P_t}{dt} = P_t Q$
$P_t = e^{Qt}$
If $Q$ is diagonalizable, then we can write $Q = U D U^{-1}$ where $D$ is a diagonal matrix, and then $e^{Qt} = U e^{Dt} U^{-1}$
There is a family of methods, called Krylov methods, for exponentiating matrices.
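For a finite (or truncated) state space, $P_t = e^{Qt}$ can be computed directly. A minimal sketch using SciPy's matrix exponential on a truncated birth-death generator (rates and truncation level are arbitrary); every row of the result should sum to one:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

lam, mu, n = 0.5, 0.7, 5
Q = np.zeros((n, n))
for i in range(n):
    if i + 1 < n:
        Q[i, i + 1] = lam          # birth rate
    if i - 1 >= 0:
        Q[i, i - 1] = mu           # death rate
    Q[i, i] = -Q[i].sum()          # q_ii = -v_i, so rows sum to zero

P = expm(Q * 2.0)                   # transition probabilities over t = 2
print(np.round(P, 3))
print("row sums:", np.round(P.sum(axis=1), 6))
\end{verbatim}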
\subsection{What about stationary distributions?}
\label{sec-1-5}
\subsubsection{The Global Balance Equations}
\label{sec-1-5-1}
Define $P_i$ as the stationary probability of being in state $i$ which is the limit as $t$ goes to infinity of $P_{ij}(t)$
$v_j P_j = \sum_{k \neq j} q_{kj} P_k$
This equation describes a situation where the rate out of state $j$ (weighted by the probability of being in state $j$, or the flux) is equal to the flux into state $j$
If a Continuous Time Markov Chain is time reversible, then $P_i q_{ij} = P_j q_{ji}$ and it satisfies the local balance equations. The flux from $i$ to $j$ is equal to the flux from $j$ to $i$
What are the fluxes out of and into each state?
\begin{center}
\begin{tabular}{rll}
State & Flux out & Flux in \\
\hline
0 & $\lambda_0 P_0$ & $\mu_1 P_1$ \\
1 & $(\lambda_1 + \mu_1) P_1$ & $\lambda_0 P_0 + \mu_2 P_2$ \\
2 & $(\lambda_2 + \mu_2) P_2$ & $\lambda_1 P_1 + \mu_3 P_3$ \\
$n$ & $(\lambda_n + \mu_n) P_n$ & $\lambda_{n-1} P_{n-1} + \mu_{n+1} P_{n+1}$ \\
\end{tabular}
\end{center}
For a stationary distribution, the flux out of each state must equal the flux into it; for the birth-death chain this reduces to the detailed balance equations:
$\lambda_0 P_0 = \mu_1 P_1$
$\lambda_1 P_1 = \mu_2 P_2$
$\lambda_n P_n= \mu_{n+1} P_{n+1}$
We can start solving everything in terms of $P_0$
$P_1 = \frac{\lambda_0 P_0}{\mu_1}$
$P_2 = \frac{\lambda_1}{\mu_2} P_1$, $P_3 = \frac{\lambda_2}{\mu_3} P_2$, and so on.
$P_n = \frac{\lambda_{n-1} \cdots \lambda_0}{\mu_n \cdots \mu_1} P_0$
We know that $1= P_0 + \sum_{n=1}^\infty P_n = P_0 \left( 1 + \sum_{n=1}^\infty \frac{\lambda_{n-1} \cdots \lambda_0}{\mu_n \cdots \mu_1} \right)$, which determines $P_0$.
In the birth-death model with constant rates $\lambda_n = \lambda$ and $\mu_n = \mu$ (with $\lambda < \mu$),
$P_n = \left( \frac{\lambda}{\mu} \right)^n \frac{1}{1+\sum_{i=1}^{\infty} \left( \frac{\lambda}{\mu} \right)^i}$
Even though this involves an infinite sum, it turns out that:
$P_n= \left( \frac{\lambda}{\mu} \right)^n \left( 1-\frac{\lambda}{\mu} \right)$
We know this because $\sum_{n=1}^\infty p(1-p)^{n-1} = 1$
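A quick numerical check: apply the balance recursion $P_{n+1} = (\lambda/\mu) P_n$ on a truncated chain, normalise, and compare with the closed form (the parameter values are arbitrary):
\begin{verbatim}
lam, mu, n_max = 0.5, 0.7, 200
p = [1.0]
for _ in range(n_max):
    p.append(p[-1] * lam / mu)      # detailed balance recursion
total = sum(p)
p = [x / total for x in p]

rho = lam / mu
for n in range(4):
    print(n, round(p[n], 4), round(rho ** n * (1 - rho), 4))
\end{verbatim}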
\end{document}
% \clearpage
\section*{Programming Jargon and Concept Taxonomy} % (fold)
\label{sub:concept_taxonomy}
Programming has a lot of its own jargon. As you learn to develop software it is also important that you start to learn this \emph{special language} that software developers use to discuss their programs. You will find that this terminology is used in many places. It is used in programming texts, in discussions between developers, in discussion boards, blogs, anywhere that developers are discussing software development. Having a clear understanding of this terminology will help you make the most of these resources.
The concepts in this book are closely linked to this programming terminology. To help you understand each concept, we have classified them using one of the following categories:
\begin{itemize}
\item \textbf{Artefact}: An artefact is something that you can create in your code.
\item \textbf{Action}: Actions are things that you can \emph{command} the computer to do.
\item \textbf{Term}: These are general terms, used to describe some aspect of programming or of your program.
\end{itemize}
When you are reading about the different concepts in this book you can use these classifications to help you think about how you may use the knowledge you are gaining.
\paragraph{Artefacts:} % (fold)
\label{par:artefacts}
Artefacts are things that you create in your code. Programming is a very \emph{abstract} activity, you spend most of your time working with concepts and ideas. You write text, code, that will create things within the computer when your code is run.
When you are learning about a new kind of artefact come up with ways of visualising it. It is a \textbf{thing} that you are creating with your code. Try to picture the artefact within your code. These artefacts are the basic building blocks that you have to work with. You need to be very familiar with them, how they work, and what you can do with them.
% subsubsection artefacts (end)
\paragraph{Actions:} % (fold)
\label{par:actions}
Actions get the computer to perform a task. Your actions will be coded within the \textbf{artefacts} that you create, and will define how artefacts behave when they are used. The actions themselves are commands that you issue to the computer. They are executed one at a time, and each kind of action gets the computer to carry out certain tasks.
When you are learning a new kind of action you need to see what this action does. To start with you should play with it, test it out, and see if you can understand what it is getting the computer to do. As you progress you need to start thinking about how you can sequence these actions so that the computer performs the tasks you want it to. There are only a very few kinds of actions, so it is by combining them that you can get the computer to do what you want.
% subsubsection actions (end)
\paragraph{Terms:} % (fold)
\label{par:terms}
The remaining terms are words that developers use to explain concepts. These are not things that you create, or actions that you request. These are just words that you need to \emph{know}.
When you are learning a new term you need to try to commit it to memory. Memorise the terms, try to use them in sentences, explain them to others. All of these tasks will help you understand, and remember these terms.
% subsubsection terms (end)
% subsection concept_taxonomy (end)
\chapter{First appendix}
\chapter{Second appendix}
\lipsum[1-6]
\documentclass[11,]{article}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\usepackage{xltxtra,xunicode}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase}
\newcommand{\euro}{€}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage[margin=1.0in]{geometry}
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\ifxetex
\usepackage[setpagesize=false, % page size defined by xetex
unicode=false, % unicode breaks when used with xetex
xetex]{hyperref}
\else
\usepackage[unicode=true]{hyperref}
\fi
\hypersetup{breaklinks=true,
bookmarks=true,
pdfauthor={},
pdftitle={Preprinting Microbiology},
colorlinks=true,
citecolor=blue,
urlcolor=blue,
linkcolor=magenta,
pdfborder={0 0 0}}
\urlstyle{same} % don't use monospace font for urls
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\setcounter{secnumdepth}{0}
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
%%% Change title format to be more compact
\usepackage{titling}
% Create subtitle command for use in maketitle
\newcommand{\subtitle}[1]{
\posttitle{
\begin{center}\large#1\end{center}
}
}
\setlength{\droptitle}{-2em}
\title{\textbf{Preprinting Microbiology}}
\pretitle{\vspace{\droptitle}\centering\huge}
\posttitle{\par}
\author{}
\preauthor{}\postauthor{}
\date{}
\predate{}\postdate{}
\usepackage{helvet} % Helvetica font
\renewcommand*\familydefault{\sfdefault} % Use the sans serif version of the font
\usepackage[T1]{fontenc}
\usepackage[none]{hyphenat}
\usepackage{setspace}
\doublespacing
\setlength{\parskip}{1em}
\usepackage{lineno}
\usepackage{pdfpages}
\usepackage{comment}
\begin{document}
\maketitle
\begin{center}
\vspace{25mm}
Patrick D. Schloss${^\dagger}$
\vspace{30mm}
$\dagger$ To whom correspondence should be addressed: [email protected]; Department of Microbiology and Immunology, University of Michigan, Ann Arbor, MI
\vspace{10mm}
\textbf{Format:} Perspective
\textbf{Counts:} \textasciitilde5600 words plus 50 references, 1 figure, and a 150 word abstract
\end{center}
\newpage
\linenumbers
\subsection{Abstract}\label{abstract}
The field of microbiology has experienced significant growth due to
transformative advances in technology and the influx of scientists
driven by a curiosity to understand how microbes sustain myriad
biochemical processes that maintain the Earth. With this explosion in
scientific output, a significant bottleneck has been the ability to
rapidly disseminate new knowledge to peers and the public. Preprints
have emerged as a tool that a growing number of microbiologists are
using to overcome this bottleneck. Posting preprints can help to
transparently recruit a more diverse pool of reviewers prior to
submitting to a journal for formal peer-review. Although use of
preprints is still limited in the biological sciences, early indications
are that preprints are a robust tool that can complement and enhance
peer-reviewed publications. As publishing moves to embrace advances in
internet technology, there are many opportunities for preprints and
peer-reviewed journals to coexist in the same ecosystem.
\newpage
\textbf{\emph{Background.}} A preprint is an interim research product
that is made publicly available before going through an official
peer-review process with the goals of soliciting feedback, accelerating
dissemination of results, establishing priority, and publicizing
negative results (1--5). Authors can post their manuscript to a preprint
server for others to read, share, and comment. In the 1960s, Information
Exchange Groups were the first formal attempt to broadly disseminate
paper-based preprints among physicists and biologists (6, 7). Although
the biological community's commitment to preprints waned by 1967, the
physics community persisted and eventually adopted what is now the
\emph{arXiv} (pronounced ``archive'') preprint server that was hosted at
the Los Alamos National Laboratories from 1991 to 1999 and then at
Cornell University (8). For some physicists and mathematicians, posting
a preprint to \emph{arXiv} optionally followed by submission to a
peer-reviewed journal has become a standard publication pathway.
Although \emph{arXiv} has hosted a number of computational biology
preprints, the server has not drawn widespread attention from
biologists. Among proponents of \emph{arXiv}, preprints have aided in
the development of research communication by accelerating the release of
science and helping authors reach a wider audience for critique and
establishment of priority (9). Considering the broadening adoption of
preprints among microbiologists, I sought to explore the specific uses
of and concerns regarding preprints.
\textbf{\emph{Landscape of preprint servers.}} In 2013, two preprint
servers, the \emph{bioRxiv} (pronounced ``bio-archive'') and \emph{PeerJ
Preprints}, were launched as preprint servers for biologists that would
parallel \emph{arXiv} (10). According to information provided on the
\emph{bioRxiv} and \emph{PeerJ Preprints} websites and my personal
experiences, both platforms offer similar features: preprint posting is
free; each preprint receives a digital object identifier (DOI) that
facilitates the ability to cite preprints in other scholarly work; if
the preprint is ever published, the preprint is linked to the published
version; the submission process for both options is relatively simple
allowing authors to upload a PDF version of their preprint and
supplemental materials; preprints are typically publicly available in
about 24 hours; they have built-in venues for authors to discuss their
research with people who leave comments on the manuscript; preprints
undergo a basic screening process to remove submissions with offensive
or non-scientific content; and the sites provide article-level metrics
indicating the number of times an abstract has been accessed or the
preprint has been downloaded. There are several important differences
between the two options. First, \emph{PeerJ Preprints} is a for-profit
organization and \emph{bioRxiv} is a non-profit organization sponsored
by Cold Spring Harbor Laboratory. This difference can be meaningful to
authors since some journals, including the American Society for
Microbiology (ASM) Journals, will only accept submissions that have been
posted on preprint servers hosted by not-for-profit organizations (11).
Second, preprints at \emph{PeerJ Preprints} are posted under the
Creative Commons Attribution License (CC-BY) and \emph{bioRxiv}
preprints can be posted under one of four CC-BY licenses or with no
permission for reuse. This can be relevant for authors hoping to submit
their work to \emph{Proceedings of the National Academy of Sciences} as
the journal will not consider manuscripts posted as preprints under a
CC-BY license. The NIH encourages authors to post preprints under the
CC-BY or public domain licenses (5). The flexibility of the
\emph{bioRxiv} licensing empowers authors to choose the model that best
suits them, while ensuring the rapid posting of their research results;
however, it is important to provide clear information to authors on the
legal and practical tradeoffs of each option. A cosmetic, but still
relevant difference is the layout and feel of the two websites. Compared
to the \emph{bioRxiv} site (Figure S1), the \emph{PeerJ Preprint} site
is more fluid, gives readers the ability to ``follow'' a preprint, and
provides better access to article keywords and the ability to search
preprints (Figure S2). With broader acceptance of preprints by
traditional journals, many journals, including all of the ASM journals,
have established mechanisms to directly submit manuscripts that are
posted as preprints on \emph{bioRxiv}. The only direct submission
mechanism for manuscripts submitted at \emph{PeerJ Preprint} is to the
\emph{PeerJ} journal. In many ways, preprint servers have taken on the
feel of a journal. As adoption of this approach expands, it is likely
that the features of these sites will continue to improve. It is also
worth noting that numerous other opportunities exist for other forms of
interim research products (e.g.~blog posts, videos, protocols, etc.) to
obtain DOIs that make the work citable. As these possibilities increase,
the preprint landscape risks becoming fractured.
One solution to the fracturing of the preprint landscape would be the
creation of indexing sites that allow a user to easily search for
content across multiple preprint servers. Several examples of these
efforts already exist and it is likely that these interfaces and their
ability to span the landscape will improve. For example, although Google
Scholar includes preprints hosted at \emph{bioRxiv} and \emph{PeerJ
Preprints} in their search results, PubMed and Web of Science do not. A
relatively new example of what this might look like is PrePubMed (12),
which seeks to index preprints from numerous sources. A more organized
effort is being initiated with funding through ASAPbio to create a
``Central Service'' that would aggregate preprints in the life sciences
(13). As preprint servers and other content providers begin to look and
act like traditional journals by incorporating features and interfaces,
it is important to value the strength of the preprint - that of an
interim research product that is nimble and quickly posted. It is
therefore essential to balance the requirements placed on authors for
features associated with preprints with the efficiency of the preprint
format.
\textbf{\emph{Specific challenges for microbiology.}} Although preprints
offer an efficient and novel venue for disseminating microbiology
research, there are several considerations that the scientific community
and those that oversee preprint servers must consider. It is critical
that assurances be given that policies are in place to address these
issues and that these policies are made transparent. First, attention
has to be given to dual use research of concern (DURC) since
microbiology-related research could offer insights to individuals
seeking to engage in inappropriate activities. Second, for researchers
engaging in research that involves human subjects and other vertebrates,
it is critical that assurances be made that institutional oversight
committees have been consulted and have approved of the research. Third,
there is significant concern regarding researchers disclosing potential
conflicts of interest that could affect a project's experimental design,
analysis, and interpretation of results. Finally, recent expansions in
scientific publishing have revealed numerous cases of plagiarism or
misconduct. Again, while hoping to maintain the efficiency of the
preprint format, traditional microbiology journals have screening
procedures and oversight committees that address these issues. Similar
efforts need to be implemented by preprint servers. As preprint usage
continues to expand, many of these problems may also grow, similar to the
experience of the traditional publishing industry as it has expanded.
\textbf{\emph{Acceptance of preprints by journals.}} An early
controversy encountered by researchers interested in posting their work
as preprints as a stage in disseminating their research was whether it
constituted prior publication (14). The broad consensus of the
International Committee of Medical Journal Editors and numerous journals
is that preprints do not constitute prior publication (15). This
consensus is reflected in the current policies of journals that commonly
publish microbiology research including those published by ASM, the
Microbiology Society, International Society for Microbial Ecology, PLOS,
the \emph{Proceedings of the National Academy of Science},
\emph{Science}, \emph{Nature}, \emph{Journal of Infectious Diseases},
and Cell press. Each take a generally permissive stance towards posting
of preprints prior to submission. Comprehensive lists of journals'
attitudes towards preprints are available online and are regularly
updated (16, 17). Considering the relatively fluid nature of many of
these policies and the journals' specific policies, prospective authors
should be aware of the positions taken by the journals where they may
eventually submit their work.
\textbf{\emph{Preprints and peer-review.}} The use of preprints for
citations in other scientific reports and grant proposals has recently
been called into question (18). It is important to note that the
peer-review process was adapted to the technologies and trends that have
evolved over the past 100 years. The formal peer-review system that most
journals currently use was not developed until the end of the 1800s with
the advent of typewriters and carbon paper (19). Editorial decisions
were typically made by a single person or a committee (i.e.~the
editorial board) who had an expertise that covered the scope of the
journal. As science became more specialized, new journals would form to
support and provide a source of validation to the new specialty. The
growth in science in the mid 1900s resulted in a shift from journals
struggling to find sufficient numbers of manuscript to publish to having
too many manuscripts submitted. It has been argued that the widespread
adoption of decentralized peer-review was due to the increased
specialization and to deal with the large number of manuscript
submissions (20). Peer-review did not achieve widespread use at many
journals, including the \emph{Journal of Bacteriology}, until the 1940s
and 1950s. Thus the ``tradition'' of peer-review is only 70 years old.
Given the rapid advances in communication technology and even greater
specialization within microbiology, it is worth pondering whether the
current scientific publishing system and peer-review system, in
particular, need to continue to adapt with our science.
Communicating research has traditionally been done within research group
meetings, departmental seminars, conferences, and as publications. Along
this continuum, there is an assumption that the quality of the science
has been improved because it has been vetted by more experts in the
field. The public dissemination of one's research is a critical
component of the scientific method. By describing their research,
scientists subject their work to formal and informal peer-review. Their
research is scrutinized, praised, and probed to identify questions that
help seed the next iteration of the scientific method. A common critique
of more modern approaches to publishing has been an inability to assess
the quality of the science without the validation of peer-review.
Attached to assertions of the validity of the research has been
assertions of the impact and robustness of the research. These are all
quality assessments that many acknowledge are difficult to assess by the
traditional peer-review process. This has led to some journals, most
notably \emph{PLOS ONE}, calling for referees to place a reduced
emphasis on the perceived impact or significance of the work. It has
also led to the call for replacing or complementing pre-publication
peer-review with post-publication peer-review using PubMed Commons,
PubPeer, journal-based discussion forums, F1000Research, and other
mechanisms. Alas if scientists are going to depend on post-publication
peer-review or informal methods of peer-review for documents like
preprints, they must be willing to provide constructive feedback on the
work of others.
\textbf{\emph{Preprints have the potential to change the advancement of
science.}} Preprints are often viewed as existing in a state of
scientific limbo. As noted above, they represent a formal communication,
but an interim one, not officially published. As the use of preprints
grows and scientists' perceptions of preprints matures, there are a
number of issues that will need to be addressed.
First, a common concern is that if a researcher posts their work as a
preprint, it will be ``scooped'' by another researcher and the preprint
author will lose their ability to claim primacy or their ability to
publish the work in a journal. Considering the preprint is a citable
work with a DOI, it would, in fact, be the preprint author that scooped
the second. Furthermore, a preprint could prevent getting scooped since
a preprint would indicate to others in the field that the work had
already been done, which would prevent wasted time and effort. The use
of preprints uncouples the communication of the discovery from the
relevance of the discovery, which will come later based on peer-review,
comments from other scientists at meetings or online, and eventually
citations. A growing number of scientific societies and journals,
including ASM view preprints as citable and as having a legitimate claim
to primacy (1, 21--23); however, it remains to be determined whether the
journals will stand by these policies. Some scientists worry that with
such protection a researcher can make a claim without valid data to
support their claims (3). This is possible; however, it is also the
responsibility of the scientific community to utilize the peer-review
mechanisms that are available to comment on those preprints pointing out
methodological problems or to indicate that they are speaking beyond the
data. As preprints gain broader adoption, the tension between
establishing primacy and the completeness of the preprint may test the
policies of preprint-friendly journals.
A second area of concern is whether a preprint can be used to support a
grant proposal. Given the length limitations placed on grant proposals
by funding agencies, there is a push to cite previous work to indicate a
research team's competence in an area or to provide preliminary data.
The National Institutes of Health (NIH) recently released a notice
clarifying their position on the use of preprints and synthesizing
feedback that they received as part of a request for information (5). In
this notice, the NIH indicated that preprints can be cited anywhere that
other research is cited including research plans, bibliographies,
biosketches, and progress reports. Some fear that the use of preprints
will allow scientists to circumvent page limits by posting preliminary
manuscripts (24). One would hope that both consumers of preprints and
grant proposal reviewers would be able to differentiate between someone
trying to game the system and someone that is using preprints as a
mechanism to improve their science. This would be greatly facilitated by
following the NIH recommendation of using preprints as evidence for
research progress, but providing an indication that the preprints are
not peer-reviewed publications (5). This would help review panels in
rendering their decisions and help authors substantiate their
preliminary data.
A third concern is what role preprints should have in assessing a
scientist's productivity. Clearly use of publication metrics as an
indicator of a scientist's productivity and impact is a contentious
topic without even discussing the role of preprints. Regardless, given
the propensity for researchers to list manuscripts as being ``in
preparation'' or ``in review'' on an application or curriculum vitae,
listing them instead as preprints that can be reviewed by a committee
would significantly enhance an application and a reviewer's ability to
judge the application. In fact, several funding agencies including the
NIH, Wellcome Trust, and UK Medical Research Council are encouraging fellowship
applicants to include preprints in their materials (5).
Beyond these concerns, preprints are also causing some to change their
publication goals. Some authors are explicitly stating that a preprint
will not be submitted to a journal (25). Although these authors may be a
minority of those who post preprints, such an approach may be attractive
to those who need to cite a report of a brief research communication, a
critique of another publication, or negative results. It is clear that
the adoption of preprints will challenge how scientists interact and
evaluate each other's work. There is great potential to empower
researchers by controlling when a citable piece of work is made public.
\textbf{\emph{Microbiology anecdotes.}} The peer-review editorial
process can be lengthy and adversarial. Because preprints are public and
freely available they represent a rapid and potentially collaborative
method for disseminating research. Several anecdotes from the
microbiology literature are emblematic of benefits of the rapid release
cycle that is inherent in the use of preprints.
First, preprints have proven useful for rapidly disseminating results
for disease outbreaks and new technologies. Prior to the recent Zika
virus outbreak there were approximately 50 peer-reviewed publications
that touched on the biology and epidemiology of the virus; as of April
2017 the number of Zika virus-related peer-reviewed publications was
over 2,300. During the recent outbreak, more than 150 Zika virus-related
preprints have been posted at \emph{bioRxiv}. Any manuscript that was
formally published went through several month delays in releasing
information to health care workers, the public, and scientists needing
to learn new methods to study a previously obscure virus. In contrast,
those that posted their work as a preprint were able to disseminate
their methods and results instantly. Another interesting use of
preprints to disseminate new information about Zika virus has been the
posting of a preprint describing the Zika virus outbreak in the US
Virgin Islands that will be continually updated as new data and analyses
are performed (26). Over the last several years there have also been
rapid advances in DNA sequencing technologies that have fundamentally
changed how microbial science is performed. One notable technology, the
MinIon sequencing platform from Oxford Nanopore, has received
considerable attention from researchers who have posted more than 110
preprints describing new MinIon-based methods and results to preprint
servers. For such a rapidly developing technology, the ability to share
and consume methods from other scientists has created a feed forward
effect where the technology has likely advanced at a faster rate than it
otherwise would have.
Second, preprints have proven useful for rapidly correcting the
scientific literature. On February 9, 2015, \emph{Cell Systems}
published a study that collected and analyzed metagenomic sequence data
from the New York City subway system and reported finding \emph{Yersinia
pestis} and \emph{Bacillus anthracis} (27). Because of the focus on
these two bioterrorism agents, this study generated a considerable
amount of media attention. On April 25, 2015, Petit et al. (28) posted a
preprint to Zenodo demonstrating that there was no evidence for \emph{B.
anthracis} in the dataset. On July 29, 2015, a critique was published by
\emph{Cell Systems} along with a response from the original authors
offering a correction to their manuscript (29, 30). A second anecdote of
using preprints to aid in post-publication peer-review surrounds the
publishing of a draft tardigrade genome in \emph{The Proceedings of the
National Academy of Sciences}. On November 23, 2015 a study by Boothby
et al. (31) was published online. The authors claimed that 17.5\% of the
tardigrade's genes came from bacteria, archaea, fungi, plants, and viruses. Another
group had been analyzing sequence data from a parallel tardigrade genome
sequencing project and did not observe the same result. A week later, on
December 1, 2015, the second group posted a preprint comparing the two
genome sequences and demonstrating that the exciting claims of
horizontal gene transfer were really the product of contaminants (32);
this analysis would eventually be peer-reviewed and published online by
the original journal on March 24, 2016 followed by a rebuttal by the
original authors on May 31, 2016 (33, 34). Two other analyses of the
original data were peer-reviewed and published in May 2016 and a third
was posted as a preprint on February 2, 2016 (35--37). Both of these
anecdotes underscore the value of a rapid posting cycle for correcting
errors in the scientific literature: results posted to preprint servers
corrected the record within weeks of the initial publication, whereas the
traditional peer-review path took six months in both cases. A final notable case where preprints have
accelerated the correction of the scientific record was a preprint
posted by Bik et al. (38) reporting numerous cases of image manipulation
in peer-reviewed studies. This was a case where a journal may have been
reluctant to publish the findings because it could have put the journal
in a bad light. Posting the manuscript as a preprint removed potential
conflicts of interest that could have hindered its formal publication in
a journal. After the preprint was
posted on April 20, 2016 it was peer-reviewed and published in
\emph{mBio} on June 7, 2016 (39). Instead of using preprints to react to
published papers that have been through peer-review, it would be
interesting to consider how the editorial process for these examples and
the infamous ``Arsenic Life'' paper (40) would have been different had
they initially been posted as preprints.
\textbf{\emph{Metrics for microbiology-affiliated preprints.}} To
analyze the use of preprints, I downloaded data from \emph{bioRxiv} on April
17, 2017. I chose to analyze \emph{bioRxiv} preprints because these
preprints are amenable for submission to ASM journals and there were
9,780 \emph{bioRxiv} preprints compared to the 2,911 preprints that were
available at \emph{PeerJ Preprints} on the same date. The code used to
analyze these preprints, along with the source for the rest of this
manuscript, is available as a reproducible GitHub repository at
\url{http://www.github.com/SchlossLab/Schloss_PrePrints_mBio_2017}.
Among the 9,780 preprints on bioRxiv, 483 were assigned by the authors
into the Microbiology category. One limitation of the \emph{bioRxiv}
interface is the inability to assign manuscripts to multiple categories
or to tag the content of the preprint. For example, this manuscript
could be assigned to either the Microbiology or the Scientific
Communication and Education categories. To counter this limitation, I
developed a more permissive approach that classified preprints as being
microbiology-affiliated if their title or abstract had words containing
\emph{yeast}, \emph{fung}, \emph{viral}, \emph{virus}, \emph{archaea},
\emph{bacteri}, \emph{microb}, \emph{microorganism}, \emph{pathogen}, or
\emph{protist}. I identified 1,617 additional manuscripts that I
considered microbiology-affiliated. These microbiology-affiliated
preprints were primarily assigned to the Evolutionary Biology (N=283),
Bioinformatics (N=237), or Genomics (N=231) categories.
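To make the logic of this permissive classification concrete, the
following is a minimal sketch in R (the language used for the analysis
workflow); it is illustrative only and is not the code from the
repository linked above, which should be consulted for the actual
implementation. It assumes only that titles and abstracts are available
as character vectors.
\begin{verbatim}
# Illustrative sketch of the keyword-based filter (not the repository code).
stems <- c("yeast", "fung", "viral", "virus", "archaea", "bacteri",
           "microb", "microorganism", "pathogen", "protist")
pattern <- paste(stems, collapse = "|")

# TRUE if the title or abstract contains any of the character strings above
is_micro_affiliated <- function(title, abstract) {
  grepl(pattern, tolower(paste(title, abstract)))
}

is_micro_affiliated("A novel archaeal virus", "We describe ...")  # TRUE
\end{verbatim}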
As the total number of preprints has grown exponentially since the
creation of \emph{bioRxiv}, submission of microbiology-affiliated
preprints has largely followed this growth (\textbf{Figure 1A}).
Although preprints are still relatively new, the collection of
microbiology-affiliated preprints indicates widespread experimentation
with the format and considerable geographic diversity. Reflecting the
relative novelty of preprints, 1,484 (85.5\%) of the corresponding authors
who have submitted a microbiology-affiliated preprint (N=1,735 total) have
posted only a single preprint, and 4.6\% have posted 3 or more preprints.
Corresponding authors that have posted microbiology-affiliated preprints
are from 67 countries and are primarily affiliated with institutions in
the United States (46.2\% of microbiology-affiliated preprints), United
Kingdom (12.9\%), and Germany (4.6\%). As the preprint format matures,
it will be interesting to see whether the fraction of authors that post
multiple preprints increases and whether the geographic diversity
amongst those authors is maintained.
As stated above, preprints offer researchers the opportunity to improve
the quality of their work by adding a more formal and public step to the
scientific process. Among the microbiology-affiliated preprints, 197
(9.3\%) had been commented on at least once and only 48 (2.3\%) more
than three times using the \emph{bioRxiv}-hosted commenting feature.
Although the hosted commenting is only one mechanism for peer-review,
this result was somewhat disturbing since the preprint model implicitly
depends on people's willingness to offer others feedback. In spite of
the lack of tradition within the scientific community to comment
publicly online about colleagues' research results, I am optimistic that
this will change given the possibilities of new media (e.g.~Twitter,
Facebook, blogs); the advantage of the centralized commenting is that it
is easier for the authors and others to integrate the feedback with the
preprint. It is possible that incentives for open commenting and
reviewing could shift the trend. Importantly, authors do appear to be
incorporating feedback from colleagues or editorial insights from
journals as 545 (25.9\%) microbiology-affiliated preprints were revised
at least once. Among the preprints posted prior to January 1, 2016,
31.3\% of the Microbiology category preprints, 35.6\% of the
microbiology-affiliated preprints, and 33.6\% of all preprints have been
published. As noted above, not all authors submit their preprints to
journals. This would indicate that the ``acceptance rates'' are actually
higher. Regardless, considering that these acceptance rates are higher
than those of many peer-reviewed journals (e.g.~approximately 20\% at ASM
Journals), these results dispel the critique that preprints represent
overly preliminary research.
Measuring the impact and significance of scientific research is
notoriously difficult. Using several metrics I sought to quantify the
effect that broadly defined microbiology-affiliated preprints have had
on the work of others. Using the download statistics associated with
each preprint, I found that the median number of times an abstract or
PDF had been accessed was 922 (IQR: 601 to 1446) and 301 (IQR: 155 to
549), respectively. These values represent two aspects of posting a
preprint. First, they reflect the number of times people were able to
access science before it was published. Second, they reflect the number
of times people were able to access a version of a manuscript that is
published behind a paywall. To obtain a measure of a preprint's ability
to garner attention and engage the general public, I obtained the
Altmetric Attention Score for each preprint (\textbf{Figure 1B}). The
Altmetric Attention Score measures the number of times a preprint or
paper is mentioned in social media, traditional media, Wikipedia, policy
documents, and other sources; it does not include the number of
citations (41). A higher score indicates that a preprint received more
attention. Microbiology-affiliated preprints have had a median Altmetric
Attention Score of 7.6 (IQR: 3.2 to 16.6) and those of all preprints
hosted at \emph{bioRxiv} have had a median score of 7.7 (IQR: 3.1 to
16.2). For comparison, the median Altmetric Attention Score for articles
published in \emph{mBio} since 2013 was 5.0 (IQR: 1.5 to
14.5). Of all scholarship tracked by Altmetric, the median Altmetric
Attention Score for preprints posted at \emph{bioRxiv} ranks at the 87th
percentile (IQR: 75th to 94th). A controversial, yet more traditional metric
of impact has been the number of citations an article receives. I
obtained the number of citations for the published versions of
manuscripts that were initially posted as preprints. To allow for a
comparison to traditional journals, I considered the citations for
preprints published in 2014 and 2015 as aggregated by Web of Science
(\textbf{Figure 1C}). Among the preprints that were published and could
be found in the Web of Science database, the median number of citations
was 9 (IQR: 3-19; mean: 17.1). For comparison, among the papers
published in \emph{mBio} in 2014 and 2015, the median number of
citations was 6 (IQR: 3-11; mean: 8.5). Although it is impossible to
quantify the quality or impact of research with individual metrics, it
is clear that the science presented in preprints and the publications
that result from them are accepted by the microbiology community at a
level comparable to more traditionally presented research.
\textbf{\emph{Preprints from an author's perspective.}} Posting research
as a preprint gives an author great control over when their work is made
public. Under the traditional peer-review model, an author may need to
submit and revise their work multiple times to several journals over a
long period before it is finally published. In contrast, an author can
post the preprint at the start of the process for others to consume and
comment on as it works its way through the peer-review process. A first
example illustrates the utility of preprints for improving access to
research and the quality of its reporting. In 2014, my research group
posted a preprint to \emph{PeerJ Preprints} describing a method for
sequencing 16S rRNA gene fragments using the Pacific Biosciences
sequencing platform (42). At the same time, we submitted the manuscript
for review at \emph{PeerJ}. While the manuscript was under review, we
received feedback from an academic scientist and from scientists at
Pacific Biosciences that the impact of the results could be enhanced by
using a recently released version of the sequencing chemistry. Instead
of ignoring this feedback and resubmitting the manuscript to address the
reviews, we generated new data and submitted an updated preprint a year
later with a simultaneous submission to \emph{PeerJ} that incorporated
the original reviews as well as the feedback we received from the
academic scientist and Pacific Biosciences. It was eventually published
by \emph{PeerJ} (43, 44). Since 2015, we have continued to post
manuscripts as preprints at the same time as we have submitted
manuscripts. Although the feedback on these other manuscripts has not
always been as helpful as in our initial experience, in each case we were
able to make our results public immediately rather than waiting out a
lengthy peer-review process; in one case our preprint was
available 7 months ahead of the final published version (45, 46). As
another example, I posted a preprint of the current manuscript to
\emph{bioRxiv} on February 22, 2017. I then solicited feedback on the
manuscript using social media. On March 14, 2017 I incorporated the
comments and posted a revised preprint and submitted the manuscript to
\emph{mBio}. During that time, the abstract was accessed 189 times and
the PDF was accessed 107 times. This process engaged 3 commenters on
\emph{bioRxiv}, 61 people who tweeted or re-tweeted the preprint on
Twitter, 2 people on the manuscript's GitHub repository, 1 person on a
blog, and 2 via email. Compared to the two scientists who eventually
reviewed the manuscript, this preliminary round of informal peer-review
engaged a much larger and more diverse community than would have been
possible had I foregone the posting of a preprint. By the time that the final version of the
manuscript was submitted on April 21, 2017, the preprint version of this
manuscript had an Altmetric Attention Score of 58, which placed it in the top 5\%
of all research scored by Altmetric and the abstract and PDF had been
accessed 2,152 and 512 times, respectively. Although there are concerns
regarding the quality of the science posted to a preprint server, I
contend that responsible use of preprints as a part of the scientific
process can significantly enhance the science.
\textbf{\emph{Preprints from a publisher's perspective.}} A lingering
question is what role traditional journals will have in disseminating
research if there is broad adoption of preprints. Edited peer-reviewed
journals offer and will continue to offer significant added value to a
publication. A scholarly publishing ecosystem in which preprints coexist
with journals will allow authors to gain value from the immediate
communication of their work associated with preprints and also benefit
from the peer-reviewed, professionally edited publication that
publishers can provide.
The professional copyediting, layout, and publicity that these
publishers offer are also unique features of traditional journals. An
alternative perspective is that preprints will eventually replace
traditional journals. Certainly, this is a radical perspective, but it
does serve to motivate publishers to capture the innovation
opportunities offered by preprints. By adopting preprint-friendly
policies, journals can create an attractive environment for authors. As
discussed above, a growing number of journals have created mechanisms
for authors to directly submit preprints to their journals. An example
is offered by the ASM, which earlier this year launched a new venture
through its journal \emph{mSphere}. mSphereDirect is a publication track
of the journal that capitalizes on the opportunity to couple preprints
with rigorous peer-review. mSphereDirect actively encourages authors to post
their manuscripts as preprints as part of an author-driven editorial
process where an editorial decision is rendered within five days and
publication in \emph{mSphere} follows within a month (47). As the mSphereDirect
mechanism evolves and is perhaps adopted by other journals, it will be
interesting to see whether public feedback on preprints will be used to
further streamline the editorial process. ASM is developing a new
platform, MicroNow, which will help coalesce specific communities within
the microbial sciences, further enhancing the use of preprints as well
as published articles (Stefano Bertuzzi, personal communication). In
addition to integrating preprints into the traditional editorial
process, several professional societies have also explicitly supported
citation of preprints in their other publications and recognize the
priority of preprints in the literature (21--23). These are policies
that empower authors and make specific journals more attractive. Other
practices have great potential to improve the reputation of journals. As
measured above, preprints are able to garner attention on par with
papers published in highly selective microbiology journals. Thus, it is
in a journal's best interest to recruit these preprints. Several
journals including \emph{PLOS Genetics} and
\emph{Genome Biology} have publicly stated that they scout preprints for
this purpose (48, 49). Preprints can also be viewed as a lost
opportunity for journals. A preprint that garners significant attention
may be ignored when it is finally published, bringing little additional
attention to the journal. Going forward, publishers will likely develop
many innovative approaches to benefit from incorporating preprints into
their processes; it remains to be seen whether publishers' influence will
be reduced by the widespread adoption of preprints.
\textbf{\emph{Conclusions.}} Since the first microbiology-affiliated
preprint was posted on bioRxiv in November 2013 (50), an increasing
number of microbiologists are posting their unpublished work to preprint
servers as an efficient method for disseminating their research prior to
peer-review. A number of critical concerns remain about how widespread
their adoption will be, how they will be perceived by traditional
journals and other scientists, and whether traditional peer-review will
adapt to the new scientific trends and technologies. Regardless,
preprints should offer a great opportunity for both scientists and
journals to publish high quality science.
\subsection{Acknowledgements}\label{acknowledgements}
I am grateful to Stefano Bertuzzi and Lynn Enquist for their helpful
comments on earlier versions of this manuscript and to the numerous
individuals who provided feedback on the preprint version of the
manuscript. This work was supported in part by funding from the National
Institutes of Health (P30DK034933). I appreciate the support of
Altmetric, Inc. and Thomson Reuters, who provided application programming
interface (API) access to their databases. The workflow utilized
commands in GNU make (v.3.81), GNU bash (v.4.1.2), and R (v.3.3.3).
Within R I utilized the cowplot (v.0.7.0), dplyr (v.0.5.0), ggplot2
(v.2.2.1), httr (v.1.2.1), RCurl (v.1.95-4.8), rentrez (v.1.0.4),
RJSONIO (v.1.3-0), rvest (v.0.3.2), sportcolors (v.0.0.1), and tidyr
(v.0.6.1) packages. All journal policies and information cited using
webpage links were current as of April 21, 2017.
\newpage
\textbf{Figure 1. Summary of microbiology-affiliated preprints since the
creation of} \textbf{\emph{bioRxiv.}} The total number of preprints
posted in each quarter through March 31, 2017 has largely tracked the
overall submission of preprints to \emph{bioRxiv} (A). The Altmetric
Attention Scores of preprints posted to \emph{bioRxiv} are similar to
those of papers published in \emph{mBio} since November 2013, indicating
that preprints engender a similar level of attention (B). The number of
times that preprints published in 2014 and 2015 have been cited is
similar to the number of citations for papers published in \emph{mBio} in
2014 and 2015, indicating that published preprints are frequently cited (C). Regions
with common background shading in A are from the same year. The vertical
lines in B and C indicate the median Altmetric Attention Score and the
median number of citations.
\textbf{Supplemental Figure 1. Screen shot of the preprint for this
manuscript at bioRxiv.}
\textbf{Supplemental Figure 2. Screen shot of a preprint by the author
hosted at PeerJ Preprints.}
\newpage
\subsection{References}\label{references}
1. \textbf{Vale RD}. 2015. Accelerating scientific publication in
biology. Proceedings of the National Academy of Sciences
\textbf{112}:13439--13446.
doi:\href{http://dx.doi.org/10.1073/pnas.1511912112}{10.1073/pnas.1511912112}.
2. \textbf{Desjardins-Proulx P}, \textbf{White EP}, \textbf{Adamson JJ},
\textbf{Ram K}, \textbf{Poisot T}, \textbf{Gravel D}. 2013. The case for
open preprints in biology. PLoS Biology \textbf{11}:e1001563.
doi:\href{http://dx.doi.org/10.1371/journal.pbio.1001563}{10.1371/journal.pbio.1001563}.
3. \textbf{Berg JM}, \textbf{Bhalla N}, \textbf{Bourne PE},
\textbf{Chalfie M}, \textbf{Drubin DG}, \textbf{Fraser JS},
\textbf{Greider CW}, \textbf{Hendricks M}, \textbf{Jones C},
\textbf{Kiley R}, \textbf{King S}, \textbf{Kirschner MW},
\textbf{Krumholz HM}, \textbf{Lehmann R}, \textbf{Leptin M},
\textbf{Pulverer B}, \textbf{Rosenzweig B}, \textbf{Spiro JE},
\textbf{Stebbins M}, \textbf{Strasser C}, \textbf{Swaminathan S},
\textbf{Turner P}, \textbf{Vale RD}, \textbf{VijayRaghavan K},
\textbf{Wolberger C}. 2016. Preprints for the life sciences. Science
\textbf{352}:899--901.
doi:\href{http://dx.doi.org/10.1126/science.aaf9133}{10.1126/science.aaf9133}.
4. \textbf{Bhalla N}. 2016. Has the time come for preprints in biology?
Molecular Biology of the Cell \textbf{27}:1185--1187.
doi:\href{http://dx.doi.org/10.1091/mbc.e16-02-0123}{10.1091/mbc.e16-02-0123}.
5. Reporting preprints and other interim research products.
\url{https://grants.nih.gov/grants/guide/notice-files/NOT-OD-17-050.html}.
6. 1966. Preprints galore. Nature \textbf{211}:897--898.
doi:\href{http://dx.doi.org/10.1038/211897a0}{10.1038/211897a0}.
7. \textbf{Till JE}. 2001. Predecessors of preprint servers. Learned
Publishing \textbf{14}:7--13.
doi:\href{http://dx.doi.org/10.1087/09531510125100214}{10.1087/09531510125100214}.
8. \textbf{Ginsparg P}. 2011. ArXiv at 20. Nature \textbf{476}:145--147.
doi:\href{http://dx.doi.org/10.1038/476145a}{10.1038/476145a}.
9. \textbf{Shuai X}, \textbf{Pepe A}, \textbf{Bollen J}. 2012. How the
scientific community reacts to newly submitted preprints: Article
downloads, twitter mentions, and citations. PLoS ONE \textbf{7}:e47523.
doi:\href{http://dx.doi.org/10.1371/journal.pone.0047523}{10.1371/journal.pone.0047523}.
10. \textbf{Callaway E}. 2013. Biomedical journal and publisher hope to
bring preprints to life. Nature Medicine \textbf{19}:512--512.
doi:\href{http://dx.doi.org/10.1038/nm0513-512}{10.1038/nm0513-512}.
11. mBio instructions to authors.
\url{http://mbio.asm.org/site/misc/journal-ita_edi.xhtml}.
12. PrePubMed. \url{http://www.prepubmed.org}.
13. ASAPbio awarded \$1 million from Helmsley Charitable Trust for
next-generation life sciences preprint infrastructure.
\url{http://asapbio.org/feb23}.
14. \textbf{Angell M}, \textbf{Kassirer JP}. 1991. The Ingelfinger rule
revisited. New England Journal of Medicine \textbf{325}:1371--1373.
doi:\href{http://dx.doi.org/10.1056/nejm199111073251910}{10.1056/nejm199111073251910}.
15. Recommendations for the conduct, reporting, editing, and publication
of scholarly work in medical journals.
\url{http://www.icmje.org/news-and-editorials/icmje-recommendations_annotated_dec16.pdf}.
16. List of academic journals by preprint policy.
\url{https://en.wikipedia.org/wiki/List_of_academic_journals_by_preprint_policy}.
17. Publisher copyright policies \& self-archiving.
\url{http://www.sherpa.ac.uk/romeo/index.php}.
18. \textbf{Drubin DG}. 2016. The mismeasure of scientific research
articles and why MBoC quickly embraced preprints. Molecular Biology of
the Cell \textbf{27}:3181--3182.
doi:\href{http://dx.doi.org/10.1091/mbc.e16-09-0651}{10.1091/mbc.e16-09-0651}.
19. \textbf{Spier R}. 2002. The history of the peer-review process.
Trends in Biotechnology \textbf{20}:357--358.
doi:\href{http://dx.doi.org/10.1016/s0167-7799(02)01985-6}{10.1016/s0167-7799(02)01985-6}.
20. \textbf{Burnham JC}. 1990. The evolution of editorial peer review.
JAMA: The Journal of the American Medical Association \textbf{263}:1323.
doi:\href{http://dx.doi.org/10.1001/jama.1990.03440100023003}{10.1001/jama.1990.03440100023003}.
21. \textbf{Pulverer B}. 2016. Preparing for preprints. The EMBO Journal
\textbf{35}:2617--2619.
doi:\href{http://dx.doi.org/10.15252/embj.201670030}{10.15252/embj.201670030}.
22. \textbf{Loew LM}. 2016. Peer review and bioRxiv. Biophysical Journal
\textbf{111}:E01--E02.
doi:\href{http://dx.doi.org/10.1016/j.bpj.2016.06.035}{10.1016/j.bpj.2016.06.035}.
23. \textbf{Vale RD}, \textbf{Hyman AA}. 2016. Priority of discovery in
the life sciences. eLife \textbf{5}.
doi:\href{http://dx.doi.org/10.7554/elife.16931}{10.7554/elife.16931}.
24. Interim research product RFI response from FASEB.
\url{http://www.faseb.org/Portals/2/PDFs/opa/2016/Interim\%20Research\%20Product\%20RFI.pdf}.
25. \textbf{Chawla DS}. 2017. When a preprint becomes the final paper.
Nature.
doi:\href{http://dx.doi.org/10.1038/nature.2017.21333}{10.1038/nature.2017.21333}.
26. \textbf{Black A}, \textbf{Potter B}, \textbf{Dudas G},
\textbf{Feldstein L}, \textbf{Grubaugh ND}, \textbf{Andersen KG},
\textbf{Ellis BR}, \textbf{Ellis EM}, \textbf{Bedford T}. 2017. Genetic
characterization of the Zika virus epidemic in the US Virgin Islands.
bioRxiv 113100.
27. \textbf{Afshinnekoo E}, \textbf{Meydan C}, \textbf{Chowdhury S},
\textbf{Jaroudi D}, \textbf{Boyer C}, \textbf{Bernstein N},
\textbf{Maritz JM}, \textbf{Reeves D}, \textbf{Gandara J},
\textbf{Chhangawala S}, \textbf{Ahsanuddin S}, \textbf{Simmons A},
\textbf{Nessel T}, \textbf{Sundaresh B}, \textbf{Pereira E},
\textbf{Jorgensen E}, \textbf{Kolokotronis S-O}, \textbf{Kirchberger N},
\textbf{Garcia I}, \textbf{Gandara D}, \textbf{Dhanraj S},
\textbf{Nawrin T}, \textbf{Saletore Y}, \textbf{Alexander N},
\textbf{Vijay P}, \textbf{H{é}naff EM}, \textbf{Zumbo P}, \textbf{Walsh
M}, \textbf{O'Mullan GD}, \textbf{Tighe S}, \textbf{Dudley JT},
\textbf{Dunaif A}, \textbf{Ennis S}, \textbf{O'Halloran E},
\textbf{Magalhaes TR}, \textbf{Boone B}, \textbf{Jones AL}, \textbf{Muth
TR}, \textbf{Paolantonio KS}, \textbf{Alter E}, \textbf{Schadt EE},
\textbf{Garbarino J}, \textbf{Prill RJ}, \textbf{Carlton JM},
\textbf{Levy S}, \textbf{Mason CE}. 2015. Geospatial resolution of human
and bacterial diversity with city-scale metagenomics. Cell Systems
\textbf{1}:72--87.
doi:\href{http://dx.doi.org/10.1016/j.cels.2015.01.001}{10.1016/j.cels.2015.01.001}.
28. \textbf{Petit III RA}, \textbf{Ezewudo M}, \textbf{Joseph SJ},
\textbf{Read TD}. 2015. Searching for anthrax in the New York City
subway metagenome.
doi:\href{http://dx.doi.org/10.5281/zenodo.17158}{10.5281/zenodo.17158}.
29. \textbf{Ackelsberg J}, \textbf{Rakeman J}, \textbf{Hughes S},
\textbf{Petersen J}, \textbf{Mead P}, \textbf{Schriefer M},
\textbf{Kingry L}, \textbf{Hoffmaster A}, \textbf{Gee JE}. 2015. Lack of
evidence for plague or anthrax on the New York City subway. Cell Systems
\textbf{1}:4--5.
doi:\href{http://dx.doi.org/10.1016/j.cels.2015.07.008}{10.1016/j.cels.2015.07.008}.
30. \textbf{Afshinnekoo E}, \textbf{Meydan C}, \textbf{Chowdhury S},
\textbf{Jaroudi D}, \textbf{Boyer C}, \textbf{Bernstein N},
\textbf{Maritz JM}, \textbf{Reeves D}, \textbf{Gandara J},
\textbf{Chhangawala S}, \textbf{Ahsanuddin S}, \textbf{Simmons A},
\textbf{Nessel T}, \textbf{Sundaresh B}, \textbf{Pereira E},
\textbf{Jorgensen E}, \textbf{Kolokotronis S-O}, \textbf{Kirchberger N},
\textbf{Garcia I}, \textbf{Gandara D}, \textbf{Dhanraj S},
\textbf{Nawrin T}, \textbf{Saletore Y}, \textbf{Alexander N},
\textbf{Vijay P}, \textbf{H{é}naff EM}, \textbf{Zumbo P}, \textbf{Walsh
M}, \textbf{O'Mullan GD}, \textbf{Tighe S}, \textbf{Dudley JT},
\textbf{Dunaif A}, \textbf{Ennis S}, \textbf{O'Halloran E},
\textbf{Magalhaes TR}, \textbf{Boone B}, \textbf{Jones AL}, \textbf{Muth
TR}, \textbf{Paolantonio KS}, \textbf{Alter E}, \textbf{Schadt EE},
\textbf{Garbarino J}, \textbf{Prill RJ}, \textbf{Carlton JM},
\textbf{Levy S}, \textbf{Mason CE}. 2015. Modern methods for delineating
metagenomic complexity. Cell Systems \textbf{1}:6--7.
doi:\href{http://dx.doi.org/10.1016/j.cels.2015.07.007}{10.1016/j.cels.2015.07.007}.
31. \textbf{Boothby TC}, \textbf{Tenlen JR}, \textbf{Smith FW},
\textbf{Wang JR}, \textbf{Patanella KA}, \textbf{Nishimura EO},
\textbf{Tintori SC}, \textbf{Li Q}, \textbf{Jones CD}, \textbf{Yandell
M}, \textbf{Messina DN}, \textbf{Glasscock J}, \textbf{Goldstein B}.
2015. Evidence for extensive horizontal gene transfer from the draft
genome of a tardigrade. Proceedings of the National Academy of Sciences
\textbf{112}:15976--15981.
doi:\href{http://dx.doi.org/10.1073/pnas.1510461112}{10.1073/pnas.1510461112}.
32. \textbf{Koutsovoulos G}, \textbf{Kumar S}, \textbf{Laetsch DR},
\textbf{Stevens L}, \textbf{Daub J}, \textbf{Conlon C}, \textbf{Maroon
H}, \textbf{Thomas F}, \textbf{Aboobaker A}, \textbf{Blaxter M}. 2016.
No evidence for extensive horizontal gene transfer in the genome of the
tardigrade Hypsibius dujardini. bioRxiv 033464.
doi:\href{http://dx.doi.org/10.1101/033464}{10.1101/033464}.
33. \textbf{Boothby TC}, \textbf{Goldstein B}. 2016. Reply to Bemm et
al. and Arakawa: Identifying foreign genes in independent Hypsibius
dujardini genome assemblies. Proceedings of the National Academy of
Sciences \textbf{113}:E3058--E3061.
doi:\href{http://dx.doi.org/10.1073/pnas.1601149113}{10.1073/pnas.1601149113}.
34. \textbf{Koutsovoulos G}, \textbf{Kumar S}, \textbf{Laetsch DR},
\textbf{Stevens L}, \textbf{Daub J}, \textbf{Conlon C}, \textbf{Maroon
H}, \textbf{Thomas F}, \textbf{Aboobaker AA}, \textbf{Blaxter M}. 2016.
No evidence for extensive horizontal gene transfer in the genome of the
tardigrade Hypsibius dujardini. Proceedings of the National Academy of
Sciences \textbf{113}:5053--5058.
doi:\href{http://dx.doi.org/10.1073/pnas.1600338113}{10.1073/pnas.1600338113}.
35. \textbf{Bemm F}, \textbf{Wei{ß} CL}, \textbf{Schultz J},
\textbf{Förster F}. 2016. Genome of a tardigrade: Horizontal gene
transfer or bacterial contamination? Proceedings of the National Academy
of Sciences \textbf{113}:E3054--E3056.
doi:\href{http://dx.doi.org/10.1073/pnas.1525116113}{10.1073/pnas.1525116113}.
36. \textbf{Arakawa K}. 2016. No evidence for extensive horizontal gene
transfer from the draft genome of a tardigrade. Proceedings of the
National Academy of Sciences \textbf{113}:E3057--E3057.
doi:\href{http://dx.doi.org/10.1073/pnas.1602711113}{10.1073/pnas.1602711113}.
37. \textbf{Delmont TO}, \textbf{Eren AM}. 2016. Identifying
contamination with advanced visualization and analysis practices:
metagenomic approaches for eukaryotic genome assemblies.
doi:\href{http://dx.doi.org/10.7287/peerj.preprints.1695v1}{10.7287/peerj.preprints.1695v1}.
38. \textbf{Bik EM}, \textbf{Casadevall A}, \textbf{Fang FC}. 2016. The
prevalence of inappropriate image duplication in biomedical research
publications. bioRxiv 049452.
doi:\href{http://dx.doi.org/10.1101/049452}{10.1101/049452}.
39. \textbf{Bik EM}, \textbf{Casadevall A}, \textbf{Fang FC}. 2016. The
prevalence of inappropriate image duplication in biomedical research
publications. mBio \textbf{7}:e00809--16.
doi:\href{http://dx.doi.org/10.1128/mbio.00809-16}{10.1128/mbio.00809-16}.
40. \textbf{Wolfe-Simon F}, \textbf{Blum JS}, \textbf{Kulp TR},
\textbf{Gordon GW}, \textbf{Hoeft SE}, \textbf{Pett-Ridge J},
\textbf{Stolz JF}, \textbf{Webb SM}, \textbf{Weber PK}, \textbf{Davies
PCW}, \textbf{Anbar AD}, \textbf{Oremland RS}. 2010. A bacterium that
can grow by using arsenic instead of phosphorus. Science
\textbf{332}:1163--1166.
doi:\href{http://dx.doi.org/10.1126/science.1197258}{10.1126/science.1197258}.
41. How is the Altmetric Attention Score calculated?
\url{https://help.altmetric.com/support/solutions/articles/6000060969-how-is-the-altmetric-attention-score-calculated-}.
42. \textbf{Schloss PD}, \textbf{Westcott SL}, \textbf{Jenior ML},
\textbf{Highlander SK}. 2015. Sequencing 16S rRNA gene fragments using
the PacBio SMRT DNA sequencing system.
doi:\href{http://dx.doi.org/10.7287/peerj.preprints.778v1}{10.7287/peerj.preprints.778v1}.
43. \textbf{Schloss PD}, \textbf{Jenior ML}, \textbf{Koumpouras CC},
\textbf{Westcott SL}, \textbf{Highlander SK}. 2016. Sequencing 16S rRNA
gene fragments using the PacBio SMRT DNA sequencing system.
doi:\href{http://dx.doi.org/10.7287/peerj.preprints.778v2}{10.7287/peerj.preprints.778v2}.
44. \textbf{Schloss PD}, \textbf{Jenior ML}, \textbf{Koumpouras CC},
\textbf{Westcott SL}, \textbf{Highlander SK}. 2016. Sequencing 16S rRNA
gene fragments using the PacBio SMRT DNA sequencing system. PeerJ
\textbf{4}:e1869.
doi:\href{http://dx.doi.org/10.7717/peerj.1869}{10.7717/peerj.1869}.
45. \textbf{Baxter NT}, \textbf{Koumpouras CC}, \textbf{Rogers MA},
\textbf{Ruffin MT}, \textbf{Schloss P}. 2016. DNA from fecal
immunochemical test can replace stool for microbiota-based colorectal
cancer screening. bioRxiv 048389.
doi:\href{http://dx.doi.org/10.1101/048389}{10.1101/048389}.
46. \textbf{Baxter NT}, \textbf{Koumpouras CC}, \textbf{Rogers MAM},
\textbf{Ruffin MT}, \textbf{Schloss PD}. 2016. DNA from fecal
immunochemical test can replace stool for detection of colonic lesions
using a microbiota-based model. Microbiome \textbf{4}.
doi:\href{http://dx.doi.org/10.1186/s40168-016-0205-y}{10.1186/s40168-016-0205-y}.
47. \textbf{Imperiale MJ}, \textbf{Shenk T}, \textbf{Bertuzzi S}. 2016.
mSphereDirect: Author-initiated peer review of manuscripts. mSphere
\textbf{1}:e00307--16.
doi:\href{http://dx.doi.org/10.1128/msphere.00307-16}{10.1128/msphere.00307-16}.
48. \textbf{Vence T}. 2017. Journals seek out preprints. TheScientist.
49. \textbf{Barsh GS}, \textbf{Bergman CM}, \textbf{Brown CD},
\textbf{Singh ND}, \textbf{Copenhaver GP}. 2016. Bringing PLOS genetics
editors to preprint servers. PLOS Genetics \textbf{12}:e1006448.
doi:\href{http://dx.doi.org/10.1371/journal.pgen.1006448}{10.1371/journal.pgen.1006448}.
50. \textbf{Hockett KL}, \textbf{Nishimura MT}, \textbf{Karlsrud E},
\textbf{Dougherty K}, \textbf{Baltrus DA}. 2013. Interactions between
genome architecture and virulence genes in Pseudomonas syringae, strain
CC1557 as a model. bioRxiv 000869.
\end{document}
\chapter{Additional Proofs}
In this appendix we provide proofs of incidental statements made throughout this thesis.
These proofs are provided in the interest of completeness.
In the introduction, it was stated that the word problem for any finitely presented group is recursively enumerable.
This is a well-known fact which we prove as follows.
\begin{proposition}\label{prop:appendix/wp-fp}
Finitely presented groups have recursively enumerable word problems.
\end{proposition}
\begin{proof}
Let $G$ be a group with presentation $\left\langle X \mid R \right\rangle$ where $X$ and $R$ are both finite.
We then see that a word $w \in X^*$ is in the word problem if and only if we have
\[
w
=_{F_X}
\prod_{i=0}^n u_i r_i^{\delta_i} u_i^{-1}
\]
for some $n \in \mathbb{N}$ where each $u_i \in X^*$, $\delta_i \in \{-1,1\}$ and $r_i \in R$.
Notice here that `$=_{F_X}$' denotes that the left and right-hand sides are the same word after free-reduction is performed, that is, they are equivalent if viewed as elements of the free group.
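For example, with the presentation $\left\langle a,b \mid aba^{-1}b^{-1} \right\rangle$ of $\mathbb{Z}^2$ and the single relator $r = aba^{-1}b^{-1}$, the word $w = a^2 b a^{-2} b^{-1}$ lies in the word problem since
\[
	a^2 b a^{-2} b^{-1}
	=_{F_X}
	\left( a \, r \, a^{-1} \right) r .
\]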
We may then construct a Turing machine $M$ which takes a word $w \in X^*$ as input, then iterates through the set of all finite products $\prod_{i=0}^n u_i r_i^{\delta_i} u_i^{-1}$.
At each iteration, the machine compares the word $w$ with the result of the product, with both words freely reduced.
The machine then terminates and accepts only if the two words are equal.
We see that the machine $M$ accepts a word $w \in X^*$ if and only if it lies within the word problem $\WP_X$, that is, membership in the word problem $\WP_X$ is \emph{semi-decidable}.
It is well known that a problem is semi-decidable if and only if the set of all accepted words (in this case the word problem) is recursively enumerable.
This can be proven by constructing a machine which checks all words in parallel using a technique known as \emph{dovetailing} (see Theorem 20.8 on p.~441 of \cite{rich2007} for a proof of this fact).
\end{proof}
In \cref{sec:linear-constraints} we gave an example of a linearly constrained language, and provided its generating function.
In the following we show that this generating function is holonomic by explicitly constructing a system of linear differential equations which it satisfies.
\begin{proposition}\label{prop:appendix/holonomic-function}
The multivariate power series
\[
f(x,y,z) =
\sum_{n \in \mathbb{N}}
\frac{(3n)!}{(n!)^3}
x^n y^n z^n
\]
satisfies the differential equations
\begin{equation}\label{eq:appendix/holonomic-differential-eq}
\left.
\begin{aligned}
(x^2 - 27 x^3 y z) \partial_x^2 f(x,y,z)
+ (x - 54 x^2 y z) \partial_x f(x,y,z)
- 6 x y z \, f(x,y,z)
&= 0
\\
(y^2 - 27 x y^3 z) \partial_y^2 f(x,y,z)
+ (y - 54 x y^2 z) \partial_y f(x,y,z)
- 6 x y z \, f(x,y,z)
&= 0
\\
(z^2 - 27 x y z^3) \partial_z^2 f(x,y,z)
+ (z - 54 x y z^2) \partial_z f(x,y,z)
- 6 x y z \, f(x,y,z)
&= 0.
\end{aligned}
\right\}
\end{equation}
Thus, $f(x,y,z)$ is holonomic.
\end{proposition}
\begin{proof}
Notice that the system of differential equations in \eqref{eq:appendix/holonomic-differential-eq} is equivalent to
\begin{equation}\label{eq:appendix/holonomic-differential-eq2}
\left.
\begin{aligned}
x^2 \partial_x^2 f(x,y,z)
+ x \partial_x f(x,y,z)
-27 x^2 \partial_x^2 (x y z \, f(x,y,z))
- 6 x y z \, f(x,y,z)
&= 0
\\
y^2 \partial_y^2 f(x,y,z)
+ y \partial_y f(x,y,z)
-27 y^2 \partial_y^2 (x y z \, f(x,y,z))
- 6 x y z \, f(x,y,z)
&= 0
\\
z^2 \partial_z^2 f(x,y,z)
+ z \partial_z f(x,y,z)
-27 z^2 \partial_z^2 (x y z \, f(x,y,z))
- 6 x y z \, f(x,y,z)
&= 0.
\end{aligned}
\right\}
\end{equation}
This can be shown using the \emph{product rule} of differentiation.
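Indeed, applying the product rule to the third term of each line gives, for example,
\[
	x^2 \partial_x^2 \big( x y z \, f(x,y,z) \big)
	=
	x^3 y z \, \partial_x^2 f(x,y,z)
	+ 2 x^2 y z \, \partial_x f(x,y,z),
\]
so the first line of \eqref{eq:appendix/holonomic-differential-eq2} expands to the first line of \eqref{eq:appendix/holonomic-differential-eq}; the remaining two lines follow from the analogous computations in $y$ and $z$.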
Let
\[
f(x,y,z) =
\sum_{n \in \mathbb{N}}
\frac{(3n)!}{(n!)^3}
x^n y^n z^n,
\]
then
\begin{align*}
x \partial_x f(x,y,z)
&=
\sum_{n=1}^\infty
\frac{(3n)!}{(n!)^3}
\,n\,
x^n y^n z^n,
\\
x^2 \partial_x^2 f(x,y,z)
&=
\sum_{n=1}^\infty
\frac{(3n)!}{(n!)^3}
\,n(n-1)\,
x^n y^n z^n,
\\
xyz \, f(x,y,z)
&=
\sum_{n=1}^\infty
\frac{(3(n-1))!}{((n-1)!)^3}
\,
x^n y^n z^n,\text{ and}
\\
x^2 \partial_x^2 (xyz \, f(x,y,z))
&=
\sum_{n=1}^\infty
\frac{(3(n-1))!}{((n-1)!)^3}
\,n(n-1)\,
x^n y^n z^n.
\end{align*}
We then see that
\[
x^2 \partial_x^2 f(x,y,z)
+ x \partial_x f(x,y,z)
-27 x^2 \partial_x^2 (x y z \, f(x,y,z))
- 6 x y z \, f(x,y,z)
=
\sum_{n=1}^\infty c_n x^n y^n z^n
\]
where each
\begin{align*}
c_n
&=
n^2
\frac{(3n)!}{(n!)^3}
-
(27n^2 - 27n + 6)
\frac{(3(n-1))!}{((n-1)!)^3}
\\
&=
\frac{1}{n}
\left[
n^3
\frac{(3n)!}{(n!)^3}
-
3n(3n-1)(3n-2)
\frac{(3(n-1))!}{((n-1)!)^3}
\right].
\end{align*}
Moreover, we see that each $c_n = 0$ as
\[
\frac{(3n)!}{(n!)^3}
=
\frac{3n(3n-1)(3n-2)}{n^3}
\cdot
\frac{(3(n-1))!}{((n-1)!)^3}.
\]
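As a quick check, at $n = 1$ this gives $c_1 = 1 \cdot 6 - 6 \cdot 1 = 0$, and at $n = 2$ it gives $c_2 = 4 \cdot 90 - 60 \cdot 6 = 0$.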
Thus, we have
\[
x^2 \partial_x^2 f(x,y,z)
+ x \partial_x f(x,y,z)
-27 x^2 \partial_x^2 (x y z \, f(x,y,z))
- 6 x y z \, f(x,y,z)
=
0.
\]
The proofs of the other two differential equations in \eqref{eq:appendix/holonomic-differential-eq2} are the same.
\end{proof}
\documentclass[fleqn, final]{../styles/unmphythesis}
\usepackage{../styles/qxd}
\renewcommand{\thechapter}{5}
%\newcommand{\thechapter}{1}
\makeindex
\begin{document}
%<*birefringenceprotocol>
\chapter[Dispersive response theory, QND measurement and spin squeezing]{Dispersive response theory with applications to quantum nondemolition measurement and spin squeezing of atoms near a nanofiber}\label{chap:birefringence}
\section{Introduction}
Strong coupling between atoms and photons is at the heart of many quantum information processing protocols including efficient generation of remote entanglement \cite{Duan2001, Julsgaard2001Experimental}, quantum data storage and retrieval \cite{ Eisaman2005Electromagnetically}, and QND measurements \cite{Eckert2008Quantum}.
From a general perspective, strong coupling arises when atoms radiate predominantly into the electromagnetic field mode that defines the quantum atom-light interface.
For an individual atom, the strong coupling regime is attained via the Purcell effect, whereby the boundary conditions of nearby dielectrics and/or conductors enhance radiation into a desired mode relative to all other modes.
This can be achieved with Fabry-Perot cavities (cavity QED) \cite{Miller2005} and/or via nanophotonic structures engineered such that the radiation is predominantly into a specified mode \cite{MangaRao2007Single, Hakuta2012, Hung2013}. The Purcell enhancement factors for emission into a guided or cavity mode scale respectively as $ \Gamma_{\oneD}/\Gamma_{\vac} \sim \sigma_0/A$ and $\Gamma_{\cav}/\Gamma_{\vac} \sim Q \lambda^3/V \sim \mathcal{F} \sigma_0/A$. Here $\Gamma_{\vac}$ is the free space spontaneous emission rate, $\sigma_0 \propto \lambda^2$ is the resonant absorption cross section, $Q$, $V$, and $\mathcal{F}$, are the cavity quality factor, volume, and finesse respectively, and $A$ is the effective area of the cavity or guided mode that couples to the atom. The strongest coupling occurs on resonance, and thus much effort has been devoted to developing the largest possible $\Gamma_{\cav}$ and $\Gamma_{\oneD}$ through ultra-high-$Q$, small-volume resonators \cite{Raimond2001Manipulating, Wallraff2004Strong, Miller2005} and through nanophotonic plasmonic \cite{Dzsotjan2010, Tame2013Quantum}, metamaterial \cite{Yao2009}, and dielectric \cite{Hung2013, Goban2014} waveguides.
In free space, where there is no Purcell enhancement, strong coupling can be achieved via the {\em cooperativity} of atomic ensembles.
This is most naturally implemented in a dispersive regime, off resonance, where light elastically scattered from the ensemble constructively interferes to match the mode of an exciting paraxial probe \cite{Baragiola2014}. The cooperativity per atom in a typical paraxial beam is small, $\Gamma_{\oneD}/\Gamma_{\vac} \sim \sigma_0/A \sim 10^{-6}$.
The total cooperativity, however, can be significant for sufficiently large ensembles, e.g. $N_A \sim 10^7$ atoms. The key parameter that characterizes cooperativity is the total resonant optical density of the ensemble, OD $= N_A (\sigma_0/A)$.
Such strong cooperativity in free space has been employed in a variety of applications including quantum memory for storage of photonic states \cite{Chaneliere2005Storage} and the generation of squeezed states\index{state!spin squeezed state} of the collective spin of the ensemble via quantum nondemolition (QND) measurement \cite{Kuzmich2000, Appel2009Mesoscopic, Takano2009Spin, Sewell2012Magnetic}.
%========= FIGURE: Geometry (coupling strength, magic wavelength, area and detuning) =========%
\begin{figure}
\centering
\includegraphics[scale=1.10]{../chap5/Fig1}
\caption[Comparing mode-matching and cooperativity of quantum interfaces in free-space and a nanofiber.]{Cooperativity and mode-matching for various atom-light geometries. (a) The beam area at the waist of a tightly focused beam is closely matched with the atomic scattering cross section, but the scattered light of a single atom is poorly mode-matched with the probe. (b) A paraxial beam probing a rarefied atomic cloud whose scattered radiation interferes constructively in the forward direction. (c) Atoms trapped in a 1D optical lattice near the surface of an optical nanofiber interacting with a fiber-guided probe. The tight confinement and automatic mode matching that accompanies scattering into the guided mode leads to strong cooperativity in the atom-photon interaction.}\label{Fig::ModeMatching}
\end{figure}
%=============================================
A particular system that combines the elements above consists of cold atoms trapped in the evanescent field of the guided mode of a tapered optical nanofiber with a subwavelength diameter \cite{Vetsch2010Optical, Lacroute2012, Balykin2014, Grover2015, Lee2015} (see \frf{Fig::ModeMatching}).
The typical resonant OD per atom, or OD/$N_A$, in the nanofiber ($\sigma_0/A \sim 10^{-2}$) is boosted by orders of magnitude over free space for paraxial beams.
However, one cannot reach the strong coupling regime where $\Gamma_{\oneD}$ is on the order of $\Gamma_{\vac}$ as is possible in engineered nanophotonic waveguides, such as those arising in photonic crystals \cite{Hung2013}, where atoms can be trapped at positions of peak intensity of the field.
One can, however, achieve strong cooperativity in the dispersive regime with a moderately sized ensemble. When compared to free space, all light scattered into the guided mode is automatically mode matched, and thus, given the relatively large ratio $\sigma_0/A$, one can achieve high OD with only a few thousand atoms (see \frf{Fig::ModeMatching}).
Such strong cooperativity opens the door to new regimes to create non-Gaussian quantum states of the ensemble \cite{Dubost2012Efficient} and potentially to implement nonlinear optics at the level of a few photons \cite{Spillane2008, Pittman2013Ultralow, OShea2013}.
One-dimensional optical lattices in nanofibers based on multiple co- and counter-propagating trapping beams have been loaded with up to several thousand alkali atoms \cite{Vetsch2010Optical, Lacroute2012}.
This has proved a fruitful platform for quantum information processing.
The anisotropic nature of the strong atom-light coupling has been exploited for control of internal atomic states \cite{Mitsch2014}, enhanced coupling into a preferred propagation direction \cite{Petersen2014Chiral, Mitsch2014a}, and optical switching \cite{OShea2013}.
Off resonance, dispersive coupling has allowed for non-destructive atom counting \cite{Dawkins2011, Beguin2014}
and storage of fiber-guided light \cite{Gouraud2015Demonstration, Sayrin2015}.
Recent demonstrations of photonic crystal cavities fabricated on the nanofiber \cite{Wuttke2012Nanofiber, Nayak2014Optical, Schell2015} promise further enhanced atom-light coupling.
In this chapter we study the quantum atom-light interface in the dispersive regime for an optical nanofiber geometry. We focus here on the coupling between the atomic spin and light polarization induced by the elastic scattering of photons by tensor-polarizable cesium atoms trapped near the surface of the nanofiber. This provides an entangling interaction that can be employed to generate spin squeezing via QND measurement. Our analysis unifies a variety of different approaches found in the literature, including direct calculation of the dyadic Green's function for photon scattering \cite{Sakoda1996Optical, Dung2000, Sondergaard2001, Klimov2004, Wubs2004, Fussell2005Decay, MangaRao2007Single, Dzsotjan2010} and the input-output formalism studied for one-dimensional field theories based on Heisenberg-Langevin equations \cite{Gardiner1985Input, Blow1990Continuum, Shen2005, LeKien2005a, LeKien2008, Fan2010}.
The remainder of this chapter is organized as follows.
In \srf{Sec::GreensFunction} we solve for the mode decomposition of the dyadic Green's function which determines the electric field scattered by a point dipole near the surface of the nanofiber.
This allows us to calculate the phase shift and polarization transformation for fiber-guided photons induced by tensor-polarizable atoms in the dispersive regime.
We connect this with a fully quantum mechanical treatment based on a Heisenberg-Langevin picture in \srf{Sec::HeisenbergLangevin}.
The formalism we develop is used in \srf{Sec::QNDMeasurement} to study QND measurement of atoms based on polarization spectroscopy.
We consider shot-noise-limited atom detection as well as measurement-backaction-induced squeezing of spin projection noise\index{noise!projection noise}.
We study squeezing of the collective pseudospin associated with ensembles of atoms in the atomic clock state and calculate its dynamics based on a first principles stochastic master equation that includes both the effects of QND measurement as well as decoherence due to optical pumping.
We conclude with a summary and outlook for future research in \srf{Sec::Conclusion}.
%========GREEN'S FUNCTION AND INPUT/OUTPUT RESPONSE=========%
\section[Dyadic Green's function and input-output field response]{Dyadic Green's function and input-output\\ field response} \label{Sec::GreensFunction}
Given a point particle with tensor polarizability $\tensor{\boldsymbol{\alpha}}$ at position $\br'$ near the surface of a nanofiber, the field at frequency $\omega_0$ is given by the solution to the wave equation,
\begin{align}\label{Eq::WaveEquationSource}
\left[ - \nabla\times\nabla\times + \, n^2(\br)k_0^2 \right] \mathbf{E}(\br) &= -4\pi k_0^2 \delta^{(3)}(\br-\br')\, \tensor{\boldsymbol{\alpha}}\cdot \mathbf{E}(\br),
\end{align}
where $k_0=\omega_0/c$ and $n(\mbf{r})$ is the spatially varying index of refraction that describes the fiber; Gaussian-cgs units are used throughout.
For an asymptotic input field $\mathbf{E}_{\inp}(\br)$, the scattering solution to \erf{Eq::WaveEquationSource} is given by the Lippmann-Schwinger equation \cite{Wubs2004},
\begin{subequations}
\begin{align}
\mathbf{E}_{\out}(\br) &=\mathbf{E}_{\inp}(\br)+\tensor{\mathbf{G}}^{(+)}(\br , \br'; \omega_0)\cdot
\tensor{\boldsymbol{\alpha}}\cdot \mathbf{E}_{\out}(\br')\\
&\approx \mathbf{E}_{\inp}(\br)+ \tensor{\mathbf{G}}^{(+)}(\br , \br'; \omega_0) \cdot
\tensor{\boldsymbol{\alpha}}\cdot \mathbf{E}_{\inp}(\br'), \label{Eq::ScatteredField}
\end{align}
\end{subequations}
where in \erf{Eq::ScatteredField} we have made the first Born approximation valid for weak scattering. The fundamental object that fully characterizes the scattered radiation as well as the energy level shift and modified decay rate of a scatterer near the dielectric is the dyadic Green's function, $\tensor{\mathbf{G}}(\br, \br';\omega_0)$. This determines the scattered field from a point dipole at $\br'$, $\mathbf{E}_{\rm scat}(\br)= \tensor{\mathbf{G}}^{(+)}(\br , \br'; \omega_0)\cdot \mathbf{d}$, and satisfies the equation of motion,
\begin{align} \label{Eq::GreensDiffEq}
\left[ -\nabla\times\nabla\times + n^2(\mbf{r}) k_0^2 \right] \tensor{\mathbf{G}}(\br, \br';\omega_0) &= -4\pi
k_0^2 \delta^{(3)}(\mathbf{r}-\mathbf{r}') \unittens,
\end{align}
where $\unittens$ is the unit tensor.
The solution for the Green's function $\tensor{\mathbf{G}}(\br ,\br' ; \omega_0)$, following from Maxwell's equations,
has been studied previously \cite{Sakoda1996Optical,Sondergaard2001,Wubs2004}.
In Chapter 2, we have also derived the eigenmode decomposition solutions for the dyadic Green's function of a nanophotonic waveguide and shown that the unguided modes do not contribute to the signal measured at the end of the waveguide.
As we are interested here in the forward-scattered components that lead to phase shifts and polarization transformations, we apply the eigenmode decomposition approach derived in Chapter 2 to directly calculate a nanofiber's $\tensor{\mathbf{G}}(\br ,\br' ; \omega_0)$ using the fiber's normal modes and study how the guided modes respond to atoms with a tensor polarizability.
We treat an optical nanofiber of radius $a$ with step-index profile,
\begin{align} \label{Eq::IndexofRefraction}
n(r_\perp) = \Big\{
\begin{array}{l l} n_1 & \quad r_\perp \leq a \\
n_2 & \quad r_\perp > a
\end{array},
\end{align}
for a silica core ($n_1 = 1.4469$)~\cite{Kien2004} and infinite vacuum cladding ($n_2 = 1$). For a cylindrically symmetric dielectric the guided modes are $\mathbf{f}_\mu (\br) = \mathbf{u}_\mu (\br_\perp) e^{i\beta z}/\sqrt{2 \pi}$, with indices $\mu=\{j, \beta , p\}$ for the $j$th guided mode with propagation constant $\beta$ at frequency $\omega_\mu=\omega(\beta)$ and polarization $p$.
Two convenient guided-mode bases to describe $ \mathbf{u}_\mu (\br_\perp) $ are the quasilinear and quasicircular polarization modes, described in Appendix~\ref{chap:fibereigenmodes}~\cite{Kien2004}.
We consider nanofibers that support only the lowest HE$_{11}$ guided modes at the relevant frequency $\omega_0$ \cite{Snyder1983Optical}, which have four guided modes: two polarizations $p$, each with propagation constant $\beta(\omega_0) = \pm\beta_0$ corresponding to forward and backward propagation. We use $ b=\pm $ to indicate the propagation direction. Substituting the guided mode indices into Eq.~\eqref{Eq::GreensGuided_general}, the guided-mode contribution to the dyadic Green's function for $ z\neq z' $ is then
\begin{align}
\tensor{\mathbf{G}}^{(+)}_g(\br,\br'; \omega_0) = &2\pi i \sum_{b,p} {\rm Res}\vert_{\beta =b\beta_0}
\left[\frac{-2 \omega_0^2 }{ \omega_0^2-\omega^2(\beta)}\right] \mathbf{u}_{b\beta_0, p}
(\br_\perp)\mathbf{u}^*_{b\beta_0, p} (\br_{\perp}^\prime)e^{ib \beta_0 (z-z')} \nonumber \\
= & 2\pi i \frac{\omega_0}{v_g } \sum_{b,p} \mathbf{u}_{b, p} (\br_\perp)\mathbf{u}^*_{b, p}
(\br_{\perp}^\prime) e^{i b\beta_0(z-z')} \Theta \big( b(z-z') \big), \label{Eq::GreensGuided}
\end{align}
where $v_g= \vert d\omega/d\beta \vert_{\beta=\beta_0}$ is the group velocity of the HE$_{11}$ modes at $\omega_0$, and $\Theta \big( b(z-z') \big)$ is a Heaviside function. At $ \br=\br' $, Eq.~\eqref{Eq::ImGreenLocal_general} yields the imaginary part of the dyadic Green's function:
\begin{equation}\label{Eq::ImGreenLocal}
{\rm Im} \big[\tensor{\mathbf{G}}^{(+)}_g(\br',\br'; \omega_0=\omega_{eg}) \big] = \pi \frac{\omega_{eg}}{v_g } \sum_{b, p}
\mathbf{u}_{b, p} (\br_{\!\perp}^\prime)\mathbf{u}^*_{b , p} (\br_{\!\perp}^\prime),
\end{equation}
where we have used $\omega_0=\omega_{eg}$ for future calculations of the decay rates of the quantum transitions between the ground state $ \ket{g} $ and the excited state $ \ket{e} $ of the atom. The real part of the Green's function at $\br = \br'$ determines the energy level shift of the scatterer.
Equation (\ref{Eq::GreensGuided}) is the central result from which we can calculate the dispersive response. Consider a forward-propagating input field in the guided modes with frequency $\omega_0$, positive-frequency amplitude $\Eamp$, and arbitrary polarization, $\mathbf{E}^{(+)}_{\inp}(\br) = \Eamp \mathbf{u}_{\rm in}(\br_\perp) e^{i \beta_0 z}$ dispersively coupled to an atom at position $\mathbf{r}'$. The effective mode area at the atom's position is determined from the total cycle-averaged power transported along the nanofiber, $P_{{\rm in},z} = (v_g/2\pi) \int d^2\br \, n^2(r_\perp) |\mathbf{E}^{(+)}_{\inp}(\br) |^2$, and the intensity at the atom, $I_{\rm in}(\mathbf{r}') = (c/2\pi) |\mathbf{E}^{(+)}_{\inp}(\br') |^2$, via the relation \cite{Domokos2002a},
\begin{align} \label{Eq::AreaIn}
A_{\rm in} \equiv \frac{P_{{\rm in}}}{I_{\rm in}(\mathbf{r}')} = \frac{1}{n_g |\mathbf{u}_{\rm in}(\mathbf{r}'_\perp)|^{2}},
\end{align}
where $n_g\equiv c/v_g$ is the group index of refraction.
Substitution of the guided-mode Green's function, \erf{Eq::GreensGuided}, into the Lippman-Schwinger equation, \erf{Eq::ScatteredField}, yields the transmitted (forward-scattered) and reflected (backward-scattered) output fields, \begin{equation}
\mathbf{E}_{\out}(\br) = \Eamp \big[ \mathbf{u}_{\trans, \out} (\br_\perp) e^{i \beta_0 z} + \mathbf{u}_{\refl,\out} (\br_\perp) e^{-i \beta_0 z} \big],
\end{equation}
where
\begin{subequations}
\begin{align}
\mathbf{u}_{\trans, \out} (\br_\perp) &= \sum_{p,p'} \, c_{p} t_{pp'} \mathbf{u}_{\fwd, p'}(\br_\perp) \\
\mathbf{u}_{\refl,\out} (\br_\perp) &= \sum_{p,p'} \, c_{p} r_{pp'} \mathbf{u}_{\bwd, p'}(\br_\perp),
\end{align}
\end{subequations}
where we have decomposed the input into the polarization eigenmodes, $\mbf{u}_{\rm in}(\mbf{r}_\perp) = \sum_{p} c_{p} \mathbf{u}_{\fwd,p}(\br_\perp)$.
For $z>z'$, the transmission and reflection matrices are
\begin{subequations}
\begin{align} \label{Eq::PolarizationTransformation}
t_{pp'} =& \delta_{p,p'} + 2\pi i k_0 n_g \, \mathbf{u}^*_{+, p}(\br'_\perp) \cdot
\tensor{\boldsymbol{\alpha}} \cdot \mathbf{u}_{+, p'}(\br'_\perp) , \\
r_{pp'} =& 2\pi i k_0 n_g \, \mathbf{u}^*_{\bwd, p}(\br'_\perp) \cdot
\tensor{\boldsymbol{\alpha}} \cdot \mathbf{u}_{\fwd, p'}(\br'_\perp) e^{2 i\beta_0 z'} ,
\end{align}
\end{subequations}
We focus here on the transmitted fields whose interference with the input field for $z>z'$ results in a phase shift and a polarization transformation.
For weak scattering the diagonal terms, $t_{p p} \approx \sqrt{1-R_p}e^{i \delta \phi_p}$, determine the phase shift and attenuation induced on each polarization mode,
\begin{subequations}
\begin{align}
\delta \phi_p &= \frac{2 \pi k_0}{A_{\rm in}} \re(\alpha_{pp}), \label{Eq::PhaseShift} \\
R_p &= \frac{4 \pi k_0}{A_{\rm in}} \im(\alpha_{pp}) .\label{Eq::Attenuation}
\end{align}
\end{subequations}
Here, the $\{p,p'\}$-element of the tensor polarizability is given by $\alpha_{pp'} \equiv \mathbf{e}^*_{p'} \cdot \tensor{\boldsymbol{\alpha}}\cdot \mathbf{e}_{p}$, with unit vectors for each of the forward-propagating mode functions,\linebreak[4] $\mathbf{e}_{p}\equiv \mathbf{u}_{+,p}(\br'_\perp)/|\mathbf{u}_{+,p}(\br'_\perp)|$.
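The scalings in \erf{Eq::PhaseShift} and \erf{Eq::Attenuation} can be illustrated with a short numerical sketch. The wavelength, mode area, and complex polarizability below are assumed, order-of-magnitude placeholders, not results of this chapter.
\begin{verbatim}
import numpy as np

# Illustrative evaluation of Eqs. (PhaseShift) and (Attenuation).
# All numerical inputs are placeholders (Gaussian-units polarizability in m^3).
lam = 852e-9                          # assumed probe wavelength
k0 = 2 * np.pi / lam
A_in = 1.7e-12                        # assumed effective mode area, m^2
alpha_pp = (1.0 + 0.005j) * 1e-23     # assumed complex polarizability component

delta_phi = 2 * np.pi * k0 / A_in * alpha_pp.real      # Eq. (PhaseShift)
R_p       = 4 * np.pi * k0 / A_in * alpha_pp.imag      # Eq. (Attenuation)
t_pp      = np.sqrt(1 - R_p) * np.exp(1j * delta_phi)  # weak-scattering t_pp

print(f"phase shift per atom   = {delta_phi:.2e} rad")
print(f"attenuation per atom   = {R_p:.2e}")
print(f"|t_pp| = {abs(t_pp):.8f}")
\end{verbatim}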
The phase shift per atom, \erf{Eq::PhaseShift}, is modified over free space in two ways, both of which are captured by the effective mode area $A_{\rm in}$. First, although material dispersion in an optical fiber is negligible over the distances we consider, additional waveguide dispersion can lead to a significant reduction in the group velocity~\cite{Hung2013,Goban2014}. Such ``slow light'' enhances the atom-photon coupling strength.
In the nanofiber geometry this effect is moderate; we calculate the group index to be $n_g \approx 1.40$.
Second, and more importantly, the tight spatial confinement, quantified below by the optical density per atom OD/$N_A$, significantly increases the coupling strength over free space for every atom along the nanofiber, which yields strong cooperativity.
In contrast, in free space diffraction restricts the collective phase shift for an ensemble of atoms~\cite{Tanji-Suzuki2011Chapter, Baragiola2014}.
For a Gaussian beam with beam waist $w_0$, the total phase shift induced by a collection of polarizable atoms will be $\delta \phi = N_{\eff} 2 \pi k_0 {\rm Re}({\alpha})/A$, where $A = \pi w^2_0/2$ is the beam area at the focus and $N_{\eff}$ is the effective number of atoms that radiate into this mode.
One can couple strongly to a few atoms at the center by tightly focusing the beam, or couple weakly to many atoms by choosing a larger focal volume, at the cost of a smaller cooperativity per atom.
The off-diagonal terms in the transmission matrix, \erf{Eq::PolarizationTransformation}, describe the polarization transformation. For example, if we take the polarization of the modes to be the quasilinear basis, $p = \{H,V\}$, as defined in \erf{Eq::QuasilinearModes}, then $t_{HV} \equiv \chi_{\rm Far}$ is the rotation angle of the Stokes vector on the Poincar\'{e} sphere corresponding to the Faraday effect \cite{Hammerer2010, Deutsch2010a}, while the phase difference in that basis, $\delta \phi_H - \delta \phi_V$, corresponds to birefringence induced on the guided mode.
Analyzed in the quasicircular polarization modes ($p=\pm$), given in \erf{Eq::QuasicircularModes}, the differential phase $\delta \phi_+ -\delta \phi_-$ corresponds to Faraday rotation and $t_{+-}$ to birefringence\index{birefringence effect}.
We make use of such polarization transformations as a means to nondestructively measure the atoms and generate collective spin squeezing.
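A compact way to visualize these polarization transformations is to apply a $2\times2$ transmission matrix of the form in \erf{Eq::PolarizationTransformation} to an input Stokes vector. In the sketch below the two phases and the Faraday coupling (taken antisymmetric, $t_{VH}=-t_{HV}$) are small assumed numbers; the example only illustrates that the differential phase rotates the Stokes vector about $S_1$ while the Faraday term rotates it about $S_3$.
\begin{verbatim}
import numpy as np

# Toy Jones-matrix picture of the transmission matrix in the {H, V} basis.
# dphi_H, dphi_V and chi_far are assumed small placeholder values.
dphi_H, dphi_V = 3e-3, 1e-3
chi_far = 5e-4

T = np.array([[np.exp(1j * dphi_H), chi_far],
              [-chi_far, np.exp(1j * dphi_V)]])

e_in = np.array([1.0, 1.0]) / np.sqrt(2)    # linear polarization at 45 degrees
e_out = T @ e_in

def stokes(e):
    H, V = e
    return np.real([abs(H)**2 + abs(V)**2,          # S0
                    abs(H)**2 - abs(V)**2,          # S1
                    2 * np.real(np.conj(H) * V),    # S2
                    2 * np.imag(np.conj(H) * V)])   # S3

S_in, S_out = stokes(e_in), stokes(e_out)
# Faraday coupling: rotation about S3 moves S2 toward S1.
print("S1 change:", round(S_out[1] - S_in[1], 6))   # ~  2*chi_far
# Birefringence: rotation about S1 moves S2 toward S3.
print("S3 change:", round(S_out[3] - S_in[3], 6))   # ~ -(dphi_H - dphi_V)
\end{verbatim}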
%===================Heisenberg-Langevin Equations=====================%
\section[Heisenberg-Langevin-picture solution and atomic response]{Heisenberg-Langevin-picture solution\\ and atomic response} \label{Sec::HeisenbergLangevin}
The Lippmann-Schwinger solution, \erf{Eq::ScatteredField}, determines the input-output relation for linear atomic response given by the polarizability tensor $\tensor{\alpha}$.
In this section we connect this with the fully quantum mechanical description of dispersive atomic response and input-output relations for the quantized guided modes.
Following Ref. \cite{LeKien2005a}, we use a Heisenberg-Langevin approach for one-dimensional systems.
The positive frequency component of the quantized electric field operator decomposes into guided and unguided (radiation) modes, $\hat{\mathbf{E}}^{(+)}=\hat{\mathbf{E}}_g^{(+)}+\hat{\mathbf{E}}_{r}^{(+)}$, where
\begin{subequations}
\begin{align}
\hat{\mathbf{E}}_g^{(+)}(\br) &= \sum_{b,p} \int_0^{\infty}\!\!\!\!\! \mathrm{d}\omega \sqrt{\frac{ \hbar \omega}{ v_g}} \; \awg \mathbf{u}_\mu (\br\!_\perp) e^{i b\beta(\omega) z } ,\label{Eq::QuantizedElectricField} \\
\hat{\mathbf{E}}_r^{(+)}(\br) &= \sum_{m,p}\int_0^{\infty}\!\!\!\!\! \mathrm{d}\omega \int_{-kn_2}^{kn_2}\mathrm{d}\beta \, \sqrt{ \hbar \omega}\;\awr \mathbf{u}_\nu (\br\!_\perp) e^{i\beta(\omega) z }.
\end{align}
\end{subequations}
The HE$_{11}$ guided modes are specified by $\mu =(\omega, b, p)$, where $\omega$ is the mode frequency, $p$ is the polarization, and the propagation direction $b=\pm$ corresponds to wavenumber $b \beta (\omega)$. The unguided modes are specified by $\nu=(\omega, \beta, m, p)$, where $m$ is the azimuthal (angular momentum) quantum number, $p$ labels the two orthogonal polarizations, and the longitudinal propagation constant $\beta$ varies continuously from $-kn_2$ to $kn_2$, with $k = \omega/c$ \cite{Sondergaard2001,LeKien2005a}.
The creation/annihilation operators satisfy the usual continuous-mode commutation relations, $[\hat{a}_\mu, \hat{a}^\dag_{\mu'} ] = \delta_{b,b'} \delta_{p,p'} \delta ( \omega - \omega ') $ and $[\hat{a}_\nu ,\hat{a}^\dag_{\nu'} ] = \delta_{m,m'} \delta_{p,p'} \delta ( \omega - \omega ') \delta ( \beta - \beta') $.
The Hamiltonian for the system is
\begin{equation}
\hat{H} = \hat{H}_F+\hat{H}_A + \hat{H}_{\inter},
\end{equation}
where the free-field Hamiltonian decomposes into guided and unguided modes,
\begin{equation}
\hat{H}_F = \sum_{b,p}\int_0^{\infty}\!\!\!\!\! \mathrm{d}\omega \, \hbar \omega \hat{a}^\dagger_\mu \hat{a}_\mu
+\sum_{m,p} \int_0^{\infty}\!\!\!\!\! \mathrm{d}\omega \int_{-k n_2}^{k n_2} \mathrm{d}\beta \, \hbar \omega
\hat{a}^\dagger_\nu \hat{a}_\nu.
\end{equation}
We consider here alkali atoms with ground and excited levels, $\{ \ket{g}=\ket{nS_{1/2}, f, m_f}\}$, $\{ \ket{e} =\ket{nP_{j'}, f', m_{f'}}\}$, where $\ket{f, m_f}$ denotes the hyperfine sublevels. The free atomic Hamiltonian is
\begin{equation}
\hat{H}_A = \sum_g E_g \hat{\sigma}_{gg} + \sum_e E_e \hat{\sigma}_{ee},
\end{equation}
where $\hat{\sigma}_{ij} \equiv \ket{i}\bra{j}$. In the rotating wave approximation, the atom-field interaction Hamiltonian is
\begin{align}
\hat{H}_{\inter} &= -\hat{\mathbf{d}}\cdot \hat{\mathbf{E}} =- \sum_{e,g} \left[ \hat{\mathbf{d}}_{eg}\cdot
\hat{\mathbf{E}}^{(+)}(\br')+\hat{\mathbf{d}}_{ge}\cdot \hat{\mathbf{E}}^{(-)}(\br') \right],
\end{align}
where the atomic dipole operator is projected between excited and ground subspaces, $\hat{\mathbf{d}}_{eg}= \hat{P}_e \hat{\mathbf{d}} \hat{P}_g $. The interaction Hamiltonian then takes the form,
\begin{equation}
\hat{H}_{\inter} = -\sum_{e,g} \left(\sum_{b,p} \int_0^{\infty}\!\!\!\!\!\mathrm{d}\omega \; \hbar g_{\mu, e,g}\, \hat{a}_\mu \,
\hat{\sigma}_{eg}+ \sum_{m,p} \int_0^{\infty}\!\!\!\!\!\mathrm{d}\omega \! \int_{-kn_2}^{kn_2}\mathrm{d}\beta \, \hbar
g_{\nu, e,g}\, \hat{a}_\nu \, \hat{\sigma}_{eg}\right) + {H.c.},
\end{equation}
where the coupling constants for guided and unguided (radiation) modes are
\begin{subequations} \label{Eq::CouplingConstants}
\begin{align}
\hbar g_{\mu, e,g} &= \sqrt{\frac{\hbar \omega}{ v_g }}\, \bra{e} \hat{\mathbf{d}} \ket{g}
\cdot\mathbf{u}_\mu ( \br'_\perp ) e^{i b \beta(\omega)z} , \\
\hbar g_{\nu, e,g} &= \sqrt{ \hbar \omega } \, \bra{e} \hat{\mathbf{d}} \ket{g} \cdot \mathbf{u}_\nu ( \br'_\perp) e^{i\beta(\omega)z} .
\end{align}
\end{subequations}
The Heisenberg equations of motion are
\begin{subequations}
\begin{align}
\der{\hat{a}_\mu} &= -i\omega \hat{a}_\mu +i\sum_{e,g} g_{\mu, e,g}^* \hat{\sigma}_{ge} \label{eq:da},\\
\der{\hat{a}_\nu} &= -i\omega \hat{a}_\nu +i\sum_{e,g} g_{\nu, e,g}^* \hat{\sigma}_{ge}\label{eq:danu},\\
\der{\hat{\sigma}_{ge}} &= -i\omega_{eg} \hat{\sigma}_{ge} \label{Eq::dsigma} \\
&+ i\!\int_0^{\infty}\!\!\!\!\! \mathrm{d}\omega \sum_{e',g'} \bigg\{ \big(\delta_{ee'} \hat{\sigma}_{gg'} \!-\! \delta_{gg'} \hat{\sigma}_{e'e} \big) \bigg[ \sum_{b,p} g_{\mu, e',g'}\hat{a}_\mu \!+\! \sum_{m,p} \!\int_{-kn_2}^{kn_2}\!\!\!\!\!\! \mathrm{d}\beta \; g_{\nu, e',g'} \hat{a}_\nu \bigg] \bigg\}. \nonumber
\end{align}
\end{subequations}
Integrating the field equations,
\begin{subequations}\label{eq:aout1}
\begin{align}
\hat{a}_\mu(t) &= \hat{a}_\mu(t_0) e^{-i\omega (t-t_0)} +i \sum_{e,g} g_{\mu,e,g}^* \int_{t_0}^t
\mathrm{d} t'\, e^{-i\omega (t-t')}\hat{\sigma}_{ge}(t'), \label{Eq::aguidedEOM}\\
\hat{a}_\nu (t) &= \hat{a}_\nu (t_0) e^{-i\omega (t-t_0)} +i \sum_{e,g} g_{\nu,e,g}^* \int_{t_0}^t \mathrm{d}t'\, e^{-i\omega (t-t')}\hat{\sigma}_{ge}(t'),
\end{align}
\end{subequations}
substituting into \erf{Eq::dsigma}, and making the usual Markov\linebreak[4] approximation \cite{LeKien2005a} yields the equation of motion for the ground-excited coherences,
\begin{align}
&\quad\dt{\hat{\sigma}_{ge}} =-i\omega_{eg}
\hat{\sigma}_{ge}-\sum_{e'}\frac{\Gamma_{ee'}}{2}\hat{\sigma}_{ge'} \\
&\!+i\! \sum_{e',g'}\!\!\bigg\{\!\! (\delta_{e,e'} \hat{\sigma}_{\!gg'} \!-\! \delta_{g,g'}
\hat{\sigma}_{e'e})\!\!\int_0^{\infty}\!\!\!\!\!\!\mathrm{d}\omega \!\bigg[\! \sum_{b,p} \!g_{\mu, e'\!,g'} \hat{a}_\mu (t_0)
\!+\!\! \sum_{m,p}\!\! \int_{-kn_2}^{kn_2}\!\!\!\!\!\!\!\!\!\mathrm{d}\beta g_{\nu, e'\!,g'} \hat{a}_\nu(t_0) \!\bigg]\! e^{-i\omega
(t-t_0)}\!\bigg\}\!, \nonumber
\end{align}
where the decay rates of excited-populations and coherences are given by
\begin{equation}
\Gamma_{ee'} = 2\pi \sum_{\mu,g} g_{\mu,e,g}g^*_{\mu,e',g} \vert_{\omega=\omega_{eg}}+2\pi
\sum_{m,p,g} \int_{-kn_2}^{kn_2}\!\!\!\!\! d\beta \, g_{\nu,e,g}g^*_{\nu,e',g} \vert_{\omega=\omega_{eg}}, \label{Eq::TotaleeDecayRate}
\end{equation}
and the small energy shift is absorbed into the transition frequency $\omega_{eg} = (E_e - E_g)/\hbar$.
Equation (\ref{Eq::TotaleeDecayRate}) captures the modification of the spontaneous emission rate due to the nanofiber.
The first sum describes decay into the guided modes and the second into the unguided radiation modes \cite{ Nha1997,Klimov2004,LeKien2005a,Maslov2006, Scheel2015}. The decay rate of a given excited state into all guided modes is given by
\begin{equation}
\Gamma_e^{\oneD}= 2\pi \sum_{b,p,g} |g_{\mu,e,g} |^2_{\omega = \omega_{eg}} = \frac{ 2\pi }{\hbar} \frac{ \omega_{eg} }{v_g} \sum_{b,p,g} \big|\bra{e}\hat{\mathbf{d}}\ket{g} \cdot \mathbf{u}_{bp}(\br'_\perp)\big|^2 .
\end{equation}
This is in agreement with the expected expression from the guided-mode contribution to the dyadic Green's function in \erf{Eq::ImGreenLocal},
\begin{equation} \label{Eq::Gamma1DGreens}
\Gamma_e^{\oneD} = \frac{2}{\hbar} \sum_{g} \bra{g}\hat{\mathbf{d}}\ket{e}\cdot
{\rm Im} \Big[\tensor{\mathbf{G}}^{(+)}_g(\br', \br'; \omega_{eg} ) \Big] \cdot \bra{e}\hat{\mathbf{d}}\ket{g},
\end{equation}
which is enhanced over the free-space rate by the Purcell factor.
Here we are interested in linear response for excitation far from resonance. We follow Ref. \cite{LeKien2014Propagation} and consider an atom sufficiently far from the fiber surface such that the modification of the spontaneous emission rate is small. In this case the decay rate is approximated as $\Gamma_{ee'} \approx \delta_{e,e'} \Gamma_{e}$, where $\Gamma_e$ is the total decay rate from excited state $\ket{e}$, given by the diagonal elements of \erf{Eq::TotaleeDecayRate}. In steady state, the dipole operator in the linear regime ($\hat{\sigma}_{ee'} \rightarrow 0 $) is approximately
\begin{align}
\hat{\sigma}_{ge} \approx -\sum_{g'} \hat{\sigma}_{gg'}\int_0^{\infty}\mathrm{d}\omega \bigg( & \sum_{b,p}
\frac{g_{\mu, e,g'}}{\omega-\omega_{eg} + i \Gamma_{e}/2 }\, \hat{a}_\mu (t_0) \\
&+\sum_{m,p} \int_{-kn_2}^{kn_2}\mathrm{d}\beta \, \frac{g_{\nu, e,g'}}{\omega-\omega_{eg} + i \Gamma_{e}/2 } \,\hat{a}_\nu (t_0) \bigg)e^{-i\omega (t-t_0)} . \nn
\end{align}
By substituting this into \erf{Eq::aguidedEOM} and defining asymptotic modes, $\hat{a}^{\inp}(\omega) = \lim_{t_0\rightarrow -\infty} \hat{a}(t_0) e^{i\omega t_0}$, $\hat{a}^{\out}(\omega) = \lim_{t\rightarrow +\infty} \hat{a}(t) e^{i\omega t}$ \cite{Fan2010}, we obtain the input-output relationship for the guided modes,
\begin{align} \label{Eq::aout}
\hat{a}^{\out}_\mu (\omega) \!= \hat{a}^{\inp}_\mu (\omega) &-\! 2\pi i\sum_{b',p'}
\sum_{e,g,g'}\!\!\hat{\sigma}_{gg'}\frac{ g_{\mu,e,g}^* g_{\mu'\!\!,e,g'}}{ \omega \!-\! \omega_{eg} \!+\! i \Gamma\!_{e}/2 }\hat{a}^{\inp}_{\mu'}(\omega) \nn\\
&-\! 2\pi i\!\sum_{m\!,p} \sum_{e\!,g\!,g'}\! \int^{kn_2}_{-kn_2}\!\!\!\! \mathrm{d}\beta \,\hat{\sigma}\!_{gg'}\frac{ g_{\mu,e,g}^* g_{\nu,e,g'}}{ \omega \!-\! \omega_{eg} \!+\! i \Gamma\!_{e}/2 }\hat{a}^{\inp}_{\nu}(\omega).
\end{align}
This input-output relation contains the phase shift on forward-scattered modes as well as attenuation due to elastic scattering into all other modes. For a probe with frequency $\omega_0$, \erf{Eq::aout} agrees with the expected form given by the Lippmann-Schwinger equation in the first Born approximation \footnote{In a careful derivation of the Lippmann-Schwinger scattering equation, it is not the total Green's function that appears in \erf{Eq::ScatteredField} but rather a related dyadic quantity, $\mbf{K}(\br,\br', \omega_0) = \mbf{G}(\br,\br', \omega_0) + \delta(\br-\br')/n^2(\br)$ \cite{Wubs2004}.
This function arises by proper accounting of the scatterer's coupling to the \emph{displacement} rather than the electric field \cite{Yao2009} with a distinction that becomes important at the source point $\mbf{r} = \mbf{r}'$. However, for lossless dielectrics $n(\mathbf{r})$ is real and ${\rm Im}[\tensor{\mathbf{G}}(\br',\br'; \omega)] = {\rm Im}[\tensor{\mathbf{K}}(\br',\br'; \omega)]$ \cite{Yao2010chip}. },
\begin{equation} \label{Eq::IOScatteredField}
\hat{\mathbf{E}}^{(+)}_{\out,g}(\br, \omega_0)=\hat{\mathbf{E}}^{(+)}_{ \inp, g}(\br, \omega_0)+\tensor{\mathbf{G}}_g^{(+)}(\br,\br', \omega_0)\cdot \poltens \cdot \big[\hat{\mathbf{E}}^{(+)}_{\inp, g}(\br', \omega_0)+\hat{\mathbf{E}}^{(+)}_{\inp,r}(\br', \omega_0) \big],
\end{equation}
by noting that for the guided-mode dyadic Green's function given in \erf{Eq::GreensGuided},
\begin{align}
\int d^2 \mbf{r}_\perp \, \mathbf{u}^*_{\mu} (\br_\perp)\cdot \tensor{\mathbf{G}}_g^{(+)}(\br,\br',\omega_0)\cdot \poltens \cdot \mathbf{u}_{\mu'} (\br'_\perp) &= i \frac{ 2\pi \omega_0}{v_g} \mathbf{u}^*_{b,p} (\br'_\perp) \cdot \poltens \cdot \mathbf{u}_{b'\!\!, p'} (\br'_\perp).
\end{align}
Here, the atomic polarizability operator \cite{Buhmann2004, Deutsch2010a,LeKien2013}, is given by
\begin{equation} \label{Eq::PolarizabilityOperator}
\poltens = - \frac{1}{\hbar} \sum_{e,g,g'}\ket{g}\frac{\bra{g}\hat{\mathbf{d}}\ket{e}\bra{e}
\hat{\mathbf{d}}\ket{g'}}{\Delta_{eg} + i \Gamma_{e}/2 }\bra{g'},
\end{equation}
and $\Delta_{eg} = \omega_0 - \omega_{eg}$ is the laser detuning from the atomic transition.
For an atom in ground state $\ket{g}$ and a guided mode of polarization $p$, the phase shift can be expressed as \cite{LeKien2014Propagation}
\begin{align} \label{Eq::PhaseShiftMultilevel}
\delta \phi_{p,g} &=2\pi \frac{ \omega_{0} }{v_g} \mathbf{u}^*_{+, p}(\br'_\perp) \cdot {\rm Re}\big[\bra{g} \hat{\tensor{\boldsymbol{\alpha}}} \ket{g} \big] \cdot \mathbf{u}_{+, p}(\br'_\perp)\nonumber\\
&= -\frac{ \omega_{0} }{v_g} \sum_e \frac{2 \pi |\bra{e}\hat{\mathbf{d}}\ket{g} \cdot \mathbf{u}_{+,p}(\br'_\perp)|^2}{ \hbar \Delta_{eg} } .
\end{align}
We employ this dispersive response for QND measurement of atoms, as we describe in the next section.
%============== SECITON: QND Measurement ==============%
\section{QND measurement of atoms based on the birefringence effect} \label{Sec::QNDMeasurement}
%========= FIGURE: Geometry (coupling strength, magic wavelength, area and detuning) =========%
\begin{figure}[t]
\centering
\includegraphics[scale=1.5]{../chap5/Fig2}
\caption[Quantum interface for spin-polarization coupling of two 1D lattices of cold, trapped atoms and the guided modes of an optical nanofiber based on the birefringence effect.]{Quantum interface for spin-polarization coupling of two 1D lattices of cold, trapped atoms and the guided modes of an optical nanofiber. a) Schematic of the interface. A linearly polarized probe is launched into the nanofiber and the output light is analyzed in a polarimeter. The atoms (green circles), trapped in the $x$-$z$ plane, couple to the evanescent portion of the guided $H$ and $V$ modes. Contours of the $H$- and $V$-mode intensities in b) the $x$ direction and c) the transverse $x$-$y$ plane show the mode anisotropy at the atomic positions. }\label{Fig::Schematic}
\end{figure}
%=============================================
The dispersive interface between the atoms and nanofiber guided photons provides the entangling mechanism necessary to perform a QND measurement on the atoms.
We restrict here to the quasilinear modes, $p =\{H,V\}$, of a single HE$_{11}$ guided mode at frequency $\omega_0$, whose form is given explicitly in \erf{Eq::QuasilinearModes}.
In typical experimental configurations, two one-dimensional arrays of atoms are trapped on either side of the nanofiber; see Fig.~\ref{Fig::Schematic}.
We define coordinate axes $(x,y,z)$ with $z$ oriented along the fiber axis for forward propagation, and the two chains of atoms lie in the $x$-$z$ plane at azimuthal angles $\phi' = \{0, \pi\}$.
In the evanescent region, the $H$ mode is purely $\mathbf{e}_x$-polarized at $\phi = \pm \pi/2$ and the $V$ mode is purely $\mathbf{e}_y$-polarized at $\phi = \{0,\pi\}$.
At other azimuthal angles the electric field is generally rotating along an ellipse in the $x$-$z$ plane. The atoms at $\phi'=0$ experience $H$ and $V$ fields,
\begin{subequations}
\begin{align}
\mbf{u}_{b,H}(r_\perp, \phi = 0) = & \sqrt{2} \big[ \mathbf{e}_x u_r(r_\perp)+ i b \mathbf{e}_z u_z(r_\perp) \big] \\
\mbf{u}_{b,V}(r_\perp, \phi = 0) = & \sqrt{2} \mathbf{e}_y u_\phi(r_\perp),
\end{align}
\end{subequations}
where the real-valued functions $u_\alpha(r_\perp)$, given in \erf{Eq::ProfileFunctions}, depend only on the radial coordinate.
On the opposite side of the fiber at $\phi' = \pi$, atoms experience the same transverse electric field, but the $z$-component changes sign. This broken symmetry has been used to selectively address and separately control the two atomic arrays \cite{Mitsch2014, Mitsch2014a, Sayrin2015}.
We consider quasi-monochromatic fields at carrier frequency $\omega_0$ that are sufficiently narrowband, $\Delta \omega \ll \omega_0$.
For each guided mode we define input propagating, continuous-mode field operators in the interaction picture \cite{Gardiner1985Input, Blow1990Continuum, LeKien2008},
\begin{align}
\hat{a}_{b,p}(z,t) =\frac{1}{\sqrt{2 \pi}} \int_0^{\infty}\!\!\!\!\! d \omega \, \hat{a}_{b,p}(\omega) e^{i[b \beta_0 z- (\omega-\omega_0) t ]},
\end{align}
that satisfy the free-field commutation relations,
\begin{equation} \label{Eq::InputOutputCommutation}
\big[\hat{a}_{b,p}(z,t),\hat{a}^\dag_{b',p'}(z',t')\big]=\delta_{b,b'}\delta_{p,p'} \delta(t-t'-(z-z')/v_g).
\end{equation}
In terms of these propagating modes the quantized electric field operator, \erf{Eq::QuantizedElectricField}, becomes
\begin{equation} \label{Eq::PropagatingElectricField}
\hat{\mathbf{E}}^{(+)}(r\!_\perp,\phi,z;t) = \sum_{b,p} \sqrt{ \frac{2 \pi \hbar \omega_0}{ v_g} } \mathbf{u}_{b,p}(r\!_\perp,\phi) \hat{a}_{b,p}(z,t) e^{i b \beta_0 z}.
\end{equation}
Considering here only the forward-propagating guided modes ($b=+$), we drop the $b$ index.
The propagating electric field, \erf{Eq::PropagatingElectricField}, interacts with the trapped atoms via the dispersive light-shift Hamiltonian~\cite{Deutsch2010a,LeKien2013,Baragiola2014Open},
\begin{equation} \label{Eq::LightShiftHam}
\hat{H}_{LS} = - \sum_{n=1}^{N_A} \hat{\mathbf{E}}^{(-)}(\mathbf{r}'_n ; t ) \cdot \poltens {}^{(n)} \cdot \hat{\mathbf{E}}^{(+)}(\mathbf{r}'_n ;t ),
\end{equation}
where $\poltens {}^{(n)}$ is the atomic tensor polarizability operator, given in \erf{Eq::PolarizabilityOperator}, for the $n^{th}$ atom trapped near the nanofiber surface at position $\mathbf{r}'_n$. We ignore here any effects of atomic motion and treat the atoms as localized at fixed positions in space.
The Lippmann-Schwinger scattering equation, \erf{Eq::IOScatteredField}, follows in the time domain as the evolution of coarse-grained input-output modes \cite{Gardiner1985Input, Fan2010, LeKien2014Propagation}.
Since multiple scattering is negligible and the propagation time across the ensemble is small compared to the atomic dynamics, we drop the position label $z$ and index the propagating fields by time alone; $\hat{a}_{b,p}(z,t) \rightarrow \hat{a}_{b,p}(t) $ as is standard in input-output theory \cite{Gardiner1985Input, Stockton2004Deterministic} .
It follows that the effects of retardation can be ignored, in which case each atom contributes equally to the sum (for details see \cite{LeKien2008, Baragiola2014Open}).
The forward-propagating output fields are then given by the Fourier transform of \erf{Eq::aout}, yielding \cite{LeKien2008}
\begin{align} \label{Eq::ScatteringSolution}
&\hat{a}^{\rm out}_{p}(t) = \hat{a}^{\rm in}_{p}(t) \nonumber\\
&+\! i \frac{2\pi \omega_0}{v_g}\!\sum_{p'}\!\!\Big[\! N_0 \mathbf{u}^*_p(r'_\perp,0) \!\cdot\! \poltens \!\cdot\! \mathbf{u}_{p'}(r'_\perp,0) \!+\! N_\pi \mathbf{u}^*_p(r'_\perp,\pi) \!\cdot\! \poltens \!\cdot\! \mathbf{u}_{p'}(r'_\perp,\pi)\! \Big]\! \hat{a}^{\rm in}_{p'}(t),
\end{align}
where $\{N_0,N_\pi \}$ are the total number of atoms trapped at $\phi' = \{0,\pi\}$. The quantum effects from the first term give rise to shot noise\index{noise!shot noise} in the transmitted field at the detector.
The second term represents scattering into the guided modes, as described by the dyadic Green's function, \erf{Eq::Gamma1DGreens}.
We have introduced the vector Stokes operators that describe the polarization of the propagating fields in Eqs.~\eqref{Eq::StokesComponents} and~\eqref{Eq::StokesCommutation} in the quasilinear $HV$-basis.
These operators are used to reexpress the Hamiltonian, \erf{Eq::LightShiftHam}, in the $HV$-basis,
\begin{align}
\hat{H}_{LS} =& - 2 \pi \hbar k_0 n_g \nonumber \\
&\times\!\sum_{\phi'=0,\pi}N_{\phi'} \Big\{ \big[ \polcomp_{HH}(\phi')+\polcomp_{VV}(\phi') \big] \hat{S}_0(t) + \big[\polcomp_{HH}(\phi') - \polcomp_{VV}(\phi') \big] \hat{S}_1(t) \nonumber \\
&\quad\quad +\! \big[\!\polcomp_{HV}(\phi') \!+\! \polcomp_{VH}(\phi') \!\big] \hat{S}_2(t) \!+\! i \big[\! \polcomp_{HV}(\phi')\!-\!\polcomp_{VH}(\phi') \big]\hat{S}_3(t) \Big\}. \label{Eq::GenHamiltonian}
\end{align}
The atomic couplings to the $\{H,V\}$ modes,
\begin{align}
\polcomp_{p p'}(\phi') & \equiv |\mathbf{u}^*_p(r'_\perp, \phi')||\mathbf{u}_{p'}(r'_\perp, \phi')| \, \hat{\alpha}_{p p'}(\phi') ,
\end{align}
are determined by components of the quantum mechanical tensor operator weighted by the transverse mode functions at the atomic position, $\hat{\alpha}_{p p'}(\phi') = \mathbf{e}^*_{p'}(\phi') \cdot \poltens \cdot \mathbf{e}_{p}(\phi') $, whose classical analog appeared in \erf{Eq::PhaseShift}.
We explore a QND measurement of ${}^{133}$Cs atoms in the electronic ground state, $6S_{1/2}$, via polarization spectroscopy based on the collective atom-light coupling described by the dispersive light-shift Hamiltonian in \erf{Eq::GenHamiltonian}. Polarization transformations occur due to the tensor nature of the atomic response,
\begin{align} \label{Eq::Polarizability}
\poltens &= \sum_{f,f'} \charpol \, \hat{\tensor{\mbf{A}}}(f,f'),
\end{align}
where the operator $\hat{\tensor{\mbf{A}}}(f,f') = \sum_{i,j} \hat{A}_{ij}(f,f')\mathbf{e}_i \otimes \mathbf{e}_j$ decomposes into irreducible components within each ground hyperfine multiplet $f$ for light detuned near excited multiplet $f'$,
\begin{align} \label{Eq::PolarizabilityIrrep}
\hat{A}_{ij}(f,f')&= C_{ff'}^{(0)} \delta_{i,j}+ iC_{ff'}^{(1)}\epsilon_{ijk}\hat{f}_k+ C_{ff'}^{(2)} \Big[ \smallfrac{1}{2} ( \hat{f}_i\hat{f}_j +\hat{f}_j\hat{f}_i )-\smallfrac{1}{3} \hat{\mathbf{f}}\!\cdot\!\hat{\mathbf{f}} \delta_{i,j} \Big].
\end{align}
Here, $\charpol = -\frac{\sigma_0}{8\pi k_0}\frac{\Gamma }{\Delta_{ff'}+i\Gamma/2}$ is the characteristic dynamic polarizability, where $\sigma_0 = 3 \lambda^2/2\pi$ is the resonant scattering cross section, $\hat{\mathbf{f}}$ is the atomic spin operator in hyperfine multiplet $f$, and $C_{ff'}^{(K)}$ are coefficients for irreducible rank-$K$ components defined in \cite{Deutsch2010a}.
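As a numerical sanity check of the structure of \erf{Eq::PolarizabilityIrrep}, the sketch below builds $\hat{A}_{ij}(f,f')$ from explicit spin-$f$ matrices, with the coefficients $C^{(K)}_{ff'}$ set to placeholder values, and verifies that its $m_f=0$ expectation value contains only the scalar part plus a rank-2 piece proportional to $f(f+1)/6\,\mathrm{diag}(1,1,-2)$, a property used for the clock states later in this section.
\begin{verbatim}
import numpy as np

# Check of the operator structure of Eq. (PolarizabilityIrrep); C0, C1, C2
# are placeholders, only the irreducible-tensor structure is verified.
def spin_matrices(f):
    """(fx, fy, fz) for spin quantum number f, basis |f, m> with m = f..-f."""
    m = np.arange(f, -f - 1, -1)
    fz = np.diag(m.astype(float))
    fp = np.zeros((len(m), len(m)))
    for k in range(1, len(m)):          # <f, m+1| f_+ |f, m>
        fp[k - 1, k] = np.sqrt(f * (f + 1) - m[k] * (m[k] + 1))
    fm = fp.T
    return (fp + fm) / 2, (fp - fm) / (2j), fz

f = 4                                    # e.g. the cesium f = 4 ground manifold
fx, fy, fz = spin_matrices(f)
fvec = [fx, fy, fz]
eye = np.eye(int(2 * f + 1))
eps = np.zeros((3, 3, 3))                # Levi-Civita tensor
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

C0 = C1 = C2 = 1.0                       # placeholder coefficients
def A_ij(i, j):
    rank1 = 1j * C1 * sum(eps[i, j, k] * fvec[k] for k in range(3))
    rank2 = C2 * ((fvec[i] @ fvec[j] + fvec[j] @ fvec[i]) / 2
                  - (f * (f + 1) / 3) * eye * (i == j))
    return C0 * eye * (i == j) + rank1 + rank2

m0 = int(f)                              # index of the m_f = 0 state
expect = np.array([[A_ij(i, j)[m0, m0] for j in range(3)] for i in range(3)])
target = C0 * np.eye(3) + C2 * f * (f + 1) / 6 * np.diag([1, 1, -2])
print(np.allclose(expect, target))       # -> True: rank-1 part vanishes for m_f = 0
\end{verbatim}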
In addition to the atomic tensor response, the nanofiber geometry gives rise to unique features of polarization spectroscopy not present in free space. The spatial anisotropy of the intensity for the quasilinearly polarized guided modes leads to unequal scattering of the $H$ and $V$ modes, producing \emph{intrinsic} birefringence\index{birefringence effect} even for a purely scalar atomic polarizability.
In particular, for atoms trapped on the quasi-$H$ axis, this mode acquires a phase delay relative to the fast quasi-$V$ axis.
This birefringence was exploited by Dawkins {\em et al.} \cite{Dawkins2011} as a mechanism for implementing a dispersive QND measurement of the number of atoms trapped around the nanofiber, as we treat in the next section.
%=================== Atom number measurement =====================%
\subsection{Dispersive atom number measurement} \label{Sec::AtomNumberMeasurement}
The anisotropy of the guided modes provides a mechanism for counting the number of atoms trapped around the nanofiber based on polarization spectroscopy.
We consider $N_A$ atoms, each in a completely mixed hyperfine spin state. In this case the atomic polarizability tensor in \erf{Eq::PolarizabilityIrrep} reduces to $\langle \hat{A}_{ij}(f,f') \rangle = C_{ff'}^{(0)} \delta_{i,j}$, and the collective interaction is determined entirely by the scalar (rank-0) terms.
With the atoms trapped along the quasi-$H$ axis, $\expt{ \polcomp_{HH} } \neq \expt{ \polcomp_{VV} }$, whereas the off-diagonal elements in \erf{Eq::GenHamiltonian} vanish, $\expt{ \polcomp_{HV} } = \expt{ \polcomp_{VH} } =0$, whenever the $x$, $y$, or $z$ axis is chosen as the quantization axis; this includes the axis close to the optimal choice for spin squeezing discussed in the next section. The off-diagonal terms therefore do not contribute to the birefringent interaction of interest here.
Atoms on either side of the nanofiber experience the same scalar light shift, and \erf{Eq::GenHamiltonian} then yields the Hamiltonian for QND measurement of the atom number,
\begin{align}
\hat{H}_{N} =& -2\pi \hbar k_0 n_g \sum_{\phi' = \{0,\pi \}} N_{\phi'} \big[ \expt{ \polcomp_{HH}(\phi')} - \expt{ \polcomp_{VV}(\phi')} \big] \hat{S}_1(t) \nonumber \\
& = \hbar \chiN N_A \hat{S}_1(t). \label{Eq::MixedHamiltonian}
\end{align}
This birefringent interaction induces a rotation of the Stokes vector around the $S_1$ axis on the Poincar\'{e} sphere through an angle,
\begin{equation} \label{Eq::RotationAngle}
\chiN = \frac{\sigma_0}{\Abir} \sum_{f,f'} C_{ff'}^{(0)} \frac{\Gamma}{2 \Delta_{ff'}},
\end{equation}
characterized by an effective area, $\Abir^{-1} \equiv (n_g/2) \big[ |\mathbf{u}_{H}(\br'_\perp)|^2 - |\mathbf{u}_{V}(\br'_\perp)| ^2 \big]$. In this case the polarizability of each atom acts as a scalar, since atoms in a completely mixed spin state have no preferred polarization for the scattered photons. Equation \erf{Eq::RotationAngle} therefore characterizes the rotation of the polarization state of the light on the \Poincare sphere due to the interaction with a single atom; with $N_A$ atoms the rotations add, giving a total rotation angle $N_A\chiN$ and a polarimeter signal that scales with the power of the input light.
Dawkins {\em et al.} \cite{Dawkins2011} used this interaction to make a dispersive measurement of $N_A$ via birefringence\index{birefringence effect} polarimetry in the usual way: launching linearly polarized light at 45$^\circ$ to the quasi-$H$ axis, $\mbf{u}_\inp = (\mbf{u}_H + \mbf{u}_V )/\sqrt{2}$, and measuring the differential power between the guided right- and left-circularly polarized photons.
Thus, the integrated measurement is described by the operator $\hat{\mathcal{M}} \equiv \int_0^T dt' \hat{S}^{\rm out}_3(t')$. The shot-noise variance of the polarimeter, $\shotnoise = \chiN^2 \dot{N}_L T$ for integration time $T$, determines its fundamental resolution.
The smallest detectable atom number using this dispersive measurement is thus, $\delta N_A \sim ( \chiN^2 \dot{N}_L T)^{-1/2}$ \cite{Smith2003a}. We provide the detailed analysis in Appendix~\ref{sec:birefringenceresolution}.
In an ideal setting, $\delta N_A$ can always be reduced by increasing the integration time, but in practice this time is limited by atom loss. As a coarse approximation we take this time to be $T=\gamma_s^{-1}$, where $\gamma_s$ is the photon scattering rate in free space, and assume perfect quantum efficiency of the detectors.
For detuning $\Delta$ large compared to the excited hyperfine splitting on the D1- or D2-line ($j' = 1/2$ or $3/2$), the unit-oscillator scattering rate is
$\gamma_s =\frac{\sigma_0}{A_{\rm in}}\left(\frac{\Gamma}{2 \Delta}\right)^2 \dot{N}_L $,
with effective area determined by the probe at the atomic position, \erf{Eq::AreaIn}.
In this limit, the rotation angle $\chiN = C^{(0)}_{j'} (\sigma_0/\Abir)(\Gamma/2\Delta)$, \erf{Eq::RotationAngle}, yields a shot noise-limited\index{noise!shot noise limit} atom number resolution,
\begin{align} \label{Eq::AtomNumberResolution}
\delta N_A &\sim \frac{1}{C^{(0)}_{j'}}\sqrt {\frac{\Abir^2}{A_{\rm in} \sigma_0}},
\end{align}
where $ C^{(0)}_{j'}=\sum_{f,f'}C^{(0)}_{ff'}$ are the far-detuned, rank-$0$ coefficients on a $j \rightarrow j'$ transition \cite{Deutsch2010a}.
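The algebra leading to \erf{Eq::AtomNumberResolution} is short enough to verify symbolically; the snippet below is a minimal sketch that combines the far-detuned rotation angle from \erf{Eq::RotationAngle}, the scattering rate $\gamma_s$, and the integration time $T=\gamma_s^{-1}$ in the shot-noise estimate $\delta N_A \sim (\chiN^2 \dot{N}_L T)^{-1/2}$.
\begin{verbatim}
import sympy as sp

# Symbolic check that chi_N, gamma_s and T = 1/gamma_s combine into
# Eq. (AtomNumberResolution) for the shot-noise-limited resolution.
sigma0, A_in, A_bir, C0, Gamma, Delta, NL = sp.symbols(
    'sigma_0 A_in A_bir C0 Gamma Delta Ndot_L', positive=True)

chi_N   = C0 * (sigma0 / A_bir) * Gamma / (2 * Delta)      # Eq. (RotationAngle), far detuned
gamma_s = (sigma0 / A_in) * (Gamma / (2 * Delta))**2 * NL  # free-space scattering rate
T       = 1 / gamma_s                                      # integration time ~ 1/gamma_s

delta_NA = 1 / sp.sqrt(chi_N**2 * NL * T)
# Should reduce to A_bir/(C0*sqrt(sigma_0*A_in)), i.e. Eq. (AtomNumberResolution).
print(sp.simplify(delta_NA))
\end{verbatim}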
Using the parameters reported by Dawkins \emph{et al.}~\cite{Dawkins2011}, we find a shot-noise-limited minimum detectable atom number of $\delta N_A \sim 10$ for atoms trapped between $1.8a$ and $2.0a$ from the fiber axis with D2-line probe light. We plot the estimate in Fig.~\ref{fig:AminNA_D12_n1d4469_a225nm_rp}.
\begin{figure}
\begin{minipage}{.48\linewidth}
\centering
\subfloat[]{\label{fig:deltaNA_D12_n1d4469_a225nm_rp}\includegraphics[scale=0.45]{../media/Figs/deltaNA_D12_r1to2d5a}}
\end{minipage}
\begin{minipage}{.48\linewidth}
\centering
\subfloat[]{\label{fig:A_D12_n1d4469_a225nm_rp}\includegraphics[scale=0.45]{../media/Figs/ANAin_D12_r1to2d5a}}
\end{minipage}
\caption[Minimum detectable atom number and mode areas as functions of the atom radial position, $ r\!_\perp/a $, when the D1- and D2-line probe lasers are used.]{Minimum detectable atom number (a) and mode areas (b) as functions of the atom radial position, $ r\!_\perp/a $, when the D1- and D2-line probe lasers are used. The group indices of refraction are $ n_g=1.4737 $ and $ n_g=1.4974 $ for the D1 ($ \lambda=894 $ nm) and D2 ($ \lambda=852 $ nm) lines, respectively. }\label{fig:AminNA_D12_n1d4469_a225nm_rp}
\end{figure}
In practice, loss and decoherence limit the atom-number resolution~\cite{Dawkins2011, Zhang2012Collective}.
The experimental implementation reported by Dawkins \emph{et al.}~\cite{Dawkins2011} implies a resolution of a few tens of atoms for 200 to 1000 trapped atoms.
A similar experiment based on a two-color QND measurement in a nanofiber geometry was recently carried out by B\'{e}guin \emph{et al.} \cite{Beguin2014} to squeeze the uncertainty in the number of trapped atoms. They achieved an atom-number uncertainty of $\delta N_A = 8$ for $N_A\sim2500$ atoms, well below the standard quantum limit, $\delta N_A=\sqrt{N_A}$.
%===================QND spin squeezing=====================%
\subsection{Collective spin squeezing via QND measurement}
The same birefringent interaction, \erf{Eq::GenHamiltonian}, can be utilized in a QND measurement to squeeze the projection noise of the collective atomic spin. We consider squeezing of the uncertainty associated with the ``clock states'' of cesium\index{state!clock state}, $\ket{\uparrow} = \ket{6S_{1/2}, f=4, m_f=0}$ and $\ket{\downarrow} = \ket{6S_{1/2}, f=3, m_f=0}$, which define a pseudospin within each atom and associated Pauli operators $\{\hat{\sigma}_x, \hat{\sigma}_y, \hat{\sigma}_z\}$. The quantum uncertainty in the collective pseudospin,
\begin{align}
\jz = \frac{1}{2} \sum_{n=1}^{N_A} \hat{\sigma}_z^{(n)},
\end{align}
fundamentally limits the precision of atomic clocks \cite{Wineland1992}. For atoms prepared in a spin coherent state (SCS)\index{state!spin coherent state} the projection noise, $\varz \big|_{\scs} = N_A/4$, sets the standard quantum limit for spin measurements. A spin squeezed state (SSS)\index{state!spin squeezed state} exhibits reduced fluctuations, $ \varz \big|_{\rm SSS} < N_A/4$, due to negative pairwise correlations between the atoms \cite{Kitagawa1993Squeezed}. Spin squeezing is typically quantified with the metrological squeezing parameter defined by Wineland \emph{et al.} \cite{Wineland1992},
\begin{align} \label{Eq::SqueezingParameter}
\xi^2 \equiv N_A \frac{ \varz }{ \expt{\hat{J}_{||}}^2 },
\end{align}
where $\expt{\hat{J}_{||}}$ is the mean collective spin along the direction of spin polarization.
The clock states\index{state!clock state} are defined to have zero projection of angular momentum with respect to a bias magnetic field that defines a quantization axis, $\mathbf{e}_{\tilde{z}}$.
Within the clock-state subspace the rank-1 vector light shift in the dispersive Hamiltonian, \erf{Eq::GenHamiltonian}, vanishes since $\bra{\uparrow}\hat{f}_k \ket{\uparrow} =\bra{\downarrow}\hat{f}_k \ket{\downarrow} = 0$ for any direction of the spin, $k$, and any quantization axis, $\mathbf{e}_{\tilde{z}}$.
Furthermore, as shown below, atoms on either side of the nanofiber experience the same birefringent coupling.
The resulting Hamiltonian, restricted to the clock subspace, couples the guided field of the nanofiber to the $J_z$-component of the collective pseudospin. The interaction has contributions from both the scalar and tensor light shifts,
\begin{align} \label{Eq::ClockHamiltonian}
\hat{H}_{J_z} = \hbar \Big\{ & \big[ \big( \chi_{H,\uparrow} +\chi_{V,\uparrow} \big) - \big( \chi_{H,\downarrow} + \chi_{V,\downarrow}\big) \big] \jz \hat{S}_0(t) \\
+ & \big[ \big( \chi_{H, \uparrow} - \chi_{V,\uparrow} \big) - \big(\chi_{H,\downarrow} - \chi_{V,\downarrow} \big) \big] \jz \hat{S}_1(t) \Big\}, \nonumber
\end{align}
where the coupling strength between an atom in the clock subspace and a photon with polarization $p = \{H,V\}$ is
\begin{equation} \label{Eq::ClockCouplingStrength}
\chi_{p,f} \equiv - 2\pi k_0 n_g | \mathbf{u}_p(\mbf{r}_\perp)|^2 \bra{f,0}\hat{\alpha}_{pp} \ket{f,0},
\end{equation}
and $f = \{4,3\}$ labels $\{\uparrow,\downarrow\}$.
The diagonal terms in the polarizability tensor are the same for atoms on either side of the nanofiber, and thus all atoms contribute equally.
In addition, a constant birefringence\index{birefringence effect} proportional to $ \hat{J}_0\hat{S}_1 $ is neglected here as it can be canceled with a compensating waveplate.
Finally, the first term in \erf{Eq::ClockHamiltonian} does not affect polarization spectroscopy, but will act to rotate the pseudo-spin around the $J_z$ axis of the generalized Bloch sphere\index{Bloch sphere} proportional to classical intensity fluctuations.
While this does not affect the squeezing of projection noise\index{noise!projection noise} in $\hat{J}_z$, it affects the metrologically relevant squeezing by adding uncertainty to the direction of the mean spin. By choosing a ``magic frequency'' at which the light shifts on the two clock states\index{state!clock state} are equal,
\begin{align} \label{Eq::MagicWavelengthCondition}
\chi_{H,\uparrow} +\chi_{V,\uparrow} = \chi_{H,\downarrow} + \chi_{V,\downarrow},
\end{align}
this term can be canceled \cite{Chaudhury2006}; here we have ignored the imaginary part of the coupling strengths, which is negligible in the dispersive regime.
For probe light on the D1 line of $^{133}$Cs there are two magic-frequency solutions, $ \magic{3} $ and $\magic{4}$, shown in \frf{Fig::CouplingStrength}(a).
Because the guided probe light at the position of the atom will generally be elliptical, the light-shift interaction coherently couples different magnetic sublevels in a given manifold $f$, and thus does not conserve $\hat{J}_z$. For example, the ellipticity of the probe light leads to a fictitious magnetic field proportional to $i \mathbf{E}^{(-)}_{\inp}(\br') \times \mathbf{E}^{(+)}_{\inp}(\br')$ that causes a precession of the spin within hyperfine manifold $f$. This can be mitigated by a sufficiently strong bias magnetic field compared to the fictitious field \cite{Smith2004Continuous}.
%========= FIGURE: Geometry (coupling strength, magic wavelength, area and detuning) =========%
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.64]{../chap5/Fig3}
\caption[Parameters of the atom-light interface using the clock states of $^{133}$Cs and magic frequencies.]{Parameters of the atom-light interface using the clock states of $^{133}$Cs.
(a) Energy level structure for atoms probed with one of two magic frequencies on the D1-line.
(b) Magnitude of the magic detunings at which the clock states are equally light-shifted, $| \tilde{\Delta}_{f}|/2\pi \equiv | \omega_0 - \magic{f} |/2\pi$ (in units of MHz). There are two solutions shown as blue (dashed) and red lines. (c) The coupling strength, as measured by the magnitude of the polarization rotation angle on the Poincar\'{e} sphere, $|\chi_{_{ J_z}}|$, for an atom trapped in the $x$-$z$ plane at a distance $ r'\!_\perp=1.8a $ from the fiber center. In both (b) and (c) we plot the parameter as the direction of the clock-state quantization axis is varied in the $x$-$y$ plane, for the two possible choices of magic detunings.
See text for details.}\label{Fig::CouplingStrength}
\end{figure}
%=============================================
The remaining QND interaction Hamiltonian is
\begin{equation} \label{Eq::FaradayHam}
\hat{H}_{J_z} = \hbar \chieff \jz \hat{S}_1(t),
\end{equation}
where the rotation angle on the Poincar\'{e} sphere at the magic wavelength is,
\begin{equation}\label{eq:chiJ3}
\chieff = \big( \chi_{H, \uparrow} - \chi_{V,\uparrow} \big) - \big(\chi_{H,\downarrow} - \chi_{V,\downarrow} \big) = 2(\chi_{H, \uparrow}-\chi_{H, \downarrow}).
\end{equation}
In the standard approach, squeezing of the uncertainty in $\jz$ is generated by QND measurement: the atoms are prepared in an SCS along $\jx$, a probe prepared along $\hat{S}_2$ with photon flux $\dot{N}_L$ is sent through the fiber, and the $S_3$-component of the guided light is continuously monitored in a polarimeter. The measurement strength,
\begin{align} \label{Eq::MeasurementStrength}
\kappa \equiv |\chieff|^2 \dot{N}_L,
\end{align}
quantifies the rate at which we squeeze projection noise\index{noise!projection noise}, with $\chieff$ given in \erf{eq:chiJ3}.
In the absence of any decoherence, such a QND measurement for integration time $T$ squeezes the initial uncertainty in $\jz$ according to $(\Delta J_z^2)_{\rm out}= (\Delta J_z^2)_{\rm in}/(1+r)$, where
\begin{equation}
r = \kappa T (\Delta J_z^2)_{\inp}
\end{equation}
is the integrated measurement strength \cite{Hammerer2010, Baragiola2014}.
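For a feel of the numbers, a minimal sketch of this ideal, decoherence-free limit is given below (the dashed curves in \frf{Fig::Squeezing_Dynamics} correspond to this limit); the values of $\kappa$ and $T$ are assumed for illustration only.
\begin{verbatim}
import numpy as np

# Ideal (decoherence-free) QND squeezing from r = kappa*T*(Delta Jz^2)_in.
N_A   = 2500
kappa = 1.0e2        # assumed measurement strength, 1/s
T     = 1.0e-3       # assumed integration time, s

var_in  = N_A / 4                     # SCS projection noise
r       = kappa * T * var_in          # integrated measurement strength
var_out = var_in / (1 + r)            # conditionally squeezed variance

J_par = N_A / 2                       # mean spin, unchanged without decoherence
xi2   = N_A * var_out / J_par**2      # Eq. (SqueezingParameter)
print(f"r = {r:.1f},  ideal squeezing = {-10*np.log10(xi2):.1f} dB")
\end{verbatim}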
The strength of the birefringent interaction arises from two fundamental sources. The anisotropy of the $H$ and $V$ polarized modes leads to a polarization-dependent index of refraction, as described in Sec.~\ref{Sec::AtomNumberMeasurement}. In addition there is a dependence of the atom-photon coupling on the internal spin state of the atom due to the atomic tensor polarizability. In particular, we are interested in the dependence on the two clock states of the atom. This spin-dependent coupling will depend on the choice of quantization axis that defines the clock state\index{state!clock state} with projection $m_f=0$.
We combine these two effects and obtain a compact expression for the coupling strength $\chieff$ using the irreducible tensor decomposition of the atomic polarizability, \erf{Eq::PolarizabilityIrrep}.
Let $\{\mathbf{e}_{\tilde{x}},\mathbf{e}_{\tilde{y}}, \mathbf{e}_{\tilde{z}}\}$ be a space-fixed Cartesian coordinate system, where $\mathbf{e}_{\tilde{z}}$ defines the quantization axis of the atom, set by the magnetic field. Because of the azimuthal symmetry of the clock states around the $\qaxis$ axis, the polarizability tensor is diagonal in this basis.
Noting that $\langle f,0 | \hat{f}_{\tilde{z}}| f,0 \rangle =0$ and $\langle f,0 | \hat{f}_{\tilde{x}}^2| f,0 \rangle = \langle f,0 | \hat{f}_{\tilde{y}}^2| f,0 \rangle = \langle f,0 | \hat{\mathbf{f}}^2| f,0 \rangle /2 =f(f+1)/2$, it follows that the expectation value of the irreducible rank-2 component of the atomic polarizability is
\begin{align} \label{Eq::CouplingAngleMagic}
\langle f,0 | \poltens \phantom{}^{(2)}| f,0 \rangle = \sum_{f'} \charpol C^{(2)}_{ff'} \frac{f(f+1)}{6} \Big( \unittens - 3\qaxis \otimes \qaxis \Big).
\end{align}
The combined scalar and tensor light shifts yield a coupling strength, \erf{Eq::ClockCouplingStrength},
\begin{align}
\chi_{p,f} & = n_g \sigma_0 \big( a_{f} \left|\mathbf{u}_p(\br'_\perp)\right|^2 - b_{f} \left|\mathbf{e}_{\tilde{z}} \cdot \mathbf{u}_p(\br'_\perp)\right|^2 \big), \label{Eq::ClockStateCoupling}
\end{align}
with coefficients that depend on detunings and atomic structure,
\begin{align}
a_f &= \sum_{f'} \Big(C^{(0)}_{ff'} + \frac{f(f+1)}{6} C^{(2)}_{ff'} \Big) \frac{\Gamma}{4 \Delta_{ff'}},\\
b_f &= \frac{f(f+1)}{2}\sum_{f'} C^{(2)}_{ff'} \frac{\Gamma}{4 \Delta_{ff'}}.
\end{align}
At the magic wavelength set by \erf{Eq::MagicWavelengthCondition},
\begin{equation}
\frac{a_4-a_3}{b_4-b_3} = \frac{ |\mathbf{e}_{\tilde{z}} \cdot \mathbf{u}_V(\br'_\perp)|^2 + |\mathbf{e}_{\tilde{z}} \cdot \mathbf{u}_H(\br'_\perp) |^2 }{ |\mathbf{u}_H(\br'_\perp)|^2 + |\mathbf{u}_V(\br'_\perp)|^2 },
\end{equation}
which depends on the choice of quantization axis. Note that, as we prove in Appendix~\ref{chap:magicwavelength}, magic wavelengths can be found only if we utilize the tensor polarizability of the atoms and design multi-color schemes for the QND measurement and spin squeezing; we do not discuss the details in the main text.
We write the effective rotation angle in the Hamiltonian, \erf{Eq::FaradayHam}, as
\begin{align} \label{Eq::chieff}
\chieff = \frac{\sigma_0}{A_{J_z}} \frac{\Gamma}{ 2 \Delta_{J_z}},
\end{align}
with an ``effective detuning" set by the magic-wavelength condition,
\begin{align} \label{Eq::SqueezingEffectiveDetuning}
\Delta_{J_z}^{-1} \equiv \frac{4}{\Gamma} (b_4 - b_3) = \sum_{f'} \left( C^{(2)}_{4f'}\frac{10}{\Delta_{4f'}} - C^{(2)}_{3f'}\frac{6}{ \Delta_{3f'} } \right),
\end{align}
and an effective area given by
\begin{align} \label{Eq::SqueezingModeArea}
A_{J_z}^{-1} & = n_g \frac{ |\mathbf{e}_{\tilde{z}} \cdot \mathbf{u}_V(\br'_\perp)|^2 |\mathbf{u}_H(\br'_\perp)|^2 - |\mathbf{e}_{\tilde{z}} \cdot \mathbf{u}_H(\br'_\perp) |^2 |\mathbf{u}_V(\br'_\perp)|^2 }{ |\mathbf{u}_H(\br'_\perp)|^2 + |\mathbf{u}_V(\br'_\perp)|^2 }.
\end{align}
We see here the explicit dependence of the coupling strength on both the anisotropy of the modes and on the tensor atomic response, which in turn depends on a particular choice of clock states\index{state!clock state}. The quantization axis that maximizes $\chieff$ is that which minimizes $A_{J_z}$ at a given magic detuning.
Since the $z$-component of the guided modes is $90^\circ$ out-of-phase with the transverse components, the quantization axis maximizing the atom-light coupling is specified by an angle in the transverse $x$-$y$ plane, $\varphi$,
\begin{align} \label{Eq::QuantizationAxis}
\qaxis = \cos \varphi \mbf{e}_x + \sin \varphi \mbf{e}_y.
\end{align}
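The purely geometric part of this optimization can be sketched directly from \erf{Eq::SqueezingModeArea}. The snippet below uses the quasilinear field components at $\phi'=0$ given earlier in this section, with placeholder amplitudes $u_r$, $u_\phi$, $u_z$ (the true values require the HE$_{11}$ solution), and scans the quantization-axis angle $\varphi$.
\begin{verbatim}
import numpy as np

# Sketch of the geometric optimization of Eq. (SqueezingModeArea).
# u_r, u_phi, u_z are placeholder amplitudes at the atom's position.
n_g = 1.40
u_r, u_phi, u_z = 1.0, 0.8, 0.5

# Quasilinear fields at phi' = 0 (forward propagation):
# u_H = sqrt(2)(e_x u_r + i e_z u_z),  u_V = sqrt(2) e_y u_phi
u_H = np.sqrt(2) * np.array([u_r, 0.0, 1j * u_z])
u_V = np.sqrt(2) * np.array([0.0, u_phi, 0.0])

phi = np.linspace(0, np.pi, 721)          # quantization-axis angle in the x-y plane
ez = np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=1)

uH2, uV2 = np.sum(abs(u_H)**2), np.sum(abs(u_V)**2)
ezH2 = abs(ez @ u_H)**2                   # |e_z~ . u_H|^2 as a function of phi
ezV2 = abs(ez @ u_V)**2
inv_A = n_g * (ezV2 * uH2 - ezH2 * uV2) / (uH2 + uV2)   # Eq. (SqueezingModeArea)

phi_opt = phi[np.argmax(inv_A)]
print(f"1/A_Jz is maximal at phi = {np.degrees(phi_opt):.1f} deg")
\end{verbatim}
With these placeholder amplitudes $1/A_{J_z}$ is largest for $\qaxis$ along the $y$ axis, consistent with the behavior of OD/$N_A$ discussed in the next section; the full coupling $\chieff$ also involves the axis-dependent magic detuning $\Delta_{J_z}$, which shifts the purely coherent optimum toward the $x$ axis, as noted below.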
The dependence of the magic detunings on the direction of the quantization axis is shown in \frf{Fig::CouplingStrength}(b) for atoms trapped at a typical distance of $r_\perp'=1.8a$ on the $x$ axis.
In typical operating regimes, the magic frequencies are hundreds of MHz from resonance with either excited state, placing the interaction in the off-resonant, dispersive regime.
Using these magic detunings, in \frf{Fig::CouplingStrength}(c) we show the variation in $\chieff$ as a function of $\varphi$.
This suggests that, based solely on the strength of the coherent interaction, the $x$ axis is the optimal quantization axis.
As we will see in the next section, the optimal quantization axis is significantly modified when decoherence due to optical pumping is included.
%====== SUBSECTION: Including optical pumping ======%
\subsection{Decoherence due to optical pumping}\label{sec:decoherence}
The treatment above considers an idealized QND interaction.
The coupling of the atoms to the probe, however, will always lead to scattering of photons into modes other than the forward-scattered guided mode.
This is accompanied by optical pumping that destroys the entanglement associated with spin squeezing. In addition it reduces the metrologically useful signal.
The maximum achievable metrologically relevant squeezing is determined by the balance of this decoherence with the QND measurement.
We model this using a first-principles stochastic master equation (SME) description~\cite{Jacobs2006, Baragiola2014},
\begin{align} \label{Eq::SME}
d \hat{\rho} = s\sqrt{\frac{\kappa}{4}} \mathcal{H}[\hat{\rho}] dW + \frac{\kappa}{4} \mathcal{L}[\hat{\rho}] dt + \sum_n \mathcal{D}_n [\hat{\rho}] dt,
\end{align}
where $s = {\rm sign}(\chieff)$ and $\hat{\rho}$ is the collective atomic state.
The measurement strength $\kappa =|\chieff|^2 \dot{N}_L$ determines the rate of the spin squeezing in the absence of decoherence.
The first two terms describe the QND measurement, where $dW$ is a stochastic Wiener increment satisfying $dW^2 = dt$.
The conditional dynamics that result from the measurement are generated by the superoperator
\begin{align}
\mathcal{H}[\hat{\rho}] = \jz \hat{\rho} + \hat{\rho} \jz - 2 \expt{\jz} \hat{\rho},
\end{align}
and the collective Lindblad map is
\begin{align}
\mathcal{L}[\hat{\rho}] = - \smallfrac{1}{2} \big( \hat{\rho} \jz^2 + \jz^2 \hat{\rho} \big) + \jz \hat{\rho} \jz.
\end{align}
The final term in \erf{Eq::SME} describes the effect of optical pumping acting locally on each atom along the nanofiber.
The optical pumping map is governed by a standard master equation~\cite{Deutsch2010a}.
Restricting to the two-dimensional subspace associated with the clock states\index{state!clock state}, the action on the $n^{th}$ atom is
\begin{align}
\mathcal{D}_n[\hat{\rho}] = \sum_{f=3,4} \Big\{& -\frac{\gamma_{f}}{2} \big[ \hat{\rho} (\op{f,0}{f,0})^{(n)} + (\op{f,0}{f,0})^{(n)}\hat{\rho} \big]\\
&+ \sum_{\tilde{f} =3,4} \gamma_{f \rightarrow \tilde{f}}(\op{\tilde{f},0}{f,0})^{(n)} \hat{\rho}(\op{f,0}{\tilde{f},0})^{(n)} \Big\}.\nonumber
\end{align}
Here, $\gamma_{f}$ is the total rate of photon scattering by atoms in state $\ket{f,0}$ and $\gamma_{f \rightarrow \tilde{f} }$ is the rate of optical pumping between the clock states, $\ket{f,0} \rightarrow \ket{\tilde{f},0}$ (see Appendix~\ref{Appendix::Rates}). Expressed in terms of Pauli operators on the clock-state pseudospin, the map acts as
\begin{align} \label{Eq::OpticalPumpingMapSchr}
\mathcal{D}_n [\hat{\rho}]
=& - \!\bigg[ \frac{2(\gammau \!+\! \gammad) \!-\! \gammauu \!-\! \gammadd}{4} \bigg] \hat{\rho} \!-\! \frac{ \gammau \!-\! \gammad \!-\! \gammauu \!+\! \gammadd }{4} \big( \hat{\sigma}_z^{(n)} \hat{\rho}\!+\! \hat{\rho} \hat{\sigma}_z^{(n)} \big) \nn \\
&+ \frac{\gammauu+\gammadd}{4} \hat{\sigma}_z^{(n)}\hat{\rho} \hat{\sigma}_z^{(n)} + \gammaud \hat{\sigma}_-^{(n)} \hat{\rho} \hat{\sigma}_+^{(n)} + \gammadu \hat{\sigma}_+^{(n)} \hat{\rho} \hat{\sigma}_-^{(n)}.
\end{align}
There are three important features of this map that are not typical in a QND measurement of ideal spin-$\half$ particles.
First, the map is not trace preserving because atoms can be pumped out of the clock states\index{state!clock state}.
Second, unequal rates of optical pumping for $\ket{\uparrow}$ and $\ket{\downarrow}$ polarize the mean $\expt{\jz}$ towards a value different from that found in the QND measurement.
Third, owing to the large ground hyperfine splitting, photons arising from optical pumping of $f \rightarrow \tilde{f}=3$ and $f \rightarrow \tilde{f}=4$ are distinguishable, thus these processes destroy coherences between $\ket{\uparrow}$ and $\ket{\downarrow}$.
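The first of these features is easy to check directly: the sketch below applies the map of \erf{Eq::OpticalPumpingMapSchr} to a trial single-atom state with assumed rates satisfying $\gamma_f \ge \gamma_{f\rightarrow 4}+\gamma_{f\rightarrow 3}$ and confirms that the trace of $\hat{\rho}$ decays, along with the pseudospin coherence.
\begin{verbatim}
import numpy as np

# Apply the clock-subspace optical pumping map to a trial single-atom state.
# The rates are assumed numbers obeying gamma_f >= gamma_{f->4} + gamma_{f->3},
# so some population is pumped out of the clock states entirely.
g_up, g_uu, g_ud = 1.0, 0.40, 0.10   # from |up>:   total, up->up,     up->down
g_dn, g_dd, g_du = 1.0, 0.45, 0.05   # from |down>: total, down->down, down->up

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma_- = |down><up|
sp = sm.T                                 # sigma_+ = |up><down|

def D(rho):
    out = -((2 * (g_up + g_dn) - g_uu - g_dd) / 4) * rho
    out = out - ((g_up - g_dn - g_uu + g_dd) / 4) * (sz @ rho + rho @ sz)
    out = out + ((g_uu + g_dd) / 4) * (sz @ rho @ sz)
    out = out + g_ud * (sm @ rho @ sp) + g_du * (sp @ rho @ sm)
    return out

rho_scs = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])   # x-polarized pseudospin
drho = D(rho_scs)
print("d(Tr rho)/dt  =", np.trace(drho))       # negative: leakage out of clock states
print("d<sigma_x>/dt =", np.trace(drho @ sx))  # decay of the single-atom coherence
\end{verbatim}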
We calculate the squeezing parameter as a function of time based on the evolution of atomic correlation functions, where operators evolve according to the adjoint form of the SME in \erf{Eq::SME}. The collective atomic variables obey the following stochastic equations of motion (see Appendix \ref{Appendix::OpticalPumping}),
\begin{subequations}\label{Eq::collectivedynamics}
\begin{align}
&d N_C = -\gamma_{00} N_C dt + 2\gamma_{03} \expt{\hat{J}_z} dt \label{Eq::NA}\\
&d \expt{\hat{J}_x} = -\gamma_{11} \expt{\hat{J}_x} dt , \label{Eq::MeanSpinDecay} \\
&d \expt{\hat{J}_z} = s\sqrt{\kappa} \varz dW -\gamma_{33} \expt{\hat{J}_z}dt + \smallfrac{1}{2}\gamma_{30} N_C dt, \\
&d \varz = \!- \kappa \big(\varz\big)\!^2\! dt\!-\! 2 \gamma_{33} \varz dt \!\!+\! \smallfrac{1}{4} \!\big( 2\gamma_{33}\!\!-\!\!\gamma_{00} \!\big) N_C dt \!\!+\! \smallfrac{1}{2} \big( \gamma_{03} \!\!-\!\! 2 \gamma_{30} \big)\! \expt{\!\hat{J}_z\!} dt, \label{Eq::varJz}
\end{align}
\end{subequations}
where the decay and feeding rates are given in \erf{Eq::DecayRates}.
The total number of atoms in the clock-state subspace is given by $N_C$, which primarily decays at rate $\gamma_{00}$.
The final term in \erf{Eq::varJz}, proportional to $\expt{\hat{J}_z}$, is typically negligible since in most applications $\expt{\hat{J}_z} \ll N_C$.
We retain this small correction since unbalanced optical pumping acts to polarize the atoms and alters the rate of atom loss.
To find the peak squeezing in the presence of optical pumping, we numerically integrate Eqs. (\ref{Eq::NA}--\ref{Eq::varJz}) and then use \erf{Eq::SqueezingParameter} to calculate the metrological squeezing parameter, $\xi^2$, as a function of time.
We choose here the magic frequency close to the $ f=4\leftrightarrow f'=4 $ transition, $ \magic{4} $, which is furthest from resonance with both excited hyperfine transitions.
Typical time evolution is shown in \frf{Fig::Squeezing_Dynamics} for $2500$ atoms trapped a distance $r'\!_\perp=1.8a$ from the center of the nanofiber, where time is scaled to the characteristic scattering rate,
\begin{align}\label{Eq::gamma_s}
\gamma_s \equiv \frac{\sigma_0}{A_{\rm in}}\frac{\Gamma^2}{4 \Delta_{J_z}^2} \dot{N}_L .
\end{align}
We study the dynamics for two choices of quantization axis: (i) along the $x$ axis and (ii) along the numerically determined optimal axis. Figure \ref{Fig::Squeezing_Dynamics}(a) shows the time evolution of the squeezing parameter. We achieve a maximum squeezing of 4.7 dB when the clock states are chosen along the optimal axis; $\qangle_{\rm opt} \approx 86^\circ$ in \erf{Eq::QuantizationAxis}.
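The structure of this calculation can be reproduced with a few lines of code: integrate the drift part of Eqs. (\ref{Eq::NA}--\ref{Eq::varJz}) with simple Euler steps and evaluate \erf{Eq::SqueezingParameter} at each step. In the sketch below the rates $\gamma_{ij}$ and the measurement strength are placeholders (the actual values follow from \erf{Eq::DecayRates}), and the stochastic $dW$ contribution to $\expt{\hat{J}_z}$ is dropped, so the output is illustrative rather than a reproduction of \frf{Fig::Squeezing_Dynamics}.
\begin{verbatim}
import numpy as np

# Euler integration of the drift part of Eqs. (NA)-(varJz), followed by the
# squeezing parameter of Eq. (SqueezingParameter). All rates are placeholders.
N_A   = 2500
kappa = 1.0                          # measurement strength, units of gamma_s
g00, g11, g33 = 0.25, 0.55, 0.30     # assumed loss / depolarization / decay rates
g03, g30      = 0.02, 0.02           # assumed feeding rates

dt, n_steps = 1e-4, 40000            # time in units of 1/gamma_s
N_C, Jx, Jz, varJz = N_A, N_A / 2, 0.0, N_A / 4
xi2 = np.empty(n_steps)

for i in range(n_steps):
    xi2[i] = N_A * varJz / Jx**2                      # Eq. (SqueezingParameter)
    dN_C   = (-g00 * N_C + 2 * g03 * Jz) * dt
    dJx    = -g11 * Jx * dt
    dJz    = (-g33 * Jz + 0.5 * g30 * N_C) * dt       # dW term dropped (mean trajectory)
    dvar   = (-kappa * varJz**2 - 2 * g33 * varJz
              + 0.25 * (2 * g33 - g00) * N_C
              + 0.5 * (g03 - 2 * g30) * Jz) * dt
    N_C, Jx, Jz, varJz = N_C + dN_C, Jx + dJx, Jz + dJz, varJz + dvar

t_peak = np.argmin(xi2) * dt
print(f"peak squeezing {-10*np.log10(xi2.min()):.1f} dB at t = {t_peak:.2f}/gamma_s")
\end{verbatim}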
%========= SQUEEZING DYNAMICS: \xi, variance, mean, N_A =========
\begin{figure*}[tbp]
\centering
\hspace{-50pt}
\includegraphics[scale=0.42]{../chap5/Fig4}
\caption[Squeezing on the clock states as a function of time in units of the scattering rate $\gamma_s$ for $2500$ atoms trapped in the $x$-$z$ plane at a distance $ r'\!_\perp=1.8a$ from the axis of the nanofiber.]{Squeezing on the clock states as a function of time in units of the scattering rate $\gamma_s$ for $2500$ atoms trapped in the $x$-$z$ plane at a distance $ r'\!_\perp=1.8a$ from the axis of the nanofiber. The optimal quantization axis (red) is compared to the quantization along the $x$ axis (blue).
Dashed lines indicate simulations without optical pumping; i.e. no decoherence.
(a) Metrological spin squeezing parameter $\xi^{-2}$, \erf{Eq::SqueezingParameter}, in dB.
(b) Collective mean spin $\expt{\hat{J}_1}\equiv \expt{\hat{J}_x}$ and decaying atom number in the clock states, $N_C$ (inset).
(c) Conditional squeezed variance $\varz$.
The inset shows the decomposition at the optimal quantization axis of $ \Delta J_3^2\equiv \Delta J_z^2 $ (red dashed) into the single-body variance, $N_A (\Delta j_z^{(1)})^2$ (green) and the two-body covariance, $N_A(N_A-1)\expect{\Delta j_z^{(1)}\Delta j_z^{(2)}}$ (black), as given by \erf{Eq::VarianceDecomposition}.
}\label{Fig::Squeezing_Dynamics}
\end{figure*}
%================
The peak squeezing is ultimately limited by the combined effects of optical pumping on both $\expt{\jx}$ and $\varz$. Here, as in a free-space model \cite{Baragiola2014}, the primary factor that limits metrological squeezing is the decay of the collective mean spin $\expt{\jx}$.
A scattered photon eliminates the initial coherence between $\ket{\uparrow}$ and $\ket{\downarrow}$ within a single atom, thus depolarizing $\expt{\jx}$.
Atoms optically pumped to magnetic sublevels outside the clock subspace also reduce $N_C$, and with it $\expt{\jx}$.
These effects are captured by the depolarization rate $\gamma_{11}$ in the equation for $\expt{\jx}$, \erf{Eq::MeanSpinDecay}, whose solution is plotted in \frf{Fig::Squeezing_Dynamics}(b).
We can gain deeper insight into the microscopic effects of optical pumping on spin squeezing by looking at the evolution of the one- and two-body correlation functions. In terms of its constituent pseudospins, the collective variance takes the form
\begin{align} \label{Eq::VarianceDecomposition}
\varz & = N_A \big( \Delta j_z^{(1)} \big)^2 + N_A(N_A-1) \expt{ \Delta j_z^{(1)} \Delta j_z^{(2)} }
\end{align}
for permutationally symmetric states considered here, where $(1)$ and $(2)$ label any two atoms in the ensemble. Loss of atoms affects the first (single-body) variance term, which scales as $N_A$.
The two-body correlations,
\begin{align}
\expt{ \Delta j_z^{(1)} \Delta j_z^{(2)} } &\equiv \smallfrac{1}{4} \big( \expt{ \hat{\sigma}_z^{(1)} \otimes \hat{\sigma}_z^{(2)} } - \expt{\hat{\sigma}_z^{(1)} }^2 \big),
\end{align}
which contribute as $N_A^2$ to the collective fluctuations, have a much larger influence on the total variance. The spin-spin correlations at the heart of spin squeezing, $\expt{ \hat{\sigma}_z^{(1)} \otimes \hat{\sigma}_z^{(2)} }$, are rapidly generated by the measurement backaction but decohere due to optical pumping according to Eqs. (\ref{Eq::TwoBodyDecay}--\ref{Eq::OperatorMap}),
\begin{align} \label{Eq::TwoBodySpinDecay}
\frac{d}{dt} \expt{\hat{\sigma}_z^{(1)} \otimes \hat{\sigma}_z^{(2)} } \Big|_{\rm op} = - 2 \gamma_{33} \expt{\hat{\sigma}_z^{(1)} \otimes \hat{\sigma}_z^{(2)}} + \gamma_{30} \expt{ \hat{\mathbbm{1}}^{(1)}_C \otimes \hat{\sigma}_z^{(2)} + \hat{\sigma}_z^{(1)} \otimes \hat{\mathbbm{1}}^{(2)}_C },
\end{align}
where $\hat{\mathbbm{1}}^{(n)}_C \equiv \big( \op{\uparrow}{\uparrow} + \op{\downarrow}{\downarrow} \big){}^{(n)}$ is the single-body projector onto the clock states.
In addition, atoms that return to the clock subspace after scattering a photon inject additional noise into $\varz$.
All of these effects are included in the equation for $\varz$, \erf{Eq::varJz}, whose overall and decomposed dynamical evolutions are shown in Fig. \ref{Fig::Squeezing_Dynamics}(c).
%========= FIGURE: Peak squeezing (\xi, measurement strength, scattering rates) =========
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.5]{../chap5/Fig5}
\caption[Dependence of the parameters that determine metrological squeezing on the direction of the quantization axis that defines the clock states, $\mathbf{e}_{\tilde{z}}$.]{Dependence of the parameters that determine metrological squeezing on the direction of the quantization axis that defines the clock states, $\qaxis$. In all cases we consider $2500$ atoms trapped in the $x$-$z$ plane at distance $ r'\!_\perp=1.8a$ from the axis of the nanofiber.
In (b--d) $\qaxis$ is confined to the $x$-$y$ plane, where the optimal peak squeezing occurs.
(a,b) Peak achievable squeezing, evaluated at the time of maximum squeezing and measured in dB, as a function of the direction of $\qaxis$.
(c) $\mathrm{OD}/N_A=\sigma_0 A_{\rm in}/A^2_{J_z}$, \erf{Eq::OD/atom}.
(d) Rates of atom loss, $\gamma_{00}$, and depolarization, $\gamma_{11}$, relative to the characteristic scattering rate $\gamma_s$. See Eqs.~\eqref{Eq::lrate} and~\eqref{Eq::frate}. }\label{Fig::Squeezing_QuantizationAxis}
\end{figure}
%================
With our model, we explore optimal conditions for generating spin squeezing. The choice of quantization axis $\qaxis$ that defines clock states affects both the measurement strength and the relative rates of optical pumping. We plot the peak squeezing as a function of the direction of $\qaxis$ in the $x$-$y$ plane in \frf{Fig::Squeezing_QuantizationAxis}(b).
We gain insight into the tradeoffs between the QND entangling interaction and decoherence by independently inspecting the measurement strength and the optical pumping rates.
First, the rate of squeezing is determined by the effective optical density per atom on resonance,
\begin{align}\label{Eq::OD/atom}
\mathrm{OD}/N_A \equiv \frac{\kappa}{\gamma_s}=\frac{\sigma_0 A_{\rm in}}{A_{J_z}^2},
\end{align}
which peaks when $\qaxis$ is along the $y$ axis, as seen in \frf{Fig::Squeezing_QuantizationAxis}(c).
Choosing $\qaxis$ along $y$, the OD/$N_A$ is about 50\% larger than it is along the $x$ axis.
The various forms of decoherence similarly vary with quantization axis, as seen in \frf{Fig::Squeezing_QuantizationAxis}(d), where we plot the dominant rate of atom loss, $\gamma_{00}$, and the depolarization rate of the mean pseudospin $\expt{\jx}$, $\gamma_{11}$.
Because the magic frequency $\magic{4}$ is nearly equidistant from $f'=3$ and $f'=4$ when the quantization axis is near the $y$ axis (see \frf{Fig::CouplingStrength}(b)), this choice of quantization axis provides more protection from decoherence.
While the decoherence rates in \frf{Fig::Squeezing_QuantizationAxis}(d) are largest near the $y$ axis, the increase in $\kappa$ more than compensates to provide optimal peak squeezing.
Finally, we explore the optimal conditions as a function of the trapping geometry.
The dispersive entangling interaction is based on the collective atomic coupling to the evanescent guided-mode fields, which decay exponentially away from the nanofiber surface, as seen in OD/$N_A$ plotted in \frf{Fig::Squeezing_Distance}(a).
From Eq.~\eqref{Eq::varJz}, the optimal choice of quantization axis depends not only on the distance from the fiber but also weakly on the atom number, \frf{Fig::Squeezing_Distance}(b), because of the competition between squeezing and decoherence.
At the optimal quantization axis, the peak achievable squeezing depends strongly on the distance from the fiber, as seen in \frf{Fig::Squeezing_Distance}(c), along with the expected increase as more atoms contribute to the atom-light interface.
Several effects limit the reliability of the simulations for atoms trapped very near the fiber surface as $r'\!_\perp \rightarrow a$.
First, strong van der Waals interactions modify the light shifts and magic frequencies \cite{Vetsch2010Opticala, Lacroute2012}.
Second, the optical pumping model used here breaks down when the local density of states is significantly modified by the presence of the dielectric nanofiber \cite{LeKien2005a, LeKien2006}.
At distances $r'\!_\perp > 1.5a$ the atoms' local environment is roughly that of unmodified vacuum \cite{LeKien2005a} and a free-space optical pumping model in \erf{Eq::OpticalPumpingMapSchr} suffices.
%======= FIG: Scaling with atom number and distance =======
\begin{figure*}[t]
\centering
\hspace{-50pt}
\includegraphics[scale=0.41]{../chap5/Fig6}
\caption[Parameters that define squeezing as a function of trapping distance $r'_\perp$ and initial atom number $N_A$.]{Parameters that define squeezing as a function of trapping distance $r'_\perp$ and initial atom number $N_A$. (a) OD/$N_A$. Inset is the corresponding magic detuning in units of MHz (see text).
(b) Optimal quantization axis orientation angle $\qaxis$ in the $x$-$y$ plane for different atom numbers.
The line with $ 500 $ atoms terminates when the squeezing effect is too weak to be observed ($ r'\!_\perp>2.3a $).
(c) Peak metrological squeezing at the optimal $\qaxis$ for different atom numbers.} \label{Fig::Squeezing_Distance}
\end{figure*}
%================================
%====== SECTION: Summary and outlook ======%
\section{Summary and Outlook} \label{Sec::Conclusion}
We studied the strong cooperativity of the atom-light interface that can be achieved with atoms trapped in the evanescent field surrounding an optical nanofiber and interacting with a guided mode in the dispersive regime.
The key parameter that determines the coupling is the resonant optical density per atom.
Due to the tight confinement of the guided mode over the entire chain of atoms, this parameter is $ \mathrm{OD}/N_A\sim 10^{-2} $ for typical geometries used in current experiments, which approaches the values achieved for atomic ensembles trapped inside optical cavities of moderate finesse \cite{Chen2011Conditional, Zhang2012Collective}.
In contrast, the atom-light coupling for atoms in free space is typically orders of magnitude smaller, $ \mathrm{OD}/N_A \sim 10^{-6}$.
Under ideal conditions the atom-light interaction is entirely symmetric along the nanofiber, providing a platform for long-range correlations independent of the distance between the atoms.
As the light is entirely guided, fiber- or waveguide-coupled atomic ensembles can be networked together or coupled to other physical systems in a hybrid platform \cite{Hafezi2012Atomic, Liebermeister2014Tapered, Meng2015, Tiecke2015} for truly long-range entanglement generation and distribution.
We calculated the dispersive response based on a modal decomposition of the dyadic Green's function, which provides a general method to calculate the induced phase shifts and polarization rotations of the guided modes.
With this we studied the QND measurement of atoms via polarization spectroscopy.
In particular, we studied squeezing of the collective pseudospin associated with the atomic clock states of cesium.
The atoms induce a birefringent index of refraction on the light, conditional on the spin state, which provides a mechanism for measuring the atomic spin projection and thus squeezing its uncertainty.
Based on our formalism we calculated the nanofiber-enhanced measurement strength that determines the rate of squeezing.
The peak squeezing one can generate depends on the balance between the reduction of spin projection noise\index{noise!projection noise} by QND measurement and the damage done to the spin ensemble by optical pumping.
Both measurement and optical pumping arise from the same physical mechanism -- scattering of photons by atoms.
The former corresponds to cooperative forward scattering into the guided mode whereas the latter corresponds to local scattering into all other modes, primarily the unguided ``radiation'' modes. The cooperativity, specified by the effective OD/$N_A$, determines the ratio of these two effects and thus the ultimate power of the quantum atom-light interface.
We studied QND measurement using a first-principles stochastic master equation model, which allowed us to track the atomic correlation functions that define the metrologically relevant squeezing parameter.
These include the atomic projection noise\index{noise!projection noise} uncertainty as well as the length of the collective spin vector that defines the metrological signal. We find that decoherence acts primarily to depolarize the mean pseudospin and optically pump atoms out of the clock subspace, which we treat as loss. In addition, optical pumping decoheres the spin correlations at the heart of spin squeezing, but at a reduced rate compared with the effect on the mean pseudospin.
The combined effect of QND measurement and decoherence yields a peak squeezing approaching 5 dB with $\sim 2500$ atoms. Larger enhancements in atom-light coupling and QND squeezing are possible with modest increases in the number of trapped atoms and/or for atoms trapped closer to the nanofiber surface.
Whereas we have assumed here that atoms can be prepared in a desired clock state defined by a particular quantization axis, in practice such preparation will require optical pumping that may be challenging for atoms near the surface of the nanofiber. In addition, though we have treated the atoms as localized at well defined points, in practice the atoms' thermal motion can reduce the strong coupling described here. Our formalism provides a starting point for developing models necessary to study the dynamics of optical pumping, including the possibility of cooling atoms to the vibrational ground state, where thermal motion is negligible.
Finally, though we have treated here the case of strong coupling due solely to tight confinement of the guided mode for atoms near the surface of the nanofiber, we can achieve even greater enhancement by combining this effect with longitudinal confinement provided by fiber-based optical cavities~\cite{LeKien2009a, Wuttke2012Nanofiber, Yalla2014Cavity, Bohnet2014Reduced, Nayak2014Optical}.
The coupling can be further improved under EIT conditions that substantially slow the group velocity \cite{Gouraud2015Demonstration, Sayrin2015, Kumar2015Autler, LeKien2015Electromagnetically}. In addition, quantum control of the internal hyperfine state \cite{Smith2013Quantuma} can greatly enhance the entangling power of the atom-light interface~\cite{Trail2010Strongly, Norris2012Enhanced}.
For large enough coupling, QND measurement should allow production of highly entangled spin states beyond the Gaussian regime \cite{Stockton2004Deterministic, McConnell2015Entanglement}.
%</birefringenceprotocol>
\section*{Acknowledgments}
We gratefully acknowledge Sudhakar Prasad for helpful discussions regarding the dyadic Green's function.
We thank the UNM Center for Advanced Research Computing for computational resources used in this work.
This work was supported by the AFOSR, under grant Y600242, and the NSF, under grants PHY-1212445 and 1307520.
\appendix
%=========== APPENDIX ===========%
%\chapter[Deriving birefringence QND measurement dynamics]{Some detailed derivation of the birefringence QND measurement dynamics}
%<*atomnumberdetection>
\chapter[Minimum detectable atom number in a measurement protocol]{Minimum detectable atom number using the birefringence effect of a nanofiber}\label{sec:birefringenceresolution}
The theory of Faraday spectroscopy for an optical lattice was presented in Ref.~\cite{Smith2003a}. Following that approach, we present the birefringence\index{birefringence effect} spectroscopy analysis for our nanofiber case.
\section[Shot-noise limit and minimum detectable atom number]{Shot-noise limit and minimum detectable \\ atom number}
To measure the Stokes-vector rotation angle on the \Poincare sphere, that is, the transformation of the polarization of the light, we launch probe light along the bisector of the $ x $ and $ y $ axes, trap the atoms on the $ x $ axis, and then measure the output powers of the counterclockwise-rotating ($ \sigma_+ $) and clockwise-rotating ($ \sigma_- $) components of the output. From Eq.~\eqref{eq:birefringencerotang}, and including the quantum transition factor in the Hamiltonian, Eq.~\eqref{Eq::RotationAngle}, the total differential phase shift between the $ \sigma_+ $ and $ \sigma_- $ components of the traveling probe light after interacting with $ N_A $ atoms can be given by
\begin{align}
\varphi_{_N} &= N_A C_{j'}^{(0)}\sigma_0\frac{c}{v_g}\frac{\Gamma_0}{4\Delta}\left[| \mathbf{u}_H(r'_{\!\perp})|^2- | \mathbf{u}_V(r'_{\!\perp})|^2 \right],
\end{align}
where $ C_{j'}^{(0)}=C_{j'f}^{(0)}=\sum_{f,f'}C_{f,f'}^{(0)}=\frac{2^{j'-1/2}}{3} $ is the far-detuned rank-$ 0 $ coefficient on a $ j\rightarrow j' $ transition, equal to $ \frac{1}{3} $ for the D1 line and $ 2/3 $ for the D2 line of the alkali atoms we study~\cite{Deutsch2010a}.
The transformation of polarization is measured in terms of the total power difference between the $ x $- and $ y $-components of the field. The measured signal power is given by
\begin{align}
\Delta P_S &= P_0 \sin(\varphi_{_N}) \approx P_0 \varphi_{_N}, \label{eq:polsignal}
\end{align}
where $ P_0 $ is the power of the signal, and we have assumed the rotation angle is very small for the case studied here.
Our resolution of detection is limited by the shot noise\index{noise!shot noise} characterized by the fluctuations in the power difference with a root mean square amplitude of
\begin{align}
\Delta P_{SN} &= \sqrt{\frac{P_0 \hbar \omega_0 }{2\eta \tau_{pd}}}, \label{eq:shotnoise}
\end{align}
where $ \eta $ is the quantum efficiency of the detector, and $ \tau_{pd} $ is the response time of the photon detector.
In the limiting case where the signal equals the shot noise\index{noise!shot noise}, that is, $ \Delta P_S=\Delta P_{SN} $, we obtain the minimum number of atoms that we can detect (the resolution) as
\begin{align}
N_{min} &= \frac{1}{C_{j'}^{(0)}}\sqrt{\frac{8 \hbar \omega_0 }{P_0\eta \tau_{pd}}}\frac{\Delta}{\sigma_0 n_g\Gamma_0}\frac{1}{\left[| \mathbf{u}_H(r'_{\!\perp})|^2- | \mathbf{u}_V(r'_{\!\perp})|^2 \right]}\\
&=\frac{1}{C_{j'}^{(0)}} \sqrt{\frac{8 \hbar \omega_0 }{P_0\eta \tau_{pd}}}\frac{2\pi\Delta}{3\lambda^2 \frac{4}{3} \left( \frac{\omega_0}{c}\right)^3 \frac{|d_{eg}|^2}{\hbar}}\frac{A_{e\!f\!f}^V}{\zeta -1 }\\
&= \frac{1}{C_{j'}^{(0)}}\sqrt{\frac{\hbar^3 \pi c }{P_0\eta \tau_{pd}\lambda}}\frac{\lambda\Delta }{ \left( 2\pi\right)^2 |d_{eg}|^2}\frac{A_{e\!f\!f}^V}{\zeta -1 }\\
&= \frac{1}{4\pi C_{j'}^{(0)}}\sqrt{\frac{\hbar^3 c \lambda }{\pi P_0\eta \tau_{pd}}}\frac{\Delta }{ |d_{eg}|^2}\frac{A_{e\!f\!f}^V}{\zeta -1 },\label{eq:Nmin_birefringence}
\end{align}
with
\begin{align}
A_{e\!f\!f}^V &= \frac{1}{n_g| \mathbf{u}_V(r'_{\!\perp})|^2}\\
\zeta &= \frac{| \mathbf{u}_H(r'_{\!\perp})|^2}{| \mathbf{u}_V(r'_{\!\perp})|^2}.
\end{align}
Eq.~\eqref{eq:Nmin_birefringence} gives the minimum detectable atom number in the shot-noise limit when all of the practical parameters of the experiment are known. Below, we estimate $ N_{min} $ for an ideal detector in this measurement protocol.
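As a rough illustration of how the first line of Eq.~\eqref{eq:Nmin_birefringence} might be evaluated, the Python sketch below plugs in placeholder values for the probe power, detector response time, detuning, and guided-mode parameters, using $|\mathbf{u}_H|^2-|\mathbf{u}_V|^2=(\zeta-1)/(n_g A_{e\!f\!f}^V)$; none of the numbers are taken from an actual experiment.
\begin{verbatim}
# Shot-noise-limited minimum detectable atom number,
#   N_min = (1/C) * sqrt(8*hbar*w0/(P0*eta*tau)) * (Delta/Gamma0)
#           * A_eff_V / (sigma0*(zeta - 1)),
# with |u_H|^2 - |u_V|^2 = (zeta - 1)/(n_g * A_eff_V), so n_g cancels.
# All parameter values below are placeholders, not measured numbers.
import numpy as np

hbar   = 1.054571817e-34           # J s
lam    = 852.3e-9                  # probe wavelength (Cs D2), m
w0     = 2*np.pi*2.99792458e8/lam  # optical angular frequency, rad/s
Gamma0 = 2*np.pi*5.2e6             # natural linewidth, rad/s
Delta  = 2*np.pi*1.0e9             # probe detuning, rad/s (placeholder)
sigma0 = 3*lam**2/(2*np.pi)        # resonant cross section (unit osc. strength)
P0     = 1.0e-9                    # probe power, W (placeholder)
eta    = 1.0                       # detector quantum efficiency
tau    = 1.0e-3                    # detector integration time, s (placeholder)
C0     = 2/3                       # rank-0 coefficient for the D2 line
A_eff_V = 0.5e-12                  # effective mode area for u_V, m^2 (placeholder)
zeta    = 2.0                      # |u_H|^2/|u_V|^2 at the trap site (placeholder)

N_min = (1/C0)*np.sqrt(8*hbar*w0/(P0*eta*tau))*(Delta/Gamma0) \
        * A_eff_V/(sigma0*(zeta - 1))
print(N_min)
\end{verbatim}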
\section{Estimating the minimum detectable atom number in an ideal case}
We consider the critical time scale
\begin{align}
\tau_{pd} &= \frac{1}{\gamma_s},
\end{align}
where $ \gamma_{s} $ is the photon scattering rate (see Appendix~\ref{chap:summaryofatomicphysicsformulas}). The minimum detectable atom number can then be estimated from the relationship between the scattering rate and the scattering cross section $ \sigma(\Delta) $, or the saturation cross section $ \sigma_0 $, on this critical time scale. Noting that
\begin{align}
\gamma_s &= \sigma(\Delta) \frac{I(\br')}{\hbar \omega_0} \\
\sigma(\Delta) &=\frac{\sigma_0}{1+\frac{4\Delta^2}{\Gamma^2_0}}\approx \sigma_0 \frac{\Gamma_0^2}{4\Delta^2} \quad \text{(far detuning)}
\end{align}
where $ I(\br') $ is the light intensity at the atom position.
Hence,
\begin{align}
\gamma_s &\approx \sigma_0\frac{\Gamma_0^2}{4\Delta^2} \frac{I(\br')}{\hbar \omega_0}=\frac{\sigma_0}{A_{in}}\frac{P_0}{\hbar\omega_0}\frac{\Gamma_0^2}{4\Delta^2},
\end{align}
where the effective incident mode area at the atom position, $A_{in} $, is defined as
\begin{align}
A_{in }&= \frac{P_0}{I(\br')}.
\end{align}
We also set the quantum efficiency $ \eta=1 $. The critical shot noise\index{noise!shot noise} defined in Eq.~\eqref{eq:shotnoise} then becomes
\begin{align}
\Delta P_{SN} &\approx \sqrt{\frac{P_0 \hbar \omega_0 \gamma_s}{2\eta }} =P_0\sqrt{\frac{ \sigma_0 }{2A_{in}}}\frac{\Gamma_0}{2\Delta}.
\end{align}
The signal defined in Eq.~\eqref{eq:polsignal} can be rewritten as
\begin{align}
\Delta P_S &= {N_A P_0}{C_{j'}^{(0)}} \frac{\sigma_0}{\Abir}\frac{\Gamma_0}{4\Delta},
\end{align}
where the effective mode area at the atom position for our configuration is defined by
\begin{align}
\Abir &= \frac{1}{n_g}\frac{1}{| \mathbf{u}_H(r'_{\!\perp})|^2- | \mathbf{u}_V(r'_{\!\perp})|^2}.
\end{align}
If we make the shot noise\index{noise!shot noise} power equal to the minimum detectable signal power, we can obtain the minimum detectable atom number as
\begin{align}
N_{min} &= \frac{1}{C_{j'}^{(0)}}\sqrt{\frac{2\Abir^2}{A_{in}\sigma_0}}.
\end{align}
As an estimate of the scale, we can ignore the constant factor of $ \sqrt{2} $ and claim the minimum detectable atom number is on the order of
\begin{align}
\delta N_A \sim \frac{1}{C_{j'}^{(0)}}\sqrt{\frac{\Abir^2}{A_{in}\sigma_0}}=\frac{1}{C_{j'}^{(0)}}\sqrt{\frac{\mathrm{OD}}{N_A}},
\end{align}
where $ \mathrm{OD}/N_A=\frac{\Abir^2}{A_{in}\sigma_0} $ is the optical depth per atom for this birefringence measurement configuration using mixed states of atoms.
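A back-of-the-envelope version of this estimate, with placeholder mode areas, is sketched below; the numbers are assumptions chosen only to exhibit the scaling $ \delta N_A \propto \Abir/\sqrt{A_{in}\sigma_0} $.
\begin{verbatim}
# Order-of-magnitude resolution dN_A ~ (1/C) * sqrt(A_bir^2/(A_in*sigma0)),
# following the estimate above.  Areas are placeholders for illustration.
import numpy as np

lam    = 852.3e-9
sigma0 = 3*lam**2/(2*np.pi)   # resonant cross section, m^2
A_in   = 0.3e-12              # incident mode area at the atom, m^2 (placeholder)
A_bir  = 1.0e-12              # birefringence mode area, m^2 (placeholder)
C0     = 2/3                  # rank-0 coefficient, D2 line

dN_A = (1/C0)*np.sqrt(A_bir**2/(A_in*sigma0))
print(dN_A)                   # ~ a few atoms for these placeholder areas
\end{verbatim}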
%</atomnumberdetection>
%=========== APPENDIX: Nanofiber mode functions ===========%
\chapter{Guided-mode functions for the optical nanofiber} \label{chap:fibereigenmodes}
In this Appendix we provide, for reference, the fundamental HE$_{11}$ solutions to the homogeneous wave equation, \erf{Eq::WaveEquationSource} with $\tensor{\boldsymbol{\alpha}} = 0$, for a cylindrical nanofiber of radius $a$ and index of refraction given by \erf{Eq::IndexofRefraction}. At a given frequency, $\omega_0 = c k_0$, the magnitudes of the longitudinal and transverse wave vectors for a guided mode are related by $n^2 k_0^2 = \beta_0^2 + k_\perp^2$.
The positive propagation constant, $\beta_0 \equiv \beta(\omega_0)$, is determined from the eigenvalue equation that results from enforcing physical boundary conditions at the fiber surface \cite{Snyder1983Optical},
\begin{align}
\frac{J_0(ha)}{ha J_1(ha)} = - \frac{n_1^2+n_2^2}{2n_1^2} \frac{K_1'(qa)}{qa K_1(qa)} + \frac{1}{h^2 a^2} - \bigg[ \bigg(\frac{n_1^2 - n_2^2}{2 n_1^2} \frac{K_1'(qa)}{qa K_1(qa)} \bigg)^2 + \frac{\beta_0^2}{n^2_1 k_0^2} \bigg(\frac{1}{q^2a^2} + \frac{1}{h^2a^2} \bigg)^2 \bigg]^{1/2}.
\end{align}
Inside the nanofiber the transverse wavevector is real, $k_\perp = h$, where $h=\sqrt{n_1^2 k_0^2 - \beta_0^2}$, and outside the nanofiber it is purely imaginary, $k_\perp = i q$, where $q=\sqrt{\beta_0^2- n_2^2k_0^2}$ is the evanescent decay constant. The vector eigenfunctions are expressed as $\mbf{f}_{\mu}(\br) = (2\pi)^{-1/2}\mbf{u}_{b,p}(\mbf{r}_\perp) e^{i b \beta_0 z}$, where the modes are indexed by frequency $\omega_0$, propagation direction $b = \pm$, and polarization $p$.
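A minimal numerical sketch of how $\beta_0$ can be extracted from this eigenvalue equation is given below, using SciPy Bessel functions and a bracketing root search; the fiber radius, wavelength, and refractive indices are placeholder values typical of nanofiber experiments rather than parameters quoted in the text.
\begin{verbatim}
# Sketch: solve the HE11 eigenvalue equation for the propagation constant
# beta0 of a vacuum-clad nanofiber.  Parameter values are placeholders.
import numpy as np
from scipy.special import jv, kv, kvp
from scipy.optimize import brentq

a, lam = 225e-9, 852e-9        # fiber radius and wavelength (placeholders)
n1, n2 = 1.4525, 1.0           # silica core, vacuum cladding
k0 = 2*np.pi/lam

def f(beta):
    h = np.sqrt(n1**2*k0**2 - beta**2)    # transverse wavevector inside
    q = np.sqrt(beta**2 - n2**2*k0**2)    # evanescent decay constant outside
    kk = kvp(1, q*a)/(q*a*kv(1, q*a))
    root = np.sqrt(((n1**2 - n2**2)/(2*n1**2)*kk)**2
                   + beta**2/(n1**2*k0**2)*(1/(q*a)**2 + 1/(h*a)**2)**2)
    # (left-hand side) - (right-hand side) of the eigenvalue equation
    return (jv(0, h*a)/(h*a*jv(1, h*a))
            + (n1**2 + n2**2)/(2*n1**2)*kk - 1/(h*a)**2 + root)

# Scan for a sign change between n2*k0 and n1*k0, then refine with brentq
# (assumes a single crossing in the scan, which holds for these parameters).
betas = np.linspace(n2*k0*1.001, n1*k0*0.999, 2000)
vals = np.array([f(b) for b in betas])
i = np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
beta0 = brentq(f, betas[i], betas[i+1])
print("effective index n_eff = beta0/k0 =", beta0/k0)
\end{verbatim}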
A relatively simple form for the guided-mode functions can be expressed in a cylindrical basis $(r_\perp, \phi, z)$ with longitudinal unit vector $\mathbf{e}_z$, oriented along the fiber axis.
The transverse unit vectors are related to their fixed Cartesian counterparts via the relations
\begin{subequations}
\begin{align}
\mathbf{e}_{r_{\!\perp}} &= \mathbf{e}_x \cos \phi + \mathbf{e}_y \sin \phi, \\
\mathbf{e}_\phi &= - \mathbf{e}_x \sin \phi + \mathbf{e}_y \cos \phi.
\end{align}
\end{subequations}
The transverse profile for the quasicircular guided modes, $p = \pm$, is
\begin{align} \label{Eq::QuasicircularModes}
\mbf{u}_{b,\pm}(\mathbf{r}_\perp) = \big[\mathbf{e}_{r_{\!\perp}} u_{r_{\!\perp}}(r_\perp) \pm i \mathbf{e}_\phi u_\phi(r_\perp) + i b \mathbf{e}_z u_z(r_\perp) \big]e^{ \pm i \phi},
\end{align}
and for the quasilinear guided modes, $p = \{H,V\}$, is
\begin{subequations} \label{Eq::QuasilinearModes}
\begin{align}
\mbf{u}_{b,H}(\mathbf{r}_\perp) = & \sqrt{2} \big[ \mathbf{e}_{r_{\!\perp}} u_{r_{\!\perp}}(r_\perp) \cos \phi - \mathbf{e}_\phi u_\phi(r_\perp) \sin \phi + ib \mathbf{e}_z u_z(r_\perp) \cos \phi \big] \\
\mbf{u}_{b,V}(\mathbf{r}_\perp) = & \sqrt{2} \big[ \mathbf{e}_{r_{\!\perp}} u_{r_{\!\perp}}(r_\perp) \sin \phi + \mathbf{e}_\phi u_\phi(r_\perp) \cos \phi + ib \mathbf{e}_z u_z(r_\perp) \sin \phi \big].
\end{align}
\end{subequations}
The modes are expressed in terms of real-valued functions that depend only on the radial coordinate $r_\perp$,
\begin{subequations} \label{Eq::ProfileFunctions}
\begin{align}
u_{r_{\!\perp}}(r_\perp) =& u_0 \big[ (1-s) K_0(q{r_{\!\perp}}) + (1+s)K_2(q{r_{\!\perp}})\big] \\
u_\phi(r_\perp) =& u_0\big[ (1-s) K_0(q{r_{\!\perp}}) - (1+s)K_2(q{r_{\!\perp}})\big] \\
u_z(r_\perp) =& u_0 \frac{2 q}{\beta_0} \frac{K_1(qa)}{J_1(ha)} J_1(h{r_{\!\perp}}), \label{Eq::zprofile}
\end{align}
\end{subequations}
where $u_0$ is set by the normalization condition, $\int d^2 \mathbf{r}_\perp n(r_\perp) | \mathbf{u}_\mu(\br_\perp)|^2=1$, $J_n$ are the Bessel functions of the first kind, $K_n$ are the modified Bessel functions of the second kind, $f'(x)$ indicates a derivative with respect to the argument $x$, and
\begin{align}
s = \frac{1/(q^2 a^2) + 1/(h^2 a^2)}{[J'_1(ha)/haJ_1(ha) + K'_1(qa)/qaK_1(qa)]}.
\end{align}
Of particular interest is the $z$-component, \erf{Eq::zprofile}, which can become appreciable. Note that the phase convention in Eqs. (\ref{Eq::QuasicircularModes}--\ref{Eq::ProfileFunctions}) has been chosen to emphasize properties of the quasilinear modes and differs from that of Le Kien \emph{et al.}, for instance in Ref. \cite{LeKien2014Propagation}.
Further details about the guided-mode fields inside the nanofiber ($r_\perp\leq a$), the unguided (radiation) modes, and the quantized form of both can be found in Refs. \cite{Sondergaard2001, Tong2004, Kien2004, LeKien2005a, Vetsch2010Opticala}.
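Continuing the numerical sketch above, the transverse profile functions outside the fiber can be evaluated directly from Eqs.~(\ref{Eq::ProfileFunctions}); the snippet below computes $u_{r_{\!\perp}}$ and $u_\phi$ up to the overall normalization $u_0$ (which, along with the longitudinal component, is omitted for brevity) at a placeholder trap radius.
\begin{verbatim}
# Sketch: evaluate the (unnormalized) radial profile functions outside the
# fiber, using a beta0 of the kind found in the previous sketch.
import numpy as np
from scipy.special import jv, jvp, kv, kvp

a, lam = 225e-9, 852e-9
n1, n2 = 1.4525, 1.0
k0 = 2*np.pi/lam
beta0 = 1.12*k0                 # placeholder; use the root found above

h = np.sqrt(n1**2*k0**2 - beta0**2)
q = np.sqrt(beta0**2 - n2**2*k0**2)
s = (1/(q*a)**2 + 1/(h*a)**2) / (jvp(1, h*a)/(h*a*jv(1, h*a))
                                 + kvp(1, q*a)/(q*a*kv(1, q*a)))

def u_r(r):    # radial component, r > a (up to u0)
    return (1 - s)*kv(0, q*r) + (1 + s)*kv(2, q*r)

def u_phi(r):  # azimuthal component, r > a (up to u0)
    return (1 - s)*kv(0, q*r) - (1 + s)*kv(2, q*r)

r_trap = 1.8*a                  # placeholder trap radius
print(u_r(r_trap), u_phi(r_trap), (u_r(r_trap)/u_phi(r_trap))**2)
\end{verbatim}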
%<*birefringencerates>
%===================APPENDIX: Photon scattering and optical pumping rates =====================%
\chapter{Photon scattering and optical pumping rates} \label{Appendix::Rates}
In this Appendix we give the explicit expressions for the photon scattering rates used in Sec.~\ref{Sec::QNDMeasurement} following the formalism given in~\cite{Deutsch2010a}. The total rate of photon scattering by an atom in the clock state $\ket{f,0}$ is
\begin{equation}\label{Eq::gammaf}
\gamma_{f}=- \frac{2}{\hbar} {\rm Im} \big[ \bra{f,0} \hat{h}_{\rm eff}\ket{f,0} \big] ,
\end{equation}
where the effective non-Hermitian light-shift Hamiltonian for one atom is
\begin{align}
\hat{h}_{\rm eff} = - \hat{\mathbf{E}}^{(-)}_{\rm in}(\mathbf{r}' ; t ) \cdot \poltens \cdot \hat{\mathbf{E}}^{(+)}_{\rm in}(\mathbf{r}' ;t )
\end{align}
as follows from \erf{Eq::LightShiftHam}, where $\charpol = -\frac{\sigma_0}{8\pi k_0}\frac{\Gamma}{\Delta_{ff'}+i\Gamma/2}$ is the complex polarizability and the irreducible tensor operator $ \hat{\tensor{\mbf{A}}}(f,f') $ is given in \erf{Eq::PolarizabilityIrrep}.
The rate of optical pumping between clock states $\ket{f,0} \rightarrow \ket{\tilde{f},0}$ is
\begin{equation}\label{Eq::gammaff}
\gamma_{f \rightarrow \tilde{f} }
=\sum_{q}\big| \bra{\tilde{f},0} \hat{W}_q^{\tilde{f}f} \ket{f,0} \big|^2,
\end{equation}
where $ \hat{W}_q^{\tilde{f}f} = \sum_{f'}\frac{\Omega/2}{\Delta_{f'\tilde{f}}+i\Gamma/2}(\mathbf{e}_q^*\cdot\hat{\mathbf{D}}_{\tilde{f} f'} )(\mathbf{e}_{\rm in}\cdot \hat{\mathbf{D}}^\dagger_{f'f} ) $ are the Lindblad jump operators for optical pumping between ground levels $ f\rightarrow \tilde{f} $~\cite{Deutsch2010a}.
Each jump operator $\hat{W}_q^{\tilde{f}f}$ is associated with absorption of the probe photon polarized along $ \mathbf{e}_{\rm in} $ followed by spontaneous emission of a photon with polarization $ \mathbf{e}_q $, where $q= \{0,\pm 1\}$ labels spherical basis elements for $\pi$ and $ \sigma_\pm$ transitions.
To find the dependence on the input field intensity, we define a characteristic photon scattering rate, $\gamma_s \equiv \frac{\Gamma\Omega^2}{4\Delta_{J_z}^2}= \frac{\sigma_0}{A_{\rm in}}\frac{\Gamma^2}{4 \Delta_{J_z}^2} \dot{N}_L $, with Rabi frequency $ \Omega=2\bra{j}|d|\ket{j'}\mathcal{E}^{(+)}_{\rm in}/\hbar $, reduced optical dipole matrix element $\bra{j}|d|\ket{j'}$, and field amplitude $ \mathcal{E}^{(+)}_{\rm in}=|\mathbf{E}_{\rm in}^{(+)}(\br')| $.
Eqs.~\eqref{Eq::gammaf} and~\eqref{Eq::gammaff} yield,
\begin{subequations}
\begin{align}
\gamma_f &=n_g\dot{N}_L \sum_{f'} \sigma (\Delta_{ff'} ) \mathbf{u}^*_\inp(\br'_\perp)\cdot \bra{f,0} \hat{\tensor{\mbf{A}}}(f,f') \ket{f,0} \cdot \mathbf{u}_\inp(\br'_\perp)\\
&= \dot{N}_L\frac{\pi \omega_0}{\hbar v_g} \!\!\sum_{f'\! , q}\! \frac{\Gamma |\bra{j'}|d|\ket{j}|^2}{\Delta_{ff'}^2+\Gamma^2/4} |o^{j'f'}_{jf}\! C^{f 0;1q}_{f' q}|^2
|\mathbf{e}_q \!\cdot\!
\left( \mathbf{u}_{H}^*(\br^\prime\!\!_\perp) \!+\!
\mathbf{u}_{V}^*(\br^\prime\!\!_\perp)\right) |^2\\
&\approx \gamma_s \sum_{f'} \frac{\Delta_{J_z}^2}{\Delta_{ff'}^2}\sum_q \big| o_{jf}^{j'f'}C_{f'q}^{f0;1 q} \big|^2 \mathbf{e}_q^* \cdot (\mathbf{e}_{\rm in}\mathbf{e}_{\rm in}^* )\cdot \mathbf{e}_q,
\end{align}
\end{subequations}
\begin{subequations}
\begin{align}
\gamma_{f \rightarrow \tilde{f}} &= \frac{\Gamma\Omega^2(\mathbf{r}')}{4}\sum_{f',q} \frac{ \left|o^{j'f'}_{j\tilde{f}}\! o^{j'f'}_{jf}\! C^{\tilde{f} 0;1q}_{f' q}\! C^{f 0;1q}_{f' q}\! \left(\mathbf{e}^*_{q}\! \cdot\! \mathbf{e}_{\rm in} \right)\right|^2}{ \Delta_{ff'}^2 \!\!+\! \Gamma^2/4}\\
&\approx \gamma_s \sum_{f'} \frac{\Delta_{J_z}^2}{\Delta_{ff'}^2}\sum_q \big| o_{j\tilde{f}}^{j'f'} o_{jf}^{j'f'}C_{f'q}^{\tilde{f}0;1 q}C_{f'q}^{f0;1q} \big|^2 \mathbf{e}_q^* \cdot (\mathbf{e}_{\rm in}\mathbf{e}_{\rm in}^* )\cdot \mathbf{e}_q,
\end{align}
\end{subequations}
where $ \sigma (\Delta_{ff'} ) = \sigma_0 \Gamma^2/4\Delta^2_{f' f}$ is the scattering cross section at the probe detuning, $ C_{f'q}^{f0;1 q}=\Braket{f'q}{f0;1q}$ are the Clebsch-Gordan coefficients, and
\begin{equation}
\big| o_{jf}^{j'f'} \big|^2=(2f'+1)(2j+1) \bigg\{
\begin{array}{ccc}
f' & 7/2 & j' \\
j & 1 & f
\end{array}
\bigg\}^2
\end{equation}
are the relative oscillator strengths, determined by the square of the relevant Wigner 6-$j$ symbol.
We have also expanded the jump operators as
\begin{align} \label{Eq::JumpOperators}
\hat{W}_q^{\tilde{f}f}
&= \sum_{f'} \sqrt{\Gamma}\frac{\Omega(\mathbf{r}')/2}{ \Delta_{f'f} + i\Gamma/2} o^{j'f'}_{j\tilde{f}} o^{j'f'}_{jf}\nonumber\\
& \sum_{\tilde{m}=0,m=0,m'=q}\!\!\!\!\!\!\!\!\!\! C^{\tilde{f} \tilde{m};1q}_{f' m'} \sum_{q'=q}C^{f m;1q'}_{f' m'} \left(\mathbf{e}^*_{q'} \cdot\mathbf{e}_{\rm in} \right) \op{\tilde{f},\tilde{m}}{f,m}\nn\\
&= \sum_{f'} \!\!\frac{\sqrt{\Gamma}\Omega(\mathbf{r}')/2}{ \Delta_{f'f} \!\!+\! i\Gamma/2} o^{j'f'}_{j\tilde{f}}\! o^{j'f'}_{jf}\! C^{\tilde{f} 0;1q}_{f' q}\! C^{f 0;1q}_{f' q}\! \left(\mathbf{e}^*_{q}\! \cdot\! \mathbf{e}_{\rm in} \right)\! \op{\tilde{f},0}{f,0}.
\end{align}
As a shorthand, we also write $ \gamma_{\tilde{f}f}=\gamma_{\tilde{f}\rightarrow f} $.
%</birefringencerates>
%<*birefringencemotioneq>
%===================APPENDIX: Equations of motion =====================%
\chapter[Equations of motion for the few-body moments]{Derivation of the equations of motion for the few-body moments} \label{Appendix::OpticalPumping}
In this Appendix we derive the equations of motion for the correlation functions that define the metrologically relevant squeezing parameter, $\xi^2 = N_A \Delta J_z^2/\expt{\hat{J}_x}^2$.
We seek the time evolution of the one- and two-body correlation functions:
\begin{subequations}
\begin{align}
&\expt{\hat{N}_C} = \sum_n \expt{\hat{\mathbbm{1}}^{(n)}_C} \\
&\expt{\hat{J}_x} = \frac{1}{2} \sum_n \expt{\hat{\sigma}_x^{(n)}} \\
&\expt{\hat{J}_z} = \frac{1}{2} \sum_n \expt{\hat{\sigma}_z^{(n)}} \\
&\expt{\hat{J}_z^2} = \frac{\expt{\hat{N}_C}}{4} +\frac{1}{4} \sum_{m \neq n} \expt{\hat{\sigma}_z^{(m)}\otimes \hat{\sigma}_z^{(n)}},
\end{align}
\end{subequations}
where $\hat{\mathbbm{1}}_C \equiv \op{\uparrow}{\uparrow} + \op{\downarrow}{\downarrow}$ is the single-atom projector onto the clock states.
To include optical pumping, we apply the following equations of motion. For a collective, single-body operator, $\hat{X} = \sum_n \hat{x}^{(n)}$, the evolution due to optical pumping is $d\expt{ \hat{X}}|_{\rm op} = \sum_n \Tr [\mathcal{D}_n[\hat{\rho}]\hat{X} ]dt = \sum_n \expt{\mathcal{D}_n\dg[\hat{x}^{(n)}]} dt$, where the map, which acts locally on atoms along the nanofiber, is given in \erf{Eq::OpticalPumpingMapSchr}.
Two-body microscopic operators decay by optical pumping according to \cite{Baragiola2014}
\begin{align} \label{Eq::TwoBodyDecay}
\frac{d}{dt} \expt{ \hat{x}^{(m)} \otimes \hat{y}^{(n)} } \Big|_{\rm op}= &\expt{ \mathcal{D}_m\dg[ \hat{x}^{(m)}] \otimes \hat{y}^{(n)} } + \expt{ \hat{x}^{(m)}\otimes \mathcal{D}_n\dg[ \hat{y}^{(n)}] },
\end{align}
where the superscripts refer to the $m^{th}$ and $n^{th}$ atoms.
\section{Summary of results}
Since the derivation is rather long, we first summarize the main results below, and then provide more details in the next section.
Applying the adjoint map to the single-atom operators yields
\begin{subequations} \label{Eq::OperatorMap}
\begin{align}
\mathcal{D}\dg[\hat{\mathbbm{1}}_C] & = - \gamma_{00} \hat{\mathbbm{1}}_C +\gamma_{03} \hat{\sigma}_z \label {Eq::Idecay} \\
\mathcal{D}\dg[\hat{\sigma}_z] & =- \gamma_{33} \hat{\sigma}_z + \gamma_{30} \hat{\mathbbm{1}}_C
\label{Eq::zdecay} \\
\mathcal{D}\dg[\hat{\sigma}_x] & = - \gamma_{11} \hat{\sigma}_x\label{Eq::xdecay},
\end{align}
\end{subequations}
with rates
\begin{subequations} \label{Eq::DecayRates}
\begin{align}
\gamma_{00}
& = \frac{\gamma_{\uparrow}+\gamma_{\downarrow} - \gammauu-\gammaud -\gammadd-\gammadu}{2} \label{Eq::lrate} \\
\gamma_{03}
& = \frac{-\gamma_{\uparrow}+\gamma_{\downarrow} +\gammauu + \gammaud - \gammadd - \gammadu }{2}\\
\gamma_{33}
& = \frac{\gamma_{\uparrow}+\gamma_{\downarrow} - \gammauu+\gammaud -\gammadd+\gammadu}{2}\\
\gamma_{30}
& = \frac{-\gamma_{\uparrow} + \gamma_{\downarrow} + \gammauu - \gammaud - \gammadd + \gammadu }{2} \\
\gamma_{11}
& = \frac{\gamma_{\uparrow}+\gamma_{\downarrow}}{2}. \label{Eq::frate}
\end{align}
\end{subequations}
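As a bookkeeping aid, the rate combinations in \erf{Eq::DecayRates} can be evaluated with a small helper like the Python sketch below; the input values are placeholders to be supplied from Eqs.~\eqref{Eq::gammaf} and~\eqref{Eq::gammaff}.
\begin{verbatim}
# Sketch: combine the microscopic scattering/pumping rates into the
# coefficients of Eq. (OperatorMap).  Input values are placeholders.
def decay_rates(g_up, g_dn, g_uu, g_ud, g_du, g_dd):
    """g_up, g_dn: total scattering rates for |up>, |down>;
    g_uu, g_ud, g_du, g_dd: pumping rates between the clock states
    (in the document's gamma_{ab} notation)."""
    g00 = (g_up + g_dn - g_uu - g_ud - g_dd - g_du)/2   # atom loss
    g03 = (-g_up + g_dn + g_uu + g_ud - g_dd - g_du)/2
    g33 = (g_up + g_dn - g_uu + g_ud - g_dd + g_du)/2   # sigma_z decay
    g30 = (-g_up + g_dn + g_uu - g_ud - g_dd + g_du)/2
    g11 = (g_up + g_dn)/2                               # coherence decay
    return g00, g03, g33, g30, g11

# Example with placeholder rates in units of the characteristic rate gamma_s:
print(decay_rates(1.0, 1.0, 0.25, 0.05, 0.05, 0.25))
\end{verbatim}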
Given Eqs. (\ref{Eq::TwoBodyDecay}, \ref{Eq::OperatorMap}), the equations for the two-body spin correlations, \erf{Eq::TwoBodySpinDecay}, follow. Similarly, one can derive equations of motion for the remaining two-body microscopic operator correlations $ \expt{\hat{\mathbbm{1}}^{(m)}_C \otimes \hat{\mathbbm{1}}^{(n)}_C} $ and $ \expt{\hat{\mathbbm{1}}^{(m)}_C \otimes \hat{\sigma}_z^{(n)} + \hat{\sigma}_z^{(m)} \otimes \hat{\mathbbm{1}}^{(n)}_C } $ when $ m\neq n $ and from these, the macroscopic operator expectation values $ \expt{\hat{J}_z^2} $, $ \expt{\hat{N}_C^2} $, and $ \expt{\hat{N}_C\hat{J}_z} $. As we have examined numerically, on the time scale of the QND measurement, the correlation between atom number in the clock state subspace and the pseudospin moment is weak, and one can thus treat the atom number operator in the clock state subspace as a $c$-number.
We therefore set $ \expt{\hat{N}_C\hat{J}_z}-\expt{\hat{N}_C}\expt{\hat{J}_z} = 0 $ and $ \expt{\hat{N}_C^2} - \expt{\hat{N}_C}^2 = 0 $ and define $ N_C\equiv \expt{\hat{N}_C}$.
The equations of motion for the moments of $\jz$ are now found from the SME, \erf{Eq::SME},
\begin{subequations} \label{Eq::J3MomentEquations}
\begin{align}
d \expt{\jz} =& s \sqrt{\kappa} \varz \, dW - \gamma_{33} \expt{\jz}dt + \smallfrac{1}{2} \gamma_{30} N_C dt , \\
d \expt{\jz^2} =& 2 s\sqrt{\kappa} \expt{\jz}\Delta J_z^2 \, dW - 2 \gamma_{33} \expt{\jz^2}dt + \smallfrac{1}{4} \big( 2 \gamma_{33}-\gamma_{00}\big) N_C dt \\
&+ \gamma_{30} \expt{\jz} N_C dt + \smallfrac{1}{2}\left(\gamma_{03} -2 \gamma_{30}\right) \expt{\jz} dt. \nonumber
\end{align}
\end{subequations}
The stochastic term in $d\expt{\jz^2}$ was simplified by assuming Gaussian statistics \cite{Jacobs2006}, $\expt{\jz^3} = 3\expt{\jz^2}\expt{\jz}- 2\expt{\jz}^3$.
Finally, the It\^{o} calculus governing the stochastically evolving moments requires that differentials be taken to second order, and the evolution of the variance is given by $d \varz = d \expt{\jz^2} - 2 \expt{\jz} d \expt{\jz} - ( d \expt{\jz} )^2$.
The equation of motion for the conditional variance, \erf{Eq::varJz}, then follows from Eqs. \eqref{Eq::J3MomentEquations}.
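A minimal Euler--Maruyama sketch of how Eqs.~\eqref{Eq::J3MomentEquations} can be integrated is given below. The rates, the measurement strength $\kappa$, the sign $s=1$, and the simple exponential decay used for $N_C$ and $\expt{\jx}$ are placeholder assumptions for illustration; they are not the parameters of the simulations reported in the main text. Updating $\expt{\jz}$ and $\expt{\jz^2}$ and then recomputing the variance automatically includes the second-order It\^{o} term noted above.
\begin{verbatim}
# Sketch: Euler-Maruyama integration of the conditional moment equations
# for <Jz>, <Jz^2>, with N_C and <Jx> treated as decaying c-numbers.
# All rates are placeholders (units of the characteristic scattering rate).
import numpy as np

rng = np.random.default_rng(0)
N_A   = 2500
kappa = 0.05                        # measurement strength (placeholder)
g00, g03, g33, g30, g11 = 0.6, 0.0, 0.7, 0.0, 1.0   # placeholder rates
s, dt, n_steps = 1.0, 1e-3, 4000

Jz, Jz2 = 0.0, N_A/4                # coherent-spin-state initial moments
Nc, Jx  = float(N_A), N_A/2
xi2 = []

for _ in range(n_steps):
    var = Jz2 - Jz**2
    dW  = np.sqrt(dt)*rng.standard_normal()
    dJz  = s*np.sqrt(kappa)*var*dW - g33*Jz*dt + 0.5*g30*Nc*dt
    dJz2 = (2*s*np.sqrt(kappa)*Jz*var*dW - 2*g33*Jz2*dt
            + 0.25*(2*g33 - g00)*Nc*dt + g30*Jz*Nc*dt
            + 0.5*(g03 - 2*g30)*Jz*dt)
    Jz, Jz2 = Jz + dJz, Jz2 + dJz2
    Nc *= np.exp(-g00*dt)           # clock-state population (c-number)
    Jx *= np.exp(-g11*dt)           # mean pseudospin
    xi2.append(N_A*(Jz2 - Jz**2)/Jx**2)

print("peak squeezing [dB]:", -10*np.log10(min(xi2)))
\end{verbatim}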
\section{More details of the derivation}
In the \emph{clock-state}\index{state!clock state} case, we define $ \ket{\uparrow} = \ket{f=4,m=0} $ and $ \ket{\downarrow}=\ket{f=3,m=0} $ for each atom $ i $.
The following identities are useful in the calculations below:
\begin{align}
\ket{f,0}\bra{f,0} &= \frac{1}{2}\left(\hat{\mathbbm{1}}_C+(-1)^f\hat{\sigma}_z \right)\\
\op{\uparrow}{\downarrow} &= \hat{\sigma}_+=\frac{1}{2}(\hat{\sigma}_x+i\hat{\sigma}_y)\\
\op{\downarrow}{\uparrow} &= \hat{\sigma}_-=\frac{1}{2}(\hat{\sigma}_x-i\hat{\sigma}_y).
\end{align}
From the equations of motion (the physical meaning of the terms will be discussed later), we define
\begin{align}
\gamma_s^+ &= \frac{\gamma_{\uparrow}+\gamma_{\downarrow} }{2}\\
\gamma_s^- &= \frac{\gamma_{\uparrow}-\gamma_{\downarrow} }{2}\\
\Gamma_{\rm{loss},\uparrow} &\equiv \gamma_\uparrow - (\gamma_{44}+\gamma_{43})=\gamma_\uparrow - \big(\gammauu + \gammadu \big), \\
\Gamma_{\rm{loss},\downarrow} &\equiv \gamma_\downarrow - (\gamma_{33}+\gamma_{34})=\gamma_\downarrow - \big(\gammadd + \gammaud \big),\\
\Gamma_{\rm loss} &\equiv \frac{\Gamma_{\rm{loss},\uparrow} +\Gamma_{\rm{loss},\downarrow}}{2}=-\left[\frac{1}{2}\left(\gamma_{44}+\gamma_{33}+\gamma_{43}+\gamma_{34} \right)-\gamma_s^+\right]=\gamma_{00}, \\
\Gamma_{\rm pol} &\equiv -\frac{\Gamma_{\rm{loss},\uparrow} - \Gamma_{\rm{loss},\downarrow}}{2}=\left[\frac{1}{2}\left(\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34}\right) -\gamma_s^-\right]=\gamma_{30},\\
\Gamma_1 &\equiv \frac{\gammauu + \gammadd + \gammaud + \gammadu}{2}=\frac{\gamma_{44}+\gamma_{33}+\gamma_{43}+\gamma_{34}}{2},\\
\Gamma_3 &\equiv \gammaud+\gammadu=\gamma_{43}+\gamma_{34},\\
\Gamma_a &= \Gamma_{\rm pol}+\gamma_{34}-\gamma_{43}= \left[\frac{1}{2}\left(\gamma_{44}-\gamma_{33}-\gamma_{43}+\gamma_{34}\right) -\gamma_s^-\right]=\gamma_{03},\\
\Gamma_b &= \Gamma_{\rm loss}+\gamma_{34}+\gamma_{43}=-\left[\frac{1}{2}\left(\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34} \right)-\gamma_s^+\right]=\gamma_{33} .
\end{align}
$ \Gamma_1 $ and $ \Gamma_3 $ characterize the depolarization of $ \expect{\hat{J}_x} $ and $ \expect{\hat{J}_z} $, respectively, beyond the loss contribution; the total depolarization rates are $ \Gamma_{\rm loss}+\Gamma_1=\gamma_s^+ $ for $ \expect{\hat{J}_x} $ and $ \Gamma_{\rm loss}+\Gamma_3=\Gamma_b $ for $ \expect{\hat{J}_z} $.
The loss part of the effective Hamiltonian at a magic frequency is given by
\begin{align}
\hat{h}_{\rm eff} &= -i\hbar \left( \frac{\gamma_{\uparrow}}{2} \op{\uparrow}{\uparrow} + \frac{\gamma_{\downarrow}}{2} \op{\downarrow}{\downarrow} \right)\\
&= -\frac{i\hbar}{2} \left[ \frac{ \gamma_{\uparrow}}{2} \left(\hat{I}+\hat{\sigma}_z \right) + \frac{ \gamma_{\downarrow}}{2} \left(\hat{I}-\hat{\sigma}_z \right) \right]\\
&= -\frac{i\hbar}{2} \left( \frac{\gamma_{\uparrow}+\gamma_{\downarrow}}{2} \hat{I} + \frac{\gamma_{\uparrow}-\gamma_{\downarrow}}{2} \hat{\sigma}_z \right)\\
&= -\frac{i\hbar}{2}\left(\gamma_s^+\hat{I} + \gamma_s^-\hat{\sigma}_z \right).
\end{align}
We can calculate the fluctuation of the collective spin operators $ \hat{\mathbf{J}}=\sum_i^{N_A}\hat{\boldsymbol{\sigma}}^{(i)}/2 $ using the master equation.
For instance,
\begin{align}
\dt{\expect{\hat{J}_z}} &= \frac{1}{2}\sum_i^{N_A} \dt{\expect{\hat{\sigma}_z^{(i)}}}\\
&= \left[\frac{1}{2}\left(\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34} \right)-\gamma_s^+\right]\expect{\hat{J}_z} \nonumber \\
&\quad + \left[\left(\gamma_{44} -\gamma_{33}+\gamma_{43}-\gamma_{34}\right)- 2\gamma_s^-\right] \frac{\expect{\hat{N}_C}}{4}\\
&= -\Gamma_b \expect{\hat{J}_z} +\frac{\Gamma_{\rm pol}}{2}\expect{\hat{N}_C}.\label{eq:expJzeq}
\end{align}
The atom-number operator on the clock-state subspace, $ \hat{N}_C=\sum_i^{N_A}\hat{\mathbbm{1}}_C^{(i)} $, evolves as
\begin{align}
\dt{\expect{\hat{N}_C}} &= \sum_i^{N_A} \dt{ \expect{\hat{\mathbbm{1}}_C^{(i)}}} =\sum_i^{N_A} \expect{\mathcal{D}^{(i)\dagger}\left[\hat{\mathbbm{1}}_C^{(i)}\right] }\\
&= \left[\frac{1}{2}(\gamma_{44}+\gamma_{33}+\gamma_{43}+\gamma_{34}) -\gamma_s^+\right]\expect{\hat{N}_C} \nonumber \\
&\quad + \left[(\gamma_{44}-\gamma_{33}-\gamma_{43}+\gamma_{34}) -2\gamma_s^- \right] \expect{\hat{J}_z}\\
&= -\Gamma_{\rm loss}\expect{\hat{N}_C} +2\Gamma_a \expect{\hat{J}_z}. \label{Eq:NAt}
\end{align}
Similarly,
\begin{align}
\dt{\expect{\hat{J}_x}} &= \frac{1}{2} \sum_i^{N_A} \expect{\mathcal{D}^{(i)\dagger}\left[\hat{\sigma}_x^{(i)} \right] }\\
&= -\gamma_s^+\expect{\hat{J}_x}\\
&= -\left(\Gamma_{\rm loss} +\Gamma_1 \right)\expect{\hat{J}_x} .
\end{align}
%We know the solution for the differential equation
%\begin{align}
%\dt{x} &= -A x+B
%\end{align}
%is
%\begin{align}
%x(t) &= x(0)e^{-At} + \frac{B}{A} (1-e^{-At}).
%\end{align}
%Similarly, one can solve the equations for $ \expect{\hat{J}_x} $ and $ \expect{\hat{N}_C} $ with $ \expect{\hat{J}_z(t)}\approx 0 $ to give
%\begin{align}
%\expect{\hat{J}_x(t)} &= \expect{\hat{J}_x(0)}e^{-\gamma_s^+ t}+ \frac{\gamma_+}{4\gamma_s^+}N_A^2\left(1-e^{-\gamma_s^+ t} \right) \nonumber\\
%&= \frac{N_A}{2} e^{-\gamma_s^+ t}+ \frac{\gamma_+}{4\gamma_s^+}N_A^2\left(1-e^{-\gamma_s^+ t} \right)\\
%\expect{\hat{N}_C(t)} &= N_A e^{-\gamma_s^+ t}+ \frac{\gamma_+}{2\gamma_s^+}N_A^2\left(1-e^{-\gamma_s^+ t} \right).
%\end{align}
We also use the identity $ \hat{J}_z^2=\frac{1}{4} \sum_{i,j} \hat{\sigma}_z^{(i)}\hat{\sigma}_z^{(j)}=\frac{1}{4} \sum_{i} \left(\hat{\sigma}_z^{(i)}\right)^2 + \frac{1}{4} \sum_{i\neq j} \hat{\sigma}_z^{(i)}\hat{\sigma}_z^{(j)} $, which gives
\begin{align}
\dt{\expect{\hat{J}_z^2}} &= \frac{1}{4}\dt{\expect{\hat{N}_C}} \!+\! \frac{1}{4}\sum_{i\neq j}^{N_A}\dt{}\expect{\hat{\sigma}_z^{(i)}\hat{\sigma}_z^{(j)}}\\
&= \frac{1}{4}\left[ \frac{1}{2} (\gamma_{44}+\gamma_{33}+\gamma_{43}+\gamma_{34}) -\gamma_s^+\right]\expect{\hat{N}_C} \nonumber\\
&\quad \!+\! \frac{1}{4} \left[ (\gamma_{44}-\gamma_{33}-\gamma_{43}+\gamma_{34}) -2\gamma_s^-\right] \expect{\hat{J}_z} \nonumber\\
&\quad \!+\! \frac{1}{4}\sum_{i,j\neq i}^{N_A} \left[ (\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34}) -2\gamma_s^+\right]\expect{\hat{\sigma}_z^{(i)} \hat{\sigma}_z^{(j)}} \nonumber\\
&\quad \!+\! \frac{1}{4}\sum_{i,j\neq i}^{N_A} \left[ (\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34}) -2\gamma_s^-\right]\expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(i)}\hat{\sigma}_z^{(j)} ) }\\
&= \frac{1}{4}\left[ \frac{1}{2} (\gamma_{44}+\gamma_{33}+\gamma_{43}+\gamma_{34}) -\gamma_s^+\right]\expect{\hat{N}_C} \nonumber\\
&\quad \!+\! \frac{1}{4} \left[ (\gamma_{44}-\gamma_{33}-\gamma_{43}+\gamma_{34}) -2\gamma_s^-\right] \expect{\hat{J}_z} \nonumber\\
&\quad \!+\! \frac{N_A(N_A-1)}{4} \left[ (\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34}) -2\gamma_s^+\right]\expect{\hat{\sigma}_z^{(1)} \hat{\sigma}_z^{(2)}} \nonumber\\
&\quad \!+\! \frac{N_A(N_A-1)}{4} \left[ (\gamma_{44} \!-\! \gamma_{33}\!+\!\gamma_{43}\!-\!\gamma_{34}) \!-\! 2\gamma_s^-\right]\expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(1)}\hat{\sigma}_z^{(2)} ) }.\label{eq:expJz2eq}
\end{align}
Note that we explicitly use $ \mathrm{Sym}(\hat{A}^{(1)}\hat{B}^{(2)})= \left(\hat{A}^{(1)}\hat{B}^{(2)}+ \hat{B}^{(1)}\hat{A}^{(2)}\right)/2$ to denote symmetrized two-body operators in this appendix. In the main text we assume the atoms have exchange symmetry and therefore suppress the $ \mathrm{Sym}(\cdot) $ symbol for two-body operators.
To evaluate the equation above, we also need the equations of motion for several other two-body expectation values:
\begin{align}
\left.\dt{}\expect{\hat{\sigma}_z^{(i)} \hat{\sigma}_z^{(j)}}\right|_{i\neq j} &= \dt{}\expect{\hat{\sigma}_z^{(1)} \hat{\sigma}_z^{(2)}}\\
&= \left[ (\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34}) -2\gamma_s^+\right]\expect{\hat{\sigma}_z^{(1)} \hat{\sigma}_z^{(2)}}\nonumber\\
&\quad \!\!+\! \left[ (\gamma_{44}\!-\!\gamma_{33}\!+\!\gamma_{43}\!-\!\gamma_{34}) \!-\! 2\gamma_s^-\right]\! \expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(\!1\!)}\!\hat{\sigma}_z^{(2)} )}\\
&=-2\Gamma_b \expect{\hat{\sigma}_z^{(1)} \hat{\sigma}_z^{(2)}} \!+\! 2\Gamma_{\rm pol} \expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(\!1\!)}\!\hat{\sigma}_z^{(2)} )}\\
\left.\dt{}\expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(i)}\hat{\sigma}_z^{(j)} ) }\right|_{i\neq j} &\equiv \dt{}\expect{\frac{1}{2}\left( \hat{\mathbbm{1}}_C^{(i)}\hat{\sigma}_z^{(j)}+\hat{\sigma}_z^{(i)}\hat{\mathbbm{1}}_C^{(j)} \right) }\\
&= \dt{}\expect{\mathrm{Sym}( \hat{\mathbbm{1}}_C^{(1)}\hat{\sigma}_z^{(2)} ) }\\
&= \left[(\gamma_{44}+\gamma_{33})-2\gamma_s^+ \right] \expect{\mathrm{Sym}( \hat{\mathbbm{1}}_C^{(1)}\hat{\sigma}_z^{(2)} ) }\nonumber\\
&\quad + \left[ \frac{1}{2}(\gamma_{44}-\gamma_{33}-\gamma_{43}+\gamma_{34}) -\gamma_s^-\right]\expect{\hat{\sigma}_z^{(1)} \hat{\sigma}_z^{(2)}} \nonumber\\
&\quad + \left[ \frac{1}{2}(\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34}) -\gamma_s^-\right]\expect{\hat{\mathbbm{1}}_C^{(1)} \hat{\mathbbm{1}}_C^{(2)}}\\
&= -(\Gamma_{\rm loss}+\Gamma_b )\expect{\mathrm{Sym}( \hat{\mathbbm{1}}_C^{(1)}\hat{\sigma}_z^{(2)} ) } \nonumber\\
&\quad +\Gamma_a \expect{\hat{\sigma}_z^{(1)} \hat{\sigma}_z^{(2)}} + \Gamma_{\rm pol}\expect{\hat{\mathbbm{1}}_C^{(1)} \hat{\mathbbm{1}}_C^{(2)}}\\
\left.\dt{}\expect{\hat{\mathbbm{1}}_C^{(i)} \hat{\mathbbm{1}}_C^{(j)}}\right|_{i\neq j} &= \dt{}\expect{\hat{\mathbbm{1}}_C^{(1)} \hat{\mathbbm{1}}_C^{(2)}}\\
&= \left[ (\gamma_{44}+\gamma_{33}+\gamma_{43}+\gamma_{34}) -2\gamma_s^+\right]\expect{\hat{\mathbbm{1}}_C^{(1)} \hat{\mathbbm{1}}_C^{(2)}}\nonumber\\
&\quad +\left[ (\gamma_{44}-\gamma_{33}-\gamma_{43}+\gamma_{34}) -2\gamma_s^-\right]\expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(1)} \hat{\sigma}_z^{(2)})} \nonumber\\
&= -2\Gamma_{\rm loss}\expect{\hat{\mathbbm{1}}_C^{(1)} \hat{\mathbbm{1}}_C^{(2)}}+2\Gamma_a \expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(1)} \hat{\sigma}_z^{(2)})}.
\end{align}
Note, with $ i\neq j $, $ \sum_{i,j\neq i}\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(i)}\hat{\sigma}_z^{(j)} )= \sum_{i,j\neq i}\hat{\mathbbm{1}}_C^{(i)}\hat{\sigma}_z^{(j)}$.
Using the relationship that
\begin{align}
\expect{\hat{J}_z^2} &= \sum_{m,n}^{N_A} \expect{\hat{j}_z^{(m)}\hat{j}_z^{(n)}} = \frac{\expect{\hat{N}_C}}{4} + \sum_{m,n\neq m}^{N_A} \expect{\hat{j}_z^{(m)}\hat{j}_z^{(n)}}\\
&= \frac{\expect{\hat{N}_C}}{4} + \frac{N_A(N_A-1)}{4} \expect{\hat{\sigma}_z^{(1)} \hat{\sigma}_z^{(2)}}\\
\expect{\hat{N}_C \hat{J}_z} &= \expect{\hat{J}_z } + \frac{N_A(N_A-1)}{2} \expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(1)}\hat{\sigma}_z^{(2)} ) }\\
\expect{\hat{N}_C^2} &= \expect{\hat{N}_C} + N_A(N_A-1)\expect{\hat{\mathbbm{1}}_C^{(1)}\hat{\mathbbm{1}}_C^{(2)}},
\end{align}
Eq.~\eqref{eq:expJz2eq} can be rewritten as
\begin{align}
\dt{\expect{\hat{J}_z^2}} &=\left[ (\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34}) -2\gamma_s^+\right]\expect{\hat{J}_z^2} \nonumber\\
&\quad \!+\! \left[ \frac{1}{2}(\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34}) -\gamma_s^-\right]\expect{\hat{N}_{C}\hat{J}_z } \nonumber\\
&\quad \!+\! \frac{1}{4}\left[ -\frac{1}{2} (\gamma_{44}+\gamma_{33} - 3\gamma_{43} -3\gamma_{34}) +\gamma_s^+\right]\expect{\hat{N}_C} \nonumber\\
&\quad \!-\! \frac{1}{4} \left[ (\gamma_{44}-\gamma_{33}+3\gamma_{43}-3\gamma_{34}) -2\gamma_s^-\right] \expect{\hat{J}_z}\\
&= -2\Gamma_b\expect{\hat{J}_z^2}+\Gamma_{\rm pol}\expect{\hat{N}_{C}\hat{J}_z }\nonumber\\
&\quad +\left(2\Gamma_b-\Gamma_{\rm loss} \right)\frac{\expect{\hat{N}_C}}{4} +\left(\frac{\Gamma_a}{2}-\Gamma_{\rm pol} \right) \expect{\hat{J}_z},\label{eq:expJz2eq_op}
\end{align}
together with
\begin{align}
\dt{\expect{\hat{N}_C\hat{J}_z}} &=\dt{} \expect{\hat{J}_z } + \frac{N_A(N_A-1)}{2} \dt{}\expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(1)}\hat{\sigma}_z^{(2)} ) }\\
&= -\Gamma_b \expect{\hat{J}_z} +\frac{\Gamma_{\rm pol}}{2}\expect{\hat{N}_C} \nonumber\\
&\quad -\frac{N_A(N_A-1)}{2}(\Gamma_{\rm loss}+\Gamma_b) \expect{\mathrm{Sym}( \hat{\mathbbm{1}}_C^{(1)}\hat{\sigma}_z^{(2)} ) }\nonumber\\
&\quad +\frac{N_A(N_A-1)}{2}\Gamma_a\expect{\hat{\sigma}_z^{(1)} \hat{\sigma}_z^{(2)}} \!+\! \frac{N_A(N_A-1)}{2}\Gamma_{\rm pol} \expect{\hat{\mathbbm{1}}_C^{(\!1\!)}\!\hat{\mathbbm{1}}_C^{(2)} }\\
&= -\Gamma_b \expect{\hat{J}_z} +\frac{\Gamma_{\rm pol}}{2}\expect{\hat{N}_C} \nonumber\\
&\quad -(\Gamma_{\rm loss}+\Gamma_b) \left( \expect{ \hat{N}_C\hat{J}_z }-\expect{\hat{J}_z}\right)\nonumber\\
&\quad+2\Gamma_a\left(\expect{\hat{J}_z^2 }-\frac{\expect{\hat{N}_C}}{4}\right)+\frac{\Gamma_{\rm pol}}{2}\left(\expect{\hat{N}_C^2}-\expect{\hat{N}_C}\right)\\
&=-\left(\Gamma_{\rm loss}+\Gamma_b\right)\expect{\hat{N}_C \hat{J}_z} +2\Gamma_a\expect{\hat{J}_z^2}+\frac{\Gamma_{\rm pol}}{2}\expect{\hat{N}_C^2 }\nonumber\\
&\quad+\Gamma_{\rm loss} \expect{\hat{J}_z} -\frac{\Gamma_a}{2}\expect{\hat{N}_C}
\end{align}
\begin{align}
\dt{\expect{\hat{N}_C^2}} &= \dt{}\expect{\hat{N}_C} + N_A(N_A-1)\dt{}\expect{\hat{\mathbbm{1}}_C^{(1)}\hat{\mathbbm{1}}_C^{(2)}}\\
&= -\Gamma_{\rm loss}\expect{\hat{N}_C} +2\Gamma_a \expect{\hat{J}_z}\nonumber\\
&\quad -2N_A(N_A-1)\Gamma_{\rm loss}\expect{\hat{\mathbbm{1}}_C^{(1)} \hat{\mathbbm{1}}_C^{(2)}} +2N_A(N_A-1)\Gamma_a\expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(1)} \hat{\sigma}_z^{(2)})}\nonumber\\
&= -\Gamma_{\rm loss}\expect{\hat{N}_C} +2\Gamma_a \expect{\hat{J}_z}\nonumber\\
&\quad - 2\Gamma_{\rm loss} \left(\expect{\hat{N}_C^2}-\expect{\hat{N}_C} \right) + 4\Gamma_a \left(\expect{\hat{N}_C\hat{J}_z} -\expect{\hat{J}_z} \right)\\
&= 4\Gamma_a \expect{\hat{N}_C\hat{J}_z} -2\Gamma_{\rm loss}\expect{\hat{N}_C^2} -2\Gamma_a \expect{\hat{J}_z} +\Gamma_{\rm loss}\expect{\hat{N}_C} .
\end{align}
%\textcolor{red}{To be included: the measurement effect in generating spin squeezing or the $ \Delta J_z^2 $ term. A quick fix is given below:}
Eqs.~\eqref{eq:expJz2eq} and~\eqref{eq:expJzeq} describe only the optical-pumping dynamics of the spin system. The measurement adds a $ -\kappa \left(\Delta J_z^2 \right)^2 $ term to the differential equation for $ \Delta J_z^2 $. We therefore use the relationship
\begin{align}
\mathrm{d} \Delta J_z^2 &= \mathrm{d} \expect{\hat{J}_z^2} - 2\expect{\hat{J}_z}\mathrm{d}\expect{\hat{J}_z},
\end{align}
and hence, considering only the optical pumping effect, Eqs.~\eqref{eq:expJz2eq} and~\eqref{eq:expJzeq} yield
\begin{align}
\!\!\!\! \dt{\Delta J_z^2} &= \dt{\expect{\hat{J}_z^2}} - 2\expect{\hat{J}_z}\dt{\expect{\hat{J}_z}}\\
&= \left[(\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34} ) -2\gamma_s^+\right] \Delta J_z^2\nonumber\\
&\quad -\frac{1}{4}\left[\frac{1}{2}(\gamma_{44}+\gamma_{33}-3\gamma_{43}-3\gamma_{34} ) -\gamma_s^+\right] \expect{\hat{N}_C }\nonumber\\
&\quad +\frac{1}{4}\left[(\gamma_{44}-\gamma_{33}-\gamma_{43}+\gamma_{34} ) -2\gamma_s^-\right] \expect{\hat{J}_z }\nonumber\\
&\quad -\left[\frac{1}{2}(\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34} ) -\gamma_s^-\right] \expect{\hat{J}_z }\expect{\hat{N}_C}\nonumber\\
&\quad +\frac{N_A(N_A-1)}{4} \left[ (\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34}) -2\gamma_s^-\right]\expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(1)}\hat{\sigma}_z^{(2)} ) }.
\end{align}
Now with the measurement effect correction, we have
\begin{align}
\dt{\Delta J_z^2} &= -\kappa \left(\Delta J_z^2 \right)^2 +\left[(\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34} ) -2\gamma_s^+\right] \Delta J_z^2\nonumber\\
&\quad -\frac{1}{4}\left[\frac{1}{2}(\gamma_{44}+\gamma_{33}-3\gamma_{43}-3\gamma_{34} ) -\gamma_s^+\right] \expect{\hat{N}_C }\nonumber\\
&\quad +\frac{1}{4}\left[(\gamma_{44}-\gamma_{33}-\gamma_{43}+\gamma_{34} ) -2\gamma_s^-\right] \expect{\hat{J}_z }\nonumber\\
&\quad -\left[\frac{1}{2}(\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34} ) -\gamma_s^-\right] \expect{\hat{J}_z }\expect{\hat{N}_C}\nonumber\\
&\quad +\frac{N_A(N_A-1)}{4} \left[ (\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34}) -2\gamma_s^-\right]\expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(1)}\hat{\sigma}_z^{(2)} ) }\\
&= -\kappa \left(\Delta J_z^2 \right)^2 +\left[(\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34} ) -2\gamma_s^+\right] \Delta J_z^2\nonumber\\
&\quad -\frac{1}{4}\left[\frac{1}{2}(\gamma_{44}+\gamma_{33}-3\gamma_{43}-3\gamma_{34} ) -\gamma_s^+\right] \expect{\hat{N}_C }\nonumber\\
&\quad +\frac{1}{4}\left[(\gamma_{44}-\gamma_{33}-\gamma_{43}+\gamma_{34} ) -2\gamma_s^-\right] \expect{\hat{J}_z }\nonumber\\
&\quad +\left[\frac{1}{2}(\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34} ) -\gamma_s^-\right] \expect{\Delta J_z \Delta N_C}\\
&=-\kappa \left(\Delta J_z^2 \right)^2 +\left[(\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34} ) -2\gamma_s^+\right] \Delta J_z^2\nonumber\\
&\quad -\left[\frac{1}{2}(\gamma_{44}+\gamma_{33}-3\gamma_{43}-3\gamma_{34} ) -\gamma_s^+\right] \frac{\expect{\hat{N}_C }}{4}\nonumber\\
&\quad -\frac{1}{4}\left[(\gamma_{44}-\gamma_{33}+3\gamma_{43}-3\gamma_{34} ) -2\gamma_s^-\right] \expect{\hat{J}_z }\nonumber\\
&\quad +\left[\frac{1}{2}(\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34} ) -\gamma_s^-\right] \left(\expect{\hat{N}_C \hat{J}_z}-\expect{\hat{N}_C}\expect{\hat{J}_z}\right)\\
&= -\kappa(\Delta J_z^2)^2 -2\Gamma_b\Delta J_z^2 + \left(2\Gamma_b-\Gamma_{\rm loss}\right)\frac{\expect{\hat{N}_C}}{4} +\left(\frac{\Gamma_a}{2}-\Gamma_{\rm pol} \right)\expect{\hat{J}_z}\nonumber\\
&\quad +\Gamma_{\rm pol}\left( \expect{\hat{N}_C\hat{J}_z}-\expect{\hat{N}_C}\expect{\hat{J}_z} \right) .
\end{align}
We have defined the two-body operator covariance $ \expect{\Delta J_z \Delta N_C}=\expect{\Delta N_C\Delta J_z}\equiv \frac{N_A(N_A-1)}{2}\expect{\mathrm{Sym}(\hat{\mathbbm{1}}_C^{(1)}\hat{\sigma}_z^{(2)} ) } - \expect{\hat{J}_z }\expect{\hat{N}_C}$. Along this line, we can define the following two-body operator covariances,
\begin{align}
\expect{\Delta N_C \Delta N_C} &\equiv N_A(N_A-1)\expect{\hat{\mathbbm{1}}_C^{(1)}\hat{\mathbbm{1}}_C^{(2)} } - \expect{\hat{N}_C}^2\\
\expect{\Delta J_z \Delta J_z} &\equiv \frac{N_A(N_A-1)}{4}\expect{\hat{\sigma}_z^{(1)}\hat{\sigma}_z^{(2)} } - \expect{\hat{J}_z }^2.
\end{align}
Notice that the two-body covariances $ \expect{\Delta J_z \Delta J_z} $ and $ \expect{\Delta N_C \Delta N_C} $ may be related to the collective spin variances $ \Delta J_z^2 $ and $\Delta N_C^2 $ by
\begin{align}
\Delta J_z^2 &= \expect{\Delta J_z \Delta J_z} + \frac{1}{4} \expect{\hat{N}_C}\\
\Delta N_C^2 &= \expect{\Delta N_C \Delta N_C} + \expect{\hat{N}_C}.
\end{align}
These relations are not obvious from the equations of motion for those operators, as will be discussed shortly.
If the relationships above hold, then $ \Delta J_z^2 $ could remain nonzero in the steady-state limit whenever a two-body covariance persists, even though $\expect{\hat{N}_C}\rightarrow 0 $.
From the two-body covariances, one may also define two-body operator correlations given by
\begin{align}
\mathrm{cor}(\hat{J}_z,\hat{J}_z) &= \frac{\expect{\Delta J_z \Delta J_z}}{\Delta J_z^2}\\
\mathrm{cor}(\hat{N}_C,\hat{N}_C) &= \frac{\expect{\Delta N_C \Delta N_C}}{\Delta N_C^2}\\
\mathrm{cor}(\hat{J}_z,\hat{N}_C) &= \frac{\expect{\Delta J_z \Delta N_C}}{\sqrt{\Delta J_z^2\Delta N_C^2}}.
\end{align}
The three two-body covariances are governed by a closed set of equations of motion, given below:
\begin{align}
\dt{} \expect{\Delta J_z \Delta N_C} &= \left[(\gamma_{44}+\gamma_{33})-2\gamma_s^+ \right] \expect{\Delta J_z \Delta N_C} \nonumber \\
&\quad + \frac{1}{2} \left[\frac{1}{2}(\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34})-\gamma_s^- \right]\expect{\Delta N_C \Delta N_C}\nonumber\\
&\quad + \left[(\gamma_{44}-\gamma_{33} - \gamma_{43}+\gamma_{34})-2\gamma_s^- \right]\expect{\Delta J_z \Delta J_z}\\
\dt{} \expect{\Delta N_C \Delta N_C} &= \left[(\gamma_{44}+\gamma_{33}+\gamma_{43}+\gamma_{34})-2\gamma_s^+ \right]\expect{\Delta N_C \Delta N_C}\nonumber\\
&\quad +2 \left[(\gamma_{44}-\gamma_{33}-\gamma_{43}+\gamma_{34})-2\gamma_s^- \right]\expect{\Delta J_z \Delta N_C}\\
\dt{}\expect{\Delta J_z \Delta J_z} &= \left[(\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34})-2\gamma_s^+ \right]\expect{\Delta J_z \Delta J_z}\nonumber\\
&\quad +\left[\frac{1}{2}(\gamma_{44}-\gamma_{33}+\gamma_{43}-\gamma_{34})-\gamma_s^- \right]\expect{\Delta J_z \Delta N_C}.
\end{align}
In the steady state, all two-body correlations vanish.
This implies that variances such as $ \Delta J_z^2 $ would also vanish in the steady-state limit.
Notice, however, that since the equations for these covariances do not include measurement backaction, which is a second-order but important effect, this model of the covariance dynamics is not complete.
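For completeness, the closed linear system above can be integrated directly; the short sketch below (with placeholder rates in units of $\gamma_s$) simply illustrates that all three covariances decay to zero under optical pumping alone.
\begin{verbatim}
# Sketch: integrate the closed linear system for the two-body covariances
# under optical pumping alone.  Rates are placeholders; the covariances
# decay to zero, as stated in the text.
import numpy as np
from scipy.linalg import expm

g44, g33, g43, g34 = 0.25, 0.25, 0.05, 0.05   # placeholder pumping rates
gsp, gsm = 1.0, 0.0                           # gamma_s^+, gamma_s^-

# ordering: x = (<dJz dNc>, <dNc dNc>, <dJz dJz>)
M = np.array([
    [(g44+g33)-2*gsp, 0.5*(0.5*(g44-g33+g43-g34)-gsm), (g44-g33-g43+g34)-2*gsm],
    [2*((g44-g33-g43+g34)-2*gsm), (g44+g33+g43+g34)-2*gsp, 0.0],
    [0.5*(g44-g33+g43-g34)-gsm, 0.0, (g44+g33-g43-g34)-2*gsp],
])

x0 = np.array([0.0, 0.0, 100.0])              # some initial two-body correlation
for t in [0.0, 1.0, 5.0, 20.0]:
    print(t, expm(M*t) @ x0)                  # -> decays toward zero
\end{verbatim}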
\section{Approximations and the physical interpretation of coefficients}
To estimate the solution of the $ \Delta J_z^2 $ equation above, we may assume $ \expect{\hat{J}_z}\approx 0 $ and take $ \expect{\hat{N}_C}=N_A $ to be constant.
The $ \Delta J_z^2 $ equation can be rewritten in the following general form
\begin{align}
\frac{d x}{d t} = -A x^2 - B x + C
\end{align}
with $ x=\Delta J_z^2 $, $ A = \kappa $, $ B=2\gamma_s^+-(\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34} )=2\Gamma_b $ and\\ $ C=-\frac{N_A}{4}\left[\frac{1}{2}(\gamma_{44}+\gamma_{33}-3\gamma_{43}-3\gamma_{34} ) -\gamma_s^+\right] $.
It can be solved as~\cite{Baragiola2014Open}
\begin{align}
x(t)& = \frac{ \sqrt{B^2 + 4 AC}x_0 - (Bx_0-2C) \tanh \left( \sqrt{B^2 + 4 AC} \frac{t}{2} \right) }{\sqrt{B^2 + 4 AC} + (2Ax_0 + B) \tanh \left( \sqrt{B^2 + 4 AC} \frac{t}{2} \right) } ,
\end{align}
with the initial condition $x_0=\Delta J_z^2(0)=N_A/4$.
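The closed-form solution can be checked against direct numerical integration of the Riccati equation; the sketch below does so for placeholder values of $A$, $B$, $C$, and $N_A$.
\begin{verbatim}
# Sketch: check the tanh-form solution of dx/dt = -A x^2 - B x + C against
# direct numerical integration.  A, B, C, and N_A are placeholder values.
import numpy as np
from scipy.integrate import solve_ivp

N_A = 2500
A, B, C = 5e-3, 1.0, 0.25*N_A       # placeholder coefficients
x0 = N_A/4                          # coherent-spin-state initial variance

k = np.sqrt(B**2 + 4*A*C)
def x_exact(t):
    th = np.tanh(k*t/2)
    return (k*x0 - (B*x0 - 2*C)*th) / (k + (2*A*x0 + B)*th)

sol = solve_ivp(lambda t, x: -A*x**2 - B*x + C, (0, 5), [x0],
                t_eval=np.linspace(0, 5, 6), rtol=1e-10, atol=1e-10)

for t, xn in zip(sol.t, sol.y[0]):
    print(f"t={t:.1f}  numerical={xn:.4f}  closed form={x_exact(t):.4f}")
\end{verbatim}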
If we instead take $ \expect{\hat{N}_C}\rightarrow N_A(t) $ to be a purely decaying function governed by $ \dt{\expect{\hat{N}_C}} = -\Gamma_{\rm loss}\expect{\hat{N}_C} $ with $ \Gamma_{\rm loss}= \gamma_s^+ - \frac{1}{2}(\gamma_{44}+\gamma_{33}+\gamma_{43}+\gamma_{34}) $ from Eq.~\eqref{Eq:NAt}, then the equation for $ \Delta J_z^2 $ can be approximated in the following form:
\begin{align}
\frac{d x}{d t} = -A x^2 - B x + C e^{-\Gamma_{\rm loss}t},
\end{align}
where $ x=\Delta J_z^2 $, $ x_0=\Delta J_z^2(0)=N_A/4=d $, $ A = \kappa $, $ B=2\gamma_s^+-(\gamma_{44}+\gamma_{33}-\gamma_{43}-\gamma_{34} ) $ and $ C=-\frac{N_A}{4}\left[\frac{1}{2}(\gamma_{44}+\gamma_{33}-3\gamma_{43}-3\gamma_{34} ) -\gamma_s^+\right] $.
The solution for $ x(t) $ or $ \Delta J_z^2(t) $ can be given by
\begin{align}
&\Delta J_z^2(t) = x(t)\nonumber\\
% Solution with initial value included in the equation explicitly.
%&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!=\left. \frac{e^{\frac{Bt}{2}-k t} \left(\frac{AC}{k^2}\right)^{-\frac{B}{k}}
%\left(2 k \left(\frac{ AC e^{-k t}}{k^2}\right)^{\frac{ B}{k}} \left((A d+B) \, _0\tilde{F}_1\left(;1-\frac{B}{k};\frac{A C}{k^2}\right)+k \, _0\tilde{F}_1\left(;-\frac{B}{k};\frac{A C}{k^2}\right)\right) \left(A C \, _0\tilde{F}_1\left(;\frac{B}{k}+2;\frac{A C e^{-k t}}{k^2}\right)+k e^{k t} \left(k \, _0\tilde{F}_1\left(;\frac{B}{k};\frac{A C e^{-k x}}{k^2}\right)+B \, _0\tilde{F}_1\left(;\frac{B+k}{k};\frac{A C e^{-k t}}{k^2}\right)\right)\right)
%-2 A C \left(\frac{AC}{k^2}\right)^{\frac{B}{k}} \left(k \left((2 A d+B) \, _0\tilde{F}_1\left(;\frac{B+k}{k};\frac{A C}{k^2}\right)+k \, _0\tilde{F}_1\left(;\frac{B}{k};\frac{A C}{k^2}\right)\right)+A C \, _0\tilde{F}_1\left(;\frac{B}{k}+2;\frac{A C}{k^2}\right)\right) \, _0\tilde{F}_1\left(;2-\frac{B}{k};\frac{A C e^{-k t}}{k^2}\right)\right)}
%% Bottom part
%{k^2 \left(\frac{2 A e^{\frac{Bt}{2}} \left(k \left((2 A d+B) \, _0\tilde{F}_1\left(;\frac{B+k}{k};\frac{A C}{k^2}\right)+k \, _0\tilde{F}_1\left(;\frac{B}{k};\frac{A C}{k^2}\right)\right)+A C \, _0\tilde{F}_1\left(;\frac{B}{k}+2;\frac{A C}{k^2}\right)\right) \, _0\tilde{F}_1\left(;1-\frac{B}{k};\frac{A C e^{-k x}}{k^2}\right)}{k}
%-4 A \left(\frac{AC}{k^2}\right)^{-\frac{B}{2k}} \left((A d+B) \, _0\tilde{F}_1\left(;1-\frac{B}{k};\frac{A C}{k^2}\right)+k \, _0\tilde{F}_1\left(;-\frac{B}{k};\frac{A C}{k^2}\right)\right) I_{\frac{B}{k}}\left(\frac{2 \sqrt{AC} \sqrt{e^{-k t}}}{k}\right)\right)}\right\}
% Solution with initial value separated.
%&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!= -\frac{D\sqrt{ACe^{\Gamma_{\rm loss} (-t)}} \Gamma \left(1\!-\!\frac{B}{\Gamma_{\rm loss} }\right) I_{1\!-\!\frac{B}{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{ACe^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right)+D B \Gamma \left(1-\frac{B}{\Gamma_{\rm loss} }\right) I_{-\frac{B}{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right)+D \sqrt{ACe^{\Gamma_{\rm loss} (-t)}} \Gamma \left(1\!-\!\frac{B}{\Gamma_{\rm loss} }\right) I_{-\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right)+ (-1)^{B/\Gamma_{\rm loss} } \sqrt{AC e^{\Gamma_{\rm loss} (-t)}} \Gamma \left(\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }\right) I_{\frac{B}{\Gamma_{\rm loss} }\!-\!1}\left(\frac{2 \sqrt{AC e^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right) + (-1)^{B/\Gamma_{\rm loss} }B \Gamma \left(\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }\right) I_{\frac{B}{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right)
%+ (-1)^{B/\Gamma_{\rm loss} } \sqrt{AC e^{\Gamma_{\rm loss} (-t)}} \Gamma \left(\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }\right) I_{\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right)}
%{2 A D \Gamma \left(1\!-\!\frac{B}{\Gamma_{\rm loss} }\right) I_{-\frac{B}{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right)
%+(-1)^{B/\Gamma_{\rm loss} } \Gamma \left(\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }\right) I_{\frac{B}{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right)}\\
&= \frac{\mathcal{P}+\mathcal{Q}+\mathcal{R}+\mathcal{S}+\mathcal{T}+\mathcal{U} }
{\mathcal{V}+\mathcal{W}},
\end{align}
where
\begin{align}
\mathcal{P} &= D\sqrt{ACe^{-\Gamma_{\rm loss} t}} \Gamma \left(1\!-\!\frac{B}{\Gamma_{\rm loss} }\right) I_{1\!-\!\frac{B}{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{ACe^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right)\\
\mathcal{Q} &= D B \Gamma \left(1-\frac{B}{\Gamma_{\rm loss} }\right) I_{-\frac{B}{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right)\\
\mathcal{R} &= D \sqrt{ACe^{-\Gamma_{\rm loss} t}} \Gamma \left(1\!-\!\frac{B}{\Gamma_{\rm loss} }\right) I_{-\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\right)\\
\mathcal{S} &= (-1)^{B/\Gamma_{\rm loss} } \sqrt{AC e^{-\Gamma_{\rm loss} t}} \Gamma \left(\!\frac{B\!+\!\Gamma_{\rm loss} }{\Gamma_{\rm loss} }\!\right) I_{\frac{B}{\Gamma_{\rm loss} }-1}\left(\!\frac{2 \sqrt{AC e^{-t \Gamma_{\rm loss} }}}{\Gamma_{\rm loss} }\!\right)\\
\mathcal{T} &= (-1)^{B/\Gamma_{\rm loss} }B \Gamma \left(\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }\right) I_{\frac{B}{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-\Gamma_{\rm loss} t}}}{\Gamma_{\rm loss} }\right)\\
\mathcal{U} &= (-1)^{B/\Gamma_{\rm loss} } \sqrt{AC e^{-\Gamma_{\rm loss} t}} \Gamma \left(\!\frac{B\!+\!\Gamma_{\rm loss} }{\Gamma_{\rm loss} }\!\right) I_{\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }}\left(\!\frac{2 \sqrt{AC e^{- \Gamma_{\rm loss}t }}}{\Gamma_{\rm loss} }\!\right)\\
\mathcal{V} &= 2 A D \Gamma \left(1\!-\!\frac{B}{\Gamma_{\rm loss} }\right) I_{-\frac{B}{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-\Gamma_{\rm loss}t }}}{\Gamma_{\rm loss} }\right)\\
\mathcal{W} &= (-1)^{B/\Gamma_{\rm loss} } \Gamma \left(\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }\right) I_{\frac{B}{\Gamma_{\rm loss} }}\left(\frac{2 \sqrt{AC e^{-\Gamma_{\rm loss}t }}}{\Gamma_{\rm loss} }\right)
\end{align}
with
\begin{align}
D &\!= \frac{- \left(-\frac{AC }{\Gamma_{\rm loss}^2 }\right)^{\frac{ B}{\Gamma_{\rm loss} }} e^{-(B+\Gamma_{\rm loss})t} \Gamma \left(\frac{B+\Gamma_{\rm loss} }{\Gamma_{\rm loss} }\right) }
{\Gamma_{\rm loss} \left[2 (A d\!+\!B) \, _0F\!_1\left(;1\!-\!\frac{B}{\Gamma_{\rm loss} };\frac{A C e^{\!-\Gamma_{\rm loss} t}}{\Gamma_{\rm loss} ^2}\right)\!-\!2 B \, _0F\!_1\left(;-\frac{B}{\Gamma_{\rm loss} };\frac{A C e^{\!-\Gamma_{\rm loss}t }}{\Gamma_{\rm loss} ^2}\right)\right]}\nonumber\\
&\times\! \left\{\!\!\Gamma_{\rm loss} e^{\!\Gamma_{\rm loss} t}\! \left[\!(2 A d \!\!+\!\! B) \, _0\!\tilde{F}\!_1\!\!\left(\!;\!\frac{B}{\Gamma_{\rm loss} }\!+\!1;\!\frac{A C e^{\!-\Gamma_{\rm loss} t}}{\Gamma_{\rm loss} ^2}\!\right)\!\!+\!\!\Gamma_{\rm loss} \, _0\!\tilde{F}\!_1\!\left(\!;\!\frac{B}{\Gamma_{\rm loss} }\!;\!\frac{A C e^{\!-\Gamma_{\rm loss} t}}{\Gamma_{\rm loss} ^2}\!\right)\!\right]\right. \nonumber\\
&\quad \quad \left. +A C \, _0\!\tilde{F}_1\!\!\left(\!;\!\frac{B}{\Gamma_{\rm loss} }\!+\! 2;\!\frac{A C e^{-\Gamma_{\rm loss} t }}{\Gamma_{\rm loss} ^2}\!\right)\!\right\}.
\end{align}
Above, $ I_n(z) $ are the modified Bessel functions of the first kind\index{Bessel function!modified Bessel function of the first kind}, $ \Gamma(z) $ is the Euler gamma function\index{Euler gamma function}, $ _0F_1(;b;z) $ are the confluent hypergeometric functions\index{confluent hypergeometric function}, and $ _0\tilde{F}_1(;b;z) $ are the regularized confluent hypergeometric functions\index{regularized confluent hypergeometric function}\footnote{This problem is solved in \emph{Mathematica} (version 10).}.
To obtain the solution for $ \Delta J_z^2(t) $ exactly, numerical methods can be applied to the full set of equations discussed above in this section.
The variance, $\Delta J_z^2 $, can be characterized by the normalized rate $ \gamma_{\rm norm}= \sqrt{B^2 + 4 AC}$ or by the characteristic decay rate $ \gamma_s $.
In the end, for a weak measurement, where $ \hat{J}_\perp\approx \hat{J}_z $ and $ \hat{J}_\parallel\approx\hat{J}_x $, we expect the Wineland squeezing parameter~\cite{Wineland1992} to be
\begin{align}
\xi &= 2j\expect{\hat{N}_C(t=0)}\frac{\Delta J_z^2}{\expect{\hat{J}_x}^2}
=N_A\frac{\Delta J_z^2}{\expect{\hat{J}_x}^2}.
\end{align}
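As a minimal numerical sketch (not the thesis code; the rates $A$, $B$, $C$, $\gamma_{\rm loss}$, the atom number and the initial moments below are placeholder values), the coupled moment equations can be integrated with a standard ODE solver and the squeezing parameter evaluated from the result:
\begin{verbatim}
# Illustrative sketch only: integrate toy stand-ins for the coupled moment
# equations and evaluate the Wineland parameter xi = N_A * Var(Jz) / <Jx>^2.
import numpy as np
from scipy.integrate import solve_ivp

N_A = 1000                        # atom number (placeholder)
A, B, C = 0.5, 0.1, 0.02          # placeholder rates
gamma_loss = 0.05                 # placeholder loss rate

def rhs(t, y):
    # y = [<Jx>, Var(Jz)]; toy equations standing in for the full set.
    Jx, var_Jz = y
    dJx = -gamma_loss * Jx
    dvar = -B * var_Jz + A * C * np.exp(-gamma_loss * t)
    return [dJx, dvar]

sol = solve_ivp(rhs, (0.0, 50.0), [N_A / 2, N_A / 4])
Jx_final, var_final = sol.y[0, -1], sol.y[1, -1]
xi = N_A * var_final / Jx_final**2
print(xi)
\end{verbatim}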
%The dynamics of the expectation values of collective spin operators and their variance can be seen in Fig.~\ref{fig:clockdynamics_m_yq_rp1d5}. The squeezing parameter is shown in Fig.~\ref{fig:xi_magic}.
%In the plots, we have normalized the time scale in accordance to $ 1/\gamma_{\rm norm} $, where $ \gamma_{\rm norm} = \sqrt{ (\gamma_{44} +\gamma_{33} -\gamma_{43} -\gamma_{34} -2\gamma_s^+ )^2 -\kappa [\frac{1}{2}(\gamma_{44} +\gamma_{33} -3\gamma_{43} -3\gamma_{34}) -\gamma_s^+ ] N_A } $.
Since the optical depth per atom in the nanofiber case is orders of magnitude higher than in the free-space QND measurement case, the correlations between atoms are strong. Therefore, the equations of motion are inevitably coupled to each other.
The two approximations above for the equation of $ \Delta J_z^2 $ deviate significantly from the exact solution.
However, we can still make the following approximation, which has been verified numerically.
During the measurement period, the atom number is fixed, and hence one can ignore the quantum nature of the atom number operators; that is, there are no fluctuations of, or correlations with, the atom number operator. In our equations, for example, $ \expect{\hat{J}_z\hat{N}_C}=\expect{\hat{J}_z}\expect{\hat{N}_C}= \expect{\hat{J}_z}N_C$ or $ \expect{\hat{\mathbbm{1}}_C\hat{\sigma}_z}+\expect{\hat{\sigma}_z\hat{\mathbbm{1}}_C}=2\expect{\hat{\sigma}_z} $.
We also have $ \Delta N_C=0 $, so terms like $ \expect{\Delta N_C\Delta N_C} $ vanish in our equations of motion.
Now, we consider the physical meaning of the different coefficients in the exact equations of motion.
The $ \gamma_\uparrow $ and $ \gamma_\downarrow $ coefficients come from the imaginary part of the effective Hamiltonian and indicate the population loss from the $ \ket{\uparrow} $ and $ \ket{\downarrow} $ states, respectively.
Other rates arising from the expectation values of jump operators, such as $ \gamma_{34} $, are the optical pumping terms, indicating coherence refeeding from one sublevel to another for the collective spins.
Since the spin dynamics in the clock space are not trace preserving and the atomic state becomes slightly polarized, from the equation for $ \expect{\hat{N}_C} $ one can interpret the coefficient associated with $ \expect{\hat{N}_C} $ as the loss rate and the coefficient associated with $ \expect{\hat{J}_z} $ as the polarization rate. Therefore, we define
\begin{align}
\gamma_{\rm{loss},\uparrow} &\equiv \gamma_\uparrow - (\gamma_{44}+\gamma_{43})=\gamma_\uparrow - \big(\gammauu + \gammadu \big), \\
\gamma_{\rm{loss},\downarrow} &\equiv \gamma_\downarrow - (\gamma_{33}+\gamma_{34})=\gamma_\downarrow - \big(\gammadd + \gammaud \big),\\
\gamma_{\rm loss} &\equiv \frac{\gamma_{\rm{loss},\uparrow} +\gamma_{\rm{loss},\downarrow}}{2}=\gamma_{00},\\
\gamma_{\rm pol} &\equiv \frac{\gamma_{\rm{loss},\uparrow} - \gamma_{\rm{loss},\downarrow}}{2},\\
\gamma_{\rm flip} &\equiv \frac{\gammauu + \gammadd + \gammaud + \gammadu}{2}.
\end{align}
$ \gamma_{\rm{loss},\uparrow} $ is the total rate of population loss for the state $\ket{\uparrow}$ and similarly for $\gamma_{\rm{loss},\downarrow}$.
There are two important features of this map that are not typical in a QND measurement of an ensemble of spin-1/2 particles. First, since $\gamma_{\rm{loss},\uparrow} \neq \gamma_{\uparrow \rightarrow \downarrow}$ and $\gamma_{\rm{loss},\downarrow} \neq \gamma_{\downarrow \rightarrow \uparrow}$, the map is {\em not} trace preserving, reflecting atoms that are optically pumped out of the clock subspace.
In addition, since $\gamma_{\rm{loss},\uparrow} \neq \gamma_{\rm{loss},\downarrow}$, optical pumping will lead to a polarization of the mean along $\hat{J}_z$, different from the value found in a usual QND measurement.
Similarly, we introduce $\tilde{\gamma}_\uparrow \equiv \gamma_\uparrow - \gamma_{\uparrow \rightarrow \uparrow}$ as the rate of optical pumping out of the state $\ket{\uparrow}$ excluding the optical self-refeeding effect, and we define $\tilde{\gamma}_\downarrow$ analogously for the $ \ket{\downarrow} $ state. Then one can define
\begin{align}
\Gamma'_1 &= \frac{\tilde{\gamma}_\uparrow + \tilde{\gamma}_\downarrow}{2}\\
\Gamma'_2 &= \frac{\tilde{\gamma}_\uparrow - \tilde{\gamma}_\downarrow}{2},
\end{align}
which appear in the equations of motion of some operator expectation values and account for the sum of and the difference between the population losses of the $ \ket{\uparrow} $ and $ \ket{\downarrow} $ states.
These decay rates are associated with depolarization processes of the spin dynamics.
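As a small illustrative computation (all numerical rates below are assumed, not taken from the nanofiber calculation), the composite rates defined above are simple linear combinations of the optical-pumping rates:
\begin{verbatim}
# Illustrative only: composite rates from assumed optical-pumping rates.
gamma_up, gamma_dn = 1.00, 0.90                    # total rates (assumed)
g_uu, g_dd, g_ud, g_du = 0.40, 0.35, 0.05, 0.04    # refeeding rates (assumed)

loss_up = gamma_up - (g_uu + g_du)                 # gamma_{loss, up}
loss_dn = gamma_dn - (g_dd + g_ud)                 # gamma_{loss, down}
gamma_loss = 0.5 * (loss_up + loss_dn)
gamma_pol  = 0.5 * (loss_up - loss_dn)
gamma_flip = 0.5 * (g_uu + g_dd + g_ud + g_du)

tilde_up, tilde_dn = gamma_up - g_uu, gamma_dn - g_dd
Gamma1 = 0.5 * (tilde_up + tilde_dn)               # Gamma'_1
Gamma2 = 0.5 * (tilde_up - tilde_dn)               # Gamma'_2
\end{verbatim}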
%\begin{figure}
%\begin{minipage}{.5\linewidth}
%\centering
%\subfloat[]{\label{fig:clockdynamics_m33_yq_rp1d5_exact}\includegraphics[scale=0.45]{../media/Figs/clockdynamics_m33_yq_rp1d5_exact}}
%\end{minipage}
%\begin{minipage}{.5\linewidth}
%\centering
%\subfloat[]{\label{fig:clockdynamics_m33_yq_rp1d5_approx}\includegraphics[scale=0.45]{../media/Figs/clockdynamics_m33_yq_rp1d5_approx}}
%\end{minipage}
%\par\medskip
%\begin{minipage}{.5\linewidth}
%\centering
%\subfloat[]{\label{fig:clockdynamics_m44_yq_rp1d5_exact}\includegraphics[scale=0.45]{../media/Figs/clockdynamics_m44_yq_rp1d5_exact}}
%\end{minipage}
%\begin{minipage}{.5\linewidth}
%\centering
%\subfloat[]{\label{fig:clockdynamics_m44_yq_rp1d5_approx}\includegraphics[scale=0.45]{../media/Figs/clockdynamics_m44_yq_rp1d5_approx}}
%\end{minipage}
%\caption{Evolution of expectation values of collective spin operators as a function of time with $ N_A=1000 $ atoms trapped at $ r'\!_\perp =1.5a $ along the $ H/x $ axis of the nanofiber. Subfigs.~\protect\subref{fig:clockdynamics_m33_yq_rp1d5_exact} and~\protect\subref{fig:clockdynamics_m33_yq_rp1d5_approx} use the magic frequencies close to the $ F=3\leftrightarrow F'=3 $ transition frequency $ \omega_{33'} $. Subfigs.~\protect\subref{fig:clockdynamics_m44_yq_rp1d5_exact} and~\protect\subref{fig:clockdynamics_m44_yq_rp1d5_approx} used the magic frequencies close to $ \omega_{44'} $. The $ \phi $ axis was defined as the quantization axis for these plots. Subfigs.~\protect\subref{fig:clockdynamics_m33_yq_rp1d5_exact} and~\protect\subref{fig:clockdynamics_m44_yq_rp1d5_exact} were calculated using the exact solution of the coupled differential equations of the expectation values of spin operators. In contrast, Subfigs.~\protect\subref{fig:clockdynamics_m33_yq_rp1d5_approx} and~\protect\subref{fig:clockdynamics_m44_yq_rp1d5_approx} use approximations by ignoring the dynamics of $ \expect{\hat{J}_z} $ for all calculations, and treat $ \expect{\hat{N}_C} $ as a constant in solving the differential equation for $ \Delta J_z^2 $. Other parameters used: $ \Omega/2\pi=52 $ MHz (does not matter). }\label{fig:clockdynamics_m_yq_rp1d5}
%\end{figure}
%\begin{figure}
%\begin{minipage}{.5\linewidth}
%\centering
%\subfloat[]{\label{fig:xi_magic33_yq}\includegraphics[scale=0.45]{../media/Figs/xi_magic33_yq}}
%\end{minipage}
%\begin{minipage}{.5\linewidth}
%\centering
%\subfloat[]{\label{fig:xi_magic44_yq}\includegraphics[scale=0.45]{../media/Figs/xi_magic44_yq}}
%\end{minipage}
%\par\medskip
%\begin{minipage}{.5\linewidth}
%\centering
%\subfloat[]{\label{fig:xi_magic33_xq}\includegraphics[scale=0.45]{../media/Figs/xi_magic33_xq}}
%\end{minipage}
%\begin{minipage}{.5\linewidth}
%\centering
%\subfloat[]{\label{fig:xi_magic44_xq}\includegraphics[scale=0.45]{../media/Figs/xi_magic44_xq}}
%\end{minipage}
%%\centering\makebox[\textwidth]{
%%\includegraphics[width=0.65\textwidth]{./Figs/xi_magic44_yq}}
%\caption{Spin squeezing evolution as a function of time with $ N_A=1000 $ atoms trapped at $ r'\!_\perp =1.5a $ along the $ H/x $ axis of the nanofiber. Subfigs.~\protect\subref{fig:xi_magic33_yq} and~\protect\subref{fig:xi_magic33_xq} use the magic frequencies close to the $ F=3\leftrightarrow F'=3 $ transition frequency $ \omega_{33'} $. Subfigs.~\protect\subref{fig:xi_magic44_yq} and~\protect\subref{fig:xi_magic44_xq} used the magic frequencies close to $ \omega_{44'} $. The magic frequencies and spin squeezing parameters are different for different choices of quantization axis. Subfigs.~\protect\subref{fig:xi_magic33_yq} and~\protect\subref{fig:xi_magic44_yq} define $ \phi $ axis as the quantization axis. In contrast, subfigs.~\protect\subref{fig:xi_magic33_xq} and~\protect\subref{fig:xi_magic44_xq} use $ r\!_\perp $ axis as the quantization axis. When $ z $ axis is chosen to be the quantization axis, almost no squeezing effect can be observed at the magic frequencies. As can be seen, when the quantization axis close to the local field direction, the spin squeezing parameter can reach to a higher value. Other parameters used: $ \Omega/2\pi=52 $ MHz (does not matter). }\label{fig:xi_magic}
%\end{figure}
%</birefringencemotioneq>
%\bibliography{Nanofiber}
%\bibliographystyle{amsplain}
\bibliographystyle{../styles/abbrv-alpha-letters-links}
%\bibliographystyle{unsrt}
% \nocite{*}
\bibliography{../refs/Archive}
\printindex
\end{document}
| {
"alphanum_fraction": 0.6894386495,
"avg_line_length": 107.6218611521,
"ext": "tex",
"hexsha": "d17b4df6aca7372b8f56883eb8d7a51c265cd05c",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2018-07-17T21:55:09.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-07-17T21:55:09.000Z",
"max_forks_repo_head_hexsha": "a9bc6bc4213896c70c90cbb3d9b533782d428761",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "i2000s/PhD_Thesis",
"max_forks_repo_path": "chap5/Birefringence.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "a9bc6bc4213896c70c90cbb3d9b533782d428761",
"max_issues_repo_issues_event_max_datetime": "2018-07-18T01:47:21.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-07-18T01:47:21.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "i2000s/PhD_Thesis",
"max_issues_repo_path": "chap5/Birefringence.tex",
"max_line_length": 1621,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "a9bc6bc4213896c70c90cbb3d9b533782d428761",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "i2000s/PhD_Thesis",
"max_stars_repo_path": "chap5/Birefringence.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-27T19:11:43.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-26T01:58:05.000Z",
"num_tokens": 50567,
"size": 145720
} |
%!TEX TS-program = xelatex
\documentclass{nirev-cv}
\addbibresource{bibliography.bib}
\usepackage{ulem}
\begin{document}
\header{Guilherme}{DeMaio}
{Software Developer}
% example on how to place an image in exact location
%\usetikzlibrary{calc}
%\begin{tikzpicture}[remember picture,overlay]
% \node[anchor=north west,inner sep=0pt] at ($(current page.north west)-(2cm,5cm)$) {
% \includegraphics[width=4cm]{profile.jpg}
% };
%\end{tikzpicture}
% In the aside, each new line forces a line break
\begin{aside}
%\includegraphics[width=4cm]{profile.jpg}
\section{contact}
São Paulo - Brasil
~
+55 11 9 8393 2639
{\faSkype } guilherme\_nirev
\href{mailto:[email protected]}{[email protected]}
\href{https://nirev.dev}{https://nirev.dev}
\href{https://github.com/nirev}{\faGithub /nirev}
\href{https://twitter.com/nirev}{\faTwitter /nirev}
\href{https://linkedin.com/in/nirev}{\faLinkedin /nirev}
\section{languages}
native portuguese
fluent english
basic french
\section{programming}
Elixir, Java, Ruby, Javascript, C, Python,
HTML, css, bash,
Kafka, Spark, docker,
monitoring tools
\section{interests}
distributed systems, making a better world, data structures, music, martial arts, science fiction, fighting fascism
\end{aside}
\section{about me}
I've been developing software for over 10 years, from web apps to embedded platforms. \\
During this time I've come to believe:
- Empathy first, always
- Trust is key
- Less is more
On the tech side of things, I'm more of a backend/operations type, but I do dabble a bit on the frontend.
I enjoy taking part in, and ownership of, how and where applications are deployed and monitored.
I take pride in being part of a team. I enjoy giving talks and getting to know people at conferences.
I've been a team lead before, and I dislike the term "soft skills". Once you're in such a position, those so-called "soft" skills are exactly what is most important.
\section{recent experience}
\workentry
{November 2017}{*}
{Staff Engineer at Telnyx}
{Remote / Chicago - US}
{I joined Telnyx as an Elixir software developer tasked with building their new Call Control product,
which is all about abstracting away telephony and providing APIs for interfacing with telephone calls.
Within a short timeframe I became the tech lead of the project, and I have since been promoted to Staff Engineer. During this time I've been involved in: \\ \small
- Building, deploying, monitoring and operating the new product \\
- Shaping the product and roadmap together with Product Managers \\
- Talking to clients to understand use cases and provide guidance integrating with the API \\
- Supporting Sales and Marketing onboarding and bringing new customers \\
- Hiring, both Elixir and frontend engineers critical to product growth
}
\workentry
{September 2015}{September 2017}
{Senior Software Engineer at XERPA}
{São Paulo - Brazil}
{I joined the company as their first employee to help build the SaaS platform tackling HR bureaucracy in Brazil.
We started from day-0 with Elixir and Phoenix on the backend, with the frontend initially in simple html/css/jquery. We eventually migrated the frontend to ClojureScript using a GraphQL API.
Among things I took part are: \\ \small
- Bootstrapped the platform using Elixir, Phoenix, PostgreSQL, Terraform and Ansible \\
- Built libraries for: Auth, ElasticSearch, image manipulation, payment gateway integration \\
%--- Authentication and Authorization \\
%--- Indexing things to ElasticSearch, as well as defining mappings \\
%--- Resizing and manipulating images \\
%--- Integrating with payment gateways \\
- Defining and prioritizing features together with product and business/sales teams. \\
- Interviewing "beta" clients to get a better understanding of the problems we needed to solve. \\
- Collaborating heavily with Customer Success team to help streamline client usage after signup \\%, focusing on quickly solving bugs and improving user experience \\
- Built a document distribution system using OCR, which greatly improved sales and customer satisfaction. % to identify unique numbers in PDFs
}
\workentry
{November 2012}{September 2015}
{Team Lead (Search and Analytics) at Elo7}
{São Paulo - Brazil}
{A marketplace with high traffic and transaction volume. The stack was based on Java, Ruby and Scala on AWS infrastructure. A fast paced environment, with agile practices, and focusing on delivering fast with quality. We measured and logged using StatsD, Graphite, and NewRelic, and deployed constantly. I was leading the Search and Analytics Team: a small team using Solr, Kafka and Spark to improve search and real time reports, and responsible for its infrastructure and pipeline tools.}
\subsection{past experiences}
details available upon request
\begin{smallentrylist}
\shortworkentry
{04/2012}{10/2012}
{Intern at INRIA Paris-Roquencourt}
{Paris - France}
{Research on fault-tolerance requirements for Wireless Sensor Networks macroprogramming.}
\shortworkentry
{03/2011}{09/2012}
{Undergrad Researcher at USP - CHOReOS project}
{São Paulo - Brazil}
{Research and implementation involving: choreography analysis using graph metrics,
dynamic adaptation techniques, implementation of a testing framework, and cloud computing.}
\shortworkentry
{08/2010}{02/2011}
{Software Developer (C/C++) at RR Sistema}
{Vitória/ES - Brazil}
{Porting their implementation of Brazilian DTV middleware to different platforms.
Mainly programming in C/C++, with low-level and multimedia libraries (GStreamer, DirectFB, pthreads, osal, etc),
and required knowledge of the Brazilian digital television standards.}
\shortworkentry
{01/2009}{12/2010}
{Undergrad Intern at Multimedia and Network Research Lab}
{Vitória/ES - Brazil}
{Working with the Brazilian DTV middleware; ported it for Android. }
\shortworkentry
{08/2008}{12/2008}
{Undergrad Intern at LASSY}
{University of Luxembourg - Luxembourg}
{RESIST project: developed a prototype for SOA eHealth application with multiple services.}
% \shortworkentry
% {January 2008}{March 2008}
% {Intern at CISA Trading S.A.}
% {Trading company, São Paulo/SP - Brazil}
% {Maintaining their internal software solution which manages all of the companies processes.
% The system was written in the Progress4GL language.}
\shortworkentry
{03/2007}{12/2007}
{Undergrad Intern at NINFA laboratory}
{Vitória/ES - Brazil}
{Joint project with ESCELSA, the state's electric power utility, building automated classifiers
for detecting fraud committed by its clients. Used optimization heuristics, neural networks and clustering.
Tools developed in Java, C++ and Matlab.}
%\shortworkentry
% {09/2006}{03/2007}
% {PHP Developer at Lettera Soluções}
% {Vitória/ES - Brazil}
% {Developing and managing web sites using CSS, (x)HTML, PHP, ASP, MySQL, PostgreSQL, as well as common Linux admin tools.}
\end{smallentrylist}
\section{education}
\begin{entrylist}
\entry
{2011–2014}
{Master in Computer Science}
{University of São Paulo - Brazil}{}
\entry
{2005–2010}
{B.Sc. in Computer Science}
{Federal University of Espírito Santo - Brazil}{}
\end{entrylist}
\printbibsection{article}{article in peer-reviewed journal}
\printbibsectionpar{inproceedings}{international peer-reviewed conferences/proceedings}{notkeyword={brazil}}
\printbibsectionpar{inproceedings}{local peer-reviewed conferences/proceedings}{keyword={brazil}}
\printbibsection{misc}{other publications}
\printbibsection{report}{research reports}
\end{document}
| {
"alphanum_fraction": 0.7480630165,
"avg_line_length": 42.0869565217,
"ext": "tex",
"hexsha": "3d584540a3a779e214e611def4a622fd05d0b6c2",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2017-09-04T17:04:29.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-09-04T17:04:29.000Z",
"max_forks_repo_head_hexsha": "bf4ba871ca3dafce05d1164fff5a0d321f43ccaf",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "nirev/cv",
"max_forks_repo_path": "cv.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "bf4ba871ca3dafce05d1164fff5a0d321f43ccaf",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "nirev/cv",
"max_issues_repo_path": "cv.tex",
"max_line_length": 492,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "bf4ba871ca3dafce05d1164fff5a0d321f43ccaf",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "nirev/cv",
"max_stars_repo_path": "cv.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1954,
"size": 7744
} |
\subsection{ARIMA interpolation}
\subsection{Last Observation Carried Forward (LOCF)}
\subsection{Next Observation Carried Backward (NOCB)}
\subsection{Other}
Multi-period averages can also be used for imputation on time series, as sketched below.
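As a minimal sketch (assuming a pandas Series with gaps; the values are purely illustrative), LOCF and NOCB correspond to forward and backward filling, and a multi-period average can be approximated with a centred rolling mean:
\begin{verbatim}
import numpy as np
import pandas as pd

# Toy series with missing values (illustrative only)
s = pd.Series([1.0, np.nan, np.nan, 4.0, np.nan, 6.0])

locf = s.ffill()   # Last Observation Carried Forward
nocb = s.bfill()   # Next Observation Carried Backward

# Multi-period average: fill gaps with a centred 3-period rolling mean
avg = s.fillna(s.rolling(window=3, min_periods=1, center=True).mean())
\end{verbatim}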
| {
"alphanum_fraction": 0.7945205479,
"avg_line_length": 16.8461538462,
"ext": "tex",
"hexsha": "a33b0c6a67a4d692b9729a734bc949a2089f89ed",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/statistics/missingInference/02-01-time.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/statistics/missingInference/02-01-time.tex",
"max_line_length": 53,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/statistics/missingInference/02-01-time.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 49,
"size": 219
} |
\PassOptionsToPackage{unicode=true}{hyperref} % options for packages loaded elsewhere
\PassOptionsToPackage{hyphens}{url}
\documentclass[11pt,ignorenonframetext,aspectratio=169]{beamer}
\IfFileExists{pgfpages.sty}{\usepackage{pgfpages}}{}
\setbeamertemplate{caption}[numbered]
\setbeamertemplate{caption label separator}{: }
\setbeamercolor{caption name}{fg=normal text.fg}
\beamertemplatenavigationsymbolsempty
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
\usetheme[]{metropolis}
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\newif\ifbibliography
\hypersetup{
pdftitle={Introduction to plant breeding},
pdfauthor={Deependra Dhakal},
pdfborder={0 0 0},
breaklinks=true}
%\urlstyle{same} % Use monospace font for urls
% Prevent slide breaks in the middle of a paragraph:
\widowpenalties 1 10000
\raggedbottom
\AtBeginPart{
\let\insertpartnumber\relax
\let\partname\relax
\frame{\partpage}
}
\AtBeginSection{
\ifbibliography
\else
\let\insertsectionnumber\relax
\let\sectionname\relax
\frame{\sectionpage}
\fi
}
\AtBeginSubsection{
\let\insertsubsectionnumber\relax
\let\subsectionname\relax
\frame{\subsectionpage}
}
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{0}
\usepackage{setspace}
\usepackage{wasysym}
% \usepackage{fontenc}
\usepackage{booktabs,siunitx}
\usepackage{longtable}
\usepackage{array}
\usepackage{multirow}
\usepackage{wrapfig}
\usepackage{float}
\usepackage{colortbl}
\usepackage{pdflscape}
\usepackage{tabu}
\usepackage{threeparttable}
\usepackage{threeparttablex}
\usepackage[normalem]{ulem}
\usepackage{makecell}
\usepackage{xcolor}
\usepackage{tikz} % required for image opacity change
\usepackage[absolute,overlay]{textpos} % for text formatting
\usepackage[skip=0.333\baselineskip]{caption}
% \usepackage{newtxtext,newtxmath}% better than txfonts
\usepackage[english]{babel}
\usepackage{pgfpages}
\sisetup{per-mode=symbol}
% % Added by CII
% \usepackage[format=hang,labelfont=bf,margin=0.5cm,justification=centering]{caption}
% \captionsetup{font=small,width=0.9\linewidth,labelfont=small,textfont={small}}
% % End of CII addition
% \usepackage{subcaption}
% \newcommand{\subfloat}[2][need a sub-caption]{\subcaptionbox{#1}{#2}}
\captionsetup[sub]{font=footnotesize,labelfont=footnotesize,textfont=footnotesize}
% \captionsetup[subfigure]{font=small,labelfont=small,textfont=small}
% \captionsetup[subfloat]{font=scriptsize,labelfont=scriptsize,textfont=scriptsize}
% this font option is amenable for beamer, although these are global settings
\setbeamerfont{caption}{size=\tiny}
% \setbeamerfont{subcaption}{size=\tiny} % this does not chage subfloat fonts
% \setbeamerfont{subfloat}{size=\tiny} % this does not change subfloat fonts
% use single line spacing ?
\singlespacing
% use cslreferences environment
\newlength{\cslhangindent}
\setlength{\cslhangindent}{1.5em}
\newenvironment{cslreferences}%
{\setlength{\parindent}{0pt}%
\everypar{\setlength{\hangindent}{\cslhangindent}}\ignorespaces}%
{\par}
% notes will be showed
% \setbeamertemplate{note page}{\insertnote} % a plain style
\setbeameroption{show notes}
%
% \usepackage{etoolbox}
%
% % use different background canvas
% \defbeamertemplate{background canvas}{ddefault}{%
% \includegraphics[width=\paperwidth,height=\paperheight]{}
% }
% \defbeamertemplate{background canvas}{standout}{%
% \includegraphics[width=\paperwidth,height=\paperheight]{}
% }
%
% \BeforeBeginEnvironment{frame}{%
% \setbeamertemplate{background canvas}[ddefault]%
% }
%
% \makeatletter
% \define@key{beamerframe}{standout}[true]{%
% \setbeamertemplate{background canvas}[standout]%
% }
% \makeatother
% % use different frame headline styles
% \defbeamertemplate{headline}{mydefault}{%
% \rule{.5\paperwidth}{14mm}%
% }
% \defbeamertemplate{headline}{rightslide}{%
% \emph{\hfill\rule{.5\paperwidth}{14mm}}%
% }
%
% \BeforeBeginEnvironment{frame}{%
% \setbeamertemplate{headline}[mydefault]%
% }
%
% \makeatletter
% \define@key{beamerframe}{rightslide}[true]{%
% \setbeamertemplate{headline}[rightslide]%
% }
% \makeatother
\title[]{Introduction to plant breeding}
\author[
Deependra Dhakal
]{Deependra Dhakal}
\institute[
]{
Gokuleshwor Agriculture and Animal Science College\\
Tribhuwan University\\
\textit{[email protected]}\\
\url{https://rookie.rbind.io}
}
\date[
Academic year 2019-2020
]{
Academic year 2019-2020
}
\begin{document}
% Hide progress bar and footline on titlepage
\begin{frame}[plain]
\titlepage
\end{frame}
\hypertarget{introduction}{%
\section{Introduction}\label{introduction}}
\begin{frame}{}
\protect\hypertarget{section}{}
\begin{figure}
{\centering \includegraphics[width=0.45\linewidth]{./images/x_men_mutants}
}
\caption{Mutant seekers}\label{fig:x-men}
\end{figure}
\end{frame}
\begin{frame}{What experts say\ldots{}}
\protect\hypertarget{what-experts-say}{}
``For more than half a century, I have worked with the production of
more and better wheat for feeding the hungry world, but wheat is merely
a catalyst, a part of the picture. I am interested in the total
development of human beings. Only by attacking the whole problem can we
raise the standard of living for all people, in all communities, so that
they will be able to live decent lives. This is something we want for
all people on this planet''.
Norman E. Borlaug (March 25,1914 - September 12, 2009)
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-1}{}
\begin{figure}
\includegraphics[width=0.45\linewidth,height=0.75\textheight]{./images/borlaug_wheat_crossing_block} \caption{Dr. Borlaug working in a wheat crossing block}\label{fig:borlaug}
\end{figure}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-2}{}
``The greatest service which can be rendered any country is to add a
useful plant to its culture; especially a bread grain''
Thomas Jefferson
\begin{figure}
\includegraphics[width=0.45\linewidth,height=0.75\textheight]{./images/Triticum3} \caption{Modern wheat genotype under test in cool temperate region (UK)}\label{fig:modern-triticum-uk}
\end{figure}
\end{frame}
\begin{frame}{Background (Overview)}
\protect\hypertarget{background-overview}{}
\begin{itemize}
\tightlist
\item
  There are over a quarter of a million plant species, of which
  approximately 5000 are cultivated, but only about 100 can be
  considered important crops.
\item
  Ancestors of most modern crops (used either in the food chain or
  solely in the industrial chain) had a different appearance, and even
  form, from what is seen today.
\item
  Plant breeding has a commercial history of at least two-thirds of a century.
\item
  By the end of the 19th century, artificial crossing, bulk and pedigree
  methods, and alternative progeny testing schemes were already in use
\end{itemize}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-3}{}
\begin{itemize}
\tightlist
\item
  After the rediscovery of Mendel's laws and developments in genetics,
  plant breeding has evolved rapidly
\item
  In the period deemed the Green Revolution, the genetic contribution to
  yield gains was of secondary importance (Why?)
\item
  Rising consciousness of environmental stewardship underlines a greater
  role for plant breeding
\begin{itemize}
\tightlist
\item
Optimum environments
\item
Marginal environments
\item
Economic value of resources
\end{itemize}
\item
  Extremely wide and cross-cutting topic nowadays!
\item
Plant breeding has answered many fundamental genetic questions and
posed new ones too.
\end{itemize}
\end{frame}
\hypertarget{concept}{%
\section{Concept}\label{concept}}
\begin{frame}{}
\protect\hypertarget{section-4}{}
\begin{columns}
\begin{column}{0.5\textwidth}
"Breeder's eye" viewpoint of Plant Breeding:
\begin{itemize}
\item Plant breeding is an applied science and an art
\item Early plant breeders depended primarily on intuition, skill, and judgement in their work.
\item Plant breeding was practiced first when people learned to look for superior plants to harvest for seed; thus selection became the earliest method of plant breeding. However, few people may have made conscious efforts in doing so.
\end{itemize}
\end{column}
\begin{column}{0.5\textwidth} %%<--- here
Scientific viewpoint of Plant Breeding:
\begin{itemize}
\item "Breeding" is about a more active process of crossing different strains of plant for particular results.
\item Plant evolution directed by man (N.I. Vavilov)
\item Process of generating and utilizing variability
\item "Selection" is about picking the best from each generation
\end{itemize}
\end{column}
\end{columns}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-5}{}
\begin{itemize}
\item
As human knowledge about plants increased, people were able to select
more intelligently.
\item
With the discovery of sex in plants, hybridization was added to
breeding techniques.
\item
Although hybridization was practiced before the time of Mendel, its
significance in inheritance was not clearly understood.
\item
Mendel's experiments provided a basis for understanding the mechanism
of heredity and how it may be manipulated in the development of
improved varieties.
\item
A more precise explanation of the heredity mechanism has become
possible in recent years with advances in biochemical genetics.
\item
  Now, plant breeding is more about making precise crosses and accurate
  predictions
\end{itemize}
\[\Large \Delta{G}=\frac{i\,h^2\,\sigma_P}{L}\]
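As a rough worked illustration (assumed numbers, reading the symbols as selection intensity $i$, heritability $h^2$, phenotypic standard deviation $\sigma_P$ and cycle length $L$): selecting the best 5\% of plants ($i \approx 2.06$) for a trait with $h^2 = 0.5$ and $\sigma_P = 10$ units over a one-year cycle gives
\[\Delta G \approx \frac{2.06 \times 0.5 \times 10}{1} \approx 10 \;\text{units per year}.\]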
\end{frame}
\hypertarget{definition}{%
\section{Definition}\label{definition}}
\begin{frame}{}
\protect\hypertarget{section-6}{}
\begin{itemize}
\item
\emph{Plant breeding is the art and the science of changing and
improving the heredity of plants}.
\item
Breeding is about manipulating plant attributes, structure and
composition, to make them more useful to humans.
\item
  Plant breeding is essentially a selection, made by man, of the best
  plants within a variable population as a potential cultivar.
\item
In other words plant breeding is a `selection' made possible by the
existence of `variability'.
\end{itemize}
\end{frame}
\begin{frame}{How do you define art?}
\protect\hypertarget{how-do-you-define-art}{}
\end{frame}
\hypertarget{history}{%
\section{History}\label{history}}
\begin{frame}{History (Overview)}
\protect\hypertarget{history-overview}{}
\begin{itemize}
\tightlist
\item
Broadly, two distinct stages in plant breeding could be characterized
\begin{itemize}
\tightlist
\item
Domestication of the first crops to the birth of Mendelian genetics
\item
Acceptance of Medelian genetics to post-mendel era
\end{itemize}
\item
The work of Gregor Mendel and further advances in science that
followed his discoveries established that plant traits are controlled
by hereditary factors or genes that consist of DNA (deoxyribose
nucleic acid, the hereditary material).
\end{itemize}
\end{frame}
\begin{frame}{History (Pre-mendel era)}
\protect\hypertarget{history-pre-mendel-era}{}
\begin{itemize}
\tightlist
\item
Plant breeding started with sedentary agriculture and the
domestication of the first agricultural plants, the cereals, which
were chosen by early man.
\item
A family of French seed growers, the Vilmorins, established in 1727
the first company devoted to plant breeding and the production of new
varieties.
\item
  The sexuality of plants was described by Caesalpinus in 1583, and in
  1696 Camerarius published an essay entitled `De sexu plantarum', but
  it was Kolreuter, a German botanist, who was the first to exploit this
  knowledge in the production of the first artificial plant hybrids in
  Nicotiana.
\end{itemize}
\end{frame}
\note{
Sedentary agriculture and domestication led to the rapid elimination of undesirable characters such as seed shattering and dormancy. It seems natural that the need to collect fruits, seeds and roots for nutrition brought with it the knowledge to identify plants, aspects of their biology and their potential as food, poison or other uses. The need for pollination for fruit setting in dates was recognized some nine centuries before Christ, as can be deduced from the observation of an Assyrian bas-relief showing the pollination of female date palms with male pollen. Until the first decennium of this century plant breeding was in the hands of 'experts' who in many cases were unable to make progress due to a lack of knowledge of the fundamental plant processes such as the reproductive system. In spite of this, however, some important goals were reached before the rediscovery of Mendel's laws and the consecration of genetics as a science. Some examples are worth mentioning. In France, in the 17th century, several varieties of 'heading lettuce' were developed, some of which are still in cultivation.
}
\note{
A family of French seed growers, the Vilmorins, established in 1727 the first company devoted to plant breeding and the production of new varieties. A member of the family, Louis de Vilmorin, was the first, some years later, to use the progeny test, evaluating a selection by the study of its descendants.
}
\note{
A well-known example of premendelian plant breeding is that of sugar beet. In 1747 Margraaf discovered that the roots of the forage beet contained about 6\% of sucrose. By the beginning of the 19th century, Achard, by means of mass selection, increased this to 11\%. Application of the Vilmorin pedigree method had further increased this to 16\% by 1810.
}
\note{
The use of artificial crosses in premendelian breeding can be exemplified by the case of Fragaria x ananassa developed in the botanical garden of Paris by Duchesne, in the 18th century, by crossing F. chiloensis with F. virginiana. In England at about the same time new varieties of fruits, wheat and peas were being obtained by artificial hybridization.
}
\begin{frame}<presentation:0>[noframenumbering] % this will make the empty frame disappear and unnumbered
% \begin{frame}<0>[noframenumbering] % same due to this
\end{frame}
\begin{frame}{History (Post-mendel era)}
\protect\hypertarget{history-post-mendel-era}{}
\begin{itemize}
\tightlist
\item
After the rediscovery of the work of Mendel in 1900 it was some six
years before Bateson, who coined the name `genetics' for the new
science, realized that this new discipline could give a scientific
basis and new openings to plant breeding methods.
\item
Genetics has given to breeding a better knowledge of the processes
involved in the mechanisms of variability and the necessary
information for regulating and increasing such variability.
\end{itemize}
\end{frame}
\note{
Modern plant breeding is, as has already been stated, applied genetics, but its scientific basis is broader and uses, as conceptual and technical tools, cytology, systematics, physiology, pathology, entomology, chemistry and statistics and has also developed its own technology. There have been many advances in these various areas which have made a substantial contribution to plant breeding. These include the demonstration of the auto- and allopolyploid nature of many crop plants and their origin. This opened the way to the production of artificial polyploid forms and their direct or indirect exploitation. The discovery of the possibility of increasing the frequency of mutations by means of radiation or chemicals was hailed as a means of increasing the variability available to the breeder. There have been many other developments which have all contributed to the progress and success of plant breeding too numerous to exemplify in detail but include such areas as quantitative inheritance and genotype x environment interaction, resistance breeding, and the conservation and characterization of genetic resources.
}
\note{
More recent genetical advances than those outlined earlier have allowed breeders to design new methods as is well exemplified by the developments in such areas as RFLP mapping and marker-assisted selection, gene cloning and genetic transformation. The ceilings for yield that some authorities claim have been reached will be surpassed by the combined application of the classical and the new methods available to the breeder. Where yield as such is not the prime criterion of a breeding programme they will allow the production of more efficient cultivars which can maintain production without the requirement for expensive inputs. At the same time breeding can and will make a positive contribution to halting the decline in the agricultural environment.
}
\begin{frame}<presentation:0>[noframenumbering] % this will make the empty frame disappear and unnumbered
% \begin{frame}<0>[noframenumbering] % same due to this
\end{frame}
\begin{frame}{History (Domestication and evolution)}
\protect\hypertarget{history-domestication-and-evolution}{}
\begin{figure}
\includegraphics[width=0.6\textwidth, keepaspectratio,height=0.38\textheight]{./images/rye_crop} \caption{Rye (\textit{Secale cereale}), a wild relative of wheat}\label{fig:secale-cereale}
\end{figure}
\begin{figure}
\includegraphics[width=0.6\textwidth, keepaspectratio,height=0.38\textheight]{./images/wheat_borlaug-100} \caption{Modern wheat \textit{Triticum aestivum}}\label{fig:modern-wheat}
\end{figure}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-7}{}
\begin{figure}
\includegraphics[width=0.6\textwidth, keepaspectratio,height=0.45\textheight]{./images/rose_prickly_or_common} \caption{rose ancestor (Wild rose)}\label{fig:rose-wild}
\end{figure}
\begin{figure}
\includegraphics[width=0.6\textwidth, keepaspectratio,height=0.45\textheight]{./images/rose-modern} \caption{Rose modern}\label{fig:rose-modern}
\end{figure}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-8}{}
\begin{figure}
\includegraphics[width=0.45\linewidth,height=0.4\textheight]{./images/Tomato_galapagos_Solanum-cheesmanii} \caption{Wild tomato (Lycopersicum cheesmanii); Unique and rare, wild, small cherry tomato from the rocky, lava flow areas of Ecuador's famed Galapagos Islands. These cherry shaped, yellowish-orange fruited tomatoes are flavorful and have lemon scented foliage. They enjoy heat and tolerate drought. Similar to cultivated tomatoes in flavor but with more disease and insect tolerance, as well as salt water tolerance. Plants typically vine, are very productive, and easy to grow.}\label{fig:tomato-wild}
\end{figure}
\begin{figure}
\includegraphics[width=0.6\textwidth, keepaspectratio,height=0.4\textheight]{./images/modified_tomatoes} \caption{Modern tomato (Lycopersicum esculentum)}\label{fig:tomato-modern}
\end{figure}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-9}{}
\begin{figure}
\includegraphics[width=0.6\textwidth, keepaspectratio,height=0.45\textheight]{./images/brassica_arabidopsis} \caption{Arabidopsis spp.}\label{fig:wild-brassica}
\end{figure}
\begin{figure}
\includegraphics[width=0.6\textwidth, keepaspectratio,height=0.45\textheight]{./images/Ornamental_cabbage} \caption{Modern day cauliflower}\label{fig:modern-brassica}
\end{figure}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-10}{}
\begin{figure}
\includegraphics[width=0.6\textwidth, keepaspectratio,height=0.45\textheight]{./images/Teosinte} \caption{Wild progenitor of modern day maize -- Teosinte}\label{fig:maize-ancestor-teosinte}
\end{figure}
\begin{figure}
\includegraphics[width=0.6\textwidth, keepaspectratio,height=0.45\textheight]{./images/Teosinte_maize} \caption{Modern day maize (Zea mays)}\label{fig:maize-modern}
\end{figure}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-11}{}
``Charles Darwin's writing in the nineteenth century first formulated
the idea of evolution, the idea that plants and animals are in constant
competition, which results in the fittest or the best adapted surviving
and passing on their genes to the next generation, those less
well-adapted dying or failing to reproduce and so not passing on their
genetic code. Life, then, is a constant winnowing and sifting of genetic
material.'' (Kingsbury 2009)
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-12}{}
\begin{itemize}
\item
  Historically, good seed from high-yielding plants was stored in woven
baskets and bartered for metal tools or cloth.
\item
Nowadays test tubes and computers have replaced the baskets, and huge
research budgets have replaced the barter.
\end{itemize}
\end{frame}
\begin{frame}{History (Interesting facts)}
\protect\hypertarget{history-interesting-facts}{}
\begin{itemize}
\item
  Luther Burbank was the last great prescientific plant breeder.
\item
Burbank admired Darwin but not Mendel
\item
  A remarkably strong start in plant breeding was made by the Soviet
  Communists in the twentieth century, but disappointingly, intrigue and
  ideology ensured that Mendelian genetics was discarded.
\item
For detailed history refer to Principles of Plant Genetics and
Breeding (Acquaah 2009), Chapter 2 (Page 22-39).
\item
For detailed history of Genetics refer to Griffiths et al. (2015) .
\end{itemize}
\end{frame}
\begin{frame}{History (Timeline)}
\protect\hypertarget{history-timeline}{}
\begin{table}
\caption{\label{tab:history-table1}History of genetics}
\centering
\fontsize{8}{10}\selectfont
\begin{tabular}[t]{l>{\raggedright\arraybackslash}p{40em}}
\toprule
Year & Event\\
\midrule
\rowcolor{gray!6} 1865 & Gregor Mendel showed that traits are controlled by discrete factors now known as genes.\\
1869 & Friedrich Miescher isolated DNA from the nuclei of white blood cells.\\
\rowcolor{gray!6} 1903 & Walter Sutton and Theodor Boveri hypothesized that chromosomes are the hereditary elements.\\
1905 & William Bateson introduced the term “genetics” for the study of inheritance.\\
\rowcolor{gray!6} 1908 & G. H. Hardy and Wilhelm Weinberg proposed the Hardy–Weinberg law, the foundation for population genetics\\
\addlinespace
1910 & Thomas H. Morgan demonstrated that genes are located on chromosomes.\\
\rowcolor{gray!6} 1913 & Alfred Sturtevant made a genetic linkage map of the Drosophila X chromosome, the first genetic map.\\
1918 & Ronald Fisher proposed that multiple Mendelian factors can explain continuous variation for traits, founding the field of quantitative genetics.\\
\rowcolor{gray!6} 1931 & Harriet Creighton and Barbara McClintock showed that crossing over is the cause of recombination.\\
1941 & Edward Tatum and George Beadle proposed the one-gene—one-polypeptide hypothesis.\\
\bottomrule
\end{tabular}
\end{table}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-13}{}
\begin{table}
\caption{\label{tab:history-table2}History of genetics...}
\centering
\fontsize{8}{10}\selectfont
\begin{tabular}[t]{l>{\raggedright\arraybackslash}p{40em}}
\toprule
Year & Event\\
\midrule
\rowcolor{gray!6} 1944 & Oswald Avery, Colin MacLeod, and Maclyn McCarty provided compelling evidence that DNA is the genetic material in bacterial cells.\\
1946 & Joshua Lederberg and Edward Tatum discovered bacterial conjugation.\\
\rowcolor{gray!6} 1948 & Barbara McClintock discovered mobile elements (transposons) that move from one place to another in the genome.\\
1950 & Erwin Chargaff showed DNA composition follows some simple rules for the relative amounts of A, C, G, and T.\\
\rowcolor{gray!6} 1952 & Alfred Hershey and Martha Chase proved that DNA is the molecule that encodes genetic information.\\
\addlinespace
1953 & James Watson and Francis Crick determined that DNA forms a double helix.\\
\rowcolor{gray!6} 1958 & Matthew Meselson and Franklin Stahl demonstrated the semiconservative nature of DNA replication.\\
1958 & Jérôme Lejeune discovered that Down syndrome resulted from an extra copy of the 21st chromosome.\\
\rowcolor{gray!6} 1961 & François Jacob and Jacques Monod proposed that enzyme levels in cells are controlled by feedback mechanisms.\\
1961-1967 & Marshall Nirenberg, Har Gobind Khorana, Sydney Brenner, and Francis Crick "cracked" the genetic code.\\
\bottomrule
\end{tabular}
\end{table}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-14}{}
\begin{table}
\caption{\label{tab:history-table3}History of genetics...}
\centering
\fontsize{8}{10}\selectfont
\begin{tabular}[t]{l>{\raggedright\arraybackslash}p{40em}}
\toprule
Year & Event\\
\midrule
\rowcolor{gray!6} 1968 & Motoo Kimura proposed the neutral theory of molecular evolution.\\
1977 & Fred Sanger, Walter Gilbert, and Allan Maxam invented methods for determining the nucleotide sequences of DNA molecules.\\
\rowcolor{gray!6} 1980 & Christiane Nüsslein-Volhard and Eric F. Wieschaus defined the complex of genes that regulate body plan development in Drosophila.\\
1989 & Francis Collins and Lap-Chee Tsui discovered the gene causing cystic fibrosis.\\
\rowcolor{gray!6} 1993 & Victor Ambrose and colleagues described the first microRNA.\\
\addlinespace
1995 & First genome sequence of a living organism (Haemophilus influenzae) published.\\
\rowcolor{gray!6} 1996 & First genome sequence of a eukaryote (Saccharomyces cerevisiae) published.\\
1998 & First genome sequence of an animal (Caenorhabditis elegans) published.\\
\rowcolor{gray!6} 2000 & First genome sequence of a plant (Arabidopsis thaliana) published.\\
2001 & The sequence of the human genome first published.\\
\addlinespace
\rowcolor{gray!6} 2006 & Andrew Fire and Craig Mello win the Nobel prize for their discovery of gene silencing by double-stranded RNA.\\
2012 & John Gurdon and Shinya Yamanaka win the Nobel prize for their discovery that just four regulatory genes can convert adult cells into stem cells.\\
\bottomrule
\end{tabular}
\end{table}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-15}{}
\begin{figure}
\includegraphics[width=0.4\linewidth,height=0.7\textwidth, keepaspectratio]{./images/plant_breeding_history} \caption{History of plant breeding (Rolf H. J. Schlegel, 2018)}\label{fig:plb-history-book}
\end{figure}
\end{frame}
\hypertarget{objectives}{%
\section{Objectives}\label{objectives}}
\begin{frame}{Objectives}
\begin{itemize}
\tightlist
\item
Before initiating a breeding project, clear breeding objectives are
defined based on factors such as producer needs, consumer preferences
and needs, and environmental impact.
\end{itemize}
\end{frame}
\begin{frame}{Broadly, objectives could be,}
\protect\hypertarget{broadly-objectives-could-be}{}
\begin{itemize}
\tightlist
\item
Feed the growing population
\item
Maximize resource and energy use efficiency
\item
  Hasten return on investment
\item
  Introgress desired modifications into crop species
\end{itemize}
\end{frame}
\begin{frame}{Specifically, in crop improvement,}
\protect\hypertarget{specifically-in-crop-improvement}{}
\begin{itemize}
\tightlist
\item
Increasing the potential productivity of a plant by modifying its
morphological characteristics such as:
\begin{itemize}
\tightlist
\item
The number of kernels per ear in a cereal
\item
The weight of individual seeds within the pod of a pulse,
\end{itemize}
\item
Modifying physiological traits such as:
\begin{itemize}
\tightlist
\item
harvest index,
\item
utilization of nutrients,
\item
tolerance to stress.
\end{itemize}
\item
Quality and nutritive value are now of increasing importance,
particularly in association with improved efficiency of production.
\end{itemize}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-16}{}
\begin{itemize}
\tightlist
\item
Modern agriculture is highly mechanized and for this reason some
breeding programmes include objectives to make the crop more amenable
to mechanical handling.
\begin{itemize}
\tightlist
\item
Development of monogerm beets for mechanical sowing thus eliminating
the need for thinning, or the introduction of `jointless' tomatoes
for mechanical harvesting.
\item
Use of some agrochemicals is often coupled with the need for
specific crop characteristics e.g.~herbicide resistance.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Activities}
\protect\hypertarget{activities}{}
\begin{figure}
\includegraphics[width=0.7\textwidth,keepaspectratio,height=0.6\textheight]{./images/interdisciplinary_science} \caption{Inter-desciplinary linkage of plant breeding}\label{fig:plb-disciplines}
\end{figure}
\end{frame}
\begin{frame}{Inter-disciplinary linkage of plant breeding (Poehlman
1987)}
\protect\hypertarget{inter-disciplinary-linkage-of-plant-breeding-poehlman1987breeding}{}
\begin{itemize}
\item
Botany: Plant breeders should be accomplished botanists in order to
understand the taxonomy, anatomy, morphology, and reproduction of the
plants with which they work.
\item
Genetics and Cytogenetics: The plant breeder needs a thorough
understanding of the mechanism of heredity in plants since modern
plant-breeding methods are based on a knowledge of genetic principles
and chromosome behavior. This knowledge is being extended to the
molecular level with advances in biochemical genetics.
\item
Plant Physiology: Variety adaptation is determined by the response of
plants to their environment, which includes the effects of heat, cold,
drought, and soil nutrient response. The plant breeder strives to make
inherent modifications of physiological processes that will enable the
plant to function more efficiently.
\end{itemize}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-17}{}
\begin{itemize}
\item
Plant Pathology: Plant disease reduces crop yields. Host resistance is
an important means of combating many plant diseases. Evaluation of the
response ofthe plant genotype to infection by the pathogen is an
essential part of breeding for host plant resistance.
\item
Entomology: Biological control of insect populations by breeding for
insect resistance is an important way of reducing insect damage in
crop plants.
\item
Plant Biochemistry: Inherent improvements in the nutritive value of a
crop variety are given attention by the plant breeder. Suitability for
industrial utilization often determines the market demand for a
particular variety of a crop. This includes such characteristics as
the milling and baking qualities of a wheat variety, the cooking and
eating qualities of a rice variety, or the fiber qualities of a cotton
variety. Biochemical genetics is contributing toward a better
understanding of the structure and function of the gene.
\end{itemize}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-18}{}
\begin{itemize}
\item
Statistics: The plant breeder compares the performance of many
genetically different strains. Sound field plot techniques and
suitable methods for statistical analyses of data are necessary to
obtain reliable results and to interpret the results correctly. The
application of statistical procedures has provided for a better
understanding of the inheritance of quantitative characteristics and
for predicting the possible genetic advance that may be obtained with
particular systems of mating.
\item
Agronomy: In addition to all of these, the breeder of field crops
should be a sound agronomist. Plant breeders should know crops and
their production. They should understand what the farmer wants and
needs in the way of new varieties. Only then will they be able to
evaluate critically the breeding materials available to them, plan an
efficient breeding program, and direct their breeding efforts toward
the agronomically important objectives.
\end{itemize}
\end{frame}
\hypertarget{steps-activities-in-plant-breeding}{%
\section{Steps (Activities) in plant
breeding:}\label{steps-activities-in-plant-breeding}}
\begin{frame}{Steps (Activities) in plant breeding:}
\begin{itemize}
\item
Setting objective/s
\item
Germplasm
\item
Creation of variation
\begin{itemize}
\tightlist
\item
Genetic variation can be created by domestication, germplasm
collection, plant introduction, hybridization, polyploidy,
somaclonal variation and genetic engineering
\end{itemize}
\item
Selection
\begin{itemize}
\tightlist
\item
Identification and isolation of plants having desirable combinations
of characters and growing their progeny is called selection.
Selection is necessarily based on phenotype. Various breeding
methods have been designed to increase efficacy of selection.
Selection finally yields an improved line or population.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-19}{}
\begin{itemize}
\tightlist
\item
Evaluation
\begin{itemize}
\tightlist
\item
Newly selected lines/population are tested for yield and other
traits. Performance is compared with existing best varieties
(checks). Evaluation is a step-wise process, ordinarily conducted at
several locations for three or more years.
\end{itemize}
\item
Multiplication
\begin{itemize}
\tightlist
\item
This step concerns the large scale production of source/certified
seed after release and notification of varieties. Seed production is
usually done by seed production organizations, in concert with seed
certification agencies.
\end{itemize}
\item
Certification and cultivar release
\begin{itemize}
\tightlist
\item
Certified seed is ultimately sold to the farmers who use it for
commercial crop production. This activity alone makes it possible to
reap the economic benefits from the above activities.
\end{itemize}
\end{itemize}
\end{frame}
\hypertarget{achievements}{%
\section{Achievements}\label{achievements}}
\begin{frame}{Achievements}
\begin{itemize}
\tightlist
\item
Yield increase
\begin{itemize}
\item
Has been achieved through either directly targeting the yield
\emph{per se} or its components. Theoretically, any characteristic
that enables the plant to perform better under a given biotic/abiotic
stress condition can increase yield.
\item
In the US, the yield of corn rose from about 2000 kg/ha in the 1940s to
about 7000 kg/ha in the 1990s.
\item
In England, it took only 40 years for wheat yields to rise from 2
metric tons/ha to 6 metric tons/ha.
\item
Between 1961 and 2000, FAO data show that wheat yield increased by
681\% in China, 301\% in India, 299\% in Europe, 235\% in Africa,
209\% in South America, and 175\% in the USA.
\item
Roughly half of the entire gain in yield levels can be attributed to
genetic improvement \emph{per se}, the remainder being due to improved
agronomic practices.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-20}{}
\begin{itemize}
\tightlist
\item
Enhanced value of food crops due to nutritional quality or
compositional trait improvement or special purpose crops.
\begin{itemize}
\tightlist
\item
The shelf life of fruits (e.g., tomato) has been extended through
the use of genetic engineering techniques to reduce the expression
of compounds associated with fruit deterioration.
\item
For example, cereals tend to be low in lysine and threonine, while
legumes tend to be low in cysteine and methionine (both
sulfur-containing amino acids). Biofortified cereal crops with
enhanced amino acid content are now available (e.g., QPM maize).
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-21}{}
\begin{itemize}
\tightlist
\item
Enhanced value of food crops \ldots{}
\begin{itemize}
\tightlist
\item
The wheat varieties Zincol, ZincShakti and Mayil are enriched for
protein and zinc. Similarly, the recently notified lentil varieties
Khajura Masuro-4 (ILL-7723) and Black Masuro are genotypes with high
zinc and iron content.
\item
Rice, a major world food, lacks pro-vitamin A (the precursor of
vitamin A). IRRI is undertaking the Golden Rice project with Golden
Rice 2 (a variety with a 20-fold increase in pro-vitamin A),
implementing a concept developed at Syngenta's Jealott's Hill
International Research Centre in Berkshire, UK.
\item
Breeding for reduction of toxins and anti-nutritional constituents
(e.g., aflatoxin,
\href{https://en.wikipedia.org/wiki/Phytohaemagglutinin}{phytohaemagglutinin}).
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-22}{}
\begin{itemize}
\tightlist
\item
Ornamental breeding, e.g., stripy petunias (Petunia x hybrida)
\item
Improvement of crop production systems and relieving pressures on
environmental resources
\begin{itemize}
\tightlist
\item
GM crops resistant to pests that would otherwise require the use of
pesticides.
\item
Fertilizer responsive crops
\item
Elimination of wild traits (e.g., photoperiod responsiveness in paddy,
shattering behavior of brassica oilseeds)
\item
Corn (the ultimate construction of nature and culture working hand
in hand).
\end{itemize}
\end{itemize}
\end{frame}
\hypertarget{constraints}{%
\section{Constraints}\label{constraints}}
\begin{frame}{}
\protect\hypertarget{section-23}{}
\begin{enumerate}
\tightlist
\item
Plant breeding in political arena
\end{enumerate}
\begin{itemize}
\tightlist
\item
The \emph{north-south gene drain}
\item
Biopiracy
\item
The corporate food chain
\item
Loss of crop biodiversity
\item
Loss of sovereignty
\end{itemize}
\begin{enumerate}
\setcounter{enumi}{1}
\item
The nature of genetics and breeding attracts highly focused people
rather than ``big picture'' people
\item
Subject to environmental vagaries
\end{enumerate}
\begin{itemize}
\tightlist
\item
Field conditions are not always conducive to optimal plant growth and
development. Since undertaking a breeding program requires extended
periods, various factors, including weather and biotic and abiotic
stresses, may cause an experiment to fail.
\end{itemize}
\end{frame}
\hypertarget{opportunities}{%
\section{Opportunities}\label{opportunities}}
\begin{frame}{}
\protect\hypertarget{section-24}{}
\begin{itemize}
\tightlist
\item
Need to develop plants with traits that confer adaptive benefits in
stress environments, in the face of climate change.
\item
Addressing world food and feed quality needs. An estimated 800 million
people in the world, including 200 million children, suffer chronic
under-nutrition, with its attendant health issues.
\item
Addressing food supply needs for a growing population. The latest
report of the UN Population Division places the world population at
7,383,009,000 (\href{https://population.un.org/}{UN population
database, 2019}).
\end{itemize}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-25}{}
\begin{figure}
\includegraphics[width=0.7\linewidth]{./images/world_production_of_major_crops} \caption{World production of major crops (1955-2011); Source: Brown and Campos, 2014, Page 21}\label{fig:world-production-major}
\end{figure}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-26}{}
\begin{itemize}
\item
(Brown and Campos 2014), page 26: graph of total world crop production
and total world crop production per capita (1955 to 2011)
\item
It is plant breeders and their stories who explain the dramatic gap
between the thin white roots of wild carrot and the plump orange ones
on the supermarket shelf.
\item
Many people in the world still find feeding on a diverse diet of crops
and meat or fish a rare luxury, and suffer malnutrition as a result.
That plant breeding could potentially rescue them from this situation
is a powerful illustration of its importance.
\end{itemize}
\end{frame}
\begin{frame}{}
\protect\hypertarget{section-27}{}
\begin{itemize}
\item
Then again, well encapsulated in Chinese philosophy is the idea that
once people have enough to live on and their survival is assured, they
turn almost at once to the improvement of their surroundings and the
creation of objects of beauty.
\item
Modern advances in genetic engineering bring the ``designer plant'',
and even the creation of entirely new horticultural species, much
closer, a prospect that fills some people with alarm.
\item
With continued unraveling of the genetic code of life, the
possibilities for plant breeding are becoming even greater.
\item
Industrial and other end-use requirements.
\end{itemize}
\end{frame}
\hypertarget{bibliography}{%
\section{Bibliography}\label{bibliography}}
\begin{frame}{References}
\protect\hypertarget{references}{}
\hypertarget{refs}{}
\begin{cslreferences}
\leavevmode\hypertarget{ref-acquaah2009principles}{}%
Acquaah, George. 2009. \emph{Principles of Plant Genetics and Breeding}.
John Wiley \& Sons.
\leavevmode\hypertarget{ref-brown2014plant}{}%
Brown, Peter, Jack Caligari, and Hugo A. Campos. 2014. \emph{Plant
Breeding}. Wiley Blackwell.
\leavevmode\hypertarget{ref-griffiths2015introduction}{}%
Griffiths, Anthony JF, Susan R Wessler, Richard C Lewontin, William M
Gelbart, David T Suzuki, Jeffrey H Miller, and others. 2015. \emph{An
Introduction to Genetic Analysis}. W.H.Freeman.
\leavevmode\hypertarget{ref-kingsbury2009hybrid}{}%
Kingsbury, Noel. 2009. \emph{Hybrid: The History and Science of Plant
Breeding}. University of Chicago Press.
\leavevmode\hypertarget{ref-poehlman1987breeding}{}%
Poehlman, John M. 1987. \emph{Breeding Field Crops}. Springer Science \&
Business Media.
\end{cslreferences}
\end{frame}
\end{document}
\documentclass[a4paper,10pt]{article}
\usepackage{fullpage}
\usepackage{times}
\usepackage{url}
\begin{document}
\title{L41: Lab 2 - Kernel Implications of IPC}
\author{Dr Robert N.M. Watson
\and
Dr Graeme Jenkinson}
\date{Michaelmas Term 2016}
\maketitle
%
% Key methodology: how to trace stuff with DTrace; probe effect; benchmark
% reproducibility; statistical significance.
%
% Key kernel concepts: system calls, other traps, kernel memory allocation,
% (perhaps) kernel threads, I/O behaviour, different types of linking, the
% overhead imposed by filesystems.
%
\noindent
The goals of this lab are to:
\begin{itemize}
\item Continue to gain experience tracing user-kernel interactions via system calls and traps
\item Explore the performance of varying IPC models, buffer sizes, and process models
% \item Measure the probe effect and explore mitigation techniques such as sampling
\item Gather data to support writing your first assessed lab report
\end{itemize}
\noindent
You will do this by using DTrace to analyse the behaviour of a potted,
kernel-intensive IPC benchmark.
\section*{Background: POSIX IPC objects}
POSIX defines several types of Inter-Process Communication (IPC) objects,
including pipes (created using the \texttt{pipe()} system call) and sockets
(created using the \texttt{socket()} and \texttt{socketpair()} system calls).
\begin{description}
\item[Pipes] are used most frequently between pairs of processes in a UNIX
\textit{process pipeline}: a chain of processes started by a single command
line, whose output and input file descriptors are linked.
Although pipes can be set up between unrelated processes, the primary means of
acquiring a pipe is through inheritance across \texttt{fork()}, meaning that
they are used between closely related processes (e.g., with a common parent
process).
\item[Sockets] are used when two processes are created in independent contexts
and must later rendezvous -- e.g., via the filesystem, but also via TCP/IP.
In typical use, each endpoint process creates a socket via the
\texttt{socket()} system call; these sockets are then interconnected through use of
\texttt{bind()}, \texttt{listen()}, \texttt{connect()}, and \texttt{accept()}.
However, there is also a \texttt{socketpair()} system call that returns a pair
of interconnected endpoints in the same style as \texttt{pipe()} -- convenient
for us as we wish to compare the two side-by-side.
\end{description}
Both pipes and sockets can be used to transmit ordered byte streams: a
sequence of bytes sent via one file descriptor that will be received reliably
on the other without loss or reordering.
As with file I/O, the \texttt{read()} and \texttt{write()} system calls can be used
to read and write data on file descriptors for pipes and sockets.
It is useful to know that these system calls are permitted to return
\textit{partial reads} and \textit{partial writes}: i.e., a buffer of some
size (e.g., 1k) might be passed as an argument, but only a subset of the
requested bytes may be received or sent, with the actual size returned via the
system call's return value.
This may happen if the in-kernel buffers for the IPC object are too small for
the full amount, or if \textit{non-blocking I/O} is enabled.
When analysing traces of IPC behaviour, it is important to consider both the
size of the buffer passed and the number of bytes returned in evaluating the
behaviour of the system call.
You may wish to read the FreeBSD \texttt{pipe(2)} and \texttt{socketpair(2)}
manual pages to learn more about these APIs before proceeding with the lab.
\section*{The benchmark}
As with our earlier I/O benchmark, the IPC benchmark is straightforward: it
sets up a pair of IPC endpoints referencing a shared pipe or socket, and then
performs a series of \texttt{write()} and \texttt{read()} system calls on the
file descriptors to send (and then receive) a total number of bytes of data.
Data will be sent using a smaller userspace buffer size -- although as hinted
above, there is no guarantee that a full user buffer will be sent or received
in any individual call.
Also as with the I/O benchmark, there are several modes of operation:
sending and receiving within a single thread, a pair of threads in the same
process, or between two threads in two different processes.
The benchmark will set up any necessary IPC objects, threads, and processes,
sample the start time using the \texttt{clock\_gettime()} system call, perform
the IPC loop (perhaps split over two threads), and then sample the finish time
using the \texttt{clock\_gettime()} system call.
Optionally, both the average bandwidth across the IPC object, and also more
verbose information about the benchmark configuration, may be displayed.
Both statically and dynamically linked versions of the binary are provided:
\texttt{ipc-static} and \texttt{ipc-dynamic}.
\subsection*{Compiling the benchmark}
%You will need to copy the lab bundle to your BeagleBone Black, untar it, and
%build it before you can begin work.
%Once you have configured the BBB so that you can log in (see \textit{L41: Lab
%Setup}), you can use SSH to copy the bundle to the BBB:
%\begin{verbatim}
%scp /anfs/www/html/teaching/1516/L41/labs/ipc.tgz [email protected]:/data
%\end{verbatim}
%\noindent
%Or via the web:
%\noindent
%\url{https://www.cl.cam.ac.uk/teaching/1516/L41/labs/ipc.tgz}
%\noindent
The laboratory IPC bundle (ipc.tgz) has been preinstalled onto the BeagleBone
Black (BBB) SD card image.
However, you will need to untar and build it before you can begin work.
Once you have configured the BBB so that you can log in (see \textit{L41: Lab
Setup}), untar and build the bundle:
\begin{verbatim}
# cd /data
# tar -xzf ipc.tgz
# cd ipc
# make
# cd ..
\end{verbatim}
\subsection*{Running the benchmark}
Once built, you can run the benchmark binaries as follows, with command-line
arguments specifying various benchmark parameters:
\begin{verbatim}
# ipc/ipc-static
\end{verbatim}
\noindent
or:
\begin{verbatim}
# ipc/ipc-dynamic
\end{verbatim}
If you run the benchmark without arguments, a small usage statement will be
printed, which will also identify the default IPC object type, IPC buffer, and
total IPC sizes configured for the benchmark.
As in the prior lab, you will wish to be careful to hold most variables
constant in order to isolate the effects of specific variables.
For example, you might wish to vary the IPC object type while holding the
total IPC size constant.
\subsection*{Required operation argument}
Specify the mode in which the benchmark should operate:
\begin{description}
\item[1thread] Run the benchmark entirely within one thread; note that, unlike
other benchmark configurations, this mode interleaves the IPC calls and must
place the file descriptors into non-blocking mode or risk deadlock.
This may have observable effects on the behaviour of the system calls with
respect to partial reads or writes.
\item[2thread] Run the benchmark between two threads within one process: one
as a `sender' and the other as a `receiver', with the sender capturing the
first timestamp, and the receiver capturing the second.
System calls are blocking, meaning that if the in-kernel buffer fills
during a \texttt{write()}, then the sender thread will sleep; if the
in-kernel buffer empties during a \texttt{read()}, then the receiver thread
will sleep.
\item[2proc] As with the \texttt{2thread} configuration, run the benchmark in
two threads -- however, those threads will be in two different processes.
The benchmark creates a second process using \texttt{fork()} that will run
the sender.
System calls in this variation are likewise blocking.
\end{description}
\subsection*{Optional I/O flags}
\begin{description}
\item[-b \textit{buffersize}] Specify an alternative userspace IPC buffer size
in bytes -- the amount of memory allocated to hold to-be-sent or received
IPC data.
The same buffer size will be used for both sending and receiving.
The total IPC size must be a multiple of buffer size.
\item[-i \textit{ipctype}] Specify the IPC object to use in the benchmark:
\texttt{pipe} or \texttt{local}.
\item[-t \textit{totalsize}] Specify an alternative total IPC size in bytes.
The total IPC size must be a multiple of userspace IPC buffer size.
\item[-B] Run in \texttt{bare mode}: disable normal quiescing activities such
as using \texttt{sync()} to cause the filesystem to synchronise before the
IPC loop runs, and using \texttt{sleep()} to await terminal-I/O quietude.
This will be the more appropriate mode in which to perform whole-program
analysis but may lead to greater variance if simply analysing the IPC loop.
\item[-s] When operating on a socket, explicitly set the in-kernel
socket-buffer size to match the userspace IPC buffer size rather than using
the kernel default.
Note that per-process resource limits will prevent use of very large buffer
sizes.
\end{description}
\subsection*{Terminal output flags}
\noindent
The following arguments control terminal output from the benchmark; remember
that output can substantially change the performance of the system under test,
and you should ensure that output is either entirely suppressed during tracing
and benchmarking, or that tracing and benchmarking only occurs during a period
of program execution unaffected by terminal I/O:
\begin{description}
\item[-q] \textit{Quiet mode} suppresses all terminal output from the benchmark,
which is preferred when performing whole-program benchmarking.
\item[-v] \textit{Verbose mode} causes the benchmark to print additional
information, such as the time measurement, buffer size, and total IPC size.
\end{description}
\subsection*{Example benchmark commands}
This command performs a simple IPC benchmark using a pipe and default
userspace IPC buffer and total IPC sizes within a single thread of a single
process:
\begin{verbatim}
# ipc/ipc-static -i pipe 1thread
\end{verbatim}
\noindent
This command performs the same pipe benchmark, but between two threads of the
same process:
\begin{verbatim}
# ipc/ipc-static -i pipe 2thread
\end{verbatim}
\noindent
And this command does so between two processes:
\begin{verbatim}
# ipc/ipc-static -i pipe 2proc
\end{verbatim}
\noindent
This command performs a socket-pair benchmark, and requests non-default
socket-buffer sizes synchronised to a userspace IPC buffer size of 1k:
\begin{verbatim}
# ipc/ipc-static -i local -s -b 1024 2thread
\end{verbatim}
\noindent
As with the I/O benchmark, additional information can be requested using
\textit{verbose mode}:
\begin{verbatim}
# ipc/ipc-static -v -i pipe 1thread
\end{verbatim}
\noindent
And, likewise, all output can be suppressed, and \textit{bare mode} can be
used, for whole-program analysis:
\begin{verbatim}
# ipc/ipc-static -q -B -i pipe 1thread
\end{verbatim}
\section*{Note on kernel configuration}
By default, the kernel limits the maximum per-socket socket-buffer size that
can be configured, in order to avoid resource starvation.
You will need to tune the kernel's default limits using the following command,
run as root, prior to running benchmarks.
Note that this should be set before any benchmarks are run, whether or not
they are explicitly configuring the socket-buffer size, as the limit will also
affect socket-buffer auto-sizing.
\begin{verbatim}
sysctl kern.ipc.maxsockbuf=33554432
\end{verbatim}
\section*{Notes on using DTrace}
On the whole, this lab will be concerned with just measuring the IPC loop,
rather than whole-program behaviour.
As in the last lab, it is useful to know that the system call
\texttt{clock\_gettime} is both run immediately before, and immediately after,
the IPC loop.
In this benchmark, these events may occur in different threads or processes,
as the sender performs the initial timestamp before transmitting the first
byte over IPC, and the receiver performs the final timestamp after receiving
the last byte over IPC.
You may wish to bracket tracing between a return probe for the former, and an
entry probe for the latter; see the notes from the last lab for an example.
As with the last lab, you will want to trace the key system calls of the
benchmark: \texttt{read()} and \texttt{write()}.
For example, it may be sensible to inspect \texttt{quantize()} results for
both the execution time distributions of the system calls, and the amount of
data returned by each (via \texttt{arg0} in the system-call return probe).
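For example, one possible starting point for the latter is a one-liner along
the following lines (a minimal sketch, not a complete solution; it assumes the
statically linked binary \texttt{ipc-static} and aggregates over the whole
program run rather than just the IPC loop):
\begin{verbatim}
dtrace -n 'syscall::read:return /execname == "ipc-static"/
    { @rsize = quantize(arg0); }'
\end{verbatim}
\noindent
A similar probe on \texttt{syscall::write:return}, or a flag toggled by the
\texttt{clock\_gettime()} probes, can be used to restrict attention to the IPC
loop itself.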
You will also want to investigate scheduling events using the \texttt{sched}
provider.
This provider instruments a variety of scheduling-related behaviours, but it
may be of particular use to instrument its \texttt{on-cpu} and
\texttt{off-cpu} events, which reflect threads starting and stopping
execution on a CPU.
You can also instrument \texttt{sleep} and \texttt{wakeup} probes to trace
where threads go to sleep waiting for new data in an empty kernel buffer (or
for space to place new data in a full buffer).
When tracing scheduling, it is useful to inspect both the process ID
(\texttt{pid}) and thread ID (\texttt{tid}) to understand where events are
taking place.
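For example, a simple starting point (again only a sketch, and again counting
events for the whole program run) is to count \texttt{on-cpu} events by
process and thread:
\begin{verbatim}
dtrace -n 'sched:::on-cpu /execname == "ipc-static"/
    { @oncpu[pid, tid] = count(); }'
\end{verbatim}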
By its very nature, the probe effect is hard to investigate, as the probe
effect does, of course, affect investigation of the effect itself!
However, one simple way to approach the problem is to analyse the results of
performance benchmarking with and without DTrace scripts running.
When exploring the probe effect, it is important to consider not just the
impact on bandwidth average/variance, but also on systemic behaviour: for
example, when performing more detailed tracing, causing the runtime of the
benchmark to increase, does the number of context switches increase, or does
the distribution of \texttt{read()} return values change?
In general, our interest will be in the overhead of probes rather than the
overhead of terminal I/O from the DTrace process -- you may wish to suppress
that output during the benchmark run so that you can focus on probe overhead.
\section*{Notes on benchmark}
As with the prior lab, it is important to run benchmarks more than once to
collect a distribution of values, allowing variance to be analysed.
You may wish to discard the first result in a set of benchmark runs as the
system will not yet have entered its steady state.
Do be sure that terminal I/O from the benchmark is not included in tracing or
time measurements (unless that is the intent).
\section*{Exploratory questions}
These questions are intended to help you understand the behaviour of the IPC
benchmark, and may provide supporting evidence for your experimental
questions.
However, they are just suggestions -- feel free to approach the problem
differently!
These questions do not need to be addressed in your lab report.
\begin{enumerate}
\item How do the six basic modes of the benchmark compare in terms of
IPC throughput: \{1thread, 2thread, 2proc\} $\times$ \{pipe, socket\}?
\item How do distributions of \texttt{read()} and \texttt{write()}
system-call return values vary from one another within one benchmark?
Between different benchmark configurations?
\item How does setting socket-buffer size impact performance for the
socket configuration?
\item How much time is spent in system calls for the pipe vs. socket
benchmarks for the default buffer size and total IPC size?
\item How does the number of context switches vary between the pipe and
socket benchmarks, both with and without socket-buffer size tuning?
\end{enumerate}
\section*{Experimental questions (part 1)}
You will receive a separate handout during the next lab describing \textit{Lab
Report 2}; however, this description will allow you to begin to prepare for
the assignment, which will also depend on the outcome of the next lab.
Your lab report will compare several configurations of the IPC benchmark,
exploring (and explaining) performance differences between them.
Do ensure that your experimental setup suitably quiesces other activity on the
system, and also use a suitable number of benchmark runs; you may wish to
consult the \textit{FreeBSD Benchmarking Advice} wiki page linked to from the
module's reading list for other thoughts on configuring the benchmark setup.
The following questions are with respect to a fixed total IPC size with a
statically linked version of the benchmark, and refer only to IPC-loop, not
whole-program, analysis.
Using \texttt{2thread} and \texttt{2proc} modes, explore how varying IPC model
(pipes, sockets, and sockets with \texttt{-s}) and IPC buffer size affect
performance:
\begin{itemize}
\item How does increasing IPC buffer size uniformly change performance
across IPC models -- and why?
\item Is using multiple threads faster or slower than using multiple
processes?
\end{itemize}
\noindent
Graphs and tables should be used to illustrate your measurement results.
Ensure that, for each question, you present not only results, but also a
causal explanation of those results -- i.e., why the behaviour in question
occurs, not just that it does.
For the purposes of graphs in this assignment, use achieved bandwidth, rather
than total execution time, for the Y axis, in order to allow you to more
directly visualise the effects of configuration changes on efficiency.
\end{document}
\section{Measurement Error Modeling}
The general model of a measurement error is as follows:
\begin{equation}
e = b + v
\end{equation}
where $b$ models the systematic errors, and $v$ models the measurement noise. We assume that the measurement noise is a discrete sequence of uncorrelated random numbers. Variables such as $v$ are known as random variables, and the next subsection describes how to model them. Subsequent subsections describe models for the systematic errors.
The discussion of systematic errors treats such errors as scalar quantities to simplify the exposition; generalization to the vector case is straightforward. Note that if the measurement is non-scalar, but the errors in the component measurements are independent of one another, then we can model each measurement independently, so modeling the biases as a vector is not required. If the measurement errors are not independent, then many estimators require that we apply a transformation to the data prior to processing so that the data input to the estimator have independent measurement errors; the next subsection describes some ways to accomplish this transformation.
\subsection{Models and Realizations of Random Variables}
A continuous random variable is a function that maps the outcomes of random events to the real line. Realizations of random variables are thus real numbers. A vector of $n$ random variables maps outcomes of random events to $\mathcal{R}^n$. For our purposes, random variables will always be associated with a probability density function that indicates the likelihood that a realization occurs within a particular interval of the real line, or within a particular subspace of $\mathcal{R}^n$ for the vector case. Currently, all of our models assume that this density is the normal or Gaussian density. For the vector case, the normal probability density function is
\begin{equation}
f(x) = \frac{1}{(2\pi)^{n/2}|P|^{1/2}}\text{e}^{-\frac{1}{2}(x-\mu)'P^{-1}(x-\mu)}
\end{equation}
where $\mu$ is a vector of mean values for each component of $x$, and $P$ is a matrix that contains the variances of each component of $x$ along its diagonal, and the covariances between each component as its off-diagonal components. The covariances indicate the degree of correlation between the random variables composing $x$. The matrix $P$ is thus called the variance-covariance matrix, which we will hereafter abbreviate to just ``covariance matrix,'' or ``covariance.'' Since the normal density is completely characterized by its mean and covariance, we will use the following notation as a shorthand to describe normally-distributed random vectors:
\begin{equation}
x \sim N(\mu, P)
\end{equation}
Thus, the model for the measurement noise is
\begin{equation}
v \sim N(0, R)
\end{equation}
For the scalar case, or for the vector case when the covariance is diagonal, we may directly generate realizations of a normally-distributed random vector from normal random number generators available in most software libraries. If $P$ has non-zero off-diagonal elements, we must model the specified correlations when we generate realizations. If $P$ is strictly positive definite, we can factor it as follows:
\begin{equation}
P = SS'
\end{equation}
where $S$ is a triangular matrix known as a Cholesky factor; this can be viewed as a ``matrix square root.'' The Cholesky factorization is available in many linear algebra libraries. We can then use $S$ to generate correlated realizations of $x$ as follows. Let $z$ be a normally-distributed random vector of the same dimension as $x$, with zero mean and unit variance, that is
\begin{equation}
z \sim N(0,I)
\end{equation}
Then, with
\begin{equation}
x = S z
\end{equation}
we can generate properly correlated realizations of $x$. We can also use a Cholesky factorization of the measurement noise covariance $R$, if $R$ is non-diagonal, to transform correlated measurements into uncorrelated auxiliary measurements for cases in which the estimator cannot handle correlated measurement data.
If $P$ is only non-negative definite, i.e.\@ $P \geq 0$ rather than $P > 0$ as above, the Cholesky factorization does not exist. In this case, since $P$ is symmetric, its eigenvalues are real and it admits a diagonal factorization:
\begin{equation}
P = V D V'
\end{equation}
where $V$ is a matrix of eigenvectors and $D$ is a diagonal matrix of eigenvalues.
Then, with $z$ as above,
\begin{equation}
x = V \sqrt{D} z
\end{equation}
where $\sqrt{D}$ implies taking the square roots of each diagonal element.
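As an illustration only, the following sketch (using NumPy; the covariance
matrix shown is an arbitrary example) generates correlated realizations by
either route:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[4.0, 1.2],
              [1.2, 1.0]])          # example covariance (positive definite)

# Route 1: Cholesky factor S with P = S S', then x = S z, z ~ N(0, I).
S = np.linalg.cholesky(P)
z = rng.standard_normal((2, 100000))
x_chol = S @ z

# Route 2: eigendecomposition P = V D V', then x = V sqrt(D) z.
D, V = np.linalg.eigh(P)
x_eig = (V * np.sqrt(D)) @ z        # same as V @ diag(sqrt(D)) @ z

# Both sample covariances should be close to P.
print(np.cov(x_chol))
print(np.cov(x_eig))
\end{verbatim}
The same kind of factorization of the measurement noise covariance $R$ can be
applied in reverse to decorrelate measurement data before processing, as noted
above.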
\subsection{Zero-Input Bias State Models}
The simplest non-zero measurement error consists only of measurement noise. The next simplest class of measurement errors consists of biases which are either themselves constant, or are the integrals of constants. We can view such biases as the output of a system which has zero inputs, and which may have internal states. In the sequel, we will consider cases where there are random inputs to the system.
In cases where the bias is the output of a system with internal states, the estimator may treat the internal states as solve-for or consider parameters. In such cases, the estimator requires a measurement partials matrix. Otherwise, the ``measurement partial'' is just $H = \partial b / \partial b = 1$.
\subsubsection{Random Constant}
The simplest type of systematic error is a constant bias on the measurement. There are two types of such biases: deterministic constants, which are truly constant for all time, and random constants, which are constant or very nearly so over a particular time of interest. For example, each time a sensor is power-cycled, a bias associated with it may change in value, but so long as the sensor remains powered on, the bias will not change.
In some cases, we may have reason to believe that a particular systematic error source truly is a deterministic bias, but due to limited observability, we do not have knowledge of its true value. In such cases, we may view our estimate of the bias as a random constant, and its variance as a measure of the imprecision of our knowledge.
Thus, we may view all constants that could be solve-for or consider parameters in orbit determination as random constants. Our model for a random constant is
\begin{equation}
\dot{b}(t) = 0, \, b(t_o) \sim N(0, p_{bo}).
\end{equation}
Since $b(t)$ is a zero-mean constant, its mean is zero for all time, and its covariance is constant for all time as well. Thus, to simulate a realization of the random constant, we need only generate a random number according to $N(0, p_{bo})$, as the previous subsection described.
\subsubsection{Random Ramp}
The random ramp model assumes that the rate of change of the bias is itself a random constant; thus the random ramp model is
\begin{equation}
\ddot{b}(t) = 0, \, \dot{b}(t_o) \sim N(0, p_{\dot{b}o}).
\end{equation}
Thus, the initial condition $\dot{b}(t_o)$ is a random constant. For a pure random ramp, the initial condition on $b(t_o)$ and its covariance are taken to be zero, but an obvious and common generalization is to allow $b(t_o)$ to also be a random constant.
It is convenient to write this model as a first-order vector system as follows:
\begin{eqnarray}
\begin{bmatrix} \dot{b}(t) \\ \ddot{b}(t) \end{bmatrix} =
\begin{bmatrix} \dot{b}(t) \\ \dot{d}(t) \end{bmatrix} &=&
\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} b(t) \\ d(t) \end{bmatrix} \\
\dot{x}(t) &=& A(t) x(t)
\end{eqnarray}
The resulting output equation is
\begin{eqnarray}
e & = & \begin{bmatrix} 1 & 0 \end{bmatrix} x + v \\
& = & H x + v
\end{eqnarray}
Note that the ensemble of realizations of $x(t)$ has zero-mean for all time. The covariance evolves in time according to
\begin{equation}
P_x(t) = \Phi(t-t_o)P_{xo}\Phi'(t-t_o)
\end{equation}
where
\begin{equation}
\Phi(t) = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \, \text{and} \;
P_{xo} = \begin{bmatrix} p_{bo} & 0 \\ 0 & p_{\dot{b}o} \end{bmatrix}
\end{equation}
which we can also write in recursive form as
\begin{equation}
P_x(t+\Delta t) = \Phi(\Delta t)P_x(t)\Phi'(\Delta t)
\end{equation}
Thus, we can generate realizations of the random ramp with either $x(t) \sim N(0, P_x(t))$ or recursively from
\begin{equation}
x(t+\Delta t) = \Phi(\Delta t) x(t)
\end{equation}
Note that $\|P_x\|$ becomes infinite as $t^2$ becomes infinite. This could lead to an overflow of the representation of $P_x$ in a computer program if the propagation time is large, and could also lead to the representation of $P_x$ losing either its symmetry and/or its positive definiteness due to roundoff and/or truncation.
\subsubsection{Higher-Order Derivatives of Random Constants}
In principle, a random constant may be associated with any derivative of the bias in a straightforward extension of the models above. In practice, it is rare to need more than two derivatives. Conventional terminology does not appear in the literature for derivatives of higher order than the random ramp. The slope of the bias is most commonly described as the ``bias drift,'' so that a ``drift random ramp'' would be one way to describe a bias whose second derivative is a random constant. The measurement partials matrix needs to be accordingly padded with trailing zeros for the derivatives of the bias in such cases.
\subsection{Single-Input Bias State Models}
The simplest non-constant systematic errors are systems with a single input that is a random process. We can think of a random process as the result of some kind of limit in which the intervals between an uncorrelated sequence of random variables get infinitesimally small. In this limit, each random increment instantaneously perturbs the sequence, so that the resulting process is continuous but non-differentiable. We call this kind of a random input ``process noise.''
Although such random processes are non-differentiable, there are various techniques for generalizing the concept of integration so that something like integrals of the process noise exist, and hence so do the differentials that appear under the integral signs. It turns out that so long as any coefficients of the process noise are non-random, these differentials behave for all practical purposes as if they were differentiable.
\subsubsection{Random Walk}
The random walk is the simplest random process of the type described above. In terms of the ``formal derivatives'' mentioned above, the random walk model for a measurement bias is
\begin{equation}
\dot{b}(t) = w(t), \, w(t) \sim N(0, q\delta(t-s))
\end{equation}
The input noise process on the right hand side is known as ``white noise,'' and the Dirac delta function that appears in the expression for its variance indicates that the white noise process consists of something like an infinitely-tightly spaced set of impulses. The term $q$ that appears along with the delta function is the intensity of each impulse\footnote{Another way to imagine the input sequence, in terms of a frequency domain interpretation, is that it is a noise process whose power spectral density, $q$, is non-zero at all frequencies, which implies infinite bandwidth.}. The initial condition $b(t_o)$ is an unbiased random constant. Since $b(t_o)$ and $w(t)$ are zero-mean, then $b(t)$ is also zero-mean for all time. The variance of $b$ evolves in time according to
\begin{equation}
p_b(t) = p_{bo} + q (t-t_o)
\end{equation}
which we can also write in recursive form as
\begin{equation}
p_b(t+\Delta t) = p_b(t) + q\Delta t
\end{equation}
Thus, to generate a realization of the random walk at time $t$, we need only generate a random number according to $N(0, p_b(t))$. Equivalently, we could also generate realizations of $w_\Delta(t) \sim N(0, q\Delta t)$, and recursively add these discrete noise increments to the bias as follows:
\begin{equation}
b(t+\Delta t) = b(t) + w_\Delta(t)
\end{equation}
Note that $p_b$ becomes infinite as $t$ becomes infinite. This could lead to an overflow of the representation of $p_b$ in a computer program if both the propagation time and $q$ are large.
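As a purely illustrative check (NumPy assumed; the parameter values are
arbitrary), the recursion above can be simulated for an ensemble of
realizations and compared against the analytic variance:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
q, dt, n_steps, n_runs = 0.5, 0.1, 200, 20000
p_b0 = 2.0

b = np.sqrt(p_b0) * rng.standard_normal(n_runs)        # b(t_o) ~ N(0, p_bo)
for _ in range(n_steps):
    b += np.sqrt(q * dt) * rng.standard_normal(n_runs)  # w ~ N(0, q dt)

print(b.var())                   # ensemble (sample) variance
print(p_b0 + q * n_steps * dt)   # analytic p_b(t) = p_bo + q (t - t_o)
\end{verbatim}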
\subsubsection{Random Run}
The random run model assumes that the rate of change of the bias is itself a random walk; thus the random run model is
\begin{equation}
\ddot{b}(t) = w(t), \, w(t) \sim N(0, q\delta(t-s))
\end{equation}
The initial condition $\dot{b}(t_o)$ is a random constant. For a pure random run, the initial condition on $b(t_o)$ and its covariance are taken to be zero, but an obvious and common generalization is to allow $b(t_o)$ to also be a random constant.
It is convenient to write this model as a first-order vector system as follows:
\begin{eqnarray}
\begin{bmatrix} \dot{b}(t) \\ \ddot{b}(t) \end{bmatrix} =
\begin{bmatrix} \dot{b}(t) \\ \dot{d}(t) \end{bmatrix} &=&
\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} b(t) \\ d(t) \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \end{bmatrix} w(t) \\
\dot{x}(t) &=& A(t) x(t) + B(t) w(t)
\end{eqnarray}
The measurement partial is the same as for the random ramp. The initial condition $x(t_o)$ is an unbiased random constant. Since $x(t_o)$ and $w(t)$ are zero-mean, then $x(t)$ is also zero-mean for all time. The covariance evolves in time according to
\begin{equation}
P_x(t) = \Phi(t-t_o)P_{xo}\Phi'(t-t_o) + Q_\Delta(t-t_o)
\end{equation}
where
\begin{equation}
\Phi(t) = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \, \text{and} \;
P_{xo} = \begin{bmatrix} p_{bo} & 0 \\ 0 & p_{\dot{b}o} \end{bmatrix}
\end{equation}
and
\begin{equation}
Q_\Delta(t) = q\begin{bmatrix} t^3/3 & t^2/2 \\ t^2/2 & t \end{bmatrix}
\end{equation}
which we can also write in recursive form as
\begin{equation}
P_x(t+\Delta t) = \Phi(\Delta t)P_x(t)\Phi'(\Delta t) + Q_\Delta(\Delta t)
\end{equation}
Thus, we can generate realizations of the random run with either $x(t) \sim N(0, P_x(t))$ or recursively from
\begin{equation}
x(t+\Delta t) = \Phi(\Delta t) x(t) + w_\Delta(t)
\end{equation}
where $w_\Delta(t) \sim N(0, Q_\Delta(\Delta t))$. Note that a Cholesky decomposition of $Q_\Delta(t)$ is
\begin{equation}
\sqrt[C]{Q_\Delta(t)} = \sqrt{q}\begin{bmatrix} \sqrt{3t^3}/3 & 0 \\ \sqrt{3t}/2 & \sqrt{t}/2 \end{bmatrix}
\end{equation}
Note that $\|P_x\|$ becomes infinite as $t^3$ becomes infinite. This could lead to an overflow of the representation of $P_x$ in a computer program if both the propagation time and $q$ are large, and could also lead to the representation of $P_x$ losing either its symmetry and/or its positive definiteness due to roundoff and/or truncation.
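A minimal sketch of this recursion (illustrative only; NumPy assumed, and the
parameter values are arbitrary) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
q, dt = 0.3, 0.5
P = np.diag([1.0, 0.25])                    # P_xo
x = np.linalg.cholesky(P) @ rng.standard_normal(2)

Phi = np.array([[1.0, dt],
                [0.0, 1.0]])
Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                  [dt**2 / 2, dt       ]])
S_Q = np.linalg.cholesky(Q)                 # Q = S_Q S_Q'

for _ in range(100):
    P = Phi @ P @ Phi.T + Q                        # covariance recursion
    x = Phi @ x + S_Q @ rng.standard_normal(2)     # one realization

print(P)    # covariance of [b, d] after 100 steps
print(x)    # a single realization of [b, d]
\end{verbatim}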
\subsubsection{Higher-Order Derivatives of Random Walks}
In principle, a random walk may be associated with any derivative of the bias in a straightforward extension of the models above. In practice, it is rare to need more than two derivatives. Conventional terminology does not appear in the literature for derivatives of higher order than the random run. A ``drift random run'' would be one way to describe a bias whose second derivative is a random walk. Below, we will refer to such a model as a ``random zoom.''
\subsubsection{First-Order Gauss-Markov}
The first-order Gauss-Markov (FOGM) process is one of the simplest random processes that introduces time correlation between samples. In terms of a frequency domain interpretation, we can view it as white noise passed through a low-pass filter. Since such noise, often called ``colored noise,'' has finite bandwidth, it is physically realizable, unlike white noise. In the notation of formal derivatives, the FOGM model is
\begin{equation}
\dot{b}(t) = -\frac{1}{\tau} b(t) + w(t),
\end{equation}
where, as with the random walk, $b(t_o) \sim N(0, p_{bo})$, and $w(t) \sim N(0, q\delta(t-s))$. The time constant, $\tau$, gives the correlation time, i.e.\@ the time over which the intensity of the time correlation fades by a factor of $1/\text{e}$.
Since $b(t_o)$ and $w(t)$ are zero-mean, $b(t)$ is also zero-mean for all time. The variance evolves in time according to
\begin{equation}
p_b(t) = \text{e}^{-\frac{2}{\tau}(t-t_o)}p_{bo} + q_\Delta(t-t_o)
\end{equation}
where
\begin{equation}
q_\Delta(t-t_o) = \frac{q\tau}{2}\left(1-\text{e}^{-\frac{2}{\tau}(t-t_o)}\right)
\end{equation}
which we can also write in recursive form as
\begin{equation}
p_b(t+\Delta t) = \text{e}^{-\frac{2\Delta t}{\tau}}p_b(t) + q_\Delta(\Delta t)
\end{equation}
Thus, to generate a realization of the FOGM process at time $t$, we need only generate a random number according to $N(0, p_b(t))$. Equivalently, we could also generate realizations of $w_\Delta(t) \sim N(0, q_\Delta(\Delta t))$, and recursively update the bias with these discrete noise increments as follows:
\begin{equation}
b(t+\Delta t) = \text{e}^{-\frac{\Delta t}{\tau}}b(t) + w_\Delta(t)
\end{equation}
Note that $p_b$ approaches a finite steady-state value of $q\tau/2$ as $t$ becomes infinite. We can choose the parameters of the FOGM so that this steady-state value avoids any overflow of the representation of $p_b$ in a computer program.
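As an illustration (NumPy assumed; arbitrary parameters), the recursion and
the steady-state variance $q\tau/2$ can be checked numerically with an
ensemble of realizations:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
q, tau, dt = 2.0, 5.0, 0.1
n_steps, n_runs = 1000, 20000       # long enough to reach steady state

phi = np.exp(-dt / tau)
q_delta = 0.5 * q * tau * (1.0 - np.exp(-2.0 * dt / tau))

b = np.zeros(n_runs)                # start the whole ensemble at zero
for _ in range(n_steps):
    b = phi * b + np.sqrt(q_delta) * rng.standard_normal(n_runs)

print(b.var())          # ensemble variance after a long run
print(0.5 * q * tau)    # analytic steady-state variance q*tau/2
\end{verbatim}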
\subsubsection{Integrated First-Order Gauss-Markov Model}
As with the random walk and random constant models, any number of derivatives of the bias may be associated with a FOGM process. However, integration of the FOGM destroys its stability. For example, the singly integrated first-order Gauss-Markov model is given by
\begin{equation}
\left[\begin{array}{c}
\dot{b}(t) \\
\dot{d}(t)
\end{array}\right] =
\left[\begin{array}{cc}
0 & 1 \\
0 & -1/\tau
\end{array}\right]
\left[\begin{array}{c}
b(t) \\
d(t)
\end{array}\right]
+ \left[\begin{array}{c}
0 \\
w(t)
\end{array}\right],
\end{equation}
which leads to the following state transition matrix,
\begin{equation}
\Phi(t) = \left[\begin{array}{cc}
1 & \tau \left( 1-e^{-t/\tau} \right) \\
0 & e^{-t/\tau}
\end{array}\right],
\end{equation}
and process noise covariance,
\begin{equation}
Q_\Delta(t) = \frac{q\tau}{2} \left[\begin{array}{cc}
\tau^2 \left\{ \left( 1-e^{-2t/\tau} \right) + 2t/\tau
- 4 \left( 1-e^{-t/\tau} \right) \right\}
& \tau \left( 1-e^{-t/\tau} \right)^2 \\
\tau \left( 1-e^{-t/\tau} \right)^2 & \left( 1-e^{-2t/\tau} \right)
\end{array}\right].
\end{equation}
Clearly, this is an unstable model, as the bias variance increases linearly with elapsed time. If a Gauss-Markov model is desired because of its stability properties, the following second-order model is available.
\subsubsection{Second-Order Gauss-Markov}
The model for a second-order Gauss-Markov random process is
\begin{equation}
\ddot{b}(t) = -2\zeta\omega_n \dot{b}(t) -\omega_n^2 b(t) + w(t), \, w(t) \sim N(0, q\delta(t-s))
\end{equation}
The initial conditions $b(t_o)$ and $\dot{b}(t_o)$ are random constants. It is convenient to write this model as a first-order vector system as follows:
\begin{eqnarray}
\begin{bmatrix} \dot{b}(t) \\ \ddot{b}(t) \end{bmatrix} =
\begin{bmatrix} \dot{b}(t) \\ \dot{d}(t) \end{bmatrix} &=&
\begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix}
\begin{bmatrix} b(t) \\ d(t) \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \end{bmatrix} w(t) \\
\dot{x}(t) &=& A(t) x(t) + B(t) w(t)
\end{eqnarray}
The measurement partial is the same as for the random ramp. The initial condition $x(t_o)$ is an unbiased random constant. Since $x(t_o)$ and $w(t)$ are zero-mean, then $x(t)$ is also zero-mean for all time.
The covariance evolves in time according to
\begin{equation}
P_x(t) = \Phi(t-t_o)P_{xo}\Phi'(t-t_o) + Q_\Delta(t-t_o)
\end{equation}
which we can also write in recursive form as
\begin{equation}
P_x(t+\Delta t) = \Phi(\Delta t)P_x(t)\Phi'(\Delta t) + Q_\Delta(\Delta t)
\end{equation}
Thus, we can generate realizations of the second-order Gauss-Markov process with either $x(t) \sim N(0, P_x(t))$ or recursively from
\begin{equation}
x(t+\Delta t) = \Phi(\Delta t) x(t) + w_\Delta(t)
\end{equation}
where $w_\Delta(t) \sim N(0, Q_\Delta(\Delta t))$.
For the underdamped case ($\zeta < 1$), the state transition matrix and discrete process noise covariance are given by\footnote{M.~C. Wang and G.~E. Uhlenbeck. On the theory of brownian motion ii. In N.~Wax, editor, {\em Selected Papers on Noise and Stochastic Processes}, pages 113--132. Dover, 1954.}:
\begin{equation}
\Phi(t) = \frac{\text{e}^{-\zeta\omega_{n}t}}{\omega_{d}}
\begin{bmatrix}
(\omega_{d}\cos\omega_{d}t + \zeta\omega_{n}\sin\omega_{d}t) &
\sin\omega_{d}t \\
-\omega_{n}^{2}\sin\omega_{d}t &
(\omega_{d}\cos\omega_{d}t - \zeta\omega_{n}\sin\omega_{d}t)
\end{bmatrix}
\end{equation}
and
\begin{eqnarray}
Q^{(1,1)}_\Delta(t) &=& \frac{q}{4\zeta\omega_{n}^{3}}\left[ 1
- \frac{\text{e}^{-2\zeta\omega_{n}t}}{\omega_{d}^{2}}(\omega_{d}^{2}
+ 2\zeta\omega_{n}\omega_{d}\cos\omega_{d}t\sin\omega_{d}t
+ 2\zeta^{2}\omega_{n}^{2}\sin^{2}\omega_{d}t)\right] \\
Q^{(2,2)}_\Delta(t) &=& \frac{q}{4\zeta\omega_{n}}\left[ 1
- \frac{\text{e}^{-2\zeta\omega_{n}t}}{\omega_{d}^{2}}(\omega_{d}^{2}
- 2\zeta\omega_{n}\omega_{d}\cos\omega_{d}t\sin\omega_{d}t
+ 2\zeta^{2}\omega_{n}^{2}\sin^{2}\omega_{d}t)\right] \\
Q^{(2,1)}_\Delta(t) = Q^{(1,2)}_\Delta(t) &=& \frac{q}{2\omega_{d}^{2}}
\text{e}^{-2\zeta\omega_{n}t}\sin^{2}\omega_{d}t
\end{eqnarray}
where $\omega_d = \omega_n\sqrt{1-\zeta^2}$. In the over-damped case ($\zeta > 1$), replace $\sin$ and $\cos$ with $\sinh$ and $\cosh$, respectively, and take $\omega_d = \omega_n\sqrt{\zeta^2-1}$. In the critically-damped case,
\begin{equation}
\Phi(t) =
\begin{bmatrix}
\text{e}^{-\omega_{n}t}(1 + \omega_{n}t) &
t\text{e}^{-\omega_{n}t} \\
-\omega_{n}^{2}t\text{e}^{-\omega_{n}t} &
\text{e}^{-\omega_{n}t}(1 - \omega_{n}t)
\end{bmatrix}
\end{equation}
and
\begin{eqnarray}
Q^{(1,1)}_\Delta(t) &=& \frac{q}{4\omega_{n}^{3}}\left[ 1
- \text{e}^{-2\omega_{n}t}(1
+ 2\omega_{n}t
+ 2\omega_{n}^{2}t^2)\right] \\
Q^{(2,2)}_\Delta(t) &=& \frac{q}{4\omega_{n}}\left[ 1
- \text{e}^{-2\omega_{n}t}(1
- 2\omega_{n}t
+ 2\omega_{n}^{2}t^2)\right] \\
Q^{(2,1)}_\Delta(t) = Q^{(1,2)}_\Delta(t) &=& \frac{q t^2}{2}
\text{e}^{-2\omega_{n}t}
\end{eqnarray}
Note that for any damping ratio, $\|P_x\|$ remains finite, since as $t\rightarrow\infty$,
\begin{equation}
P_x(t\rightarrow\infty) = \frac{q}{4\zeta\omega_{n}}\left[
\begin{array}{cc}
1/\omega_{n}^{2} & 0 \\
0 & 1
\end{array}\right].
\end{equation}
Thus, the ratio of the steady-state standard deviations of the drift $d$
and the bias $b$ will be
\frac{\sigma_{d}}{\sigma_{b}} = \omega_{n},
\end{equation}
and these are related to the power spectral density by
\begin{equation}
q = 4\zeta\frac{\sigma_{d}^{3}}{\sigma_{b}}.
\end{equation}
Hence, we can choose the parameters of the SOGM so that we avoid any overflow, loss of symmetry and/or positive definiteness of $P_x$ due to roundoff and/or truncation.
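For instance (an illustrative sketch only; the numbers are arbitrary), given
desired steady-state standard deviations and a chosen damping ratio, the model
parameters follow directly from the two relations above:
\begin{verbatim}
import numpy as np

sigma_b = 1.0e-3    # desired steady-state bias standard deviation
sigma_d = 2.0e-5    # desired steady-state drift standard deviation
zeta    = 0.7       # chosen damping ratio

omega_n = sigma_d / sigma_b                 # natural frequency
q = 4.0 * zeta * sigma_d**3 / sigma_b       # white-noise spectral density

# Check: steady-state covariance q/(4 zeta omega_n) * diag(1/omega_n^2, 1)
P_ss = q / (4.0 * zeta * omega_n) * np.diag([1.0 / omega_n**2, 1.0])
print(np.sqrt(np.diag(P_ss)))               # recovers [sigma_b, sigma_d]
\end{verbatim}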
\subsection{Multi-Input Bias State Models}
We may combine any of the above models to create multi-input bias models; for example the bias could be a second-order Gauss-Markov, and the bias rate could be a first-order Gauss-Markov. In practice, the most useful have been found to be the following.
\subsubsection{Bias and Drift Random Walks (Random Walk + Random Run)}
A common model for biases in clocks, gyros, and accelerometers is that the bias is driven by both its own white noise input, and also by the integral of the white noise of its drift. Such models derive from observations that the error magnitudes of these devices depend on the time scale over which the device is observed. They are often characterized by Allan deviation specifications, which may be heuristically associated with the white noise power spectral densities. The model is as follows:
\begin{eqnarray}
\begin{bmatrix} \dot{b}(t) \\ \dot{d}(t) \end{bmatrix} &=&
\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} b(t) \\ d(t) \end{bmatrix} +
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} w_b(t) \\ w_d(t) \end{bmatrix} \\
\dot{x}(t) &=& A(t) x(t) + B(t) w(t)
\end{eqnarray}
The measurement partial is the same as for the random ramp. The initial condition $x(t_o)$ is an unbiased random constant. Since $x(t_o)$ and $w(t)$ are zero-mean, then $x(t)$ is also zero-mean for all time. The covariance evolves in time according to
\begin{equation}
P_x(t) = \Phi(t-t_o)P_{xo}\Phi'(t-t_o) + Q_\Delta(t-t_o)
\end{equation}
where
\begin{equation}
\Phi(t) = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \, \text{and} \;
P_{xo} = \begin{bmatrix} p_{bo} & 0 \\ 0 & p_{\dot{b}o} \end{bmatrix}
\end{equation}
and
\begin{equation}
Q_\Delta(t) = \begin{bmatrix} q_b t + q_d t^3/3 & q_d t^2/2 \\ q_d t^2/2 & q_d t \end{bmatrix}
\end{equation}
which we can also write in recursive form as
\begin{equation}
P_x(t+\Delta t) = \Phi(\Delta t)P_x(t)\Phi'(\Delta t) + Q_\Delta(\Delta t)
\end{equation}
Thus, we can generate realizations of this process with either $x(t) \sim N(0, P_x(t))$ or recursively from
\begin{equation}
x(t+\Delta t) = \Phi(\Delta t) x(t) + w_\Delta(t)
\end{equation}
where $w_\Delta(t) \sim N(0, Q_\Delta(\Delta t))$. Note that a Cholesky decomposition of $Q_\Delta(t)$ is
\begin{equation}
\sqrt[C]{Q_\Delta(t)} = \begin{bmatrix} \sqrt{q_b t + q_d t^3/12} & 0 \\ \sqrt{q_d t^3}/2 & \sqrt{q_d t} \end{bmatrix}
\end{equation}
Note that $\|P_x\|$ becomes infinite as $t^3$ becomes infinite. This could lead to an overflow of the representation of $P_x$ in a computer program if both the propagation time and $q$ are large, and could also lead to the representation of $P_x$ losing either its symmetry and/or its positive definiteness due to roundoff and/or truncation.
\subsubsection{Bias, Drift, and Drift Rate Random Walks (Random Walk + Random Run + Random Zoom)}
Another model for biases in very-high precision clocks, gyros, and accelerometers is that the bias is driven by two integrals of white noise in addition to its own white noise input. Such models are often characterized by Hadamard deviation specifications, which may be heuristically associated with the white noise power spectral densities. The model is as follows:
\begin{eqnarray}
\begin{bmatrix} \dot{b}(t) \\ \dot{d}(t) \\ \ddot{d}(t) \end{bmatrix} &=&
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} b(t) \\ d(t) \\ \dot{d}(t) \end{bmatrix} +
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} w_b(t) \\ w_d(t) \\ w_{\dot{d}}(t) \end{bmatrix} \\
\dot{x}(t) &=& A(t) x(t) + B(t) w(t)
\end{eqnarray}
The resulting output equation is
\begin{eqnarray}
e & = & \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x + v \\
& = & H x + v
\end{eqnarray}
The initial condition $x(t_o)$ is an unbiased random constant. Since $x(t_o)$ and $w(t)$ are zero-mean, then $x(t)$ is also zero-mean for all time. The covariance evolves in time according to
\begin{equation}
P_x(t) = \Phi(t-t_o)P_{xo}\Phi'(t-t_o) + Q_\Delta(t-t_o)
\end{equation}
where
\begin{equation}
\Phi(t) = \begin{bmatrix} 1 & t & t^2/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} \, \text{and} \;
P_{xo} = \begin{bmatrix} p_{bo} & 0 & 0 \\ 0 & p_{do} & 0 \\ 0 & 0 & p_{\dot{d}o} \end{bmatrix}
\end{equation}
and
\begin{equation}
Q_\Delta(t) = \begin{bmatrix}
q_b t + q_d t^3/3 + q_{\dot{d}} t^5/20 &
q_d t^2/2 + q_{\dot{d}} t^4/8 &
q_{\dot{d}}t^3/6 \\
q_d t^2/2 + q_{\dot{d}} t^4/8 &
q_d t + q_{\dot{d}} t^3/3 &
q_{\dot{d}}t^2/2 \\
q_{\dot{d}}t^3/6 & q_{\dot{d}}t^2/2 & q_{\dot{d}} t
\end{bmatrix}
\end{equation}
which we can also write in recursive form as
\begin{equation}
P_x(t+\Delta t) = \Phi(\Delta t)P_x(t)\Phi'(\Delta t) + Q_\Delta(\Delta t)
\end{equation}
Thus, we can generate realizations of this process with either $x(t) \sim N(0, P_x(t))$ or recursively from
\begin{equation}
x(t+\Delta t) = \Phi(\Delta t) x(t) + w_\Delta(t)
\end{equation}
where $w_\Delta(t) \sim N(0, Q_\Delta(\Delta t))$. Note that a Cholesky decomposition of $Q_\Delta(t)$ is
\begin{equation}
\sqrt[C]{Q_\Delta(t)} = \begin{bmatrix}
\sqrt{q_b t + q_d t^3/12 + q_{\dot{d}}t^5/720} & 0 & 0 \\
t/2\sqrt{q_d t + q_{\dot{d}}t^3/12} & \sqrt{q_d t + q_{\dot{d}}t^3/12} & 0 \\
t^2/6\sqrt{q_{\dot{d}}t} & t/2\sqrt{q_{\dot{d}}t} &
\sqrt{q_{\dot{d}} t}
\end{bmatrix}
\end{equation}
Note that $\|P_x\|$ grows without bound, as fast as $t^5$ for this model. This could lead to an overflow of the representation of $P_x$ in a computer program if both the propagation time and $q$ are large, and could also cause the representation of $P_x$ to lose its symmetry and/or positive definiteness due to roundoff and/or truncation.
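As a further illustration (placeholder parameter values only), the closed-form $Q_\Delta(t)$ above can be cross-checked against a direct numerical integration of $\Phi(s)\,\mathrm{diag}(q_b, q_d, q_{\dot{d}})\,\Phi'(s)$ over $[0,t]$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Illustrative cross-check: compare the closed-form Q_Delta(t) with the
# integral of Phi(s) Qc Phi(s)' over [0, t].  Parameter values are
# placeholders chosen only for the check.
q_b, q_d, q_dd, t = 1.0, 2.0, 3.0, 0.7

def Phi(s):
    return np.array([[1.0, s, s**2/2.0],
                     [0.0, 1.0, s],
                     [0.0, 0.0, 1.0]])

Qc = np.diag([q_b, q_d, q_dd])
numeric = np.array([[quad(lambda s: (Phi(s) @ Qc @ Phi(s).T)[i, j], 0.0, t)[0]
                     for j in range(3)] for i in range(3)])

closed = np.array([
    [q_b*t + q_d*t**3/3 + q_dd*t**5/20, q_d*t**2/2 + q_dd*t**4/8, q_dd*t**3/6],
    [q_d*t**2/2 + q_dd*t**4/8,          q_d*t + q_dd*t**3/3,      q_dd*t**2/2],
    [q_dd*t**3/6,                       q_dd*t**2/2,              q_dd*t]])

assert np.allclose(numeric, closed)
\end{verbatim}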
\subsubsection{Bias and Drift Coupled First- and Second-Order Gauss-Markov}
The following model provides a stable alternative to the ``Random Walk + Random Run'' model\footnote{R.~Carpenter and T.~Lee. A stable clock error model using coupled first- and second-order gauss-markov processes. In {\em Astrodynamics 2008}, Advances in the Astronautical Sciences. Univelt, 2008.}. Its transient response can be tuned to approximate the Random Walk + Random Run model, and its stable steady-state response can be used to avoid computational issues with long propagation times. The model is as follows.
\begin{eqnarray}
\begin{bmatrix} \dot{b}(t) \\ \dot{d}(t) \end{bmatrix} &=&
\begin{bmatrix} -1/\tau & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix}
\begin{bmatrix} b(t) \\ d(t) \end{bmatrix} +
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} w_b(t) \\ w_d(t) \end{bmatrix} \\
\dot{x}(t) &=& A(t) x(t) + B(t) w(t)
\end{eqnarray}
The measurement partial is the same as for the random ramp. The initial condition $x(t_o)$ is an unbiased random constant. Since $x(t_o)$ and $w(t)$ are zero-mean, then $x(t)$ is also zero-mean for all time. The covariance evolves in time according to
\begin{equation}
P_x(t) = \Phi(t-t_o)P_{xo}\Phi'(t-t_o) + Q_\Delta(t-t_o)
\end{equation}
where
\begin{equation}
\Phi(t) = \frac{e^{at}}{b}\left[\begin{array}{cc}
b\cos bt + \left(a + 2\zeta\omega_n\right)\sin bt & \sin bt \\
-\omega_n^2\sin bt & b\cos bt + \left(a + \beta\right)\sin bt
\end{array}\right]
\end{equation}
where
\begin{align}
\beta &= 1/\tau, \\
a &= -\frac{1}{2}\left(\beta + 2\zeta\omega_n\right), \\
b &= \sqrt{\omega_d^2 + \beta\zeta\omega_n - \frac{1}{4}\beta^2}, \\
\omega_d &= \omega_n\sqrt{1 - \zeta^2},
\end{align}
and we assume that $b^2 > 0$.
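The closed-form transition matrix above can be checked against a numerical matrix exponential. The sketch below is illustrative only, with placeholder values of $\tau$, $\zeta$, and $\omega_n$ chosen so that $b^2 > 0$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Illustrative check: the closed-form Phi(t) should match expm(A t) for the
# coupled first-/second-order Gauss-Markov model.  tau, zeta, omega_n, and
# t are placeholder values chosen so that b^2 > 0.
tau, zeta, wn, t = 100.0, 0.3, 0.05, 10.0
beta = 1.0/tau
A = np.array([[-beta, 1.0],
              [-wn**2, -2.0*zeta*wn]])

a = -0.5*(beta + 2.0*zeta*wn)
b = np.sqrt(wn**2*(1.0 - zeta**2) + beta*zeta*wn - 0.25*beta**2)

Phi = (np.exp(a*t)/b)*np.array(
    [[b*np.cos(b*t) + (a + 2.0*zeta*wn)*np.sin(b*t), np.sin(b*t)],
     [-wn**2*np.sin(b*t), b*np.cos(b*t) + (a + beta)*np.sin(b*t)]])

assert np.allclose(Phi, expm(A*t))
\end{verbatim}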
Let $$c = -\frac{\beta}{2} + \zeta\omega_n;$$ then, the process noise covariance is given by the following:
\begin{align}\begin{split}
Q_\Delta^{(1,1)}(t) =\; & q_b \left[ \frac{e^{2at}-1}{4a}
\left( 1+\frac{c^2}{b^2} \right) + \frac{e^{2at}\sin2bt}{4(a^2+b^2)}
\left( \frac{b^2-c^2+2ac}{b} \right) \right. \\
&\left. + \frac{e^{2at}\cos2bt-1}{4(a^2+b^2)}
\left( \frac{ab^2-ac^2+2b^2c}{b^2} \right) \right] \\
& + \frac{q_d}{b^2} \left( \frac{e^{2at}-1}{4a} -
\frac{e^{2at}(b\sin2bt + a\cos2bt) - a}{4(a^2+b^2)} \right)
\end{split} \\
\begin{split}
Q_\Delta^{(2,2)}(t) =\; & q_d \left[ \frac{e^{2at}-1}{4a}
\left( 1+\frac{c^2}{b^2} \right) + \frac{e^{2at}\sin2bt}{4(a^2+b^2)}
\left( \frac{b^2-c^2+2ac}{b} \right) \right. \\
&\left. + \frac{e^{2at}\cos2bt-1}{4(a^2+b^2)}
\left( \frac{ab^2-ac^2+2b^2c}{b^2} \right) \right] \\
& + \frac{q_b \omega_n^4}{b^2} \left( \frac{e^{2at}-1}{4a} -
\frac{e^{2at}(b\sin2bt + a\cos2bt) - a}{4(a^2+b^2)} \right)
\end{split} \\
\begin{split}
Q_\Delta^{(1,2)}(t) =\; & \frac{q_b \omega_n^2}{b^2} \left[ \frac{c}{4a}
\left( 1 - e^{2at} \right)
+ \frac{e^{2at} \left[ (bc-ab)\sin2bt + (ac-b^2)\cos2bt \right]
-(ac-b^2)}{4(a^2+b^2)} \right] \\
& + \frac{q_d}{b^2} \left[ \frac{c}{4a} \left( 1 - e^{2at} \right)
+ \frac{e^{2at} \left[ (ab+bc)\sin2bt + (ac-b^2)\cos2bt \right]
-(ac-b^2)}{4(a^2+b^2)} \right].
\end{split}\end{align}
Examining the solution given above, we see that the parameter $a$ governs the rate of decay of all of the exponential terms. Therefore, we define the ``rise time'' as that interval within which the transient response of the covariance reaches a close approximation to its steady-state value (given below); thus, we define the rise time as follows:
\begin{equation}
t_r = -\frac{3}{a}.
\end{equation}
Next, we note that all of the trigonometric terms are modulated by $2b$; thus we may view this value as a characteristic damped frequency of the coupled system. The period of the oscillation, $\Pi$, is then
\begin{equation}
\Pi = \pi/b
\end{equation}
In the limit as $t\rightarrow\infty$, all the exponential terms in the analytical solution die out, so that the steady-state value of the covariance simplifies to:
\begin{equation}
P(\infty) = -\frac{1}{4a(a^2+b^2)}
\left[\begin{array}{cc}
q_d + (2a^2 + b^2 + c^2 - 2ac) q_b & \beta (q_d + q_b \omega_n^2) \\
\beta (q_d + q_b \omega_n^2) & (2a^2 + b^2 + c^2 + 2ac) q_d
+ q_b \omega_n^4
\end{array}\right]
\end{equation}
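For a purely numerical alternative (again with placeholder parameter values, and not part of the specification), the steady-state covariance of this stable model can also be obtained directly as the solution of the continuous Lyapunov equation $A P(\infty) + P(\infty) A' + \mathrm{diag}(q_b, q_d) = 0$, which provides an independent way to compute $P(\infty)$ for comparison.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative sketch: for a stable A, the steady-state covariance solves
# A P + P A' + Qc = 0 with Qc = diag(q_b, q_d).  All values below are
# placeholders.
tau, zeta, wn = 100.0, 0.3, 0.05
q_b, q_d = 1.0e-4, 1.0e-8
beta = 1.0/tau
A = np.array([[-beta, 1.0],
              [-wn**2, -2.0*zeta*wn]])
Qc = np.diag([q_b, q_d])

P_inf = solve_continuous_lyapunov(A, -Qc)   # solves A X + X A' = -Qc
print(P_inf)
\end{verbatim}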
\section{Process' Perspective}
\label{sec:processPerspective}
The following section focuses on how code and artifacts go from idea to production and which tools and processes are involved.
\subsection{Team Interaction and Organization}
\label{subsec:TeamInteraction}
Internal communication has gone through \href{https://discord.com/}{Discord}. Work has been split between remote sessions using voice channels and physical meetings during, and continuing after, the allotted exercise time.
\subsection{CI/CD chains}
\label{subsec:cicd}
% github workflows
% - A complete description of stages and tools included in the CI/CD chains.
% That is, including deployment and release of your systems.
A CI/CD pipeline allows us to test, build, and deploy \mini incrementally with minimal manual interference. This saves a significant amount of time setting up environments and allows us to deploy fast and often while still being able to quickly revert to an earlier version.
It provides us with ways of measuring and improving the quality of all code moving from local development through version control into production, reducing human error.\cite{Chen2015}
In essence, an automated CI/CD pipeline puts multiple DevOps ideas into practice:
\begin{itemize}
\item Flow (Keeping batch sizes small)\cite{Kim2016}
\item Feedback (Instant, rapid and continuous feedback on code entering pipeline)\cite{Kim2016}
\end{itemize}
\subsubsection{CI/CD Platform}
\label{subsubsec:cicdPlatform}
Our CI/CD chains are set up using GitHub Actions. GitHub Actions integrates seamlessly with GitHub repositories allowing us to trigger workflows every time an event occurs in a repository\cite{githubActions}. Many other providers, such as Travis CI or TeamCity, offer these same features or even more, but not for free and not with minimal configuration. We are aware of service availability concerns using the same provider for most tools. We prefer the ease of use and price tag over distributing our tools.
\subsubsection{CI - Continuous Integration}
\label{subsubsec:ci}
As illustrated in Figure \ref{fig:CIStateMachine}, the entry point for the CI pipeline is creating a pull request to the main branch, which triggers GitHub Actions workflows.
\begin{enumerate}
    \item \textbf{.Net Build and test} - The backbone of the CI pipeline. Compiles, builds, and tests the backend. It provides immediate feedback on whether the changes run and pass the test suite, which required a time investment to set up.
    \item \textbf{Quality analysis tools} - Provide feedback on code quality (see \ref{app:codeAnal} for dashboards).
    \item \textbf{Dependency scanning tools} - Scan dependencies and the codebase for security hazards and fail if the security gate, set to ``Critical'' severity, is met.
\end{enumerate}
\begin {figure}[H]
\centering
\includegraphics[scale=0.50]{images/ci_cd_diagrams/DevopsDiagrams-StateMachine CI.drawio(4).png}
\caption{CI Pipeline State Machine Diagram}
\label{fig:CIStateMachine}
\end{figure}
As shown in Figure \ref{fig:CIStateMachine}, if any of these fail, the CI pipeline moves to a rejected state. Once all workflows have passed, the pull request awaits review until at least two developers have approved it. Afterward, changes can be merged into the main branch.
\subsubsection{CD - Continuous Delivery/Deployment}
\label{subsubsec:cd}
Our pipeline introduces a mix of Continuous Delivery and Continuous Deployment (illustrated in Figure \ref{fig:CDStateMachine}). Deployment is done by the deployment workflow (cluster-deploy.yml).
The workflow is triggered every time a release is created. It also supports manual dispatch for hotfixing errors, and the weekly release workflow runs every Sunday evening, triggering the deployment pipeline. The deployment workflow comprises 4 jobs:
\begin{enumerate}
\item \textbf{.Net Build and Test} - This job is described in \ref{subsubsec:ci}.
    \item \textbf{Build and Push containers} - Builds, tags, and pushes containers to Docker Hub. We can then pull all necessary containers from Docker Hub.
    \item \textbf{Snyk Dependency scan} - Security gate. If a risk exceeds the gate, deployment stops immediately and moves to the canceled state. See Figure \ref{fig:CDStateMachine}.
\item \textbf{Dispatch} - Dispatches the apply workflow in the Operations repository.
\end{enumerate}
\begin{figure}[H]
\centering
\includegraphics[scale=0.44]{images/ci_cd_diagrams/DevopsDiagrams-StateMachineCd.drawio.png}
\caption{CD Pipeline State Machine Diagram}
\label{fig:CDStateMachine}
\end{figure}
When the \textit{apply} workflow has been dispatched, the runner sets up kubectl using a DigitalOcean token and the cluster name, both saved as secrets in the repository. The apply shell script is executed, which ``applies'' all the configuration manifests, thus deploying the changes.
\subsection{Version Control}
\label{subsec:vs}
% Organization of your repositor(ies).
% That is, either the structure of of mono-repository or organization of artifacts across repositories.
% In essence, it has to be be clear what is stored where and why
\subsubsection{Organization of Repositories}
There are a total of 3 repositories: the Dev Repository (see \ref{app:devRepo}) and 2 submodules: the Ops Repository (see \ref{app:opsRepo}) and the \href{https://github.com/Akongstad/DevOps-Report}{Report Repository}.
\subsubsection{Branching Strategy}
% - Applied branching strategy.
% For example, how did you use issues, Kanban boards, etc. to organize open tasks
Our organization uses a Trunk Based Development branching model. We have two centralized branches, which are continuously debugged, tested, and kept in a working state. Features, patches, and bug fixes are continuously developed on dedicated temporary branches, which are deleted after being merged into their respective centralized branch.\\
\textbf{Dev Repository @ main} (see \ref{app:devRepo})\\\\
The \textbf{main} branch lives on GitHub and is the alpha of our centralized workflow. While we develop features and patches on temporary branches, everything worthwhile is eventually merged into the main. \\
Primary applications of the main branch are:
\begin{enumerate}
\item Store working source code
\item Make releases
\item Run workflows
\item Build and push docker images\\
\end{enumerate}
\textbf{Ops Repository @ master} (see \ref{app:opsRepo})\\\\
The master branch of the Ops repository contains infrastructure as code for creating and configuring a Kubernetes cluster, manifest files used to deploy our service to the cluster, and shell scripts for automating these steps.
\subsection{Development process and tools}
\label{subsec:process&tools}
% - Applied development process and tools supporting it
GitHub issues track the progress of tasks. Upon creation, issues are tagged and organized into GitHub Projects. By using a combination of issues, projects, and enforcement of code reviews, we promote transparency between developers, thus making sure to share progress and knowledge between developers.
\subsection{Monitoring And Logs}
\label{subsec:monitoring}
Monitoring is implemented by the Prometheus microservice, which scrapes a list of endpoints for metrics every 5 seconds.
The scraped data is queried with PromQL, formatted, and displayed by the Grafana microservice, which is publicly accessible.
% How do you monitor your systems and what precisely do you monitor?
% What do you log in your systems and how do you aggregate logs?
\subsubsection{Software Quality Definitions}
\label{subsubsec:software_quality_definitions}
The following definitions provide a clearer image of desirable metrics for monitoring \mini and analyzing logs.
\begin{table}[H]
\centering
\begin{tabular}{|c|p{6cm}|}\hline
Robustness & The predictability of our software's response to unexpected input \\\hline
Functionality & The predictability of our software's response to expected input \\\hline
Performance & The efficiency of resource usage \\ \hline
\end{tabular}
\caption{Definitions of Software Quality}
\label{tab:software_quality}
\end{table}
\subsubsection{Metrics}
From the definitions of software quality in table \ref{tab:software_quality} the metrics in table \ref{tab:metrics} were derived.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|} \hline
Robustness & Server error response count \\
& Client error response count \\ \hline
Functionality & API mean response time \\
& API endpoint call count \\
& API endpoint error count \\ \hline
Performance & Host memory usage \\
& Host CPU load \\ \hline
\end{tabular}
\caption{Metrics derived from definitions of software quality}
\label{tab:metrics}
\end{table}
\subsubsection{Logging}
Logging was temporarily implemented with the \textit{ELK} stack but proved unsustainable with the resources at hand.
Logs are generally categorized by levels.
The definitions of the levels vary from system to system, but their range is consistently between ``Debug'' and ``Error''.
The ELK stack was configured to scrape logs from STDOUT and STDERR of the frontend and backend microservices, which proved sufficient when investigating errors.
\subsection{Security}
\label{subsec:security}
To discover, define, and assess vulnerabilities of our system, we have conducted a security assessment and a pen-test. This section includes only selected results (see \ref{app:secAss} for more).
Using the OWASP Top 10\cite{owasp}, we identified possible insecurities in our system. We constructed risk scenarios and analyzed their likelihood and impact.
The analysis yielded ``\textit{Outdated Components}'' as a top concern. Since security breaches are, on average, discovered only half a year after they occur, the way to combat security threats is proactivity\cite{securitylecture}. To decrease the chance of having outdated components in production, we added Dependabot to our GitHub repository. Dependabot automatically creates pull requests suggesting updates to outdated components. Snyk, which was mentioned in \ref{subsubsec:code_quality}, scans the repository for vulnerabilities and sensitive data leaks. Handling of secrets to prevent leaks is described in \ref{subsubsec:secrets}.
We conducted an automated penetration test to:
\begin{enumerate}
\item Detect vulnerabilities
\item Test the system under stress
\end{enumerate}
Using the logging system, we noticed that the server received requests from all over the world, e.g., Nevada. In conclusion, apart from effectively acting as a DoS attack on our system and eventually crashing the ELK stack, pen testing did not yield any system vulnerabilities.
\subsubsection{Secret Handling}
\label{subsubsec:secrets}
Securing sensitive data while allowing access to multiple parties is challenging. We use GitHub secrets for securing tokens and keys that are accessed and injected into our CI/CD pipeline. In addition, we have a local folder containing the database connection string, database password, and JWT token key, all shared through Discord. Security could be improved by splitting the key into multiple parts and sharing them separately. However, we deemed this excessive.
\clearpage
\subsection{Scaling And Load balancing}
\label{subsec:scaling}
% Applied strategy for scaling and load balancing.
%Include reflection over why scaling and load balancing benefit us, why kubernetes might have been overkill for the project, but which pros and cons we considered when deciding.
\subsubsection{Single server setup}
\label{subsubsec:scalingProd}
The original single server setup is located on \textbf{Dev Repository @ production} (see \ref{app:devRepo}), and can be seen in Figure \ref{fig:legacyDeploy}.
Although now deprecated, the production branch held our production server setup and was our main branch's lean and automated counterpart. It was developed so that our multi-container application could be rebuilt and updated on the production server to include updated Docker images and changes in configuration files with a single command, minimizing downtime without using load balancers.\\
Only vertical scaling using the DigitalOcean API was possible. It would become increasingly expensive as more virtual CPU cores, RAM, and disk space are added, eventually reaching an upper limit. The single monolithic VM will forever be a single point of failure, and upgrading the VM requires the server to be shut down.
\begin {figure}[H]
\centering
\includegraphics[scale=0.47]{images/deployment_diagrams/DevopsDiagrams-Legacy deploy mini.drawio.png}
\caption{\mini single server deployment diagram}
\label{fig:legacyDeploy}
\end{figure}
\subsubsection{Scaling the application}
\label{subsubsec:scalingApp}
Eliminating the server as a single point of failure while allowing for rolling out updates without shutting down the application could be accomplished by introducing a setup with a primary server and backup server while swapping around IPs, but that would not allow for horizontal scaling.\\
Options that, out of the box, come with horizontal scaling, load balancing, and rolling updates while eliminating the single point of failure include container orchestration tools like Docker Swarm and Kubernetes.\\
We have chosen to migrate our application to a Kubernetes cluster. One can argue that the additional complexity and setup required is overengineering since Docker Swarm fulfills our requirements, thus conflicting with the simplicity principle: "\textit{Simplicity--the art of maximizing the amount of work not done--is essential}"\cite{beck2001agile}.
One reason for this choice is that documentation on the setup and management of a Kubernetes cluster was significantly more extensive, although that may itself be a consequence of the increased complexity. Secondly, we wished to gain experience using Kubernetes for container orchestration. Automated scaling saves developers' time, even if monitoring service load and manually scaling a swarm when required is a possible solution. In conclusion, by migrating to a Kubernetes cluster, we support horizontal scaling, load balancing, and rolling updates. However, we must admit that even though the database was flushed and deployed to the cluster, we do not have a replicated database because of consistency concerns and thus still have a single point of failure.
\documentclass[twocolumn]{article}
\begin{document}
\frenchspacing
\title{EntropyCoin: A Difficult-To-Manipulate Source of Entropy}
\author{
{\rm David Vorick}\\
Nebulous Labs
\and
{\rm Luke Champine}\\
Nebulous Labs
}
\maketitle
\subsection*{Abstract}
EntropyCoin is a project designed to target an explicit problem: the deterministic and secure generation of entropy.
Currently, it is difficult for multiple hosts to jointly generate random numbers that no single participant can manipulate.
The best known solutions require a large portion of participants to be honest and can only provide bounded random numbers.
EntropyCoin presents a lightweight solution for generating difficult-to-manipulate random numbers.
\section{Introduction}
Decentralized networks need to be able to cope with malicious behavior by a set of nodes.
Decentralized networks can require the generation of random numbers.
Consider the problem of the Random Byzantine Generals Election.
There are a set of Byzantine Generals trying to randomly elect a commander.
The election results should ensure that
\begin{enumerate}
\item The commander is elected at random.
\item The random number cannot be easily manipulated by traitors.
\item Manipulating the random number is expensive and uncertain.
\end{enumerate}
Existing solutions to this problem both require a high percentage of loyal generals and produce results that can be manipulated within some epsilon.
These solutions typically have each general generating a random outcome and then somehow combining the outcomes.
A different approach is needed.
% If possible, prove that internal random number generation will not work
% Explore the idea that traitors can withdraw from the process after waiting for a reveal.
The Bitcoin proof-of-work model provides this different approach.
% pick up here
EntropyCoin is intended to be used for deterministically generating random numbers as a group.
The numbers generated by EntropyCoin are public, and are not intended to be used as entropy for secrets such as the seed to a private key.
Instead, the random numbers are to be used in public settings where randomness is required but secrecy is not.
An example would be randomly determining the address of a node in a network.
If an attacker can choose where a node is placed, certain attack vectors open, so secure public random number generation is required.
Maidsafe is one example of a network with such a need; Sia is another.
Bitcoin already provides such a source of entropy.
At present prices, the reward for producing a block on the bitcoin network is \$11,250.
The hash of a block header can serve as entropy for a network.
The most an attacker could do to manipulate such entropy is to produce multiple blocks and choose the preferred random number.
Choosing a preferred random number means orphaning a valid Bitcoin block and sacrificing the 25 bitcoin reward.
Not only is it costly to sacrifice a reward, it's also unlikely that an attacker will be able to get two blocks in a row without a massive amount of mining infrastructure.
Entropy generated from block headers can therefore be assumed to be highly secure.
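As a rough illustration of how a network might consume such entropy (the serialized header below is a stand-in byte string, not real Bitcoin data), the double SHA-256 of a block header can seed a deterministic public choice that every honest node reproduces identically:
\begin{verbatim}
import hashlib

# Rough illustration only: "block_header" is a placeholder, not a real
# serialized Bitcoin header.  Bitcoin block hashes are double SHA-256.
block_header = b"placeholder-serialized-block-header"
entropy = hashlib.sha256(hashlib.sha256(block_header).digest()).digest()

# Every node that sees the same block derives the same public choice,
# e.g. electing one participant from a hypothetical list.
participants = ["general-a", "general-b", "general-c", "general-d"]
winner = participants[int.from_bytes(entropy, "big") % len(participants)]
print(winner)
\end{verbatim}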
A network looking for entropy could use Bitcoin; however, the Bitcoin blockchain is large and busy, carrying many transactions.
Furthermore, there is no internal value to Bitcoin, and no guarantee of the Bitcoin price.
Should the value of 25 bitcoins suddenly plummet, the security of the entropy is thrown into question.
EntropyCoin has been created from the same basic principles with these concerns and optimizations in mind.
\section{EntropyCoin as a Lightweight Blockchain}
The first optimization is to keep the EntropyCoin blockchain lightweight.
With each block, 1 entropycoin is mined.
With each transaction, an integral volume of entropycoins must be transferred.
This will keep dust off of the network.
Once generated, hashes do not need to be kept.
Therefore, old blocks are discarded and only the network state is kept.
The network state maintains a list of entropycoin owners and the volume that they own, nothing else.
Because transactions are integral, and only 1 entropycoin is created per block, the state will have at most one entropycoin owner per block.
The blockrate for EntropyCoin is set to 2.5 minutes, modeled after Litecoin.
This is to provide a constant stream of entropy while leaving plenty of time for network propagation.
Other models, such as a self-adjusting blockrate, have been rejected because they could open up attack vectors.
To maximize security, Entropycoin is kept rigid.
At one block per 2.5 minutes, the total number of blocks produced each year is 210,380.
At 200 bytes per wallet (a generous number, likely to be less than 64 bytes), that is a maximum network size of 42.1 MB per year.
Given that the majority of wallets are likely to have many coins, the actual size of the network should be below 10 MB.
\section{Economic Incentive}
EntropyCoin does not try to compete as a way to trade value, so the value of EntropyCoin must be derived from elsewhere.
Instead, EntropyCoin derives value by being destructible.
Any wallet can release a transaction that destroys an integral volume of EntropyCoins.
It is assumed that various entities will have a vested interest in keeping entropy expensive to manipulate.
These entities are expected to contribute to the network by buying entropycoins and destroying them.
The other way to contribute is to mine entropycoins.
To maximize the incentive for mining on EntropyCoin, EntropyCoin will be merge mined with Bitcoin.
This means that Bitcoin miners can reap all of the standard profits of Bitcoin mining, as well as the profits from Entropycoin mining.
\section{Conclusion}
This relatively simple coin has been created as a lightweight solution to the problem of deterministic public entropy generation.
\end{document}
\chapter{Introduction}\label{part:into}
As we all know, natural selection is the process that gradually eliminates inferior species over time, and the organisms of today are its products. In other words, the biological structures of living creatures are well fitted to their current environment. At the same time, it is plainly obvious that most animals have two eyes (referred to as binocular vision). So what are the advantages of this biological structure? Why did nature choose it rather than a single eye? One possible reason is that having two eyes provides a much wider field of view. However, research shows that humans have a maximum horizontal field of view of 194 degrees with two eyes, but 114 degrees of this is the binocular field of view (seen by both eyes)\cite{Howard1995Binocular}. Why is there so much overlap between the two fields of view, rather than less overlap and wider uni-ocular fields (seen by only one eye)?
The answer is that having two eyes gives us the perception of depth and 3-dimensional structure, obtained on the basis of the visual information they provide, which is referred to as stereopsis. Because the eyes of humans and many animals are located at different lateral positions on the head, binocular vision results in two slightly different images projected onto the retinas of the eyes. The differences, mainly in the horizontal positions of corresponding points in the two images, are referred to as binocular disparities. Our brain processes this information and gives us precise depth perception.
In this project, we investigate how binocular vision works by implementing a stereo vision model. In stereo vision, two cameras instead of eyes are displaced horizontally from one another and used to obtain two differing views of a scene, in a manner similar to human binocular vision. By comparing these two images, relative depth information can be extracted in the form of a disparity map, which encodes the difference in horizontal coordinates of corresponding image points. The values in this disparity map are inversely proportional to the scene depth at the corresponding pixel location.
Chapter 2 is about implementing a spatial cross-correlation computation model. In the first two sections, a one-dimensional cross-correlation model is built, which is very useful in signal processing. To test its effectiveness, we apply it to a particular signal-offset computation task. In the next two sections, the model is extended to two dimensions, so that it can work on image comparison. Similarly, the model is applied to a particular pattern-search task. Because spatial cross-correlation is time-consuming, a faster method achieving the same computational result is needed, so we explore and implement spectral cross-correlation in Section 5 and compare the efficiency of the two methods. The next section is an application of one-dimensional cross-correlation to pattern detection.
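As a small illustration of this idea (using an arbitrary synthetic signal and shift, not data from the project), the offset between two one-dimensional signals can be estimated by locating the peak of their cross-correlation:
\begin{verbatim}
import numpy as np

# Illustrative sketch: estimate the offset between two 1-D signals from the
# peak of their cross-correlation.  Signal and shift are arbitrary.
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
shift = 37
y = np.roll(x, shift) + 0.1*rng.standard_normal(500)

corr = np.correlate(y, x, mode="full")       # spatial cross-correlation
lags = np.arange(-len(x) + 1, len(y))
estimated_shift = lags[np.argmax(corr)]
print(estimated_shift)                       # expected to be close to 37
\end{verbatim}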
In Chapter 3, the image comparison model and the calibration model are built separately and then combined to reconstruct a 3D scene from two calibrated images. In the first two sections, the calibration model is created using calibration plates. In Section 3, the image comparison model is built and a mapping algorithm to extract depth data from the comparison result is presented. Section 4 covers some optimization strategies for the image comparison model, for speed or accuracy. Sections 5 and 6 cover the reconstruction of the 3D scene using the models presented before.
\chapter{Problem Statement and Proposed Approach}
% =============================
\section{Problem Statement}
Since the introduction of the Transformer architecture, the quality of language models has drastically increased \cite{devlin2018bert, radford2018improving}. They are now utilized in numerous downstream applications such as chatbots for medical purposes and an interactive \textit{Dungeons \& Dragons} hosted by an AI \cite{aidungeon}. In short, language models have become a commodity that nearly anyone can use for services that require some sort of natural language processing.
Yet, language models are basically black boxes to us. Because of the enormous size of the support of these models, the learned probability distribution over natural language strings is difficult to analyze, and therefore we have no idea what exactly goes on inside. We train language models on large amounts of text scraped from web sources, and it is difficult to know what bias those large datasets might induce. For example, researchers have shown that models trained on certain texts encode social biases, such as religion, gender and race \cite{abid2021persistent}. Through various probing techniques we can take a peek at some properties, which can then be influenced by tweaking parameters, dataset augmentation and ``creating pseudo-training data to encode those biases'' \cite{MarasovicGradient2018NLP} - but this is the extent of it. According to Ricardo Baeza-Yates from the Institute for Experimental AI \cite{lmfail}, the three major limitations of language models are ``\textit{unequal language resources}'', ``\textit{the bias from learned text}'' and ``\textit{the lack of semantic understanding at an extreme cost}''. This shows that we need better tools to understand what exactly our language models learn in order to be able to train and use them accordingly.
% =============================
\section{Proposed Approach}
We propose an approach for looking at the semantic spaces---as defined by topics---learned by these language models. In order to understand some of the model's inductive biases, we look at those semantic spaces and at how they differ from those of the corpora the models were trained on. Inductive biases are biases built into the model that are not based on the training data.
We utilize a known technique from natural language processing: the topic model. Through topic modeling techniques we cluster the generated output from certain language models as well as the corpus they are trained on (see fig. \ref{fig:generationexample} and \ref{fig:analyzing_pipeline}). Finally, we compare those two topic models using a metric of our own design.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{figures/generationexample}
\caption{Illustration of the correlation between corpora $C1$ and $C2$.}
\label{fig:generationexample}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{figures/analyzing_pipeline}
\caption{Illustration of the analysis pipeline. $C1$ and $C2$ are different corpora.}
\label{fig:analyzing_pipeline}
\end{figure}
Specifically, we train a GPT-2 \cite{gpt-2} and a Transformer-XL \cite{transformer-xl} model on the full Wikitext103 \cite{wikitext} corpus ($C1$). Then, we use latent Dirichlet allocation (LDA) \cite{blei2003latent} and neural LDA \cite{neuralLDA} to train topic models for $C1$ and $C2$, where $C2$ is the generated output from our trained language models.
For us to be able to gain some insight, we need more data than only from comparing two language models and two topic models. Therefore, when generating a corpus from a language model, we compare different sampling techniques with each other (ancestral sampling, Top-P sampling \cite{holtzman2019curious} and Typical sampling \cite{meister2022typical}). For the topic models we use different topic sizes ($\{2, 3, 5, 10, 20, 50, 100\}$) and additional corpora (Wikitext103 \cite{wikitext}, ArXiv Metadata \cite{arxiv}, the original GPT-2 model \cite{gpt2model}, the original Transformer XL model \cite{trafoxlmodel}).
We know topic models use the words from tokenized text to create topics (probability distributions over words). Thus, we additionally consider the possibility of word combinations, e.g. \texttt{machine\_learning}, \texttt{deep\_neural\_network} and so on. This means that we compare the influence of bigrams (two word combinations) and trigrams (three word combinations) in our text instead of only using single words for the creation of topic models.
If we want to compare two topic models with each other, their vocabulary has to be exactly the same. The vocabulary contains all words of the training text for the topic model after the tokenization process. We decide between taking the intersection and the union of the two vocabularies. As we do not know the impact of taking one over the other, we evaluate both in our research.
To analyze the quality and stability of our topic models, we employ the coherence score $C_v$ (see sec. \ref{sec:coherence}). For comparing two topic models, we design our own metric. First, we calculate the probability of every topic in both corpora. Then, we calculate the distance matrix using the Jensen-Shannon distance (see sec. \ref{sec:diffmet}), weigh the value of the two closest topics for each row and column by the respective topic probability, and sum them up.
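The sketch below illustrates the shape of this metric. The topic-word distributions and topic probabilities are random placeholders, and the exact weighting and normalization of our implementation may differ in detail.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import jensenshannon

# Schematic sketch of the comparison metric described above.  The topic-word
# matrices and topic probabilities are random placeholders.
rng = np.random.default_rng(0)
vocab_size, k1, k2 = 1000, 10, 10
topics_a = rng.dirichlet(np.ones(vocab_size), size=k1)  # topics of C1
topics_b = rng.dirichlet(np.ones(vocab_size), size=k2)  # topics of C2
p_a = rng.dirichlet(np.ones(k1))   # topic probabilities in C1
p_b = rng.dirichlet(np.ones(k2))   # topic probabilities in C2

# Pairwise Jensen-Shannon distances between topics of the two models.
dist = np.array([[jensenshannon(ta, tb) for tb in topics_b] for ta in topics_a])

# For each topic, take the distance to its closest counterpart in the other
# model, weight it by the topic's probability, and sum over both directions.
score = (p_a * dist.min(axis=1)).sum() + (p_b * dist.min(axis=0)).sum()
print(score)
\end{verbatim}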
\documentclass{article}
\usepackage[cm]{fullpage} %very small margins (around 1.5cm)
\usepackage{url} %use \url{} in the document
\usepackage{graphicx} %use \includegraphics[scale=1.00]{file.jpg} for images
\usepackage{minted}
\begin{document}
\title{Universal Game System specification}
\date{February 15, 2012}
\author{Bysiek Mateusz, Peryt Stanislaw, Wisniewski Adrian, Witan Maciej}
\maketitle
\tableofcontents
\pagebreak[4]
\section{Abstract}
\subsection{License}
Apache License 2.0
\subsection{Responsibilities}
This work is a result of collaboration of 4 people. To make things easier,
we've made clear who's doing what, and included that information in this document.
\subsubsection{People}
\paragraph{Bysiek M.} E-mail: \url{[email protected]}
Class diagrams. Protocol definition. Input data definition. Repository for documentation.
\paragraph{Peryt S.} E-mail: \url{[email protected]}
Quality control, coordination of the work. This document.
\paragraph{Wisniewski A.} E-mail: \url{[email protected]}
Use-case diagrams.
\paragraph{Witan M.} E-mail: \url{[email protected]}
Event flow diagrams. State diagrams. Activity diagrams.
\subsubsection{Material}
\begin{enumerate}
\item Requirements Specification
\begin{enumerate}
\item System actors' use cases
\begin{enumerate}
\item person running a game server
\item person running a game client
\item a game client accessing and using game server
\item game server communicating with game client
\end{enumerate}
All of the above: \url{wisniewskia}
\end{enumerate}
\item Complete Design Documentation (Game Server and Game)
\begin{enumerate}
\item Input data format specification: \url{bysiekm}
\item Class diagrams describing structures, modules and architecture of the system: \url{bysiekm}
\item Event flow diagrams describing interaction between game server and game applications: \url{witanm}
\item State diagrams describing states of components of Game server and components of game applications: \url{witanm}
\item Important activity diagrams describing overall activities of the server and clients: \url{witanm}
\item Communication protocol design (if possible a set of XML tags): \url{bysiekm}
\item Additional relevant comments: \url{peryts}
\item Special system states description (initialization, shut down, failures): \url{peryts}
\end{enumerate}
\item Vocabulary (if needed): \url{peryts}
\end{enumerate}
\subsection{Github repo}
This document (written in LaTeX) and all diagrams (created in Visio) are available online
in a open Github repository: \url{https://github.com/mbdevpl/UniversalGameSystem}
\pagebreak[4]
\subsection{Overview}
\subsubsection{Vocabulary}
\paragraph{Actor}
any of the three: game server, game master, game client.
\paragraph{Game}
activity performed together by a number of players. Each game must have the same rules for all participants.
A game consists of moves made subsequently by all players in a given order.
Rules are independent of the server. A game may begin when the required number of participants has been collected.
The strategy for collecting participants may be different for different games and is the responsibility
of the game master. The game master also decides when the game is finished and who is the winner.
\paragraph{Game finalization} state of a game which closes a single game.
All players are informed about the game result.
\paragraph{Game initiation} state of a game which collects players. This stage
is started by the game server and is controlled by the game master.
\paragraph{Game master} an application which connects to the server. It controls player
registration (collecting), move order and correctness of moves. The game master
also defines the game that is going to be played by the collected group of players. Each move
is sent first from a player to the server, then to the game master. If the move is correct,
the master sends a confirmation (including the new game state) to the server,
which passes it to all players. Each game master may control many games at the same time.
\paragraph{Game State} information about the game state that is sent between players,
the game master and the server. The server passes this information to the game master, who is responsible
for all rule-related decisions. The game state is sent to all players after each move.
The server does not interpret this information in any way since it is different
for different game types.
\paragraph{Game Type} checkers, chess, tic-tac-toe, ships, core-wars, etc.
\paragraph{Loser} player who loses. There is no limitation on the number of losers in a single game.
After a game is finished each player must be a loser or a winner.
\paragraph{Move} single move performed by a single player in a game upon given game state.
\paragraph{Player (game participant)} single game application that is capable of connecting
to a game server, defining its game type, receiving game state information and performing moves
using artificial intelligence.
\paragraph{Player pool} collection of players who wait to be invited to a game.
Each player who wants to join a game must first register in the pool for its particular
game type. The game server uses this pool to select players for games.
\paragraph{Result} a two-state piece of information about all players, sent to each player when
a game is finished: winner or loser.
For example: player1: winner, player2: loser, player3: winner, player4: loser
\paragraph{Rules} set of rules for a particular game. Rules are defined implicitly by the players
who perform the moves. The game server does not make any rule-related decisions.
Correctness of moves is controlled by the game master.
\paragraph{Server} organizes games between players. It generally passes messages between
the game master and the players. It does not make any rule-related decisions;
they are all delegated to the game master. The game server may organize an unlimited number of games
at the same time. The game server chooses players from the player pool. Players never communicate
directly but only via the server.
\paragraph{Winner} player who wins. There is no limitation on the number of winners in a single game.
After a game is finished each player must be a loser or a winner.
\subsubsection{Game master}
\paragraph{Game master} is an application responsible for performing games between players. It is also responsible
for sending a game termination message when it detects an illegal move or a timeout violation.
\paragraph{Activities} :
\begin{itemize}
\item sends game information to the server
\item receives list of players who participate this game
\item prepares game initial state
\item decides who makes the first move
\item sends move request to the selected player (via server)
\item receives move from the server
\item sends game state and move to the server (forwarded then to all
participants)
\item decides if the game is finished or not
\item in such case sends special end game message to the server
\item decides who makes the next move
\end{itemize}
\subsubsection{Player}
\paragraph{Player} is an application responsible for Artificial Intelligence in games. It should consider all possible moves and choose the best one. The player application is also responsible for sending a game termination message when it detects a game master timeout.
\paragraph{Activities} :
\begin{itemize}
\item receives a game state from the server
\item receives a move request from the server
\item makes move and sends it to the server
\item receives information about game final result
\end{itemize}
\subsubsection{Game server}
\paragraph{Game server} is an application responsible for providing communication in the system. The server application can run in one of two modes: normal mode and championship mode.
\paragraph{Activities} :
\begin{itemize}
\item receives game state from game master
\item receives the number of next player to make a move
\item sends game state to players
\item sends move request to given player
\item receives move from player
\item sends this move to game master
\end{itemize}
\section{Communication protocol}
The XML tags listed below are the XML representations of objects, which are defined
in the class diagram section of this document. This is a subset of those classes,
because only a subset of them will ever be present in any communication.
\begin{itemize}
\item \verb|message| - generic container sent between actors present in the system
\begin{minted}[mathescape,linenos,numbersep=5pt,gobble=0,framesep=2mm]{xml}
<!-- 'from' and 'to' contain ids of actors that send and receive the message respectively,
to="" i.e. empty receiver means that message is a broadcast,
timestamp contains unix time value https://en.wikipedia.org/wiki/Unix_time
- milliseconds (i.e. fractions of unix time units) are supported
-->
<message from="" to="" timestamp="">
<!-- data goes here -->
</message>
\end{minted}
\item \verb|systemdata| - for system events, i.e. login, logout, timeout, ping, registration,
and information about game state such as: game is on, player no.\# has won, etc.
\begin{minted}[mathescape,linenos,numbersep=5pt,gobble=0,framesep=2mm]{xml}
<message from="" to="" timestamp="">
<systemdata>
<!-- this message contains system data -->
</systemdata>
</message>
\end{minted}
\item \verb|move| - contains the location to which the move is being performed, and any other relevant information, so that
the actor that receives this data can simulate the action of the actor that sent it and get an identical resulting game state
as in the case of the original move
\begin{minted}[mathescape,linenos,numbersep=5pt,gobble=0,framesep=2mm]{xml}
<message from="" to="" timestamp="">
<!-- 'player' contains id of a player that is making the move,
it generally is expected to be equal to 'from' but there can be some exceptions
if a game allows for example movement of units of an allied player -->
<gamemove player="">
<!-- this message contains a single move -->
</gamemove>
</message>
\end{minted}
\item \verb|gamestate| - contains current state of the game, i.e. game board, list of players, possibly
history of the moves, and any other relevant information, such that the client can reconstruct the view
of the game from scratch using only information from this tag
\begin{minted}[mathescape,linenos,numbersep=5pt,gobble=0,framesep=2mm]{xml}
<message from="" to="" timestamp=""> <!-- this message contains state of the game -->
<!-- 'currplayer' is the id of a current player,
and currplayer="" i.e. empty current player means
that there is no current player because for example the game is not turn based -->
<gamestate currplayer="">
<scores>
<!-- set of scores, that by the way (in a manner of speaking)
provides the complete list of players -->
</scores>
<realm>
<!-- universe of the game, meaning: game board, set of cards,
or something else - depending on the game -->
</realm>
</gamestate>
</message>
\end{minted}
\end{itemize}
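For illustration only (this is not part of the protocol specification, and the actor ids are hypothetical), such messages can be assembled and parsed with a few lines of code using a standard XML library:
\begin{minted}[mathescape,linenos,numbersep=5pt,gobble=0,framesep=2mm]{python}
import time
import xml.etree.ElementTree as ET

# Illustration only: build a <message> carrying a <gamemove>.
# The actor ids "player7" and "server" are hypothetical examples.
msg = ET.Element("message", {"from": "player7", "to": "server",
                             "timestamp": str(time.time())})
move = ET.SubElement(msg, "gamemove", {"player": "player7"})
move.text = "e2-e4"  # game-specific payload, interpreted by the game master

wire = ET.tostring(msg, encoding="unicode")

# Receiving side: parse the envelope and dispatch on the payload tag.
parsed = ET.fromstring(wire)
payload = parsed[0]
print(parsed.get("from"), payload.tag, payload.text)
\end{minted}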
%\includegraphics[scale=1.00]{UGS_protocol.jpg}
%\pagebreak[4]
\section{Use-cases}
\subsection{Person running a game server}
Actions that can be performed by a person running a game server.
\includegraphics[scale=1.00]{UGS_usecases_gameserver.jpg}
\subsection{Person running a game client}
Actions that can be performed by a person running a game client, after successful connection to a server.
\includegraphics[scale=1.00]{UGS_usecases_client.jpg}
\pagebreak[4]
\subsection{A game client accessing and using game server}
Actions performed by all parties during the game.
\includegraphics[scale=1.20]{UGS_usecases_application-client,server,master.jpg}
\subsection{Game server communicating with game client}
Game server communicating with game client, during the game.
\includegraphics[scale=1.20]{UGS_usecases_application-client,server.jpg}
\section{Input data format}
Every object that can be sent between game actors is capable of being converted into XML and back,
without any UGS-specific data loss.
\pagebreak[4]
\section{Class diagram}
GameActor is the core class of the system, because it defines the common
characteristics of server, master and client. Class Message is also important, because it can carry
any information between any two actors.
Game rules is the set of all relevant data that make up the rules of the game, together with a set of operations
that make it possible for the interested parties to check aspects of the game such as: the validity
of a move, whether the game is still on or someone has just won, and constraints of the game such as time limits
for a single move or time limits for a player over a whole game.
Game type contains information about kind of game, in the sense of whether the players play simultaneously
or take turns, also whether there are teams of players in this game or not, and any other relevant information.
Game state contains the current state of the game: the game board, the state of each player's hand
in the case of cards or other similar games, the list of players, and any other relevant information,
such that there is no ambiguity about the state of the game when an actor is basing its knowledge
only on an object of this class.
\includegraphics[scale=1.20]{UGS_classes.jpg}
\pagebreak[4]
\section{Event flow diagrams}
Interaction between game server and game applications.
\subsection{Game initialization}
The diagram below shows the events happening during the game initialization process.
\includegraphics[scale=1.00]{UGS_events_Game_initilalization.jpg}
\subsection{Game finalization}
That is how the game will be finalized.
\includegraphics[scale=1.00]{UGS_events_Game_finalization.jpg}
\pagebreak[4]
\subsection{Client-server communication}
Complete view on client server communication.
\includegraphics[scale=1.00]{UGS_events_Client-server_communication.jpg}
\subsection{Master-server-player interaction}
Complete view on how the game master will communicate with game clients via the game server.
\includegraphics[scale=1.00]{UGS_events_master-server-player_interaction.jpg}
\pagebreak[4]
\section{State diagrams}
%\subsection{Client-server communication}
%Text text text\ldots
%\includegraphics[scale=1.00]{UGS_states_cielnt-server_communication.jpg}
\subsection{Game master}
List of states of the game master. At the beginning the game master is started, and is waiting for requests.
\includegraphics[scale=1.10]{UGS_states_game_master.jpg}
\subsection{Player-game}
\includegraphics[scale=1.10]{UGS_states_player-game.jpg}
\pagebreak[4]
\subsection{Server}
\includegraphics[scale=1.10]{UGS_states_server.jpg}
\section{Activity diagrams}
\subsection{Championship management}
\includegraphics[scale=0.80]{UGS_activities_champioship_management.jpg}
\pagebreak[4]
\subsection{Game creation}
Both server and game master participate in the process of game creation.
\includegraphics[scale=1.00]{UGS_activities_game_creation.jpg}
\subsection{Login}
The below diagram shows the procedure of adding a client to a server.
\includegraphics[scale=1.00]{UGS_activities_login.jpg}
\pagebreak[4]
\subsection{Logout}
The diagram below depicts the process of removing a client from the server.
\includegraphics[scale=1.00]{UGS_activities_logout.jpg}
\section{Additional comments}
\begin{itemize}
\item Universal Game System is designed to organize games between game applications.
\item It consists of three applications working together: game player, game master and game server.
\item The master and player applications support timeout and illegal move detection.
\item The server is the only connection between game masters and players.
\item All players work in a loop.
\item Game termination messages are handled in different ways depending on the reason for termination.
\item All applications should support handling termination messages.
\end{itemize}
\end{document}
\subsection{Milling}
%% !TEX root = manual.tex
\chapter{Topologies}
\label{chapter:topologies}
The torus topology is straightforward and easy to understand.
Here we introduce the basics of other topologies within SST that are more complex and require extra documentation to configure properly.
These are generally higher-radix or path-diverse topologies like fat tree, dragonfly, and flattened butterfly.
As noted in \ref{sec:tutorial:topology}, a more thorough and excellent discussion of these topologies is given in ``High Performance Datacenter Networks'' by Dennis Abts and John Kim.
%\section{Topology Query Utility}
%\label{sec:topologyQuery}
%Understanding topology inputs and geometries can sometimes be challenging.
%\sstmacro provides an executable for testing topology inputs and doing example coordinate computations.
%After making and installing, an executable \inlineshell{sstmac_top_info} will appear in the \inlineshell{bin} folder.
%The invocation of \inlineshell{sstmac_top_info} is exactly the same as the main \inlineshell{sstmac} executable.
%For the example parameter file named \inlineshell{machine.ini}:
%
%\begin{ViFile}
%topology.name = fat_tree
%topology.geometry = 4 3
%\end{ViFile}
%
%we run
%
%\begin{ShellCmd}
%bin> sstmac_top_info -f machine.ini
%\end{ShellCmd}
%which produces the output
%
%\begin{ViFile}
%Number of nodes: 81
%Number of leaf switches: 27
%Number of switches: 94
%\end{ViFile}
%
%detailing the produced geometry. Here the fat tree has a total of 94 switches, 27 of which are ``leaf'' switches directly connected to compute nodes.
%The output is followed by the prompt
%
%\begin{ShellCmd}
%NextInput:
%\end{ShellCmd}
%
%One can either enter a single number (switch ID) or set of coordinates.
%If given a switch ID, the coordinates are computed.
%If coordinates are given, the switch ID is computed.
%
%\begin{ShellCmd}
%NextInput: 32
%Switch ID maps to coordinates [ 2 0 1 2 ]
%NextInput: 2 0 1 2
%Coordinates map to switch ID 32
%\end{ShellCmd}
%
%The program is just exited with Ctrl-C.
%The meaning of the above coordinates is detail below for fat tree (Section \ref{sec:tutorial:fattree}).
\input{Torus}
\input{Hypercube}
\input{FatTree}
\input{Cascade}
% $Id$
%
% Earth System Modeling Framework
% Copyright 2002-2019, University Corporation for Atmospheric Research,
% Massachusetts Institute of Technology, Geophysical Fluid Dynamics
% Laboratory, University of Michigan, National Centers for Environmental
% Prediction, Los Alamos National Laboratory, Argonne National Laboratory,
% NASA Goddard Space Flight Center.
% Licensed under the University of Illinois-NCSA License.
%\subsubsection{Restrictions and Future Work}
\begin{enumerate}
\item{\bf Not reconciled.}
Attachable Methods are PET-local settings on an object. Currently Attachable
Methods cannot be reconciled (i.e.\ they are ignored during {\tt ESMF\_StateReconcile()}).
\item{\bf No copy or move.}
Currently Attachable Methods cannot be copied or moved between objects.
\end{enumerate}
\section*{Exercise 20.3-4}
\subsection*{What happens if you call vEB-TREE-INSERT with an element, that is already in the vEB tree?}
The procedure will terminate without inserting anything:
\\
Lines 1-2 are not possible, since there is at least one element in the tree. We know this because $x$ is a duplicate of an element in the tree.
\\
Lines 3-4 are not possible either, since $x$ is a duplicate of an element in the tree, so the lowest possible value for $x$ is $x=V.min$.
\\
Lines 6-8 are not possible either, so eventually we will hit line 9, where the recursive call brings us to a smaller cluster. This will keep happening until we hit a base-case vEB tree, meaning we no longer fulfill line 5 and move on to lines 10-11.
\\
Lines 10-11 are not possible either (again, $x$ is a duplicate of an element in the tree, so the highest possible value for $x$ is $x=V.max$), so the procedure terminates without doing anything.
\subsection*{What happens if you call vEB-TREE-DELETE with an element, that is not in the vEB tree?}
A wrong value will be deleted:
\\
Lines 1-3 simply check whether the tree has only one element and, if so, delete it, regardless of the value of $x$.
\\
Lines 4-8 check whether we are in a base-case vEB tree and, if so, update $V.min$ and $V.max$ based on the value of $x$, which is assumed to be either 0 or 1 (the only two possible values in a base-case vEB tree). Since $x$ is none of these values, this update will end up deleting $V.max$ and leaving $V.min$ unchanged.
\\
Lines 9-12 are not possible, since we know that $x$ is not in the tree (so $x \neq V.min$), and so we will hit line 13, where the recursive call brings us to a smaller cluster. This will keep happening until we hit a base-case vEB tree, which brings us to lines 4-8, which end up deleting $V.max$.
\\
Lines 14-22 cannot be reached.
\subsection*{Explain why the procedures exhibit the behavior that they do}
Recursive calls always bring us down to a smaller tree until we hit a base case, and neither procedure ever verifies whether $x$ is actually present; the preconditions (no duplicate on insert, membership on delete) are silently relied upon, which produces the behavior described above.
\subsection*{Show how to modify vEB trees and their operations so that we can check in constant time whether an element is present}
We introduce an extra array of $u$ bits. For each element of the universe, the corresponding entry indicates whether the element is present in the tree (1) or not (0). vEB-TREE-INSERT sets the bit and vEB-TREE-DELETE clears it, so membership can be checked in constant time by reading a single bit.
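As an illustration, the following sketch (in C++, with a hypothetical \texttt{VebTree} placeholder standing in for the actual vEB operations, which are not shown) maintains such a bit array alongside the tree; the array is updated on every insert and delete, so a membership query only reads one bit.
\begin{verbatim}
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a vEB tree; only the calls used below are stubbed.
struct VebTree {
    void insert(std::size_t) {}
    void remove(std::size_t) {}
};

class VebWithMembership {
public:
    explicit VebWithMembership(std::size_t u) : present(u, false) {}

    void insert(std::size_t x) {
        // The presence bit also guards against inserting a duplicate.
        if (!present[x]) { tree.insert(x); present[x] = true; }
    }
    void remove(std::size_t x) {
        // The presence bit also guards against deleting a missing element.
        if (present[x]) { tree.remove(x); present[x] = false; }
    }
    // Constant-time membership test: read a single bit.
    bool contains(std::size_t x) const { return present[x]; }

private:
    VebTree tree;
    std::vector<bool> present;  // one bit per element of the universe [0, u)
};
\end{verbatim}
Note that the guards in \texttt{insert} and \texttt{remove} also avoid the two problematic calls discussed above (inserting a duplicate and deleting an element that is not in the tree).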
"alphanum_fraction": 0.7483843171,
"avg_line_length": 72.53125,
"ext": "tex",
"hexsha": "0e016f415a5355de75339df827143cfc89f84621",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "pdebesc/AADS",
"max_forks_repo_path": "Uge3/Ex.20.3-4.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "pdebesc/AADS",
"max_issues_repo_path": "Uge3/Ex.20.3-4.tex",
"max_line_length": 318,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "pdebesc/AADS",
"max_stars_repo_path": "Uge3/Ex.20.3-4.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 614,
"size": 2321
} |
% Created 2021-03-26 Fri 08:53
% Intended LaTeX compiler: pdflatex
\documentclass[t]{beamer}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{grffile}
\usepackage{longtable}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{amssymb}
\usepackage{capt-of}
\usepackage{hyperref}
\mode<beamer>{\usetheme{Amsterdam}}
\mode<beamer>{\usecolortheme{rose}}
\usepackage{fontspec}
\usepackage{polyglossia}
\setmainlanguage[babelshorthands=true]{german}
\usepackage{hyperref}
\usepackage{color}
\usepackage{xcolor}
\usepackage[misc]{ifsym}
\definecolor{darkblue}{rgb}{0,0,.5}
\definecolor{darkgreen}{rgb}{0,.5,0}
\definecolor{islamicgreen}{rgb}{0.0, 0.56, 0.0}
\definecolor{darkred}{rgb}{0.5,0,0}
\definecolor{mintedbg}{rgb}{0.95,0.95,0.95}
\definecolor{arsenic}{rgb}{0.23, 0.27, 0.29}
\definecolor{prussianblue}{rgb}{0.0, 0.19, 0.33}
\definecolor{coolblack}{rgb}{0.0, 0.18, 0.39}
\hypersetup{colorlinks=true, breaklinks=true, anchorcolor=blue,linkcolor=white, citecolor=islamicgreen, filecolor=darkred, urlcolor=darkblue}
\usepackage{booktabs}
\usepackage{pgf}
\usepackage{minted}
\RequirePackage{fancyvrb}
\DefineVerbatimEnvironment{verbatim}{Verbatim}{fontsize=\scriptsize}
\usetheme{default}
\author{Göran Kirchner\thanks{e\[email protected]}}
\date{2021-03-26}
\title{Functional Programming in F\# (3)}
\subtitle{Fundamentals \& Functional Design}
\hypersetup{
pdfauthor={Göran Kirchner},
 pdftitle={Functional Programming in F\# (3)},
pdfkeywords={},
pdfsubject={},
pdfcreator={Emacs 27.1 (Org mode 9.4.4)},
pdflang={English}}
\begin{document}
\maketitle
\section{Goal }
\label{sec:org83e3ac2}
\begin{frame}[label={sec:org5463128}]{Agenda}
\begin{itemize}
\item Homework
\item Principles of functional design
\item Refactoring (exercise)
\end{itemize}
\end{frame}
\section{Homework }
\label{sec:org03c38bb}
\begin{frame}[label={sec:orgf94adee},fragile]{Queen Attack}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{fsharp}
open System
let create (row, col) = row >= 0 && row < 8 && col >= 0 && col < 8
let canAttack (queen1: int * int) (queen2: int * int) =
let (r1, c1) = queen1
let (r2, c2) = queen2
Math.Abs(r1 - r2) = Math.Abs(c1 - c2) || r1 = r2 || c1 = c2
let whiteQueen1, blackQueen1 = (2, 2), (1, 1)
let test1 = canAttack blackQueen1 whiteQueen1
let whiteQueen2, blackQueen2 = (2, 4), (6, 6)
let test2 = canAttack blackQueen2 whiteQueen2
[test1; test2]
\end{minted}
\begin{verbatim}
val create : row:int * col:int -> bool
val canAttack : int * int -> int * int -> bool
val whiteQueen1 : int * int = (2, 2)
val blackQueen1 : int * int = (1, 1)
val test1 : bool = true
val whiteQueen2 : int * int = (2, 4)
val blackQueen2 : int * int = (6, 6)
val test2 : bool = false
\end{verbatim}
\end{frame}
\begin{frame}[label={sec:org7a72229},fragile]{Raindrops}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{fsharp}
let rules =
[ 3, "Pling"
5, "Plang"
7, "Plong" ]
let convert (number: int): string =
let divBy n d = n % d = 0
rules
|> List.filter (fst >> divBy number)
|> List.map snd
|> String.concat ""
|> function
| "" -> string number
| s -> s
let test = convert 105
test
\end{minted}
\begin{verbatim}
val rules : (int * string) list = [(3, "Pling"); (5, "Plang"); (7, "Plong")]
val convert : number:int -> string
val test : string = "PlingPlangPlong"
\end{verbatim}
\end{frame}
\begin{frame}[label={sec:orgd9a2859},fragile]{Gigasecond}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{fsharp}
let add (beginDate : System.DateTime) = beginDate.AddSeconds 1e9
let test = add (DateTime(2015, 1, 24, 22, 0, 0)) = (DateTime(2046, 10, 2, 23, 46, 40))
test
\end{minted}
\begin{verbatim}
val add : beginDate:DateTime -> DateTime
val test : bool = true
\end{verbatim}
\end{frame}
\begin{frame}[label={sec:org4693a46},fragile]{Bank Account (1)}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{fsharp}
type OpenAccount =
{ mutable Balance: decimal }
type Account =
| Closed
| Open of OpenAccount
let mkBankAccount() = Closed
let openAccount account =
match account with
| Closed -> Open { Balance = 0.0m }
| Open _ -> failwith "Account is already open"
\end{minted}
\begin{verbatim}
type OpenAccount =
{ mutable Balance: decimal }
type Account =
| Closed
| Open of OpenAccount
val mkBankAccount : unit -> Account
val openAccount : account:Account -> Account
\end{verbatim}
\end{frame}
\begin{frame}[label={sec:org40062e0},fragile]{Bank Account (2)}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{fsharp}
let closeAccount account =
match account with
| Open _ -> Closed
| Closed -> failwith "Account is already closed"
let getBalance account =
match account with
| Open openAccount -> Some openAccount.Balance
| Closed -> None
let updateBalance change account =
match account with
| Open openAccount ->
lock (openAccount) (fun _ ->
openAccount.Balance <- openAccount.Balance + change
Open openAccount)
| Closed -> failwith "Account is closed"
\end{minted}
\begin{verbatim}
val closeAccount : account:Account -> Account
val getBalance : account:Account -> decimal option
val updateBalance : change:decimal -> account:Account -> Account
\end{verbatim}
\end{frame}
\begin{frame}[label={sec:orgfba2a2d},fragile]{Bank Account (3)}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{fsharp}
let account = mkBankAccount() |> openAccount
let updateAccountAsync =
async { account |> updateBalance 1.0m |> ignore }
let ``updated from multiple threads`` =
updateAccountAsync
|> List.replicate 1000
|> Async.Parallel
|> Async.RunSynchronously
|> ignore
let test1 = getBalance account = (Some 1000.0m)
test1
\end{minted}
\begin{verbatim}
val account : Account = Open { Balance = 1000.0M }
val updateAccountAsync : Async<unit>
val ( updated from multiple threads ) : unit = ()
val test1 : bool = true
\end{verbatim}
\end{frame}
\begin{frame}[label={sec:org2346c6b}]{Break}
\begin{block}{}
You’re bound to be unhappy if you optimize everything.
\null\hfill -- Donald Knuth
\end{block}
\end{frame}
\section{Principles of Functional Design }
\label{sec:org7afb2c6}
\begin{frame}[label={sec:orgcc0319b}]{Functional Design}
\(\leadsto\) \href{./3.1 Functional Design Patterns.pdf}{Functional Design Patterns}
\end{frame}
\begin{frame}[label={sec:org089c18f}]{Principles (1)}
\begin{itemize}
\item Functions are data!
\item Composition everywhere
\item Functions everywhere
\item Types are not classes
\item Types can also be combined (algebraic data types)
\item Type signatures do not lie!
\item Static types for modelling the domain (more on this later ;)
\end{itemize}
\end{frame}
\begin{frame}[label={sec:org48a8169}]{Principles (2)}
\begin{itemize}
\item Parameterize everything!
\item Type signatures are ``interfaces''
\item Partial application is ``dependency injection''
\item Monads correspond to the ``chaining of continuations''
\begin{itemize}
\item bind for options
\item bind for errors
\item bind for tasks
\end{itemize}
\item ``map'' functions
\begin{itemize}
\item Use the ``map'' function of generic types!
\item When you define a generic type, also define a ``map'' function
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}[label={sec:org6c5a8a0}]{Break}
\begin{block}{}
If we’d asked the customers what they wanted, they would have said “faster horses”.
\null\hfill -- Henry Ford
\end{block}
\end{frame}
\section{Refactoring }
\label{sec:org52cdd10}
\begin{frame}[label={sec:org42bb9c4},fragile]{Tree Building (Exercise)}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{shell}
exercism download --exercise=tree-building --track=fsharp
\end{minted}
\end{frame}
\begin{frame}[label={sec:org2cdbc3a},fragile]{Tree Building (Imperative)}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{fsharp}
let buildTree records =
let records' = List.sortBy (fun x -> x.RecordId) records
if List.isEmpty records' then failwith "Empty input"
else
let root = records'.[0]
if (root.ParentId = 0 |> not) then
failwith "Root node is invalid"
else
if (root.RecordId = 0 |> not) then failwith "Root node is invalid"
else
let mutable prev = -1
let mutable leafs = []
for r in records' do
if (r.RecordId <> 0 && (r.ParentId > r.RecordId || r.ParentId = r.RecordId)) then
failwith "Nodes with invalid parents"
else
if r.RecordId <> prev + 1 then
failwith "Non-continuous list"
else
prev <- r.RecordId
if (r.RecordId = 0) then
leafs <- leafs @ [(-1, r.RecordId)]
else
leafs <- leafs @ [(r.ParentId, r.RecordId)]
\end{minted}
\end{frame}
\begin{frame}[label={sec:orgc958f70},fragile]{Tree Building (Functional)}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{fsharp}
let buildTree records =
records
|> List.sortBy (fun r -> r.RecordId)
|> validate
|> List.tail
|> List.groupBy (fun r -> r.ParentId)
|> Map.ofList
|> makeTree 0
let rec makeTree id map =
match map |> Map.tryFind id with
| None -> Leaf id
| Some list -> Branch (id,
list |> List.map (fun r -> makeTree r.RecordId map))
\end{minted}
\end{frame}
\begin{frame}[label={sec:org365a5cd},fragile]{Tree Building (Error Handling)}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{fsharp}
let validate records =
match records with
| [] -> failwith "Input must be non-empty"
| x :: _ when x.RecordId <> 0 ->
failwith "Root must have id 0"
| x :: _ when x.ParentId <> 0 ->
failwith "Root node must have parent id 0"
| _ :: xs when xs |> List.exists (fun r -> r.RecordId < r.ParentId) ->
failwith "ParentId should be less than RecordId"
| _ :: xs when xs |> List.exists (fun r -> r.RecordId = r.ParentId) ->
failwith "ParentId cannot be the RecordId except for the root node."
| rs when (rs |> List.map (fun r -> r.RecordId) |> List.max) > (List.length rs - 1) ->
failwith "Ids must be continuous"
| _ -> records
\end{minted}
\end{frame}
\begin{frame}[label={sec:org7a4fa1a},fragile]{Tree Building (Benchmarking)}
\begin{itemize}
\item \href{https://github.com/dotnet/BenchmarkDotNet}{BenchmarkDotNet}
\end{itemize}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{shell}
dotnet run -c release
\end{minted}
\begin{minted}[bgcolor=mintedbg,frame=none,framesep=0pt,mathescape=true,fontsize=\scriptsize,breaklines=true,linenos=false,numbersep=5pt,gobble=0]{shell}
sed -n 508,520p $benchmarks
\end{minted}
\tiny
// * Summary *
BenchmarkDotNet=v0.12.1, OS=macOS 11.2.2 (20D80) [Darwin 20.3.0]
Intel Core i7-7920HQ CPU 3.10GHz (Kaby Lake), 1 CPU, 8 logical and 4 physical cores
.NET Core SDK=5.0.101
[Host] : .NET Core 5.0.1 (CoreCLR 5.0.120.57516, CoreFX 5.0.120.57516), X64 RyuJIT DEBUG
DefaultJob : .NET Core 5.0.1 (CoreCLR 5.0.120.57516, CoreFX 5.0.120.57516), X64 RyuJIT
\begin{center}
\begin{tabular}{lllllrrrlll}
Method & Mean & Error & StdDev & Median & Ratio & RatioSD & Gen 0 & Gen 1 & Gen 2 & Allocated\\
\hline
Baseline & 10.113 μs & 0.1998 μs & 0.3849 μs & 10.074 μs & 1.00 & 0.00 & 3.6163 & - & - & 14.8 KB\\
Mine & 6.539 μs & 0.2335 μs & 0.6812 μs & 6.354 μs & 0.63 & 0.06 & 2.0828 & - & - & 8.52 KB\\
\end{tabular}
\end{center}
\end{frame}
\section{End }
\label{sec:org7f195f8}
\begin{frame}[label={sec:org4148e83}]{Summary}
\begin{itemize}
\item functional design
\item functional refactoring
\end{itemize}
\end{frame}
\begin{frame}[label={sec:org8ff54b1}]{Links}
\begin{itemize}
\item \href{https://fsharp.org/}{fsharp.org}
\item \href{https://docs.microsoft.com/de-de/dotnet/fsharp/}{docs.microsoft.com/../dotnet/fsharp}
\item \href{https://sergeytihon.com/}{F\# weekly}
\item \href{https://fsharpforfunandprofit.com/}{fsharpforfunandprofit.com}
\item \href{https://github.com/fsprojects/awesome-fsharp}{github.com/../awesome-fsharp}
\end{itemize}
\end{frame}
\begin{frame}[label={sec:org3cc3253}]{Homework}
\begin{itemize}
\item exercism.io (e-mail by 16 April)
\begin{itemize}
\item[{$\square$}] Accumulate
\item[{$\square$}] Space Age
\item[{$\square$}] Poker (programming exercise)
\end{itemize}
\end{itemize}
\end{frame}
\end{document}
"alphanum_fraction": 0.6950855479,
"avg_line_length": 34.4235588972,
"ext": "tex",
"hexsha": "9dfdb4d910e38b778b90906d96a96c3a9bafe070",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-03-26T12:39:02.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-03-26T12:39:02.000Z",
"max_forks_repo_head_hexsha": "6c8e845b1291ff753300d7647fbf671a14878cd5",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "kirchnergo/course.2021.hwr.fun",
"max_forks_repo_path": "slides/slides-03.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6c8e845b1291ff753300d7647fbf671a14878cd5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "kirchnergo/course.2021.hwr.fun",
"max_issues_repo_path": "slides/slides-03.tex",
"max_line_length": 155,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "6c8e845b1291ff753300d7647fbf671a14878cd5",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "kirchnergo/course.2021.hwr.fun",
"max_stars_repo_path": "slides/slides-03.tex",
"max_stars_repo_stars_event_max_datetime": "2021-03-21T12:34:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-03-21T12:34:38.000Z",
"num_tokens": 4499,
"size": 13735
} |
\section{Autoscaling}
\begin{itemize}
\item Motivation:
\begin{itemize}
\item \textbf{Scalability}: the ability that, as resources $\uparrow$, application performance $\uparrow$, throughput $\uparrow$ and latency $\downarrow$
\item \textbf{Elasticity}: the ability to \textbf{dynamically adapt} (up/down) the resource scale to the actual workload \textbf{without reboot}.
$\rightarrow$ no under-/overprovisioning, cost $\downarrow$, customer satisfaction $\uparrow$
\item \textbf{Automation}: scaling happens automatically, without interaction with the application owner
\end{itemize}
\item Capacity planning in the cloud: avoid under-/overprovisioning by \textbf{dynamic resource management} and a \textbf{pay-per-use cost model}
\end{itemize}
\subsection{Scaling}
\begin{itemize}
\item scaling performance:
\begin{itemize}
\item speedup: $$\text{speedup(p processors)} = \frac{\text{performance(p processors)}}{\text{performance(1 processor)}}$$
\item efficiency: $$\text{efficiency(p processors)} = \frac{\text{speedup(p processors)}}{p}$$
\end{itemize}
\item scalability limit: restricted by the \textbf{app design}, the maximum \textbf{application capacity} or the maximum \textbf{resource capacity}
$\rightarrow$ scaling with infinitely many servers is impossible
\begin{itemize}
\item the \textbf{efficiency} of scaling \textbf{drops} when \textbf{more resources} are provisioned $\rightarrow$ the \textbf{bottleneck} of the application becomes dominant
\item a \textbf{poor application design} has a lower scalability limit.
\item with more servers, communication latency increases.
\end{itemize}
\item resources to be scaled: \textbf{CPU (\#CPUs, CPU time/percentage)}, memory, disk storage (size, bandwidth), network (throughput, bandwidth)
\end{itemize}
\subsubsection{Vertical Scaling -- Scaling Up}
\begin{itemize}
\item increase the capacity of a service instance by \textbf{increasing its allocated resources}, e.g.\ CPU time percentage $\uparrow$, clock frequency $\uparrow$, cores $\uparrow$
\item Advantages:
\begin{itemize}
\item no re-design of app required
\item easy to replace, less management complexity
\end{itemize}
Disadvantages:
\begin{itemize}
\item powerful resources are expensive
\item \textbf{limited} scalability due to resource capacity limit
\end{itemize}
\end{itemize}
\subsubsection{Horizontal Scaling -- Scaling Out}
\begin{itemize}
\item increase the capacity of a service by \textbf{increasing the number of identical resource instances}.
\item Advantages:
\begin{itemize}
\item lower cost, implementation of commodity hardware
\item larger scalability possibility (just increase the amount of resource)
\end{itemize}
Disadvantages:
\begin{itemize}
\item more management overhead
\item distributed architecture required, load balancing, replica handling.
\end{itemize}
\end{itemize}
\subsection{Autoscaling}
\begin{itemize}
\item Goal: scale automatically to fulfill \textbf{Service Level Objectives} (SLOs) while \textbf{minimizing costs}
\begin{itemize}
\item latency of request
\item failed request rate
\item service availability
\end{itemize}
\item Process: monitoring data $\rightarrow$ analyzer $\rightarrow$ scheduler takes scaling actions according to the autoscaling policy $\rightarrow$ executor issues cloud commands to scale resources up/down
\item Autoscaling approaches:
\begin{itemize}
\item \textbf{reactive}: \textbf{real-time} scaling according to a policy (thresholds); detection of under-/overloaded services
\item \textbf{scheduled}: \textbf{periodical} scaling, time-stamped scaling events, e.g.\ a banking system or a food-ordering app
\item \textbf{predictive}: predict the future workload and scale \textbf{ahead of time}; it proactively adapts the \textbf{minimum} size of the autoscaling group.
\end{itemize}
\item Autoscaling implementation:
\begin{itemize}
\item \textbf{resource-centric}: scaling action directly modifies \textbf{resource allocation}(\#VM, CPU, memory, bandwidth).
\begin{itemize}
\item Use-case: AWS Reactive Autoscaling -- scaling modifies \#VMs.
\end{itemize}
\item \textbf{service-centric}: scaling action directly modifies \textbf{\#service instances}.
\end{itemize}
\item Autoscaling Policies (AWS); a minimal decision-function sketch follows this list:
\begin{itemize}
\item \textbf{target tracking scaling}: a \textbf{target value} of a \textbf{metric} is predefined, e.g.\ CPU utilization at 40\% or \#requests per target at 1000.
Metrics that decrease when capacity increases and increase when capacity decreases can be used to \textbf{proportionally scale out or in} the number of instances using target tracking.
\item \textbf{simple scaling}: a scaling action is triggered based on a metric, threshold or condition.
\begin{itemize}
\item \textbf{cooldown period exists}: the group responds to \textbf{no additional alarms} until the cooldown period expires; it waits until the scaling result is visible and the health check has passed.
\end{itemize}
\item \textbf{step scaling}: a scaling action is triggered based on a metric, threshold or condition with \textbf{step adjustments}; the adjustments vary based on the \textbf{size of the alarm breach}.
\begin{itemize}
\item \textbf{no cooldown period}: the group continues to \textbf{respond to additional alarms}, even while a scaling activity or health-check replacement is in progress. Therefore, \textbf{all alarms that are breached are evaluated}.
\end{itemize}
\end{itemize}
\end{itemize}
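\noindent The sketch below illustrates the idea of a step-scaling decision in C++. The thresholds, step sizes and the CPU-utilization metric are illustrative assumptions, not AWS defaults; the function only computes the capacity change and does not talk to any cloud API.
\begin{verbatim}
#include <vector>

// One step adjustment: applies once the alarm breach reaches lowerBound.
// Steps are assumed to be sorted by lowerBound in ascending order.
struct Step {
    double lowerBound;  // minimum breach size (e.g. percentage points over target)
    int    adjustment;  // instances to add (> 0) or remove (< 0)
};

// Returns the change in instance count for the current metric value.
// `target` plays the role of the alarm threshold, e.g. 60% CPU utilization.
int stepScalingDecision(double metric, double target,
                        const std::vector<Step>& steps) {
    double breach = metric - target;  // size of the alarm breach
    int adjustment = 0;               // no breach -> no scaling action
    for (const Step& s : steps)
        if (breach >= s.lowerBound)
            adjustment = s.adjustment;  // larger breach -> larger step
    return adjustment;
}

int main() {
    // Scale out by 1 instance for a breach in [0, 20), by 3 for >= 20.
    std::vector<Step> scaleOut = { {0.0, 1}, {20.0, 3} };
    // Metric 85% vs. target 60% -> breach of 25 -> +3 instances.
    return stepScalingDecision(85.0, 60.0, scaleOut) == 3 ? 0 : 1;
}
\end{verbatim}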
\subsection{Load Balancing}
\begin{itemize}
\item for \textbf{scaling out}(horizontal scaling)
\item Goals:
\begin{itemize}
\item efficient resource utilization
\item increase service availability (health check, restart nodes)
\item increase aggregated capability of replicas $\rightarrow$ reduce response time and failure rate
\end{itemize}
\item Implementation: \textbf{multi-layer} load balancing
\begin{itemize}
\item different \textbf{layers} for different \textbf{functions}: service layer (request distribution), virtual machine layer (VM distribution), physical layer (server, CPU, storage, bandwidth distribution)
\item \textbf{static VS. dynamic}:
\begin{itemize}
\item static: scheduling according to \textbf{weighted round-robin}, with \textbf{no feedback} about the current load of the servers (see the sketch after this list)
\item dynamic: \textbf{feedback} from the servers to the load balancer, which can adapt accordingly.
\end{itemize}
\item scalability: distributed design of the load balancer
\end{itemize}
\item Algorithms:
\begin{itemize}
\item class-aware: \textbf{classification of requests} into classes such as sensitive and best-effort
\item content-aware: direct \textbf{similar request content} to the same server
\item client-aware: based on the \textbf{packet source}; clients from the same area might request similar information, which the system can already \textbf{cache}.
\end{itemize}
\end{itemize}
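\noindent A minimal sketch of the static weighted round-robin policy mentioned above is given below (C++). The backend names and weights are illustrative; as stated, there is no feedback about the actual load of the servers.
\begin{verbatim}
#include <cstddef>
#include <string>
#include <vector>

struct Backend { std::string name; int weight; };

// Simple (non-interleaved) weighted round-robin:
// each backend receives `weight` consecutive requests per cycle.
class WeightedRoundRobin {
public:
    explicit WeightedRoundRobin(std::vector<Backend> backends)
        : backends_(std::move(backends)) {}

    const std::string& next() {
        if (sent_ >= backends_[index_].weight) {       // this backend's share is used up
            sent_ = 0;
            index_ = (index_ + 1) % backends_.size();  // move to the next backend
        }
        ++sent_;
        return backends_[index_].name;
    }

private:
    std::vector<Backend> backends_;
    std::size_t index_ = 0;  // backend currently being served
    int sent_ = 0;           // requests already dispatched to it in this cycle
};

int main() {
    // Weights 2:1 -> dispatch pattern A, A, B, A, A, B, ...
    WeightedRoundRobin lb({ {"A", 2}, {"B", 1} });
    return lb.next() == "A" && lb.next() == "A" && lb.next() == "B" ? 0 : 1;
}
\end{verbatim}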
\chapter{Rod Cutting Algorithm}
The objective of this challenge is to determine how to cut an imaginary rod to
receive the maximum revenue. Revenue per rod length and initial rod lengths are
given. Note that, if the price for the initial rod length is high enough, the
optimal solution may require no cutting at all.
\section{Brute Force}
\imb[4][10]{\alg/rodcutting.cpp}
\imb[35][39]{\alg/rodcutting.cpp}
\imb[47][47]{\alg/rodcutting.cpp}
\imb[48][49]{\alg/rodcutting.cpp}
Since there are $n-1$ places to either cut or not cut, there are $2^{n-1}$ ways in
total to cut a rod of $n$ inches. A brute-force algorithm such as the one
presented therefore evaluates all possible outcomes and returns the maximum
revenue. Time complexity analysis shows an exponential time function.
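To make the recurrence concrete, a minimal recursive sketch is given below. It is not the manual's own \imc{rodcutting.cpp}; the function name and the convention that \imc{price[i]} is the revenue for a piece of length $i$ are only illustrative.
\begin{verbatim}
#include <algorithm>
#include <vector>

// Naive top-down recursion: tries every length i for the first piece and
// recurses on the remaining n - i inches, so the running time is exponential.
int cutRodBruteForce(const std::vector<int>& price, int n) {
    if (n == 0) return 0;                     // nothing left to sell
    int best = 0;
    for (int i = 1; i <= n; ++i)              // length of the first piece
        best = std::max(best, price[i] + cutRodBruteForce(price, n - i));
    return best;
}
\end{verbatim}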
\newpage
\section{Dynamic Programming: Top-Down Approach}
\imb[12][22]{\alg/rodcutting.cpp}
\imb[35][39]{\alg/rodcutting.cpp}
\imb[41][45]{\alg/rodcutting.cpp}
\imb[48][49]{\alg/rodcutting.cpp}
The most critical reason for the brute-force algorithm's exponentially long time
complexity is that brute force repeatedly solves what is essentially the same
set of divisions. For example, when processing the cutting of a $9$-inch rod,
the brute-force algorithm processes the same 2-, 3-, and 4-inch division as 6
different orderings $(2,~3,~4),~(3,~2,~4),\cdots$. This can be solved by
recording the results of previous iterations of the recursion, i.e.\ the
\textbf{memoisation} of previous data. With the top-down method presented
above, the new array \imc{int* arr} is a storage array for the previously
processed results.
\section{Dynamic Programming: Bottom-Up Approach}
\imb[24][33]{\alg/rodcutting.cpp}
\imb[35][39]{\alg/rodcutting.cpp}
\imb[40][40]{\alg/rodcutting.cpp}
\imb[48][49]{\alg/rodcutting.cpp}
Unusual circumstances can lead to failures in top-down algorithms where not all
possible outcomes are tested. The bottom-up approach above often shows strength
in constant factors and overall stability, and it can be proven mathematically
that it always processes all subproblems, although such a proof will not be
provided in this manual.

Both the top-down and the bottom-up approach show a time complexity of
$T(n)=\Theta\left(n^2\right)$.
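For reference, a self-contained sketch of the bottom-up recurrence
$r_j = \max_{1 \le i \le j} \left( p_i + r_{j-i} \right)$ is given below. It is
not the manual's own \imc{rodcutting.cpp}; the function name and the sample
price table (the classic textbook values for lengths $1$ to $10$) are only
illustrative.
\begin{verbatim}
#include <algorithm>
#include <vector>

// Bottom-up rod cutting: price[i] is the revenue for a piece of length i
// (price[0] is an unused placeholder); price must have at least n+1 entries.
int cutRodBottomUp(const std::vector<int>& price, int n) {
    std::vector<int> r(n + 1, 0);            // r[j] = best revenue for length j
    for (int j = 1; j <= n; ++j)             // solve subproblems in increasing size
        for (int i = 1; i <= j; ++i)         // length of the first piece
            r[j] = std::max(r[j], price[i] + r[j - i]);
    return r[n];                             // Theta(n^2) time, Theta(n) space
}

int main() {
    std::vector<int> price = {0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30};
    // Best revenue for a 4-inch rod is 10 (two pieces of length 2).
    return cutRodBottomUp(price, 4) == 10 ? 0 : 1;
}
\end{verbatim}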
\section{Changes to previous versions}
\subsection{Pending changes for the next release}
\begin{itemize}
\item Public URLs for the source code have been changed from a private
  GitLab repository to a public GitHub repository.
\item Adds a table of contents to make skimming and navigation easier.
\item Adds this ``change log'' itself. :)
\item Adds automatically-generated version and commit info to pdfs.
\item Fixes errors in the proof of Lemma A3.
\end{itemize}
\subsection{2015-04-23}
This version was not released.
\begin{itemize}
\item Combine the supplemental appendix with the main document (for
convenience).
\end{itemize}
\subsection{2015-01-09}
This is an incomplete list of the changes made in the `2015-01-09'
version of the paper. This version was resubmitted to
\textit{Econometrica} after the 2013-03-14 version received ``reject
and resubmit.''
\begin{itemize}
\item Moves supporting theoretical results to a supplementary
appendix.
\item Separates the statement of assumptions from the theoretical
results.
\item Replaces the multiple testing results with a much simpler
version.
\item Changes the NED assumptions slightly (the previous version had
an error).
\item Makes several changes to the wording of the paper.
\item Adds an explicit variance-covariance matrix and proves
consistency.
\item Changes the typeface used in the paper and makes other small
changes to the formatting.
\end{itemize}
\subsection{2014-04-01}
\begin{itemize}
\item Removes all of the bootstrap material from the paper to move it
to a separate paper at the request of the \textit{Econometrica}
editor.
\end{itemize}
\subsection{2013-03-14}
First released version of the paper. This version was submitted to
\textit{Econometrica}.
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "mixedwindow"
%%% End:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Friggeri Resume/CV
% XeLaTeX Template
% Version 1.0 (5/5/13)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original author:
% Adrien Friggeri ([email protected])
% https://github.com/afriggeri/CV
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
% Important notes !!! :
% Use texshop for better editor and this template needs to be compiled with XeLaTeX and the bibliography, if used,
% needs to be compiled with bibtex with biber backend.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[style=verbose,maxnames=99,sorting=ydnt,backend=biber]{friggeri-cv} % Add 'print' as an option into the square bracket to remove colors from this template for printing
\addbibresource{cv-dhoto-kavli2016.bib} % Specify the bibliography file to include publications
%\usepackage[backend=biber]{biblatex}
\usepackage{wrapfig}
\begin{document}
\header{Sritrusta}{ Sukaridhoto}{Assistant Prof., Researcher and Guitarist} % Your name and current job title/field
%----------------------------------------------------------------------------------------
% SIDEBAR SECTION
%----------------------------------------------------------------------------------------
\begin{aside} % In the aside, each new line forces a line break
\section{contact}
\textbf{Politeknik Elektronika Negeri Surabaya}
Jl. Raya ITS Surabaya
~
Office:+62 (31) 5947280
Mobile:+62 823 6666 6379
~
\href{mailto:[email protected]}{[email protected]}
\href{http://dhoto.lecturer.pens.ac.id/}{Dhoto's Homepage}
\href{http://facebook.com/iseng4h}{fb://iseng4h}
\section{languages}
Indonesian (native)
English \& Japanese
\section{research interests}
{\color{red} $\varheartsuit$} Computer Networks,
Embedded System, Multimedia \& Internet of Things
\end{aside}
%----------------------------------------------------------------------------------------
% EDUCATION SECTION
%----------------------------------------------------------------------------------------
\section{biography}
\begin{wrapfigure}[20]{l}{1.5in}
\vspace{-10pt}
\begin{center}
\includegraphics[width=0.2\textwidth]{dhoto-jas.jpg}
\end{center}
\end{wrapfigure}
Received the B.E. degree in electrical engineering, computer science program from Sepuluh Nopember Institute of Technology, Indonesia, in 2002 and the Ph.D. degree in Communication Networks Engineering from Okayama University, Japan, in 2013. He joined at Electronics Engineering Polytechnic Institute of Surabaya, Indonesia, as a lecturer in 2002. He stayed at Tohoku University, Japan, in 2004, as a visiting researcher. His research interests include computer networks, embedded system, multimedia and Internet of Things. He received IEEE Young Researcher Award in 2009. He is a member of IEEE.
\section{education}
\begin{entrylist}
%------------------------------------------------
\entry
{2009--2013}
{Doctor {\normalfont of Philosophy}}
{Okayama University, Japan}
{\emph{Engineering} \\
Dissertation: "A Study of Performance Improvement Methods for Real-Time Applications in Wireless Mesh Networks" \\
Supervised by Prof. Nobuo Funabiki, Prof. M. Hata and Prof. Toru Nakanishi}
%------------------------------------------------
\entry
{1997--2002}
{Bachelor {\normalfont of Engineering}}
{Institut Teknologi Sepuluh Nopember, Indonesia}
{Electrical Engineering - Computer Science\\
Thesis: “Implementation of IPv6 at Institute Technology Sepuluh November of Surabaya”
\\Supervised by Dr. Surya Sumpeno and Dr. Supeno Mardi
}
%------------------------------------------------
\end{entrylist}
%----------------------------------------------------------------------------------------
% WORK EXPERIENCE SECTION
%----------------------------------------------------------------------------------------
\section{work experience}
\begin{entrylist}
%------------------------------------------------
\entry
{2002-Now}
{Politeknik Elektronika Negeri Surabaya}
{Surabaya, Indonesia}
{\emph{Assistant Professor} \\
Department of Multimedia Creative Technology,
\begin{itemize}
\item Teaching : Data Communication and Computer Networks
\item Supervision of Diploma students
\end{itemize}
Graduate School of Information Technology,
\begin{itemize}
\item Teaching : Advanced Computer Networks and Internet of Things
\item Supervision of Master students
\end{itemize}
}
\end{entrylist}
\\
\\
%----------------------------------------------------------------------------------------
% AWARDS SECTION
%----------------------------------------------------------------------------------------
\section{award}
\begin{entrylist}
%------------------------------------------------
\entry
{2014}
{Best Paper Award}
{The 16th International Electronics Symposium, Indonesia}
{\emph{Application Programming Interface Design of Microkernel Based Robotics Operating System}, Adhe Widianjaya, Dadet Pramadihanto, Sritrusta Sukaridhoto; The 16th International Electronics Symposium, Indonesia}
%------------
\entry
{2009}
{IEEE’s Young Researcher Award}
{ISCE2009, Japan}
{}
%------------------------------------------------
%\entry
%{2009}
%{Monbusho Scholarship for Doctoral Program}
%{Okayama University, Japan}
%{}
%------------------------------------------------
\end{entrylist}
%---------------------
%Research Grant
%--------------------
\section{research grants}
\begin{entrylist}
\entry
{2016}
{DESIGN AND DEVELOPMENT OF UNMANNED UNDERWATER VEHICLES FOR ENVIRONMENTAL MONITORING AND DATA ACQUISITION COMPATIBLE WITH 5D WORLD MAP SYSTEM IN INTERNET OF UNDERWATER THINGS}
{DIKTI}
{Penelitian Kerjasama Luar Negeri dan Publikasi International - year: 2 of 3 - Rp. 160.000.000,-}
\entry
{2016}
{Pusat Pendidikan dan Pelatihan Embedded System Bersertifikasi Sebagai Upaya Penyiapan Sumber Daya Manusia Menjelang Masyarakat Ekonomi ASEAN}
{DIKTI}
{Ipteks Bagi Inovasi Kreativitas Kampus - year: 1 of 3 - Rp. 189.000.000,- + Rp 50.000.000,-}
\entry
{2015}
{DESIGN AND DEVELOPMENT OF UNMANNED UNDERWATER VEHICLES FOR ENVIRONMENTAL MONITORING AND DATA ACQUISITION COMPATIBLE WITH 5D WORLD MAP SYSTEM IN INTERNET OF UNDERWATER THINGS}
{DIKTI}
{Penelitian Kerjasama Luar Negeri dan Publikasi International - year: 1 of 3 - Rp. 157.500.000,-}
\end{entrylist}
%----------------------------------------------------------------------------------------
% PUBLICATIONS SECTION
%----------------------------------------------------------------------------------------
\section{publications}
\nocite{*}
\printbibsection{article}{Journals} % Print all articles from the bibliography
%\begin{refsection} % This is a custom heading for those references marked as "inproceedings" but not containing "keyword=france"
%\newrefcontext[sorting=ydnt]
\printbibliography[type=inproceedings, title={Conferences/Proceedings}, heading=subbibliography]
%\end{refsection}
%\printbibsection{book}{Books} % Print all books from the bibliography
%----------------------------------------------------------------------------------------
\section{signature}
The full truth of the details above is guaranteed.
With a strong sense of responsibility, hands-on management and communication skills.
Strong loyalty and self-motivation.
\\
\\
regards,
\\
\\
\\
\\
\textbf{Sritrusta Sukaridhoto, ST. Ph.D.}
\end{document}
"alphanum_fraction": 0.6156573923,
"avg_line_length": 31.6034482759,
"ext": "tex",
"hexsha": "7b2b7eeec65e23c8d0a3fee57774b56c25684682",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cd40df748d51817c6569f8c27bf17de400ff4a96",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "iseng4h/cv-dhoto",
"max_forks_repo_path": "cv-dhoto-kavli2016.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cd40df748d51817c6569f8c27bf17de400ff4a96",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "iseng4h/cv-dhoto",
"max_issues_repo_path": "cv-dhoto-kavli2016.tex",
"max_line_length": 597,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cd40df748d51817c6569f8c27bf17de400ff4a96",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "iseng4h/cv-dhoto",
"max_stars_repo_path": "cv-dhoto-kavli2016.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1759,
"size": 7332
} |