%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% FAIMS3 Presentations
% LaTeX Template
% Version 1.0 (May 1, 2021)
%
% This template was created by:
% Vel ([email protected])
% https://www.LaTeXTypesetting.com
%
%!TEX program = xelatex
% Note: this template must be compiled with XeLaTeX rather than PDFLaTeX
% due to the custom fonts used. The line above should ensure this happens
% automatically, but if it doesn't, your LaTeX editor should have a simple toggle
% to switch to using XeLaTeX.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[
aspectratio=169, % Wide slides by default
12pt, % Default font size
t, % Top align all slide content
]{beamer}
\usetheme{faims} % Use the FAIMS beamer theme
\usecolortheme{faims} % Use the FAIMS beamer color theme
\usepackage{comment}
\bibliography{references.bib} % BibLaTeX bibliography file
\bibliography{faims-zotero-betterbibtex.bib}
%----------------------------------------------------------------------------------------
\begin{document}
\begin{refsegment}
%https://tex.stackexchange.com/a/19328
%https://tex.stackexchange.com/a/223074
%----------------------------------------------------------------------------------------
% TITLE SLIDE
%----------------------------------------------------------------------------------------
% \title{FAIMS 3.0: Electronic Field Notebooks} % Presentation title
% \author{S Ross, P Crook, B Ballsun-Stanton, S Cassidy, A Sobotkova, J Klump} % Presentation author
% \institute{Australian Astronomical Optics seminar} % Author affiliation
% \date{19 August 2021} % Presentation date
\begin{titleframe} % Custom environment required for the title slide
\frametitle{IGSN in the field with FAIMS Mobile}
\framesubtitle{Field Acquired Information Management Systems (FAIMS) Project}
S Ross, J Klump, P Crook, \\B Ballsun-Stanton, A Sobotkova, S Cassidy
\medskip
Virtual SciDataCon 2021\\
Celebrating a decade of making material samples\\ FAIR and Open: where to next?
\medskip
21 October 2021
\end{titleframe}
%----------------------------------------------------------------------------------------
% TABLE OF CONTENTS
%----------------------------------------------------------------------------------------
% \begin{frame}
% \frametitle{Table of Contents}
% \framesubtitle{One Column}
% \tableofcontents % Sections are automatically populated from \section{} commands through the presentation
% \end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{Strategies for field data capture infrastructure}
%\framesubtitle{Two Columns}
\begin{columns}[t]
\begin{column}{0.45\textwidth}
\tableofcontents[sections={1-3}] % Sections are automatically populated from \section{} commands through the presentation
\end{column}
\hfill
\begin{column}{0.45\textwidth}
\tableofcontents[sections={4-6}] % Sections are automatically populated from \section{} commands through the presentation
\end{column}
\end{columns}
\end{frame}
% \input{slides/small_data_infrastructure}
\input{slides/nine_years}
\input{slides/faims_3}
\input{slides/IGSN}
\input{slides/thanks}
%\input{slides/references}
% \input{slides/key_research_features}
% \input{slides/faims_approach_in_detail}
% \input{slides/reports_from_field}
% DO NOT COMMENT THIS BIT OUT
\end{refsegment}
\input{slides/faims_bib}
%----------------------------------------------------------------------------------------
% CLOSING SLIDE
%----------------------------------------------------------------------------------------
\closingslide % Output closing slide, automatically populated with a background image
%----------------------------------------------------------------------------------------
\end{document}
| {
"alphanum_fraction": 0.57548152,
"avg_line_length": 31.7520661157,
"ext": "tex",
"hexsha": "a8a72fec9e863c025b0c2ff4a4ff23a3883413a6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4b61fb5388e9e596df48b93f9faa968da31e7ba2",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "FAIMS/PRES-FAIMS3-IGSN",
"max_forks_repo_path": "main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4b61fb5388e9e596df48b93f9faa968da31e7ba2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "FAIMS/PRES-FAIMS3-IGSN",
"max_issues_repo_path": "main.tex",
"max_line_length": 133,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4b61fb5388e9e596df48b93f9faa968da31e7ba2",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "FAIMS/PRES-FAIMS3-IGSN",
"max_stars_repo_path": "main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 878,
"size": 3842
} |
\documentclass[11pt]{article}
%===> Packages ===>
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
%<=== Packages <===
%===> New commands ===>
\newcommand{\dx}{\Delta x}
\newcommand{\dt}{\Delta t}
\newcommand{\bigoh}{\mathcal{O}}
\newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\bslash}{\char`\\}
\newcommand{\uscore}{\char`\_}
%<=== New commands <===
%===> Formatting ===>
\setlength{\parskip}{11pt}
\setlength{\parindent}{0pt}
%<=== Formatting <===
\begin{document}
\section*{Single grid problems}
\subsection*{Miscellaneous}
\texttt{tuplify} routine
\texttt{intersectDomains} routine
\texttt{GridArray} class
\texttt{IsolatedArray} class
\subsection*{BaseGrid class}
\subsubsection*{Fields}
\texttt{x\uscore{}low, x\uscore{}high}: coordinate bounds
\texttt{i\uscore{}low, i\uscore{}high}: index bounds
\texttt{dx}: mesh width in each dimension
\texttt{cells}: domain of interior cells
\texttt{ext\uscore{}cells}: \texttt{cells}, plus ghost cells
\texttt{low\uscore{}ghost\uscore{}cells}: Tuple of domains, each specifying the ghost cells at the low end of a dimension.
\texttt{high\uscore{}ghost\uscore{}cells}: Ditto, but for the high end.
\subsubsection*{Methods}
\texttt{xValue}: Returns coordinates of an index
\texttt{initializeSolution}: Initializes a \texttt{ScalarGridSolution}.
\texttt{clawOutput}: Writes out a \texttt{ScalarGridSolution} in Clawpack format. \textit{Should this act on an array instead?}
\subsubsection*{Subclasses}
\texttt{GhostData}: Class for holding data on \textbf{only} the ghost cells. \textit{Should this be a subclass?}
\subsection*{\texttt{ScalarGridSolution} class}
\subsubsection*{Fields}
\texttt{grid}: \texttt{BaseGrid} on which the solution is defined.
\texttt{space\uscore{}data}: \texttt{[1..2] [grid.ext\uscore{}cells] real}
Holds two time levels of data on the parent grid
\texttt{time}: \texttt{[1..2] real} Times at which \texttt{space\uscore{}data} is specified.
\subsection*{\texttt{GridBC} class}
\subsubsection*{Fields}
\texttt{grid}: Grid on which the BCs are defined.
\texttt{low\uscore{}boundary\uscore{}faces}: Tuple of domains, each specifying the faces at the low end of a dimension.
\texttt{high\uscore{}boundary\uscore{}faces}: Ditto, but for the high end.
\subsubsection*{Methods}
\texttt{ghostFill}: Fills ghost cells at a given time.
\texttt{homogeneousGhostFill}: The homogeneous method corresponding to \texttt{ghostFill}. Necessary for implicit solvers.
\section*{Intent}
\begin{itemize}
\item Demonstrate Chapel's ability to simplify development of AMR codes.
\item Write generic-dimensional code whenever possible --- drastically simplifies code base.
\item (Personal goal) Draft an AMR system where single-grid or single-level problems can be drafted without using unnecessary framework. Should allow step-wise development of AMR solvers.
\end{itemize}
\section*{Validation}
\begin{itemize}
\item Chombo's \texttt{IntVec}s and \texttt{IntVecSet}s
\end{itemize}
\section*{Chapel features}
\end{document}
| {
"alphanum_fraction": 0.7276341948,
"avg_line_length": 23.2153846154,
"ext": "tex",
"hexsha": "c221a424427b1e15097e7a59ab07f3a26a739483",
"lang": "TeX",
"max_forks_count": 498,
"max_forks_repo_forks_event_max_datetime": "2022-03-20T15:37:45.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-01-08T18:58:18.000Z",
"max_forks_repo_head_hexsha": "f041470e9b88b5fc4914c75aa5a37efcb46aa08f",
"max_forks_repo_licenses": [
"ECL-2.0",
"Apache-2.0"
],
"max_forks_repo_name": "jhh67/chapel",
"max_forks_repo_path": "test/studies/amr/doc/ChapelAMRDoc.tex",
"max_issues_count": 11789,
"max_issues_repo_head_hexsha": "f041470e9b88b5fc4914c75aa5a37efcb46aa08f",
"max_issues_repo_issues_event_max_datetime": "2022-03-31T23:39:19.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-01-05T04:50:15.000Z",
"max_issues_repo_licenses": [
"ECL-2.0",
"Apache-2.0"
],
"max_issues_repo_name": "jhh67/chapel",
"max_issues_repo_path": "test/studies/amr/doc/ChapelAMRDoc.tex",
"max_line_length": 188,
"max_stars_count": 1602,
"max_stars_repo_head_hexsha": "f041470e9b88b5fc4914c75aa5a37efcb46aa08f",
"max_stars_repo_licenses": [
"ECL-2.0",
"Apache-2.0"
],
"max_stars_repo_name": "jhh67/chapel",
"max_stars_repo_path": "test/studies/amr/doc/ChapelAMRDoc.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-30T06:17:21.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-06T11:26:31.000Z",
"num_tokens": 862,
"size": 3018
} |
\documentclass[]{article}
\usepackage{graphicx}
\usepackage[margin=1in]{geometry}
\setlength\parindent{0pt}
\usepackage{physics}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{listings}
\usepackage{enumitem}
\renewcommand{\theenumi}{\alph{enumi}}
\renewcommand*{\thesection}{Problem \arabic{section}}
\renewcommand*{\thesubsection}{\arabic{section}\alph{subsection})}
\renewcommand*{\thesubsubsection}{\quad \roman{subsubsection})}
%Custom Commands
\newcommand{\Rel}{\mathcal{R}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\Q}{\mathbb{Q}}
%opening
\title{MATH 5301 Elementary Analysis - Homework 2}
\author{Jonas Wagner}
\date{2021, September 08}
\begin{document}
\maketitle
% Problem 1
\section{}
For a function $f : A \rightarrow B$, show the following for any $X \subset A$ and $Y, Z \subset B$.
\subsection{$X \subset f^{-1}(f(X))$}
\begin{align*}
	f(X) &= \{y \in B : \exists x \in X : y = f(x)\}\\
	f^{-1}(f(X)) &= \{x \in A : f(x) \in f(X)\}\\
	x \in X &\implies f(x) \in f(X)\\
	&\implies x \in f^{-1}(f(X))\\
	&\therefore X \subset f^{-1}(f(X))
\end{align*}
% Let, $$Y = f(X) \subset B$$
% The preimage of $Y$ is then $$\tilde{X} = f^{-1}(Y) \subset A$$
% Additionally, from the knowledge of image then pre-image properties functions,
% $$\tilde{X} \subset X$$
% Therefore, $$ X \subset \tilde{X} = f^{-1}(f(X))$$
\subsection{$f(f^{-1}(Y)) \subset Y$}
% Let, $$X = f^{-1}(Y) \subset A$$
% The maping of $X$ is then $$\tilde{Y} = f(X) \subset B$$
% Additionally, from knowledge pre-image and maping properties functions,
% $$\tilde{Y} \subset Y$$
% Therefore, via the transitive property, $$f(f^{-1}(Y)) \subset Y$$
\begin{align*}
	f^{-1}(Y) &= \{x \in A : f(x) \in Y\}\\
	f(f^{-1}(Y)) &= \{y \in B : \exists x \in f^{-1}(Y) : y = f(x)\}\\
	y \in f(f^{-1}(Y)) &\implies \exists x \in f^{-1}(Y) : y = f(x)\\
	&\implies y = f(x) \in Y\\
	&\therefore f(f^{-1}(Y)) \subset Y
\end{align*}
\newpage
\subsection{$f^{-1}(Y \cup Z) = f^{-1}(Y) \cup f^{-1}(Z)$}
\begin{align*}
f^{-1}(Y) &= \{x \in A : \exists y \in Y : y = f(x)\}\\
f^{-1}(Z) &= \{x \in A : \exists z \in Z : z = f(x)\}\\
	f^{-1}(Y \cup Z) &= \{x \in A : (\exists y \in Y : y = f(x))
	\lor (\exists z \in Z : z = f(x))\}\\
	&= \{x \in A : \exists y \in Y : y = f(x)\}
	\cup \{x \in A : \exists z \in Z : z = f(x)\}\\
&= f^{-1}(Y) \cup f^{-1}(Z)\\
&\therefore f^{-1}(Y \cup Z) = f^{-1}(Y) \cup f^{-1}(Z)
\end{align*}
\subsection{$f^{-1}(Y \cap Z) = f^{-1}(Y) \cap f^{-1}(Z)$}
\begin{align*}
f^{-1}(Y) &= \{x \in A : \exists y \in Y : y = f(x)\}\\
f^{-1}(Z) &= \{x \in A : \exists z \in Z : z = f(x)\}\\
	f^{-1}(Y \cap Z) &= \{x \in A : (\exists y \in Y : y = f(x))
	\land (\exists z \in Z : z = f(x))\}\\
	&= \{x \in A : \exists y \in Y : y = f(x)\}
	\cap \{x \in A : \exists z \in Z : z = f(x)\}\\
&= f^{-1}(Y) \cap f^{-1}(Z)\\
&\therefore f^{-1}(Y \cap Z) = f^{-1}(Y) \cap f^{-1}(Z)
\end{align*}
\newpage
% Problem 2
\section{}
Show that:
\subsection{
$A \cap \bigcup\limits_{\lambda \in \Lambda} A_\lambda
= \bigcup\limits_{\lambda \in \Lambda} (A_\lambda \cap A)$
}
Let $\Lambda := \{1, 2, \cdots, n\}$,
\begin{align*}
A \cap \bigcup\limits_{\lambda \in \Lambda} A_\lambda
&= A \cap \qty(A_1 \cup A_2 \cup \cdots \cup A_n)\\
&= (A \cap A_1) \cup (A \cap A_2) \cup \cdots \cup (A \cap A_n)\\
% A \cap \bigcup\limits_{\lambda \in \Lambda} A_\lambda
&= \bigcup\limits_{\lambda \in \Lambda} (A_\lambda \cap A)
\end{align*}
Therefore,
\begin{displaymath}
\boxed{A \cap \bigcup\limits_{\lambda \in \Lambda} A_\lambda
= \bigcup\limits_{\lambda \in \Lambda} (A_\lambda \cap A)}
\end{displaymath}
\subsection{
$\qty(\bigcap\limits_{\lambda\in\Lambda} A_{\lambda}) \cup
\qty(\bigcap\limits_{\lambda\in\Lambda} B_{\lambda})
\subseteq \bigcap\limits_{\lambda\in\Lambda} (A_\lambda \cup B_\lambda)$
}
Let $x \in \qty(\bigcap\limits_{\lambda\in\Lambda} A_{\lambda}) \cup
\qty(\bigcap\limits_{\lambda\in\Lambda} B_{\lambda})$.
Then either $x \in A_\lambda$ for every $\lambda \in \Lambda$,
or $x \in B_\lambda$ for every $\lambda \in \Lambda$.
In either case, $x \in A_\lambda \cup B_\lambda$ for every $\lambda \in \Lambda$,
so $x \in \bigcap\limits_{\lambda\in\Lambda} (A_\lambda \cup B_\lambda)$.
Therefore,
\begin{displaymath}
	\boxed{
	\qty(\bigcap\limits_{\lambda\in\Lambda} A_{\lambda}) \cup
	\qty(\bigcap\limits_{\lambda\in\Lambda} B_{\lambda})
	\subseteq \bigcap\limits_{\lambda\in\Lambda} (A_\lambda \cup B_\lambda)}
\end{displaymath}
\newpage
% Problem 3
\section{}
\textbf{Problem:} Which of these are equivalence relations?\\
% \textbf{Note:}
% Equivalence relations must satisfy the following:
% \subsubsection{Reflective:}
% $$x \Rel x$$
% \subsubsection{Symetric:}
% $$x \Rel y \implies y \Rel x$$
% \subsubsection{Transitive:}
% $$x \Rel y \land y \Rel z \implies x \Rel z$$
\textbf{Solution:}
a, c, \& d\\
(The following explain why)
\subsection{}
for $a, b \in \R$,
let $a \Rel b$
if $a - b \in \Q$
\subsubsection{Reflexive:}
\begin{align*}
x \Rel x
&= x - x = 0 \in \Q
\end{align*}
\subsubsection{Symmetric:}
\begin{align*}
	x \Rel y
	&\implies y \Rel x\\
	x - y \in \Q
	&\implies y - x \in \Q
\end{align*}
Since $x - y = -(y - x)$, $(x-y)$ and $(y-x)$ are either both rational or both irrational,
so the implication holds.
\subsubsection{Transitive:}
\begin{align*}
x \Rel y \land y \Rel z
&\implies x \Rel z\\
	\qty(x - y \in \Q) \land \qty(y - z \in \Q)
	&\implies \qty(x - z \in \Q)
\end{align*}
Since $x - z = (x - y) + (y - z)$ is the sum of two rational numbers, $x - z \in \Q$.
Therefore, the relation is transitive.
\subsection{}
for $a, b \in \R$,
let $a \Rel b$
if $a - b \notin \Q$
\subsubsection{Reflexive:}
The relation is NOT reflexive:
\begin{align*}
a \Rel b &= a - b \notin \Q\\
x \Rel x
&= x - x = 0 \in \Q
\end{align*}
\newpage
\subsection{}
for $a, b \in \R$,
let $a \Rel b$
if $a - b$ is a square root of a rational number.\\
i.e.
$$a \Rel b = (a-b)^2 \in \Q$$
\subsubsection{Reflexive:}
\begin{align*}
a \Rel b &= (a-b)^2 \in \Q\\
x \Rel x &= (x-x)^2 = 0^2 = 0 \in \Q
\end{align*}
\subsubsection{Symmetric:}
\begin{align*}
x \Rel y
&\implies y \Rel x\\
(x - y)^2 \in \Q
&\implies (y-x)^2 \in \Q\\
(x-y)^2 = (y-x)^2
&\therefore \ x \Rel y \implies y \Rel x
\end{align*}
\subsubsection{Transitive:}
\begin{align*}
x \Rel y \land y \Rel z
&\implies x \Rel z\\
\qty((x - y)^2 \in \Q) \land \qty((y - z)^2 \in \Q)
&\implies \qty((x-z)^2 \in \Q)\\
\qty(x^2 - 2xy + y^2 \in \Q) \land \qty(y^2 - 2yz + z^2 \in \Q)
&\implies \qty(x^2 - 2xz + z^2\in \Q)\\
(x^2 \in \Q) \land (y^2 \in \Q) \land (z^2 \in \Q) \land (-2xy \in \Q) \land (-2yz \in \Q)
&\implies (x^2 \in \Q) \land (z^2 \in \Q) \land (-2xz \in \Q)\\
(xy \in \Q) \land (yz \in \Q)
&\implies (xz \in \Q)
\end{align*}
Which is clearly transitive, so $a \Rel b = (a-b)^2 \in \Q$ is transitive.
\subsection{}
Let $X = \Z \cross \N$,
$x = (x_1, x_2)$ and $y = (y_1, y_2)$
are in $\Rel$ if $x_1 y_2 = x_2 y_1$.
i.e.
$$a_1, b_1 \in \Z, a_2, b_2 \in \N,$$
$$a_1 b_2 = a_2 b_1 \implies (a_1, a_2) \Rel (b_1, b_2)$$
\subsubsection{Reflexive:}
\begin{align*}
a_1 b_2 = a_2 b_1 &\implies (a_1, a_2) \Rel (b_1, b_2)\\
x_1 x_2 = x_2 x_1 &\implies (x_1, x_2) \Rel (x_1, x_2)
\end{align*}
\subsubsection{Symmetric:}
\begin{align*}
x \Rel y
&\implies y \Rel x\\
(x_1 y_2 = x_2 y_1 \implies (x_1, x_2) \Rel (y_1, y_2))
&\implies (y_1 x_2 = y_2 x_1 \implies (y_1, y_2) \Rel (x_1, x_2))
\end{align*}
\subsubsection{Transitive:}
\begin{align*}
x \Rel y \land y \Rel z
&\implies x \Rel z\\
(x_1 y_2 = x_2 y_1 \implies (x_1, x_2) \Rel (y_1, y_2))
\land (y_1 z_2 = y_2 z_1 \implies (y_1, y_2) \Rel (z_1, z_2))
&\implies (x_1 z_2 = x_2 z_1 \implies (x_1, x_2) \Rel (z_1, z_2))\\
(x_1 y_2 = x_2 y_1) \land (y_1 z_2 = y_2 z_1)
&\implies (x_1 z_2 = x_2 z_1)\\
\end{align*}
Which is clearly transitive, so $(a_1,a_2) \Rel (b_1,b_2)$ is transitive, and therefore
an equivalence relation.
\newpage
% Problem 4
\section{}
For the relation $(x,y) \succeq (a,b)$ if $(x \geq a)$ and $(y \geq b)$ on the set
of ordered pairs of $\{1,2,3\} \cross \{1,2,3\}$.
i.e. $x,a \in \{1,2,3\}$ and $y,b \in \{1,2,3\}$,
$$(x \geq a) \land (y \geq b) \implies (x,y) \succeq (a,b)$$
\subsection{Show that the above relation is an order relation.}
An order relation requires (i) reflexivity, (ii) antisymmetry, and (iii) transitivity.
\subsubsection{Reflexive:}
\begin{align*}
(x \geq a) \land (y \geq b)
&\implies (x,y) \succeq (a,b)\\
(x \geq x) \land (y \geq y)
&\implies (x,y) \succeq (x,y)
\end{align*}
\subsubsection{Anti-Symmetry:}
\begin{align*}
((x,y) \succeq (a,b)) \land ((a,b) \succeq (x,y))
&\implies (x,y) = (a,b)\\
((x \geq a) \land (y \geq b)) \land ((a \geq x) \land (b \geq y))
&\implies (x,y) = (a,b)\\
((x \geq a) \land (a \geq x)) \land ((b \geq y) \land (y \geq b))
&\implies (x,y) = (a,b)
\end{align*}
\subsubsection{Transitivity}
\begin{align*}
(x,y) \Rel (a,b) \land (a,b) \Rel (\alpha,\beta)
&\implies (x,y) \Rel (\alpha,\beta)\\
((x \geq a) \land (y \geq b)) \land ((a \geq \alpha) \land (b \geq \beta))
&\implies (x \geq \alpha) \land (y \geq \beta)\\
((x \geq a) \land (a \geq \alpha)) \land ((y \geq b) \land (b \geq \beta))
&\implies (x \geq \alpha) \land (y \geq \beta)\\
(x \geq a \geq \alpha) \land (y \geq b \geq \beta)
&\implies (x \geq \alpha) \land (y \geq \beta)\\
\end{align*}
Therefore, the relation $(x,y) \succeq (a,b)$ is an order relation.
\subsection{Can you make it the total order?}
\subsubsection{Totality:}
\begin{align*}
\forall (x,y),(a,b) \in \{1,2,3\} \cross \{1,2,3\}
&\implies ((x,y) \succeq (a,b)) \lor ((a,b) \succeq (x,y))\\
\forall x,a \in \{1,2,3\}, \forall y,b \in \{1,2,3\}
	&\implies ((x \geq a) \land (y \geq b)) \lor ((a \geq x) \land (b \geq y))
\end{align*}
% However, it is not a universal total order. A counter example is:
% \begin{align*}
% (x,y) &= (1,3)\\
% (a,b) &= (3,1)\\
% (x,y) \succeq (a,b) &= ((x \geq a) \land (y \geq b)) \lor ((a \geq x) \land (b \geq y))\\
% &= ((1 \geq 3) \land (3 \geq 1)) \lor ((3 \geq 1) \land (1 \geq 3))
% \end{align*}
% which is false, so a it is not a complete total order.
\subsection{How many different total orderings can be constructed?}
Multiple total orderings are consistent with this partial order.
A network constructed from the following relations would demonstrate the multiple possible paths.
\begin{align*}
(3,3) &\succeq (x,y) \forall (x,y) \in \{1,2,3\}\cross\{1,2,3\}\\
(3,2) &\succeq (x,y) \forall (x,y) \in \{1,2,3\}\cross\{1,2\}\\
(3,1) &\succeq (x,y) \forall (x,y) \in \{1,2,3\}\cross\{1\}\\
(2,3) &\succeq (x,y) \forall (x,y) \in \{1,2\}\cross\{1,2,3\}\\
(2,2) &\succeq (x,y) \forall (x,y) \in \{1,2\}\cross\{1,2\}\\
(2,1) &\succeq (x,y) \forall (x,y) \in \{1,2\}\cross\{1\}\\
(1,3) &\succeq (x,y) \forall (x,y) \in \{(1,1),(1,2),(1,3)\}\\
(1,2) &\succeq (x,y) \forall (x,y) \in \{(1,1),(1,2)\}\\
(1,1) &\succeq (1,1)
\end{align*}
Counting only the comparable pairs (the number of network edges, including self-edges) gives 36:
each element $(x,y)$ dominates exactly $x \cdot y$ elements (itself included), and these counts
sum to $(1+2+3)^2 = 36$.
If drawn as a tree, these 36 pairs become directed edges, and the total number of unique tree
paths can then be counted using various algorithms.
\newpage
% Problem 5
\section{}
Provide an example of $f : \Z \to \N$ such that
\subsection{$f$ is surjective, but not injective}
\begin{displaymath}
y = f(x) =
\begin{cases}
x & x \geq 0\\
-x & x < 0
\end{cases}
\end{displaymath}
\subsection{$f$ is injective, but not surjective}
\begin{displaymath}
y = f(x) =
\begin{cases}
x^2 & x \geq 0\\
x^2 + 1 & x < 0
\end{cases}
\end{displaymath}
\subsection{$f$ is surjective and injective (bijective)}
\begin{displaymath}
y = f(x) =
\begin{cases}
2 x & x \geq 0\\
-2 x - 1 & x < 0
\end{cases}
\end{displaymath}
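As a quick check, the first few values are
\begin{displaymath}
	f(0) = 0, \quad f(-1) = 1, \quad f(1) = 2, \quad f(-2) = 3, \quad f(2) = 4, \quad f(-3) = 5, \ \dots
\end{displaymath}
so every non-negative integer is attained exactly once.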
\subsection{$f$ is neither surjective nor injective}
\begin{displaymath}
y = f(x) = 0
\end{displaymath}
\newpage
% Problem 6
\section{}
\textbf{Problem:}
Is the following statement correct?\\
\textbf{Theorem 1.}
\textit{If the relation $\Rel$ on $A$ is symmetric and transitive, then it is reflexive.}\\
\textit{Proof:}
For any $a \in A$ let $b \in A$ is such that $a \Rel b$. Then by symmetry $b \Rel a$.
Then by symmetry $a \Rel a$.\\
\textbf{Solution:}
No. Specifically, the final step of the proof appeals to symmetry to conclude that
the relation is reflexive, but that conclusion actually requires the transitivity property.\\
The following is a proposed corrected statement:\\
\textbf{Theorem 1.}
\textit{If the relation $\Rel$ is symmetric and transitive on $A$, then $\Rel$ is also reflexive on $A$.}\\
\textit{Proof:}
Let $a \in A$ and $b \in A$ be selected so that $a \Rel b$.
Since $\Rel$ is symmetric: $$a \Rel b \implies b \Rel a$$
Since $\Rel$ is transitive: $$(a \Rel b) \land (b \Rel a) \implies a \Rel a$$
Therefore, $\Rel$ is also reflexive.
\end{document}
| {
"alphanum_fraction": 0.5911047346,
"avg_line_length": 30.4303534304,
"ext": "tex",
"hexsha": "183da03d31d0e90180034150d83010d5f0bac3cc",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jonaswagner2826/MATH5301",
"max_forks_repo_path": "Homework/HW2/MATH5301-HW2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jonaswagner2826/MATH5301",
"max_issues_repo_path": "Homework/HW2/MATH5301-HW2.tex",
"max_line_length": 106,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jonaswagner2826/MATH5301",
"max_stars_repo_path": "Homework/HW2/MATH5301-HW2.tex",
"max_stars_repo_stars_event_max_datetime": "2021-10-01T05:26:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-10-01T05:26:53.000Z",
"num_tokens": 6441,
"size": 14637
} |
\section{Testing Procedure}
% Per-requirement testing plan
% Processes, measurements, experiments
% NOTE: Often, acceptable ranges in results depend on specifics of the technology
\subsection{Acceptability}
% Are people willing to eat output? Incl. appearance, aroma, flavor, and texture
% Specific preparation and/or incorporation of output (formulation) for tasting
% Suggestion: Double-blind volunteer study?
\subsection{Safety of Process}
% SPECIFICALLY food production area
% Chemical hazards - Toxins, heavy metals (As, Cd, Hg, Pb, etc.)
% Biohazards - total aerobic plate count, ATP testing of food contact surfaces
% TODO: What do those^ mean? How do we test them?
\subsection{Safety of Outputs}
% SPECIFICALLY food outputs
% Biohazards - total aerobic count, specific pathogens (enterobacteriaceae, salmonella, yeasts, molds, E. coli, Listeria, etc.)
\subsection{Resource Outputs}
% Nutritional analysis - macro- and micro-nutrients
\subsection{Reliability and Stability of Outputs}
% Biohazards - total aerobic plate count, enterobacteriaceae, yeasts, molds
\section{Sample Collection Procedure and Schedule}
% Days/cycles of operation before sample collection?
% Incl. sample collection (size, timing, quantity), packaging, shipping | {
"alphanum_fraction": 0.7872,
"avg_line_length": 41.6666666667,
"ext": "tex",
"hexsha": "13b14964a9595e263bad5d1025493cecfdc51360",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-10-24T02:21:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-10-24T02:21:18.000Z",
"max_forks_repo_head_hexsha": "4bf223fb1ff73ed689674058f494c208af0ad355",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "PeaPodTechnologies/PeaPod",
"max_forks_repo_path": "docs/tex/TestingProcedure.tex",
"max_issues_count": 27,
"max_issues_repo_head_hexsha": "4bf223fb1ff73ed689674058f494c208af0ad355",
"max_issues_repo_issues_event_max_datetime": "2022-03-23T18:46:41.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-07-29T20:28:37.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "PeaPodTechnologies/PeaPod",
"max_issues_repo_path": "docs/tex/TestingProcedure.tex",
"max_line_length": 127,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "0f2b115bd79377d93d265f25649e86376961f34f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "UofTAgritech/PeaPod",
"max_stars_repo_path": "docs/tex/TestingProcedure.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-11T01:18:02.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-09-11T01:47:24.000Z",
"num_tokens": 296,
"size": 1250
} |
\chapter{Evaluation}
\section{Mobile Support}
\section{Development / Meta}
Crockford style is a bad idea. I will change it to Standard or the Airbnb ES5 style guide (https://github.com/airbnb/javascript/tree/master/es5). | {
"alphanum_fraction": 0.7969543147,
"avg_line_length": 49.25,
"ext": "tex",
"hexsha": "445bd05d277e2e1eabd263af68501e559d373d17",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8d641664b5325512474ab662417c555ff025ff41",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "LukasBombach/old-type-js",
"max_forks_repo_path": "Report/latex restructured/contents/evaluation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8d641664b5325512474ab662417c555ff025ff41",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "LukasBombach/old-type-js",
"max_issues_repo_path": "Report/latex restructured/contents/evaluation.tex",
"max_line_length": 122,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8d641664b5325512474ab662417c555ff025ff41",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "LukasBombach/old-type-js",
"max_stars_repo_path": "Report/latex restructured/contents/evaluation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 47,
"size": 197
} |
%% Copyright (C) 2009-2011, Gostai S.A.S.
%%
%% This software is provided "as is" without warranty of any kind,
%% either expressed or implied, including but not limited to the
%% implied warranties of fitness for a particular purpose.
%%
%% See the LICENSE file for more information.
\section{void}
The special entity \lstinline|void| is an object used to denote ``no
value''. It has no prototype and cannot be used as a value. In contrast
with \refObject{nil}, which is a valid object, \lstinline|void| denotes a
value one is not allowed to read.
\subsection{Prototypes}
None.
\subsection{Construction}
\lstinline|void| is the value returned by constructs that return no value.
\begin{urbiassert}[firstnumber=1]
void.isVoid;
{}.isVoid;
{if (false) 123}.isVoid;
\end{urbiassert}
\subsection{Slots}
\begin{urbiscriptapi}
\item[acceptVoid]%
Trick \this so that, even if it is \lstinline|void|, it can be used as a
value. See also \refSlot{unacceptVoid}.
\begin{urbiscript}
void.foo;
[00096374:error] !!! unexpected void
void.acceptVoid().foo;
[00102358:error] !!! lookup failed: foo
\end{urbiscript}
\item[isVoid]%
Whether \this is \lstinline|void|. Therefore, return \lstinline|true|.
Actually there is a temporary exception: \refSlot[nil]{isVoid}.
\begin{urbiassert}
void.isVoid;
void.acceptVoid().isVoid;
! 123.isVoid;
nil.isVoid;
[ Logger ] nil.isVoid will return false eventually, adjust your code.
[ Logger ] For instance replace InputStream loops from
[ Logger ] while (!(x = i.get().acceptVoid()).isVoid())
[ Logger ] cout << x;
[ Logger ] to
[ Logger ] while (!(x = i.get()).isNil())
[ Logger ] cout << x;
\end{urbiassert}
\item[unacceptVoid]%
Remove the magic from \this that allowed it to be manipulated as a value, even
if it is \lstinline|void|. See also \refSlot{acceptVoid}.
\begin{urbiscript}
void.acceptVoid().unacceptVoid().foo;
[00096374:error] !!! unexpected void
\end{urbiscript}
\end{urbiscriptapi}
%%% Local Variables:
%%% coding: utf-8
%%% mode: latex
%%% TeX-master: "../urbi-sdk"
%%% ispell-dictionary: "american"
%%% ispell-personal-dictionary: "../urbi.dict"
%%% fill-column: 76
%%% End:
| {
"alphanum_fraction": 0.6857654432,
"avg_line_length": 26.9156626506,
"ext": "tex",
"hexsha": "e95ff978dc05455646d4fd5573a301ad08c294a5",
"lang": "TeX",
"max_forks_count": 15,
"max_forks_repo_forks_event_max_datetime": "2021-09-28T19:26:08.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-01-28T20:27:02.000Z",
"max_forks_repo_head_hexsha": "fb17359b2838cdf8d3c0858abb141e167a9d4bdb",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "jcbaillie/urbi",
"max_forks_repo_path": "doc/specs/void.tex",
"max_issues_count": 7,
"max_issues_repo_head_hexsha": "fb17359b2838cdf8d3c0858abb141e167a9d4bdb",
"max_issues_repo_issues_event_max_datetime": "2019-02-13T10:51:07.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-09-05T10:08:33.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "jcbaillie/urbi",
"max_issues_repo_path": "doc/specs/void.tex",
"max_line_length": 77,
"max_stars_count": 16,
"max_stars_repo_head_hexsha": "fb17359b2838cdf8d3c0858abb141e167a9d4bdb",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "jcbaillie/urbi",
"max_stars_repo_path": "doc/specs/void.tex",
"max_stars_repo_stars_event_max_datetime": "2021-10-05T22:16:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-05-10T05:50:58.000Z",
"num_tokens": 629,
"size": 2234
} |
%% LyX 2.0.3 created this file. For more info, see http://www.lyx.org/.
%% Do not edit unless you really know what you are doing.
\documentclass[twoside,english]{article}
\usepackage{mathptmx}
\usepackage[T1]{fontenc}
\usepackage[latin9]{inputenc}
\setcounter{tocdepth}{2}
\usepackage{url}
\makeatletter
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% LyX specific LaTeX commands.
%% Because html converters don't know tabularnewline
\providecommand{\tabularnewline}{\\}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Textclass specific LaTeX commands.
\newenvironment{lyxcode}
{\par\begin{list}{}{
\setlength{\rightmargin}{\leftmargin}
\setlength{\listparindent}{0pt}% needed for AMS classes
\raggedright
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\normalfont\ttfamily}%
\item[]}
{\end{list}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands.
\usepackage{moreverb}
\usepackage{url}
\textwidth=6.5in
\topmargin=0pt
\headheight=0pt
\textheight=8.6truein
\oddsidemargin=0in
\evensidemargin=0in
\footskip=40pt
\parindent=0pt
\parskip=0.5ex
\usepackage{hyperref}
%HEVEA \def{\textbackslash}{$\backslash$} % No \textbackslash in hevea.
\makeatother
\usepackage{babel}
\begin{document}
\title{How to create EPICS device support for a simple serial or GPIB device}
\author{W. Eric Norum\\
norume.aps.anl.gov \url{mailto:[email protected]}}
\maketitle
\section{Introduction}
This tutorial provides step-by-step instructions on how to create
EPICS support for a simple serial or GPIB (IEEE-488) device. The steps
are presented in a way that should make it possible to apply them
in cookbook fashion to create support for other devices. For comprehensive
description of all the details of the I/O system used here, refer
to the asynDriver \url{../asynDriver.html} and devGpib \url{../devGpib.html}
documentation.
This document isn't for the absolute newcomer though. You must have
EPICS installed on a system somewhere and know how to build and run
the example application. In particular you must have the following
installed:
\begin{itemize}
\item EPICS R3.14.6 or higher.
\item EPICS modules/soft/asyn version 4.0 or higher.
\end{itemize}
Serial and GPIB devices can now be treated in much the same way. The
EPICS 'asyn' driver devGpib module can use the low-level drivers which
communicate with serial devices connected to ports on the IOC or to
Ethernet/Serial converters or with GPIB devices connected to local
I/O cards or to Ethernet/GPIB converters.
I based this tutorial on the device support I wrote for a CVI Laser
Corporation AB300 filter wheel. You're almost certainly interested
in controlling some other device so you won't be able to use the information
directly. I chose the AB300 as the basis for this tutorial since the
AB300 has a very limited command set, which keeps this document small,
and yet has commands which raise many of the issues that you'll have
to consider when writing support for other devices. If you'd like
to print this tutorial you can download PDF version \url{tutorial.pdf}.
\section{Determine the required I/O operations}
The first order of business is to determine the set of operations
the device will have to perform. A look at the AB300 documentation
reveals that there are four commands that must be supported. Each
command will be associated with an EPICS process variable (PV) whose
type must be appropriate to the data transferred by the command. The
AB300 commands and process variable record types I choose to associate
with them are shown in table~\ref{commandList}.
\begin{table}
\caption{AB300 filter wheel commands\label{commandList}}
\centering{}%
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|l|}{CVI Laser Corporation AB300 filter wheel}\tabularnewline
\hline
\multicolumn{1}{|l|}{Command} & \multicolumn{1}{l|}{EPICS record type}\tabularnewline
\hline
Reset & longout \tabularnewline
Go to new position & longout \tabularnewline
Query position & longin \tabularnewline
Query status & longin \tabularnewline
\hline
\end{tabular}
\end{table}
There are lots of other ways that the AB300 could be handled. It might
be useful, for example, to treat the filter position as multi-bit
binary records instead.
\section{Create a new device support module}
Now that the device operations and EPICS process variable types have
been chosen it's time to create a new EPICS application to provide
a place to perform subsequent software development. The easiest way
to do this is with the makeSupport.pl script supplied with the EPICS
ASYN package.
Here are the commands I ran. You'll have to change the \texttt{/home/EPICS/modules/soft/asyn}
to the path where your EPICS ASYN driver is installed.
\begin{lyxcode}
norume>~\textrm{\textbf{mkdir~ab300}}~\\
norume>~\textrm{\textbf{cd~ab300}}~\\
norume>~\textrm{\textbf{/home/EPICS/modules/soft/asyn/bin/linux-x86/makeSupport.pl~-t~devGpib~AB300}}
\end{lyxcode}
\subsection{Make some changes to the files in configure/}
Edit the \texttt{configure/RELEASE} file which makeSupport.pl created
and confirm that the entries describing the paths to your EPICS base
and ASYN support are correct. For example these might be:
\begin{lyxcode}
ASYN=/home/EPICS/modules/soft/asyn
EPICS\_BASE=/home/EPICS/base
\end{lyxcode}
Edit the \texttt{configure/CONFIG} file which makeSupport.pl created
and specify the IOC architectures on which the application is to run.
I wanted the application to run as a soft IOC, so I uncommented the
\texttt{CROSS\_COMPILER\_TARGET\_ARCHS} definition and set the definition
to be empty:
\begin{lyxcode}
CROSS\_COMPILER\_TARGET\_ARCHS~=
\end{lyxcode}
\subsection{Create the device support file}
The contents of the device support file provide all the details of
the communication between the device and EPICS. The makeSupport.pl
command created a skeleton device support file in \texttt{AB300Sup/devAB300.c}.
Of course, device support for a device similar to the one you're working
with provides an even easier starting point.
The remainder this section describes the changes that I made to the
skeleton file in order to support the AB300 filter wheel. You'll have
to modify the steps as appropriate for your device.
\subsubsection{Declare the DSET tables provided by the device support}
Since the AB300 provides only longin and longout records most of the
\texttt{DSET\_}\textit{xxx} define statements can be removed. Because
of the way that the device initialization is performed you must define
an analog-in DSET even if the device provides no analog-in records
(as is the case for the AB300).
\begin{lyxcode}
\#define~DSET\_AI~~~~devAiAB300~\\
\#define~DSET\_LI~~~~devLiAB300~\\
\#define~DSET\_LO~~~~devLoAB300
\end{lyxcode}
\subsubsection{Select timeout values}
The default value of \texttt{TIMEWINDOW} (2 seconds) is reasonable
for the AB300, but I increased the value of \texttt{TIMEOUT} to 5~seconds
since the filter wheel can be slow in responding.
\begin{lyxcode}
\#define~TIMEOUT~~~~~5.0~~~~/{*}~I/O~must~complete~within~this~time~{*}/~\\
\#define~TIMEWINDOW~~2.0~~~~/{*}~Wait~this~long~after~device~timeout~{*}/
\end{lyxcode}
\subsubsection{Clean up some unused values}
The skeleton file provides a number of example character string arrays.
None are needed for the AB300 so I just removed them. Not much space
would be wasted by just leaving them in place however.
\subsubsection{Declare the command array}
This is the hardest part of the job. Here's where you have to figure
how to produce the command strings required to control the device
and how to convert the device responses into EPICS process variable
values.
Each command array entry describes the details of a single I/O operation
type. The application database uses the index of the entry in the
command array to provide the link between the process variable and
the I/O operation to read or write that value.
The command array entries I created for the AB300 are shown below.
The elements of each entry are described using the names from the
GPIB documentation \url{../devGpib.html}.
\paragraph{Command array index 0 -- Device Reset}
\begin{lyxcode}
\{\&DSET\_LO,~GPIBWRITE,~IB\_Q\_LOW,~NULL,~\textquotedbl{}\textbackslash{}377\textbackslash{}377\textbackslash{}033\textquotedbl{},~10,~10,~\\
~~~~~~~NULL,~0,~0,~NULL,~NULL,~\textquotedbl{}\textbackslash{}033\textquotedbl{}\},\end{lyxcode}
\begin{description}
\item [{dset}] This command is associated with an longout record.
\item [{type}] A WRITE operation is to be performed.
\item [{pri}] This operation will be placed on the low-priority queue of
I/O requests.
\item [{cmd}] Because this is a GPIBWRITE operation this element is unused.
\item [{format}] The format string to generate the command to be sent to
the device. The first two bytes are the RESET command, the third byte
is the ECHO command. The AB300 sends no response to a reset command
so I send the 'ECHO' to verify that the device is responding. The
AB300 resets itself fast enough that it can see an echo command immediately
following the reset command.
Note that the process variable value is not used (there's no printf
\texttt{\%} format character in the command string). The AB300 is
reset whenever the EPICS record is processed.
\item [{rspLen}] The size of the readback buffer. Although only one readback
byte is expected I allow for a few extra bytes just in case.
\item [{msgLen}] The size of the buffer into which the command string is
placed. I allowed a little extra space in case a longer command is
used some day.
\item [{convert}] No special conversion function is needed.
\item [{P1,P2,P3}] There's no special conversion function so no arguments
are needed.
\item [{pdevGpibNames}] There's no name table.
\item [{eos}] The end-of-string value used to mark the end of the readback
operation. GPIB devices can usually leave this entry NULL since they
use the End-Or-Identify (EOI) line to delimit messages. Serial devices
which have the same end-of-string value for all commands could also
leave these entries NULL and set the end-of-string value with the iocsh
asynOctetSetInputEos command.
\end{description}
\paragraph{Command array index 1 -- Go to new filter position}
\begin{lyxcode}
\{\&DSET\_LO,~GPIBWRITE,~IB\_Q\_LOW,~NULL,~\textquotedbl{}\textbackslash{}017\%c\textquotedbl{},~10,~10,~\\
~~~~~~~~NULL,~0,~0,~NULL,~NULL,~\textquotedbl{}\textbackslash{}030\textquotedbl{}\},\end{lyxcode}
\begin{description}
\item [{dset}] This command is associated with an longout record.
\item [{type}] A WRITE operation is to be performed.
\item [{pri}] This operation will be placed on the low-priority queue of
I/O requests.
\item [{cmd}] Because this is a GPIBWRITE operation this element is unused.
\item [{format}] The format string to generate the command to be sent to
the device. The filter position (1-6) can be converted to the required
command byte with the printf \texttt{\%c} format.
\item [{rspLen}] The size of the readback buffer. Although only two readback
bytes are expected I allow for a few extra bytes just in case.
\item [{msgLen}] The size of the buffer into which the command string is
placed. I allowed a little extra space in case a longer command is
used some day.
\item [{convert}] No special conversion function is needed.
\item [{P1,P2,P3}] There's no special conversion function so no arguments
are needed.
\item [{pdevGpibNames}] There's no name table.
\item [{eos}] The end-of-string value used to mark the end of the readback
operation.
\end{description}
\paragraph{Command array index 2 -- Query filter position}
\begin{lyxcode}
\{\&DSET\_LI,~GPIBREAD,~IB\_Q\_LOW,~\textquotedbl{}\textbackslash{}035\textquotedbl{},~NULL,~0,~10,~\\
~~~~~~~~convertPositionReply,~0,~0,~NULL,~NULL,~\textquotedbl{}\textbackslash{}030\textquotedbl{}\},\end{lyxcode}
\begin{description}
\item [{dset}] This command is associated with an longin record.
\item [{type}] A READ operation is to be performed.
\item [{pri}] This operation will be placed on the low-priority queue of
I/O requests.
\item [{cmd}] The command string to be sent to the device. The AB300 responds
to this command by sending back three bytes: the current position,
the controller status, and a terminating \texttt{'\textbackslash{}030'}.
\item [{format}] Because this operation has its own conversion function
this element is unused.
\item [{rspLen}] There is no command echo to be read.
\item [{msgLen}] The size of the buffer into which the reply string is
placed. Although only three reply bytes are expected I allow for a
few extra bytes just in case.
\item [{convert}] There's no sscanf format that can convert the reply from
the AB300 so a special conversion function must be provided.
\item [{P1,P2,P3}] The special conversion function requires no arguments.
\item [{pdevGpibNames}] There's no name table.
\item [{eos}] The end-of-string value used to mark the end of the read
operation.
\end{description}
\paragraph{Command array index 3 -- Query controller status}
This command array entry is almost identical to the previous entry.
The only change is that a different custom conversion function is
used.
\begin{lyxcode}
\{\&DSET\_LI,~GPIBREAD,~IB\_Q\_LOW,~\textquotedbl{}\textbackslash{}035\textquotedbl{},~NULL,~0,~10,~\\
~~~~~~~~convertStatusReply,~0,~0,~NULL,~NULL,~\textquotedbl{}\textbackslash{}030\textquotedbl{}\},
\end{lyxcode}
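Assembled into the complete parameter table, the four entries above look roughly like
the following sketch (the \texttt{gpibCmd} structure and the \texttt{NUMPARAMS} macro are
the names used by the skeleton file that makeSupport.pl generated; the two custom
conversion routines referenced here are written in the next section):

\begin{verbatim}
/* Parameter table -- one entry per I/O operation type */
static struct gpibCmd gpibCmds[] = {
    /* Param 0 -- reset the filter wheel */
    {&DSET_LO, GPIBWRITE, IB_Q_LOW, NULL, "\377\377\033", 10, 10,
           NULL, 0, 0, NULL, NULL, "\033"},

    /* Param 1 -- go to a new filter position */
    {&DSET_LO, GPIBWRITE, IB_Q_LOW, NULL, "\017%c", 10, 10,
           NULL, 0, 0, NULL, NULL, "\030"},

    /* Param 2 -- query the filter position */
    {&DSET_LI, GPIBREAD, IB_Q_LOW, "\035", NULL, 0, 10,
           convertPositionReply, 0, 0, NULL, NULL, "\030"},

    /* Param 3 -- query the controller status */
    {&DSET_LI, GPIBREAD, IB_Q_LOW, "\035", NULL, 0, 10,
           convertStatusReply, 0, 0, NULL, NULL, "\030"}
};

/* The number of entries in the parameter table */
#define NUMPARAMS (sizeof(gpibCmds)/sizeof(struct gpibCmd))
\end{verbatim}

The index of each entry in this array is the number that appears after the \texttt{@}
in the INP and OUT fields of the database records.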
\subsubsection{Write the special conversion functions}
As mentioned above, special conversion functions are need to convert
reply messages from the AB300 into EPICS PV values. The easiest place
to put these functions is just before the \texttt{gpibCmds} table.
The conversion functions are passed a pointer to the \texttt{gpibDpvt}
structure and three values from the command table entry. The \texttt{gpibDpvt}
structure contains a pointer to the EPICS record. The custom conversion
function uses this pointer to set the record's value field.
Here are the custom conversion functions I wrote for the AB300.
\begin{lyxcode}
/{*}~\\
~{*}~Custom~conversion~routines~\\
~{*}/~\\
static~int~\\
convertPositionReply(struct~gpibDpvt~{*}pdpvt,~int~P1,~int~P2,~char~{*}{*}P3)~\\
\{~\\
~~~~struct~longinRecord~{*}pli~=~((struct~longinRecord~{*})(pdpvt->precord));~\\
~\\
~~~~if~(pdpvt->msgInputLen~!=~3)~\{~\\
~~~~~~~~epicsSnprintf(pdpvt->pasynUser->errorMessage,~\\
~~~~~~~~~~~~~~~~~~~~~~pdpvt->pasynUser->errorMessageSize,~\\
~~~~~~~~~~~~~~~~~~~~~~\textquotedbl{}Invalid~reply\textquotedbl{});~\\
~~~~~~~~return~-1;~\\
~~~~\}~\\
~~~~pli->val~=~pdpvt->msg{[}0{]};~\\
~~~~return~0;~\\
\}~\\
static~int~\\
convertStatusReply(struct~gpibDpvt~{*}pdpvt,~int~P1,~int~P2,~char~{*}{*}P3)~\\
\{~\\
~~~~struct~longinRecord~{*}pli~=~((struct~longinRecord~{*})(pdpvt->precord));~\\
~\\
~~~~if~(pdpvt->msgInputLen~!=~3)~\{~\\
~~~~~~~~epicsSnprintf(pdpvt->pasynUser->errorMessage,~\\
~~~~~~~~~~~~~~~~~~~~~~pdpvt->pasynUser->errorMessageSize,~\\
~~~~~~~~~~~~~~~~~~~~~~\textquotedbl{}Invalid~reply\textquotedbl{});~\\
~~~~~~~~return~-1;~\\
~~~~\}~\\
~~~~pli->val~=~pdpvt->msg{[}1{]};~\\
~~~~return~0;~\\
\}
\end{lyxcode}
Some points of interest:
\begin{enumerate}
\item Custom conversion functions indicate an error by returning -1.
\item If an error status is returned an explanation should be left in the
\texttt{errorMessage} buffer.
\item I put in a sanity check to ensure that the end-of-string character
is where it should be.
\end{enumerate}
\subsubsection{Provide the device support initialization}
Because of way code is stored in object libraries on different systems
the device support parameter table must be initialized at run-time.
The analog-in initializer is used to perform this operation. This
is why all device support files must declare an analog-in DSET.
Here's the initialization for the AB300 device support. The AB300
immediately echos the command characters sent to it so the respond2Writes
value must be set to 0. All the other values are left as created by
the makeSupport.pl script:
\begin{lyxcode}
static~long~init\_ai(int~pass)~\\
\{~\\
~~~~if(pass==0)~\{~\\
~~~~~~~~devSupParms.name~=~\textquotedbl{}devAB300\textquotedbl{};~\\
~~~~~~~~devSupParms.gpibCmds~=~gpibCmds;~\\
~~~~~~~~devSupParms.numparams~=~NUMPARAMS;~\\
~~~~~~~~devSupParms.timeout~=~TIMEOUT;~\\
~~~~~~~~devSupParms.timeWindow~=~TIMEWINDOW;~\\
~~~~~~~~devSupParms.respond2Writes~=~0;~\\
~~~~\}~\\
~~~~return(0);~\\
\}
\end{lyxcode}
\subsection{Modify the device support database definition file}
This file specifies the link between the DSET names defined in the
device support file and the DTYP fields in the application database.
The makeSupport.pl command created an example file in \texttt{AB300Sup/devAB300.dbd}.
If you removed any of the \texttt{DSET\_}\textit{xxx} definitions
from the device support file you must remove the corresponding lines
from this file. You must define an analog-in DSET even if the device
provides no analog-in records (as is the case for the AB300).
\verbatiminput{AB300/AB300Sup/devAB300.dbd}
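If you are writing this file by hand, the entries take roughly the following form, one
\texttt{device()} line per DSET declared in the device support file (the \texttt{AB300Gpib}
DTYP string shown here is chosen to match the DTYP used by the database records later in
this tutorial):

\begin{verbatim}
device(ai,      GPIB_IO, devAiAB300, "AB300Gpib")
device(longin,  GPIB_IO, devLiAB300, "AB300Gpib")
device(longout, GPIB_IO, devLoAB300, "AB300Gpib")
\end{verbatim}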
\subsection{Create the device support database file}
This is the database describing the actual EPICS process variables
associated with the filter wheel.
I modified the file \texttt{AB300Sup/devAB300.db} to have the following
contents:
\verbatiminput{AB300/AB300Sup/devAB300.db}
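If you are writing the database by hand, the records take roughly the following form
(a sketch: the record names match the \texttt{dbl} listing shown later, the DESC strings
are illustrative only, and the number after the \texttt{@} in each INP or OUT field selects
the entry in the GPIB command array):

\begin{verbatim}
record(longout, "$(P)$(R)FilterWheel:reset") {
    field(DESC, "Reset AB300 controller")
    field(DTYP, "AB300Gpib")
    field(OUT,  "#L$(L) A$(A) @0")
}
record(longout, "$(P)$(R)FilterWheel") {
    field(DESC, "Set filter wheel position")
    field(DTYP, "AB300Gpib")
    field(OUT,  "#L$(L) A$(A) @1")
}
record(longin, "$(P)$(R)FilterWheel:fbk") {
    field(DESC, "Filter Wheel Position")
    field(DTYP, "AB300Gpib")
    field(INP,  "#L$(L) A$(A) @2")
}
record(longin, "$(P)$(R)FilterWheel:status") {
    field(DESC, "Filter wheel status")
    field(DTYP, "AB300Gpib")
    field(INP,  "#L$(L) A$(A) @3")
}
\end{verbatim}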
Notes:
\begin{enumerate}
\item The numbers following the \texttt{L} in the INP and OUT fields are
the number of the `link' used to communicate with the filter wheel.
This link is set up at run time by commands in the application startup
script.
\item The numbers following the \texttt{A} in the INP and OUT fields are
unused by serial devices but must be a valid GPIB address (0-30) since
the GPIB address conversion routines check the value and the diagnostic
display routines require a matching value.
\item The numbers following the \texttt{@} in the INP and OUT fields are
the indices into the GPIB command array.
\item The DTYP fields must match the names specified in the devAB300.dbd
database definition.
\item The device support database follows the ASYN convention that the macros
\$(P), \$(R), \$(L) and \$(A) are used to specify the record name
prefixes, link number and GPIB address, respectively.
\end{enumerate}
\subsection{Build the device support}
Change directories to the top-level directory of your device support
and:
\begin{lyxcode}
norume>~\textrm{\textbf{make}}
\end{lyxcode}
(\textbf{gnumake} on Solaris).
If all goes well you'll be left with a device support library in lib/\textit{<EPICS\_HOST\_ARCH>}/,
a device support database definition in dbd/ and a device support
database in db/.
\section{Create a test application}
Now that the device support has been completed it's time to create
a new EPICS application to confirm that the device support is operating
correctly. The easiest way to do this is with the makeBaseApp.pl script
supplied with EPICS.
Here are the commands I ran. You'll have to change the \texttt{/home/EPICS/base}
to the path to where your EPICS base is installed. If you're not running
on Linux you'll also have to change all the \texttt{linux-x86} to
reflect the architecture you're using (\texttt{solaris-sparc}, \texttt{darwin-ppc},
etc.). I built the test application in the same <top> as the device
support, but the application could be built anywhere. As well, I built
the application as a 'soft' IOC running on the host machine, but the
serial/GPIB driver also works on RTEMS and vxWorks.
\begin{lyxcode}
norume>~\textrm{\textbf{cd~ab300}}~\\
norume>~\textrm{\textbf{/home/EPICS/base/bin/linux-x86/makeBaseApp.pl~-t~ioc~AB300}}~\\
norume>~\textrm{\textbf{/home/EPICS/base/bin/linux-x86/makeBaseApp.pl~-i~-t~ioc~AB300}}~\\
The~following~target~architectures~are~available~in~base:~\\
~~~~RTEMS-pc386~\\
~~~~linux-x86~\\
~~~~solaris-sparc~\\
~~~~win32-x86-cygwin~\\
~~~~vxWorks-ppc603~\\
What~architecture~do~you~want~to~use?~\textrm{\textbf{linux-x86}}
\end{lyxcode}
\section{Using the device support in an application}
Several files need minor modifications to use the device support in
the test, or any other, application.
\subsection{Make some changes to configure/RELEASE}
Edit the \texttt{configure/RELEASE} file which makeBaseApp.pl created
and confirm that the EPICS\_BASE path is correct. Add entries for
your ASYN and device support. For example these might be:
\begin{lyxcode}
AB300=/home/EPICS/modules/instrument/ab300/1-2
ASYN=/home/EPICS/modules/soft/asyn/3-2
EPICS\_BASE=/home/EPICS/base
\end{lyxcode}
\subsection{Modify the application database definition file}
Your application database definition file must include the database
definition files for your instrument and for the ASYN drivers. There
are two ways that this can be done:
\begin{enumerate}
\item If you are building your application database definition from an \textit{xxx}\texttt{Include.dbd}
file you include the additional database definitions in that file.
For example, to add support for the AB300 instrument and local and
remote serial line drivers:\\
\verbatiminput{AB300/AB300App/src/AB300Include.dbd}
\item If you are building your application database definition from the
application Makefile you specify the additional database definitions
there:\\
.\\
.\\
\textit{xxx}\_DBD += base.dbd\\
\textit{xxx}\_DBD += devAB300.dbd\\
\textit{xxx}\_DBD += drvAsynIPPort.dbd\\
\textit{xxx}\_DBD += drvAsynSerialPort.dbd\\
.\\
.\\
This is the preferred method.
\end{enumerate}
\subsection{Add the device support libraries to the application}
You must link your device support library and the ASYN support library
with the application. Add the following lines
\begin{lyxcode}
\textrm{\textit{xxx}}\_LIBS~+=~devAB300
\textrm{\textit{xxx}}\_LIBS~+=~asyn
\end{lyxcode}
before the
\begin{lyxcode}
\textrm{\textit{xxx}}\_LIBS~+=~\$(EPICS\_BASE\_IOC\_LIBS)
\end{lyxcode}
line in the application \texttt{Makefile}.
\subsection{Modify the application startup script}
The \texttt{st.cmd} application startup script created by the makeBaseApp.pl
script needs a few changes to get the application working properly.
\begin{enumerate}
\item Load the device support database records:
\begin{lyxcode}
cd~\$(AB300)~~~~~~~~~~\textrm{(}cd~AB300~\textrm{if~using~the~vxWorks~shell)}
dbLoadRecords(\textquotedbl{}db/devAB300.db\textquotedbl{},\textquotedbl{}P=AB300:,R=,L=0,A=0\textquotedbl{})
\end{lyxcode}
\item Set up the 'port' between the IOC and the filter wheel.
\begin{itemize}
\item If you're using an Ethernet/RS-232 converter or a device which communicates
over a telnet-style socket connection you need to specify the Internet
host and port number like:
\begin{lyxcode}
drvAsynIPPortConfigure(\textquotedbl{}L0\textquotedbl{},\textquotedbl{}164.54.9.91:4002\textquotedbl{},0,0,0)
\end{lyxcode}
\item If you're using a serial line directly attached to the IOC you need
something like:
\begin{lyxcode}
drvAsynSerialPortConfigure(\textquotedbl{}L0\textquotedbl{},\textquotedbl{}/dev/ttyS0\textquotedbl{},0,0,0)~\\
asynSetOption(\textquotedbl{}L0\textquotedbl{},~-1,~\textquotedbl{}baud\textquotedbl{},~\textquotedbl{}9600\textquotedbl{})~\\
asynSetOption(\textquotedbl{}L0\textquotedbl{},~-1,~\textquotedbl{}bits\textquotedbl{},~\textquotedbl{}8\textquotedbl{})~\\
asynSetOption(\textquotedbl{}L0\textquotedbl{},~-1,~\textquotedbl{}parity\textquotedbl{},~\textquotedbl{}none\textquotedbl{})~\\
asynSetOption(\textquotedbl{}L0\textquotedbl{},~-1,~\textquotedbl{}stop\textquotedbl{},~\textquotedbl{}1\textquotedbl{})~\\
asynSetOption(\textquotedbl{}L0\textquotedbl{},~-1,~\textquotedbl{}clocal\textquotedbl{},~\textquotedbl{}Y\textquotedbl{})~\\
asynSetOption(\textquotedbl{}L0\textquotedbl{},~-1,~\textquotedbl{}crtscts\textquotedbl{},~\textquotedbl{}N\textquotedbl{})
\end{lyxcode}
\item If you're using a serial line directly attached to a vxWorks IOC you
must first configure the serial port interface hardware. The following
example shows the commands to configure a port on a GreenSprings UART
Industry-Pack module.
\begin{lyxcode}
ipacAddVIPC616\_01(\textquotedbl{}0x6000,B0000000\textquotedbl{})~\\
tyGSOctalDrv(1)~\\
tyGSOctalModuleInit(\textquotedbl{}RS232\textquotedbl{},~0x80,~0,~0)~\\
tyGSOctalDevCreate(\textquotedbl{}/tyGS/0/0\textquotedbl{},0,0,1000,1000)~\\
drvAsynSerialPortConfigure(\textquotedbl{}L0\textquotedbl{},\textquotedbl{}/tyGS/0/0\textquotedbl{},0,0,0)~\\
asynSetOption(\textquotedbl{}L0\textquotedbl{},-1,\textquotedbl{}baud\textquotedbl{},\textquotedbl{}9600\textquotedbl{})
\end{lyxcode}
\end{itemize}
In all of the above examples the first argument of the configure and
set port option commands is the link identifier and must match the
\texttt{L} value in the EPICS database record INP and OUT fields.
The second argument of the configure command identifies the port to
which the connection is to be made. The third argument sets the priority
of the worker thread which performs the I/O operations. A value of
zero directs the command to choose a reasonable default value. The
fourth argument is zero to direct the device support layer to automatically
connect to the serial port on startup and whenever the serial port
becomes disconnected. The final argument is zero to direct the device
support layer to use standard end-of-string processing on input messages.
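For example (the host address below is purely illustrative, and the \texttt{\#} lines are
explanatory comments, not part of the command), the arguments line up as follows:
\begin{lyxcode}
\#~\textquotedbl{}L0\textquotedbl{}~--~link~identifier;~matches~the~L~value~in~the~record~INP/OUT~fields~\\
\#~\textquotedbl{}192.168.1.50:4002\textquotedbl{}~--~host:port~of~the~terminal~server~\\
\#~0,~0,~0~--~default~worker-thread~priority,~auto-connect,~standard~end-of-string~handling~\\
drvAsynIPPortConfigure(\textquotedbl{}L0\textquotedbl{},\textquotedbl{}192.168.1.50:4002\textquotedbl{},0,0,0)
\end{lyxcode}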
\item (Optional) Add lines to control the debugging level of the serial/GPIB
driver. The following enables error messages and a description of
every I/O operation.
\begin{lyxcode}
asynSetTraceMask(\textquotedbl{}L0\textquotedbl{},-1,0x9)~\\
asynSetTraceIOMask(\textquotedbl{}L0\textquotedbl{},-1,0x2)
\end{lyxcode}
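The same command can be issued again later from the IOC shell to reduce the output;
for example (mask values follow the asyn trace conventions used above, with \texttt{0x1}
keeping only error messages):
\begin{lyxcode}
asynSetTraceMask(\textquotedbl{}L0\textquotedbl{},-1,0x1)
\end{lyxcode}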
A better way to control the amount and type of diagnostic output is
to add an asynRecord (see \url{../asynRecord.html}) to your application.
\end{enumerate}
\subsection{Build the application}
Change directories to the top-level directory of your application
and:
\begin{lyxcode}
norume>~\textrm{\textbf{make}}
\end{lyxcode}
(\textbf{gnumake} on Solaris).
If all goes well you'll be left with an executable program in bin/linux-x86/AB300.
\subsection{Run the application}
Change directories to where makeBaseApp.pl put the application startup
script and run the application:
\begin{lyxcode}
norume>~\textrm{\textbf{cd~iocBoot/iocAB300}}~\\
norume>~\textrm{\textbf{../../bin/linux-x86/AB300~st.cmd}}~\\
dbLoadDatabase(\textquotedbl{}../../dbd/AB300.dbd\textquotedbl{},0,0)~\\
AB300\_registerRecordDeviceDriver(pdbbase)~\\
cd~\$\{AB300\}
dbLoadRecords(\textquotedbl{}db/devAB300.db\textquotedbl{},\textquotedbl{}P=AB300:,R=,L=0,A=0\textquotedbl{})~\\
drvAsynIPPortConfigure(\textquotedbl{}L0\textquotedbl{},\textquotedbl{}164.54.3.137:4001\textquotedbl{},0,0,0)~\\
asynSetTraceMask(\textquotedbl{}L0\textquotedbl{},-1,0x9)~\\
asynSetTraceIOMask(\textquotedbl{}L0\textquotedbl{},-1,0x2)~\\
iocInit()~\\
\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#~\\
\#\#\#~~EPICS~IOC~CORE~built~on~Apr~23~2004~\\
\#\#\#~~EPICS~R3.14.6~~2008/03/25~18:06:21~\\
\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#~\\
Starting~iocInit~\\
iocInit:~All~initialization~complete
\end{lyxcode}
Check the process variable names:
\begin{lyxcode}
epics>~\textrm{\textbf{dbl}}~\\
AB300:FilterWheel:fbk~\\
AB300:FilterWheel:status~\\
AB300:FilterWheel~\\
AB300:FilterWheel:reset
\end{lyxcode}
Reset the filter wheel. The values sent between the IOC and the filter
wheel are shown:
\begin{lyxcode}
epics>~\textrm{\textbf{dbpf~AB300:FilterWheel:reset~0}}~\\
DBR\_LONG:~~~~~~~~~~~0~~~~~~~~~0x0~\\
2004/04/21~12:05:14.386~164.54.3.137:4001~write~3~\textbackslash{}377\textbackslash{}377\textbackslash{}033~\\
2004/04/21~12:05:16.174~164.54.3.137:4001~read~1~\textbackslash{}033
\end{lyxcode}
Read back the filter wheel position. The dbtr command prints the record
before the I/O has a chance to occur:
\begin{lyxcode}
epics>~\textrm{\textbf{dbtr~AB300:FilterWheel:fbk}}~\\
ACKS:~NO\_ALARM~~~~~~ACKT:~YES~~~~~~~~~~~ADEL:~0~~~~~~~~~~~~~ALST:~0~\\
ASG:~~~~~~~~~~~~~~~~BKPT:~0x00~~~~~~~~~~DESC:~Filter~Wheel~Position~\\
DISA:~0~~~~~~~~~~~~~DISP:~0~~~~~~~~~~~~~DISS:~NO\_ALARM~~~~~~DISV:~1~\\
DTYP:~AB300Gpib~~~~~EGU:~~~~~~~~~~~~~~~~EVNT:~0~~~~~~~~~~~~~FLNK:CONSTANT~0~\\
HHSV:~NO\_ALARM~~~~~~HIGH:~0~~~~~~~~~~~~~HIHI:~0~~~~~~~~~~~~~HOPR:~6~\\
HSV:~NO\_ALARM~~~~~~~HYST:~0~~~~~~~~~~~~~INP:GPIB\_IO~\#L0~A0~@2~\\
LALM:~0~~~~~~~~~~~~~LCNT:~0~~~~~~~~~~~~~LLSV:~NO\_ALARM~~~~~~LOLO:~0~\\
LOPR:~1~~~~~~~~~~~~~LOW:~0~~~~~~~~~~~~~~LSV:~NO\_ALARM~~~~~~~MDEL:~0~\\
MLST:~0~~~~~~~~~~~~~NAME:~AB300:FilterWheel:fbk~~~~~~~~~~~~~NSEV:~NO\_ALARM~\\
NSTA:~NO\_ALARM~~~~~~PACT:~1~~~~~~~~~~~~~PHAS:~0~~~~~~~~~~~~~PINI:~NO~\\
PRIO:~LOW~~~~~~~~~~~PROC:~0~~~~~~~~~~~~~PUTF:~0~~~~~~~~~~~~~RPRO:~0~\\
SCAN:~Passive~~~~~~~SDIS:CONSTANT~~~~~~~SEVR:~INVALID~~~~~~~SIML:CONSTANT~\\
SIMM:~NO~~~~~~~~~~~~SIMS:~NO\_ALARM~~~~~~SIOL:CONSTANT~~~~~~~STAT:~UDF~\\
SVAL:~0~~~~~~~~~~~~~TPRO:~0~~~~~~~~~~~~~TSE:~0~~~~~~~~~~~~~~TSEL:CONSTANT~\\
UDF:~1~~~~~~~~~~~~~~VAL:~0~\\
2004/04/21~12:08:01.971~164.54.3.137:4001~write~1~\textbackslash{}035~\\
2004/04/21~12:08:01.994~164.54.3.137:4001~read~3~\textbackslash{}001\textbackslash{}020\textbackslash{}030
\end{lyxcode}
Now the process variable should have that value:
\begin{lyxcode}
epics>~\textrm{\textbf{dbpr~AB300:FilterWheel:fbk}}~\\
ASG:~~~~~~~~~~~~~~~~DESC:~Filter~Wheel~Position~~~~~~~~~~~~~DISA:~0~\\
DISP:~0~~~~~~~~~~~~~DISV:~1~~~~~~~~~~~~~NAME:~AB300:FilterWheel:fbk~\\
SEVR:~NO\_ALARM~~~~~~STAT:~NO\_ALARM~~~~~~SVAL:~0~~~~~~~~~~~~~TPRO:~0~\\
VAL:~1
\end{lyxcode}
Move the wheel to position 4:
\begin{lyxcode}
epics>~\textrm{\textbf{dbpf~AB300:FilterWheel~4}}~\\
DBR\_LONG:~~~~~~~~~~~4~~~~~~~~~0x4~~\\
2004/04/21~12:10:51.542~164.54.3.137:4001~write~2~\textbackslash{}017\textbackslash{}004~\\
2004/04/21~12:10:51.562~164.54.3.137:4001~read~1~\textbackslash{}020~\\
2004/04/21~12:10:52.902~164.54.3.137:4001~read~1~\textbackslash{}030
\end{lyxcode}
Read back the position:
\begin{lyxcode}
epics>~\textrm{\textbf{dbtr~AB300:FilterWheel:fbk}}~\\
ACKS:~NO\_ALARM~~~~~~ACKT:~YES~~~~~~~~~~~ADEL:~0~~~~~~~~~~~~~ALST:~1~\\
ASG:~~~~~~~~~~~~~~~~BKPT:~0x00~~~~~~~~~~DESC:~Filter~Wheel~Position~~\\
DISA:~0~~~~~~~~~~~~~DISP:~0~~~~~~~~~~~~~DISS:~NO\_ALARM~~~~~~DISV:~1~\\
DTYP:~AB300Gpib~~~~~EGU:~~~~~~~~~~~~~~~~EVNT:~0~~~~~~~~~~~~~FLNK:CONSTANT~0~\\
HHSV:~NO\_ALARM~~~~~~HIGH:~0~~~~~~~~~~~~~HIHI:~0~~~~~~~~~~~~~HOPR:~6~\\
HSV:~NO\_ALARM~~~~~~~HYST:~0~~~~~~~~~~~~~INP:GPIB\_IO~\#L0~A0~@2~\\
LALM:~1~~~~~~~~~~~~~LCNT:~0~~~~~~~~~~~~~LLSV:~NO\_ALARM~~~~~~LOLO:~0~\\
LOPR:~1~~~~~~~~~~~~~LOW:~0~~~~~~~~~~~~~~LSV:~NO\_ALARM~~~~~~~MDEL:~0~\\
MLST:~1~~~~~~~~~~~~~NAME:~AB300:FilterWheel:fbk~~~~~~~~~~~~~NSEV:~NO\_ALARM~\\
NSTA:~NO\_ALARM~~~~~~PACT:~1~~~~~~~~~~~~~PHAS:~0~~~~~~~~~~~~~PINI:~NO~\\
PRIO:~LOW~~~~~~~~~~~PROC:~0~~~~~~~~~~~~~PUTF:~0~~~~~~~~~~~~~RPRO:~0~\\
SCAN:~Passive~~~~~~~SDIS:CONSTANT~~~~~~~SEVR:~NO\_ALARM~~~~~~SIML:CONSTANT~\\
SIMM:~NO~~~~~~~~~~~~SIMS:~NO\_ALARM~~~~~~SIOL:CONSTANT~~~~~~~STAT:~NO\_ALARM~\\
SVAL:~0~~~~~~~~~~~~~TPRO:~0~~~~~~~~~~~~~TSE:~0~~~~~~~~~~~~~~TSEL:CONSTANT~\\
UDF:~0~~~~~~~~~~~~~~VAL:~1~\\
2004/04/21~12:11:43.372~164.54.3.137:4001~write~1~\textbackslash{}035~\\
2004/04/21~12:11:43.391~164.54.3.137:4001~read~3~\textbackslash{}004\textbackslash{}020\textbackslash{}030
\end{lyxcode}
And it really is 4:
\begin{lyxcode}
epics>~\textrm{\textbf{dbpr~AB300:FilterWheel:fbk}}~\\
ASG:~~~~~~~~~~~~~~~~DESC:~Filter~Wheel~Position~~~~~~~~~~~~~DISA:~0~\\
DISP:~0~~~~~~~~~~~~~DISV:~1~~~~~~~~~~~~~NAME:~AB300:FilterWheel:fbk~\\
SEVR:~NO\_ALARM~~~~~~STAT:~NO\_ALARM~~~~~~SVAL:~0~~~~~~~~~~~~~TPRO:~0~\\
VAL:~4
\end{lyxcode}
\section{Device Support File}
Here is the complete device support file for the AB300 filter wheel
(\texttt{AB300Sup/devAB300.c}):
\verbatiminput{AB300/AB300Sup/devAB300.c}
\section{asynRecord support}
The asynRecord provides a convenient mechanism for controlling the
diagnostic messages produced by asyn drivers. To use an asynRecord
in your application:
\begin{enumerate}
\item Add the line\\
\\
\texttt{DB\_INSTALLS += \$(ASYN)/db/asynRecord.db} \\
\\
to an application \texttt{Makefile}.
\item Create the diagnostic record by adding a line like\\
\\
\texttt{\small cd \$(ASYN)} (\texttt{\small cd ASYN} if using the vxWorks shell)~\\
\texttt{\small dbLoadRecords(\textquotedbl{}db/asynRecord.db\textquotedbl{},\textquotedbl{}P=AB300,R=Test,PORT=L0,ADDR=0,IMAX=0,OMAX=0\textquotedbl{})}\\
\\
to the application startup (\texttt{st.cmd}) script. The \texttt{PORT}
value must match the value in the corresponding \texttt{drvAsynIPPortConfigure}
or \texttt{drvAsynSerialPortConfigure} command. The \texttt{addr}
value should be that of the instrument whose I/O you wish to monitor.
The \texttt{P} and \texttt{R} values are arbitrary and are concatenated
together to form the record name. Choose values which are unique
among all IOCs on your network.
\end{enumerate}
To run the asynRecord screen, add \texttt{<asynTop>/medm}
to your \texttt{EPICS\_DISPLAY\_PATH} environment variable and start
medm with \texttt{P}, \texttt{R}, \texttt{PORT} and \texttt{ADDR}
values matching those given in the \texttt{dbLoadRecords} command:
\begin{lyxcode}
medm~-x~-macro~\textquotedbl{}P=AB300,R=Test\textquotedbl{}~asynRecord.adl
\end{lyxcode}
\section{Support for devices with common 'end-of-string' characters}
Devices which use the same character, or characters, to mark the end
of each command or response message need not specify these characters
in the GPIB command table entries. They can, instead, specify the
terminator sequences as part of the driver port configuration commands.
This makes it possible for a single command table to provide support
for devices which provide a serial or Ethernet interface (and require
command/response terminators) and also provide a GPIB interface (which
does not require command/response terminators).
For example, the configuration commands for a TDS3000 digital oscilloscope
connected through an Ethernet serial port adapter might look like:
\begin{lyxcode}
drvAsynIPPortConfigure(\textquotedbl{}L0\textquotedbl{},~\textquotedbl{}192.168.9.90:4003\textquotedbl{},~0,~0,~0)~\\
asynOctetSetInputEos(\textquotedbl{}L0\textquotedbl{},0,\textquotedbl{}\textbackslash{}n\textquotedbl{})~\\
asynOctetSetOutputEos(\textquotedbl{}L0\textquotedbl{},0,\textquotedbl{}\textbackslash{}n\textquotedbl{})
\end{lyxcode}
The configuration command for the same oscilloscope connected to an
Ethernet GPIB adapter would be:
\begin{lyxcode}
vxi11Configure(\textquotedbl{}L0\textquotedbl{},~\textquotedbl{}192.168.8.129\textquotedbl{},~0,~\textquotedbl{}0.0\textquotedbl{},~\textquotedbl{}gpib0\textquotedbl{},~0)
\end{lyxcode}
An example command table entry for this device is shown below. Notice
that there is no \texttt{\textbackslash{}n} at the end of the command
and that the table 'eos' field is \texttt{NULL}:
\begin{lyxcode}
/{*}~2~:~read~delay~:~AI~{*}/~\\
~~\{\&DSET\_AI,~GPIBREAD,~IB\_Q\_LOW,~\textquotedbl{}HOR:DEL:TIM?\textquotedbl{},~\textquotedbl{}\%lf\textquotedbl{},~0,~20,~\\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~NULL,~0,~0,~NULL,~NULL,~NULL\},\end{lyxcode}
\end{document}
| {
"alphanum_fraction": 0.7261471024,
"avg_line_length": 43.1597051597,
"ext": "tex",
"hexsha": "5af45d8836af9262eb0527f6e1b0755db20a76e7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b764a53bf449d9f6b54a1173c5e75a22cf95098c",
"max_forks_repo_licenses": [
"OML"
],
"max_forks_repo_name": "A2-Collaboration/epics",
"max_forks_repo_path": "modules/asyn/documentation/HowToDoSerial/tutorial.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b764a53bf449d9f6b54a1173c5e75a22cf95098c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"OML"
],
"max_issues_repo_name": "A2-Collaboration/epics",
"max_issues_repo_path": "modules/asyn/documentation/HowToDoSerial/tutorial.tex",
"max_line_length": 171,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b764a53bf449d9f6b54a1173c5e75a22cf95098c",
"max_stars_repo_licenses": [
"OML"
],
"max_stars_repo_name": "A2-Collaboration/epics",
"max_stars_repo_path": "modules/asyn/documentation/HowToDoSerial/tutorial.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 10874,
"size": 35132
} |
\documentclass[a4paper,11pt,final]{article}
\usepackage{fancyvrb, color, graphicx, hyperref, amsmath, url}
\usepackage{palatino}
\usepackage[a4paper,text={16.5cm,25.2cm},centering]{geometry}
\hypersetup
{ pdfauthor = {Matti Pastell},
pdftitle={FIR filter design with Python and SciPy},
colorlinks=TRUE,
linkcolor=black,
citecolor=blue,
urlcolor=blue
}
\setlength{\parindent}{0pt}
\setlength{\parskip}{1.2ex}
\title{FIR filter design with Python and SciPy}
\author{Matti Pastell \\ \url{http://mpastell.com}}
\date{15th April 2013}
\begin{document}
\maketitle
\section{Introduction}
This is an example of a document that can be published using
\href{http://mpastell.com/pweave}{Pweave}. Text is written
using \LaTeX{} and code between \texttt{<<>>} and \texttt{@} is executed
and results are included in the resulting document.
You can define various options for code chunks to control code
execution and formatting (see
\href{http://mpastell.com/pweave/usage.html\#code-chunk-options}{Pweave
docs}).
\section{FIR Filter Design}
We'll implement lowpass, highpass and bandpass FIR filters. If you
want to read more about DSP I highly recommend
\href{http://www.dspguide.com/}{The Scientist and Engineer's Guide to
Digital Signal Processing} which is freely available online.
\subsection{Functions for frequency, phase, impulse and step response}
Let's first define functions to plot filter properties.
\begin{verbatim}
from pylab import *
import scipy.signal as signal
import warnings
warnings.simplefilter("ignore")
#Plot frequency and phase response
def mfreqz(b,a=1):
w,h = signal.freqz(b,a)
h_dB = 20 * log10 (abs(h))
subplot(211)
plot(w/max(w),h_dB)
ylim(-150, 5)
ylabel('Magnitude (db)')
xlabel(r'Normalized Frequency (x$\pi$rad/sample)')
title(r'Frequency response')
subplot(212)
h_Phase = unwrap(arctan2(imag(h),real(h)))
plot(w/max(w),h_Phase)
ylabel('Phase (radians)')
xlabel(r'Normalized Frequency (x$\pi$rad/sample)')
title(r'Phase response')
subplots_adjust(hspace=0.5)
#Plot step and impulse response
def impz(b,a=1):
l = len(b)
impulse = repeat(0.,l); impulse[0] =1.
x = arange(0,l)
response = signal.lfilter(b,a,impulse)
subplot(211)
stem(x, response)
ylabel('Amplitude')
xlabel(r'n (samples)')
title(r'Impulse response')
subplot(212)
step = cumsum(response)
stem(x, step)
ylabel('Amplitude')
xlabel(r'n (samples)')
title(r'Step response')
subplots_adjust(hspace=0.5)
\end{verbatim}
\subsection{Lowpass FIR filter}
Designing a lowpass FIR filter is very simple with SciPy: all you
need to do is define the window length, the cutoff frequency and the
window.
The Hamming window is defined as:
$w(n) = \alpha - \beta\cos\frac{2\pi n}{N-1}$, where $\alpha=0.54$ and
$\beta=0.46$
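As a quick, illustrative sanity check (not one of the document's Pweave chunks), the
formula can be compared against NumPy's built-in Hamming window:
\begin{verbatim}
import numpy as np

N = 61
n = np.arange(N)
w = 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))  # Hamming window by hand
print(np.allclose(w, np.hamming(N)))                # True
\end{verbatim}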
The next Pweave chunk is executed in term mode; see the source document
for the syntax. Notice also that Pweave can now capture multiple
figures per code chunk.
\begin{verbatim}
n = 61
a = signal.firwin(n, cutoff = 0.3, window = "hamming")
#Frequency and phase response
mfreqz(a)
#Impulse and step response
p = figure(2)
impz(a)
\end{verbatim}
\includegraphics[width= \linewidth]{figures/FIR_design_verb_figure2_1.pdf}
\includegraphics[width= \linewidth]{figures/FIR_design_verb_figure2_2.pdf}
\subsection{Highpass FIR Filter}
Let's define a highpass FIR filter:
\begin{verbatim}
n = 101
a = signal.firwin(n, cutoff = 0.3, window = "hanning", pass_zero=False)
mfreqz(a)
\end{verbatim}
\includegraphics[width= \linewidth]{figures/FIR_design_verb_figure3_1.pdf}
\subsection{Bandpass FIR filter}
Notice that the plot has a caption defined in code chunk options.
\begin{verbatim}
n = 1001
a = signal.firwin(n, cutoff = [0.2, 0.5], window = 'blackmanharris', pass_zero = False)
mfreqz(a)
\end{verbatim}
\begin{figure}[htpb]
\center
\includegraphics[width= \linewidth]{figures/FIR_design_verb_figure4_1.pdf}
\caption{Bandpass FIR filter.}
\label{fig:None}
\end{figure}
\end{document}
| {
"alphanum_fraction": 0.7252229931,
"avg_line_length": 25.3836477987,
"ext": "tex",
"hexsha": "fb08e872b1323721836fcdbbd5f132bce4e1d0a3",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "62a740e46c16b3aecce53aa951792b33ac93e1a2",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "piccolbo/Pweave",
"max_forks_repo_path": "tests/weave/tex/FIR_design_verb_REF.tex",
"max_issues_count": 8,
"max_issues_repo_head_hexsha": "62a740e46c16b3aecce53aa951792b33ac93e1a2",
"max_issues_repo_issues_event_max_datetime": "2018-12-15T19:07:01.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-12-13T05:00:33.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "piccolbo/Pweave",
"max_issues_repo_path": "tests/weave/tex/FIR_design_verb_REF.tex",
"max_line_length": 87,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "62a740e46c16b3aecce53aa951792b33ac93e1a2",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "piccolbo/Pweave",
"max_stars_repo_path": "tests/weave/tex/FIR_design_verb_REF.tex",
"max_stars_repo_stars_event_max_datetime": "2018-12-14T17:48:36.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-12-14T17:48:36.000Z",
"num_tokens": 1186,
"size": 4036
} |
\documentclass{beamer}
\usepackage{dcolumn}
\usepackage{listings}
\lstset{basicstyle=\scriptsize, showspaces=false, showtabs=false, showstringspaces=false, breaklines, prebreak=..., tabsize=2}
\usepackage{graphicx}
\usepackage{textcomp}
\usepackage{url}
\usepackage{hanging}
\urlstyle{rm}
\newcolumntype{.}{D{.}{.}{-1}}
\newcolumntype{d}[1]{D{.}{.}{#1}}
\usetheme{JuanLesPins}
\setbeamertemplate{footnote}{%
\hangpara{2em}{1}%
\makebox[2em][l]{\insertfootnotemark}\footnotesize\insertfootnotetext\par%
}
\beamertemplatenavigationsymbolsempty
\title[]{Language Basics}
\subtitle{Introduction to Java}
\author{Alan Hohn\\
\texttt{[email protected]}}
\date{29 August 2013}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\begin{frame}
\frametitle{Contents}
\tableofcontents[]
\end{frame}
\section{Brief Review}
\begin{frame}
\frametitle{Course Contents}
\begin{itemize}
\item Getting started writing Java programs (complete)
\item Java programming language basics (today is \#2 of 4 sessions)
\item Packaging Java programs (1 session)
\item Core library features (6 sessions)
\item Java user interfaces (2 sessions)
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{Our Basic Java Example}
\lstinputlisting[language=Java]{../examples/src/org/anvard/introtojava/HelloName.java}
\end{frame}
\begin{frame}
\frametitle{This Time}
\begin{itemize}
\item Method Calls and Field References
\item Control Flow
\item Operators
\item Object Equality
\end{itemize}
\end{frame}
\section{Method Calls and Field References}
\begin{frame}
\frametitle{The \texttt{this} keyword}
\begin{itemize}
\item Java methods and fields exist inside a class
\begin{itemize}
\item Unless static, methods and fields belong to a particular instance of the class (to an object)
\item Java uses \texttt{this} to refer back to the current object instance
\end{itemize}
\item In Java, all references are ``scoped''
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{Scoped references}
\begin{itemize}
\item Dot notation is used in Java to ``enter'' a level of scope
\item The compiler searches outward for an unqualified reference
\item The scope ends with the class (there are no global fields or methods)
\item In other words, it is OK to use the keyword \texttt{this}, but not necessary unless avoiding ambiguity
\lstset{language=Java}
\begin{lstlisting}
public void setValue(int value) {
this.value = value;
}
\end{lstlisting}
\item Field references and method calls can be ``chained''
\lstset{language=Java}
\begin{lstlisting}
// out is a (static) field in the System class
// println is a method in the PrintStream class
System.out.println("Hello, world!");
\end{lstlisting}
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{Method Definitions}
\begin{itemize}
\item Java methods have four parts
\begin{itemize}
\item Return type (may be void)
\item Identifier (name)
\item Parameter list
\item Exceptions
\end{itemize}
\item No two methods in the same class may have the same name and parameter list
\item Methods may also have qualifiers such as \texttt{public} or \texttt{static}
\end{itemize}
\lstset{language=Java}
\begin{lstlisting}
public int add(Integer left, Integer right)
throws OverflowException;
\end{lstlisting}
\end{frame}
\begin{frame}[fragile]
\frametitle{Method Calls}
\begin{itemize}
\item The instance on which the method operates is like an implicit first parameter
\begin{itemize}
\item It is available to the method using \texttt{this}
\item It is used implicitly when the method refers to instance variables
\end{itemize}
\item Objects returned from methods may be ``chained''
\item Method calls must use parentheses even if there are no parameters, to avoid ambiguity with field references
\item Instantiating an object with \texttt{new} is like calling a constructor method; it returns an object of the instantiated type
\end{itemize}
\lstset{language=Java}
\begin{lstlisting}
System.out.println("Hello, world!");
String name = new BufferedReader(
new InputStreamReader(System.in)).readLine();
\end{lstlisting}
\end{frame}
\begin{frame}[fragile]
\frametitle{Quick Note on Naming}
\begin{itemize}
\item The compiler searches for an unqualified reference in the current scope
\item The \texttt{import} statement brings a class into the current scope so it can be used unqualified (without its full package name)
\item By convention, Java classes are UpperCamelCase, while packages, fields, methods and variables are lowerCamelCase
\item This is done to avoid hiding names unnecessarily
\end{itemize}
\lstset{language=Java}
\begin{lstlisting}
// import System; is implied in any Java program
public String system() { return "Laptop"; }
public String system = "Desktop";
System.out.println("Hello, " + system()); // Hello, Laptop
System.out.println("Hello, " + system); // Hello, Desktop
System.out.println("Hello, " + System); // Compiler error
\end{lstlisting}
\end{frame}
\section{Control Flow}
\begin{frame}
\frametitle{Java Control Flow}
\begin{itemize}
\item Java has the expected set of flow control keywords
\begin{itemize}
\item \texttt{if} - \texttt{ else if } - \texttt{else}
\item \texttt{switch} - \texttt{case} - \texttt{default}
\item \texttt{while} and \texttt{do} - \texttt{while}
\item \texttt{for} (including an enhanced version for iterators)
\end{itemize}
\item \texttt{if} conditionals and \texttt{for} expressions must be in parentheses
\item Curly braces ( \{ \} ) are used to group multiple statements; they are optional for a single-statement block but are recommended
\item \texttt{break} works with \texttt{switch}, \texttt{for}, \texttt{while}, or \texttt{do-while}
\item \texttt{continue} works with \texttt{for}, \texttt{while}, or \texttt{do-while}
\item Labels are supported for \texttt{break} and \texttt{continue}
\item \texttt{return} can appear anywhere in a method
\item There is no \texttt{goto}
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{Control Flow Example}
\lstset{language=Java}
\begin{lstlisting}
class BreakWithLabelDemo {
public boolean search2dArray(int[][] arrayOfInts,
int searchfor) {
boolean foundIt = false;
search:
for (int i = 0; i < arrayOfInts.length; i++) {
for (int j = 0; j < arrayOfInts[i].length; j++) {
if (arrayOfInts[i][j] == searchfor) {
foundIt = true;
break search; // Or just 'return true;'
}
}
}
return foundIt;
}
}
\end{lstlisting}
\end{frame}
\begin{frame}[fragile]
\frametitle{Enhanced \texttt{for}}
\begin{itemize}
\item Some Java classes implement an interface called \texttt{Iterable}
\item This includes arrays and most of the built-in collection classes
\item This means they will provide an ``iterator'' for looping through multiple objects
\item Java provides a cleaner version of \texttt{for} for \texttt{Iterable} classes
\item This version can also be more performant for some collections
\end{itemize}
\lstset{language=Java}
\begin{lstlisting}
class EnhancedForDemo {
public static void main(String[] args){
int[] numbers =
{1,2,3,4,5,6,7,8,9,10};
for (int item : numbers) {
System.out.println("Item is: " + item);
}
}
}
\end{lstlisting}
\end{frame}
\section{Operators and object equality}
\begin{frame}
\frametitle{Java Operators}
\begin{itemize}
\item Java has the expected set of operators (\texttt{+ - * / \%})
\item Similar to C, there is a difference between the bitwise operators (\texttt{\& | \textasciicircum}) and the logical operators (\texttt{\&\& ||})
\item Unlike C, Java has no ``unsigned'' values, but in addition to the shift operators (\texttt{<< >>}) there is an ``unsigned'' shift right \texttt{>>>}.
\item Java has clever compound operators like C (e.g. \texttt{++ -- +=}) and the ternary operator \texttt{(? :)}
\item The comparison operators (\texttt{< <= > >=}) only work for primitive types and their wrapper classes
\item There is no exponentiation operator; use \texttt{Math.pow()}
\end{itemize}
\end{frame}
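\begin{frame}[fragile]
\frametitle{Operator Examples}
A few illustrative snippets (values chosen arbitrarily):
\lstset{language=Java}
\begin{lstlisting}
int a = 7, b = 2;
int quotient = a / b;           // 3 (integer division)
int remainder = a % b;          // 1
a += 3;                         // compound assignment; a is now 10
int max = (a > b) ? a : b;      // 10 (ternary operator)
double power = Math.pow(2, 10); // 1024.0 (no exponentiation operator)
int shifted = -8 >>> 1;         // 2147483644 (unsigned shift right)
\end{lstlisting}
\end{frame}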
\begin{frame}[fragile]
\frametitle{Widening and Narrowing Conversions}
\begin{itemize}
\item Java is a strongly typed language
\item However, operators can mix types under certain circumstances
\item Java will automatically perform ``widening'' conversions ($\texttt{byte} \rightarrow \texttt{short} \rightarrow \texttt{int} \rightarrow \texttt{long} \rightarrow \texttt{float} \rightarrow \texttt{double}$)
\item Narrowing conversion can lose information, so Java requires an explicit cast
\lstset{language=Java}
\begin{lstlisting}
double d = 123.45; float f = (float) d;
\end{lstlisting}
\item When mixing operators, the resulting value will be the `widest' required
\item For example, for the division operator \texttt{/}, the result can be an integer (for integer / integer) or a floating-point value (for floating-point or mixed types)
\end{itemize}
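For example (illustrative):
\lstset{language=Java}
\begin{lstlisting}
int i = 7 / 2;       // 3   (integer / integer gives an integer)
double x = 7 / 2.0;  // 3.5 (the int 7 is widened to double)
\end{lstlisting}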
\end{frame}
\begin{frame}
\frametitle{Equality Operators}
\begin{itemize}
\item Java has two kinds of equality: reference equality (\texttt{o1 == o2}) and object equality (\texttt{o1.equals(o2)})
\item Reference equality means the two references refer to exactly the same object in memory
\item Object equality means the objects are semantically the same (mean the same thing)
\item For classes you create, you can (and often should) define your own rules for object equality by making your own \texttt{equals()} method
\item Later we will talk about \texttt{hashCode()}, an important method that's often seen together with \texttt{equals()}
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{Equality example}
\lstset{language=Java}
\begin{lstlisting}
public class Person {
public String firstName;
public String lastName;
public boolean equals(Object obj) {
if (null == obj) return false;
if (obj.getClass() != this.getClass()) return false;
Person other = (Person)obj;
    return other.firstName.equals(this.firstName) &&
           other.lastName.equals(this.lastName);
}
}
Person p1 = new Person();
p1.firstName = "Joe";
p1.lastName = "Smith";
Person p2 = new Person();
p2.firstName = "Joe";
p2.lastName = "Smith";
System.out.println(p1 == p2);      // false
System.out.println(p1.equals(p2)); // true
\end{lstlisting}
\end{frame}
\section{Next Time}
\begin{frame}
\frametitle{Next Time}
\begin{itemize}
\item Exception Handling
\item Checked and Unchecked Exceptions
\item Try-With-Resources
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Credit in LMPeople}
\begin{center}
LMPeople Course Code: 071409ILT04
\end{center}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.7421167953,
"avg_line_length": 33.2183544304,
"ext": "tex",
"hexsha": "04d726925c9868d67dd7f93e77299b53cc5697a4",
"lang": "TeX",
"max_forks_count": 16,
"max_forks_repo_forks_event_max_datetime": "2021-12-23T02:46:01.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-09-11T21:33:03.000Z",
"max_forks_repo_head_hexsha": "68ab9ce866a0185dcb060693baa85a20a39ed543",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "andycarmona/introjava",
"max_forks_repo_path": "slides/03.Language_Basics/03.Language_Basics.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "68ab9ce866a0185dcb060693baa85a20a39ed543",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "andycarmona/introjava",
"max_issues_repo_path": "slides/03.Language_Basics/03.Language_Basics.tex",
"max_line_length": 213,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "68ab9ce866a0185dcb060693baa85a20a39ed543",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "andycarmona/introjava",
"max_stars_repo_path": "slides/03.Language_Basics/03.Language_Basics.tex",
"max_stars_repo_stars_event_max_datetime": "2021-03-05T09:42:48.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-12-08T00:26:49.000Z",
"num_tokens": 2913,
"size": 10497
} |
\section{Interpretation and Discussion of the Results}
%In this section, first the challenges of the given problem are summarized and an ideal model to solve the problem is defined. Second, all properties of the ideal model are discussed with the relation to selected models and their performance. Last but not the least, conclusion is made on which of the three models is the best for classification of glass fragments.
\subsection{Classification Challenges and the Ideal Classifier}
Building a good model to accurately classify glass fragments from a data set of only $214$ data points turned out to be demanding. The core challenges were:
%\vspace{10pt}
\begin{enumerate}
\item Little amount of data for 6-class classification problem
\item Skewed class distribution
\item Class overlap
\end{enumerate}
%\vspace{10pt}
Since the model may be used in criminal investigation processes, an ideal model would be transparent, i.e.\ its decision-making would be comprehensible to humans. Furthermore, the model should be well suited to tackling the challenges mentioned above.
%Before a discussion of particular models, it is important to highlight the main challenges of the given classification problem and based on it define ideal model. First, certain classes of glass, such as $1$, $2$ and $3$, had a large overlap given its similar features. This implies a need for a complex model capable of handling such overlap and still being able to generalize well. Second, distribution of classes within the training and test split of data was skewed. Thus, minority classes might be very hard to predict. This goes hand in hand with the size of the training data-set which was only 149 records. Therefore, optimal model should be able to learn even from a small training data-set. Last but not the least, since the output of the model might be used as part of crime analysis, ideal model should be also interpretable, i.e. it should be clear how it decides.
\subsection{Comparison of Model Performances}
The three models evaluated within this project can be ranked by their performance (measured by the expected out-of-sample accuracy) as follows:
\begin{enumerate}
\item \class{RandomForestClassifier()}
\item \class{NeuralNetworkClassifier()}
\item \class{DecisionTreeClassifier()}
\end{enumerate}
Looking at the individual classification reports, all models were able to almost perfectly predict class $7$ which goes hand in hand with the findings from the EDA and Figure \ref{pca}. In contrast, the prediction of the overlapping classes 1, 2 and 3 turned out to be generally challenging. In order to classify these classes correctly, the model needed to establish a complex decision boundary, that at the same time generalises well. The \class{RandomForestClassifier()} did the best job at separating the classes 1 and 2 from each other (Table \ref{random_forest_evaluation}), giving one indication of its overall good performance. The \class{DecisionTreeClassifier()} struggled the most at separating the overlapping classes (Table \ref{dt_evaluation}).
Another noticeable, common pattern from the classification reports is that the minority classes were generally more difficult to predict. That is reasonable, since a lack of training examples makes it difficult to learn class-specific properties.
\subsection{Interpretation of Model Performances}
The \class{RandomForestClassifier()}, being an ensemble of a large number of uncorrelated decision trees, and the \class{NeuralNetworkClassifier()} are complex models with longer training times but more accurate predictions. This explains why both models outperform the simpler \class{DecisionTreeClassifier()}.
The \class{RandomForestClassifier()} probably gives better results than the \class{NeuralNetworkClassifier()} because neural networks usually need larger amounts of training data to perform well. In fact, neural networks perform best on complex classification problems with large amounts of data to learn from. Since this classification problem is quite the opposite of that, the neural network does not perform as well as the \class{RandomForestClassifier()}.
%- 1. difference between high performant, but slow/ less performant, but fast
%- 2. neural net not as good, because usually needs more data points
%- 3. however, when choosing a model, dt might stil be intersting, since transparent model (main advantage not only speed, but also interpretability)
\begin{comment}
First, all models were able to almost perfectly predict class $7$ which goes hand in hand with the findings from EDA and figure $2$ where even using only first two principal components, it was possible to see the clear separation of class $7$ from others. Second, more challenging proved to be a prediction of classes $1$ and $2$. In order to classify them correctly, the model needed to establish complex decision boundary. From this perspective, it was expected that \class{DecisionTreeClassifier()} will perform the worst as it can only divide feature space with straight boundaries. This expectation was wrong as \class{NeuralNetwork()} (both implementations) had similar results. One possible explanation is that \class{NeuralNetwork()} got stuck in a local minimum during training and even advanced optimizer (Adam) did not help to overcome this. More interestingly, \class{RandomForestClassifier()} performed way better than the other two models. Its core advantage is concealed in its ability to rely on a majority vote of a large and diverse set of \class{DecisionTreeClassifier()} trained only on a subsample of original data. Thanks to this diversity, it is capable of fitting complex patterns but even more importantly generalize well. In other words, it is less likely to make an error due to small variation in data.
\subsection{Learning from a small data set}
In the given data-set, there were three minor classes, namely $3$, $5$ and $6$. From EDA (Figure 3), it was clear that it will be very hard to classify them correctly as they all overlapped with the other majority classes. This proved to be a problem especially for \class{DecisionTreeClassifier()}. A good explanation can be found from its visualization and following the splits of class 3 for example. In majority of cases (3 out of 5), it ends being in a leaf node either with class 1 or 2 with which it has a large overlap. Further, the nodes are almost pure, thus the model has no further incentive to split it. In addition, since these classes have more records, they are also more likely to be predicted. Finally, compare to other two methods, \class{NeuralNetwork()} is parametric and given the chosen architecture, it has a large number of parameters to be tuned. Therefore, it also needs a larger training data-set in order for it to work well. This might be a possible explanation why it outperformed by \class{RandomForestClassifier()} for prediction of classes $1$ and $2$.
%\subsection{Interpretability}
%\class{DecisionTreeClassifier()}'s core advantage compare to the other two models is that its decision can be simply translated into just if-else statements. For example, can see that within its first two levels (root is level 1), it only uses first or second principal component to decide. This in fact makes sense as the first two components explain most of the variability of the given features and thus it is a good way to separate the classes. On the opposite site, trying to understand how a \class{RandomForestClassifier()} decided is way more difficult and infeasible as it would require to go through all the 100 trees.
\end{comment}
\subsection{Conclusion}
The analysis has shown that the model chosen solely on the basis of best performance (\class{RandomForestClassifier()}) has an expected out-of-sample accuracy of over 80\%, which is a solid result given the challenges of the problem at hand. However, since the model may be used to assist criminal investigations, transparency in the way decisions are made can be relevant, e.g.\ in court trials. Thus, if interpretability of the model is a core requirement for the final model, a \class{DecisionTreeClassifier()} should be considered. | {
"alphanum_fraction": 0.7983489134,
"avg_line_length": 158.4038461538,
"ext": "tex",
"hexsha": "422af1f19b83b7ad2c35b1b6a3b03f9f128997c2",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-01-29T17:23:15.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-29T17:23:15.000Z",
"max_forks_repo_head_hexsha": "c052c33010033cd9fd596eb5ac4d270d1bf98ee3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jonas-mika/ml-project",
"max_forks_repo_path": "report/06_results_discussion.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c052c33010033cd9fd596eb5ac4d270d1bf98ee3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jonas-mika/ml-project",
"max_issues_repo_path": "report/06_results_discussion.tex",
"max_line_length": 1330,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c052c33010033cd9fd596eb5ac4d270d1bf98ee3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jonas-mika/ml-project",
"max_stars_repo_path": "report/06_results_discussion.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1714,
"size": 8237
} |
\SetPicSubDir{ch-Rice}
\SetExpSubDir{ch-Rice}
\chapter{Like White on Rice}
\label{ch:rice}
\vspace{2em}
\lipsum[1-4]
\section{Preliminaries}
\lipsum[5-8]
\begin{figure}[!t]
\centering
\includegraphics[width=.5\linewidth]{\Pic{jpg}{rice}}
\vspace{\BeforeCaptionVSpace}
\caption{A bowl of rice.}
\label{Rice:fig:bowl_of_rice}
\end{figure}
\section{Methodology}
\autoref{Rice:fig:bowl_of_rice} shows a bowl of rice.
\autoref{Rice:algo:sample} demonstrates the formatting of pseudo code.
Please carefully check the source files and learn how to use this style.
Importantly:
\begin{itemize}
\item Always state your input.
\item State the output if any.
\item Always number your lines for quick referral.
\item Always declare and initialize your local variables.
\item Always use \CMD{\gets} (``$\gets$'') for assignments.
%Always use \textbackslash gets for assignments.
\end{itemize}
\begin{algorithm}[!t]
\AlgoFontSize
\DontPrintSemicolon
\KwGlobal{max. calories of daily intake $\mathcal{C}$}
\KwGlobal{calories per bowl of rice $\mathcal{B}$}
\BlankLine
\SetKwFunction{fEatRice}{EatRice}
\SetKwFunction{fDoExercise}{DoExercise}
\KwIn{number of bowls of rice $n$}
\KwOut{calories intake}
\Proc{\fEatRice{$n$}}{
$cal \gets n \times \mathcal{B}$\;
\uIf{$cal \geq \mathcal{C}$}{
\Return $\mathcal{C}$\;
}
\Else{
\Return $cal - \fDoExercise{n}$\;
}
}
\BlankLine
\KwIn{time duration (in minutes) of exercise $t$}
\KwOut{calories consumed}
\Func{\fDoExercise{$t$}}{
$cal \gets 0$\;
\lFor{$i \gets 1$ \To $t$}{$cal \gets cal + i$}
\Return $cal$\;
}
\caption{Sample pseudo code of a dummy algorithm.}
\label{Rice:algo:sample}
\end{algorithm}
\section{Evaluation}
\lipsum[16-19]
\begin{figure}[!t]
\centering
\begin{minipage}[b]{.45\linewidth}
\centering
\includegraphics[width=\linewidth]{\Exp{eps}{taste_with_meals}}
\caption{Taste with meal repetition.}
\label{Rice:exp:taste_with_meals}
\end{minipage}
\hspace*{2em}
\begin{minipage}[b]{.45\linewidth}
\centering
\includegraphics[width=\linewidth]{\Exp{eps}{taste_with_freshness}}
\caption{Taste with meal freshness.}
\label{Rice:exp:taste_with_freshness}
\end{minipage}
\end{figure}
\autoref{Rice:exp:taste_with_meals} and \autoref{Rice:exp:taste_with_freshness} show how the taste of rice is affected by meal repetition and freshness respectively.
\lipsum[20]
\section{Summary}
\lipsum[21]
| {
"alphanum_fraction": 0.7138763815,
"avg_line_length": 23.0471698113,
"ext": "tex",
"hexsha": "874bd716969f6c85c801a13ecb36f250e215b781",
"lang": "TeX",
"max_forks_count": 24,
"max_forks_repo_forks_event_max_datetime": "2022-03-28T01:56:10.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-05-13T08:42:18.000Z",
"max_forks_repo_head_hexsha": "9d86d410132e081a731d449179fc9393189aeb7d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "thisum/nusthesis",
"max_forks_repo_path": "src/ch-rice.tex",
"max_issues_count": 15,
"max_issues_repo_head_hexsha": "9d86d410132e081a731d449179fc9393189aeb7d",
"max_issues_repo_issues_event_max_datetime": "2022-03-10T07:25:28.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-11-19T14:19:17.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "thisum/nusthesis",
"max_issues_repo_path": "src/ch-rice.tex",
"max_line_length": 166,
"max_stars_count": 76,
"max_stars_repo_head_hexsha": "9d86d410132e081a731d449179fc9393189aeb7d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "thisum/nusthesis",
"max_stars_repo_path": "src/ch-rice.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T03:57:35.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-11-19T14:23:19.000Z",
"num_tokens": 826,
"size": 2443
} |
\documentclass[a4paper, foldmark, 12pt]{leaflet}
%\documentclass[a4paper, foldmark, nocombine]{leaflet}
\usepackage[utf8]{inputenc}
\usepackage{color}
\usepackage{hyperref}
\usepackage{times}
\usepackage{floatflt}
%% Example users
\newcommand{\UserI}{\textit{Alice}}
\newcommand{\UserII}{\textit{Ben}}
\newcommand{\UserIII}{\textit{Christine}}
%% Example movies
\newcommand{\MovieI}{\textit{The Usual Suspects}}
\newcommand{\MovieII}{\textit{American Beauty}}
\newcommand{\MovieIII}{\textit{The Godfather}}
\newcommand{\MovieIV}{\textit{Road Trip}}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}
\renewcommand{\headrulewidth}{0.0pt}
\renewcommand{\footrulewidth}{0.4pt}
\fancyhead[CO,CE]{
\vspace{-1cm}
\begin{center}
\includegraphics[width=9.0cm]{fig/MyMediaScreenwallBanner.jpg}
\end{center}
}
\fancyfoot[C]{\thepage}
%\setmargins{3cm}{2cm}{1cm}{1cm}
\headheight = 2cm
\title{MyMediaLite Recommender System Algorithm Library}
\author{
\includegraphics[width=4.0cm]{fig/uni-hildesheim-400x400.jpg}\\
Machine Learning Lab
}
\date{}
\begin{document}
\maketitle
MyMediaLite is a lightweight, multi-purpose library
of recommender system algorithms.
It addresses the two most common scenarios in collaborative filtering:
\textbf{rating prediction} (e.g. on a scale of 1 to 5 stars)
and \textbf{item prediction from positive-only feedback} (e.g. from clicks, likes, or purchase actions).
\begin{center}
\url{http://ismll.de/mymedialite}
\end{center}
\newpage
\section{MyMediaLite's Key Features}
\textbf{Choice:} \vspace{-0.6em}
\begin{list}{\labelitemi}{\leftmargin=1em} \addtolength{\itemsep}{-0.5\baselineskip}
\item dozens of different recommendation methods,
\item methods can use collaborative, content, or relational data.
\end{list}
\vspace{-0.5em}
\textbf{Accessibility:} \vspace{-0.6em}
\begin{list}{\labelitemi}{\leftmargin=1em}\addtolength{\itemsep}{-0.5\baselineskip}
\item Includes evaluation routines for rating and item prediction;
\item command line tools that read a simple text-based input format;
\item usable from C\#, Python, Ruby, F\#, etc.,
\item complete documentation of the library and its tools.
\end{list}
\vspace{-0.5em}
\textbf{Compactness:} Core library is less than 200 KB ``big''.
\textbf{Portability:}
Written in C\#, for the .NET platform;
runs on every architecture where Mono works: Linux, Windows, Mac OS X.
\textbf{Parallel processing} on multi-core/multi-processor systems.
\textbf{Serialization:} save and reload recommender models.
\textbf{Real-time incremental updates} for most recommenders.
\textbf{Freedom:} free/open source software (GPL).
\newpage
\section{Target Groups}
\subsection{Researchers}
\begin{itemize}
\item Don't waste your time implementing methods if you actually want to study
other aspects of recommender systems!
\item Use the MyMediaLite recommenders as baseline methods in benchmarks.
\item Use MyMediaLite's infrastructure as an easy starting point to implement your own methods.
\end{itemize}
\subsection{Developers}
\begin{itemize}
\item Add recommender system technologies to your software or website.
\end{itemize}
\subsection{Educators and Students}
\begin{itemize}
\item Demonstrate/see how recommender system methods are implemented.
\item Use MyMediaLite as a basis for you school projects.
\end{itemize}
\newpage
\section{Recommendation Tasks Addressed}
\subsection{Rating Prediction}
Popularized by systems like MovieLens, Netflix, or Jester,
this is the most-discussed collaborative filtering task in the
recommender systems literature.
Given a set of ratings, e.g. on a scale from 1 to 5,
the goal is to predict unknown ratings.
\begin{center}
\begin{tabular}{|l||c|c|c|}
\hline
& \UserI & \UserII & \UserIII \\ \hline
\hline
\MovieI & 5 & & 4 \\ \hline
\MovieII & 3 & 4 & 3 \\ \hline
\MovieIII & & \textbf{??} & 1 \\ \hline
\MovieIV & 2 & & \\ \hline
\end{tabular}
\end{center}
\subsection{Positive-Only Feedback Item Recommendation}
Getting ratings from users requires explicit actions from their side.
Much more data is available in the form of implicit feedback,
e.g. whether a user has viewed or purchased a product in an online shop.
Very often this information is positive-only,
i.e. we know users like the products they buy, but we cannot easily assume
that they do not like everything they have not (yet) bought.
%% TODO different picture
\begin{center}
\includegraphics[width=7.0cm]{fig/interpretation_single.pdf}
\end{center}
\newpage
\section{Implemented Methods}
\textbf{Rating Prediction}
\begin{itemize}
\item averages: global, user, item
\item linear baseline method by Koren and Bell
\item frequency-weighted Slope One
\item k-nearest neighbor (kNN):
\begin{itemize}
\item based on user or item similarities, with different similarity measures
\item collaborative or attribute-/content-based
\end{itemize}
\item (biased) matrix factorization; factor-wise/SGD training; optimized for RMSE, logistic loss or MAE
\end{itemize}
\textbf{Item Prediction}
\begin{itemize}
\item random
\item most popular item
\item linear content-based model optimized for BPR (BPR-Linear)
\item support-vector machine using item attributes
\item k-nearest neighbor (kNN):
\begin{itemize}
\item based on user or item similarities
\item collaborative or attribute-/content-based
\end{itemize}
\item weighted regularized matrix factorization (WR-MF)
\item matrix factorization optimized for Bayesian Personalized Ranking (BPR-MF)
\end{itemize}
\newpage
\section{Download}
Get the latest release of MyMediaLite here:
\begin{center}
\url{http://ismll.de/mymedialite}
\end{center}
\section{Contact}
We are always happy about feedback (suggestions, bug reports, patches, etc.).
To contact us, send an e-mail to
\begin{center}
\texttt{[email protected]}
\end{center}
Follow us on Twitter: {\tt @mymedialite}
\section{Acknowledgements}
\begin{floatingfigure}[r]{1.6cm}
\vspace{-0.5cm}
\includegraphics[width=2.1cm]{fig/uni-hildesheim-400x400.jpg}
\end{floatingfigure}
MyMediaLite was developed by Zeno Gantner,
Steffen Rendle, and Christoph Freudenthaler
at University of Hildesheim.
\vspace{0.4cm}
This work was partly funded by the European Commission FP7 project MyMedia
(Dynamic Personalization of Multimedia) under grant agreement no.~215006.
\vspace{0.2cm}
\begin{center}
\includegraphics[width=4.0cm]{fig/MyMediaLogoMedium.png}
\hspace{1.5cm}
\includegraphics[width=2.0cm]{fig/logo-fp7.png}\\
\end{center}
\end{document}
| {
"alphanum_fraction": 0.7387240356,
"avg_line_length": 28.5593220339,
"ext": "tex",
"hexsha": "a82f776b235522dd7266fa329010d3284ad106df",
"lang": "TeX",
"max_forks_count": 178,
"max_forks_repo_forks_event_max_datetime": "2022-03-04T10:56:32.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-01-12T08:51:06.000Z",
"max_forks_repo_head_hexsha": "b5a35672d07c5277245056d0995dac2473e552dd",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "marrk/MyMediaLite",
"max_forks_repo_path": "doc/flyer/mymedialite-flyer.tex",
"max_issues_count": 175,
"max_issues_repo_head_hexsha": "b5a35672d07c5277245056d0995dac2473e552dd",
"max_issues_repo_issues_event_max_datetime": "2021-01-03T13:52:36.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-03-01T10:54:53.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "marrk/MyMediaLite",
"max_issues_repo_path": "doc/flyer/mymedialite-flyer.tex",
"max_line_length": 104,
"max_stars_count": 345,
"max_stars_repo_head_hexsha": "b5a35672d07c5277245056d0995dac2473e552dd",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "marrk/MyMediaLite",
"max_stars_repo_path": "doc/flyer/mymedialite-flyer.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-09T02:26:50.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-08T03:52:23.000Z",
"num_tokens": 1937,
"size": 6740
} |
\onecolumn
%-------------------------------------------------------------------------------
\section{Changes}
\label{sec:change}
%-------------------------------------------------------------------------------
\textbf{Version 01-08-01} of \Package{atlaslatex} includes quite a few definitions from the Jet/ETmiss group.
A new style file has been created \File{atlasjetetmiss.sty} that is not included by default.
Some of the definitions from the Jet/ETmiss group are of more general use and so have been merged into existing style files:
\begin{description}
\item[atlasmisc.sty] List of Monte Carlo generators expanded:
\Macro{POWHEGBOX}, \Macro{POWPYTHIA}.
Add MC macros with suffix \enquote{V} for version number.
Added \Macro{kt}, \Macro{antikt}, \Macro{Antikt}, \Macro{LO}, \Macro{NLO}, \Macro{NLL}, \Macro{NNLO},
\Macro{muF} and \Macro{muR}.
Added the macros \Macro{Runone}, \Macro{Runtwo} and \Macro{Runthr}.
Added \Macro{radlength} and \Macro{StoB}.
Added some standard $b$-tagging terms:
\Macro{btag}, \Macro{btagged}, \Macro{bquark}, \Macro{bquarks}, \Macro{bjet}, \Macro{bjets}.
\item[atlasparticle.sty] Now includes \Macro{pp}, \Macro{enu}, \Macro{munu},
\item[atlasprocess.sty] Added \Macro{Zbb}, \Macro{Ztt},
\Macro{Zlplm}, \Macro{Zepem}, \Macro{Zmpmm}, \Macro{Ztptm},
\Macro{tWb}, \Macro{Wqqbar},
\Macro{Wlnu}, \Macro{Wenu}, \Macro{Wmunu},
\Macro{Wjets}, \Macro{Zjets}.
The definition of \Macro{Hllll} was corrected.
\item[atlasheavyion.sty] \Macro{pp} moved to \File{atlasparticle.sty}.
\end{description}
This version also introduced the (optional) use of the \Package{heppennames} package.
The style files \File{atlashepparticle.sty} and \File{atlashepprocess.sty}
are intended to replace \File{atlasparticle.sty} and \File{atlasprocess.sty}.
Several particle definitions were removed from the \Package{atlasparticle} package,
as they just enable a few Greek letters: $\pi$, $\eta$ and $\psi$ to be used directly in text mode.
In addition, the primed $\Upsilon$ resonances, e.g.\ $\Upsilon''$,
as well as $D^{**}$ were removed,
as the official names are \Ups{3} etc.
The definitions of \MeV, \GeV\ etc.\ in \texttt{atlasunit.sty} were updated in order to remove the \texttt{if} tests in them.
The \texttt{if} tests caused a problem in a paper draft, although the reason was not understood.
The new definitions do not introduce extra space before the unit in math mode.
%-------------------------------------------------------------------------------
\section{Old macros}
\label{sec:old}
%-------------------------------------------------------------------------------
With the introduction of \Package{atlaslatex} several macro names have been changed to make them more consistent.
A few have been removed. The changes include:
\begin{itemize}
\item Kaons now have a capital \enquote{K} in the macro name, e.g.\ \verb|\Kplus| for \Kplus;
\item \verb|\Ztau|, \verb|\Wtau|, \verb|\Htau| \verb|\Atau| have been replaced by
\verb|\Ztautau|, \verb|\Wtautau|, \verb|\Htautau| \verb|\Atautau|;
\item \verb|\Ups| replaces \verb|\ups|;
the use of \verb|\ups| to produce $\Upsilon$ in text mode has been removed;
\item \verb|\cm| has been removed, as it was the only length unit defined for text and math mode;
\item \verb|\mass| has been removed, as \verb|\twomass| can do the same thing and the name is more intuitive;
\item \verb|\mA| has been removed as it conflicts with \Package{siunitx} Version 1, which uses the name
for milliamp.
\item \Macro{mathcal} rather than \Macro{mathscr} is recommended for luminosity and aplanarity.
\end{itemize}
Quite a few macros are more related to \Zboson physics than they are to LHC physics and have
been moved to the \File{atlasother.sty} file, which is not included by default.
There are also macros for various decay processes in \File{atlasprocess.sty}, which is not included by default,
but may serve as a guide for defining your own favourite processes.
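For instance, a private macro for a favourite process could be defined along the following lines
(an illustrative definition, not one provided by the ATLAS style files; it assumes the
\Package{xspace} package is loaded, as it is in the ATLAS styles):
\begin{verbatim}
\newcommand{\MyZll}{\ensuremath{Z \to \ell^{+}\ell^{-}}\xspace}
\end{verbatim}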
It used to be the case that you had to use \verb|\MET{}| rather than just \verb|\MET| to get the spacing right,
as somehow \Package{xspace} did not do a good job for \met.
However, with the latest version of the packages both forms work fine.
You can compare \MET and \MET\ and see that the spacing is correct in both cases.
| {
"alphanum_fraction": 0.6846931578,
"avg_line_length": 58.2602739726,
"ext": "tex",
"hexsha": "16f3536daff339d51ce30f98b5f7cc0486b33e19",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4f891754e19676d5b32be1308bd901690723f92f",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "solitonium/ANA-STDM-2020-X-INTX",
"max_forks_repo_path": "doc/atlas_physics/atlas_physics_changes.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4f891754e19676d5b32be1308bd901690723f92f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "solitonium/ANA-STDM-2020-X-INTX",
"max_issues_repo_path": "doc/atlas_physics/atlas_physics_changes.tex",
"max_line_length": 125,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4f891754e19676d5b32be1308bd901690723f92f",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "solitonium/ANA-STDM-2020-X-INTX",
"max_stars_repo_path": "doc/atlas_physics/atlas_physics_changes.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1196,
"size": 4253
} |
\documentclass[11pt]{article}
\usepackage[
top = 2.50cm,% presumably you don't want it to be 0pt as well?
bottom = 2.50cm,
left = 2cm,
right = 2cm,
marginparsep = 0pt,
marginparwidth=0pt,
]{geometry}
\usepackage{amssymb}
\usepackage{fancyhdr}
\usepackage[most]{tcolorbox}
\usetikzlibrary{calc}
\usepackage{siunitx}
\usepackage{amsmath}
\usepackage{tikz}
\usepackage{caption}
\usepackage{pgfplots}
\usepackage{multicol}
\pagestyle{fancy}
\fancyhead[l]{Mechanics: Dynamics - Abridged edition}
\fancyhead[r]{Giorgio G.}
\newcommand\findtan[2][3]{
\pgfmathtruncatemacro\glen{#1/3*2}
\node[coordinate, label={#2:}] (p) at (#2:#1 cm) {};
\draw[ultra thick, blue, <-] (c) -- (p);
\path ($(c)!.5!(p)$) --++ (-90+#2:3mm) node[at end] {$F_c$};
\draw[ultra thick, green, ->] (p) -- ++(90+#2:\glen cm) coordinate (e);
\path (e) --++ (0+#2:4mm) node[at end] {$v$};
}
\begin{document}
\section{The Laws of Motion: }
\subsection{Newton's 1\textsuperscript{st} Law}
\begin{center}
A body is either at rest, or moving with constant speed in a straight line, if no resultant force acts on it.
\end{center}
\begin{equation}
p=mv\tag{\si{\kilogram\meter\per\second}}
\end{equation}
\begin{center}
Where $p$ is an object's \textbf{momentum} (\si{\kilogram\meter\per\second}), $m$ its \textbf{mass} (\si{\kilogram}) and $v$ its \textbf{velocity} (\si{\meter\per\second}).
\end{center}
\subsection{Newton's 2\textsuperscript{nd} Law}
\begin{center}
The acceleration of a body is directly proportional to the resultant force applied to it and acts in the same direction.
\end{center}
\begin{equation}
F=ma \tag{\si{\newton}}
\end{equation}
\begin{center}
Where $F$ is the \textbf{resultant force} (\si{\newton}), $m$ the object's \textbf{mass} (\si{\kilogram}) and $a$ its \textbf{acceleration} (\si{\meter\per\second\squared}).
\end{center}
\subsection{Newton's 3\textsuperscript{rd} Law}
\begin{center}
To every action there is an equal and opposite reaction.
\end{center}
\begin{center}
Consider object $A$ and object $B$:
\end{center}
\begin{equation}
{}_AF_B = -\,{}_BF_A\tag{\si{\newton}}
\end{equation}
\section{Force of impact: }
\begin{equation}
F_i =\frac{mv-mu}{t}\tag{\si{\newton}}
\end{equation}
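% Added illustrative example (values assumed for demonstration only):
\begin{center}
\textit{Example:} a \SI{0.5}{\kilogram} ball moving at $u=\SI{10}{\meter\per\second}$ is brought to rest ($v=0$) in \SI{0.2}{\second}, so $F_i = \frac{0.5\times 0 - 0.5\times 10}{0.2} = \SI{-25}{\newton}$, i.e.\ \SI{25}{\newton} opposing the motion.
\end{center}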
\section{Inclined plane: }
\newcommand{\angl}{30}
\begin{tikzpicture} [font = \small]
% triangle:
\draw [draw = orange, fill = orange!15] (0,0) coordinate (O) -- (\angl:6)
coordinate [pos=.45] (M) |- coordinate (B) (O);
% angles:
\draw [draw = orange] (O) ++(.8,0) arc (0:\angl:0.8)
node [pos=.4, left] {$\theta$};
\draw [draw = orange] (B) rectangle ++(-0.3,0.3);
\begin{scope} [-latex,rotate=\angl]
% Object (rectangle)
\draw [fill = purple!30,
draw = purple!50] (M) rectangle ++ (1,.6);
% Weight Force and its projections
\draw [dashed] (M) ++ (.5,.3) coordinate (MM) -- ++ (0,-1.29)
node [very near end, right] {$mg\cos{\theta}$};
\draw [dashed] (MM) -- ++ (-0.75,0)
node [very near end, left] {$mg\sin{\theta}$};
\draw (MM) -- ++ (-\angl-90:1.5)
node [very near end,below left ] {$mg$};
% Normal Force
\draw (MM) -- ++ (0,1.29)
node [very near end, right] {$N$};
% Frictional Force
\draw (MM) -- ++ (0.75,0)
node [very near end, above] {$f$};
\end{scope}
\end{tikzpicture}
\section{Conservation of momentum: }
\begin{equation}
Mu_1 + mu_2 = Mv_1 + mv_2\notag
\end{equation}
\begin{center}
Where $M$ and $m$ are the \textbf{masses} (\si{\kilogram}) of objects 1 and 2 respectively, whilst $u$ and $v$ are the initial and final \textbf{velocities} (\si{\meter\per\second}) of the corresponding object.
\end{center}
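% Added illustrative example (values assumed for demonstration only):
\begin{center}
\textit{Example:} if $M=\SI{2}{\kilogram}$ moving at $u_1=\SI{3}{\meter\per\second}$ collides with a stationary $m=\SI{1}{\kilogram}$ ($u_2=0$) and the two stick together, the common final velocity is $v = \frac{2\times 3}{2+1} = \SI{2}{\meter\per\second}$.
\end{center}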
\newpage
\section{Centripetal force: }
\begin{center}
\begin{tikzpicture}[scale=0.8]
\draw (0,0) circle (3cm);
\node[circle,fill, inner sep=1pt, label={left:O}] (c) at (0,0) {};
\findtan{60}
\end{tikzpicture}
\end{center}
\begin{equation}
F_c = \frac{mv^2}{r}\tag{\si{\newton}}
\end{equation}
\begin{center}
Where $m$ is the given object's \textbf{mass} (\si{\kilogram}), $v$ its \textbf{tangential velocity}\footnote{The velocity perpendicular to the circular path's radius.} (\si{\meter\per\second}) and $r$ the \textbf{radius} (\si{\meter}) of the circular path.
\end{center}
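% Added illustrative example (values assumed for demonstration only):
\begin{center}
\textit{Example:} for $m=\SI{0.2}{\kilogram}$, $v=\SI{4}{\meter\per\second}$ and $r=\SI{0.5}{\meter}$, $F_c = \frac{0.2\times 4^2}{0.5} = \SI{6.4}{\newton}$.
\end{center}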
\subsection{Periodic time: }
\begin{equation}
T=\frac{2\pi r}{v}\tag{\si{\second}}
\end{equation}
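% Added illustrative example (same assumed values as above):
\begin{center}
\textit{Example:} with $r=\SI{0.5}{\meter}$ and $v=\SI{4}{\meter\per\second}$, $T = \frac{2\pi\times 0.5}{4} \approx \SI{0.79}{\second}$.
\end{center}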
\subsection{Specific cases: }
\begin{multicols}{2}
\begin{center}
\begin{tikzpicture}[scale=0.8]
\draw (0,0) circle (3cm);
\node[circle,fill, inner sep=1pt, label={left:O}] (c) at (0,0) {};
\findtan{90}
\node[circle, fill] (t) at (0,3.23) {};
\end{tikzpicture}
\begin{equation}
F_c = \frac{mv^2}{r} =mg+R\notag
\end{equation}
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.8]
\draw (0,0) circle (3cm);
\node[circle,fill,inner sep=1pt, label={left:O}] (c) at (0,0) {};
\findtan{90}
\node[circle, fill] (t) at (0,2.77) {};
\end{tikzpicture}\\
\begin{equation}
F_c = \frac{mv^2}{r} =mg-R\notag
\end{equation}
\end{center}
\end{multicols}
\end{document} | {
"alphanum_fraction": 0.6347317073,
"avg_line_length": 27.2606382979,
"ext": "tex",
"hexsha": "f7a653a3f831fb7539e107ed50c43070eafc2d0c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Girogio/My-LaTeX",
"max_forks_repo_path": "Physics/1. Mechanics/b. Dynamics/dynamics.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Girogio/My-LaTeX",
"max_issues_repo_path": "Physics/1. Mechanics/b. Dynamics/dynamics.tex",
"max_line_length": 257,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Girogio/My-LaTeX",
"max_stars_repo_path": "Physics/1. Mechanics/b. Dynamics/dynamics.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-30T21:47:25.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-12T11:45:45.000Z",
"num_tokens": 2007,
"size": 5125
} |
\documentclass[a4paper,oneside,10pt]{article}
\usepackage[left=1.5in,right=1.5in,top=1in,bottom=1in]{geometry}
\usepackage[USenglish]{babel} %francais, polish, spanish, ...
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{lmodern} %Type1-font for non-english texts and characters
\usepackage{graphicx} %%For loading graphic files
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{mathtools}
\usepackage{color}
\definecolor{mygray}{rgb}{0.3,0.3,0.3}
\definecolor{ForestGreen}{rgb}{0.13,0.55,0.13} % used in \lstset below; not predefined by the color package
\usepackage{subfig}
\usepackage{paralist}
\usepackage{listings}
%\lstset{ %
%basicstyle=\footnotesize, % the size of the fonts that are used for the code
%breakatwhitespace=false, % sets if automatic breaks should only happen at whitespace
%breaklines=false, % sets automatic line breaking
%captionpos=b, % sets the caption-position to bottom
%deletekeywords={...}, % if you want to delete keywords from the given language
%escapeinside={\%*}{*)}, % if you want to add LaTeX within your code
%extendedchars=true, % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8
%keepspaces=true, % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible)
%keywordstyle=\color{mygray}, % keyword style
%language=C, % the language of the code
%otherkeywords={*,each,Start,Loop,until,Input,Output}, % if you want to add more keywords to the set
%numbers=left, % where to put the line-numbers; possible values are (none, left, right)
%numbersep=5pt, % how far the line-numbers are from the code
%numberstyle=\tiny\color{mygray}, % the style that is used for the line-numbers
%rulecolor=\color{black}, % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. comments (green here))
%showspaces=false, % show spaces everywhere adding particular underscores; it overrides 'showstringspaces'
%showstringspaces=false, % underline spaces within strings only
%showtabs=false, % show tabs within strings adding particular underscores
%stepnumber=1, % the step between two line-numbers. If it's 1, each line will be numbered
%stringstyle=\color{mymauve}, % string literal style
%tabsize=2, % sets default tabsize to 2 spaces
%title=\lstname % show the filename of files included with \lstinputlisting; also try caption instead of title
%}
\lstdefinelanguage{Julia}%
{morekeywords={abstract,break,case,catch,const,continue,do,else,elseif,%
end,export,false,for,function,immutable,import,importall,if,in,%
macro,module,otherwise,quote,return,switch,true,try,type,typealias,%
using,while},%
sensitive=true,%
alsoother={$},%
morecomment=[l]\#,%
morecomment=[n]{\#=}{=\#},%
morestring=[s]{"}{"},%
morestring=[m]{'}{'},%
}[keywords,comments,strings]%
\lstset{%
language = Julia,
basicstyle = \ttfamily,
keywordstyle = \bfseries\color{blue},
stringstyle = \color{magenta},
commentstyle = \color{ForestGreen},
showstringspaces = false,
}
\renewcommand\lstlistingname{Algorithm}% Listing -> Algorithm
\renewcommand\lstlistlistingname{List of \lstlistingname s}% List of Listings -> List of Algorithms
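% A minimal usage sketch (hypothetical snippet, to be placed in the document body) of the
% Julia language definition and \lstset configuration above:
%   \begin{lstlisting}[caption={Explicit Euler step}]
%   # one explicit Euler step for x' = f(x)
%   function euler_step(f, x, h)
%       return x + h * f(x)
%   end
%   \end{lstlisting}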
\usepackage{caption}
\DeclareCaptionFormat{myformat}{#1#2#3\hspace{.5mm}\color{mygray}\hrulefill}
\captionsetup[figure]{format=myformat}
\captionsetup[lstlisting]{format=myformat}
\newcommand\note[1]{}
\newcommand\R{\mathbb{R}}
\newcommand\N{\mathbb{N}}
\newcommand\C{\mathbb{C}}
\newcommand\Z{\mathbb{Z}}
\newcommand\Q{\mathbb{Q}}
\newcommand\F{\mathcal{F}}
\DeclareMathOperator*\argmin{arg\,min}
\DeclareMathOperator*\argmax{arg\,max}
\DeclareMathOperator\Var{Var}
\DeclareMathOperator\E{E}
%\DeclareMathOperator\det{det}
\DeclareMathOperator\tang{\mathbf{t}}
\DeclareMathOperator\diag{diag}
\parskip.5\baselineskip
\parindent0mm
\let\oldsection\section
\renewcommand\section{\clearpage\oldsection}
%\usepackage{natbib}
\usepackage{cite}
\usepackage{hyperref}
\usepackage{rotating}
\begin{document}
\title{Using Numerical Continuation Methods and Galerkin's Method for Finding and Tracing Periodic Solutions in Nonlinear Dynamic Systems} %CITE
\author{Fabian Späh \and Mats Bosshammer}
\date{SS 2016}
\maketitle
\thispagestyle{empty} %No headings for the first pages.
% abstract!
\pagebreak
\thispagestyle{empty} %No headings for the first pages.
\raggedbottom
\tableofcontents %Table of contents
%\cleardoublepage %The first chapter should start on an odd page.
\pagebreak
\pagestyle{plain} %Now display headings: headings / fancy / ...
\flushbottom
\input{intro}
%\input{modeling}
\input{galerkin}
\input{initial}
\input{continuation}
\input{roessler}
\input{lorenz}
\input{outro}
\bibliography{bibliography}{}
\bibliographystyle{apalike}
\end{document}
| {
"alphanum_fraction": 0.7017509345,
"avg_line_length": 30.8060606061,
"ext": "tex",
"hexsha": "99e4b668101a20f5a601c0786804437a92896ff8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fcf289c7ef5f8500ebcb238e36c6a7ee9e054147",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "285714/ncm",
"max_forks_repo_path": "doctheory/MP.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fcf289c7ef5f8500ebcb238e36c6a7ee9e054147",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "285714/ncm",
"max_issues_repo_path": "doctheory/MP.tex",
"max_line_length": 146,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "fcf289c7ef5f8500ebcb238e36c6a7ee9e054147",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "285714/ncm",
"max_stars_repo_path": "doctheory/MP.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1425,
"size": 5083
} |
\documentclass[10pt, aspectratio=169, handout]{beamer}
\usefonttheme{professionalfonts}
%\usetheme{CambridgeUS}
%
% Choose how your presentation looks.
%
% For more themes, color themes and font themes, see:
% http://deic.uab.es/~iblanes/beamer_gallery/index_by_theme.html
%
\mode<presentation>
{
\usetheme{default} % or try Darmstadt, Madrid, Warsaw, ...
\usecolortheme{beaver} % or try albatross, beaver, crane, ...
\usefonttheme{default} % or try serif, structurebold, ...
\setbeamertemplate{navigation symbols}{}
\setbeamertemplate{caption}[numbered]
}
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage{tikz}
\usepackage{pgfplots}
\usepackage{array} % for table column M
\usepackage{makecell} % to break line within a cell
\usepackage{verbatim}
\usepackage{graphicx}
\usepackage{epstopdf}
\usepackage{amsfonts}
\usepackage{xcolor}
%\captionsetup{compatibility=false}
%\usepackage{dsfont}
\usepackage[absolute,overlay]{textpos}
\usetikzlibrary{calc}
\usetikzlibrary{pgfplots.fillbetween, backgrounds}
\usetikzlibrary{positioning}
\usetikzlibrary{arrows}
\usetikzlibrary{pgfplots.groupplots}
\usetikzlibrary{arrows.meta}
\usetikzlibrary{plotmarks}
\usepgfplotslibrary{groupplots}
\pgfplotsset{compat=newest}
%\pgfplotsset{plot coordinates/math parser=false}
\usepackage{hyperref}
\hypersetup{
colorlinks=true,
linkcolor=blue,
filecolor=magenta,
urlcolor=cyan,
}
%%
%\def\EXTERNALIZE{1} % for externalizing figures
\input{header.tex}
\title[EE 264]{Quantization}
\author{Jose Krause Perin}
\institute{Stanford University}
\date{July 25, 2017}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
%
\begin{frame}{Outline}
\tableofcontents
\end{frame}
%
\section{Quantization in DSP}
\begin{frame}{Practice and theory}
\begin{block}{In practice}
\begin{center}
\resizebox{0.9\linewidth}{!}{\input{figs/adc-dsp-dac.tex}}
\end{center}
\end{block}
\begin{block}{DSP theory}
\begin{center}
\resizebox{0.9\linewidth}{!}{\input{figs/ctd-lti-dtc.tex}}
\end{center}
\textbf{Problem:} This simplified model doesn't account for \textbf{quantization} (this lecture) or \textbf{finite precision arithmetic} (lecture 8).
\end{block}
\end{frame}
%
\begin{frame}{Including quantization}
\begin{block}{Analog-to-digital converter}
A more realistic model
\begin{center}
\resizebox{0.9\linewidth}{!}{
\begin{tikzpicture}[->, >=stealth, shorten >= 0pt, draw=black!50, node distance=2.75cm, font=\sffamily]
\tikzstyle{node}=[circle,fill=black,minimum size=2pt,inner sep=0pt]
\tikzstyle{block}=[draw=black,rectangle,fill=none,minimum size=1.5cm, inner sep=0pt]
\node[node] (xc) {};
\node[block, right=1cm of xc] (CTD) {C-to-D};
\node[block, right of=CTD, text width = 2cm, align= center] (Q) {Quantizer};
\node[block, right of=Q] (coder) {Coder};
\coordinate[right=1.5cm of coder] (yc) {};
\coordinate (mid1) at ($(CTD.east)!0.5!(Q.west)$) {};
\coordinate (mid2) at ($(Q.east)!0.5!(coder.west)$) {};
\path (xc) edge (CTD);
\path (CTD) edge (Q);
\path (Q) edge (coder);
\path (coder) edge (yc);
\node[above = 0.5mm of mid1] {$x[n]$};
\node[above = 0.5mm of mid2] {$x_Q[n]$};
\node[above = 0mm of xc, text width = 1cm, align=center] {$x_c(t)$};
\node[above = 0mm of yc, align=center] {$x_B[n]$};
\node[below = 0mm of yc, text width = 2.5cm, align=center] {Binary \\ representation};
\node[align=center] at ($(CTD.south)-(0, 0.4cm)$) {Sampling \\ $T$};
\node[align=center, text width=4cm] at ($(Q.south)-(0, 1.1cm)$) {Analog-to-digital converter};
\draw[dashed] ($(CTD.south west)-(0.25, 0.8)$) rectangle ($(coder.north east)+(0.25, 0.3)$) {};
\end{tikzpicture}
}
\end{center}
\end{block}
\end{frame}
%
\begin{frame}{Quantizer}
\begin{columns}[t]
\begin{column}{0.5\textwidth}
\textbf{Mid-tread} uniform quantizer
\resizebox{0.9\textwidth}{!}{\input{figs/midtread_quantizer.tex}}
\end{column}
\begin{column}{0.5\textwidth}
\textbf{Mid-rise} uniform quantizer
\resizebox{0.9\textwidth}{!}{\input{figs/midrise_quantizer.tex}}
\end{column}
\end{columns}
\begin{center}
\end{center}
\textbf{Terminology}
\begin{itemize}
\item The quantizer has $B$ bits of \textbf{resolution}
\item $\Delta X$ is the \textbf{dynamic range}
\item $\Delta$ is the \textbf{step size}
\end{itemize}
\begin{equation}
\Delta = \frac{\Delta X}{2^{B}} \label{eq:Delta}
\end{equation}
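% Illustrative note (values assumed): for B = 3 bits and a dynamic range \Delta X = 2
% (e.g. a signal confined to [-1, 1]), the step size is \Delta = 2/2^3 = 0.25.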
\end{frame}
%
\begin{frame}{Example of quantization}
\begin{center}
\resizebox{0.5\textwidth}{!}{\input{figs/quantization_sine.tex}}
\end{center}
\onslide<2-|handout:1>{
Quantization error:
\begin{align*}
e[n] = x[n] - Q(x[n]) = x[n] - x_Q[n]
\end{align*}
Note that the quantization error is bounded $-\Delta/2 \leq e[n] \leq \Delta/2$.
}
\onslide<3-|handout:1>{
Quantization error is deterministic but hard to analyze, so we treat it as noise (random process).
}
\end{frame}
%
\begin{frame}{Quantization of a sinusoid}
\only<1-3|handout:1>{Using a 3-bit quantizer}
\only<4-6|handout:2>{Using an 8-bit quantizer}
\begin{center}
\resizebox{0.55\textwidth}{!}{\input{figs/quantization_sine_full.tex}}
\end{center}
\end{frame}
%
\section{Linear noise model}
\begin{frame}{Linear noise model}
We'll model the quantizer as adding \textbf{white, uniformly distributed noise} that is independent of the input signal.
\begin{center}
\resizebox{0.6\textwidth}{!}{\input{figs/quantization_linear_model.tex}}
\end{center}
\vspace{-0.5cm}
\pause
From these assumptions:
\begin{align*}
\sigma_e^2 &= \frac{\Delta^2}{12} \tag{average power} \\
\phi_{ee}[n] &= \sigma_e^2\delta[n] \tag{autocorrelation function} \\
\Phi_{ee}(e^{j\omega}) &= \sigma_e^2, |\omega| \leq \pi \tag{PSD}
\end{align*}
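% Added derivation note: \sigma_e^2 is the variance of a zero-mean uniform density on [-\Delta/2, \Delta/2]:
%   \sigma_e^2 = \int_{-\Delta/2}^{\Delta/2} e^2 \frac{1}{\Delta}\, de = \Delta^2/12.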
\end{frame}
%
\begin{frame}{Quantizer signal-to-noise ratio (SNR)}
It's often convenient to characterize the quantizer in terms of a \textbf{signal-to-noise ratio (SNR)}:
\begin{align*}
\mathrm{SNR} &= 10\log_{10}\bigg(\frac{\text{Signal Power}}{\text{Quantization noise power}}\bigg)~\text{dB} \\
&= 10\log_{10}\bigg(\frac{\sigma_x^2}{\sigma_e^2}\bigg) \\
&= 10\log_{10}\bigg(\frac{12\sigma_x^2}{\Delta^2}\bigg) \\
&= 10\log_{10}\bigg(\frac{12\sigma_x^2(2^{2B})}{\Delta X^2}\bigg) \tag{substituting \eqref{eq:Delta}} \\
&= \tikz[baseline]{\node[fill=blue!20,anchor=base] {$6.02B$};} + 10.79 + 20\log_{10}\bigg(\frac{\sigma_x}{\Delta X}\bigg)
\end{align*}
Every additional bit of resolution in the quantizer gains 6.02~dB of SNR.
\textbf{Important:} The signal amplitude must be matched to the quantizer dynamic range, otherwise there'll be excessive clipping or some of the bits may not be used.
\end{frame}
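% Illustrative note (assumed input): for a full-scale sinusoid, \sigma_x/\Delta X = 1/(2\sqrt{2}),
% so 20\log_{10}(\sigma_x/\Delta X) \approx -9.03 dB and SNR \approx 6.02 B + 1.76 dB,
% e.g. about 74 dB for a 12-bit quantizer.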
\begin{frame}{Effective number of bits (ENOB)}
Another useful metric to evaluate quantizers is the \textbf{effective number of bits (ENOB)}.
\begin{itemize}
\item Quantization is not the only source of noise in real quantizers
\item Additional noise will consume some bits of resolution
\item To continue using the simple linear noise model, we assume that the noisy real quantizer is equivalent to an ideal quantizer with resolution ENOB $< B$ bits.
\end{itemize}
\begin{center}
\resizebox{0.4\textwidth}{!}{
\begin{tikzpicture}[->, >=stealth, shorten >= 0pt, draw=black!50, node distance=2cm, font=\sffamily]
\tikzstyle{node}=[circle,fill=black,minimum size=2pt,inner sep=0pt]
\tikzstyle{block}=[draw=black,rectangle,fill=none,minimum size=1.5cm, inner sep=0pt]
\tikzstyle{annot} = []
\node[node] (xc) {};
\node[block, right of=xc, text width = 2.5cm, align= center] (DSP) {Real Quantizer};
\coordinate[right of=DSP] (yc) {};
\path (xc) edge (DSP);
\path (DSP) edge (yc);
\node[above = 0mm of xc, text width = 1cm, align=center] {$x[n]$};
\node[above = 0mm of yc, text width = 1cm, align=center] {$x_Q[n]$};
\node[align=center, text width=3cm] at ($(DSP.south) - (0, 0.3cm)$) {$B$ bits of resolution};
\node[node, below=2.3cm of xc] (xc2) {};
\node[block, right of=xc2, text width = 2.5cm, align= center] (DSP2) {Ideal Quantizer};
\coordinate[right of=DSP2] (yc2) {};
\path (xc2) edge (DSP2);
\path (DSP2) edge (yc2);
\node[above = 0mm of xc2, text width = 1cm, align=center] {$x[n]$};
\node[above = 0mm of yc2, text width = 1cm, align=center] {$x_Q[n]$};
\node[align=center, text width=4cm] at ($(DSP2.south) - (0, 0.3cm)$) {ENOB bits of resolution};
\end{tikzpicture}
}
\end{center}
Datasheets of ADCs will typically give you the ENOB at a certain frequency.
\end{frame}
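% Illustrative note (hypothetical numbers): a nominally 12-bit ADC whose datasheet reports
% ENOB = 10 is modelled as an ideal 10-bit quantizer, i.e. roughly 2 x 6.02 = 12 dB of SNR
% is lost to noise sources other than quantization.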
\section{Noise shaping}
\begin{frame}{Noise shaping}
\begin{itemize}
\item Quantization noise is unavoidable, but there are strategies to mitigate it
\item One example is \textbf{noise shaping}. The goal is to shape the quantization noise PSD, so that most of the noise power falls outside the signal band
\item To perform noise shaping the signal must be \textbf{oversampled}, otherwise noise aliasing would make most of the noise power fall in the signal band.
\item Noise shaping can be used in both A-to-D and D-to-A converters
\end{itemize}
\end{frame}
\begin{frame}{Noise shaping in A-to-D conversion}
\begin{center}
\def\ALL{1}
\resizebox{0.6\textwidth}{!}{\input{figs/noise_shaping_adc_diagram.tex}}
\end{center}
\end{frame}
%
\begin{frame}{Noise shaping in A-to-D conversion}
\vspace{-0.3cm}
\begin{center}
\let\ALL\undefined
\resizebox{0.4\textwidth}{!}{\input{figs/noise_shaping_adc_diagram.tex}}
\end{center}
Using superposition, we can separately study the effect of the system on the signal $x[n]$ and on quantization noise $e[n]$.
For the signal
\begin{align*}
Y(z) &= (X(z) - Y(z)z^{-1})\frac{1}{1 - z^{-1}} \\
Y(z)(1 - z^{-1}) &= (X(z) - Y(z)z^{-1}) \\
Y(z) &= X(z) \tag{signal is unaffected}
\end{align*}
For the noise
\begin{align*}
Y(z) &= E(z) - Y(z)\frac{z^{-1}}{1 - z^{-1}} \\
\frac{Y(z)}{1 - z^{-1}} &= E(z) \tag{collecting $Y(z)$ terms} \\
Y(z) &= E(z)(1 - z^{-1}) \tag{noise is filtered}
\end{align*}
\end{frame}
%
\begin{frame}
The noise is filtered by
\begin{equation*}
\frac{Y(z)}{E(z)} = 1 - z^{-1}
\end{equation*}
Therefore, the noise PSD at the output will be
\begin{align*}
\Phi_{\tilde{e}\tilde{e}}(e^{j\omega}) &= |1 - e^{-j\omega}|^2\sigma_e^2 \tag{since $e[n]$ is white} \\
&= (2 - 2\cos\omega)\,\sigma_e^2 = 4\sigma_e^2\sin^2(\omega/2)
\end{align*}
\begin{center}
\resizebox{0.6\textwidth}{!}{\input{figs/noise_shaping_adc_freq_domain.tex}}
\end{center}
After noise shaping most of the noise power falls out of the signal band, so we can use a simple lowpass filter to minimize quantization noise.
\pause
\textbf{Important:} This strategy of noise shaping only works when oversampling is sufficiently high. Otherwise, quantization noise would still fall in the signal band due to aliasing.
\end{frame}
%
\begin{frame}{Noise shaping in A-to-D conversion}
Table 4.1 of the textbook:
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{figs/table41.png}
\end{figure}
$M$ denotes the amount of oversampling. That is $M = \frac{\text{Sampling frequency}}{\text{Nyquist frequency}}$.
\end{frame}
%
\begin{frame}{Summary}
\begin{itemize}
\item Quantization is unavoidable in DSP systems
\item Although quantization is a nonlinear operation on a signal, we can model the quantization error as a uniformly distributed random process (linear noise model)
\item Using this linear noise model, we simply replace quantizers by noise sources of average power $\sigma_e^2 = \Delta^2/12$
\item Quantization noise is assumed white (samples are uncorrelated)
\item Every extra bit of resolution in a quantizer improves the SNR by 6.02 dB
\item The signal amplitude must be matched to the dynamic range of the quantizer, otherwise there'll be excessive clipping or some bits won't be used
\item Noise shaping is a strategy that minimizes quantization noise in A-to-D and D-to-A converters. The goal is to shape the quantization noise PSD, so that most of the noise power falls outside the signal band
\item Noise shaping requires oversampling to minimize noise aliasing
\end{itemize}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.7025221313,
"avg_line_length": 32.8956043956,
"ext": "tex",
"hexsha": "993837240907fb8e3d729b10e4e3e431f9751656",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2022-03-19T07:25:20.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-04-16T01:11:14.000Z",
"max_forks_repo_head_hexsha": "0ec74b4597fb54800ebdab440cba4892d210343d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jkperin/DSP",
"max_forks_repo_path": "lectures/lecture06.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0ec74b4597fb54800ebdab440cba4892d210343d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jkperin/DSP",
"max_issues_repo_path": "lectures/lecture06.tex",
"max_line_length": 212,
"max_stars_count": 21,
"max_stars_repo_head_hexsha": "0ec74b4597fb54800ebdab440cba4892d210343d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jkperin/DSP",
"max_stars_repo_path": "lectures/lecture06.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-07T08:56:28.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-05-11T21:48:47.000Z",
"num_tokens": 4029,
"size": 11974
} |
\subsection{Exponential Decay}\label{ex:decay-set-sample}
To demonstrate the qualitative differences in the solutions provided by the set-- and sample--based methods for a nonlinear problem, we consider an exponential decay problem with uncertain decay rate and initial condition (which are paired to form the 2-D vector $\param$):
$$
\begin{cases}
\frac{\partial u}{\partial t} & = \param_1 u(t), \\
u(0) &= \param_2.
\end{cases}
$$
The solution is described by
\begin{equation}
u(t;\param) = u_0\exp(\param_1 t), \; u_0 = \param_2 ,
\end{equation}
and a nominal value of $\param = 0.5$ is used to simulate the system.
We take a single observation at $t=0.5$s and assume a uniform density with interval length $0.2$ centered at $u(1,0.5)$ to represent the uncertainty in the measurement equipment.
We assume a uniform ansatz / initial density over the unit domain.
We use $N=50$ parameter samples to establish a coarse solution in Figure~\ref{fig:heatrod-sol-ex1}.
\begin{figure}
\begin{minipage}{.475\textwidth}
\includegraphics[width=\linewidth]{examples/fig_decay_q1/DecayModel--set_N50_em.png}
\end{minipage}
\begin{minipage}{.475\textwidth}
\includegraphics[width=\linewidth]{examples/fig_decay_q1/DecayModel--sample_N50_mc.png}
\end{minipage}
\caption{Observation taken at $t=1$s. The inverse image of the reference measure for set-based (left) and sample-based (right) solutions for $\nsamps=50$ parameter samples.}
\label{fig:heatrod-sol-ex1}
\end{figure}
The decay rate $\param_1$ shows little reduction in uncertainty overall.
If one were to look at marginals of the components of $\param$, it would not appear as if much was learned.
However, the relationship \emph{between} these two quantities has very certainly been elucidated by the solution of the inverse problem.
Where once $\pspace$ was a rectangular region, the set of possible parameters has been reduced to a diagonal band.
The sample-based approach, especially at this low sample size (density estimation in 2-D at $50$ samples is a stretch), has some visible downsides.
It does not capture the equivalence--class nature of the solution set the way the set--valued one does, which benefits from using $\ndiscs=1$ (aligning with the choice of uniform observed density).
We address what would occur had we been able to observe earlier in time at $t=0.5$ by showing the associated solutions under the same experimental conditions in \ref{fig:heatrod-sol-ex2}.
There is a marked reduction in uncertainty, as several regions of $\pspace$ have been ruled out from consideration.
\begin{figure}
\begin{minipage}{.475\textwidth}
\includegraphics[width=\linewidth]{examples/fig_decay_q2/DecayModel--set_N50_em.png}
\end{minipage}
\begin{minipage}{.475\textwidth}
\includegraphics[width=\linewidth]{examples/fig_decay_q2/DecayModel--sample_N50_mc.png}
\end{minipage}
\caption{Observation taken at $t=0.5$s. The inverse image of the reference measure for set-based (left) and sample-based (right) solutions for $\nsamps=50$ parameter samples.}
\label{fig:heatrod-sol-ex2}
\end{figure}
Observing earlier in time helps especially in reducing the (marginal) values for the initial condition $\param_2$, while the rate $\param_1$ is still to some degree able to take any values in its original domain.
Both solution types suffer from discretization error, as evidenced by the break in the contour structure.
By comparison to \ref{fig:heatrod-sol-ex1}, there is more more confidence in the solution (represented by the reduced support of the image).
However, at $\nsamps=50$, the sample--based approach struggles to assign uniform probability to different contour events.
This may suggest that in situations with very limited model evaluation budget and set-valued solutions involving uniform uncertainties in measurements, the set-valued approach may serve a useful purpose.
| {
"alphanum_fraction": 0.7800259403,
"avg_line_length": 60.234375,
"ext": "tex",
"hexsha": "13acd452db08a57663b57b73d8594c78c929156c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mathematicalmichael/thesis",
"max_forks_repo_path": "ch02/decay_set_vs_sample.tex",
"max_issues_count": 59,
"max_issues_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e",
"max_issues_repo_issues_event_max_datetime": "2021-11-24T17:52:57.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-12-27T23:15:05.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mathematicalmichael/thesis",
"max_issues_repo_path": "ch02/decay_set_vs_sample.tex",
"max_line_length": 273,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mathematicalmichael/thesis",
"max_stars_repo_path": "ch02/decay_set_vs_sample.tex",
"max_stars_repo_stars_event_max_datetime": "2020-12-28T20:34:29.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-04-24T08:05:49.000Z",
"num_tokens": 979,
"size": 3855
} |
\section*{Introduction}
In this chapter, generalities about systems of Partial Differential Equations (PDEs) are first introduced. More specifically, the notions of \textit{characteristics} and \textit{hyperbolicity} for PDEs are introduced in section \ref{sec:PDEs} for first-order systems, which will be of particular interest in the remainder of the manuscript.
Then, introduction of balance laws of solid dynamics and derivation of constitutive equations from the thermodynamics in section \ref{sec:solidMech_equations}, lead to first-order hyperbolic PDEs.
By using tools introduced in section \ref{sec:PDEs}, the characteristic analysis of these systems is carried out in section \ref{sec:characteristic_analysis} in order to derive exact solutions of particular problems in section \ref{sec:riemann_problems}. These solutions allow the highlighting of different types of waves: (i) discontinuous waves, governed by the \textit{Rankine-Hugoniot} jump condition, within one-dimensional linear elastic and elastic-plastic media (ii) shock waves, also following the Rankine-Hugoniot condition, and simple waves, within a non-linear problem (one-dimensional strain state in a \textit{Saint-Venant-Kirchhoff} hyperelastic medium).
At last, strategies enabling the computation of approximate solutions of non-linear problems are reviewed in section \ref{sec:riemann_solvers}.
\section{Generalities -- Hyperbolic partial differential equations}
\label{sec:PDEs}
\input{chapter2/PDEs}
\section{Governing equations of solid mechanics}
\label{sec:solidMech_equations}
\input{chapter2/conservationLaws}
\section{Characteristic analysis -- Structure of solutions}
\label{sec:characteristic_analysis}
\input{chapter2/characteristicAnalysis}
\section{Some solutions of Riemann problems}
\label{sec:riemann_problems}
\input{chapter2/exact_solutions}
\section{Approximate--State Riemann solvers}
\label{sec:riemann_solvers}
\input{chapter2/riemann_solvers}
\section{Conclusion}
It has been seen in this chapter that solid dynamics balance equations can be written as a first order hyperbolic system whose theory has been recalled in section \ref{sec:PDEs}.
Indeed, the thermodynamics framework assuming generalized standard materials combined with conservation laws allowed in section \ref{sec:solidMech_equations} the building of conservative or quasi-linear forms.
Those systems of partial differential equations admit non-complex eigenvalues and independent eigenvectors provided that some requirements on the stored energy function are satisfied (positive definiteness of the acoustic tensor).
Then, the characteristic analysis of the quasi-linear form in section \ref{sec:characteristic_analysis} enabled the highlighting of specific wave types involved in the solutions of dynamic problems, that is: discontinuous, shock and simple waves.
Even though exact solutions of linear and non-linear problems have been developed in section \ref{sec:riemann_problems}, it is not possible in general, hence the introduction of approximate-state Riemann solvers in section \ref{sec:riemann_solvers}.
This solution strategy will be used in what follows as an element of the \textit{Discontinuous Galerkin Material Point Method}, which is the object of the next chapter.
%%% Local Variables:
%%% mode: latex
%%% ispell-local-dictionary: "american"
%%% TeX-master: "../mainManuscript"
%%% End:
| {
"alphanum_fraction": 0.8194444444,
"avg_line_length": 82.5365853659,
"ext": "tex",
"hexsha": "be832ae19c76e477f28bea9648da477c76a5be80",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adRenaud/research",
"max_forks_repo_path": "manuscript/chapter2/mainChapter2.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4",
"max_issues_repo_issues_event_max_datetime": "2019-01-07T13:11:11.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-01-07T13:11:11.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adRenaud/research",
"max_issues_repo_path": "manuscript/chapter2/mainChapter2.tex",
"max_line_length": 669,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adRenaud/research",
"max_stars_repo_path": "manuscript/chapter2/mainChapter2.tex",
"max_stars_repo_stars_event_max_datetime": "2021-06-18T14:52:03.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-06-18T14:52:03.000Z",
"num_tokens": 748,
"size": 3384
} |
\documentclass[pdfpagelabels=false]{sig-alternate-2013} % option is to shut down hyperref warnings
\setlength{\paperheight}{11in} % To shut down hyperref warnings
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage{amssymb}
\usepackage{booktabs}
\usepackage{hyperref}
\usepackage{flushend}
\usepackage[numbers,square,sort&compress]{natbib}
\renewcommand{\refname}{References}
\renewcommand{\bibsection}{\section{References}}
\renewcommand{\bibfont}{\raggedright}
\permission{Copyright is held by the author/owner(s).}
\conferenceinfo{WWW'16 Companion,}{April 11--15, 2016, Montr\'eal, Qu\'ebec, Canada.}
\copyrightetc{ACM \the\acmcopyr}
\crdata{978-1-4503-4144-8/16/04. \\
http://dx.doi.org/10.1145/2872518.2891063}
\clubpenalty=10000
\widowpenalty = 10000
\begin{document}
\title{Centrality Measures on Big Graphs:\\Exact, Approximated, and Distributed Algorithms}
\numberofauthors{3}
\author{
\alignauthor
Francesco Bonchi\\
\affaddr{ISI Foundation}\\
\affaddr{Turin, Italy}\\
\email{[email protected]}
\end{tabular}\begin{tabular}[t]{p{1.3\auwidth}}\centering
Gianmarco~De~Francisci~Morales\\
\affaddr{Qatar Computing Research Institute}\\
\affaddr{Doha, Qatar}\\
\email{[email protected]}
\alignauthor
Matteo Riondato\titlenote{Main contact.}\\
\affaddr{Two Sigma Investments}\\
\affaddr{New York, NY, USA}\\
\email{[email protected]}
}
\date{12 February 2016}
\maketitle
\begin{abstract}
Centrality measures quantify the relative importance of a node or an
edge in a graph w.r.t.~other nodes or edges. Several measures of centrality have
been developed in the literature to capture different aspects of the informal
concept of importance, and algorithms for these different measures have been proposed.
In this tutorial, we survey the different definitions of centrality measures and
the algorithms to compute them. We start from the most common measures, such as
closeness centrality and betweenness centrality, and move to more complex ones
such as spanning-edge centrality. In our presentation, we begin
from exact algorithms and then progress to approximation algorithms, including
sampling-based ones, and to highly-scalable MapReduce algorithms for huge
graphs, both for exact computation and for keeping the measures up-to-date on
dynamic graphs where edges are inserted or removed over time. Our goal is to
show how advanced algorithmic techniques and scalable systems can be used to
obtain efficient algorithms for an important graph mining task, and to
encourage research in the area by highlighting open problems and possible directions.
\end{abstract}
%\category{G.2.2}{Discrete Mathematics}{Graph Theory}[Graph algorithms]
%\category{H.2.8}{Database Management}{Database Applications}[Data mining]
\keywords{Centrality; Betweenness; Closeness; Tutorial}
\section{Introduction}
Identifying the ``important'' nodes or edges in a graph is a fundamental task
in network analysis, with many applications, from economics and biology to security and sociology.
Several measures, known as \emph{centrality indices}, have been proposed over the years,
formalizing the concept of importance in different ways~\citep{Newman10}. Centrality measures
rely on graph properties to quantify importance. For example, betweenness
centrality, one of the most commonly used centrality indices, counts the fraction
of shortest paths going through a node, while the closeness centrality of a node
is the average sum of the inverse of the distance to other nodes. Other
centrality measures use eigenvectors, random walks, degrees, or more complex
properties. For instance, the PageRank index of a node is a centrality measure, and
centrality measures for sets of nodes have also been defined.
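% For concreteness, the standard textbook forms of the two measures mentioned above
% (notation assumed here, not taken from any single cited work): betweenness
%   $b(v) = \sum_{s \neq v \neq t} \sigma_{st}(v)/\sigma_{st}$,
% where $\sigma_{st}$ is the number of shortest $s$--$t$ paths and $\sigma_{st}(v)$ those
% passing through $v$; closeness
%   $c(v) = (n-1) / \sum_{u \neq v} d(v,u)$.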
With the proliferation of huge networks with millions of nodes and billions of
edges, the importance of having scalable algorithms for computing centrality
indices has become more and more evident, and a number of contributions have been
recently proposed, ranging from heuristics that perform extremely well in
practice to approximation algorithms offering strong probabilistic guarantees,
to scalable algorithms for the MapReduce platform. Moreover, the dynamic nature
of many networks, i.e., the addition and removal of nodes or edges over
time, dictates the need to keep the computed values of centrality up-to-date as
the graph changes. These challenging problems have enjoyed enormous interest
from the research community, with many relevant contributions proposed recently
to tackle them.
Our tutorial presents, in a unified framework, some of the many measures of
centrality, and discusses the algorithms to compute them, both in an exact and in
an approximate way, both in-memory and in a distributed fashion in MapReduce.
The goal of this unified presentation is to ease the
comparison between different measures of centrality, the different quality
guarantees offered by approximation algorithms, and the different trade-offs and
scalability behaviors characterizing distributed algorithms. We believe
this unity of presentation is beneficial both for newcomers and for
experienced researchers in the field, who will be exposed to the material from a
coherent point of view.
The graph analyst can now choose among a huge number of centrality indices, from
the well-established ones originally developed in sociology, to the more
recently introduced ones that capture other aspects of importance. At the same
time, the original algorithms that could handle the relatively small networks
for classic social science experiments have been superseded by important
algorithmic contributions that exploit modern computational frameworks or
obtain fast, high-quality approximations. It is our belief that the long history of
centrality measures and the ever-increasing interest from computer
scientists in analyzing larger and richer graphs create the need for an
all-around organization of both old and new materials, and the desire to
satisfy this need inspired us to develop this tutorial.
\pagebreak
\section{Outline}
The tutorial is structured in three main technical parts, plus a concluding part
where we discuss future research directions. All three technical parts will
contain both theory and experimental results.
\paragraph{Part I: Definitions and Exact Algorithms}
In this first part, we introduce the different centrality measures, starting
from important axioms that a good centrality measure should require. We then
discuss the relationship between the different measures, including
results highlighting the high correlation between many of them. After having
laid these foundations, we move to present the algorithms for the exact
computation of centrality measures, both in static and in dynamic graphs. We
discuss the state-of-the-art by presenting both algorithms with the best
worst-case time complexity and heuristics that work extremely well in practice
by exploiting different properties of real world graphs.
Exact computation of centrality measures becomes impractical on web-scale
networks. Commonly, one of two alternative approaches is taken to speed up the
computation: focus on obtaining an \emph{approximation} of the measure of
interest or use \emph{parallel and distributed algorithms}. In the second
part of our tutorial we explore the former approach, while in the third part we
deal with the latter.
\paragraph{Part II: Approximation Algorithms}
Most approximation algorithms for centrality measures use various forms of
sampling and more or less sophisticated analysis to derive a sample size
sufficient to achieve the desired level of approximation with the desired level
of confidence. In this part we present a number of these sampling based
algorithms, from simple ones using the Hoeffding inequality to more complex ones
using VC-dimension and Rademacher Averages. For each algorithm, we discuss its
merits and drawbacks, and highlight the challenges for the algorithm designer.
We also discuss the case of maintaining an approximation up-to-date in dynamic
graphs, presenting a number of contributions that recently appeared in the
literature.
\paragraph{Part III: Highly-scalable Algorithms}
In the third part of our tutorial, we discuss parallel and distributed
algorithms for the computation of centrality measures in static and dynamic
graphs. Specifically we present an approach based on GPUs and one based on
processing parallel/distributed data streams, together with experimental
results.
\paragraph{List of topics with references}
The following is a preliminary list of topics we will cover in each part of the
tutorial, with the respective references.
\begin{enumerate}
\item {\bf Introduction: definitions and exact algorithms}
\begin{enumerate}
\item The axioms of centrality~\citep{BoldiV14}
\item Definitions of centrality~\citep{Newman10}, including, but not
limited to: betweenness, closeness, degree, eigenvector,
harmonic, Katz, absorbing random-walk~\citep{MavroforakisMG15},
and spanning-edge centrality~\citep{MavroforakisGLKT15}.
\item Betweenness centrality: exact algorithm~\citep{Brandes01} and
heuristically-faster exact algorithms for betweenness
centrality~\citep{ErdosIBT15,SariyuceSKC13}.
\item Exact algorithms for betweenness centrality in a dynamic
graph~\citep{LeeLPCC12,NasrePR14,PontecorviR15}.
\item Exact algorithms for closeness centrality in a dynamic
graph~\citep{SariyuceKSC13b}.
\end{enumerate}
\item {\bf Approximation algorithms}
\begin{enumerate}
\item Sampling-based algorithm for closeness
centrality~\citep{EppsteinW04}.
\item Betweenness centrality: almost-linear-time approximation
algorithm~\citep{Yoshida14}, basic sampling-based
algorithm~\citep{BrandesP07}, refined
estimators~\citep{GeisbergerSS08}, VC-dimension bounds for
betweenness centrality~\citep{RiondatoK15}, Rademacher bounds for
betweenness centrality~\citep{RiondatoU16}.
\item Approximation algorithms for betweenness centrality in dynamic
graphs~\citep{KasWCC13,BergaminiMS15,BergaminiM15,HayashiAY15}.
\end{enumerate}
\item {\bf Highly-scalable algorithms}
\begin{enumerate}
\item GPU-based algorithms~\citep{SariyuceKSC13}.
\item Exact parallel streaming algorithm for betweenness centrality in a
dynamic graph~\citep{KourtellisMB15}.
\end{enumerate}
\item {\bf Challenges and directions for future research}
\end{enumerate}
\section{Intended Audience}
The tutorial is aimed at researchers interested in the theory and the
applications of algorithms for graph mining and social network analysis.
We do not require any specific existing knowledge. The tutorial is designed for
an audience of computer scientists who have a general idea of the problems and
challenges in graph analysis. We will present the material in such a way that
any advanced undergraduate student would be able to productively follow our
tutorial and we will actively engage with the audience and adapt our pace and
style to ensure that every attendee can benefit from our tutorial.
The tutorial starts from the basic definitions and progressively moves to more advanced
algorithms, including sampling-based approximation algorithms and MapReduce
algorithms, so that it will be of interest both to researchers new
to the field and to a more experienced audience.
%\section{Duration: \textrm{Half-day}}
%\section{Previous editions of the tutorial}
%The tutorial was not previously offered. We did not find any tutorial covering
%similar topics in the programs of recent relevant conferences.
\section{Support materials}
We developed a mini-website for the tutorial at
\url{http://matteo.rionda.to/centrtutorial/}. It contains the
abstract of the tutorial, a detailed outline with a short description of
each item of the outline, a full list of references with links to
electronic editions, a list of software packages implementing the
algorithms, and the slides used in the tutorial presentation.
\section{Instructors}
This tutorial is developed by Francesco Bonchi, Gianmarco De Francisci Morales,
and Matteo Riondato. All three instructors will attend the conference.
\smallskip
\noindent{\bf Francesco Bonchi} is Research Leader at the ISI Foundation, Turin, Italy,
where he leads the "Algorithmic Data Analytics" group. He is also Scientific
Director for Data Mining at Eurecat (Technological Center of Catalunya),
Barcelona. Before he was Director of Research at Yahoo Labs in Barcelona, Spain,
leading the Web Mining Research group.
His recent research interests include mining query-logs, social networks, and
social media, as well as the privacy issues related to mining these kinds of
sensible data.
%In the past he has been interested in data mining query
%languages, constrained pattern mining, mining spatiotemporal and mobility data,
%and privacy preserving data mining.
He will be PC Chair of the 16th IEEE International Conference on Data Mining
(ICDM 2016) to be held in Barcelona in December 2016. He is member of the ECML
PKDD Steering Committee, Associate Editor of the newly created IEEE Transactions
on Big Data (TBD), of the IEEE Transactions on Knowledge and Data Engineering
(TKDE), the ACM Transactions on Intelligent Systems and Technology (TIST),
Knowledge and Information Systems (KAIS), and member of the Editorial Board of
Data Mining and Knowledge Discovery (DMKD). %
%He has been program co-chair of the
%European Conference on Machine Learning and Principles and Practice of Knowledge
%Discovery in Databases (ECML PKDD 2010). Dr. Bonchi has also served as program
%co-chair of the first and second ACM SIGKDD International Workshop on Privacy,
%Security, and Trust in KDD (PinKDD 2007 and 2008), the 1st IEEE International
%Workshop on Privacy Aspects of Data Mining (PADM 2006), and the 4th
%International Workshop on Knowledge Discovery in Inductive Databases (KDID
%2005).
He is co-editor of the book "Privacy-Aware Knowledge Discovery: Novel
Applications and New Techniques" published by Chapman \& Hall/CRC Press.
%He earned his Ph.D. in computer science from the University of Pisa in December 2003.
He presented a tutorial at ACM KDD'14.
\smallskip
\noindent{\bf Gianmarco De Francisci Morales} is a Scientist at QCRI. Previously he worked as a Visiting Scientist at Aalto University in Helsinki, as a Research Scientist at Yahoo Labs in Barcelona, and as a Research Associate at ISTI-CNR in Pisa. He received his Ph.D. in Computer Science and Engineering from the IMT Institute for Advanced Studies of Lucca in 2012. His research focuses on scalable data mining, with an emphasis on Web mining and data-intensive scalable computing systems. He is an active member of the open source community of the Apache Software Foundation, working on the Hadoop ecosystem, and a committer for the Apache Pig project. He is one of the lead developers of Apache SAMOA, an open-source platform for mining big data streams. He commonly serves on the PC of several major conferences in the area of data mining, including WSDM, KDD, CIKM, and WWW. He co-organizes the workshop series on Social News on the Web (SNOW), co-located with the WWW conference. He presented a tutorial on stream mining at IEEE BigData'14.
\smallskip
\noindent{\bf Matteo Riondato} is a Research Scientist in the Labs group at Two Sigma
Investments. Previously he was a postdoc at Stanford and at Brown. His
dissertation on sampling-based randomized algorithms for data and graph mining
received the Best Student Poster Award at SIAM SDM'14. His research focuses on
exploiting advanced theory in practical algorithms for time series
analysis, pattern mining, and social network analysis. He presented
tutorials at ACM KDD'15, ECML PKDD'15, and ACM CIKM'15.
\bibliographystyle{abbrvnat}
\bibliography{centrality}
\end{document}
| {
"alphanum_fraction": 0.8058301328,
"avg_line_length": 55.534965035,
"ext": "tex",
"hexsha": "c15036697f4ddc12d0f0b5488780ba6117f0f176",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2018-08-25T05:46:16.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-08-25T05:46:16.000Z",
"max_forks_repo_head_hexsha": "cfb9b21ce83e03f66a9249d126b166f2b906f099",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "rionda/centrtutorial",
"max_forks_repo_path": "WWW16/proceedings/centrtutorial.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cfb9b21ce83e03f66a9249d126b166f2b906f099",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "rionda/centrtutorial",
"max_issues_repo_path": "WWW16/proceedings/centrtutorial.tex",
"max_line_length": 1048,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cfb9b21ce83e03f66a9249d126b166f2b906f099",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "rionda/centrtutorial",
"max_stars_repo_path": "WWW16/proceedings/centrtutorial.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3695,
"size": 15883
} |
\documentclass[12pt, titlepage]{article}
\usepackage{amsmath, mathtools}
\usepackage[round]{natbib}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{colortbl}
\usepackage{xr}
\usepackage{hyperref}
\usepackage{longtable}
\usepackage{xfrac}
\usepackage{tabularx}
\usepackage{float}
\usepackage{siunitx}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage[section]{placeins}
\usepackage{caption}
\usepackage{fullpage}
% jen things
\usepackage{amssymb}
\usepackage{listings}
% end jen things
\hypersetup{
bookmarks=true, % show bookmarks bar?
colorlinks=true, % false: boxed links; true: colored links
linkcolor=red, % color of internal links (change box color with linkbordercolor)
citecolor=blue, % color of links to bibliography
filecolor=magenta, % color of file links
urlcolor=cyan % color of external links
}
\usepackage{array}
\externaldocument{../../SRS/SRS}
\input{../../Comments}
\newcommand{\progname}{Kaplan}
\begin{document}
\title{Module Interface Specification for \progname{}}
\author{Jen Garner}
\date{\today}
\maketitle
\pagenumbering{roman}
\section{Revision History}
\begin{tabularx}{\textwidth}{p{3cm}p{2cm}X}
\toprule {\bf Date} & {\bf Version} & {\bf Notes}\\
\midrule
November 20, 2018 (Tuesday) & 1.0 & Initial draft \\
November 26, 2018 (Monday) & 1.1 & Complete first draft \\
December 4, 2018 (Tuesday) & 1.2 & Update fitg module to remove the wrapper
function and make most of the functions access routines instead of local
functions \\
\bottomrule
\end{tabularx}
~\newpage
\section{Symbols, Abbreviations and Acronyms}
See the \href{https://github.com/PeaWagon/Kaplan/blob/master/docs/SRS/SRS.pdf}{SRS}
documentation.
cid = compound identification number (for the
\href{https://pubchem.ncbi.nlm.nih.gov/}{pubchem} website)
vetee = private database repository on github
\newpage
\tableofcontents
\newpage
\pagenumbering{arabic}
\section{Introduction}
The following document details the Module Interface Specifications (MIS) for
\progname{}. This program is designed to search a potential energy space for a
set of conformers for a given input molecule. The energy and RMSD are used to
optimize dihedral angles, which can then be combined with an original geometry
specification to determine an overall structure for a conformational isomer.
Complementary documents include the System Requirement Specifications (SRS)
and Module Guide (MG). The full documentation and implementation can be
found at \url{https://github.com/PeaWagon/Kaplan}.
\section{Notation}
The structure of the MIS for modules comes from \citet{HoffmanAndStrooper1995},
with the addition that template modules have been adapted from
\cite{GhezziEtAl2003}. The mathematical notation comes from Chapter 3 of
\citet{HoffmanAndStrooper1995}. For instance, the symbol := is used for a
multiple assignment statement and conditional rules follow the form $(c_1
\Rightarrow r_1 | c_2 \Rightarrow r_2 | ... | c_n \Rightarrow r_n )$. Also, the
PEP8 style guide from Python will be used for naming conventions.
The following table summarizes the primitive data types used by \progname.
\begin{center}
\renewcommand{\arraystretch}{1.2}
\noindent
\begin{tabular}{l l p{7.5cm}}
\toprule
\textbf{Data Type} & \textbf{Notation} & \textbf{Description}\\
\midrule
character & char & a single symbol or digit\\
integer & $\mathbb{Z}$ & a number without a fractional component in (-$\infty$, $\infty$) \\
natural number & $\mathbb{N}$ & a number without a fractional component in [1, $\infty$) \\
real & $\mathbb{R}$ & any number in (-$\infty$, $\infty$)\\
boolean & bool & True or False \\
\bottomrule
\end{tabular}
\end{center}
\noindent
The specification of \progname{} uses some derived data types: sequences,
strings, tuples, lists, and dictionaries. Sequences are lists filled with
elements of the same data type. Strings
are sequences of characters. Tuples contain a fixed list of values, potentially
of different types. Lists are similar to tuples, except that they can change in
size and their entries can be modified. For strings, lists, and tuples, the
index can be used to retrieve a value at a certain location. Indexing starts at
0 and continues until the length of the item minus one (example: for a list
my\_list = [1,2,3], my\_list[1] returns 2). A slice of these data types affords
a subsection of the original data (example: given a string s = ``kaplan'',
s[2:4] gives ``pl''). Notice that the slice's second value is a non-inclusive
bound. A dictionary is a dynamic set of key-value pairs, where the keys and the
values can be modified and of any type. A dictionary value is accessed by
calling its key, as in dictionary\_name[key\_name] = value.
\progname{} uses three special objects called Pmem, Ring, and Parser. These
objects have methods that are described in \ref{section-pmem},
\ref{section-ring}, and \ref{section-geometry} respectively. The Python
NoneType type object is also used.
Here is a table to summarize the derived data types:
\begin{center}
\renewcommand{\arraystretch}{1.2}
\noindent
\begin{tabular}{l l p{7.5cm}}
\toprule
\textbf{Data Type} & \textbf{Notation} & \textbf{Description}\\
\midrule
population member & Pmem & an object used by \progname{} to represent potential
solutions to the conformer search/optimization problem \\
ring & Ring & an object used by \progname{} to store Pmem objects and define
how they are removed, added, and updated \\
parser & Parser & a Vetee object used by \progname{} to represent the molecular
geometry, its energy calculations, and input/output parameters; also inherited
classes include: Xyz, Com, Glog, Structure \\
string & str & a string is a list of characters (as implemented in Python) \\
list & list or [ ] & a Python list (doubly-linked) \\
dictionary & dict or \{\} & a Python dictionary that has key value pairs \\
NoneType & None & empty data type \\
\bottomrule
\end{tabular}
\end{center}
In addition, \progname \ uses functions, which
are defined by the data types of their inputs and outputs. Local functions are
described by giving their type signature followed by their specification. There
is one generator function in this program, which uses the yield keyword instead
of the return keyword. Every time a generator is called, it returns the next
value in what is usually a for loop.
Note that obvious errors (such as missing inputs) that are handled by the
Python interpreter are not listed under the exceptions in any of the
\progname{} modules. \wss{good}
\section{Module Decomposition}
The following table is taken directly from the Module Guide document for this project.
\begin{table}[h!]
\centering
\begin{tabular}{p{0.3\textwidth} p{0.6\textwidth}}
\toprule
\textbf{Level 1} & \textbf{Level 2}\\
\midrule
{Hardware-Hiding Module} & ~ \\
\midrule
\multirow{9}{0.3\textwidth}{Behaviour-Hiding Module}& GA Input \\
& Molecule Input \\
& GA Control\\
& $Fit_G$ \\
& Tournament \\
& Crossover \& Mutation \\
& Ring \\
& Pmem \\
& Output \\
\midrule
\multirow{3}{0.3\textwidth}{Software Decision Module} & Geometry \\
& Energies \\
& RMSD \\
\bottomrule
\end{tabular}
\caption{Module Hierarchy}
\label{TblMH}
\end{table}
\newpage
~\newpage
\section{MIS of GA Input} \label{section-ga_input}
\meow{Right now the link does not open the SRS document. Not sure if that was
supposed to happen.} \wss{There are ways to get the external links to work, but
don't worry about it; it isn't worth your time to fiddle with that right now.}
The purpose of this module is to provide a utility for reading and verifying
input related to the genetic algorithm (GA). There are two main functions:
read\_ga\_input and verify\_ga\_input. The first function opens a data file
(.txt file) with the following format:
\begin{lstlisting}
num_mevs = 1000
num_slots = 100
num_filled = 20
num_geoms = 3
num_atoms = 10
t_size = 7
num_muts = 3
num_swaps = 1
pmem_dist = 5
fit_form = 0
coef_energy = 0.5
coef_rmsd = 0.5
\end{lstlisting}
These values are read into a Python dictionary, called ga\_input\_dict. The
order of the inputs does not matter, but \progname{} will throw an error if one
of the keys is missing. This dictionary is then passed to the second function,
which checks that the values are correct and that all keys have been given.
From the SRS document (see SRS Section \ref{SRS-symbols}), \wss{I like that you
have external references, but they only work if the SRS document is compiled.
It would be nice if you had a makefile that made all of the documents, like
the one in the Blank Project example in our repo.} $n_G$ and $n_a$ are
represented here as num\_geoms and num\_atoms respectively. Also, coef\_energy
and coef\_rmsd are $C_\text{E}$ and $C_\text{RMSD}$ from the SRS. All keys are
case insensitive. The values are case insensitive except for the SMILES string
(if the chosen struct\_type is smiles), which is case sensitive by definition.
\meow{I may have to check if SMILES strings are case sensitive for the programs
I am parsing them with.} \wss{okay}
\meow{Turns out that SMILES strings are case sensitive, since lowercase
indicates an aromatic atom. I was able to catch this problem during my testing
and I have addressed the issue in the code.}
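A minimal Python sketch of what read\_ga\_input and the key check in
verify\_ga\_input might look like is given below (illustrative only; this is
not the actual \progname{} implementation, and value/range checking is only
indicated):
\begin{lstlisting}
# illustrative sketch only, not the actual kaplan implementation
EXPECTED_KEYS = {"num_mevs", "num_slots", "num_filled", "num_geoms",
                 "num_atoms", "t_size", "num_muts", "num_swaps",
                 "pmem_dist", "fit_form", "coef_energy", "coef_rmsd"}

def read_ga_input(ga_input_file):
    ga_input_dict = {}
    with open(ga_input_file) as f:      # raises FileNotFoundError if absent
        for line in f:
            if "=" not in line:         # skip blank/malformed lines
                continue
            key, value = line.split("=", 1)
            ga_input_dict[key.strip().lower()] = value.strip()
    return ga_input_dict

def verify_ga_input(ga_input_dict):
    if set(ga_input_dict) != EXPECTED_KEYS:
        raise ValueError("missing, unknown, or repeated key")
    # ... value checks (ranges, integer vs float) would follow here
\end{lstlisting}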
\subsection{Module}
ga\_input
\subsection{Uses}
None
\subsection{Syntax}
\subsubsection{Exported Constants}
NUM\_GA\_ARGS := 12
\subsubsection{Exported Access Programs}
\begin{table}[H]
\begin{tabular}{p{4cm} p{2cm} p{2cm} p{5cm}}
\toprule
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
read\_ga\_input & str & dict & FileNotFoundError \\
verify\_ga\_input & dict & None & ValueError \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Semantics}
\subsubsection{State Variables}
\noindent num\_args := for each line in the ga\_input\_file $+(x : list | key
\in x \land value \in x \land
length(x) = 2 : 1)$ \\ \wss{what list is this based on?}
\noindent ga\_input\_dict, which is a dictionary that contains:
\begin{itemize}
\item num\_mevs := $\mathbb{N}$
\item num\_geoms := $\mathbb{N}$
\item num\_atoms := $\{x \in \mathbb{N}: x > 3\}$
\item num\_slots := $\{x \in \mathbb{N}: x \geq \text{num\_filled} \}$
\item num\_filled := $\{x \in \mathbb{N}: x \leq \text{num\_slots}\}$
\item num\_muts := $\{0 \lor x \in \mathbb{N}: \text{num\_atoms} \geq x \geq
0\}$
\item num\_swaps := $\{0 \lor x \in \mathbb{N}: \text{num\_geoms} \geq x \geq
0\}$
\item t\_size := $\{x \in \mathbb{N}: 2 \leq x \leq \text{num\_filled} \}$
\item pmem\_dist := $\{0 \lor x \in \mathbb{N}: x \geq 0\}$
\item fit\_form := $\{0 \lor x \in \mathbb{N}: x \geq 0\}$
\item coef\_energy := $\mathbb{R}$
\item coef\_rmsd := $\mathbb{R}$
\end{itemize}
\wss{You don't actually have state variables for the above, since you are
outputting this information in a dictionary. What you have done is partially defined
the abstract data type for your dictionary. I don't understand why you need
a dictionary? I don't see what the key value is? Can't you just have input
data have the state variables that you have mentioned and skip the idea of
outputting a dictionary? The input module would be available to any module
that needed these inputs and, if it is implemented as a singleton object, you
don't even need to pass an object.}
\subsubsection{Environment Variables}
ga\_input\_file: str representing the file that exists in the working directory
(optionally includes a prepended path).
\subsubsection{Assumptions}
This module is responsible for all type checking and no errors will come from
incorrect passing of variables (except input related to the molecule - covered
in \ref{section-mol_input}).
\subsubsection{Access Routine Semantics}
\noindent read\_ga\_input(ga\_input\_file):
\begin{itemize}
\item transition: open ga\_input\_file and read its contents.
\item output: dictionary (ga\_input\_dict) that contains the values listed
in State Variables.
\item exception: FileNotFoundError := ga\_input\_file $\notin$ current
working directory.
\end{itemize}
\noindent verify\_ga\_input(ga\_input\_dict):
\begin{itemize}
\item transition: None
\item output: None
\item exception: ValueError
\begin{itemize}
\item 4 \textgreater num\_atoms
\item num\_filled \textgreater num\_slots
\item num\_swaps \textgreater num\_geoms
\item t\_size \textgreater num\_filled $\lor$ t\_size \textless 2
\item not an integer type (for all except coef\_energy and coef\_rmsd,
which should be floats)
\item missing key/unknown key
\item too many keys (i.e. repeated keys), where num\_args $\neq$
NUM\_GA\_ARGS
\end{itemize}
\end{itemize}
\subsubsection{Local Functions}
None
\wss{For ease of reference, it is nice if there is a newpage before each
module.}
\section{MIS of Molecule Input} \label{section-mol_input}
The purpose of this module is to provide a utility for reading and verifying
input related to the molecule, including its structure and energy calculations.
There are two main functions: read\_mol\_input and verify\_mol\_input. The
first function opens a data file (.txt file) with the following format:
\begin{lstlisting}
qcm = hartree-fock
basis = aug-cc-pvtz
struct_input = C=CC=C
struct_type = smiles
prog = psi4
charge = 0
multip = 1
\end{lstlisting}
These values are read into a Python dictionary, called mol\_input\_dict. The
order of the inputs does not matter, but \progname{} will throw an error if one
of the keys is missing. This dictionary is then passed to the second function,
which checks that the values are correct and that all keys have been given. To
verify the molecular input, Vetee's Parser object is constructed using the
mol\_input\_dict (geometry module, Section \ref{section-geometry}). The
mol\_input module calls the energy module (Section \ref{section-energies}) to
run a calculation on the input molecule. If this calculation converges, then
manipulating the dihedral angles is more likely to yield calculations that also
converge. After this final verification, the Parser object is passed back
to the gac module, and eventually gets used by the pmem module
(Section \ref{section-pmem}). From the SRS document (see SRS Section
\ref{SRS-symbols}), $QCM$ and $BS$ are represented here as qcm and basis
respectively. All keys and string values are case insensitive.
\subsection{Module}
mol\_input
\subsection{Uses}
geometry (Section \ref{section-geometry}), energy (Section
\ref{section-energies})
\subsection{Syntax}
\subsubsection{Exported Constants}
NUM\_MOL\_ARGS := 7
\subsubsection{Exported Access Programs}
\begin{table}[H]
\begin{tabular}{p{4cm} p{2cm} p{2cm} p{5cm}}
\toprule
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
read\_mol\_input & str & dict & FileNotFoundError \\
verify\_mol\_input & dict & Parser & ValueError \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Semantics}
\subsubsection{State Variables}
\noindent num\_args := for each line in the mol\_input\_file $+(x : list | key
\in x \land value \in x \land
length(x) = 2 : 1)$ \\
\noindent mol\_input\_dict, which is a dictionary that contains:
\begin{itemize}
\item qcm := str $\in$ available methods given prog
\item basis := str $\in$ available basis sets given prog and molecule
\item struct\_input := $\{x : str | x = file \lor x = SMILES \lor x = name
\lor x = cid : x\}$
\item struct\_type := $str \in \{``smiles", ``xyz", ``com", ``glog",
``name", ``cid"\} $
\item prog := $str \in \{``psi4"\}$
\item charge := $\mathbb{Z}$
\item multip := $\mathbb{N}$
\end{itemize}
\wss{The same comment applies about the dictionary as for the previous module.
Rather than using strings for the structure types, you could consider
using an enumerated type.}
\subsubsection{Environment Variables}
mol\_input\_file: str representing the file that exists in the working
directory (optionally includes a prepended path).
\subsubsection{Assumptions}
As with \ref{section-ga_input}, other modules that use and exchange the state
variables found in this module will not raise errors related to the type of
input.
\subsubsection{Access Routine Semantics}
\noindent read\_mol\_input(mol\_input\_file):
\begin{itemize}
\item transition: open mol\_input\_file and read its contents.
\item output: dictionary (mol\_input\_dict) that contains the values listed
in State Variables.
\item exception: FileNotFoundError := mol\_input\_file $\notin$ current
working directory.
\end{itemize}
\noindent verify\_mol\_input(mol\_input\_dict):
\begin{itemize}
\item transition: None
\item output: Parser
\item exception: ValueError
\begin{itemize}
\item qcm $\notin$ prog
\item basis $\notin$ prog $\lor$ basis unavailable for molecule
\item unable to parse SMILES string, name, cid, or input file
\item struct\_type not available
\item not a string type (for all except charge $\mathbb{Z}$ and multip
$\mathbb{N}$)
\item missing key/unknown key
\item too many keys (i.e. repeated keys), where num\_args $\neq$
NUM\_MOL\_ARGS
\end{itemize}
\end{itemize}
\subsubsection{Local Functions}
None
\section{MIS of GA Control} \label{section-gac}
This module is responsible for running the GA using the given inputs. The
general format of the algorithm is as follows:
\begin{enumerate}
\item Read in and verify ga\_input\_file (\ref{section-ga_input}).
\item Read in and verify mol\_input\_file (\ref{section-mol_input}). This
step includes a check of the initial geometry, QCM, and BS for convergence
(\ref{section-energies}), and generating a Parser object
(\ref{section-geometry}).
\item Generate a Ring object (\ref{section-ring}).
\item Fill the Ring with Pmem objects according to the num\_filled input
variable (\ref{section-pmem}).
\item Iterate over the num\_mevs input variable, and run a tournament on
the Ring according to the t\_size variable (\ref{section-tournament}).
\item Return the output as per the output module specifications
(\ref{section-output}).
\end{enumerate}
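The following is a minimal Python sketch of this control flow (illustrative
only; type conversion of the dictionary values and error handling are glossed
over, and the real \progname{} code may differ):
\begin{lstlisting}
# illustrative sketch of the control flow only
def run_kaplan(ga_input_file, mol_input_file):
    ga = read_ga_input(ga_input_file)            # step 1
    verify_ga_input(ga)
    mol = read_mol_input(mol_input_file)         # step 2
    parser = verify_mol_input(mol)
    ring = Ring(ga["num_geoms"], ga["num_atoms"], ga["num_slots"],
                ga["pmem_dist"], ga["fit_form"], ga["coef_energy"],
                ga["coef_rmsd"], parser)         # step 3
    ring.fill(ga["num_filled"], 0)               # step 4
    for mev in range(ga["num_mevs"]):            # step 5
        run_tournament(ga["t_size"], ga["num_muts"], ga["num_swaps"],
                       ring, mev)
    run_output(ring)                             # step 6
\end{lstlisting}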
\subsection{Module}
gac
\subsection{Uses}
ga\_input (Section \ref{section-ga_input}),
mol\_input (Section \ref{section-mol_input}),
output (Section \ref{section-output}),
ring (Section \ref{section-ring}),
tournament (Section \ref{section-tournament})
\subsection{Syntax}
\subsubsection{Exported Constants}
\subsubsection{Exported Access Programs}
\meow{Not sure if I should list the exceptions raised by imported modules here?
For example, reading the input may give an error, but this error is not
explicitly raised by the gac module. Do I still have to list it here?} \wss{No,
only list exceptions that are the responsibility of this module to raise.}
\begin{center}
\begin{tabular}{p{2cm} p{4cm} p{4cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
run\_kaplan & str, str & None & None \\
\hline
\end{tabular}
\end{center}
\subsection{Semantics}
\subsubsection{State Variables}
\meow{I don't actually need to store the mol\_input\_dict variable here since
its information will be completely contained in the Parser object. Mostly I am
leaving this note here as a reminder to update the Parser object with the prog
attribute. This update would also mean consolidating the function calls read
and verify mol input. Not sure if it is a good idea to have these separated in
gac?}
\begin{itemize}
\item ga\_input\_dict : dict (see \ref{section-ga_input})
\item mol\_input\_dict : dict (see \ref{section-mol_input})
\item parser : Parser (see \ref{section-geometry})
\item ring : Ring (see \ref{section-ring})
\item mev : $\{x : \mathbb{Z} | \text{num\_mevs} > x \geq 0 : x\}$
\end{itemize}
\wss{I don't entirely understand your design. The confusion about the role of
state variables is muddying the water for me. I can tell you have done a
great deal of work and have thought deeply about the design. My guess is that
you have thought about the design in Python and you are trying to document
your Python design using the 741 MIS. I think this has led to a more
complicated MIS document than necessary}
\wss{I've been trying to think of practical advice that can help the
documentation, but not consume too much of your time. What I've come up with
is that you should replace each of your dictionaries with newly defined types.
The new types will be tuples, in the Hoffmann and Strooper sense, but they
will be implemented by dictionaries. A dictionary gives a set of key value
pairs. A tuple is a set of field names and values. This would let you
introduce the new types using the H\&S notation, but you don't have to
document them as ADTs because the implementation as dictionaries is so
straightforward. You can define all of your new types in section 4, or in a
type defining module.}
\subsubsection{Environment Variables}
None
\subsubsection{Assumptions}
This is the main control unit for the program; the user writes their own
input files and only needs to access this module to complete their task.
\subsubsection{Access Routine Semantics}
\noindent run\_kaplan(ga\_input\_file, mol\_input\_file):
\begin{itemize}
\item transition: The set of conformers with low (large negative) energies and
high RMSD is produced and passed to the
output module (\ref{section-output}). \wss{if you are passing something,
it isn't a state variable.}
\item output: None
\item exception: None
\end{itemize}
\wss{I think you have a misunderstanding about the state variables. The fields
of an object are specified as state variables. If you have state variables,
you should have state transitions that show how the values are changed.}
\subsubsection{Local Functions}
None
\meow{Since the run\_kaplan function is only used by gac, should I put it here
in local functions? Technically the user will have to import it somewhere to
use it.}
\section{MIS of $Fit_G$} \label{section-fitg}
The purpose of this module is to calculate the fitness of the Pmem object. In
ga\_input\_file, the user specifies the fit\_form (the formula number to use
for calculating fitness), the coefficients for the energy and RMSD terms, the
method (QCM), and the basis set (BS). These values are used here to assign a
fitness to a pmem. A change of dihedral angles may make it impossible for an
energy calculation to converge; in this case the energy will be set to zero,
but the RMSD value will most likely be high for the set of conformers. The
contribution of the RMSD value to the fitness should therefore be smaller than
the energetic component (otherwise the user may end up with multiple
non-convergent geometries with high RMSD).
The $Fit_G$ module works as follows:
\begin{enumerate}
\item The ring calls its set\_fitness method, which calls the sum\_energies
and sum\_rmsds functions, and passes the outputs of these functions to the
calc\_fitness function.
\item The output of the calc\_fitness function depends on the chosen
fit\_form, which for now is only allowed to be 0. The resulting floating-point
value is returned to the ring.
\end{enumerate}
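A sketch of the fit\_form 0 formula in code (illustrative only; inputs are
assumed to have been verified by the GA Input module):
\begin{lstlisting}
# illustrative sketch; inputs are assumed to be pre-verified
def calc_fitness(fit_form, sum_energy, coef_energy, sum_rmsd, coef_rmsd):
    if fit_form == 0:
        return coef_energy * sum_energy + coef_rmsd * sum_rmsd
    raise ValueError("fit_form not available")
\end{lstlisting}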
\subsection{Module}
fitg
\subsection{Uses}
energy (Section \ref{section-energies}), rmsd (Section \ref{section-rmsd})
\subsection{Syntax}
\subsubsection{Exported Constants}
None
\subsubsection{Exported Access Programs}
The sum\_energies function should raise a warning if an energy calculation did
not converge.
\begin{center}
\begin{tabular}{p{2cm} p{4cm} p{4cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
sum\_energies &
list(list(str,$\mathbb{R}$,$\mathbb{R}$,$\mathbb{R}$)), $\mathbb{Z}$,
$\mathbb{N}$, str, str & $\mathbb{R}$
& None \\
sum\_rmsds & list(list(str,$\mathbb{R}$,$\mathbb{R}$,$\mathbb{R}$)) &
$\mathbb{R}$ & None \\
calc\_fitness & $\mathbb{Z}$, $\mathbb{R}$, $\mathbb{R}$, $\mathbb{R}$,
$\mathbb{R}$ & $\mathbb{R}$ & None \\
\hline
\end{tabular}
\end{center}
\subsection{Semantics}
\subsubsection{State Variables}
None
\subsubsection{Environment Variables}
None
\subsubsection{Assumptions}
New fitness functions will be added. Each new fitness function should be
incrementally labelled and added to the calc\_fitness function. Any new
fitness functions that are added should account for the possibility of divide
by zero errors.
\subsubsection{Access Routine Semantics}
\noindent sum\_energies(xyz\_coords, charge, multip, method, basis):
\begin{itemize}
\item transition: None
\item output: \textit{out} := $+(x: \mathbb{R}|x = \text{energy of
xyz\_coords with charge, multiplicity, QCM, BS}: |x|)$
\item sum together the energies of the conformer geometries for one
solution
instance (one pmem).
\item exception: None
\end{itemize}
\noindent sum\_rmsds(xyz\_coords):
\begin{itemize}
\item transition: None
\item output: \textit{out} := $+(x(i,j): \mathbb{R}| \text{RMSD between
coords i and j where } i,j \text{ are indices } \forall i,j \in
\text{xyz\_coords}: x)$
\item determine all possible pairs of conformers (with all\_pairs\_gen) and
calculate their RMSD values using the rmsd module.
\item exception: None
\end{itemize}
\noindent calc\_fitness(fit\_form, sum\_energy, coef\_energy, sum\_rmsd,
coef\_rmsd):
\begin{itemize}
\item transition: None
\item output: when fit\_form = 0, then \textit{out} := \{
$<<C_E,S_E,C_{\text{RMSD}},S_{\text{RMSD}}>,y> : \mathbb{R} | y =
C_E*S_E+C_{\text{RMSD}}*S_{\text{RMSD}} \}$
\item exception: None (may complain if inputs have not been previously
checked i.e. fit\_form is not available)
\end{itemize}
\subsubsection{Local Functions}
$\mathbb{Z}$ $\rightarrow$ tuple($\mathbb{Z}, \mathbb{Z}$)
\noindent all\_pairs\_gen(num\_geoms):
\begin{itemize}
\item transition: increment iterators i and j in the generator function.
\item output: \textit{out} := $\{<i,j>: \mathbb{Z} | 0 \leq i <
j \leq \text{num\_geoms}-1\}$
\item exception: None
\item the generator function, all\_pairs\_gen, will need to keep track of
the i and j iteration values.
\item num\_pairs := $n!/(2 \cdot (n-2)!)$ with $n = \text{num\_geoms}$; this is
the number of next() calls needed to exhaust the generator.
\end{itemize}
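A minimal Python sketch of such a generator (illustrative only; the real
implementation may differ):
\begin{lstlisting}
# illustrative generator sketch for all_pairs_gen
def all_pairs_gen(num_geoms):
    """Yield each pair of conformer indices <i, j> exactly once."""
    for i in range(num_geoms - 1):
        for j in range(i + 1, num_geoms):
            yield (i, j)

# example: list(all_pairs_gen(3)) == [(0, 1), (0, 2), (1, 2)]
# number of pairs = n!/(2*(n-2)!) = 3 for n = 3
\end{lstlisting}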
\section{MIS of Tournament} \label{section-tournament}
The tournament module selects t\_size pmems from the ring for comparison. It
ranks the pmems in order of increasing fitness, as calculated using the fitg
module (\ref{section-fitg}). Then, the two best pmems are chosen as ``parents",
and the mutations module (\ref{section-mutations}) is used to generate two new
``children" based on these parents. The ring module (\ref{section-ring}) then
decides whether the children are added to the ring or not, and if old pmems are
deleted to make room for the children.
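The selection step could be sketched in Python as follows (illustrative only;
select\_pmems and select\_parents follow the signatures below, but the bodies
are not the actual \progname{} code):
\begin{lstlisting}
# illustrative sketch of the selection step
import random

def select_pmems(number, ring):
    # indices of occupied slots only; no index may appear twice
    occupied = [i for i, p in enumerate(ring.pmems) if p is not None]
    return random.sample(occupied, number)

def select_parents(selected_pmems, ring):
    # the two fittest of the selected pmems become the parents
    ranked = sorted(selected_pmems, key=lambda i: ring.pmems[i].fitness,
                    reverse=True)
    return ranked[0], ranked[1]
\end{lstlisting}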
\subsection{Module}
tournament
\subsection{Uses}
ring (Section \ref{section-ring}),
pmem (Section \ref{section-pmem}),
mutations (Section \ref{section-mutations})
\subsection{Syntax}
\subsubsection{Exported Constants}
None
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{3.5cm} p{4cm} p{4cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
run\_tournament & $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{Z}$, Ring, $\mathbb{Z}$ & None
& RingEmptyError \\
select\_pmems* & $\mathbb{N}$, Ring & list($\mathbb{Z}$) & None \\
select\_parents* & list, Ring & tuple($\mathbb{Z}$,$\mathbb{Z}$) & None
\\
\hline
\end{tabular}
\end{center}
\noindent * local function
\subsection{Semantics}
\subsubsection{State Variables}
\begin{itemize}
\item selected\_pmems := $\{x : \mathbb{N} | x \geq 0 \land
\text{ring.pmems}[x] \neq None\}$
\item parents := $\{x : \mathbb{N} | x \geq 0 \land \text{ring.pmems}[x] \neq
None\}$
\item parent1 := $[[D_1], [D_2], ..., [D_{n_G}]]$, where each
$D_i$ is a list of length num\_atoms-3 of integers representing the dihedral
angles
\item parent2 := $[[D_1], [D_2], ..., [D_{n_G}]]$, where each
$D_i$ is a list of length num\_atoms-3 of integers representing the dihedral
angles
\item children := $[[[D_1], [D_2], ..., [D_{n_G}]], [[D_1], [D_2], ...,
[D_{n_G}]]]$, where each $D_i$ is a list of length num\_atoms-3 of integers
representing the dihedral
angles
\end{itemize}
\subsubsection{Environment Variables}
None
\subsubsection{Assumptions}
None \meow{May have to think on this more.}
\subsubsection{Access Routine Semantics}
\noindent run\_tournament(t\_size, num\_muts, num\_swaps, ring, current\_mev):
\begin{itemize}
\item transition: new ring members may be added with birthday equal to the
current\_mev (increments ring.num\_filled), and old pmems may be replaced
with a new pmem.
\item output: None
\item exception: RingEmptyError when t\_size $>$ ring.num\_filled
\end{itemize}
\subsubsection{Local Functions}
\noindent select\_pmems(number, ring):
\begin{itemize}
\item transition: None
\item output: selection, which is a list of length \textit{number} of
random ring indices containing a pmem; no pmem can appear twice in the
selection.
\item exception: None
\end{itemize}
\noindent select\_parents(selected\_pmems, ring):
\begin{itemize}
\item transition: None
\item output: tuple of length 2 representing the ring indices of the pmems
with the best fitness values out of the selected\_pmems indices.
\item exception: None
\end{itemize}
\section{MIS of Crossover \& Mutation} \label{section-mutations}
This module is used to generate new solution instances with which to generate
new pmems (\ref{section-pmem}) for the ring (\ref{section-ring}). It has two
local functions, mutate and swap. Mutate takes in a list of lists representing
dihedral angles for the molecule of interest. Then, a number of these dihedral
angles are randomly changed. Swap takes in two of such list of lists and swaps
a number of sublists between the two inputs. This module is called during a
tournament (\ref{section-tournament}). A wrapper function to call these two
functions is called generate\_children, which returns the new solution
instances to the tournament.
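A sketch of mutate and swap in Python is shown below (illustrative only; it is
not the actual \progname{} implementation):
\begin{lstlisting}
# illustrative sketch; the number of changes is drawn at random
# between 0 and num_muts / num_swaps, as described in the semantics
import random

MIN_VALUE = 0
MAX_VALUE = 360

def mutate(dihedrals, num_muts):
    for _ in range(random.randint(0, num_muts)):
        geom = random.randrange(len(dihedrals))          # pick a geometry
        angle = random.randrange(len(dihedrals[geom]))   # pick a dihedral
        dihedrals[geom][angle] = random.randint(MIN_VALUE, MAX_VALUE)
    return dihedrals

def swap(parent1, parent2, num_swaps):
    for _ in range(random.randint(0, num_swaps)):
        geom = random.randrange(len(parent1))            # pick a geometry index
        parent1[geom], parent2[geom] = parent2[geom], parent1[geom]
    return parent1, parent2
\end{lstlisting}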
\subsection{Module}
mutations
\subsection{Uses}
None
\subsection{Syntax}
\subsubsection{Exported Constants}
These two values restrict the dihedral angles that can be chosen.\\
\indent MIN\_VALUE := 0\\
\indent MAX\_VALUE := 360
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{3.5cm} p{4cm} p{3cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
generate\_children & list(list($\mathbb{Z}$)),
list(list($\mathbb{Z}$)), $\mathbb{Z}$, $\mathbb{Z}$ &
list(list($\mathbb{Z}$)), list(list($\mathbb{Z}$)) & None \\
mutate* & list(list($\mathbb{Z}$)), $\mathbb{Z}$ &
list(list($\mathbb{Z}$)) & None \\
swap* & list(list($\mathbb{Z}$)),
list(list($\mathbb{Z}$)), $\mathbb{Z}$ & list(list($\mathbb{Z}$)),
list(list($\mathbb{Z}$)) & None \\
\hline
\end{tabular}
\end{center}
\noindent * local function
\subsection{Semantics}
\subsubsection{State Variables}
\begin{itemize}
\item number of chosen swaps := $\{ x : \mathbb{Z} | 0 \leq x \leq
\text{num\_swaps from ga\_input\_dict}\}$
\item number of chosen mutations := $\{ x : \mathbb{Z} | 0 \leq x \leq
\text{num\_muts from ga\_input\_dict}\}$
\end{itemize}
\subsubsection{Environment Variables}
None
\subsubsection{Assumptions}
None \meow{Maybe good to reference SRS assumptions about inputs here?}
\subsubsection{Access Routine Semantics}
\noindent generate\_children(parent1, parent2, num\_muts, num\_swaps):
\begin{itemize}
\item transition: None
\item output: two lists of lists of integers between MIN\_VALUE and
MAX\_VALUE representing dihedrals for two new population members. These
will be used to create pmem objects.
\item exception: None
\end{itemize}
\subsubsection{Local Functions}
\noindent mutate(dihedrals, num\_muts):
\begin{itemize}
\item transition: locally update num\_muts with a random number (min 0, max
num\_muts).
\item output: a list of lists of integers between MIN\_VALUE and
MAX\_VALUE representing dihedrals for one mutated population member.
\item exception: None
\end{itemize}
\noindent swap(parent1, parent2, num\_swaps):
\begin{itemize}
\item transition: locally update num\_swaps with a random number (min 0,
max num\_swaps).
\item output: two lists of lists of integers between MIN\_VALUE and
MAX\_VALUE representing dihedrals for two swapped population members.
\item exception: None
\end{itemize}
\section{MIS of Ring} \label{section-ring}
The ring is the main data structure for \progname{}. It determines how the
potential solutions to the conformer optimization problem are organized. The
constructor for the ring takes 8 arguments: num\_geoms, num\_atoms, num\_slots,
pmem\_dist, fit\_form, coef\_energy, coef\_rmsd, and parser. The ring begins
empty and can be filled with pmem
objects by calling the ring.fill method. There is also the ring.update method,
which takes 2 arguments: parent\_index and child. This method is called during
a tournament after the children have been generated. The update occurs as
follows:
\begin{enumerate}
\item Select a random slot in the range [parent\_index-pmem\_dist,
parent\_index+pmem\_dist+1] around the parent.
\item Compare the fitness value of the child with the fitness value of the
current occupant.
\item If there is no current occupant, or if the child has fitness $\geq$
the current occupant, put the child in the slot.
\item Increment the num\_filled attribute of the ring if an empty slot was
filled.
\end{enumerate}
The ring also uses the geometry module (Section \ref{section-geometry}) to
generate a zmatrix as a string based on the pmem index of interest. Calling
ring.set\_fitness will determine the fitness for a given pmem index.
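The update step could be sketched as follows (illustrative only; the sketch is
written as a free function and assumes the child has already been wrapped in a
Pmem with its fitness set):
\begin{lstlisting}
# illustrative sketch of the update step; index wrapping uses modulo
import random

def update(ring, parent_index, child_pmem):
    # pick a slot within pmem_dist of the parent, wrapping around the ring
    offset = random.randint(-ring.pmem_dist, ring.pmem_dist)
    slot = (parent_index + offset) % ring.num_slots
    occupant = ring.pmems[slot]
    if occupant is None or child_pmem.fitness >= occupant.fitness:
        if occupant is None:
            ring.num_filled += 1      # an empty slot was filled
        ring.pmems[slot] = child_pmem
\end{lstlisting}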
\subsection{Module}
ring
\subsection{Uses}
pmem (Section \ref{section-pmem}),
fitg (Section \ref{section-fitg}),
geometry (Section \ref{section-geometry})
\subsection{Syntax}
\subsubsection{Exported Constants}
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{4cm} p{4cm} p{2cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
\_\_init\_\_ & $\mathbb{N}, \mathbb{N}, \mathbb{N}, \mathbb{Z},
\mathbb{Z}, \mathbb{R}, \mathbb{R}$, Parser & None & None \\
set\_fitness & $\mathbb{Z}$ & None & ValueError \\
update & $\mathbb{Z}$, list(list[$\mathbb{Z}$]) & None & None \\
fill & $\mathbb{N}, \mathbb{Z}$ & None & RingOverflowError \\
RingEmptyError & - & - & - \\
RingOverflowError & - & - & - \\
\hline
\end{tabular}
\end{center}
\wss{You should specify a constructor, not \_\_init\_\_. Your specification is
very Pythoncentric. The goal should be to make it as language agnostic as
possible.}
\subsection{Semantics}
\subsubsection{State Variables}
\begin{itemize}
\item ring.num\_geoms from ga\_input\_module
\item ring.num\_atoms from ga\_input\_module
\item ring.pmem\_dist from ga\_input\_module
\item ring.fit\_form from ga\_input\_module
\item ring.coef\_energy from ga\_input\_module
\item ring.coef\_rmsd from ga\_input\_module
\item ring.parser from the geometry module
\item ring.num\_filled from ga\_input\_module; represents the number of pmems
present in the ring (dynamic)
\item ring.pmems is a list of pmem objects (pmem module) or NoneType objects
(depends if slot is filled or empty)
\end{itemize}
\subsubsection{Environment Variables}
None
\subsubsection{Assumptions}
\begin{itemize}
\item the ring module will be written in such a way as to support the
addition of extinction operators. These extinction operators delete
segments and/or pmems with certain attributes from the ring.
\item a pmem cannot be initialized without a call by the ring to evaluate
its fitness. This evaluation will not be a wasted computation.
\item The ring can be iterated and will not fall over when an index below
zero or above the last slot is requested (index wrapping).
\end{itemize}
\subsubsection{Access Routine Semantics}
\noindent \_\_init\_\_(num\_geoms, num\_atoms, num\_slots, pmem\_dist,
fit\_form, coef\_energy, coef\_rmsd, parser):
\begin{itemize}
\item transition: generate a ring object.
\item output: None
\item exception: None
\end{itemize}
\noindent update(parent\_index, child):
\begin{itemize}
\item transition: generate a pmem with the child (the sets of dihedral
angles), and calculate its fitness. Select a slot to place the child within
[parent\_index-pmem\_dist, parent\_index+pmem\_dist+1]. If the slot is
occupied, the child must have fitness that is no worse than the current
occupant. Note: this will require wrapping for the Ring to ensure that an
IndexError is not raised. Increment the num\_filled attribute if a slot
that was once empty is filled.
\item output: None
\item exception: None
\end{itemize}
\noindent fill(num\_pmems, current\_mev):
\begin{itemize}
\item transition: if there are no pmems in the ring (num\_filled = 0), fill
the ring with a contiguous segment of num\_pmems pmems. If there are pmems
in the ring, fill empty slots with new pmems until num\_pmems pmems have
been added. For each new pmem, calculate its fitness.
\item output: None
\item exception: RingOverflowError occurs when there is a request to add a
set of pmems to the ring that the number of free slots does not accommodate.
\end{itemize}
\subsubsection{Local Functions}
\noindent set\_fitness(pmem\_index):
\begin{itemize}
\item transition: updates the pmem.fitness attribute by constructing a
zmatrix using the geometry module and calling calc\_fitness from the fitg
module.
\item output: None
\item exception: ValueError occurs if the slot is empty at pmem\_index.
\end{itemize}
\section{MIS of Pmem} \label{section-pmem} %TODO
This module is designed to hold the pmem data structure. A pmem is generated by
the ring module. The pmem holds the dihedrals list of lists, which is what the
\progname{} program is optimizing.
\subsection{Module}
pmem
\subsection{Uses}
None
\subsection{Syntax}
\subsubsection{Exported Constants}
These two values restrict the dihedral angles that can be chosen.\\
\indent MIN\_VALUE := 0\\
\indent MAX\_VALUE := 360
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{2cm} p{4cm} p{4cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
\_\_init\_\_ & $\mathbb{Z}, \mathbb{N}, \mathbb{N}, \mathbb{Z}$,
list(list($\mathbb{Z}$))* & None
& None \\
\hline
\end{tabular}
\end{center}
* the default value is None
\subsection{Semantics}
\subsubsection{State Variables}
\begin{itemize}
\item pmem.ring\_loc := the ring index where the pmem is located.
\item pmem.dihedrals := $[[D_1], [D_2], ..., [D_{n_G}]]$, where each $D_i$
is a list of length num\_atoms-3 of $\mathbb{Z}$ representing the dihedral
angles. If the constructor is called without a given set of dihedrals, then
the default will be to randomly fill in those values between MIN\_VALUE and
MAX\_VALUE.
\item pmem.fitness := float representing result of a fitg calculation for
the pmem's dihedral angles when combined with the other geometry
specifications in ring.parser.
\item pmem.birthday := $\{x:\mathbb{Z} | \text{num\_mevs} > x \geq 0\}$, the
mating event during which the pmem was generated
\end{itemize}
\subsubsection{Environment Variables}
None
\subsubsection{Assumptions}
After a pmem object is generated, its fitness will be calculated.
\subsubsection{Access Routine Semantics}
\noindent \_\_init\_\_(ring\_loc, num\_geoms, num\_atoms, current\_mev,
dihedrals=None):
\begin{itemize}
\item transition: generate a new pmem object.
\item output: None
\item exception: None
\end{itemize}
\subsubsection{Local Functions}
None
\section{MIS of Output} \label{section-output}
The output module is called by the GA Control module (\ref{section-gac}) to
produce output for the program. There are two main requirements: return the
best set of conformer geometries with full geometry specifications and
calculate their respective energies. Although it is not listed in the
requirements, this module will most likely be upgraded to include some
statistical measurements of the results.
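As an illustration, the ring statistics described under State Variables below
(total\_fit, average\_fit, best\_pmem, best\_fit) could be computed as follows
(sketch only; the helper name ring\_stats is hypothetical):
\begin{lstlisting}
# sketch only; the helper name ring_stats is hypothetical
def ring_stats(ring):
    filled = [(i, p) for i, p in enumerate(ring.pmems) if p is not None]
    total_fit = sum(p.fitness for _, p in filled)
    average_fit = total_fit / ring.num_filled
    best_pmem, best = max(filled, key=lambda pair: pair[1].fitness)
    best_fit = best.fitness
    return average_fit, best_fit, best_pmem
\end{lstlisting}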
\subsection{Module}
output
\subsection{Uses}
geometry (Section \ref{section-geometry}),
ring (Section \ref{section-ring}),
pmem (Section \ref{section-pmem})
\subsection{Syntax}
\subsubsection{Exported Constants}
OUTPUT\_FORMAT := ``xyz"
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{2cm} p{4cm} p{4cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
run\_output & Ring & $\mathbb{R}, \mathbb{R}$, list($\mathbb{R}$), Ring
& None* \\
\hline
\end{tabular}
\end{center}
*Need an exception in case the program does not have the permissions
needed to write to the output directory.
\subsection{Semantics}
\subsubsection{State Variables}
\begin{itemize}
\item total\_fit := the sum of all fitness values for pmems in the Ring
(that are not None); a non-negative $\mathbb{R}$
\item average\_fit := total\_fit / ring.num\_filled
\item best\_pmem := the ring index of the pmem with the highest fitness
value; an integer in $[0, \text{ring.num\_slots}-1]$
\item best\_fit := the best fitness value found in the ring; a non-negative
$\mathbb{R}$
\end{itemize}
\subsubsection{Environment Variables}
The output files with extension OUTPUT\_FORMAT will be written to the current
working directory.
\subsubsection{Assumptions}
None
\subsubsection{Access Routine Semantics}
\noindent run\_output(ring):
\begin{itemize}
\item transition: generate output files for the best conformer
geometries.
\item output: returns the average fitness value in the ring, the best
fitness value in the ring, the energies of the best geometries, and the
final ring object.
\item exception: raise an error if the user doesn't have write permissions
in the current working directory.
\end{itemize}
\subsubsection{Local Functions}
None
\section{MIS of Geometry} \label{section-geometry}
The geometry module is used by the output module (Section
\ref{section-output}), the ring module (Section \ref{section-ring}), and the
mol\_input module (Section \ref{section-mol_input}). It uses the external
program Vetee to make a Parser object that is used by the ring. The Parser
object can represent a few file formats: xyz, com, Gaussian log file (glog).
The Parser object can also be initialized using a SMILES string, a CID (the
compound identifier used by PubChem), or a molecule name. This module also
converts zmatrices to xyz coordinates and vice versa.
\subsection{Module}
geometry
\subsection{Uses}
None
\subsection{Syntax}
\subsubsection{Exported Constants}
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{2cm} p{4cm} p{4cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
generate\_parser & dict & Parser & NotImplementedError \\
zmatrix\_to\_xyz & str & list(list(str, $\mathbb{R}, \mathbb{R},
\mathbb{R}$)) & None \\
generate\_zmatrix & Parser, list(list($\mathbb{Z}$)) & str & None \\
\hline
\end{tabular}
\end{center}
\subsection{Semantics}
\subsubsection{State Variables}
The Vetee Parser object has the following attributes:
\begin{itemize}
\item comments: str
\item charge: charge of the molecule, $\mathbb{Z}$
\item multip: multiplicity of the molecule, $\mathbb{N}$
\item calc\_type: str (optimization, single-point, etc.)
\item coords: list(list(str, $\mathbb{R}, \mathbb{R}, \mathbb{R}$))
\item gkeywords: dict (keys are Gaussian keywords, values are the arguments
for the Gaussian keywords)
\item fpath: filepath for the input file, str
\item fname: filename for the input file, str
\end{itemize}
\subsubsection{Environment Variables}
None
\subsubsection{Assumptions}
None
\subsubsection{Access Routine Semantics}
\noindent generate\_parser(mol\_input\_dict):
\begin{itemize}
\item transition: None
\item output: Parser object.
\item exception: NotImplementedError : struct\_type is not covered by Vetee.
\end{itemize}
\noindent zmatrix\_to\_xyz(zmatrix):
\begin{itemize}
\item transition: None
\item output: list(list(atom type (str), x-coord, y-coord, z-coord))
representing the xyz coordinates and the atom types.
\item exception: None
\end{itemize}
\noindent generate\_zmatrix(parser, dihedrals):
\begin{itemize}
\item transition: None
\item output: zmatrix (str) that is the combination of the parser.coords
and the list(list(dihedral-angles)).
\item exception: None
\end{itemize}
\subsubsection{Local Functions}
None
\section{MIS of Energy} \label{section-energies}
Using the psi4 program, this module is responsible for running energy
calculations.
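A sketch of what prep\_psi4\_geom and run\_energy\_calc could look like is
shown below. The psi4 calls (psi4.geometry, psi4.set\_options, psi4.energy)
and the mapping of the restricted flag to the rhf/uhf reference are assumptions
based on psi4's Python API, not a description of the actual \progname{} code:
\begin{lstlisting}
# illustrative sketch; the psi4 calls are assumptions based on its Python API
import psi4

def prep_psi4_geom(coords, charge, multip):
    # coords is list(list(atom (str), x, y, z))
    lines = [f"{charge} {multip}"]
    lines += [f"{atom} {x} {y} {z}" for atom, x, y, z in coords]
    return "\n".join(lines)

def run_energy_calc(geom, method="scf", basis="aug-cc-pVTZ", restricted=False):
    # assumption: restricted/unrestricted maps to the rhf/uhf reference
    psi4.set_options({"reference": "rhf" if restricted else "uhf"})
    molecule = psi4.geometry(geom)
    return psi4.energy(f"{method}/{basis}", molecule=molecule)  # in hartree
\end{lstlisting}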
\subsection{Module}
energy
\subsection{Uses}
None
\subsection{Syntax}
\subsubsection{Exported Constants}
None
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{2cm} p{4cm} p{4cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
run\_energy\_calc & str, str, str, bool & $\mathbb{R}$ & None \\
prep\_psi4\_geom & list(list[str, $\mathbb{R}, \mathbb{R},
\mathbb{R}$]), $\mathbb{Z}, \mathbb{N}$ & str & None \\
\hline
\end{tabular}
\end{center}
\subsection{Semantics}
\subsubsection{State Variables}
\begin{itemize}
\item psi4\_str : str representing the geometry specification in the format
needed for a psi4 energy calculation.
\end{itemize}
\subsubsection{Environment Variables}
None
\subsubsection{Assumptions}
This module can be modified to support multiple programs depending on the
user's preference for calculation software.
\subsubsection{Access Routine Semantics}
\noindent run\_energy\_calc(geom, method=``scf", basis=``aug-cc-pVTZ",
restricted=False):
\begin{itemize}
\item transition: None
\item output: energy ($\mathbb{R}$) of the geometry with the specified QCM
and BS (which may be a restricted vs unrestricted calculation depending on
the restricted bool).
\item exception: None
\end{itemize}
\noindent prep\_psi4\_geom(coords, charge, multip):
\begin{itemize}
\item transition: None
\item output: str of the psi4 geometry specification needed to perform
calculations using a list of cartesian coordinates and atom names, the
charge, and the multiplicity of the molecule.
\item exception: None
\end{itemize}
\subsubsection{Local Functions}
None
\section{MIS of RMSD} \label{section-rmsd}
Uses the rmsd repository from GitHub to calculate the root-mean-square
deviation between all pairs of conformer geometries in a pmem. Each pair is
sent to this module independently as a pair of xyz files.
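A sketch of calc\_rmsd is given below; it assumes that the rmsd package
provides a calculate\_rmsd command-line entry point whose standard output is
the RMSD value (an assumption for illustration, not a documented fact of this
project):
\begin{lstlisting}
# illustrative sketch; assumes a `calculate_rmsd` command-line script
# whose standard output is the RMSD value
import subprocess

def calc_rmsd(f1, f2):
    out = subprocess.run(["calculate_rmsd", f1, f2],
                         capture_output=True, text=True, check=True)
    return float(out.stdout.strip())
\end{lstlisting}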
\subsection{Module}
rmsd
\subsection{Uses}
None
\subsection{Syntax}
\subsubsection{Exported Constants}
None
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{2cm} p{4cm} p{4cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
calc\_rmsd & str, str & $\mathbb{R}$ & None \\
\hline
\end{tabular}
\end{center}
\subsection{Semantics}
\subsubsection{State Variables}
\begin{itemize}
\item output := standard output from the rmsd module that is converted into
a float and returned to the fitg module.
\end{itemize}
\subsubsection{Environment Variables}
Both files are in Cartesian coordinates (they can also have a pdb file extension).
\begin{itemize}
\item f1 := file name 1 for the first geometry.
\item f2 := file name 2 for the second geometry.
\end{itemize}
\subsubsection{Assumptions}
The rmsd repository calculates a rotation matrix such that comparing molecules
that have only undergone translation and rotation gives an rmsd of 0.
\subsubsection{Access Routine Semantics}
\noindent calc\_rmsd(f1, f2):
\begin{itemize}
\item transition: open f1 and f2.
\item output: calculated rmsd.
\item exception: None
\end{itemize}
\subsubsection{Local Functions}
None
\bibliographystyle {plainnat}
\bibliography {../../../ReferenceMaterial/References}
%\section{Appendix} \label{Appendix}
\end{document} | {
"alphanum_fraction": 0.7368355816,
"avg_line_length": 32.100132626,
"ext": "tex",
"hexsha": "1a49532640e7e05b00c99c2a1d52fa15f3cd4fb1",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-08-05T13:51:37.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-11-14T20:38:50.000Z",
"max_forks_repo_head_hexsha": "8e433ee7d613d976b24875be987174e4ba1392d1",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "PeaWagon/Kaplan",
"max_forks_repo_path": "docs/Design/MIS/MIS.tex",
"max_issues_count": 47,
"max_issues_repo_head_hexsha": "8e433ee7d613d976b24875be987174e4ba1392d1",
"max_issues_repo_issues_event_max_datetime": "2019-05-16T19:21:49.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-09-14T04:13:03.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "PeaWagon/Kaplan",
"max_issues_repo_path": "docs/Design/MIS/MIS.tex",
"max_line_length": 92,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "8e433ee7d613d976b24875be987174e4ba1392d1",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "PeaWagon/Kaplan",
"max_stars_repo_path": "docs/Design/MIS/MIS.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-16T01:48:09.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-03-22T06:54:30.000Z",
"num_tokens": 13988,
"size": 48407
} |
\documentclass{article}
% General document formatting
\usepackage[margin=0.7in]{geometry}
\usepackage[parfill]{parskip}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{tikz}
\usepackage{fancyhdr}
\usepackage{listings}
\usepackage{multicol}
\usepackage{polynom}
\pagestyle{fancy}
\fancyhf{}
\rhead{Edgar Jacob Rivera Rios - A01184125}
\begin{document}
\begin{titlepage}
\newcommand{\HRule}{\rule{\linewidth}{0.5mm}} % Defines a new command for the horizontal lines, change thickness here
\center % Center everything on the page
%----------------------------------------------------------------------------------------
% HEADING SECTIONS
%----------------------------------------------------------------------------------------
\textsc{\LARGE Tecnológico de Monterrey}\\[1.5cm] % Name of your university/college
\textsc{\Large Fundamentos de computación}\\[0.5cm] % Major heading such as course name
%\textsc{\large Minor Heading}\\[0.5cm] % Minor heading such as course title
%----------------------------------------------------------------------------------------
% TITLE SECTION
%----------------------------------------------------------------------------------------
\HRule \\[0.4cm]
{ \huge \bfseries Homework 9}\\[0.4cm] % Title of your document
\HRule \\[1.5cm]
%----------------------------------------------------------------------------------------
% AUTHOR SECTION
%----------------------------------------------------------------------------------------
\begin{minipage}{0.4\textwidth}
\begin{flushleft} \large
\emph{Student:}\\
Jacob \textsc{Rivera} % Your name
\end{flushleft}
\end{minipage}
~
\begin{minipage}{0.4\textwidth}
\begin{flushright} \large
\emph{Professor:} \\
Dr. Hugo \textsc{Terashima} % Supervisor's Name
\end{flushright}
\end{minipage}\\[2cm]
% If you don't want a supervisor, uncomment the two lines below and remove the section above
%\Large \emph{Author:}\\
%John \textsc{Smith}\\[3cm] % Your name
%----------------------------------------------------------------------------------------
% DATE SECTION
%----------------------------------------------------------------------------------------
{\large \today}\\[2cm] % Date, change the \today to a set date if you want to be precise
%----------------------------------------------------------------------------------------
% LOGO SECTION
%----------------------------------------------------------------------------------------
\includegraphics[width=0.4\textwidth,height=\textheight,keepaspectratio]{logo-tec-negro.png} % Include a department/university logo - this will require the graphicx package
%----------------------------------------------------------------------------------------
\vfill % Fill the rest of the page with whitespace
\end{titlepage}
\section{Problems}
Solve the following problems:
\begin{enumerate}
\item What is a P problem?
A problem that can be solved by a deterministic algorithm in polynomial time.
\item What is an NP problem?
A problem whose solution can be verified in polynomial time; equivalently, one that can be solved in polynomial time by a non-deterministic machine.
\item What is an NP-complete problem?
These are the problems that are at least as hard as any problem in NP; that is, the problems that are both in NP and NP-hard.
\item Why P = NP is considered an open problem?
It is the question that arises from comparing the classes P and NP. It is known that P is included in NP, but we do not know whether every NP problem can also be solved in polynomial time, which would make P and NP the same class.
It is considered an open problem because there is no mathematical proof that NP is equal to, or different from, P.
\item What is the difference between a decision problem and an optimization problem?
A decision problem returns a simple yes/no answer, while an optimization problem tries to find the best values for certain defined parameters.
\item Investigate the 2-dimensional packing problem, describe the problem, and state it as a decision problem and then as an optimization problem?
In this problem, you are presented with a set of items of different dimensions and bins of fixed dimensions; the objective is to pack the items into as few bins as possible.
As a decision problem:\\
Can all of the items be packed into at most $n$ bins?
As an optimization problem:\\
Find a packing of the items that uses the smallest possible number of bins.
\item Describe a simple heuristic (approximation algorithm) for solving the 2-dimensional packing problem. How could you measure the effectiveness of this heuristic for solving the problem?
The characteristics of the items can guide the heuristic: for example, the average width and height of the remaining items can be used to decide whether to keep filling the current bin or to open a new one (a greedy, first-fit style strategy). Its effectiveness can be measured by the number of bins used, compared against a simple lower bound such as the total item area divided by the bin area.
\item Can you describe the Travelling Salesman Problem? How is this problem related to the Hamiltonian Cycle? Provide a polynomial transformation between those problems. How are these problems related
to the Vehicle Routing Problem?
The Travelling Salesman Problem (TSP) asks: given a list of cities and the distances between them, find the shortest tour that visits each city exactly once and returns to the starting city. It is closely related to the Hamiltonian Cycle problem, which asks whether a given graph contains a cycle that passes through each node exactly once. A polynomial transformation from Hamiltonian Cycle to TSP: given a graph $G$ with $n$ nodes, build a TSP instance on the same nodes where every edge of $G$ gets distance 1 and every missing edge gets distance 2; then $G$ has a Hamiltonian cycle if and only if the optimal tour has length $n$. The Vehicle Routing Problem is a generalization of the TSP in which several vehicles, starting from a depot, must jointly visit all the cities.
\item What is a phase transition?
It is an abrupt change in some property of a problem (for example, the probability that a random instance is solvable) as a parameter of the instances (such as the clause-to-variable ratio in random SAT) crosses a critical value.
\item How is a phase transition related to the easiness or hardness of NP problems?
Empirically, the hardest random instances of NP problems tend to be concentrated near the phase transition, while instances far from it are usually easy to solve; the location of the transition therefore indicates where an algorithm is likely to struggle.
\end{enumerate}
\end{document} | {
"alphanum_fraction": 0.6130060567,
"avg_line_length": 46.8208955224,
"ext": "tex",
"hexsha": "85684601c25adf11e8730c8981898c09c79dc02f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6945f257eb7ed22a97350a0f3af9153ff9caf0ec",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "edjacob25/ComputationalFundaments",
"max_forks_repo_path": "Homework9.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6945f257eb7ed22a97350a0f3af9153ff9caf0ec",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "edjacob25/ComputationalFundaments",
"max_issues_repo_path": "Homework9.tex",
"max_line_length": 476,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6945f257eb7ed22a97350a0f3af9153ff9caf0ec",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "edjacob25/ComputationalFundaments",
"max_stars_repo_path": "Homework9.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1346,
"size": 6274
} |
%!TEX root = main.tex
\chapter{Introduction}
\label{introduction}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{intro}
\caption[cover]
{Weird effects of digital signals in video. }
\label{fig:weirdVideo}
\end{figure}
\clearpage
This section tries to make sure we are all on the same page. It goes through some mathematics notation and principles of digital audio.\\
If you'd rather watch a video revision of digital audio (I mean, it's \the\year, of course you would), I can recommend \href{https://www.youtube.com/watch?v=cIQ9IXSUzuM}{D/A and A/D | Digital Show and Tell (Monty Montgomery @ xiph.org) }\footnote{https://www.youtube.com/watch?v=cIQ9IXSUzuM}. Some of the points in the video go beyond what we are doing here and vice versa, so don't rely purely on the video though. \\
Ideally, if you have thoroughly understood the introduction section, you should be able to go through the remaining chapters at a good tempo.\\
\section{About this document}
Please report any mistakes, errors etc to \href{mailto:[email protected]}{[email protected]}.\\
Some information in this document is relevant for understanding its contents but not relevant for the exam. For example, in chapter \ref{chap:modulation} we will find a really complicated result of an equation. The point there is just that it is complicated. It is shown how complicated, but it is nothing to be learned by heart. The importance of each part is indicated using the following formats:\\
\bgInfo{
Text, figures, and equations with a gray background like this are background information that is not to be learned by heart.}
\video{
This document tries to explain digital signals, mainly by means of audio signals. Sometimes video analogies are given. These are also not relevant for the exam.
}
\important{Very important things are framed.}
Internal links are gray. You can click them to jump to another point in this document, such as Equation \ref{eq:dft}.\\
External links are blue; these are mostly links to pages on the web. The actual link is always available as a footnote as well. \link{https://www.youtube.com/watch?v=57PWqFowq-4}{Here is an example}.\\
Max objects will always be shown in Courier New and square brackets, like this: \pd{metro}.\\
\subsection{Programming Languages}
This document gives examples in different programming languages: \link{https://cycling74.com/}{Max/MSP 8}, \link{https://www.python.org/}{Python} in the form of \link{https://jupyter.org/}{jupyter notebooks}, and \link{https://www.khronos.org/opengl/wiki/Core\_Language\_(GLSL)}{GLSL} in the form of \link{https://shadertoy.com/}{shadertoy} links.
\subsubsection*{Max/MSP}
This document was originally written for \textit{pd-extended}, a free open-source alternative to Max/MSP. It is now in the process of being migrated to Max/MSP version 8.0.0.
The following additional libraries are used:
\begin{itemize}
\item HISSTools Impulse Response Toolbox (HIRT) (mostly used for its improved spectrogram)
\item cv.jit (Computer vision)
\end{itemize}
Additionally the following libraries are recommended:
\begin{itemize}
\item Max ToolBox (Allowing for faster patching)
\item zsa.descriptors (Audio Feature extraction and analysis)
\item MuBu for Max (Advanced pattern recognition and audio analysis)
\end{itemize}
\subsubsection*{iPython}
This document is in the process of transporting most plots and examples to interactive \link{https://colab.research.google.com/notebooks/welcome.ipynb}{Colab Notebooks}, where one can see how each plot is made using Python and play around with the settings interactively. You can run these notebooks in the browser with a Google account (recommended) or install jupyter notebook on your machine to run them locally. Links to notebooks will be highlighted using a button-like badge. For example, click on the following button to get to the notebook for the introduction section: \colab{https://colab.research.google.com/github/hrtlacek/dspCourse/blob/master/notebooks/00\_Introduction.ipynb}.
% \link{https://colab.research.google.com/github/hrtlacek/dspCourse/blob/master/notebooks/00\_Introduction.ipynb}{click here}.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{img/notenookExample.png}
\caption[example of iPython notebook]
{example of iPython notebook}
\label{fig:ipython}
\end{figure}
\subsubsection*{GLSL}
\todo[inline]{unfinished}
In order to show examples in a widely used, industry-standard graphics language, GLSL (the OpenGL Shading Language) code is provided. Examples can be run interactively on \link{http://shadertoy.com}{shadertoy.com}. Such examples will be highlighted with shadertoy's logo. Click the following to get to an example shader: \shader{https://www.shadertoy.com/view/ws3fD4}
\section{How to Describe Media Signals Mathematically}
If we want to talk about signals, or if we want to analyze them, it is often useful to look at the problem mathematically. First, let's introduce some conventions. They might look unfamiliar or complicated, but in fact they are not, and knowing them makes it much easier to communicate (e.g. to read scientific papers about our topics or to explain something to another person).
\subsection*{Signals}
Mathematically speaking, the types of signals we will be working with (images, audio, video, control signals) are \textit{functions}. The simplest of these cases is a mono audio signal. It is a function of time $t$, mapping a point in time to a value. If we call our signal $x$ we can write $x(t)$. A digital gray-scale image complicates things a little, since we only get a single value if we provide two coordinates. For example, we could decide to call it $x$ and make it a function of the horizontal and vertical coordinates $u$ and $v$: $x(u,v)$. It is probably obvious that describing color video or multichannel audio starts to get a bit crowded\footnote{Take video for example. Video is a function of time and space (pixel coordinates). Color video has 3 to 4 channels. So, to retrieve a single value, we look at it as a function of time $t$, coordinates $u,v$ and channel number $c$: $x(t,u,v,c)$.}, so let's stick to mono audio for now.
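As a small illustration (Python/numpy, illustrative only), these ``signals as functions'' map directly onto array indexing:
\begin{lstlisting}
# illustrative numpy example of the "signals are functions" idea
import numpy as np

x = np.random.rand(1000)                  # mono audio: one value per index n
print(x[5])                               # x(5), the sample at n = 5

img = np.random.rand(480, 640)            # gray-scale image: x(u, v)
print(img[120, 64])                       # one pixel value

video = np.random.rand(25, 480, 640, 3)   # colour video: x(t, u, v, c)
print(video[0, 120, 64, 2])               # blue value of one pixel in frame 0
\end{lstlisting}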
\begin{figure}[H]
\centering
\input{plots/randSignal}
% \includegraphics[width=11cm]{sineScatter}
\caption[random signals]
{Two random signals: a one-dimensional one, similar to mono audio, and a two-dimensional one, similar to a digital image.}
\label{fig:randSigs}
\end{figure}
% Usually we describe a \textit{digital} signal by a name, say $x$, (but you can call it however you want). If we want to talk about the individual samples, or values of the (mono) signal, we can use a subscript or parenthesis. So if the fifth sample of $x$ is 1, we could write:
% \begin{equation}
% x_5=1
% \end{equation}
% or
% \begin{equation}
% x(5)=1
% \end{equation}
% Oftentimes we like to talk about a signal more generally and we use $n$ as a place holder for this index, so we might write $x_n$, meaning the nth sample of $x$.
\subsubsection*{Continuous Time vs Discrete Time}
In Figure \ref{fig:randSigs} we can see a plot of the function $x(t)$. Let's imagine for a moment that we have a simple sine wave as a function of time: $x(t) = \sin(t)$. This function is defined for all values of $t$. It is \textit{continuous}. The same is true for $x(t) = t$, for example. But in this document we concentrate on \textit{digital} signals. Digital signals are sampled at discrete points in time, so they are not defined for all values of $t$. If we choose a sampling rate $f_s$, and therefore a sampling interval $t_s = \frac{1}{f_s}$, we can create a new sampled \textit{discrete} function $x_s(n)$, where $n$ is any integer ($n \in \mathbb{Z}$). We define it so that it is exactly the same as our original function $x$ at the sample points:
\begin{equation}
x_s(n) = x(n \cdot t_s)
\end{equation}
Figure \ref{fig:sampledSine} tries to visualize this idea. In this document we will sometimes treat signals as continuous-time (writing $x(t)$) and sometimes as discrete-time ($x(n)$). This is done to keep things as simple as possible. In the end, on a computer, signals are always discrete-time signals.
\begin{figure}[h!]
\centering
\input{plots/sampling}
\caption[sampling]
{A plot of a continuous function of time $t$ and a discrete, sampled ($t_s = 0.2$~seconds) function of $n$.}
\label{fig:sampledSine}
\end{figure}
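In code, sampling simply means evaluating the continuous function at multiples of $t_s$. A small numpy sketch (illustrative only):
\begin{lstlisting}
# sampling a 1 Hz sine at fs = 30 Hz, i.e. x_s(n) = x(n * t_s)
import numpy as np

fs = 30.0               # sampling rate in Hz
ts = 1.0 / fs           # sampling interval in seconds
n = np.arange(30)       # one second worth of sample indices
x_s = np.sin(2 * np.pi * 1.0 * n * ts)   # the discrete-time signal
\end{lstlisting}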
\section{About plotting signals}
We will need to plot a lot of signals in order to understand them better. Most of the time, such a plot will look like figure \ref{fig:simpeSine}.
\begin{figure}[h!]
\centering
\input{plots/sinPlot1}
% \includegraphics[width=11cm]{simpleSine}
\caption[simple sine plot]
{Sine wave, 1Hz, sampled at 30Hz sample rate}
\label{fig:simpeSine}
\end{figure}
This plot looks nice but it has a problem. As just discussed, this sine wave is sampled at a sampling rate of 30 Hz, but we see a continuous line. This ``connecting of the dots'' is created by the plotting. It is somewhat similar to what our \textit{digital-to-analog converter} does: it somehow\footnote{linear interpolation in the case of the plot} interpolates the values we have. \\
This can be misleading, so we should actually plot something more like figure \ref{fig:scatter}. We often see plots that look like figure \ref{fig:stem} as well when a signal is analyzed.
\begin{figure}[H]
\centering
\input{plots/sinPlotScatter}
% \includegraphics[width=11cm]{sineScatter}
\caption[Sine scatter-plot]
{Sine wave, 1Hz, sample rate 30Hz, scatter-plot}
\label{fig:scatter}
\end{figure}
\begin{figure}[H]
\centering
\input{plots/sinPlotStem}
% \includegraphics[width=11cm]{sineStem}
\caption[sine stem plot]
{Sine wave, 1Hz, sample rate 30Hz, stem-plot}
\label{fig:stem}
\end{figure}
So why don't we always do a stem- or scatter-plot? Simply because it gets too crowded at our usual audio sampling rates, as figure \ref{fig:sine1HzStemPlot} shows. It only works for very low sampling rates or very short signals.\\
\begin{figure}[H]
\centering
\includegraphics[width=11cm]{stem44100}
\caption[sine stem plot, crowded]
{Sine wave, 1Hz, sample rate 44100Hz, stem-plot}
\label{fig:sine1HzStemPlot}
\end{figure}
\textbf{But we should never forget that we don't actually have the values in between the dots. Digital signals are not defined between their sampled points.}
You can look at how these signals were generated and plotted in this Notebook:\colab{https://colab.research.google.com/github/hrtlacek/dspCourse/blob/master/notebooks/00\_Introduction.ipynb}
% \link{https://colab.research.google.com/github/hrtlacek/dspCourse/blob/master/notebooks/00\_Introduction.ipynb}{in this Notebook}
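As a hedged sketch of the three plotting styles (assuming NumPy and Matplotlib are available; this is not the code used to generate the figures above), compare a line plot, a scatter plot and a stem plot of the same sampled sine:
\begin{lstlisting}
import numpy as np
import matplotlib.pyplot as plt

f_s = 30.0                                # low sample rate keeps the dots visible
n = np.arange(0, 30)
x = np.sin(2 * np.pi * 1.0 * n / f_s)     # 1 Hz sine, sampled at 30 Hz

fig, axes = plt.subplots(3, 1, sharex=True)
axes[0].plot(n, x)      # line plot: interpolates between the samples
axes[1].scatter(n, x)   # scatter plot: only the samples we actually have
axes[2].stem(n, x)      # stem plot: common in DSP literature
plt.show()
\end{lstlisting}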
\subsection{Looking at Signals in Max}
\todo[inline]{todo}
\begin{figure}[H]
\centering
% \input{plots/aliasTime}
\includegraphics{metering}
\caption[metering]
{Metering Audio in Max}
\label{fig:meteringAudio}
\end{figure}
\begin{figure}[H]
\centering
% \input{plots/aliasTime}
\includegraphics[width = \textwidth]{img/lookAtMatrix.png}
\caption[matrix inspection]
{Inspecting Jitter Matrices in Max}
\label{fig:matrixInspection}
\end{figure}
\section{What is aliasing?}
Aliasing in audio means problems caused by signal components that exceed the Nyquist frequency.\\
The Nyquist frequency, let's call it $f_n$ for now, is defined as half of the sample rate ($f_s$). So,
\begin{equation}
f_n=\frac{f_s}{2}
\end{equation}
A digital system can only describe signals up to its Nyquist frequency. If we try to make signals higher than this frequency, we will fail and encounter strange effects. So the idea is that the highest frequency in a signal, $f_{max}$, should be lower than half the sampling rate. This is called the \textit{Nyquist criterion}:\\
\begin{equation}
f_{max} < \frac{f_s}{2}
\end{equation}
This is the same as saying the sampling rate must be more than double the highest frequency in the signal of interest:
\begin{equation}
f_{max}\cdot2 < f_s
\end{equation}
Visually speaking, frequencies higher than nyquist fold back. So, let's assume we have a sampling rate of 100Hz. Nyquist would be at 50Hz. If we try to synthesize a sine wave with 51Hz, what we will get is a 49Hz one. If we try to make a 52Hz one, we will get 48Hz. So you see, it simply folds back. You can find an interactive example \link{https://jackschaedler.github.io/circles-sines-signals/sampling.html}{here}.
\begin{framed}
\textbf{Video analogies}\\
Aliasing in graphics usually means \textit{spatial} aliasing, so aliasing in the space domain. This is what we see in figure \ref{fig:spAlias}. But there is also time domain aliasing in film. It is actually quite natural to think of the sampling rate in audio as the analogue of the frame rate in video. For some really weird effects that arise in video due to time domain aliasing see \href{https://www.youtube.com/watch?v=LVwmtwZLG88}{airplane}\footnotemark , \href{https://www.youtube.com/watch?v=GBtHeR-hY9Y}{Water experiment}\footnotemark or \href{https://www.youtube.com/watch?v=jcOKTTnOIV8}{Guitar strings}\footnotemark .
The \textit{moiré pattern} is also an example of aliasing. Figures \ref{fig:moire1} and \ref{fig:moire2} are stills generated with a shader you can find here: \shader{https://www.shadertoy.com/view/NdKSDz}. It essentially renders a varying number of circles, resulting in aliasing.
\begin{center}
% \centering
\includegraphics[width=7cm]{moire1.png}
\captionof{figure}{A couple of circles without moiré pattern.}
\label{fig:moire1}
\end{center}
\begin{center}
% \centering
\includegraphics[width=7cm]{moire2.png}
\captionof{figure}{Many more circles, resulting in a moiré pattern.}
\label{fig:moire2}
\end{center}
\begin{center}
% \centering
\includegraphics[width=7cm]{spacialAliasing1.png}
\captionof{figure}{Spatial aliasing in graphics. The ``frequency'' of the intended pixels is too high for the actual pixels.}
\label{fig:spAlias}
\end{center}
The particularly strange effects in the airplane example above are caused by the rolling shutter of a CMOS sensor. Since it is not sampling the incoming light uniformly (at the same time) the image is distorted.
\end{framed}
\footnotetext[2]{https://www.youtube.com/watch?v=LVwmtwZLG88}
\footnotetext[3]{https://www.youtube.com/watch?v=GBtHeR-hY9Y}
\footnotetext[4]{https://www.youtube.com/watch?v=jcOKTTnOIV8}
In figure \ref{fig:cosAlias} you can see a visualization of aliasing in the time domain.
\begin{figure}[H]
\centering
\input{plots/aliasTime}
% \includegraphics[width=\textwidth]{aliasingSine}
\caption[Aliasing]
{Aliasing visualized in the time domain. A sampling rate of 4 Hz is used, therefore frequencies fold above 2 Hz, which is the Nyquist frequency. The input cosine has a frequency of 3 Hz, labeled ``Original Signal''. What we actually get are the sampled points. If these are digital-to-analog converted, we get what is labeled ``Aliased signal'' in this plot, i.e. a 1 Hz cosine.}
\label{fig:cosAlias}
\end{figure}
\begin{figure}[H]
\begin{center}
\input{plots/aliasFreq}
% \includegraphics[width = \textwidth]{aliasingFreqDomain.png}
\caption[Aliasing in the Frequency Domain]
{Here we can see the phenomenon of aliasing in the frequency domain. We try to synthesize a sine wave at 120 Hz. At a sampling rate of 200 Hz this is not possible, since Nyquist is at 100 Hz. What we actually get is a sine wave at about 80 Hz. In this plot, the Nyquist frequency is marked with the red line. Please note that the distance from the red line to both peaks is identical. This is why the phenomenon is also called \textit{fold-back}.}
\label{fig:aliasingFreqDomain}
\end{center}
\end{figure}
But how is the aliased frequency calculated? For a result that is also correct in phase when you go past the Nyquist frequency two or more times, please look at \link{https://colab.research.google.com/drive/10NRGmmLRGxy\_3j7ecGKcME0gkQKmCDOW}{this Notebook}. For the case where we exceed the Nyquist frequency only once, we can simply calculate:
\begin{equation}
f_a = f_s-f_o
\end{equation}
Where $f_a$ is the frequency we will hear, $f_o$ is the intended frequency and $f_s$ is the sampling rate.
\bgInfo{
For cases where we pass Nyquist multiple times, things get a bit more complicated. If we went past Nyquist an odd number of times, we can use the following equation, which makes use of the modulo operation. That is, this equation is valid between $f_n$ and $f_s=2f_n$, between $3f_n$ and $4f_n$, and so on:
\begin{equation}
f_a = (f_s-f_o) \bmod \frac{f_s}{2}
\end{equation}
Otherwise, we have to use the following equation:
\begin{equation}
f_a = f_o \bmod \frac{f_s}{2}
\end{equation}
}
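As a small sanity check of these formulas, here is a hedged Python sketch (the function name is made up for illustration) that computes the magnitude of the folded frequency for any number of folds:
\begin{lstlisting}
def aliased_frequency(f_o, f_s):
    # frequency we actually get when trying to synthesize f_o (Hz)
    # at sample rate f_s (Hz); frequencies fold back around f_s / 2
    return abs(f_o - f_s * round(f_o / f_s))

print(aliased_frequency(51, 100))    # -> 49  (one fold, f_s - f_o)
print(aliased_frequency(120, 200))   # -> 80  (as in the frequency domain figure)
print(aliased_frequency(30, 100))    # -> 30  (below Nyquist: unchanged)
\end{lstlisting}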
\section{Scaling and Mapping Signals}
It is an important skill to be able to scale signals from one range to another. We need it a lot, and we will be able to think about signals more easily once we have mastered this task. It's actually quite simple; we just have to imagine the signals visually.\\
So what exactly do we have to do here? We are confronted with the following problem: given some signal, say a sine wave with its maximum at the value 1 and its minimum at the value -1, how do we bring it to a different range, say 0 to 10?\\
It helps a lot to solve this problem in two parts: first get the input into the range 0-1, then from there go to the desired range. What can we do to the signal? Let's take a sine wave like the one in Figure~\ref{fig:aSine}. Well, we can add and subtract to move the wave vertically, so let's add 1 to move it up; have a look at Figure~\ref{fig:aShiftedSine}.
\begin{figure}[H]
\centering
\input{plots/stdSine}
% \includegraphics[width=11cm]{stdSine.png}
\caption[a sine wave, $f=2Hz$]
{a sine wave, $f=2Hz$}
\label{fig:aSine}
\end{figure}
\begin{figure}[H]
\centering
\input{plots/sinePlusOne}
% \includegraphics[width=11cm]{sinePlusOne.png}
\caption[a sine wave plus 1]
{the same sine wave, $f=2Hz$, with 1 added to each sample, therefore shifted upwards.}
\label{fig:aShiftedSine}
\end{figure}
So we can move signals around by adding constant values. We can scale them by multiplication. So if we take our sine that now ranges from 0 to 2 and multiply it by 0.5, we get what's in figure \ref{fig:sine01}.
\begin{figure}[H]
\centering
% \includegraphics[width=11cm]{sine01.png}
\input{plots/sine01}
\caption[sine 0 to 1]
{Sine, $f=2Hz$, with a range of 0-1. Obtained by taking a sine wave, adding 1 and dividing by two afterwards.}
\label{fig:sine01}
\end{figure}
\hspace{1cm}
Using what we have now in figure \ref{fig:sine01}, we can just multiply by 10, easy! Be aware that there are always multiple solutions to this kind of problem. Try to find another one for the problem above!
\begin{question}
Let's take a sine wave that has its minimum at 2 and its maximum at 5. What do we have to do to get it into a -1 to 1 range?
\end{question}
\begin{Answer}
We could subtract 3.5 to center the wave around zero first. Afterwards we take care of the amplitude by multiplying by $\frac{2}{3}$ (since the initial wave has a peak-to-peak amplitude of 3 and we want a peak-to-peak amplitude of 2).
\end{Answer}
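The same two-step idea (shift, then scale) can be written as a small, general helper. This is a hedged Python sketch; the function name \texttt{map\_range} is just an illustration:
\begin{lstlisting}
def map_range(x, in_min, in_max, out_min, out_max):
    # linearly map x from [in_min, in_max] to [out_min, out_max]
    normalized = (x - in_min) / (in_max - in_min)      # step 1: bring it into 0..1
    return normalized * (out_max - out_min) + out_min  # step 2: go to target range

# the sine from the question: minimum 2, maximum 5, mapped to -1..1
print(map_range(2.0, 2, 5, -1, 1))   # -> -1.0
print(map_range(3.5, 2, 5, -1, 1))   # ->  0.0
print(map_range(5.0, 2, 5, -1, 1))   # ->  1.0
\end{lstlisting}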
\section{What's DC-Offset?}
What we did above by adding a constant value to a signal can be called adding DC-offset (in German: ``Gleichspannungsversatz''), DC-bias or a DC component. These are different words for the same thing.\\
DC-offset can also be encountered in recorded signals (mainly caused by old or broken equipment). But as we have seen, we can also generate DC-offset ourselves.\\
To state it again clearly:
\begin{framed}
DC-Offset is a constant value over time, or a constant offset from the zero value in Y. So for example in figure \ref{fig:sine01} we see a DC-offset of 0.5 since subtracting this value would center the wave around 0.
\end{framed}
If we think about how this kind of signal looks in the frequency domain, we find that it is energy at 0 Hz. In figure \ref{fig:dcViz} you can see a couple of cosine waves plotted. This should help you imagine that a constant signal, a signal that does not move at all, can be described by a cosine with a frequency of 0 Hz and a certain amplitude.
\begin{figure}[h!]
\centering
\input{plots/dcOffsetViz}
% \includegraphics[width=11cm]{dcOffsetViz}
\caption[Cosines to DC]
{Cosines with different, very low frequencies approaching 0 Hz, i.e. DC offset.}
\label{fig:dcViz}
\end{figure}
Let's quickly state this differently, so we can appreciate the surprising connection between DC-offset and an impulse. A DC-offset signal is a signal consisting of constant values, so for example $\{...,1,1,1,1,1,1,...\}$. Its spectrum is a single peak at 0 Hz, so the spectrum's values look like $\{..., 0,0,0,1,0,0,0,...\}$.\footnote{Why is the $1$, i.e. the 0 Hz component, not at the beginning of the list? Or is it not 0 Hz? The 1 is supposed to be at 0 Hz, and it is centered in this list because an FFT actually always gives us a two-sided spectrum. You can ignore this fact until the FFT chapter and pretend that the list starts with the 1 (which wouldn't be wrong either).}
\begin{figure}[H]
\begin{center}
\includegraphics{dcPd.png}
\caption[Dc-offset in Max]
{One of the ways to generate a DC signal in Max.}
\label{fig:dcOffsetPD}
\end{center}
\end{figure}
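A quick, hedged NumPy sketch (the values are chosen only for illustration) shows that the DC component of a signal is simply its mean value, and that subtracting the mean centers the wave around zero again:
\begin{lstlisting}
import numpy as np

f_s = 100
t = np.arange(0, 1, 1 / f_s)
x = 0.5 * np.sin(2 * np.pi * 2 * t) + 0.5   # sine in the 0..1 range: DC offset of 0.5

dc = np.mean(x)          # the DC component is (approximately) the mean value
x_centered = x - dc      # subtracting it removes the offset
print(round(dc, 3))      # -> 0.5
\end{lstlisting}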
% \comm{explanations and visualizations missing}
\section{What's an Impulse?}
Impulses are very useful signals. We can produce an impulse using the \pd{click\textasciitilde} object.
\begin{figure}[H]
\begin{center}
\includegraphics{dirac.png}
\caption[The click object]
{The \pd{click\textasciitilde} object produces an impulse if we send it a \texttt{bang}.}
\label{fig:impulseInPd}
\end{center}
\end{figure}
\bgInfo{
Different people will define an impulse in different ways:
\begin{itemize}
\item A sound engineer might tell you that clapping your hands or firing a gun creates an impulse.
\item A mathematician will maybe give you the definition of the \textit{Dirac delta function}. A Dirac delta is a continuous-time ``function'' of infinite height and infinitely short duration. Its integral (area under the curve) is 1.
\item Somebody working with discrete (so digital) signals will rather give you the definition of the Kronecker delta function\footnote{It's very common and kind of wrong to call digital impulses ``Dirac functions'' or ``Dirac impulses''. Sometimes people like to use ``big'' words. So instead of saying ``impulse'', they say ``Dirac delta function''. Dirac delta functions have importance in mathematics and analysis. But since they have infinite value and are infinitely short they can't exist in reality. They are a theoretical idea. Therefore, more often than not, the term is used a bit incorrectly in an audio context.}:
\begin{equation}
\delta(i) = \begin{cases}
0, & \mbox{if } i \ne 0 \\
1, & \mbox{if } i=0 \end{cases}
\end{equation}
\end{itemize}
}
For us, strictly speaking, an impulse will be a signal that contains only zeros except for one sample with the value one (so we will mean the Kronecker delta function if we say `impulse'). We can see such an impulse in figure \ref{fig:unitImpulse}. On the x-axis, we have the time in samples\footnote{Don't be irritated by the fact that the sample numbers on the x-axis range from $-5$ to $5$. You could just as well imagine them going from 0 to 10 or 1 to 11. It doesn't really matter. However, this way of displaying an impulse is very common since, if the impulse is filtered, its symmetry is an important factor.}.
\begin{figure}[H]
\begin{center}
\input{plots/unitImpulse}
% \includegraphics[width = \textwidth]{unitImpulse.png}
\caption{The unit impulse function.}
\label{fig:unitImpulse}
\end{center}
\end{figure}
So in the time domain, an impulse's samples have the values $\{...,0,0,0,1,0,0,0,...\}$. If we look at its spectrum, we will find that it contains all frequencies: there is just a flat line in the spectrum (you can also see an impulse's spectrum in figure \ref{fig:impToDc}). The spectrum has a \textit{constant value}, something like $\{...,1,1,1,1,1,1,...\}$. Do these two lists look familiar? Right, it is exactly the opposite of the DC-offset signal.
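We can verify this duality with a tiny, hedged NumPy sketch (an 8-point FFT is used only to keep the printed lists short):
\begin{lstlisting}
import numpy as np

N = 8
impulse = np.zeros(N)
impulse[0] = 1.0               # {1, 0, 0, ...}: the unit impulse
dc = np.ones(N)                # {1, 1, 1, ...}: pure DC offset

print(np.abs(np.fft.fft(impulse)))  # [1. 1. 1. 1. 1. 1. 1. 1.]  flat spectrum
print(np.abs(np.fft.fft(dc)))       # [8. 0. 0. 0. 0. 0. 0. 0.]  energy only at 0 Hz
\end{lstlisting}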
\section{Working with Sine Waves}
We use sine waves a lot. The notation can get a bit overwhelming at first, so let's quickly explain what's going on in a standard sine wave oscillator.
\begin{equation}
x(t) = A \cdot sin(2\pi f t + \phi)
\end{equation}
Where $f$ is the frequency in Hertz and $t$ is the time in seconds. $\phi$ is a (possibly constant) phase offset and $A$ can be used to scale the whole thing. Since cosine and sine have their peaks at $-1$ and $1$, $A$ will be the amplitude. If $A$ is set to 0.5, the resulting signal will have its peaks at $-0.5$ and $0.5$. \\
The equation above is rather complete. Often we will just ignore the phase as well as the amplitude and simply write:
\begin{equation}
x(t) = sin(2\pi f t )
\end{equation}
Sometimes, we will simplify even more and just write $sin(a)$ or $sin(b)$ or similar.\\
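As a hedged sketch of how these equations turn into actual samples (assuming NumPy; the 440 Hz and 44100 Hz values are just example choices), one second of $x(t) = A \cdot sin(2\pi f t + \phi)$ could be generated like this:
\begin{lstlisting}
import numpy as np

f_s = 44100                     # example sample rate in Hz
A, f, phi = 0.5, 440.0, 0.0     # amplitude, frequency in Hz, phase in radians

n = np.arange(f_s)              # one second worth of sample indices
t = n / f_s                     # time in seconds for every sample
x = A * np.sin(2 * np.pi * f * t + phi)
\end{lstlisting}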
\begin{mdframed}[backgroundcolor=black!10,rightline=false,leftline=false]
In the literature, we sometimes encounter $x(t) = sin(t \cdot \omega)$. $\omega$ stands for a ``frequency'' (actually for the angular velocity) in \textit{radians per second}. $2\pi$ radians per second correspond to 1 Hz, and we can therefore convert radians per second to frequency, $v$, with:
\begin{equation}
v = \omega / 2\pi
\end{equation}
This notation is only mentioned because it is very common; it will not be used here, since working with frequency in Hz is considered more intuitive.
\vspace{0.5cm}
\textbf{What is actually the significance of using either $sin$ or $cos$?}\\
In most cases for us, this does not matter at all. The difference between sine and cosine is just a phase shift. Since we are most of the time describing a sine (or cosine) wave oscillator as a function of time, we can think of the sine as a slightly time-shifted version of the cosine and vice versa. This minute difference is not audible. \\
That being said, there are certain formulas that consist of sine and cosine terms, and that only work that particular way. For example, Euler's famous formula, $e^{ix}=cos(x)+i\cdot sin(x)$. We can't interchange $sin$ and $cos$ here.
\end{mdframed}
% \comm{work in progress.}
\section{Describing Systems}
There are many ways to describe systems. Digital LTI systems (linear, time-invariant systems) can be described using
\begin{itemize}
\item difference equations
\item block diagrams
\item transfer functions
\item and other things.
\end{itemize}
For us, the most important ways to describe a system are block diagrams, difference equations and code of course.
We will talk more about this in Chapter \ref{chap:filters}. Here, we will just look at how \textit{difference equations} work. They are the discrete equivalent of differential equations. It's really not that complicated:
\begin{equation}
y(n) = x(n)
\end{equation}
would be an equation that describes a system that does nothing: it takes the input sample $x(n)$, does nothing to it, and uses it as the output sample $y(n)$.
Another really simple system would multiply its input by two:
\begin{equation}
y(n) = x(n)\cdot 2
\end{equation}
It gets interesting once we start playing with the index:
\begin{equation}
y(n) = x(n-1)
\end{equation}
is a delay by one sample.
\begin{equation}
y(n) = x(n)+x(n-1)
\end{equation}
takes its input sample and the previous input sample, adds them together and outputs the result. Can you imagine what this does? We'll get to it in chapter \ref{chap:filters}...
This is just a preview, but it really gets interesting if we start working with feedback:
\begin{equation}
y(n) = x(n)+y(n-1)
\end{equation}
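To see what these difference equations do, here is a hedged Python sketch (the function names are invented for illustration) that evaluates the feed-forward equation $y(n) = x(n)+x(n-1)$ and the feedback equation $y(n) = x(n)+y(n-1)$ sample by sample, fed with an impulse:
\begin{lstlisting}
def feedforward(x):
    # y(n) = x(n) + x(n-1), assuming x(-1) = 0
    y, x_prev = [], 0.0
    for x_n in x:
        y.append(x_n + x_prev)
        x_prev = x_n
    return y

def feedback(x):
    # y(n) = x(n) + y(n-1), assuming y(-1) = 0
    y, y_prev = [], 0.0
    for x_n in x:
        y_prev = x_n + y_prev
        y.append(y_prev)
    return y

impulse = [1.0, 0.0, 0.0, 0.0, 0.0]
print(feedforward(impulse))   # -> [1.0, 1.0, 0.0, 0.0, 0.0]
print(feedback(impulse))      # -> [1.0, 1.0, 1.0, 1.0, 1.0]
\end{lstlisting}
Notice that the feedback version keeps outputting ones forever after a single impulse, which already hints at why feedback needs to be handled with care.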
\section{Message Domain/Signal Domain}
This is not Max/MSP specific although it might sound like it.
In Max/MSP we can have audio signals. Audio signals are processed in buffers. They are just numbers, but these numbers are computed at a rate of 44100 Hz \textit{all the time} if we choose to set our sample rate to 44100 Hz.\\
Messages, on the other hand, are not computed that often and not all the time. Messages are processed \textit{on demand}. They are event based. This means that, for example, if we hit a note on our MIDI keyboard or enter a number in a \pd{Numberbox}, this \textit{event} will pass through the objects following it \textit{once}. Have a look at figure~\ref{fig:mesSig} or at the patcher. Also note that Max helps us see whether we are dealing with the message or the signal domain:
\begin{itemize}
\item Max indicates the signal domain with colored, thicker patch cords
% \item signal domain cables offer
\item Objects that deal with the signal domain have a tilde (\textasciitilde ) at the end of their name.
\end{itemize}
% by indicating the signal domain with thicker patch cords and black inlets/outlets on the objects (look closely).
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{messageDomainSignalDomain}
\caption[message domain vs. signal domain]
{The patcher \href{./patchers/00\_introduction/00\_messageDomainSignalDomain.maxpat}{\texttt{00\_messageDomainSignalDomain.maxpat}} should demonstrate the differences between message domain and signal domain in Max/MSP.}
\label{fig:mesSig}
\end{figure}
\paragraph{How is this not specific to Max/MSP?} The key points here are not specific to this programming language. If we, for example, adjust the volume of a track in Pro Tools, Ableton Live or similar, the information about our fader movement will also live in some kind of message domain.\\
One key aspect here is: some kind of conversion is necessary if we want to switch between the two domains. If we want to control the volume of an audio signal from the message domain we have a problem: we will get noisy output, since the message domain is not running at the sample rate. Look at a naive attempt at controlling the amplitude of a sine wave:
\begin{figure}[H]
\centering
\includegraphics{ampWrong}
\caption[patcher \texttt{00\_ampWrong.maxpat}]
{The patcher \href{./patchers/00\_introduction/00\_ampWrong.maxpat}{\texttt{00\_ampWrong.maxpat}}}
\label{fig:ampWrongRight}
\end{figure}
And let's look at what kind of waveform it will produce:
\begin{figure}[H]
\centering
\input{plots/ampWrongViz}
% \includegraphics[width=\textwidth]{ampWrongViz}
\caption[Amplitude control, message domain ]
{This is a cosine wave at 20 Hz with the message domain running at a 100 millisecond interval, to make the effect more extreme and visible.}
\label{fig:msgDomainAmpControl}
\end{figure}
This is a problem, since the sudden jumps produce clicks, crackling or zipper noise (reproduce it, or open the patcher and hear it for yourself!). These clicks are caused by the fact that the message domain just doesn't produce
the numbers at the sampling rate. It's too slow, so to speak.\\
In Max/MSP (but also in general) this problem can be solved by interpolation. Max/MSP offers the \pd{line\textasciitilde} object for this situation. It can be used to create signals at the sampling rate, and we can tell it to ramp to a specified value in a given amount of time. In figure \ref{fig:ampRight} we see it ramping to any value that comes from the \pd{/ 127.} object within 20 milliseconds\footnote{20 ms is just an example. It is a reasonable time for this kind of interpolation but really, it is just an example. It could just as well be 50 ms.}. This interpolation reduces the clicks and noise dramatically. Note that the output of \pd{line\textasciitilde} is in the signal domain (thick cable).
\begin{figure}[H]
\centering
\includegraphics{ampRight}
\caption[Amplitude control, signal domain]
{Controlling the amplitude of an oscillator. The interpolation done with \pd{line\textasciitilde} suppresses clicks. }
\label{fig:ampRight}
\end{figure}
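Outside of Max, the same effect can be sketched in Python/NumPy (this is only an illustration of the idea, not the Max implementation): the stepped amplitude jumps every 100 ms and produces clicks, while a ramped amplitude, similar in spirit to \pd{line\textasciitilde}, does not:
\begin{lstlisting}
import numpy as np

f_s = 44100
t = np.arange(f_s) / f_s
carrier = np.sin(2 * np.pi * 440.0 * t)

# "message domain": a new amplitude value only every 100 ms, held in between
targets = np.linspace(0.0, 1.0, 11)
stepped = np.repeat(targets, f_s // 10)[:f_s]

# "signal domain" ramp: interpolate towards each new value instead of jumping
ramped = np.interp(t, np.linspace(0.0, 1.0, 11), targets)

clicky = carrier * stepped   # audible steps / zipper noise at every jump
smooth = carrier * ramped    # interpolation removes the jumps
\end{lstlisting}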
\section{Key Points}
\begin{itemize}
\item Make sure you understand and know the mathematical notation. It will be how we talk in the remaining chapters.
\item Make sure you understand what aliasing means in audio.
\item Make sure you are comfortable with basic mathematical operations on signals, such as adding and multiplying with constants.
\item Make sure you understand the difference between the message and the signal domain, and the problem with using a message domain ``signal'' to control an audio stream.
\item Make sure you know what the spectrum and time domain signal of an impulse and DC-Offset look like.
\end{itemize}
% !TeX root = ../report-phd-first-year.tex
% !TeX encoding = UTF-8
% !TeX spellcheck = en_GB
\section*{Publications}
The following are all the published papers:
\begin{itemize}
\item
\textbf{Title:} Performance Evaluation of Fischer's Protocol through Steady-State Analysis of Markov Regenerative Processes \cite{martina2016performance}\\
\textbf{Authors:} Stefano Martina, Marco Paolieri, Tommaso Papini, Enrico Vicario\\
\textbf{Conference:} Modeling, Analysis and Simulation of Computer and Telecommunication Systems, \acused{MASCOTS}\ac{MASCOTS} 2016
\item
\textbf{Title:} Exploiting Non-deterministic Analysis in the Integration of Transient Solution Techniques for Markov Regenerative Processes \cite{biagi2017exploiting}\\
\textbf{Authors:} Marco Biagi, Laura Carnevali, Marco Paolieri, Tommaso Papini, Enrico Vicario\\
\textbf{Conference:} International Conference on Quantitative Evaluation of Systems, \acused{QEST}\ac{QEST} 2017
\item
\textbf{Title:} An Inspection-Based Compositional Approach to the Quantitative Evaluation of Assembly Lines \cite{biagi2017inspection}\\
\textbf{Authors:} Marco Biagi, Laura Carnevali, Tommaso Papini, Kumiko Tadano, Enrico Vicario\\
\textbf{Conference:} European Workshop on Performance Engineering, \acused{EPEW}\ac{EPEW} 2017
\end{itemize}
\newpage
% Copyright 2013-2016 by Charles Anthony
%
% All rights reserved.
%
% This software is made available under the terms of the
% ICU License -- ICU 1.8.1 and later.
% See the LICENSE file at the top-level directory of this distribution and
% at https://sourceforge.net/p/dps8m/code/ci/master/tree/LICENSE
\documentclass[notitlepage]{report}
\usepackage {listings}
\usepackage{ulem}
\renewcommand{\thesection}{\arabic{part}.\arabic{section}}
\makeatletter
\@addtoreset{section}{part}
\makeatother
\begin{document}
\title{Implementation Notes}
%\author{}
%\date{}
\maketitle
\part{Terminology}
\section{Hardware}
\subsection{History}
\subsubsection{Systems}
\begin{description}
\item[GE-600 Series] \hfill \\
\begin{description}
\item[GE-635]
\item[GE-645] GE-635 with Multics support
\item[GE-655] Integrated circuit version of GE-635
\end{description}
\item[Honeywell 6000 Series] Honeywell version of GE-600 series
\begin{description}
\item[6170] GE-655
\item[6030, 6050] Reduced performance versions
\item[6025, 6040, 6060, 6080] Added EIS
\item[6180] GE-645 Hardware support for Multics
\item[Level 6]
\item[Level 61]
\item[Level 62]
\item[Level 64]
\item[Level 66] Honeywell's designation for the large-scale computers that ran GCOS, a repackaging of the “6000” line, later called the DPS-8/66.
\item[Level 68] Honeywell's designation for the large-scale computers that ran Multics.
\item[DPS-6] Another name for Level 6
\item[DPS-6plus] version of Level 6 that ran a secure system
\item[DPS-8] DPS8 is functionally identical to Level 68, but faster.
\item[DPS-8M]
\item[DPS-68]
\item[DPS-88] Later name for the Honeywell ADP and Orion machines. Never ran Multics.
\item[DPS-90, DPS-9000] No Multics support
\end{description}
From the AL39 Multics Processor Manual:
\begin {quote}
\ldots processors used in the Multics system. These are the DPS/L68, which
refers to the DPS, L68 or older model processors (excluding the GE-645) and
DPS 8M, which refers to the DPS 8 family of Multics processors, i.e. DPS 8/70M, DPS 8/62M and DPS 8/52M.
\end {quote}
\end{description} % systems
\subsubsection{Disk Drives}
\begin{description}
\item[DS-10] Early disk unit
\item[DSS191] Same as MSU0400
\item[DUS170]
\item[DSS180] Removable Disk Storage Subsystem
“Intermediate” size mass storage for Series 6000 systems.
Up to 18 drives
Each drive 27.5 Million characters, total subsystem capacity of 500 million
characters
Disk pack: Honeywell Sentinel DCT170 or 2316
\item[DSS190] Removable Disk Storage Subsystem
Mass storage for Series 6000 systems.
Up to 16 drives
Each drive 133,320,000 characters, total subsystem capacity of 2.13 billion
characters
Disk pack: Honeywell M4050
\item[DSS270] Fixed Head Disk Storage Subsystem
Paging disk? mass storage for Series 6000 systems.
Up to 20 drives
Each drive 15.3 Million characters, total subsystem capacity of 307 million
characters
Disk pack: N/A
\item[DSS167] Removable Disk Storage Subsystem
Mass storage for Series 6000 systems.
6 drives or 9 drives (1 is spare, 8 active)
Each drive 15 Million characters, total subsystem capacity of 120 million
characters
Disk pack: ?
\item[DSS170] Removable Disk Storage Subsystem
“Intermediate” size mass storage for Series 6000 systems.
8 plus 1 spare drive
Each drive 27.5 Million characters, total subsystem capacity of 500 million
characters
Disk pack: ?
\item[MSU0500/0501]
Dual spindle, non-removable 1 million 9 bit bytes
\item[MSU0400/0402/0451]
\end{description} % Disk Drives
\subsubsection{Tapes Drives}
\begin{description}
\item[MTH 200, 201, 300, 301, 372, 373] Seven track
\item[MTH 494, 405, 492, 493, 502, 505] Nine track
\end{description} % Tape Drives
\subsubsection{Printers}
\begin{description}
\item[PRT300, 201]
\item[PRU0901/1201] Large systems printer
\item[PRU7070/7071]
\item[PRU7076/7076]
\end{description} % Printers
\subsubsection{Card Readers/Punches}
\begin{description}
\item[CRZ201] Reader
\item[CPZ201] Punch
\end{description}
\subsection{Input Output Multiplexers (IOM)}
\begin{description}
\item[Interface terminology] \hfill \\
\begin{description}
\item[Common Peripheral Interface (CPI)] Card reader, Slow tape drives
\item[Peripheral System Interface (PSI)] Fast Tape Drives
\end{description}
\item [GIOC] 645 System I/O controller
\item [IOM] 6180 System I/O controller
\item [IMU] ``Information Multiplexer Unit'' DPS8 ? System I/O
controller. Functional replacement for IOM.
\end{description}
\subsection {System Control Units (SCU)}
\begin {description}
\item [SCU] Series 60 Level 66 controller
\item [SC] Level 68
\item [4MW SCU] Later version of SC
\end {description}
\part{Subsystem Model in the simulator}
\section{SCU}
From the Multicians' Multics Glossary:
\begin{quote}
SCU
[BSG] (1) System Control Unit, or Memory Controller. The multiported,
arbitrating interface to each bank of memory (usually 128 or 256 KWords), each
port connecting to a port of an ``active device'' (CPU, GIOC or IOM, bulk store
or drum controller). On the 645, the clock occupied an SCU as well. The SCUs
have their own repertoire of opcodes and registers, including those by which
system interrupts are set by one active unit for another. (See connect.) The
flexibility of this architecture was significant among the reasons why the GE
600 line was chosen for Multics. See SCAS.
SCAS
[BSG] (for ``System Controller Addressing Segment'') A supervisor segment with
one page in the memory space described by each SCU (or other passive module) in
the system. The SCAS contains no data, is not demand-paged, and its page table
is not managed by page control. Instructions directed specifically to an SCU as
opposed to the memory it controls are issued to absolute addresses in the SCAS
through pointers (in the SCS) calculated during initialization precisely to
this end. Typical of such instructions are cioc (issue a connect), smic (set an
interrupt), and the instructions to read and set SCU registers (including
interrupt masks). On the 6180, rccl (the instruction to read the SCU-resident
calendar clock) did not require an absolute address, but a port number,
obviating the need for the SCAS to be visible outside ring 0 to support user
clock-reading as had been the case on the 645. The SCAS, which is a segment but
not a data base, is an example of exploiting the paging mechanism for a purpose
other than implementing virtual memory.
connect
[BSG] Communication signal sent from one active device to another. When
received by a processor, as polled for the control unit after each instruction
pair like an interrupt, a special fault (connect fault), very much like an
interrupt, is taken. As with interrupts, processors send ``connects'' to each
other through the SCU, the only possible channel. While an interrupt is set in
an SCU register and fielded by the first ``taker'' of all appropriately masked
CPUs, a connect fault is routed by the SCU to one, specified processor (or I/O
controller), and cannot be masked against (although the recipient's inhibit bit
can postpone it for a short while). Sending a connect to the GIOC or IOM was
the way of instructing it to begin executing I/O commands. See SCAS.
Multics uses connects for forcing all CPUs to take note of SDW and PTW
invalidation (and clear their associative memories and execute a partial cache
clear when appropriate), holding CPUs in the air during critical phases of
dynamic reconfiguration, and the like. If my memory serves me well, functions
such as system crashing, scheduler preemption, and processor startup switched
back and forth between software-generated interrupts and connects throughout
the '70s.
\end {quote}
According to AL39, a DPS 8M supports only two SCUs; the later model SCUs support enough memory such that two SCUs would provide the maximum usable amount of memory. A system could have a third SCU for redundancy, but it would only be configured into the system if needed.
The code base currently abstracts away the SCUs, but they are being implemented.
\begin{itemize}
\item The clock is implemented as a separate SIMH device.
\item The CPU connects directly to the IOMs. This is being fixed.
\item Memory is implemented as a single, device-independent model.
\item The SIMH CPU device currently supports only a single CPU, so CPU-to-CPU communication issues are ignored for now. To get proper multi-CPU support, the SCU will probably need to be realized.
\end {itemize}
\section {IOM}
[cac] The reference document I have been using is 43A239854 ENGINEERING PRODUCT SPECIFICATION, PART 1 6000B INPUT/OUTPUT MULTIPLEXER (IOM) CENTRAL
The IOM model abstracts away controller devices such as the MPC. This means the
IOM code is very device aware, including extracting dev\_code values to enable
routing of connections. Unfortunately, this complexity leads to bugs relating
to the confusion of dev\_codes with SIMH UNIT numbers. Adding MPC devices would
isolate the complexities of dev\_code handling from the complexities of
connection handling, and is probably a good idea.
\section {Tape}
The device seems good to go; it is a very abstract model, with no hardware modeling beyond a tracking a virtual cable to a port on an IOM.
\section {Disk}
Existing code base is skeletal.
\section {CPU}
AL39 ``Interrupt Sampling''
``The processor always fetches instructions in pairs. At an appropriate point
(as early as possible) in the execution of a pair of instructions, the
next sequential instruction pair is fetched and held in a special
instruction buffer register. The exact point depends on instruction
sequence and other conditions.
``If the interrupt inhibit bit (bit 28) is not set in the current instruction
word at the point of next sequential instruction pair virtual address
formation, the processor samples the group 7 faults [Shutdown, Timer Runout,
Connect]. If any of the group 7 faults is found an internal flag is set
reflecting the presence of a fault. The processor next samples the interrupt
present lines from all eight memory interface ports and loads a register
with bits corresponding to the states of the lines. If any bit in the
register is set ON an internal flag is set to reflect the presence of the
bit(s) in the register.
``If the instruction pair virtual address being formed is the result of a
transfer of control condition or if the current instruction is Execute (xec), * Execute Double (xed), Repeat (rpt), Repeat Double (rpd), or Repeat Link
(rpl), the group 7 faults and interrupt present lines are not sampled.
``At an appropriate point in the execution of the current instruction pair,
the processor fetches the next instruction pair. At this point, it first tests
the internal flags for group 7 faults and interrupts. If either flag is set
it does not fetch the next instruction pair.
``At the completion of the current instruction pair the processor once again
checks the internal flags. If neither flag is set execution of the next
instruction pair proceeds. If the internal flag for group 7 faults is set,
the processor enters a FAULT cycle for the highest priority group 7
fault present. If the internal flag for interrupts is set, the processor
enters an INTERRUPT CYCLE.''
\begin {lstlisting}
bool prefetch_valid = false
bool out_of_seq = false // set after transfer, execute or repeat
forever {
if !prefetch_valid {
inst_pair_buffer = fetch_pair ()
}
inst_pair = inst_pair_buffer
prefetch_valid = false
for inst# = 0, 1 {
// Bug; doesn't handle odd IC on entry
decode_inst (inst_pair [inst#])
if inst# == 0 {
if !inhibit && !out_of_seq {
g7_flag = sample_g7_faults ()
int_flag = sample_interrupts ()
} else {
g7_flag = false
int_flag = false
}
if !g7_flag && ! int_flag {
inst_pair_buffer = fetch_pair ()
prefetch_valid = true
}
}
execute_decoded_inst ()
if was_transfer
break // don't execute second half after a transfer
}
if g7_flag
enter_fault_cycle ()
if int_flag
enter_interrupt_cycle ()
}
\end{lstlisting}
This is not quite right yet; better integration with the out\_of\_seq
conditions is needed. Perhaps the prefetch test also needs an address
check, i.e. is the address that was fetched what the processor was
expecting (equal to the Ir).
Further reading on the instruction prefetch indicates that it was quite complicated with multiple instruction pairs cached, which leads to issues with cache invalidation for self-modifying code. At this point, I am coding towards a much simpler model.
\section {CLK}
The clock code is currently disabled, pending better integration into the SIMH interval and DPS8 interrupt mechanisms.
\section {OPCON}
The operator console code is untested and not linked into the code. Eventually, the T4D tape will get around to testing it.
\section {Line printer}
Not yet implemented.
\section {System}
The simulator has been provided with commands to ``cable'' together the various subsystems. The current test configuration is in the file ``src/base\_system.ini'', shown here:
\lstinputlisting{../src/dps8/base_system.ini}
\part {Porting issues}
\paragraph {32 bit support}
The DPS8 is a 36 bit computer, with a double word of 72 bits. In order to
manipulate these numbers in the assembler and the emulator, int128\_t
types must be supported by the compiler, or a complex and cumbersome 128
emulation library is needed.
\part {Notes}
\paragraph {Known device names}
This information is cribbed from
http://stuff.mit.edu/afs/athena/reference/multics-history/source/Multics/ldd/system\_library\_1/source/bound\_library\_1\_.s.archive,
circa 1987.
\begin{verbatim}
model name valid drives
mpc_msp_model_names (msp Mass Storage Processor)
dsc0451 450, 451
msp0451 450, 451
msp0601 450, 451, 500, 501
msp0603 450, 451, 500, 501
msp0607 450, 451, 500, 501
msp0609 450, 451, 500, 501
msp0611 450, 451, 500, 501
msp0612 450, 451, 500, 501
msp800 450, 451, 500, 501
mpc_mtp_model_names (mtp Magnetic Tape Processor)
mtc501 500, 507
mtc502 500, 507
mtp0600 500, 507, 600, 601, 602
mtp0601 500, 507, 600, 601, 602
mtp0602 500, 507, 600, 601, 602
mtp0610 500, 507, 600, 601, 602
mtp0611 500, 507, 600, 601, 602
ipc_msp_model_names
fips-ipc 3380, 3381
ipc_mtp_model_names
fips-ipc 8200
mpc_urp_model_names
urc002
urp0600
urp8001
urp8002
urp8003
urp8004
disk_drive_model_names
451 msu0451
500 msu0500
501 msu0501
3380 msu3380
3381 msu3381
tape_drive_model_names
500 mtc501, mtc502
507 mtc502
600 mtp0600
601 mtp0601
602 mtp0602
610 mtp0610
630 mtp0630
8200 mtu8200
printer_model_names
301 prt301
1000 pru1000
1200 pru1200
1600 pru1600
901 pru0901
reader_model_names
500 cru0500
501 cru0501
301 crz301
201 crz201
ccu_model_names
401 ccu401
punch_model_names
301 cpz301
300 cpz300
201 cpz201
console_model_names
6001 csu6001
6004 csu6004
6601 csu6601
\end{verbatim}
\paragraph{Addressing model notes}
RJ78A00, pg 6-42:
Interrupts are not received when the instruction is executed by the XEC or XED
instructions, but only when no fault is present.
RJ78A00, pg 6-44:
6.5.3 Instruction Counter Value Stored At Interrupt
The values of the Instruction Counter (IC) stored at interrupt are listed below:
• Single-word Instructions
The address of the instruction to which processing is to be returned is stored.
When an interrupt is received with the DIS instruction, the DIS instruction
address + 1 is stored.
• Multiword Instructions (excluding CLIMB Instructions)
The address of the instruction to which processing is to be returned is stored. If
interrupt occurs during execution of an interruptible multiword instruction, IC +
0 is stored, and the IR bit 30 is stored as a one.
• CLIMB Instruction
Except when the data stack area is cleared with an OCLIMB, interrupt is not
received during execution of a CLIMB instruction. If an interrupt occurs
during the execution of an OCLIMB, IC + 0 is stored.
\part {Unresolved questions}
\section {Actual H/W behavior}
What should happen when an \textbf{sscr} instruction is issued to an SCU
that is configured in MANUAL (not PROGRAM) mode? According to AL39, a
\textbf{sscr} that tries
to set an unassigned mask register generates a STORE FAULT.
(dps8\_scu.c/scu\_sscr()).
\section {t4d\_b.2.tap issues}
t4d seems to require the carry bit sense to be inverted after subtract
operations; i.e. carry == not borrow.
I would have thought that DIS with interrupt inhibit would be an absolute halt, but t4d issues it after a CIOC.
IR Absolute bit
The T4D code runs in absolute mode (at least so far), and tests the "Absolute bit" in the STI instructions results. The test fails if the bit is on. According to AL39, one would expect the bit to be on when running in Absolute mode. In order for the test to pass, I added an emulator configuration option that inverts the sense of the bit for the STI instruction. The test tape and the documentation disagree. Who is right?
ABSA in absolute mode with bit 29 set
The T4D code does about 1400 executions of the ABSA instruction in absolute mode with bit 29 set (ABSA PR0|0 and the like). According to AL39, executing ABSA in absolute mode is ``undefined''. Examination of the code surrounding the ABSA instruction seems to indicate that SDW pairs are being set up and tested for BOUND violations, with the instruction returning 0 if there is no violation, and some not-quite-clear version of the offending address if a BOUND violation occurred (I think). Documentation of ABSA behavior for this usage is needed.
DIS with Interrupt Inhibit set
The T4D tape waits for tape block read completion with a DIS instruction with the Interrupt Inhibit bit set. The interrupt pair does a TRA to the instruction immediately after the DIS instruction, so the results are approximately the same whether the interrupt is processed or the DIS instruction just continues on. What is the correct behavior for DIS with interrupt inhibit set? (I favor clearing the interrupt and resuming processing without invoking the interrupt cycle, as it makes a lot of sense to me, but I haven't located a discussion of this in the documentation.)
\part{Running the emulator}
\section{SIMH Emulator DPS8-M specific commands}
\paragraph{}
This is a list of the DPS8-M specific commands that are provided for the SimH emulator framework.
\paragraph{}
For numeric parameters (<n>), the value is interpreted in `C' style; i.e. leading 0 for octal, leading 0x for hexadecimal.
\paragraph{Base set of debug options} (Almost) all of the SIMH devices (clk, cpu, iom, scu, tape) support a base set of debug options. (The device SYS is an abstraction that does not currently have code paths that are part of the VM.)
\begin{lstlisting}
set [clk|cpu|iom|scu|tape|opcon] debug=[notify|info|err|warn|debug|all]
\end{lstlisting}
\paragraph{Number of CPUS} Some day we will support multiple cpus:
\begin{lstlisting}
; Not yet implemented: set cpu nunits=<n>
\end{lstlisting}
\paragraph{Set CPU configuration switches}
The following commands perform the functions of the CPU configuration switches:
\begin{lstlisting}
set cpu config=faultbase=[<n>|Multics]
\end{lstlisting}
\begin{lstlisting}
set cpu config=num=[0|1|on|off|enable|disable]
\end{lstlisting}
\begin{lstlisting}
set cpu config=data=<n>
\end{lstlisting}
\begin{lstlisting}
set cpu config=mode=[0|1|GCOS|MULTICS]
\end{lstlisting}
\begin{lstlisting}
set cpu config=speed=<n>
\end{lstlisting}
\begin{lstlisting}
set cpu config=port=[0|1|2|3|A|B|C|D]
\end{lstlisting}
\begin{lstlisting}
set cpu config=assignment=<n>
\end{lstlisting}
\begin{lstlisting}
set cpu config=interlace=[off|2|4]
\end{lstlisting}
\begin{lstlisting}
set cpu config=enable=[0|1|on|off|enable|disable]
\end{lstlisting}
\begin{lstlisting}
set cpu config=init_enable=[0|1|on|off|enable|disable]
\end{lstlisting}
\begin{lstlisting}
set cpu config=store_size=[0|1|2|3|4|5|6|7|
32|64|128|256|512|1024|2048|4096|
32K|64|128K|256K|512K|1024K|2048K|4096K|
1M|2M|4M]
\end{lstlisting}
\paragraph{Set CPU VM runtime configuration} The following commands perform various internal configurations of the VM.
\subparagraph{IR Absolute Bit}
The Test \& Diagnostic tape seems to believe that the ABSOLUTE bit has a value inverted from that
described in AL-39, with respect to the STI instruction. When examining the bit in the SCU save data, it does not invert the value. This switch enables code that inverts the value for the STI instruction, allowing the Test \& Diagnostic tape to proceed. (See ``Unresolved Questions''.)
\begin{lstlisting}
set cpu config=invertabsolute=[0|1|off|on]
\end{lstlisting}
\subparagraph{B29test}
This switch is no longer in use.
\begin{lstlisting}
set cpu config=b29test=[0|1|off|on]
\end{lstlisting}
\subparagraph{Disenable}
The Unit Test code uses the DIS command as the end-of-test exit; the Test \& Diagnostics and the 20184 tapes expect DIS to wait for an interrupt. This switch enables the wait behavior.
\begin{lstlisting}
set cpu config=dis_enable=[0|1|off|on]
\end{lstlisting}
\subparagraph{Auto\_append\_disable}
This switch is no longer in use.
\begin{lstlisting}
set cpu config=auto_append_disable=[0|1|off|on]
\end{lstlisting}
\subparagraph{LPRP\_highonly}
This switch addresses two different interpretations of the behavior of the SPRPn instruction. Enable it for Test \& Diagnostics or 20184.
\begin{lstlisting}
set cpu config=lprp_highonly=[0|1|off|on]
\end{lstlisting}
\subparagraph{Steady\_clock}
Both the Test \& Diagnostics and 20184 use wait loops to allow I/O to complete; the loops use the RSW 2 instruction to read the time of day clock; variations in timing cause instruction cycles to jitter, making debugging more difficult. Setting this switch causes RSW 2 to return a deterministic value, so each run of the emulator provides a stable instruction cycle behavior.
\begin{lstlisting}
set cpu config=steady_clock=[0|1|off|on]
\end{lstlisting}
\subparagraph{Degenerate\_mode}
This switch enables a highly experimental code change that potentially greatly reduces the number of lines of code, but may contain some smoke-and-mirrors code.
\begin{lstlisting}
set cpu config=degenerate_mode=[0|1|off|on]
\end{lstlisting}
\subparagraph{Append\_after}
This switch addresses an unresolved issue with the sequence of events in the instruction execute cycle.
\begin{lstlisting}
set cpu config=append_after=[0|1|off|on]
\end{lstlisting}
\subparagraph{Super\_user}
This switch is no longer in use.
\begin{lstlisting}
set cpu config=super_user=[0|1|off|on]
\end{lstlisting}
\subparagraph{EPP\_hack}
This switch is no longer in use.
\begin{lstlisting}
set cpu config=epp_hack=[0|1|off|on]
\end{lstlisting}
\subparagraph{Halt\_on\_unimplemented}
This switch causes the emulator to halt the simulation on encountering an unimplemented instruction, rather than invoking fault handling.
\begin{lstlisting}
set cpu config=halt_on_unimplmented=[0|1|off|on]
\end{lstlisting}
\paragraph{Show CPU config}
This command will show the current configuration settings of the CPU.
\begin{lstlisting}
show cpu config
\end{lstlisting}
\paragraph{Set CPU debugging options}
This command will enable or disable the display information of a wide variety of runtime events and states.
\subparagraph{Trace}
On each execute cycle of the VM, display information about the executed instruction.
\subparagraph{Tracex}
On each execute cycle of the VM, display more detailed information about the executed instruction.
\subparagraph{Messages}
Display general messages about events and states of the emulator.
\subparagraph{Regdumpaqi}
Display the contents of the A, Q and IR registers after each instruction execution.
\subparagraph{Regdumpidx}
Display the contents of the index registers after each instruction execution.
\subparagraph{Regdumppr}
Display the contents of the PR registers after each instruction execution.
\subparagraph{Regdumpadr}
Display the contents of the AR registers after each instruction execution.
\subparagraph{Regdumpppr}
Display the contents of the PPR register after each instruction execution.
\subparagraph{Regdumpdsbr}
Display the contents of the DSBR register after each instruction execution.
\subparagraph{Regdumpflt}
Display the contents of the floating point register after each instruction execution.
\subparagraph{Regdump}
All of the above \textit{regdump...} commands.
\subparagraph{Addrmod}
Display messages about the progress of the Address Modification unit.
\subparagraph{Appending}
Display messages about the progress of the Append unit.
\subparagraph{Fault}
Display messages about the progress of fault processing.
\begin{lstlisting}
set cpu [no]debug=
[trace | traceex | messages |
regdumpaqi | regdumpidx | regdumppr | regdumpadr |
regdumpppr | regdumpdsbr | regdumpflt | regdump |
addrmod | appending | fault]
\end{lstlisting}
\paragraph{Boot CPU}
Displays a message suggesting booting an IOM.
\begin{lstlisting}
boot cpu
\end{lstlisting}
\paragraph{CPU registers}
This is a list of registers the SIMH examine and deposit commands work with.
\begin{lstlisting}
ic ir a q e
x0 x1 x2 x3 x4 x5 x6 x7
ppr.ic ppr.prr
ppr.psr prr.p
dsbr.addr dsbr.bnd dsbr.u dsbr.stack
bar.base bar.bound
pr0.snr pr1.snr pr2.snr pr3.snr pr4.snr pr5.snr pr6.snr pr7.snr
pr0.rnr pr1.rnr pr2.rnr pr3.rnr pr4.rnr pr5.rnr pr6.rnr pr7.rnr
pr0.wordno pr1.wordno pr2.wordno pr3.wordno
pr4.wordno pr5.wordno pr6.wordno pr7.wordno
\end{lstlisting}
\paragraph{Number of IOMs}
These command set or show the number of IOMs in the system.
\begin{lstlisting}
set iom nunits=<n>
show iom nunits
\end{lstlisting}
\paragraph{Show IOM mailbox}
This command shows the contents of an IOM's mailboxes
\begin{lstlisting}
show iom<n> mbx
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=
[os | boot | iom_base | multiplex_base |
tapechan | cardchan | scuport | port | addr |
interlace | enable | initenable | halfsize| store_size |
bootskip]
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=boot=[card|tape]
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=iombase=[<n>|Multics]
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=multiplexbase=<n>
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=tapechan=<n>
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=cardchan=<n>
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=scuport=<n>
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=port=<n>
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=addr=<n>
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=interlace=[0|1]
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=enable=[0|1]
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=initenable=[0|1]
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=halfsize=[0|1]
\end{lstlisting}
\begin{lstlisting}
set iom<n> config=store_size=[0|1|2|3|4|5|6|7|
32|64|128|256|512|1024|2048|4096|
32K|64|128K|256K|512K|1024K|2048K|4096K|
1M|2M|4M]
\end{lstlisting}
\begin{lstlisting}
show iom<n> config
\end{lstlisting}
\begin{lstlisting}
boot iom<n>
\end{lstlisting}
\begin{lstlisting}
set scu nunits=<n>
\end{lstlisting}
\begin{lstlisting}
show scu nunits
\end{lstlisting}
\begin{lstlisting}
show scu<n> state
\end{lstlisting}
\begin{lstlisting}
set scu<n> config=mode=[manual|program]
\end{lstlisting}
\begin{lstlisting}
set scu<n> config=mask[a|b]=[off|<n>]
\end{lstlisting}
\begin{lstlisting}
set scu<n> config=port<n>=[enable|disable|0|1]
\end{lstlisting}
\begin{lstlisting}
set scu<n> config=lwrstore_size=[0|1|2|3|4|5|6|7|
32|64|128|256|512|1024|2048|4096|
32K|64|128K|256K|512K|1024K|2048K|4096K|
1M|2M|4M]
\end{lstlisting}
\begin{lstlisting}
show scu<n> config
\end{lstlisting}
\paragraph{Operator console autoinput}
These commands are currently unimplemented.
\begin{lstlisting}
set opcon autoinput
show opcon autoinput
\end{lstlisting}
\begin{lstlisting}
show sys config
\end{lstlisting}
\begin{lstlisting}
set sys config=[connect_time|activate_time|
mt_read_time|mt_xfer_time|
iom_boot_time=[off|n]]
\end{lstlisting}
\begin{lstlisting}
set tape nunits=<n>
\end{lstlisting}
\begin{lstlisting}
show tape nunits
\end{lstlisting}
\begin{lstlisting}
set tape<n> rewind
\end{lstlisting}
\begin{lstlisting}
set tape<n> [no]watch
\end{lstlisting}
\begin{lstlisting}
attach [-r] tape<n> <filename.tap>
\end{lstlisting}
\begin{lstlisting}
dpsinit
\end{lstlisting}
\begin{lstlisting}
dpsdump
\end{lstlisting}
\begin{lstlisting}
segment
\end{lstlisting}
\begin{lstlisting}
segments
\end{lstlisting}
The `cable' command provides the emulation equivalent of stringing physical cables from device to device. This is an important step as the DPS8-M was not a plug-and-play design; which physical ports on a device the cables connect to influence the system configuration. The semantics of the `cable' command is to string cables inward; from peripherals towards the CPU.
\begin{lstlisting}
cable tape=<tape_unit_num>,<IOM_unit_number>,
<channel_number>,<device_code>
\end{lstlisting}
String a cable from a tape drive to an IOM. As the MPC devices are not implemented, the `device\_code' represents the assignment of devices that the MPC would use when multiplexing multiple devices into a single channel.
\begin{lstlisting}
cable opcon=<IOM_unit_num>,<channel_number>,
<unused>,<unused>
\end{lstlisting}
String a cable from the Operator's Console to an IOM.
\begin{lstlisting}
cable iom=<IOM_unit_num>,<IOM_port_number>,
<SCU_unit_number>,<SCU_port_number>
\end{lstlisting}
String a cable from an IOM to an SCU.
\begin{lstlisting}
cable scu=<SCU_unit_num>,<SCU_port_number>,
<CPU_unit_number>,<CPU_port_number>
\end{lstlisting}
String a cable from an SCU to a CPU.
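Putting these together, a minimal single-CPU configuration might be cabled roughly as follows. The unit, port, channel, and device-code numbers below are illustrative only; the actual values must agree with the rest of the ini file and with the Multics configuration deck.
\begin{lstlisting}
; illustrative cabling only -- numbers must match the rest of the configuration
cable tape=0,0,012,0    ; tape unit 0 to IOM 0, channel 012, device code 0
cable opcon=0,036,0,0   ; operator's console to IOM 0, channel 036
cable iom=0,0,0,0       ; IOM 0 port 0 to SCU 0 port 0
cable scu=0,0,0,0       ; SCU 0 port 0 to CPU 0 port 0
\end{lstlisting}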
\begin{lstlisting}
dbgstart
\end{lstlisting}
\begin{lstlisting}
displaymatrix
\end{lstlisting}
\section{Booting the Test \& Diagnostic tape}
The file `t4d\_b.2.ini' will set up the base system and boot the `t4d\_b.2.tap' tape image.
\begin{lstlisting}
./dps8 t4d_b.2.ini
\end{lstlisting}
\section{Booting the 20184 tape}
The file `20184.ini' will set up the base system and boot the `20184.tap' tape image.
\begin{lstlisting}
./dps8 20184.ini
\end{lstlisting}
\part{AL-39 errata}
\section{\texttt{FRD} instruction.}
In the description of the \texttt{FRD} instruction AL39 has these notes:
\begin{verbatim}
The frd instruction is executed as follows:
C(AQ) + (11...1)29,71 → C(AQ)
If C(AQ)0 = 0, then a carry is added at AQ71
If overflow occurs, C(AQ) is shifted one place to the right and C(E) is increased by 1.
If overflow does not occur, C(EAQ) is normalized.
If C(AQ) = 0, C(E) is set to -128 and the zero indicator is set ON.
\end{verbatim}
I believe this is wrong.
Referring to my implementation notes of \texttt{FRD} in dps8\_math.c...
\begin{verbatim}
//! If C(AQ) ≠ 0, the frd instruction performs a true round to a precision of 28 bits and a normalization on C(EAQ).
//! A true round is a rounding operation such that the sum of the result of applying the operation to two numbers of
//! equal magnitude but opposite sign is exactly zero.
//! The frd instruction is executed as follows:
//! C(AQ) + (11...1)29,71 → C(AQ)
//! If C(AQ)0 = 0, then a carry is added at AQ71
//! If overflow occurs, C(AQ) is shifted one place to the right and C(E) is increased by 1.
//! If overflow does not occur, C(EAQ) is normalized.
//! If C(AQ) = 0, C(E) is set to -128 and the zero indicator is set ON.
//! I believe AL39 is incorrect; bits 28-71 should be set to 0, not 29-71. DH02-01 & Bull RJ78 are correct.
//! test case 15.5
//! rE rA rQ
//! 014174000000 00000110 000111110000000000000000000000000000 000000000000000000000000000000000000
//! + 1111111 111111111111111111111111111111111111
//! = 00000110 000111110000000000000000000001111111 111111111111111111111111111111111111
//! If C(AQ)0 = 0, then a carry is added at AQ71
//! = 00000110 000111110000000000000000000010000000 000000000000000000000000000000000000
//! 0 → C(AQ)29,71
//! 00000110 000111110000000000000000000010000000 000000000000000000000000000000000000
//! after normalization .....
//! 010760000002 00000100 011111000000000000000000001000000000 000000000000000000000000000000000000
//! This is wrong
//! 0 → C(AQ)28,71
//! 00000110 000111110000000000000000000000000000 000000000000000000000000000000000000
//! after normalization .....
//! 010760000000 00000100 011111000000000000000000000000000000 000000000000000000000000000000000000
//! This is correct
//!
//! GE CPB1004F, DH02-01 (DPS8/88) & Bull DPS9000 RJ78 ... have this ...
//! The rounding operation is performed in the following way.
//! - a) A constant (all 1s) is added to bits 29-71 of the mantissa.
//! - b) If the number being rounded is positive, a carry is inserted into the least significant bit position of the adder.
//! - c) If the number being rounded is negative, the carry is not inserted.
//! - d) Bits 28-71 of C(AQ) are replaced by zeros.
//! If the mantissa overflows upon rounding, it is shifted right one place and a corresponding correction is made to the exponent.
//! If the mantissa does not overflow and is nonzero upon rounding, normalization is performed.
//! If the resultant mantissa is all zeros, the exponent is forced to -128 and the zero indicator is set.
//! If the exponent resulting from the operation is greater than +127, the exponent Overflow indicator is set.
//! If the exponent resulting from the operation is less than -128, the exponent Underflow indicator is set.
//! The definition of normalization is located under the description of the FNO instruction.
//! So, Either AL39 is wrong or the DPS8m did it wrong. (Which was fixed in later models.) I'll assume AL39 is wrong.
\end{verbatim}
\section{\texttt{MVT} instruction.}
20184 crashes in formatting first\_message; an index register is
slightly negative, and an \texttt{MLR} instruction is (correctly) treating it
as unsigned, and running off of the end of the segment, where
appendCycle correctly slaps its hand for exceeding the segment
boundary.
The \texttt{X} register gets set negative here:
\begin{verbatim}
ascii.no_trim:
mvt (pr,rl),(pr,rl),fill(040) " we will do some useless filling
desc9a pr1|0,al
desc9a pr7|0,x4
arg ascii.bad_char_trans
ttn ascii.truncated
a9bd 7|0,al
als 18
sta ascii.tct_count " so we can subtract
sbx4 ascii.tct_count " cant be negative, since no truncation
tmoz return_to_caller " but be safe
tra main_char_loop
\end{verbatim}
\texttt{X4} goes negative because \texttt{A} is $>$ \texttt{X4}; the \texttt{MVT}
instruction correctly detects this
and sets the truncate bit. The code tests the TRO bit to prevent the
underflow.
Why, O Why, is the code using TTN and not TTRN? Why?
Somebody is lying.
In the description of the indicator register (c.f. AL39 Fig. 3-7) the tally runout indicator has this note.
\begin{quote}
This indicator is set OFF at initialization of any tallying operation, that is, any repeat instruction or any indirect then
tally address modification. It is then set ON for any of the following conditions: ...
(4) If an EIS string scanning instruction reaches the end of the string without finding a match condition.
\end{quote}
Apparently \texttt{MVT} also follows this pattern -- although AL-39, DH02-01, and Bull RJ78 REV02 make absolutely no mention of this behavior.
This is either a hack, a bug, or an undocumented feature. The \texttt{MVT} instruction in dps8\_eis.c has been modified to set the TALLY bit as well as the TRUNC bit upon overflow.
\section{\texttt{EPP} instruction.}
The code in bootload\_formline has the line:
\begin{verbatim}
epp1 0,x6* " pr1 -> word of chars
\end{verbatim}
which relies on the epp1 instruction setting the AR1.CHAR register to zero. Given the relationship between the AR and PR register sets, this is not an unreasonable behavior. AL-39 does not mention the CHAR registers in the EPPn instruction documentation. However, AL-39 does say (pg 317):
\begin{quote}
" The terms "pointer register" and "address register" both apply to the same physical hardware. The distinction arises from the manner in which the register is used and in the interpretation of the register contents."
\end{quote}
\sout{The emulator sets the corresponding CHAR register to zero for EPPn instructions.}
The emulator uses a union of bitfields to map PR.BITNO onto the AR.BITNO and AR.CHAR bits, so that the ``same physical hardware'' behavior is matched.
\section{\textbf{tss} instruction}
Page 169, the \textbf{tss} instruction:
\begin{quote}
SUMMARY: \texttt{C(TPR.CA) + (BAR base) $\rightarrow$ C(PPR.IC)}
\end{quote}
The \textbf{tss} instruction should not add the \texttt{BAR} base to the \texttt{CA}; that is done during \texttt{CA} formation in the instruction fetch cycle. The line should read:
\begin{quote}
SUMMARY: \texttt{C(TPR.CA) $\rightarrow$ C(PPR.IC)}
\end{quote}
\section{\textbf{tsp\textit{n}} instruction}
AL-39, page 340, Figure 8-1, ``Complete Appending Unit Operation Flowchart,'' step L, shows that if the opcode
is \textbf{tsp\emph{n}}, then the \texttt{PR\emph{n}} register is set up.
Pages 168--169, describing the \textbf{tsp\textit{n}} instruction, show the same \texttt{PR\emph{n}} register setup, but the value in \texttt{PPR.PSR} will have been changed by the Append Unit.
\section{Left shifting and the carry bit}
AL-39, pg. 27, Indicator Register, says this about the carry bit:
\begin{verbatim}
Carry This indicator is set ON for....
(1) If a bit propagates leftward out of bit 0 ... for any binary or shifting instruction.
\end{verbatim}
Page 107, "als":
\begin{verbatim}
Carry If C(A)0 changes during the shift, then ON; otherwise OFF
\end{verbatim}
Page 108, "lls":
\begin{verbatim}
Carry If C(AQ)0 changes during the shift, then ON; otherwise OFF
\end{verbatim}
Page 109, "qls":
\begin{verbatim}
Carry If C(Q)0 changes during the shift, then ON; otherwise OFF
\end{verbatim}
But, CPB-1004F, GE-635 Pgm Ref says:
Page 78, "als"
\begin{verbatim}
Carry If C(Q)0 ever changes during the shift, then ON; otherwise OFF
\end{verbatim}
The wording is the same for "qls" and "lls".
DH02-01 DPS8 Asm, pg 243, "ALS" says:
\begin{verbatim}
Carry If C(Q)0 changes during the shift, then ON; otherwise OFF. When the Carry indicator is ON, the algebriac range of A had been exceeded.
\end{verbatim}
(LLS and QLS have similar wording.)
The phrase about algebraic range supports the interpretation of checking the carry at each step: if the sign bit changes at any step of the shift, then information has been lost; right shifting the result will not recreate the original data.
RJ78 has the same wording as DH02.
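To make the per-step reading concrete, the following is a minimal sketch in C (illustrative only, not the code in dps8\_math.c) of a 36-bit \texttt{als} that latches the carry whenever the sign bit changes at any intermediate step of the shift, as the CPB-1004F/DH02-01/RJ78 wording suggests:
\begin{verbatim}
#include <stdbool.h>
#include <stdint.h>

#define MASK36 (((uint64_t) 1 << 36) - 1)
#define SIGN36 ((uint64_t) 1 << 35)   /* AL-39 bit 0 of a 36-bit register */

/* Illustrative sketch: shift a 36-bit accumulator left 'count' places,
   setting *carry if the sign bit changes at ANY step of the shift. */
static uint64_t als36 (uint64_t a, unsigned count, bool *carry)
  {
    *carry = false;
    for (unsigned i = 0; i < count; i ++)
      {
        uint64_t sign_before = a & SIGN36;
        a = (a << 1) & MASK36;             /* zero fill from the right */
        if ((a & SIGN36) != sign_before)   /* sign changed on this step */
          *carry = true;
      }
    return a;
  }
\end{verbatim}
Under this reading, shifting a word whose two high-order bits are 01 left by two places sets the carry even if the final sign bit happens to equal the original one; a comparison of only the initial and final values of C(A)0 would miss that loss of information.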
\section{Minor typo in \texttt{div}}
AL-39, page 124:
\begin{verbatim}
SUMMARY: C(Q) / (Y) integer quotient C(Q)..
\end{verbatim}
should read:
\begin{verbatim}
SUMMARY: C(Q) / C(Y) integer quotient C(Q)..
\end{verbatim}
\section{Minor typo in \texttt{s9bd}}
AL-39, page 230:
\begin{verbatim}
MODIFICATIONS: None except au, qu, al, qu, xn
\end{verbatim}
should read:
\begin{verbatim}
MODIFICATIONS: None except au, qu, al, ql, xn
\end{verbatim}
\section{Illegal EIS MF fields in Multics}
There are code sequences in the Multics source, generated by the PL/I compiler,
which are MLR instructions with an MF1 containing RL:1 and REG:IC.
AL-39 says that REG can be IC only if RL is 0; RJ-78 says that it is an illegal procedure fault.
To run this code correctly, the DPS8M emulator (and apparently the multics-emul
emulator as well) must ignore the RL bit if REG is IC.
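A minimal sketch of the workaround, in C with illustrative names (these are not the emulator's actual EIS decoding structures), assuming register designator code 04 selects the instruction counter as in AL-39's register code table:
\begin{verbatim}
/* Illustrative only -- names and layout do not match dps8_eis.c. */
struct mf_fields { unsigned ar:1, rl:1, id:1, reg:4; };

#define REG_IC 04u   /* assumed designator code for the instruction counter */

static unsigned effective_rl (struct mf_fields mf)
  {
    if (mf.reg == REG_IC)
      return 0;      /* ignore RL, as the PL/I-generated MLR sequences need */
    return mf.rl;
  }
\end{verbatim}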
\section{Interpretation of SDW in ITS/ITP processing}
The multics-emul emulator has a different interpretation of the SDW in ITS and ITP
processing.
An SDW is needed to calculate a new ring number; DPS8M was using the ring number of the segment that the ITS/ITP pair was in; multics-emul uses the SDW of the segment that the ITS/ITP points to.
This makes sense; recalculating the ring would be done on segment crossing, and
one would want the ring of the target segment for the calculation.
The difference in the emulators arose for the following instruction.
\begin{verbatim}
eppbp =its(-2,2),*
\end{verbatim}
The ITS points to a non-existent segment, which is okay since the instruction only needs the
effective address of the pair. Multics-emul detects that missing segment and uses a ring
number of seven for the ring calculation.
\section{Typos in Control Unit Data table}
Page 73:
\begin{verbatim}
1 b x ORB For access violation fault - out of execute bracket
\end{verbatim}
should read:
\begin{verbatim}
1 b x OEB For access violation fault - out of execute bracket
\end{verbatim}
Page 72:
\begin{verbatim}
4 IC |a|b|c|d|e|f|g|h|i|j|k|l|m|n|0 0 0 0
\end{verbatim}
should read:
\begin{verbatim}
4 IC |a|b|c|d|e|f|g|h|i|j|k|l|m|n|o|0 0 0
\end{verbatim}
\section{Page 311, Figure 6-6, ``Indirect Then Tally Modification Flowchart''}
\begin{verbatim}
???
cf field, and
ADDRESS. Form
computed address
\end{verbatim}
\section{Page 308 ``Sequence character $T_d = 12$''}
``A 36-bit operand is formed by high-order zero filling the value of
character \emph{of of} C(computed address) with an appropriate number of bits.''
\section{Page 217 ABSA instruction}
``SUMMARY: Final main memory address, Y $\rightarrow$ C(A)0,23''

``NOTES: If the absa instruction is executed in absolute mode, C(A) will be undefined.''

``Attempted execution in normal or BAR modes causes an illegal procedure fault.''

Summary: Absolute mode: doesn't work. Normal mode: Illegal. BAR mode: Illegal.
What's it good for? Absolutely nothing.
\section{Page 102 STC2 instruction}
The text only defines the high half of C(Y); it does not specify whether the low half is set to zero, set to the IR, or left unchanged.
\section{Page 204 SCU instruction}
The text only defines the behavior of the SCU instruction during fault/interrupt processing: it saves the safe-stored CU data.
The T\&D tape uses the SCU instruction during normal processing, and apparently
expects the instruction to save the current state.
\section{Page 247 MVT instruction}
Figure 4-19 suggests that the third operand should be \texttt{descna} format,
but the coding format specifies \texttt{arg}.
\section{Typo in \texttt{scpr} instruction (pg 203).}
In MODIFICATIONS, for C(TAG) 01, only DPS/L68 operation is shown;
the DPS8M is different (see pg.~41, ``FAULT REGISTER (FR) - DPS 8M'').
\section{Documentation of RPT/RPD does not match usage.}
In tape\_checksum\_.alm,
\begin{verbatim}
odd; rpda 6,1 do the record header
awca bp|0,1 compute checksum on header
alr 0,3 ..
\end{verbatim}
the repeated instruction is using bit 29 to indicate pointer register usage.
In the documentation for the RPT instruction (and similarly RPD),
\begin{verbatim}
For the first execution of the repeated instruction:
C(C(PPR.IC)+1)0,17 + C(Xn) -> y, y -> C(Xn)
\end{verbatim}
indicating that the computed address is formed directly from the address field
of the instruction without regard for bit 29.
\section{Typo in \texttt{eawp4}}
In AL39, page 171:
\begin{verbatim}
eawp4 Effective to Word/Bit Number of PR4 Address
\end{verbatim}
should read
\begin{verbatim}
eawp4 Effective Address to Word/Bit Number of PR4
\end{verbatim}
\end{document}
| {
"alphanum_fraction": 0.7336138175,
"avg_line_length": 31.5804195804,
"ext": "tex",
"hexsha": "34df2115914b5f1213760f957933d7337ddec12e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "715ee3c1768845e226b1d0b2695a8db88c247cb7",
"max_forks_repo_licenses": [
"ICU"
],
"max_forks_repo_name": "charlesUnixPro/dps8m_devel_tools",
"max_forks_repo_path": "docs/ImplementationNotes.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "715ee3c1768845e226b1d0b2695a8db88c247cb7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"ICU"
],
"max_issues_repo_name": "charlesUnixPro/dps8m_devel_tools",
"max_issues_repo_path": "docs/ImplementationNotes.tex",
"max_line_length": 568,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "715ee3c1768845e226b1d0b2695a8db88c247cb7",
"max_stars_repo_licenses": [
"ICU"
],
"max_stars_repo_name": "charlesUnixPro/dps8m_devel_tools",
"max_stars_repo_path": "docs/ImplementationNotes.tex",
"max_stars_repo_stars_event_max_datetime": "2018-06-03T23:20:22.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-06-03T23:20:22.000Z",
"num_tokens": 12386,
"size": 45160
} |
\documentclass[8pt]{beamer}
\mode<presentation>
{
\usetheme{Singapore} %use default if problems or Szeged
% or ... https://deic-web.uab.cat/~iblanes/beamer_gallery/index_by_theme.html
}
\setbeamertemplate{footline}[frame number]
\usepackage{booktabs}
\usepackage{color}
\usepackage[english]{babel}
% or whatever
\usepackage[latin1]{inputenc}
% or whatever
\usepackage{times}
\usepackage[T1]{fontenc}
% Or whatever. Note that the encoding and the font should match. If T1
% does not look nice, try deleting the line with the fontenc.
\usepackage{nth}
\usepackage{xcolor}
\usepackage{relsize} %large math
\setbeamertemplate{caption}[numbered]
\title[S.I.s.a.R. Model] % (optional, use only with long paper titles)
{Un modello Agent-Based per studiare la diffusione del virus SARS-CoV-2}
\author[] % (optional, use only with lots of authors)
{G.~Pescarmona\inst{1} \and P.~Terna\inst{2} \and A.~Acquadro\inst{1} \and P.~Pescarmona\inst{3} \and G.~Russo\inst{4}
\and S.~Terna\inst{5} }
% - Give the names in the same order as the appear in the paper.
% - Use the \inst{?} command only if the authors have different
% affiliation.
\institute[] % (optional, but mostly needed)
{
\inst{1}%
University of Torino, Italy
\and
\inst{2}%
University of Torino, Italy, retired \& Fondazione Collegio Carlo Alberto, Honorary Fellow, Italy
\and
\inst{3}%
University of Groningen, The Netherlands
\and
\inst{4}%
Centro Einaudi, Torino, Italy
\and
\inst{5}%
tomorrowdata.io
}
% - Use the \inst command only if there are several affiliations.
% - Keep it simple, no one is interested in your street address.
\date[] % (optional, should be abbreviation of conference name)
{IRES Piemonte -- 23 Febbraio 2021}
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
\titlepage
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Outline}
\tableofcontents
% You might wish to add the option [pausesections]
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Objectives of the model}
\begin{itemize}
\item
We propose an agent-based model to simulate the Covid-19 epidemic diffusion, with Susceptible, Infected, symptomatic, asymptomatic, and Recovered people: hence the name S.I.s.a.R. The scheme comes from S.I.R. models, with (i) infected agents categorized as symptomatic and asymptomatic and (ii) the places of contagion specified in a detailed way, thanks to agent-based modeling capabilities.
\item
The infection transmission is related to three factors: the infected person's characteristics and those of the susceptible one, plus those of the space in which contact occurs.
\item
The model includes the structural data of Piedmont, but it can be readily calibrated for other areas. The model manages a realistic calendar (e.g., national or local government decisions), via a script interpreter.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Tool and links}
\begin{itemize}
\item We use NetLogo, at \url{https://ccl.northwestern.edu/netlogo/}.
\item
S.I.s.a.R. is at \url{https://terna.to.it/simul/SIsaR.html} with information on model construction, the draft of a paper also reporting results, and an online executable version of the simulation program, built using NetLogo.
\item
A short paper related to this presentation is at \url{https://rofasss.org/2020/10/20/sisar/}
G. Pescarmona, P. Terna, A. Acquadro, P. Pescarmona, G. Russo, and S. Terna. How Can ABM Models Become Part of the Policy-Making Process in Times of Emergencies---The SISAR Epidemic Model. \emph{RofASSS}, 2020.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{The scale and the items}
\begin{itemize}
\item $1:1000$.
\bigskip
\item Houses.
\item Schools.
\item Hospitals.
\item Nursing homes.
\item Factories.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The model}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Overview}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{The interface and the information sheet}
\begin{figure}[H]
\center
\includegraphics[scale=0.14]{interface2021.png}
\caption{The interface}
\label{interface}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{The interface and the information sheet}
\begin{figure}[H]
\center
\includegraphics[scale=0.23]{info1b.png}~~~\includegraphics[scale=0.23]{info2b.png}
\caption{The information sheet, about 20 pages}
\label{infosheet}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Details}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{The world}
\begin{figure}[H]
\center
\includegraphics[scale=0.35]{world.png}
\caption{The world}
\label{world}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{The world 3D}
\begin{figure}[H]
\center
\includegraphics[scale=0.55]{world3D.png}
\caption{The world 3D}
\label{world3D}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{The agents}
\begin{figure}[H]
\center
\includegraphics[scale=0.23]{person1.png}~~~~~~~~~~~~~~~~~~~~\includegraphics[scale=0.23]{person2.png}
\caption{Probes to different agents}
\label{diffAgents}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Contagions}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The proposed technique}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Contagion representation}
\begin{itemize}
\item
The model allows analyzing the sequences of contagions in simulated epidemics, reporting the places where the contagions occur.
\item
We represent each infecting agent as a horizontal segment with a vertical connection to another agent receiving the infection.
We represent the infected agents via further segments at an upper layer.
\item
With colors, line thickness, and styles, we display multiple pieces of information.
\item
This enables understanding at a glance how an epidemic episode is developing. In this way, it is easier to reason about countermeasures and, thus, to develop intervention policies.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{An introductory example}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{An example (1/2)}
\begin{figure}[H]
\center
\includegraphics[width=0.9\textwidth]{with8b40.png}% with control case 473323 474697 in SIsaR_0.9.4.1 experiments 2 seeds with control-table_10000.csv, file withControl_473323_474697.csv
\caption{A case with containment measures, first 40 infections: workplaces (brown) and nursing homes (orange) strictly interweaving}
\label{workplacesNursingHomes}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{An example (2/2), more contagions}
\begin{figure}[H]
\center
\includegraphics[width=0.9\textwidth]{with8a.png}% with control case 473323 474697 in SIsaR_0.9.4.1 experiments 2 seeds with control-table_10000.csv, file withControl_473323_474697.csv
\caption{A Case with containment measures, the whole epidemics: workplaces (brown) and nursing homes (orange) and then houses (cyan), with a bridge connecting two waves}
\label{workplacesNursingHomesFull}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Other examples (i) on the left, an epidemic without containment measures; (ii) on the right, an epidemic with basic non-pharmaceutical containment measures}
\begin{figure}[H]
\center
\includegraphics[scale=0.12]{no4b.png}~~~~~~~~~~~\includegraphics[scale=0.12]{with7b.png}
\center
\includegraphics[scale=0.12]{no4a.png}~~~~~~~~~~~\includegraphics[scale=0.12]{with7a.png} \\
\caption{Two cases with initial and full periods}
\label{twocases}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{A significant sequence}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{}
A contagion sequence suggesting policies: in Fig. \ref{fourSequences} we can look both at the places where contagions occur and at the dynamics emerging with different levels of intervention.
\begin{figure}[H]
\center
\includegraphics[scale=0.105]{withShort1.png}~~~~~~~~~~~\includegraphics[scale=0.105]{withShort1A.png}
\center
\includegraphics[scale=0.105]{withShort1A200.png}~~~~~~~~~~~\includegraphics[scale=0.105]{withShort1B.png} \\
\caption{(\emph{top left}) an epidemic with regular containment measures, showing a highly significant effect of workplaces (brown);
(\emph{top right}) the effects of stopping fragile workers at day 20, with a positive result, but home contagions (cyan) keep alive the pandemic, exploding again in workplaces (brown); (\emph{bottom left}) the same analyzing the first 200 infections with evidence of the event around day 110 with the new phase due to a unique asymptomatic worker, and (\emph{bottom right}) stopping fragile workers and any case of fragility at day 15, also isolating nursing homes}
\label{fourSequences}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Exploring cases}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Simulation batches}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Simulation batches}
\begin{itemize}
\item
We explore systematically the introduction of factual, counterfactual, and prospective interventions to control the spread of the contagions.
\item
Each simulation run---whose length coincides with the disappearance of symptomatic or asymptomatic contagion cases---is a datum in a wide scenario of variability in time and effects.
\item
Consequently, we need to represent compactly the results emerging from batches of simulation repetitions, to compare the consequences of the basic assumptions adopted for each specific batch.
\item
We use blocks of ten thousand repetitions. Besides summarizing the results with the usual statistical indicators, we adopt the technique of the heatmaps.
\item
Each heatmap reports the duration of each simulated epidemic on the $x$ axis and the number of symptomatic, asymptomatic, and deceased agents on the $y$ axis. The $z$ axis is represented by the colors, on a logarithmic scale.
\item
In our batches we have 10,000 runs.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Epidemics without and with control}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Ten thousand epidemics without control in Piedmont}
% readRunResults10kStableSeedsCPoints_noControl_ChangingWorld_plusHMlog
\begin{table}[H]
\center
\tiny
\begin{tabular}{lrrr}
\toprule
{} & symptomatic & totalInfected\&Deceased & duration \\
\midrule
count & 10000.00 & 10000.00 & 10000.00 \\
mean & 969.46 & 2500.45 & 303.10 \\
std & 308.80 & 802.88 & 93.50 \\
\bottomrule
\end{tabular}
\label{noCTab}
%\caption{a caption}
\end{table}
\begin{figure}[H]
\center
\includegraphics[scale=0.22]{10kNoControl.png}
\caption{Without non-pharmaceutical containment measures}
\label{noC}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Ten thousand epidemics with basic control in Piedmont}
% readRunResults10kStableSeedsCPoints_basicControlB_schoolOpenSeptChangingWorld_plusHMlog
\begin{table}[H]
\center
\tiny
\begin{tabular}{lrrr}
\toprule
{} & symptomatic & totalInfected\&Deceased & duration \\
\midrule
count & 10000.00 & 10000.00 & 10000.00 \\
mean & 344.22 & 851.64 & 277.93 \\
std & 368.49 & 916.41 & 213.48 \\
\bottomrule
\end{tabular}
\label{basicCTab}
%\caption{a caption}
\end{table}
\begin{figure}[H]
\center
\includegraphics[scale=0.22]{10kBasicC.png}
\caption{First wave with non-pharmaceutical containment measures}
\label{basicC}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Actual data}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Actual data}
\begin{figure}[H]
\center
\includegraphics[scale=0.25]{andamento900annotato.jpg}
\caption{Data in Piedmont}
\label{data}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Second wave}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Selecting realistic spontaneous second waves}
% selectResults10kStableSeedsCPoints_basicControlB_schoolOpenSeptChangingWorld_plusHMlog
\begin{table}[H]
\center
\tiny
\begin{tabular}{lrrr}
\toprule
{} & symptomatic Dec20 & totalInfected\&Deceased Dec20 & duration \\
\midrule
count & 140.00 & 140.00 & 170.00 \\
mean & 605.79 & 1528.31 & 535.19 \\
std & 264.29 & 644.34 & 167.42 \\
\bottomrule
\end{tabular}
\label{selSpontWave2Tab}
%\caption{a caption}
\end{table}
{\tiny
170 epidemics out of 10,000 stable in Summer 2020, with: at 6.1.20 select sym. (10, 70] actual v. 33.3 \& at 9.20.20 select sym. (20, 90] actual value 37.5. \textbf{140} residual epidemics at 12.15.20 (actual symptomatic + asymptomatic people: 200.0).}
\begin{figure}[H]
\center
\includegraphics[scale=0.19]{10kSpontWave2.png}
\caption{First wave with non-pharmaceutical containment measures, spontaneous second wave, without specific measures}
\label{selSpontWave2}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Selecting realistic second waves, with new infections from outside}
% selectResults10kStableSeedsCPoints_basicControlB_schoolOpenSeptChangingWorldNewStart_plusHMlog
\begin{table}[H]
\center
\tiny
\begin{tabular}{lrrr}
\toprule
{} & symptomatic Dec20 & totalInfected\&Deceased Dec20 & duration \\
\midrule
count & 1044.00 & 1044.00 & 1407.00 \\
mean & 588.67 & 1474.10 & 527.85 \\
std & 251.96 & 618.87 & 184.76 \\
\bottomrule
\end{tabular}
\label{selForceWave2Tab}
%\caption{a caption}
\end{table}
{\tiny
1,407 epidemics out of 10,000 stable in Summer 2020, with: at 6.1.20 select sym. (10, 70] actual v. 33.3 \& at 9.20.20 select sym. (20, 90] actual value 37.5. \textbf{1,044} residual epidemics at 12.15.20 (actual symptomatic + asymptomatic people: 200.0).}
\begin{figure}[H]
\center
\includegraphics[scale=0.19]{10kForceWave2.png}
\caption{First wave with non-pharmaceutical containment measures, forcing the second wave, without specific measures}
\label{selForceWave2}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Selecting realistic second waves, with new infections from outside}
% selectResults10kStableSeedsCPoints_basicControlB_schoolOpenSeptOctControlChangingWorldNewStart_plusHMlog
\begin{table}[H]
\center
\tiny
\begin{tabular}{lrrr}
\toprule
{} & symptomatic Dec20 & totalInfected\&Deceased Dec20 & duration \\
\midrule
count & 874.00 & 874.00 & 1407.00 \\
mean & 223.61 & 594.97 & 404.36 \\
std & 138.52 & 372.63 & 137.51 \\
\bottomrule
\end{tabular}
\label{selSpontWave2Contr2Tab}
%\caption{a caption}
\end{table}
{\tiny
1,407 epidemics out of 10,000 stable in Summer 2020, with: at 6.1.20 select sym. (10, 70] actual v. 33.3 \& at 9.20.20 select sym. (20, 90] actual value 37.5. \textbf{874} residual epidemics at 12.15.20 (actual symptomatic + asymptomatic people: 200.0).}
\begin{figure}[H]
\center
\includegraphics[scale=0.19]{10kForceWave2Contr2.png}
\caption{First wave with non-pharmaceutical containment measures, forcing the second wave, \textbf{with new specific non-pharmaceutical containment measures}}
\label{selForceWave2Contr2}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Selecting realistic second waves, with new infections from outside}
% selectResults10kStableSeedsCPoints_basicControlB_schoolOpenSeptNoFragControlChangingWorldNewStart_plusHMlog
\begin{table}[H]
\center
\tiny
\begin{tabular}{lrrr}
\toprule
{} & symptomatic Dec20 & totalInfected\&Deceased Dec20 & duration \\
\midrule
count & 987.00 & 987.00 & 1407.00 \\
mean & 286.09 & 753.06 & 443.05 \\
std & 164.33 & 424.90 & 151.25 \\
\bottomrule
\end{tabular}
\label{selSpontWave2NoFragTab}
%\caption{a caption}
\end{table}
{\tiny
1,407 epidemics out of 10,000 stable in Summer 2020, with: at 6.1.20 select sym. (10, 70] actual v. 33.3 \& at 9.20.20 select sym. (20, 90] actual value 37.5. \textbf{987} residual epidemics at 12.15.20 (actual symptomatic + asymptomatic people: 200.0).}
\begin{figure}[H]
\center
\includegraphics[scale=0.19]{10kForceWave2NoFrag.png}
\caption{First wave with non-pharmaceutical containment measures, forcing the second wave; \textbf{in second wave, uniquely stop to fragile people of any kind (including workers)}}
\label{selForceWave2NoFrag}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{To recap}
\begin{table}[H]
\center
\footnotesize
\begin{tabular}{lrrrrrr}
\toprule
Scenarios & & total sym. & total sym., & duration~~~~ \\
& & & asympt., deceased \\
\midrule
no control in first wave \\
{} & count & 10000.00 & 10000.00 & 10000.00 \\
{} & mean & 969.46 & 2500.45 & 303.10 \\
{} & std & 308.80 & 802.88 & 93.50 \\
\midrule
basic controls in first wave \\
{} & count & 10000.00 & 10000.00 & 10000.00 \\
{} & mean & 344.22 & 851.64 & 277.93 \\
{} & std & 368.49 & 916.41 & 213.48 \\
\midrule
basic controls in first wave \\
forcing realistic second & count & 1044.00 & 1044.00 & 1407.00 \\
wave, without new controls & mean & 588.67 & 1474.10 & 527.85 \\
{} & std & 251.96 & 618.87 & 184.76 \\
\midrule
basic controls in first wave \\
forcing realistic second & count & 874.00 & 874.00 & 1407.00 \\
wave, with new controls & mean & 223.61 & 594.97 & 404.36 \\
{} & std & 138.52 & 372.63 & 137.51 \\
\midrule
basic controls in first wave \\
forcing realistic second & count & 987.00 & 987.00 & 1407.00 \\
wave, with stop to frag. & mean & 286.09 & 753.06 & 443.05 \\
people (incl. workers) & std & 164.33 & 424.90 & 151.25 \\
\bottomrule
\end{tabular}
\caption{Report of the key results, with count, mean, and std}
\label{keyResultsT}
\end{table}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{New use cases of the model}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Exploring vaccinations}
Exploring vaccination sequences (using \emph{genetic algorithms} or \emph{reinforcement learning}).
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A new model}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{A new model: the map}
\begin{figure}[H]
\center
\includegraphics[scale=0.25]{Piem1.png}~~~~~~~~~\includegraphics[scale=0.25]{Piem2.png}
\caption{3D Piedmont}
\label{Piem}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{A new model: the scale and the items}
\begin{itemize}
\item $1:100$.
\item \textit{Infection engine}, \url{https://terna.to.it/simul/InfectionEngine.pdf}.
\item Houses.
\item Schools.
\item Hospitals.
\item Nursing homes.
\item Factories.
\color{red}
\item Transportation.
\item Aggregation places: happy hours, night life, sport stadiums, discotheques, \ldots
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{The tool: S.L.A.P.P.}
Scientific advertising: \url{https://terna.github.io/SLAPP/}
\begin{figure}[H]
\center
\includegraphics[scale=0.26]{SLAPP.png}
\caption{Swarm-Like Agent Protocol in Python}
\label{SLAPP}
\end{figure}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Final remarks}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{A few considerations:}
\begin{itemize}
\item
The importance of High Performance Computing.
\item
From S.I.s.a.R. to a model of the society and of the economy of Piedmont.
\bigskip
\item The S.I.s.a.R. model is a tool for comparative analyses, not for forecasting (the enormous standard deviation values are intrinsic to the problem).
\item The model is highly parametric, and it will become even more so.
\item A new crisis calling for immediate simulation could take substantial advantage of the parametric structure of the model.
\end{itemize}
\bigskip
The slides are at \url{https://terna.to.it/simul/TernaIRES20210223.pdf}.
My homepage \url{https://terna.to.it} and my address \url{[email protected]}.
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{}
\begin{figure}[H]
\center
\includegraphics[scale=0.09]{mappaEpidemieNelleNuvole.jpeg}
\end{figure}
\
The image, elaborating Fig. \ref{noC}, is dedicated to Pietro Greco (Barano d'Ischia, April $20^{\text{\tiny th}}$, 1955---Ischia, December $18^{\text{\tiny th}}$, 2020)
\end{frame}
\end{document} | {
"alphanum_fraction": 0.6101252379,
"avg_line_length": 30.8281036835,
"ext": "tex",
"hexsha": "3c9d5d1cf0cfc926a88be0f9d36d37f2de0ea57d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c2e3d75c4d0b232b30d4e8b9b5c123659100ded1",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "terna/contagionSequencesPaperSlides",
"max_forks_repo_path": "contagionSequences&heatMapsIRES_slides.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c2e3d75c4d0b232b30d4e8b9b5c123659100ded1",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "terna/contagionSequencesPaperSlides",
"max_issues_repo_path": "contagionSequences&heatMapsIRES_slides.tex",
"max_line_length": 467,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c2e3d75c4d0b232b30d4e8b9b5c123659100ded1",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "terna/contagionSequencesPaperSlides",
"max_stars_repo_path": "contagionSequences&heatMapsIRES_slides.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6029,
"size": 22597
} |
\chapter{Appendix: Family Motion Planning}
\label{chap:appendix-family}
\begin{figure}
\centering
\includegraphics[width=10.5cm]{build/family-suction-example-graph}
\caption[Family belief graph for the 2D example problem from
Figure~\ref{fig:family:example} in Chapter~\ref{chap:family}.
The underlying family consists of five sets:
$A$, $B$, $C$, $S_{12}$, and $S_{23}$.
Transitions between states are shown for indicators
$\mathbf{1}_A$,
$\mathbf{1}_B$,
and $\mathbf{1}_C$,
with solid lines
showing transitions in which the indicator returns True,
and dashed lines
showing transitions in which the indicator returns False.
Also shown is the distance function and optimal policy
for a query subset $S_u = S_{23}$,
with green beliefs as goal states.
Bolded edges
represent transitions on an optimal policy.
]{Family belief graph for the 2D example problem from
Figure~\ref{fig:family:example} in Chapter~\ref{chap:family}.
The underlying family consists of five sets:
$A$, $B$, $C$, $S_{12}$, and $S_{23}$.
Transitions between states are shown for indicators
$\mathbf{1}_A$ (\protect\tikz{\protect\node[fill=red,draw=black]{};}),
$\mathbf{1}_B$ (\protect\tikz{\protect\node[fill=green!70!black,draw=black]{};}),
and $\mathbf{1}_C$ (\protect\tikz{\protect\node[fill=blue,draw=black]{};}),
with solid lines (\protect\tikz{\protect\draw[thick,solid] (0,0) -- (0.15,0.15);})
showing transitions in which the indicator returns True,
and dashed lines (\protect\tikz{\protect\draw[thick,densely dotted] (0,0) -- (0.15,0.15);})
showing transitions in which the indicator returns False.
Also shown is the distance function and optimal policy
for a query subset $S_u = S_{23}$,
with green beliefs as goal states.
Bolded edges (\protect\tikz{\protect\draw[ultra thick,solid] (0,0) -- (0.15,0.15);})
represent transitions on an optimal policy.
}
\label{fig:family:appendix-suction-example-graph}
\end{figure}
| {
"alphanum_fraction": 0.671245855,
"avg_line_length": 47.9772727273,
"ext": "tex",
"hexsha": "332327c412fc1da0519a6ed40f18ef85234cf7d7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "62ca559db0ad0a6285012708ef718f4fde4e1dcd",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "siddhss5/phdthesis-dellin",
"max_forks_repo_path": "thesis-ch06-family-appendix.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "62ca559db0ad0a6285012708ef718f4fde4e1dcd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "siddhss5/phdthesis-dellin",
"max_issues_repo_path": "thesis-ch06-family-appendix.tex",
"max_line_length": 97,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "62ca559db0ad0a6285012708ef718f4fde4e1dcd",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "siddhss5/phdthesis-dellin",
"max_stars_repo_path": "thesis-ch06-family-appendix.tex",
"max_stars_repo_stars_event_max_datetime": "2018-09-06T21:45:42.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-09-06T21:45:42.000Z",
"num_tokens": 600,
"size": 2111
} |
\documentclass{article}
% if you need to pass options to natbib, use, e.g.:
% \PassOptionsToPackage{numbers, compress}{natbib}
% before loading nips_2016
%
% to avoid loading the natbib package, add option nonatbib:
%\usepackage[nonatbib]{nips_2016}
%\usepackage{nips_2016}
% to compile a camera-ready version, add the [final] option, e.g.:
\usepackage[final]{nips_2016}
\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc} % use 8-bit T1 fonts
\usepackage{hyperref} % hyperlinks
\usepackage{url} % simple URL typesetting
\usepackage{booktabs} % professional-quality tables
\usepackage{amsfonts} % blackboard math symbols
\usepackage{nicefrac} % compact symbols for 1/2, etc.
\usepackage{microtype} % microtypography
\usepackage{graphicx}
\usepackage{makecell}
\usepackage{algorithm}% http://ctan.org/pkg/algorithms
\usepackage{algpseudocode}%
\usepackage{bm}
\usepackage{amsmath}
\usepackage{makecell}
\title{Formatting instructions for BME 590L: Machine Learning in Imaging Final Project}
% The \author macro works with any number of authors. There are two commands
% used to separate the names and addresses of multiple authors: \And and \AND.
%
% Using \And between authors leaves it to LaTeX to determine where to break the
% lines. Using \AND forces a line break at that point. So, if LaTeX puts 3 of 4
% authors names on the first line, and the last on the second line, try using
% \AND instead of \And before the third author name.
\author{%
David S.~Hippocampus\thanks{Use footnote for providing further information
about author (webpage, alternative address)---\emph{not} for acknowledging
funding agencies.} \\
Department of Computer Science\\
Cranberry-Lemon University\\
Pittsburgh, PA 15213 \\
\texttt{[email protected]} \\
% examples of more authors
% \And
% Coauthor \\
% Affiliation \\
% Address \\
% \texttt{email} \\
% \AND
% Coauthor \\
% Affiliation \\
% Address \\
% \texttt{email} \\
% \And
% Coauthor \\
% Affiliation \\
% Address \\
% \texttt{email} \\
% \And
% Coauthor \\
% Affiliation \\
% Address \\
% \texttt{email} \\
}
\begin{document}
% \nipsfinalcopy is no longer used
\maketitle
\begin{abstract}
The abstract paragraph should be indented \nicefrac{1}{2}~inch (3~picas) on
both the left- and right-hand margins. Use 10~point type, with a vertical
spacing (leading) of 11~points. The word \textbf{Abstract} must be centered,
bold, and in point size 12. Two line spaces precede the abstract. The abstract
must be limited to one paragraph.
\end{abstract}
\section{Instructions}
Hi guys, here's a template to use for your final project. Please read the instructions below. Feel free to delete all of the text and/or use snippets that are useful as you complete the paper.
\subsection{Retrieval of style files}
The style files for NeurIPS and other conference information are available on
the World Wide Web at
\begin{center}
\url{http://www.neurips.cc/}
\end{center}
The file \verb+neurips_2018.pdf+ contains these instructions and illustrates the
various formatting requirements your NeurIPS paper must satisfy.
The only supported style file for NeurIPS 2018 is \verb+neurips_2018.sty+,
rewritten for \LaTeXe{}. \textbf{Previous style files for \LaTeX{} 2.09,
Microsoft Word, and RTF are no longer supported!}
The \LaTeX{} style file contains three optional arguments: \verb+final+, which
creates a camera-ready copy, \verb+preprint+, which creates a preprint for
submission to, e.g., arXiv, and \verb+nonatbib+, which will not load the
\verb+natbib+ package for you in case of package clash.
\section{Headings: first level}
\label{headings}
All headings should be lower case (except for first word and proper nouns),
flush left, and bold.
First-level headings should be in 12-point type.
\subsection{Headings: second level}
Second-level headings should be in 10-point type.
\subsubsection{Headings: third level}
Third-level headings should be in 10-point type.
\section{Citations, figures, tables, references}
\label{others}
You can cite something using ~\cite{Lecun15}.
You can reference a figure as \ref{example_figure_label}. Then this is how you include a figure:
%%%%%%%%%%%%
%EXAMPLE FIGURE
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{example_figure.pdf}
\caption{here is an example figure }
\label{example_figure_label}
\end{figure}
%%%%%%%%%%%%
\subsection{Tables}
All tables must be centered, neat, clean and legible. The table number and
title always appear before the table. See Table~\ref{sample-table}.
Place one line space before the table title, one line space after the
table title, and one line space after the table. The table title must
be lower case (except for first word and proper nouns); tables are
numbered consecutively.
Note that publication-quality tables \emph{do not contain vertical rules.} We
strongly suggest the use of the \verb+booktabs+ package, which allows for
typesetting high-quality, professional tables:
\begin{center}
\url{https://www.ctan.org/pkg/booktabs}
\end{center}
This package was used to typeset Table~\ref{sample-table}.
\begin{table}
\caption{Sample table title}
\label{sample-table}
\centering
\begin{tabular}{lll}
\toprule
\multicolumn{2}{c}{Part} \\
\cmidrule(r){1-2}
Name & Description & Size ($\mu$m) \\
\midrule
Dendrite & Input terminal & $\sim$100 \\
Axon & Output terminal & $\sim$10 \\
Soma & Cell body & up to $10^6$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{itemize}
\item This is how you make a list of things
\end{itemize}
\subsubsection*{Acknowledgments}
Use unnumbered third level headings for the acknowledgments. All acknowledgments
go at the end of the paper. Do not include acknowledgments in the anonymized
submission, only in the final paper.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Bibliography
\begin{thebibliography}{99}
\bibitem{Lecun15} Y. LeCun, Y. Bengio and G. Hinton, ``Deep Learning," Nature {\bf 521}, 436--444 (2015).
\end{thebibliography}
\end{document}
%%%%%%%%%%%%%%%%%%%%%%
%%Experimental results table, thin smear, with center LED
%\begin{table}[b]
% \caption{DP-CNN classification of {\it P. falciparum} infection, thin smear, with center LED}
% \label{sample-table}
% \centering
% \begin{tabular}{lllllllll}
% \toprule
% \multicolumn{2}{c}{ } & \multicolumn{7}{c}{Illumination Type \& Classification Score} \\
% \cmidrule{3-9}
% Data & Value & Center & Off-axis & DPC & All & Random & PC Ring & Optim. \\
% \midrule
% %Red-only & Average & 0.829 & 0.713 & 0.708 & 0.710 & 0.724 & 0.845 & 0.862 \\
% %& Majority & 0.849 & 0.722 & 0.723 & 0.722 & 0.759 & 0.854 & {\bf 0.894} \\
% %& STD & 0.013 & 0.015 & 0.011 & 0.012 & 0.046 & 0.011 & 0.018 \\
% \midrule
% %This was the second run with the PC Ring data, I messed up the first run
% Red-only & Average & 0.839 & 0.772 & 0.708 & 0.781 & 0.762 & 0.881 & 0.890 \\
% & Majority & 0.845 & 0.796 & 0.715 & 0.797 & 0.797 & 0.891 & {\bf 0.906} \\
% & STD & 0.017 & 0.016 & 0.011 & 0.018 & 0.045 & 0.011 & 0.011 \\
% \midrule
% Green-only & Average & 0.873 & 0.736 & 0.717 & 0.727 & 0.772 & 0.900 & 0.897 \\
% & Majority & 0.885 & 0.739 & 0.721 & 0.755 & 0.834 & 0.911 & {\bf 0.915} \\
% & STD & 0.022 & 0.010 & 0.017 & 0.012 & 0.046 & 0.011 & 0.016 \\
% \midrule
% Blue-only & Average & 0.869 & 0.745 & 0.792 & 0.753 & 0.813 & 0.881 & 0.887 \\
% & Majority & 0.875 & 0.761 & 0.816 & 0.766 & 0.875 & 0.895 & {\bf 0.922} \\
% & STD & 0.012 & 0.015 & 0.006 & 0.014 & 0.032 & 0.014 & 0.017 \\
% \midrule
% RGB, cent init & Average & 0.917 & 0.779 & 0.750 & 0.727 & 0.785 & 0.879 & 0.928 \\
% & Majority & 0.935 & 0.799 & 0.770 & 0.744 & 0.833 & 0.911 &{\bf 0.946} \\
% & STD & 0.005 & 0.006 & 0.009 & 0.022 & 0.028 & 0.011 & 0.007 \\
% \bottomrule
% \end{tabular}
%\end{table}
%%%%%%%%%%%%%%%%%%%%%
%Experimental results table, thin smear, with center LED
%NOTE 11/7/18: This table is before adding the PC-ring results, in which case I took both the
%PC Ring score from the new run, and the optimized case score from the new one
% The rest of the scores, including the center score, did not change at all really
%\begin{table}[b]
% \caption{DP-CNN classification of {\it P. falciparum} infection, thin smear, with center LED}
% \label{sample-table}
% \centering
% \begin{tabular}{llllllll}
% \toprule
% \multicolumn{2}{c}{ } & \multicolumn{6}{c}{Illumination Type \& Classification Score} \\
% \cmidrule{3-8}
% Data & Value & Center & Off-axis & DPC & All & Random & Optim. \\
% \midrule
% Red-only & Average & 0.829 & 0.713 & 0.708 & 0.710 & 0.724 & {\bf 0.852} \\
% & Majority & 0.849 & 0.722 & 0.723 & 0.722 & 0.759 & {\bf 0.868} \\
% & STD & 0.013 & 0.015 & 0.011 & 0.012 & 0.046 & 0.016 \\
% \midrule
% Green-only & Average & 0.873 & 0.736 & 0.717 & 0.727 & 0.772 & {\bf 0.883} \\
% & Majority & 0.885 & 0.739 & 0.721 & 0.755 & 0.834 & {\bf 0.900} \\
% & STD & 0.022 & 0.010 & 0.017 & 0.012 & 0.046 & 0.016 \\
% \midrule
% Blue-only & Average & 0.869 & 0.745 & 0.792 & 0.753 & 0.813 & {\bf 0.883} \\
% & Majority & 0.875 & 0.761 & 0.816 & 0.766 & 0.875 & {\bf 0.910} \\
% & STD & 0.012 & 0.015 & 0.006 & 0.014 & 0.032 & 0.023 \\
%% \midrule
%% RGB 3K, rand init & Average & 0.875 & 0.747 & 0.739 & 0.713 & 0.766 & 0.877 \\
%% & Majority & 0.893 & 0.752 & 0.748 & 0.724 & 0.830 & 0.901 \\
%% & STD & 0.017 & 0.022 & 0.016 & 0.020 & 0.032 & 0.018 \\
% \midrule
% RGB, cent init & Average & 0.918 & 0.779 & 0.750 & 0.727 & 0.785 & {\bf 0.930} \\
% & Majority & 0.933 & 0.799 & 0.770 & 0.744 & 0.833 & {\bf 0.941} \\
% & STD & 0.005 & 0.006 & 0.009 & 0.022 & 0.028 & 0.007 \\
% \bottomrule
% \end{tabular}
%\end{table}
| {
"alphanum_fraction": 0.6485384768,
"avg_line_length": 35.5406360424,
"ext": "tex",
"hexsha": "eac4b9c0d8c95f921f44de7c7b3bad8dc80224cc",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b49fd3fceaad7c816380ff017a64c35d790a4d6d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hc270/BME-590-Spring-2020",
"max_forks_repo_path": "Report/main_text.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b49fd3fceaad7c816380ff017a64c35d790a4d6d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hc270/BME-590-Spring-2020",
"max_issues_repo_path": "Report/main_text.tex",
"max_line_length": 192,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b49fd3fceaad7c816380ff017a64c35d790a4d6d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "hc270/BME-590-Spring-2020",
"max_stars_repo_path": "Report/main_text.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3410,
"size": 10058
} |
\documentclass{article}
\title{Building ON}
\author{Sachidananda Urs}
\date{}
\begin{document}
\maketitle
\begin{abstract}
This document discusses two commonly-used ways to build ON: nightly(1)/bldenv(1)
and make(1)/dmake(1). The former provides a high degree of automation and
fine-grained control over each step in a full or incremental build of the entire
workspace. Using make(1) or dmake(1) directly provides much less automation but
allows you to build individual components more quickly. The instructions in this
document apply to ON and similar consolidations, including SFW. Other
consolidations may have substantially different build procedures, which should
be incorporated here.
\end{abstract}
\subsection*{Environment Variables}
This section describes a few of the environment variables that affect all ON
builds, regardless of the build method.
\begin{description}
\item[CODEMGR\_WS]
This variable should be set to the root of your workspace. It is highly
recommended to use bldenv(1) to set this variable as it will also set several
other important variables at the same time. Originally, CODEMGR\_WS was defined
by TeamWare. Over time, ON's build tools have come to depend on it, so we
continue to use it with Mercurial.
\item[SRC]
This variable must be set to the root of the ON source tree within your
workspace; that is, \${CODEMGR\_WS}/usr/src. It is used by numerous makefiles
and by nightly(1). This is only needed if you are building. bldenv(1) will set
this variable correctly for you.
\item[MACH]
The instruction set architecture of the machine as given by uname -p,
e.g. sparc, i386. This is only needed if you are building. bldenv(1) will set
this variable correctly for you; it should not be changed. If you prefer, you
can also set this variable in your dot-files, and use it in defining PATH and
any other variables you wish. If you do set it manually, be sure not to set it
to anything other than the output of `/usr/bin/uname -p' on the specific machine
you are using:
\begin{verbatim}
Good:
MACH=`/usr/bin/uname -p`
Bad:
MACH=sparc
\end{verbatim}
\item[ROOT]
Root of the proto area for the build. The makefiles direct the installation of
header files and libraries to this area and direct references to these files by
builds of commands and other targets. It should be expressed in terms of
\$CODEMGR\_WS. if bldenv(1) is used, this variable will be set to
\${CODEMGR\_WS}/proto/root\_\${MACH}.
\item[PARENT\_ROOT]
PARENT\_ROOT is the proto area of the parent workspace. This can be used to
perform partial builds in children by referencing already-installed files from
the parent. Setting this variable is optional.
\item[MAKEFLAGS]
This variable has nothing to do with OpenSolaris; however, in order for the
build to work properly, make(1) must have access to the contents of the
environment variables described in this section. Therefore the MAKEFLAGS
environment variable should be set and contain at least ``e''. bldenv(1) will
set this variable for you; it is only needed if you are building. It is possible
to dispense with this by using `make -e' if you are building using make(1) or
dmake(1) directly, but use of MAKEFLAGS is strongly recommended.
\item[SPRO\_ROOT]
This points to the top of the installed compiler tree. People outside Sun will
normally set this to /opt/SUNWspro. You can see how this works by looking at
usr/src/Makefile.master. But if you need to override the default, you should do
so via the environment variable. Note that opensolaris.sh already sets
SPRO\_ROOT to /opt/SUNWspro.
\item[SPRO\_VROOT]
The `V' stands for version. At Sun, multiple versions of the compilers are
installed under \${SPRO\_ROOT} to support building older sources. The compiler
itself is expected to be in \${SPRO\_VROOT}/bin/cc, so you will most likely need
to set this variable to /opt/SUNWspro. Note that opensolaris.sh has this
variable already set to this value.
\item[GNU\_ROOT]
The GNU C compiler is used by default to build the 64-bit kernel for amd64
systems. By default, if building on an x86 host, the build system assumes there
is a working amd64 gcc installation in /usr/sfw. Although it is not recommended,
you can use a different gcc by setting this variable to the gcc install
root. See usr/src/Makefile.master for more information.
\item[\_\_GNUC, \_\_GNUC64]
These variables control the use of gcc. \_\_GNUC controls the use of gcc to build
i386 and sparc (32-bit) binaries, while \_\_GNUC64 controls the use of gcc to
build amd64 and sparcv9 (64-bit) binaries. Setting these variables to the empty
value enables the use of gcc to build the corresponding binaries. Setting them
to `\#' enables Studio as the primary compiler. The default settings use Studio,
with gcc invoked in parallel as a `shadow' compiler (to ensure that code remains
warning and error clean).
\item[CLOSED\_IS\_PRESENT]
This variable tells the ON makefiles whether to look for the closed source
tree. Normally this is set automatically by nightly(1) and bldenv(1).
\item[ON\_CLOSED\_BINS]
If the closed source tree is not present, this variable points to the tree of
unpacked closed binaries.
\end{description}
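For illustration, a hand-built environment tying these variables together would look roughly like the following (the workspace path is hypothetical, and bldenv(1) normally derives all of these values for you):
\begin{verbatim}
CODEMGR_WS=/export/ws/my-on-clone; export CODEMGR_WS
SRC=$CODEMGR_WS/usr/src; export SRC
MACH=`/usr/bin/uname -p`; export MACH
ROOT=$CODEMGR_WS/proto/root_$MACH; export ROOT
MAKEFLAGS=e; export MAKEFLAGS
\end{verbatim}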
\subsection*{Using nightly and bldenv}
There are many steps in building any consolidation; ON's build process entails
creation of the proto area, compiling and linking binaries, generating lint
libraries and linting the sources, building packages and BFU archives, and
verifying headers, packaging, and proto area changes. Fortunately, a single
utility called nightly(1) automates all these steps and more. It is controlled
by a single environment file, the format of which is shared with bldenv(1). This
section describes what nightly(1) does for you, what it does not, and how to use
it.
nightly(1) can automate most of the build and source-level checking
processes. It builds the source, generates BFU archives, generates packages,
runs lint(1), does syntactic checks, and creates and checks the proto area. It
does not perform any runtime tests such as unit, functional, or regression
tests; you must perform these separately, ideally on dedicated systems.
Despite its name, nightly(1) can be run manually or by cron(1M) at any time; you
must run it yourself or arrange to have it run once for each build you want to
do. nightly(1) does not start any daemons or repetitive background activities:
it does what you tell it to do, once, and then stops.
After each run, nightly(1) will leave a detailed log of the commands it ran and
their output. This log is normally located in
\$CODEMGR\_WS/log/log.YYYY-MM-DD.HH:MM/nightly.log, where YYYY-MM-DD.HH:MM is a
timestamp, but can be changed as desired. If such a log already exists,
nightly(1) will rename it for you.
In addition to the detailed log, you (actually, the address specified in the
MAILTO environment variable) will also receive an abbreviated completion report
when nightly(1) finishes. This report will tell you about any errors or warnings
that were detected and how long each step took to complete. It will list errors
and warnings as differences from a previous build (if there is one); this allows
you to see what effects your changes, if any, have had. It also means that if
you attempt a build and it fails, and you then correct the problems and rebuild,
you will see information like:
\begin{verbatim}
< dmake: Warning: Command failed for target `yppasswd'
< dmake: Warning: Command failed for target `zcons'
< dmake: Warning: Command failed for target `zcons.o'
< dmake: Warning: Command failed for target `zdump'
\end{verbatim}
Note the `$<$' - this means this output was for the previous build. If the output
is prefaced with `$>$', it is associated with the most recent build. In this way
you will be able to see whether you have corrected all problems or introduced
new ones.
\subsubsection*{Options}
nightly(1) accepts a wide variety of options and flags that control its
behavior. Many of these options control whether nightly(1) performs each of the
many steps it can automate for you. These options may be specified in the
environment file or on the command line; options specified on the command line
take precedence. See nightly(1) for the complete list of currently accepted
options and their effect on build behavior.
\subsubsection*{Using Environment Files}
nightly(1) reads a file containing a set of environment definitions for the
build. This file is a simple shell script, but normally just contains variable
assignments. Common practice is to start from the developer, gatekeeper, or
opensolaris sample environment file, as appropriate, and modify a copy of it to
meet your needs. The name of the resulting environment file is then passed as the
final argument to nightly(1). The sample environment files are available in
usr/src/tools/env.
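For example, a typical sequence might look like the following sketch; the
workspace path is hypothetical, and the exact names of the sample files should be
checked in usr/src/tools/env.
\begin{verbatim}
$ cp usr/src/tools/env/developer.sh /export/ws/myws/myenv.sh
$ vi /export/ws/myws/myenv.sh     # adjust CODEMGR_WS, MAILTO, NIGHTLY_OPTIONS
$ nightly /export/ws/myws/myenv.sh &
\end{verbatim}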
\subsubsection*{Variables}
Although any environment variables can be set in a nightly(1) environment file,
this section lists those which are used directly by nightly(1) to control its
operation and which are commonly changed. The complete list of variables and
options is found in nightly(1).
\begin{verbatim}
NIGHTLY_OPTIONS
CODEMGR_WS
CLONE_WS
STAFFER
MAILTO
ON_CRYPTO_BINS
MAKEFLAGS
\end{verbatim}
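A fragment of such an environment file might look like the following sketch; all
values are placeholders, and the option letters shown are only the ones discussed
in this chapter.
\begin{verbatim}
NIGHTLY_OPTIONS="-Dap";        export NIGHTLY_OPTIONS  # DEBUG, archives, packages
CODEMGR_WS="/export/ws/myws";  export CODEMGR_WS       # workspace root
STAFFER="jdoe";                export STAFFER          # user running the build
MAILTO="$STAFFER";             export MAILTO           # completion report recipient
MAKEFLAGS="e";                 export MAKEFLAGS        # let make read the environment
\end{verbatim}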
\subsubsection*{DEBUG and Non-DEBUG Builds}
The ON sources contain additional debugging support and self-checks (assertions)
that can be enabled by doing a DEBUG build. This makes the binaries larger and
slower, but unless you are running on older hardware, you are unlikely to notice
the difference. Benchmarking should only be done with non-DEBUG builds.
Non-DEBUG binary deliverables (e.g., closed binaries) are usually tagged with
``-nd'' in the file name. If you are unsure as to which you have, the kernel
will say when it boots if it is a DEBUG kernel. Non-DEBUG closed binaries
unpack into closed/root\_\$MACH-nd; DEBUG closed binaries unpack into
closed/root\_\$MACH. In the source tree, non-DEBUG kernel objects live in
``objNN'' (e.g., ``obj64'' or ``obj32'') directories, while DEBUG kernel objects
live in ``debugNN'' directories.
When building with nightly(1), you can specify a DEBUG build, a non-DEBUG build,
or both, by setting the appropriate flags in NIGHTLY\_OPTIONS. Note that ``D''
turns on the DEBUG build, while ``F'' turns off the non-DEBUG build.
By default, nightly(1) will reuse the proto area for both the DEBUG and
non-DEBUG builds. (This is done to save space.) You can tell nightly(1) to put
the non-DEBUG build in a separate proto area by setting the MULTI\_PROTO
environment variable to ``yes''. In that case, the (default) DEBUG proto area
will be \${CODEMGR\_WS}/proto/root\_\${MACH}, and the (default) non-DEBUG proto
area will be \${CODEMGR\_WS}/proto/root\_\${MACH}-nd. Be careful about
extracting binaries--especially kernel modules--by hand from the proto area. It
doesn't always work to mix DEBUG and non-DEBUG kernel modules.
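For instance, the following sketch illustrates the idea; the option letters other
than ``D'' and ``F'' are incidental examples taken from this chapter.
\begin{verbatim}
NIGHTLY_OPTIONS="-Dap";  export NIGHTLY_OPTIONS  # both DEBUG and non-DEBUG builds
NIGHTLY_OPTIONS="-DFap"; export NIGHTLY_OPTIONS  # DEBUG build only
MULTI_PROTO="yes";       export MULTI_PROTO      # separate proto area per build type
\end{verbatim}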
When building with bldenv(1), you can build either DEBUG binaries or non-DEBUG
binaries, but not both at the same time. bldenv(1) is not smart enough to look
at NIGHTLY\_OPTIONS to determine the build type; see CR 6414851 for details.
Instead, bldenv(1) defaults to a non-DEBUG build. Use ``bldenv -d'' to get a
DEBUG build.
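For example, assuming the opensolaris.sh environment file used elsewhere in this
chapter:
\begin{verbatim}
$ bldenv opensolaris.sh      # non-DEBUG build environment (the default)
$ bldenv -d opensolaris.sh   # DEBUG build environment
\end{verbatim}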
\subsection*{Using Make}
Although nightly(1) can automate the entire build process, including
source-level checks and generation of additional build products, it is not
always necessary. If you are working in a single subdirectory and wish only to
build or lint that subdirectory, it is usually possible to do this directly
without relying on nightly(1). This is especially true for the kernel, and if
you have not made changes to any other part of your workspace, it is
advantageous to build and install only the kernel during intermediate
testing. See section 5.2 for more information on installing test kernels.
You will need to set up your environment properly before using make(1) directly
on any part of your workspace; you can use bldenv(1) to accomplish this.
Because the makefiles use numerous make(1) features, some versions of make will
not work properly. Specifically, you cannot use BSD make or GNU make to build
your workspace. The dmake(1) included in the OpenSolaris tools distribution will
work properly, as will the make(1) shipped in /usr/ccs/bin with Solaris and some
other distributions. If your version of dmake is older (or, in some cases,
newer), it may fail in unexpected ways. While both dmake(1) and make(1) can be
used, dmake(1) is normally recommended because it can run multiple tasks in
parallel or even distribute them to other build servers. This can improve build
times greatly.
The entire uts directory allows you to run make commands in a particular build
subdirectory to complete only the build steps you request on only that
particular subdirectory. Most of the cmd and parts of the lib subdirectories
also allow this; however, some makefiles have not been properly configured to
regenerate all dependencies. All subdirectories which use the modern object file
layout should generally work without any problems.
There are several valid targets which are common to all directories; to find
out which ones apply to your directory of interest and which additional targets
may be available, you will need to read that directory's makefile.
Here are the generic targets:
\begin{description}
\item[all]
Build all derived objects in the object directory.
\item[install]
Install derived objects into the proto area defined by \${ROOT}.
\item[install\_h]
Install header files into the proto area defined by \${ROOT}.
\item[clean]
Remove intermediate object files, but do not remove ``complete'' derived files
such as executable programs, libraries, or kernel modules.
\item[clobber]
Remove all derived files.
\item[check]
Perform source-specific checks such as source and header style conformance.
\item[lint]
Generate lint libraries and run all appropriate lint passes against all code
which would be used to build objects in the current directory.
\end{description}
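As a sketch of how these targets are typically used, building and installing only
the kernel from a prepared environment might look like this:
\begin{verbatim}
$ bldenv opensolaris.sh
$ cd $SRC/uts      # or any other subdirectory that supports direct builds
$ dmake install    # build and install into the proto area ($ROOT)
\end{verbatim}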
\subsection*{Build Products}
A fully-built source tree is not very useful by itself; the binaries, scripts,
and configuration files that make up the system are still scattered throughout
the tree. The makefiles provide the install targets to produce a proto area and
package tree, and other utilities can be used to build additional
conglomerations of build products in preparation for integration into a full Wad
Of Stuff build or installation on one or more systems for testing and further
development. The nightly(1) program can automate the generation of each build
product. Alternately, the Install program can be used to construct a kernel
installation archive directly from a fully-built usr/src/uts.
The sections below describe the proto area, which is the most basic collection of
built objects and the first to be generated by the build system; the BFU archives,
which are used to upgrade a system with a full OpenSolaris-based distribution
installation to the latest ON bits; the construction of the package tree for ON
deliverables; and the construction of deliverables that can be posted on
opensolaris.org.
\subsubsection*{Proto Area}
The install target causes binaries and headers to be installed into a hierarchy
called the prototype or proto area. Since everything in the proto area is
installed with its usual paths relative to the proto area root, a complete
proto area looks like a full system install of the ON bits. However, a proto
area can be constructed by arbitrary users, so the ownership and permissions of
its contents will not match those in a live system. The root of the proto area
is defined by the environment variable ROOT. Normally nightly(1) and bldenv(1)
will set ROOT for you. The usual value for ROOT is
\${CODEMGR\_WS}/proto/root\_\${MACH}, though it may be different for non-DEBUG
builds.
The proto area is useful if you need to copy specific files from the build into
your live system. It is also compared with the parent's proto area and the
packaging information by tools like protocmp and checkproto to verify that only
the expected shipped files have changed as a result of your source changes.
If you're checking a current ON workspace, then you can simply build the
protocmp make target in usr/src/pkg:
\begin{verbatim}
$ bldenv opensolaris.sh
$ cd $SRC/pkg
$ dmake protocmp
\end{verbatim}
Otherwise, you may invoke protocmp directly.
protocmp does not include a man page. Its invocation is as follows:
\begin{verbatim}
protocmp [-gupGUPlmsLv] [-e <exception-list> ...]
-d <protolist|pkg dir> [-d <protolist|pkg dir> ...]
[<protolist|pkg dir>...]|<root>]
Where:
-g : don't compare group
-u : don't compare owner
-p : don't compare permissions
-G : set group
-U : set owner
-P : set permissions
-l : don't compare link counts
-m : don't compare major/minor numbers
-s : don't compare symlink values
-d <protolist|pkg dir>:
proto list or SVr4 packaging directory to check
-e <file>: exceptions file
-L : list filtered exceptions
-v : verbose output
\end{verbatim}
If any of the -[GUP] flags are given, then the final argument must be the proto
root directory itself, on which permissions will be set according to the
packaging data specified via -d options.
A protolist is a text file with information about each file in a proto area or
package manifest, one per line. The information includes: file type (plain,
directory, link, etc.), full path, link target, permissions, owner uid, owner
gid, i-number, number of links, and major and minor numbers.
For a current ON workspace, you can generate a protolist corresponding to the
package manifests by building the protolist make target in usr/src/pkg:
\begin{verbatim}
$ bldenv opensolaris.sh
$ cd $SRC/pkg
$ dmake protolist
\end{verbatim}
This will generate the file \$SRC/pkg/protolist\_`uname -p`.
You can generate protolists from a proto area with the protolist command, which
also does not include a man page. Its invocation is as follows:
\begin{verbatim}
$ protolist <protoroot>
\end{verbatim}
where protoroot is the proto area root (normally \$ROOT). Redirecting its output
yields a file suitable for passing to protocmp via the -d option or as the
final argument.
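Putting the two tools together, a manual comparison might look like the following
sketch; the file names are hypothetical, and the roles of the arguments are
explained below.
\begin{verbatim}
$ protolist $ROOT > my_proto.list        # protolist of the local proto area
$ protocmp -e exception.list -d parent_proto.list my_proto.list
\end{verbatim}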
The last argument to protocmp always specifies either a protolist or proto area
root to be checked or compared. If a -d option is given with a protolist file
as its argument, the local proto area will be compared with the specified
reference protolist and lists of files which are added, missing, or changed in
the local proto area will be provided. If a -d option is given with a package
definitions directory as its argument, the local proto area will be checked
against the definitions provided by the package descriptions and information
about discrepancies will be provided.
The exceptions file (-e) specifies which files in the proto area are not to be
checked. This is important, since otherwise protocmp expects that any files
installed in the proto area which are not part of any package represent a
package definition error or spurious file in the proto area.
This comparison is automatically run as part of your nightly(1) build so long
as the -N option is not included in your options. See nightly(1) for more
information on automating proto comparison as part of your automatic build.
In addition to the protolist and protocmp build targets described above, as a
shortcut to using protolist and protocmp, you can use the `checkproto' command
found in the developer/opensolaris/onbld package. This utility does not include
a man page, but has a simple invocation syntax:
\begin{verbatim}
$ checkproto [-X] <workspace>
\end{verbatim}
The exception files and packaging information will be selected for you, and the
necessary protolists will be generated automatically. Use of the -X option, as
for nightly(1), will instruct checkproto to check the contents of the realmode
subtree, which normally is not built.
You can find the sources for protolist, protocmp, and checkproto in \\
usr/src/tools. The resulting binaries are included in the \\
developer/opensolaris/onbld package.
\subsubsection*{BFU Archives}
BFU archives are cpio-format archives (see cpio(1) and archives(4)) used by
bfu(1) to install ON binaries into a live system. The actual process of using
bfu(1) is described in greater detail in the man page.
BFU archives are built by mkbfu, which does not include a man page. Its invocation is as follows:
\begin{verbatim}
$ mkbfu [-f filter] [-z] proto-dir archive-dir
\end{verbatim}
The -f option allows you to specify a filter program which will be applied to
the cpio archives. This is normally used to set the proper permissions on files
in the archives, as discussed in greater detail below.
The -z option causes the cpio archives to be compressed with gzip(1). If both
-f and -z are given, the compression will occur after the filter specified by
-f is applied.
proto-dir is the proto area root described in the section above and normally given
by the ROOT environment variable.
archive-dir is the directory in which the archives should be created. If it
does not exist it will be created for you.
However, `mkbfu' is rarely used. Instead, `makebfu' offers a much simpler
alternative. It has no man page, but the invocation is simple:
\begin{verbatim}
$ makebfu [filename]
\end{verbatim}
If an argument is given, it refers to an environment file suitable for
nightly(1) or bldenv(1). Otherwise, `makebfu' assumes that bldenv(1) or
equivalent has already been used to set up the environment appropriately.
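So a typical sequence is simply something like:
\begin{verbatim}
$ bldenv opensolaris.sh    # set up the build environment
$ makebfu                  # build BFU archives from the existing proto area
\end{verbatim}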
`makebfu' is a wrapper around `mkbfu' that feeds package and environment
information to `cpiotranslate' to construct an appropriate filter for the
archives. This filter's purpose is to set the correct mode, owner, and group on
files in the archives. This is needed to allow builds run with ordinary user
privileges to produce BFU archives with the correct metadata. Without root
privileges, there is no way for the build user to set the permissions and
ownership of files in the proto area, and without filtering, the cpio archives
used by BFU would be owned by the build user and have potentially incorrect
permissions. `cpiotranslate' uses package definitions to correct both. See
usr/src/tools/protocmp/cpiotranslate.c to learn how this works.
Each archive contains one subset of the proto area. Platform-specific
directories are broken out into separate archives (this can simplify installing
on some systems since only the necessary platform-specific files need to be
installed), as are /, /lib, /sbin, and so on. The exact details of the files
included in the archives can be found only by reading the latest version of
mkbfu.
BFU archives are built automatically as part of your nightly(1) build if -a is
included in your options. Also, if -z is included in your options, it will be
passed through to mkbfu. See nightly(1) and the section above for more information on
automating BFU archive construction as part of your automatic build.
mkbfu and makebfu are Korn shell scripts included in the \\
developer/opensolaris/onbld package. Their sources are located in \\
usr/src/tools/scripts.
cpiotranslate is a C program included in the developer/opensolaris/onbld
package. Its source is located in usr/src/tools/protocmp/cpiotranslate.c.
\subsubsection*{Packages}
The -p option to nightly will package the files that were installed into your
proto area. If your workspace contains the directory usr/src/pkgdefs, this
will result in SVr4 packages. If instead it contains usr/src/pkg, it will
produce an IPS package repository. In either case, this is done automatically
by the main makefile's all and pkg\_all targets (see usr/src/Makefile) as part
of a successful build. SVr4 packages or IPS package repositories are placed in
\${CODEMGR\_WS}/packages. See nightly(1) and the section above for more information on
automating package construction as part of your automatic build.
\subsubsection*{OpenSolaris Deliverables}
A project--including the ON gate--may wish to post tarballs on opensolaris.org
for the benefit of other developers or users. nightly(1) will generate tarballs
for the closed binaries (if the project has access to the closed source) and for
the signed cryptographic binaries.
This section explains how to use nightly(1) to produce these tarballs. It also
sketches what nightly(1) does to make this work, and it explains project team
responsibilities, especially when introducing new code.
\subsubsection*{Generating Deliverables}
To generate tarballs that can be posted on opensolaris.org, add `O' (upper case
oh) to NIGHTLY\_OPTIONS. When nightly(1) finishes, you will have these files in
your workspace top-level directory, assuming that you are doing both DEBUG and
non-DEBUG builds:
\begin{verbatim}
README.opensolaris
on-closed-bins-nd.$MACH.tar.bz2
on-closed-bins.$MACH.tar.bz2
on-crypto.$MACH.tar.bz2
on-crypto-nd.$MACH.tar.bz2
\end{verbatim}
These files are ready to post.
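Continuing the environment-file sketch from earlier, this amounts to adding the
letter to NIGHTLY\_OPTIONS, for example:
\begin{verbatim}
NIGHTLY_OPTIONS="-DapO"; export NIGHTLY_OPTIONS  # also build opensolaris.org tarballs
\end{verbatim}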
\subsubsection*{How It Works}
This section describes how the deliverables listed in the section above are
generated.
The README is created from a template and two files from opensolaris.org (the
current issues list and the installation quick-start instructions).
To help external developers produce builds that are as functional as possible,
we include closed binaries that correspond to the given source snapshot.
External developers can then bake these binaries into their own builds.
The first step in producing the closed binaries is to create a shadow proto area
that contains only closed binaries. This is done as part of the normal
build--when a closed makefile installs a file into the proto area, it installs a
second copy into the closed proto area. The second step is to filter out
binaries that cannot be included in the tarball.
The crypto tarball contains signed cryptographic binaries from one of two
sources. If the build is a ``signing'' build (usually an ON gate build), the
binaries come from the workspace's proto area. These binaries have been signed
with a special key and certificate that let the binaries be run anywhere. If
the build is not a signing build, the workspace's binaries can only be run
inside Sun/Oracle. In that case, nightly copies a tarball from a signing build
from the ON gate.
Some open source licenses require that a copy of the license be included with
binary deliveries. For this reason nightly(1) includes an aggregated list of
licenses in the closed-bins and crypto tarballs. The list of license files is
extracted from the package manifest files, then filtered to eliminate licenses
that don't apply to the binaries in question.
Most license files are static (that is, they are checked in like any source
file). A few are derived dynamically from other files in the source tree. Each
license file has a corresponding ``description'' file (e.g.,
THIRDPARTYLICENSE.descrip). This file contains the one-line description that
identifies the software covered by the license.
\subsubsection*{Team Responsibilities}
If you are updating or introducing new code, there are three things you need to
look at to support opensolaris.org deliveries: third-party licenses,
confidential code (internal teams adding source to the closed tree), and crypto
code.
For third-party license files, the basic requirement is to keep the bookkeeping
straight. So when you update code that already has a third-party license,
review the license text for changes (e.g., new copyright years). If there are
changes, make sure they are propagated as needed. In a few cases, you will have
nothing to do, because the build just drops in the license file that we get from
upstream. In most cases, you will need to copy the changes to the static
third-party license file that lives in the source directory. In the cases where
the license text is dynamically extracted, you can do a ``make install'' and
then review the generated license file, to make sure that the extraction code is
still working.
If you add new code that has a third-party license, review the instructions in
usr/src/README.license-files.
If you delete code that has a third-party license, check what license(s) cover
any remaining code in the package. If there is nothing left that is covered by
a particular license, then you should remove that license from the package's
manifest.
If you are on a Sun/Oracle-internal team and have to deal with confidential
code, determine whether the resulting deliverables (e.g., binaries) can be
included in the closed binaries tarball. Contact [email protected] if you
need help with this. If the file cannot be included, add it to the exclusion
list in usr/src/tools/scripts/bindrop.sh.
If you are adding a new location in the source tree for crypto code, make sure
the new location is listed in usr/src/tools/scripts/mktpl.pl (look for the
declaration of \$iscrypto).
\end{document}
\section{Related Work}
\label{sec:related-work}
There is a vast literature on automatically repairing or patching programs:
% for specific properties (\eg null-dereferences, memory safety),
we focus on the most closely related work on providing feedback for novice errors.
\mypara{Example-Based Feedback}
%
Recent work uses \emph{counterexamples} that show how
a program went wrong, for type errors \cite{Seidel2016-ul}
or for general correctness properties where the generated
inputs show divergence from a reference implementation or
other correctness oracle~\cite{Song_2019}.
In contrast, we provide feedback on how to fix the error.
\mypara{Fault Localization} Several authors have studied the problem of
\emph{fault localization}, \ie winnowing down the set of locations that are
relevant for the error, often using slicing
\citep{Wand1986-nw,Haack2003-vc,Tip2001-qp,Rahli2015-tt}, counterfactual typing
\citep{Chen2014-gd} or bayesian methods \citep{Zhang2014-lv}.
%
\textsc{Nate}~\citep{Seidel:2017} introduced the BOAT representation,
and showed it could be used for accurate localization.
%
We aim to go beyond localization, into suggesting concrete \emph{changes} that
novices can make to understand and fix the problem.
\mypara{Repair-model based feedback}
%
\textsc{Seminal} \citep{Lerner2007-dt} \emph{enumerates} minimal fixes using an
expert-guided heuristic search.
%
The above approach is generalized to general correctness properties by
\cite{singh2013} which additionally performs a \emph{symbolic} search using a
set of expert provided \emph{sketches} that represent possible repairs.
%
In contrast, \toolname learns a template of repairs from a corpus yielding
higher quality feedback (\S~\ref{sec:eval}).
\mypara{Corpus-based feedback}
%
\textsc{Clara} \citep{Gulwani_2018} uses code and execution traces to match a
given incorrect program with a ``nearby'' correct solution obtained by
clustering all the correct answers for a particular task. The matched
representative is used to extract repair expressions.
%
Similarly, \textsc{Sarfgen} \citep{Wang_2018} focuses on structural and
control-flow similarity of programs to produce repairs, by using AST vector
embeddings to calculate distance metrics (to ``nearby'' correct
programs) more robustly.
%
\textsc{Clara} and \textsc{Sarfgen} are data-driven, but both assume
there is a ``close'' correct sample in the corpus.
%
In contrast, \toolname has a more general philosophy that \emph{similar errors
have similar repairs}: we extract generic fix templates that can be applied to
arbitrary programs whose errors (BOAT vectors) are similar.
%
The \textsc{Tracer} system \citep{TRACER2018} is closest in philosophy to ours,
except that it focuses on single-line compilation errors for C programs, where
it shows that NLP-based methods like sequence-to-sequence predicting DNNs can
effectively suggest repairs, %\eg \verb+scanf("%d", a)+ should be converted to
%\verb+scanf("%d", %a)+
but this does not scale up to fixing general type errors.
%
We have found that \ocaml's relatively simple
\emph{syntactic} structure but rich \emph{type}
structure make token-level seq-to-seq methods
quite imprecise (\eg \emph{deleting} offending
statements suffices to ``repair'' C but yields
ill-typed \ocaml) necessitating \toolname's
higher-level semantic features and (learned)
repair templates.
\textsc{Hoppity} \citep{Dinella_2020} is
a \dnn-based approach for fixing buggy
JavaScript programs. \textsc{Hoppity}
treats programs as graphs that are fed
to a \emph{Graph Neural Network} to
produce fixed-length embeddings, which
are then used in an LSTM model that
generates a sequence of primitive
edits of the program graph.
%
\textsc{Hoppity} is one of the few tools
that can repair errors spanning multiple
locations. However, it relies solely on
the learned models to generate a sequence
of edits, so it does not guarantee that
the returned programs are valid JavaScript.
%
In contrast, \toolname uses the learned
models to get appropriate error locations
and fix templates, but then uses a synthesis
procedure to always generate type-correct programs.
\textsc{Getafix} \citep{Bader_2019} and
\textsc{Revisar} \citep{Rolim_2018} are
two more systems that learn fix patterns
using AST-level differencing on a corpus
of past bug fixes.
%
They both use \emph{anti-unification}
\citep{Kutsia_2014} for generalizing
expressions and, thus, grouping together
fix patterns.
%
They cluster together bug fixes in order
to reduce the search space of candidate
patches.
%
While \textsc{Revisar} \citep{Rolim_2018}
ends up with one fix pattern per bug category
using anti-unification, \textsc{Getafix}
\citep{Bader_2019} builds a hierarchy of
patterns that also include the context
of the edit to be made.
%
They both keep before and after expression pairs as
their fix patterns, and they use the before expression
as a means to match an expression in a new buggy program
and replace it with the after expression.
%
While these methods are quite effective,
they are only applicable to recurring
bug categories, \eg how to deal with a
null pointer exception.
%
\toolname, on the other hand, attempts
to generalize fix patterns even further by
using the GAST abstractions, and predicts
proper error locations and fix patterns
with a model learned from the corpus of
bug fixes, and so can be applied to a
diverse variety of errors.
\textsc{Prophet} \citep{Long_2016} is another technique that uses a corpus of
fixed buggy programs to learn a probabilistic model that will rank candidate
patches. Patches are generated using a set of predefined transformation schemas
and condition synthesis. \textsc{Prophet} uses logistic regression to learn the
parameters of this model and uses over 3500 extracted program features to do so.
It also uses an instrumented recompile of a faulty program together with some
failing input test cases to identify what program locations are of interest.
While this method can be highly accurate for error localization, their
experimental results show that it can take up to 2 hours to produce a valid
candidate fix. In contrast, \toolname's pretrained models make finding proper
error locations and possible fix templates more robust.
\chapter{clean} \label{clean}
\section{Introduction}
An LPE may contain redundant information, for example because some of its summands contain all behavior of other summands or because some of its summands will never be (or become) enabled.
Obviously, such summands can be removed without hesitation, which may result in a speedup in other computations in which the LPE is input.
The \texttt{clean} command attempts to detect redundant summands and removes the ones that it finds.
The state space of the LPE is not affected by the changes.
\section{Formal background}
\subsection{Summand containment}
Consider two summands, $s_\alpha$ and $s_\beta$, and reference their elements as defined in \ref{summandelements}.
Summand $s_\alpha$ is said to \emph{contain} $s_\beta$ if these two conditions hold:
\begin{itemize}
\item $s_\alpha$ and $s_\beta$ must communicate over the exact same channel with exactly the same number of channel variables; that is, $C_\alpha = C_\beta \land m_\alpha = m_\beta$.
\item Define the mapping
\begin{align*}
X_\alpha = [x_\beta(j) \rightarrow x_\alpha(j) \;|\; 1 \leq j \leq \text{min}(m_\alpha, m_\beta)]
\end{align*}
The following conditions must also hold:
\begin{align*}
g_\beta[X_\alpha] &\rightarrow g_\alpha \\
v_\alpha(p) = v_\beta(p)[X_\alpha] &\leftrightarrow \text{\textit{True}} \text{ for all } p \in [p_1, \cdots{}, p_k]
\end{align*}
The first condition demands that whenever $s_\beta$ is enabled, $s_\alpha$ must also be enabled; consequently, $s_\alpha$ is `always ready' to perform the same behavior as $s_\beta$.
The second condition demands that the state after the application of $s_\alpha$ is the same as the state after the application of $s_\beta$, regardless of the values of communication variables and LPE parameters.
\end{itemize}
Note that $s_\alpha$ could contain all behavior of $s_\beta$ without these two conditions being true (we under-approximate by tolerating false negatives)!
\subsection{Summand reachability}
Consider summands $s_\alpha$ and $s_\beta$, referencing their elements as defined in \ref{summandelements}.
Summand $s_\alpha$ is said to be a \emph{possible predecessor} of $s_\beta$ if the following expression \emph{could be} satisfiable:
\begin{align*}
g_\alpha \land {g_\beta}[v \rightarrow q(v) \;|\; v \in \varsof{g_\beta} \setminus P][p \rightarrow v_\alpha(p) \;|\; p \in P]
\end{align*}
where $q(v)$ is a bijective function that relates variable $v$ to a fresh variable.
A summand $s_\beta$ is said to be \emph{reachable} if at least one of these conditions holds:
\begin{itemize}
\item It is possible that summand $s_\beta$ is enabled in the initial state of the LPE.
Formally, $s_\beta$ is reachable if the LPE is initialized as
\begin{align*}
P(v_I(p_1), \cdots{}, v_I(p_m))
\end{align*}
and if
\begin{align*}
g_\beta[p \rightarrow v_I(p) \;|\; p \in P]
\end{align*}
\emph{could be} satisfiable.
\item There exists at least one other summand $s_\alpha$ in the same LPE that is a possible predecessor of $s_\beta$.
It is \emph{not} sufficient if the only possible predecessor of $s_\beta$ is $s_\beta$ itself!
\end{itemize}
Summand reachability is \emph{over-approximated}.
For example, if it is uncertain whether a summand is enabled in the initial state, the summand is considered to be reachable.
\section{Algorithm}
The algorithm consists of 2 phases.
In the first phase, the algorithm does a pairwise comparison of all summands of the LPE.
If it discovers that some summand $s_1$ contains another summand $s_2$, it removes $s_2$ from the LPE.
In the second phase, the algorithm starts by determining which summands are reachable in the initial state of the LPE (the first condition for reachability in the previous section).
These summands form the initial value of the set $X$.
Then, until a fixpoint is reached, all summands that have a possible predecessor in $X$ are added to $X$.
The output LPE of the algorithm contains all summands that are in the fixpoint of $X$.
\section{Example}
Consider the following LPE:
\begin{lstlisting}
//Process definition:
PROCDEF example[A :: Int](x :: Int)
= A ? i [[x==0]] >-> example[A](1)
+ A ? j [[x==1]] >-> example[A](2)
+ A ? k [[x==1]] >-> example[A](2)
+ A ? l [[x==2]] >-> example[A](0)
+ A ? m [[x>=2]] >-> example[A](0)
;
//Initialization:
example[A](0);
\end{lstlisting}
The \texttt{clean} command will detect that the second and third summands of the LPE are equivalent and remove one of them.
It will also detect that the fifth summand contains the fourth summand, and therefore remove the fourth summand.
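As an illustrative sketch of the latter containment check (using the notation of the formal background, with $s_\alpha$ the fifth summand, $s_\beta$ the fourth summand, and therefore $X_\alpha = [l \rightarrow m]$), the two conditions instantiate to
\begin{align*}
g_\beta[X_\alpha] \rightarrow g_\alpha &\;:\; (x = 2) \rightarrow (x \geq 2) \\
v_\alpha(x) = v_\beta(x)[X_\alpha] &\;:\; 0 = 0
\end{align*}
Both hold, and both summands communicate over channel \texttt{A} with a single channel variable, so the fifth summand indeed contains the fourth.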
Now consider
\begin{lstlisting}
//Process definition:
PROCDEF example[A :: Int, B](x :: Int)
= A ? i [[i != 13]] >-> example[A, B](i)
+ B [[x == 13]] >-> example[A, B](x)
;
//Initialization:
example[A, B](0);
\end{lstlisting}
The second summand is unreachable: it is not enabled in the initial state (since \texttt{x = 0}) and it is never enabled after the application of the first summand (since \texttt{i != 13}).
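As a sketch of the underlying check, instantiate the possible-predecessor expression of the formal background with $s_\alpha$ the first summand and $s_\beta$ the second; since $g_\beta$ only refers to the parameter $x$, no fresh variables are introduced, and with $v_\alpha(x) = i$ the expression becomes
\begin{align*}
g_\alpha \land g_\beta[x \rightarrow v_\alpha(x)] = (i \neq 13) \land (i = 13)
\end{align*}
which is unsatisfiable. The initial-state check $g_\beta[x \rightarrow 0]$, i.e. $0 = 13$, is not satisfiable either, so the second phase of the algorithm removes the second summand.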
\section{Benchmark results}
The following durations were measured with a benchmark for several models:
\begin{itemize}
\item The average duration of \txs{} to make 500 steps in a model without converting it to LPE form or applying LPE operations;
\item The average duration of \txs{} to make 500 steps in a model after it has been converted to LPE form;
\item The average duration of \txs{} to make 500 steps in a model after it has been converted to LPE form and after the \texttt{clean} operation has been applied.
\end{itemize}
When plotting the second series of measurements against the first (see Figure~\ref{lpe-only-vs-original:fig}), the following conclusions can be drawn:
\begin{itemize}
\item The effect of the LPE transformation on performance is negative in most cases.
\item The effect of the LPE transformation on performance is positive in some cases, but not dramatically so.
\item For one model (\texttt{Adder3}), the effect of the LPE transformation is extremely positive (5.2 seconds instead of 12.5).
Unfortunately, this model is not very realistic since it simply consists of three parallel processes without any synchronization.
\end{itemize}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.8\linewidth]{charts/lpe-only-vs-original}
\caption{Benchmark results: LPE transformation vs original}
\label{lpe-only-vs-original:fig}
\end{center}
\end{figure}
It is possible that the \texttt{clean} operation has a significant positive influence on the performance of \txs{}.
Based on a plot of the third series of measurements against the second (see Figure~\ref{clean-vs-lpe-only:fig}), however, this cannot be concluded: only part of the models show slight (but significant) improvements, whereas other models suffer an (also slight) additional performance loss.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.5\linewidth]{charts/clean-vs-lpe-only}
\caption{Benchmark results: clean vs LPE transformation}
\label{clean-vs-lpe-only:fig}
\end{center}
\end{figure}
\subsection{Go to}
\subsection{While-do}
\subsection{If-then}
\subsection{For each loops}
\subsection{Loop x times}
\subsection{Defining and calling functions}
\documentclass[10pt,a4paper]{book}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\author{Daniel Frederico Lins Leite}
\title{Guide to Aristotle}
\begin{document}
\chapter{Modern Scholars}
\section{Giovanni Reale}
\section{Jonathan Barnes}
Books\\
Aristotle: A very short introduction\\
\\
No man before him had contributed so much to learning. No man after him might aspire to rival his achievements.\\
\\
In one of his later works, the Nicomachean Ethics, Aristotle argues that ``happiness'' -- that state of mind in which men realize themselves and flourish best -- consists in a life of intellectual activity. Is not such a life too godlike for mere mortals to sustain? No; for ``we must not listen to those who urge us to think human thoughts since we are human, and mortal thoughts since we are mortal; rather, we should as far as possible immortalize ourselves and do all we can to live by the finest element in us -- for if in bulk it is small, in power and worth it is far greater than anything else''.
\\
A good way of reading him is this: Take up a treatise, think of it as a set of lecture notes, and imagine that you now have to lecture from them. You must expand and illustrate the argument, and you must make the transitions clear; you will probably decide to relegate certain paragraphs to footnotes, or reserve them for another time and another lecture;
\\
The Cambridge Companion to Aristotle\\
The Cambridge History of Hellenistic Philosophy\\
The Complete Works of Aristotle
\chapter{Timeline}
Spring 322: Retire to Chalcis (Euboea Island)
\end{document}
\documentclass[12pt]{article}
\renewcommand\abstractname{\textbf{ABSTRACT}}
%----------Packages----------
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{amsrefs}
\usepackage{dsfont}
\usepackage{mathrsfs}
\usepackage{stmaryrd}
\usepackage[all]{xy}
\usepackage[mathcal]{eucal}
\usepackage{verbatim} %%includes comment environment
\usepackage{fullpage} %%smaller margins
\usepackage{times}
\usepackage{multicol}
\usepackage{booktabs}
\usepackage{graphicx}
\usepackage{float}
%\usepackage{cite}
\usepackage{setspace}
%----------Commands----------
%%penalizes orphans
\clubpenalty=9999
\widowpenalty=9999
\providecommand{\abs}[1]{\lvert #1 \rvert}
\providecommand{\norm}[1]{\lVert #1 \rVert}
\providecommand{\x}{\times}
\usepackage{sectsty}
\usepackage{lipsum}
\usepackage{titlesec}
\titleformat*{\section}{\normalsize\bfseries\scshape}
\titleformat*{\subsection}{\normalsize}
\titleformat*{\subsubsection}{\normalsize\bfseries\filcenter}
\titleformat*{\paragraph}{\normalsize\bfseries\filcenter}
\titleformat*{\subparagraph}{\normalsize\bfseries\filcenter}
\usepackage{indentfirst}
\providecommand{\ar}{\rightarrow}
\providecommand{\arr}{\longrightarrow}
%\hyphenpenalty 10000
%\exhyphenpenalty 10000
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{fancyhdr}
\usepackage[top=1in,bottom=1in,right=1in,left=1in,headheight=200pt]{geometry}
\pagestyle{fancy}
\lhead{$<$Group Name$>$}
\chead{}
\rhead{$<$Method Name$>$}
\cfoot{\thepage}
\titlespacing*{\section}{0pt}{0.75\baselineskip}{0.1\baselineskip}
\usepackage[labelfont=sc]{caption}
\captionsetup{labelfont=bf}
%\doublespacing
\begin{document}
\section{Description of the Method}
\subsection{Please provide a short (1-2 paragraph) summary of the general idea of the method. What does it do and how? If the method has been previously published, please provide a link to that work in addition to answering this and the following questions.}
\subsection{What is the method sensitive to? (e.g. asymmetric line shapes, certain periodicities, etc.)}
\subsection{Are there any known pros/cons of the method? For instance, is there something in particular that sets this analysis apart from previous methods?}
\subsection{What does the method output and how is this used to mitigate contributions from photospheric velocities?}
\subsection{Other comments?}
\section{Data Requirements}
\subsection{What is the ideal data set for this method? In considering future instrument design and observation planning, please comment specifically on properties such as the desired precision of the data, resolution, cadence of observations, total number of observations, time coverage of the complete data set, etc.}
\subsection{Are there any absolute requirements of the data that must be satisfied in order for this method to work? Rough estimates welcome.}
\subsection{Other comments?}
\section{Applying the Method}
\subsection{What adjustments were required to implement this method on \texttt{EXPRES} data? Were there any unexpected challenges?}
\subsection{How robust is the method to different input parameters, i.e. does running the method require a lot of tuning to be implemented optimally?}
\subsection{What metric does the method (or parameter tuner) use internally to decide if the method is performing better or worse? (e.g. nightly RMS, specific likelihood function, etc.)}
\subsection{Other comments?}
\section{Reflection on the Results}
\subsection{Did the method perform as expected? If not, are there adjustments that could be made to the data or observing schedule to enhance the result?}
\subsection{Was the method able to run on all exposures? If not, is there a known reason why it failed for some exposures?}
\subsection{If the method is GP based, please provide the kernel used and the best-fit hyper parameters with errors. Otherwise, just write ``Not Applicable.''}
\subsection{Other comments?}
\section{General Comments}
\end{document}
\documentclass[a4paper,11pt,twocolumn]{article}
\input{settings/packages}
\input{settings/page}
\input{settings/macros}
\title{\Large\textbf{Functionality of Arduino MEGA2560 (REV3) Peripheral Circuitry}}
\author{Thalagala B.P.\hspace{1cm}180631J}
\date{\today}
\begin{document}
\maketitle
%-------------------------------------------------------------------------
The Arduino MEGA2560 is a microcontroller board based on the ATmega2560. It has 54 digital input/output pins (of which 15 can be used as PWM outputs), 16 analog inputs, 4 UARTs (hardware serial ports), a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button\cite{arduino}.
\begin{center}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.6]{figures/top}
\caption{Top view of the Arduino MEGA 2560 REV3 board\cite{arduino}}
\end{figure}
\end{center}
%--------------------------------------------------------------------------------------
\section{Power Regulator Circuit}
The power regulator circuit is used to regulate the input voltage of the board from a voltage in the range of 7V-12V down to +5V, which is the operating voltage of the Arduino MEGA board. It consists of the following electronic components.
\begin{center}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.3]{figures/vreg}
\caption{Power regulator schematic diagram\cite{arduino}}
\end{figure}
\end{center}
\subsection{Barrel connector}
This barrel type connector is used to supply DC power (\textit{6V-12V is recommended}) to the board from an external power source.
\subsection{Reverse Voltage Protection Diode}
If the polarity of the input supply voltage is accidentally reversed, the board may get damaged. The M7 diode (\textit{the SMD version of the 1N4007 diode}) provides protection in such situations.
\subsection{Electrolytic Capacitors}
Polarized capacitors are used to reduce small fluctuations (electrical noise) in the input voltage/current. Capacitors with a fairly high capacitance (47$\mu$F) are used for this purpose as they can store a large electric charge.
\subsection{Non-polarized Capacitors}
Although electrolytic capacitors have a high capacitance, they are considerably slow to discharge due to their high series resistance (capacitive reactance). As a solution to this drawback, the electrolytic capacitor is discharged through a non-polarized capacitor with a very low capacitance (100nF) and an extremely small discharging time.
\subsection{LD1117S50CTR Voltage Regulator IC}
This IC continues to work even when the input voltage is very close to the required output, and it is able to handle currents of up to 800 mA. It can regulate voltages in the range of 6.5V to 15V down to the 5V level\cite{lowdrop}.
\section{Clock Circuit}
Microcontrollers use a clock signal to trigger events and keep track of time, as everything happens with respect to time. The clock circuitry is responsible for generating this clock signal, which is basically an electronic signal that periodically toggles between two states, like a square wave\cite{osci}.
\begin{center}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.3]{figures/clock}
\caption{External oscillator schematic diagram\cite{arduino}}
\end{figure}
\end{center}
\subsection{Crystal Oscillator}
The Arduino MEGA2560 uses a 16 MHz oscillator to generate this clock signal. It consists of a \textit{piezoelectric crystal resonator}\cite{crystal} and two ceramic capacitors that adjust the resonance frequency.
\section{In-system Programming/ In-circuit Serial Programming Circuit}
The Arduino MEGA2560 board has two ICSP headers, named ICSP1 and ICSP. Both headers have 6 pins arranged in a 2$\times$3 array. The six pins are MISO, MOSI, SCK, V+ (+5V), Ground, and Reset\cite{icsp}.
\begin{center}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.35]{figures/programmer}
\caption{ICSP headers in an Arduino MEGA2560 board\cite{arduino}}
\end{figure}
\end{center}
\subsection{ATMEGA16U2 microcontroller}
The firmware for USB-to-serial conversion is stored here. This allows programs to be uploaded to the main ATMEGA2560 microcontroller simply through the USB cable.
\subsection{ICSP1}
The ICSP1 header is used to program the USB-to-serial converter microcontroller (\textit{ATMEGA16U2}).
\subsection{ICSP}
The ICSP header is used to directly program the main microcontroller of the Arduino MEGA2560 board whenever the USB-to-serial converter cannot be used, for example due to a failure of the converter. To do this, an additional ISP/ICSP programmer (\textit{a separate piece of electronic circuitry}) is needed.
\section{Reset Circuit}
There are two mechanisms to reset the Arduino MEGA2560 board: one is manual, while the other is performed by the USB-to-serial converter chip. The reset pin is \textit{active low}, i.e., the microcontroller gets reset whenever the pin receives a low-state signal.
\subsection{Manual Method}
To reset the microcontroller manually, there is a TS42-type push-button switch on the board. When it is pressed, the reset pin is grounded, which resets the microcontroller.
\begin{center}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{figures/capture}
\caption{Manual resetting\cite{arduino}}
\end{figure}
\end{center}
\subsection{Reset through the ATMEGA16U2}
In this method, the microcontroller gets reset whenever a connection is made with the Arduino IDE through the USB cable. The required DTR (Data Terminal Ready) signal is sent by the ATMEGA16U2 to the RESET pin of the ATMEGA2560 at the beginning of each connection.
\begin{center}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.3]{figures/SOFT}
\caption{Automatic resetting\cite{arduino}}
\end{figure}
\end{center}
{\scriptsize
\bibliographystyle{plain}
\bibliography{reference} }
\end{document}
\chapter{Experiments}
\label{chap:Experiments}
This chapter describes the different experiments we ran as well as their respective results. It focuses heavily on the task at hand, which is to \emph{predict the top 50 most frequently diagnosed ICD9 codes at discharge in a multi-class, multi-label classification setting}. \\
The first section describes the setup in which the experiments are run, including the optimizer and the hardware. We then present the evaluation metrics and detail how we picked the deep learning hyper-parameters. Finally, we report our experiments with quantitative results on these metrics and conclude with the final ICD9 prediction evaluation.
\section{Setup}
\label{sec:Setup}
All the experiments and training sessions were run on two \textbf{Tesla P100} GPUs with 16 GB of memory. The CPU was an \textbf{Intel Xeon Gold 5120 CPU @ 2.20 GHz} with \textbf{56 cores}, and we had access to \textbf{187 GB of RAM} on the machine. On top of that, the different scripts and models were built with \textit{Python 3.6.7}, and we used \textit{Pytorch 1.0.0} as our Deep Learning framework. \\
We employ mini-batch stochastic gradient descent together with the Adam optimizer~\cite{DBLP:journals/corr/KingmaB14} to minimize our multi-class, multi-label classification loss. \\
To facilitate reading and understanding, we break down the different hyper-parameters into three groups:
\begin{itemize}
\item Knowledge Graph
\begin{itemize}
\item $M$: number of neighbors
\item Characters n-grams
\item Weighted Personalized PageRank threshold
\item Disease - Symptom threshold
\item Admission - Symptom threshold
\item Admission - Disease threshold
\end{itemize}
\item Admission Processing
\begin{itemize}
\item $N$: number of chunks
\item $K$: number of events per chunk
\item $\tau$: chunk length
\end{itemize}
\item Deep Learning
\begin{itemize}
\item $n$: events embedding dimension (potentially different for each event type)
\item $a$: aggregators hidden dimension (for both input entity and neighbors, potentially different for each event type)
\item Input entity aggregator type (max, mean or sum, potentially different for each event type)
\item Neighbor entities aggregator type (max, mean or sum)
\item RNN: number of layers
\item RNN: number of neurons per layer
\item RNN: dropout between layers
\item RNN: bidirectional or not
\item $b$: Recurrent Neural Network output dimension
\item Batch size
\end{itemize}
\end{itemize}
To reduce the search space, we constrained our hyper-parameters by fixing the same embedding dimension and aggregator for all events. Furthermore, we fixed the \emph{Knowledge Graph} and \emph{Admission Processing} hyper-parameters while tuning the \emph{Deep Learning} ones, and conversely fixed the \emph{Deep Learning} hyper-parameters when tuning the \emph{Knowledge Graph} and \emph{Admission Processing} ones. \\
This way, we limit the computation time while getting a good idea of the top-performing configurations and of the interactions between these hyper-parameters. We suppose that the scores reported for the final task on the testing set could be further improved by fine-tuning all these hyper-parameters together instead of group by group. \\
Finally, we split the \emph{58'976} admissions into a training and a testing set using an 80\%--20\% split. We then further held out 10\% of the training set for validation purposes (the training set thus contains 72\% of the total admissions, while the validation set represents 8\% of them). \\
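As a minimal sketch of this splitting procedure (our own illustration, with placeholder identifiers and an assumed random seed rather than the actual admission list), the three sets could be produced as follows:
\begin{verbatim}
# Illustration only: 80%-20% train/test split, then 10% of the
# training set held out for validation (72% / 8% / 20% overall).
from sklearn.model_selection import train_test_split

admission_ids = list(range(58976))  # placeholder identifiers

train_ids, test_ids = train_test_split(admission_ids, test_size=0.20,
                                        random_state=42)
train_ids, val_ids = train_test_split(train_ids, test_size=0.10,
                                      random_state=42)
\end{verbatim}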
As a training strategy, we monitor different metrics, explained in section~\ref{sec:Metrics}, at each epoch and employ an early-stopping criterion on the validation loss with a patience of 20 epochs. That is, if the validation loss does not decrease within 20 epochs, the model is saved at its best-scoring epoch and the training is considered finished.
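The following sketch illustrates this early-stopping logic; it is our own simplified version (not the actual training code), where \texttt{train\_one\_epoch} and \texttt{evaluate} are placeholder callables and the model is assumed to expose a PyTorch-style \texttt{state\_dict()}:
\begin{verbatim}
# Simplified early stopping on the validation loss (patience = 20).
import copy

def fit(model, train_one_epoch, evaluate, max_epochs=500, patience=20):
    best_loss = float("inf")
    best_state = copy.deepcopy(model.state_dict())
    epochs_since_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch(model)          # one pass over the training set
        val_loss = evaluate(model)      # validation loss for this epoch
        if val_loss < best_loss:
            best_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_since_improvement = 0
        else:
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                break                   # no improvement for 20 epochs
    model.load_state_dict(best_state)   # restore best-scoring epoch
    return model
\end{verbatim}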
\newpage
\section{Metrics}
\label{sec:Metrics}
\paragraph{F1 Score} The $F_1$ score is a particular case of the $F_\beta$ score which blends the well-known precision and recall scores. The $F_\beta$ score is computed as follows:
\begin{equation}
F_\beta = (1+\beta^2) \cdot \frac{\mbox{Precision} \cdot \mbox{Recall}}{(\beta^2 \cdot \mbox{Precision}) + \mbox{Recall}}
\end{equation}
Basically, the $\beta$ parameter dictates the importance we want to give to precision or recall. By setting $\beta=1$, we give equal importance to both of them, and the above formula is equivalent to computing the harmonic mean of precision and recall. Our choice of $\beta=1$ is common and usually a good trade-off when there is no obvious preference between precision and recall.
\paragraph{Area Under Receiver Operating Characteristic} Often abbreviated \textit{AUROC}, this metric derives from the receiver operating characteristic curve, which is computed from the ``true positive rate'' and ``false positive rate''. This curve is particularly interesting since it reports the ability of our classifier to discriminate between positive and negative samples as the decision threshold varies. \\
Generally speaking, the area under this ROC curve equals the probability that the classifier will rank a random positive sample \textbf{higher} than a random negative one.
\paragraph{Macro and Micro Averages} To account for the slight class imbalance depicted in the top-right panel of figure~\ref{fig:icd9-codes}, we consider two kinds of averages over class scores, namely \emph{macro} and \emph{micro} averages. \\
The macro average computes the metric independently for each class and then averages globally, thus treating our different ICD9 codes equally. On the other hand, the micro average first aggregates the contributions of all classes before computing the metric. \\
We decided to compute both averages for our two metrics, leading to four combinations that we report in the experiments in section~\ref{sec:Experiments} and list below; a short example of how they can be computed follows the list:
\begin{itemize}
\item Macro-F1 Score
\item Micro-F1 Score
\item Macro-AUROC
\item Micro-AUROC
\end{itemize}
\newpage
\section{Deep Learning Hyper-parameter Tuning}
For the hyper-parameter tuning of deep learning parameters, we fixed the \emph{Knowledge Graph} and \emph{Admission Processing} hyper-parameters to the following values:
\begin{itemize}
\item Knowledge Graph
\begin{itemize}
\item $M=10$
\item Characters n-grams = 3 characters
\item Weighted Personalized PageRank threshold = 0.0001
\item Disease - Symptom threshold = 0.2
\item Admission - Symptom threshold = 0.6
\item Admission - Disease threshold = 0.6
\end{itemize}
\item Admission Processing
\begin{itemize}
\item $N = 200$
\item $K = 25$
\item $\tau = 3 \mbox{ hours}$
\end{itemize}
\end{itemize}
\subsection{Method}
All the scores were computed on the validation set, leaving the testing set untainted to prevent overfitting. The best-scoring model was chosen accordingly, based on its validation loss. \\
We employed a simple \emph{random search} as the strategy for sampling the next set of hyper-parameters during the optimization. This typically yields better results than a grid search, which tends to explore only a small subset of the search space. Random search has also been shown to perform well when the number of \textit{trials} is limited ($\leq 100$ in our case) over a very large search space, on top of being easier to implement than more advanced sampling strategies.
\subsection{Search space}
The search space in our case consisted of:
\begin{itemize}
\item $n \in \{8, 16, 32, 64, 128\}$, same for all event types
\item $a=64$, same for input entity, neighbors and all event types
\item $\mbox{agg}_{\mbox{input}} \in \{\mbox{max}, \mbox{mean}, \mbox{sum}\}$, same for all event types
\item $\mbox{agg}_{\mbox{neighbors}} \in \{\mbox{max}, \mbox{mean}, \mbox{sum}\}$, same for all event types
\item $\mbox{RNN}_{\mbox{layers}} \in \{1, 2, 3, 4, 5\}$
\item $\mbox{RNN}_{\mbox{neurons}} \in \{64, 128, 256, 512\}$
\item $\mbox{RNN}_{\mbox{dropout}} \in \{0, 0.1, 0.25, 0.5\}$
\item $\mbox{RNN}_{\mbox{bidirectional}} \in \{\mbox{True}, \mbox{False}\}$
\item $\mbox{RNN}_{\mbox{output size}} = 64$
\item $\mbox{Batch size} \in \{4, 8, 16, 32, 64, 128\}$
\end{itemize}
This leaves us with a search space of $5 \times 3 \times 3 \times 5 \times 4 \times 4 \times 2 \times 6 = 43'200$ possibilities; a sketch of the random sampling over this space is given below.
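The sketch below (our own illustration, with a placeholder evaluation callable) shows how such a random search over this space can be implemented:
\begin{verbatim}
# Random search: sample a configuration uniformly at random from the
# search space and keep the best-scoring one on the validation set.
import random

SEARCH_SPACE = {
    "n":                 [8, 16, 32, 64, 128],
    "agg_input":         ["max", "mean", "sum"],
    "agg_neighbors":     ["max", "mean", "sum"],
    "rnn_layers":        [1, 2, 3, 4, 5],
    "rnn_neurons":       [64, 128, 256, 512],
    "rnn_dropout":       [0, 0.1, 0.25, 0.5],
    "rnn_bidirectional": [True, False],
    "batch_size":        [4, 8, 16, 32, 64, 128],
}

def random_search(evaluate_config, n_trials=100):
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate_config(config)   # e.g. validation Micro-F1
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
\end{verbatim}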
\subsection{Results}
\paragraph{Baseline} We obtained the following best-scoring combination of hyper-parameters for the baseline:
\begin{itemize}
\item $n = 64$
\item $a = 64$
\item $\mbox{agg}_{\mbox{input}} = \mbox{mean}$
\item $\mbox{RNN}_{\mbox{layers}} = 1$
\item $\mbox{RNN}_{\mbox{neurons}} = 128$
\item $\mbox{RNN}_{\mbox{dropout}} = 0.25$
\item $\mbox{RNN}_{\mbox{bidirectional}} = \mbox{True}$
\item $\mbox{RNN}_{\mbox{output size}} = 64$
\item $\mbox{Batch size} = 16$
\end{itemize}
This translates to a particularly lightweight model, indicating the tendency of the baseline to overfit rather quickly.
\paragraph{KG-RNN} We obtained the following best-scoring combination of hyper-parameters for \emph{KG-RNN}:
\begin{itemize}
\item $n = 16$
\item $a = 64$
\item $\mbox{agg}_{\mbox{input}} = \mbox{max}$
\item $\mbox{agg}_{\mbox{neighbors}} = \mbox{mean}$
\item $\mbox{RNN}_{\mbox{layers}} = 2$
\item $\mbox{RNN}_{\mbox{neurons}} = 128$
\item $\mbox{RNN}_{\mbox{dropout}} = 0.5$
\item $\mbox{RNN}_{\mbox{bidirectional}} = \mbox{False}$
\item $\mbox{RNN}_{\mbox{output size}} = 64$
\item $\mbox{Batch size} = 128$
\end{itemize}
It is important to notice that this top-scoring configuration uses a different kind of aggregator for neighboring admissions than for input events. \\
One could argue that using \emph{max-pooling} for input events and \emph{mean-pooling} for the neighbors' final diagnoses makes sense: we can hypothesize that the most salient events matter most for a single patient, while blending the final diagnoses of neighbors accounts for their relative contributions. \\
\section{Experiments}
\label{sec:Experiments}
Each experiment is broken down into a short description and the associated results and graphics, followed by a ``discussion'' section where we interpret the results and plots. \\
All of these are run on the validation set, for the sake of keeping the testing set untainted until the final evaluation of the task at hand, namely ICD9 code prediction. Therefore, the results for the different metrics, presented in section~\ref{sec:Metrics}, are computed on the validation set for all experiments and compared to the baseline scores in each figure. \\
For all these experiments, the base hyper-parameter setup for the \emph{Deep Learning} group is the one from the previous section. For the two other groups, here are the default parameters we chose:
\begin{itemize}
\item Knowledge Graph
\begin{itemize}
\item $M=10$
\item Characters n-grams = 3 characters
\item Weighted Personalized PageRank threshold = 0.0001
\item Disease - Symptom threshold = 0.2
\item Admission - Symptom threshold = 0.6
\item Admission - Disease threshold = 0.6
\end{itemize}
\item Admission Processing
\begin{itemize}
\item $N = 200$
\item $K = 25$
\item $\tau = 3 \mbox{ hours}$
\end{itemize}
\end{itemize}
These hyper-parameters are assumed throughout, except for the one varied in each experiment, which is restated in the relevant subsection. It is important to bear in mind that complex interactions between hyper-parameters are not taken into account in these experiments, since we tune one degree of freedom at a time. \\
It would be very interesting to rerun these experiments with more computational power and time, to allow for much more precise hyper-parameter tuning and to visualize how the metrics react to possibly more complex combinations.
\newpage
\subsection{Chunk length}
In this experiment we vary the number of hours per chunk, i.e.\ the size of the time window considered as the current graph state. It should therefore have an impact on the amount of information fed to \emph{KG-RNN}, especially its aggregators.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/exp-hours.pdf}
\end{figure}
\paragraph{Discussion} We see that the number of hours per chunk does not have a large impact beyond 3 hours. However, setting the chunk length too small (i.e.\ 1 hour) probably makes the graph state too sparse in terms of the number of events.
\newpage
\subsection{Admission length}
This experiment varies the number of chunks per admission, also referred to as the number of graph states. This increases the length of the sequences fed to the downstream Recurrent Neural Network and challenges its capacity to encode long-term dependencies in the input sequences.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/exp-chunks.pdf}
\end{figure}
\paragraph{Discussion} In the above figure, we clearly see that we obtain a local optimum with an admission length of \emph{200} for the F1 score. However, the optimal value for the AUROC score is not so clear cut and further investigation would be beneficial in terms of interaction with the \textbf{chunk length}. \\
Indeed, since the total considered time for an admission is $\mbox{chunk length} \times \mbox{admission length}$, both parameters are intrinsically linked and probably heavily correlated.
\newpage
\subsection{Number of events per chunk}
For this experiment, we vary the \textit{maximum} number of events per chunk and per type. Chunks that have more than this number of events for a given type are randomly sub-sampled down to $K$ events, while the other chunks keep their original number of events ($\leq K$). A minimal sketch of this sub-sampling is given at the end of this subsection.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/exp-events.pdf}
\end{figure}
\paragraph{Discussion} As in the previous experiment, we see a clear optimum for the F1 score at 25 events per chunk and per type. Again, the conclusion is not so obvious for the AUROC score, and this metric does not seem to improve much above $K=15$.
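For illustration, the per-type capping described at the beginning of this subsection could be implemented along these lines (our own sketch, with assumed data structures, not the thesis code):
\begin{verbatim}
# Cap the number of events per chunk and per event type to K by
# random sub-sampling; chunks with at most K events are left untouched.
import random

def cap_events(events_by_type, k):
    capped = {}
    for event_type, events in events_by_type.items():
        if len(events) > k:
            capped[event_type] = random.sample(events, k)
        else:
            capped[event_type] = list(events)
    return capped
\end{verbatim}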
\newpage
\subsection{Number of neighbors}
As a last experiment, we vary the number of neighbors for each input entity. As in the previous experiment, this defines the maximum number of neighbors and does not mean that every entity has exactly $M$ neighbors. Indeed, it is important to remember that we set a minimum \emph{WPPR} threshold of $0.0001$, which may leave some isolated admissions with fewer than $M$ neighbors.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/exp-neighbors.pdf}
\end{figure}
\paragraph{Discussion} The results of this experiment are very interesting, yet difficult to conclude on. We can see that the number of neighbors has a large impact on the F1 metrics, but not so much on the AUROC score. However, it is important to notice that we would expect the bar plots to have a similar shape to the previous ones. \\
That is, we would expect a concave shape, where fewer neighbors mean less information and thus lower scores, whereas too many neighbors would add more noise than information and also hinder results. We would therefore expect an optimal value in between, but instead we see a drastic decrease between \textit{10} and \textit{20} neighbors, followed by an increase in our metrics between \textit{20} and \textit{40} neighbors. Lastly, the scores with \textit{40} neighbors seem slightly better than with \textit{20}.
\newpage
\section{ICD9 Prediction}
\subsection{Quantitative analysis}
From the top-scoring hyper-parameters for the different groups, we selected the corresponding parameters and ran our final evaluation for the task of predicting the top 50 ICD9 codes in a multi-class, multi-label classification scenario. \\
These results are all computed on the \textbf{testing set}, after all the previous hyper-parameter tuning and experiments were run on the validation set. To this end, we left the testing set untainted, so the results of this section for the task at hand are as representative as possible of a real scenario. This testing set consists of 20\% of the dataset, totaling \textit{11'230} admissions. \\
The two following plots compare the performance of the baseline with that of our \emph{KG-RNN}:
\begin{figure}[H]
\setkeys{Gin}{width=\linewidth}
\begin{tabularx}{\textwidth}{XXXX}
\includegraphics{figures/roc-curves.pdf} &
\includegraphics{figures/pr-curves.pdf}
\end{tabularx}
\caption{\textbf{Left}: Receiver operating characteristic curve on the testing set for both the \emph{baseline} and \emph{KG-RNN}. \textbf{Right}: Precision-recall curve on the testing set for both the \emph{baseline} and \emph{KG-RNN}.}
\end{figure}
We also evaluated the results on the different metrics described in section~\ref{sec:Metrics}: \\
\begin{table}[H]
\begin{center}
\begin{tabular}{| p{2cm} | p{2cm} | p{2cm} | p{3cm} |}
\hline
&&&\\
\textbf{Metric} & \textbf{Average} & \textbf{Model} & \textbf{Score} \\
&&&\\ \hline
\multirow{4}{*}{F1} & \multirow{2}{*}{Macro} & Baseline & \textbf{36.52\%} \\
&& KG-RNN & \textbf{37.90\%} (+1.38\%) \\ \cline{2-4}
& \multirow{2}{*}{Micro} & Baseline & \textbf{51.55\%} \\
&& KG-RNN & \textbf{53.47\%} (+1.92\%) \\ \hline
\multirow{4}{*}{AUROC} & \multirow{2}{*}{Macro} & Baseline & \textbf{85.24\%} \\
&& KG-RNN & \textbf{86.29\%} (+1.05\%) \\ \cline{2-4}
& \multirow{2}{*}{Micro} & Baseline & \textbf{90.55\%} \\
&& KG-RNN & \textbf{91.03\%} (+0.48\%) \\ \hline
\multirow{2}{*}{Accuracy} & \multirow{2}{*}{-} & Baseline & \textbf{92.22\%} \\
&& KG-RNN & \textbf{92.36\%} (+0.14\%) \\ \hline
\end{tabular}
\end{center}
\end{table}
\newpage
\paragraph{Discussion} We notice an improvement on each metric for both averages. Yet, the results are very close to each other on the testing set, compared to the validation set where the improvements from \textit{KG-RNN} were stronger. \\
To further assess the quality of our \emph{KG-RNN} model, we decided to use a statistical significance test, McNemar's test~\footnote{\href{https://en.wikipedia.org/wiki/McNemar\%27s\_test}{https://en.wikipedia.org/wiki/McNemar's\_test}}. This test gives us a precise idea of how meaningful the edge of \emph{KG-RNN} over the baseline is. \\
McNemar's test requires building a ``contingency table'' from our results, as follows: \\
\begin{table}[H]
\begin{center}
\begin{tabular}{ p{4cm} | p{4cm} | p{4cm} }
& \textbf{KG-RNN is correct} & \textbf{KG-RNN is incorrect} \\
&& \\ \hline
&& \\
\textbf{Baseline is correct} & 510'590 (\textit{a}) & 7'247 (\textit{b}) \\
&& \\ \hline
&& \\
\textbf{Baseline is incorrect} & 7'984 (\textit{c}) & 35'679 (\textit{d}) \\
&& \\ \hline
\end{tabular}
\end{center}
\end{table}
If we consider $p_b$ and $p_c$ as the theoretical cell probabilities, the null hypothesis and the alternative are:
\begin{equation*}
\begin{aligned}
H_0: &&p_b = p_c \\
H_1: &&p_b \neq p_c
\end{aligned}
\end{equation*}
In other words, \textbf{under the null hypothesis, the two models should have the same error rate}, which we can summarize as follows:
\begin{itemize}
\item \textbf{Fail to Reject Null Hypothesis}: Classifiers have a similar proportion of errors on the testing set.
\item \textbf{Reject Null Hypothesis}: Classifiers have a different proportion of errors on the testing set.
\end{itemize}
Under the null hypothesis and with a sufficient amount of data (which is our case), the statistic follows a chi-squared distribution with 1 degree of freedom. \\
Notably, in our case the \emph{Corrected McNemar} test statistic is: \\
\begin{equation}
\chi^2 = \frac{(|b-c|-1)^2}{b+c} = \frac{(|7247-7984|-1)^2}{7247+7984} = 35.57
\end{equation}
and the associated \textbf{p-value} under $H_0$ is $P \approx 2.5 \times 10^{-9}$, which is extremely small and statistically significant. \\
Hence, our p-value shows sufficient evidence to reject the null hypothesis in favor of the alternative. Therefore, the two models perform differently on this particular testing set, supporting the idea that the edge over the baseline is not due to randomness but rather to genuinely improved predictive power.
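For reference, the corrected statistic and its p-value can be reproduced with a few lines of Python (illustration only, assuming \texttt{scipy} is available); \texttt{b} and \texttt{c} are the discordant counts from the contingency table above:
\begin{verbatim}
# Corrected McNemar statistic and its p-value (chi-squared, 1 d.o.f.).
from scipy.stats import chi2

b, c = 7247, 7984
statistic = (abs(b - c) - 1) ** 2 / (b + c)   # ~35.57
p_value = chi2.sf(statistic, df=1)            # ~2.5e-9
\end{verbatim}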
\newpage
\subsection{Qualitative Analysis}
\label{subsec:Qualitative analysis}
\paragraph{Embeddings} First off, we decided to analyze the embeddings we optimized while training \emph{KG-RNN}. For this purpose, we projected our embedding matrices into two dimensions using Uniform Manifold Approximation and Projection~\cite{2018arXivUMAP}. Then, we picked a laboratory measurement (\textbf{White Blood Cells}) and a prescription (\textbf{Sodium}) and extracted their nearest neighbors in the projected 2D manifold:
\begin{multicols}{2}
\centering
Neighbors for \textbf{White Blood Cells}
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Rank} & \textbf{Laboratory measure} \\ \hline
1 & WBC \\ \hline
3 & White Cells \\ \hline
6 & Immunoglobulin A \\
\hline
\end{tabular}
\end{center}\columnbreak
\vfill
Neighbors for \textbf{Sodium}
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Rank} & \textbf{Prescription} \\ \hline
1 & Sodium Chloride Nasal \\ \hline
2 & Sodium Chloride 0.9\% Flush \\ \hline
5 & Famotidine \\
\hline
\end{tabular}
\end{center}
\end{multicols}
In the first case (i.e.\ \textit{White Blood Cells}), it is very interesting to notice that among the neighbors we first find an abbreviation of the same laboratory measurement, \textbf{WBC}, as well as a short name for it as the third-closest neighbor, \textbf{White Cells}. Finally, the positioning of \textbf{Immunoglobulin A}, antibodies produced by white blood cells, is further compelling evidence that the embedding is of good quality. \\
In the second table, we see that \textit{Sodium} is close to its related prescriptions in the first two nearest neighbors. \textbf{Famotidine} is also ranked very close, at the fifth rank; it is known in the medical field that Famotidine is often administered intravenously in \textbf{0.9\% Sodium Chloride}, our second neighbor. \\
Overall, these two examples show that \emph{KG-RNN} has not only learned similar events, such as ``White Blood Cells'' and ``WBC'', but also intrinsically correlated events. Indeed, our deep learning architecture maps events close to each other in the embedding space when they are related in the medical domain, thus learning more complex interactions.
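A minimal sketch of this projection and neighbor lookup (our own illustration with placeholder matrices and labels, using the \texttt{umap-learn} and \texttt{scikit-learn} packages) could look as follows:
\begin{verbatim}
# Project the learned embeddings to 2D with UMAP, then query the
# nearest neighbors of one event in the projected space.
import numpy as np
import umap                                    # umap-learn package
from sklearn.neighbors import NearestNeighbors

embeddings = np.random.rand(500, 64)           # placeholder embedding matrix
names = ["event_%d" % i for i in range(500)]   # placeholder event labels

coords = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)

nn = NearestNeighbors(n_neighbors=7).fit(coords)
_, idx = nn.kneighbors(coords[[names.index("event_0")]])
neighbors = [names[i] for i in idx[0][1:]]     # drop the query itself
\end{verbatim}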
\newpage
\paragraph{Predictions} Secondly, to better understand the value provided by \emph{KG-RNN} over the baseline, we extracted a few samples where the baseline was incorrect but KG-RNN was right in its prediction (represented by \textit{c} in the ``contingency table''). \\
For the extracted samples, and in order to understand the differences, we also checked the information conveyed by the neighbors of the input admission. This lets us analyze and visualize where the advantage of \emph{KG-RNN} over the baseline comes from. \\
In the following graphs, the first row corresponds to the input admission while the other rows correspond to its neighbors. Each red line shows an ICD9 code actually diagnosed at discharge (i.e.\ the ground truth), and the horizontal gray line represents the default 0.5 threshold above which a prediction is considered positive. In the input admission graph (first row), the orange and blue bars are the probabilities estimated by the models, respectively the baseline and \emph{KG-RNN}. Finally, in the neighbor rows, the dark-gray bars represent their final diagnoses (i.e.\ entity static information) as one-hot vectors. \\
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/preds-0.pdf}
\caption{In this plot we can see that the neighbor provides the necessary information for \emph{KG-RNN} to improve over the baseline. Indeed, the baseline does not detect any diagnosis, whereas KG-RNN is able to predict the second one with the help of its neighbor. Namely, we hypothesize that since the neighbor's static information conveys the ICD9 code of interest (the second one), it helped push the confidence of KG-RNN upward, above the 0.5 decision threshold.}
\end{figure}
\newpage
The ICD9 codes of the ground truth and of the neighbor from the previous figure are summarized in the following tables:
\begin{table}[H]
\begin{center}
\textbf{Input admission} \\
\begin{tabular}{| c | c | p{8cm} |}
\hline
\textbf{Rank} & \textbf{ICD9 Code} & \textbf{Description} \\ \hline
4 & 272.4 & Other and unspecified hyperlipidemia \\ \hline
18 & 401.9 & Unspecified essential hypertension \\ \hline
26 & 486 & Pneumonia, organism unspecified \\ \hline
30 & 530.81 & Esophageal reflux \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\textbf{Neighbor} \\
\begin{tabular}{| c | c | p{8cm} |}
\hline
\textbf{Rank} & \textbf{ICD9 Code} & \textbf{Description} \\ \hline
1 & 244.9 & Unspecified acquired hypothyroidism \\ \hline
2 & 250.00 & Diabetes mellitus without mention of complication, type II or unspecified type, not stated as uncontrolled \\ \hline
15 & 38.93 & Unspecified septicemia \\ \hline
18 & 401.9 & Unspecified essential hypertension \\ \hline
24 & 427.31 & Atrial fibrillation \\ \hline
28 & 507.0 & Pneumonitis due to inhalation of food or vomitus \\ \hline
37 & 96.6 & Late syphilis, latent \\ \hline
\end{tabular}
\end{center}
\end{table}
It is interesting to see that even if the neighbor's diagnoses do not match the ground truth \textit{perfectly}, some ICD9 codes still convey information relevant to the input admission. Indeed, if we look at code number \textbf{26} in the input admission (i.e.\ \textit{Pneumonia}), it can be linked to number \textbf{28} of the neighbor (i.e.\ \textit{Pneumonitis}). Even though the final diagnoses are different (``Pneumonitis'' indicates inflammation of lung tissue, whereas ``Pneumonia'' is inflammation caused by an infection), the admissions are linked through their pre-diagnosis and thus one may carry information about the other. \\
Therefore, using the ICD9 code hierarchy may help the model make sense of neighbor diagnoses and increase the predictive power for the input admission. This potential avenue is further described in section~\ref{sec:Future Work} of the last chapter.
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/preds-1.pdf}
\caption{In this plot, the input admission has 4 neighbors with different final diagnoses (static information), and we notice interesting behavior of \emph{KG-RNN} compared to the baseline. In this scenario, we see that the first neighbor helps push the confidence of the model over the decision threshold for the second diagnosed ICD9 code. However, we also see that \emph{KG-RNN} is misled by the first two neighbors, which wrongly push ICD9 number thirty-one over the decision threshold. This highlights the importance of a good knowledge graph construction and neighbor extraction algorithm.}
\end{figure}
To conclude, we see that neighbors have a large impact on KG-RNN: the predictions where $\mbox{Score}_{\mbox{baseline}} \ge \mbox{Score}_{\mbox{KG-RNN}}$ occur when none of the neighbors has a matching ICD9 code. On the other hand, when one or more neighbors have a certain ICD9 code, the corresponding score of \emph{KG-RNN} for the input admission is consistently pushed upwards. This reveals how strongly KG-RNN learns to rely on its neighbors when scoring its predictions. | {
"alphanum_fraction": 0.7413586765,
"avg_line_length": 67.6428571429,
"ext": "tex",
"hexsha": "63a5d3d1ef2d7aabeecd915c36b1c7104fec505b",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-01-27T07:47:20.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-01-27T07:47:20.000Z",
"max_forks_repo_head_hexsha": "dea9569735fc73365f4d0da67aae5e3d7a577845",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Timonzimm/masters-thesis",
"max_forks_repo_path": "chapters/experiments.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "dea9569735fc73365f4d0da67aae5e3d7a577845",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Timonzimm/masters-thesis",
"max_issues_repo_path": "chapters/experiments.tex",
"max_line_length": 639,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "dea9569735fc73365f4d0da67aae5e3d7a577845",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Timonzimm/masters-thesis",
"max_stars_repo_path": "chapters/experiments.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7501,
"size": 28410
} |
%\documentclass[10pt,handout]{beamer}
\documentclass[10pt]{beamer}\usepackage[]{graphicx}\usepackage[]{color}
%% maxwidth is the original width if it is less than linewidth
%% otherwise use linewidth (to make sure the graphics do not exceed the margin)
\makeatletter
\def\maxwidth{ %
\ifdim\Gin@nat@width>\linewidth
\linewidth
\else
\Gin@nat@width
\fi
}
\makeatother
\definecolor{fgcolor}{rgb}{0.345, 0.345, 0.345}
\newcommand{\hlnum}[1]{\textcolor[rgb]{0.686,0.059,0.569}{#1}}%
\newcommand{\hlstr}[1]{\textcolor[rgb]{0.192,0.494,0.8}{#1}}%
\newcommand{\hlcom}[1]{\textcolor[rgb]{0.678,0.584,0.686}{\textit{#1}}}%
\newcommand{\hlopt}[1]{\textcolor[rgb]{0,0,0}{#1}}%
\newcommand{\hlstd}[1]{\textcolor[rgb]{0.345,0.345,0.345}{#1}}%
\newcommand{\hlkwa}[1]{\textcolor[rgb]{0.161,0.373,0.58}{\textbf{#1}}}%
\newcommand{\hlkwb}[1]{\textcolor[rgb]{0.69,0.353,0.396}{#1}}%
\newcommand{\hlkwc}[1]{\textcolor[rgb]{0.333,0.667,0.333}{#1}}%
\newcommand{\hlkwd}[1]{\textcolor[rgb]{0.737,0.353,0.396}{\textbf{#1}}}%
\let\hlipl\hlkwb
\usepackage{framed}
\makeatletter
\newenvironment{kframe}{%
\def\at@end@of@kframe{}%
\ifinner\ifhmode%
\def\at@end@of@kframe{\end{minipage}}%
\begin{minipage}{\columnwidth}%
\fi\fi%
\def\FrameCommand##1{\hskip\@totalleftmargin \hskip-\fboxsep
\colorbox{shadecolor}{##1}\hskip-\fboxsep
% There is no \\@totalrightmargin, so:
\hskip-\linewidth \hskip-\@totalleftmargin \hskip\columnwidth}%
\MakeFramed {\advance\hsize-\width
\@totalleftmargin\z@ \linewidth\hsize
\@setminipage}}%
{\par\unskip\endMakeFramed%
\at@end@of@kframe}
\makeatother
\definecolor{shadecolor}{rgb}{.97, .97, .97}
\definecolor{messagecolor}{rgb}{0, 0, 0}
\definecolor{warningcolor}{rgb}{1, 0, 1}
\definecolor{errorcolor}{rgb}{1, 0, 0}
\newenvironment{knitrout}{}{} % an empty environment to be redefined in TeX
\usepackage{alltt}
\usepackage{etex} % helps fix \newdimen error which is cause when ctable is loaded with other packages
\usepackage{comment}
\usepackage{amsmath,amsthm,amssymb}
\usepackage{url}
\usepackage{color, colortbl}
\usepackage{tikz}
\usepackage{ctable}
\usetikzlibrary{shapes.geometric, arrows,shapes.symbols,decorations.pathreplacing}
\tikzstyle{startstop} = [rectangle, rounded corners, minimum width=3cm, minimum height=1cm,text centered, draw=black, fill=red!30,text width=2.0cm]
\tikzstyle{io} = [trapezium, trapezium left angle=70, trapezium right angle=110, minimum width=2cm, minimum height=1cm, text centered, draw=black, fill=blue!30,text width=1.5cm]
\tikzstyle{process} = [rectangle, minimum width=1cm, minimum height=1cm, text centered, draw=black, fill=orange!30,text width=2cm]
\tikzstyle{decision} = [diamond, minimum width=2cm, minimum height=1cm, text centered, draw=black, fill=green!30]
\tikzstyle{arrow} = [thick,->,>=stealth]
\tikzstyle{both} = [thick,<->,>=stealth, red]
\tikzset{myshade/.style={minimum size=.4cm,shading=radial,inner color=white,outer color={#1!90!gray}}}
\newcommand\mycirc[1][]{\tikz\node[circle,myshade=#1]{};}
\newcommand\myrect[1][]{\tikz\node[rectangle,myshade=#1]{};}
\newcommand\mystar[1][]{\tikz\node[star,star points=15,star point height=2pt,myshade=#1]{};}
\newcommand\mydiamond[1][]{\tikz\node[diamond,myshade=#1]{};}
\newcommand\myellipse[1][]{\tikz\node[ellipse,myshade=#1]{};}
\newcommand\mykite[1][]{\tikz\node[kite,myshade=#1]{};}
\newcommand\mydart[1][]{\tikz\node[dart,myshade=#1]{};}
\newcommand\mycloud[1][]{\tikz\node[cloud,myshade=#1]{};}
%\usepackage{subcaption}
\usepackage{subfig}
%\usepackage{caption}
\mode<presentation>
\usetheme{Hannover}
\usecolortheme{rose}
\setbeamertemplate{navigation symbols}{}
\setbeamertemplate{footline}[frame number]
\setbeamertemplate{caption}[numbered]
\setbeamertemplate{frametitle}[default][left]
\usepackage[]{hyperref}
\hypersetup{
unicode=false,
pdftoolbar=true,
pdfmenubar=true,
pdffitwindow=false, % window fit to page when opened
pdfstartview={FitH}, % fits the width of the page to the window
pdftitle={Reproducible Research}, % title
pdfauthor={Sahir Rai Bhatnagar}, % author
pdfsubject={Subject}, % subject of the document
pdfcreator={Sahir Rai Bhatnagar}, % creator of the document
pdfproducer={Sahir Rai Bhatnagar}, % producer of the document
pdfkeywords={}, % list of keywords
pdfnewwindow=true, % links in new window
colorlinks=true, % false: boxed links; true: colored links
linkcolor=red, % color of internal links (change box color with linkbordercolor)
citecolor=blue, % color of links to bibliography
filecolor=black, % color of file links
urlcolor=cyan % color of external links
}
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\begin{document}
%\SweaveOpts{concordance=TRUE}
%\SweaveOpts{concordance=TRUE}
\title[RR: Intro to \texttt{knitr}]{Reproducible Research}
\subtitle{An Introduction to \texttt{knitr}}
\author[]{Sahir Rai Bhatnagar%
\thanks{\href{https://github.com/sahirbhatnagar/knitr-tutorial}{https://github.com/sahirbhatnagar/knitr-tutorial}%
}}
\date{May 28, 2014}
%\makebeamertitle
\maketitle
\begin{frame}{Acknowledgements}
% \hspace*{-1.9cm}\parbox[t]{\textwidth}
%\frametitle{Acknowledgements}
\begin{columns}[c] % The "c" option specifies centered vertical alignment while the "t" option is used for top vertical alignment
\column{.45\textwidth} % Left column and width
\begin{itemize}
%\scriptsize
\item Dr. Erica Moodie
\item Maxime Turgeon (Windows)
\item Kevin McGregor (Mac)
\item Greg Voisin
\item Don Knuth (\TeX)
\item Friedrich Leisch (Sweave)
\item Yihui Xie (knitr)
\item You
\end{itemize}
\column{.45\textwidth} % Right column and width
\begin{figure}
\includegraphics[width=0.6\columnwidth]{eboh50.pdf}\\[5mm]
\includegraphics[width=1.0\columnwidth]{crm.png}
%\includegraphics[width=0.7\columnwidth]{Logo-LUDMER.jpg}
\end{figure}
\end{columns}
\end{frame}
\begin{frame}{Disclaimer \#1}
\begin{itemize}
\item Feel free to Ask questions
\item Interrupt me often
\item You don't need to raise your hand to speak
\end{itemize}
\end{frame}
\begin{frame}{Disclaimer \#2}
\begin{figure}
\includegraphics[width=1.0\columnwidth]{rstudio.png}\\[5mm]
\includegraphics[width=0.2\columnwidth]{rlogo.png}\\[5mm]
\includegraphics[width=0.2\columnwidth]{LaTeX_logo.png}
\end{figure}
\textit{I don't work for, nor am I an author of any of these packages. I'm just a messenger.}
\end{frame}
\begin{frame}{Disclaimer \#3}
\begin{itemize}
\item Material for this tutorial comes from many sources. For a complete list see: \href{https://github.com/sahirbhatnagar/knitr-tutorial}{https://github.com/sahirbhatnagar/knitr-tutorial}
\item A lot of the content in these slides is based on these two books
\end{itemize}
\begin{columns}[c] % The "c" option specifies centered vertical alignment while the "t" option is used for top vertical alignment
\column{.45\textwidth} % Left column and width
\begin{figure}
\includegraphics[width=0.6\columnwidth]{yihui.png}
\end{figure}
\column{.45\textwidth} % Right column and width
\begin{figure}
\includegraphics[width=0.6\columnwidth]{chris.png}
\end{figure}
\end{columns}
\end{frame}
\begin{frame}{Eat Your Own Dog Food}
\begin{itemize}
\item These slides are reproducible
\item Source code: \href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/slides}{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/slides}
\end{itemize}
\end{frame}
\begin{frame}{Main objective for today}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.60, keepaspectratio]{./juice}
\end{figure}
\end{frame}
\section{Reproducible Research}
\subsection{What?}
\begin{frame}
\frametitle{What is Science Anyway?}
\pause
\begin{block}{According to the American Physical Society:}
\emph{Science is the systematic enterprise of gathering knowledge about the universe and organizing and condensing that knowledge into \textbf{testable} laws and theories. The \textbf{success and credibility of science} are anchored in the \textbf{willingness} of scientists to \textbf{expose their ideas} and results to \textbf{independent testing} and \textbf{replication} by other scientists}
\end{block}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
\frametitle{RR: A Minimum Standard to Verify Scientific Findings}
\pause
\begin{block}{Reproducible Research (RR) in Computational Sciences}
\emph{The data and the code used to make a finding are available and they are sufficient for an independent researcher to recreate the finding}
\end{block}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Why?}
\begin{frame}
\begin{tikzpicture}
\scriptsize
\node (expr) [startstop] {Why should we care about RR?};
\node (science) [decision, right of=expr, xshift=2cm, yshift=1.5cm] {For Science};
\draw [arrow] (expr) -- (science);
\node (stan) [process, right of=science, xshift=3cm, yshift=2cm] {Standard to judge scientific claims};
\node (dupli) [process, right of=science, xshift=3cm, yshift=0.5cm] {Avoid duplication};
\node (know) [process, right of=science, xshift=3cm, yshift=-1cm] {Cumulative knowledge development};
\draw [arrow] (science) -- (stan);
\draw [arrow] (science) -- (dupli);
\draw [arrow] (science) -- (know);
\pause \node (you) [decision, right of=expr, xshift=2cm, yshift=-1.5cm] {For You};
\draw [arrow] (expr) -- (you);
\node (work) [process, right of=you, xshift=3cm, yshift=0.5cm] {Better work habits};
\node (team) [process, right of=you, xshift=3cm, yshift=-0.7cm] {Better teamwork};
\node (change) [process, right of=you, xshift=3cm, yshift=-1.9cm] {Changes are easier};
\node (soft) [process, right of=you, xshift=3cm, yshift=-3.1cm] {Higher research impact};
\draw [arrow] (you) -- (work);
\draw [arrow] (you) -- (team);
\draw [arrow] (you) -- (change);
\draw [arrow] (you) -- (soft);
\end{tikzpicture}
\end{frame}
\subsection{001-motivating-example}
\begin{frame}{A Motivating Example}
\textit{Demonstrate:} \href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/001-motivating-example}{001-motivating-example}\\
\textit{Survey:}
\href{https://www.surveymonkey.com/s/CDVXW3C}{https://www.surveymonkey.com/s/CDVXW3C}
\end{frame}
\section{Getting Started}
\begin{frame}{Tools for Reproducible Research\footnote{\href{http://onepager.togaware.com/}{http://onepager.togaware.com/}}}
\begin{block}{Free and Open Source Software}
\begin{itemize}
\item \texttt{RStudio}: Creating, managing, compiling documents
\item \LaTeX: Markup language for typesetting a document
\item \texttt{R}: Statistical analysis language
\item \texttt{knitr}: Integrate \LaTeX and \texttt{R} code. Based on Prof. Friedrich Leisch's \href{https://www.statistik.lmu.de/~leisch/Sweave/}{\texttt{Sweave}}
\end{itemize}
\end{block}
\end{frame}
\subsection{\LaTeX}
\begin{frame}\frametitle{Comparison}
\begin{columns}[c] % The "c" option specifies centered vertical alignment while the "t" option is used for top vertical alignment
\column{.45\textwidth} % Left column and width
\begin{figure}[h!]
\centering
\includegraphics[scale=1, keepaspectratio]{./miktex}
\caption{Comparison}
\label{fig:word}
\end{figure}
\column{.5\textwidth} % Right column and width
\begin{itemize}
\item \LaTeX \, has a greater learning curve
\item Many tasks are very tedious or, in most cases, impossible to do in MS Word or Libre Office
\end{itemize}
\end{columns}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}\frametitle{The Philosophy behind \LaTeX}
\begin{columns}[c] % The "c" option specifies centered vertical alignment while the "t" option is used for top vertical alignment
\column{.45\textwidth} % Left column and width
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6, keepaspectratio]{./smith}
\small
\caption{Adam Smith, author of \textit{The Wealth of Nations} (1776), in which he conceptualizes the notion of the division of labour}
\label{fig:smith}
\end{figure}
\column{.5\textwidth} % Right column and width
\small
\begin{block}{Division of Labour}
Composition and logical structuring of text is the author's specific contribution to the production of a printed text. Matters such as the choice of the font family, whether section headings should be in bold face or small capitals, whether they should be flush left or centered, whether the text should be justified or not, whether the notes should appear at the foot of the page or at the end, whether the text should be set in one column or two, and so on, are the typesetter's business
\end{block}
\end{columns}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}\frametitle{The Genius Behind \LaTeX}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4, keepaspectratio]{./don}
\small
\caption{The \TeX~project was started in 1978 by Donald Knuth (Stanford). He planned for 6 months, but it took him nearly 10 years to complete. Coined the term ``Literate programming'': mixture of code and text segments that are ``human'' readable. Recipient of the Turing Award (1974) and the Kyoto Prize (1996).}
\label{fig:don}
\end{figure}
\end{frame}
\subsection{RStudio}
\begin{frame}\frametitle{Integrated Development Environment (IDE)}
\pause
\begin{figure}[h!]
\centering
\includegraphics[scale=0.25, keepaspectratio]{./RStudio-Screenshot}
\end{figure}
\textit{Demonstrate:} Explore \texttt{RStudio}
\end{frame}
\subsection{\texttt{knitr}}
\begin{frame}{What \texttt{knitr} does}
\textbf{\LaTeX} example:
\begin{tikzpicture}
\scriptsize
\node (expr) [startstop] {Report.Rnw (contains both code and markup)};
\node (science) [decision, below of=expr, xshift=0cm, yshift=-2cm] {Report.tex};
\draw [arrow] (expr) -- node[anchor=east]{\texttt{knitr::knit('Report.Rnw')}} (science);
\pause \node (pdf) [io, below of=science, xshift=0cm, yshift=-2cm] {Report.pdf};
\draw [arrow] (science) -- node[anchor=east]{\texttt{latex2pdf('Report.tex')}} (pdf);
\end{tikzpicture}
\end{frame}
\begin{frame}\frametitle{Compiling a \texttt{.Rnw} document}
\begin{block}{The two steps on previous slide can be executed in one command:}
\[ \textrm{\texttt{knitr::knit2pdf()}} \]
\end{block}
or in \texttt{RStudio}:
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5, keepaspectratio]{./Compile-pdf.jpg}
\end{figure}
%\textit{Demonstrate:} Explore \texttt{RStudio}, projects and \texttt{.Rprofile}
\end{frame}
\begin{frame}\frametitle{Incorporating \texttt{R} code}
\begin{itemize}
\item Insert \texttt{R} code in a \textbf{Code Chunk} starting with $$ << \quad >>= $$
and ending with \begin{center}
{@}
\end{center}
\end{itemize}
In \texttt{RStudio}:
\begin{figure}[h!]
\centering
\includegraphics[scale=0.35, keepaspectratio]{./sweave_chunk}
\end{figure}
%\textit{Demonstrate:} Explore \texttt{RStudio}, projects and \texttt{.Rprofile}
\end{frame}
\begin{frame}[fragile]{Example 1}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe}
\begin{verbatim}
<<example-code-chunk-name, echo=TRUE>>=
library(magrittr)
rnorm(50) %>% mean
@
\end{verbatim}
\end{kframe}
\end{knitrout}
produces
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{library}\hlstd{(magrittr)}
\hlkwd{rnorm}\hlstd{(}\hlnum{50}\hlstd{)} \hlopt{%>%} \hlstd{mean}
\end{alltt}
\begin{verbatim}
## [1] 0.12
\end{verbatim}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}[fragile]{Example 2}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe}
\begin{verbatim}
<<example-code-chunk-name2, echo=TRUE, tidy=TRUE>>=
for(i in 1:5){ (i+3) %>% print}
@
\end{verbatim}
\end{kframe}
\end{knitrout}
produces
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwa{for} \hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlnum{5}\hlstd{) \{}
\hlstd{(i} \hlopt{+} \hlnum{3}\hlstd{)} \hlopt{%>%} \hlstd{print}
\hlstd{\}}
\end{alltt}
\begin{verbatim}
## [1] 4
## [1] 5
## [1] 6
## [1] 7
## [1] 8
\end{verbatim}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}[fragile]{Example 2.2}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe}
\begin{verbatim}
<<example-code-chunk-name3, echo=FALSE>>=
for(i in 1:5){ (i+3) %>% print}
@
\end{verbatim}
\end{kframe}
\end{knitrout}
produces
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe}
\begin{verbatim}
## [1] 4
## [1] 5
## [1] 6
## [1] 7
## [1] 8
\end{verbatim}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}[fragile]{Example 2.3}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe}
\begin{verbatim}
<<example-code-chunk-name4, echo=FALSE, eval=FALSE>>=
for(i in 1:5){ (i+3) %>% print}
@
\end{verbatim}
\end{kframe}
\end{knitrout}
produces
\textit{Demonstrate:} Try it yourself
\end{frame}
\begin{frame}[fragile]{\texttt{R} output within the text}
\begin{itemize}
\item Include \texttt{R} output within the text
\item We can do that with ``S-expressions'' using the command \textbackslash \texttt{Sexpr}\{$\ldots$\}
\end{itemize}
\vspace{1cm}
\textbf{Example:} \vspace{0.3cm}
The iris dataset has \textbackslash \texttt{Sexpr}\{\texttt{nrow(iris)}\} rows and \textbackslash \texttt{Sexpr}\{\texttt{ncol(iris)}\} columns
\vspace{0.5cm}
produces \vspace{0.5cm}
The iris dataset has 150 rows and 5 columns
\end{frame}
\begin{frame}[fragile]
\frametitle{Include a Figure}
\scriptsize
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe}
\begin{verbatim}
<<fig.ex, fig.cap='Linear Regression',fig.height=3,fig.width=3>>=
plot(mtcars[ , c('disp','mpg')])
lm(mpg ~ disp , data = mtcars) %>%
abline(lwd=2)
@
\end{verbatim}
\end{kframe}
\end{knitrout}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}
{\centering \includegraphics[width=\maxwidth]{figure/slr7-1}
}
\caption[Linear regression]{Linear regression}\label{fig:slr7}
\end{figure}
\end{knitrout}
\end{frame}
\begin{frame}[fragile]
\frametitle{Include a Table}
\scriptsize
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe}
\begin{verbatim}
<<table.ex, results='asis'>>=
library(xtable)
iris[1:5,1:5] %>%
xtable(caption='Sample of Iris data') %>%
print(include.rownames=FALSE)
@
\end{verbatim}
\end{kframe}
\end{knitrout}
% latex table generated in R 3.5.1 by xtable 1.8-4 package
% Sun May 12 16:21:55 2019
\begin{table}[ht]
\centering
\begin{tabular}{rrrrl}
\hline
Sepal.Length & Sepal.Width & Petal.Length & Petal.Width & Species \\
\hline
5.10 & 3.50 & 1.40 & 0.20 & setosa \\
4.90 & 3.00 & 1.40 & 0.20 & setosa \\
4.70 & 3.20 & 1.30 & 0.20 & setosa \\
4.60 & 3.10 & 1.50 & 0.20 & setosa \\
5.00 & 3.60 & 1.40 & 0.20 & setosa \\
\hline
\end{tabular}
\caption{Sample of Iris data}
\end{table}
\end{frame}
\begin{comment}
\section{Details}
\subsection{Code Chunks}
\begin{frame}{A selection of \texttt{knitr} code chunk options}
content...
\end{frame}
\begin{frame}{Set global chunk options}
content...
\end{frame}
\begin{frame}{Option Aliases}
see page 109 yihui
\end{frame}
\begin{frame}{Option Templates}
see page 110 yihui
\end{frame}
\begin{frame}{Chunk References}
see page 79 yihui
\end{frame}
\begin{frame}{Code in Appendix}
see page 110 yihui
\end{frame}
\subsection{Hooks}
\begin{frame}{A selection of \texttt{knitr} code chunk options}
content...
\end{frame}
\subsection{Child Documents}
\begin{frame}{A selection of \texttt{knitr} code chunk options}
see 83
\end{frame}
\subsection{Custom Environments}
\begin{frame}{Example Environment}
see 120
\end{frame}
\end{comment}
\section{Examples}
\subsection{002-minimum-working-example}
\begin{frame}{Minimum Working Example}
\href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/002-minimum-working-example}{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/002-minimum-working-example}
\end{frame}
\subsection{003-model-output}
\begin{frame}{Extracting output from Regression Models}
\href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/003-model-output}{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/003-model-output}
\end{frame}
\subsection{004-figures}
\begin{frame}{Figures}
\href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/004-figures}{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/004-figures}
\end{frame}
\subsection{005-beamer-presentation}
\begin{frame}{Beamer Presentations}
\href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/005-beamer-presentation}{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/005-beamer-presentation}
\end{frame}
\subsection{006-sensitivity-analysis-one-parameter}
\begin{frame}{Changing one Parameter in an Analysis}
\href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/006-sensitivity-analysis-one-parameter}{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/006-sensitivity-analysis-one-parameter}
\end{frame}
\subsection{007-sensitivity-analysis-many-parameters}
\begin{frame}{Changing Many Parameters in an Analysis}
\href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/007-sensitivity-analysis-many-parameters}{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/007-sensitivity-analysis-many-parameters}
\end{frame}
\subsection{008-large-documents}
\begin{frame}{Large Documents}
\href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/008-large-documents}{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/008-large-documents}
\end{frame}
\subsection{009-rmarkdown}
\begin{frame}{HTML Reports}
\href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/009-rmarkdown}{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/009-rmarkdown}
\end{frame}
\subsection{010-rmarkdown-presentation}
\begin{frame}{HTML Presentations}
\href{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/010-rmarkdown-presentation}{https://github.com/sahirbhatnagar/knitr-tutorial/tree/master/010-rmarkdown-presentation}
\end{frame}
\section{Final Remarks}
\begin{frame}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.30, keepaspectratio]{./leek}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Always Remember ...}
\[ \textrm{Reproducibility} \propto \frac{1}{\textrm{copy paste}} \]
\end{frame}
\begin{frame}{Is the juice worth the squeeze?}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.40, keepaspectratio]{./juice}
\end{figure}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.7162944582,
"avg_line_length": 30.1456692913,
"ext": "tex",
"hexsha": "4674122c0002e49dbfed0cf822ecdb0f68b4db29",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fe313b2e05fd9e26fbc2e085bc0b93dfbbd32611",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "sahirbhatnagar/raqc",
"max_forks_repo_path": "slides/mcgill-knitr.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fe313b2e05fd9e26fbc2e085bc0b93dfbbd32611",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "sahirbhatnagar/raqc",
"max_issues_repo_path": "slides/mcgill-knitr.tex",
"max_line_length": 448,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "fe313b2e05fd9e26fbc2e085bc0b93dfbbd32611",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "sahirbhatnagar/raqc",
"max_stars_repo_path": "slides/mcgill-knitr.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7538,
"size": 22971
} |
\section{\ac{XML}}
All text-based file formats\footnote{Have a look at \emph{SDKBrowser.chm} for a list of all these formats} of PixelLight are \ac{XML} based. This way they follow a well-known syntax. The \ac{XML} classes you will find in \emph{PLCore} are only wrapper classes; further, we added some additional functions to make the usage of \ac{XML} more efficient. After you load such an \ac{XML} document, you can browse and edit it using the \ac{DOM} interface, which is quite comfortable\footnote{But not as performant as, for example, a \ac{SAX}/\ac{StAX} \ac{API}}. After you have created a new \ac{XML} document or edited an existing one, you can also save the \ac{XML} or even print it to the console. Here's an example of how it looks if you want to load a PixelLight configuration file (\emph{cfg}-extension). \emph{Config} will do this for you, but this example shows how to use the \ac{XML} classes in practice:
\begin{lstlisting}[caption=\ac{XML} \ac{DOM} usage example]
// Create XML document
PLCore::XmlDocument cDocument;
if (!cDocument.Load(cFile)) {
PL_LOG(Error, cDocument.GetValue() + ": " + cDocument.GetErrorDesc())
// Error!
return false;
}
// Get config element
PLCore::XmlElement *pConfigElement =
cDocument.GetFirstChildElement("Config");
if (!pConfigElement) {
PL_LOG(Error, "Can't find 'Config' element")
// Error!
return false;
}
// Iterate through all groups
PLCore::XmlElement *pGroupElement =
pConfigElement->GetFirstChildElement("Group");
while (pGroupElement) {
// Get group class name
PLCore::String sClass = pGroupElement->GetAttribute("Class");
if (sClass.GetLength()) {
// Get config class instance
PLCore::ConfigGroup *pClass = cConfig.GetClass(sClass);
if (pClass) {
// Set variables
pClass->SetVarsFromXMLElement(*pGroupElement, 0);
}
}
// Next element, please
pGroupElement = pConfigElement->GetNextSiblingElement("Group");
}
// Done
return true;
\end{lstlisting}
To print an \ac{XML} document to the console, you can write, for example, the following:
\begin{lstlisting}[caption=Print \ac{XML} document into the console]
PLCore::XmlDocument cDocument;
cDocument.Load(cFile);
cDocument.Save(PLCore::File::StandardOutput);
\end{lstlisting}
| {
"alphanum_fraction": 0.7452999105,
"avg_line_length": 41.3703703704,
"ext": "tex",
"hexsha": "f8db7ccb32f69ffc04b0317592b8850db8bbca78",
"lang": "TeX",
"max_forks_count": 40,
"max_forks_repo_forks_event_max_datetime": "2021-03-06T09:01:48.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-02-25T18:24:34.000Z",
"max_forks_repo_head_hexsha": "d7666f5b49020334cbb5debbee11030f34cced56",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "naetherm/PixelLight",
"max_forks_repo_path": "Docs/PixelLightBase/PLCore/XML.tex",
"max_issues_count": 27,
"max_issues_repo_head_hexsha": "43a661e762034054b47766d7e38d94baf22d2038",
"max_issues_repo_issues_event_max_datetime": "2020-02-02T11:11:28.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-06-18T06:46:07.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "PixelLightFoundation/pixellight",
"max_issues_repo_path": "Docs/PixelLightBase/PLCore/XML.tex",
"max_line_length": 922,
"max_stars_count": 83,
"max_stars_repo_head_hexsha": "43a661e762034054b47766d7e38d94baf22d2038",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ktotheoz/pixellight",
"max_stars_repo_path": "Docs/PixelLightBase/PLCore/XML.tex",
"max_stars_repo_stars_event_max_datetime": "2021-07-20T17:07:00.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-08T15:06:14.000Z",
"num_tokens": 604,
"size": 2234
} |
\documentclass[../../Instructions_Framework]{subfiles}
% Hier müssen keine Packages geladen werden, es werden automatisch die von masterdoc geladen,
% sowie die Konfigurationen.
% Bei includegraphics nur Bildname (Bsp: Bild.png) eingeben, da er in den angegebenen Pfade die Bilder sucht
\graphicspath{{img/}{img/}}
\begin{document}
\chapter{Instructions for creating scenes}
The following chapter shows the creation of a scene in Unity. The requirements are Unity version 5.5.1f and the BullsEye.unitypackages.\\
\section{Select your reality}
Depending on whether you want to create an augmented or a virtual reality scene you have two distinct prefabs which can be used.
\subsection{Augmented Reality}
The \textit{AR} prefab is used for augmented reality scenes. The prefab can be found in the Asset folder under the path \textbf{Assets/BullsEye/Prefabs}. To create a scene you need to drag and drop the prefab into the hierarchy. This will set up every required render component (see Fig. \ref{fig:screenshot002}).
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{img/screenshot002}
\caption{AR prefab}
\label{fig:screenshot002}
\end{figure}
The AR prefab differs from the VR prefab by including a plane which acts as a \textit{screen} and displays the real world as a video stream. If you switch to scene view you can see the basic AR scene setup, consisting of the camera and the \textit{screen} to show the video stream of the real world. You are now set up to include your own objects into the scene to augment the reality displayed on the plane (see Fig. \ref{fig:screenshot004}).
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{img/screenshot004}
\caption{AR prefab structure}
\label{fig:screenshot004}
\end{figure}
\subsection{Virtual Reality}
The \textit{VR} prefab is used for virtual reality scenes. The prefab can be found in the Asset folder under the path \textbf{Assets/BullsEye/Prefabs}. To create a scene you need to drag and drop the prefab into the hierarchy. This will set up every required render component (see Fig. \ref{fig:screenshot003}).
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{img/screenshot003}
\caption{VR prefab}
\label{fig:screenshot003}
\end{figure}
To create a virtual reality scene you simply need to create a new scene (see Fig. \ref{fig:screenshot005}), but instead of dragging the AR prefab into the hierarchy as explained before, you drag the VR prefab into the hierarchy. You are now ready to build your virtual world around the prefab. Fig. \ref{pic8} shows the VR prefab in an example virtual reality scene.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{img/screenshot005}
\caption{VR prefab structure}
\label{fig:screenshot005}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=1.0\linewidth]{pic7.png}
\caption{The VR prefab in an example VR scene}
\label{pic8}
\end{figure}
After creating your scene, go to Chapter \ref{usage} to use an existing example interaction technique or to Chapter \ref{technique} to create your own interaction technique.
\end{document}
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\section{Logic Gates}
Logic gates perform logical operations that take binary inputs (0s and 1s) and produce
a single binary output. They are used in most electronic devices, including:
\begin{table}[h!]
\begin{center}
\caption{Logic Gates}
\label{tab:table1}
\begin{tabular}{l|c|c}
\hline
Smartphones
&
Tablets
&
Memory Devices
\\
\includegraphics[width=0.2\linewidth]{phone.jpg}
&
\includegraphics[width=0.25\linewidth]{tablet.jpg}
&
\includegraphics[width=0.2\linewidth]{memory.jpg}
\end{tabular}
\end{center}
\end{table}
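As a concrete illustration of such an operation, the two-input AND and OR gates map their binary inputs to a single output as follows:
\begin{center}
\begin{tabular}{c c|c c}
A & B & A AND B & A OR B \\
\hline
0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 \\
1 & 0 & 0 & 1 \\
1 & 1 & 1 & 1 \\
\end{tabular}
\end{center}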
\end{document}
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{indentfirst}
\usepackage{amssymb}
\usepackage{fancyhdr}
\usepackage[margin=1.5in]{geometry}
\title{LX331: Assignment 6}
\author{Duy Nguyen}
\date{30 March 2017}
\pagestyle{fancy}
\fancyhf{}
\rhead{Nguyen \thepage}
\begin{document}
\maketitle
\newcommand*{\msim}{\mathord{\sim}}
\newcommand*{\mand}{\mathbin{\&}}
\section{Models and the semantics of PredL}
\subsection{Domain and assignment functions}
We have:
\begin{center}
\begin{tabular}{c|c}
a & Aragorn \\
b & Bilbo \\
c & Celebrimbor \\
\end{tabular}
\begin{tabular}{c|c}
\textbf{x} & \textbf{Val(x)} \\ \hline
\textsc{love} & \verb|{<Aragorn, Bilbo>}| \\
\textsc{greek} & \verb|{<Aragorn>}| \\
\textsc{man} & \verb|{<Bilbo>}|\\
\textsc{between} & $\varnothing$ \\
\end{tabular}
\end{center}
\subsection{A wild tautology appears!}
It is impossible to make (4) false. Consider the sentence $(\textsc{greek}(a) \mand \textsc{man}(a)) \rightarrow \textsc{man}(a)$: the only way to make a material implication false is to have a true antecedent and a false consequent. If we made the consequent (\textsc{man}(a)) false, however, the formula would contradict itself, as \textsc{man}(a) appears both in the consequent and in the antecedent. In other words, if we made the consequent false by setting \textsc{man}(a) to false, then the antecedent would be false regardless of the truth value of \textsc{greek}(a).
Another way to look at this problem is that a formula of the form $(A \mand B) \rightarrow C$ can be translated into $A \rightarrow (B \rightarrow C)$. Since $\textsc{man}(a) \rightarrow \textsc{man}(a)$ is a tautology, it is not possible for us to construct a scenario where (4) would be false.
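To spell this translation out, both implications can be rewritten as disjunctions (using $P \rightarrow Q \equiv \msim P \lor Q$ and De Morgan's law):
\begin{center}
$(A \mand B) \rightarrow C \equiv \msim (A \mand B) \lor C \equiv (\msim A \lor \msim B) \lor C \equiv \msim A \lor (\msim B \lor C) \equiv A \rightarrow (B \rightarrow C)$
\end{center}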
(4) could be translated to English as: \textit{If a person is both a man and a Greek, then that person is a man}.
\subsection{Ain't a Greek man}
We note that $\msim (\textsc{greek}(c) \lor \textsc{man}(c)) \equiv \msim \textsc{greek}(c) \mand \msim \textsc{man}(c)$. Since we have an \& operator, we need to make both sides of the \& true. Therefore, to make both (5) and (6) true, we would have to make both $\msim \textsc{man}(c)$ and $\textsc{man}(c)$ true. Since logic doesn't allow non-binary gender, this is impossible.
Let's look at a truth table.
\begin{center}
\begin{tabular}{cc|cc}
$\textsc{greek}(c)$ & $\textsc{man}(c)$ & $\msim (\textsc{greek}(c) \lor \textsc{man}(c))$ & $\textsc{man}(c)$ \\ \hline
T & T & F & T\\
T & F & F & F\\
F & T & F & T\\
F & F & T & F\\
\end{tabular}
\end{center}
From the truth table, we can see that (5) and (6) are \textbf{logically incompatible}. They are not logical denials of each other, however, since in the second row they are both false.
\section{Unexpressed arguments}
\subsection{A sinking ship}
Based on the sentences in (3) and (4), I would argue that the verb \textbf{sunk}, used as it is in (3) and (4), is a \textbf{two-place predicate}. The reason is that the construction \textit{was blah-ed} (the passive voice) implies to the listener that the action was not caused by the object itself, but that there was a causer. Therefore, the listener understands that there has to be an agent that caused the action to happen to the object. This agent, however, could be left unsaid, and the listener's mind would fill in some vague notion of this agent. This unsaid agent's phantom appears in sentences (b-d) in (4).
In (3b) and (4b), we can't have \textit{but no one was responsible for its sinking} next to the original sentence. This is because we can't deny what the original sentence (3a) has confirmed, namely that its owner sunk the boat. This is similar to the fact that you can't be both a man and not a man (1.3). However, in (4b), even though no person appears in the sentence, the phrase \textit{but no one was responsible for its sinking} is still not semantically acceptable, which means that a phantom person is blocking it.
In (3c) and (4c), the meaning of the word \textit{deliberately} requires a causer. Furthermore, the causer that \textit{deliberately} refers to has to possess some kind of intelligence, since it has to be able at least to distinguish between accidental and non-accidental causation. In general terms, the one who acts \textit{deliberately} has to be intelligent enough to cause the event and to know and want to cause it. In (3c) this makes sense, as the agent is the owner. However, the lack of an agent in (4c) doesn't make the sentence nonsensical, which means that there is someone that \textit{deliberately} refers to in (4c).
Lastly, in (3d) and (4d), we look at the phrase \textit{in order to collect the insurance}. By a similar argument to the one for (3c) and (4c), the fact that this phrase doesn't make sentence (4d) go bad means that there is an agent that this phrase refers to. Furthermore, this agent can't be the boat, since the boat would not be able to "collect the insurance". The agent has to be an intelligent entity who lives in a society where the concept of insurance exists for this entity's boat.
Therefore, sentences (b-d) show us that there is a phantom entity, most likely a human, that was left unsaid in (4) and that is one of the parameters of the verb \textbf{sunk}.
\subsection{Self-sinking}
Interestingly, changing just one or two words in a sentence can radically change its meaning. In (5), the construction is no longer passive. This means that the core meaning of the sentence no longer implies that there is a causer (left unsaid) in the sentence.
I would still argue, however, that the verb \textbf{sunk}, even though not used in a passive construction, is a two-place predicate. In the sentences (5a-d), the causer of the boat sinking is not a person but a "thing". Contrast this to (4), where the listener would be able to guess that the causer is a person; in (5) the listener would only be able to conclude that the boat was sunk because of \textbf{something} (that happened).
(5c) and (5d) are nonsensical. This is because \textit{deliberately} and \textit{in order to collect the insurance} require an intelligent being to refer to. However, if we take the causer slot of the \textbf{sunk} predicate to be filled by a "thing", then it makes sense that (5c) and (5d) do not make sense.
In contrast to (4b), (5b) makes sense. This is because there is no person who is responsible for the fact that the boat is sinking; therefore, \textbf{no one} can be used. However, note that this sentence doesn't seem to make sense:
\# The boat sunk (sank), but nothing was responsible for its sinking.
Even though to some people the sentence can make some sense (I know it does for me once I have read it enough times), the fact that an event happened without any cause seems intolerable.
\section{Lexical semantics of predicates}
\subsection{identical $\sim$ different}
This pair of words is a pair of \textbf{scalar antonyms}. The two words are mutually exclusive but not jointly exhaustive of their domain; their meanings are opposites of each other. Furthermore, we can come up with a word that fits in the middle of these two, e.g. \textit{similar}: two items could be not identical but also not different, they are \textit{similar} to each other.
\subsection{wide $\sim$ narrow}
In contrast to (3.1), the meanings of these two words encompass their domain. A thing can be wide or narrow, and there is no middle ground between wide and narrow. Therefore, these two are \textbf{complementary antonyms}.
\subsection{murder $\sim$ kill}
We observe that \textit{kill} is a vague notion, while \textit{murder} is a more definite description. An act of killing encompasses both willful and involuntary killing; e.g. you can \textit{accidentally kill} someone by mistaking a vial of cyanide for a vial of almond fragrance. However, when you say \textit{murder}, a sense of intentional killing is implied. Furthermore, you can kill plants, but you can't murder plants. Thus \textit{murder} is a \textbf{hyponym} of \textit{kill}.
\subsection{near $\sim$ far}
Similar to (3.2), the relationship between \textit{near} and \textit{far} is mutually exclusive and jointly exhaustive. We can't come up with a word that sits in the middle of \textit{near} and \textit{far} (average distance?). Therefore, we say that \textit{near} and \textit{far} are \textbf{complementary antonyms} of each other.
\subsection{ancestor $\sim$ descendant}
Like husband and wife, \textit{ancestor} and \textit{descendant} name the two sides of a single relationship; they could be called \textbf{relational antonyms}. In detail, a statement like \textit{Andrew is the ancestor of Barney} automatically implies that \textit{Barney is the descendant of Andrew}. We see that $\verb|<a, b>| \in Val(\textsc{ancestor}) \leftrightarrow \verb|<b, a>| \in Val(\textsc{descendant})$.
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %
% File: Thesis_Appendix_A.tex %
% Tex Master: Thesis.tex %
% %
% Author: Andre C. Marta %
% Last modified : 2 Jul 2015 %
% %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Vector calculus}
\label{chapter:appendixVectors}
In case an appendix is deemed necessary, note that the document cannot exceed a total of 100 pages...
Some definitions and vector identities are listed in the section below.
% ----------------------------------------------------------------------
\section{Vector identities}
\label{section:vectorIdentities}
\begin{equation}
\nabla \times \left( \nabla \phi \right) = 0
\label{eq:cross_nnp}
\end{equation}
\begin{equation}
\nabla \cdot \left( \nabla \times {\bf u} \right) = 0
\label{eq:dotCross_nnu}
\end{equation}
\renewcommand*\chappic{img/megosat.pdf}
\renewcommand*\chapquote{}
\renewcommand*\chapquotesrc{}
\chapter{Results}
\label{ch:results}
%
In Chapter~\ref{ch:sat}, we discussed Boolean algebra; in particular we looked at
satisfiability which is practically covered by SAT solvers. SAT solvers take
Boolean functions in Conjunctive Normal Form and determine satisfiability.
In Chapter~\ref{ch:dc}, we discussed how we can analyze algorithms by observing
progression of differences between algorithm instances. In particular,
we looked at hash algorithms introduced in Chapter~\ref{ch:hash}.
With this background, we designed an attack setting in Chapter~\ref{ch:enc}
which enables us to verify and also find a hash collision given a differential
characteristic as a starting point. Our goal is to find hash collisions
in practical time, which we define as 1~day (86,400~seconds).
Therefore, we designed several approaches to improve our runtime results.
In this chapter, we will evaluate those approaches. Furthermore, we briefly
discuss claims we made about average SAT problems. In Chapter~\ref{ch:features},
we defined SAT features which to some extent characterize a SAT problem.
\section{Evaluating SAT features}
\label{sec:results-features}
%
In Chapter~\ref{ch:features}, we posed 8 questions.
In the following, we want to answer them with the data
provided by the cnf-analysis project.
\begin{description}
\item[Given an arbitrary literal, what is the probability that it is positive?]
We look at every clause and determine the ratio of positive literals to the total number of literals.
We determine the mean per CNF file and the mean among all CNF files
and retrieve a value of $0.48$, meaning that 48~\% of the literals are positive.
\item[What is the clauses / variables ratio?]
On average a CNF file has 12,219 variables and 89,541 clauses.
Its clauses-variables ratio is 7.328.
\item[How many literals occur only once either positive or negative?]
On average there are 36 existential literals per CNF file,
but the standard deviation of 967 is very large.
\item[What is the average and longest clause length among CNF benchmarks?]
The average clause length is 3.04 with a standard deviation of 0.99
and the longest clause length found was 61,473. Long clauses are typically
outliers excluding specific assignments.
\item[How many Horn clauses exist in a CNF?]
On average 29,994 goal clauses and 31,315 definite clauses exist,
out of an average of 83,649 clauses per CNF file.
\item[Are there any tautological clauses?]
In one file, 1,679 tautological literals have been found. However,
the mean is 0.07 with a standard deviation of $9.63$, meaning that tautological
clauses are very rare.
\item[Are there any CNF files with more than one connected variable component?]
Indeed, an average CNF file contains 67.07 connected variable components.
However, the median is 1, implying that at least half of the CNF files
have only 1 connected variable component.
\item[How many variables of a CNF are covered by unit clauses?]
On average 124 variables are covered by unit clauses. This is an insignificant
number compared to 12,219 variables in an average CNF.
\end{description}
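For reference, most of the features above can be recomputed from a DIMACS CNF file with a few lines of code. The following sketch merely illustrates the definitions used in this chapter (it assumes one clause per line and is not the cnf-analysis implementation):
\begin{lstlisting}[language=Python, caption=Illustrative computation of basic SAT features from a DIMACS CNF file]
import sys

def cnf_features(path):
    """Compute a few of the SAT features discussed above."""
    clauses = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip comments, the problem line and trailing '%' markers.
            if not line or line.startswith(('c', 'p', '%')):
                continue
            lits = [int(t) for t in line.split() if t != '0']
            if lits:
                clauses.append(lits)
    variables = {abs(l) for c in clauses for l in c}
    all_lits = [l for c in clauses for l in c]
    return {
        'variables': len(variables),
        'clauses': len(clauses),
        'clauses/variables': len(clauses) / len(variables),
        'positive literal ratio': sum(l > 0 for l in all_lits) / len(all_lits),
        'average clause length': len(all_lits) / len(clauses),
        'unit clauses': sum(len(c) == 1 for c in clauses),
        # Horn terminology as used here: 'definite' = exactly one positive
        # literal, 'goal' = no positive literal.
        'definite clauses': sum(sum(l > 0 for l in c) == 1 for c in clauses),
        'goal clauses': sum(all(l < 0 for l in c) for c in clauses),
    }

if __name__ == '__main__':
    for name, value in cnf_features(sys.argv[1]).items():
        print(f'{name}: {value}')
\end{lstlisting}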
The clauses/variables ratio was thoroughly studied by the SAT
community~\cite{nudelman2004understanding}.
A strong correlation between an instance's hardness and its ratio of the number
of clauses to the number of variables exists~\cite{selman1996generating},
though it is important to point out that this result holds for randomly
generated SAT instances, which our testcases are not.
Existential literals are interesting to discover, because they allow
a clause to be removed immediately. Consider a clause with literals
$(l_1, l_2, \ldots, l_n)$. If the variable of some literal $l_i$ is guaranteed
not to occur in any other clause, we can assign $l_i$ true,
rendering the clause satisfied.
Tautological clauses trivially also render clauses satisfied.
Connected variable components are interesting, because they split the
SAT problem into several small independent subproblems which can be
solved in parallel.
Consider two sets of variables $A$ and $B$. Now consider some clauses
using only variables of $A$ and some clauses using only variables of $B$.
The overall CNF is satisfiable iff both clause sets are satisfiable.
The overall CNF is falsifiable iff any clause set is falsifiable.
Hence, if we know the connected variable components, we could easily
create two parallel SAT solver instances and solve the problems
independently. 4,607 out of 62,251 CNF files contained more than one
connected variable component.
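The number of connected variable components can be determined with a simple union-find pass over the clauses. The following sketch (again only an illustration, not the cnf-analysis implementation) counts the components of a clause list given as lists of integer literals:
\begin{lstlisting}[language=Python, caption=Counting connected variable components with union-find]
def connected_variable_components(clauses):
    """Count groups of variables that are linked by shared clauses."""
    parent = {}

    def find(x):
        # Find the representative of x with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        parent.setdefault(x, x)
        parent.setdefault(y, y)
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry

    for clause in clauses:
        variables = [abs(lit) for lit in clause]
        for v in variables:
            parent.setdefault(v, v)
        for v in variables[1:]:
            union(variables[0], v)

    return len({find(v) for v in parent})

# Two independent subproblems over {1,2,3} and {4,5,6} -> 2 components
print(connected_variable_components([[1, -2], [2, 3], [4, 5], [-5, 6]]))
\end{lstlisting}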
These features represent very fundamental properties of the SAT problem.
But for us the question arises: can we distinguish our cryptographic problems
from average problems?
\begin{itemize}
\item We looked at 36 files classified as cryptographic problems.
They are considered cryptographic, because their file or folder name
indicated they are related to hash functions or general cryptographic
applications like AES. The specific selection can be identified by
the crypto tag annotated to these CNF files as part of the cnf-analysis
project.
\item On average these problems have 116,398 clauses and 27,407 variables.
The average clauses-variables ratio is 5.58.
\item The 36 cryptographic SAT instances give a standard deviation of 0.7
for clause length meaning that most clause lengths are close to
the mean 3.4.
\item The number of definite clauses is twice its value for general problems
(62,457 versus 31,315) and the number of goal clauses is 26~\% of its
value for general problems (7,761 versus 29,994).
\item The number of connected variable components is 2,236 on average
($\sigma =$ 10,060), but the median is $1$ again.
\end{itemize}
No other value has been found to be significantly different from
average problems (or its difference follows immediately from the
non-uniform clause length).
The number of connected variable components seems interesting for
cryptographic problems, because it might indicate diffusion.
Diffusion means that variables strongly
interact with many different variables due to the repetitive
structure of cryptographic primitives. And finally, the other
differences can be explained by a certain SAT design which
recurs in these testcases, because 36 is an exceptionally small
number compared to 62,251 unique CNF problems. It is expected that
the cryptographic problems were designed by a small set of authors.
Comparing our average problem with cryptographic problems did
not yield any useful conclusions. In particular, a more thorough discussion
of the SAT designs might be more valuable than our high-level features.
We now specifically look at a SAT design we are familiar with:
do \emph{our} CNF testcases differ from average SAT problems?
\begin{itemize}
\item For all MD4 testcases we have the same number of variables,
because the internal state of the hash algorithm instances are
always the same size.
However, adding the differential description as described in
Section~\ref{sec:enc-diff-desc} increases the number of clauses
by about 47~\% ($\sigma = 0.0005$) for MD4 instances and
by about 43~\% ($\sigma = 0.0008$) for SHA-256 instances.
For SHA-256 problems, this is illustrated in Table~\ref{tab:problem-sizes}.
The preference variable introduced in Section~\ref{sec:enc-diff-desc-eo}
increases the number of variables by about 80~\%
and the number of clauses by factor 2.
%\begin{table}[!h]
% \begin{center}
% \begin{tabular}{rc|rc}
% MD4~A: & 254,656 / 48,704 & MD4~A diff-desc: & 374,592 / 48,704 \\
% MD4~B: & 254,210 / 48,704 & MD4~B diff-desc: & 374,146 / 48,704 \\
% MD4~C: & 253,984 / 48,704 & MD4~C diff-desc: & 373,920 / 48,704 \\
% SHA-256 18: & 590,953 / 107,839 & SHA-256 18 diff-desc: & 846,487 / 107,839 \\
% SHA-256 21: & 636,838 / 116,800 & SHA-256 21 diff-desc: & 911,629 / 116,800 \\
% SHA-256 23: & 667,438 / 122,774 & SHA-256 23 diff-desc: & 955,067 / 122,774 \\
% SHA-256 24: & 682,722 / 125,761 & SHA-256 24 diff-desc: & 976,770 / 125,761
% \end{tabular}
% \caption{Our testcases and their number of clauses / variables}
% \end{center}
%\end{table}
Compared to 83,542 clauses and 12,219 variables for our average SAT problem,
we consider our testcases to be noticeably large. However, it is important to
point out that the problem size does not necessarily correlate with
the hardness of the SAT problem.
\item The variable indices within clauses of average SAT problems
have a standard deviation of 3,337 on average ($\sigma=$1,261, median $=$3,643).
Our average SAT problem has a standard deviation of 1,004 on average
($\sigma=$13,992, median $=22$). Hence variables that were created at every
point during the CNF generation are shared within one clause.
The general statement that variable enumeration is arbitrary,
and that this standard deviation therefore has no meaning, holds; but we need
to consider that, practically speaking, variables created close to each
other share close variable indices.
Under these assumptions a large $\sigma$ indicates that variables are reused.
We assume this is another indicator for high diffusion in cryptographic
algorithms: values are intermingled over and over throughout the repetitive
structure of hash algorithms.
\item There are 129 connected variable components for MD4 problems and 2 for SHA-256
problems. For SHA-256 problems, a unit clause is given as an existential literal,
and for MD4 problems, all components except one are of size 3.
We did not investigate further, because this number stays constant
with increasing problem size and all other variables are strongly
correlated due to the high diffusion.
\item An average literal frequency of $3.5\cdot 10^{-5}$ for our testcases
is much lower than 0.014 for average problems. We explain this with the
larger problem size. Literal frequency is divided by the number of clauses
of the CNF and is therefore smaller, the larger the problem is.
\end{itemize}
In general, we were not able to identify features allowing us to solve
differential cryptanalysis problems more efficiently than
general-purpose SAT problems. We concluded that writing a custom SAT solver
dedicated to solving differential cryptanalysis problems is not worth
the effort.
\section{Finding hash collisions}
\label{sec:results-attacks}
%
In this section, we look at our runtime results of testcases provided in
Appendix~\ref{app:tc}. We make various claims and substantiate them
with runtime results. Runtimes are always provided in seconds. Therefore,
smaller runtimes are better. \timeout{} denotes a timeout (solving took
more than 1 day) and \unknown{} denotes unavailable data.
We considered MD4 testcases~A, B and C (listed in Table~\ref{tab:md4-tcs})
and generated the corresponding CNF files. The SAT solvers mentioned
in Section~\ref{sec:sat-solvers} were used to evaluate whether the problem
is solvable within the time limit.
\begin{table}[!h]
\begin{center}
\begin{tabular}{cccccc}
\textbf{algorithm} & \textbf{testcase} & \textbf{rounds} & \textbf{diff. characteristic} & \textbf{clauses} & \textbf{variables} \\
\hline
MD4 & A & 48 & Appendix~\ref{fig:tcA} & 254,656 & 48,704 \\
MD4 & B & 48 & Appendix~\ref{fig:tcB} & 254,210 & 48,704 \\
MD4 & C & 48 & Appendix~\ref{fig:tcC} & 253,984 & 48,704
\end{tabular}
\caption{MD4 testcases considered}
\label{tab:md4-tcs}
\end{center}
\end{table}
\subsection{Attacking MD4}
\label{sec:results-md4}
%
\begin{prop}
Testcase~A in the encoding described in Section~\ref{sec:enc-original}
can be solved within one minute by all considered SAT solvers.
\end{prop}
In our attack setting we started off with Testcase~A. It serves
rather as a toy example to verify correctness of our implementation than
as an actual benchmark. Be aware that invalid implementations either
result in unsatisfiability for satisfiable testcases or runtime results are
unexpected because the SAT solver could not take advantage of our SAT design
improvements. This particular testcase can be solved easily with all major
SAT solvers as can be seen in Table~\ref{tab:tcA-results}.
%
\begin{table}[!h]
\begin{center}
\begin{tabular}{cccccc}
\textbf{solver} & \textbf{version} & \textbf{propagations} & \textbf{decisions} & \textbf{restarts} & \textbf{runtime} \\
\hline
MiniSat & 2.2.0 & 3,813,726 & 250,759 & \unknown & 3 \\
CryptoMiniSat & 4.5.3 & 140,000 & 2,441,566 & 539 & 26 \\
& 5 & 32,163,801 & 2,178,965 & 598 & 29 \\
Lingeling & ats1 & 6,586,770 & 436,621 & 980 & 23 \\
Plingeling & ats1 & 452,630,440 & 3,275,498 & \unknown & 88 \\
Treengeling & ats1 & 18,629,811 & 1,640,289 & \unknown & 64 \\
Glucose & 4.0 & 14,727,839 & 990,491 & 272 & 8 \\
Glucose Syrup & 4.0 & 37,021,496 & 629,363 & 201 & 14 \\
\end{tabular}
\caption{Testcase~A can be solved within 1 minute by all SAT solvers}
\label{tab:tcA-results}
\end{center}
\end{table}
%
We end up with the result that the hash collision given in Testcase~C
can be solved by the majority of modern SAT solvers. Of course, the cryptanalyst
needs to figure out good starting points for the hash collision and encode them
in the differential characteristic, but this task is still considered practical,
because it can be easily automated.
\subsection{Evaluating simplification}
\label{sec:results-simplification}
%
As a next approach, we looked at CNF simplifiers. Those simplifiers consume a
CNF file and transform it into an equisatisfiable CNF file.
The simplified CNF files might lead to performance improvements.
\begin{prop}
Simplification reduces the problem size (number of variables and clauses).
\end{prop}
%Consider for example Testcase~18 (Appendix \ref{sec:tc18}) in the basic encoding
%introduced in Section~\ref{sec:enc-original}. Then simplification will reduce the problem
%size down by 57--77~\% of its original size (as illustrated in Table~\ref{tab:simpl-size}).
%We verified these data for all simplified files and got similar results.
%Therefore the claim holds considering the problem size gets reduced to
%approximately half of its size.
%
%\begin{table}[!h]
% \begin{center}
% \begin{tabular}{rcccc}
% \textbf{simplification} & \textbf{variables} &/& \textbf{clauses} & \textbf{(variables left / clauses left)} \\
% \hline
% none & 590953 &/& 107839 & (100~\% / 100~\%) \\
% satelite & 457972 &/& 69670 & (77.50~\% / 64.61~\%) \\
% cmsat & 342139 &/& 61789 & (57.90~\% / 57.30~\%) \\
% lingeling & 344544 &/& 107839 & (58.30~\% / 100.00~\%) \\
% minisat & 391134 &/& 61845 & (66.19~\% / 57.35~\%)
% \end{tabular}
% \caption{
% Problem sizes of Testcase 18 in the encoding of
% Section~\ref{sec:enc-original} after simplification
% }
% \label{tab:simpl-size}
% \end{center}
%\end{table}
Consider for example Testcase~C (Appendix \ref{sec:tcC}) in the basic encoding
introduced in Section~\ref{sec:enc-original}. Simplification reduces the number of variables
to between 42.9~\% and 100~\% and the number of clauses to between 42.0~\% and 60.3~\%
of their original values, depending on the simplifier (as illustrated in Table~\ref{tab:simpl-size}).
We verified these data for all simplified files and got similar results.
Therefore, the claim holds, considering that the problem size is typically reduced to
approximately half of its original size.
\begin{table}[!h]
\begin{center}
\begin{tabular}{rcccc}
\textbf{simplification} & \textbf{variables} & \textbf{percent of none} &\textbf{clauses} & \textbf{percent of none} \\
\hline
none & 48,704 & 100~\% & 253,984 & 100~\% \\
cmsat & 24,503 & 50.31~\% & 111,931 & 44.07~\% \\
lingeling & 48,704 & 100~\% & 106,626 & 41.98~\% \\
minisat & 20,895 & 42.90~\% & 118,236 & 46.55~\% \\
satelite & 27,495 & 56.45~\% & 153,262 & 60.34~\%
\end{tabular}
\caption{
Problem sizes of Testcase~C in the encoding of
Section~\ref{sec:enc-original} after simplification.
lingeling maintains the same number of variables
according to the CNF header.
}
\label{tab:simpl-size}
\end{center}
\end{table}
\begin{prop}
Simplification as a preprocessing step does not significantly improve the runtime of SAT solvers.
\end{prop}
%
We look at Testcase~C which is a more difficult MD4 problem
compared to Testcase~A. Simplification runtime results depend on the
SAT solver, which applies certain simplifications while trying to solve the
CNF, and the simplifier used. A small number of variables or clauses does
not necessarily lead to better performance. But an equisatisfiable encoding
of the same problem is worth considering.
\newpage
Table~\ref{tab:simplification-results} lists runtimes depending
on the simplification used.
\begin{description}
\item[none]
refers to the unsimplified CNF
\item[cmsat]
refers to simplification applied with CryptoMiniSat version~5: \\
\texttt{./cryptominisat5 -p1 file.cnf simplified.cnf}
\item[lingeling]
refers to simplification with lingeling version ats1: \\
\texttt{./lingeling -s file.cnf -o simplified.cnf}
\item[minisat]
also simplifies CNF file with the following command line: \\
\texttt{./minisat file.cnf -dimacs=simplified.cnf}
\item[satelite]
is specifically designed to simplify CNF files: \\
\texttt{./satelite file.cnf simplified.cnf}
\end{description}
It is worth pointing out that simplification time is not part of the runtimes
listed. Simplification can take very long; especially simplification with
lingeling has sometimes taken several hours without producing a result.
In conclusion, simplification leads to a slight improvement of the runtime,
but in general we cannot recommend simplifying every CNF file, because,
technically speaking, SAT solvers already apply simplification
algorithms internally on their own.
\begin{table}[!h]
\begin{center}
\begin{tabular}{cc|ccccc}
\textbf{solver} & \textbf{version} & \textbf{none} & \textbf{cmsat} & \textbf{lingeling} & \textbf{minisat} & \textbf{satelite} \\
\hline
MiniSat & 2.2.0 & 4,519 & 7,649 & 1,337 & 1,476 & 1,293 \\
CryptoMiniSat & 5 & 1,064 & 973 & 1,201 & 4,470 & 3,920 \\
Lingeling & ats1 & 1,492 & 906 & 356 & 860 & 1,297 \\
Treengeling & ats1 & 1,281 & 13,401 & 20,903 & 13,790 & 10,840 \\
Plingeling & ats1 & 2,310 & 1,232 & 955 & 1,384 & 2,030
\end{tabular}
% algotocnf_1.cnf
%
% not listed in runtime results folder contained:
% lingeling-ats1: 74.3 seconds, 49.2 MB
% treengeling-ats1: 1955 100% scheduled jobs 1281.49 seconds, 2468 MB
% cmsat5, ling-simplified 1.cnf: c Total time : 1201.19
% plingeling, ling-simplified 1.cnf: c 954.7 seconds, 719.9 MB
\caption{
Runtimes of Testcase~C after CNF files have been simplified
}
\label{tab:simplification-results}
\end{center}
\end{table}
\subsection{Attacking SHA-256}
\label{sec:result-diff-desc}
%
\index{Differential description}
While the basic approach works well for MD4, the hash algorithm SHA-256 encompasses
a much larger state, making the problem significantly more difficult for the SAT solver.
Consider Testcases 18, 21, 23 and 24. Those testcases describe
round-reduced hash collisions on SHA-256 (the testcase number gives
the number of rounds). Our next approach is called differential description
as originally described in Section~\ref{sec:enc-diff-desc}.
\begin{table}[!h]
\begin{center}
\begin{tabular}{cc|cc}
\textbf{testcase} & \textbf{clauses / variables} & \textbf{testcase} & \textbf{clauses / variables} \\
\hline
18: & 590,953 / 107,839 & 18 diff-desc: & 846,487 / 107,839 \\
21: & 636,838 / 116,800 & 21 diff-desc: & 911,629 / 116,800 \\
23: & 667,438 / 122,774 & 23 diff-desc: & 955,067 / 122,774 \\
24: & 682,722 / 125,761 & 24 diff-desc: & 976,770 / 125,761 \\
\end{tabular}
\caption{Problem sizes of our SHA-256 testcases (clauses / variables)}
\label{tab:problem-sizes}
\end{center}
\end{table}
\begin{prop}
A differential description encoding (Section~\ref{sec:enc-diff-desc})
improves the runtime compared to an encoding without a differential description.
\end{prop}
To test the differential description, we looked at MD4's Testcase~C
and compared it with our SHA-256 testcases. Those testcases
are described in detail in Appendices~\ref{sec:tc18}, \ref{sec:tc21}
and \ref{sec:tc23}.
Recall that differential description explicitly encodes
how differences in arithmetic and bitwise operations propagate in the CNF.
We discussed \boolf{XOR} and \boolf{IF} in Section~\ref{sec:enc-diff-desc}.
These clauses should be deducible by the SAT solver itself and
do not narrow the search space. Therefore we expected equivalent
runtime results for both cases (with or without differential description).
However, the resulting data indicates the opposite.
In Table~\ref{tab:diff-desc-results} we picked two SAT solvers lingeling
and CryptoMiniSat and we can clearly see a significant improvement of the runtimes.
\begin{table}[!h]
\begin{center}
\begin{tabular}{c|cccc}
& \multicolumn{2}{c}{\textbf{CryptoMiniSat~5}} & \multicolumn{2}{c}{\textbf{lingeling-ats1}} \\
\textbf{testcase} & \textbf{w/o dd} & \textbf{w/ dd} & \textbf{w/o dd} & \textbf{w/ dd} \\
\hline
MD4, C & 1,064 & 231 & 798 & 53 \\
SHA-256, 18 & 37 & 37 & 31 & 160 \\
SHA-256, 21 & \timeout & 7,855 & 28,621 & 5,513 \\
SHA-256, 23 & \timeout & 26,212 & 76,196 & 1,450 \\
SHA-256, 24 & \timeout & 37,194 & 78,017 & 1,235
\end{tabular}
% not contained in folder structure:
% SHA256-18, lingeling-ats1, with dd
% 1805652 decisions, 30042.1 decisions/sec
% 188760 conflicts, 3140.6 conflicts/sec
% 47432866 propagations, 0.8 megaprops/sec
% 60.1 seconds, 79.4 MB
% MD4-C, lingeling-ats1, w/o dd
% 18835161 decisions, 12588.8 decisions/sec
% 9364054 conflicts, 6258.6 conflicts/sec
% 830017204 propagations, 0.6 megaprops/sec
% 1496.2 seconds, 147.4 MB
% MD4-C, lingeling-ats1, w/ dd
% 1710883 decisions, 32432.6 decisions/sec
% 333050 conflicts, 6313.5 conflicts/sec
% 44484852 propagations, 0.8 megaprops/sec
% 52.8 seconds, 31.1 MB
\caption{
Runtimes for various testcases with or without differential
description with CryptoMiniSat and lingeling. Testcase~C
has been added for reference. We need to point out that
the timeouts, unlike other testcases, were determined on
Thinkpad x220 (compare Appendix~\ref{app:setup}), because
the processes consistently died on our cluster.
}
\label{tab:diff-desc-results}
\end{center}
\end{table}
We continued by modifying the guessing strategy to reflect
differential cryptanalysis, which generally uses the assumption
that difference variables are assigned first. This strategy
requires customization of the SAT solver, and therefore we
only considered lingeling, which was adapted for our purposes.
\subsection{Modifying the guessing strategy}
\label{sec:results-guessing}
%
In differential cryptanalysis the general assumption is made that
differences should be guessed first. Once they are assigned, we
can look at the Boolean values in the two hash algorithm instances.
To model this behavior, we looked at options provided by SAT solvers.
\newcommand\mone[1][-1]{\texttt{-{}-phase=#1}}
\begin{prop}
Lingeling option \mone{} improves its runtime for our testcases.
\end{prop}
%
Option \mone{} of lingeling is described as \enquote{default phase} set to
$-1$ (negative), $0$ (Jeroslow-Wang strategy~\cite{JeroslowWang})
or $1$ (positive). By default, a strategy engineered by
Jeroslow and Wang~\cite{JeroslowWang} is used, but considering
Claim~\ref{prop:false-first} on page~\pageref{prop:false-first}
we expect \mone{} to provide better runtime results.
Indeed our results consistently indicate a small improvement.
This can be recognized in Table~\ref{tab:phase-results}.
\begin{table}[!h]
\begin{center}
\begin{tabular}{c|cc cc cc cc}
\textbf{testcase} & \multicolumn{2}{c}{18} & \multicolumn{2}{c}{21} & \multicolumn{2}{c}{23} & \multicolumn{2}{c}{24} \\
\hline
\textbf{phase} & 0 & -1 & 0 & -1 & 0 & -1 & 0 & -1 \\
\hline
\textbf{runtime} & 31 & 22 & 28,621 & 19,717 & 76,196 & 71,677 & 85,774 & 70,259 \\
\end{tabular}
\caption{lingeling-ats1 results for SHA-256 comparing \mone{} with \mone[0]{}}
\label{tab:phase-results}
\end{center}
\end{table}
\subsection{Evaluating the lightweight approach}
\label{sec:lightweight-results}
%
Though the results of \mone{} were noticeable, we wanted to push it further.
We got in contact with Armin Biere, who provided us with an extended lingeling implementation
which distinguishes two sets of variables, namely a set of difference variables
which needs to be assigned first.
\begin{prop}
Evaluating difference variables first and with Boolean value false improves the runtime.
\end{prop}
%
The lightweight approach mentioned in Section~\ref{sec:enc-lightweight}
evaluates difference variables first with Boolean value false,
but does not add a differential description. Hence, differential behavior is
not modelled explicitly.
This approach is justified by the assumption that a low number of differences,
leading to a sparse differential path, makes it more likely that differences
cancel out, ending in a hash collision.
Table~\ref{tab:diff-first-false-results} reveals a nice improvement: on average
the runtime drops to roughly 85~\% of its original value.
\begin{table}[!h]
\begin{center}
\begin{tabular}{c|c|cccc}
\textbf{testcase} & \textbf{C} & \textbf{18} & \textbf{21} & \textbf{23} & \textbf{24} \\
\hline
\textbf{basic approach} (ats1) & 798 & 31 & 28,621 & 76,196 & 85,774 \\
\textbf{diff-first-false} (ats1o1) & 652 & 29 & 27,599 & 59,312 & 66,052
\end{tabular}
\caption[Difference variables first (with Boolean value false) results]{
lingeling-ats1o1 and lingeling-ats1 results
comparing a difference variables (with Boolean value false) first approach
with the basic approach
}
\label{tab:diff-first-false-results}
\end{center}
\end{table}
\subsection{Using preference variables}
\label{sec:preference-variables}
%
Our last approach uses \emph{preference variables} mentioned in
Section~\ref{sec:enc-diff-desc-eo}. Under the assumption that
preference variables $x^*$ and difference variables $\Delta x$
are assigned first, additional clauses provide a decision tree
which assigns difference variables first; once they are all set,
values for the two hash algorithm instances are assigned.
\begin{prop}
Adding preference variables dramatically worsens performance.
\end{prop}
%
Section~\ref{sec:enc-diff-desc-eo} introduces preference variables
which enforce the idea that difference variables are evaluated first.
Preference variables only add additional clauses, but do not provide
a runtime improvement per se. The larger number of variables and
clauses makes the problem potentially harder.
However, evaluating them with false first makes sure that a low
number of differences is propagated. Otherwise the SAT solver would
spend much time in fruitless branches and the number of restarts
would be comparably high.
Given an assigned difference variable, differential description
ensures that the value is propagated quickly to other parts of
the equation system. This justifies why our encoding with preference
variables should be compared to an instance with differential
description and difference variables first.
Table~\ref{tab:pref-vars-results} shows results for MD4 and SHA-256 testcases.
The data indicates that for testcases with very small runtimes, the runtime improved.
Unfortunately, for the SHA-256 testcases the runtimes have worsened dramatically.
\begin{table}[!h]
\begin{center}
\begin{tabular}{c|ccccccc}
\textbf{testcase} & \textbf{A} & \textbf{B} & \textbf{C} & \textbf{18} & \textbf{21} & \textbf{23} & \textbf{24} \\
\hline
\textbf{CNF with diff-desc} & 11 & 133 & 155 & 49 & 2,282 & 1,314 & 2,632 \\
\textbf{preference variables added} & 8 & 50 & 62 & \timeout & \timeout & \timeout & \timeout \\
\end{tabular}
\caption{
lingeling-ats1o1 testcases comparing an approach
with differential description with additional preference variables
}
\label{tab:pref-vars-results}
\end{center}
\end{table}
\section{Summary}
\label{sec:results-summary}
%
In this chapter we looked at various approaches to improve the runtime, namely
\begin{enumerate}
\item CNF simplification
\item differential description
\item Lingeling's \mone{} option
\item Difference variables first with Boolean false
\item Preference variables
\end{enumerate}
%
We evaluated these approaches with several SAT solvers and found some significant
runtime improvements. We successfully found hash collisions for MD4 and SHA-256,
where the latter has been reduced to 18, 21, 23 and 24 steps.
% appendix.tex
\section{Appendix}
\subsection{Source Code}
The source code that was developed can be found at the following URL on Github:\\
\href{https://github.com/ralflorent/gns.git}{Source Code}\\
\noindent
The structure is briefly outlined below:\\
\noindent
\textit{gns/gns/gns-backend/}\\
The Source code for the Java 8 / Spring Boot backend system.\\
\noindent
\textit{gns/gns/gns-db/}\\
The SQL script for the database schema.\\
\noindent
\textit{gns/gns/gns-ui/}\\
The web frontend based on AngularJS.\\
\noindent
\textit{gns/gns/utils/}\\
Utility python scripts for automation.\\
\noindent
\textit{gns/docs/}\\
Basic documentation with instructions on setting up the service.\\
\noindent
\textit{gns/docs/latex\_src/}\\
The LaTeX source files this document was built from.\\
\newpage
\chapter{Materials and Methods}
\label{sec:Approach}
\section{General Problem Formulation}
\label{sec:Approach_GPF}
The general idea behind \ac{TAD} is that a high-dimensional input (e.g. a
high-resolution or colored image) is transformed into a low-dimensional space so that it can, first, be reconstructed as well as possible and, second, still be human-understandable in the lower-dimensional space. In addition, both transformations should be computationally efficient, so that an optimal trade-off between network complexity (efficiency) and reconstruction capability is met.
\begin{figure}[!htbp]
\centering
\includegraphics[width=10cm]{figures/problem}
\caption{General \ac{TAD} problem formulation.}
\label{fig:problem}
\end{figure}
With $g_\phi$ the downscaling and $f_\theta$ the upscaling function, $X_{GT}$ the groundtruth (input) as well as $X_{SLR}$, $X_{SHR}$ its low- and high-dimensional representations, the \ac{TAD} problem can be formulated as a combined optimization problem constraining both the low-dimensional representation (readability) and the high-dimensional reconstruction (accuracy). While the second constraint can easily be formulated using the input image $X_{GT}$ as groundtruth, the first constraint is vaguer and hard to quantify. Therefore, it is assumed that the optimal latent space encoding is similar to a trivially obtained low-dimensional representation such as a (bilinearly) interpolated or greyscale image. As further described in \mysecref{sec:Approach_TS}, $X_{SLR}$ is thereby not derived from scratch but builds on the guidance image during training, so that both optimization problems can be solved more independently than when learning both $X_{SLR}$ and $X_{SHR}$ from scratch; typically the first optimization problem (readability of $X_{SLR}$) is the easier one for the model to solve.
\section{Autoencoder Network Architecture}
\label{sec:Approach_ANA}
As no groundtruth for the low-resolution image is available, since \ac{TAD} poses requirements on both the down- and upscaling, and because it has proven to work for the \ac{TAD} problem in previous works, an autoencoder design is used.
\begin{figure}[!htbp]
\centering
\includegraphics[width=14cm]{figures/architecture_example.png}
\caption{Example architecture of the \ac{TAD} autoencoder network design for \ac{SISR} task.}
\label{fig:architecture}
\end{figure}
The autoencoder should be able to handle input images of arbitrary size, it should be runtime-efficient, it should preserve as much information as possible while downscaling, and it should be end-to-end and efficiently trainable.
Therefore a convolutional-only, reasonably shallow network design is used. To avoid the loss of information during downscaling, subpixel convolutional layers are employed instead of pooling operations. Furthermore, in order to enable efficient training and to circumvent vanishing gradient problems (especially for the larger networks that were tested), ResNet-like (\cite{DRLFIR}) \textit{Resblocks} are used next to direct forward passes, which are structured as
$$Resblock(x) = x + Conv2D(ReLU(Conv2D(x)))$$
Since this network design does not continuously downscale the input but
applies pixel shuffling to downscale, while all other layers do not alter their input's shape, the network is also easily adaptable to design changes, which simplifies the architecture optimization process.
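A minimal PyTorch sketch of such a \textit{Resblock}, combined with a pixel-shuffling (space-to-depth) downscaling step, could look as follows; the channel counts, the number of blocks and the use of \texttt{nn.PixelUnshuffle} are illustrative assumptions rather than the exact configuration of the trained networks:
\begin{lstlisting}[language=Python, caption=Illustrative Resblock and encoder sketch]
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Resblock(x) = x + Conv2D(ReLU(Conv2D(x)))"""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class TinyTADEncoder(nn.Module):
    """Feature extraction, lossless space-to-depth downscaling by 'scale',
    projection back to 3 channels for the human-readable SLR image."""
    def __init__(self, channels=64, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(channels) for _ in range(4)])
        self.down = nn.PixelUnshuffle(scale)
        self.tail = nn.Conv2d(channels * scale ** 2, 3, 3, padding=1)

    def forward(self, x):
        return self.tail(self.down(self.blocks(self.head(x))))

x_gt = torch.rand(1, 3, 64, 64)
print(TinyTADEncoder()(x_gt).shape)  # torch.Size([1, 3, 32, 32])
\end{lstlisting}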
\section{Loss Function}
\label{sec:Approach_LF}
The loss function $L$ consists of two parts, representing the two optimization problems introduced in \mysecref{sec:Approach_GPF}. The first one, $L_{TASK}$, is task-dependent and states the difference between the decoder's output $X_{SHR}$ and the desired output $X_{GT}$, e.g. the original \ac{HR} image in the \ac{SISR} task.
$$L_{TASK} = L1(X_{GT}, X_{SHR})$$
The second part, $L_{LATENT}$, encodes the human-readability of the low-dimensional representation. So $L_{LATENT}$ is the distance between the interpolated guidance image $X_{GD}$ and the actual encoding $X_{SLR}$:
$$L_{LATENT} = \begin{cases}
L1(X_{GD}, X_{SLR}) & \text{if } ||L1/d_{max}|| \geq \epsilon
\\ 0.0 & \text{otherwise}
\end{cases}$$
with $||L1/d_{max}||$ being the $L1(X_{GD}, X_{SLR})$ loss normalized
by the maximal deviation between $X_{GD}$ and $X_{SLR}$. Hence, $L_{LATENT}$ is zero in an $\epsilon$-ball around the guidance image, which ensures that \ac{SLR} stays close to the guidance image but also helps to prevent overfitting to the trivial solution $X_{GD} = X_{SLR} \Leftrightarrow g_\phi = 0$. As shown in \mychapterref{sec:ExperimentsandResults}, introducing an $\epsilon$-ball also improves the model's robustness against perturbations.
The overall loss function is a weighted sum of the two loss functions introduced above. The relative weighting $(\alpha, \beta)$ is of great importance for the trade-off between the readability requirement and the performance of the model's upscaling part (super-resolution, colorization). However, as described above, the readability requirement is less strict, so that typically $\alpha \gg \beta$.
$$L = \alpha L_{TASK} + \beta L_{LATENT}$$
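A minimal PyTorch sketch of this combined loss reads as follows; the concrete values of $\alpha$, $\beta$ and $\epsilon$ below are illustrative placeholders, not the values used in the experiments:
\begin{lstlisting}[language=Python, caption=Illustrative sketch of the combined TAD loss]
import torch
import torch.nn.functional as F

def tad_loss(x_shr, x_gt, x_slr, x_gd, alpha=1.0, beta=0.05, eps=0.01):
    """Task loss plus epsilon-insensitive latent (readability) loss."""
    l_task = F.l1_loss(x_shr, x_gt)

    l1_latent = F.l1_loss(x_slr, x_gd)
    # Maximal deviation between encoding and guidance image (d_max).
    d_max = (x_slr - x_gd).abs().max().clamp(min=1e-8)
    # L_LATENT is zero inside the epsilon-ball around the guidance image.
    l_latent = l1_latent if (l1_latent / d_max) >= eps else torch.zeros_like(l1_latent)

    return alpha * l_task + beta * l_latent
\end{lstlisting}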
\section{Training Specifications}
\label{sec:Approach_TS}
Even if a guidance image is part of the loss function, learning both the low- and high-dimensional representation from scratch poses a combined optimization problem which usually is very hard to solve. To ensure (faster) convergence, in the beginning of the training procedure the guidance image is therefore added to the encoder's output and the encoding is not discretized (finetuning). This improves the convergence of both $X_{SLR}$ and $X_{SHR}$, especially in the beginning of the training procedure, since merely a difference between the interpolated representation and the more optimal encoding has to be derived, and the down- and upscaling can be learned more independently, since the lower-dimensional representation is always guaranteed to be useful for upscaling.
\begin{figure}[!htbp]
\centering
\includegraphics[height=5cm]{figures/guidance_loss.png}
\includegraphics[height=5cm]{figures/guidance_psnrs.png}
\caption{Loss curve without adding guidance image (left) and with
adding guidance image (right) while training for \ac{IC} example.}
\label{fig:loss_w_wo_adding_guidance}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[height=5cm]{figures/finetuning_loss.png}
\includegraphics[height=5cm]{figures/finetuning_psnrs_16.png}
\caption{Loss curve without finetuning (left) and with finetuning (right) while training for \ac{SISR} (scale = $16$) example.}
\label{fig:loss_w_wo_finetuning}
\end{figure}
\section{Task-Specific Design}
\label{sec:Approach_TSD}
While the general approach derived in \mysecref{sec:Approach_GPF} can be trivially applied to the \ac{SISR} and the \ac{IC} problem, the extension to the \ac{VSR} task is more involved. In the following, the specifics of all three problems are presented:
\subsection*{Single-Image-Super-Resolution Problem}
An example of the overall design of the \ac{TAD} pipeline applied to the
\ac{SISR} problem is shown in \myfigref{fig:architecture}. The high-dimensional space here is a high-resolution image ($X_{GT} = X_{HR}$), while the low-resolution guidance image is the bilinearly downscaled image ($X_{GD} = X_{LR}^{B}$). As shown for an example with scaling factor 4 in \mysecref{sec:Experiments_SISR}, the model can be trained more efficiently and performs better when scaling iteratively (i.e. scaling twice with factor 2 instead of once with factor 4 directly).
\subsection*{Video-Super-Resolution}
As pointed out in \mychapterref{sec:RelatedWork}, the challenge of \ac{VSR} compared to \ac{SISR} is to take not only one frame into account but also subsequent frames, in order to reconstruct occluded objects, reflect motion, etc. As shown in \mychapterref{sec:RelatedWork}, most approaches tackling the \ac{VSR} problem follow a two-step procedure: first reconstructing the optical flow from the current and previous low-resolution frames, using the result to warp the image, and finally upscaling it. An improvement in any of these building blocks would improve the overall reconstruction, so there are multiple possibilities for finding a more optimal low-dimensional representation. Several approaches were tried, including a direct approach which encodes the reconstruction capability of a given \ac{VSR} model into the loss function and tries to shape the low-dimensional representation such that the model's performance improves, as well as an approach directly targeting the optical flow calculation (more details in the appendix).
\begin{figure}[!htbp]
\centering
\includegraphics[width=14cm]{figures/architecture_video_external}
\caption{Design for \ac{TAD} video super-resolution task (example network
architecture).}
\label{fig:architecture_video}
\end{figure}
The most promising approach is displayed in \myfigref{fig:architecture_video} and involves training in multiple steps. In the first step, the autoencoder model is trained on incoherent images (similar to \ac{SISR}) to learn a good general down- and upscaling transformation. In a second step, the pretrained model is explicitly trained on a video dataset in order to produce the training dataset for a \ac{VSR} model, which in the third step is trained specifically on these \ac{SLR} frames. For validation after training, subsequent frames are first downscaled using the video-pretrained \ac{TAD} model; the downscaled images are then fed to the trained \ac{VSR} model, which upscales them. While this approach is in principle applicable to all \ac{VSR} frameworks, within the scope of this project the SOFVSR model (\cite{LFVSRTHROFE}) was used.\footnote{Further details about the selection of SOFVSR and a description of the approach itself can be found in the appendix.}
\subsection*{Image Colorization}
While in a grayscale image information about intensity, color value and
saturation is mingled across all channels, other color spaces split this
information into separate channels. In order to retain as much information about the original colors as possible, a non-uniformly weighted (a uniformly weighted sum, as in plain grayscale, destroys color contrast information) and static (i.e. non-periodic, unlike hue in the HSV color space) sum of the original color values would be optimal, which is e.g. the Y channel of the YCbCr color space. It is therefore used as the guidance image ($X_{GD} = X_{GRY}^Y$), while the original colored image is used as the ground truth ($X_{GT} = X_{COL}$).
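For illustration, the guidance image for the colorization task can be computed with the standard ITU-R BT.601 luma weights; the tensor layout is an assumption.
\begin{verbatim}
import torch

def rgb_to_y(img_rgb):
    # Y channel of YCbCr: a static, non-uniformly weighted sum of R, G and B.
    # img_rgb: tensor of shape (N, 3, H, W) with values in [0, 1].
    r, g, b = img_rgb[:, 0:1], img_rgb[:, 1:2], img_rgb[:, 2:3]
    return 0.299 * r + 0.587 * g + 0.114 * b
\end{verbatim}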
\begin{figure}[!htbp]
\centering
\includegraphics[width=14cm]{figures/architecture_color_large.png}
\caption{Design for \ac{TAD} colorization task (example network architecture).}
\label{fig:architecture_color}
\end{figure}
\section{Implementation}
\label{sec:Approach_IMP}
The project was implemented in Python 3, using the PyTorch deep learning
framework. Although some ideas from Kim et al. \cite{TAID} were adopted as described above, the pipeline had to be re-implemented from scratch and
re-validated, since neither code nor any pretrained model was publicly
available (nor upon request). As PyTorch merely provides the subpixel
convolutional (pixel shuffle) layer, its inverse transformation was implemented as well. Since most of the open-source models for the \ac{VSR} problem, including the model used here, are only available in TensorFlow, the model was re-implemented and adapted to the \ac{TAD} pipeline.
\newline
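A minimal sketch of such an inverse transformation (a pixel ``unshuffle'') is shown below; it is an illustration rather than the project's exact implementation, and recent PyTorch releases ship an equivalent \texttt{nn.PixelUnshuffle}.
\begin{verbatim}
import torch

def pixel_unshuffle(x, scale):
    # Inverse of the sub-pixel (pixel shuffle) operation: moves spatial blocks
    # of size scale x scale into the channel dimension.
    n, c, h, w = x.shape  # H and W must be divisible by scale
    x = x.view(n, c, h // scale, scale, w // scale, scale)
    x = x.permute(0, 1, 3, 5, 2, 4).contiguous()
    return x.view(n, c * scale * scale, h // scale, w // scale)
\end{verbatim}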
During program development, attention was paid to generality and interchangeability of components in order to efficiently test a variety of different models and datasets as well as to guarantee comparability of different approaches. The full code stack can be found at \url{https://github.com/simon-schaefer/tar}.
% The objectives of the ``Materials and Methods'' section are the following:
% \begin{itemize}
% \item \textit{What are tools and methods you used?} Introduce the environment, in which your work has taken place - this can be a software package, a device or a system description. Make sure sufficiently detailed descriptions of the algorithms and concepts (e.g. math) you used shall be placed here.
% \item \textit{What is your work?} Describe (perhaps in a separate section) the key component of your work, e.g. an algorithm or software framework you have developed.
% \end{itemize}
\documentclass[24pt, a4]{article}
% To compile latex inside of vim
% ! clear; pdflatex name_of_file.tex or "%"
% \usepackage[utf8]{inputenc}
\usepackage[parfill]{parskip}
\usepackage{listings}
\usepackage{geometry}
\title{Linked List}
\author{Mustafa Muhammad}
\date{12th November 2021}
\begin{document}
\maketitle
List of Questions:
\begin{enumerate}
\item{General format}
\end{enumerate}
\section{Add two numbers}
Input: l1 = [2,4,3], l2 = [5,6,4]
Output: [7,0,8]
Explanation: 342 + 465 = 807.
\begin{lstlisting}
class Solution:
def addTwoNumbers(self, l1: Optional[ListNode], l2: Optional[ListNode]) -> Optional[ListNode]:
list1 = l1
list2 = l2
ans = ListNode()
head = ans
carry = 0
while(list1 or list2):
added = 0
if list1 != None:
added += list1.val
else:
added += 0
if list2 != None:
added += list2.val
else:
added += 0
added += carry
ans.next = ListNode(added % 10)
ans = ans.next
carry = int(added/10)
if list1 != None:
list1 = list1.next
if list2 != None:
list2 = list2.next
if carry > 0:
ans.next = ListNode(carry)
ans = ans.next
return head.next
\end{lstlisting}
\section{Remove Nth Node from end of list}
Input: head = [1,2,3,4,5], n = 2
Output: [1,2,3,5]
Input: head = [1], n = 1
Output: []
\begin{lstlisting}
class Solution:
def removeNthFromEnd(self, head: Optional[ListNode], n: int) -> Optional[ListNode]:
right = head
# so that we initialize 1 step back
dummy = ListNode(0, head)
left = dummy
for i in range(0, n):
right = right.next
while right != None:
right = right.next
left = left.next
left.next = left.next.next
return dummy.next
\end{lstlisting}
\section{Merge K sorted Lists}
You are given an array of k linked-lists lists, each linked-list is sorted in ascending order.
Merge all the linked-lists into one sorted linked-list and return it.
Input: lists = [[1,4,5],[1,3,4],[2,6]]
Output: [1,1,2,3,4,4,5,6]
Explanation: The linked-lists are:
[
1->4->5,
1->3->4,
2->6
]
merging them into one sorted list:
1->1->2->3->4->4->5->6
\begin{lstlisting}
import heapq
class Solution:
def mergeKLists(self, lists: List[Optional[ListNode]]) -> Optional[ListNode]:
heap = []
for i in range(0, len(lists)):
linked_list = lists[i]
while linked_list != None:
heapq.heappush(heap, linked_list.val)
linked_list = linked_list.next
ans = ListNode()
head = ans
while len(heap) != 0:
ans.next = ListNode(heapq.heappop(heap))
ans = ans.next
return head.next
\end{lstlisting}
\section{Merge Two sorted lists}
Input: l1 = [1,2,4], l2 = [1,3,4]
Output: [1,1,2,3,4,4]
\begin{lstlisting}
class Solution:
def mergeTwoLists(self, l1: Optional[ListNode], l2: Optional[ListNode]) -> Optional[ListNode]:
first = l1
second = l2
ans = ListNode()
head = ans
while first and second:
if first.val < second.val:
ans.next = ListNode(first.val)
ans = ans.next
first = first.next
elif first.val > second.val:
ans.next = ListNode(second.val)
ans = ans.next
second = second.next
else:
ans.next = ListNode(second.val)
ans = ans.next
ans.next = ListNode(second.val)
ans = ans.next
second = second.next
first = first.next
if first != None:
while first != None:
ans.next = ListNode(first.val)
ans = ans.next
first = first.next
if second != None:
while second != None:
ans.next = ListNode(second.val)
ans = ans.next
second = second.next
return head.next
\end{lstlisting}
\section{Reverse Linked List}
Input: head = [1,2,3,4,5]
Output: [5,4,3,2,1]
\begin{lstlisting}
class Solution:
def reverseList(self, head: Optional[ListNode]) -> Optional[ListNode]:
prev = None
current = head
while current != None:
next = current.next
current.next = prev
prev = current
current = next
return prev
\end{lstlisting}
\section{LinkedList Cycle}
Given head, the head of a linked list, determine if the linked list has a cycle in it.
The naive approach is to use a hashmap with O(n) worst-case space; the solution below uses the fast/slow two-pointer technique with O(1) space.
\begin{lstlisting}
class Solution:
def hasCycle(self, head: Optional[ListNode]) -> bool:
slow = head
fast = head
while fast and fast.next:
fast = fast.next.next
slow = slow.next
if fast == slow:
return True
return False
\end{lstlisting}
\section{LinkedList Cycle II}
Given head, return the node where the cycle begins, or None if there is no cycle. Once the fast and slow pointers meet, resetting slow to head and advancing both pointers one step at a time makes them meet again at the start of the cycle.
\begin{lstlisting}
class Solution:
def detectCycle(self, head: ListNode) -> ListNode:
slow = head
fast = head
cycle = False
while fast and fast.next:
fast = fast.next.next
slow = slow.next
if fast == slow:
cycle = True
break
if cycle:
slow = head
while slow != fast:
fast = fast.next
slow = slow.next
return slow
return None
\end{lstlisting}
\section{Remove duplicates from sorted list}
\begin{lstlisting}
class Solution:
def deleteDuplicates(self, head: Optional[ListNode]) -> Optional[ListNode]:
node = head
ans = node
if not head or not head.next: return head
while node.next != None:
if node.val == node.next.val:
node.next = node.next.next
else:
node = node.next
return ans
\end{lstlisting}
\section{Partition List}
Partition the list so that all nodes with values less than x come before all nodes with values greater than or equal to x, preserving the original relative order within each partition.
Input: head = [1,4,3,2,5,2], x = 3
Output: [1,2,2,4,3,5]
\begin{lstlisting}
class Solution:
def partition(self, head: Optional[ListNode], x: int) -> Optional[ListNode]:
left_part = ListNode()
ans_head = left_part
right_part = ListNode()
right_head = right_part
node = head
while node != None:
if node.val < x:
left_part.next = ListNode(node.val)
left_part = left_part.next
else:
right_part.next = ListNode(node.val)
right_part = right_part.next
node = node.next
left_part.next = right_head.next
return ans_head.next
\end{lstlisting}
\section{Rotate List}
Given the head of a linked list, rotate the list to the right by k places.
Input: head = [1,2,3,4,5], k = 2
Output: [4,5,1,2,3]
\begin{lstlisting}
class Solution:
def rotateRight(self, head: Optional[ListNode], k: int) -> Optional[ListNode]:
if head == None:
return None
tail = head
length = 1
while tail.next != None:
tail = tail.next
length += 1
k = k%length
if k == 0:
return head
position = length-1-k
j = 0
front = head
while j != position:
front = front.next
j+=1
tail_start = front.next
front.next = None
tail.next = head
return tail_start
\end{lstlisting}
\section{Remove LinkedList Elements}
Remove all nodes whose value equals val from the linked list.
\begin{lstlisting}
class Solution:
def removeElements(self, head: Optional[ListNode], val: int) -> Optional[ListNode]:
if head == None:
return None
while(head != None and head.val == val):
head = head.next
current = head
current_head = current
while current != None and current.next != None:
if current.next.val == val:
current.next = current.next.next
else:
current = current.next
return current_head
\end{lstlisting}
\section{Delete Node}
A tricky question because we are asked to delete the node that we are currently at, without access to the previous node; the solution copies the next node's value into the current node and then skips the next node.
\begin{lstlisting}
class Solution:
def deleteNode(self, node):
"""
:type node: ListNode
:rtype: void Do not return anything, modify node in-place instead.
"""
node.val = node.next.val
node.next = node.next.next
\end{lstlisting}
\section{Middle of Linked List}
\begin{lstlisting}
class Solution:
def middleNode(self, head: Optional[ListNode]) -> Optional[ListNode]:
fast = head
slow = head
while fast and fast.next:
fast = fast.next.next
slow = slow.next
return slow
\end{lstlisting}
\section{Odd Even LinkedList}
\begin{lstlisting}
class Solution:
def oddEvenList(self, head: Optional[ListNode]) -> Optional[ListNode]:
if head == None:
return None
even = head.next
even_head = even
odd = head
odd_head = odd
while even and even.next:
odd.next = even.next
odd = odd.next
even.next = odd.next
even = even.next
odd.next = even_head
return odd_head
\end{lstlisting}
\section{Reorder List}
You are given the head of a singly linked-list. The list can be represented as
$L_0 \rightarrow L_1 \rightarrow \ldots \rightarrow L_{n-1} \rightarrow L_n$.
Reorder it to $L_0 \rightarrow L_n \rightarrow L_1 \rightarrow L_{n-1} \rightarrow L_2 \rightarrow L_{n-2} \rightarrow \ldots$
Input: head = [1,2,3,4,5]
Output: [1,5,2,4,3]
\begin{lstlisting}
import copy
class Solution:
def reorderList(self, head: ListNode) -> None:
if not head or not head.next:
return
def reverse(head):
prev = None
current = copy.copy(head)
while current != None:
next = current.next
current.next = prev
prev = current
current = next
return prev
slow, fast = head, head
while fast and fast.next:
slow = slow.next
fast = fast.next.next
head1 = reverse(slow.next)
slow.next = None
p = head
q = head1
while q:
temp1 = p.next
temp2 = q.next
p.next = q
q.next = temp1
p = temp1
q = temp2
\end{lstlisting}
\end{document}
%% $Id$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[english]{article}
\usepackage[latin1]{inputenc}
\usepackage{babel}
\usepackage{verbatim}
%% do we have the `hyperref package?
\IfFileExists{hyperref.sty}{
\usepackage[bookmarksopen,bookmarksnumbered]{hyperref}
}{}
%% do we have the `fancyhdr' or `fancyheadings' package?
\IfFileExists{fancyhdr.sty}{
\usepackage[fancyhdr]{latex2man}
}{
\IfFileExists{fancyheadings.sty}{
\usepackage[fancy]{latex2man}
}{
\usepackage[nofancy]{latex2man}
\message{no fancyhdr or fancyheadings package present, discard it}
}}
%% do we have the `rcsinfo' package?
\IfFileExists{rcsinfo.sty}{
\usepackage[nofancy]{rcsinfo}
\rcsInfo $Id$
\setDate{\rcsInfoLongDate}
}{
\setDate{ 2011/02/22}
\message{package rcsinfo not present, discard it}
}
\setVersionWord{Version:} %%% that's the default, no need to set it.
\setVersion{=PACKAGE_VERSION=}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{Name}{1}{hpcstruct}{The HPCToolkit Performance Tools}{The HPCToolkit Performance Tools}{hpcstruct:\\ Recovery of Static Program Structure}
\Prog{hpcstruct} recovers the static program structure of \emph{fully optimized} object code for use with an \textbf{HPCToolkit} correlation tool.
In particular, \Prog{hpcstruct} recovers source code procedures and loop nests, detects inlining, and associates procedures and loops with object code addresses.
See \HTMLhref{hpctoolkit.html}{\Cmd{hpctoolkit}{1}} for an overview of \textbf{HPCToolkit}.
\end{Name}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Synopsis}
\Prog{hpcstruct} \oOpt{options} \Arg{binary}
Typical usage:\\
\SP\SP\SP\Prog{hpcstruct} \Arg{binary}\\
which creates \File{basename(}\Arg{binary}\File{).hpcstruct}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Description}
Given an application binary or DSO \Arg{binary}, \Prog{hpcstruct} recovers the program structure of its object code.
Program structure is a mapping of a program's static source-level structure to its object code.
By default, \Prog{hpcstruct} writes its results to the file \File{basename(<binary>).hpcstruct}.
This file is typically passed to \textbf{HPCToolkit}'s correlation tool, \HTMLhref{hpcprof.html}{\Cmd{hpcprof}{1}} or \HTMLhref{hpcprof-flat.html}{\Cmd{hpcprof-flat}{1}}.
\Prog{hpcstruct} is designed primarily for highly optimized binaries created from C, C++ and Fortran source code.
Because \Prog{hpcstruct}'s algorithms exploit a binary's debugging information, for best results, \Arg{binary} should be compiled with standard debugging information.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Arguments}
\begin{Description}
\item[\Arg{binary}] An executable binary or dynamically linked library.
Note that \Prog{hpcstruct} does not recover program structure for libraries that \Arg{binary} depends on. To create such information, run hpcstruct on each dynamically linked library (or relink your program with static versions of the libraries).
\end{Description}
Default values for an option's optional arguments are shown in \{\}.
\subsection{Options: Informational}
\begin{Description}
\item[\OptoArg{-v}{n}, \OptoArg{--verbose}{n}]
Verbose: generate progress messages to stderr at verbosity level \Arg{n}. \{1\}
\item[\Opt{-V}, \Opt{--version}]
Print version information.
\item[\Opt{-h}, \Opt{--help}]
Print help.
\item[\OptoArg{--debug}{n}]
Debug: use debug level \Arg{n}. \{1\}
\item[\OptArg{--debug-proc}{glob}]
Debug structure recovery for procedures matching the procedure glob \Arg{glob}.
\end{Description}
\subsection{Options: Structure recovery}
\begin{Description}
\item[\OptArg{-I}{path}, \OptArg{--include}{path}]
Use \Arg{path} when resolving source file names.
This option is useful when a compiler records the same filename in \emph{different} ways within the symbolic information.
(Yes, this does happen.)
For a recursive search, append a '+' after the last slash, e.g., \texttt{/mypath/+}.
May pass multiple times.
\item[\OptArg{--loop-intvl}{yes|no}]
Should loop recovery heuristics assume an irreducible interval is a loop? \{yes\}.
\item[\OptArg{--loop-fwd-subst}{yes|no}]
Should loop recovery heuristics assume forward substitution may occur? \{yes\}.
\item[\OptArg{-N}{all|safe|none}, \OptArg{--normalize}{all|safe|none}]
Specify normalizations to apply to program structure. \{all\}
\begin{itemize}
\item \textbf{all}: apply all normalizations
\item \textbf{safe}: apply only safe normalizations
\item \textbf{none}: apply no normalizations
\end{itemize}
\item[\OptArg{-R}{'old-path=new-path'}, \OptArg{--replace-path}{'old-path=new-path'}]
Substitute instances of \Arg{old-path} with \Arg{new-path}; apply to all paths (e.g., a profile's load map and source code) for which \Arg{old-path} is a prefix. Use '\\' to escape instances of '=' within a path. May pass multiple times.
Use this option when a profile or binary contains references to files that have been relocated, such as might occur with a file system change.
\end{Description}
\subsection{Options: Output}
\begin{Description}
\item[\OptArg{-o}{file}, \OptArg{--output}{file}]
Write results to \Arg{file}. Use `-' for \File{stdout}. \{\File{basename(}\Arg{binary}\File{).hpcstruct}\}
\item[\Opt{--compact}]
Generate compact output, eliminating extra white space.
% \item[\OptArg{-p}{path-list}, \OptArg{--canonical-paths}{path-list}] Ensure that scope tree only contains files found in the colon-separated path list \Arg{path-list}. May be passed multiple times.
\end{Description}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Examples}
%\begin{enumerate}
%\item
Assume we have collected profiling information for the (optimized) binary \File{sweep3dsingle}, compiled with debugging information.
We wish to recover program structure in the file \File{sweep3dsingle.hpcstruct} for use with \HTMLhref{hpcprof.html}{\Cmd{hpcprof}{1}} or \HTMLhref{hpcprof-flat.html}{\Cmd{hpcprof-flat}{1}}.
To do this, execute:
\begin{verbatim}
hpcstruct sweep3dsingle
\end{verbatim}
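To additionally write the output to a chosen file and search a source directory recursively when resolving file names, the options described above can be combined; the source path below is illustrative:
\begin{verbatim}
  hpcstruct -I /path/to/sweep3d/src/+ -o sweep3dsingle.hpcstruct sweep3dsingle
\end{verbatim}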
%\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Notes}
\begin{enumerate}
\item For best results, an application binary should be compiled with debugging information.
To generate debugging information while also enabling optimizations, use the appropriate variant of \verb+-g+ for the following compilers:
\begin{itemize}
\item GNU compilers: \verb+-g+
\item Intel compilers: \verb+-g -debug inline_debug_info+
\item PathScale compilers: \verb+-g+
\item PGI compilers: \verb+-gopt+
%\item Compaq and SGI MIPSpro compilers: \verb+-g3+
%\item Sun Compilers: \verb+-g -xs+
\end{itemize}
\item While \Prog{hpcstruct} attempts to guard against inaccurate debugging information, some compilers (notably PGI's) often generate invalid and inconsistent debugging information.
Garbage in; garbage out.
\item C++ mangling is compiler specific. On non-GNU platforms, \Prog{hpcstruct}
tries both the platform's and GNU's demanglers.
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Bugs}
\begin{enumerate}
\item \Prog{hpcstruct} may incorrectly infer the structure of loops contained within switch statements.
When reconstructing the control flow graph (CFG) of the object code, \Prog{hpcstruct} currently ignores indirect jumps and does not add edges from the jump to possible target basic blocks.
The result is an incomplete CFG and a misleading loop nesting tree.
The workaround is to use an if/elseif/else statement.
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{See Also}
\HTMLhref{hpctoolkit.html}{\Cmd{hpctoolkit}{1}}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Version}
Version: \Version\ of \Date.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{License and Copyright}
\begin{description}
\item[Copyright] \copyright\ 2002-2015, Rice University.
\item[License] See \File{README.License}.
\end{description}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Authors}
\noindent
Nathan Tallent \\
John Mellor-Crummey \\
Rob Fowler \\
Rice HPCToolkit Research Group \\
Email: \Email{hpctoolkit-forum =at= rice.edu} \\
WWW: \URL{http://hpctoolkit.org}.
Thanks to Gabriel Marin and Jason Eckhardt.
\LatexManEnd
\end{document}
%% Local Variables:
%% eval: (add-hook 'write-file-hooks 'time-stamp)
%% time-stamp-start: "setDate{ "
%% time-stamp-format: "%:y/%02m/%02d"
%% time-stamp-end: "}\n"
%% time-stamp-line-limit: 50
%% End:
%!TEX root = ../../thesis.tex
\section{Emergence of HTML editing JavaScript libraries}
Around the year 2003\footnote{compare \textit{my table of all editors}} the first JavaScript libraries emerged that made use of Microsoft's and Mozilla's editing mode to offer rich-text editing in the browser. Usually these libraries were released as user-interface components (text fields) with built-in rich-text functionality and were only partly customizable.
In May 2003 and March 2004, versions 1.0 of ``FCKEditor''\footnote{Now distributed as ``CKEditor''} and ``TinyMCE'' were released as open-source projects. These projects are still being maintained and remain among the most used rich-text editors. TinyMCE is the default editor for Wordpress and CKEditor is listed as the most popular rich-text editor for Drupal\footnote{\url{https://www.drupal.org/project/project\_module}, last checked on 07/16/2015}.
Since the introduction of Microsoft's HTML editing APIs, a large number of rich-text editors have been implemented. While many have been abandoned, GitHub lists about 600 JavaScript projects related to rich-text editing\footnote{\url{https://github.com/search?o=desc\&q=wysiwyg\&s=stars\&type=Repositories\&utf8=\%E2\%9C\%93}, last checked on 07/16/2015}. It should be noted, however, that some projects only use other projects' editors and some projects are stubs. Popular choices on GitHub include ``MediumEditor'', ``wysihtml'', ``Summernote'' and others.
%!TEX root = ../thesis.tex
% **************************** Define Graphics Path **************************
\ifpdf
\graphicspath{{Chapter5/Figs/Raster/}{Chapter5/Figs/PDF/}{Chapter5/Figs/}}
\else
\graphicspath{{Chapter5/Figs/Vector/}{Chapter5/Figs/}}
\fi
%*******************************************************************************
%****************************** Fifth Chapter *********************************
%*******************************************************************************
\chapter{Experiments and Results}
\label{chapter5}
In this penultimate chapter we present results obtained with the methodology and software presented in Chapter~\ref{chapter4}. We begin by discussing the training of the rates $\Gamma^{(v)}$ and the problems that we run into, before turning our attention to sampling with the obtained rates.
\section{Training the rates}
\label{sec:training_exp}
\subsection{The elusive timescale $\lambda$}
\label{subsec:elusive_lambda}
An intuitive explanation of the Todorov loss introduced in section~\ref{subsec:todorov} is in terms of penalisation for deviations away from the passive dynamics of the system, i.e. an agent has full control over the system but pays for straying far from the passive dynamics. Our method in its most basic form, using the \emph{latest} variational rates $\Gamma^{(v)}_{s \rightarrow s^\prime}$ to sample and using \emph{permute} batch generation, seems to completely disregard this penalty for straying far away from the passive dynamics when applied to the TFIM. Optimised rates $\Gamma_*^{(v)}$ obtained with this method vary depending on the initialisation, but the training itself is not sensitive to it. It is robust in the sense that low loss is obtained irrespective of the initialisation, barring very bad hyperparameter settings, but the rates $\Gamma_*^{(v_i)}$ corresponding to these low losses are not the same. At first glance it appears that there are many \emph{local optima} for the rates, or alternatively, that there is an invariance in the loss and many rates are optimal. The latter turns out to be true. We refer to this problem as the inability to learn the correct \textbf{timescale} $\lambda$. This issue is illustrated in Fig.~\ref{fig:scaleblowup}, which shows the number of spin flips in the sampled trajectory throughout training for three initialisations, along with the distributions of holding times $P(\tau) \sim \lambda e^{-\lambda \tau}$ of the final learned rates.
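If, as assumed here, the holding times of a sampled trajectory are approximately exponentially distributed, the timescale shown in the figure can be estimated directly from them; a minimal sketch with purely illustrative values:
\begin{verbatim}
import numpy as np

# taus: holding times of one sampled CTMC trajectory (illustrative values).
taus = np.array([0.12, 0.05, 0.31, 0.08])
lam_hat = 1.0 / taus.mean()   # maximum-likelihood estimate of the rate lambda
\end{verbatim}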
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Chapter5/Figs/Vector/scale_blowup}
\caption[The inability of the method to learn the correct timescale $\lambda$]{\textbf{The inability of the method to learn the correct timescale $\lambda$.} The $\log \text{RN}$ + \emph{permute} batch fails to learn the correct timescale of the TFIM model. The figure shows the discrepancy between three learned rates in 1D ($L=8, J=1.0, g=1.0$). We see that the number of flips in the original sampled trajectory varies wildly throughout training, and is not related to the loss (\textbf{top}). All three initialisations reach low loss, but corresponding learned rates vary in scale, as exemplified by the distribution of holding times $\tau_i$ of each CTMC (\textbf{bottom}). The rate parameter $\lambda$ can be orders of magnitude different from the passive rates $\lambda_0$ (black dashed line).}
\label{fig:scaleblowup}
\end{figure}
If we assume that the holding times are approximately exponentially distributed\footnote{In each state the holding time is exponentially distributed, see section~\ref{subsubsect:CTMC}.} and compare the rate constants $\lambda$ for different optimal rates, we see that they can be orders of magnitude different. Yet the obtained rates do appear to have something in common; the similarity can be seen in Fig.~\ref{fig:rate_structure}, which depicts a pair of optimized rates for the 1d-TFIM ($L=6$) and the absolute difference of their \emph{jump chains}. The obtained rates seem to share jump chains $T^{(v)}_{s_i \rightarrow -s_i}$ and differ in holding time distributions. It is important to note that this jump chain is not necessarily the correct jump chain, but merely smooths out the $\log \text{RN}$ values in this type of batch to achieve low loss. Why is this the case?
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Chapter5/Figs/Vector/rate_structure.pdf}
\caption[Structure of learned rates for 1d-TFIM]{\textbf{Structure of rates in 1d-TFIM.} The figure displays the underlying \emph{structure} learned in the 1d-TFIM ($L=6, J=1.0, h=1.0$) using the simple permutation batch. The optimal rates $\Gamma^{(v_1)}_{s_i \rightarrow -s_i}$, $\Gamma^{(v_2)}_{s_i \rightarrow -s_i}$ obtained with two initialisations (\textbf{middle}) for each configuration (\textbf{top}), look very similar. The jump matrices $T^{(v_1)}_{s_i \rightarrow -s_i}$, $T^{(v_2)}_{s_i \rightarrow -s_i}$ corresponding to the learned CTMCs are nearly the same (\textbf{bottom}), with maximum absolute difference being $\Delta_{\text{max}} T_{s_i \rightarrow -s_i} \approx 2\cdot10^{-3}$.}
\label{fig:rate_structure}
\end{figure}
The issue in learning the timescale is not in the logarithm Radon-Nikodym loss, but in the way we generate batches. Moreover, this issue applies to a wider range of models than just the TFIM. For any model which has the following properties:
\begin{enumerate}
\item The passive rates are constant and the same for all adjacent states $\Gamma_{s \rightarrow s^{\prime}}=g$
\item The number of adjacent states $N_a$ is the same for every state $s$ (i.e. independent of the state)
\end{enumerate}
the $\log \text{RN}$ loss is \textbf{detached} from the passive rates $g$, if the variance is calculated from a batch of trajectories with the same time $T$ and number of actions $N_{\text{actions}}$. Detached in this context means that the passive rates $g$ have no effect on the variance and hence on the optimisation of the rates. This can be seen by inserting
\begin{equation}
V(\mathrm{s})=-\sum_{s \neq s^{\prime}} \Gamma_{s \rightarrow s^{\prime}}+H_{s s}=-g N_a + H_{s s} \quad \text{ and } \quad \Gamma_{s \rightarrow s^{\prime}}=g
\end{equation}
into
\begin{equation}
\begin{aligned}
{\operatorname{Var}}\Bigg[\int\bigg(V(k(t))+&\sum_{l \neq k(t)}\bigg(\Gamma_{k(t) \rightarrow l}-\Gamma_{k(t) \rightarrow l}^{(v)}\bigg)\bigg) \mathrm{d} t \\
&+\sum_{n} \log \left(\frac{\Gamma_{k^{(n)} \rightarrow k^{(n+1)}}^{(v)}}{\Gamma_{k^{(n)} \rightarrow k^{(n+1)}}}\right)-E_{0} T-\log \left(\frac{\varphi(k^{(N)})}{\varphi(k^{(0)})}\right)
\Bigg],
\end{aligned}
\end{equation}
to obtain
\begin{equation}
\begin{aligned}
\operatorname{Var}\Bigg[\int\left(H_{s s}-\sum_{l \neq k(t)}\Gamma_{k(t) \rightarrow l}^{(v)}\right)\mathrm{d} t
&+ \sum_{n} \log \left(\Gamma_{k^{(n)} \rightarrow k^{(n+1)}}^{(v)}\right)\Bigg] \\ &-E_{0} T -\log \left(\frac{\varphi(k^{(N)})}{\varphi(k^{(0)})}\right) \textcolor{red}{- gN_aT + gN_aT - N_{\text{actions}}\log{(g)}}.
\end{aligned}
\end{equation}
The terms in red contain the passive rates, and can be taken out of $\operatorname{Var}[\cdot]$. The TFIM obeys both listed properties, and a \emph{permute} batch has constant $T$ and $N_{\text{actions}}$, meaning that we find ourselves in a regime where the correct timescale $\lambda$ \textbf{cannot} be learned.
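This cancellation can also be checked numerically: in a \emph{permute} batch every trajectory shares $T$ and $N_{\text{actions}}$, so the passive-rate terms add the same constant to every trajectory and leave the sample variance untouched. The values below are purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
g, N_a, T, N_actions = 2.5, 4, 10.0, 37     # illustrative parameters
core = rng.normal(size=64)                  # stand-in for the g-independent terms
shift = -g * N_a * T + g * N_a * T - N_actions * np.log(g)
print(np.var(core), np.var(core + shift))   # identical variances
\end{verbatim}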
\subsection{Alternative Batch generation}
The unlearnable timescale for the \emph{permute} batch puts us in a precarious situation. The only way forward in models with the timescale learning issue is to work with trajectories of different lengths $N_{\text{actions}}$, fixed endpoints, and fixed time\footnote{Otherwise we cannot neglect the $E_0T$ term.} $T$. This is computationally disadvantageous, because we can no longer evaluate $\log \text{RN}$ on the whole batch in parallel. Moreover, the generation of the batch becomes difficult to implement and may become a bottleneck itself.
We test the \emph{construct} batch, an alternative batch generation technique described in section~\ref{sec:qoptsampl}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Chapter5/Figs/Vector/constr-g-loss}
\caption[Rate training in 1d-TFIM using the \emph{construct} method]{\textbf{Rate training in 1d-TFIM using the construct method}. Optimisation with the \emph{construct} batch method, converges for different values of parameter $h$, moreover the obtained rates are not independent of $h$. Solved with $L=6, N_a=4, N_e=2, T=1$.}
\label{fig:constr-g-loss}
\end{figure}
Fig.~\ref{fig:constr-g-loss} displays the training loss for the TFIM with different values of $h$ when using the \emph{construct} method for batch generation. Two things are apparent when optimising this new loss. First, it is even more volatile and increasing the batch size helps less than with the \emph{permute} batch. Second, the optimisation becomes more difficult when $h$ is smaller. It is also clear, that this loss now has the capability to scale the outputs of the model throughout optimisation, and at least for small times $T$ the timescale of the model is close to the passive timescale $\lambda_0$. This is clearly seen in Fig.~\ref{fig:troub}, where distributions of holding times are shown for a range of optimisation settings.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{Chapter5/Figs/Vector/troub}
\caption[Distributions of holding times $\tau$ in the 1d-TFIM with \emph{construct} batch]{\textbf{Distributions of holding times $\tau$ in the 1d-TFIM with \emph{construct} batch.} The $\log\text{RN}$ loss is no longer decoupled from the rates. We see that the timescale $\lambda$ is learned by the model and is comparable to the passive rates $\lambda_0=hL$ for a range of setups ($T$, $h$, at $L=6$).}
\label{fig:troub}
\end{figure}
\noindent
However, taking a look at the optimised rates for a range of values of $h$ is worrying, as shown in Fig.~\ref{fig:troubrates}. The structure of the rates does not change much when varying the parameter $h$. Even though the system undergoes a phase transition between $h=0.4$ and $h=1.6$, this is not evident from the obtained transition probabilities. Moreover, while the optimal rates scale correctly with $h$, they are nearly uniform, meaning that the distribution has minimal preference about which spin to flip, regardless of $h$ and the state. This is also exemplified when using the rates to perform sampling: the results are remarkably similar to using the passive rates, see~\ref{subsec:impsampl}.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{Chapter5/Figs/Vector/troub_rates}
\caption[Learned rates of the 1-dTFIM with the \emph{construct} batch]{\textbf{Learned rates of the 1-dTFIM with the \emph{construct} batch.} The optimised rates for the 1d-TFIM scale with $h$, but are nearly uniform. }
\label{fig:troubrates}
\end{figure}
For a comparison of jump chains $T$ that belong to the optimised rates at different $h$, refer to Fig.~\ref{fig:ratecompare2}. And for a comparison between jump chains obtained with different batch construction methods, see Fig.~\ref{fig:ratecompare1}. While the absolute difference between jump matrices in either of the two cases may seem small, this does not speak to any hidden structure uncovered by the loss, as it is merely a consequence of the jump matrices being nearly uniform. Most jump probabilities fall in the range $T_{i \rightarrow j} \in (0.16, 0.17)$ which is remarkably close to $\frac{1}{6} \approx 0.167$.
\begin{figure}[H]
\centering
\includegraphics[width=0.65\linewidth]{Chapter5/Figs/Vector/Tvlamb}
\caption[Timescale gap for the \emph{construct} batch.]{\textbf{Timescale gap for the \emph{construct} batch.} When optimising the rates $\Gamma^{(v)}$ using the $\log \text{RN}$ loss and \emph{construct} batch, we find ourselves in one of two learning regimes depending on the simulation time $T$ at constant $h$ and constant range ($N_a^{(\text{min})}$, $N_a^{(\text{max})}$). For small times $T$ the rates are forced towards the passive rates but after a certain $T_\text{treshold}$ the timescale has a hard time converging. The figure shows an example for $h=0.6$ and $N_a^{(\text{max}-\text{min})} = 60$.}
\label{fig:tvlamb}
\end{figure}
% T and h interplay
While training the rates using the \emph{construct} batch method with different parameters $T$, $h$ and $N_b$, it is qualitatively evident that we may find ourselves in one of two training regimes. We have either quick convergence with optimised rates very close to the passive dynamics, such as in Fig.~\ref{fig:constr-g-loss}, or trouble converging at all, with the optimised rates straying far away from the passive dynamics. Neither of the two regimes leads to the right solution. The ``phase transition'' from one learning regime to another occurs when we increase the simulation time $T$ and do not change $N_b$ or $h$, see Fig.~\ref{fig:tvlamb}. Let us examine the loss again,
\begin{equation}
\begin{aligned}
\operatorname{Var}\Bigg[\int\left(H_{s s}-\sum_{l \neq k(t)}\Gamma_{k(t) \rightarrow l}^{(v)}\right)\mathrm{d} t
+ \sum_{n} \log \left(\Gamma_{k^{(n)} \rightarrow k^{(n+1)}}^{(v)}\right) \textcolor{red}{- N_{\text{actions}}\log{(g)}}\Bigg],
\end{aligned}
\end{equation}
where we have purposely left out the terms that cancel out or are constant over all trajectories in the batch. The third term is the only term tying the variance to the passive rates. In any batch there will be two trajectories with the maximum difference in length $N_a^{(\text{max}-\text{min})}$. This puts a bound on how much the third term can vary across trajectories in a batch. If we increase the time $T$, the variability of the first term grows proportionally, while it stays constant for the other two terms. This means that beyond a certain time $T$ the majority contribution will come from the first term.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{Chapter5/Figs/Vector/rate_compare1}
\caption[Comparison of \emph{jump matrices} $T$ obtained with different batch types]{\textbf{Comparison of \emph{jump matrices} $T$ obtained with different batch types.} The Figure shows jump matrices $T_{s \rightarrow s^{\prime}}=\frac{\Gamma_{s \rightarrow s^{\prime}}^{\theta}}{\sum_{s^{\prime} \neq s} \Gamma_{s \rightarrow s^{\prime}}^{\theta}}$, of optimised rates for the TFIM in 1d. The top row shows all configurations of the 1d-TFIM with $L=6$, stacked in the horizontal direction. The second two rows display jump matrices which correspond to leaned rates for both the permute and construct methods. Each pixel is coloured according to probability of flipping the spin at the corresponding site in the first row. The bottom row displays the absolute difference between both jump matrices.}
\label{fig:ratecompare1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{Chapter5/Figs/Vector/rate_compare2}
\caption[Comparison of \emph{jump matrices} $T$ for different batch types]{\textbf{Comparison of \emph{jump matrices} $T$ for different $h$.} The figure displays jump matrices $T_{s \rightarrow s^{\prime}}=\frac{\Gamma_{s \rightarrow s^{\prime}}^{\theta}}{\sum_{s^{\prime} \neq s} \Gamma_{s \rightarrow s^{\prime}}^{\theta}}$ obtained with the construct method for widely different values of $h$ in the 1d-TFIM model. The final row show the absolute difference between the two. We expect the jump matrices to differ for the two distinct phases of the model, but this is not the case, meaning that the optimised rates are not correct.}
\label{fig:ratecompare2}
\end{figure}
We find ourselves in a similar situation to using the \emph{permute} batch, where the passive terms do not matter at all and the timescale is impossible to learn\footnote{Although this case is not as extreme.}. Unfortunately this same argument can be applied to the \emph{split} batch, which may be limited in this same way.
A way around this may be to have $N_a^{(\text{max}-\text{min})}$ vary from epoch to epoch, but this introduces additional computational cost, as well as additional stochasticity into the loss. Varying the time $T$ for each epoch would be easier to implement, since the holding times are arbitrarily assigned to states anyway.
\newpage
\subsection{Architectural choices}
\subsubsection{The pCNN}
As expected, the architecture choices and hyperparameters play an important role in learning the rates $\Gamma^{(v)}$. However, how each separate parameter impacts the optimisation process is not very surprising. More interesting is perhaps the relative importance we can assign to each hyperparameter when compared to the others. As we will see, the hyperparameters of the pCNN play one of two roles. The CTMC simulation time $T$ and batch size $N_b$ stabilise the estimation of the variance loss, thus improving the stability of the learning process and convergence, while the width $N_w$ of the pCNN and its depth $N_l$ increase the representational power of the NN.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{Chapter5/Figs/Vector/init_test_learning}
\caption[Initial rate training experiments]{\textbf{Initial rate training experiments}. Learning experiments using \emph{initial} sampling mode and different simulation setups for 2d-TFIM ($L=3, J=1.0, h=1.0$). \textbf{top left:} $T = 75, N_w = 5, N_l = 3$, \textbf{top right:} $N_b = 50, N_w = 5, N_l = 3$, \textbf{bottom left:} $N_b = 50, T = 50, N_l = 3$, \textbf{bottom right:} $N_b = 50, T = 50, N_w = 5$.}
\label{fig:inittestlearning}
\end{figure}
Initial experiments are shown in Fig.~\ref{fig:inittestlearning}; the top row displays the aforementioned stability. Increasing the batch size decreases the ``volatility'' of the loss, and the same is true for $T$, but only if we consider the loss normalised per time unit. As a rule of thumb, $N_w$ and $N_l$ do not have a significant impact on the efficiency and speed of training, so long as both hyperparameters are chosen somewhere in the Goldilocks zone; deep and narrow networks sometimes struggle to converge (Fig.~\ref{fig:inittestlearning}, bottom right).
Going beyond the training itself, we need to evaluate the \emph{generalisation} of the rates. The worry is that training the rates at a certain $T$ and $N_b$ could overfit and negatively impact sampling down the line, where $T$ will be very different. To test this, the rates were trained for various settings of $T, N_b, N_w, N_l$ and tested by evaluating the loss at $T=50$, $N_b=50$, Fig.~\ref{fig:avgvarloss}. What we learn is that the rates do not overfit, the average test loss is of the same order of magnitude as the train loss, and $N_b$ is more important for stability than $T$. Finally, as we want to construct the most effective ansatz for the eventual sampling experiments, we also investigate the relative time and size cost of increasing the hyperparameters. The results are shown in Fig.~\ref{fig:initialtimespace}.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{Chapter5/Figs/Raster/avg_var_loss}
\caption[Generalisation capabilities of the pCNN]{\textbf{Generalisation capabilities of the pCNN.} The figure displays the effect of $T, N_b$ and $N_w, N_l$ on transferability of the learned rates using initial sampling, 2d-TFIM (L=3, J=1.0, h=1.0). The data was obtained by training the rates for different setups, sampling $N=15$ trajectories with $T=50$ from the learned rates for each setup, permuting the trajectories to obtain batches of $N_b=50$ and finally evaluating the average and sample variance of the $\log \text{RN}$ loss. Subfigures show (left to right) the loss after final epoch of training, average test loss, and variance of test loss. Setups: varying $N_b$ and $T$ at $N_w=3$, $N_l=3$ (\textbf{top}), varying $N_w$ and $N_l$ at $N_b=32$ and $T=30$ (\textbf{bottom}). All loss values are $T$ normalised.}
\label{fig:avgvarloss}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{Chapter5/Figs/Raster/initial_time_space}
\caption[Time and space complexity of the pCNN]{\textbf{Time and space complexity of the pCNN.} Comparison of normalised time needed per epoch averaged over $N=15$ samples, for different $T$, $N_b$ (\textbf{top left}) and $N_w$, $N_l$ (\textbf{top right}). Comparison of normalised batch size averaged over $N=15$ samples, for different $T$, $N_b$ (\textbf{bottom left}) and $N_w$, $N_l$ (\textbf{bottom right}). All values obtained from the same simulations as in Fig.~\ref{fig:avgvarloss}.}
\label{fig:initialtimespace}
\end{figure}
\subsubsection{Comparison with G-pCNN}
A strong inductive bias, e.g. equivariance, to translation or other appropriate symmetries should provide an advantage when learning a mapping equivariant to these same symmetries. Moreover, we expect this advantage to be even more significant in the case of learning with the $\log \text{RN}$ loss. This is because variational rates $\Gamma^{(v)}$ are evaluated multiple times in succession to evaluate the logarithm Radon-Nikodym derivative on a single trajectory, and this is repeated for a whole batch of trajectories.
Initial training experiments were conducted on the TFIM, comparing the learning of a fully connected network and the pCNN in 1D, as well as both compared to the group-equivariant CNN in two dimensions. Qualitatively, there is little difference in the performance of the pCNN and the MLP in 1d, whereas the difference becomes more notable in 2d, where the gCNN seems to learn quickest when comparing networks with comparable numbers of variational parameters. These experiments are limited in two ways. First, due to the stochastic nature of the loss it is hard to tell which architecture is performing better, and statistical metrics are needed. Second, because of a suboptimal implementation of the gCNN, runs with many epochs could not be carried out and the results are thus inconclusive. Further investigation is needed to confirm these qualitative results and is suggested as future work.
\section{Importance sampling}
\label{subsec:impsampl}
Ultimately we were unsuccessful in learning the optimal rates $\Gamma^\prime$ in the TFIM model with the $\log {\text{RN}}$ loss. The restrictions that the fixed endpoints and variable length of trajectories put on the batch construction process are too severe, and the learning algorithm is unable to learn efficiently from batches constructed in such an artificial way. This section shows that the sampler part of the code was indeed implemented, that sampling can be performed with either the discrete or the continuous time formulation of the rates (see Fig.~\ref{fig:sampling_tfim1d}, left), and that the rates in Fig.~\ref{fig:troubrates} produce results resembling sampling from the passive rates of the TFIM (see Fig.~\ref{fig:sampling_tfim1d}, right).
\subsection{Ising model}
\begin{figure}[H]
\centering
\subfloat{\includegraphics[width=0.45\linewidth]{Chapter5/Figs/Vector/sampling_example}}
\quad
\subfloat{ \includegraphics[width=0.45\linewidth]{Chapter5/Figs/Vector/tfim1d_finite_scaling-justsigz.pdf}}
\caption[Sampling in the TFIM model]{\textbf{Sampling in the TFIM model.} Convergence of discrete and continuous formulations of the rates towards the expected $\sigma_z$ value, taken as an average of $N=10$ chains (\textbf{left}). In the limited sampling experiments performed, neither formulation provides a considerable advantage, although the continuous chain is more elegant. Estimation of the modulus per spin magnetisation $M_z=\frac{2}{N}\left|\sum_{i} s_{i}\right|$ using the rates from Fig.~\ref{fig:troubrates}. The analytical solution for $N=\infty$ spins and a VMC result obtained with NetKet~\cite{netket2019} are shown for comparison (\textbf{right}).}
\label{fig:sampling_tfim1d}
\end{figure}
\chapter{Exploratory Data Analysis}
\label{ch:Exploratory-Data-Analysis}
Before we can get into the details of our model, it behooves us to perform an initial \gls{eda} so that we may summarize the main characteristics of the data sets at hand. This will both help us understand which data transformations are needed to render our data serviceable and aid in the discovery of patterns or anomalies that might be present in the data. To this end, we will be applying and discussing the use of a variety of visualization techniques and statistical tests on the \gls{refit} data set. In order to maintain the length and readability of the overall paper, the relevant Figures and Tables for the \gls{ucid} data set are deferred to the Appendix.
\section{Issues}
\label{sec:Exploratory-Data-Analysis:Issues}
Given that the \gls{refit} data set consists of numerous households, each comprising its own subset of data, the first step in our \gls{eda} will be to determine which of these households contains the cleanest data, or in other words, the fewest issues. We define issues here as any one of the following: missing periods of data, days that exhibit an incomplete number of readings, or any recorded values that are labeled as an 'issue' by the data collection team.
\subsection{The 'Issues' Column}
\label{subsec:Exploratory-Data-Analysis:Issues:The-Issues-Column}
The first issue that we will be tackling is the aptly named \textit{Issues} column. As previously mentioned in section \ref{subsec:Introduction:Introduction-to-the-Data:REFIT}, the data collection team responsible for the curation of the \gls{refit} data set appended an \textit{Issues} column to the data set so as to indicate whether the sample being recorded either contains no issues and can be treated normally, given a recorded value of 0, or whether the sum of the \glspl{iam} is greater than that of the household aggregate, given a recorded value of 1. In the cases where the recorded value for the \textit{Issues} column reads 1, the data collection team recommends either completely discarding the data or, at the very least, noting the discrepancy. Table \ref{tab:REFIT-values-recorded} outlines the total number of recorded values alongside the number of values with the \textit{Issues} column set to 1 (\ie values with \textit{issues}).
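The per-household percentages reported in Table \ref{tab:REFIT-values-recorded} can be reproduced with a computation of the following form; the toy data frame stands in for one household's readings.
\begin{verbatim}
import pandas as pd

# Illustrative stand-in for one household's readings with the curators' Issues column.
df = pd.DataFrame({"Issues": [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]})
issue_share = (df["Issues"] == 1).mean() * 100   # percentage of flagged samples
print(round(issue_share, 2))                     # 20.0 for this toy example
\end{verbatim}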
\begin{table}[H]
\begin{adjustwidth}{-3.0cm}{-3.0cm}%
\myfloatalign
\centering
\begin{tabular}{cccc} \toprule
\tableheadline{House no.} & \tableheadline{Date range} & \tableheadline{Values recorded} & \tableheadline{Values with issues} \\ \midrule
1 & 2013-10-09 $\rightarrow$ 2015-07-10 & 6,960,008 & 58,183 (0.84\%) \\ \midrule
2 & 2013-09-17 $\rightarrow$ 2015-05-28 & 5,733,526 & 28,444 (0.5\%) \\ \midrule
3 & 2013-09-25 $\rightarrow$ 2015-06-02 & 6,994,594 & 408,627 (5.84\%) \\ \midrule
4 & 2013-10-11 $\rightarrow$ 2015-07-07 & 6,760,511 & 67,441 (1.0\%) \\ \midrule
5 & 2013-09-26 $\rightarrow$ 2015-07-06 & 7,430,755 & 425,766 (5.73\%) \\ \midrule
6 & 2013-11-28 $\rightarrow$ 2015-06-28 & 6,241,971 & 34,451 (0.55\%) \\ \midrule
7 & 2013-11-01 $\rightarrow$ 2015-07-08 & 6,756,034 & 161,919 (2.4\%) \\ \midrule
8 & 2013-11-01 $\rightarrow$ 2015-05-10 & 6,118,469 & 25,000 (0.41\%) \\ \midrule
9 & 2013-12-17 $\rightarrow$ 2015-07-08 & 6,169,525 & 32,226 (0.52\%) \\ \midrule
10 & 2013-11-20 $\rightarrow$ 2015-06-30 & 6,739,284 & 30,162 (0.45\%) \\ \midrule
11 & 2014-06-03 $\rightarrow$ 2015-06-30 & 4,431,541 & 40,114 (0.91\%) \\ \midrule
12 & 2014-03-07 $\rightarrow$ 2015-07-08 & 5,859,544 & 14,183 (0.24\%) \\ \midrule
13 & 2014-01-17 $\rightarrow$ 2015-05-31 & 4,737,371 & 123,796 (2.61\%) \\ \midrule
15 & 2013-12-17 $\rightarrow$ 2015-07-08 & 6,225,696 & 23,349 (0.38\%) \\ \midrule
16 & 2014-01-10 $\rightarrow$ 2015-07-08 & 5,722,544 & 14,713 (0.26\%) \\ \midrule
17 & 2014-03-06 $\rightarrow$ 2015-06-19 & 5,431,577 & 85,937 (1.58\%) \\ \midrule
18 & 2014-03-07 $\rightarrow$ 2015-05-24 & 5,007,721 & 174,490 (3.48\%) \\ \midrule
19 & 2014-03-06 $\rightarrow$ 2015-06-20 & 5,622,610 & 62,636 (1.11\%) \\ \midrule
20 & 2014-03-20 $\rightarrow$ 2015-06-23 & 5,168,605 & 19,594 (0.38\%) \\ \midrule
21 & 2014-03-07 $\rightarrow$ 2015-07-10 & 5,383,993 & 206,832 (3.84\%) \\ \bottomrule
\end{tabular}
\caption{Range of dates in the \gls{refit} data set as well as the total number of values and the total number of values that contain issues.}
\label{tab:REFIT-values-recorded}
\end{adjustwidth}
\end{table}
\noindent \newline We note that, for the majority of the households, the number of recorded values that contain issues is rather small, with only a few households, namely numbers 3 and 5, presenting a problematic number of values with issues, and households 7, 13, 18 and 21 closely following suit.
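
For illustration, the counts in Table \ref{tab:REFIT-values-recorded} can be reproduced per household with a short \textit{pandas} sketch along the following lines, assuming the CSV layout matches the CLEAN\_House12.csv description given in this chapter:
\begin{verbatim}
import pandas as pd

# Minimal sketch of how the counts in the table above can be reproduced
# for one household; assumes the described layout with an 'Issues' column.
df = pd.read_csv("CLEAN_House12.csv")

total = len(df)
with_issues = (df["Issues"] == 1).sum()
print(f"House 12: {total:,} values recorded, "
      f"{with_issues:,} with issues ({100 * with_issues / total:.2f}%)")
\end{verbatim}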
\subsection{Missing \& Incomplete Data}
\label{subsec:Exploratory-Data-Analysis:Issues:Missing-and-Incomplete-Data}
The second issue that we will be tackling is that of missing or otherwise incomplete data. Here, missing data refers to any dates that fall within the period of recorded data but for which we have no recorded values, while incomplete data refers to any days that contain fewer than 96 readings at a resolution of 15 minutes. The results of our analysis can be seen in Table \ref{tab:REFIT-missing-data}, which also includes a column indicating the longest stretch of consecutive missing days.
\begin{table}[H]
\begin{adjustwidth}{-3.0cm}{-3.0cm}%
\myfloatalign
\centering
\begin{tabular}{ccccc} \toprule
\tableheadline{House no.} & \tableheadline{No. of days} & \tableheadline{Missing days} & \tableheadline{Incomplete days} & \tableheadline{Stretch} \\ \midrule
1 & 640 & 61 (9.53\%) & 57 (8.91\%) & 40 days \\ \midrule
2 & 619 & 128 (20.68\%) & 58 (9.37\%) & 61 days \\ \midrule
3 & 616 & 54 (8.77\%) & 47 (7.63\%) & 40 days \\ \midrule
4 & 635 & 41 (6.46\%) & 79 (12.44\%) & 13 days \\ \midrule
5 & 649 & 21 (3.24\%) & 76 (11.71\%) & 8 days \\ \midrule
6 & 578 & 69 (11.94\%) & 52 (9.0\%) & 32 days \\ \midrule
7 & 615 & 61 (9.92\%) & 51 (8.29\%) & 40 days \\ \midrule
8 & 556 & 43 (7.73\%) & 43 (7.73\%) & 38 days \\ \midrule
9 & 569 & 74 (13.01\%) & 35 (6.15\%) & 40 days \\ \midrule
10 & 588 & 22 (3.74\%) & 79 (13.44\%) & 8 days \\ \midrule
11 & 393 & 31 (7.89\%) & 33 (8.4\%) & 9 days \\ \midrule
12 & 489 & 20 (4.09\%) & 37 (7.57\%) & 8 days \\ \midrule
13 & 500 & 89 (17.8\%) & 79 (15.8\%) & 40 days \\ \midrule
15 & 569 & 38 (6.68\%) & 69 (12.13\%) & 8 days \\ \midrule
16 & 545 & 52 (9.54\%) & 70 (12.84\%) & 17 days \\ \midrule
17 & 471 & 19 (4.03\%) & 37 (7.86\%) & 8 days \\ \midrule
18 & 444 & 15 (3.38\%) & 34 (7.66\%) & 8 days \\ \midrule
19 & 472 & 19 (4.03\%) & 33 (6.99\%) & 8 days \\ \midrule
20 & 461 & 19 (4.12\%) & 27 (5.86\%) & 8 days \\ \midrule
21 & 491 & 33 (6.72\%) & 45 (9.16\%) & 14 days \\ \bottomrule
\end{tabular}
\caption{Total number of days that are missing data in the \gls{refit} data set as well as the number of days that contain incomplete data and the longest period of consecutive days missing data.}
\label{tab:REFIT-missing-data}
\end{adjustwidth}
\end{table}
\noindent \newline We note that the earlier households, in order of numbering (houses 1 through 10), tend to cover a larger range of recorded dates and, consequently, also tend to contain more missing days as well as longer stretches of consecutive missing days. The largest outages span the entirety of February 2014, which is also indicated in the documentation of the \gls{refit} data set; as the earlier households tend to have been set up prior to that date, it makes sense that they would also contain a larger overall number of missing days. The number of incomplete days displays no such correlation and can likely be attributed to any number of smaller-scale factors, including household internet failures, hardware failures, network routing issues, etc.
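
For illustration, the missing and incomplete days in Table \ref{tab:REFIT-missing-data} can be identified with a sketch along these lines; the \textit{Time} column name is an assumption, while the 15-minute resolution and the threshold of 96 readings per day follow the definition above:
\begin{verbatim}
import pandas as pd

# Sketch of the missing/incomplete-day analysis behind the table above.
df = pd.read_csv("CLEAN_House12.csv", parse_dates=["Time"])
readings = df.set_index("Time")["Aggregate"].resample("15min").mean()

per_day = readings.groupby(readings.index.date).count()   # non-NaN readings/day
missing_days = (per_day == 0).sum()                        # no readings at all
incomplete_days = ((per_day > 0) & (per_day < 96)).sum()   # fewer than 96 readings

print(missing_days, "missing days,", incomplete_days, "incomplete days")
\end{verbatim}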
\subsection{Thresholding}
\label{subsec:Exploratory-Data-Analysis:Issues:Thresholding}
To wrap up Section \ref{sec:Exploratory-Data-Analysis:Issues}, we will select a single household with which to continue our \gls{eda}, based on the analysis conducted in Sections \ref{subsec:Exploratory-Data-Analysis:Issues:The-Issues-Column} and \ref{subsec:Exploratory-Data-Analysis:Issues:Missing-and-Incomplete-Data}. This selection is made in order to maintain a high level of integrity in the data of the household we choose to work with and to minimize the number of transformations that must be applied to said data to make it workable. We score each household by taking the mean of the min-max normalized (to the range 0 $\rightarrow$ 1) numbers of incomplete days, missing days and values with issues, and subtracting the obtained value from 1. The results can be seen in Table \ref{tab:REFIT-candidates}.
\begin{table}[H]
\myfloatalign
\centering
\begin{tabular*}{\linewidth}{c@{\extracolsep{\fill}}c} \toprule
\tableheadline{House no.} & \tableheadline{Score} \\ \midrule
20 & 0.98 \\ \midrule
12 & 0.92 \\ \midrule
19 & 0.91 \\ \midrule
11 & 0.89 \\ \midrule
17 & 0.87 \\ \bottomrule
\end{tabular*}
\caption{Weighted scores for the top 5 scoring households in the \gls{refit} data set.}
\label{tab:REFIT-candidates}
\end{table}
\noindent \newline Given these scores, the top candidates are households 20, 12 and 19, as they contain the fewest issues overall. We arbitrarily narrow our choice down to house number 12, although realistically any of the remaining candidates would work just as well and could be used to corroborate the findings within the scope of the entire project.
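
A minimal sketch of this scoring scheme, assuming the per-household counts from Tables \ref{tab:REFIT-values-recorded} and \ref{tab:REFIT-missing-data} have been collected into a single \textit{pandas} DataFrame, is shown below:
\begin{verbatim}
import pandas as pd

# 'stats' is assumed to be a DataFrame indexed by house number with the
# per-household counts of values with issues, missing days and incomplete
# days, taken from the two tables above.
def household_scores(stats: pd.DataFrame) -> pd.Series:
    # Min-max normalise each column to 0 -> 1 across all households,
    # average the three normalised columns and subtract from 1 so that
    # a cleaner household obtains a higher score.
    normalised = (stats - stats.min()) / (stats.max() - stats.min())
    return (1 - normalised.mean(axis=1)).sort_values(ascending=False)
\end{verbatim}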
\clearpage
\section{Data Visualization}
\label{sec:Exploratory-Data-Analysis:Data-Visualization}
\textit{Data visualization} is a rather broad term encompassing a large variety of techniques that serve to display different aspects of our data set. Within the scope of this project we have chosen to narrow our focus to a small subset of visualizations that display information vital to the overall forecasting pipeline; these are presented in Sections \ref{subsec:Exploratory-Data-Analysis:Data-Visualization:Sample-Distribution} and \ref{subsec:Exploratory-Data-Analysis:Data-Visualization:Time-Series-Decomposition}.
\subsection{Sample Distribution}
\label{subsec:Exploratory-Data-Analysis:Data-Visualization:Sample-Distribution}
Figures \ref{fig:REFIT-House-12-Day-of-the-Week-Count} and \ref{fig:REFIT-House-12-Month-Count} provide an overview of the distribution of samples over the days of the week as well as the months of the year. Noting the distribution of our samples over these criteria is essential when considering the results of our clustering algorithm, as we will later attempt to classify new samples into the generated clusters based on these temporal variables.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Images/Chapter 4/REFIT/REFIT-House-12-Day-of-the-Week-Count.pdf}
\caption{Number of samples per day of the week over the entirety of the data set. Data for this plot was pulled from CLEAN\_House12.csv of the \gls{refit} data set.}
\label{fig:REFIT-House-12-Day-of-the-Week-Count}
\end{figure}
\noindent \newline At a glance, we note that the distribution of the samples over the days of the week is relatively even, with no one day containing a much larger number of samples than the others. The distribution of the samples over the months, on the other hand, is heavily dominated by the months of March through June and, to a lesser extent, July. When inspecting the results of our clustering algorithm, having nearly twice as many samples for the aforementioned months might skew the results, and we will have to keep that in mind when interpreting them.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Images/Chapter 4/REFIT/REFIT-House-12-Month-Count.pdf}
\caption{Number of samples per month over the entirety of the data set. Data for this plot was pulled from CLEAN\_House12.csv of the \gls{refit} data set.}
\label{fig:REFIT-House-12-Month-Count}
\end{figure}
\noindent \newline \textit{N.B. We note that the plots in Figures \ref{fig:REFIT-House-12-Day-of-the-Week-Count} and \ref{fig:REFIT-House-12-Month-Count} represent our data set after removing days that contain an incomplete number of values.}
\subsection{Time Series Decomposition}
\label{subsec:Exploratory-Data-Analysis:Data-Visualization:Time-Series-Decomposition}
The decomposition of a time series is a statistical task that deconstructs it into three principal components: trend, seasonality and noise.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Images/Chapter 4/REFIT/REFIT-House-12-Time-Series-Decomposition-NoSmoothing.pdf}
\caption{Time series decomposition. Data for these plots were pulled over a 3 month period that was resampled into a resolution of 15 minutes from CLEAN\_House12.csv of the \gls{refit} data set.}
\label{fig:REFIT-House-12-Time-Series-Decomposition}
\end{figure}
\noindent \newline Figure \ref{fig:REFIT-House-12-Time-Series-Decomposition} depicts the result of performing an additive time series decomposition on our observed electric energy consumption data \cite{Chujai}. We select an additive model because our data is stationary (\ie there is no sharp increase or decrease in trend over time). Alongside the raw data, we will also attempt to train models that forecast future values of the \textit{trend} component of this decomposition, as it captures the essence of the energy consumption patterns present in the individual household(s) that we are exploring.
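
A decomposition of this kind can be obtained with, for example, the \textit{statsmodels} routine \texttt{seasonal\_decompose}; the sketch below is illustrative only, and the exact three-month window and column names are assumptions:
\begin{verbatim}
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Sketch of an additive decomposition of the aggregate consumption.
series = (pd.read_csv("CLEAN_House12.csv", parse_dates=["Time"])
            .set_index("Time")["Aggregate"]
            .resample("15min").mean()
            .interpolate()
            .loc["2014-04":"2014-06"])       # assumed three-month window

# period=96 corresponds to a daily seasonal cycle at 15-minute resolution.
decomposition = seasonal_decompose(series, model="additive", period=96)
trend, seasonal, residual = (decomposition.trend,
                             decomposition.seasonal,
                             decomposition.resid)
decomposition.plot()
\end{verbatim}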
\section{Causality \& Correlation}
\label{sec:Exploratory-Data-Analysis:Causality-and-Correlation}
Given the substantial number of features, or in other words independent variables (both temporal and meteorological), that we will be appending to our data set in the feature engineering step of our forecasting pipeline, it is appropriate to perform a cursory examination of the relative importance of each of these features with respect to their ability to aid us in forecasting our target variable. In this case, our target variable is the aggregate power consumption, or global active power consumption, of an individual household. To this end, a variety of tests, statistical or otherwise, are available that allow us to assess the relationship between the independent variables in our data set and our target variable.
\subsection{Granger Causality Test}
\label{subsec:Exploratory-Data-Analysis:Causality-and-Correlation:Grangers-Causality-Test}
The first of these tests that we will be performing is the Granger Causality test. First proposed in 1969 by \citet{Granger}, the Granger Causality test is a statistical hypothesis test that allows us to determine whether one time series is useful in forecasting another. In essence, one time series $T_x$ is said to Granger-cause another time series $T_y$ if it can be shown, through a series of t-tests and F-tests on lagged values of both $T_x$ and $T_y$, that the values present in $T_x$ provide statistically significant information about future values of $T_y$. The null hypothesis that we are testing here is that the past values of the time series $T_x$ do not cause the time series $T_y$. If the p-value obtained from the test is less than the significance level of 0.05 (\ie 95\% confidence), then we can safely reject the null hypothesis and conclude that a relationship exists between the two time series. Figures \ref{fig:REFIT-House-12-Grangers-Causality-Matrix-All} and \ref{fig:REFIT-House-12-Grangers-Causality-Matrix-Single} depict the output of performing the Granger Causality test on the meteorological features present in our data set as well as the relevant target variable, the aggregate power consumption (\textit{Aggregate}). We keep in mind that the Granger Causality test assumes that all of the variables in our data set are stationary, \ie that characteristics such as the mean and variance do not change heavily over time. To confirm this we perform the Augmented Dickey-Fuller test, a unit-root test whose null hypothesis is that a unit root is present in our time series. Given a significance level of 0.05 (\ie 95\% confidence), we can safely reject the null hypothesis for any p-value less than 0.05 and state that the relevant feature does not contain a unit root and is thus stationary. Our findings, as shown in Table \ref{tab:REFIT-ADF-Test}, are that each of our independent variables is indeed stationary, which reaffirms the plausibility of the results obtained from the Granger Causality test.
\begin{table}[htb!]
\centering
\begin{tabular*}{\linewidth}{l@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c} \toprule
\tableheadline{Feature} & \tableheadline{P-Value} & \tableheadline{Stationary} \\ \midrule
AirTemp & 1.68e-05 & True \\
AlbedoDaily & 3.22e-19 & True \\
Azimuth & 0.0 & True \\
CloudOpacity & 0.0 & True \\
DewpointTemp & 4.02e-12 & True \\
Dhi & 0.0 & True \\
Dni & 0.0 & True \\
Ebh & 0.0 & True \\
Ghi & 0.0 & True \\
GtiFixedTilt & 0.0 & True \\
GtiTracking & 0.0 & True \\
PrecipitableWater & 1.21e-22 & True \\
RelativeHumidity & 1.85e-21 & True \\
SnowDepth & 8.96e-05 & True \\
SurfacePressure & 6.43e-14 & True \\
WindDirection10m & 6.75e-23 & True \\
WindSpeed10m & 3.02e-22 & True \\
Zenith & 0.02 & True \\
Aggregate & 0.0 & True \\ \bottomrule
\end{tabular*}
\caption{The results of performing the Augmented Dickey-Fuller test on our target variable as well as the meteorological variables introduced in Section \ref{subsec:Introduction:Introduction-to-the-Data:Meteorological-Data} and outlined in Table \ref{tab:Solcast-parameters}.}
\label{tab:REFIT-ADF-Test}
\end{table}
\noindent In the output of the Granger Causality test we performed, as seen in Figure \ref{fig:REFIT-House-12-Grangers-Causality-Matrix-All}, the rows represent the predictor series ($T_x$) while the columns represent the response series ($T_y$), where $T_x$ causes $T_y$. The values in the matrix are the respective p-values obtained from the test, where any value below the significance level of 0.05 indicates that the corresponding $T_x$ can be considered to have an effect on, or otherwise be causing, $T_y$. For the purposes of our test, we considered the Chi-squared test $\left(\chi^2 = \sum \frac{(O_i - E_i)^2}{E_i}\right)$, testing for causality among lags up to a maximum of 12. As we can see, the majority of the meteorological features appear to form a relationship with our target variable, bar AlbedoDaily, CloudOpacity, \glsentryfull{ebh} and WindDirection, which we can safely drop from our data set.
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.75\textwidth]{Images/Chapter 4/REFIT/REFIT-House-12-Grangers-Causality-Matrix-Single.png}
\caption{A trimmed subset of the Granger Causation matrix (Figure \ref{fig:REFIT-House-12-Grangers-Causality-Matrix-All}) that displays only the relevant information with regards to our independent variables causing our target variable.}
\label{fig:REFIT-House-12-Grangers-Causality-Matrix-Single}
\end{figure}
\noindent \newline The complete Granger Causation matrix is located in the Appendix (Figure \ref{fig:REFIT-House-12-Grangers-Causality-Matrix-All}).
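
A sketch of both checks, using the \textit{statsmodels} implementations of the Augmented Dickey-Fuller and Granger Causality tests, is shown below; the DataFrame layout, the column names (taken from Table \ref{tab:REFIT-ADF-Test}) and the use of the minimum p-value across lags to summarize each matrix cell are assumptions of this sketch:
\begin{verbatim}
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

# 'data' is assumed to hold the 15-minute 'Aggregate' series together
# with the meteorological features listed above.
def adf_pvalue(series: pd.Series) -> float:
    # adfuller returns (statistic, p-value, ...); only the p-value is kept.
    return adfuller(series.dropna())[1]

def granger_pvalue(data: pd.DataFrame, cause: str, effect: str,
                   maxlag: int = 12) -> float:
    # Chi-squared test of 'cause' Granger-causing 'effect'; the minimum
    # p-value over lags 1..maxlag is one way to summarise a matrix cell.
    res = grangercausalitytests(data[[effect, cause]].dropna(),
                                maxlag=maxlag, verbose=False)
    return min(res[lag][0]["ssr_chi2test"][1] for lag in res)

# Example: does AirTemp help in forecasting the aggregate consumption?
# adf_pvalue(data["AirTemp"]), granger_pvalue(data, "AirTemp", "Aggregate")
\end{verbatim}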
\subsection{Mutual Information Gain}
\label{subsec:Exploratory-Data-Analysis:Causality-and-Correlation:Mutual-Information-Gain}
Another measure of dependence between our independent variables and our target variable is the mutual information gain. Mutual information quantifies the ``amount of information'' obtained about one variable by observing another. The results of calculating the mutual information of all our independent variables, including the temporal variables, against our target variable can be seen in Figure \ref{fig:REFIT-House-12-Mutual-Information-Gain}. These results are broadly in line with the output of the Granger Causality test, further supporting our conclusion that certain features, such as AlbedoDaily, CloudOpacity, \gls{ebh} and WindDirection, can safely be dropped from our data set and excluded from further consideration as part of this feature selection process.
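
A sketch of this computation, using for example the \texttt{mutual\_info\_regression} estimator from \textit{scikit-learn}, is shown below; the DataFrame layout and column names are assumptions:
\begin{verbatim}
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

# 'data' is assumed to hold the target column 'Aggregate' together with
# the temporal and meteorological features at a common 15-minute resolution.
def mutual_information_ranking(data: pd.DataFrame,
                               target: str = "Aggregate") -> pd.Series:
    clean = data.dropna()
    X = clean.drop(columns=[target])
    y = clean[target]
    scores = mutual_info_regression(X, y, random_state=0)
    return pd.Series(scores, index=X.columns).sort_values(ascending=False)
\end{verbatim}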
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Images/Chapter 4/REFIT/REFIT-House-12-Mutual-Information.pdf}
\caption{Mutual information of our independent variables against our target variable.}
\label{fig:REFIT-House-12-Mutual-Information-Gain}
\end{figure}
% Copyright 2019 by Christian Feuersaenger
%
% This file may be distributed and/or modified
%
% 1. under the LaTeX Project Public License and/or
% 2. under the GNU Free Documentation License.
%
% See the file doc/generic/pgf/licenses/LICENSE for more details.
\section{Floating Point Unit Library}
\label{pgfmath-floatunit}
\label{section-library-fpu}
{\noindent {\emph{by Christian Feuersänger}}}
\begingroup
\pgfqkeys{/pgf/number format}{sci}
\pgfkeys{/pgf/fpu}
\begin{pgflibrary}{fpu}
The floating point unit (fpu) allows the full data range of scientific
computing for use in \pgfname. Its core is the \pgfname\ math routines for
    mantissa operations, leading to a reasonable trade-off between speed and
accuracy. It does not require any third-party packages or external
programs.
\end{pgflibrary}
\subsection{Overview}
The fpu provides a replacement set of math commands which can be installed in
isolated places to achieve large data ranges at reasonable accuracy. It
provides at least%
\footnote{To be more precise, the FPU's exponent is currently a 32-bit
integer. That means it supports a significantly larger data range than an
IEEE double precision number -- but if a future \TeX\ version may provide
low-level access to doubles, this may change.}%
the IEEE double precision data range, $\pgfmathprintnumber{-1e+324}, \dotsc,
\pgfmathprintnumber{+1e324}$. The absolute smallest number bigger than zero is
$\pgfmathprintnumber{1e-324}$. The FPU's relative precision is at least
$\pgfmathprintnumber{1e-4}$ although operations like addition have a relative
precision of $\pgfmathprintnumber{1e-6}$.
Note that the library has not really been tested together with any drawing
operations. It should be used to work with arbitrary input data which is then
transformed somehow into \pgfname\ precision. This, in turn, can be processed
by \pgfname.
\subsection{Usage}
\begin{key}{/pgf/fpu=\marg{boolean} (default true)}
This key installs or uninstalls the FPU. The installation exchanges any
routines of the standard math parser with those of the FPU: |\pgfmathadd|
will be replaced with |\pgfmathfloatadd| and so on. Furthermore, any number
will be parsed with |\pgfmathfloatparsenumber|.
%
\begin{codeexample}[preamble={\usepgflibrary{fpu}}]
\pgfkeys{/pgf/fpu}
\pgfmathparse{1+1}\pgfmathresult
\end{codeexample}
%
\noindent The FPU uses a low-level number representation consisting of
flags, mantissa and exponent%
\footnote{Users should \emph{always} use high
level routines to manipulate floating point numbers as the format may
change in a future release.}.%
To avoid unnecessary format conversions, |\pgfmathresult| will usually
contain such a cryptic number. Depending on the context, the result may
need to be converted into something which is suitable for \pgfname\
processing (like coordinates) or may need to be typeset. The FPU provides
such methods as well.
%--------------------------------------------------
% TODOsp: codeexamples: Why is this example commented?
% \begin{codeexample}[preamble={\usepgflibrary{fpu}}]
% \begin{tikzpicture}
% \fill[red,fpu,/pgf/fpu/scale results=1e-10] (*1.234e10,*1e10) -- (*2e10,*2e10);
% \end{tikzpicture}
% \end{codeexample}
%--------------------------------------------------
Use |fpu=false| to deactivate the FPU. This will restore any change. Please
note that this is not necessary if the FPU is used inside of a \TeX\ group
-- it will be deactivated afterwards anyway.
It does not hurt to call |fpu=true| or |fpu=false| multiple times.
Please note that if the |fixedpointarithmetic| library of \pgfname\ will
be activated after the FPU, the FPU will be deactivated automatically.
\end{key}
\begin{key}{/pgf/fpu/output format=\mchoice{float,sci,fixed} (initially float)}
    This key allows you to change the number format in which the FPU assigns
|\pgfmathresult|.
The predefined choice |float| uses the low-level format used by the FPU.
This is useful for further processing inside of any library.
%
\begin{codeexample}[preamble={\usepgflibrary{fpu}}]
\pgfkeys{/pgf/fpu,/pgf/fpu/output format=float}
\pgfmathparse{exp(50)*42}\pgfmathresult
\end{codeexample}
The choice |sci| returns numbers in the format
\meta{mantissa}|e|\meta{exponent}. It provides almost no computational
overhead.
%
\begin{codeexample}[preamble={\usepgflibrary{fpu}}]
\pgfkeys{/pgf/fpu,/pgf/fpu/output format=sci}
\pgfmathparse{4.22e-8^-2}\pgfmathresult
\end{codeexample}
The choice |fixed| returns normal fixed point numbers and provides the
highest compatibility with the \pgfname\ engine. It is activated
automatically in case the FPU scales results.
%
\begin{codeexample}[preamble={\usepgflibrary{fpu}}]
\pgfkeys{/pgf/fpu,/pgf/fpu/output format=fixed}
\pgfmathparse{sqrt(1e-12)}\pgfmathresult
\end{codeexample}
%
\end{key}
\begin{key}{/pgf/fpu/scale results=\marg{scale}}
A feature which allows semi-automatic result scaling. Setting this key has
two effects: first, the output format for \emph{any} computation will be
set to |fixed| (assuming results will be processed by \pgfname's kernel).
Second, any expression which starts with a star, |*|, will be multiplied
with \meta{scale}.
\end{key}
\begin{keylist}{
/pgf/fpu/scale file plot x=\marg{scale},%
/pgf/fpu/scale file plot y=\marg{scale},%
/pgf/fpu/scale file plot z=\marg{scale}%
}
These keys will patch \pgfname's |plot file| command to automatically scale
single coordinates by \meta{scale}.
The initial setting does not scale |plot file|.
\end{keylist}
\begin{command}{\pgflibraryfpuifactive\marg{true-code}\marg{false-code}}
This command can be used to execute either \meta{true-code} or
\meta{false-code}, depending on whether the FPU has been activated or not.
\end{command}
\begin{key}{/pgf/fpu/install only=\marg{list of names}}
\label{fpu-install-only}
Unfortunately, the FPU is currently incompatible with drawing operations.
However, it can still be useful to replace single definitions with FPU
counterparts to avoid errors of the kind |Dimension too large| which tend
to happen when transformation matrices are inverted.
    This key allows you to specify a list of definitions to be pulled into the
current scope. \emph{Note that there is no reverse operation to uninstall
these definitions at the moment}, so it is advisable to do this in a group.
Conveniently, \tikzname{} paths form an implicit group, so you can use this
key on a path as well.
You have to be aware of the limitations that the FPU imposes. It will not
magically give \TeX{} better precision, but it will avoid overflow or
underflow situations for large or small operands by rescaling them. In the
    following example, the FPU variant performs much better than the normal
    variant in the first case; in the second case, however, where a rescaling
    is not in fact needed, the rescaling introduces a small round-off
    error.
%
\begin{codeexample}[
preamble={\usepgflibrary{fpu}},
pre={\pgfkeys{/pgf/fpu=false}},
]
\begingroup
\pgfkeys{/pgf/fpu/install only={divide}}
\pgfmathparse{12.34/0.001234}\pgfmathresult (good)
\pgfmathparse{12/4}\pgfmathresult (bad)
\endgroup
\end{codeexample}
%
\emph{This key is experimental and can change or disappear at any time!}
\end{key}
\subsection{Comparison to the fixed point arithmetics library}
There are other ways to increase the data range and/or the precision of
\pgfname's math parser. One of them is the |fp| package, preferable combined
with \pgfname's |fixedpointarithmetic| library. The differences between the FPU
and |fp| are:
%
\begin{itemize}
\item The FPU supports at least the complete IEEE double precision number
range, while |fp| covers only numbers of magnitude
$\pm\pgfmathprintnumber{1e17}$.
\item The FPU has a uniform relative precision of about 4--5 correct
digits. The fixed point library has an absolute precision which may
        perform well in many cases -- but will fail at the ends of the data
        range (as every fixed point routine does).
\item The FPU has potential to be faster than |fp| as it has access to fast
mantissa operations using \pgfname's math capabilities (which use \TeX\
registers).
\end{itemize}
\subsection{Command Reference and Programmer's Manual}
\subsubsection{Creating and Converting Floats}
\begin{command}{\pgfmathfloatparsenumber\marg{x}}
Reads a number of arbitrary magnitude and precision and stores its result
into |\pgfmathresult| as floating point number $m \cdot 10^e$ with mantissa
and exponent base~$10$.
The algorithm and the storage format is purely text-based. The number is
stored as a triple of flags, a positive mantissa and an exponent, such as
%
\begin{codeexample}[]
\pgfmathfloatparsenumber{2}
\pgfmathresult
\end{codeexample}
%
Please do not rely on the low-level representation here, use
|\pgfmathfloattomacro| (and its variants) and |\pgfmathfloatcreate| if you
want to work with these components.
The flags encoded in |\pgfmathresult| are represented as a digit where
`$0$' stands for the number $\pm 0\cdot 10^0$, `$1$' stands for a positive
sign, `$2$' means a negative sign, `$3$' stands for `not a number', `$4$'
means $+\infty$ and `$5$' stands for $-\infty$.
The mantissa is a normalized real number $m \in \mathbb{R}$, $1 \le m <
10$. It always contains a period and at least one digit after the period.
The exponent is an integer.
Examples:
%
\begin{codeexample}[]
\pgfmathfloatparsenumber{0}
\pgfmathfloattomacro{\pgfmathresult}{\F}{\M}{\E}
Flags: \F; Mantissa \M; Exponent \E.
\end{codeexample}
\begin{codeexample}[]
\pgfmathfloatparsenumber{0.2}
\pgfmathfloattomacro{\pgfmathresult}{\F}{\M}{\E}
Flags: \F; Mantissa \M; Exponent \E.
\end{codeexample}
\begin{codeexample}[]
\pgfmathfloatparsenumber{42}
\pgfmathfloattomacro{\pgfmathresult}{\F}{\M}{\E}
Flags: \F; Mantissa \M; Exponent \E.
\end{codeexample}
\begin{codeexample}[]
\pgfmathfloatparsenumber{20.5E+2}
\pgfmathfloattomacro{\pgfmathresult}{\F}{\M}{\E}
Flags: \F; Mantissa \M; Exponent \E.
\end{codeexample}
\begin{codeexample}[]
\pgfmathfloatparsenumber{1e6}
\pgfmathfloattomacro{\pgfmathresult}{\F}{\M}{\E}
Flags: \F; Mantissa \M; Exponent \E.
\end{codeexample}
\begin{codeexample}[]
\pgfmathfloatparsenumber{5.21513e-11}
\pgfmathfloattomacro{\pgfmathresult}{\F}{\M}{\E}
Flags: \F; Mantissa \M; Exponent \E.
\end{codeexample}
%
The argument \meta{x} may be given in fixed point format or the scientific
``e'' (or ``E'') notation. The scientific notation does not necessarily
need to be normalized. The supported exponent range is (currently) only
limited by the \TeX-integer range (which uses 31 bit integer numbers).
\end{command}
\begin{key}{/pgf/fpu/handlers/empty number=\marg{input}\marg{unreadable part}}
This command key is invoked in case an empty string is parsed inside of
|\pgfmathfloatparsenumber|. You can overwrite it to assign a replacement
|\pgfmathresult| (in float!).
The initial setting is to invoke |invalid number|, see below.
\end{key}
\begin{key}{/pgf/fpu/handlers/invalid number=\marg{input}\marg{unreadable part}}
This command key is invoked in case an invalid string is parsed inside of
|\pgfmathfloatparsenumber|. You can overwrite it to assign a replacement
|\pgfmathresult| (in float!).
The initial setting is to generate an error message.
\end{key}
\begin{key}{/pgf/fpu/handlers/wrong lowlevel format=\marg{input}\marg{unreadable part}}
This command key is invoked whenever |\pgfmathfloattoregisters| or its
variants encounter something which is not a properly formatted low-level
floating point number. As for |invalid number|, this key may assign a new
|\pgfmathresult| (in floating point) which will be used instead of the
offending \meta{input}.
The initial setting is to generate an error message.
\end{key}
\begin{command}{\pgfmathfloatqparsenumber\marg{x}}
The same as |\pgfmathfloatparsenumber|, but does not perform sanity checking.
\end{command}
\begin{command}{\pgfmathfloattofixed{\marg{x}}}
Converts a number in floating point representation to a fixed point number.
It is a counterpart to |\pgfmathfloatparsenumber|. The algorithm is purely
text based and defines |\pgfmathresult| as a string sequence which
represents the floating point number \meta{x} as a fixed point number (of
arbitrary precision).
%
\begin{codeexample}[]
\pgfmathfloatparsenumber{0.00052}
\pgfmathfloattomacro{\pgfmathresult}{\F}{\M}{\E}
Flags: \F; Mantissa \M; Exponent \E
$\to$
\pgfmathfloattofixed{\pgfmathresult}
\pgfmathresult
\end{codeexample}
\begin{codeexample}[]
\pgfmathfloatparsenumber{123.456e4}
\pgfmathfloattomacro{\pgfmathresult}{\F}{\M}{\E}
Flags: \F; Mantissa \M; Exponent \E
$\to$
\pgfmathfloattofixed{\pgfmathresult}
\pgfmathresult
\end{codeexample}
%
\end{command}
\begin{command}{\pgfmathfloattoint\marg{x}}
Converts a number from low-level floating point representation to an
integer (by truncating the fractional part).
%
\begin{codeexample}[]
\pgfmathfloatparsenumber{123456}
\pgfmathfloattoint{\pgfmathresult}
\pgfmathresult
\end{codeexample}
See also |\pgfmathfloatint| which returns the result as float.
\end{command}
\begin{command}{\pgfmathfloattosci\marg{float}}
Converts a number from low-level floating point representation to
scientific format, $1.234e4$. The result will be assigned to the macro
|\pgfmathresult|.
\end{command}
\begin{command}{\pgfmathfloatvalueof\marg{float}}
Expands a number from low-level floating point representation to scientific
format, $1.234e4$.
Use |\pgfmathfloatvalueof| in contexts where only expandable macros are
allowed.
\end{command}
\begin{command}{\pgfmathfloatcreate{\marg{flags}}{\marg{mantissa}}{\marg{exponent}}}
Defines |\pgfmathresult| as the floating point number encoded by
\meta{flags}, \meta{mantissa} and \meta{exponent}.
All arguments are characters and will be expanded using |\edef|.
%
\begin{codeexample}[]
\pgfmathfloatcreate{1}{1.0}{327}
\pgfmathfloattomacro{\pgfmathresult}{\F}{\M}{\E}
Flags: \F; Mantissa \M; Exponent \E
\end{codeexample}
%
\end{command}
\begin{command}{\pgfmathfloatifflags\marg{floating point number}\marg{flag}\marg{true-code}\marg{false-code}}
Invokes \meta{true-code} if the flag of \meta{floating point number} equals
\meta{flag} and \meta{false-code} otherwise.
The argument \meta{flag} can be one of
%
\begin{description}
\item[0] to test for zero,
\item[1] to test for positive numbers,
\item[+] to test for positive numbers,
\item[2] to test for negative numbers,
\item[-] to test for negative numbers,
\item[3] for ``not-a-number'',
\item[4] for $+\infty$,
\item[5] for $-\infty$.
\end{description}
%
\begin{codeexample}[preamble={\usetikzlibrary{fpu}}]
\pgfmathfloatparsenumber{42}
\pgfmathfloatifflags{\pgfmathresult}{0}{It's zero!}{It's not zero!}
\pgfmathfloatifflags{\pgfmathresult}{1}{It's positive!}{It's not positive!}
\pgfmathfloatifflags{\pgfmathresult}{2}{It's negative!}{It's not negative!}
% or, equivalently
\pgfmathfloatifflags{\pgfmathresult}{+}{It's positive!}{It's not positive!}
\pgfmathfloatifflags{\pgfmathresult}{-}{It's negative!}{It's not negative!}
\end{codeexample}
%
\end{command}
\begin{command}{\pgfmathfloattomacro{\marg{x}}{\marg{flagsmacro}}{\marg{mantissamacro}}{\marg{exponentmacro}}}
Extracts the flags of a floating point number \meta{x} to
\meta{flagsmacro}, the mantissa to \meta{mantissamacro} and the exponent to
\meta{exponentmacro}.
\end{command}
\begin{command}{\pgfmathfloattoregisters{\marg{x}}{\marg{flagscount}}{\marg{mantissadimen}}{\marg{exponentcount}}}
Takes a floating point number \meta{x} as input and writes flags to count
register \meta{flagscount}, mantissa to dimen register \meta{mantissadimen}
and exponent to count register \meta{exponentcount}.
Please note that this method rounds the mantissa to \TeX-precision.
\end{command}
\begin{command}{\pgfmathfloattoregisterstok{\marg{x}}{\marg{flagscount}}{\marg{mantissatoks}}{\marg{exponentcount}}}
A variant of |\pgfmathfloattoregisters| which writes the mantissa into a
token register. It maintains the full input precision.
\end{command}
\begin{command}{\pgfmathfloatgetflags{\marg{x}}{\marg{flagscount}}}
Extracts the flags of \meta{x} into the count register \meta{flagscount}.
\end{command}
\begin{command}{\pgfmathfloatgetflagstomacro{\marg{x}}{\marg{macro}}}
Extracts the flags of \meta{x} into the macro \meta{macro}.
\end{command}
\begin{command}{\pgfmathfloatgetmantissa{\marg{x}}{\marg{mantissadimen}}}
Extracts the mantissa of \meta{x} into the dimen register
\meta{mantissadimen}.
\end{command}
\begin{command}{\pgfmathfloatgetmantissatok{\marg{x}}{\marg{mantissatoks}}}
Extracts the mantissa of \meta{x} into the token register
\meta{mantissatoks}.
\end{command}
\begin{command}{\pgfmathfloatgetexponent{\marg{x}}{\marg{exponentcount}}}
Extracts the exponent of \meta{x} into the count register
\meta{exponentcount}.
\end{command}
\subsubsection{Symbolic Rounding Operations}
Commands in this section constitute the basic level implementations of the
rounding routines. They work symbolically, i.e.\ they operate on text, not on
numbers and allow arbitrarily large numbers.
\begin{command}{\pgfmathroundto{\marg{x}}}
Rounds a fixed point number to prescribed precision and writes the result
to |\pgfmathresult|.
The desired precision can be configured with
|/pgf/number format/precision|, see section~\ref{pgfmath-numberprinting}.
    That section also contains application examples.
    Any trailing zeros after the period are discarded. The algorithm is purely
    text based and makes it possible to deal with precisions beyond \TeX's fixed point
support.
As a side effect, the global boolean |\ifpgfmathfloatroundhasperiod| will
be set to true if and only if the resulting mantissa has a period.
Furthermore, |\ifpgfmathfloatroundmayneedrenormalize| will be set to true
if and only if the rounding result's floating point representation would
have a larger exponent than \meta{x}.
%
\begin{codeexample}[]
\pgfmathroundto{1}
\pgfmathresult
\end{codeexample}
%
\begin{codeexample}[]
\pgfmathroundto{4.685}
\pgfmathresult
\end{codeexample}
%
\begin{codeexample}[]
\pgfmathroundto{19999.9996}
\pgfmathresult
\end{codeexample}
%
\end{command}
\begin{command}{\pgfmathroundtozerofill{\marg{x}}}
A variant of |\pgfmathroundto| which always uses a fixed number of digits
behind the period. It fills missing digits with zeros.
%
\begin{codeexample}[]
\pgfmathroundtozerofill{1}
\pgfmathresult
\end{codeexample}
%
\begin{codeexample}[]
\pgfmathroundtozerofill{4.685}
\pgfmathresult
\end{codeexample}
%
\begin{codeexample}[]
\pgfmathroundtozerofill{19999.9996}
\pgfmathresult
\end{codeexample}
%
\end{command}
\begin{command}{\pgfmathfloatround{\marg{x}}}
Rounds a normalized floating point number to a prescribed precision and
writes the result to |\pgfmathresult|.
The desired precision can be configured with
|/pgf/number format/precision|, see section~\ref{pgfmath-numberprinting}.
This method employs |\pgfmathroundto| to round the mantissa and applies
renormalization if necessary.
As a side effect, the global boolean |\ifpgfmathfloatroundhasperiod| will
be set to true if and only if the resulting mantissa has a period.
%
\begin{codeexample}[]
\pgfmathfloatparsenumber{52.5864}
\pgfmathfloatround{\pgfmathresult}
\pgfmathfloattosci{\pgfmathresult}
\pgfmathresult
\end{codeexample}
%
\begin{codeexample}[]
\pgfmathfloatparsenumber{9.995}
\pgfmathfloatround{\pgfmathresult}
\pgfmathfloattosci{\pgfmathresult}
\pgfmathresult
\end{codeexample}
%
\end{command}
\begin{command}{\pgfmathfloatroundzerofill{\marg{x}}}
    A variant of |\pgfmathfloatround| which always produces the same number of
    digits after the period (it includes zeros if necessary).
%
\begin{codeexample}[]
\pgfmathfloatparsenumber{52.5864}
\pgfmathfloatroundzerofill{\pgfmathresult}
\pgfmathfloattosci{\pgfmathresult}
\pgfmathresult
\end{codeexample}
%
\begin{codeexample}[]
\pgfmathfloatparsenumber{9.995}
\pgfmathfloatroundzerofill{\pgfmathresult}
\pgfmathfloattosci{\pgfmathresult}
\pgfmathresult
\end{codeexample}
%
\end{command}
\subsubsection{Math Operations Commands}
This section describes some of the replacement commands in more detail.
Please note that these commands can be used even if the |fpu| as such has not
been activated -- it is sufficient to load the library.
\begin{command}{\pgfmathfloat\meta{op}}
Methods of this form constitute the replacement operations where \meta{op}
can be any of the well-known math operations.
Thus, \declareandlabel{\pgfmathfloatadd} is the counterpart for
|\pgfmathadd| and so on. The semantics and number of arguments is the same,
but all input and output arguments are \emph{expected} to be floating point
numbers.
\end{command}
\begin{command}{\pgfmathfloattoextentedprecision{\marg{x}}}
Renormalizes \meta{x} to extended precision mantissa, meaning $100 \le m <
1000$ instead of $1 \le m < 10$.
The ``extended precision'' means we have higher accuracy when we apply
pgfmath operations to mantissas.
The input argument is expected to be a normalized floating point number;
the output argument is a non-normalized floating point number (well,
normalized to extended precision).
The operation is supposed to be very fast.
\end{command}
\begin{command}{\pgfmathfloatsetextprecision\marg{shift}}
Sets the precision used inside of |\pgfmathfloattoextentedprecision| to
\meta{shift}.
The different choices are
\begin{tabular}{llrll}
0 & normalization to & $0$ & $\le m < 1$ & (disable extended precision) \\
1 & normalization to & $10$ & $\le m < 100$ & \\
2 & normalization to & $100$ & $\le m < 1000$ & (default of |\pgfmathfloattoextentedprecision|) \\
3 & normalization to & $1000$ & $\le m < 10000$ & \\
\end{tabular}
\end{command}
\begin{command}{\pgfmathfloatlessthan{\marg{x}}{\marg{y}}}
Defines |\pgfmathresult| as $1.0$ if $\meta{x} < \meta{y}$, but $0.0$
otherwise. It also sets the global \TeX-boolean |\pgfmathfloatcomparison|
accordingly. The arguments \meta{x} and \meta{y} are expected to be numbers
which have already been processed by |\pgfmathfloatparsenumber|. Arithmetic
is carried out using \TeX-registers for exponent- and mantissa comparison.
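    For instance, comparing two previously parsed numbers might look as follows
    (a minimal sketch using only the commands documented in this section):
    %
\begin{codeexample}[preamble={\usepgflibrary{fpu}}]
\pgfmathfloatparsenumber{1.5e-4}\let\A=\pgfmathresult
\pgfmathfloatparsenumber{2e-4}\let\B=\pgfmathresult
\pgfmathfloatlessthan{\A}{\B}
\pgfmathresult
\end{codeexample}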
\end{command}
\begin{command}{\pgfmathfloatmultiplyfixed\marg{float}\marg{fixed}}
Defines |\pgfmathresult| to be $\meta{float} \cdot \meta{fixed}$ where
\meta{float} is a floating point number and \meta{fixed} is a fixed point
    number. The computation is performed in floating point arithmetic, that
    is, we compute $m \cdot \meta{fixed}$ and renormalize the result, where
$m$ is the mantissa of \meta{float}.
This operation renormalizes \meta{float} with
|\pgfmathfloattoextentedprecision| before the operation, that means it is
intended for relatively small arguments of \meta{fixed}. The result is a
floating point number.
\end{command}
\begin{command}{\pgfmathfloatifapproxequalrel\marg{a}\marg{b}\marg{true-code}\marg{false-code}}
Computes the relative error between \meta{a} and \meta{b} (assuming
\meta{b}$\neq 0$) and invokes \meta{true-code} if the relative error is
below |/pgf/fpu/rel thresh| and \meta{false-code} if that is not the case.
The input arguments will be parsed with |\pgfmathfloatparsenumber|.
\begin{key}{/pgf/fpu/rel thresh=\marg{number} (initially 1e-4)}
A threshold used by |\pgfmathfloatifapproxequalrel| to decide whether
numbers are approximately equal.
\end{key}
\end{command}
\begin{command}{\pgfmathfloatshift{\marg{x}}{\marg{num}}}
Defines |\pgfmathresult| to be $\meta{x} \cdot 10^{\meta{num}}$. The
operation is an arithmetic shift base ten and modifies only the exponent of
\meta{x}. The argument \meta{num} is expected to be a (positive or
negative) integer.
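    For instance, shifting a parsed number by three decades might look as
    follows (a minimal sketch):
    %
\begin{codeexample}[preamble={\usepgflibrary{fpu}}]
\pgfmathfloatparsenumber{1.234}
\pgfmathfloatshift{\pgfmathresult}{3}
\pgfmathfloattosci{\pgfmathresult}
\pgfmathresult
\end{codeexample}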
\end{command}
\begin{command}{\pgfmathfloatabserror\marg{x}\marg{y}}
Defines |\pgfmathresult| to be the absolute error between two floating
point numbers $x$ and $y$, $\lvert x - y\rvert $ and returns the result as
floating point number.
\end{command}
\begin{command}{\pgfmathfloatrelerror\marg{x}\marg{y}}
Defines |\pgfmathresult| to be the relative error between two floating
point numbers $x$ and $y$, $\lvert x - y\rvert / \lvert y \rvert$ and
returns the result as floating point number.
\end{command}
\begin{command}{\pgfmathfloatint\marg{x}}
Returns the integer part of the floating point number \meta{x}, by
    truncating any digits after the period. This method truncates the absolute
    value $\lvert x \rvert$ to the next smaller integer and restores the
original sign afterwards.
The result is returned as floating point number as well.
See also |\pgfmathfloattoint| which returns the number in integer format.
\end{command}
\begin{command}{\pgfmathlog{\marg{x}}}
Defines |\pgfmathresult| to be the natural logarithm of \meta{x},
$\ln(\meta{x})$. This method is logically the same as |\pgfmathln|, but it
applies floating point arithmetics to read number \meta{x} and employs the
logarithm identity \[ \ln(m \cdot 10^e) = \ln(m) + e \cdot \ln(10) \] to
get the result. The factor $\ln(10)$ is a constant, so only $\ln(m)$ with
$1 \le m < 10$ needs to be computed. This is done using standard pgf math
operations.
Please note that \meta{x} needs to be a number, expression parsing is not
possible here.
If \meta{x} is \emph{not} a bounded positive real number (for example
$\meta{x} \le 0$), |\pgfmathresult| will be \emph{empty}, no error message
will be generated.
%
\begin{codeexample}[preamble={\usetikzlibrary{fpu}}]
\pgfmathlog{1.452e-7}
\pgfmathresult
\end{codeexample}
%
\begin{codeexample}[preamble={\usetikzlibrary{fpu}}]
\pgfmathlog{6.426e+8}
\pgfmathresult
\end{codeexample}
%
\end{command}
\subsubsection{Accessing the Original Math Routines for Programmers}
As soon as the library is loaded, every private math routine will be copied to
a new name. This allows library and package authors to access the \TeX-register
based math routines even if the FPU is activated. And, of course, it allows the
FPU as such to perform its own mantissa computations.
The private implementations of \pgfname\ math commands, which are of the form
|\pgfmath|\meta{name}|@|, will be available as |\pgfmath@basic@|\meta{name}|@|
as soon as the library is loaded.
\endgroup
\section{The protocol}
The protocol we designed is a variation of the Two-Phase Commit protocol, adapted to handle multiple transactions. It is fault tolerant and non-blocking, except in one specific case that will be discussed later in this chapter.
\subsection{Beginning a transaction}
When a client wants to start a new transaction it simply chooses a coordinator at random and sends it a \url{TxnBeginMsg} specifying its own clientID.
\newline
Upon receiving the \url{TxnBeginMsg}, the coordinator assigns the client's transaction a globally unique identifier, the transactionID, and replies to the client with a \url{TxnAcceptedMsg}, which signals that the transaction has been started successfully and that from now on the only possible outcomes are either \textbf{COMMIT} or \textbf{ABORT}.
\subsection{Performing an operation on a data item}
The client can perform two different types of operation on a specific data item: either a \textbf{READ} or a \textbf{WRITE} operation; of course, during a transaction a client can perform operations on an unlimited number of data items.
\newline
In order to read the value of a specific data item the client sends the coordinator a \url{TxnReadRequestMsg}, specifying the data item whose value it wants. Upon receiving this message the coordinator determines the data store in charge of that specific key and sends it a \url{DSSReadRequestMsg}, specifying the transactionID it refers to and the data item the client wants to read.
\newline
The data store retrieves the value of the specified data item and replies to the coordinator with a \url{DSSReadResponseMsg}, specifying the transactionID, the data item and the associated value the message refers to.
\newline
The coordinator finally translates this message into a \url{TxnReadResultMsg} and sends it back to the client.
\newline
The flow for a \textbf{WRITE} operation is almost identical: the only key difference is that no message is returned, either from the data store to the coordinator or from the coordinator to the client.
\subsection{Ending a transaction}
In order to end a transaction the client has two options:
\begin{itemize}
\item unilaterally abort the transaction;
\item ask the coordinator to commit the transaction.
\end{itemize}
In both cases the client sends the coordinator a \url{TxnEndMsg} specifying its own clientID. In the first case it signals that it wants to abort the whole transaction, while in the second case it specifies that it is willing to commit the transaction.
\newline
Upon receiving the message the coordinator behaves differently depending on whether the client wants to commit the transaction or not:
\begin{itemize}
\item if the client wants to unilaterally abort the transaction the coordinator simply broadcasts to the data stores a \url{DSSDecisionResponseMsg} with the final decision \textbf{ABORT};
    \item if the client wants to commit the transaction, the coordinator sends a \url{DSSVoteRequestMsg} to all the data stores, asking for their vote on the specific transaction. Upon receiving this message, every data store checks whether committing the transaction would leave the items it is responsible for in a consistent state and sends back a \url{DSSVoteResponseMsg} for the specified transaction, setting its vote accordingly: if the items would remain consistent, the vote is \textbf{YES}; otherwise the vote is \textbf{NO} and the data store also fixes its own decision to \textbf{ABORT}. Upon receiving all the votes, the coordinator takes the final decision and sends a \url{DSSDecisionResponseMsg} to all the data stores: if all data stores voted \textbf{YES} the decision is \textbf{COMMIT}, otherwise it is \textbf{ABORT}; the coordinator also sends a \url{TxnResultMsg} back to the client with the final outcome of the transaction. Upon receiving the \url{DSSDecisionResponseMsg}, every data store behaves accordingly: if the decision is \textbf{ABORT}, all the changes made in that transaction are discarded; otherwise they become effective (a sketch of the coordinator's decision rule is given after this list).
\end{itemize}
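
\noindent As an illustration, the coordinator's commit rule reduces to the following Python sketch; the message names follow the protocol above, while the \texttt{send} calls and data structures are hypothetical and do not reflect the actual actor implementation:
\begin{verbatim}
# Sketch of the coordinator's decision once all votes are in (illustrative).
def final_decision(votes):
    """votes: mapping from data store id to 'YES' or 'NO'."""
    return "COMMIT" if all(v == "YES" for v in votes.values()) else "ABORT"

def on_all_votes_received(votes, data_stores, client):
    decision = final_decision(votes)
    for store in data_stores:
        store.send(("DSSDecisionResponseMsg", decision))  # broadcast outcome
    client.send(("TxnResultMsg", decision))               # inform the client
\end{verbatim}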
\subsection{Coordinator crash}
The actions taken by the data stores and by the coordinator upon recovery depend on when the coordinator crashed:
\begin{itemize}
\item if the coordinator crashed before sending any \url{DSSVoteRequestMsg} then nothing happens and once the coordinator will come back up it will start the vote request;
    \item if the coordinator crashed after sending some, but not all, of the \url{DSSVoteRequestMsg}s, then the data store(s) that received the message and sent back their vote will realize that the coordinator crashed and will perform the termination protocol. Upon recovering, the coordinator asks all the data stores whether they managed to take a decision: if so, the coordinator agrees with it, otherwise it simply decides to abort;
    \item if the coordinator crashes before sending any \url{DSSDecisionResponseMsg}, then the data stores' timeouts will expire and they will proceed as in the previous case. The only difference is that upon recovering the coordinator sends its decision to all data stores: if they did not manage to take a decision they are informed of it, while if they managed to take a decision without the coordinator they simply ignore the message, because it carries the same decision;
    \item if the coordinator crashes after sending some, but not all, \url{DSSDecisionResponseMsg}s, then the timeout of the data stores that did not receive the message expires and they run the termination protocol: they contact one of the stores that received the decision message and are informed of the decision. Upon recovering, the coordinator sends all the messages again, but they are ignored by the stores, since the coordinator's decision is the same as the previous one.
\end{itemize}
\subsection{Data store crash}
The actions taken by the other data stores, by the coordinator and by the data store upon recovery depend on when the crash happened:
\begin{itemize}
    \item if the data store crashes before receiving any vote request or before deciding its vote, then the coordinator's timeout expires and it simply decides \textbf{ABORT}, sending a \url{DSSDecisionResponseMsg} to all the data stores. Upon recovery the data store realizes that it has not voted and simply aborts. Note that this also works for the other transactions: while the data store was down it might have missed some operations, so it just aborts in order not to create inconsistencies;
    \item if the data store crashes before sending back its vote, then the coordinator's timeout expires and it simply decides \textbf{ABORT}, sending a \url{DSSDecisionResponseMsg} to all the data stores. Upon recovery the data store acts according to its vote: if it voted \textbf{NO} it just aborts, otherwise it asks the coordinator, and the other data stores, for the final outcome of the transaction;
    \item if the data store crashes after sending its vote to the coordinator but before receiving the \url{DSSDecisionResponseMsg}, then the coordinator does nothing. Upon recovery the data store acts according to its vote: if the vote was \textbf{NO} then nothing happens, since it has already decided to abort; otherwise it asks the other data stores and the coordinator for the final outcome of the transaction.
\end{itemize}
\subsection{Termination protocol}
When the coordinator crashes before sending all the \url{DSSDecisionResponseMsg}s, the data stores run a simple protocol to determine whether they can take a decision without the coordinator; in particular, when a data store's timeout for the aforementioned message expires, it performs the following actions:
\begin{itemize}
\item it will send a \url{DSSDecisionRequestMsg} to all data stores;
    \item upon receiving this \url{DSSDecisionRequestMsg}, if the data store already knows the final outcome of the transaction it replies with a \url{DSSDecisionResponseMsg}, setting the decision it knows;
    \item upon receiving a \url{DSSDecisionResponseMsg}, the data store fixes its own decision so that it matches the one in the message;
\item if it receives no \url{DSSDecisionResponseMsg} then it means that the coordinator crashed and none of the other stores knows the final decision, therefore it is necessary to wait for the coordinator to come back up.
\end{itemize}
This is useful when one or more data stores voted \textbf{NO}: in that case the final outcome of the transaction has already been decided, because the only plausible final decision for the coordinator is \textbf{ABORT}.
\newline
On the other hand, if all data stores voted \textbf{YES} then no assumption can be made and they must wait for the coordinator: it is possible that a data store crashed after voting \textbf{NO} without the others knowing about it.
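
\noindent The following Python sketch captures the termination logic run by a data store; the message names follow the protocol described above, while the class layout, transport and bookkeeping details are purely illustrative and not the actual actor implementation:
\begin{verbatim}
class DataStore:
    """Illustrative sketch of the termination protocol."""
    def __init__(self, peers):
        self.peers = peers          # the other data stores
        self.decision = {}          # txn_id -> 'COMMIT' | 'ABORT'

    def on_decision_timeout(self, txn_id):
        # Coordinator silent: ask every other data store for the outcome.
        for peer in self.peers:
            peer.receive(("DSSDecisionRequestMsg", txn_id, self))

    def receive(self, msg):
        kind = msg[0]
        if kind == "DSSDecisionRequestMsg":
            _, txn_id, sender = msg
            if txn_id in self.decision:   # only reply if the outcome is known
                sender.receive(("DSSDecisionResponseMsg", txn_id,
                                self.decision[txn_id]))
        elif kind == "DSSDecisionResponseMsg":
            _, txn_id, decision = msg
            self.decision[txn_id] = decision   # adopt the known outcome
        # If nobody answers, the store keeps waiting for the coordinator.
\end{verbatim}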
\section{conclusion}\label{sec:sec005}
\documentclass[11pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{setspace}
\onehalfspacing
\usepackage{graphicx}
\graphicspath{{report_images/}}
\usepackage{appendix}
\usepackage{listings}
\usepackage{float}
\usepackage{multirow}
\usepackage{amsthm}
% The next three lines make the table and figure numbers also include section number
\usepackage{chngcntr}
\counterwithin{table}{section}
\counterwithin{figure}{section}
% Needed to make titling page without a page number
\usepackage{titling}
% DOCUMENT INFORMATION =================================================
\font\titleFont=cmr12 at 11pt
\title {{\titleFont ECEN 429: Introduction to Digital Systems Design Laboratory \\ North Carolina Agricultural and Technical State University \\ Department of Electrical and Computer Engineering}} % Declare Title
\author{\titleFont Reporter: Chris Cannon \\ Partner: Nikiyah Beulah} % Declare authors
\date{\titleFont April 5, 2018}
% ======================================================================
\begin{document}
\begin{titlingpage}
\maketitle
\begin{center}
Prelab 9
\end{center}
\end{titlingpage}
\section{Introduction}
The purpose of this prelab is to evaluate the state machine involved in a Traffic Light Controller.
\section{Background, Design Solution, Results}
\subsection{Problem 1}
What are the states within the FSM? What happens in each state?
The 3 bits in NS represent red, yellow, and green lights, respectively.
The 3 bits in EW represent red, yellow, and green lights, respectively.
\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{./9images/trafficLight.png}
\caption{\label{fig:trafficLight} State Machine for traffic light}
\end{center}
\end{figure}
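For reference, the one-hot light encodings stated above can be spelled out as illustrative pseudocode (written here in Java). The state names and the simple four-state cycle below are assumptions for illustration only; the actual states are those shown in the figure.
\begin{verbatim}
// Illustrative sketch of the 3-bit one-hot encodings described above.
public class TrafficLightEncoding {
    // Bit order: red, yellow, green (most to least significant).
    static final int RED    = 0b100;
    static final int YELLOW = 0b010;
    static final int GREEN  = 0b001;

    enum State {
        NS_GREEN_EW_RED (GREEN,  RED),
        NS_YELLOW_EW_RED(YELLOW, RED),
        EW_GREEN_NS_RED (RED,    GREEN),
        EW_YELLOW_NS_RED(RED,    YELLOW);

        final int ns, ew;
        State(int ns, int ew) { this.ns = ns; this.ew = ew; }

        // Advance to the next state in the cycle.
        State next() { return values()[(ordinal() + 1) % values().length]; }
    }
}
\end{verbatim}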
\subsection{Problem 2}
Add turn lanes for the N/S direction (when traveling from North to South) only. Design a new FSM that accounts for this (includes the turn signal within the cycle).
The 5 bits in NS represent red, yellow, yellow flashing arrow, green and green arrow lights, respectively.
The 3 bits in EW represent red, yellow, and green lights, respectively.
\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{./9images/trafficLightWithTurn.png}
\caption{\label{fig:trafficLightWithTurn}State Machine for traffic light with a turn lane}
\end{center}
\end{figure}
\subsection{Problem 3}
Add a sensor for the turn lane (it only turns on when a car is present). Design a new FSM that accounts for this.
The 5 bits in NS represent red, yellow, yellow flashing arrow, green and green arrow lights, respectively.
The 3 bits in EW represent red, yellow, and green lights, respectively.
\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{./9images/trafficLightWithSensorTurn.png}
\caption{\label{fig:trafficLightWithSensorTurn}State machine for traffic light with a turn lane and a sensor}
\end{center}
\end{figure}
\section{Conclusion}
Finite state machines are very useful for designing fault-tolerant systems that cycle through a set number of states based on limited inputs from users. We are now able to understand and design such systems.
\end{document}
% !TEX root = ../root.tex
\chapter{Conclusions}\label{ch:concl}
\input{product}
The project did not reach a satisfactory level of completion.
As of now, the product does not fulfil its purpose and must undergo substantial development before it can be used as a proof of concept, let alone be considered complete.
Indeed, most of its implementation exists only on a theoretical basis.
However, the compilation of this document contributed to shedding some light on the issues encountered, and has allowed the formalization of the future implementation steps.
From a purely practical standpoint, the set of tools and reusable software modules implemented during the course of the project will likely prove useful in the future, and add value to the company.
Despite the amount of time lost while migrating from one architecture to the other, the possibility of experimenting with a different platform --- in particular a more powerful and more extensively supported one --- will benefit the future development of the product as a whole.
As for the decaWave modules, their stability is still far from adequate.
Although the newer PCB layout has been a major leap forward in usability, consecutive range acquisitions fail a staggering three fourths of the time, hinting that some issues are still present.
\input{process}
The software development and integration process suffered from several sources of hindrance.
In particular, commitment to previously established projects played a large role in shifting most of the focus and productivity away from this very project and towards other endeavors considered more pressing by the company.
However, the choice of not pushing back with the necessary firmness is one for which I must take full responsibility.
Another major setback was the delayed delivery of the second iteration of printed circuit boards (along with several other designs), which caused considerable distress and mainly goes to show how a proper risk management plan can really make a difference.
On a side note, the experience of managing several complex repositories on an on-premises infrastructure has been particularly useful and will certainly be beneficial for future projects.
\input{future}
As both UAVComponents and I have put a considerable amount of resources into what is nevertheless a project with potentially interesting applications, development is very likely to resume.
With improved programming skills and the acquisition of a broad corpus of knowledge related to ultra-wideband, CAN, and embedded development practices, the project and process alike will hopefully benefit from a more focused and lean management.
As for possible improvements and features (besides those yet to be implemented), employing some form of data filtering may substantially improve the real-time behavior of the localization system, which would otherwise be quite limited.
| {
"alphanum_fraction": 0.8187414501,
"avg_line_length": 83.5428571429,
"ext": "tex",
"hexsha": "ee7c4208669d4ce1a2b056f3ec10db7a2738eb7b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fd7a4575569dbc7c3da6bc7b06abbacb039d4d22",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "miccio-dk/beng-report",
"max_forks_repo_path": "5.conclusion/0.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fd7a4575569dbc7c3da6bc7b06abbacb039d4d22",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "miccio-dk/beng-report",
"max_issues_repo_path": "5.conclusion/0.tex",
"max_line_length": 273,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "fd7a4575569dbc7c3da6bc7b06abbacb039d4d22",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "miccio-dk/beng-report",
"max_stars_repo_path": "5.conclusion/0.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 526,
"size": 2924
} |
% Appendix X
\chapter{In-depth Review of ARM Instruction structure}
\label{appendix:instructions}
%----------------------------------------------------------------------------------------
\section{Condition Codes}
\label{sec:appendix:conditions}
Bits 28 through 31 of any ARM instruction word denote the condition code for the instruction. The ``NV'' condition code, which is architecturally undefined on the ARM4T architecture, is co-opted to serve as the ``HALT'' pseudo-instruction used to tell \emph{Handy} to end simulation. This is required as no equivalent instruction exists in the actual processor. \autoref{table:conditions} lists all condition bit fields and the condition they refer to.
\begin{table}
\caption{ARM Condition Codes and their meanings\citep[pp. A3-4]{armarm:2005}}
\label{table:conditions}
\begin{tabularx}{\textwidth}{l|l|X|X}
Bits 31:28 & Name & Meaning & Flag States \\ \hline
0000 & EQ & Equal & Z set \\ \hline
0001 & NE & Not Equal & Z clear \\ \hline
0010 & CS/HS & Carry set/unsigned higher or same & C set \\ \hline
0011 & CC/LO & Carry clear/unsigned lower & C clear \\ \hline
0100 & MI & Minus/negative & N set \\ \hline
0101 & PL & Plus/positive or zero & N clear \\ \hline
0110 & VS & Overflow & V set \\ \hline
0111 & VC & No overflow & V clear \\ \hline
1000 & HI & Unsigned higher & C set and Z clear \\ \hline
1001 & LS & Unsigned lower or same & C clear or Z set \\ \hline
1010 & GE & Signed greater than or equal & N equal to V \\ \hline
1011 & LT & Signed less than & N not equal to V \\ \hline
1100 & GT & Signed greater than & Z clear and N equal to V \\ \hline
1101 & LE & Signed less than or equal & Z set or N not equal to V \\ \hline
1110 & AL & Always & True \\ \hline
1111 & X & Undefined for ARM4T architecture & X
\end{tabularx}
\end{table}
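For illustration, extracting this condition field from a 32-bit instruction word amounts to a shift and a mask; the following Java fragment is a minimal sketch and is not taken from the simulator's source.
\begin{verbatim}
// Minimal illustration of extracting the condition field described above.
final class ConditionDecode {
    // Bits 28..31 of an ARM instruction word hold the condition code.
    static int conditionField(int instructionWord) {
        return (instructionWord >>> 28) & 0xF; // unsigned shift, then mask
    }
    // e.g. any word whose top nibble is 0xE decodes to AL (1110, "always").
}
\end{verbatim}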
\section{Instruction Families}
\label{sec:appendix:instruction_families}
Bits 25 through 27 of the instruction word divide the remaining possible values into one of eight families. This is not explicitly stated in any literature read in the course of implementing this simulator but was instead observed from Figure A3-1 of the ARM Architecture Reference Manual\citep[pp. A3-2]{armarm:2005}. This information simplified implementation of the parser, especially so since two of the eight families contained no instructions required by the simulator\footnote{Coprocessor manipulating instructions and software interrupts}.
\begin{table}
\caption{Instruction Families}
\label{table:families}
\begin{tabularx}{\textwidth}{l|X}
Bits 27:25 & Family \\ \hline
\multirow{5}{*}{000} & Data processing immediate shift \\
& Data processing register shift \\
& Miscellaneous Instruction \\
& Multiplies \\
& Extra Loads/Stores (not implemented) \\ \hline
\multirow{3}{*}{001} & Data processing immediate value \\
& Move immediate to status register (not implemented) \\
& Undefined instruction \\ \hline
010 & Load/store immediate offset \\ \hline
\multirow{3}{*}{011} & Load/Store register offset \\
& Media instruction (not implemented) \\
& Architecturally undefined \\ \hline
100 & Load/Store Multiple \\ \hline
101 & Branch / Branch and Link \\ \hline
\multirow{2}{*}{110} & Coprocessor Load/Store (not implemented) \\
& Double Register Transfers (not implemented) \\ \hline
\multirow{3}{*}{111} & Coprocessor Data Processing (not implemented) \\
& Coprocessor Register Transfer (not implemented) \\
& Software Interrupt (not implemented)
\end{tabularx}
\end{table}
\section{Opcodes and Data-processing Instructions}
\label{sec:appendix:opcodes}
For instructions other than memory loads and stores and multiplications, bits 21 through 24 hold the opcode. These instructions are split across two of the ``families'' described in \autoref{sec:appendix:instruction_families} depending on whether the third operand of the instruction is an immediate or register value. These opcodes represent what the ARM Architecture Reference Manual describes as ``Data-processing instructions''\citep[pp. A3-7]{armarm:2005}, and all of these sixteen data processing instructions share similar structures in their binary representations. Multiplications and memory operations are exceptional and have neither opcodes in this sense nor similar structures to data-processing instructions. The sixteen data-processing instructions and their opcodes are shown in \autoref{table:opcodes}
\begin{table}
\caption{Data-processing Instructions and their Opcodes\citep[pp. A3-7]{armarm:2005}}
\label{table:opcodes}
\begin{tabularx}{\textwidth}{l|l}
Opcode & Instruction \\ \hline
0000 & AND - Logical AND \\ \hline
0001 & EOR - Logical Exclusive OR \\ \hline
0010 & SUB - Subtract \\ \hline
0011 & RSB - Reverse Subtract \\ \hline
0100 & ADD - Add \\ \hline
0101 & ADC - Add with Carry \\ \hline
0110 & SBC - Subtract with Carry \\ \hline
0111 & RSC - Reverse Subtract with Carry \\ \hline
1000 & TST - Test \\ \hline
1001 & TEQ - Test Equivalence \\ \hline
1010 & CMP - Compare \\ \hline
1011 & CMN - Compare Negated \\ \hline
1100 & ORR - Logical (inclusive) OR \\ \hline
1101 & MOV - Move \\ \hline
1110 & BIC - Bit Clear \\ \hline
1111 & MVN - Move Not
\end{tabularx}
\end{table}
\subsection{Data-processing Instruction Operands}
\label{sec:appendix:opcodes:operands}
For data processing instructions the destination and first source value must be registers. Bits 12 through 15 denote the destination register while bits 16 through 19 represent the source register. The second source operand is a ``data-processing operand'' as defined in Section A5.1 of the ARM Architecture Reference Manual\citep[pp. A5-2]{armarm:2005}, which can be any of an immediate value, a register, or a register with some logical or arithmetic shift or rotation applied to it.
If the instruction family bits of the instruction word are ``000'' then the least significant twelve bits of the instruction word are interpreted as a register value with some operation (possibly the identity operation) applied to it. In this case, bits 0 through 3 specify the register number. Bits 5 and 6 specify the manner of transformation to apply\footnote{00 = shift left, 01 = logical shift right, 10 = arithmetic shift right, 11 = rotate right}. Bit 4 specifies whether the register is to be shifted by an immediate value or by a value taken from a register. If bit 4 is clear bits 7 through 11 represent a 5 bit immediate value by which to shift the value in the source register. If bit 4 is set bit 7 must be clear and bits 8 through 11 represent the register from which to take the value by which to shift the value in the source register. There are two special cases of the above:
\begin{enumerate}
\item A register with no operation applied to it is represented as a shift left by an immediate value of 0.
\item A rotate right by an immediate value of zero is a ``rotate right with extend'' operation, which shifts the instruction operand in the specified register by one position to the right and inserts a 1 at the vacated most significant bit if the carry flag was set and a 0 otherwise. The carry out from the shifter is the least significant bit of the original value, though this does not necessarily replace the carry flag in the CPU's status register.
\end{enumerate}
Conversely, if the instruction family bits of the instruction word are ``001'' then the least significant twelve bits of the instruction word are interpreted as an immediate value, with bits 0 through 7 being an eight bit constant and bits 8 through 11 a four bit rotation. The actual value of the immediate is calculated by left shifting the four bit rotation value by one place and then right-rotating the eight bit constant by that many positions. This allows the ARM instruction set to represent immediate values much larger than 12 bits would allow, but for constants larger than 255 only some values are valid and representable with this encoding.
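As a concrete illustration of this encoding, the following Java sketch (illustrative names only, not the simulator's source) decodes such a 12-bit immediate operand:
\begin{verbatim}
// Minimal illustration of decoding the 12-bit immediate operand form.
final class ImmediateOperandDecode {
    // Bits 0..7: 8-bit constant; bits 8..11: 4-bit rotation field.
    // The operand is the constant rotated right by twice the rotation value.
    static int decodeImmediateOperand(int instructionWord) {
        int constant = instructionWord & 0xFF;        // bits 0..7
        int rotate   = (instructionWord >>> 8) & 0xF; // bits 8..11
        return Integer.rotateRight(constant, rotate * 2);
    }
}
\end{verbatim}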
\section{Multiplication Instructions}
There are six possible multiplication instructions in the ARM4T instruction set. They are respectively:
\begin{enumerate}
\item MUL - Multiplies the value of two registers together, truncates the result to 32 bits, and stores the result in a third register.\citep[Section A3.5.1, pp. A3-10]{armarm:2005}
\item MLA - Multiplies the value of two registers together, adds the value of a third register, truncates the result to 32 bits, and stores the result in a fourth register.\citep[Section A3.5.1, pp. A3-10]{armarm:2005}
\item SMULL and UMULL - Multiply the values of two registers together and store the 64-bit result in two parts in a third and a fourth register. SMULL treats operands as signed and UMULL treats operands as unsigned.\citep[Section A3.5.2, pp. A3-10]{armarm:2005}
	\item SMLAL and UMLAL - Multiply the values of two registers together and add the resultant 64-bit value to a 64-bit value taken from a third and a fourth register, then store the final value back in the third and fourth registers in two parts.\citep[Section A3.5.2, pp. A3-10]{armarm:2005}
\end{enumerate}
Multiplies are part of instruction family ``000'' and are distinguished from data-processing operations by having both bits 4 and 7 set and bits 5 and 6 clear. Bit 23 specifies whether the instruction produces a 32- or 64-bit result. If bit 23 is set the instruction is a ``long'' multiplication and bit 22 specifies whether it is the signed or unsigned variant. Bit 22 must be clear if bit 23 is clear. Bit 21 specifies whether the instruction is an accumulate variant.
Unlike data-processing instructions, multiplication instructions can only have register operands and cannot have immediate values specified. Bits 0 through 3 represent the left source operand and bits 8 through 11 represent the right source operand. For MUL, bits 12 through 15 are specified as ``Should Be Zero'' though other values do not cause an error. For MLA, bits 12 through 15 specify the source of the accumulator value that should be added to the result of the multiplication before storing in the destination register. For all 64-bit variants bits 12 through 15 specify the destination for the least significant 32 bits of the result. Bits 16-19 specify the destination register for MUL and MLA or the destination for the most significant 32 bits of the result otherwise.
\section{The S Bit}
\label{sec:appendix:sbit}
Data-processing and multiplication instructions can optionally\footnote{Or necessarily, in the case of CMP, CMN, TST and TEQ instructions, though this is also accomplished using the S bit where its being unset is an undefined instruction\citep[pp. A3-7]{armarm:2005}} update the status flags of the CPU. Whether this occurs is decided by bit 20 of the instruction word, referred to as the ``S bit''\citep[pp. A3-7]{armarm:2005}, and the method by which an instruction updates the status flags with its result varies by instruction.
\section{Load and Store Instructions}
\label{sec:appendix:loadstore}
Load and Store instructions have yet another distinct structure, and are divided across two instruction families depending on whether the addressing mode uses a register or immediate value as an offset. Bit 20 specifies whether the instruction is a load (bit 20 is set) or a store (bit 20 is clear) instruction. Bit 22 specifies whether the instruction loads a byte (bit 22 is set) or a word (bit 22 is clear) value. Bits 21, 23, 24, and 25 specify the addressing mode and bits 0 through 11 are addressing mode specific. Bits 16 through 19 specify the register containing the base address and bits 12 through 15 specify the register whose contents are either to be stored or replaced by the loaded value.
\subsection{Load and Store Addressing Modes}
\label{subsec:appendix:loadstore:addressingmode}
Broadly, there are three\citep[pp. A5-18]{armarm:2005} types of addressing mode for Load and Store instructions:
\begin{enumerate}
\item Immediate offset/index
\item Register offset/index\footnote{Though the Register offset/index class can be considered a special case of Scaled register offset/index the ARM Architecture Reference manual treats them separately}
\item Scaled register offset/index
\end{enumerate}
Bit 23 is referred to by the ARM Architecture Reference Manual as the ``U'' bit, and specifies whether the offset described by bits 0 through 11 is to be added or subtracted from the base address. If the U bit is set, the offset is added. If the U bit is clear, the offset is subtracted.
Bits 21 and 24 are referred to as the ``W'' and ``P'' bits respectively; their meanings depend on each other and are shown in \autoref{table:pwbits}.
\begin{table}
\caption{Meanings of P and W bits in Memory Operations\citep[pp. A5-19]{armarm:2005}}
\label{table:pwbits}
\begin{tabularx}{\textwidth}{l|l|X}
P & W & Meaning \\ \hline
0 & 0 & Post-indexed addressing: Value taken from base address register and used as address, then the offset is applied and the result written back to the base address register. \\ \hline
0 & 1 & Unprivileged operation: The memory operation is performed in User mode. As this simulator does not implement modes other than Supervisor mode, this is undefined for our purposes. \\ \hline
1 & 0 & Offset addressing: Offset applied to value in base address register to create the address, but the modified value is discarded after the operation. \\ \hline
1 & 1 & Pre-indexed addressing: Offset applied to the value taken from the base address register and used as the address for the operation. After the operation the base address register is updated with the result of applying the offset.
\end{tabularx}
\end{table}
Whether an addressing mode uses an immediate or register offset is decided by bit 25\footnote{Which is one of the bits I describe as deciding the ``instruction family'', ergo immediate indexed loads and stores reside in a separate instruction family to register indexed loads and stores}, with bit 25 being set indicating a register indexed addressing mode.
The offset value is represented by bits 0 through 11. In immediate addressing mode bits 0 through 11 are treated as a 12-bit constant and used directly. In register and scaled register addressing modes bits 0 through 3 specify the register containing the offset and bit 4 must be zero. Bits 5 and 6 represent the scaling type, using the same scheme and semantics as described in \autoref{sec:appendix:opcodes:operands}. Bits 7 through 11 specify the amount by which the offset is to be shifted.
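To summarise the bit layout described in this section, the following Java fragment is a minimal illustrative sketch (the names are illustrative only and not taken from the simulator's source):
\begin{verbatim}
// Minimal illustration of the load/store control bits described above.
final class LoadStoreDecode {
    static boolean isLoad(int w)           { return ((w >>> 20) & 1) == 1; } // L bit
    static boolean isByteAccess(int w)     { return ((w >>> 22) & 1) == 1; } // B bit
    static boolean offsetIsAdded(int w)    { return ((w >>> 23) & 1) == 1; } // U bit
    static boolean pBit(int w)             { return ((w >>> 24) & 1) == 1; }
    static boolean wBit(int w)             { return ((w >>> 21) & 1) == 1; }
    static boolean hasRegisterOffset(int w){ return ((w >>> 25) & 1) == 1; } // vs. immediate
    static int baseRegister(int w)         { return (w >>> 16) & 0xF; }
    static int sourceOrDestRegister(int w) { return (w >>> 12) & 0xF; }
}
\end{verbatim}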
\section{Load and Store Multiple Instructions}
Distinct from ordinary load and store instructions are what the ARM Architecture Reference Manual refers to as ``Load and Store Multiple'' instructions\citep[pp. A3-26]{armarm:2005}. These instructions reside in an entirely separate instruction family and have a unique bit pattern. Load and Store Multiple instructions are composed of five distinct parts:
\begin{enumerate}
\item A register list represented as a bit array in bits 0 through 15, with bit 0 representing R0 and bit 15 representing R15 and set bits denoting membership in the set. The registers listed are those to be populated with loaded values or to have their values stored in memory.
	\item The P, U and W bits (bits 24, 23 and 21 respectively). The P and U bits specify the addressing mode for the instruction and the W bit specifies whether the base address register is to be updated by execution of the instruction.
	\item The S bit (bit 22), the purpose of which pertains to privileged or unprivileged execution modes and therefore falls outside the bounds of the simulator, meaning it is ignored.
\item The L bit (bit 20) which distinguishes between a Load and a Store operation (set and unset respectively).
\item The base address register specified by bits 16 through 19.
\end{enumerate}
\subsection{Load and Store Multiple Addressing Modes}
Load and Store Multiple instructions must be in one of four addressing modes, which describe the manner in which the base address register is modified when storing the registers specified by the register list. Each addressing mode is known by two names, with one name being most useful when describing block data transfer operations and another name for the same mode in terms of stack operations. These names and how they relate to the P and U bits appear in \autoref{table:ldm_modes}
\begin{table}
\caption{Load and Store Multiple Addressing Modes and the P and U bits\citep[pp. A5-42]{armarm:2005}}
\label{table:ldm_modes}
\begin{tabularx}{\textwidth}{l|l|X}
P & U & Name and Meaning \\ \hline
0 & 0 & DA/FA (Decrement After / Full Ascending) - Base address register decremented by 4 after each register\\ \hline
0 & 1 & IA/FD (Increment After / Full Descending) - Base address register incremented by 4 after each register\\ \hline
1 & 0 & DB/EA (Decrement Before / Empty Ascending) - Base address register decremented by 4 before each register\\ \hline
1 & 1 & IB/ED (Increment Before / Empty Descending) - Base address register incremented by 4 before each register
\end{tabularx}
\end{table}
\appendix
\chapter{An appendix}
\lipsum[1]
\pageId{upconversionUsage}
\subsection*{Getting Started}
\begin{itemize}
\item
In order to use the up-conversion features, you will have to download the
\href[full distribution of SnuggleTeX]{docs://download}.
\item
    You must have \verb|snuggletex-core.jar| and
    \verb|snuggletex-upconversion.jar| in your runtime ClassPath, as well as
    \verb|saxon9.jar| and \verb|saxon9-dom.jar|. (The JARs in the distribution
    ZIP file contain version numbers in their names; it should be obvious what
    we mean here!)
\end{itemize}
\subsection*{Example}
Have a look at the source code for
\href[\verb|BasicUpConversionExample.java|]{maven://xref/uk/ac/ed/ph/snuggletex/upconversion/samples/BasicUpConversionExample.html}
to see how to invoke the up-conversion process.
The key to invoking the up-conversion process is to attach an
\verb|UpConvertingPostProcessor| to the SnuggleTeX DOM-building process using
the \verb|addDOMPostProcessor()| method of your \verb|DOMOutputOptions| or
\verb|XMLStringOutputOptions|.
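A minimal sketch of this wiring is shown below; it assumes the class and method names exactly as given above, so check \verb|BasicUpConversionExample.java| for the definitive usage.
\begin{verbatim}
XMLStringOutputOptions options = new XMLStringOutputOptions();
// Attach the up-converter so each <math/> element gets post-processed.
options.addDOMPostProcessor(new UpConvertingPostProcessor());
\end{verbatim}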
Additionally, if you want to use any of the special LaTeX up-conversion
commands added in SnuggleTeX 1.2.0, such as \verb|\assumeSymbol|
and \verb|\setUpConversionOption|, then you must also register the
appropriate \verb|SnugglePackage| with your \verb|SnuggleEngine| as
follows:
\begin{verbatim}
SnuggleEngine engine = new SnuggleEngine();
engine.addPackage(UpConversionPackageDefinitions.getPackage());
\end{verbatim}
\subsection*{Extracting Information from Resulting MathML}
The up-conversion process {\it replaces} each raw MathML \verb|<math/>| element
with an annotated element housing all of the resulting output forms together. In
the case of the above example, the resulting MathML is:
\begin{verbatim}
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
<semantics>
<mfrac>
...
</mfrac>
<annotation-xml encoding="MathML-Content">
<apply>
<divide/>
...
</apply>
</annotation-xml>
<annotation encoding="Maxima">((2 * x) - (y^2)) / sin((x * y * (x - 2)))</annotation>
</semantics>
</math>
\end{verbatim}
The SnuggleTeX utility class
\href[\verb|MathMLUtilities|]{maven://xref/uk/ac/ed/ph/snuggletex/utilities/MathMLUtilities.html}
includes some useful helper methods for extracting individual parts of the
MathML. Here are some examples:
\begin{itemize}
\item \verb|extractAnnotationXML(element, "MathML-Content")|
will extract the content of the named \verb|<annotation-xml/>| element,
in this case giving you an \verb|Element| of the form:
\begin{verbatim}
<apply>
<divide/>
...
</apply>
\end{verbatim}
\item You can use \verb|isolateAnnotationXML(element, "MathML-Content")|
to do the same thing, but wrapping the result in a \verb|<math/>| having
the same attributes as the original one, giving:
\begin{verbatim}
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
<apply>
<divide/>
...
</apply>
</math>
\end{verbatim}
\item Use \verb|extractAnnotationString(element, "Maxima")|
to get the Maxima input annotation. This would give you:
\begin{verbatim}
((2 * x) - (y^2)) / sin((x * y * (x - 2)))
\end{verbatim}
\item The \verb|unwrapParallelMathMLDOM()| method returns a simple
Object with all of the annotations extracted that can be easily
interrogated.
\item
\href[\verb|MathMLUtilities|]{maven://xref/uk/ac/ed/ph/snuggletex/utilities/MathMLUtilities.html}
has some convenience methods for serialising MathML \verb|Element|s and
\verb|Document|s as XML Strings, that can be useful for debugging or
documentation purposes.
\end{itemize}
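Putting these helpers together, a typical extraction might look as follows. This is a sketch only: \verb|mathElement| stands for the resulting \verb|<math/>| element, and it is assumed that the helpers are invoked as static methods of \verb|MathMLUtilities| with the argument forms suggested above.
\begin{verbatim}
// Content MathML annotation, e.g. an <apply>...</apply> Element.
Element contentMathML =
    MathMLUtilities.extractAnnotationXML(mathElement, "MathML-Content");

// Maxima input annotation, returned as a String.
String maximaInput =
    MathMLUtilities.extractAnnotationString(mathElement, "Maxima");
\end{verbatim}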
\chapter{Viscoelastic Predictions}
%The ultrasound method \cite{Foiret2014} enables the simultaneous measurement of cortical thickness and tissue elastic properties where previous techniques such as DXA (the \textit{de facto} method) could archieve.
As explained before, fracture risk factors such as thickness, porosity and particular quality elements of the extracellular matrix define the bone quality assessed by DXA techniques and BMD values. The axial-transmission QUS method proposed by \textit{Minonzio and Foiret et al.} \cite{Foiret2014}, \cite{Minonzio2018}, simulated in the previous chapter, is based on recording the guided-wave propagation through the medium, where damping factors naturally affect the signal. This damping relates to a viscoelastic behavior of cortical bone, arising mainly from the presence of collagen fibers and typically characterized with Resonant Ultrasound Spectroscopy (RUS) techniques. It is therefore natural to study, through two-scale homogenization theory, how such damping elements influence the resulting homogenized coefficients.
Describing the damping effects in cortical bone is not new: \textit{Bernard et al.} \cite{Bernard2015} studied a viscoelastic-type behavior in the frequency domain, modelling the complex elastic tensor $C^*_{ij}$ with a damping effect described by the formulas:
\begin{equation*}
C^*_{ij} = C_{ij} + i C_{ij}^{'} = C_{ij} (1+ iQ_{ij}^{-1}) \quad i,j = 1,\dots, 6
\end{equation*}
where the $Q^{-1}_{ij}$, defined as the ratios of the imaginary part ($C_{ij}'$) to the real part ($C_{ij}$), denote the so-called quality factors.
In this section, I reformulate such quality factors within the two-scale homogenization formalism, recovering the homogenized coefficients of the elastic case at different porosity levels and, in particular, obtaining predictions for the quality factors over a range of porosities. According to up-to-date references, such coefficients remain to be validated, since there is not yet enough experimental literature to confirm the predictions.
\section{Formalization of Q-factors}
The quality factors $Q_{ij}$, proposed by \textit{Bernard} in an experimental setting, provide an interesting formalization of the ratio between the real and imaginary constitutive coefficients of a full viscoelastic mechanical description of the bone; moreover, they allow a comparison with the existing literature and experimental results.
In the following, the so-called Q-factors are described by means of two-scale homogenization theory, derived from a \textit{Kelvin-Voigt} viscoelastic formulation of bone in the frequency domain. More specifically, the bone is assumed to behave as a two-phase viscoelastic material, defined by a square unit cell in $\mathbf{R}^2$ with a circular inclusion, written $\mathbf{Y} = \mathbf{Y}_{m} \cup \mathbf{Y}_{f}$, where $\mathbf{Y}_m$ and $\mathbf{Y}_f$ denote the matrix and fluid parts, respectively.
For the bone matrix, we associate an elastic behavior defined by the elastic coefficients:
\begin{equation*}
C_{ijkl}(\mathbf{y}) = C_{ijkl}^m \mathbb{I}_{\mathbf{Y}_m}(\mathbf{y}) + C_{ijkl}^f \mathbb{I}_{\mathbf{Y}_f}(\mathbf{y})
\end{equation*}
while the porosity is modeled with a viscous contribution, associated to coefficients in the form:
\begin{equation*}
D_{ijkl}(\mathbf{y}) = D_{ijkl}^m \mathbb{I}_{\mathbf{Y}_m}(\mathbf{y}) + D_{ijkl}^f \mathbb{I}_{\mathbf{Y}_f}(\mathbf{y}).
\end{equation*}
Moreover, the relation between the elastic and viscous behaviors is expressed through attenuation parameters $\alpha^{m}, \alpha^{f} >0$ associated with the bone matrix and the fluid mesostructure, respectively. Explicitly, it is assumed that:
\begin{equation*}
D_{ijkl}^m(\mathbf{y}) = \alpha^{m} C_{ijkl}^m(\mathbf{y}) , \quad D_{ijkl}^f (\mathbf{y}) = \alpha^{f} C_{ijkl}^f(\mathbf{y})
\end{equation*}
\begin{rem}
By assuming this kind of relation, the objective is to obtain a viscoelastic model in which the viscous part is a linear attenuation of the elastic one, so that the overall behavior is of transverse isotropic type, defined by the pair of parameters $(\alpha^m, \alpha^f)$, and closely mimics the experimental behavior of cortical bone.
\end{rem}
\subsection{Workflow Description}
Given the requirement of a viscous-like behavior, in the time domain a model of \textit{Kelvin-Voigt} type with mixed boundary conditions is considered, described in the form:
\begin{equation*}
\left \{
\begin{array}{cc}
\rho^{\epsilon}\partial_{tt}u^{\epsilon} - \nabla \cdot \sigma(u^{\epsilon}, \partial_t u^{\epsilon}) = \mathbf{0} & \text{ in } (0,T) \times \Omega\\
\sigma^{\epsilon}(u^{\epsilon},\partial_t u^{\epsilon}) = \mathbf{C}:\mathbf{e}(u^{\epsilon}) + \mathbf{D}:\mathbf{e}(\partial_t u^{\epsilon}) & \text{ in } (0,T) \times \Omega\\
\sigma^{\epsilon}(u^{\epsilon}, \partial_t u^{\epsilon})\cdot n = \mathbf{F} & \text{ on } (0,T) \times \Gamma_N\\
u^{\epsilon} = \mathbf{0} & \text{ on } (0,T) \times \Gamma_D
\end{array}
\right .
\label{ViscoElasticModel}
\end{equation*}
\begin{rem}
In the above and the next developments, we assume resting initial conditions, i.e., $\partial_t u^{\epsilon} = u^{\epsilon} = \mathbf{0}$ at $t = 0$, not written explicitly in the models and further deductions.
\end{rem}
Existence results can be derived similarly to the elastic case presented before, by applying spectral decomposition to both the elastic and the viscoelastic operators. Similar mathematical descriptions of viscous models are given by \cite{Abdessamad2009}, \cite{Boughammoura2013} for the case of homogeneous \textit{Dirichlet} boundary conditions.
The interest here lies in the frequency domain; thus, applying the \textit{Fourier} transform at frequency $\omega \in \mathbb{R}$, i.e., assuming $u^{\epsilon}(t,\mathbf{x}) = \hat{u}^{\epsilon}(\mathbf{x}) e^{i\omega t}$, the problem is restated in the Fourier domain as:
\begin{equation*}
\left \{
\begin{array}{cc}
-\omega^2 \rho^{\epsilon} \hat{u}^{\epsilon} - \nabla \cdot \hat{\sigma}_{\epsilon,\omega}(\hat{u}^{\epsilon}) = \mathbf{0} & \text{ in } \Omega \\
\hat{\sigma}^{\epsilon} (\hat{u}^{\epsilon}) = (\mathbf{C} + i\omega \mathbf{D}):\mathbf{e}(\hat{u}^{\epsilon}) & \text{ in } \Omega \\
\hat{\sigma}^{\epsilon} (\hat{u}^{\epsilon}) \cdot n = \hat{\mathbf{F}}(\omega) & \text{ on } \Gamma_N \\
\hat{u}^{\epsilon} = \mathbf{0} & \text{ on } \Gamma_D
\end{array}
\right .
\end{equation*}
such that at $\omega = 0$ we have $\hat{u}^{\epsilon}=\mathbf{0}$ in $\Omega$. In particular, for ease of exposition, the dependency of the multiscale solution $\hat{u}^{\epsilon}$ on the frequency has been omitted.\\
Now, by the homogenization heuristic based on the two-scale asymptotic method, the effective (macroscopic) model at frequency $\omega$ takes the form:
\begin{equation*}
\left \{
\begin{array}{cc}
-\omega^2 \rho^{0} \hat{u}^0 - \nabla \cdot \hat{\sigma}^0(\hat{u}^0) = \mathbf{0} & \text{ in } \Omega \\
\hat{\sigma}^{0} (\hat{u}^0) = (\mathbf{C} + i\omega \mathbf{D})^{hom}:\mathbf{e}(\hat{u}^0) & \text{ in } \Omega \\
\hat{\sigma}^{0} (\hat{u}^0) \cdot n = \hat{\mathbf{F}}(\omega) & \text{ on } \Gamma_N \\
\hat{u}^0 = \mathbf{0} & \text{ on } \Gamma_D
\end{array}
\right .
\end{equation*}
In particular, the homogenized coefficients are defined by the cell problem solutions $\mathbf{N}^{rs} \in \mathbf{H}^1_{0}(\mathbf{Y}, \mathbb{C})$, described for each $r,s \in \{1,2,3\}$ in the form
\begin{equation*}
\left \{
\begin{array}{cc}
    \partial_{y_j} \big[ \big( C_{ijkl} + i\omega D_{ijkl} \big) \mathbf{e}_{kl}(\mathbf{N}^{rs}) \big] &= - \partial_{y_j} \big[ C_{ijrs} + i\omega D_{ijrs} \big] \quad \forall y \in \mathbf{Y} \\
\big \langle \mathbf{N}^{rs} \big \rangle_{\mathbf{Y}} = \mathbf{0} &
\end{array}
\right.
\end{equation*}
Since the cell problems must be valid for each $\omega \in \mathbb{R}$ and for each $\mathbf{y} \in \mathbf{Y}$, a natural procedure is to decouple the cell PDE problems, thus being able to define viscosity-to-elasticity ratios, i.e., an expression for the so-called Q-factors.
The decoupling is obtained by separating the real and imaginary parts of the cell solutions, i.e., by considering the following decomposition
\begin{equation*}
\mathbf{N}^{rs}(\mathbf{y}) = \mathbf{N}_R^{rs}(\mathbf{y}) + i\mathbf{N}_I^{rs}(\mathbf{y})
\end{equation*}
where the vector functions $\mathbf{N}_R^{rs}, \mathbf{N}_I^{rs}$ in $\mathbf{H}^1_{0}(\mathbf{Y},\mathbb{R})$ solve the following coupled PDE system for the real and imaginary parts:
\begin{equation*}
\left \{
\begin{array}{cc}
\partial_{y_j} \big[ C_{ijkl} \mathbf{e}_{kl}(\mathbf{N}^{rs}_R) -\omega D_{ijkl} \mathbf{e}_{kl}(\mathbf{N}^{rs}_I) \big] = - \partial_{y_j} \big[ C_{ijrs} \big] & \forall \mathbf{y} \in \mathbf{Y} \\
\partial_{y_j} \big[ C_{ijkl} \mathbf{e}_{kl}(\mathbf{N}^{rs}_I) +\omega D_{ijkl} \mathbf{e}_{kl}(\mathbf{N}^{rs}_R) \big] = - \partial_{y_j} \big[ \omega D_{ijrs} \big] & \forall \mathbf{y} \in \mathbf{Y} \\
\big \langle \mathbf{N}^{rs}_R \big \rangle_{\mathbf{Y}}= \mathbf{0} \quad \big \langle \mathbf{N}^{rs}_I \big \rangle_{\mathbf{Y}} = \mathbf{0}. &
\end{array}
\right.
\end{equation*}
\begin{rem}
Note that for the above cell problems the existence and uniqueness of a weak solution is guaranteed, since the problem can be rewritten in terms of a fully elliptic operator, the solution being unique once the zero-mean normalization condition is imposed.
\end{rem}
With the solutions to the cell problems, we can then define the homogenized coefficients associated with the elastic and viscous parts by recalling first:
\begin{equation*}
\hat{\sigma}_{ij}^0 (\hat{u}^0,\omega) = R_{ijkl}^{hom} (\omega) \mathbf{e}_{kl}(\hat{u}^0)
\end{equation*}
where the homogenized tensor is
\begin{equation*}
    R^{hom}_{ijrs}= \big \langle C_{ijrs} + i\omega D_{ijrs} + \big( C_{ijkl} + i \omega D_{ijkl} \big) \mathbf{e}_{kl}(\mathbf{N}^{rs}) \big \rangle
\end{equation*}
so that, using the decomposition of $\mathbf{N}^{rs}$, the full homogenized expression follows, described in (\ref{ViscoElasticDecom}) and decomposed into real and imaginary parts characterizing the elastic and the viscous contribution, respectively.
\begin{equation}
\begin{aligned}
R^{hom}_{ijrs} &= \big \langle C_{ijrs} + \big( C_{ijkl}\mathbf{e}_{kl}( \mathbf{N}^{rs}_R) -\omega D_{ijkl}\mathbf{e}_{kl}(\mathbf{N}^{rs}_I) \big) \big \rangle \\
& \, + i \big \langle \omega D_{ijrs} + \big( C_{ijkl} \mathbf{e}_{kl}(\mathbf{N}^{rs}_I) + \omega D_{ijkl}\mathbf{e}_{kl}(\mathbf{N}^{rs}_R) \big) \big \rangle \\
& := C^{hom}_{ijrs} + \mathbf{i} \omega D^{hom}_{ijrs}
\end{aligned}
\label{ViscoElasticDecom}
\end{equation}
In particular, the definition of the $Q_{ij}$ factors can be rewritten directly in terms of the homogenized formulation, acting on the tensor coefficients as in (\ref{Qfactor-Def}). Note that, with this derivation, the quality factors become frequency dependent, a consequence of the multiscale \textit{Kelvin-Voigt} mechanical model assumed.
\begin{equation}
\label{Qfactor-Def}
Q_{ijrs}^{-1}(\omega) := \frac{D^{hom}_{ijrs}(\omega)}{ C^{hom}_{ijrs}(\omega)}
\end{equation}
\subsection{Nonlinear Decomposition}
An aspect that must be taken into account is the nonlinear effect introduced by the asymptotic assumption on the solution, expressed through the term $\mathbf{N}^{rs}$ in the definition of the homogenized coefficients, which can be made explicit starting from (\ref{Qfactor-Def}).
This effect can be quantified by splitting each homogenized coefficient into the linear part, given by the mean of the coefficient itself, and the nonlinear part, produced by the solutions of the cell problems; i.e., from (\ref{Qfactor-Def}) the decomposition into linear and nonlinear contributions reads:
\begin{equation}
\label{Expansion-Qfactor}
Q_{ijrs}^{-1} = \frac{D_{hom}^{(0)}}{C_{hom}^{(0)}} + \frac{1}{C^{(0)}_{hom}\left( C^{(0)}_{hom} + C^{(1)}_{hom} \right)} \big[C^{(0)}_{hom} \big( D^{(0)}_{hom} + D^{(1)}_{hom}\big) - D^{(0)}_{hom}\big( C^{(0)}_{hom} + C^{(1)}_{hom} \big) \big]
\end{equation}
where the linear terms with respect to the porosity $p \in [0,1]$ are given by:
\begin{equation*}
C^{(0)}_{hom} = \langle C_{ijrs} \rangle_{\mathbf{Y}} \quad D^{(0)}_{hom} = \langle D_{ijrs} \rangle_{\mathbf{Y}}
\end{equation*}
and the nonlinear terms with respect to $p \in [0,1]$, associated with the solutions $\mathbf{N}^{rs}$, are given by:
\begin{equation*}
\begin{array}{cc}
C^{(1)}_{hom} =& \langle C_{ijkl}\mathbf{e}_{kl}(N^{rs}_R) - \omega D_{ijkl}\mathbf{e}_{kl}(N^{rs}_I) \rangle_{\mathbf{Y}} \\
D^{(1)}_{hom} =& \langle \omega^{-1} C_{ijkl}\mathbf{e}_{kl}(N^{rs}_I) + D_{ijkl}\mathbf{e}_{kl}(N^{rs}_R) \rangle_{\mathbf{Y}}
\end{array}
\end{equation*}
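As a simple consistency check (assuming here that $p$ denotes the fluid volume fraction of the unit cell $\mathbf{Y}$), the purely linear part of the decomposition can be written out explicitly using the attenuation assumption $D^{m}_{ijrs} = \alpha^{m} C^{m}_{ijrs}$, $D^{f}_{ijrs} = \alpha^{f} C^{f}_{ijrs}$:
\begin{equation*}
\frac{D^{(0)}_{hom}}{C^{(0)}_{hom}} = \frac{\langle D_{ijrs} \rangle_{\mathbf{Y}}}{\langle C_{ijrs} \rangle_{\mathbf{Y}}} = \frac{(1-p)\,\alpha^{m} C^{m}_{ijrs} + p\,\alpha^{f} C^{f}_{ijrs}}{(1-p)\, C^{m}_{ijrs} + p\, C^{f}_{ijrs}},
\end{equation*}
which reduces to $\alpha^{m}$ at $p=0$ and to $\alpha^{f}$ at $p=1$; any deviation of $Q^{-1}_{ijrs}$ from this ratio is therefore entirely due to the cell-problem terms $C^{(1)}_{hom}$ and $D^{(1)}_{hom}$.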
\section{Predictions}
The explicit dependency on the frequency $\omega$ requires us to consider the frequency range in which the experimental setting takes place. In this sense, after adjusting to the experimental data range, viscous coefficients with transverse isotropic behavior are assumed in the form:
\begin{equation*}
D^m_{ijkl} (\mathbf{y}) = 5\times10^{-2} C^{m}_{ijkl}(\mathbf{y}), \quad D^f_{ijkl}(\mathbf{y}) = 1 \times 10^{-3} C^{f}_{ijkl}(\mathbf{y})
\end{equation*}
Under such considerations, figure (\ref{BernardPredictionHomCoeffs}) contains the predictions for the homogenized elastic coefficients and Q-factors at a fixed frequency $\omega = 0.5~\mathrm{MHz}$. It describes the behavior as a function of density in the clinical range of interest. The main quality factors describing the axial behavior show predictions comparable to the up-to-date literature \cite{Bernard2015}. Moreover, the homogenized elasticity coefficients recover the predictions obtained with fully elastic models \cite{Parnell2008}, therefore constituting a generalization.
In particular, the shear-like coefficients $C_{55}^{hom}, C_{66}^{hom}$ reproduce experimentally measured values, validating the simulated model.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{images/Qfactors/CellProb_QfactorCircular5E-2_Relations.pdf}
    \caption{Predicted behavior of the viscoelastic model. The left figure shows the predicted quality factors for a \textit{Kelvin-Voigt} model; the center figure, a prediction of homogenized coefficient ratios; and the right figure, some homogenized shear ratios with behavior as in \cite{Bernard2015}.}
\label{BernardPredictionHomCoeffs}
\end{figure}
Nevertheless, figure (\ref{BernardPredictionHomCoeffs}) alone cannot tell whether the nonlinear effects are preponderant in the definition of the quality factors. In this setting, the decomposition (\ref{Expansion-Qfactor}) can be used to compare the linear and nonlinear contributions and therefore answer the question of whether the linear behavior dominates. In this sense, figure (\ref{QfactorDecomposition}) shows a clear,
direction-dependent preponderance of the nonlinear contribution in the overall behavior for the three cases shown, implying that the full cell-problem interactions are needed to describe each factor. In particular, the cell problems become the most relevant effect describing the mechanical behavior.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{images/Qfactors/PlotsVisc_Circular2DPart50EPS5-2_Ome5.pdf}
    \caption{The effect of the cell-problem solutions is shown for some representative quality factors, accounting for the linear and nonlinear contributions over the range of biomedical interest.}
\label{QfactorDecomposition}
\end{figure}
Finally, a relevant aspect of the Q-factor definition (\ref{Qfactor-Def}) is its dependency on the frequency. In the experimental setting proposed by \cite{Bernard2015}, such dependency is taken into account neither in the viscoelastic operator nor in the quality factors.
In this direction, figure (\ref{BernardPrediction-Freq}) shows the factors obtained for different frequency values. The behavior is clearly independent of the frequency used, therefore supporting the assumption proposed by \textit{Bernard et al. (2015)} \cite{Bernard2015} that, over the frequency range under consideration, the overall behavior of the Q-factors remains the same, i.e., the ratio of the viscous to the elastic part is the same at each frequency.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{images/Qfactors/QfactorsFreqsEPS5-2.pdf}
    \caption{Predicted behavior of the viscoelastic model at different frequencies. The left figure shows the predicted quality factors for a \textit{Kelvin-Voigt} model; the center figure, a prediction of homogenized coefficient ratios; and the right figure, some homogenized shear ratios behaving as in \cite{Bernard2015}.}
    \label{BernardPrediction-Freq}
\end{figure}
\section{April 20 Review Questions}
\begin{QandA}
\item How does the size of a task affect MapReduce?
\begin{answered}
        MapReduce splits a user-submitted job into $M$ map tasks and then $R$ reduce tasks. As shown in the paper, the master
        must make $O(M + R)$ scheduling decisions and keep $O(M \cdot R)$ state in memory. Thus, for a fixed job size, the smaller
        the tasks in each phase (map or reduce), the larger the corresponding value of $M$ or $R$. Accordingly, the master has
        more scheduling decisions to make and more state to keep in memory. However, if the task size is too big, then the advantage of
        parallel execution on a large cluster offered by MapReduce is lost.
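        For a rough sense of scale, with the figures reported in the paper of $M = 200{,}000$ map tasks and $R = 5{,}000$ reduce tasks,
        the master tracks on the order of $M \cdot R = 10^9$ map/reduce task pairs, which is manageable only because, as the paper notes,
        the per-pair state is roughly one byte.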
\end{answered}
\item Spark uses a very different mechanism for fault tolerance than most systems we've studied so far. What's one assumption that allows Spark to use this method?
\begin{answered}
        Spark achieves fault tolerance through a distributed memory abstraction called Resilient Distributed Datasets (RDDs), which are read-only, partitioned
        collections of records. Since RDDs are immutable, a faulty RDD can be recomputed effectively from the initial
        RDDs by following the lineage graph that records how it was derived from them.
\item Describe one way in which a datacenter has different needs or limitations than a single machine or small cluster.
\begin{answered}
        Compared to a single machine, a datacenter is at a disadvantage in terms of both latency and bandwidth. Usually, a single machine can
        access its DRAM in about 100 ns, whereas for a machine to access another machine's DRAM in the same rack, the latency can reach
        300 $\mu$s. Similarly, a machine accessing its local DRAM has a bandwidth of 20 GB/s, but accessing another
        machine's DRAM within the same rack drops the bandwidth to 100 MB/s. Compared to a small cluster, a datacenter also has to worry about
        energy consumption, indoor humidity, and indoor temperature. In addition, the probability that some machine has failed at any given time
        becomes much more noticeable in a datacenter than in a small cluster.
\end{answered}
\end{QandA}
\section{Introduction}
This paper proposes to require that a violation handler be a
\cppkey{noexcept} function. That is, a violation handler cannot exit by throwing
an exception.
This change will allow functions to be \cppkey{noexcept} while including a
contract in the form of \cppkey{expects} and \cppkey{ensures}.
\section{Determinants and Inverses}
\subsection{Introduction}
Consider a linear map $\mathbb R^n\to\mathbb R^n$ given by the matrix $M$.
We want to define an $n\times n$ matrix $\tilde{M}=\operatorname{adj}M$ and a scalar $\det M$ with
$$M\tilde{M}=(\det M)I$$
Furthermore, $\det M$ is the factor by which an area in $\mathbb R^2$ or a volume in $\mathbb R^3$ is scaled.
If $\det M\neq 0$, then we will have
$$M^{-1}=\frac{1}{\det M}\tilde{M}$$
For $n=2$, we know that
$$\tilde{M}=
\begin{pmatrix}
M_{22}&-M_{12}\\
-M_{21}&M_{11}
\end{pmatrix},
\det M=
\begin{vmatrix}
M_{11}&M_{12}\\
M_{21}&M_{22}
\end{vmatrix}
=M_{11}M_{22}-M_{12}M_{21}
$$
works.
Note that $\det M\neq 0$ if and only if $M\underline{e_1},M\underline{e_2}$ are linearly independent if and only if $\operatorname{Im}M=\mathbb R^2$ if and only if $\operatorname{rank}M=2$.\\
For $n=3$, we recall that given any $\underline{a},\underline{b},\underline{c}\in\mathbb R^3$, the scalar $[\underline{a},\underline{b},\underline{c}]$ is the (signed) volume of the parallelepiped spanned by $\underline{a},\underline{b},\underline{c}$.
We can also note that the standard basis vectors obey $[\underline{e_i},\underline{e_j},\underline{e_k}]=\epsilon_{ijk}$.
Note that for $M$ a real $3\times 3$ matrix, its columns are $M\underline{e_i}=M_{ji}\underline{e_j}$.
So the volume is scaled by a factor
$$[M\underline{e_1},M\underline{e_2},M\underline{e_3}]=M_{i1}M_{j2}M_{k3}[\underline{e_i},\underline{e_j},\underline{e_k}]=M_{i1}M_{j2}M_{k3}\epsilon_{ijk}$$
We define this to be $\det M$.
We can also define $\tilde{M}$ by
$$\underline{R_1}(\tilde{M})=\underline{C_2}(M)\times\underline{C_3}(M)$$
$$\underline{R_2}(\tilde{M})=\underline{C_3}(M)\times\underline{C_1}(M)$$
$$\underline{R_3}(\tilde{M})=\underline{C_1}(M)\times\underline{C_2}(M)$$
So one can see immediately that
$$(\tilde{M}M)_{ij}=\underline{R_i}(\tilde{M})\cdot\underline{C_j}(M)=\det M\delta_{ij}$$
as desired.\\
How about when we consider $n$ in general?
\subsection{Alternating forms}
We first want to generalize our $\epsilon$ symbol to higher dimensions by considering permutations.
\begin{definition}
A permutation $\sigma:\{1,2,\ldots,n\}\to\{1,2,\ldots,n\}$ is a bijection of $\{1,2,\ldots,n\}$ to itself.
\end{definition}
So $\{1,2,\ldots,n\}=\{\sigma(1),\sigma(2),\ldots,\sigma(n)\}$.
It is immediate that the permutations on $n$ letters form a group $S_n$ under composition, and it is easy to see that $|S_n|=n!$.
\begin{definition}
A permutation $\tau\in S_n$ is called a transposition of $i,j\in\{1,2,\ldots,n\}$ if $\tau(i)=j$, $\tau(j)=i$, and $\tau(k)=k$ for all $k\neq i,j$.
We denote $\tau$ by $(i\ j)$.
\end{definition}
\begin{proposition}
Any permutation is a product of transpositions.
\end{proposition}
\begin{proof}
Trivial.
\end{proof}
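For instance, the $3$-cycle in $S_3$ sending $1\mapsto 2\mapsto 3\mapsto 1$ can be written as $(1\ 3)\circ(1\ 2)$ (applying the right-hand factor first), a product of two transpositions.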
The way we write it is not unique, but the number of transpositions is unique modulo $2$.
\begin{proposition}
If some permutation $\sigma$ can be written both as a product of $k$ transpositions and as a product of $l$ transpositions, then $k\equiv l\pmod{2}$.
\end{proposition}
\begin{proof}
We will see this in the Groups course (it is actually quite easy).
\end{proof}
\begin{definition}
We say the permutation $\sigma$ is even if it can be written as the product of an even number of transpositions, and odd otherwise.
We define the sign, or signature function $\epsilon:S_n\to\{1,-1\}$ by
$$\epsilon(\sigma)=
\begin{cases}
1\text{, if $\sigma$ is even}\\
-1\text{, otherwise}
\end{cases}
$$
\end{definition}
Note that $\epsilon(\operatorname{id})=1$ and, more importantly, $\epsilon(\sigma\circ\pi)=\epsilon(\sigma)\epsilon(\pi)$ for permutations $\sigma,\pi$; that is, $\epsilon$ is multiplicative.
\begin{definition}
The $\epsilon$ symbol (or tensor) on $n$ letters is defined as
$$
\epsilon_{ij\ldots kl}=
\begin{cases}
\epsilon(\sigma)\text{, if $ij\ldots kl$ is a permutation $\sigma$ of $\{1,2,\ldots,n\}$}\\
0\text{, otherwise. That is, some indices coincide.}
\end{cases}
$$
\end{definition}
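For example, for $n=3$ this recovers the $\epsilon_{ijk}$ symbol used above:
$$\epsilon_{123}=\epsilon_{231}=\epsilon_{312}=1,\qquad
\epsilon_{132}=\epsilon_{213}=\epsilon_{321}=-1,$$
and $\epsilon_{ijk}=0$ whenever two indices coincide.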
So we now can define the alternating forms.
\begin{definition}
Given vectors $\underline{v_1},\underline{v_2},\ldots,\underline{v_n}$ in $\mathbb R^n$ or $\mathbb C^n$,
we define the alternating form to be the scalar
$$[\underline{v_1},\underline{v_2},\ldots,\underline{v_n}]=\epsilon_{ij\ldots kl}(\underline{v_1})_i(\underline{v_2})_j\cdots(\underline{v_{n-1}})_k(\underline{v_n})_l$$
One can check that the alternating forms when $n=2,3$ are exactly the same as we have defined them before.
\end{definition}
Note that the alternating form is multilinear, thus a tensor.
Also, it is totally antisymmetric: interchanging any two vectors changes the sign.
Equivalently,
$$[\underline{v_{\sigma(1)}},\underline{v_{\sigma(2)}},\ldots,\underline{v_{\sigma(n)}}]=\epsilon(\sigma)[\underline{v_1},\underline{v_2},\ldots,\underline{v_n}]$$
Moreover, $[\underline{e_1},\underline{e_2},\ldots,\underline{e_n}]=1$.\\
One can see immediately that
\begin{proposition}
If the function $f:(F^n)^n\to F$ where $F=\mathbb R$ or $\mathbb C$ is multilinear, totally antisymmetric and $f(\underline{e_1},\underline{e_2},\ldots,\underline{e_n})=1$, then $f$ is uniquely determined.
\end{proposition}
\begin{proof}
Trivial.
\end{proof}
\begin{proposition}
If one of the vectors $\underline{v_1},\underline{v_2},\ldots,\underline{v_n}$ is a linear combination of the others, then $[\underline{v_1},\underline{v_2},\ldots,\underline{v_n}]=0$.
\end{proposition}
\begin{example}
In $\mathbb C^4$, let $\underline{v_1}=(i,0,0,2),\underline{v_2}=(0,0,5i,0),\underline{v_3}=(3,2i,0,0),\underline{v_4}=(0,0,-i,1)$, then $[\underline{v_1},\underline{v_2},\underline{v_3},\underline{v_4}]=10i$
\end{example}
\begin{proposition}
$[\underline{v_1},\underline{v_2},\ldots,\underline{v_n}]\neq 0$ if and only if $\{\underline{v_i}\}$ is an independent set.
\end{proposition}
\begin{proof}
We have already shown the ``only if'' part, so it remains to show the other direction.
If $\{\underline{v_i}\}$ is independent, then it constitutes a basis. If we had $[\underline{v_1},\underline{v_2},\ldots,\underline{v_n}]=0$, then expanding arbitrary vectors in this basis and using multilinearity, the alternating form would vanish on every $n$-tuple of vectors, contradicting $[\underline{e_1},\underline{e_2},\ldots,\underline{e_n}]=1$.
\end{proof}
\begin{definition}[Definition of determinant]
Consider $M\in M_{n\times n}(F)$ where $F=\mathbb R$ or $\mathbb C$ with columns $\underline{C_a}$, then the determinant of $M$ is
\begin{align*}
\det M&=[\underline{C_1},\underline{C_2},\ldots,\underline{C_n}]\\
&=[M\underline{e_1},M\underline{e_2},\ldots,M\underline{e_n}]\\
&=\epsilon_{ij\ldots kl}M_{i1}M_{j2}\cdots M_{k(n-1)}M_{ln}\\
&=\sum_{\sigma\in S_n}\epsilon(\sigma)M_{\sigma(1)1}M_{\sigma(2)2}\cdots M_{\sigma(n)n}
\end{align*}
\end{definition}
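For instance, for $n=2$ the sum runs over only two permutations, $\sigma=\operatorname{id}$ with sign $+1$ and the transposition $(1\ 2)$ with sign $-1$, giving $\det M=M_{11}M_{22}-M_{21}M_{12}$, in agreement with the $2\times 2$ formula at the start of this section.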
\begin{example}
1. The general definition of the determinant here coincides with the $2$- and $3$-dimensional cases above.\\
2. If $M$ is diagonal, then $\det M$ is the product of all diagonal entries.
So $\det I=1$.\\
3. If we have
$$M=\left(\begin{array}{@{}ccc|c@{}}
&&&0\\
&A&&\vdots\\
&&&0\\
\hline
0&\dots&0&1
\end{array}\right)$$
Then $\det M=\det A$
\end{example}
\subsection{Properties of Determinants}
If we multiply one of the columns (or rows) of the matrix $M$ by a scalar $\lambda$ to produce $M'$, we have $\det M'=\lambda\det M$.
Furthermore, if we interchange two adjacent columns, the determinant is negated.
Both facts follow directly from the defining properties (multilinearity and total antisymmetry) of the determinant.
In addition, $\det M\neq 0$ if and only if the columns are linearly independent.
\begin{proposition}
For any $n\times n$ matrix $M$, $\det M=\det M^\top$.\\
Equivalently, $[\underline{C_1},\underline{C_2},\ldots,\underline{C_n}]=[\underline{R_1},\underline{R_2},\ldots,\underline{R_n}]$.
\end{proposition}
\begin{proof}
We have
\begin{align*}
[\underline{C_1},\underline{C_2},\ldots,\underline{C_n}]&=\sum_{\sigma\in S_n}\epsilon(\sigma)M_{\sigma(1)1}M_{\sigma(2)2}\cdots M_{\sigma(n)n}\\
&=\sum_{\sigma\in S_n}\epsilon(\sigma)M_{1\sigma^{-1}(1)}M_{2\sigma^{-1}(2)}\cdots M_{n\sigma^{-1}(n)}\\
&=\sum_{\sigma'\in S_n}\epsilon(\sigma')M_{1\sigma'(1)}M_{2\sigma'(2)}\cdots M_{n\sigma'(n)}\\
&=[\underline{R_1},\underline{R_2},\ldots,\underline{R_n}]
\end{align*}
Here we used that the map $\sigma\mapsto\sigma'=\sigma^{-1}$ is a bijection of $S_n$ onto itself and that $\epsilon(\sigma)=\epsilon(\sigma^{-1})$.
\end{proof}
We can evaluate determinants by expanding rows or columns.
\begin{definition}
For $M$ an $n\times n$ matrix,
define $M^{ia}$ to be the determinant of the $(n-1)\times (n-1)$ matrix obtained by deleting row $i$ and column $a$ of $M$.
This is called a minor of $M$.
\end{definition}
\begin{proposition}\label{det_formula}
$$\forall a,\det{M}=\sum_i(-1)^{i+a}M_{ia}M^{ia}$$
$$\forall i,\det{M}=\sum_a(-1)^{i+a}M_{ia}M^{ia}$$
\end{proposition}
\begin{proof}
The proof is deferred to the subsection on minors and cofactors below.
\end{proof}
By some simple computations, we discover
\footnote{We definitely did not see that coming}
that matrices with many zero entries are much easier to handle, which brings us to ways of simplifying determinants.\\
The first thing we could do is row/column operations.
If we modify $M$ by mapping $\underline{C_i}\mapsto \underline{C_i}+\lambda\underline{C_j}, i\neq j$ (or equivalently on rows), then the determinant is not changed.
This follows immediately from multilinearity and total antisymmetry.\\
Also, as stated above, if we interchange $\underline{C_i},\underline{C_j},i\neq j$ (same for rows), then the determinant changes sign.
These operations can simplify the calculation greatly, since they let us produce many zero entries.
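For instance (a small example of our own), applying $\underline{R_2}\mapsto\underline{R_2}-\underline{R_1}$ and $\underline{R_3}\mapsto\underline{R_3}-2\underline{R_1}$ and then expanding along the first column gives
$$\begin{vmatrix}
1&2&3\\
1&3&5\\
2&4&7
\end{vmatrix}
=
\begin{vmatrix}
1&2&3\\
0&1&2\\
0&0&1
\end{vmatrix}
=1\cdot
\begin{vmatrix}
1&2\\
0&1
\end{vmatrix}
=1.$$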
\begin{theorem}
For any $n\times n$ matrices $M,N$, $\det(MN)=\det(M)\det(N)$.
\end{theorem}
\begin{lemma}
$$\epsilon_{i_1i_2\ldots i_n}M_{i_1a_1}M_{i_2a_2}\cdots M_{i_na_n}=\epsilon_{a_1a_2\ldots a_n}\det{M}$$
\end{lemma}
\begin{proof}
Trivial.
\end{proof}
\begin{proof}[Proof of the multiplicativity of determinants]
By the preceding lemma,
\begin{align*}
\det(MN)&=\epsilon_{i_1i_2\ldots i_n}(MN)_{i_11}\cdots(MN)_{i_nn}\\
&=\epsilon_{i_1i_2\ldots i_n}M_{i_1k_1}N_{k_11}\cdots M_{i_nk_n}N_{k_nn}\\
&=\epsilon_{i_1i_2\ldots i_n}M_{i_1k_1}\cdots M_{i_nk_n}N_{k_11}\cdots N_{k_nn}\\
&=\det{M}\epsilon_{k_1k_2\ldots k_n}N_{k_11}\cdots N_{k_nn}\\
&=\det{M}\det{N}
\end{align*}
As desired.
\end{proof}
Note that this means $\det$ restricts to a group homomorphism from the group of invertible $n\times n$ matrices to the multiplicative group of nonzero scalars.
There are a few consequences of the multiplicative property:
\begin{proposition}
1. If $M$ is invertible, then $\det(M^{-1})=\det(M)^{-1}$.\\
2. If $M$ is orthogonal, then $\det(M)=\pm 1$.\\
3. If $M$ is unitary, then $|\det(M)|=1$.
\end{proposition}
\begin{proof}
Trivial.
\end{proof}
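For instance, item 1 follows from $MM^{-1}=I$, so $\det(M)\det(M^{-1})=\det(I)=1$; item 2 from $MM^\top=I$, so $\det(M)^2=1$; and item 3 from $\overline{M}^\top M=I$, so $\overline{\det(M)}\,\det(M)=|\det(M)|^2=1$.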
\subsection{Minors, Cofactors and Inverses}
\begin{definition}
Select a column $\underline{C_a}$ of the matrix $M$; we can write $\underline{C_a}=M_{ia}\underline{e_i}$, so
\begin{align*}
\det M&=[\underline{C_1},\underline{C_2},\ldots,\underline{C_a},\ldots,\underline{C_n}]\\
&=[\underline{C_1},\underline{C_2},\ldots,M_{ia}\underline{e_i},\ldots,\underline{C_n}]\\
&=M_{ia}[\underline{C_1},\underline{C_2},\ldots,\underline{e_i},\ldots,\underline{C_n}]\\
&=:\sum_iM_{ia}\Delta_{ia}
\end{align*}
So
$$\Delta_{ia}=[\underline{C_1},\underline{C_2},\ldots,\underline{e_i},\ldots,\underline{C_n}]$$
is called the cofactor.
Note that, up to the sign $(-1)^{i+a}$, the cofactor is exactly the determinant of the matrix obtained by deleting row $i$ and column $a$, as the proof below makes precise.
\end{definition}
\begin{proof}[Proof of Proposition \ref{det_formula}]
We know, then, that
$$\Delta_{ia}=(-1)^{i+a}M^{ia}$$
by shuffling the rows and columns.
Therefore we have
$$\det M=\sum_iM_{ia}\Delta_{ia}=\sum_i(-1)^{i+a}M_{ia}M^{ia}$$
Since rows and columns play symmetric roles (we may take a transpose, which preserves the determinant), both statements are proved.
\end{proof}
Now we want to find our adjugate (or adjoint) $\tilde{M}$.
Note that the argument above shows (with summation over $i$ understood) that $M_{ib}\Delta_{ia}=\det(M)\delta_{ab}$, so we define $\tilde{M}_{ij}=\Delta_{ji}$.
Thus $\tilde{M}$ is the transpose of the matrix of cofactors, and the relation above becomes
$$\tilde{M}M=M\tilde{M}=\det(M)I$$
as desired.
This justifies the existence of $\tilde{M}$ and $\det(M)$, in the full generality of $\mathbb R^n$ or $\mathbb C^n$, promised at the beginning of this section.
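As a concrete check (an example of our own), take
$$M=\begin{pmatrix}
2&0&1\\
0&1&0\\
1&0&1
\end{pmatrix},\qquad \det M=1.$$
Computing all nine cofactors and transposing gives
$$\tilde{M}=\begin{pmatrix}
1&0&-1\\
0&1&0\\
-1&0&2
\end{pmatrix},$$
and one verifies directly that $M\tilde{M}=\det(M)I=I$, so here $M^{-1}=\tilde{M}$.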
\subsection{System of Linear Equations}
Consider a system of $n$ linear equations in $n$ unknowns $x_i$, written in vector-matrix form $A\underline{x}=\underline{b}$ where $A$ is some $n\times n$ matrix.
If $\det A$ is nonzero, then $A^{-1}$ exists, which gives a unique solution $\underline{x}=A^{-1}\underline{b}$.
But what if $\det A$ is zero?\\
We know that if $\underline{b}\notin\operatorname{Im}A$, then by definition there is no solution.
But if $\underline{b}\in\operatorname{Im}A$, then the (shifted) set $\underline{x_p}+\ker{A}$, where $\underline{x_p}$ is any particular solution satisfying $A\underline{x_p}=\underline{b}$, is exactly the set of all solutions.
This can be seen by linearity (superposition of solutions).\\
Note that the formula $\underline{x_p}+\ker{A}$ also applies to the case $\det A\neq 0$, since in that case $\ker{A}=\{\underline{0}\}$.
If $\underline{u_i}$ is a basis for $\ker{A}$, then the general solution is $\underline{x_p}+a_i\underline{u_i}$.
\begin{example}
Consider $A\underline{x}=\underline{b}$ where
$$A=\begin{pmatrix}
1&1&a\\
a&1&1\\
1&a&1
\end{pmatrix},\underline{b}=\begin{pmatrix}
1\\
c\\
1
\end{pmatrix}$$
Now $\det A=(a-1)^2(a+2)$.
So if $a\notin \{1,-2\}$, then
$$
A^{-1}=\frac{1}{(a-1)(a+2)}
\begin{pmatrix}
-1&a+1&-1\\
-1&-1&a+1\\
a+1&-1&-1
\end{pmatrix}$$
So
$$\underline{x}=A^{-1}\underline{b}=\frac{1}{(1-a)(a+2)}\begin{pmatrix}
2-c-ca\\
c-a\\
c-a
\end{pmatrix},c\in\mathbb R$$
Geometrically, the solution is a single point.
If $a=1$, then
$$A=\begin{pmatrix}
1&1&1\\
1&1&1\\
1&1&1
\end{pmatrix}\implies\operatorname{Im}A=\left\{\lambda\begin{pmatrix}
1\\
1\\
1
\end{pmatrix}:\lambda\in\mathbb R\right\}$$
So there is no solution if $c\neq 1$.
For $c=1$, since $(1,0,0)^\top$ would be a particular solution, the solutions form the plane $(1,0,0)^\top+\ker A$, i.e. the general solution is of the form
$$\begin{pmatrix}
1-\lambda-\mu\\
\lambda\\
\mu
\end{pmatrix},\lambda,\mu\in\mathbb R$$
For $a=-2$, by again looking at the image we see that a solution exists only when $c=-2$, in which case the same analysis gives us the general solution
$$\begin{pmatrix}
1+\lambda\\
\lambda\\
\lambda
\end{pmatrix},\lambda\in\mathbb R$$
\end{example}
Let $\underline{R_1},\underline{R_2},\underline{R_3}$ be the rows of $A$, then
$$A\underline{u}=\underline{0}\iff \forall i\in\{1,2,3\},\underline{R_i}\cdot\underline{u}=0$$
Note that the latter system of equations describes planes through the origin.
We know that the solution of the homogeneous problem is equivalent to finding the kernel of $A$.
If $\operatorname{rank}A=3$, then we must have $\underline{u}=\underline{0}$ since $\{\underline{R_i}\}$ would be independent.
If $\operatorname{rank}A=2$, then $\{\underline{R_i}\}$ spans a plane, so the kernel is the line through the origin normal to that plane.
If $\operatorname{rank}A=1$, then the three normals $\underline{R_i}$ are parallel, so the three planes coincide and the kernel is a plane.\\
Now consider instead $A\underline{u}=\underline{b}$; this holds iff $\underline{R_i}\cdot \underline{u}=b_i$ for each $i$.
In this case, if $\operatorname{rank}A=3$, then the three planes described by the system intersect in a single point, so there is a unique solution.
If $\operatorname{rank}A=2$, then the planes may intersect in a common line, in which case the solution set is a line; but this need not happen, and then there is no solution.
If $\operatorname{rank}A=1$, then the planes may coincide, in which case they are all the same and we have a whole plane of solutions.
But again this need not be the case:
two of them coinciding is not enough, and otherwise there is no solution.\\
But how do you solve the equations and find the kernels systematically?
\subsection{Gaussian Elimination and Echelon Form}
Consider $A\underline{x}=\underline{b}$ where $\underline{x}\in\mathbb R^n,\underline{b}\in\mathbb R^m$ and $A$ is an $m\times n$ matrix, then we can solve it by Gaussian Elimination.
In general, we can use row operations (including reordering rows) to rewrite the original system in a simpler form.
Note that when we perform row operations, we must apply them simultaneously to the matrix and to the vector $\underline{b}$.
Our aim is to transform the matrix into the following (row echelon) form
$$M=\begin{pmatrix}
M_{11}&\star&\star&\dots&\star&\star&\dots&\star\\
0&M_{22}&\star&\dots&\star&\star&\dots&\star\\
0&0&M_{33}&\dots&\star&\star&\dots&\star\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\
0&0&0&\dots&M_{kk}&\star&\dots&\star\\
0&0&0&\dots&0&0&\dots&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\
0&0&0&\dots&0&0&\dots&0
\end{pmatrix}$$
Note that the row rank equals the column rank.
One can prove by induction that $M$ can indeed be obtained from $A$ by finitely many row operations.
Note that, when $A$ is square, $\det A=\pm\det M$, with the sign determined by the number of row interchanges used.
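For example (an illustration of our own), the row operations $\underline{R_2}\mapsto\underline{R_2}-2\underline{R_1}$, $\underline{R_3}\mapsto\underline{R_3}-3\underline{R_1}$ and then $\underline{R_3}\mapsto\underline{R_3}-\underline{R_2}$ turn
$$A=\begin{pmatrix}
1&2&1\\
2&5&3\\
3&7&4
\end{pmatrix}\quad\text{into}\quad
M=\begin{pmatrix}
1&2&1\\
0&1&1\\
0&0&0
\end{pmatrix},$$
so $\operatorname{rank}A=2$ and $\det A=\det M=0$.
Applying the same operations to $\underline{b}=(1,3,4)^\top$ gives $(1,1,0)^\top$, so $A\underline{x}=\underline{b}$ is consistent: back substitution gives $x_2=1-x_3$ and $x_1=-1+x_3$, i.e.\ the general solution $\underline{x}=(-1,1,0)^\top+\lambda(1,-1,1)^\top$, $\lambda\in\mathbb R$.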
"alphanum_fraction": 0.6593690356,
"avg_line_length": 54.4770642202,
"ext": "tex",
"hexsha": "8856f88b1a8836f163ee0562983ad359c206ce76",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "7fc43486ec5276262d4058c9daaea12affdb6bac",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "david-bai-notes/IA-Vectors-and-Matrices",
"max_forks_repo_path": "5/det.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7fc43486ec5276262d4058c9daaea12affdb6bac",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "david-bai-notes/IA-Vectors-and-Matrices",
"max_issues_repo_path": "5/det.tex",
"max_line_length": 242,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "7fc43486ec5276262d4058c9daaea12affdb6bac",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "david-bai-notes/IA-Vectors-and-Matrices",
"max_stars_repo_path": "5/det.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6072,
"size": 17814
} |
\documentclass{itatnew}
\usepackage{paralist} % inparaenum support
\usepackage{varioref} % added by Viliam for \vref
\usepackage{xcolor} % added by Viliam for \textcolor
\usepackage{caption}
\usepackage{subcaption}
\usepackage{graphicx}
\usepackage{textcomp} % used for degrees celsius
\usepackage{booktabs}
\usepackage[british]{babel}
% ==========================================
% Custom commands (by Viliam)
% ==========================================
\newcommand{\FOCAL}[3]{ % usage: \FOCAL{R}{M}{op}
\ifmmode
#1{\stackrel{\mathtt{#3}}{\circ}}#2
\else
\begin{math}\FOCAL{#1}{#2}{#3}\end{math}
\fi
}
\newcommand{\SOHUP}{
\ifmmode
SoH^\uparrow
\else
\begin{math}\SOHUP\end{math}
\fi
}
\newcommand{\SOHDOWN}{
\ifmmode
SoH^\downarrow
\else
\begin{math}\SOHDOWN\end{math}
\fi
}
% ==========================================
\begin{document}
\title{Stability of Hot Spot Analysis}
\author{
Marc Gassenschmidt \and
Viliam Simko \and
Julian Bruns
\\
\email{[email protected]},
\email{[email protected]},
\email{[email protected]}
}
\institute{
FZI Forschungszentrum Informatik\\
am Karlsruher Institut f\"ur Technologie\\
76131, Haid-und-Neu-Str. 10-14\\
Karlsruhe, Germany
}
\maketitle % typeset the title of the contribution
\begin{abstract}
Hot spot analysis is essential for geo-statistics.
It supports decision making by detecting points
as well as areas of interest in comparison to their
neighbourhood. However, these methods are dependent
on different parameters, ranging from the resolution
of the study area to the size of their neighbourhood.
This dependence can lead to instabilities of the detected
hotspots, where the results can highly vary between
different parameters. A decision maker can therefore ask
how valid the analysis actually is.
%
In this study, we examine the impact of key parameters
on the stability of the hotspots, namely the size of
the neighbourhood, the resolution and the size of the
study area, as well as the influence of the ratio between
those parameters.
%
We compute the hotspots with the well known Getis-Ord ($G^*$)
statistic as well as its modification, the \emph{Focal $G^*$} statistic.
We measure the stability of the hotspot analysis using
a recently introduced \emph{stability of hotspots} metric (SoH)
and compare the results to intuitive visual analysis.
%
We evaluate the results on real world data with the well-known
yellow cab taxi data set from New York, Manhattan.
Our results indicate a negative impact on the stability with
a reduction of the size of the neighbourhood as well as
a reduction of the size of the study area, regardless of
the resolution.
\end{abstract}
\section{Introduction}
The goal of hotspot analysis is the detection and identification of interesting areas. It achieves this goal by computing statistically significant deviations from the mean value of a given study area. This allows a decision maker to easily identify areas of interest and to focus on them in subsequent data analysis or decision making. Typical applications range from crime detection and the identification of disease outbreaks to urban heat islands. In such applications, scarce resources are often allocated only to the identified hotspots, or the hotspots are used as the basis for the allocation. The general approach is an unsupervised learning method similar to a cluster analysis.
But, similar to a cluster analysis, the identified hotspots highly depend on
the parametrization of the detection method. The identified areas as well as
their shapes can vary greatly. This volatility can lead to a loss of trust in
the results or to suboptimal allocations of scarce resources. Therefore, it is
necessary to measure and evaluate the stability of a hotspot analysis as well
as the different parametrizations.
In our initial work~\cite{SoH-GI-Forum}, we introduced a method to measure the
stability of hotspots, the \emph{stability of hotspots} metric (SoH) and showed
its use on the basis of temperature data.
Here, we build upon that work and examine the impact of the
different instantiations of the most typical parameters in more detail. We use
the well known
Getis-Ord statistic \cite{Ord.1995}, the standard $G^*$, and a modification of
this statistic, the Focal~$G^*$~\cite{SoH-GI-Forum}.
The parameters examined are
\begin{inparaenum}[(1)]
\item the size of the study area, given as a focal matrix,
\item the pixel resolution, i.e.\ the aggregation level, and
\item the size of the neighbourhood, given as a weight matrix.
\end{inparaenum}
By varying these parameters, we can compare the stability for all possible
combinations and isolate the effect of a single parameter.
We evaluate the metrics on the well-known dataset of New York Taxi drives to
show the applicability to real world use cases as well as to foster
reproducibility.
\section{Related Work}
\subsection{Quality of Clustering}
The problem of assessing quality in unsupervised learning is well known. In the case of the k-means algorithm, the quality of the clustering mostly depends on the value of $k$, and a misspecification can lead to highly irregular clusters. In a simple 2D clustering they can easily be recognized by visual analysis, but in higher dimensions this is impossible. One method to measure the quality of such a clustering is the compactness of the clusters, see e.g.\ \cite{CompactnessDataClustering}. This enables the comparison between different clusterings. Another possibility is the Silhouette Coefficient by Kaufman and Rousseeuw (1990). This metric measures the similarity of objects in a cluster in comparison to other clusters. For density-based clustering, e.g.\ DBSCAN~\cite{Ester96adensity-based}, OPTICS~\cite{Ankerst:1999:OOP:304181.304187} gives a simple method to tune the essential parameter of this clustering. This is only a small overview of methods to influence and measure the quality of different clustering methods, but it shows that this problem is not easily solved and depends on the chosen algorithm. To our knowledge, there is no general method for measuring the overall stability of a clustering.
\subsection{Hot Spot Analysis}
The goal of hotspot analysis is the detection of interesting areas as well as patterns in spatial information. One of the most fundamental approaches is Moran's I~\cite{MoranI}, which tests whether or
not a spatial dependency exists. This gives information on global
dependencies in a data set. Several geo-statistical
tests build upon this hypothesis test. The most well known are the Getis-Ord statistic~\cite{Ord.1995}
and LISA~\cite{Anselin.1995}. In both cases, the global statistic of
Moran's I is applied in a local context. The goal is not only to detect global
values, but to focus on local hotspots and to measure the significance
of those local areas. A more in-depth overview of methods to identify and visualize spatial patterns and areas of interest can be found in \cite{shekhar2011identifying}.
\section{Stability of Hot Spot Analysis}
\label{sec:Metric}
Existing methods for determining hot spots are dependent on the parametrization
of the weight matrix as well as on the size of the study area. Intuitively,
increasing the size of a weight matrix has a ``blurring'' effect on the raster
(Figure~\ref{fig:BlurExample:a}), whereas decreasing the size can be
seen as a form of ``sharpening'' (Figure~\ref{fig:BlurExample:b}).
%TODO: add image showing and blurring/sharpening effects
%As we can see, hot spots are oftentimes disappearing or appearing unrelated to
%previously found hot spots. While these computations indeed show hot spots and
%the results are correct, they lack stability.
For a data analyst, when exploring the data interactively by choosing different
filter sizes (weight matrices) or point aggregation strategies (pixel sizes), it
is important that the position and size of a hot spot changes in a predictable
manner. We formalize the intuition in our stability metric.
%
%Consider the real-world example depicted in Figure~\ref{fig:TempMaps}.
%%TODO: thermal flight -> NY taxi
%The temperature map of a morning thermal flight dataset
%(Figure~\ref{fig:TempMaps:a}) has been processed using $G^*$.
%
%%TODO: thermal flight -> NY taxi
%%TODO: need two images - G* and FocalG*
\begin{figure}[htp]
\centering
\begin{subfigure}{.3\linewidth}
\caption{Small matrix}
\includegraphics[width=\linewidth]{images/gen-raw-blur-gstar-2}
\label{fig:BlurExample:a}
\end{subfigure}
\begin{subfigure}{.3\linewidth}
\caption{Large matrix}
\includegraphics[width=\linewidth]{images/gen-raw-blur-gstar-8}
\label{fig:BlurExample:b}
\end{subfigure}
\caption{
Example of G* statistics computed for the same raster with different weight
matrix sizes.
}
\end{figure}
We define a hot spot found in comparably more coarse resolutions as
\emph{parent} (larger weight matrix or larger pixel size) and in finer
resolutions as \emph{child} (smaller weight matrix or smaller pixel size).
Stability assumes that every parent has at least one child and that each child
has one parent. For a perfectly stable interaction, it can easily be seen that
the connection between parent and child is an injective function and the one between
child and parent a surjective function. To measure the stability,
we propose a metric called the \emph{Stability of Hotspots} (SoH). It measures
the deviation from a perfectly stable transformation between the parent and
child rasters (like frames of an animation).
In its downward property (from parent to child, injective) it is defined as:
\begin{equation}
\label{eq:SoH-down}
\SOHDOWN
= \frac{ParentsWithChildNodes}{Parents}
= \frac{|Parents \cap Children|}{|Parents|}
\end{equation}
And for its upward property (from child to parent, surjective):
\begin{equation}
\label{eq:SoH-up}
\SOHUP
= \frac{ChildrenWithParent}{Children}
= 1 - \frac{|Children - Parents|}{|Children|}
\end{equation}
where $ParentsWithChildNodes$ is the number of parents that have at least one
child, $Parents$ is the total number of parents,
$ChildrenWithParent$ is the number of children that have a parent, and $Children$ is the total
number of children.
The SoH is defined for a range between 0 and 1, where 1 represents a perfectly
stable transformation while 0 would be a transformation with no stability at
all.
If $|Children|=0$ or $|Parents|=0$, the SoH is defined to be 0.
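For illustration (with numbers of our own choosing): if 4 of 5 parent hotspots contain at least one child hotspot, then $\SOHDOWN=4/5=0.8$; if 6 of 8 child hotspots have a parent, then $\SOHUP=1-2/8=0.75$.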
%\section{Focal Getis-Ord}
%\label{sec:FocalGetisOrd}
%
%In paper~\cite{SoH-GI-Forum}, we have defined the measure Focal G*.
%We use the notation \FOCAL{R}{M}{op} to denote a focal
%operation $op$ applied on a raster $R$ with a focal window determined by a
%matrix $M$. This is rougly eqivalent to a command \verb|focal(x=R, w=M,
%fun=op)|
%from package \emph{raster} in the R programming language \cite{cran:raster}.
%\begin{definition}[$G^*$ function on rasters]
% \label{def:GenericGetisOrdFunc}
%
% The function $G^*$ can be expressed as a raster operation:
%
% \begin{displaymath}
% G^*(R, W, st) =
% \frac{
% \FOCAL{R}{W}{sum} - M*\sum_{w \in W}{w}
% }{
% S \sqrt{
% \frac{
% N*\sum_{w \in W}{w^2} - (\sum_{w \in W}{w})^2
% }{
% N - 1
% }
% }
% }
% \end{displaymath}
% \noindent where:
% \begin{itemize}
%
% \item $R$ is the input raster.
%
% \item $W$ is a weight matrix of values between 0 and 1.
%
% \item $st = (N, M, S)$ is a parametrization specific to a particular
%version
% of the $G^*$ function. (Def.\vref{def:StdGetisOrdVersion} and
% \vref{def:FocalGetisOrdVersion}).
%
% \end{itemize}
%\end{definition}
%
%\begin{definition}[Standard $G^*$ parametrization]
% \label{def:StdGetisOrdVersion}
%
% Computes the parametrization $st$ as global statistics for all pixels in the
% raster $R$:
%
% \begin{itemize}
% \item $N$ represents the number of all pixels in $R$.
% \item $M$ represents the global mean of $R$.
% \item $S$ represents the global standard deviation of all pixels in $R$.
% \end{itemize}
%\end{definition}
%
%\begin{definition}[Focal $G^*$ parametrization]
% \label{def:FocalGetisOrdVersion}
%
% Let $F$ be a boolean matrix such that: $all(dim(F) \geq dim(W))$.
% This version uses focal operations to compute per-pixel statistics given by
% the focal neighbourhood $F$ as follows:
%
% \begin{itemize}
%
% \item $N$ is a raster computed as a focal operation \FOCAL{R}{F}{sum}. Each
% pixel represents the number of pixels from $R$ convoluted with the matrix
% $F$.
%
% \item $M$ is a raster computed as a focal mean \FOCAL{R}{F}{mean}, thus
%each
% pixel represents a mean value of its $F$-neighbourhood.
%
% \item $S$ is a raster computed as a focal standard deviation
% \FOCAL{R}{F}{sd}, thus each pixel represents a standard deviation of its
% $F$-neighbourhood.
%
% \end{itemize}
%
%\end{definition}
\section{Evaluation}
\subsection{Dataset}
We evaluate our data on the New York city yellow cab data set
\footnote{$http://www.nyc.gov/html/tlc/html/about/trip\_record\_data.shtml$}.
This data set includes all taxi trips of the yellow cabs in New York City,
with information such as location, number of passengers, and much more. In this study, we consider the
total pickups over January 2016 in the Manhattan area.
The borders of the rasters are (40.699607\textdegree\ N, -74.020265\textdegree\ E)
and (40.769239\textdegree\ N, -73.948286\textdegree\ E) in WGS84 coordinates.
By using this data set, we reduce the computational effort while still
being able to show the applicability on real world data and problems.
\subsection{Treatments}
%\begin{definition}[Evaluation Run]
% \label{def:EvalRun}
% We define a single evaluation run as a tuple:
% \begin{displaymath}
% E = (V, m, p, w)
% \end{displaymath}
% \noindent where:
% \begin{itemize}
%
% \item $V$ is the input dataset of points, representing the taxi dropoffs
%in
% our case.
%
% \item $m$ is the metric used, in our case either \SOHUP or \SOHDOWN.
%
% \item $p$ represents the pixel size for aggregating points from $V$,
% e.g. $100 \times 100$ meters.
%
% \item $w$ represents the size of a weight matrix.
% In our case, we chose a weight matrix depicted in
% Figure~\ref{fig:ExampleMatrices}(a) for both the G* and FocalG* cases.
% \end{itemize}
%\end{definition}
%Figure~\ref{fig:ComparisonMorning} and Figure~\ref{fig:ComparisonEvening} show
%Standard and Focal $G^*$ computations for both morning and eventing datasets
%with weight matrix $W$ of size 3, 5, 7, 9, 15 and 31. In this figures, when
%computing Focal
%$G^*$, the focal matrix $F$ has a constant size of $61{\times}61$ cells.
%Example weight matrix $W$ and focal matrix $F$ are depicted in
%Figure~\ref{fig:ExampleMatrices}.
Aggregation level 1 means that we aggregate every point within a cell of size $100\times100 \cdot 0.000001$ into one pixel.
For aggregation level $Z$, we aggregate $Z\times Z$ pixels of aggregation level 1 into one new pixel.
%
For our measurements we specified the following data series:
\begin{itemize}
\item $W_i$: the weight matrix size $W$ ranges from 3 to 47 with a step size of 4.
\item $F_j$: the focal matrix size $F$ ranges from 17 to 137 with a step size of 12; $F$ is ignored for $G^*$ but not for Focal $G^*$.
\item $R_z$: the aggregation level ranges from 1 to 6 with a step size of 1; $R$ represents the raster.
\end{itemize}
\begin{definition}
\begin{displaymath}
\SOHUP(child, parent)
\end{displaymath}
\begin{displaymath}
\SOHDOWN(child, parent)
\end{displaymath}
The SoH for a single evaluation run in the weight dimension is defined as:
\begin{eqnarray*}
child & = & G^*(R_z, W_{3+i \cdot 4}, st, F_{17+j \cdot 12}) \\
parent & = & G^*(R_z, W_{3+(i+1) \cdot 4}, st, F_{17+j \cdot 12})
\end{eqnarray*}
The SoH for a single evaluation run in the focal dimension is defined as:
\begin{eqnarray*}
child & = & G^*(R_z, W_{3+i \cdot 4}, st, F_{17+j \cdot 12}) \\
parent & = & G^*(R_z, W_{3+i \cdot 4}, st, F_{17+(j+1) \cdot 12})
\end{eqnarray*}
The SoH for a single evaluation run in the aggregation dimension is defined
as:
\begin{eqnarray*}
child & = & G^*(R_z, W_{3+i \cdot 4}, st, F_{17+j \cdot 12}) \\
parent & = & G^*(R_{z+1}, W_{3+i \cdot 4}, st, F_{17+j \cdot 12})
\end{eqnarray*}
\begin{displaymath}
z \in [1,6], i,j \in [0,10], G^* \in [Standard, Focal]
\end{displaymath}
\end{definition}
Therefore we calculate $10 \cdot 10 \cdot 5 + 10 \cdot 5=550$ results for each dimension.
The conditions $\dim(F) > \dim(W)$ and $\dim(F) < \dim(R)$ must hold,
which leads, e.g., to 460 results in the $z$-dimension.
We vary the weight matrix $W$, the focal matrix $F$ and the aggregation level, as motivated in the introduction.
To compare the impact of each of these parameters, we compute all variations
over two parameters while holding one parameter fixed. We then calculate the mean as
well as the standard deviation of the SoH for the fixed value based on the two
other parameters. This allows us to isolate the impact of the variation of the
single parameter. The results are plotted in
Figures~\ref{fig:SoHFocal}--\ref{fig:SoHZoom}, one image for each fixed
parameter and each direction of the SoH.
The hotspots on which the SoH is applied are computed by the Focal $G^*$ and standard $G^*$ methods. The Focal $G^*$ allows us to isolate the impact that the size of the study area has on the stability of the results without recomputing the total dataset.
\subsection{Clumping}
The SoH needs a parent and a child raster of clusters, and clusters can be identified in
different ways. For $G^*$ a fixed cut-off on the $z$-scores can be used: values
above 2.58 or below $-2.58$ correspond to a significance level of less than 1\%. The values of Focal $G^*$ depend on the
focal size $F$; therefore, we decide that all values in the top quantile belong to
clusters. For clustering, we group neighbouring pixels with values in the top
quantile into one cluster. Neighbours are pixels with touching edges, which
gives every pixel 8 neighbours. Fig.~\ref{fig:Clumping} depicts the
clusters using different colours. In this figure, Focal~$G^*$ has smaller
clusters, which are distributed over the complete raster.
\begin{figure}[htp]
\centering
\begin{subfigure}{.7\linewidth}
\caption{G* clustering difference}
\includegraphics[width=\linewidth]{images/gen-metric-example-1}
\label{fig:ClustDiffGstar}
\end{subfigure}
\hspace{3em}
\begin{subfigure}{.7\linewidth}
\caption{Focal G* clustering difference}
\includegraphics[width=\linewidth]{images/gen-metric-example-2}
\label{fig:ClustDiffFocalGstar}
\end{subfigure}
\caption{
Example clustering for G* and Focal G* used as a input for SoH computation.
}
\label{fig:Clumping}
\end{figure}
\subsection{Aggregate}
The aggregation level is a slice plane through our three-dimensional parameter space.
This is an example of how the results can look, with visual representations of the calculated
rasters and the raw data.
\begin{definition} Aggregation run:
\begin{eqnarray*}
child & = & G^*(R_z, W_{11}, st, F_{41}) \\
parent & = & G^*(R_{z+1}, W_{11}, st, F_{41})
\end{eqnarray*}
\begin{displaymath}
z \in [1,6], G^* \in [Standard, Focal]
\end{displaymath}
\end{definition}
The results can be found in Figure~\ref{fig:Zoom}. It can be seen that the values increase with an
increase of the aggregation level.
As the aggregation level increases, the number of clusters decreases. It can be seen that
small clusters disappear and some clusters grow into large ones. From aggregation level~2 to
aggregation level~3 there is no big change in the images of Focal $G^*$, which is consistent with our results.
For the upward property, $G^*$ is always better than Focal
$G^*$, in line with the results from Figure~\ref{fig:SoHZoom}. Focal $G^*$
reaches the same result as $G^*$ at aggregation level~6. For the downward
property, Focal $G^*$ leads to better results at aggregation level~6.
\begin{figure*}[htp]
\centering
\begin{tabular}{cccccccc}
\multicolumn{8}{l}{
Aggregation levels computed from original raster (log scale)
}\\
\hline
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
\includegraphics[width=4.6em]{images/gen-rawdata-1}&
\includegraphics[width=4.6em]{images/gen-rawdata-2}&
\includegraphics[width=4.6em]{images/gen-rawdata-3}&
\includegraphics[width=4.6em]{images/gen-rawdata-4}&
\includegraphics[width=4.6em]{images/gen-rawdata-5}&
\includegraphics[width=4.6em]{images/gen-rawdata-6}&
\includegraphics[width=4.6em]{images/gen-rawdata-7}&
\includegraphics[width=4.6em]{images/gen-rawdata-8}
\end{tabular}
\vspace{1em}
\begin{tabular}{cccccccc}
\multicolumn{8}{l}{G* computed from multiple aggregation levels} \\
\hline
\includegraphics[width=4.6em]{images/gen-raw-zoom-gstar-1}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-gstar-2}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-gstar-3}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-gstar-4}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-gstar-5}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-gstar-6}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-gstar-7}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-gstar-8}\\
\includegraphics[width=4.6em]{images/gen-demo-zoom-gstar-1}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-gstar-2}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-gstar-3}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-gstar-4}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-gstar-5}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-gstar-6}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-gstar-7}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-gstar-8}\\
\end{tabular}
\vspace{1em}
\begin{tabular}{cccccccc}
\multicolumn{8}{l}{Focal G* computed from multiple aggregation levels} \\
\hline
\includegraphics[width=4.6em]{images/gen-raw-zoom-focalgstar-1}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-focalgstar-2}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-focalgstar-3}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-focalgstar-4}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-focalgstar-5}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-focalgstar-6}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-focalgstar-7}&
\includegraphics[width=4.6em]{images/gen-raw-zoom-focalgstar-8}\\
\includegraphics[width=4.6em]{images/gen-demo-zoom-focalgstar-1}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-focalgstar-2}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-focalgstar-3}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-focalgstar-4}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-focalgstar-5}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-focalgstar-6}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-focalgstar-7}&
\includegraphics[width=4.6em]{images/gen-demo-zoom-focalgstar-8}\\
\end{tabular}
\vspace{2em}
\includegraphics[width=.45\linewidth]{images/gen-zoom-sohup-1}
\hspace{1em}
\includegraphics[width=.45\linewidth]{images/gen-zoom-sohdown-1}
\caption{
This image shows all aggregation levels for G* and Focal G* together
with two metrics -- $SoH^\uparrow$ and $SoH^\downarrow$.
}
\label{fig:Zoom}
\end{figure*}
\subsection{Blur}
We also give an example for the variation of the weight size $W$.
Blur is a slice plane through our three-dimensional parameter space:
$W$ is varied from 7 to 29 with a step size of 2, while $F$ and the aggregation level are fixed.
\begin{definition} Blur run:
\begin{eqnarray*}
child & = & G^*(R_3, W_{7+i \cdot 2}, st, F_{41}) \\
parent & = & G^*(R_3, W_{7+(i+1)\cdot 2}, st, F_{41})
\end{eqnarray*}
\begin{displaymath}
i \in [0,11], G^* \in [Standard, Focal]
\end{displaymath}
\end{definition}
The blur results are plotted in Figure~\ref{fig:Blur}. The weight matrix is
shown in the first row, with increasing size. The second and fourth rows show how
$G^*$ and Focal $G^*$ change with $W$. This example is a slice plane of the
weight dimension plotted in Figure~\ref{fig:SoHWeight}. It can be seen that an
increase of the weight size leads in general to better results for $G^*$.
It can also be seen that for Focal $G^*$ the clusters are more distributed over the raster.
\begin{figure*}[htp]
\centering
\begin{tabular}{cccccccc}
\multicolumn{8}{l}{Multiple weight matrix sizes} \\
\hline
\includegraphics[width=4.6em]{images/gen-blur-wmatrices-1}&
\includegraphics[width=4.6em]{images/gen-blur-wmatrices-2}&
\includegraphics[width=4.6em]{images/gen-blur-wmatrices-3}&
\includegraphics[width=4.6em]{images/gen-blur-wmatrices-4}&
\includegraphics[width=4.6em]{images/gen-blur-wmatrices-5}&
\includegraphics[width=4.6em]{images/gen-blur-wmatrices-6}&
\includegraphics[width=4.6em]{images/gen-blur-wmatrices-7}&
\includegraphics[width=4.6em]{images/gen-blur-wmatrices-8}
\end{tabular}
\vspace{1em}
\begin{tabular}{cccccccc}
\multicolumn{8}{l}{G* computed using different weight matrix sizes} \\
\hline
\includegraphics[width=4.6em]{images/gen-raw-blur-gstar-1}&
\includegraphics[width=4.6em]{images/gen-raw-blur-gstar-2}&
\includegraphics[width=4.6em]{images/gen-raw-blur-gstar-3}&
\includegraphics[width=4.6em]{images/gen-raw-blur-gstar-4}&
\includegraphics[width=4.6em]{images/gen-raw-blur-gstar-5}&
\includegraphics[width=4.6em]{images/gen-raw-blur-gstar-6}&
\includegraphics[width=4.6em]{images/gen-raw-blur-gstar-7}&
\includegraphics[width=4.6em]{images/gen-raw-blur-gstar-8}\\
\includegraphics[width=4.6em]{images/gen-demo-blur-gstar-1}&
\includegraphics[width=4.6em]{images/gen-demo-blur-gstar-2}&
\includegraphics[width=4.6em]{images/gen-demo-blur-gstar-3}&
\includegraphics[width=4.6em]{images/gen-demo-blur-gstar-4}&
\includegraphics[width=4.6em]{images/gen-demo-blur-gstar-5}&
\includegraphics[width=4.6em]{images/gen-demo-blur-gstar-6}&
\includegraphics[width=4.6em]{images/gen-demo-blur-gstar-7}&
\includegraphics[width=4.6em]{images/gen-demo-blur-gstar-8}\\
\end{tabular}
\vspace{1em}
\begin{tabular}{cccccccc}
\multicolumn{8}{l}{Focal G* computed using different weight matrix sizes} \\
\hline
\includegraphics[width=4.6em]{images/gen-raw-blur-focalgstar-1}&
\includegraphics[width=4.6em]{images/gen-raw-blur-focalgstar-2}&
\includegraphics[width=4.6em]{images/gen-raw-blur-focalgstar-3}&
\includegraphics[width=4.6em]{images/gen-raw-blur-focalgstar-4}&
\includegraphics[width=4.6em]{images/gen-raw-blur-focalgstar-5}&
\includegraphics[width=4.6em]{images/gen-raw-blur-focalgstar-6}&
\includegraphics[width=4.6em]{images/gen-raw-blur-focalgstar-7}&
\includegraphics[width=4.6em]{images/gen-raw-blur-focalgstar-8}\\
\includegraphics[width=4.6em]{images/gen-demo-blur-focalgstar-1}&
\includegraphics[width=4.6em]{images/gen-demo-blur-focalgstar-2}&
\includegraphics[width=4.6em]{images/gen-demo-blur-focalgstar-3}&
\includegraphics[width=4.6em]{images/gen-demo-blur-focalgstar-4}&
\includegraphics[width=4.6em]{images/gen-demo-blur-focalgstar-5}&
\includegraphics[width=4.6em]{images/gen-demo-blur-focalgstar-6}&
\includegraphics[width=4.6em]{images/gen-demo-blur-focalgstar-7}&
\includegraphics[width=4.6em]{images/gen-demo-blur-focalgstar-8}\\
\end{tabular}
\vspace{2em}
\includegraphics[width=.45\linewidth]{images/gen-blur-sohup-1}
\hspace{1em}
\includegraphics[width=.45\linewidth]{images/gen-blur-sohdown-1}
\caption{
This image shows different weight matrix sizes for G* and Focal G* together
with two metrics -- $SoH^\uparrow$ and $SoH^\downarrow$.
}
\label{fig:Blur}
\end{figure*}
\section{Results and Discussion}
% Focal size
% Increase of focal size leads to better results in the upward property
% Standard deviation decrease with higher focal size
% No standard deviation af F 17x17 because of missing clusters
% The downward property reaches its maximum at 65x65
%
% Conclusion: F should be around 65x65 or higher
\subsection{Impact of study area}
The first parameter we want to isolate is the size of the study area. This enables us to discuss the optimal size of the study area for the given research/decision question.
To do so, we fix the focal size $F$ and use it as the x-axis, and plot on the
y-axis the mean value as well as the mean $\pm$ one standard deviation of the SoH over all
other parameter combinations.
In this scenario, only the Focal $G^*$ can produce varying results, as the standard $G^*$ by definition always uses the total size of the study area.
In Figure~\ref{fig:SoHFocal} the \SOHUP and \SOHDOWN are shown for $G^*$ and
Focal $G^*$.
Higher values indicate that the hot spots found are more stable.
The \SOHUP in Figure~\ref{fig:fUp} grows with an increase of the focal size:
a larger focal size leads to better results for \SOHUP.
With a higher focal size, the dotted line, which represents the standard deviation, also becomes smaller. This indicates that an increase in the size of the study area does not only lead to an overall increase of stability, but also reduces the volatility caused by variations of the other parameters.
When $F$ and $W$ have similar size, there are no clusters, because the values show little variation.
\SOHDOWN (Figure~\ref{fig:fDown}) shows a different picture than \SOHUP.
It reaches its maximum for a focal size $F$ of $65\times65$ in our evaluation set.
The standard deviation does not decrease with an increase of the
focal size.
\begin{figure}[htp]
\begin{subfigure}{\linewidth}
\caption{SoH up for focal size}
\includegraphics[width=\linewidth]{images/gen-focal-1}
\label{fig:fUp}
\end{subfigure}
\hspace{1em}
\begin{subfigure}{\linewidth}
\caption{SoH down for focal size}
\includegraphics[width=\linewidth]{images/gen-focal-2}
\label{fig:fDown}
\end{subfigure}
\caption{SoH for focal matrix size}
\label{fig:SoHFocal}
\end{figure}
%
% Weight size
% Increase of weight leads to increase of downward property
% Small break downs for G* for upward and downward property
% For the upward property Focal G* is better than G*
%
% Conclusion: Higher weight size leads to better results. There are parametrisations of Focal G* which are better than G*
\subsection{Impact of Neighbourhood size}
Next, we want to isolate the effect of the neighbourhood size by fixing the weight matrix size $W$
and using it as the x-axis. From the impact of the weight matrix $W$ we gain information on how far the
interconnectedness between different points reaches, as indicated in \cite{Getis.1992}.
The results for the weight matrix can be seen in Figure~\ref{fig:wUp} for \SOHUP and Figure~\ref{fig:wDown} for \SOHDOWN.
It can be seen that, in general, for both evaluation metrics as well as for the different hot spot methods,
an increase in the size of the weight matrix $W$ leads to more stable results, and that Focal $G^*$
is slightly more stable in most cases in terms of the mean values.
An interesting phenomenon is the dip in stability for the standard $G^*$ at the value of 39. Despite our
efforts we could not determine the reason behind this result, as no overall trend could be extracted.
The standard deviation similarly shows no particular trend, but it increases for values above 37.
We assume that the reason behind this effect is the inclusion of the water near Manhattan, which leads to a stronger differentiation between areas near the water and areas further inside the city. By increasing the weight matrix, the number of points which are influenced increases. The other parameters can regulate this impact, and therefore the variance is larger.
\begin{figure}[htp]
\begin{subfigure}{\linewidth}
\caption{SoH up for weight size}
\includegraphics[width=\linewidth]{images/gen-weight-1}
\label{fig:wUp}
\end{subfigure}
\hspace{1em}
\begin{subfigure}{\linewidth}
\caption{SoH down for weight size}
\includegraphics[width=\linewidth]{images/gen-weight-2}
\label{fig:wDown}
\end{subfigure}
\caption{SoH for weight size}
\label{fig:SoHWeight}
\end{figure}
%
% Zoom
% G* alwas better than Focal G* for upward and downward property
% Focal G* has different focal sizes because zoom is compared
% At zoom level 4 G* increase much more than Focal G*
%
% Conclusion: G* is better than Focal G* in the zoom dimension, but the focal size is fixed. The zoom dimension has the least
% influence on the value
\subsection{Impact of Aggregation level}
Finally, we isolate the impact of the different aggregation levels on the stability of the hot spot analysis.
We fix the aggregation level and use it as the x-axis. This allows us to examine the impact
the resolution of a data set has on the results, and it indirectly allows us to reduce the
computational effort of future computations by using the largest aggregation that is still useful.
The results can be seen in Figure~\ref{fig:SoHZoom}.
First, we can see that for the aggregation level, in contrast to the previous
results, the standard $G^*$ seems to be more stable.
For \SOHUP there is a large increase between aggregation levels 4 and 5. This increase cannot be seen for \SOHDOWN.
One can see that $G^*$ is always better than Focal $G^*$.
This could be because the target area of Focal $G^*$ increases with every aggregation step.
\begin{figure}[htp]
\begin{subfigure}{\linewidth}
\caption{SoH up for aggregation level}
\includegraphics[width=\linewidth]{images/gen-aggregation-1}
\label{fig:zUp}
\end{subfigure}
\hspace{1em}
\begin{subfigure}{\linewidth}
\caption{SoH down for aggregation level}
\includegraphics[width=\linewidth]{images/gen-aggregation-2}
\label{fig:zDown}
\end{subfigure}
\caption{SoH for aggregation level}
\label{fig:SoHZoom}
\end{figure}
\section{Conclusions and Future Work} \label{sec:Conclusion}
In this work we examined the influence of different parameters on the stability
of hot spot analysis on the basis of the Getis-Ord and Focal Getis-Ord
statistics. We validated our results with the SoH metric on a well-known
real-world data set to ensure external validity and easy replication.
Based on the results we can, within the restrictions of this work, offer several
insights for the future use of hot spot analysis. First, the greater the study area, the
more stable the results seem to be. The same relation seems to hold for the
size of the weight matrix: the greater the compared area, the more stable
the results. Given our study area, the focal matrix should have a size of at least
$65\times65$ tiles to obtain a good trade-off between \SOHUP and \SOHDOWN. In the
case of the aggregation level, the stability does not seem to be impacted in the
case of the Focal $G^*$, but for the standard $G^*$ statistic we see an increase
in stability for higher aggregation levels. We assume that the high aggregation
of data and the inherent increase of examined area reduce the impact of
outliers, but reduce the potential to differentiate. The worse results for the
Focal $G^*$ statistic compared to the $G^*$ statistic are most likely a result
of the increased focus on a smaller region. A more in-depth look at the
interaction between focal size and aggregation level would be an interesting
question for the future, but this is beyond the scope of this work.
With this work, we made a step further to the optimal parametrisation for $G^*$
and Focal $G^*$.
Our results indicate that the parametrization of $G^*$ and Focal $G^*$, beyond
what is defined as parent and child, has a huge impact on the
stability of hotspots. We can see an increase of up to 0.6 in \SOHUP when varying, e.g., the focal matrix size.
This emphasizes the importance of a metric for the stability of hotspots. But this work also shows the shortcomings of the existing metric, as the trade-off between \SOHUP and \SOHDOWN is not yet fully explored and should be the focus of future work.
Another interesting field for future research is the stability of spatio-temporal hot spots and their different parametrizations. As $G^*$ is often applied with regard to temporal impacts, the efficient computation of the focal $G^*$ for spatio-temporal and the impact on stability is the second main path for future research.
We evaluated our data on the aggregate of a single month, but one could assume
that during lunchtime there will be hotspots at restaurants which are not
consistent with the hotspots over one month. Therefore the impact of time on
the results is important and could also influence the metric for stability in
profound ways. As in this example, the question is how fine-grained the
temporal clustering should be and when the result becomes unstable.
This leads to the final future work: To which hot spots should the comparison
be made for the metric? In the current state, only the next parametrization is
compared due to the high computational complexity. How the comparison of hot
spots should be carried out and how many different comparisons have to be used
is another quite interesting research question.
\section{Acknowledgements}
This work is part of the research project BigGIS (reference number: 01IS14012)
funded by the Federal Ministry of Education and Research (BMBF) within the
frame of the programme "Management and Analysis of Big Data" in "ICT 2020 --
Research for Innovations".
%
\noindent R packages used: \verb|raster|~\cite{cran:raster},
\verb|knitr|~\cite{cran:knitr}.
\bibliographystyle{plain}
\bibliography{bibfile} % sigproc.bib is the name of the
\end{document}
Input to the Output Control Option of the Groundwater Transport Model is read from the file that is specified as type ``OC6'' in the Name File. If no ``OC6'' file is specified, default output control is used. The Output Control Option determines how and when concentrations are printed to the listing file and/or written to a separate binary output file. Under the default, concentrations and the overall budget are written to the Listing File at the end of every stress period. The default printout format for concentrations is 10G11.4.
Output Control data must be specified using words. The numeric codes supported in earlier MODFLOW versions can no longer be used.
For the PRINT and SAVE options of concentration, there is no option to specify individual layers. Whenever the concentration array is printed or saved, all layers are printed or saved.
\vspace{5mm}
\subsubsection{Structure of Blocks}
\vspace{5mm}
\noindent \textit{FOR EACH SIMULATION}
\lstinputlisting[style=blockdefinition]{./mf6ivar/tex/gwt-oc-options.dat}
\vspace{5mm}
\noindent \textit{FOR ANY STRESS PERIOD}
\lstinputlisting[style=blockdefinition]{./mf6ivar/tex/gwt-oc-period.dat}
\vspace{5mm}
\subsubsection{Explanation of Variables}
\begin{description}
\input{./mf6ivar/tex/gwt-oc-desc.tex}
\end{description}
\vspace{5mm}
\subsubsection{Example Input File}
\lstinputlisting[style=inputfile]{./mf6ivar/examples/gwt-oc-example.dat}
\textbf{Theorem 6 makes a prediction about the geometric rate of convergence in the third pane of Output 12. Exactly what is this prediction? How well does it match the observed rate of convergence?}
\newline
Since $f(x) = \frac{1}{1+x^2}$ is analytic everywhere except at $x=\pm i$, where it has singularities, it is analytic on and inside any ellipse with foci $\{-1,1\}$ and semiminor axis $b < 1$, on which the Chebyshev potential takes the value
\begin{align*}
\phi_f = \log\left(\frac{1}{2}(a+b)\right) \approx \log\left(\frac{1}{2}(\sqrt{2}+1)\right),
\end{align*}
where $a$ is obtained from the focal identity $a^2 - b^2 = 1$ (the foci lie at $\pm 1$). Hence, by \textsc{Theorem 6}, the convergence should be
\begin{align*}
|w_j-u^{(v)}(x_j)| = \mathcal{O}\left(e^{-N(\phi_f+\log{2})}\right) = \mathcal{O}\left((\sqrt{2}+1)^{-N}\right),
\end{align*}
which is confirmed by the next figure.
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{P6_7.png}\caption{Accuracy of the Chebyshev spectral derivative for $f(x) = \frac{1}{1+x^2}$.}
\end{figure}
\subsection*{Matlab code for this problem}
\begin{verbatim}
%% Problem 3 - 6.7 Trefethen
% Requires cheb.m (Chebyshev differentiation matrix) from Trefethen,
% "Spectral Methods in MATLAB".
figformat = 'png';                       % output format for saveas
Nmax = 50; E = zeros(Nmax,1);
for N = 1:Nmax
  [D,x] = cheb(N);                       % differentiation matrix and Chebyshev points
  v = 1./(1+x.^2); vprime = -2*x.*v.^2;  % f and its exact derivative; analytic in [-1,1]
  E(N) = norm(D*v-vprime,inf);
end
% Limiting Bernstein ellipse through the singularities at +-i: a = sqrt(2), b = 1.
a = sqrt(2); b = 1;
phif = log(0.5*(a+b));                   % Chebyshev potential on that ellipse
% Plot measured errors against the predicted rate exp(-N*(phif+log 2)):
figure
semilogy(1:Nmax,E(:),'r*')
hold on
semilogy(1:Nmax,exp(-(phif+log(2))*(1:Nmax)))
axis([0 Nmax 1e-16 1e3]), grid on
set(gca,'xtick',0:10:Nmax,'ytick',(10).^(-15:5:0))
xlabel N, ylabel error
txt = 'Latex/FIGURES/P6_7';
saveas(gcf,txt,figformat)
\end{verbatim}
\documentclass{amsart}
\newcommand{\sm}{\raisebox{2.33pt}{~\rule{6.4pt}{1.3pt}~}}
\begin{document}
\section{Rules}
Let $j\colon U\to X$ be an open immersion, and let ${\mathcal F}$ be a perverse sheaf on $X$. Let ${\mathcal G}\subset {\mathcal F}$ be the largest subperverse sheaf supported on the complement $X\sm U$.
\end{document}
To quantify the key metrics that inform incentives to transition
into a closed fuel cycle, I take the following steps:
\begin{enumerate}
\item Benchmark \Cyclus against ORION and other codes
\item Identify key fuel cycle transition metrics
\item Identify key fuel cycle parameters
  \item Perform a parameter sweep over the key parameters for real scenarios (a sketch of such a sweep driver follows this list)
\begin{enumerate}
\item United States transition scenario
\item France transition scenario
\end{enumerate}
  \item Analyze the sensitivity of the key metrics to these parameters
\item Visualization and discussion
\end{enumerate}
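The sketch below illustrates, in Python, how such a parameter sweep could be
driven. It is only an illustration, not the code used in this work; the
template file name, the swept parameter, and the exact \Cyclus command-line
invocation are assumptions that should be checked against the actual setup.

\begin{verbatim}
"""Illustrative sweep driver (hypothetical file names and parameter)."""
import subprocess
from pathlib import Path

# Assumed: a Cyclus input template containing a {SEPARATION_EFF} placeholder.
template = Path("us_transition_template.xml").read_text()
values = [0.10, 0.15, 0.20, 0.25]   # e.g. candidate separation efficiencies

for value in values:
    # Write a scenario-specific input file with the swept value filled in.
    infile = Path("sweep_%.2f.xml" % value)
    infile.write_text(template.replace("{SEPARATION_EFF}", str(value)))

    # Run Cyclus on the generated input; "-o" names the output database
    # (invocation as commonly documented -- verify for your installation).
    outfile = "sweep_%.2f.sqlite" % value
    subprocess.run(["cyclus", str(infile), "-o", outfile], check=True)

# The resulting databases are then post-processed to extract the
# transition metrics for the sensitivity analysis.
\end{verbatim}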
\subsection{Material Flow}
The fuel cycle is represented by a series of facility agents whose material
flow is illustrated in figure \ref{diag:fc}, along with
the \Cyclus archetypes that were used to model each facility.
In this diagram, \gls{MOX} Reactors include both French \glspl{PWR} and
\glspl{SFR}.
% Define block styles
\tikzstyle{decision} = [diamond, draw, fill=blue!20,
text width=4.5em, text badly centered, node distance=3cm, inner sep=0pt]
\tikzstyle{block} = [rectangle, draw, fill=blue!20,
text width=5em, text centered, rounded corners, minimum height=4em]
\tikzstyle{line} = [draw, -latex']
\tikzstyle{cloud} = [draw, ellipse,fill=red!20, node distance=3cm,
minimum height=2em]
\begin{figure}
\centering
\scalebox{0.6}{
\begin{tikzpicture}[align=center, node distance = 3cm and 3cm, auto]
% Place nodes
\node [block] (sr) {Mine (\texttt{SOURCE})};
\node [cloud, below of=sr] (nu) {Nat U};
\node [block, below of=nu] (enr) {Enrichment ({\footnotesize \texttt{ENRICHMENT}})};
\node [cloud, below of=enr] (uox) {\acrshort{UOX}};
\node [block, below of=uox] (lwr) {\gls{LWR} (\texttt{REACTOR})};
\node [cloud, right of=lwr] (snf) {\gls{UNF}};
\node [block, right of=snf] (pool) {Pool (\texttt{Storage})};
\node [cloud, left of=lwr] (tl2) {Dep U};
\node [cloud, right of=enr] (tl) {Dep U};
\node [block, right of=tl] (sk) {Repository (\texttt{SINK})};
\node [cloud, below of=sk] (cunf) {Cooled \gls{UNF}};
\node [cloud, below of=pool] (cunf2) {Cooled \gls{UNF}};
\node [block, below of=snf] (rep) {{\small Reprocessing ({\footnotesize \texttt{SEPARATIONS}})}};
\node [cloud, below of=rep] (u) {Sep. U} ;
\node [cloud, left of=rep] (pu) {Sep. Pu};
\node [block, left of=pu] (mix) {Fabrication (\texttt{MIXER})};
\node [cloud, below of=mix] (mox) {\gls{MOX}};
\node [block, below of=mox] (mxr) {\gls{MOX} Reactors
(\texttt{REACTOR})};
\node [cloud, right of= mxr] (snmox) {Spent \gls{MOX}};
\draw[->, thick] (sr) -- (nu);
\draw[->, thick] (nu) -- (enr);
\draw[->, thick] (enr) -- (tl);
\draw[->, thick] (enr) -- (tl2);
\draw[->, thick] (tl) -- (sk);
\draw[->, thick] (tl2) -- (mix);
\draw[->, thick] (enr) -- (uox);
\draw[->, thick] (uox) -- (lwr);
\draw[->, thick] (lwr) -- (snf);
\draw[->, thick] (snf) -- (pool);
\draw[->, thick] (pool) -- (cunf);
\draw[->, thick] (pool) -- (cunf2);
\draw[->, thick] (cunf) -- (sk);
\draw[->, thick] (cunf2) -- (rep);
\draw[->, thick] (rep) -- (u);
\draw[->, thick] (rep) -- (pu);
\draw[->, thick] (pu) -- (mix);
\draw[->, thick] (mix) -- (mox);
\draw[->, thick] (mox) -- (mxr);
\draw[->, thick] (mxr) -- (snmox);
\draw[->, thick] (snmox) -- (rep);
\end{tikzpicture}
}
\caption{Fuel cycle facilities (blue boxes) represented by
\Cyclus archetypes (in parentheses) pass materials (red
ovals) around the simulation.}
\label{diag:fc}
\end{figure}
A mine facility provides natural uranium, which is enriched by an enrichment
facility to produce \gls{UOX}. Enrichment wastes (tails) are disposed of to a
sink facility representing ultimate disposal. The enriched \gls{UOX} fuels
the \glspl{LWR}, which in turn produce spent \gls{UOX}. The used fuel
is sent to a wet storage facility for a minimum of 72 months \cite{carre_overview_2009}.
The cooled fuel is then reprocessed to separate plutonium and uranium,
or sent to the repository.
The plutonium is mixed with depleted uranium (tails) to make \gls{MOX}, both for
French \glspl{LWR} and \glspl{ASTRID}.
Reprocessed uranium is not reused and is stockpiled. Uranium is reprocessed
in order to separate the raffinate (minor actinides and fission products)
from the usable material. Though neglected in this work, reprocessed
uranium may substitute for depleted uranium in \gls{MOX} production. In the
simulations, sufficient depleted uranium existed that the complication of
preparing reprocessed uranium for incorporation into reactor fuel
was not included. However, farther in the future, as the depleted
uranium inventory is drawn down, reprocessed uranium (or natural uranium) will need to be used.
\providecommand{\main}{../..}
\documentclass[\main/thesis.tex]{subfiles}
\begin{document}
\section{The Maximum Numeral}\label{maximum}
A numeral is said to be the \textit{maximum} if its value is greater than or equal
to that of every other numeral.
\begin{lstlisting}
Maximum : ∀ {b d o} → (xs : Numeral b d o) → Set
Maximum {b} {d} {o} xs = (ys : Numeral b d o) → ⟦ xs ⟧ ≥ ⟦ ys ⟧
\end{lstlisting}
If a numeral is a maximum
\footnote{More precisely, ``If a \text{value} of a numeral is a maximum ...''},
then its least significant digit (LSD) must be the greatest.
\begin{lstlisting}
Maximum⇒Greatest-LSD : ∀ {b} {d} {o}
→ (xs : Numeral b d o)
→ Maximum xs
→ Greatest (lsd xs)
\end{lstlisting}
\subsection{Properties of each Category}
\paragraph{NullBase}
\begin{center}
\begin{adjustbox}{max width=\textwidth}
\begin{tikzpicture}
% the frame
\path[clip] (-1, -1) rectangle (11, 2);
% the spine
\draw[ultra thick] (0,0) -- (1,0);
\draw[ultra thick] (9,0) -- (10,0);
% the body
\foreach \i in {1,...,7} {
\draw[ultra thick, fill=white] ({\i+0.05}, -0.2) rectangle ({\i+0.95}, +0.2);
};
\draw[ultra thick, fill=black] ({8.05}, -0.2) rectangle ({8.95}, +0.2);
% labels
\draw[->, ultra thick] (1.5,1) -- (1.5,0.5)
node at (1.5, 1.3) {$o$};
\draw[->, ultra thick] (8.5,1) -- (8.5,0.5)
node at (8.5, 1.3) {$o+d$};
\end{tikzpicture}
\end{adjustbox}
\end{center}
It is obvious that systems of {\lstinline|NullBase|} have a maximum.
If a numeral's LSD happens to be the greatest,
then the numeral must be the maximum.
\begin{lstlisting}
Maximum-NullBase-Greatest : ∀ {d} {o}
→ (xs : Numeral 0 (suc d) o)
→ Greatest (lsd xs)
→ Maximum xs
\end{lstlisting}
With this lemma, we can tell whether a numeral is a maximum by looking at its
LSD. In case the LSD is not the greatest, we can disprove the proposition
by contraposition.
\begin{lstlisting}
Maximum-NullBase : ∀ {d} {o}
→ (xs : Numeral 0 (suc d) o)
→ Dec (Maximum xs)
Maximum-NullBase xs with Greatest? (lsd xs)
Maximum-NullBase xs | yes greatest =
yes (Maximum-NullBase-Greatest xs greatest)
Maximum-NullBase | no ¬greatest =
no (contraposition (Maximum⇒Greatest-LSD xs) ¬greatest)
\end{lstlisting}
\paragraph{AllZeros}
\begin{center}
\begin{adjustbox}{max width=\textwidth}
\begin{tikzpicture}
% the frame
\path[clip] (0, -1) rectangle (4, 2);
% the spine
\draw[ultra thick] (2,0) -- (4,0);
% the body
\draw[ultra thick, fill=black] ({1.1}, -0.4) rectangle ({2.9}, +0.4);
% labels
\draw[->, ultra thick] (2,1.5) -- (2,0.8)
node at (2, 1.8) {$0$};
\end{tikzpicture}
\end{adjustbox}
\end{center}
\textit{All} numerals of systems of {\lstinline|AllZeros|} are maxima since they
are all mapped to $ 0 $.
\begin{lstlisting}
Maximum-AllZeros : ∀ {b}
→ (xs : Numeral b 1 0)
→ Maximum xs
Maximum-AllZeros xs ys = reflexive (
begin
⟦ ys ⟧
≡⟨ toℕ-AllZeros ys ⟩
zero
≡⟨ sym (toℕ-AllZeros xs) ⟩
⟦ xs ⟧
∎)
\end{lstlisting}
\paragraph{Proper}
On the contrary, there are no maxima in the systems of {\lstinline|Proper|}.
In fact, that is the reason why they are categorized as \textit{proper} in the
first place. The theorem below is proven by contradicting two propositions:
\begin{itemize}
\item Given {\lstinline|claim : Maximum xs|}, we claim that {\lstinline|xs|}
is greater than or equal to {\lstinline|greatest-digit d ∷ xs|},
a numeral we composed by prefixing it with the greatest digit.
\item On the other hand, we prove that {\lstinline|xs|} is less than
{\lstinline|greatest-digit d ∷ xs|}.
\end{itemize}
\begin{lstlisting}
Maximum-Proper : ∀ {b d o}
→ (xs : Numeral (suc b) (suc d) o)
→ (proper : 2 ≤ suc (d + o))
→ ¬ (Maximum xs)
Maximum-Proper {b} {d} {o} xs proper claim = contradiction p ¬p
where
p : ⟦ xs ⟧ ≥ ⟦ greatest-digit d ∷ xs ⟧
p = claim (greatest-digit d ∷ xs)
¬p : ⟦ xs ⟧ ≱ ⟦ greatest-digit d ∷ xs ⟧
¬p = <⇒≱ (
start
suc ⟦ xs ⟧
≈⟨ cong suc (sym (*-right-identity ⟦ xs ⟧)) ⟩
suc (⟦ xs ⟧ * 1)
≤⟨ s≤s (n*-mono ⟦ xs ⟧ (s≤s z≤n)) ⟩
suc (⟦ xs ⟧ * suc b)
≤⟨ +n-mono (⟦ xs ⟧ * suc b) (≤-pred proper) ⟩
d + o + ⟦ xs ⟧ * suc b
≈⟨ cong
(λ w → w + ⟦ xs ⟧ * suc b)
(sym (greatest-digit-toℕ (Fin.fromℕ d)
(greatest-digit-is-the-Greatest d)))
⟩
⟦ greatest-digit d ∷ xs ⟧
□)
\end{lstlisting}
\subsection{Determine the Maximum}
We can \textit{decide} whether a numeral is a maximum by applying it to the
lemma of the corresponding category.
\begin{lstlisting}
Maximum? : ∀ {b d o}
→ (xs : Numeral b d o)
→ Dec (Maximum xs)
Maximum? {b} {d} {o} xs with numView b d o
Maximum? xs | NullBase d o = Maximum-NullBase xs
Maximum? xs | NoDigits b o = no (NoDigits-explode xs)
Maximum? xs | AllZeros b = yes (Maximum-AllZeros xs)
Maximum? xs | Proper b d o proper = no (Maximum-Proper xs proper)
\end{lstlisting}
\paragraph{Summary}
\begin{center}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{ | l | c | c | c | c | }
\textbf{Properties} & \textbf{NullBase} & \textbf{NoDigits} & \textbf{AllZeros} & \textbf{Proper} \\
\hline
        has a maximum & yes & no & yes & no \\
\end{tabular}
\end{adjustbox}
\end{center}
\end{document}
\documentstyle[11pt,reduce]{article}
\title{A \REDUCE{} package for the computation of several matrix
normal forms}
\author{Matt Rebbeck \\
Konrad-Zuse-Zentrum f\"ur Informationstechnik Berlin \\
Takustra\"se 7 \\
D--14195 Berlin -- Dahlem \\
Federal Republic of Germany \\[0.05in]
E--mail: [email protected] \\[0.05in]
}
\date{February 1994}
\begin{document}
\maketitle
\index{NORMFORM package}
\section{Introduction}
When are two given matrices similar? Similar matrices have the same
trace, determinant, characteristic polynomial, and eigenvalues,
but the matrices
\begin{displaymath}
\begin{array}{ccc} {\cal U} = \left( \begin{array}{cc} 0 & 1 \\ 0 &
0 \end{array} \right) & \mbox{and} & {\cal V} = \left( \begin{array}{cc}
0 & 0 \\ 0 & 0 \end{array} \right) \end{array}
\end{displaymath}
agree in all four of the above and yet are not similar. If they were
similar, there would exist a nonsingular ${\cal N} \in M_{2}$ (the set of
all $2 \times 2$ matrices) such that ${\cal U} = {\cal N} \, {\cal V}
\, {\cal N}^{-1} = {\cal N} \, {\it 0} \, {\cal N}^{-1} = {\it 0}$,
which is a contradiction since ${\cal U} \neq {\it 0}$.
Two matrices can look very different but still be similar. One
approach to determining whether two given matrices are similar is to
compute the normal form of them. If both matrices reduce to the same
normal form they must be similar.
{\small NORMFORM} is a package for computing the following normal
forms of matrices:
\begin{verbatim}
- smithex
- smithex_int
- frobenius
- ratjordan
- jordansymbolic
- jordan
\end{verbatim}
The package is loaded by {\tt load\_package normform;}
By default all calculations are carried out in ${\cal Q}$ (the rational
numbers). For {\tt smithex}, {\tt frobenius}, {\tt ratjordan},
{\tt jordansymbolic}, and {\tt jordan}, this field can be extended.
Details are given in the respective sections.
The {\tt frobenius}, {\tt ratjordan}, and {\tt jordansymbolic} normal
forms can also be computed in a modular base. Again, details are given
in the respective sections.
The algorithms for each routine are contained in the source code.
{\small NORMFORM} has been converted from the normform and Normform
packages written by T.M.L. Mulders and A.H.M. Levelt. These have been
implemented in Maple [4].
\section{smithex}
\subsection{function}
{\tt smithex}(${\cal A},\, x$) computes the Smith normal form ${\cal S}$
of the matrix ${\cal A}$.
It returns \{${\cal S}, {\cal P}, {\cal P}^{-1}$\} where ${\cal S},
{\cal P}$, and ${\cal P}^{-1}$ are such that ${\cal P S P}^{-1} =
{\cal A}$.
${\cal A}$ is a rectangular matrix of univariate polynomials in $x$.
$x$ is the variable name.
\subsection{field extensions}
Calculations are performed in ${\cal Q}$. To extend this field the
{\small ARNUM} package can be used. For details see {\it section} 8.
\subsection{synopsis}
\begin{itemize}
\item The Smith normal form ${\cal S}$ of an n by m matrix ${\cal A}$
with univariate polynomial entries in $x$ over a field {\it F} is
computed. That is, the polynomials are then regarded as elements of the
{\it E}uclidean domain {\it F}($x$).
\item The Smith normal form is a diagonal matrix ${\cal S}$ where:
\begin{itemize}
\item rank(${\cal A}$) = number of nonzero rows (columns) of
${\cal S}$.
\item ${\cal S}(i,\, i)$ is a monic polynomial for 0 $< i \leq $
rank(${\cal A}$).
\item ${\cal S}(i,\, i)$ divides ${\cal S}(i+1,\, i+1)$ for 0 $< i
<$ rank(${\cal A}$).
\item ${\cal S}(i,\,i)$ is the greatest common divisor of all $i$ by
$i$ minors of ${\cal A}$.
\end{itemize}
Hence, if we have the case that $n = m$, as well as
rank(${\cal A}$) $= n$, then product (${\cal S}(i,\,i),
i=1\ldots n$) = det(${\cal A}$) / lcoeff(det$({\cal A}), \, x$).
\item The Smith normal form is obtained by doing elementary row and
column operations. This includes interchanging rows (columns),
multiplying through a row (column) by $-1$, and adding integral
multiples of one row (column) to another.
\item Although the rank and determinant can be easily obtained from
${\cal S}$, this is not an efficient method for computing these
quantities except that this may yield a partial factorization of
det(${\cal A}$) without doing any explicit factorizations.
\end{itemize}
\subsection{example}
{\tt load\_package normform;}
\begin{displaymath}
{\cal A} = \left( \begin{array}{cc} x & x+1 \\ 0 & 3*x^2 \end{array}
\right)
\end{displaymath}
\begin{displaymath}
\hspace{-0.5in}
\begin{array}{ccc}
{\tt smithex}({\cal A},\, x) & = &
\left\{ \left( \begin{array}{cc} 1 & 0 \\
0 & x^3 \end{array} \right), \left( \begin{array}{cc} 1 & 0 \\ 3*x^2
& 1 \end{array} \right), \left( \begin{array}{cc} x & x+1 \\ -3 & -3
\end{array} \right) \right\} \end{array}
\end{displaymath}
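The same example can be reproduced interactively; a sketch of the
corresponding REDUCE input follows (the matrix is entered with the {\tt mat}
constructor, and the printed output may be formatted differently from the
display above):

\begin{verbatim}
load_package normform;

a := mat((x, x+1),
         (0, 3*x^2));

smithex(a, x);
\end{verbatim}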
\section{smithex\_int}
\subsection{function}
Given an $n$ by $m$ rectangular matrix ${\cal A}$ that contains
{\it only} integer entries, {\tt smithex\_int}(${\cal A}$) computes the
Smith normal form ${\cal S}$ of ${\cal A}$.
It returns \{${\cal S}, {\cal P}, {\cal P}^{-1}$\} where ${\cal S},
{\cal P}$, and ${\cal P}^{-1}$ are such that ${\cal P S P}^{-1} =
{\cal A}$.
\subsection{synopsis}
\begin{itemize}
\item The Smith normal form ${\cal S}$ of an $n$ by $m$ matrix
${\cal A}$ with integer entries is computed.
\item The Smith normal form is a diagonal matrix ${\cal S}$ where:
\begin{itemize}
\item rank(${\cal A}$) = number of nonzero rows (columns) of
${\cal S}$.
\item sign(${\cal S}(i,\, i)$) = 1 for 0 $< i \leq $ rank(${\cal A}$).
\item ${\cal S}(i,\, i)$ divides ${\cal S}(i+1,\, i+1)$ for 0 $< i
<$ rank(${\cal A}$).
\item ${\cal S}(i,\,i)$ is the greatest common divisor of all $i$ by
$i$ minors of ${\cal A}$.
\end{itemize}
Hence, if we have the case that $n = m$, as well as
rank(${\cal A}$) $= n$, then abs(det(${\cal A}$)) =
product(${\cal S}(i,\,i),i=1\ldots n$).
\item The Smith normal form is obtained by doing elementary row and
column operations. This includes interchanging rows (columns),
multiplying through a row (column) by $-1$, and adding integral
multiples of one row (column) to another.
\end{itemize}
\subsection{example}
{\tt load\_package normform;}
\begin{displaymath}
{\cal A} = \left( \begin{array}{ccc} 9 & -36 & 30 \\ -36 & 192 & -180 \\
30 & -180 & 180 \end{array}
\right)
\end{displaymath}
{\tt smithex\_int}(${\cal A}$) =
\begin{center}
\begin{displaymath}
\left\{ \left( \begin{array}{ccc} 3 & 0 & 0 \\ 0 & 12 & 0 \\ 0 & 0 & 60
\end{array} \right), \left( \begin{array}{ccc} -17 & -5 & -4 \\ 64 & 19
& 15 \\ -50 & -15 & -12 \end{array} \right), \left( \begin{array}{ccc}
1 & -24 & 30 \\ -1 & 25 & -30 \\ 0 & -1 & 1 \end{array} \right) \right\}
\end{displaymath}
\end{center}
\section{frobenius}
\subsection{function}
{\tt frobenius}(${\cal A}$) computes the Frobenius normal form
${\cal F}$ of the matrix ${\cal A}$.
It returns \{${\cal F}, {\cal P}, {\cal P}^{-1}$\} where ${\cal F},
{\cal P}$, and ${\cal P}^{-1}$ are such that ${\cal P F P}^{-1} =
{\cal A}$.
${\cal A}$ is a square matrix.
\subsection{field extensions}
Calculations are performed in ${\cal Q}$. To extend this field the
{\small ARNUM} package can be used. For details see {\it section} 8.
\subsection{modular arithmetic}
{\tt frobenius} can be calculated in a modular base. For details see
{\it section} 9.
\subsection{synopsis}
\begin{itemize}
\item ${\cal F}$ has the following structure:
\begin{displaymath}
      {\cal F} = \left( \begin{array}{cccc} {\cal C}({\it p_{1}}) & & &
      \\ & {\cal C}({\it p_{2}}) & & \\ & & \ddots & \\ & & &
      {\cal C}({\it p_{k}}) \end{array} \right)
\end{displaymath}
where the ${\cal C}({\it p_{i}})$'s are companion matrices
associated with polynomials ${\it p_{1}, p_{2}},\ldots,
{\it p_{k}}$, with the property that ${\it p_{i}}$ divides
${\it p_{i+1}}$ for $i =1\ldots k-1$. All unmarked entries are
zero.
\item The Frobenius normal form defined in this way is unique (ie: if
we require that ${\it p_{i}}$ divides ${\it p_{i+1}}$ as above).
\end{itemize}
\subsection{example}
{\tt load\_package normform;}
\begin{displaymath}
{\cal A} = \left( \begin{array}{cc} \frac{-x^2+y^2+y}{y} &
\frac{-x^2+x+y^2-y}{y} \\ \frac{-x^2-x+y^2+y}{y} & \frac{-x^2+x+y^2-y}
{y} \end{array} \right)
\end{displaymath}
{\tt frobenius}(${\cal A}$) =
\begin{center}
\begin{displaymath}
\left\{ \left( \begin{array}{cc} 0 & \frac{x*(x^2-x-y^2+y)}{y} \\ 1 &
\frac{-2*x^2+x+2*y^2}{y} \end{array} \right), \left( \begin{array}{cc}
1 & \frac{-x^2+y^2+y}{y} \\ 0 & \frac{-x^2-x+y^2+y}{y} \end{array}
\right), \left( \begin{array}{cc} 1 & \frac{-x^2+y^2+y}{x^2+x-y^2-y} \\
0 & \frac{-y}{x^2+x-y^2-y} \end{array} \right) \right\}
\end{displaymath}
\end{center}
\section{ratjordan}
\subsection{function}
{\tt ratjordan}(${\cal A}$) computes the rational Jordan normal form
${\cal R}$ of the matrix ${\cal A}$.
It returns \{${\cal R}, {\cal P}, {\cal P}^{-1}$\} where ${\cal R},
{\cal P}$, and ${\cal P}^{-1}$ are such that ${\cal P R P}^{-1} =
{\cal A}$.
${\cal A}$ is a square matrix.
\subsection{field extensions}
Calculations are performed in ${\cal Q}$. To extend this field the
{\small ARNUM} package can be used. For details see {\it section} 8.
\subsection{modular arithmetic}
{\tt ratjordan} can be calculated in a modular base. For details see
{\it section} 9.
\subsection{synopsis}
\begin{itemize}
\item ${\cal R}$ has the following structure:
\begin{displaymath}
{\cal R} = \left( \begin{array}{cccccc} {\it r_{11}} \\ &
{\it r_{12}} \\ & & \ddots \\ & & & {\it r_{21}} \\ & &
& & {\it r_{22}} \\ & & & & & \ddots \end{array} \right)
\end{displaymath}
The ${\it r_{ij}}$'s have the following shape:
\begin{displaymath}
{\it r_{ij}} = \left( \begin{array}{ccccc} {\cal C}({\it p}) &
{\cal I} & & & \\ & {\cal C}({\it p}) & {\cal I} & & \\ &
& \ddots & \ddots & \\ & & & {\cal C}({\it p}) & {\cal I} \\ &
& & & {\cal C}({\it p}) \end{array} \right)
\end{displaymath}
      where there are $e_{ij}$ copies of ${\cal C}({\it p})$ as blocks
      along the diagonal and ${\cal C}({\it p})$ is the companion
      matrix associated with the irreducible polynomial ${\it p}$. All
      unmarked entries are zero.
\end{itemize}
\subsection{example}
{\tt load\_package normform;}
\begin{displaymath}
{\cal A} = \left( \begin{array}{cc} x+y & 5 \\ y & x^2 \end{array}
\right)
\end{displaymath}
{\tt ratjordan}(${\cal A}$) =
\begin{center}
\begin{displaymath}
\left\{ \left( \begin{array}{cc} 0 & -x^3-x^2*y+5*y \\ 1 &
x^2+x+y \end{array} \right), \left( \begin{array}{cc}
1 & x+y \\ 0 & y \end{array} \right), \left( \begin{array}{cc} 1 &
\frac{-(x+y)}{y} \\ 0 & \hspace{0.2in} \frac{1}{y} \end{array} \right)
\right\}
\end{displaymath}
\end{center}
\section{jordansymbolic}
\subsection{function}
{\tt jordansymbolic}(${\cal A}$) computes the Jordan
normal form ${\cal J}$ of the matrix ${\cal A}$.
It returns \{${\cal J}, {\cal L}, {\cal P}, {\cal P}^{-1}$\}, where
${\cal J}, {\cal P}$, and ${\cal P}^{-1}$ are such that ${\cal P J P}^
{-1} = {\cal A}$. ${\cal L}$ = \{ {\it ll} , $\xi$ \}, where $\xi$ is
a name and {\it ll} is a list of irreducible factors of ${\it p}(\xi)$.
${\cal A}$ is a square matrix.
\subsection{field extensions}
Calculations are performed in ${\cal Q}$. To extend this field the
{\small ARNUM} package can be used. For details see {\it section} 8.
\subsection{modular arithmetic}
{\tt jordansymbolic} can be calculated in a modular base. For details
see {\it section} 9.
\subsection{extras}
If using {\tt xr}, the X interface for \REDUCE, the appearance of the
output can be improved by switching {\tt on looking\_good;}. This
converts all lambda to $\xi$ and improves the indexing, eg: lambda12
$\Rightarrow \xi_{12}$. The example ({\it section} 6.6) shows the
output when this switch is on.
\subsection{synopsis}
\begin{itemize}
\item A {\it Jordan block} ${\jmath}_{k}(\lambda)$ is a $k$ by $k$
upper triangular matrix of the form:
\begin{displaymath}
{\jmath}_{k}(\lambda) = \left( \begin{array}{ccccc} \lambda & 1
& & & \\ & \lambda & 1 & & \\ &
& \ddots & \ddots & \\ & & & \lambda & 1 \\ &
& & & \lambda \end{array} \right)
\end{displaymath}
There are $k-1$ terms ``$+1$'' in the superdiagonal; the scalar
$\lambda$ appears $k$ times on the main diagonal. All other
matrix entries are zero, and ${\jmath}_{1}(\lambda) = (\lambda)$.
\item A Jordan matrix ${\cal J} \in M_{n}$ (the set of all $n$ by $n$
matrices) is a direct sum of {\it jordan blocks}.
\begin{displaymath}
{\cal J} = \left( \begin{array}{cccc} \jmath_{n_1}(\lambda_{1})
\\ & \jmath_{n_2}(\lambda_{2}) \\ & & \ddots \\ & & &
\jmath_{n_k}(\lambda_{k}) \end{array} \right),
{\it n}_{1}+{\it n}_{2}+\cdots +{\it n}_{k} = n
\end{displaymath}
in which the orders ${\it n}_{i}$ may not be distinct and the
values ${\lambda_{i}}$ need not be distinct.
\item Here ${\lambda}$ is a zero of the characteristic polynomial
${\it p}$ of ${\cal A}$. If ${\it p}$ does not split completely,
symbolic names are chosen for the missing zeroes of ${\it p}$.
If, by some means, one knows such missing zeroes, they can be
substituted for the symbolic names. For this,
{\tt jordansymbolic} actually returns $\{ {\cal J,L,P,P}^{-1} \}$.
${\cal J}$ is the Jordan normal form of ${\cal A}$ (using
symbolic names if necessary). ${\cal L} = \{ {\it ll}, \xi \}$,
where $\xi$ is a name and ${\it ll}$ is a list of irreducible
factors of ${\it p}(\xi)$. If symbolic names are used then
${\xi}_{ij}$ is a zero of ${\it ll}_{i}$. ${\cal P}$ and
${\cal P}^{-1}$ are as above.
\end{itemize}
\subsection{example}
{\tt load\_package normform;}\\
{\tt on looking\_good;}
\begin{displaymath}
{\cal A} = \left( \begin{array}{cc} 1 & y \\ y^2 & 3 \end{array}
\right)
\end{displaymath}
{\tt jordansymbolic}(${\cal A}$) =
\begin{eqnarray}
& & \left\{ \left( \begin{array}{cc} \xi_{11} & 0 \\ 0 & \xi_{12}
\end{array} \right) ,
\left\{ \left\{ -y^3+\xi^2-4*\xi+3 \right\}, \xi \right\}, \right.
\nonumber \\ & & \hspace{0.1in} \left. \left( \begin{array}{cc}
\xi_{11} -3 & \xi_{12} -3 \\ y^2 & y^2
\end{array} \right), \left( \begin{array}{cc} \frac{\xi_{11} -2}
{2*(y^3-1)} & \frac{\xi_{11} + y^3 -1}{2*y^2*(y^3+1)} \\
\frac{\xi_{12} -2}{2*(y^3-1)} & \frac{\xi_{12}+y^3-1}{2*y^2*(y^3+1)}
\end{array} \right) \right\} \nonumber
\end{eqnarray}
\vspace{0.2in}
\begin{flushleft}
\begin{math}
{\tt solve(-y^3+xi^2-4*xi+3,xi)}${\tt ;}$
\end{math}
\end{flushleft}
\vspace{0.1in}
\begin{center}
\begin{math}
\{ \xi = \sqrt{y^3+1} + 2,\, \xi = -\sqrt{y^3+1}+2 \}
\end{math}
\end{center}
\vspace{0.1in}
\begin{math}
{\tt {\cal J} = sub}{\tt (}{\tt \{ xi(1,1)=sqrt(y^3+1)+2,\, xi(1,2) =
-sqrt(y^3+1)+2\},}
\end{math}
\\ \hspace*{0.29in} {\tt first jordansymbolic (${\cal A}$));}
\vspace{0.2in}
\begin{displaymath}
{\cal J} = \left( \begin{array}{cc} \sqrt{y^3+1} + 2 & 0 \\ 0 &
-\sqrt{y^3+1} + 2 \end{array} \right)
\end{displaymath}
\vspace{0.2in}
For a similar example to this in standard {\REDUCE} (ie: not using
{\tt xr}), see the {\it normform.log} file.
\vspace{0.5in}
\section{jordan}
\subsection{function}
{\tt jordan}(${\cal A}$) computes the Jordan normal form
${\cal J}$ of the matrix ${\cal A}$.
It returns \{${\cal J}, {\cal P}, {\cal P}^{-1}$\}, where
${\cal J}, {\cal P}$, and ${\cal P}^{-1}$ are such that ${\cal P J P}^
{-1} = {\cal A}$.
${\cal A}$ is a square matrix.
\subsection{field extensions}
Calculations are performed in ${\cal Q}$. To extend this field the
{\small ARNUM} package can be used. For details see {\it section} 8.
\subsection{note}
In certain polynomial cases {\tt fullroots} is turned on to compute the
zeroes. This can lead to the calculation taking a long time, as well as
the output being very large. In this case a message {\tt ***** WARNING:
fullroots turned on. May take a while.} will be printed. It may be
better to kill the calculation and compute {\tt jordansymbolic} instead.
\subsection{synopsis}
\begin{itemize}
\item The Jordan normal form ${\cal J}$ with entries in an algebraic
extension of ${\cal Q}$ is computed.
\item A {\it Jordan block} ${\jmath}_{k}(\lambda)$ is a $k$ by $k$
upper triangular matrix of the form:
\begin{displaymath}
{\jmath}_{k}(\lambda) = \left( \begin{array}{ccccc} \lambda & 1
& & & \\ & \lambda & 1 & & \\ &
& \ddots & \ddots & \\ & & & \lambda & 1 \\ &
& & & \lambda \end{array} \right)
\end{displaymath}
There are $k-1$ terms ``$+1$'' in the superdiagonal; the scalar
$\lambda$ appears $k$ times on the main diagonal. All other
matrix entries are zero, and ${\jmath}_{1}(\lambda) = (\lambda)$.
\item A Jordan matrix ${\cal J} \in M_{n}$ (the set of all $n$ by $n$
matrices) is a direct sum of {\it jordan blocks}.
\begin{displaymath}
{\cal J} = \left( \begin{array}{cccc} \jmath_{n_1}(\lambda_{1})
\\ & \jmath_{n_2}(\lambda_{2}) \\ & & \ddots \\ & & &
\jmath_{n_k}(\lambda_{k}) \end{array} \right),
{\it n}_{1}+{\it n}_{2}+\cdots +{\it n}_{k} = n
\end{displaymath}
in which the orders ${\it n}_{i}$ may not be distinct and the
values ${\lambda_{i}}$ need not be distinct.
\item Here ${\lambda}$ is a zero of the characteristic polynomial
${\it p}$ of ${\cal A}$. The zeroes of the characteristic
polynomial are computed exactly, if possible. Otherwise they are
approximated by floating point numbers.
\end{itemize}
\subsection{example}
{\tt load\_package normform;}
\begin{displaymath}
{\cal A} = \left( \begin{array}{cccccc} -9 & -21 & -15 & 4 & 2 & 0 \\
-10 & 21 & -14 & 4 & 2 & 0 \\ -8 & 16 & -11 & 4 & 2 & 0 \\ -6 & 12 & -9
& 3 & 3 & 0 \\ -4 & 8 & -6 & 0 & 5 & 0 \\ -2 & 4 & -3 & 0 & 1 & 3
\end{array} \right)
\end{displaymath}
\begin{flushleft}
{\tt ${\cal J}$ = first jordan$({\cal A})$;}
\end{flushleft}
\begin{displaymath}
{\cal J} = \left( \begin{array}{cccccc} 3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 3
& 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & i+2 & 0 \\ 0 & 0 & 0 & 0 & 0 & -i+2
\end{array} \right)
\end{displaymath}
\newpage
\section{arnum}
The package is loaded by {\tt load\_package arnum;}. The algebraic
field ${\cal Q}$ can now be extended. For example, {\tt defpoly
sqrt2**2-2;} will extend it to include ${\sqrt{2}}$ (defined here by
{\tt sqrt2}). The {\small ARNUM} package was written by Eberhard
Schr\"ufer and is described in the {\it arnum.tex} file.
\subsection{example}
{\tt load\_package normform;} \\
{\tt load\_package arnum;} \\
{\tt defpoly sqrt2**2-2;} \\
(sqrt2 now changed to ${\sqrt{2}}$ for looks!)
\vspace{0.2in}
\begin{displaymath}
{\cal A} = \left( \begin{array}{ccc} 4*{\sqrt{2}}-6 & -4*{\sqrt{2}}+7 &
-3*{\sqrt{2}}+6 \\ 3*{\sqrt{2}}-6 & -3*{\sqrt{2}}+7 & -3*{\sqrt{2}}+6
\\ 3*{\sqrt{2}} & 1-3*{\sqrt{2}} & -2*{\sqrt{2}} \end{array} \right)
\end{displaymath}
\vspace{0.2in}
\begin{eqnarray}
{\tt ratjordan}({\cal A}) & = &
\left\{ \left( \begin{array}{ccc} {\sqrt{2}} & 0 & 0 \\ 0 & {\sqrt{2}}
& 0 \\ 0 & 0 & -3*{\sqrt{2}}+1 \end{array} \right), \right. \nonumber
\\ & & \hspace{0.1in} \left. \left( \begin{array}{ccc} 7*{\sqrt{2}}-6
& \frac{2*{\sqrt{2}}-49}{31} & \frac{-21*{\sqrt{2}}+18}{31} \\
3*{\sqrt{2}}-6 & \frac{21*{\sqrt{2}}-18}{31} & \frac{-21*{\sqrt{2}}+18}
{31} \\ 3*{\sqrt{2}}+1 & \frac{-3*{\sqrt{2}}+24}{31} &
\frac{3*{\sqrt{2}}-24}{31} \end{array} \right), \right. \nonumber \\ &
& \hspace{0.1in} \left. \left( \begin{array}{ccc} 0 & {\sqrt{2}}+1 &
1 \\ -1 & 4*{\sqrt{2}}+9 & 4*{\sqrt{2}} \\ -1 & -\frac{1}{6}*{\sqrt{2}}
+1 & 1 \end{array} \right) \right\} \nonumber
\end{eqnarray}
\newpage
\section{modular}
Calculations can be performed in a modular base by switching {\tt on
modular;}. The base can then be set by {\tt setmod p;} (p a prime). The
normal form will then have entries in ${\cal Z}/p{\cal Z}$.
By also switching {\tt on balanced\_mod;} the output will be shown using
a symmetric modular representation.
Information on this modular manipulation can be found in {\it chapter}
9 (Polynomials and Rationals) of the {\REDUCE} User's Manual [5].
\subsection{example}
{\tt load\_package normform;} \\
{\tt on modular;} \\
{\tt setmod 23;}
\vspace{0.1in}
\begin{displaymath}
{\cal A} = \left( \begin{array}{cc} 10 & 18 \\ 17 & 20 \end{array}
\right)
\end{displaymath}
{\tt jordansymbolic}(${\cal A}$) =
\begin{center}
\begin{displaymath}
\left\{ \left( \begin{array}{cc} 18 & 0 \\ 0 & 12 \end{array} \right),
\left\{ \left\{ \lambda + 5, \lambda + 11 \right\}, \lambda \right\},
\left( \begin{array}{cc} 15 & 9 \\ 22 & 1 \end{array} \right), \left(
\begin{array}{cc} 1 & 14 \\ 1 & 15 \end{array} \right) \right\}
\end{displaymath}
\end{center}
\vspace{0.2in}
{\tt on balanced\_mod;}
\vspace{0.2in}
{\tt jordansymbolic}(${\cal A}$) =
\begin{center}
\begin{displaymath}
\left\{ \left( \begin{array}{cc} -5 & 0 \\ 0 & -11 \end{array} \right),
\left\{ \left\{ \lambda + 5, \lambda + 11 \right\}, \lambda \right\},
\left( \begin{array}{cc} -8 & 9 \\ -1 & 1 \end{array} \right), \left(
\begin{array}{cc} 1 & -9 \\ 1 & -8 \end{array} \right) \right\}
\end{displaymath}
\end{center}
\newpage
\begin{thebibliography}{6}
\bibitem{MulLev} T.M.L. Mulders and A.H.M. Levelt: {\it The Maple
normform and Normform packages.} (1993)
\bibitem{Mulders} T.M.L. Mulders: {\it Algoritmen in De Algebra, A
Seminar on Algebraic Algorithms, Nijmegen.} (1993)
\bibitem{HoJo} Roger A. Horn and Charles R. Johnson: {\it Matrix
Analysis.} Cambridge University Press (1990)
\bibitem{Maple} Bruce W. Char [et al.]: {\it Maple (Computer
Program)}. Springer-Verlag (1991)
\bibitem{Reduce} Anthony C. Hearn: {\REDUCE} {\it User's Manual 3.6.}
RAND (1995)
\end{thebibliography}
\end{document}
\section{Introduction}
% This will give a brief overview of the project including
% What problem is addressed by the project?
% What are the aims and objectives of the project?
% What are the challenges of the project?
% What is the solution produced?
% How effective is the solution / how successful has the project been?
Artificial Neural Networks (ANNs) have been widely used in many areas.
It has been proved that ANNs are Turing complete \cite{siegelmann1995computational},
which indicates that understanding logic (i.e. symbolic computation) is theoretically possible and offers broad application prospects.
There are ANNs \cite{holldobler1999approximating, Holldobler91towardsa} that are able to mimic some first-order logic programs.
However, these solutions were based on Recurrent Neural Networks (RNNs) and tried to simulate the logic program rather than capture the underlying logical features.
Exploring the limitations of ANNs in understanding logical concepts will help the research on neuro-symbolic integration.
Thus, here we are concerned with the capability of different Multilayer Perceptrons (MLPs) to understand a logical concept, namely bisimulation of logical structures.
MLPs are used here as the representative of ANNs, as they are recognised as the original and most basic ANN model.
The datasets used to train the MLPs consist of groups of generated graphs with flags (i.e. a number that indicates whether two graphs are bisimilar).
We then test whether MLPs are able to learn to distinguish bisimulation equivalence.
The report is structured as follows.
First, in Section \ref{sec:background}, relevant research on bisimulation and on the relation between neural networks and logic is reviewed.
The specification of the problem and the approach used to explore the limitation are explained in Section \ref{sec:description}.
Then, in Section \ref{sec:relisation}, the realisation of the project, including component tests and the project process, is described.
Section \ref{sec:experiment} discusses the experiments, including their setup, implementation and the analysis of the results.
In Section \ref{sec:evaluation}, a summary and an evaluation from both the engineering and the research perspective are given.
Knowledge and skills gained are discussed in Section \ref{sec:learning}.
Professional issues are discussed in Section \ref{sec:profession}.
Finally, the full code of the project, part of the data, screenshots of sample runs, the user manual, the progress log and the Gantt chart are appended in Section \ref{sec:appendics}.
"alphanum_fraction": 0.8073676132,
"avg_line_length": 89.8620689655,
"ext": "tex",
"hexsha": "0fbc0af2ecb5da207cb22af7f2d0e3ef213b3ded",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "736f7e64dc9f381450c3539ec254ef23dae65c5b",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "SuperElephant/Bisimulation_fyp_2019",
"max_forks_repo_path": "FYP_report/tex/introduction.tex",
"max_issues_count": 7,
"max_issues_repo_head_hexsha": "736f7e64dc9f381450c3539ec254ef23dae65c5b",
"max_issues_repo_issues_event_max_datetime": "2022-02-10T00:06:36.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-03-15T01:56:03.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "SuperElephant/Bisimulation_fyp_2019",
"max_issues_repo_path": "FYP_report/tex/introduction.tex",
"max_line_length": 178,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "736f7e64dc9f381450c3539ec254ef23dae65c5b",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "SuperElephant/Bisimulation_fyp_2019",
"max_stars_repo_path": "FYP_report/tex/introduction.tex",
"max_stars_repo_stars_event_max_datetime": "2019-04-29T12:37:17.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-04-29T12:37:17.000Z",
"num_tokens": 550,
"size": 2606
} |
%!TeX root = sic_susceptor
\documentclass[paper.tex]{subfiles}
\begin{document}
\section{Silicon carbide test}
To verify that we can detect a resonance with this setup, we use a microwave susceptor mixture described by \cite{Effect2016}, which can be tuned to the correct range by varying the concentration of SiC to binder. We used the S-1 mixture.
0.6 g of 2000-mesh silicon carbide powder was added to 1.4 g of white Elmer's glue, mixed manually until homogeneous,
applied to a Mylar film, and dried with a hair dryer. A small wafer of the hardened mixture, perhaps 0.3 mm thick and 5 mm $\times$ 2 mm, was placed directly on top of the coplanar waveguide.
The spectrum was then captured, averaging multiple sweeps of the VCO. A background spectrum was taken without the wafer in place, and then the two sweeps were subtracted.
\begin{figure}[H]
\includesvg[width=\textwidth]{../firmware/eppenwolf/runs/sic_susceptor/sic_9_3}
\caption{}
\end{figure}
\begin{figure}[H]
\includesvg[width=\textwidth]{../firmware/eppenwolf/runs/sic_susceptor/sic_9_2}
\caption{}
\end{figure}
\cite{Effect2016} measures a peak at about 7.2 GHz. Our peak is at approximately $7.84 \pm 0.25$ GHz, a difference well within the variation expected from powder grain size and concentration.
The paper specifies the reflection loss, S$_{11}$. Unfortunately, having not designed in directional couplers, it is probably not possible to directly determine S$_{21}$ and S$_{11}$ with this setup.
Naively, we might expect the peak to appear as a decrease at both sensors; instead it appears as an increase in the far sensor and decrease in the near sensor. It could be possible that the presence of the dielectric wafer has merely detuned an existing resonance, that the peak is a coincidence, and that these data mean nothing.
%$\strike $
The raw voltage plots do not suggest that this is the case; but it is nonetheless concerning.
\footnote{The paper mentions that the "blending fraction of silicon carbide powders to epoxy resin was 30 \%, 35 \%, 40 \%, 45 \% and 50\% by weight". That seems a little ambiguous as to whether that's "weight per total mass" or "weight per epoxy".}
\end{document}
\documentclass[11pt]{article}
\bibliographystyle{alpha}
\usepackage{todonotes}
\usepackage{fullpage}
% Uncomment this when debuging labels. This will display labels
% citations etc when they are refered. Which makes some debuggin
% easier. Comment it for the final version.
%\usepackage{showkeys}
\input{preamble}
\renewcommand{\R}{\mathcal{R}} %these two are very specific to thsi
\renewcommand{\F}{\mathcal{F}} %file. was just lazy
\newcommand{\mr}{\mathcal{M}_\mathcal{R}} %M_r
\title{Fast Integer Multiplication Using Modular Arithmetic \footnote{A preliminary version appeared in the proceedings of the 40th ACM Symposium on Theory of Computing, 2008.}}
\author{Anindya De\thanks{Research done while the author was at the Dept
of Computer Science and Engineering, IIT Kanpur}\\
Computer Science Division\\
University of California at Berkeley\\
Berkeley, CA 94720, USA\\
{\tt [email protected]}\\
%
\and
Piyush P Kurur%
\thanks{Research supported through Research I Foundation project
NRNM/CS/20030163},\\ %
Dept. of Computer Science and Engineering\\%
Indian Institute of Technology Kanpur\\%
Kanpur, UP, India, 208016\\
{\tt [email protected]}\\%
\and%
Chandan Saha\\%
Dept. of Computer Science and Engineering\\%
Indian Institute of Technology Kanpur\\%
Kanpur, UP, India, 208016\\%
{\tt [email protected]}%
\and Ramprasad Saptharishi%
\thanks{Research done while visiting IIT Kanpur %
under Project FLW/DST/CS/20060225}\\%
Chennai Mathematical Institute\\%
Plot H1, SIPCOT IT Park\\%
Padur PO, Siruseri, India, 603103\\%
{\tt [email protected]}
}
\date{}
\begin{document}
\maketitle
\begin{abstract}
We give an $N\cdot \log N\cdot 2^{O(\log^*N)}$ time algorithm to
multiply two $N$-bit integers that uses modular arithmetic for
intermediate computations instead of arithmetic over complex numbers
as in F\"{u}rer's algorithm, which also has the same and so far the
best known complexity. The previous best algorithm using modular
arithmetic (by Sch{\"{o}}nhage and Strassen) has complexity $O(N \cdot
\log N \cdot \log\log N)$. The advantage of using modular arithmetic
as opposed to complex number arithmetic is that we can completely
evade the task of bounding the truncation error due to finite
approximations of complex numbers, which makes the analysis relatively
simple. Our algorithm is based upon F\"{u}rer's algorithm, but uses
FFT over multivariate polynomials along with an estimate of the least
prime in an arithmetic progression to achieve this improvement in the
modular setting. It can also be viewed as a $p$-adic version of
F\"{u}rer's algorithm.
\end{abstract}
\section{Introduction}
Computing the product of two $N$-bit integers is nearly a ubiquitous
operation in algorithm design. Being a basic arithmetic operation, it
is no surprise that multiplications of integers occur as intermediate
steps of computation in algorithms from every possible domain of
computer science. But seldom do the complexity of such multiplications
influence the overall efficiency of the algorithm as the integers
involved are relatively small in size and the multiplications can
often be implemented as fast hardware operations. However, with the
advent of modern cryptosystems, the study of the bit complexity of
integer multiplication received a significant impetus. Indeed, large
integer multiplication forms the foundation of many modern day
public-key crystosytems, like RSA, El-Gamal and Elliptic Curve
crytosystems. One of the most notable applications is the RSA
cryptosystem, where it is required to multiply two primes that are
hundreds or thousands of bits long. The larger these primes the harder
it is to factor their product, which in turn makes the RSA extremely
secure in practice.
In this paper, our focus is more on the theoretical aspects of integer
multiplication, it being a fundamental problem in its own right. This
is to say, we will be concerned with the asymptotic bit complexity of
multiplying two $N$-bit integers with little emphasis on optimality in
practice. We begin with a brief account of earlier work on integer
multiplication algorithms.
\subsection{Previous Work}
The naive approach to multiply two $N$-bit integers leads to an
algorithm that uses $O(N^2)$ bit operations. Karatsuba \cite{K63}
showed that some multiplication operations of such an algorithm can be
replaced by less costly addition operations which reduces the overall
running time of the algorithm to $O(N^{\log_23})$ bit
operations. Shortly afterwards, this result was improved by Toom
\cite{T63} who showed that for any $\varepsilon>0$, integer
multiplication can be done in $O(N^{1+\varepsilon})$ time. This led to
the question as to whether the time complexity can be improved further
by replacing the term $O(N^{\epsilon})$ by a poly-logarithmic
factor. In a major breakthrough, Sch\"{o}nhage and
Strassen~\cite{SS71} gave two efficient algorithms for multiplying
integers using fast polynomial multiplication. One of the algorithms
achieved a running time of $O(N\cdot \log N\cdot \log\log N \cdots
2^{O(\log^*N)})$ using arithmetic over complex numbers (approximated
to suitable precisions), while the other used arithmetic modulo
carefully chosen integers to improve the complexity further to
$O(N\cdot \log N\cdot \log\log N)$ bit operations. The modular
algorithm remained the best for a long period of time until a recent
remarkable result by F\"{u}rer \cite{F07} (see also
\cite{F09}). F\"{u}rer gave an algorithm that uses arithmetic over
complex numbers and runs in $N\cdot\log N\cdot 2^{O(\log^\ast N)}$
time. Till date this is the best time complexity known for integer
multiplication and indeed our result is inspired by F\"{u}rer's
algorithm.
Further details on other approaches and enhancements to previous
integer multiplication algorithms can be found in \cite{F09}.
\subsection{The Motivation}
Sch\"{o}nhage and Strassen introduced two seemingly different
approaches to integer multiplication -- using complex and modular
arithmetic. F\"{u}rer's algorithm improves the time complexity in the
complex arithmetic setting by cleverly reducing some costly
multiplications to simple shift operations. However, the algorithm
needs to approximate the complex numbers to certain precisions during
computation. This introduces the added task of bounding the total
truncation errors in the analysis of the algorithm. On the contrary,
in the modular setting the error analysis is virtually absent or
rather more implicit, which in turn simplifies the overall
analysis. In addition, modular arithmetic gives a discrete approach to
a discrete problem like integer multiplication. Therefore, it seems
natural to ask whether we can achieve a similar improvement in time
complexity of this problem in the modular arithmetic setting. In this
work, we answer this question affirmatively. We give an $N\cdot
\log{N}\cdot 2^{O(\log^{*}{N})}$ time algorithm for integer
multiplication using only modular arithmetic, thus matching the
improvement made by F\"{u}rer.
\subsection*{Overview of our result}
As is the case in both Sch\"{o}nhage-Strassen's and F\"{u}rer's
algorithms, we start by reducing the problem to polynomial
multiplication over a ring $\mathcal{R}$ by properly encoding the
given integers. Polynomials can be multiplied efficiently using
Discrete Fourier Transforms (DFT). However, in order that we are able
to use Fast Fourier Transform (FFT), the ring $\mathcal{R}$ should
have some special roots of unity. For instance, to multiply two
polynomials of degree less than $M$ using FFT, we require a
\emph{principal} $2M$-th root of unity (see
Definition~\ref{def-principal-root} for principal roots). One way to
construct such a ring in the modular setting is to consider rings of
the form $\mathcal{R} = \Z/(2^M + 1) \Z$ as in Sch\"{o}nhage and
Strassen's work ~\cite{SS71}. In this case, the element $2$ is a
$2M$-th principal root of unity in $\mathcal{R}$. This approach can be
equivalently viewed as attaching an `artificial' root to the ring of
integers. However, this makes the size of $\mathcal{R}$ equal to $2^M$
and thus a representation of an arbitrary element in $\mathcal{R}$
takes $M$ bits. This means an $N$-bit integer is encoded as a
polynomial of degree $M$ with every coefficient about $M$ bits long,
thereby making $M \approx \sqrt{N}$ as the optimal choice. Indeed, the
choice of such an $\mathcal{R}$ is the basis of Sch\"{o}nhage and
Strassen's modular algorithm in which they reduce multiplication of
$N$-bit integers to multiplication of $\sqrt{N}$-bit integers and
achieve a complexity of $O(N \cdot \log N \cdot \log \log N)$ bit
operations.
Naturally, such rings are a little too expensive in our setting. We
would rather like to find a ring whose size is bounded by some
polynomial in $M$ and which still contains a principal $2M$-th root of
unity. In fact, it is this task of choosing a suitable ring that poses
the primary challenge in adapting F\"{u}rer's algorithm and making it
work in the discrete setting.
We choose the ring to be $\mathcal{R} = \Z/p^c\Z$, for a prime $p$ and
a constant $c$ such that $p^c = \mathsf{poly}(M)$. The ring
$\Z/p^c\Z$, has a principal $2M$-th root of unity if and only if $2M$
divides $p-1$, which means that we need to find a prime $p$ from the
arithmetic progression $\inbrace{1 + i\cdot 2M}_{i>0}$. To make this
search computationally efficient, we also need the degree of the
polynomials, $M$ to be sufficiently small. This we can achieve by
encoding the integers as multivariate polynomials instead of
univariate ones. It turns out that the choice of the ring as
$\mathcal{R} = \Z/p^c\Z$ is still not quite sufficient and needs a
little more refinement. This is explained in Section \ref{sec:ring}.
The use of multivariate polynomial multiplications along with a small
base ring are the main steps where our algorithm differs from earlier
algorithms by Sch\"{o}nhage-Strassen and F\"{u}rer. Towards
understanding the notion of \emph{inner} and \emph{outer} DFT in the
context of multivariate polynomials, we also present a group theoretic
interpretation of DFT. The use of inner
and outer DFT plays a central role in both F\"{u}rer's as well as our
algorithm. Arguing along the line of F\"{u}rer \cite{F07}, we show that
repeated use of efficient computation of inner DFT's using some
special roots of unity in $\mathcal{R}$ reduces the number of
`bad multiplications' (in comparison to Sch\"{o}nhage-Strassen's algorithm)
and makes the overall process efficient, thereby leading to an $N\cdot \log{N}\cdot 2^{O(\log^{*}{N})}$
time algorithm.
\section{The Basic Setup}
\subsection{The Underlying Ring} \label{sec:ring}
Rings of the form $\mathcal{R} = \Z/(2^M + 1) \Z$ have the nice
property that multiplications by powers of $2$, the $2M$-th principal
root of unity, are mere shift operations and are therefore very
efficient. Although by choosing the ring $\mathcal{R} = \Z/p^c\Z$ we
ensure that the ring size is small, it comes with a price:
multiplications by principal roots of unity are no longer just shift
operations. Fortunately, this can be redeemed by working with rings of
the form $\mathcal{R} = \Z[\alpha]/(p^c, \alpha^m + 1)$ for some $m$
whose value will be made precise later. Elements of $\mathcal{R}$ are
thus $m-1$ degree polynomials over $\alpha$ with coefficients from
$\Z/p^c\Z$. By construction, $\alpha$ is a $2m$-th root of unity and
multiplication of any element in $\mathcal{R}$ by any power of
$\alpha$ can be achieved by shift operations --- this property is
crucial in making some multiplications in the FFT less costly (see
Section~\ref{fourier_analysis}).
Given an $N$-bit number $a$, we encode it as a $k$-variate polynomial
over $\mathcal{R}$ with degree in each variable less than $M$. The
parameters $M$ and $m$ are powers of two such that $M^k$ is roughly
$\frac{N}{\log^2N}$ and $m$ is roughly $\log{N}$. The parameter $k$
will ultimately be chosen a constant (see Section
\ref{complexity_section}). We now explain the details of this encoding
process.
\subsection{Encoding Integers into $k$-variate
Polynomials}\label{encoding_section}
Given an $N$-bit integer $a$, we first break these $N$ bits into $M^k$
blocks of roughly $\frac{N}{M^k}$ bits each. This corresponds to
representing $a$ in base $q = 2^{\frac{N}{M^k}}$. Let $a = a_0 +
\ldots + a_{M^k-1}q^{M^k - 1}$, where every $a_i < q$. The number $a$
is converted into a polynomial as follows:
\begin{enumerate}
\item Express $i$ in base $M$ as $i = i_1 + i_2M + \cdots +
i_kM^{k-1}$. \label{base_M_item}
\item Encode each term $a_iq^i$ as the monomial $a_i\cdot
X_1^{i_1}X_2^{i_1}\cdots X_k^{i_k}$. As a result, the number $a$
gets converted to the polynomial $\sum_{i=0}^{M^k - 1}
a_i\cdot X_1^{i_1}\cdots X_k^{i_k}$.
\end{enumerate}
Further, we break each $a_i$ into $\frac{m}{2}$ equal sized blocks
where the number of bits in each block is $u = \frac{2N}{M^k\cdot m}$.
Each coefficient $a_i$ is then encoded as a polynomial in $\alpha$ of
degree less than $\frac{m}{2}$. The polynomials are then padded with
zeroes to stretch their degrees to $m$. Thus, the $N$-bit number $a$
is converted to a $k$-variate polynomial $a(X)$ over
$\Z[\alpha]/(\alpha^m + 1)$.\\
Given integers $a$ and $b$, each of $N$ bits, we encode them as
polynomials $a(X)$ and $b(X)$ and compute the product polynomial. The
product $a\cdot b$ can be recovered by substituting $X_s =
q^{M^{s-1}}$, for $1\leq s\leq k$, and $\alpha = 2^u$ in the
polynomial $a(X)\cdot b(X)$. The coefficients in the product
polynomial could be as large as $M^k\cdot m\cdot 2^{2u}$ and hence it
is sufficient to do arithmetic modulo $p^c$ where $p^c > 2M^k\cdot
m\cdot 2^{2u}$. Our choice of
the prime $p$ ensures that $c$ is in fact a constant (see
Section~\ref{complexity_section}). We summarize this discussion as a lemma.
\begin{lemma}\label{lem:encoding-time}
Multiplication of two $N$-bit integers reduces to
multiplication of two $k$-variate polynomials, with degree in each
variable bounded by $M$, over the ring $\Z[\alpha]/(p^c,\alpha^m +
1)$ for a prime $p$ satisfying $p^c > 2M^k\cdot m \cdot 2^{2u}$,
where $u=\frac{2N}{M^km}$. Furthermore, the reduction can be
performed in $O(N)$ time.
\end{lemma}
\subsection{Choosing the Prime}\label{prime_section}
The prime $p$ should be chosen such that the ring $\Z/p^c\Z$ has a
\emph{principal} $2M$-th root of unity, which is required for
polynomial multiplication using FFT. A principal root of unity is
defined as follows.
\begin{definition}\label{def-principal-root}
\emph{\textsf{(Principal root of unity)}} An $n$-th root of unity
$\zeta\in \mathcal{R}$ is said to be primitive if it generates a
cyclic group of order $n$ under multiplication. Furthermore, it is
said to be principal if $n$ is coprime to the characteristic of
$\mathcal{R}$ and $\zeta$ satisfies $\sum_{i=0}^{n-1}\zeta^{ij}=0$ for
all $0< j < n$.
\end{definition}
\noindent
In $\Z/p^c\Z$, a $2M$-th root of unity is principal if and only if
$2M\mid p-1$ (see also Section~\ref{Qp_section}). As a result, we need
to choose the prime $p$ from the arithmetic progression $\inbrace{1 +
i\cdot 2M}_{i> 0}$, which is potentially the main bottleneck of our
approach. We now explain how to circumvent this problem. \\
An upper bound for the least prime in an arithmetic progression is
given by the following theorem by Linnik \cite{L44}:
\begin{theorem}\label{linnik_theorem}
\emph{\textsf{(Linnik)}} There exist absolute constants $\ell$ and $L$
such that for any pair of coprime integers $d$ and $n$, the least
prime $p$ such that $p\equiv d\bmod{n}$ is less than $\ell n^L$.
\end{theorem}
Heath-Brown \cite{B92} showed that the \emph{Linnik constant} $L\leq
5.5$ (a recent work by Xylouris \cite{X09} showed that $L \leq
5.2$). Recall that $M$ is chosen such that $M^k$ is
$O\inparen{\frac{N}{\log^2N}}$. If we choose $k=1$, that is if we use
univariate polynomials to encode integers, then the parameter $M =
O\inparen{\frac{N}{\log^2N}}$. Hence the least prime $p\equiv
1\pmod{2M}$ could be as large as $N^L$. Since all known deterministic
sieving procedures take at least $N^L$ time this is clearly infeasible
(for a randomized approach see Section~\ref{ERH_section}). However, by
choosing a larger $k$ we can ensure that the least prime $p\equiv
1\pmod{2M}$ is $O(N^\varepsilon)$ for some constant $\varepsilon <
1$. Since primality testing is in deterministic polynomial\footnote{a
subexponential algorithm would suffice} time\cite{AKS04}, we can find the least prime $p\equiv 1 \pmod{2M}$ in $o(N)$
time.
\begin{lemma}\label{prime_time}
If $k$ is any integer greater than $L+1$, then $M^L =
O\inparen{N^{\frac{L}{L+1}}}$ and hence the least prime $p\equiv
1\pmod{2M}$ can be found in $o(N)$ time.
\end{lemma}
\subsubsection*{Choosing the Prime Randomly}\label{ERH_section}
To ensure that the search for a prime $p\equiv 1\pmod{2M}$ does not
affect the overall time complexity of the algorithm, we considered
multivariate polynomials to restrict the value of $M$; an alternative
is to use randomization.
\begin{proposition} \label{prop:randomprime}
Assuming ERH, a prime $p\equiv 1\pmod{2M}$ can be computed by a
randomized algorithm with expected running time $\tilde{O}(\log^3 M)$.
\end{proposition}
\begin{proof}
Titchmarsh \cite{Titchmarsh} (see also Tianxin \cite{Tianxin})
showed, assuming ERH, that the number of primes less than $x$ in the
arithmetic progression $\{ 1 + i \cdot 2M\}_{i > 0}$ is given by,
\begin{equation*}
\pi(x,2M) = \frac{Li(x)}{\varphi(2M)} + O(\sqrt{x} \log x)
\end{equation*}
for $2M \leq \sqrt{x} \cdot (\log x)^{-2}$, where $Li(x) =
\Theta(\frac{x}{\log x})$ and $\varphi$ is the Euler totient
function. In our case, since $M$ is a power of two, $\varphi(2M) = M$,
and hence for $x \geq 4M^2 \cdot \log^6 M$, we have $\pi(x, 2M) =
\Omega\inparen{\frac{x}{M\log x}}$. Therefore, for an $i$ chosen
uniformly randomly in the range $1 \leq i \leq 2M \cdot \log^6 M$, the
probability that $i\cdot 2M + 1$ is a prime is at least $\frac{d}{\log
x}$ for a constant $d$. Furthermore, primality test of an $O(\log
M)$ bit number can be done in $\tilde{O}(\log^2 M)$ time using
Rabin-Miller primality test \cite{M76, R80}. Hence, with $x = 4M^2
\cdot \log^6 M$, a suitable prime for our algorithm can be found in
expected $\tilde{O}(\log^3 M)$ time.
\end{proof}
\noindent \textbf{Remark - } We prefer to use Linnik's theorem (Theorem
\ref{linnik_theorem}) instead of Proposition \ref{prop:randomprime}
while choosing the prime in the progression $\{ 1 + i \cdot 2M\}_{i > 0}$
so as to make the results in this paper independent of the ERH and the usage
of random bits.
\subsection{Finding the Root of Unity}\label{root_section}
We require a principal $2M$-th root of unity $\rho(\alpha)$ in
$\mathcal{R}$ to compute the Fourier transforms. This root
$\rho(\alpha)$ should also have the property that its
$\inparen{\frac{M}{m}}$-th power is $\alpha$, so as to make some
multiplications in the FFT efficient (see Section
\ref{fourier_analysis}). The root $\rho(\alpha)$ can be computed by
interpolation in a way similar to that in F\"{u}rer's algorithm
\cite[Section 3]{F07}, except that we need a principal $2M$-th root of
unity $\omega$ in $\Z/p^c\Z$ to start with.
To obtain such a root, we first obtain a $2M$-th root of unity
$\omega_1$ in $\Z/p\Z$. A generator $\zeta$ of $\mathbb{F}_p^{\times}$
can be computed by brute force, as $p$ is sufficiently small, and
$\omega_1 = \zeta^{(p-1)/2M}$ is a principal $2M$-th root of unity in $\Z/p\Z$.
A principal $2M$-root of unity
$\omega_1$ must be a root of the polynomial $f(x) = x^M+1$ in $\Z/p\Z$. Having obtained $\omega_1$,
we use Hensel Lifting \cite[Theorem 2.23]{Zuckerman}.
\begin{lemma}[\textsf{Hensel Lifting}]
Let $\omega_s$ be a root of $f(x) = x^M + 1$ in $\Z/p^s\Z$. Then
there exists a unique root $\omega_{s+1}$ in $\Z/p^{s+1}\Z$ such
that $\omega_{s+1}\equiv \omega_s\pmod{p^s}$ and $f(\omega_{s+1}) =
0\pmod{p^{s+1}}$. This unique root is given by $\omega_{s+1} =
\omega_s - \frac{f(\omega_s)}{f'(\omega_s)}$.\end{lemma}
%%% HANDLING TODO PRES 5
\noindent
It is clear from the above lemma that we can compute a $2M$-th root of
unity $\omega = \omega_c$ in $\Z/p^c\Z$. We will need the following
well-known and useful fact about principal roots of unity in any ring.
\begin{lemma}\cite[Lemma 2.1]{F09}\label{lem:Furer-principal-root}
If $M$ is a power of $2$ and $\omega^{M} = -1$ in an arbitrary ring
$\mathcal{R}$. If $M$ is relatively prime to the characteristic of
$\mathcal{R}$, then $\omega$ is a principal $2M$-th root of unity in
$\mathcal{R}$.
\end{lemma}
\noindent
Hence it follows that the root $\omega$ of $f(x) = x^M + 1$ in
$\Z/p^c\Z$ is a principal $2M$-th root of unity in $\Z/p^c\Z$.
Furthermore, $\zeta^{(p-1)/2M} = \omega_1 \equiv \omega\bmod p$. Since
$\zeta$ is a generator of $\mathbb{F}_p^\times$, different powers of
$\zeta$ must generate the group $\mathbb{F}_p^\times$ and hence must
be distinct modulo $p$. Hence it follows that different powers of
$\omega$ must be distinct modulo $p$ as well. Therefore, the
difference between any two of them is a unit in $\Z/p^c\Z$ and this
makes the following
interpolation feasible in our setting. \\
\paragraph{Finding $\rho(\alpha)$ from $\omega$:} Since $\omega$ is a
principal $2M$-th root of unity, $\gamma = \omega^{\frac{2M}{2m}}$ is
a principal $2m$-th root of unity in $\Z/p^c\Z$. Notice that,
$\alpha^m + 1$ uniquely factorizes as, $\alpha^m + 1 = (\alpha -
\gamma)(\alpha - \gamma^3) \ldots (\alpha - \gamma^{2m-1})$. The
ideals generated by $(\alpha - \gamma^i)$ and $(\alpha - \gamma^j)$
are mutually coprime as $\gamma^i -\gamma^j$ is a unit for $i\neq j$
and is contained in the ideal generated by $(\alpha - \gamma^i)$ and
$(\alpha - \gamma^j)$. Therefore, using Chinese Remaindering,
$\alpha$ has the direct sum representation $(\gamma, \gamma^3, \ldots,
\gamma^{2m-1})$ in $\mathcal{R}$. Since we require
$\rho(\alpha)^{\frac{2M}{2m}} = \alpha$, it is sufficient to choose a
$\rho(\alpha)$ whose direct sum representation is $(\omega, \omega^3,
\ldots, \omega^{2m-1})$. Now use Lagrange's formula to interpolate
$\rho(\alpha)$ as,
\begin{equation*}
\rho(\alpha) = \sum_{i=1, \text{ } i \text{ odd}}^{2m-1}{\omega^i \cdot \prod_{j=1, \text{ } j \neq i, \text{ } j \text{ odd}}^{2m-1}{\frac{\alpha - \gamma^j}{\gamma^i - \gamma^j}}}
\end{equation*}
The inverses of the elements $\gamma^i - \gamma^j$ in $\Z/p^c\Z$ can
be easily computed in $\text{poly}(\log p)$ time. It is also clear
that $\rho(\alpha)^M$, in the direct-sum representation, is
$(\omega^M, \omega^{3M}, \cdots, \omega^{(2m -1)M})$ which is the element $-1$
in $\R$. Hence $\rho(\alpha)$ is
a principal $2M$-th root of unity (by Lemma~\ref{lem:Furer-principal-root}).\\
Besides finding a generator $\zeta$ of $\Z/p\Z$ by brute-force (which
can be performed in $O(p)$ time), all other computations can be done
in $\text{poly}(\log p)$ time. We summarize this as a lemma.
\begin{lemma}\label{lem:root-time}
A principal $2M$-th root of unity $\rho(\alpha)\in \R$ such that
$\rho(\alpha)^{2M/2m} = \alpha$ can be computed in deterministic
time $O(p \cdot \mathrm{poly}(\log(p)))$ (which is $o(N)$ if $p =
o(N)$).
\end{lemma}
% \subsection*{old}
% We require a principal $2M$-th root of unity $\rho(\alpha)$ in
% $\mathcal{R}$ to compute the Fourier transforms. This root
% $\rho(\alpha)$ should also have the property that its
% $\inparen{\frac{M}{m}}$-th power is $\alpha$, so as to make some
% multiplications in the FFT efficient (see Section
% \ref{fourier_analysis}). The root $\rho(\alpha)$ can be computed by
% interpolation in a way similar to that in F\"{u}rer's algorithm
% \cite[Section 3]{F07}, except that we need a principal $2M$-th root of
% unity $\omega$ in $\Z/p^c\Z$ to start with. To obtain such a root, we
% first obtain a $(p-1)$-th root of unity $\zeta$ in $\Z/p^c\Z$ by
% lifting a generator of $\mathbb{F}_p^{\times}$. The
% $\inparen{\frac{p-1}{2M}}$-th power of $\zeta$ gives us the required
% $2M$-th root of unity $\omega$. A generator of $\mathbb{F}_p^{\times}$
% can be computed by brute force, as $p$ is sufficiently small. Having
% obtained a generator, we use Hensel Lifting \cite[Theorem
% 2.23]{Zuckerman}.
% \begin{lemma}[\textsf{Hensel Lifting}]
% Let $\zeta_s$ be a primitive $(p-1)$-th root of unity in
% $\Z/p^s\Z$. Then there exists a unique primitive $(p-1)$-th root of
% unity $\zeta_{s+1}$ in $\Z/p^{s+1}\Z$ such that $\zeta_{s+1}\equiv
% \zeta_s\pmod{p^s}$. This unique root is given by $\zeta_{s+1} =
% \zeta_s - \frac{f(\zeta_s)}{f'(\zeta_s)}$ where $f(x) = x^{p-1} - 1$.
% \end{lemma}
% \noindent It can be shown that the root $\zeta$ in $\Z/p^c\Z$ thus
% obtained by lifting is principal. Furthermore, different powers of
% $\zeta$ are distinct modulo $p$. Therefore, the difference between any
% two of them is a unit in $\Z/p^c\Z$ and this makes the following
% interpolation feasible in our setting.
% \paragraph{Finding $\rho(\alpha)$ from $\omega$:} Since
% $\omega$ is a principal $2M$-th root of unity, $\gamma =
% \omega^{\frac{2M}{2m}}$ is a principal $2m$-th root of unity in
% $\Z/p^c\Z$. Notice that, $\alpha^m + 1$ uniquely factorizes as,
% $\alpha^m + 1 = (\alpha - \gamma)(\alpha - \gamma^3) \ldots (\alpha -
% \gamma^{2m-1})$, and the ideals generated by $(\alpha - \gamma^i)$ in
% $\mathcal{R}$ are mutually coprime as $\gamma^i - \gamma^j$ is a unit
% for $i \neq j$. Therefore, using Chinese Remaindering, $\alpha$ has
% the direct sum representation $(\gamma, \gamma^3, \ldots,
% \gamma^{2m-1})$ in $\mathcal{R}$. Since we require
% $\rho(\alpha)^{\frac{2M}{2m}} = \alpha$, it is sufficient to choose a
% $\rho(\alpha)$ whose direct sum representation is $(\omega, \omega^3,
% \ldots, \omega^{2m-1})$. Now use Lagrange's formula to interpolate
% $\rho(\alpha)$ as,
% \begin{equation*}
% \rho(\alpha) = \sum_{i=1, \text{ } i \text{ odd}}^{2m-1}{\omega^i \cdot \prod_{j=1, \text{ } j \neq i, \text{ } j \text{ odd}}^{2m-1}{\frac{\alpha - \gamma^j}{\gamma^i - \gamma^j}}}
% \end{equation*}
% The inverses of the elements $\gamma^i - \gamma^j$ can be easily computed in $\Z/p^c\Z$.
\section{Fourier Transform}
\subsection{Inner and Outer DFT} \label{sec:inoutDFT}
Suppose that $a(x) \in \mathcal{S}[x]$ is a polynomial of degree less
than $2M$, where $\mathcal{S}$ is a ring containing a $2M$-th principal
root of unity $\rho$. Let us say that we want to compute the
$2M$-point DFT of $a(x)$ using $\rho$ as the root of unity. In other
words, we want to compute the elements $a(1), a(\rho), \ldots,
a(\rho^{2M-1})$ is $\mathcal{S}$. This can be done in two steps.
\paragraph{Step $1$:}
Compute the following polynomials using $\alpha = \rho^{2M/2m}$.
\begin{eqnarray*}
a_0(x) &=& a(x) \mod (x^{2M/2m} - 1) \\
a_1(x) &=& a(x) \mod (x^{2M/2m} - \alpha) \\
&\vdots& \\
a_{2m-1}(x) &=& a(x) \mod (x^{2M/2m} - \alpha^{2m-1}),
\end{eqnarray*}
where $\deg(a_j(x)) < \frac{2M}{2m}$ for all $0 \leq j < 2m$.
\paragraph{Step $2$:}
Note that, $a_j(\rho^{k \cdot 2m + j}) = a(\rho^{k \cdot 2m + j})$ for
every $0 \leq j < 2m$ and $0 \leq k < \frac{2M}{2m}$. Therefore, all
we need to do to compute the DFT of $a(x)$ is to evaluate the
polynomials $a_j(x)$ at appropriate powers of $\rho$.\\
\noindent The idea is to show that both Step $1$ and Step $2$ can be
performed by computation of some `smaller' DFTs. Let us see how.
\paragraph{Performing Step $1$:}
The crucial observation here is the following. Fix an integer $\ell$
in the range $[0, \frac{2M}{2m} - 1]$. Then the $\ell^{th}$
coefficients of $a_0(x), a_1(x), \ldots, a_{2m-1}(x)$ are exactly
$e_\ell(1), e_\ell(\alpha), \ldots, e_\ell(\alpha^{2m-1})$,
respectively, where $e_\ell(y)$ is the polynomial,
\begin{equation*}
e_\ell(y) = \sum_{j=0}^{2m-1}{a_{j \cdot \frac{2M}{2m} + \ell} \cdot y^j}.
\end{equation*}
But then, finding $e_\ell(1), e_\ell(\alpha), \ldots,
e_\ell(\alpha^{2m-1})$ is essentially computing the $2m$-point DFT of
$e_\ell(y)$ using $\alpha$ as the $2m^{th}$ root of unity. Therefore,
all we need to do to find $a_0(x), \ldots, a_{2m-1}(x)$ is to compute
the DFTs of $e_\ell(y)$ for all $0 \leq \ell < \frac{2M}{2m}$. These
$\frac{2M}{2m}$ many $2m$-point DFTs are called the \emph{inner} DFTs.
\paragraph{Performing Step $2$:} In order to find $a_j(\rho^{k \cdot
2m + j})$, for $0 \leq k < \frac{2M}{2m}$ and a fixed $j$, we first
compute the polynomial $\tilde{a}_j(x) = a_j(x \cdot \rho^j)$ followed
by a $\frac{2M}{2m}$-point DFT of $\tilde{a}_j(x)$ using $\rho^{2m}$
as the root of unity. These $2m$ many $\frac{2M}{2m}$-point DFTs ($j$
running from $0$ to $2m - 1$) are called the \emph{outer} DFTs. The
polynomials $\tilde{a}_j(x)$ can be computed by multiplying the
coefficients of $a_j(x)$ by suitable powers of $\rho$. Such
multiplications are termed as \emph{bad} multiplications (as they would result in recursive calls to integer
multiplication). \\
\noindent The above discussion is summarized in the following lemma.
\begin{lemma} \label{lem:inoutDFT} \emph{\textsf{(DFT time = Inner
DFTs + Bad multiplications + Outer DFTs)}} \\ Time taken to
compute a $2M$-point DFT over $\mathcal{S}$ is sum of:
\begin{enumerate}
\item Time taken to compute $\frac{2M}{2m}$ many $2m$-point inner DFTs over $\mathcal{S}$ using $\alpha$ as the $2m$-th root of unity.
\item Time to do $2M$ multiplications in $\mathcal{S}$ by powers of
$\rho$ (bad multiplications).
\item Time taken to compute $2m$ many $\frac{2M}{2m}$-point outer DFTs over $\mathcal{S}$ using $\rho^{2m}$ as the $\frac{2M}{2m}$-th root of unity.
\end{enumerate}
\end{lemma}
\subsection{Analysis of the FFT}\label{fourier_analysis}
We are now ready to analyse the complexity of multiplying the two
$k$-variate polynomials $a(X)$ and $b(X)$ (see Section
\ref{encoding_section}) using Fast Fourier Transform. Treat $a(X)$ and
$b(X)$ as univariate polynomials in variable $X_k$ over the ring
$\mathcal{S} = \mathcal{R}[X_1, \ldots, X_{k-1}]$. We write $a(X)$ and
$b(X)$ as $a(X_k)$ and $b(X_k)$, respectively, where $\deg(a(X_k))$
and $\deg(b(X_k))$ are less than $M$. Multiplication of $a(X)$ and
$b(X)$ can be thought of as multiplication of the univariates $a(X_k)$
and $b(X_k)$ over $\mathcal{S}$. Also note that, the root
$\rho(\alpha)$ (constructed in Section \ref{root_section}) is a
primitive $2M$-th root of unity in $\mathcal{S} \supset
\mathcal{R}$. Denote the multiplication complexity of $a(X_k)$ and
$b(X_k)$ by $\mathcal{F}(2M, k)$.
Multiplication of $a(X_k)$ and $b(X_k)$ using FFT involves computation
of three $2M$-point DFTs over $\mathcal{S}$ and $2M$ pointwise (or
componentwise) multiplications in $\mathcal{S}$. Let $\mathcal{D}(2M,
k)$ be the time taken to compute a $2M$-point DFT over
$\mathcal{S}$. By Lemma \ref{lem:inoutDFT}, the time to compute a DFT
is the sum of the time for the inner DFTs, the bad multiplications and
the outer DFTs. Let us analyse these three terms separately. We will
go by the notation in Section \ref{sec:inoutDFT}, using $\mathcal{S} =
\mathcal{R}[X_1, \ldots, X_{k-1}]$ and $\rho = \rho(\alpha)$.
\paragraph{Inner DFT time:} Computing a $2m$-point DFT requires $2m
\log (2m)$ additions in $\mathcal{S}$ and $m \log (2m)$
multiplications by powers of $\alpha$. The important observation here
is: since $\mathcal{R} = \Z[\alpha]/(p^c, \alpha^m + 1)$,
multiplication by a power of $\alpha$ with an element in $\mathcal{R}$
can be readily computed by simple cyclic shifts (with possible
negations), which takes only $O(m \cdot \log p)$ bit operations. An
element in $\mathcal{S}$ is just a polynomial over $\mathcal{R}$ in
variables $X_1, \ldots, X_{k-1}$, with degree in each variable bounded
by $M$. Hence, multiplication by a power of $\alpha$ with an element
of $\mathcal{S}$ can be done using $\mathcal{N}_{\mathcal{S}} =
O(M^{k-1} \cdot m \cdot \log p)$ bit operations. A total of $m \log
(2m)$ multiplications takes $O(m \log m \cdot
\mathcal{N}_{\mathcal{S}})$ bit operations. It is easy to see that $2m
\log (2m)$ additions in $\mathcal{S}$ also require the same order of
time.
Since there are $\frac{2M}{2m}$ many $2m$-point DFTs, the total time
spent in the inner DFTs is $O(2M \cdot \log m \cdot
\mathcal{N}_{\mathcal{S}})$ bit operations.
\paragraph{Bad multiplication time:}
Suppose that two arbitrary elements in $\mathcal{R}$ can be multiplied
using $\mathcal{M}_{\mathcal{R}}$ bit operations. Mulitplication in
$\mathcal{S}$ by a power of $\rho$ amounts to $c_{\mathcal{S}} =
M^{k-1}$ multiplications in $\mathcal{R}$. Since there are $2M$ such
bad multiplications, the total time is bounded by $O(2M \cdot
c_{\mathcal{S}} \cdot \mathcal{M}_{\mathcal{R}})$.
\paragraph{Outer DFT time:}
By Lemma \ref{lem:inoutDFT}, the total outer DFT time is $2m \cdot
\mathcal{D}\left(\frac{2M}{2m}, k \right)$.
\paragraph{Total DFT time:}Therefore, the net DFT time is bounded as,
\begin{eqnarray*}
\mathcal{D}(2M, k) &=& O\left(2M \cdot \log m \cdot \mathcal{N}_{\mathcal{S}} +
2M \cdot c_{\mathcal{S}} \cdot \mathcal{M}_{\mathcal{R}}\right) +
2m \cdot \mathcal{D}\left(\frac{2M}{2m}, k\right) \\
&=& O\left(2M \cdot \log m \cdot \mathcal{N}_{\mathcal{S}} +
2M \cdot c_{\mathcal{S}} \cdot \mathcal{M}_{\mathcal{R}}\right) \cdot
\frac{\log 2M}{ \log 2m} \\
&=& O\left(M^k \log M \cdot m \log p + \frac{M^k \log M}{\log m} \cdot
\mathcal{M}_{\mathcal{R}}\right),
\end{eqnarray*}
putting the values of $\mathcal{N}_{\mathcal{S}}$ and $c_{\mathcal{S}}$.
\paragraph{Pointwise multiplications:}
Finally, FFT does $2M$ pointwise multiplications in
$\mathcal{S}$. Since elements of $\mathcal{S}$ are $(k-1)$-variate
polynomials over $\mathcal{R}$, with degree in every variable bounded
by $M$, the total time taken for pointwise multiplications is $2M
\cdot \mathcal{F}(2M, k-1)$ bit operations.
\paragraph{Total polynomial multiplication time:}
This can be expressed as,
\begin{eqnarray} \label{eqn_FFT_complexity}
\mathcal{F}(2M, k) &=& O\left(M^k \log M \cdot m \log p + \frac{M^k \log M}{\log m} \cdot \mathcal{M}_{\mathcal{R}}\right) + 2M \cdot \mathcal{F}(2M, k-1) \nonumber \\
&=& O\left(M^k \log M \cdot m \log p + \frac{M^k \log M}{\log m} \cdot \mathcal{M}_{\mathcal{R}}\right),
\end{eqnarray}
as $k$ is a constant. \\
% \subsection*{Inverse FFT}
% \todo{Pres 9}Computing the inverse of a fourier transform with $\rho$ as the root
% of unity just amounts to yet another fourier transform with
% $\rho^{-1}$ as the root of unity. This can be seen by observing that a
% fourier transform can be thought of as a matrix multiplication. If $a(x) = a_0 + a_1 x + \cdots a_{2M-1}x^{2M-1}$ and $b_i = a(\rho^i)$ for $i=0\cdots (2M-1)$, then
% \begin{eqnarray*}
% \insquar{\begin{array}{c}b_0\\b_1\\b_2\\ \vdots \\b_{2M-1}\end{array}} & = & \insquar{\begin{array}{cccc}
% 1 & 1 & \cdots & 1\\
% 1 & \rho & \cdots & \rho^{2M-1}\\
% 1 & \rho^2 & \cdots & \rho^{2(2M-1)}\\
% \vdots & \vdots & \ddots & \vdots \\
% 1 & \rho^{2M-1}& \cdots & \rho^{(2M-1)(2M-1)}
% \end{array}}\insquar{\begin{array}{c}a_0\\a_1\\a_2\\ \vdots \\a_{2M-1} \end{array}}.
% \end{eqnarray*}
% Since $\rho$ is a principal root of unity, it is easy to see that
% \begin{eqnarray*}
% \insquar{\begin{array}{c}a_0\\a_1\\a_2\\ \vdots \\a_{2M-1}\end{array}} & = & \frac{1}{N}\cdot\insquar{\begin{array}{cccc}
% 1 & 1 & \cdots & 1\\
% 1 & \rho^{-1} & \cdots & \rho^{-(2M-1)}\\
% 1 & \rho^{-2} & \cdots & \rho^{-2(2M-1)}\\
% \vdots & \vdots & \ddots & \vdots \\
% 1 & \rho^{-(2M-1)}& \cdots & \rho^{-(2M-1)(2M-1)}
% \end{array}}\insquar{\begin{array}{c}b_0\\b_1\\b_2\\ \vdots \\b_{2M-1} \end{array}}.
% \end{eqnarray*}
% which is just another fourier transform with $\rho^{-1}$ as a root of unity instead. \\
We now present an equivalent group theoretic interpretation of the
above process of polynomial multiplication, which is a subject of
interest in itself.
\subsection{A Group Theoretic Interpretation}
A convenient way to study polynomial multiplication is to interpret it
as multiplication in a \emph{group algebra}.
\begin{definition}
\emph{\textsf{(Group Algebra)}}
Let $G$ be any group. The \emph{group algebra} of $G$ over a ring $R$ is
the set of formal sums $\sum_{g \in G} \alpha_g g$ where $\alpha_g
\in R$ with addition defined point-wise and multiplication defined
via convolution as follows
$$ \left(\sum_g \alpha_g g\right) \left(\sum_h
\beta_h h\right) = \sum_{u}\inparen{\sum_{gh=u} \alpha_g \beta_h}u $$
\end{definition}
In this section, we study the Fourier transform over the group algebra
$R[E]$ where $E$ is an \emph{additive abelian group}. Most of this,
albeit in a different form, is well known but is provided here for
completeness \cite[Chapter 17]{Igor}.
In order to simplify our presentation, we will fix the base ring to be
$\C$, the field of complex numbers. Let $n$ be the \emph{exponent} of
$E$, that is the maximum order of any element in $E$. A similar
approach can be followed for any other base ring as long as it has a
principal $n$-th root of unity.
We consider $\C[E]$ as a vector space with basis $\{ x \}_{x \in E}$
and use the Dirac notation to represent elements of $\C[E]$ --- the
vector $\ket{x}$, $x$ in $E$, denotes the element $1 . x$ of $\C[E]$.
Multiplying univariate polynomials over $R$ of degree less than $n$
can be seen as multiplication in the group algebra $R[G]$ where $G$ is
the cyclic group of order $2n$. Say $a(x) = a_0 + a_1 x +
\cdots + a_dx^d$ and $b(x)= b_0 + b_1x \cdots + b_dx^d$ (with $d<n$)
are the polynomials we wish to multiply, they can be embedded in
$\C[\Z/2n\Z]$ as $\ket{a} = \sum_{i=0}^d a_i \ket{i}$ and $\ket{b} =
\sum_{i=0}^d b_i\ket{i}$. It is trivial to see that their product in
the group algebra is the embedding of the product of the
polynomials. Similarly, multiplying $k$-variate polynomials of degree
less than $n$ in each variable can be seen as multiplying in the group
algebra $R[G^k]$, where $G^k$ denotes the $k$-fold product group
$G\times\ldots \times G$.
\begin{definition}
\emph{\textsf{(Characters)}} Let $E$ be an additive abelian group. A
\emph{character} of $E$ is a homomorphism from $E$ to $\C^*$.
\end{definition}
An example of a character of $E$ is the trivial character, which we
will denote by $1$, that assigns to every element of $E$ the complex
number $1$. If $\chi_1$ and $\chi_2$ are two characters of $E$ then
their product $\chi_1 . \chi_2$ is defined as $\chi_1 . \chi_2(x) =
\chi_1(x) \chi_2(x)$.
\begin{proposition}\cite[Chapter 17, Theorem 1]{Igor}\label{prop:dual-isomorphism}
Let $E$ be an additive abelian group of exponent $n$. Then the
values taken by any character of $E$ are $n$-th roots of
unity. Furthermore, the characters form a \emph{multiplicative
abelian group} $\hat{E}$ which is isomorphic to $E$.
\end{proposition}
An important property that the characters satisfy is the following
\cite[Corollary 2.14]{Isaacs}.
\begin{proposition}\label{prop:schur-orthogonality}
\emph{\textsf{(Schur's Orthogonality)}}
Let $E$ be an additive abelian group. Then
\[ \sum_{x \in E} \chi(x) =%
\begin{cases}
0 & \textrm{ if $\chi \neq 1$,}\\
\# E &\textrm{ otherwise}
\end{cases} \quad \text{and}\quad
\sum_{\chi \in \hat{E}} \chi(x) =%
\begin{cases}
0 & \textrm{ if $x \neq 0$,}\\
\# E &\textrm{ otherwise.}
\end{cases}
\]
\end{proposition}
It follows from Schur's orthogonality that the collection of vectors
$\ket{\chi} = \sum_x \chi(x) \ket{x}$ forms a basis of $\C[E]$. We
will call this basis the \emph{Fourier basis} of $\C[E]$.
\begin{definition}
\emph{\textsf{(Fourier Transform)}}
Let $E$ be an additive abelian group and let $x \mapsto \chi_x$ be an
isomorphism between $E$ and $\hat{E}$. The \emph{Fourier transform}
over $E$ is the linear map from $\C[E]$ to $\C[E]$ that sends
$\ket{x}$ to $\ket{\chi_x}$.
\end{definition}
Thus, the Fourier transform is a change of basis from the point basis
$\{ \ket{x} \}_{x \in E}$ to the Fourier basis $\{
\ket{\chi_x}\}_{x\in E}$. The Fourier transform is unique only up to
the choice of the isomorphism $x \mapsto \chi_x$. This isomorphism is
determined by the choice of the principal root of unity.
It is a standard fact in representation theory that any character
$\chi$ of an abelian group satisfies $\chi_y(x) = \chi_x(y)$ for every
$x,y$. Using this and Proposition~\ref{prop:schur-orthogonality}, it is easy
to see that this transform can be inverted by the map $\ket{x} \mapsto
\frac{1}{n}\ket{\overline{\chi_x}}$. Hence the \emph{Inverse Fourier
Transform} is essentially just a Fourier transform using $x\mapsto
\overline{\chi_x}$ as the isomorphism between $E$ and $\hat{E}$.
\begin{remark}\label{rem-Fourier-inner}
Given an element $\ket{f} \in
\C[E]$, to compute its Fourier transform (or Inverse Fourier transform) it is sufficient to compute
the \emph{Fourier coefficients} $\{\braket{\chi}{f} \}_{\chi \in
\hat{E}}$.
\end{remark}
\subsubsection*{Fast Fourier Transform}
We now describe the Fast Fourier Transform for general abelian groups
in the character theoretic setting. For the rest of the section fix an
additive abelian group $E$ over which we would like to compute the
Fourier transform. Let $A$ be any subgroup of $E$ and let $B =
E/A$. For any such pair of abelian groups $A$ and $B$, we have an
appropriate Fast Fourier transformation, which we describe in the rest
of the section.
\begin{proposition}\label{prop-character-lift}
\begin{enumerate}
\item Every character $\lambda$ of $B$ can be ``lifted'' to a
character of $E$ (which will be denoted by $\tilde\lambda$ defined
as follows $\tilde\lambda(x) = \lambda(x + A)$.
\item Let $\chi_1$ and $\chi_2$ be two characters of $E$ that when
restricted to $A$ are identical. Then $\chi_1 = \chi_2 \tilde\lambda$ for
some character $\lambda$ of $B$.
\item The group $\hat{B}$ is (isomorphic to) a subgroup of $\hat{E}$
with the quotient group $\hat{E}/\hat{B}$ being (isomorphic to)
$\hat{A}$.
\end{enumerate}
\end{proposition}
\begin{proof}
It is very easy to check that $\tilde\lambda(x) = \lambda(x + A)$ is indeed
a homomorphism from $E$ to $\C$. This therefore establishes that
$\hat{B}$ is a subgroup of $\hat{E}$.
\medskip As for the second, define the map $\tilde\lambda(x) =
\frac{\chi_1(x)}{\chi_2(x)}$. It is easy to check that this is a
homomorphism from $E$ to $\C$. Then, for $x\in E$ and $a\in A$
$$
\tilde\lambda(x + a) = \frac{\chi_1(x)\chi_1(a)}{\chi_2(x)\chi_2(a)} =
\frac{\chi_1(x)}{\chi_2(x)} = \tilde\lambda(x)
$$
And hence $\tilde\lambda$ is equal over cosets over $A$ in $E$ and hence $\chi$
is indeed a homomorphism from $B$ to $\C$.
\medskip The third part follows from
Proposition~\ref{prop:dual-isomorphism} and the fact that any quotient
group of a finite abelian group is isomorphic to a subgroup.
% For the third, every character $\chi\in \hat{E}$ can be restricted
% to $A$ to get a character $\phi\in \hat{A}$. Therefore, the
% restriction map is a natural homomorphism from $\hat{E}$ to
% $\hat{A}$. It suffices to show that this homomorphism is surjective
% and the kernel is $\hat{B}$. Suppose $\varphi$ is an arbitrary
% character of $A$. Let $R = \inbrace{x_b}_{b\in B}$ be a set of coset
% representatives of $A$ in $E$. Then every element $x\in E$ can be
% uniquely written as $x_b + a$ where $x_b\in R$ and $a\in
% A$. Consider the following map:
% $$
% \chi(x_b + a) = \varphi(a)
% $$ It is easy to verify that this is indeed a character of $E$, whose
% restriction to $A$ is $\varphi$. Therefore, the restriction map is
% surjective. And if $\chi$ is in the kernel of this restriction, then
% $\chi$ and the trivial character are identical on $A$ and therefore
% $\chi = 1\cdot \lambda = \lambda \in \hat{B}$. Thus, the kernel is
% precisely $\hat{B}$ and hence $\hat{A}$ is the quotient of $\hat{E}$
% and $\hat{B}$.
\end{proof}
We now consider the task of computing the Fourier transform of an
element $\ket{f} = \sum f_x \ket{x}$ presented as a list of
coefficients $\{f_x\}$ in the point basis. For this, it is sufficient
to compute the Fourier coefficients $\{\braket{\chi}{f}\}$ for each
character $\chi$ of $E$ (Remark~\ref{rem-Fourier-inner}). To describe
the Fast Fourier transform we fix two sets of cosets representatives,
one of $A$ in $E$ and one of $\hat{B}$ in $\hat{E}$ as follows.
\begin{enumerate}
\item For each $b \in B$, $b$ being a coset of $A$, fix a coset
representative $x_b \in E$ such $b = x_b + A$.
\item For each character $\varphi$ of $A$, fix a character
$\chi_\varphi$ of $E$ such that $\chi_\varphi$ restricted to $A$ is
the character $\varphi$. The characters $\{ \chi_\varphi \}$ form
(can be thought of as) a set of coset representatives of $\hat{B}$
in $\hat{E}$.
\end{enumerate}
Since $\{ x_b \}_{b \in B}$ forms a set of coset representatives, any
$\ket{f} \in \C[E]$ can be written uniquely as $\ket{f} = \sum f_{b,a}
\ket{x_b + a}$.
\begin{proposition}\label{prop-Fourier-coefficient}
Let $\ket{f} = \sum f_{b,a}\ket{x_b + a}$ be an element of $\C[E]$.
For each $b \in B$ and $\varphi \in \hat{A}$ let $\ket{f_b}\in
\C[A]$ and $\ket{f_\varphi} \in \C[B]$ be defined as
follows.
\begin{eqnarray*}
\ket{f_b} &= & \sum_{a \in A} f_{b,a} \ket {a}\\
\ket{f_\varphi} & = &\sum_{b \in B} \overline{\chi}_{\varphi}(x_b)
\braket{\varphi}{f_b} \ket{b}
\end{eqnarray*}
Then for any character $\chi = \chi_\varphi\tilde\lambda$ of $E$ the
Fourier coefficient $\braket{\chi}{f} =
\braket{\lambda}{f_\varphi}$.
\end{proposition}
\begin{proof}
$$
\braket{\chi}{f} \quad=\quad \sum_{b\in B , a\in A} \overline{\chi_\varphi
\tilde\lambda(x_b + a)} \cdot f_{b,a}
$$
\noindent
Recall that for any $\tilde\lambda$, that is a lift of a character $\lambda$ of
$B$, acts identically inside cosets of $A$ and hence $\tilde\lambda(x_b + a) = \lambda(b)$. Therefore, the above sum can be
rewritten as follows:
\begin{eqnarray*}
\sum_{b\in B , a\in A} \overline{\chi_\varphi
\tilde\lambda(x_b + a)} \cdot f_{b,a} & = & \sum_{b\in B} \sum_{a\in A}
\overline{\chi_\varphi(x_b + a)}\overline{\lambda(b)} \cdot
f_{b,a}\\
& = & \sum_{b\in B} \overline{\lambda(b)}\cdot
\overline{\chi_\varphi(x_b)}\sum_{a\in A}
\overline{\varphi(a)} f_{b,a}
\end{eqnarray*}
The inner sum over $a$ is precisely $\braket{\varphi}{f_b}$ and
therefore we have:
\begin{eqnarray*}
\braket{\chi}{f} & = & \sum_{b\in B}
\overline{\lambda(x_b)}\cdot \overline{\chi_\varphi(x_b)}
\braket{\varphi}{f_b}
\end{eqnarray*}
which can be rewritten as $\braket{\lambda}{f_\varphi}$ as claimed.
\end{proof}
We are now ready to describe the Fast Fourier transform given an
element $\ket{f} = \sum f_x \ket{x}$.
\begin{enumerate}
\item \label{step_inner_dft} For each $b \in B$ compute the Fourier
transforms of $\ket{f_b}$. This requires $\# B$ many Fourier
transforms over $A$.
\item \label{step_bad_mult}As a result of the previous step we have
for each $b \in B$ and $\varphi \in \hat{A}$ the Fourier
coefficients $\braket{\varphi}{f_b}$. Compute for each $\varphi$ the
vectors $\ket{f_\varphi} = \sum_{b \in B}
\overline{\chi}_{\varphi}(x_b) \braket{\varphi}{f_b} \ket{b}$. This
requires $\# \hat{A} . \# B = \# E$ many multiplications by roots of
unity.
\item \label{step_outer_dft} For each $\varphi \in \hat{A}$ compute
the Fourier transform of $\ket{f_\varphi}$. This requires $\#\hat{A}
= \# A$ many Fourier transforms over $B$.\label{item-Fourier-B}
\item Any character $\chi$ of $E$ is of the
form $\chi_\varphi \lambda$ for some $\varphi \in \hat{A}$ and
$\lambda \in \hat{B}$. Using
Proposition~\ref{prop-Fourier-coefficient} we have at the end of
Step~\ref{item-Fourier-B} all the Fourier coefficients
$\braket{\chi}{f} = \braket{\lambda}{f_\varphi}$.
\end{enumerate}
If the quotient group $B$ itself has a subgroup that is isomorphic to
$A$ then we can apply this process recursively on $B$ to obtain a divide and
conquer procedure to compute Fourier transform. In the standard FFT we
use $E = \Z/2^n\Z$. The subgroup $A$ is $2^{n-1}E$ which is isomorphic
to $\Z/2\Z$ and the quotient group $B$ is $\Z/2^{n-1}\Z$.
\subsubsection*{Analysis of the Fourier Transform}
Our goal is to multiply $k$-variate polynomials over $\mathcal{R}$, with the
degree in each variable less than $M$. This can be achieved by
embedding the polynomials into the algebra of the product group $E =
\inparen{\frac{\Z}{2M\cdot \Z}}^k$ and multiplying them as elements of
the algebra. Since the exponent of $E$ is $2M$, we require a principal
$2M$-th root of unity in the ring $\mathcal{R}$. We shall use the root
$\rho(\alpha)$ (as defined in Section~\ref{root_section}) for the
Fourier transform over $E$.
For every subgroup $A$ of $E$, we have a corresponding FFT. We choose
the subgroup $A$ as $\inparen{\frac{\Z}{2m\cdot \Z}}^k$ and let $B$ be
the quotient group $E/A$. The group $A$ has exponent $2m$ and $\alpha$
is a principal $2m$-th root of unity. Since $\alpha$ is a power of
$\rho(\alpha)$, we can use it for the Fourier transform over $A$. As
multiplications by powers of $\alpha$ are just shifts, this makes
Fourier transform over $A$ efficient.
Let $\mathcal{F}(M,k)$ denote the complexity of computing the Fourier transform
over $\inparen{\frac{\Z}{2M\cdot \Z}}^k$. We have
\begin{equation}
\mathcal{F}(M,k) = \inparen{\frac{M}{m}}^k \mathcal{F}(m,k) + (2M)^k
\mathcal{M}_{\mathcal{R}}+(2m)^k\mathcal{F}\inparen{\frac{M}{2m},k}
\label{first_recursive_step}
\end{equation}
where $\mathcal{M}_{\mathcal{R}}$ denotes the complexity of multiplications in
$\mathcal{R}$. The first term comes from the $\# B$ many Fourier transforms
over $A$ (Step~\ref{step_inner_dft} of FFT), the second term
corresponds to the multiplications by roots of unity
(Step~\ref{step_bad_mult}) and the last term comes from the $\# A$
many Fourier transforms over $B$ (Step~\ref{step_outer_dft}).
Since $A$ is a subgroup of $B$ as well, Fourier transforms over $B$
can be recursively computed in a similar way, with $B$ playing the
role of $E$. Therefore, by simplifying the recurrence in
Equation~\ref{first_recursive_step} we get:
\begin{equation}
\mathcal{F}(M,k) = O\inparen{\frac{M^k\log M}{m^k\log m}\mathcal{F}(m,k) +
\frac{M^k\log M}{\log m}\mathcal{M}_{\mathcal{R}}}
\label{eqn_with_Fa}
\end{equation}
\begin{lemma}\label{lem-Fmk}
$\mathcal{F}(m,k) = O(m^{k+1}\log m\cdot \log p)$
\end{lemma}
\begin{proof}
The FFT over a group of size $n$ is usually done by taking $2$-point
FFT's followed by $\frac{n}{2}$-point FFT's. This involves $O(n\log
n)$ multiplications by roots of unity and additions in base
ring. Using this method, Fourier transforms over $A$ can be computed
with $O(m^k\log m)$ multiplications and additions in $\mathcal{R}$. Since each
multiplication is between an element of $\mathcal{R}$ and a power of $\alpha$,
this can be efficiently achieved through shifting operations. This is
dominated by the addition operation, which takes $O(m\log p)$ time,
since this involves adding $m$ coefficients from $\Z/p^c\Z$.
\end{proof}
Therefore, from Equation~\ref{eqn_with_Fa},
\begin{equation*}
\mathcal{F}(M,k) = O\inparen{M^k\log M\cdot m\cdot \log p + \frac{M^k\log
M}{\log m}\mathcal{M}_{\mathcal{R}}}.
\end{equation*}
\section{Algorithm and Analysis}
\subsection{Integer Multiplication Algorithm}\label{intmult_section}
We are given two integers $a,b< 2^N$ to multiply. We fix constants $k$
and $c$ whose values are given in
Section~\ref{complexity_section}. The algorithm is as follows:
\begin{enumerate}
\item Choose $M$ and $m$ as powers of two such that $M^k \approx
\frac{N}{\log^2N}$ and $m \approx \log N$. Find the least prime
$p\equiv 1\pmod{2M}$ (Lemma~\ref{prime_time}).
\item Encode the integers $a$ and $b$ as $k$-variate polynomials
$a(X)$ and $b(X)$, respectively, over the ring $\mathcal{R} =
\Z[\alpha]/(p^c, \alpha^m + 1)$ (Section~\ref{encoding_section}).
\item Compute the root $\rho(\alpha)$ (Section~\ref{root_section}).
\item Use $\rho(\alpha)$ as the principal $2M$-th root of unity to
compute the Fourier transforms of the $k$-variate polynomials $a(X)$
and $b(X)$. Multiply component-wise and take the inverse Fourier
transform to obtain the product polynomial. (Sections
\ref{sec:inoutDFT} and \ref{fourier_analysis})
\item Evaluate the product polynomial at appropriate powers of two to
recover the integer product and return it
(Section~\ref{encoding_section}).
\end{enumerate}
\subsection{Complexity Analysis}\label{complexity_section}
The choice of parameters should ensure that the following constraints
are satisfied:
\begin{enumerate}
\item $M^k = O\inparen{\frac{N}{\log^2N}}$ and $m = O(\log
N)$.
\item $M^L = O(N^\varepsilon)$, where $L$ is the Linnik constant
(Theorem~\ref{linnik_theorem}) and $\varepsilon$ is any constant less
than $1$. Recall that this makes picking the prime by brute force
feasible (see Lemma~\ref{prime_time}).
\item $p^c > 2M^k\cdot m\cdot 2^{2u}$ where $u = \frac{2N}{M^km}$. This
is to prevent overflows during modular arithmetic (see
Section~\ref{encoding_section}).
\end{enumerate}
\noindent
It is straightforward to check that $k > L+1$ and $c > 5(k+1)$ satisfy
the above constraints. Since $L\leq 5.2$, it is sufficient to choose
$k = 7$ and $c = 42$.\\
Let $T(N)$ denote the time complexity of multiplying two $N$ bit
integers. This consists of:
\begin{itemize}
\item Time required to pick a suitable prime $p$,
\item Computing the root $\rho(\alpha)$,
\item Encoding the input integers as polynomials,
\item Multiplying the encoded polynomials,
\item Evaluating the product polynomial.
\end{itemize}
As argued before, the prime $p$ can be chosen in $o(N)$ time. To
compute $\rho(\alpha)$, we need to lift a generator of
$\mathbb{F}_p^{\times}$ to $\Z/p^c\Z$ followed by an interpolation. Since $c$
is a constant and $p$ is a prime of $O(\log N)$ bits, the time
required for Hensel Lifting and interpolation is $o(N)$.
The encoding involves dividing bits into smaller blocks, and
expressing the exponents of $q$ in base $M$
(Section~\ref{encoding_section}) and all these take $O(N)$ time since
$M$ is a power of $2$. Similarly, evaluation of the product polynomial
takes linear time as well. Therefore, the time complexity is dominated
by the time taken for polynomial multiplication.
\subsubsection*{Time complexity of Polynomial Multiplication}
From Equation~\ref{eqn_FFT_complexity}, the complexity of polynomial multiplication is given by,
\[
\mathcal{F}(2M,k) = O\inparen{M^k\log M\cdot m\cdot \log p + \frac{M^k\log
M}{\log m}\cdot \mathcal{M}_{\mathcal{R}}}.
\]
\begin{proposition}\cite{scho_complex}
If $\mathcal{M}_{\mathcal{R}}$ denotes the complexity of
multiplication in $\mathcal{R}$, then $\mathcal{M}_{\mathcal{R}} =
T\left(O(\log^2{N})\right)$ where $T(x)$ denotes the complexity of
multiplying two $x$-bit integers.
\end{proposition}
\begin{proof}
Elements of $\mathcal{R}$ can be viewed as polynomials in $\alpha$
over $\Z/p^c\Z$ with degree at most $m$. Given two such polynomials
$f(\alpha)$ and $g(\alpha)$, encode them as follows: Replace
$\alpha$ by $2^d$, transforming the polynomials $f(\alpha)$ and
$g(\alpha)$ to the integers $f(2^d)$ and $g(2^d)$ respectively. The
parameter $d$ is chosen such that the coefficients of the product
$h(\alpha) = f(\alpha) g(\alpha)$ can be recovered from the product
$f(2^d)\cdot g(2^d)$. For this, it is sufficient to ensure that the
maximum coefficient of $h(\alpha)$ is less than $2^d$. Since $f$
and $g$ are polynomials of degree $m$, we would want $2^d$ to be
greater than $m\cdot p^{2c}$, which can be ensured by choosing $d =
O\left(\log{N}\right)$. The integers $f(2^d)$ and $g(2^d)$ are
bounded by $2^{md}$, which is of $O(\log^2 N)$ bits. The product
$f(\alpha)\cdot g(\alpha)$ can be decoded from the integer product
$f(2^d)\cdot g(2^d)$ by splitting the bits into $(2m-1)$ blocks of
$d$ bits each (one for each coefficient of $\alpha^i$) to obtain a
polynomial in $\Z[\alpha]$ and reducing it modulo $p^c$ and
$\alpha^m + 1$. Reducing modulo $(\alpha^m + 1)$ can be performed in
$O(md) = O(\log^2N)$ time. Dividing by $p^c$, which has $O(\log N)$
bits, can be performed in the same time as multiplying $O(\log N)$
bit integers using standard techniques (see for example
\cite[Chapter 4]{Knuth}). Since $T(N) = \Omega(N)$, we have that
$\mathcal{M}_{\mathcal{R}} = T(O(\log^2 N)) + O(\log N \cdot
T(O(\log N))) = T(O(\log^2 N))$ bit operations.
\end{proof}
Therefore, the complexity of our integer multiplication algorithm
$T(N)$ is given by,
\begin{eqnarray*}
T(N) & = & O(\mathcal{F}(2M, k)) = O\inparen{M^k\log M\cdot m\cdot \log p + \frac{M^k\log
M}{\log m} \cdot \mathcal{M}_{\mathcal{R}}}\\
& = & O\inparen{N\log N + \frac{N}{\log N\cdot \log\log N} \cdot T(O(\log^2N))}
\end{eqnarray*}
\noindent Solving the above recurrence leads to the following theorem.
\begin{theorem} \label{thm:mainthmx}
Given two $N$ bit integers, their product can be computed using
$N\cdot \log N\cdot 2^{O(\log^*N)}$ bit operations.
\end{theorem}
\subsubsection*{Performing on multi-tape turing machines}
The upper-bound presented in Theorem \ref{thm:mainthmx} holds for
multi-tape turing machines. The only part of the algorithm that
warrants an explanation is regrouping of the terms in preparation for
the inner and outer DFTs. For the inner DFT, we are given $\ket{f} =
\sum_{a,b} f_{b,a}\ket{x_b + a}$ and we wish to write down $\ket{f_b}
= \sum_{a} f_{b,a}\ket{a}$ for each $b\in B$. The following discussion
essentially outlines how this can be performed on a multi-tape turing
machine using $O(N\log m)$ bit operations. (Recall that the group $E =
(\Z/2M\Z)^k$ and $A = (\Z/2m\Z)^k$ and both $M$ and $m$ are powers of
$2$). \\
We may assume that the coefficients are listed according to the
natural lexicographic order of $(\Z/2M\Z)^k$. To ease the
presentation, we first present the approach for $k=1$. It would be
straightforward to generalize this to larger $k$ by repeated
applications of this approach.
Given as input is a sequence of coefficients $f_g$ for each $g\in
\tilde{E} = \Z/2M\Z$ in the natural lexicographic order on one of the
tapes of the turing machine. We can then ``shuffle'' the input tape
using two passes to order the coefficients according to
$\inbrace{0,M,1,M+1,\dots, M-1,2M-1}$ (by copying the first half with
appropriate blanks, and copying the second half on the second
pass). This results in regrouping the inputs as various cosets of
$\Z/2\Z$. By considering two successive elements as a block and
repeating the shuffling, we obtain various cosets of
$Z/4\Z$. Repeating this process $\log(2m)$ times regroups the elements
according to various cosets of $\Z/2m\Z$.\\
Suppose that $E = (\Z/2M\Z)^k$ and that the coefficients are ordered
according to the natural lexicographic order. By repeating the same
shuffling process for $\log(2m)$ steps, the coefficients are regrouped
according to the cosets of $\inbrace{0}^{k-1} \times (\Z/2m\Z)$ and
hence there are $M/m$ many groups. By considering every $2m$
successive coefficients as one block, each of the $M/m$ groups can
be thought of as coefficients ordered according to the natural
lexicographic order of $(\Z/2M\Z)^{k-1}$. By repeating this for
$(k-1)$ more steps, we can reorder the coefficients according to the
cosets of $(\Z/2m\Z^k)$.
The procedure totally requires $k\log(2m)$ passes over the tapes and
hence can be performed in $O(N\log m)$ time on a $3$-tape\footnote{an
input tape, an output tape, and a third tape for a counter} turing
machine. With the above discussion, it can be easily seen that the
upper-bound holds for multi-tape turing machines.
\begin{theorem}
Given two $N$ bit integers, their product can be computed using
$N\cdot \log N\cdot 2^{O(\log^*N)}$ bit operations on a multi-tape turing machine.
\end{theorem}
\section{A Comparison with F\"{u}rer's Algorithm} \label{Qp_section}
Our algorithm can be seen as a $p$-adic version of F\"{u}rer's integer
multiplication algorithm, where the field $\C$ is replaced by $\Q_p$,
the field of $p$-adic numbers (for a quick introduction, see Baker's
online notes \cite{Baker}). Much like $\C$, where representing a
general element (say in base $2$) takes infinitely many bits,
representing an element in $\Q_p$ takes infinitely many $p$-adic
digits. Since we cannot work with infinitely many digits, all
arithmetic has to be done with finite precision. Modular arithmetic in
the base ring $\Z[\alpha]/(p^c, \alpha^m + 1)$, can be viewed as
arithmetic in the ring $\Q_p[\alpha]/(\alpha^m + 1)$ keeping a
precision of $\varepsilon = p^{-c}$.
Arithmetic with finite precision naturally introduces some errors in
computation. However, the nature of $\Q_p$ makes the error analysis
simpler. The field $\Q_p$ comes with a norm $\abs{\ \cdot\ }_p$ called
the $p$-adic norm, which satisfies the stronger triangle inequality
$\abs{x+y}_p \leq \max\inparen{\abs{x}_p, \abs{y}_p}$ \cite[Proposition
2.6]{Baker}. As a result, unlike in $\C$, the errors in computation
do not compound.\\
Recall that the efficiency of FFT crucially depends on a special
principal $2M$-th root of unity in $\Q_p[\alpha]/(\alpha^m + 1)$. Such
a root is constructed with the help of a primitive $2M$-th root of
unity in $\Q_p$. The field $\Q_p$ has a primitive $2M$-th root of
unity if and only if $2M$ divides $p-1$ \cite[Theorem
5.12]{Baker}. Also, if $2M$ divides $p-1$, a $2M$-th root can be
obtained from a $(p-1)$-th root of unity by taking a suitable power. A
primitive $(p-1)$-th root of unity in $\Q_p$ can be constructed, to
sufficient precision, using Hensel Lifting starting from a generator
of $\mathbb{F}_p^{\times}$.
\section{Conclusion}\label{conclusions_section}
As mentioned earlier, there has been two approaches to multiplying
integers - one using arithmetic over complex numbers and the other
using modular arithmetic. Using complex numbers, Sch\"{o}nhage and
Strassen \cite{SS71} gave an $O(N \cdot \log N \cdot \log\log N\ldots
2^{O(\log^* N)})$ algorithm. F\"{u}rer \cite{F07} improved this
complexity to $N\cdot\log N \cdot2^{O(\log^*N)}$ using some special
roots of unity. The other approach, that is modular arithmetic, can be
seen as arithmetic in $\Q_p$ with certain precision. A direct
adaptation of the Sch\"{o}nhage-Strassen's algorithm in the modular
setting leads to an $O(N \cdot \log N \cdot \log\log N\ldots
2^{O(\log^* N)})$ time algorithm. In this work, we show that by
choosing an appropriate prime and a special root of unity, a running
time of $N\cdot \log N \cdot 2^{O(\log^*N)}$ can be achieved through
modular arithmetic as well. Therefore, in a way, we have unified the
two paradigms. The important question that remains open is:
\begin{itemize}
\item Can $N$-bit integers be multiplied using $O(N \cdot \log N)$ bit operations?
\end{itemize}
\noindent Even an improvement of the complexity to $O(N \cdot \log N
\cdot \log^{*}N)$ operations will be a significant step forward
towards answering this question.
\section*{Acknowledgements}
We are greatly thankful to the anonymous reviewers for their detailed
comments that have improved the presentation of the paper
significantly.
\bibliography{references}
\end{document}
| {
"alphanum_fraction": 0.7036237933,
"avg_line_length": 48.8814472671,
"ext": "tex",
"hexsha": "2cd8077a5e68b8e50dad76e1500097feff82b3d8",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-11-10T22:18:56.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-11-10T22:18:56.000Z",
"max_forks_repo_head_hexsha": "246dfa730328b45b65840ebed3293e96c497aa86",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "piyush-kurur-pages/website",
"max_forks_repo_path": "contents/research/publication/Journal/2013-04-18-Fast-integer-multiplication/intMult.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "246dfa730328b45b65840ebed3293e96c497aa86",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "piyush-kurur-pages/website",
"max_issues_repo_path": "contents/research/publication/Journal/2013-04-18-Fast-integer-multiplication/intMult.tex",
"max_line_length": 183,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "246dfa730328b45b65840ebed3293e96c497aa86",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "piyush-kurur-pages/website",
"max_stars_repo_path": "contents/research/publication/Journal/2013-04-18-Fast-integer-multiplication/intMult.tex",
"max_stars_repo_stars_event_max_datetime": "2017-04-16T09:55:17.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-04-16T09:55:17.000Z",
"num_tokens": 20548,
"size": 63497
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage{textcomp}
\usepackage{listings}
\usepackage{color}
\title {%
C++ basics: Lecture 6 \\
\large Classes, class templates, auto and dynamic memory
}
\author{Code the Universe Foundation \and
Presented by: Mikdore}
\date{28th June 2020}
\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}
\lstset{frame=tb,
language=C++,
aboveskip=3mm,
belowskip=3mm,
showstringspaces=false,
columns=flexible,
basicstyle={\small\ttfamily},
numbers=none,
numberstyle=\tiny\color{gray},
keywordstyle=\color{blue},
commentstyle=\color{dkgreen},
stringstyle=\color{mauve},
breaklines=true,
breakatwhitespace=true,
tabsize=3
}
\begin{document}
\maketitle
\section{The motivation behind C++: Classes}
\section{Using templates to generalize}
\section{The 'auto' keyword: using the power of type deduction}
\section{Pointers: What are they pointing at?}
\section{CS101: The stack and the heap}
\section{New \& delete: allocating dynamic memory}
\section{Avoiding memory leaks: Why we don't use raw new}
\section{Introduction to the basis of modern C++: RAII}
\section{Using smart pointers to manage memory}
\end{document}
\documentclass{beamer}
\setbeamertemplate{navigation symbols}{}
\usetheme{Malmoe}
\usecolortheme{beaver}
%\beamertemplatenavigationsymbolsempty
\beamersetuncovermixins{\opaqueness<1>{25}}{\opaqueness<2->{15}}
%\usepackage{float}
\usepackage{amssymb}
\usepackage{wrapfig}
\usepackage{amsmath}
\usepackage[ngerman]{babel}
\usepackage[utf8]{inputenc}
\usepackage{float}
\usepackage{graphicx}
%\usepackage{wrapfig}
\usepackage{textcomp}
\usepackage{braket}
\usepackage{bbm}
\usepackage{framed}
\usepackage{bbold}
\usepackage{colortbl}
\usepackage{color}
\usepackage{ifthen}
%\usepackage{setspace}
\newcommand{\tikzfig}[2]{\begin{figure}[h]\begin{center}\input{./img/TikZ/#1.tex}\end{center}\end{figure}}
\newcommand{\tikzfigC}[2]{\begin{figure}[h]\begin{center}\input{./img/TikZ/#1.tex}\end{center}\caption{{#2}}\end{figure}}
\newcommand{\fig}[2]{\begin{figure}[h]\begin{center}\includegraphics[width = 0.5\textwidth]{./img/#1}\end{center}\caption{{#2}}\end{figure}}
\usepackage[T1]{fontenc}
\usepackage{amsthm}
\usepackage{bm}
\usepackage{amsbsy}
\usepackage{tikz,pgfplots}
\usepackage{xcolor}
\usepackage{scalefnt}
\usepackage{caption}
\usetikzlibrary{calc,arrows,external,shapes,shapes.multipart}
%\tikzexternalize[prefix=figures/]
\addto\captionsngerman{
\renewcommand{\figurename}{Figure}%
\renewcommand{\tablename}{Tab.}%
}
\setlength{\parskip}{1.5ex plus0.5ex minus0.5ex}
\setlength{\parindent}{0em}
\sloppy \frenchspacing \raggedbottom
\begin{document}
%\part{ABS}
\title{Sampling the Ising Model}
\author{Abraham Hinteregger}
\institute{University of Vienna}
\date{04.10.2013}
\titlepage
\setcounter{tocdepth}{4}
%\AtBeginSection[]{
%\begin{frame}
%\frametitle{Chapter}
%\tableofcontents[currentsection,currentsubsection,currentsubsubsection,hideothersubsections]
%\end{frame}
%}
\section{Model}
\begin{frame}\frametitle{History}
%\begin{block}{}
\begin{itemize}%[<+->]
\item Proposed by Wilhelm Lenz to his student Ernst Ising
\item 1924: \href{http://link.springer.com/content/pdf/10.1007/BF02980577.pdf}{Ernst Ising - \textit{Beitrag zur Theorie des Ferromagnetismus}\footnote{Zeitschrift für Physik, February--April 1925, Volume 31, Issue 1, pp 253--258}}
\begin{quote}
``[\ldots] the restriction of the interaction to that between neighbouring elements [\ldots] does not give rise to ferromagnetism.''
\end{quote}
\item 1936: \href{http://journals.cambridge.org/action/displayAbstract?fromPage=online\&aid=2027260}{Rudolph Peierls - \textit{On Ising's model of ferromagnetism}\footnote{Cambridge Philosophical Society 1936, Volume 32, Issue 03, Oct.}}
\begin{quote}
``[\ldots] for sufficiently low temperatures the Ising model in two [or more] dimensions shows ferromagnetism [\ldots].''
\end{quote}
\end{itemize}
%\end{block}
\end{frame}
%:TODO
%:\subsection{Phase Transitions}\begin{frame}\frametitle{Phase Transitions}
%\end{frame}%1st 2nd order - macro/microscopic differences, discontinuity
\subsection{Lattice Geometry}
\begin{frame}\frametitle{Lattice}
\tikzfigC{Ising1D}{Square Lattice in 1 dimension}
\vspace*{0.25cm}
\tikzfigC{Ising2D}{Square Lattice in 2 \alt<9-10>{and 3 }{}dimensions}
\end{frame}
\subsection{Lattice Sites}
\begin{frame}\frametitle{Lattice Sites}
\begin{itemize}
\item Each site has a state $s_i \in \{-1,+1\}$
\visible<2->{\item Assignment of states $S = (s_0, s_1, s_2,\ldots, s_{N-1})$ to the lattice sites is called a configuration
\item Therefore $2^N$ unique configurations for a lattice with N lattice sites.}
\end{itemize}
\end{frame}
\subsection{Magnetization}
\begin{frame}\frametitle{Magnetization}
\begin{itemize}
\item The magnetization of a configuration is calculated by \[m_x = m(S_x) = \sum_i^N s_i \in[-N,N]\]
\item usually the magnetization per spin is considered \[M_x =\frac{m_x}{N} \qquad\in[-1,1]\]
%:TODO Grad der Ordnung
\end{itemize}
\end{frame}
\subsection{Energy}
\begin{frame}\frametitle{Energy}
\begin{itemize}
\item Each configuration has a corresponding energy - the Hamiltonian.
\visible<1->{
\begin{equation*}E_x = E(S_x) = H(s_0, s_1,\ldots, s_{N-1})=
\alert<3>{ -J \sum_{\text{<i,j>}}s_i\cdot s_j}
\alert<2>{- h \sum_{\text{i}}s_i}
\end{equation*}
}
\end{itemize}\end{frame}
\begin{frame}
\begin{equation*}
H = -J \sum_{\text{<i,j>}}s_i\cdot s_j\qquad\in[-2NJ,2NJ]
\end{equation*}\tikzfigC{Examples}{Energy contribution (nearest neighbor interaction) of the bonds connected to the central lattice site}
\end{frame}
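\begin{frame}
\frametitle{Energy change of a single spin flip}
Flipping a single spin $s_k \rightarrow -s_k$ only changes the terms of $H$ that contain $s_k$, so (with $s_k$ taken at its original value)
\begin{equation*}
\Delta E = 2J\,s_k \sum_{j\in \mathrm{nn}(k)} s_j + 2h\,s_k
\end{equation*}
where $\mathrm{nn}(k)$ denotes the nearest neighbours of site $k$.
The energy difference needed by the update algorithms in the following therefore follows from the local neighbourhood alone, without recomputing the full Hamiltonian.
\end{frame}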
\begin{frame}
\frametitle{Boltzmann distribution}
The probability of the system being in state $S_x$ is given by the Boltzmann distribution ($\beta = 1/kT$) \[p_x = p(S_x) =\frac{ e^{-\beta E_x}}{Z}\]
where $Z$ is the partition function\[Z = \sum_i^{2^N} e^{-\beta E_{i}}\]
\end{frame}
\section*{}
\begin{frame}
\begin{itemize}
\item Calculating all $2^N$ possible configurations (or a large part of them) is only viable for small systems.
\item Sampling from random configurations (simple sampling) would lead to many high-energy (high-temperature) configurations
\end{itemize}
\visible<2->{
$\rightarrow$ \textit{importance sampling}
}
\end{frame}
\section{Markov Chain Monte Carlo}
\subsection{Markov Chain}
\begin{frame}
\frametitle{Markov Chain}
\begin{itemize}
\item Chain of iteratively created configurations $C_1, C_2, \ldots C_n$
\item Resulting configurations correspond to the desired probability distribution $p$ and span the entire state space.
\item Configuration $C_t$ only depends on $C_{t-1}$ (Markov property).
\end{itemize}
\end{frame}
\subsection{Transition probability}
\begin{frame}
\frametitle{Transitions in the Markov Chain}
\begin{align*}
&\text{Probability for being in state A: } && p_A\\
&\text{Transition probability for transition $S_A \rightarrow S_B$:} &&p_{AB}\\
\end{align*}
If the chain fulfills \textit{detailed balance} (which it must)
\begin{align*}
p_A\cdot p_{AB} &= p_B\cdot p_{BA} \\
\intertext{the following relation for the transition probability follows:}
\frac{p_{AB}}{p_{BA}} = \frac{p_B}{p_A} &= \left(\frac{Z}{Z}\right)\frac{e^{-\beta E_B}}{e^{-\beta E_A}} = e^{-\beta (E_B - E_A)} = e^{-\beta \Delta E}
\end{align*}
\end{frame}
\subsection{Metropolis--Hastings Algorithm}
\begin{frame}\frametitle{Metropolis--Hastings Algorithm}
\begin{itemize}
\item Configuration after t steps is $C_t$
\item Flip one lattice site $\rightarrow$ $C_t'$
\begin{itemize}
\item has to be chosen randomly - suitable RNG necessary
\end{itemize}
\item Calculate energy difference $\Delta E = E_t' - E_t$
\item Calculate acceptance probability $P$
\begin{equation*}
P = \operatorname{min}\left(1,e^{-\beta\cdot \Delta E}\right),\qquad \beta = 1/kT > 0
\end{equation*}
\item Generate random number $r \in [0,1[$
\begin{itemize}
\item $r<P \rightarrow C_{t+1} = C_t'$
\item $r\geq P \rightarrow C_{t+1} = C_t$
\end{itemize}
\end{itemize}
\end{frame}
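\begin{frame}[fragile]
\frametitle{Metropolis--Hastings update: minimal sketch}
A minimal C++ sketch of one lattice sweep ($J=1$, $h=0$, $k=1$, periodic boundaries); the names and data layout are illustrative and not taken from the implementation linked at the end of this talk.
{\scriptsize
\begin{verbatim}
#include <array>
#include <cmath>
#include <random>

constexpr int L = 16;                            // lattice edge length
using Lattice = std::array<std::array<int, L>, L>;

void metropolis_sweep(Lattice& s, double T, std::mt19937& rng) {
    std::uniform_int_distribution<int> site(0, L - 1);
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    for (int n = 0; n < L * L; ++n) {
        int i = site(rng), j = site(rng);        // pick a random site
        int nn = s[(i + 1) % L][j] + s[(i + L - 1) % L][j]
               + s[i][(j + 1) % L] + s[i][(j + L - 1) % L];
        double dE = 2.0 * s[i][j] * nn;          // energy change of the flip
        if (dE <= 0.0 || u01(rng) < std::exp(-dE / T))
            s[i][j] = -s[i][j];                  // accept, else keep C_t
    }
}
\end{verbatim}
}
\end{frame}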
\begin{frame}
\frametitle{Magnetization over time (MH algorithm)}
\begin{figure}[h]\begin{center}\includegraphics[width = \textwidth]{./img/MPlot.png}\end{center}\caption{Time is measured in Monte Carlo steps, not physical time.}\end{figure}
\end{frame}
\subsection{Swendsen--Wang Algorithm}
\begin{frame}\frametitle{Swendsen--Wang Algorithm}
\begin{itemize}
\item Search for clusters with equal spins
\item Generate a bond between lattice sites in such a cluster with \[p =1-e^{-2/T}\]
\item Search for clusters connected by bonds
\item Assign a random spin to each cluster.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Magnetization over time (SW algorithm)}
\begin{figure}[h]\begin{center}\includegraphics[width = \textwidth]{./img/MSW.png}\end{center}\caption{Time is measured in Monte Carlo steps, not physical time.}\end{figure}
\end{frame}
\section{Possible Errors}
\subsection{Autocorrelation}
\begin{frame}\frametitle{Autocorrelation}
\begin{itemize}
\item Similarity of a function with a ``lagged'' version of itself.
\[C_M(t) =\frac{\operatorname{cov}(M_n,M_{n+t})}{\operatorname{std}(M_n)\operatorname{std}(M_{n+t})} = \sum_i c_ie^{-t/\tau_i}\]
\end{itemize}
\begin{figure}[h]\begin{center}\includegraphics[width = 0.8\textwidth]{./img/MPlot.png}\end{center}\end{figure}
\end{frame}
\begin{frame}
\input{./img/lCorr.tex}
\[\ln\left(C_M(t)\right) \approx \ln(c_k) - \frac{t}{\tau_k} \qquad\text{for large $t$ (slowest mode $k$ dominates)}\]
\end{frame}
\subsection{Effective chain length}
\begin{frame}
\frametitle{Effective Markov- chain length}
\begin{itemize}
\item Naive error analysis leads to underestimated errors due to autocorrelation. \[\operatorname{var}(M) = \sigma_M^2 = \frac{\sum_i \left(M_i-\bar M\right)^2}{n}\]
\item A realistic error estimate accounts for autocorrelation via a smaller effective sample size $n_{\mathrm{eff}}$:\[ n_{\mathrm{eff}} = \frac{n}{2\tau_{\mathrm{int}}} \approx \frac{n}{2\sum_i c_i \tau_i}\]
\end{itemize}
\end{frame}
%\subsection{Binning}
%\begin{frame}
%\frametitle{Binning}
%\begin{itemize}
%\item Can be used to avoid calculating autocorrelation
%\item Divide samples in blocks of increasing length $k$ ($k= 2^1, 2^2, ...$)
%\end{itemize}
%\end{frame}
\subsection{Finite Size Effects}
\begin{frame}% 75x75 Swendsen Wang
\frametitle{Absolute magnetization per spin}
\begin{minipage}{0.70\textwidth}
\input{./img/TC.tex}
\end{minipage}\begin{minipage}{0.25\textwidth}
\visible<2->{\ref{plot:mcB} MC $50\times 50$}
\ref{plot:mc} MC $75\times 75$
\ref{plot:analytical} analytical
\end{minipage}
\end{frame}
%\begin{frame}
%\frametitle{Finite size effects at T = 2.3J}
%\begin{figure}[h]\begin{center}\input{./img/FSE(T=2.3).tex}\end{center}\end{figure}
%\end{frame}
\subsection{Overview}
\begin{frame}
\begin{itemize}
\item The update algorithm must be chosen according to the simulation parameters.
\item The sample size must be chosen large enough to compensate for autocorrelation.
\item The lattice size must be chosen large enough to control finite size effects, which depend on temperature.
\end{itemize}
\end{frame}
\section{Further goal}
\subsection{Ambition}
\begin{frame}
\frametitle{Ambition}
\begin{itemize}
\item Simulating flow of sediments in current
\item Needed adjustments:
\begin{itemize}
\item constant ``spin'' ratio and \ldots \only<2>{ $\rightarrow$ Kawasaki Dynamics}
\item flowing, rather than random spawning/despawning, of ``particles''
\item current in form of outer force \only<2>{ $\rightarrow$ force term in Hamiltonian}
\item more states for a lattice site \only<2>{ $\rightarrow$ Potts Model}
\end{itemize}
\end{itemize}
\end{frame}
\subsection{Kawasaki Dynamics}
\begin{frame}
\frametitle{Kawasaki Dynamics}
\begin{itemize}
\item Choose A-B bond
\item Calculate $\Delta E$ for $A-B \rightarrow B-A$
\item Accept with probability depending on energy difference
\end{itemize}
Behaviour:
\begin{itemize}
\item Spin ratio (magnetization) stays constant.
\item Different spins get sorted (if coupling constant J is positive).
\end{itemize}
\end{frame}
\subsection{Current}
\begin{frame}
\frametitle{Current}
\[H_G = -J\sum_{<i,j>}s_is_j - \sum_jh_j\sum_{\text{line}_j} s_i\]
with
\[h_j = h_1 + j\alert<2>{\frac{(h_L-h_1)}{L}}\]
\end{frame}
\subsection{Potts Model}
\begin{frame}
\frametitle{Potts Model}
\begin{itemize}
\item Generalized version of the Ising model.
\item States are not only $\pm 1$ but (discrete) angles.
\item Energy usually of the form $H = -J_c \sum_{i,j}\cos\left(\theta_i -\theta_j\right) + \ldots$
\end{itemize}
\end{frame}
\section*{}
\begin{frame}
\frametitle{Thanks for your attention}
\begin{itemize}
\item Sourcefiles and binaries on my github \href{https://github.com/oerpli/Ising}{https://github.com/oerpli/Ising}
\begin{itemize}
\item currently only tested on Windows 7 \& 8.
\item Swendsen--Wang algorithm implemented with recursive DFS -- big clusters can therefore cause a stack overflow (depending on configuration)
\end{itemize}
\end{itemize}
\end{frame}
\end{document}%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%: IGNORE
\section{Nucleation}
\subsection{What is Nucleation?}
\begin{frame}\frametitle{Nucleation}
\begin{itemize}
\item is a phase transformation process
\item phase transformation grows from small nucleus
\begin{exampleblock}{Examples}
\begin{itemize}
\item cloud chamber
\item \href{http://www.youtube.com/watch?v=pTdiTe3x0Bo}{supercooled water}
\end{itemize}
\end{exampleblock}
\end{itemize}
\end{frame}
\subsection{Homogeneous Nucleation}\begin{frame}\frametitle{Nucleation}
\begin{itemize}
\item
Homogeneous nucleation
\begin{itemize}
\item in a uniform substance
\item no nucleation until nucleus with critical size ''appears'' (due to stochastic processes)
\item higher supersaturation leads to smaller critical radius.
\item rarely occurs in nature
\end{itemize}
\item Heterogeneous Nucleation{}
\begin{itemize}
\item begins at some preferable interface and grows from there
\item much (!) more likely
\item common in nature (freezing (in most cases), bubbles in water,...)
\end{itemize}
\end{itemize}
\end{frame}
\subsection{Heterogeneous Nucleation}
\begin{frame}\frametitle{Nucleation}
\begin{itemize}
\item
Homogeneous nucleation
\begin{itemize}
\item in a uniform substance
\item no nucleation until nucleus with critical size ''appears'' (due to stochastic processes)
\item higher supersaturation leads to smaller critical radius.
\item rarely occurs in nature
\end{itemize}
\item Heterogeneous Nucleation{}
\begin{itemize}
\item begins at some preferable interface and grows from there
\item much (!) more likely
\item common in nature (freezing (in most cases), bubbles in water,...)
\end{itemize}
\end{itemize}
\end{frame}
\section{Nucleation in the Ising Model}
\subsection{Homogeneous Nucleation}
\begin{frame}{Homogeneous Nucleation in the Ising Model}
%: REVISE
\begin{itemize}
\item Necessary modifications:\begin{itemize}
\item none\\
\end{itemize}\end{itemize}\begin{itemize}
\item Problems
\begin{itemize}
\item long time until nucleus of critical size appears
\item inefficient to simulate billions of cycles until phase change takes place
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}\frametitle{Cluster size}
\fig{nsize.png}{Probability of finding a cluster of size N at different times\footnote{\href{http://www.ncbi.nlm.nih.gov/pubmed/16494425}{Pan, Rappl, Chandler, Balsara: J. Phys. Chem. B 2006}}}
\end{frame}
%: INSERT TPS AFTER HOMOGENEOUS NUCLEATION
\subsection{Transition Path Sampling}
\begin{frame}\frametitle{Transition Path Sampling (TPS) -- ``shooting'' method\footnote{\href{http://dx.doi.org/10.1063\%2F1.476378}{Dellago, Bolhuis, Chandler: Advances in Chemical Physics 123 (1998)}}}
\begin{itemize}
\item needs two stable states (A \& B)
\item path through configuration space connecting these
\item change the path a little at a random point between A and B
\item sample new path and accept if it connects A with B
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Transition Path Sampling}
\fig{tps}{First path (red), slightly changed and accepted path (blue), rejected path (green)\footnote{\href{dx.doi.org/10.1088/0953-8984/21/33/333101}{Esobedo, Borrero, Araque - J. Phys.: Condens. Matter 21 (2009)}}}
\end{frame}
\subsection{Heterogeneous Nucleation}
\begin{frame}\frametitle{Heterogeneous Nucleation}
\begin{itemize}
\item Necessary modifications:\begin{itemize}
\item handle boundaries in heterogeneous nucleation
\begin{equation*}
H(s)= \color{gray}{ -J \sum_{\text{<i,j>}}s_i\cdot s_j - h \sum_{\text{i}}s_i}\color{black} - J_s \sum_{\text{<i,j>}}^{\text{II}}s_i\cdot s_j
\end{equation*}
\item implement walls/surfaces with fixed spins
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}\frametitle{Nucleation in and out of Pores\footnote{Page, Sear - Heterogeneous Nucleation in and out of Pores PRL 97, 065701 (2006)}}
\alt<2>{\fig{upspins}{2 phases of nucleation}}{
\begin{equation*}
H(s)= { -0.8 \sum_{\text{<i,j>}}s_i\cdot s_j - 0.05 \sum_{\text{i}}s_i}\color{gray} - 0 \sum_{\text{<i,j>}}^{\text{II}}s_i\cdot s_j,\color{black}\qquad kT = 1
\end{equation*}
\begin{itemize}
\item nucleation near surfaces $10^{12}$ times faster
\item fastest in pores
\item nucleation in 2 steps
\item diversified pore sizes lead to the fastest reaction, as the probability that a pore of optimal size exists is higher
\end{itemize}}
\end{frame}
\begin{frame}\frametitle{Problems}
\begin{itemize}
\item phase transitions are rare events (with realistic values for the coupling constant, ...)
\item nonequilibrium systems, therefore TPS (transition path sampling) is not applicable.
\alert<2->{\visible<2->{\item $\rightarrow$ Forward Flux Sampling}}
\end{itemize}
\end{frame}
\subsection{Forward Flux Sampling}
\begin{frame}
\frametitle{Forward Flux Sampling}
\begin{itemize}
\item Similar to TIS (transition interface sampling - a modified TPS)\begin{itemize}
\item initial state A: $\lambda < \lambda_A = \lambda_0$
\item final state B:\quad\!\!\! $\lambda > \lambda_B = \lambda_n$
\item path has to pass every $\lambda_i$ in increasing order (can go backwards in between too) until it reaches $\lambda_n$ (B)
\end{itemize}
\item after reaching a new interface ($\lambda_{i+1}$) configuration is stored
\item stored configurations used as starting point for new trial runs
\item trial runs continued until path reaches A ($\rightarrow$ failure) or a new interface $\lambda_{i+1}$ ($\rightarrow$ success)
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{(Direct) Forward Flux Sampling\footnote{\href{iopscience.iop.org/0953-8984/21/46/463102}{Allen, Valeriani, Rein ten Wolde: 2009 J. Phys.: Condens. Matter 21}}}
\only<1>{\fig{DFFS1}{Sampling path starting in A - store configurations where the path leaves A (X)}}
\only<2>{\fig{DFFS2}{Sampling new paths from every stored configuration. Discard if path goes back to A}}
\end{frame}
%!RATE CONSTANTS
\section{Outlook}\subsection{Possible Adjustments}
\begin{frame}
\frametitle{Possible Adjustments to the Ising Model}
\begin{itemize}
\item next-nearest neighbor interaction or even higher range
\item forces from the outside, e.g. gravity
\item Multi Hit Swendsen Wang algorithm
\item Kawasaki Dynamics (alternative Metropolis algorithm with fixed state ratios and amounts)\begin{itemize}
\item choose any (A-B) bond
\item $(A-B) \rightarrow (B-A)$
\item calculate new energy
\item \ldots
\end{itemize}
\end{itemize}
\end{frame}
\section{}
\subsection{Additional Literature}
\begin{frame}
\begin{itemize}
\item Page, Sear - Heterogeneous Nucleation in and out of Pores (2006): PRL 97, 065701
\item Allen, Valeriani, Rein ten Wolde - Forward Flux Sampling for rare event simulations (2009): J. Phys.: Condens. Matter 21 (2009) 463102 (21pp)
\item Allen, Frenkel, Rein ten Wolde - Forward Flux Sampling-type schemes for simulating rare events: Efficiency analysis (2008): http://arxiv.org/abs/cond-mat/0602269v1
\item Escobedo, Borrero, Araque - Transition path sampling and forward flux sampling. Applications to biological systems 2009 J. Phys.: Condens. Matter 21 333101
\end{itemize}
\end{frame}
\subsection{Simulation}
\begin{frame}
\begin{itemize}
\item Sourcefiles and binaries on my github \href{https://github.com/oerpli/Ising2D}{https://github.com/oerpli/Ising2D}
\end{itemize}
\end{frame}
\end{document}
%-------------------------
% Resume in Latex
% Author : ASa Hsieh
% Based off of: https://github.com/sb2nov/resume
% License : MIT
%------------------------
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[hidelinks]{hyperref}
\usepackage[english]{babel}
\usepackage{tabularx}
\usepackage{fontawesome5}
\usepackage{multicol}
\usepackage{graphicx}
\setlength{\multicolsep}{-3.0pt}
\setlength{\columnsep}{-1pt}
\input{glyphtounicode}
\RequirePackage{tikz}
\RequirePackage{xcolor}
\RequirePackage{fontawesome}
\usepackage{tikz}
\usetikzlibrary{svg.path}
\definecolor{cvblue}{HTML}{0E5484}
\definecolor{black}{HTML}{130810}
\definecolor{darkcolor}{HTML}{0F4539}
\definecolor{cvgreen}{HTML}{3BD80D}
\definecolor{taggreen}{HTML}{00E278}
\definecolor{SlateGrey}{HTML}{2E2E2E}
\definecolor{LightGrey}{HTML}{666666}
\colorlet{name}{black}
\colorlet{tagline}{darkcolor}
\colorlet{heading}{darkcolor}
\colorlet{headingrule}{cvblue}
\colorlet{accent}{darkcolor}
\colorlet{emphasis}{SlateGrey}
\colorlet{body}{LightGrey}
%----------FONT OPTIONS----------
% sans-serif
% \usepackage[sfdefault]{FiraSans}
% \usepackage[sfdefault]{roboto}
% \usepackage[sfdefault]{noto-sans}
% \usepackage[default]{sourcesanspro}
% serif
% \usepackage{CormorantGaramond}
% \usepackage{charter}
% \pagestyle{fancy}
% \fancyhf{} % clear all header and footer fields
% \fancyfoot{}
% \renewcommand{\headrulewidth}{0pt}
% \renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.6in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1.19in}
\addtolength{\topmargin}{-.7in}
\addtolength{\textheight}{1.4in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{
\vspace{-4pt}\scshape\raggedright\large\bfseries
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]
% Ensure that generate pdf is machine readable/ATS parsable
\pdfgentounicode=1
%-------------------------
% Custom commands
\newcommand{\resumeItem}[1]{
\item\small{
{#1 \vspace{-0.5pt}}
}
}
\newcommand{\classesList}[4]{
\item\small{
{#1 #2 #3 #4 \vspace{-2pt}}
}
}
%\newcommand{\resumeSubheading}[4]{
% \vspace{-2pt}\item
% \begin{tabular*}{1.0\textwidth}[t]{l@{\extracolsep{\fill}}r}
% \textbf{\large#1} & \textbf{\small #2} \\
% {\large#3} & \textit{\small #4} \\
%
% \end{tabular*}\vspace{-7pt}
%}
\newcommand{\resumeSubheading}[3]{
\vspace{-2pt}\item
\begin{tabular*}{1.0\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{\large#1} & \textbf{\small #2} \\
{\large#3} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\resumeSubSubheading}[2]{
\item
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\textit{\small#1} & \textit{\small #2} \\
\end{tabular*}\vspace{-7pt}
}
\newcommand{\resumeProjectHeading}[2]{
\item
\begin{tabular*}{1.001\textwidth}{l@{\extracolsep{\fill}}r}
\small#1 & \textbf{\small #2}\\
\end{tabular*}%\vspace{-7pt}
}
\newcommand{\resumeSubItem}[1]{\resumeItem{#1}\vspace{-4pt}}
\renewcommand\labelitemi{$\vcenter{\hbox{\tiny$\bullet$}}$}
\renewcommand\labelitemii{$\vcenter{\hbox{\tiny$\bullet$}}$}
\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=0.0in, label={}]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}
\newcommand\sbullet[1][.5]{\mathbin{\vcenter{\hbox{\scalebox{#1}{$\bullet$}}}}}
%-------------------------------------------
%%%%%% RESUME STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING----------
\begin{center}
{\Huge \scshape Kai-Chung Hsieh} \\ \vspace{1pt}
Hsinchu City, Taiwan \\ \vspace{1pt}
\small \href{tel:+886972932876}{ \raisebox{-0.1\height}\faPhone\ \underline{+886-972932876} ~} \href{mailto:[email protected]}{\raisebox{-0.2\height}\faGoogle\ \underline{[email protected]}} ~
\href{https://linkedin.com/in/asa-hsieh}{\raisebox{-0.2\height}\faLinkedinSquare\ \underline{ASa Hsieh}} ~
\href{https://github.com/asahsieh}{\raisebox{-0.2\height}\faGithub\ \underline{asahsieh}} ~
\href{https://hackmd.io/@asahsieh}{\raisebox{-0.2\height}\faBook\ \underline{asahsieh}} ~
% \href{https://codeforces.com/profile/yourid}{\raisebox{-0.2\height}\faPoll\ \underline{yourid}}
\vspace{-8pt}
\end{center}
%-----------OBJECTIVE-----------
\section{OBJECTIVE}
{Digital Verification (DV) engineer with 7 years of experience working on Media Processor SoCs, seeking roles related to \textbf{CPU} development (listed in order of preference):}
\resumeItemListStart
\resumeItem{\normalsize{CPU Performance Architect \href{https://careers.google.com/jobs/results/106694213048902342/}{\raisebox{-0.1\height}\faExternalLink }}}
\vspace{-5pt}
\resumeItem{\normalsize{CPU Design Verification and Emulation Engineer \href{https://careers.google.com/jobs/results/103455667181757126/}{\raisebox{-0.1\height}\faExternalLink }}}
\vspace{-5pt}
\resumeItem{\normalsize{CPU Physical Design Implementation \href{https://careers.google.com/jobs/results/90401417669288646/}{\raisebox{-0.1\height}\faExternalLink }}}
\resumeItemListEnd
\vspace{-6pt}
% TODO
%-----------PROGRAMMING SKILLS-----------
\section{TECHNICAL SKILLS}
\begin{itemize}[leftmargin=0.15in, label={}]
\small{\item{
\textbf{\normalsize{Languages:}}{ \normalsize{SystemVerilog, Verilog, C/C++, Arm assembly, Shell scripting, Perl, Java}} \\
\textbf{\normalsize{Simulators/Tools:}}{ \normalsize{VCS, NC-Verilog, Design compiler, Verdi, SimVision}} \\
\textbf{\normalsize{Technologies/Frameworks:}}{ \normalsize{UVM, VMM, SVN, Linux}} \\
\textbf{\normalsize{Specifications:}}{ \normalsize{USB 2.0/3.0, IEEE 802.3/Ethernet, Arm AMBA AXI/AHB/APB}} \\
}}
\end{itemize}
\vspace{-4pt}
{{\textbf{\large{Currently learning}}}}
\vspace{-4pt}
\begin{itemize}[leftmargin=0.15in, label={}]
\small{\item{
{\normalsize{RISC-V assembly, Ripes simulator, Git, Linux kernel, Python}}
}}
\end{itemize}
\vspace{-14pt}
%-----------EXPERIENCE-----------
\section{WORK EXPERIENCE}
\vspace{-5pt}
\resumeSubHeadingListStart
% \resumeSubheading
% {\underline{Digital Verification Engineer}}{Hsinchu city, Taiwan}
% {CNBU, Realtek Semiconductor Corp \href{https://www.realtek.com/en/products/communications-network-ics/category/digital-home-center}{\raisebox{-0.1\height}\faExternalLink }}{Sep 2014 -- Sep 2018}
% Average performance rating: A
% \resumeItemListStart
% \resumeItem{\normalsize{About the role \textbf{and responsibilities carried out.}}}
% \resumeItemListEnd
\resumeProjectHeading
{{\textbf{\large{Communication Network(CN) BU, Realtek}} \href{https://www.realtek.com/en/products/communications-network-ics/category/digital-home-center}{\raisebox{-0.1\height}\faExternalLink }} $|$ \large{Digital Verification Engineer}}{Sep 2014 -- Sep 2021}
% description
{Average performance rating: \textbf{A}}\newline
{Promotion from junior to senior engineer: Sep 2018}
\vspace{-5pt}
\resumeItemListStart
\resumeItem{Worked on various digital home products, e.g., \textbf{set-top box}, \textbf{NAS}, etc.}
\resumeItem{I was responsible for testing high-speed peripherals using SystemVerilog VMM/UVM methodologies,\newline including \textbf{USB} from 1.0 to 3.0 for both hosts and devices, and\newline \textbf{Ethernet} supporting media-independent interfaces including MII/GMII/RMII/RGMII}
\resumeItem{Worked with the Hardware System Design group to verify ROM code for USBs,\newline including testing for hardware security to enable USB booting following mass storage protocols\newline between PC hosts and USB modules and writing USB data to different flash types or DDR}
\resumeItem{Worked as part of a 4-member team on whole-system verification for \textbf{an in-house memory system}\newline that arbitrates transactions from dozens of HW clients to DDR.\newline We designed a unified and flexible verification platform in UVM, replacing HW clients with\newline Bus Functional Models (BFMs) of common bus protocols. I was responsible for\newline the BFM of the in-house data bus protocol, designing functional test patterns using top-level sequences,\newline and designing a user-friendly GUI for ease of use}
\resumeItem{I was responsible for coordinating a team that worked on peripherals, including USB, Ethernet, PCI-E, SATA,\newline flash controllers, and card readers, and for integrating the peripheral module testbenches into the verification platform}
\resumeItem{Worked with vendors and reported issues in their testbenches during\newline verification of the revised USB and PCI-E IPs}
\resumeItem{Designed a parameterized testbench to verify master/slave supported \textbf{wrappers of hardware modules} on\newline all bus protocols including in-house register/data buses and Arm AMBA}
\resumeItem{Verified control GPIOs shared by different hardware clients; designed a flow to update test cases automatically\newline according to various combinations and revisions of the spec}
\resumeItem{Wrote test cases using C language for the above-mentioned modules to verify \textbf{test chips}\newline embedded on a demo board}
\vspace{-14pt}
\resumeItemListEnd
\resumeSubHeadingListEnd
\resumeSubHeadingListStart
\resumeProjectHeading
{{\textbf{\large{DT/HPC1, MediaTek}} \href{https://www.mediatek.com/}{\raisebox{-0.1\height}\faExternalLink }} $|$ \large{Arm CPU verification internship}}{July 2011 -- Aug 2011}
\vspace{-18pt}
\resumeItemListStart
\resumeItem{\normalsize{I was responsible for writing test patterns in assembly code to verify an in-house processor\newline which supports the ARMv7-A instruction set}}
\resumeItemListEnd
\resumeSubHeadingListEnd
\vspace{-14pt}
% I am currently taking a course based on UC Berkeley's Computer Architecture (Spring 2021), offered by Jim Huang, an NCKU CS professor who moved from industry to academia, and completing the course project.
%-----------Off-the-Job Training-----------
\section{OFF-THE-JOB TRAINING}
{Currently completing the Computer Architecture based on RISC-V and Linux Kernel Design courses\newline run by the Linux kernel expert, \textbf{Jim Huang} \href{http://wiki.csie.ncku.edu.tw/User/jserv}{\raisebox{-0.1\height}\faExternalLink}}
\vspace{-14pt}
\resumeSubHeadingListStart
\resumeProjectHeading
{{\textbf{\large{\underline{Computer Architecture based on RISC-V}}} \href{http://wiki.csie.ncku.edu.tw/arch/schedule}{\raisebox{-0.1\height}\faExternalLink}} $|$ \large{Assembly, C++ used}}{Nov 2021 - Present}
\resumeItem{The course is based on \textcolor{accent} {\href{https://cs61c.org/fa21/} {\underline{\normalsize{CS 61C at UC Berkeley, CS152/252: Computer Architecture}}}}}
\vspace{-10pt}
\resumeProjectHeading
{{\textbf{\large{\underline{Linux Kernel Design}}} \href{https://hackmd.io/@asahsieh/linux_kernel}{\raisebox{-0.1\height}\faExternalLink }} $|$ \large{C/C++ used}}{Sep 2021 - Present}
\resumeItem{The course is a comprehensive look at the C language based on \textcolor{accent} {\href{https://csapp.cs.cmu.edu/} {\underline{\normalsize{CS:APP3e}}}} and
on the Linux Kernel\newline using the book \textcolor{accent} {\href{https://sysprog21.github.io/lkmpg/?fbclid=IwAR2_2G0TwJpx3uiIafaLAxKUVLz_7FIem0lqw8Up59qlplgOZZmqiXHtTGY} {\underline{\normalsize{The Linux Kernel Module Programming Guide (LKMPG)}}}} as course materials}
\resumeSubHeadingListEnd
%------RELEVANT COURSEWORK-------
\section{COURSEWORK}
%\resumeSubHeadingListStart
\begin{multicols}{4}
\begin{itemize}[itemsep=-2pt, parsep=5pt]
\item Computer Architecture
\item Embedded System Design
\item Embedded System Software/Tools
\item Real-time Computing
\end{itemize}
\end{multicols}
\vspace*{2.0\multicolsep}
%\resumeSubHeadingListEnd
%-----------EDUCATION-----------
\section{EDUCATION}
\resumeSubHeadingListStart
\resumeSubheading
{National Chiao Tung University \href{https://www.nctu.edu.tw/en}{\raisebox{-0.1\height}\faExternalLink}}{Sep 2008 -- Dec 2013}
{Master of Computer Science}
\resumeItemListStart
\resumeItem{Master thesis: Adaptive Cache Replacement Policies for High Hit Rate of a Cache of Base Register}
\resumeItem{Master's research: Mechanisms to accommodate software diversity in chip multiprocessors}
\resumeItem{Teaching Assistant for the Computer Architecture, Computer Organization, and Embedded System Design courses}
\resumeItemListEnd
\resumeSubHeadingListEnd
\vspace{-16pt}
\resumeSubHeadingListStart
\resumeSubheading
{National Chung Cheng University \href{https://www.ccu.edu.tw/eng/index.php}{\raisebox{-0.1\height}\faExternalLink}}{July 2008 -- Sep 2008}
{Summer vacation training in the SOC lab}
Implemented a pre-synthesis 5-stage pipeline ARM processor in Verilog
\resumeSubHeadingListEnd
\vspace{-18pt}
\resumeSubHeadingListStart
\resumeSubheading
{Chung Yuan Christian University \href{https://www.cycu.edu.tw/eng/}{\raisebox{-0.1\height}\faExternalLink}}{Sep 2004 -- July 2008}{Bachelor of Computer Science}
Courses included:
\vspace{-5pt}
\resumeItemListStart
\resumeItem{Computer organization: A+}
\resumeItem{Logic circuit design experiment, in which CPUs were implemented on an FPGA: A+}
\resumeItem{Independent study on VLSI design automation, the topic:\newline Rectilinear Steiner Routing Problem with Obstacles in Multiple Layers}
\resumeItemListEnd
Honor: Dean's List in 2006 (1st senior year)
\resumeSubHeadingListEnd
\vspace{-16pt}
%status
%-----------INVOLVEMENT---------------
\section{EXTRACURRICULAR}
\resumeSubHeadingListStart
\resumeProjectHeading
{{\textbf{\large{\underline{MotionPro}}} \href{https://hackmd.io/motion-pro}{\raisebox{-0.1\height}\faExternalLink }} $|$ \large{\underline{Motion Detection, Stepper motor, Zoom lens, ... used}}}{July 2016 -- Present}
A project that came from my love of surfing. I tried to design a camera that can detect\newline which surfer is catching a wave \textbf{without the surfers wearing tags}
\vspace{-18pt}
\resumeProjectHeading
{{\textbf{\large{\underline{Surf \& house}}} \href{https://www.surfhouse.tw/}{\raisebox{-0.1\height}\faExternalLink }}}{Oct. 2021 -- Present}
{Part-time job}
\vspace{-5pt}
\resumeItemListStart
\resumeItem{Class C surfing coach license from the Chinese Taipei Exploration Exercise Association}
\resumeItem{Designed and taught surfing courses to beginners}
\resumeItemListEnd
\vspace{-16pt}
\resumeProjectHeading
{{\textbf{\large{\underline{Ramen Kikkou}}} \href{https://goo.gl/maps/6Gfw4KYR6en3mU5x7}{\raisebox{-0.1\height}\faExternalLink }}}{Dec. 2021 -- Present}
{Part-time job}
\vspace{-5pt}
\resumeItemListStart
\resumeItem{Assisted the chef in making various ramen dishes}
\resumeItem{Assisted with service in the restaurant}
\resumeItemListEnd
\resumeSubHeadingListEnd
\vspace{-11pt}
%-----------CERTIFICATIONS---------------
%\section{CERTIFICATIONS}
%$\sbullet[.75] \hspace{0.1cm}$ {\href{certificateLink.com}{ReactJS \& Redux - Udemy}} \hspace{1.6cm}
%$\sbullet[.75] \hspace{0.1cm}$ {\href{certificateLink.com}{Java}} \hspace{2.59cm}
%$\sbullet[.75] \hspace{0.2cm}${\href{certificateLink.com} {Command Line in Linux - Coursera}}\\
%$\sbullet[.75] \hspace{0.2cm}${\href{certificateLink.com}{Python for Data Science - XIE}} \hspace{1cm}
%$\sbullet[.75] \hspace{0.1cm}$ {\href{certificateLink.com}{SQL}} \hspace{2.6cm}
%$\sbullet[.75] \hspace{0.2cm}${\href{certificateLink.com}{Microsoft AI Classroom - Microsoft}} \\
%$\sbullet[.75] \hspace{0.2cm}${\href{certificateLink.com}{\textbf{5 Stars} in \textbf{C++} \& \textbf{SQL} \href{certificateLink.com}{\raisebox{-0.1\height}\faExternalLink }}}\hspace{1.45cm}
%$\sbullet[.75] \hspace{0.2cm}${\href{certificateLink.com}{MongoDB Basics}} \hspace{0.5cm}
%$\sbullet[.75] \hspace{0.2cm}${\href{certificateLink.com}{NodeJS with Express \& MongoDB - Udemy}} \\
%\end{document}
\end{document}
%
% $Id$
%
\label{sec:nwARGOS}
%\newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}}
\newcommand{\mc}[3]{\mbox{\bf #3}}
\newcommand{\vb}[1]{\mbox{\verb.#1.}}
\newcommand{\none}{\multicolumn{2}{|c|}{ }}
%%%%%%%\renewcommand{\thetable}{\Roman{table}}
\newcommand{\mcc}[1]{\multicolumn{2}{c}{#1}}
\def\bmu{\mbox{\boldmath $\mu$}}
\def\bE{\mbox{\bf E}}
\def\br{\mbox{\bf r}}
\def\tT{\tilde{T}}
\def\t{\tilde{1}}
\def\ip{i\prime}
\def\jp{j\prime}
\def\ipp{i\prime\prime}
\def\jpp{j\prime\prime}
\def\etal{{\sl et al.}}
\def\nwchem{{\bf NWChem}}
\def\nwargos{{\bf nwargos}}
\def\prepare{{\bf prepare}}
\def\nwtop{{\bf nwtop}}
\def\nwrst{{\bf nwrst}}
\def\nwsgm{{\bf nwsgm}}
\def\argos{{\bf ARGOS}}
\section{Introduction}
\subsection{Spacial decomposition}
The molecular dynamics module of \nwchem\ uses a distribution of data
based on a spacial decomposition of the molecular system, offering
an efficient parallel implementation in terms of both memory
requirements and communication costs, especially for simulations of
large molecular systems.
Inter-processor communication using the global array tools and the
design of a data structure allowing distribution based on spacial
decomposition are the key elements in taking advantage of
the distribution of memory requirements and computational work with
minimal communication.
In the spacial decomposition approach, the physical simulation
volume is divided into rectangular boxes, each of which is
assigned to a processor. Depending on the conditions of the
calculation and the number of available processors, each processor
contains one or more of these spacially grouped boxes.
The most important aspects of this decomposition are the dependence
of the box sizes and communication cost on the number of processors
and the shape of the boxes, the frequent reassignment of atoms to
boxes leading to a fluctuating number of atoms per box, and the
locality of communication which is the main reason for the efficiency
of this approach for very large molecular systems.
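As a schematic illustration of this decomposition (not the actual \nwchem\ data structures; the box grid, the wrap-around, and the process mapping below are invented for the example), an atom position can be mapped to a box and an owning process as follows:
\begin{verbatim}
#include <array>
#include <cmath>

// Schematic sketch only: map an atom position to a rectangular box and
// an owning process in a regular spacial decomposition.
struct Box { int ix, iy, iz; };

Box box_of(const std::array<double,3>& r,     // atom coordinates
           const std::array<double,3>& cell,  // edge lengths of the volume
           int nbx, int nby, int nbz) {       // boxes per dimension
    auto index = [](double x, double edge, int n) {
        int i = static_cast<int>(std::floor(x / edge * static_cast<double>(n)));
        return ((i % n) + n) % n;             // wrap atoms that left the cell
    };
    return { index(r[0], cell[0], nbx),
             index(r[1], cell[1], nby),
             index(r[2], cell[2], nbz) };
}

int owner_process(const Box& b, int nby, int nbz) {
    return (b.ix * nby + b.iy) * nbz + b.iz;  // one box per process, row-major
}
\end{verbatim}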
To improve efficiency, molecular systems are broken up into separately
treated solvent and solute parts. Solvent molecules are assigned to
the domains according to their center of geometry and are always owned
by one node. This avoids solvent--solvent bonded interactions
crossing node boundaries. Solute molecules are broken up into
segments, with each segment assigned to a processor based on its
center of geometry. This limits the number of solute bonded
interactions that cross node boundaries. The processor to which a
particular box is assigned is responsible for the calculation of all
interactions between atoms within that box. For the calculation of
forces and energies in which atoms in boxes assigned to different
processors are involved, data are exchanged between processors. The
number of neighboring boxes is determined by the size and shape of the
boxes and the range of interaction. The data exchange that takes place
every simulation time step represents the main communication
requirements. Consequently, one of the main efforts is to design
algorithms and data structures to minimize the cost of this
communication. However, for very large molecular systems, memory
requirements also need to be taken into account.
To compromise between these requirements, the exchange of data is
performed in successive point-to-point communications rather than with
the shift algorithm, which would reduce the number of communication
calls for the same amount of communicated data.
For inhomogeneous systems, the computational load of evaluating
atomic interactions will generally differ between box pairs.
This will lead to load imbalance between processors.
Two algorithms have been implemented that allow for dynamically
balancing the workload of each processor.
One method is the dynamic resizing of boxes such that boxes gradually
become smaller on the busiest node, thereby reducing the computational
load of that node. Disadvantages of this method are that the
efficiency depends on the solute distribution in the simulation volume
and the redistribution of work depends on the number of nodes which
could lead to results that depend on the number of nodes used.
The second method is based on the dynamic redistribution of intra-node
box-box interactions. This method represents a more coarse load
balancing scheme, but does not have the disadvantages of the box
resizing algorithm. For most molecular systems the box pair
redistribution is the more efficient and preferred method.
The description of a molecular system consists of static and dynamic
information. The static information does not change during a
simulation and includes items such as connectivity, excluded and third
neighbor lists, equilibrium values and force constants for all
bonded and non-bonded interactions. The static information is called
the topology of the molecular system, and is kept on a separate
topology file. The dynamic information includes coordinates and
velocities for all atoms in the molecular system, and is kept in a
so-called restart file.
The \nwchem\ molecular dynamics module is the parallel implementation
of the vectorized code \argos\ developed at the University of Houston.
\subsection{Topology}
\label{sec:nwatopology}
The static information about a molecular system that is needed for
a molecular simulation is provided to the simulation module in a
topology file.
Items in this file include, among many other things,
a list of atoms, their non-bonded parameters for van der Waals and
electrostatic interactions, and the complete connectivity in terms
of bonds, angles and dihedrals.
In molecular systems, a distinction is made between
{\it solvent} and {\it solute}, which are treated separately.
A solvent molecule is defined only once in the topology file,
even though many solvent molecules usually are included in the
actual molecular system. In the current implementation only one
solvent can be defined. Everything that is not solvent in the
molecular system is solute. Each solute atom in the system must
be explicitly defined in the topology.
Molecules are defined in terms of one or more {\it segment}s.
Typically, repetitive parts of a molecule are each defined as a single
segment, such as the amino acid residues in a protein.
Segments can be quite complicated to define and are, therefore,
collected in a set of database files.
The definition of a molecular system in terms of segments is a
{\it sequence}.
Topology files are created using the \prepare\ module.
\subsection{Files}
\label{sec:nwafilenames}
File names used have the form \verb+$system$_$calc$.$ext$+, with
the exception of the topology file (Section \ref{sec:nwatopology}), which is named
\verb+$system$.top+.
Anything that refers to the definition of the chemical system can be used
for \verb+$system$+, as long as no periods or underlines are used.
The identifier \verb+$calc$+ can be anything that refers to the type of
calculation to be performed for the system with the topology defined.
This file naming convention allows for the creation of a single
topology file \verb+$system$.top+ that can be used for a number of
different calculations, each identified with a different \verb+$calc$+.
For example, if {\tt crown.top} is the name of the topology file for
a crown ether, {\tt crown\_em}, {\tt crown\_md}, {\tt crown\_ti} could
be used with appropriate extensions for the filenames for energy
minimization, molecular dynamics simulation and multi-configuration
thermodynamic integration, respectively. All of these calculations
would use the same topology file {\tt crown.top}.
\label{sec:nwaextensions}
The extensions \verb+<ext>+ identify the kind of information on a file,
and are pre-determined.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ll}
{\bf acf} & free energy correlation data file\\
{\bf cnv} & free energy convergence file\\
{\bf coo} & coordinate trajectory file\\
{\bf day} & dayfile\\
{\bf dbg} & debug file\\
{\bf dre} & distance restraints file\\
{\bf emt} & minimization trajectory file\\
{\bf fet} & free energy step contribution file\\
{\bf frg} & fragment file\\
{\bf gib} & free energy data file\\
{\bf mri} & free energy multiple run input file\\
{\bf mro} & free energy multiple run output file\\
{\bf nw} & \nwchem\ input file\\
{\bf nwout} & \nwchem\ output file\\
{\bf out} & molecular dynamics output file\\
{\bf pdb} & PDB formatted coordinate file\\
{\bf prp} & property file\\
{\bf qrs} & quenched restart file, resulting from an energy minimization\\
{\bf rdf} & radial distribution function output file\\
{\bf rdi} & radial distribution function input file\\
{\bf rst} & restart file, used to start or restart a simulation \\
{\bf seq} & sequence file, describing the system in segments\\
{\bf sco} & solute coordinate trajectory file\\
{\bf sgm} & segment file, describing segments\\
{\bf slv} & solvent coordinate file\\
{\bf svl} & solute velocity trajectory file\\
{\bf syn} & synchronization time file\\
{\bf tst} & test file\\
{\bf tim} & timing analysis file\\
{\bf top} & topology file, contains the static description of a system\\
{\bf vel} & velocity trajectory file\\
\end{tabular}
\end{center}
\caption{List of file extensions for \nwchem\ chemical system files.}
\end{table}
\subsection{Databases}
Database files used by the \prepare\ module are found in directories
with name \verb+$ffield$_$level$+, \\
where \verb+$ffield$+ is any of the
supported force fields (Section \ref{sec:nwaforcefields}).
The source of the data is identified by \verb+$level$+, and can be
\begin{center}
\begin{tabular}{lll}
\hline
level & Description & Availability \\
{\bf s} & original published data & public \\
{\bf x} & additional published data & public \\
{\bf q} & contributed data & public \\
{\bf u} & user preferred data & private \\
{\bf t} & user defined temporary data & private \\
\hline
\end{tabular}
\end{center}
Typically, only the level {\bf s} and {\bf x} databases are publicly
available.
The user is responsible for the private level {\bf u} and {\bf t}
database files. When the \prepare\ module scans the databases, the priority
is {\bf t}$>${\bf u}$>${\bf x}$>${\bf s}.
The extension \verb+<ext>+ defines the type of database file within each
database directory.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ll}
{\bf frg} & fragments\\
{\bf par} & parameters\\
{\bf seq} & sequences\\
{\bf sgm} & segments\\
\end{tabular}
\end{center}
\caption{List of database file extensions.}
\end{table}
The paths of the different database directories should be defined in a file
{\tt .nwchemrc} in a user's home directory; this file provides the user with the
option to select which database files are scanned.
\subsection{Force fields}
\label{sec:nwaforcefields}
Force fields recognized are
\begin{center}
\begin{tabular}{ll}
\hline
Keyword & Force field \\
{\tt amber} & AMBER95 \\
{\tt charmm} & CHARMM \\
%{\tt cvff} & CVFF \\
%{\tt gromos} & GROMOS87 \\
%{\tt oplsa} & OPLS/AMBER3.0 \\
%{\tt oplsg} & OPLS/GROMOS87 \\
\hline
\end{tabular}
\end{center}
\section{Format of fragment files}
Fragment files contain the basic information needed to specify all
interactions that need to be considered in a molecular simulation.
The format of the fragment files is described in Table \ref{tbl:nwafrag}.
Normally these files are created by the \prepare\ module. Manual
editing is needed when, for example, the \prepare\ module could not
complete atom typing, or when modified charges are required.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{p{15mm}p{12mm}l}
\hline\hline
Card & Format & Description \\ \hline
I-1-1 & a1 & \$ and \# for comments that describe the fragment \\ % $ for emacs
\hline
I-2-1 & i5 & number of atoms in the fragment\\
I-2-2 & i5 & number of parameter sets\\
I-2-3 & i5 & default parameter set\\
I-2-4 & i5 & number of z-matrix definition\\
\hline
\mc{3}{l}{For each atom one deck II} \\
\hline
II-1-1 & i5 & atom sequence number \\
II-1-2 & a6 & atom name \\
II-1-3 & a5 & atom type \\
II-1-4 & a1 & dynamics type\\
& & \verb+ + : normal\\
& & \verb+D+ : dummy atom\\
& & \verb+S+ : solute interactions only\\
& & \verb+Q+ : quantum atom\\
& & other : intramolecular solute interactions only\\
II-1-5 & i5 & link number\\
& & 0: no link\\
& & 1: first atom in chain\\
& & 2: second atom in chain\\
& & 3 and up: other links\\
II-1-6 & i5 & environment type\\
& & 0: no special identifier\\
& & 1: planar, using improper torsion\\
& & 2: tetrahedral, using improper torsion\\
& & 3: tetrahedral, using improper torsion\\
& & 4: atom in aromatic ring\\
II-1-7 & i5 & charge group\\
II-1-8 & i5 & polarization group\\
II-1-9 & f12.6 & atomic partial charge\\
II-1-10 & f12.6 & atomic polarizability\\
\hline
\mc{3}{l}{Any number of cards in deck III to specify complete
connectivity} \\
\hline
III-1-1 & 16i5 & connectivity, duplication allowed\\
\hline\hline
\end{tabular}
\caption{The format of fragment files.\label{tbl:nwafrag}}
\end{center}
\end{table}
\section{Creating segment files}
\label{sec:nwanwsgm}
The \prepare\ module is used to generate segment files
from corresponding fragment files. A segment file contains all
information for the calculation of bonded and non-bonded interactions
for a given chemical system using a specific force field.
Which atoms form a fragment is specified in the coordinate file,
currently only in PDB format.
the restriction is that bonded interactions may only involve atoms on at
most two segments does no longer exist as of \nwchem\ release 3.2.1.
The segment entries define three sets of parameters
for each interaction.
Free energy perturbations can be performed using set 1 for the
generation of the ensemble while using sets 2 and/or 3
as perturbations. Free energy multiconfiguration thermodynamic
integration and multistep thermodynamic perturbation calculations are
performed by gradually changing the interactions in the system from
parameter set 2 to parameter set 3. These modifications can be
edited into the segment files manually, or introduced directly into
the topology file using the \verb+modify+ commands in the input for
the \prepare\ module.
The format of a segment is
described in Tables \ref{tbl:nwaseg1}--\ref{tbl:nwaseg6}.
\begin{table}[htbp]
\begin{center}
\begin{tabular*}{150mm}{p{15mm}p{12mm}l}
\hline\hline
Deck & Format & Description \\ \hline
I-0-1 & & \# lines at top are comments \\
I-1-1 & a1 & \$ to identify the start of a segment \\ %$ for emacs
I-1-2 & a10 & name of the segment, the tenth character\\
& & N: identifies beginning of a chain\\
& & C: identifies end of a chain\\
& & blank: identifies chain fragment\\
& & M: identifies an integral molecule\\
I-2-1 & i5 & number of atoms in the segment\\
I-2-2 & i5 & number of bonds in the segment\\
I-2-3 & i5 & number of angles in the segment\\
I-2-4 & i5 & number of proper dihedrals in the segment\\
I-2-5 & i5 & number of improper dihedrals in the segment\\
\hline
\end{tabular*}
\caption{Segment file format, table 1 of 6.\label{tbl:nwaseg1}}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular*}{150mm}{p{15mm}p{12mm}l}
\hline\hline
Deck & Format & Description \\ \hline
\mc{3}{l}{For each atom one deck II} \\
II-1-1 & i5 & atom sequence number \\
II-1-2 & a6 & atom name \\
II-1-3 & a5 & atom type, generic set 1 \\
II-1-4 & a1 & dynamics type\\
& & \verb+ + : normal\\
& & \verb+D+ : dummy atom\\
& & \verb+S+ : solute interactions only\\
& & \verb+Q+ : quantum atom\\
& & other : intramolecular solute interactions only\\
II-1-4 & a5 & atom type, generic set 2 \\
II-1-5 & a1 & dynamics type\\
& & \verb+ + : normal\\
& & \verb+D+ : dummy atom\\
& & \verb+S+ : solute interactions only\\
& & \verb+Q+ : quantum atom\\
& & other : intramolecular solute interactions only\\
II-1-6 & a5 & atom type, generic set 3 \\
II-1-7 & a1 & dynamics type\\
& & \verb+ + : normal\\
& & \verb+D+ : dummy atom\\
& & \verb+S+ : solute interactions only\\
& & \verb+Q+ : quantum atom\\
& & other : intramolecular solute interactions only\\
II-1-8 & i5 & charge group\\
II-1-9 & i5 & polarization group\\
II-1-10 & i5 & link number\\
II-1-11 & i5 & environment type\\
& & 0: no special identifier\\
& & 1: planar, using improper torsion\\
& & 2: tetrahedral, using improper torsion\\
& & 3: tetrahedral, using improper torsion\\
& & 4: atom in aromatic ring\\
II-2-1 & f12.6 & atomic partial charge in e, set 1\\
II-2-2 & f12.6 & atomic polarizability/$4\pi\epsilon_o$ in nm$^3$, set 1\\
II-2-3 & f12.6 & atomic partial charge in e, set 2\\
II-2-4 & f12.6 & atomic polarizability/$4\pi\epsilon_o$ in nm$^3$, set 2\\
II-2-5 & f12.6 & atomic partial charge in e, set 3\\
II-2-6 & f12.6 & atomic polarizability/$4\pi\epsilon_o$ in nm$^3$, set 3\\
\hline
\end{tabular*}
\caption{Segment file format, table 2 of 6.\label{tbl:nwaseg2}}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular*}{150mm}{p{15mm}p{12mm}l}
\hline\hline
Deck & Format & Description \\ \hline
\mc{3}{l}{For each bond a deck III} \\
III-1-1 & i5 & bond sequence number \\
III-1-2 & i5 & bond atom i \\
III-1-3 & i5 & bond atom j \\
III-1-4 & i5 & bond type \\
& & 0: harmonic\\
& & 1: constrained bond\\
III-1-5 & i5 & bond parameter origin\\
& & 0: from database, next card ignored \\
& & 1: from next card\\
III-2-1 & f12.6 & bond length in nm, set 1\\
III-2-2 & e12.5 & bond force constant in kJ nm$^2$ mol$^{-1}$, set 1 \\
III-2-3 & f12.6 & bond length in nm, set 2\\
III-2-4 & e12.5 & bond force constant in kJ nm$^2$ mol$^{-1}$, set 2 \\
III-2-5 & f12.6 & bond length in nm, set 3\\
III-2-6 & e12.5 & bond force constant in kJ nm$^2$ mol$^{-1}$, set 3 \\
\hline
\end{tabular*}
\caption{Segment file format, table 3 of 6.\label{tbl:nwaseg3}}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular*}{150mm}{p{15mm}p{12mm}l}
\hline\hline
Deck & Format & Description \\ \hline
\mc{3}{l}{For each angle a deck IV} \\
IV-1-1 & i5 & angle sequence number \\
IV-1-2 & i5 & angle atom i \\
IV-1-3 & i5 & angle atom j \\
IV-1-4 & i5 & angle atom k \\
IV-1-5 & i5 & angle type \\
& & 0: harmonic\\
IV-1-6 & i5 & angle parameter origin\\
& & 0: from database, next card ignored \\
& & 1: from next card\\
IV-2-1 & f12.6 & angle in radians, set 1\\
IV-2-2 & e12.5 & angle force constant in kJ mol$^{-1}$, set 1 \\
IV-2-3 & f12.6 & angle in radians, set 2\\
IV-2-4 & e12.5 & angle force constant in kJ mol$^{-1}$, set 2 \\
IV-2-5 & f12.6 & angle in radians, set 3\\
IV-2-6 & e12.5 & angle force constant in kJ mol$^{-1}$, set 3 \\
\hline
\end{tabular*}
\caption{Segment file format, table 4 of 6.\label{tbl:nwaseg4}}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular*}{150mm}{p{15mm}p{12mm}l}
\hline\hline
Deck & Format & Description \\ \hline
\mc{3}{l}{For each proper dihedral a deck V} \\
V-1-1 & i5 & proper dihedral sequence number \\
V-1-2 & i5 & proper dihedral atom i \\
V-1-3 & i5 & proper dihedral atom j \\
V-1-4 & i5 & proper dihedral atom k \\
V-1-5 & i5 & proper dihedral atom l \\
V-1-6 & i5 & proper dihedral type \\
& & 0: $C\cos(m\phi-\delta)$\\
V-1-7 & i5 & proper dihedral parameter origin\\
& & 0: from database, next card ignored \\
& & 1: from next card\\
V-2-1 & i5 & multiplicity, set 1\\
V-2-2 & f12.6 & proper dihedral in radians, set 1\\
V-2-3 & e12.5 & proper dihedral force constant in kJ mol$^{-1}$, set 1 \\
V-2-4 & i5 & multiplicity, set 2\\
V-2-5 & f12.6 & proper dihedral in radians, set 2\\
V-2-6 & e12.5 & proper dihedral force constant in kJ mol$^{-1}$, set 2 \\
V-2-7 & i5 & multiplicity, set 3\\
V-2-8 & f12.6 & proper dihedral in radians, set 3\\
V-2-9 & e12.5 & proper dihedral force constant in kJ mol$^{-1}$, set 3 \\
\hline
\end{tabular*}
\caption{Segment file format, table 5 of 6.\label{tbl:nwaseg5}}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular*}{150mm}{p{15mm}p{12mm}l}
\hline\hline
Deck & Format & Description \\ \hline
\mc{3}{l}{For each improper dihedral a deck VI} \\
VI-1-1 & i5 & improper dihedral sequence number \\
VI-1-2 & i5 & improper dihedral atom i \\
VI-1-3 & i5 & improper dihedral atom j \\
VI-1-4 & i5 & improper dihedral atom k \\
VI-1-5 & i5 & improper dihedral atom l \\
VI-1-6 & i5 & improper dihedral type \\
& & 0: harmonic\\
VI-1-7 & i5 & improper dihedral parameter origin\\
& & 0: from database, next card ignored \\
& & 1: from next card\\
VI-2-1 & f12.6 & improper dihedral in radians, set 1\\
VI-2-2 & e12.5 & improper dihedral force constant in kJ mol$^{-1}$, set 1 \\
VI-2-3 & f12.6 & improper dihedral in radians, set 2\\
VI-2-4 & e12.5 & improper dihedral force constant in kJ mol$^{-1}$, set 2 \\
VI-2-5 & f12.6 & improper dihedral in radians, set 3\\
VI-2-6 & e12.5 & improper dihedral force constant in kJ mol$^{-1}$, set 3 \\
\hline\hline
\end{tabular*}
\caption{Segment file format, table 6 of 6.\label{tbl:nwaseg6}}
\end{center}
\end{table}
\section{Creating sequence files}
A sequence file describes a molecular system in terms of segments. This
file is generated by the \prepare\ module from the molecular system
provided in a PDB-formatted coordinate file.
The file format is given in Table \ref{tbl:nwaseq}.
\begin{table}[htbp]
\begin{center}
\begin{tabular*}{150mm}{p{15mm}p{12mm}l}
\hline\hline
Card & Format & Description \\ \hline
I-1-1 & a1 & \$ to identify the start of a sequence \\ %$ for emacs
I-1-2 & a10 & name of the sequence\\
\mc{3}{l}{Any number of cards 1 and 2 in deck II to specify the system} \\
II-1-1 & i5 & segment number\\
II-1-2 & a10 & segment name, last character will be determined from chain\\
II-1-3 & i5 & link segment 1, if blank previous segment in chain\\
II-1-4 & i3 & link atom, if blank link atom 2\\
II-1-5 & i5 & link segment 2, if blank next segment in chain\\
II-1-6 & i3 & link atom, if blank link atom 1\\
II-1-7 & i5 & link segment 3\\
II-1-8 & i3 & link atom in link segment 3\\
II-1-9 & i5 & link segment 4\\
II-1-10 & i3 & link atom in link segment 4\\
II-1-11 & i5 & link segment 5\\
II-1-12 & i3 & link atom in link segment 5\\
II-1-13 & i5 & link segment 6\\
II-1-14 & i3 & link atom in link segment 6\\
II-1-15 & i5 & link segment 7\\
II-1-16 & i3 & link atom in link segment 7\\
II-1-17 & i5 & link segment 8\\
II-1-18 & i3 & link atom in link segment 8\\
II-1-19 & i5 & link segment 9\\
II-1-20 & i3 & link atom in link segment 9\\
II-1-21 & i5 & link segment 10\\
II-1-22 & i3 & link atom in link segment 10\\
II-2-1 & a & \verb+break+ to identify a break in the molecule chain\\
II-2-1 & a & \verb+molecule+ to identify the end of a solute molecule\\
II-2-1 & a & \verb+fraction+ to identify the end of a solute fraction\\
II-2-1 & a5 & \verb+link + to specify a link\\
II-2-2 & i5 & segment number of first link atom\\
II-2-3 & a4 & name of first link atom \\
II-2-4 & i5 & segment number of second link atom\\
II-2-5 & a4 & name of second link atom \\
II-2-1 & a & \verb+solvent+ to identify solvent definition on next card\\
II-2-1 & a & \verb+stop+ to identify the end of the sequence\\
II-2-1 & a6 & \verb+repeat+ to repeat next $ncards$ cards $ncount$
times\\
II-2-2 & i5 & number of cards to repeat ($ncards$)\\
II-2-3 & i5 & number of times to repeat cards ($ncount$)\\
\mc{3}{l}{Any number of cards in deck II to specify the system} \\
\hline\hline
\end{tabular*}
\caption{Sequence file format.\label{tbl:nwaseq}}
\end{center}
\end{table}
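As a purely schematic illustration (the segment names are hypothetical and
the fixed-width column formats of Table \ref{tbl:nwaseq} are not
reproduced to scale), a short sequence file might look like
\begin{verbatim}
$example
    1 ALA_N
    2 ALA
    3 ALA_C
molecule
solvent
    4 HOH
stop
\end{verbatim}
where the \verb+molecule+, \verb+solvent+ and \verb+stop+ keywords have the
meanings listed in the table above.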
\section{Creating topology files}
\label{sec:nwanwtop}
The topology (Section \ref{sec:nwatopology}) contains all static information
that describes a molecular system. This includes the connectivity in
terms of bond-stretching, angle-bending and torsional interactions, as well as
the non-bonded van der Waals and Coulombic interactions.
The topology of a molecular system is generated by the \prepare\ module
from the sequence in terms of segments as specified on the PDB file.
For each unique segment specified in this file the
segment database directories are searched for the segment definition.
For segments not found in one of the database directories, a segment definition
is generated in the temporary directory if a fragment file was found.
If a fragment file could not be found, it is generated by the \prepare\ module
based on what is found in the PDB file.
When all segments are found or created, the parameter substitutions are
performed, using force field parameters taken from the parameter
databases. After all lists have been generated the
topology is written to a local topology file \verb+$system$.top+.
\section{Creating restart files}
\label{sec:nwanwrst}
Restart files contain all dynamical information about a molecular
system and are created by the \prepare\ module if a topology file
is available. The \prepare\ module will automatically generate
coordinates for hydrogen atoms and monatomic counter ions
not found in the PDB-formatted coordinate file, if no fragment or
segment files were generated using that PDB file.
The \prepare\ module accepts a number of other optional input commands,
including commands for solvation.
\section{Molecular simulations}
The type of molecular dynamics simulation is specified by the
\nwchem\ task directive.
\begin{verbatim}
task md [ energy || optimize || dynamics || thermodynamics ]
\end{verbatim}
where the theory keyword {\tt md} specifies use of the molecular
dynamics module, and the operation keyword is one of
\begin{description}
\item
{\tt energy} for single configuration energy evaluation
\item
{\tt optimize} for energy minimization
\item
{\tt dynamics} for molecular dynamics simulations and single step
thermodynamic perturbation free energy molecular dynamics simulations
\item
{\tt thermodynamics} for combined multi-configuration thermodynamic
integration and multiple step thermodynamic perturbation free
energy molecular dynamics simulations.
\end{description}
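For example, a plain molecular dynamics simulation would be requested with
the following task directive (shown here only as an illustration of the
syntax above):
\begin{verbatim}
task md dynamics
\end{verbatim}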
\section{System specification}
The chemical system for a calculation is specified in the topology
and restart files. These files should be created using the utilities
\nwtop\ and \nwrst\ before a simulation can be performed.
The names of these files are determined from the \nwchem\ \verb+start+
directive.
There is no default. If the \verb+rtdb+ name is given as {\tt system\_calc},
the topology file used is {\tt system.top}, while all other files
are named {\tt system\_calc.ext}.
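As an illustration, assuming the hypothetical calculation name
{\tt system\_calc}, the directive
\begin{verbatim}
start system_calc
\end{verbatim}
causes the topology to be read from {\tt system.top}, while, for instance,
the restart file is expected as {\tt system\_calc.rst}.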
\section{Parameter set}
\begin{description}
\item
\begin{verbatim}
set <integer iset>
\end{verbatim}
specifies the use of parameter set \verb+<iset>+ for the
molecular dynamics simulation.
The topology file contains three separate parameter sets that can
be used. The default for \verb+<iset>+ is 1.
\item
\begin{verbatim}
pset <integer isetp1> [<integer isetp2>]
\end{verbatim}
specifies the parameter sets to be used as perturbation potentials
in single step thermodynamic perturbation free energy evaluations,
where \verb+<isetp1>+ specifies the first perturbation parameter set and
\verb+<isetp2>+ specifies the second perturbation parameter set. Legal
values for \verb+<isetp1>+ are 2 and 3. The only legal value for \verb+<isetp2>+ is
3, in which case \verb+<isetp1>+ can only be 2. If \verb+pset+ is specified, \verb+<iset>+
is automatically set to 1.
\end{description}
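For example, the hypothetical input line
\begin{verbatim}
pset 2 3
\end{verbatim}
selects parameter sets 2 and 3 as the perturbation potentials, in which
case \verb+<iset>+ is automatically set to 1 as described above.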
\section{Energy minimization algorithms}
The energy minimization of the system as found in the restart file
is performed with the following directives. If both are specified,
steepest descent energy minimization precedes conjugate gradient
minimization.
%\begin{description}
\begin{description}
\item
\begin{verbatim}
sd <integer msdit> [init <real dx0sd>] [min <real dxsdmx>] \
[max <real dxmsd>]
\end{verbatim}
specifies the variables for steepest descent energy minimizations,
where \verb+<msdit>+ is the maximum number of steepest descent steps taken,
for which the default is 100, \verb+<dx0sd>+ is the initial step size in nm
for which the default is 0.001, \verb+<dxsdmx>+ is the threshold for the
step size in nm for which the default is 0.0001, and \verb+<dxmsd>+ is the
maximum allowed step size in nm for which the default is 0.05.
\item
\begin{verbatim}
cg <integer mcgit> [init <real dx0cg>] [min <real dxcgmx>] \
[cy <integer ncgcy>]
\end{verbatim}
specifies the variables for conjugate gradient energy minimizations,
where \verb+<mcgit>+ is the maximum number of conjugate gradient steps
taken, for which the default is 100, \verb+<dx0cg>+ is the initial search
interval size in nm for which the default is 0.001, \verb+<dxcgmx>+ is the
threshold for the step size in nm for which the default is 0.0001, and
\verb+<ncgcy>+ is the number of conjugate gradient steps after which the
gradient history is discarded for which the default is 10. If conjugate
gradient energy minimization is preceded by steepest descent energy
minimization, the search interval is set to twice the final step of the
steepest descent energy minimization.
\end{description}
%\end{description}
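As an illustration, the documented defaults written out explicitly would
read
\begin{verbatim}
sd 100 init 0.001 min 0.0001 max 0.05
cg 100 init 0.001 min 0.0001 cy 10
\end{verbatim}
in which case steepest descent minimization precedes the conjugate
gradient minimization.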
\section{Multi-configuration thermodynamic integration}
The following keywords control free energy difference simulations.
Multi-configuration thermodynamic integrations are always combined
with multiple step thermodynamic perturbations.
\begin{description}
\item
\begin{verbatim}
(forward || reverse) [[<integer mrun> of] <integer maxlam>]
\end{verbatim}
specifies the direction and number of integration steps in free
energy evaluations, with {\tt forward} being the default direction.
\verb+<mrun>+ is the number of ensembles that will be generated in
this calculation, and \verb+<maxlam>+ is the total number of ensembles
to complete the thermodynamic integration. The default value for
\verb+<maxlam>+ is 21. The default value of \verb+<mrun>+ is the
value of \verb+<maxlam>+.
\item
\begin{verbatim}
error <real edacq>
\end{verbatim}
specifies the maximum allowed statistical error in each generated
ensemble, where \verb+<edacq>+ is the maximum error allowed in the
ensemble average derivative of the Hamiltonian with respect to
$\lambda$ with a default of 5.0 kJ~mol$^{-1}$.
\item
\begin{verbatim}
drift <real ddacq>
\end{verbatim}
specifies the maximum allowed drift in the free energy result,
where \verb+<ddacq>+ is the maximum drift allowed in the
ensemble average derivative of the Hamiltonian with respect to
$\lambda$ with a default of 5.0 kJ~mol$^{-1}$~ps$^{-1}$.
\item
\begin{verbatim}
factor <real fdacq>
\end{verbatim}
specifies the maximum allowed change in ensemble size,
where \verb+<fdacq>+ is the minimum size of an ensemble relative to the
previous ensemble in the calculation with a default value of 0.75.
\item
\begin{verbatim}
decomp
\end{verbatim}
specifies that a free energy decomposition is to be carried out.
Since free energy contributions are path dependent, results from a
decomposition analysis cannot be unambiguously interpreted, and
the default is not to perform this decomposition.
\item
\begin{verbatim}
sss [delta <real delta>]
\end{verbatim}
specifies that atomic non-bonded interactions that describe a dummy atom
in either the initial or final state of the thermodynamic calculation
will be calculated using separation-shifted scaling, where \verb+<delta>+
is the separation-shifted scaling factor with a default of 0.075 nm$^2$.
This scaling method prevents problems associated with singularities in
the interaction potentials.
\item
\begin{verbatim}
new || renew || extend
\end{verbatim}
specifies the initial conditions for thermodynamic calculations.
{\tt new} indicates that this is an initial mcti calculation, which
is the default. {\tt renew} instructs the module to obtain the initial
conditions for each $\lambda$ from the {\bf mro}-file from a previous
mcti calculation, which has to be renamed to an {\bf mri}-file. The
keyword {\tt extend} will extend a previous mcti calculation from the
data read from an {\bf mri}-file.
\end{description}
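An illustrative (not prescriptive) set of directives using the defaults
quoted above would be
\begin{verbatim}
forward 21 of 21
error 5.0
drift 5.0
factor 0.75
\end{verbatim}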
\section{Time and integration algorithm directives}
The following directives control the integration of the equations of motion.
\begin{description}
\item
\begin{verbatim}
leapfrog || vverlet
\end{verbatim}
specifies the integration algorithm,
where {\tt leapfrog} specifies the default leap frog integration, and
{\tt vverlet} specifies the velocity Verlet integrator.
\item
\begin{verbatim}
equil <integer mequi>
\end{verbatim}
specifies the number of equilibration steps \verb+<mequi>+, with a default
of 100.
\item
\begin{verbatim}
data <integer mdacq> [over <integer ldacq>]
\end{verbatim}
specifies the number of data gathering steps \verb+<mdacq>+ with a
default of 500. In multi-configuration thermodynamic integrations
\verb+<mequi>+ and \verb+<mdacq>+ are for each of the ensembles, and
variable \verb+<ldacq>+ specifies the minimum number of data gathering steps
in each ensemble. In regular molecular dynamics simulations \verb+<ldacq>+
is not used. The default value for \verb+<ldacq>+ is the value of \verb+<mdacq>+.
\item
\begin{verbatim}
time <real stime>
\end{verbatim}
specifies the initial time \verb+<stime>+ of a molecular simulation in ps,
with a default of 0.0.
\item
\begin{verbatim}
step <real tstep>
\end{verbatim}
specifies the time step \verb+<tstep>+ in ps, with 0.001 as the default value.
\end{description}
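For illustration, the defaults listed above correspond to
\begin{verbatim}
leapfrog
equil 100
data 500
time 0.0
step 0.001
\end{verbatim}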
\section{Ensemble selection}
The following directives control the ensemble type.
\begin{description}
\item
\begin{verbatim}
isotherm [<real tmpext>] [trelax <real tmprlx> [<real tmsrlx>]]
\end{verbatim}
specifies a constant temperature ensemble using Berendsen's thermostat,
where \verb+<tmpext>+ is the external temperature with a default of 298.15~K,
and \verb+<tmprlx>+ and \verb+<tmsrlx>+ are temperature relaxation times in ps
with a default of 0.1. If only \verb+<tmprlx>+ is given the complete system
is coupled to the heat bath with relaxation time \verb+<tmprlx>+. If both
relaxation times are supplied, solvent and solute are independently coupled
to the heat bath with relaxation times \verb+<tmprlx>+ and \verb+<tmsrlx>+,
respectively.
\item
\begin{verbatim}
isobar [<real prsext>] [trelax <real prsrlx> ] \
[compress <real compr>]
\end{verbatim}
specifies a constant pressure ensemble using Berendsen's piston,
where \verb+<prsext>+ is the external pressure with a default of 1.025$\times$10$^5$~Pa,
\verb+<prsrlx>+ is the pressure relaxation time in ps with a default of 0.5, and
\verb+<compr>+ is the system compressibility in m$^2$N$^{-1}$ with a
default of 4.53E-10.
\end{description}
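An illustrative NpT selection, using the default values quoted above and
assuming that exponential notation is accepted for real input values,
would be
\begin{verbatim}
isotherm 298.15 trelax 0.1 0.1
isobar 1.025e5 trelax 0.5 compress 4.53e-10
\end{verbatim}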
\section{Velocity reassignments}
Velocities can be periodically reassigned to reflect a certain temperature.
\begin{description}
\item
\begin{verbatim}
vreass <integer nfgaus> <real tgauss>
\end{verbatim}
specifies that velocities will be reassigned every \verb+<nfgaus>+ molecular
dynamics steps, reflecting a temperature of \verb+<tgauss>+~K. The default
is not to reassign velocities, i.e.\ \verb+<nfgaus>+ is 0.
\end{description}
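For example, the hypothetical directive
\begin{verbatim}
vreass 100 298.15
\end{verbatim}
reassigns velocities every 100 steps to reflect a temperature of 298.15~K.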
\section{Cutoff radii}
Cutoff radii can be specified for short range and long range interactions.
\begin{description}
\item
\begin{verbatim}
cutoff [short] <real rshort> [long <real rlong>] \
[qmmm <real rqmmm>]
\end{verbatim}
specifies the short range cutoff radius \verb+<rshort>+, and the long range
cutoff radius \verb+<rlong>+ in nm. If the long range cutoff radius
is larger than the short range cutoff radius the twin range method will
be used, in which short range forces and energies are evaluated every
molecular dynamics step, and long range forces and energies with a
frequency of \verb+<nflong>+ molecular dynamics steps. Keyword
\verb+qmmm+ specifies the radius of the zone around quantum atoms
defining the QM/MM bare charges.
The default value for \verb+<rshort>+, \verb+<rlong>+ and \verb+<rqmmm>+
is 0.9~nm.
\end{description}
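For example, the hypothetical directive
\begin{verbatim}
cutoff short 0.9 long 1.4
\end{verbatim}
selects the twin range method with a 0.9~nm short range and a 1.4~nm long
range cutoff radius (the 1.4~nm value is illustrative only).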
\section{Polarization}
First order and self consistent electronic polarization models have
been implemented.
\begin{description}
\item
\begin{verbatim}
polar (first || scf [[<integer mpolit>] <real ptol>])
\end{verbatim}
specifies the use of polarization potentials,
where the keyword {\tt first} specifies the first order polarization
model, and {\tt scf} specifies the self consistent polarization field
model, iteratively determined with a maximum of \verb+<mpolit>+
iterations to within a tolerance of \verb+<ptol>+ D in the generated
induced dipoles. The default is not to use polarization models.
\end{description}
\section{External electrostatic field}
\begin{description}
\item
\begin{verbatim}
field <real xfield> [freq <real xffreq>] [vector <real xfvect(1:3)>]
\end{verbatim}
specifies an external electrostatic field,
where \verb+<xfield>+ is the field strength, \verb+<xffreq>+ is the
frequency in MHz and \verb+<xfvect>+ is the external field vector.
\end{description}
\section{Constraints}
Constraints are satisfied using the SHAKE
coordinate resetting procedure.
\begin{description}
\item
\begin{verbatim}
shake [<integer mshitw> [<integer mshits>]] \
[<real tlwsha> [<real tlssha>]]
\end{verbatim}
specifies the use of SHAKE constraints,
where \verb+<mshitw>+ is the maximum number of solvent SHAKE iterations,
and \verb+<mshits>+ is the maximum number of solute SHAKE iterations. If
only \verb+<mshitw>+ is specified, the value will also be used for \verb+<mshits>+.
The default maximum number of iterations is 100 for both.
\verb+<tlwsha>+ is the solvent SHAKE tolerance in nm, and \verb+<tlssha>+ is
the solute SHAKE tolerance in nm. If only \verb+<tlwsha>+ is specified, the
value given will also be used for \verb+<tlssha>+. The default tolerance
is 0.001~nm for both.
\item
\begin{verbatim}
noshake (solvent || solute)
\end{verbatim}
disables SHAKE and treats the bonded interactions according to the force
field.
\end{description}
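Written out explicitly, the documented defaults correspond to
\begin{verbatim}
shake 100 100 0.001 0.001
\end{verbatim}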
\section{Long range interaction corrections}
Long range electrostatic interactions are implemented using the
smooth particle mesh Ewald technique, for neutral periodic cubic systems in
the constant volume ensemble, using pair interaction potentials. Particle-mesh
Ewald long range interactions can only be used in molecular dynamics simulations
using effective pair potentials, and not in free energy simulations, QMD or
QM/MM simulations.
\begin{description}
\item
\begin{verbatim}
pme [grid <integer ng>] [alpha <real ealpha>] \
[order <integer morder>] [fft <integer imfft>]
\end{verbatim}
specifies the use of smooth particle-mesh Ewald long range
interaction treatment,
where \verb+ng+ is the number of grid points per dimension,
\verb+ealpha+ is the Ewald coefficient in nm$^{-1}$, with a default
that leads to a tolerance of $10^{-4}$ at the short range cutoff radius,
and \verb+morder+ is the order of the cardinal B-spline
interpolation, which must be an even number and at least 4 (the default
value). A platform-specific 3D fast Fourier transform is used, if
available, when \verb+imfft+ is set to 2.
\end{description}
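For example, the hypothetical directive
\begin{verbatim}
pme grid 32 order 4
\end{verbatim}
requests particle-mesh Ewald with 32 grid points per dimension (an
illustrative value only) and the default interpolation order of 4.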
\section{Fixing coordinates}
The solvent or solute part of a system may be fixed or unfixed using
the following keywords. Fixing part of the system does not propagate to
subsequent simulations that use the restart files written. In the commands \verb+fix+
and \verb+unfix+, the keywords \verb+all+, \verb+solvent+,
\verb+solute+ and \verb+non-H+ specify the entire molecular system,
the solvent, the solute and the solute atoms other than hydrogen,
respectively.
\begin{description}
\item
\begin{verbatim}
fix (all || solvent || solute || non-H)
\end{verbatim}
fixes all atoms, solvent molecules, solute atoms, or solute non-hydrogen atoms,
respectively.
\item
\begin{verbatim}
unfix (all || solvent || solute || non-H)
\end{verbatim}
makes all atoms, solvent molecules, solute atoms, or solute non-hydrogen atoms,
respectively, dynamic.
\end{description}
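For example, the directive
\begin{verbatim}
fix solvent
\end{verbatim}
keeps all solvent molecules in place while the solute remains dynamic.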
\section{Autocorrelation function}
For the evaluation of the statistical error of multi-configuration
thermodynamic integration free energy results a correlated data
analysis is carried out, involving the calculation of the
autocorrelation function of the derivative of the Hamiltonian with
respect to the control variable $\lambda$.
\begin{description}
\item
\begin{verbatim}
auto <integer lacf> [fit <integer nfit>] [weight <real weight>]
\end{verbatim}
controls the calculation of the autocorrelation,
where \verb+<lacf>+ is the length of the autocorrelation function, with
a default of 1000, \verb+<nfit>+ is the number of functions used in the
fit of the autocorrelation function, with a default of 15, and
\verb+<weight>+ is the weight factor for the autocorrelation function,
with a default value of 0.0.
\end{description}
\section{Print options}
The following keywords control printing to the output file, with extension {\bf out}.
Print directives may be combined into a single directive.
\begin{description}
\item
\begin{verbatim}
print topol [nonbond] [solvent] [solute]
\end{verbatim}
specifies printing the topology information,
where {\tt nonbond} refers to the non-bonded interaction parameters,
{\tt solvent} to the solvent bonded parameters, and {\tt solute} to the
solute bonded parameters. If only {\tt topol} is specified, all
topology information will be printed to the output file.
\item
\begin{verbatim}
print step <integer nfoutp> [extra] [energy]
\end{verbatim}
specifies the frequency \verb+nfoutp+ of printing molecular dynamics step
information to the output file. If the keyword {\tt extra} is specified
additional energetic data are printed for solvent and solute separately.
If the keyword {\tt energy} is specified, information is printed for
all bonded solute interactions.
The default for \verb+nfoutp+ is 0. For molecular dynamics simulations
this frequency is in time steps, and for multi-configuration thermodynamic
integration in $\lambda$-steps.
\item
\begin{verbatim}
print stat <integer nfstat>
\end{verbatim}
specifies the frequency \verb+<nfstat>+ of printing statistical information
of properties that are calculated during the simulation.
For molecular dynamics simulation
this frequency is in time steps, for multi-configuration thermodynamic
integration in $\lambda$-steps.
\end{description}
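For illustration, the hypothetical directives
\begin{verbatim}
print step 10 extra
print stat 100
\end{verbatim}
print step information every 10 steps, including the separate solvent and
solute energies, and statistical information every 100 steps.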
\section{Periodic updates}
The following keywords control periodic events during a molecular
dynamics or thermodynamic integration simulation.
Update directives may be combined into a single directive.
\begin{description}
\item
\begin{verbatim}
update pairs <integer nfpair>
\end{verbatim}
specifies the frequency \verb+<nfpair>+ in molecular dynamics steps of
updating the pair lists. The default for the frequency is 1.
In addition, pair lists are also updated after each step in which
recording of the restart or trajectory files is performed. Updating
the pair lists includes the redistribution of atoms that changed
domain and load balancing, if specified.
\item
\begin{verbatim}
update long <integer nflong>
\end{verbatim}
specifies the frequency \verb+<nflong>+ in molecular dynamics steps
of updating the long range forces. The default frequency is 1.
The distinction of short range and long range forces is only
made if the long range cutoff radius was specified to be larger
than the short range cutoff radius. Updating the long range forces
is also done in every molecular dynamics step in which the
pair lists are regenerated.
\item
\begin{verbatim}
update center <integer nfcntr> [fraction <integer idscb(1:5)>]
\end{verbatim}
specifies the frequency \verb+<nfcntr>+ in molecular dynamics steps in
which the center of geometry of the solute(s) is translated to the
center of the simulation volume. The solute fractions determining the
solutes that will be centered are specified by the keyword
{\tt fraction} and the vector \verb+<idscb>+, with a maximum of 5 entries.
This translation is implemented such that it has no effect on any
aspect of the simulation. The default is not to center, i.e.\ \verb+<nfcntr>+ is
0. The default fraction used to center the solute is 1.
\item
\begin{verbatim}
update motion <integer nfslow>
\end{verbatim}
specifies the frequency \verb+<nfslow>+ in molecular dynamics steps of
removing the center of mass motion.
\item
\begin{verbatim}
update analysis <integer nfanal>
\end{verbatim}
specifies the frequency \verb+<nfanal>+ in molecular dynamics steps of
invoking the analysis module.
\item
\begin{verbatim}
update rdf <integer nfrdf> [range <real rrdf>] [bins <integer ngl>]
\end{verbatim}
specifies the frequency \verb+<nfrdf>+ in molecular dynamics steps of
calculating contributions to the radial distribution functions.
The default is 0. The range of the radial distribution
functions is given by \verb+<rrdf>+ in nm, with a default of the short
range cutoff radius. Note that radial distribution functions are not
evaluated beyond the short range cutoff radius. The number of
bins in each radial distribution function is given by \verb+<ngl>+, with
a default of 1000.
If radial distribution functions are to be
calculated, an {\bf rdi} file needs to be available in which the
contributions are specified as follows.
\begin{center}
\begin{tabular}{lll}
\hline\hline
Card & Format & Description \\ \hline
I-1 & i & Type, 1=solvent-solvent, 2=solvent-solute,
3=solute-solute\\
I-2 & i & Number of the rdf for this contribution\\
I-3 & i & First atom number \\
I-4 & i & Second atom number \\
\hline
\end{tabular}
\end{center}
\end{description}
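For illustration, the hypothetical directives
\begin{verbatim}
update pairs 5
update center 100
update rdf 50
\end{verbatim}
update the pair lists every 5 steps, center the solute every 100 steps, and
accumulate radial distribution function contributions every 50 steps.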
\section{Recording}
The following keywords control recording data to file.
Record directives may be combined into a single directive.
\begin{description}
%\item
%The file format of selected recording files is specified with
%\begin{verbatim}
%record (binary || ascii [ecce || argos])
%\end{verbatim}
%with the default of ascii in ecce readable format.
\item
\begin{verbatim}
record rest <integer nfrest> [keep]
\end{verbatim}
specifies the frequency \verb+<nfrest>+ in molecular dynamics steps
of rewriting the restart file, with extension \verb+rst+.
For multi-configuration
thermodynamic integration simulations the frequency is in
steps in $\lambda$. The default is not to record. The restart
file is used to start or restart simulations. The keyword {\tt keep}
causes all restart files written to be kept on disk, rather than
to be overwritten.
\item
\begin{verbatim}
record coord <integer nfcoor>
\end{verbatim}
specifies the frequency \verb+<nfcoor>+ in molecular dynamics steps
of writing information to the coordinate file, with extension \verb+coo+.
For multi-configuration
thermodynamic integration simulations the frequency is in
steps in $\lambda$. The default is not to record.
\item
\begin{verbatim}
record scoor <integer nfscoo>
\end{verbatim}
specifies the frequency \verb+<nfscoo>+ in molecular dynamics steps
of writing information to the solute coordinate file, with extension
\verb+sco+. For multi-configuration
thermodynamic integration simulations the frequency is in
steps in $\lambda$. The default is not to record.
\item
\begin{verbatim}
record veloc <integer nfvelo>
\end{verbatim}
specifies the frequency \verb+<nfvelo>+ in molecular dynamics steps
of writing information to the velocity file, with extension \verb+vel+.
For multi-configuration
thermodynamic integration simulations the frequency is in
steps in $\lambda$. The default is not to record.
\item
\begin{verbatim}
record svelo <integer nfsvel>
\end{verbatim}
specifies the frequency \verb+<nfsvel>+ in molecular dynamics steps
of writing information to the solute velocity file, with extension
\verb+svl+. For multi-configuration
thermodynamic integration simulations the frequency is in
steps in $\lambda$. The default is not to record.
\item
\begin{verbatim}
record prop <integer nfprop>
\end{verbatim}
specifies the frequency \verb+<nfprop>+ in molecular dynamics steps
of writing information to the property file, with extension
\verb+prp+. For multi-configuration
thermodynamic integration simulations the frequency is in
steps in $\lambda$. The default is not to record.
\item
\begin{verbatim}
record mind <integer nfem>
\end{verbatim}
specifies the frequency \verb+<nfem>+ in energy minimization steps of
writing the minimization trajectory to file, with extension \verb+emt+.
\item
\begin{verbatim}
record free <integer nffree>
\end{verbatim}
specifies the frequency \verb+<nffree>+ in multi-configuration
thermodynamic integration steps to record data to the
free energy data file, with extension \verb+gib+.
The default is 1, i.e.\ to record at every $\lambda$.
\item
\begin{verbatim}
record cnv
\end{verbatim}
specifies that free energy convergence data will be written to the
free energy convergence file, with extension \verb+cnv+.
\item
\begin{verbatim}
record acf
\end{verbatim}
specifies that free energy derivative autocorrelation data will be
written to the free energy autocorrelation file, with extension
\verb+acf+.
\item
\begin{verbatim}
record fet
\end{verbatim}
specifies that free energy vs.\ time data will be recorded to the free energy
data file, with extension \verb+fet+.
\item
\begin{verbatim}
record sync <integer nfsync>
\end{verbatim}
specifies the frequency \verb+<nfsync>+ in molecular dynamics steps
of writing information to the synchronization file, with extension
\verb+syn+.
The default is not to record.
The information written is the simulation time, the wall clock time
of the previous MD step, the wall clock time of the previous force
evaluation, the total synchronization time, the largest
synchronization time and the node on which the largest synchronization
time was found. The recording of synchronization times is part of the
load balancing algorithm. Since load balancing is only performed when
pair-lists are updated, the frequency \verb+<nfsync>+ is correlated
with the frequency of pair-list updates \verb+<nfpair>+.
\end{description}
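For illustration, the hypothetical directives
\begin{verbatim}
record rest 1000 keep
record coord 100
record prop 10
\end{verbatim}
keep a restart file every 1000 steps, and write coordinate and property
information every 100 and 10 steps, respectively.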
\section{Program control options}
\begin{description}
\item
\begin{verbatim}
load [reset] ( none || size [<real factld>] || pairs ||
(pairs [<integer ldpair>] size [<real factld>]) )
\end{verbatim}
determines the type of dynamic load balancing performed,
where the default is {\tt none}. Load balancing option {\tt size}
is resizing boxes on a node, and {\tt pairs} redistributes the
box-box interactions over nodes. Keyword \verb+reset+ will reset the
load balancing read from the restart file. The level of box resizing
can be influenced with $factld$. The boxes on the busiest node are
resized with a factor
\begin{equation}
\left( 1 - factld \, \frac{ T_{sync}/n_p - t^{min}_{sync} }{ t_{wall} }
\right)^{1/3}
\end{equation}
where $T_{sync}$ is the accumulated synchronization time of all nodes,
$n_p$ is the total number of nodes, $t^{min}_{sync}$ is the synchronization
time of the busiest node, and $t_{wall}$ is the wall clock time of the
molecular dynamics step.\\
For the combined load balancing, \verb+ldpair+ is the number of successive pair
redistribution load balancing steps in which the accumulated synchronization
time increases, before a resizing load balancing step will be attempted.\\
Load balancing is only performed in molecular dynamics steps in which the
pair-list is updated.
\item
\begin{verbatim}
nodes <integer npx> <integer npy> <integer npz>
\end{verbatim}
specifies the distribution of the available nodes over the three
Cartesian dimensions. The default distribution is chosen such that
\verb+<npx>+$*$\verb+<npy>+$*$\verb+<npz>+=\verb+<np>+
and \verb+<npx>+ $<=$ \verb+<npy>+ $<=$ \verb+<npz>+,
where \verb+<npx>+, \verb+<npy>+ and \verb+<npz>+ are the nodes in the
x, y and z dimension respectively, and \verb+<np>+ is the number of nodes
allocated for the calculation. Where more than one combination
of \verb+<npx>+, \verb+<npy>+ and \verb+<npz>+ is possible, the
combination is chosen with the minimum value of
\verb+<npx>+$+$\verb+<npy>+$+$\verb+<npz>+. To change the default setting
the following optional input option is provided.
\item
\begin{verbatim}
boxes <integer nbx> <integer nby> <integer nbz>
\end{verbatim}
specifies the distribution of boxes,
where \verb+<nbx>+, \verb+<nby>+ and \verb+<nbz>+ are the number of
boxes in x, y and z direction, respectively.
The molecular system is decomposed into boxes that form the smallest
unit for communication of atomic data between nodes. The size of the
boxes is by default set to the short-range cutoff radius. If
a long-range cutoff radius is used, the box size is set to half the
long-range cutoff radius if that is larger than the short-range cutoff.
If the number of boxes in a dimension is less than the number of
processors in that dimension, the number of boxes is set to the number
of processors.
\item
\begin{verbatim}
extra <integer madbox>
\end{verbatim}
sets the number of additional boxes for which memory is allocated.
In rare cases the amount of memory set aside per node is insufficient
to hold all atomic coordinates assigned to that node. This causes
execution to abort with the message that {\tt mwm} or {\tt msa} is too
small. Jobs may be restarted with additional space allocated by specifying
this directive, where \verb+<madbox>+ is the number of additional boxes that are allocated
on each node (see the illustrative input fragment following this list). The default for \verb+<madbox>+ is 6.
In some cases \verb+<madbox>+ can be reduced to 4 if memory usage is a
concern. Values of 2 or less will almost certainly result in memory
shortage.
\item
\begin{verbatim}
mwm <integer mwmreq>
\end{verbatim}
sets the maximum number of solvent molecules \verb+<mwmreq>+ per node,
allowing increased memory to be allocated for solvent molecules. This
option can be used if execution aborted because \verb+mwm+ was too
small.
\item
\begin{verbatim}
msa <integer msareq>
\end{verbatim}
sets the maximum number of solute atoms \verb+<msareq>+ per node,
allowing increased memory to be allocated for solute atoms. This
option can be used if execution aborted because \verb+msa+ was too
small.
\item
\begin{verbatim}
memory <integer memlim>
\end{verbatim}
sets a limit \verb+<memlim>+ in kB on the allocated amount of memory used by
the molecular dynamics module.
By default all available memory is allocated. Use of this command
is required only for QM/MM simulations.
%\item
%For development purposes debug information can be written to the debug
%file with extension {\bf dbg} with
%\begin{verbatim}
%debug <i idebug>
%\end{verbatim}
%where $idebug$ specifies the type of debug information being written.
\end{description}
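The following hypothetical fragment (all values illustrative only) combines
some of the program control options described in this section, assuming a
run on 16 nodes:
\begin{verbatim}
load pairs 5 size 0.75
nodes 2 2 4
extra 8
\end{verbatim}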
%% Generated by Sphinx.
\def\sphinxdocclass{report}
\documentclass[letterpaper,10pt,english]{sphinxmanual}
\ifdefined\pdfpxdimen
\let\sphinxpxdimen\pdfpxdimen\else\newdimen\sphinxpxdimen
\fi \sphinxpxdimen=.75bp\relax
\usepackage[utf8]{inputenc}
\ifdefined\DeclareUnicodeCharacter
\ifdefined\DeclareUnicodeCharacterAsOptional
\DeclareUnicodeCharacter{"00A0}{\nobreakspace}
\DeclareUnicodeCharacter{"2500}{\sphinxunichar{2500}}
\DeclareUnicodeCharacter{"2502}{\sphinxunichar{2502}}
\DeclareUnicodeCharacter{"2514}{\sphinxunichar{2514}}
\DeclareUnicodeCharacter{"251C}{\sphinxunichar{251C}}
\DeclareUnicodeCharacter{"2572}{\textbackslash}
\else
\DeclareUnicodeCharacter{00A0}{\nobreakspace}
\DeclareUnicodeCharacter{2500}{\sphinxunichar{2500}}
\DeclareUnicodeCharacter{2502}{\sphinxunichar{2502}}
\DeclareUnicodeCharacter{2514}{\sphinxunichar{2514}}
\DeclareUnicodeCharacter{251C}{\sphinxunichar{251C}}
\DeclareUnicodeCharacter{2572}{\textbackslash}
\fi
\fi
\usepackage{cmap}
\usepackage[T1]{fontenc}
\usepackage{amsmath,amssymb,amstext}
\usepackage{babel}
\usepackage{times}
\usepackage[Bjarne]{fncychap}
\usepackage[dontkeepoldnames]{sphinx}
\usepackage{geometry}
% Include hyperref last.
\usepackage{hyperref}
% Fix anchor placement for figures with captions.
\usepackage{hypcap}% it must be loaded after hyperref.
% Set up styles of URL: it should be placed after hyperref.
\urlstyle{same}
\addto\captionsenglish{\renewcommand{\figurename}{Fig.}}
\addto\captionsenglish{\renewcommand{\tablename}{Table}}
\addto\captionsenglish{\renewcommand{\literalblockname}{Listing}}
\addto\captionsenglish{\renewcommand{\literalblockcontinuedname}{continued from previous page}}
\addto\captionsenglish{\renewcommand{\literalblockcontinuesname}{continues on next page}}
\addto\extrasenglish{\def\pageautorefname{page}}
\setcounter{tocdepth}{1}
\title{Kerberos Plugin Module Developer Guide}
\date{ }
\release{1.18.2}
\author{MIT}
\newcommand{\sphinxlogo}{\vbox{}}
\renewcommand{\releasename}{Release}
\makeindex
\begin{document}
\maketitle
\sphinxtableofcontents
\phantomsection\label{\detokenize{plugindev/index::doc}}
Kerberos plugin modules allow increased control over MIT krb5 library
and server behavior. This guide describes how to create dynamic
plugin modules and the currently available pluggable interfaces.
See \DUrole{xref,std,std-ref}{plugin\_config} for information on how to register dynamic
plugin modules and how to enable and disable modules via
\DUrole{xref,std,std-ref}{krb5.conf(5)}.
\chapter{Contents}
\label{\detokenize{plugindev/index:for-plugin-module-developers}}\label{\detokenize{plugindev/index:contents}}
\section{General plugin concepts}
\label{\detokenize{plugindev/general:general-plugin-concepts}}\label{\detokenize{plugindev/general::doc}}
A krb5 dynamic plugin module is a Unix shared object or Windows DLL.
Typically, the source code for a dynamic plugin module should live in
its own project with a build system using \sphinxhref{https://www.gnu.org/software/automake/}{automake} and \sphinxhref{https://www.gnu.org/software/libtool/}{libtool}, or
tools with similar functionality.
A plugin module must define a specific symbol name, which depends on
the pluggable interface and module name. For most pluggable
interfaces, the exported symbol is a function named
\sphinxcode{INTERFACE\_MODULE\_initvt}, where \sphinxstyleemphasis{INTERFACE} is the name of the
pluggable interface and \sphinxstyleemphasis{MODULE} is the name of the module. For these
interfaces, it is possible for one shared object or DLL to implement
multiple plugin modules, either for the same pluggable interface or
for different ones. For example, a shared object could implement both
KDC and client preauthentication mechanisms, by exporting functions
named \sphinxcode{kdcpreauth\_mymech\_initvt} and \sphinxcode{clpreauth\_mymech\_initvt}.
A plugin module implementation should include the header file
\sphinxcode{\textless{}krb5/INTERFACE\_plugin.h\textgreater{}}, where \sphinxstyleemphasis{INTERFACE} is the name of the
pluggable interface. For instance, a ccselect plugin module
implementation should use \sphinxcode{\#include \textless{}krb5/ccselect\_plugin.h\textgreater{}}.
initvt functions have the following prototype:
\fvset{hllines={, ,}}%
\begin{sphinxVerbatim}[commandchars=\\\{\}]
\PYG{n}{krb5\PYGZus{}error\PYGZus{}code} \PYG{n}{interface\PYGZus{}modname\PYGZus{}initvt}\PYG{p}{(}\PYG{n}{krb5\PYGZus{}context} \PYG{n}{context}\PYG{p}{,}
\PYG{n+nb}{int} \PYG{n}{maj\PYGZus{}ver}\PYG{p}{,} \PYG{n+nb}{int} \PYG{n}{min\PYGZus{}ver}\PYG{p}{,}
\PYG{n}{krb5\PYGZus{}plugin\PYGZus{}vtable} \PYG{n}{vtable}\PYG{p}{)}\PYG{p}{;}
\end{sphinxVerbatim}
and should do the following:
\begin{enumerate}
\item {}
Check that the supplied maj\_ver argument is supported by the
module. If it is not supported, the function should return
KRB5\_PLUGIN\_VER\_NOTSUPP.
\item {}
Cast the supplied vtable pointer to the structure type
corresponding to the major version, as documented in the pluggable
interface header file.
\item {}
Fill in the structure fields with pointers to method functions and
static data, stopping at the field indicated by the supplied minor
version. Fields for unimplemented optional methods can be left
alone; it is not necessary to initialize them to NULL.
\end{enumerate}
In most cases, the context argument will not be used. The initvt
function should not allocate memory; think of it as a glorified
structure initializer. Each pluggable interface defines methods for
allocating and freeing module state if doing so is necessary for the
interface.
Pluggable interfaces typically include a \sphinxstylestrong{name} field in the vtable
structure, which should be filled in with a pointer to a string
literal containing the module name.
Here is an example of what an initvt function might look like for a
fictional pluggable interface named fences, for a module named
“wicker”:
\fvset{hllines={, ,}}%
\begin{sphinxVerbatim}[commandchars=\\\{\}]
\PYG{n}{krb5\PYGZus{}error\PYGZus{}code}
\PYG{n}{fences\PYGZus{}wicker\PYGZus{}initvt}\PYG{p}{(}\PYG{n}{krb5\PYGZus{}context} \PYG{n}{context}\PYG{p}{,} \PYG{n+nb}{int} \PYG{n}{maj\PYGZus{}ver}\PYG{p}{,}
\PYG{n+nb}{int} \PYG{n}{min\PYGZus{}ver}\PYG{p}{,} \PYG{n}{krb5\PYGZus{}plugin\PYGZus{}vtable} \PYG{n}{vtable}\PYG{p}{)}
\PYG{p}{\PYGZob{}}
\PYG{k}{if} \PYG{p}{(}\PYG{n}{maj\PYGZus{}ver} \PYG{o}{==} \PYG{l+m+mi}{1}\PYG{p}{)} \PYG{p}{\PYGZob{}}
\PYG{n}{krb5\PYGZus{}fences\PYGZus{}vtable} \PYG{n}{vt} \PYG{o}{=} \PYG{p}{(}\PYG{n}{krb5\PYGZus{}fences\PYGZus{}vtable}\PYG{p}{)}\PYG{n}{vtable}\PYG{p}{;}
\PYG{n}{vt}\PYG{o}{\PYGZhy{}}\PYG{o}{\PYGZgt{}}\PYG{n}{name} \PYG{o}{=} \PYG{l+s+s2}{\PYGZdq{}}\PYG{l+s+s2}{wicker}\PYG{l+s+s2}{\PYGZdq{}}\PYG{p}{;}
\PYG{n}{vt}\PYG{o}{\PYGZhy{}}\PYG{o}{\PYGZgt{}}\PYG{n}{slats} \PYG{o}{=} \PYG{n}{wicker\PYGZus{}slats}\PYG{p}{;}
\PYG{n}{vt}\PYG{o}{\PYGZhy{}}\PYG{o}{\PYGZgt{}}\PYG{n}{braces} \PYG{o}{=} \PYG{n}{wicker\PYGZus{}braces}\PYG{p}{;}
\PYG{p}{\PYGZcb{}} \PYG{k}{else} \PYG{k}{if} \PYG{p}{(}\PYG{n}{maj\PYGZus{}ver} \PYG{o}{==} \PYG{l+m+mi}{2}\PYG{p}{)} \PYG{p}{\PYGZob{}}
\PYG{n}{krb5\PYGZus{}fences\PYGZus{}vtable\PYGZus{}v2} \PYG{n}{vt} \PYG{o}{=} \PYG{p}{(}\PYG{n}{krb5\PYGZus{}fences\PYGZus{}vtable\PYGZus{}v2}\PYG{p}{)}\PYG{n}{vtable}\PYG{p}{;}
\PYG{n}{vt}\PYG{o}{\PYGZhy{}}\PYG{o}{\PYGZgt{}}\PYG{n}{name} \PYG{o}{=} \PYG{l+s+s2}{\PYGZdq{}}\PYG{l+s+s2}{wicker}\PYG{l+s+s2}{\PYGZdq{}}\PYG{p}{;}
\PYG{n}{vt}\PYG{o}{\PYGZhy{}}\PYG{o}{\PYGZgt{}}\PYG{n}{material} \PYG{o}{=} \PYG{n}{wicker\PYGZus{}material}\PYG{p}{;}
\PYG{n}{vt}\PYG{o}{\PYGZhy{}}\PYG{o}{\PYGZgt{}}\PYG{n}{construction} \PYG{o}{=} \PYG{n}{wicker\PYGZus{}construction}\PYG{p}{;}
\PYG{k}{if} \PYG{p}{(}\PYG{n}{min\PYGZus{}ver} \PYG{o}{\PYGZlt{}} \PYG{l+m+mi}{2}\PYG{p}{)}
\PYG{k}{return} \PYG{l+m+mi}{0}\PYG{p}{;}
\PYG{n}{vt}\PYG{o}{\PYGZhy{}}\PYG{o}{\PYGZgt{}}\PYG{n}{footing} \PYG{o}{=} \PYG{n}{wicker\PYGZus{}footing}\PYG{p}{;}
\PYG{k}{if} \PYG{p}{(}\PYG{n}{min\PYGZus{}ver} \PYG{o}{\PYGZlt{}} \PYG{l+m+mi}{3}\PYG{p}{)}
\PYG{k}{return} \PYG{l+m+mi}{0}\PYG{p}{;}
\PYG{n}{vt}\PYG{o}{\PYGZhy{}}\PYG{o}{\PYGZgt{}}\PYG{n}{appearance} \PYG{o}{=} \PYG{n}{wicker\PYGZus{}appearance}\PYG{p}{;}
\PYG{p}{\PYGZcb{}} \PYG{k}{else} \PYG{p}{\PYGZob{}}
\PYG{k}{return} \PYG{n}{KRB5\PYGZus{}PLUGIN\PYGZus{}VER\PYGZus{}NOTSUPP}\PYG{p}{;}
\PYG{p}{\PYGZcb{}}
\PYG{k}{return} \PYG{l+m+mi}{0}\PYG{p}{;}
\PYG{p}{\PYGZcb{}}
\end{sphinxVerbatim}
\subsection{Logging from KDC and kadmind plugin modules}
\label{\detokenize{plugindev/general:logging-from-kdc-and-kadmind-plugin-modules}}
Plugin modules for the KDC or kadmind daemons can write to the
configured logging outputs (see \DUrole{xref,std,std-ref}{logging}) by calling the
\sphinxstylestrong{com\_err} function. The first argument (\sphinxstyleemphasis{whoami}) is ignored. If
the second argument (\sphinxstyleemphasis{code}) is zero, the formatted message is logged
at informational severity; otherwise, the formatted message is logged
at error severity and includes the error message for the supplied
code. Here are examples:
\fvset{hllines={, ,}}%
\begin{sphinxVerbatim}[commandchars=\\\{\}]
\PYG{n}{com\PYGZus{}err}\PYG{p}{(}\PYG{l+s+s2}{\PYGZdq{}}\PYG{l+s+s2}{\PYGZdq{}}\PYG{p}{,} \PYG{l+m+mi}{0}\PYG{p}{,} \PYG{l+s+s2}{\PYGZdq{}}\PYG{l+s+s2}{Client message contains }\PYG{l+s+si}{\PYGZpc{}d}\PYG{l+s+s2}{ items}\PYG{l+s+s2}{\PYGZdq{}}\PYG{p}{,} \PYG{n}{nitems}\PYG{p}{)}\PYG{p}{;}
\PYG{n}{com\PYGZus{}err}\PYG{p}{(}\PYG{l+s+s2}{\PYGZdq{}}\PYG{l+s+s2}{\PYGZdq{}}\PYG{p}{,} \PYG{n}{retval}\PYG{p}{,} \PYG{l+s+s2}{\PYGZdq{}}\PYG{l+s+s2}{while decoding client message}\PYG{l+s+s2}{\PYGZdq{}}\PYG{p}{)}\PYG{p}{;}
\end{sphinxVerbatim}
(The behavior described above is new in release 1.17. In prior
releases, the \sphinxstyleemphasis{whoami} argument is included for some logging output
types, the logged message does not include the usual header for some
output types, and the severity for syslog outputs is configured as
part of the logging specification, defaulting to error severity.)
\section{Client preauthentication interface (clpreauth)}
\label{\detokenize{plugindev/clpreauth:client-preauthentication-interface-clpreauth}}\label{\detokenize{plugindev/clpreauth::doc}}
During an initial ticket request, a KDC may ask a client to prove its
knowledge of the password before issuing an encrypted ticket, or to
use credentials other than a password. This process is called
preauthentication, and is described in \index{RFC!RFC 4120}\sphinxhref{https://tools.ietf.org/html/rfc4120.html}{\sphinxstylestrong{RFC 4120}} and \index{RFC!RFC 6113}\sphinxhref{https://tools.ietf.org/html/rfc6113.html}{\sphinxstylestrong{RFC 6113}}.
The clpreauth interface allows the addition of client support for
preauthentication mechanisms beyond those included in the core MIT
krb5 code base. For a detailed description of the clpreauth
interface, see the header file \sphinxcode{\textless{}krb5/clpreauth\_plugin.h\textgreater{}} (or
\sphinxcode{\textless{}krb5/preauth\_plugin.h\textgreater{}} before release 1.12).
A clpreauth module is generally responsible for:
\begin{itemize}
\item {}
Supplying a list of preauth type numbers used by the module in the
\sphinxstylestrong{pa\_type\_list} field of the vtable structure.
\item {}
Indicating what kind of preauthentication mechanism it implements,
with the \sphinxstylestrong{flags} method. In the most common case, this method
just returns \sphinxcode{PA\_REAL}, indicating that it implements a normal
preauthentication type.
\item {}
Examining the padata information included in a PREAUTH\_REQUIRED or
MORE\_PREAUTH\_DATA\_REQUIRED error and producing padata values for the
next AS request. This is done with the \sphinxstylestrong{process} method.
\item {}
Examining the padata information included in a successful ticket
reply, possibly verifying the KDC identity and computing a reply
key. This is also done with the \sphinxstylestrong{process} method.
\item {}
For preauthentication types which support it, recovering from errors
by examining the error data from the KDC and producing a padata
value for another AS request. This is done with the \sphinxstylestrong{tryagain}
method.
\item {}
Receiving option information (supplied by \sphinxcode{kinit -X} or by an
application), with the \sphinxstylestrong{gic\_opts} method.
\end{itemize}
A clpreauth module can create and destroy per-library-context and
per-request state objects by implementing the \sphinxstylestrong{init}, \sphinxstylestrong{fini},
\sphinxstylestrong{request\_init}, and \sphinxstylestrong{request\_fini} methods. Per-context state
objects have the type krb5\_clpreauth\_moddata, and per-request state
objects have the type krb5\_clpreauth\_modreq. These are abstract
pointer types; a module should typically cast these to internal
types for the state objects.
The \sphinxstylestrong{process} and \sphinxstylestrong{tryagain} methods have access to a callback
function and handle (called a “rock”) which can be used to get
additional information about the current request, including the
expected enctype of the AS reply, the FAST armor key, and the client
long-term key (prompting for the user password if necessary). A
callback can also be used to replace the AS reply key if the
preauthentication mechanism computes one.
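To tie the pieces above together, here is a hedged sketch (not taken from
the MIT krb5 sources) of a minimal initvt function for a hypothetical
module named “mymech”; it fills in only the fields discussed above, and the
exact vtable layout defined in
\sphinxcode{\textless{}krb5/clpreauth\_plugin.h\textgreater{}} should be
consulted before relying on any particular field.
\fvset{hllines={, ,}}%
\begin{sphinxVerbatim}
/* Hypothetical sketch of a clpreauth initvt function. */
#include <krb5/clpreauth_plugin.h>

/* Zero-terminated list of preauth type numbers (placeholder value). */
static krb5_preauthtype mymech_pa_types[] = { 12345, 0 };

krb5_error_code
clpreauth_mymech_initvt(krb5_context context, int maj_ver, int min_ver,
                        krb5_plugin_vtable vtable)
{
    krb5_clpreauth_vtable vt;

    if (maj_ver != 1)
        return KRB5_PLUGIN_VER_NOTSUPP;
    vt = (krb5_clpreauth_vtable)vtable;
    vt->name = "mymech";
    vt->pa_type_list = mymech_pa_types;
    /* Method fields such as flags, process, tryagain and gic_opts
     * would be set here, stopping at the field indicated by min_ver. */
    return 0;
}
\end{sphinxVerbatim}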
\section{KDC preauthentication interface (kdcpreauth)}
\label{\detokenize{plugindev/kdcpreauth:kdc-preauthentication-interface-kdcpreauth}}\label{\detokenize{plugindev/kdcpreauth::doc}}
The kdcpreauth interface allows the addition of KDC support for
preauthentication mechanisms beyond those included in the core MIT
krb5 code base. For a detailed description of the kdcpreauth
interface, see the header file \sphinxcode{\textless{}krb5/kdcpreauth\_plugin.h\textgreater{}} (or
\sphinxcode{\textless{}krb5/preauth\_plugin.h\textgreater{}} before release 1.12).
A kdcpreauth module is generally responsible for:
\begin{itemize}
\item {}
Supplying a list of preauth type numbers used by the module in the
\sphinxstylestrong{pa\_type\_list} field of the vtable structure.
\item {}
Indicating what kind of preauthentication mechanism it implements,
with the \sphinxstylestrong{flags} method. If the mechanism computes a new reply
key, it must specify the \sphinxcode{PA\_REPLACES\_KEY} flag. If the mechanism
is generally only used with hardware tokens, the \sphinxcode{PA\_HARDWARE}
flag allows the mechanism to work with principals which have the
\sphinxstylestrong{requires\_hwauth} flag set.
\item {}
Producing a padata value to be sent with a preauth\_required error,
with the \sphinxstylestrong{edata} method.
\item {}
Examining a padata value sent by a client and verifying that it
proves knowledge of the appropriate client credential information.
This is done with the \sphinxstylestrong{verify} method.
\item {}
Producing a padata response value for the client, and possibly
computing a reply key. This is done with the \sphinxstylestrong{return\_padata}
method.
\end{itemize}
A module can create and destroy per-KDC state objects by implementing
the \sphinxstylestrong{init} and \sphinxstylestrong{fini} methods. Per-KDC state objects have the
type krb5\_kdcpreauth\_moddata, which is an abstract pointer type. A
module should typically cast this to an internal type for the state
object.
A module can create a per-request state object by returning one in the
\sphinxstylestrong{verify} method, receiving it in the \sphinxstylestrong{return\_padata} method, and
destroying it in the \sphinxstylestrong{free\_modreq} method. Note that these state
objects only apply to the processing of a single AS request packet,
not to an entire authentication exchange (since an authentication
exchange may remain unfinished by the client or may involve multiple
different KDC hosts). Per-request state objects have the type
krb5\_kdcpreauth\_modreq, which is an abstract pointer type.
The \sphinxstylestrong{edata}, \sphinxstylestrong{verify}, and \sphinxstylestrong{return\_padata} methods have access
to a callback function and handle (called a “rock”) which can be used
to get additional information about the current request, including the
maximum allowable clock skew, the client’s long-term keys, the
DER-encoded request body, the FAST armor key, string attributes on the
client’s database entry, and the client’s database entry itself. The
\sphinxstylestrong{verify} method can assert one or more authentication indicators to
be included in the issued ticket using the \sphinxcode{add\_auth\_indicator}
callback (new in release 1.14).
A module can generate state information to be included with the next
client request using the \sphinxcode{set\_cookie} callback (new in release
1.14). On the next request, the module can read this state
information using the \sphinxcode{get\_cookie} callback. Cookie information is
encrypted, timestamped, and transmitted to the client in a
\sphinxcode{PA-FX-COOKIE} pa-data item. Older clients may not support cookies
and therefore may not transmit the cookie in the next request; in this
case, \sphinxcode{get\_cookie} will not yield the saved information.
If a module implements a mechanism which requires multiple round
trips, its \sphinxstylestrong{verify} method can respond with the code
\sphinxcode{KRB5KDC\_ERR\_MORE\_PREAUTH\_DATA\_REQUIRED} and a list of pa-data in
the \sphinxstyleemphasis{e\_data} parameter to be processed by the client.
The \sphinxstylestrong{edata} and \sphinxstylestrong{verify} methods can be implemented
asynchronously. Because of this, they do not return values directly
to the caller, but must instead invoke responder functions with their
results. A synchronous implementation can invoke the responder
function immediately. An asynchronous implementation can use the
callback to get an event context for use with the \sphinxhref{https://fedorahosted.org/libverto/}{libverto} API.
\section{Credential cache selection interface (ccselect)}
\label{\detokenize{plugindev/ccselect:credential-cache-selection-interface-ccselect}}\label{\detokenize{plugindev/ccselect::doc}}\label{\detokenize{plugindev/ccselect:ccselect-plugin}}
The ccselect interface allows modules to control how credential caches
are chosen when a GSSAPI client contacts a service. For a detailed
description of the ccselect interface, see the header file
\sphinxcode{\textless{}krb5/ccselect\_plugin.h\textgreater{}}.
The primary ccselect method is \sphinxstylestrong{choose}, which accepts a server
principal as input and returns a ccache and/or principal name as
output. A module can use the krb5\_cccol APIs to iterate over the
cache collection in order to find an appropriate ccache to use.
A module can create and destroy per-library-context state objects by
implementing the \sphinxstylestrong{init} and \sphinxstylestrong{fini} methods. State objects have
the type krb5\_ccselect\_moddata, which is an abstract pointer type. A
module should typically cast this to an internal type for the state
object.
A module can have one of two priorities, “authoritative” or
“heuristic”. Results from authoritative modules, if any are
available, will take priority over results from heuristic modules. A
module communicates its priority as a result of the \sphinxstylestrong{init} method.
\section{Password quality interface (pwqual)}
\label{\detokenize{plugindev/pwqual::doc}}\label{\detokenize{plugindev/pwqual:password-quality-interface-pwqual}}\label{\detokenize{plugindev/pwqual:pwqual-plugin}}
The pwqual interface allows modules to control what passwords are
allowed when a user changes passwords. For a detailed description of
the pwqual interface, see the header file \sphinxcode{\textless{}krb5/pwqual\_plugin.h\textgreater{}}.
The primary pwqual method is \sphinxstylestrong{check}, which receives a password as
input and returns success (0) or a \sphinxcode{KADM5\_PASS\_Q\_} failure code
depending on whether the password is allowed. The \sphinxstylestrong{check} method
also receives the principal name and the name of the principal’s
password policy as input; although there is no stable interface for
the module to obtain the fields of the password policy, it can define
its own configuration or data store based on the policy name.
A module can create and destroy per-process state objects by
implementing the \sphinxstylestrong{open} and \sphinxstylestrong{close} methods. State objects have
the type krb5\_pwqual\_moddata, which is an abstract pointer type. A
module should typically cast this to an internal type for the state
object. The \sphinxstylestrong{open} method also receives the name of the realm’s
dictionary file (as configured by the \sphinxstylestrong{dict\_file} variable in the
\DUrole{xref,std,std-ref}{kdc\_realms} section of \DUrole{xref,std,std-ref}{kdc.conf(5)}) if it wishes to use
it.
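As an illustration of the overall shape of such a module, the following
sketch implements a \sphinxstylestrong{check} method that rejects passwords
shorter than eight characters.  It is illustrative only: the vtable type and
field names and the \sphinxcode{pwqual\_minlen\_initvt} entry point follow the
usual MIT krb5 plugin conventions from memory, and
\sphinxcode{\textless{}krb5/pwqual\_plugin.h\textgreater{}} (with
\sphinxcode{\textless{}kadm5/admin.h\textgreater{}} for the \sphinxcode{KADM5\_PASS\_Q\_}
codes) should be treated as the authoritative reference.
\begin{sphinxVerbatim}
/* Illustrative sketch only: a pwqual module that rejects short passwords.
 * All declarations are assumptions following the common krb5 plugin
 * pattern; check them against <krb5/pwqual_plugin.h>. */
#include <string.h>
#include <kadm5/admin.h>            /* KADM5_PASS_Q_* failure codes */
#include <krb5/pwqual_plugin.h>

static krb5_error_code
check_minlen(krb5_context context, krb5_pwqual_moddata data,
             const char *password, const char *policy_name,
             krb5_principal princ, const char **languages)
{
    /* Reject any password shorter than eight characters. */
    return (strlen(password) < 8) ? KADM5_PASS_Q_TOOSHORT : 0;
}

krb5_error_code
pwqual_minlen_initvt(krb5_context context, int maj_ver, int min_ver,
                     krb5_plugin_vtable vtable)
{
    krb5_pwqual_vtable vt;

    if (maj_ver != 1)
        return KRB5_PLUGIN_VER_NOTSUPP;
    vt = (krb5_pwqual_vtable)vtable;
    vt->name = "minlen";
    vt->check = check_minlen;
    /* open and close are omitted; this module keeps no per-process state. */
    return 0;
}
\end{sphinxVerbatim}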
\section{KADM5 hook interface (kadm5\_hook)}
\label{\detokenize{plugindev/kadm5_hook:kadm5-hook-interface-kadm5-hook}}\label{\detokenize{plugindev/kadm5_hook::doc}}\label{\detokenize{plugindev/kadm5_hook:kadm5-hook-plugin}}
The kadm5\_hook interface allows modules to perform actions when
changes are made to the Kerberos database through \DUrole{xref,std,std-ref}{kadmin(1)}.
For a detailed description of the kadm5\_hook interface, see the header
file \sphinxcode{\textless{}krb5/kadm5\_hook\_plugin.h\textgreater{}}.
The kadm5\_hook interface has five primary methods: \sphinxstylestrong{chpass},
\sphinxstylestrong{create}, \sphinxstylestrong{modify}, \sphinxstylestrong{remove}, and \sphinxstylestrong{rename}. (The \sphinxstylestrong{rename}
method was introduced in release 1.14.) Each of these methods is
called twice when the corresponding administrative action takes place,
once before the action is committed and once afterwards. A module can
prevent the action from taking place by returning an error code during
the pre-commit stage.
A module can create and destroy per-process state objects by
implementing the \sphinxstylestrong{init} and \sphinxstylestrong{fini} methods. State objects have
the type kadm5\_hook\_modinfo, which is an abstract pointer type. A
module should typically cast this to an internal type for the state
object.
Because the kadm5\_hook interface is tied closely to the kadmin
interface (which is explicitly unstable), it may not remain as stable
across versions as other public pluggable interfaces.
\section{kadmin authorization interface (kadm5\_auth)}
\label{\detokenize{plugindev/kadm5_auth:kadm5-auth-plugin}}\label{\detokenize{plugindev/kadm5_auth:kadmin-authorization-interface-kadm5-auth}}\label{\detokenize{plugindev/kadm5_auth::doc}}
The kadm5\_auth interface (new in release 1.16) allows modules to
determine whether a client principal is authorized to perform an
operation in the kadmin protocol, and to apply restrictions to
principal operations. For a detailed description of the kadm5\_auth
interface, see the header file \sphinxcode{\textless{}krb5/kadm5\_auth\_plugin.h\textgreater{}}.
A module can create and destroy per-process state objects by
implementing the \sphinxstylestrong{init} and \sphinxstylestrong{fini} methods. State objects have
the type kadm5\_auth\_modinfo, which is an abstract pointer type. A
module should typically cast this to an internal type for the state
object.
The kadm5\_auth interface has one method for each kadmin operation,
with parameters specific to the operation. Each method can return
either 0 to authorize access, KRB5\_PLUGIN\_NO\_HANDLE to defer the
decision to other modules, or another error (canonically EPERM) to
authoritatively deny access. Access is granted if at least one module
grants access and no module authoritatively denies access.
The \sphinxstylestrong{addprinc} and \sphinxstylestrong{modprinc} methods can also impose restrictions
on the principal operation by returning a \sphinxcode{struct
kadm5\_auth\_restrictions} object. The module should also implement
the \sphinxstylestrong{free\_restrictions} method if it dynamically allocates
restrictions objects for principal operations.
kadm5\_auth modules can optionally inspect principal or policy objects.
To do this, the module must also include \sphinxcode{\textless{}kadm5/admin.h\textgreater{}} to gain
access to the structure definitions for those objects. As the kadmin
interface is explicitly not as stable as other public interfaces,
modules which do this may not retain compatibility across releases.
\section{Host-to-realm interface (hostrealm)}
\label{\detokenize{plugindev/hostrealm:hostrealm-plugin}}\label{\detokenize{plugindev/hostrealm::doc}}\label{\detokenize{plugindev/hostrealm:host-to-realm-interface-hostrealm}}
The host-to-realm interface was first introduced in release 1.12. It
allows modules to control the local mapping of hostnames to realm
names as well as the default realm. For a detailed description of the
hostrealm interface, see the header file
\sphinxcode{\textless{}krb5/hostrealm\_plugin.h\textgreater{}}.
Although the mapping methods in the hostrealm interface return a list
of one or more realms, only the first realm in the list is currently
used by callers. Callers may begin using later responses in the
future.
Any mapping method may return KRB5\_PLUGIN\_NO\_HANDLE to defer
processing to a later module.
A module can create and destroy per-library-context state objects
using the \sphinxstylestrong{init} and \sphinxstylestrong{fini} methods. If the module does not need
any state, it does not need to implement these methods.
The optional \sphinxstylestrong{host\_realm} method allows a module to determine
authoritative realm mappings for a hostname. The first authoritative
mapping is used in preference to KDC referrals when getting service
credentials.
The optional \sphinxstylestrong{fallback\_realm} method allows a module to determine
fallback mappings for a hostname. The first fallback mapping is tried
if there is no authoritative mapping for a realm, and KDC referrals
failed to produce a successful result.
The optional \sphinxstylestrong{default\_realm} method allows a module to determine the
local default realm.
If a module implements any of the above methods, it must also
implement \sphinxstylestrong{free\_list} to ensure that memory is allocated and
deallocated consistently.
\section{Local authorization interface (localauth)}
\label{\detokenize{plugindev/localauth:local-authorization-interface-localauth}}\label{\detokenize{plugindev/localauth:localauth-plugin}}\label{\detokenize{plugindev/localauth::doc}}
The localauth interface was first introduced in release 1.12. It
allows modules to control the relationship between Kerberos principals
and local system accounts. When an application calls
\sphinxcode{krb5\_kuserok()} or \sphinxcode{krb5\_aname\_to\_localname()}, localauth
modules are consulted to determine the result. For a detailed
description of the localauth interface, see the header file
\sphinxcode{\textless{}krb5/localauth\_plugin.h\textgreater{}}.
A module can create and destroy per-library-context state objects
using the \sphinxstylestrong{init} and \sphinxstylestrong{fini} methods. If the module does not need
any state, it does not need to implement these methods.
The optional \sphinxstylestrong{userok} method allows a module to control the behavior
of \sphinxcode{krb5\_kuserok()}. The module receives the authenticated name
and the local account name as inputs, and can return either 0 to
authorize access, KRB5\_PLUGIN\_NO\_HANDLE to defer the decision to other
modules, or another error (canonically EPERM) to authoritatively deny
access. Access is granted if at least one module grants access and no
module authoritatively denies access.
The optional \sphinxstylestrong{an2ln} method can work in two different ways. If the
module sets an array of uppercase type names in \sphinxstylestrong{an2ln\_types}, then
the module’s \sphinxstylestrong{an2ln} method will only be invoked by
\sphinxcode{krb5\_aname\_to\_localname()} if an \sphinxstylestrong{auth\_to\_local} value in
\DUrole{xref,std,std-ref}{krb5.conf(5)} refers to one of the module’s types. In this
case, the \sphinxstyleemphasis{type} and \sphinxstyleemphasis{residual} arguments will give the type name and
residual string of the \sphinxstylestrong{auth\_to\_local} value.
If the module does not set \sphinxstylestrong{an2ln\_types} but does implement
\sphinxstylestrong{an2ln}, the module’s \sphinxstylestrong{an2ln} method will be invoked for all
\sphinxcode{krb5\_aname\_to\_localname()} operations unless an earlier module
determines a mapping, with \sphinxstyleemphasis{type} and \sphinxstyleemphasis{residual} set to NULL. The
module can return KRB5\_LNAME\_NO\_TRANS to defer mapping to later
modules.
If a module implements \sphinxstylestrong{an2ln}, it must also implement
\sphinxstylestrong{free\_string} to ensure that memory is allocated and deallocated
consistently.
\section{Server location interface (locate)}
\label{\detokenize{plugindev/locate:server-location-interface-locate}}\label{\detokenize{plugindev/locate::doc}}
The locate interface allows modules to control how KDCs and similar
services are located by clients. For a detailed description of the
locate interface, see the header file \sphinxcode{\textless{}krb5/locate\_plugin.h\textgreater{}}.
A locate module exports a structure object of type
krb5plugin\_service\_locate\_ftable, with the name \sphinxcode{service\_locator}.
The structure contains a minor version and pointers to the module’s
methods.
The primary locate method is \sphinxstylestrong{lookup}, which accepts a service type,
realm name, desired socket type, and desired address family (which
will be AF\_UNSPEC if no specific address family is desired). The
method should invoke the callback function once for each server
address it wants to return, passing a socket type (SOCK\_STREAM for TCP
or SOCK\_DGRAM for UDP) and socket address. The \sphinxstylestrong{lookup} method
should return 0 if it has authoritatively determined the server
addresses for the realm, KRB5\_PLUGIN\_NO\_HANDLE if it wants to let
other location mechanisms determine the server addresses, or another
code if it experienced a failure which should abort the location
process.
A module can create and destroy per-library-context state objects by
implementing the \sphinxstylestrong{init} and \sphinxstylestrong{fini} methods. State objects have
the type void *, and should be cast to an internal type for the state
object.
\section{Configuration interface (profile)}
\label{\detokenize{plugindev/profile:configuration-interface-profile}}\label{\detokenize{plugindev/profile::doc}}\label{\detokenize{plugindev/profile:profile-plugin}}
The profile interface allows a module to control how krb5
configuration information is obtained by the Kerberos library and
applications. For a detailed description of the profile interface,
see the header file \sphinxcode{\textless{}profile.h\textgreater{}}.
\begin{sphinxadmonition}{note}{Note:}
The profile interface does not follow the normal conventions
for MIT krb5 pluggable interfaces, because it is part of a
lower-level component of the krb5 library.
\end{sphinxadmonition}
As with other types of plugin modules, a profile module is a Unix
shared object or Windows DLL, built separately from the krb5 tree.
The krb5 library will dynamically load and use a profile plugin module
if it reads a \sphinxcode{module} directive at the beginning of krb5.conf, as
described in \DUrole{xref,std,std-ref}{profile\_plugin\_config}.
A profile module exports a function named \sphinxcode{profile\_module\_init}
matching the signature of the profile\_module\_init\_fn type. This
function accepts a residual string, which may be used to help locate
the configuration source. The function fills in a vtable and may also
create a per-profile state object. If the module uses state objects,
it should implement the \sphinxstylestrong{copy} and \sphinxstylestrong{cleanup} methods to manage
them.
A basic read-only profile module need only implement the
\sphinxstylestrong{get\_values} and \sphinxstylestrong{free\_values} methods. The \sphinxstylestrong{get\_values} method
accepts a null-terminated list of C string names (e.g., an array
containing “libdefaults”, “clockskew”, and NULL for the \sphinxstylestrong{clockskew}
variable in the \DUrole{xref,std,std-ref}{libdefaults} section) and returns a
null-terminated list of values, which will be cleaned up with the
\sphinxstylestrong{free\_values} method when the caller is done with them.
Iterable profile modules must also define the \sphinxstylestrong{iterator\_create},
\sphinxstylestrong{iterator}, \sphinxstylestrong{iterator\_free}, and \sphinxstylestrong{free\_string} methods. The
core krb5 code does not require profiles to be iterable, but some
applications may iterate over the krb5 profile object in order to
present configuration interfaces.
Writable profile modules must also define the \sphinxstylestrong{writable},
\sphinxstylestrong{modified}, \sphinxstylestrong{update\_relation}, \sphinxstylestrong{rename\_section},
\sphinxstylestrong{add\_relation}, and \sphinxstylestrong{flush} methods. The core krb5 code does not
require profiles to be writable, but some applications may write to
the krb5 profile in order to present configuration interfaces.
The following is an example of a very basic read-only profile module
which returns a hardcoded value for the \sphinxstylestrong{default\_realm} variable in
\DUrole{xref,std,std-ref}{libdefaults}, and provides no other configuration information.
(For conciseness, the example omits code for checking the return
values of malloc and strdup.)
\fvset{hllines={, ,}}%
\begin{sphinxVerbatim}[commandchars=\\\{\}]
\PYG{c+c1}{\PYGZsh{}include \PYGZlt{}stdlib.h\PYGZgt{}}
\PYG{c+c1}{\PYGZsh{}include \PYGZlt{}string.h\PYGZgt{}}
\PYG{c+c1}{\PYGZsh{}include \PYGZlt{}profile.h\PYGZgt{}}
\PYG{n}{static} \PYG{n}{long}
\PYG{n}{get\PYGZus{}values}\PYG{p}{(}\PYG{n}{void} \PYG{o}{*}\PYG{n}{cbdata}\PYG{p}{,} \PYG{n}{const} \PYG{n}{char} \PYG{o}{*}\PYG{n}{const} \PYG{o}{*}\PYG{n}{names}\PYG{p}{,} \PYG{n}{char} \PYG{o}{*}\PYG{o}{*}\PYG{o}{*}\PYG{n}{values}\PYG{p}{)}
\PYG{p}{\PYGZob{}}
    \PYG{k}{if} \PYG{p}{(}\PYG{n}{names}\PYG{p}{[}\PYG{l+m+mi}{0}\PYG{p}{]} \PYG{o}{!=} \PYG{n}{NULL} \PYG{o}{\PYGZam{}}\PYG{o}{\PYGZam{}} \PYG{n}{strcmp}\PYG{p}{(}\PYG{n}{names}\PYG{p}{[}\PYG{l+m+mi}{0}\PYG{p}{]}\PYG{p}{,} \PYG{l+s+s2}{\PYGZdq{}}\PYG{l+s+s2}{libdefaults}\PYG{l+s+s2}{\PYGZdq{}}\PYG{p}{)} \PYG{o}{==} \PYG{l+m+mi}{0} \PYG{o}{\PYGZam{}}\PYG{o}{\PYGZam{}}
        \PYG{n}{names}\PYG{p}{[}\PYG{l+m+mi}{1}\PYG{p}{]} \PYG{o}{!=} \PYG{n}{NULL} \PYG{o}{\PYGZam{}}\PYG{o}{\PYGZam{}} \PYG{n}{strcmp}\PYG{p}{(}\PYG{n}{names}\PYG{p}{[}\PYG{l+m+mi}{1}\PYG{p}{]}\PYG{p}{,} \PYG{l+s+s2}{\PYGZdq{}}\PYG{l+s+s2}{default\PYGZus{}realm}\PYG{l+s+s2}{\PYGZdq{}}\PYG{p}{)} \PYG{o}{==} \PYG{l+m+mi}{0}\PYG{p}{)} \PYG{p}{\PYGZob{}}
        \PYG{o}{*}\PYG{n}{values} \PYG{o}{=} \PYG{n}{malloc}\PYG{p}{(}\PYG{l+m+mi}{2} \PYG{o}{*} \PYG{n}{sizeof}\PYG{p}{(}\PYG{n}{char} \PYG{o}{*}\PYG{p}{)}\PYG{p}{)}\PYG{p}{;}
        \PYG{p}{(}\PYG{o}{*}\PYG{n}{values}\PYG{p}{)}\PYG{p}{[}\PYG{l+m+mi}{0}\PYG{p}{]} \PYG{o}{=} \PYG{n}{strdup}\PYG{p}{(}\PYG{l+s+s2}{\PYGZdq{}}\PYG{l+s+s2}{ATHENA.MIT.EDU}\PYG{l+s+s2}{\PYGZdq{}}\PYG{p}{)}\PYG{p}{;}
        \PYG{p}{(}\PYG{o}{*}\PYG{n}{values}\PYG{p}{)}\PYG{p}{[}\PYG{l+m+mi}{1}\PYG{p}{]} \PYG{o}{=} \PYG{n}{NULL}\PYG{p}{;}
        \PYG{k}{return} \PYG{l+m+mi}{0}\PYG{p}{;}
    \PYG{p}{\PYGZcb{}}
    \PYG{k}{return} \PYG{n}{PROF\PYGZus{}NO\PYGZus{}RELATION}\PYG{p}{;}
\PYG{p}{\PYGZcb{}}

\PYG{n}{static} \PYG{n}{void}
\PYG{n}{free\PYGZus{}values}\PYG{p}{(}\PYG{n}{void} \PYG{o}{*}\PYG{n}{cbdata}\PYG{p}{,} \PYG{n}{char} \PYG{o}{*}\PYG{o}{*}\PYG{n}{values}\PYG{p}{)}
\PYG{p}{\PYGZob{}}
    \PYG{n}{char} \PYG{o}{*}\PYG{o}{*}\PYG{n}{v}\PYG{p}{;}

    \PYG{k}{for} \PYG{p}{(}\PYG{n}{v} \PYG{o}{=} \PYG{n}{values}\PYG{p}{;} \PYG{o}{*}\PYG{n}{v}\PYG{p}{;} \PYG{n}{v}\PYG{o}{+}\PYG{o}{+}\PYG{p}{)}
        \PYG{n}{free}\PYG{p}{(}\PYG{o}{*}\PYG{n}{v}\PYG{p}{)}\PYG{p}{;}
    \PYG{n}{free}\PYG{p}{(}\PYG{n}{values}\PYG{p}{)}\PYG{p}{;}
\PYG{p}{\PYGZcb{}}

\PYG{n}{long}
\PYG{n}{profile\PYGZus{}module\PYGZus{}init}\PYG{p}{(}\PYG{n}{const} \PYG{n}{char} \PYG{o}{*}\PYG{n}{residual}\PYG{p}{,} \PYG{n}{struct} \PYG{n}{profile\PYGZus{}vtable} \PYG{o}{*}\PYG{n}{vtable}\PYG{p}{,}
                     \PYG{n}{void} \PYG{o}{*}\PYG{o}{*}\PYG{n}{cb\PYGZus{}ret}\PYG{p}{)}\PYG{p}{;}

\PYG{n}{long}
\PYG{n}{profile\PYGZus{}module\PYGZus{}init}\PYG{p}{(}\PYG{n}{const} \PYG{n}{char} \PYG{o}{*}\PYG{n}{residual}\PYG{p}{,} \PYG{n}{struct} \PYG{n}{profile\PYGZus{}vtable} \PYG{o}{*}\PYG{n}{vtable}\PYG{p}{,}
                     \PYG{n}{void} \PYG{o}{*}\PYG{o}{*}\PYG{n}{cb\PYGZus{}ret}\PYG{p}{)}
\PYG{p}{\PYGZob{}}
    \PYG{o}{*}\PYG{n}{cb\PYGZus{}ret} \PYG{o}{=} \PYG{n}{NULL}\PYG{p}{;}
    \PYG{n}{vtable}\PYG{o}{\PYGZhy{}}\PYG{o}{\PYGZgt{}}\PYG{n}{get\PYGZus{}values} \PYG{o}{=} \PYG{n}{get\PYGZus{}values}\PYG{p}{;}
    \PYG{n}{vtable}\PYG{o}{\PYGZhy{}}\PYG{o}{\PYGZgt{}}\PYG{n}{free\PYGZus{}values} \PYG{o}{=} \PYG{n}{free\PYGZus{}values}\PYG{p}{;}
    \PYG{k}{return} \PYG{l+m+mi}{0}\PYG{p}{;}
\PYG{p}{\PYGZcb{}}
\end{sphinxVerbatim}
\section{GSSAPI mechanism interface}
\label{\detokenize{plugindev/gssapi::doc}}\label{\detokenize{plugindev/gssapi:gssapi-mechanism-interface}}
The GSSAPI library in MIT krb5 can load mechanism modules to augment
the set of built-in mechanisms.
A mechanism module is a Unix shared object or Windows DLL, built
separately from the krb5 tree. Modules are loaded according to the
GSS mechanism config files described in \DUrole{xref,std,std-ref}{gssapi\_plugin\_config}.
For the most part, a GSSAPI mechanism module exports the same
functions as would a GSSAPI implementation itself, with the same
function signatures. The mechanism selection layer within the GSSAPI
library (called the “mechglue”) will dispatch calls from the
application to the module if the module’s mechanism is requested. If
a module does not wish to implement a GSSAPI extension, it can simply
refrain from exporting it, and the mechglue will fail gracefully if
the application calls that function.
The mechglue does not invoke a module’s \sphinxstylestrong{gss\_add\_cred},
\sphinxstylestrong{gss\_add\_cred\_from}, \sphinxstylestrong{gss\_add\_cred\_impersonate\_name}, or
\sphinxstylestrong{gss\_add\_cred\_with\_password} function. A mechanism only needs to
implement the “acquire” variants of those functions.
A module does not need to coordinate its minor status codes with those
of other mechanisms. If the mechglue detects conflicts, it will map
the mechanism’s status codes onto unique values, and then map them
back again when \sphinxstylestrong{gss\_display\_status} is called.
\subsection{NegoEx modules}
\label{\detokenize{plugindev/gssapi:negoex-modules}}
Some Windows GSSAPI mechanisms can only be negotiated via a Microsoft
extension to SPNEGO called NegoEx. Beginning with release 1.18,
mechanism modules can support NegoEx as follows:
\begin{itemize}
\item {}
Implement the gssspi\_query\_meta\_data(), gssspi\_exchange\_meta\_data(),
and gssspi\_query\_mechanism\_info() SPIs declared in
\sphinxcode{\textless{}gssapi/gssapi\_ext.h\textgreater{}}.
\item {}
Implement gss\_inquire\_sec\_context\_by\_oid() and answer the
\sphinxstylestrong{GSS\_C\_INQ\_NEGOEX\_KEY} and \sphinxstylestrong{GSS\_C\_INQ\_NEGOEX\_VERIFY\_KEY} OIDs
to provide the checksum keys for outgoing and incoming checksums,
respectively. The answer must be in two buffers: the first buffer
contains the key contents, and the second buffer contains the key
encryption type as a four-byte little-endian integer.
\end{itemize}
By default, NegoEx mechanisms will not be directly negotiated via
SPNEGO. If direct SPNEGO negotiation is required for
interoperability, implement gss\_inquire\_attrs\_for\_mech() and assert
the GSS\_C\_MA\_NEGOEX\_AND\_SPNEGO attribute (along with any applicable
RFC 5587 attributes).
\subsection{Interposer modules}
\label{\detokenize{plugindev/gssapi:interposer-modules}}
The mechglue also supports a kind of loadable module, called an
interposer module, which intercepts calls to existing mechanisms
rather than implementing a new mechanism.
An interposer module must export the symbol \sphinxstylestrong{gss\_mech\_interposer}
with the following signature:
\fvset{hllines={, ,}}%
\begin{sphinxVerbatim}[commandchars=\\\{\}]
\PYG{n}{gss\PYGZus{}OID\PYGZus{}set} \PYG{n}{gss\PYGZus{}mech\PYGZus{}interposer}\PYG{p}{(}\PYG{n}{gss\PYGZus{}OID} \PYG{n}{mech\PYGZus{}type}\PYG{p}{)}\PYG{p}{;}
\end{sphinxVerbatim}
This function is invoked with the OID of the interposer mechanism as
specified in the mechanism config file, and returns a set of mechanism
OIDs to be interposed. The returned OID set must have been created
using the mechglue’s gss\_create\_empty\_oid\_set and
gss\_add\_oid\_set\_member functions.
An interposer module must use the prefix \sphinxcode{gssi\_} for the GSSAPI
functions it exports, instead of the prefix \sphinxcode{gss\_}.
An interposer module can link against the GSSAPI library in order to
make calls to the original mechanism. To do so, it must specify a
special mechanism OID which is the concatenation of the interposer’s
own OID byte string and the original mechanism’s OID byte string.
Since \sphinxstylestrong{gss\_accept\_sec\_context} does not accept a mechanism argument,
an interposer mechanism must, in order to invoke the original
mechanism’s function, acquire a credential for the concatenated OID
and pass that as the \sphinxstyleemphasis{verifier\_cred\_handle} parameter.
Since \sphinxstylestrong{gss\_import\_name}, \sphinxstylestrong{gss\_import\_cred}, and
\sphinxstylestrong{gss\_import\_sec\_context} do not accept mechanism parameters, the SPI
has been extended to include variants which do. This allows the
interposer module to know which mechanism should be used to interpret
the token. These functions have the following signatures:
\fvset{hllines={, ,}}%
\begin{sphinxVerbatim}[commandchars=\\\{\}]
\PYG{n}{OM\PYGZus{}uint32} \PYG{n}{gssi\PYGZus{}import\PYGZus{}sec\PYGZus{}context\PYGZus{}by\PYGZus{}mech}\PYG{p}{(}\PYG{n}{OM\PYGZus{}uint32} \PYG{o}{*}\PYG{n}{minor\PYGZus{}status}\PYG{p}{,}
    \PYG{n}{gss\PYGZus{}OID} \PYG{n}{desired\PYGZus{}mech}\PYG{p}{,} \PYG{n}{gss\PYGZus{}buffer\PYGZus{}t} \PYG{n}{interprocess\PYGZus{}token}\PYG{p}{,}
    \PYG{n}{gss\PYGZus{}ctx\PYGZus{}id\PYGZus{}t} \PYG{o}{*}\PYG{n}{context\PYGZus{}handle}\PYG{p}{)}\PYG{p}{;}

\PYG{n}{OM\PYGZus{}uint32} \PYG{n}{gssi\PYGZus{}import\PYGZus{}name\PYGZus{}by\PYGZus{}mech}\PYG{p}{(}\PYG{n}{OM\PYGZus{}uint32} \PYG{o}{*}\PYG{n}{minor\PYGZus{}status}\PYG{p}{,}
    \PYG{n}{gss\PYGZus{}OID} \PYG{n}{mech\PYGZus{}type}\PYG{p}{,} \PYG{n}{gss\PYGZus{}buffer\PYGZus{}t} \PYG{n}{input\PYGZus{}name\PYGZus{}buffer}\PYG{p}{,}
    \PYG{n}{gss\PYGZus{}OID} \PYG{n}{input\PYGZus{}name\PYGZus{}type}\PYG{p}{,} \PYG{n}{gss\PYGZus{}name\PYGZus{}t} \PYG{n}{output\PYGZus{}name}\PYG{p}{)}\PYG{p}{;}

\PYG{n}{OM\PYGZus{}uint32} \PYG{n}{gssi\PYGZus{}import\PYGZus{}cred\PYGZus{}by\PYGZus{}mech}\PYG{p}{(}\PYG{n}{OM\PYGZus{}uint32} \PYG{o}{*}\PYG{n}{minor\PYGZus{}status}\PYG{p}{,}
    \PYG{n}{gss\PYGZus{}OID} \PYG{n}{mech\PYGZus{}type}\PYG{p}{,} \PYG{n}{gss\PYGZus{}buffer\PYGZus{}t} \PYG{n}{token}\PYG{p}{,}
    \PYG{n}{gss\PYGZus{}cred\PYGZus{}id\PYGZus{}t} \PYG{o}{*}\PYG{n}{cred\PYGZus{}handle}\PYG{p}{)}\PYG{p}{;}
\end{sphinxVerbatim}
To re-enter the original mechanism when importing tokens for the above
functions, the interposer module must wrap the mechanism token in the
mechglue’s format, using the concatenated OID. The mechglue token
formats are:
\begin{itemize}
\item {}
For \sphinxstylestrong{gss\_import\_sec\_context}, a four-byte OID length in big-endian
order, followed by the mechanism OID, followed by the mechanism
token.
\item {}
For \sphinxstylestrong{gss\_import\_name}, the bytes 04 01, followed by a two-byte OID
length in big-endian order, followed by the mechanism OID, followed
by the bytes 06, followed by the OID length as a single byte,
followed by the mechanism OID, followed by the mechanism token.
\item {}
For \sphinxstylestrong{gss\_import\_cred}, a four-byte OID length in big-endian order,
followed by the mechanism OID, followed by a four-byte token length
in big-endian order, followed by the mechanism token. This sequence
may be repeated multiple times.
\end{itemize}
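As a concrete illustration of the first format above, the following helper
(not part of any krb5 or GSSAPI API; the function name and error convention
are this example's own) assembles the \sphinxstylestrong{gss\_import\_sec\_context}
wrapping from a concatenated OID and a mechanism token:
\begin{sphinxVerbatim}
/* Illustrative helper only: build the mechglue gss_import_sec_context
 * wrapping described above -- a four-byte OID length in big-endian order,
 * the concatenated OID, then the mechanism token.  Caller frees *out. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static int
wrap_sec_context_token(const uint8_t *oid, size_t oid_len,
                       const uint8_t *mech_token, size_t token_len,
                       uint8_t **out, size_t *out_len)
{
    uint8_t *p;

    *out_len = 4 + oid_len + token_len;
    p = *out = malloc(*out_len);
    if (p == NULL)
        return -1;
    p[0] = (oid_len >> 24) & 0xFF;    /* four-byte OID length, big-endian */
    p[1] = (oid_len >> 16) & 0xFF;
    p[2] = (oid_len >> 8) & 0xFF;
    p[3] = oid_len & 0xFF;
    memcpy(p + 4, oid, oid_len);                    /* concatenated OID */
    memcpy(p + 4 + oid_len, mech_token, token_len); /* mechanism token */
    return 0;
}
\end{sphinxVerbatim}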
\section{Internal pluggable interfaces}
\label{\detokenize{plugindev/internal::doc}}\label{\detokenize{plugindev/internal:internal-pluggable-interfaces}}
Following are brief discussions of pluggable interfaces which have not
yet been made public. These interfaces are functional, but the
interfaces are likely to change in incompatible ways from release to
release. In some cases, it may be necessary to copy header files from
the krb5 source tree to use an internal interface. Use these with
care, and expect to need to update your modules for each new release
of MIT krb5.
\subsection{Kerberos database interface (KDB)}
\label{\detokenize{plugindev/internal:kerberos-database-interface-kdb}}
A KDB module implements a database back end for KDC principal and
policy information, and can also control many aspects of KDC behavior.
For a full description of the interface, see the header file
\sphinxcode{\textless{}kdb.h\textgreater{}}.
The KDB pluggable interface is often referred to as the DAL (Database
Access Layer).
\subsection{Authorization data interface (authdata)}
\label{\detokenize{plugindev/internal:authorization-data-interface-authdata}}
The authdata interface allows a module to provide (from the KDC) or
consume (in application servers) authorization data of types beyond
those handled by the core MIT krb5 code base. The interface is
defined in the header file \sphinxcode{\textless{}krb5/authdata\_plugin.h\textgreater{}}, which is not
installed by the build.
\section{PKINIT certificate authorization interface (certauth)}
\label{\detokenize{plugindev/certauth:certauth-plugin}}\label{\detokenize{plugindev/certauth::doc}}\label{\detokenize{plugindev/certauth:pkinit-certificate-authorization-interface-certauth}}
The certauth interface was first introduced in release 1.16. It
allows customization of the X.509 certificate attribute requirements
placed on certificates used by PKINIT enabled clients. For a detailed
description of the certauth interface, see the header file
\sphinxcode{\textless{}krb5/certauth\_plugin.h\textgreater{}}.
A certauth module implements the \sphinxstylestrong{authorize} method to determine
whether a client’s certificate is authorized to authenticate a client
principal. \sphinxstylestrong{authorize} receives the DER-encoded certificate, the
requested client principal, and a pointer to the client’s
krb5\_db\_entry (for modules that link against libkdb5). It returns the
authorization status and optionally outputs a list of authentication
indicator strings to be added to the ticket. A module must use its
own internal or library-provided ASN.1 certificate decoder.
A module can optionally create and destroy module data with the
\sphinxstylestrong{init} and \sphinxstylestrong{fini} methods. Module data objects last for the
lifetime of the KDC process.
If a module allocates and returns a list of authentication indicators
from \sphinxstylestrong{authorize}, it must also implement the \sphinxstylestrong{free\_ind} method
to free the list.
\section{KDC policy interface (kdcpolicy)}
\label{\detokenize{plugindev/kdcpolicy:kdcpolicy-plugin}}\label{\detokenize{plugindev/kdcpolicy::doc}}\label{\detokenize{plugindev/kdcpolicy:kdc-policy-interface-kdcpolicy}}
The kdcpolicy interface was first introduced in release 1.16. It
allows modules to veto otherwise valid AS and TGS requests or restrict
the lifetime and renew time of the resulting ticket. For a detailed
description of the kdcpolicy interface, see the header file
\sphinxcode{\textless{}krb5/kdcpolicy\_plugin.h\textgreater{}}.
The optional \sphinxstylestrong{check\_as} and \sphinxstylestrong{check\_tgs} functions allow the module
to perform access control. Additionally, a module can create and
destroy module data with the \sphinxstylestrong{init} and \sphinxstylestrong{fini} methods. Module
data objects last for the lifetime of the KDC process, and are
provided to all other methods. The data has the type
krb5\_kdcpolicy\_moddata, which should be cast to the appropriate
internal type.
kdcpolicy modules can optionally inspect principal entries. To do
this, the module must also include \sphinxcode{\textless{}kdb.h\textgreater{}} to gain access to the
principal entry structure definition. As the KDB interface is
explicitly not as stable as other public interfaces, modules which do
this may not retain compatibility across releases.
\renewcommand{\indexname}{Index}
\printindex
\end{document}
%%
%% This is the documentation of DPcircling package.
%% (Last Update: 2020/04/15)
%% Maintained on GitHub:
%% https://github.com/domperor/DPcircling
%%
%% Copyright (c) 2020 Oura M. (domperor)
%% Released under the MIT license
%% https://opensource.org/licenses/mit-license.php
%%
\documentclass[10pt]{article}
\usepackage[dvipdfmx]{graphicx}
\usepackage{geometry}
\usepackage{url}
\geometry{a4paper,total={170mm,257mm},left=20mm,top=20mm}
\usepackage{DPcircling}
\title{DPcircling package v1.0}
\author{Oura M. (domperor)}
\date{\today}
\pagestyle{empty}
\begin{document}
\maketitle\thispagestyle{empty}
\section*{about this package}
This package provides 4 types of text decorations: \verb+\DPcircling+ \DPcircling{circle}, \verb+\DPrectangle+ \DPrectangle{rectangle}, \verb+\DPjagged+ \DPjagged{jagged rectangle}, and \verb+\DPfanshape+ \DPfanshape{fan-shape}. The baseline is adjusted properly according to the surroundings\footnote{Unless you cram something too tall into a small decoration box; then the box will break and the output will be a mess.}. You can use these decorations both in text mode and in math mode. You can specify \verb+line color+, \verb+line width+, \verb+width+, and \verb+height+ as option keys.
\
This package is maintained on GitHub: \url{https://github.com/domperor/DPcircling}
\subsection*{basic usage}
\begin{quote}
(in preamble): \verb+\usepackage[+\textit{\texttt{driver}}\verb+]{graphicx}\usepackage{DPcircling}+\footnote{The author personally uses dvipdfmx as the driver.}
(in your text): \verb+\DP*****[+\textit{\texttt{options}}\verb+]{+\textit{\texttt{content}}\verb+}+
\end{quote}
For instance, the code
\begin{quote}
\begin{verbatim}
It is a \DPjagged[line color=blue,line width=1.4pt]{\color{brown}great} opportunity.
\end{verbatim}
\end{quote}
\noindent gives the result:
\begin{quote}
It is a \DPjagged[line color=blue, line width= 1.4pt]{\color{brown}great} opportunity.
\end{quote}
\subsection*{required packages}
DPcircling requires the following packages: \verb+tikz+, \verb+keyval+, \verb+graphicx+, and the ones that these packages require.
\subsection*{aliases}
\verb+\DPcircle+ and \verb+\DPcirc+ are the aliases of \verb+\DPcircling+.
\verb+\DPrect+ is the alias of \verb+\DPrectangle+.
\subsection*{changing default values}
The default values of \verb+line color+, \verb+line width+, \verb+width+, and \verb+height+ are \verb+black+, \verb+1pt+, as noted below (*), and \verb+2*(content height)+, respectively. You can modify these like this:
\begin{quote}
\begin{verbatim}
\DPcirclingDefault{line color=brown, line width=0.33pt, width=4em, height=5em}
\end{verbatim}
\end{quote}
\noindent (*) The default values of \verb+width+ are \verb+max{2*(content width), 2em}+ (circle) and
\verb:(content width)+2em: (else).
\section*{version history}
2020/04/15 v1.0
\end{document}
\documentclass[a4paper]{report}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{textcomp}
\usepackage{amsmath, amssymb}
%\usepackage{amsthm}
\usepackage{xcolor}
\usepackage{tcolorbox}
\tcbuselibrary{theorems}
\newtcbtheorem
[]% init options
{definition}% name
{Definition}% title
{%
colback=green!5,
colframe=green!35!black,
fonttitle=\bfseries,
}% options
{def}% prefix
\newtcbtheorem
[]% init options
{theorem}% name
{Theorem}% title
{%
colback=red!5,
colframe=red!35!black,
fonttitle=\bfseries,
}% options
{thrm}% prefix
\newtcbtheorem
[]% init options
{example}% name
{Example}% title
{%
colback=blue!5,
colframe=blue!35!black,
fonttitle=\bfseries,
}% options
{expl}% prefix
\title{Discrete Math Notes}
\author{Allen Li}
% figure support
\usepackage{graphicx}
\graphicspath{ {./images/} }
\usepackage{import}
\usepackage{xifthen}
\pdfminorversion=7
\usepackage{pdfpages}
\usepackage{transparent}
\newcommand{\incfig}[1]{%
\def\svgwidth{\columnwidth}
\import{./figures/}{#1.pdf_tex}
}
\DeclareMathSymbol{\lsim}{\mathord}{symbols}{"18}
\usepackage{hyperref}
\hypersetup{
colorlinks,
citecolor=black,
filecolor=black,
linkcolor=black,
urlcolor=black
}
\begin{document}
\maketitle
\newpage
\tableofcontents{}
\newpage
\chapter{Speaking Mathematically}
\section{Statements}
\subsection{The Three Types of Statements}
\begin{enumerate}
\item Universal Statement: Statement that applies "for all"
\begin{itemize}
\item Ex: For all real numbers $x$, $x^2 \ge 0$
\item Key Words: "For All"
\end{itemize}
\item Conditional Statement: If one thing is true then some other thing must be true
\begin{itemize}
\item Key Words: "If, then"
\end{itemize}
	\item Existential Statement: Statement that says there is at least one thing for which the property is true
\begin{itemize}
\item There exists an even prime number.
\item Key Words: "There exists"
\end{itemize}
We use these three statements in order to classify different statements.
\end{enumerate}
\subsection{Universal Conditional Statements}
Ex: For all real numbers, if $|x| > 1$, then $x^2 > 1$.
This statement is universal because it has the "for all", and conditional because it has "if".
\subsection{Universal Existential Statement}
Ex: For all integers $n$, there exists another integer $m$, such that $n < m$.
This statement is universal again because it has the "for all", and existential because of the "there exists".
\subsection{Existential Universal Statement}
Ex. There exists a positive integer that is less than or equal to every positive integer.
See above reasoning.
\section{Set Builder Notation}
A set is simply defined as a collection of objects. The size of a set is the number of unique elements in the set.
\subsection{Set Roster Notation}
\begin{itemize}
\item Example set: $A = \{a, b, c\}$
\item As shown, sets can include anything.
\end{itemize}
\subsection{Set Builder Notation}
\begin{itemize}
\item Example set: $A = \{ x \in \mathbb{R} \mid -1 \le x < 5 \}$
\item $0 \in A$, $-2 \not\in A$
	\item Empty Set: $\{\}$, also written $\emptyset$
	\item Universal Set: $U$, the set containing all objects under consideration; every other set under discussion is a subset of it.
\item Subsets: $A \subseteq B$ if for any element $a \in A$, $a \in B$
	\item By definition, $\emptyset \subseteq A$ for every set $A$
\item Proper subset: $A \subset B$ means all elements in A are also in B, and there exist elements in B that do not exist in A.
\item $\forall $: for all, $\exists $: there exists
\end{itemize}
In practice, set-roster notation often cannot be used: an infinite set (let alone an uncountable one such as $\mathbb{R}$) cannot be listed element by element, which is why set-builder notation is needed.
\subsection{Cartesian Products}
\begin{itemize}
\item $A \times B = \{(x, y) \mid x \in A, y \in B\}$
	\item The number of elements in $A \times B$ is the product of the set sizes: $|A \times B| = |A| \cdot |B|$
\item This was introduced in an effort to introduce ordered pairs (differentiation of $(a, b)$ and $(b, a)$ )
\item Because of this, $A \times B \neq B \times A$
\end{itemize}
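For example, if $A = \{1, 2\}$ and $B = \{a, b, c\}$, then \[
A \times B = \{(1,a), (1,b), (1,c), (2,a), (2,b), (2,c)\}
,\] which has $2 \cdot 3 = 6$ elements.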
\section{The Language of Relations and Functions}
\begin{enumerate}
\item Relations on sets
\begin{itemize}
\item Definition: a relation from set $A$ to set $B$ is a subset of $A \times B$. More formally, for some relation $x$, $x \subseteq A \times B$
		\item We write $x_1 \: R \: x_2$ ("$x_1$ is related to $x_2$ by $R$") provided that $(x_1, x_2)$ is in the relation $R$.
		\item It can be shown that a set with $n$ elements has $2^n$ subsets, so there are $2^{|A \times B|}$ possible relations from $A$ to $B$.
		\item The domain and codomain of a relation from $A$ to $B$ are simply $A$ and $B$ respectively.
\end{itemize}
\item Function from A to B
\begin{itemize}
		\item Definition: a function $F: A \to B$ is a relation from $A$ to $B$ such that $\forall x \in A, \exists y \in B \mid (x, y) \in F$
		\item Additionally, if $(x, y) \in F$ and $(x, z) \in F$ then $y = z$.
\item In other words, every element in $A$ must have exactly one distinct image.
\end{itemize}
\item Extra Function Terminology
\begin{itemize}
\item Squaring function: $x \mapsto x^2$
\item Successor function: $x \mapsto x + 1$
\item Constant function: $x \mapsto C$
\end{itemize}
\end{enumerate}
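For example, let $A = \{1, 2, 3\}$ and $B = \{a, b\}$. The relation $F = \{(1,a), (2,b), (3,a)\}$ is a function from $A$ to $B$, since every element of $A$ appears exactly once as a first coordinate. The relation $R = \{(1,a), (1,b), (2,a)\}$ is not a function: $1$ has two images and $3$ has none.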
\chapter{The Logic of Compound Statements}
\section{Logical Form and Logical Equivalence}
An argument is a sequence of statements aimed at demonstrating the truth of an assertion.
\begin{itemize}
\item The assertion at the end of the sequence is called the conclusion, and the preceding statements are called premises.
	\item We commonly use the variables $p$, $q$, and $r$ to represent component sentences.
\end{itemize}
A statement (or proposition) is a sentence that is true or false but not both.
Further terminology:
\begin{itemize}
\item The symbol $\lsim$ denotes not, $\land$ denotes and, and $\lor$ denotes or.
\item And, or, and not can easily be represented using truth tables.
\item Set up the truth table such that each group of columns builds off of the last.
\item Two statement forms are logically equivalent iff they have identical truth table outputs. This is represented as $P \equiv Q$.
\item Tautology \textbf{t}: A statement form that is always true regardless of the truth values substituted
\item Contradiction \textbf{c}: opposite of tautology
\item $p \land \textbf{t} \equiv p$, $p \lor \textbf{c} \equiv p$.
\item Absorption laws: $p \lor (p \land q) \equiv p$, $p \land (p \lor q) \equiv p$
\item See page 35 of the Discrete Math Textbook for a complete table on logical equivalences.
\end{itemize}
De Morgan's Laws: $\lsim(p \land q) \equiv \lsim p \lor \lsim q$, $\lsim (p \lor q) \equiv \lsim p \land \lsim q$.
Caution: De Morgan's Laws can only be used between complete statements on each side.
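For example, the statement $-1 < x \le 4$ means "$-1 < x$ and $x \le 4$", so by De Morgan's Laws its negation is "$x \le -1$ or $x > 4$".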
\section{Conditional Statements}
Logic flows from a hypothesis to a conclusion. The aim is to frame it as an "if, then". Given hypothesis $p$ and conclusion $q$, we represent this as \[
p \rightarrow q
.\]
The only combination of circumstances in which you would call a conditional sentence false occurs when the hypothesis is true and the conclusion is false. If the statement is true because the hypothesis is false, this is called vacuously true.
Note that in order of operations, $\to $ is performed last, and also note that \[
p \to q \equiv \lsim p \lor q
.\]
Helpful Information:
\begin{itemize}
\item The negation of "if p then q" is logically equivalent to "p and not q".
\item $p \to q \equiv \lsim q \to \lsim p$.
\item While the converse is not equivalent to the statement, the converse is logically equivalent to the inverse.
\item $p$ \textbf{only if} $q$ means "if $p$ then $q$".
\item Biconditional: "if and only if" is true if both statements have the same value.
\end{itemize}
\subsection{Necessary and Sufficient Conditions}
\begin{itemize}
\item $r$ is a sufficient condition for $s$ means "if $r$ then $s$"
\item $r$ is a necessary condition for $s$ means "if not $r$ then not $s$"
\end{itemize}
\section{Valid and Invalid Arguments}
\subsection{Arguments Terminology}
\begin{itemize}
\item \textbf{Argument}: sequence of statements
\item All statements in an argument except for the final one are called premises
\item The final statement is called a conclusion.
\item An argument form being valid means that if the resulting premises are all true, the conclusion is true.
	\item \textbf{Critical row}: row of the truth table in which all premises are true. If the conclusion in every critical row is true, the argument form is valid.
\item \textbf{Syllogism}: argument form consisting of two premises and a conclusion. The first and second premises in a syllogism are the major and minor premises respectively.
\item \textbf{Modus ponens}: If $p$ then $q$. $p$. $\therefore q$. Modus ponens means "method of affirming" in Latin.
\item \textbf{Modus tollens}: If $p$ then $q$. $\lsim q, \therefore \lsim p$. Modus tollens means "method of denying" in Latin.
\end{itemize}
\subsection{Rule of Inference}
Rule of Inference: a form of argument that is valid. Below are some helpful Rules of Inference, followed by a short example that combines them.
\begin{itemize}
\item \textbf{Generalization}: $p \therefore p \lor q$
\item \textbf{Specialization}: $p \land q \therefore p$
\item \textbf{Elimination}: $p \lor q, \lsim q \therefore p$
\item \textbf{Transitivity}: $p \to q, q \to r \therefore p \to r$
\item \textbf{Proof by casework}: $p \lor q, p \to r, q \to r \therefore r$
\end{itemize}
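As a short illustration, suppose we are given the premises $p \to q$, $q \to r$, and $\lsim r$. By transitivity, $p \to r$; by modus tollens applied to $p \to r$ and $\lsim r$, we may conclude $\lsim p$.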
\subsection{Fallacies}
\begin{itemize}
\item Using ambiguous premises
	\item Assuming what is to be proved (begging the question)
\item Jumping to a conclusion
\item Assuming the converse to be true.
\item Assuming the inverse to be true.
\item An argument is sound if and only if it is valid and all its premises are true.
\end{itemize}
\subsection{Contradiction Rule}
$\lsim p \to c \therefore p$, in other words, if you can prove that an assumption leads to a contradiction, then you have proved that the assumption is false.
\chapter{The Logic of Quantified Statements}
\section{Predicates and Quantified Statements I}
\subsection{Terminology}
\begin{itemize}
\item \textbf{Predicate Calculus}: Symbolic analysis of predicates and quantified statements
\item \textbf{Statement Calculus}: Symbolic analysis of ordinary compound statements (see Sections 2.1-2.3)
\item \textbf{Predicate}: part of the sentence from which the subject has been removed. A predicate is a predicate symbol together with predicate variables.
Predicates become statements when specific values are substituted for the variables.
\begin{itemize}
\item Note that predicates can be obtained by removing some or all the nouns from a statement.
\end{itemize}
\item \textbf{Predicate symbols}: variables to stand for predicates that act as functions that take in predicate variables
\item \textbf{Domain of a predicate variable}: set of all values that may be substituted in place of the variable.
\item \textbf{Truth set of $P(x)$}: set of all elements of $D$ that make $P(x)$ true when they are substituted for $x$.
\[
\{x \in D \mid P(x)\}
.\]
\end{itemize}
\subsection{The Universal Quantifier: $\forall $}
\begin{definition}{Universal Statement}{label}
Let $Q(x)$ be a predicate and $D$ the domain of $x$. A \textbf{universal statement} (statement
of the form $\forall x \in D, Q(x)$) is true if, and only if, $Q(x)$ is true for every $x$ in $D$.
It is false if there is a counterexample.
\end{definition}
One way to show that a universal statement over a finite domain is true is to check the predicate for each element of the domain, one at a time. This is called the \textbf{method of exhaustion}.
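For example, to show that every even integer $n$ with $4 \le n \le 10$ can be written as a sum of two prime numbers, simply check each one: $4 = 2 + 2$, $6 = 3 + 3$, $8 = 3 + 5$, and $10 = 5 + 5$.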
\subsection{The Existential Quantifier: $\exists $}
\begin{definition}{Existential Statement}{label}
Let $Q(x)$ be a predicate and $D$ the domain of $x$. An \textbf{existential statement} (statement
of the form $\exists x \in D, Q(x)$) is true if, and only if, $Q(x)$ is true for at least one $x$ in $D$.
It is false if and only if $Q(x)$ is false for all $x$ in $D$.
\end{definition}
\subsection{Equivalent Forms of Universal and Existential Statements}
Observe that ($\forall x \in U$, if $P(x)$ then $Q(x)$) can always be rewritten as
$\forall x \in D, Q(x)$ by narrowing $U$ to be the domain $D$ where $D$ consists of all values
$x$ that make $P(x)$ true.
\subsection{Implicit Quantification}
\begin{definition}{Implication Notation}{label}
Let $P(x)$ and $Q(x)$ be predicates.
\begin{itemize}
\item $P(x) \implies Q(x) \equiv \forall x, P(x) \to Q(x)$, meaning every element in the truth
set of $P(x)$ is also in the truth set of $Q(x)$.
\item $P(x) \Leftrightarrow Q(x) \equiv \forall x, P(x) \leftrightarrow Q(x)$, meaning the two
predicates have identical truth sets.
\end{itemize}
\end{definition}
\section{Predicates and Quantified Statements II}
This section contains rules for negating quantified statements and additional extensions to quantified statements.
\begin{theorem}{Negation of a Universal Statement}{label}
\[
\lsim (\forall x \in D, Q(x)) \equiv \exists x \in D \mid \lsim Q(x)
.\]
In other words, the negation of a universal statement is that there exists a counterexample.
\end{theorem}
\begin{theorem}{Negation of an Existential Statement}{label}
\[
\lsim (\exists x \in D \mid Q(x)) \equiv \forall x \in D, \lsim Q(x)
.\]
In other words, the negation of an existential statement is the universal statement "none are" or "all are not".
\end{theorem}
Note that using the Negation of a Universal Statement theorem, we can find the negation of a universal conditional statement by
negating the quantifier and the conditional "separately".
\[
\lsim (\forall x, P(x) \to Q(x)) \equiv \exists x \mid P(x) \land \lsim Q(x)
.\]
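For example, the negation of "for all integers $n$, if $n$ is prime then $n$ is odd" is "there exists an integer $n$ such that $n$ is prime and $n$ is not odd"; the witness $n = 2$ shows that the negation is true (and hence the original statement is false).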
\subsection{Relation among $\forall ,\exists ,\land, \lor$}
Note that, for a finite domain $D = \{x_1, x_2, \ldots, x_n\}$, \[
\forall x \in D, Q(x) \equiv Q(x_1) \land Q(x_2) \land \ldots \land Q(x_n)
.\]
Similarly, \[
\exists x \in D \mid Q(x) \equiv Q(x_1) \lor Q(x_2) \lor \ldots \lor Q(x_n)
.\]
Using this, we can easily prove the above two theorems using De Morgan's Laws.
Additionally, note that contrapositives, converses, and inverses extend to universal conditional statements as well, and there is no need
to flip the quantifier when finding these.
\subsection{Necessary and Sufficient Conditions}
\begin{definition}{Sufficient/Necessary Conditions}{label}
\begin{itemize}
\item "$\forall x, r(x)$ is a \textbf{sufficient condition} for $s(x)$" means "$\forall x, r(x) \to s(x)$".
\item "$\forall x, r(x)$ is a \textbf{necessary condition} for $s(x)$" means "$\forall x, \lsim r(x) \to \lsim s(x)$".
\end{itemize}
\end{definition}
\section{Statements with Multiple Quantifiers}
When there are multiple quantifiers, we perform the quantifiers in the order that they are stated. In terms of coding, it may be helpful to think
of the first quantifier as the outer loop and the second quantifier as the inner loop in a case with 2 quantifiers.
Amazingly, the order of the quantifiers only matters when a $\forall $ and an $\exists $ are interchanged; swapping two quantifiers of the same type does not change the meaning.
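For example, $\forall x \in \mathbb{R}, \exists y \in \mathbb{R} \mid y > x$ is true (take $y = x + 1$), but $\exists y \in \mathbb{R} \mid \forall x \in \mathbb{R}, y > x$ is false, since no real number is greater than every real number (in particular, not greater than itself).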
\subsection{Negations of Multiply-Quantified Statements}
We simply negate the statement from left to right.
\begin{align}
\lsim (\forall x \in D, \exists y \in E \mid P(x, y)) \\
\equiv \exists x \in D \mid \lsim (\exists y \in E \mid P(x, y)) \\
\equiv \exists x \in D \mid \forall y \in E, \lsim P(x, y)
\end{align}
\section{Arguments with Quantified Statements}
\begin{definition}{Rule of Universal Instantiation}{label}
If some property is true of \emph{everything} in a set, then it is true of \emph{any particular} thing in the set.
\end{definition}
\subsection{Universal Modus Ponens/Tollens}
The Rule of Universal Instantiation can be combined with Modus Ponens: from a universal conditional statement together with a particular element that satisfies its hypothesis, we may conclude that the element satisfies its conclusion.
Forms of Modus Ponens and Modus Tollens can be found in Section 2.3.1.
\subsection{Disc Diagrams}
They're basically inheritance diagrams. Since I don't know how to draw with LaTeX yet, just check 3.4.5 of the book for more info.
\subsection{Universal Transitivity}
\begin{definition}{Universal Transitivity}{label}
	If $\forall x, P(x) \to Q(x)$ and $\forall x, Q(x) \to R(x)$, then $\forall x, P(x) \to R(x)$.
\end{definition}
\chapter{Elementary Number Theory and Methods of Proof}
\section{Direct Proof and Counterexample I: Introduction}
\begin{definition}{Even and Odd}{label}
	An integer $n$ is \textbf{even} if, and only if, $n$ equals twice some integer. An integer $n$ is \textbf{odd} if, and only if,
$n$ equals twice some integer plus $1$.
\end{definition}
\begin{definition}{Prime and Composite}{label}
An integer $n$ is \textbf{prime} if, and only if, $n > 1$ and for all positive integers $r$ and $s$, if $n=rs$ then either
$r$ or $s$ equals $n$. An integer $n$ is \textbf{composite} if, and only if, $n > 1$ and $n = rs$ for some integers
$r$ and $s$ with $1 < r < n$ and $1 < s < n$.
\end{definition}
\subsection{Proving Existential Statements}
There are two ways to prove an existential statement: exhibit one value that satisfies the predicate,
or give a set of directions for finding such a value. These methods are called
\textbf{constructive proofs of existence}. A \textbf{nonconstructive proof of existence} instead shows
that the existence of such a value is guaranteed by an axiom or a previously proved theorem, or shows
that the nonexistence of such a value would lead to a contradiction.
\subsection{Disproving Universal Statements}
\begin{definition}{Disproof by Counterexample}{label}
To disprove a universal statement of the form $\forall x \in D, P(x) \to Q(x)$, simply find an
$x$ for which $P(x)$ is true and $Q(x)$ is false.
\end{definition}
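For example, the statement "for all real numbers $x$, $x^2 \ge x$" is false: the counterexample $x = \frac{1}{2}$ is a real number, yet $\frac{1}{4} < \frac{1}{2}$.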
\subsection{Proving Universal Statements}
The \textbf{Method of Exhaustion}, although impractical, can work for small domains. For more general
cases, we use
\begin{definition}{Method of Generalizing from the Generic Particular}{label}
To show that every element of a set satisfies a certain property, show that a particular but
	arbitrarily chosen $x$ satisfies the property. When using this method on a universal conditional,
this is known as the \textbf{method of direct proof}.
\end{definition}
\begin{definition}{Existential Instantiation}{label}
If the existence of a certain kind of object is assumed or has been deduced then it can be
given a name, as long as that name is not currently being used to denote something else.
\end{definition}
\subsection{Proof Guidelines}
\begin{enumerate}
\item Copy the statement of the theorem to be proved on your paper.
\item Clearly mark the beginning of your proof with the word \textbf{Proof}.
\item Make your proof self-contained.
\item Write your proof in complete, grammatically correct sentences.
\item Keep your reader informed about the status of each statement in your proof.
\item Give a reason for each assertion in your proof.
\item Include the "little words and phrases" that make the logic of your arguments clear.
\item Display equations and inequalities.
\item Note: be careful with using the word if. Use because instead if the premise is not in doubt.
\end{enumerate}
\subsection{Disproving Existential Statements}
In order to prove that an existential statement is false, you simply have to prove that its negation
is true.
\section{Direct Proof and Counterexample II: Rational Numbers}
\begin{definition}{Rational Number}{label}
A real number is \textbf{rational} if, and only if, it can be expressed as a quotient of
two integers with a nonzero denominator. A real number that is not rational is \textbf{irrational}.
\end{definition}
\begin{theorem}{Rational Number Properties}{label}
\begin{itemize}
\item Every integer is a rational number.
\item The sum of any two rational numbers is rational.
\end{itemize}
\end{theorem}
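The second property has a short direct proof.
\emph{Proof (Brief).} Suppose $r$ and $s$ are rational, say $r = \frac{a}{b}$ and $s = \frac{c}{d}$ with $a, b, c, d$ integers and $b, d \neq 0$. Then \[
r + s = \frac{ad + bc}{bd}
,\] where $ad + bc$ and $bd$ are integers and $bd \neq 0$ (a product of nonzero integers is nonzero), so $r + s$ is rational.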
\begin{definition}{Corollary}{label}
A statement whose truth can be immediately deduced from a theorem that has already been proven.
\end{definition}
\section{Direct Proof and Counterexample III: Divisibility}
\begin{definition}{Divisibility}{label}
If $n$ and $d$ are integers and $d \neq 0$ then $n$ is \textbf{divisible by} $d$ if, and only
if, $n$ equals $d$ times some integer. The notation $d \mid n$ is read "$d$ divides $n$".
Symbolically, \[
d \mid n \leftrightarrow \exists k \in \mathbb{Z} \mid n = dk
.\]
It then follows that \[
	d \nmid n \leftrightarrow \forall k \in \mathbb{Z}, n \neq dk
.\]
\end{definition}
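For example, $8 \mid 104$ because $104 = 8 \cdot 13$, while $8 \nmid 100$ because $100 = 8 \cdot 12 + 4$, so no integer $k$ satisfies $100 = 8k$.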
\subsection{The Unique Factorization of Integers Theorem}
Because of its importance, this theorem is also called the \emph{fundamental theorem of arithmetic}.
It states that any integer greater than 1 either is prime or can be written as a product of
prime numbers in a way that is unique. Formally,
\begin{theorem}{Unique Factorization of Integers}{label}
Given any integer $n > 1$ there exists a positive integer $k$, distinct prime numbers
$p_1, p_2, \ldots p_k$, and positive integers $e_1,e_2, \ldots e_k$ such that \[
n = p_1^{e_1} p_2^{e_2} \ldots p_k^{e_k}
.\]
	When the values of $p$ are ordered in nondecreasing order, the above is known as the
\textbf{standard factored form} of $n$.
\end{theorem}
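For example, the standard factored form of $3300$ is \[
3300 = 2^2 \cdot 3 \cdot 5^2 \cdot 11
.\]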
\section{Direct Proof and Counterexample IV: Division into Cases and the Quotient-Remainder Theorem}
\begin{theorem}{The Quotient-Remainder Theorem}{label}
Given any integer $n$ and positive integer $d$, there exist unique integers $q$ and $r$ such that
\[
n = dq + r, 0 \le r < d
.\]
Note that if $n$ is negative, the remainder is still positive.
\end{theorem}
\subsection{div and mod}
From the quotient-remainder theorem, $n \text{ div } d$ is the quotient $q$ and $n \bmod d$ is the remainder $r$. Note that \[
n \bmod d = n - d \cdot (n \text{ div } d)
.\]
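For example, $54 \text{ div } 4 = 13$ and $54 \bmod 4 = 2$ since $54 = 4 \cdot 13 + 2$; likewise $(-54) \text{ div } 4 = -14$ and $(-54) \bmod 4 = 2$ since $-54 = 4 \cdot (-14) + 2$, illustrating that the remainder stays nonnegative even when $n$ is negative.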
\subsection{Method of Proof by Division into Cases}
To prove a statement of the form "If $A_1$ or $A_2$ or $A_3$ or $\ldots$ or $A_n$, then $C$", prove
that $A_i$ implies $C$ for each $1 \le i \le n$. This is useful when a statement can be easily split
into multiple statements that fully encompass the original statement.
\begin{example}{}{label}
Prove that the square of any odd integer has the form $8m + 1$ for some integer $m$.
\end{example}
\emph{Proof (Brief).} Suppose $n$ is an odd integer. By the quotient remainder theorem and using the fact
that the integer is odd, we can split the possible forms of $n$ into two cases: $4q+1$ or $4q+3$ for
some integer $q$. It can be proven through substitution that these two cases simplify to the form
$n^2=8m+1$.
\subsection{Absolute Value and the Triangle Inequality}
\begin{definition}{Absolute Value}{label}
For any real number $x$, the \textbf{absolute value of x} is defined as follows:
\begin{equation}
|x| =
\begin{cases}
			x & \text{if } x \ge 0 \\
			-x & \text{if } x < 0
\end{cases}
\end{equation}
\end{definition}
\begin{theorem}{Triangle Inequality}{label}
For all real numbers $x$ and $y$, $|x + y| \le |x| + |y|$.
\end{theorem}
\section{Indirect Argument: Contradiction and Contraposition}
Proof by contradiction is extremely intuitive and exactly what it sounds like. Assume that the negation
is true, and show that this assumption leads to a contradiction. Argument by contrapositive is equally
intuitive given the fact that a statement is logically equivalent to its contrapositive. Note that
proof by contraposition can only be used on universal conditionals.
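As a quick illustration of contraposition: to prove that for all integers $n$, if $n^2$ is even then $n$ is even, prove the contrapositive instead, namely that if $n$ is odd then $n^2$ is odd. Suppose $n$ is odd, say $n = 2k + 1$ for some integer $k$. Then $n^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1$, which is odd. Since the contrapositive is true, the original statement is true.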
\chapter{Sequences, Mathematical Induction, and Recursion}
The proof chapter is finally over! I did not like that chapter D:
\section{Sequences}
\begin{definition}{Sequence}{label}
A \textbf{sequence} is a function whose domain is either all the integers between two given
integers or all the integers greater than or equal to a given integer.
\end{definition}
The first term of a sequence is known as the \textbf{initial term}, and the last term is known as
the \textbf{final term}. An \textbf{explicit formula} or \textbf{general formula} is a rule
that shows how the values of $a_k$ depend on $k$.
\subsection{Summation Notation}
\begin{definition}{Summation Notation}{label}
For integers $m$ and $n$ where $m \le n$, \[
\sum_{k=m}^{n} a_k = a_m + a_{m+1} + \ldots + a_n
.\]
We call $k$ the \textbf{index} of the summation, $m$ the \textbf{lower limit} of the summation,
and $n$ the \textbf{upper limit} of the summation.
A recursive definition of summation notation: \[
\sum_{k=m}^{m} a_k = a_m \quad \text{and} \quad \sum_{k=m}^{n} a_k = \sum_{k=m}^{n-1} a_k + a_n
.\]
\end{definition}
\subsection{Product Notation}
\begin{definition}{Product Notation}{label}
For integers $m$ and $n$ where $m \le n$, \[
\prod_{k=m}^{n} a_k = a_m \cdot a_{m+1} \cdot \ldots \cdot a_n
.\]
The recursive definition of product notation is in essence the same idea as the one in summation
notation.
\end{definition}
\subsection{Properties of Summations and Products}
\begin{theorem}{Properties}{label}
\begin{enumerate}
\item $\sum_{k=m}^{n} a_k + \sum_{k=m}^{n} b_k = \sum_{k=m}^{n} (a_k+b_k)$
\item $c \cdot \sum_{k=m}^{n} a_k = \sum_{k=m}^{n} c \cdot a_k$
\item $\left(\prod_{k=m}^{n} a_k\right) \cdot \left(\prod_{k=m}^{n} b_k\right) = \prod_{k=m}^{n} (a_k \cdot b_k)$
\end{enumerate}
\end{theorem}
Substituting a new variable for $k$ is straightforward: compute the new limits by setting $k$ equal to
each of the old limits, then rewrite the summand by expressing $k$ in terms of the new variable.
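For example, substituting $j = k-1$ (so that $k = j+1$) gives \[
\sum_{k=1}^{n} \frac{k}{k+1} = \sum_{j=0}^{n-1} \frac{j+1}{j+2}
.\]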
\section{Mathematical Induction}
The general structure of mathematical induction mirrors a line of thinking somewhat like a domino effect:
if we can show that $P(a)$ is true for some integer $a$, and that for all integers $k \ge a$,
the truth of $P(k)$ implies the truth of $P(k+1)$, then the statement "for all integers $n \ge a$, $P(n)$" is true.
This is known as the \textbf{Principle of Mathematical Induction}. Showing that $P(a)$ is true
is known as the \textbf{basis step}, and proving that the truth of $P(k+1)$ follows from the
truth of $P(k)$ is known as the \textbf{inductive step}.
\begin{example}{Sum of the First $n$ Integers}{label}
For all integers $n \ge 1$, \[
1+2+\ldots+n=\frac{n(n+1)}{2}
.\]
\end{example}
\emph{Proof (Brief).} Let the property $P(n)$ be the given equation. It can be checked directly that $P(1)$ is true.
We then assume that $P(k)$ is true (this assumption is called the \textbf{inductive hypothesis}) and add $k+1$ to
both sides of the equation in $P(k)$; algebraic manipulation shows that the result is exactly $P(k+1)$.
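Concretely, the inductive step here is the computation \[
1+2+\ldots+k+(k+1) = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2}
,\]
which is exactly the statement $P(k+1)$.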
In general, use the inductive hypothesis to prove the inductive step.
\section{Strong Mathematical Induction}
\begin{definition}{Principle of Strong Mathematical Induction}{label}
Let $P(n)$ be a property that is defined for integers $n$, and let $a$ and $b$ be fixed integers
with $a \le b$. Suppose the following two statements are true:
\begin{enumerate}
\item $P(a), P(a+1), \ldots, P(b)$ are all true. (\textbf{basis step})
\item For any integer $k \ge b$, if $P(i)$ is true for all integers $i$ from $a$ through $k$,
then $P(k+1)$ is true. (\textbf{inductive step})
\end{enumerate}
Then the statement \[
\text{for all integers } n \ge a, P(n)
.\]
is true.
\end{definition}
\begin{example}{Number of Multiplications Needed to Multiply $n$ Numbers}{label}
Prove that for any integer $n \ge 1$, if $x_1, x_2, \ldots x_n$ are $n$ numbers, then no matter
how the parentheses are inserted into their product, the number of multiplications used
to compute the product is $n - 1$.
\end{example}
\emph{Proof (Brief).} $P(1)$ is evidently true, as it takes $0$ multiplications to multiply one number.
Using the inductive hypothesis that $P(i)$ is true for all integers $i$ from $1$ to $k$, we now
attempt to prove that $P(k+1)$ is also true. We do this by noting that the outermost multiplication splits the
$k+1$ numbers into two groups, with $l$ ($1 \le l \le k$) factors on the left and $r$ ($1 \le r \le k$) factors
on the right, where $l + r = k + 1$. By the inductive hypothesis, the total number of multiplications is
$(l-1)+(r-1)+1 = l+r-1 = (k+1)-1$, which proves the statement.
\begin{definition}{Well-Ordering Principle for the Integers}{label}
Let $S$ be a set of integers containing one or more integers all of which are greater than some fixed integer. Then $S$ has a least element.
\end{definition}
A consequence of the well-ordering principle is that any strictly decreasing sequence of nonnegative integers is finite.
(By the well-ordering principle such a sequence has a least element $r_k$, and $r_k$ must be its final element: if there
were a later term $r_{k+1}$, then $r_{k+1} < r_k$, contradicting the fact that $r_k$ is the least element.)
\section{Recursion}
Solving a problem recursively means to find a way to break it down into smaller subproblems each having the same form as the original problem.
\begin{example}{Tower of Hanoi}{label}
What is the least number of moves needed to transfer 64 golden disks from pole A to pole C, given that
there are three poles, all the disks are of different sizes, and at no point may a larger disk be placed
on top of a smaller one?
\end{example}
The key observation is that if we know the solution for $k-1$ disks, we can use it to
solve the problem for $k$ disks:
\begin{enumerate}
    \item Move the top $k-1$ disks from pole A to pole B.
    \item Move the remaining (largest) disk from pole A to pole C.
    \item Move the $k-1$ disks from pole B to pole C.
\end{enumerate}
It follows that we get the recurrence $m_k=2m_{k-1}+1$, where $m_k$ is the number of moves needed to move $k$ disks from
one pole to another.
An explicit formula for this recurrence can be found through iteration and using the formula for the sum of a geometric
sequence. From this, we get the formula $m_n = 2^n-1$.
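Iterating the recurrence (with $m_1 = 1$) and summing the resulting geometric sequence gives \[
m_k = 2m_{k-1}+1 = 4m_{k-2}+2+1 = \ldots = 2^{k-1}m_1 + (2^{k-2}+\ldots+2+1) = 2^{k-1} + (2^{k-1}-1) = 2^k - 1
.\]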
\subsection{Checking the Correctness of Explicit Formulas}
We can use mathematical induction to check the correctness of explicit formulas. For example, we can verify the formula for
the Tower of Hanoi by showing that it holds for 1 disk, and then showing that it holds for $k+1$ disks whenever it holds for $k$ disks.
Check your induction proof carefully to make sure that no mistakes were made, and that the recursive form as well as the
explicit form were both used.
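For the Tower of Hanoi formula, the basis step is $m_1 = 2^1 - 1 = 1$, and the inductive step is the computation \[
m_{k+1} = 2m_k + 1 = 2(2^k - 1) + 1 = 2^{k+1} - 1
,\]
which uses both the recursive form and the explicit form.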
\chapter{Set Theory}
All mathematical objects can be defined in terms of sets, and the language of set theory is used
in every mathematical subject.
\section{Definitions and the Element Method of Proof}
Sets, as defined earlier, are collections of objects, called elements. Using the language of predicate logic,
we can restate some earlier definitions more precisely.
\subsection{Subsets}
We can restate the definition of subset (and its negation) symbolically: \[
A \subseteq B \Leftrightarrow \forall x, x \in A \to x \in B
.\]
\[
A \not\subseteq B \Leftrightarrow \exists x \mid x \in A \land x \not\in B
.\]
Recall that a \textbf{proper subset} is a subset that is not equal to its containing set.
We can prove for two sets $X$ and $Y$ that $X \subseteq Y$ by \textbf{supposing} that $x$ is a particular but
arbitrarily chosen element of $X$, and \textbf{showing} that $x$ is also an element of $Y$.
\begin{definition}{Set Equality}{label}
Set $A$ equals set $B$ if, and only if, $A \subseteq B$ and $B \subseteq A$.
\end{definition}
\subsection{Operations on Sets}
\begin{definition}{Operations}{label}
Let $A$ and $B$ be subsets of a universal set $U$.
\begin{enumerate}
\item The \textbf{union} of $A$ and $B$, denoted $A \cup B$, is the set of all elements
that are in at least one of $A$ or $B$.
\item The \textbf{intersection} of $A$ and $B$, denoted $A \cap B$, is the set
of all elements that are common to both $A$ and $B$.
\item The \textbf{difference} of $B$ minus $A$ (or \textbf{relative complement}
of $A$ in $B$), denoted $B - A$, is the set of all elements that are in
$B$ and not $A$.
\item The \textbf{complement} of $A$, denoted $A^c$, is the set of all elements
in $U$ that are not in $A$.
\end{enumerate}
\end{definition}
\includegraphics[scale=0.65]{setops}
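For example, take $U = \{1,2,3,4,5\}$, $A = \{1,2,3\}$, and $B = \{3,4\}$. Then \[
A \cup B = \{1,2,3,4\}, \quad A \cap B = \{3\}, \quad B - A = \{4\}, \quad A^c = \{4,5\}
.\]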
\subsection{Disjoint Sets and Partitions}
A collection of sets is \textbf{mutually disjoint} if the intersection of every pair of distinct sets in the
collection is the empty set $\emptyset$.
\begin{definition}{Partition}{label}
A finite or infinite collection of nonempty sets $\{A_1, A_2, A_3 \ldots\}$ is a \textbf{partition}
of a set $A$ if, and only if,
\begin{enumerate}
\item $A$ is the union of all the $A_i$
\item The sets $A_1,A_2,A_3 \ldots$ are mutually disjoint.
\end{enumerate}
\end{definition}
\begin{definition}{Power Sets}{label}
Given a set $A$, the \textbf{power set} of $A$, denoted $\wp(A)$, is the set of all subsets of $A$.
\end{definition}
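For example, $\wp(\{a,b\}) = \{\emptyset, \{a\}, \{b\}, \{a,b\}\}$, which has $2^2 = 4$ elements.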
\begin{definition}{Cartesian Product}{label}
In general, \[
A_1 \times A_2 \times \ldots \times A_n = \{(a_1, a_2, \ldots , a_n) \mid a_1 \in A_1, a_2 \in A_2, \ldots , a_n \in A_n \}
.\]
Note that $A_1 \times A_2 \times A_3$ is not quite the same thing as $(A_1 \times A_2) \times A_3$:
the first consists of ordered triples, while the second consists of ordered pairs whose first
components are themselves ordered pairs.
\end{definition}
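For example, if $A_1 = \{1,2\}$, $A_2 = \{a\}$, and $A_3 = \{x\}$, then
$A_1 \times A_2 \times A_3 = \{(1,a,x),(2,a,x)\}$ is a set of ordered triples, while
$(A_1 \times A_2) \times A_3 = \{((1,a),x),((2,a),x)\}$ is a set of ordered pairs whose first
components are themselves ordered pairs.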
\section{Properties of Sets}
\begin{theorem}{Some Subset Relations}{label}
\begin{enumerate}
\item Inclusion of Intersection: $A \cap B \subseteq A$ and $A \cap B \subseteq B$
\item Inclusion in Union: $A \subseteq A \cup B$ and $B \subseteq A \cup B$
\item Transitive Property of Subsets: $A \subseteq B \land B \subseteq C \to A \subseteq C$
\end{enumerate}
To prove these relations, \emph{suppose} that $x$ is a particular but arbitrarily chosen element of the
set on the left of $\subseteq$ and show that $x$ is also an element of the set on the right.
\end{theorem}
\includegraphics[scale=0.65]{setids}
In general, to prove set equality, you prove that set $A$ is a subset of set $B$, and that
set $B$ is a subset of set $A$. To prove that a set $X$ is equal to the empty set
$\emptyset$, suppose $X$ has an element and derive a contradiction.
Finally, case analysis is helpful when dealing with unions: an element of $A \cup B$ lies in $A$ or in $B$,
and the two cases can be handled separately.
\section{Proving/Disproving Set Theorems}
When proving or disproving set theorems, it is often useful to picture the sets as a Venn diagram and to
formalize the argument by numbering the regions of the diagram. This way, we can easily arrive
at a counterexample, or convince ourselves that none exists and then write a formal proof.
\begin{theorem}{Number of Elements in a Power Set}{label}
	For all integers $n \ge 0$, if a set $X$ has $n$ elements, then the power set of $X$ has $2^n$ elements.
\end{theorem}
\emph{Proof (Brief).} We can prove this using mathematical induction. For the base case, note that
the power set of the empty set is $\{\emptyset\}$, which has $2^0 = 1$ element. For the inductive step, suppose every
set with $k$ elements has $2^k$ subsets, and let $X$ be a set with $k+1$ elements containing a distinguished element $z$.
The subsets of $X$ split into those that do not contain $z$ (the $2^k$ subsets of $X - \{z\}$) and those that do
(each obtained by adding $z$ to one of the former), so $X$ has $2 \cdot 2^k = 2^{k+1}$ subsets.
Algebraic proofs of set identities can be carried out using the laws shown in the figure in the previous section.
\chapter{Functions}
In this chapter we go more in depth into properties of functions and their composition.
\section{Functions Defined on General Sets}
\begin{definition}{Function}{label}
A \textbf{function} from a set $X$ to a set $Y$, denoted $f: X \to Y$, is a relation from
$X$, the \textbf{domain}, to $Y$, the \textbf{co-domain}, that satisfies two properties:
\begin{enumerate}
\item Every element in $X$ is related to some element in $Y$
\item No element in $X$ is related to more than one element in $Y$.
\end{enumerate}
	The set of all values of $f$ is called the \emph{range of $f$} or the \emph{image of $X$ under $f$}.
	If there exists some $x$ such that $f(x)=y$, then $x$ is called a \textbf{preimage
	(or inverse image) of $y$}.
	Two functions $F: X \to Y$ and $G: X \to Y$ are considered equal if, for all $x \in X$, $F(x) = G(x)$.
\end{definition}
\begin{definition}{Identity Function}{label}
	The identity function $I_X$ is the function from $X$ to $X$ defined by $I_X(x) = x$ for all $x \in X$.
\end{definition}
\begin{definition}{Logarithmic Function}{label}
	The logarithmic function with base $b$ (from $\mathbb{R}^{+}$ to $\mathbb{R}$) maps each positive real number $x$
	to the unique real number $y$ that satisfies $b^y=x$; we write $\log_b{x}=y$.
\end{definition}
\subsection{Well Defined Functions}
We say that a function is \textbf{not well defined} if it fails to satisfy at least one of the
requirements for being a function. A function being well defined really means that it qualifies
to be called a function.
\section{One-to-One and Onto, Inverse Functions}
\subsection{One-to-one}
A function is \textbf{one-to-one} (or \textbf{injective}) if, and only if, distinct inputs are mapped to
distinct outputs. Symbolically, $\forall x_1, x_2 \in X, f(x_1)=f(x_2) \to x_1 = x_2$.
To prove that $f$ is one-to-one, you \textbf{suppose} $x_1$ and $x_2$ are elements of $X$ such that
$f(x_1)=f(x_2)$, and \textbf{show} that $x_1=x_2$.
\subsection{Onto}
A function is \textbf{onto} (or \textbf{surjective}) if, and only if, the co-domain of the function
is equal to its image. Symbolically, $\forall y \in Y, \exists x \in X \mid f(x) = y$.
To prove that $f$ is onto, you \textbf{suppose} $y$ is in $Y$, and \textbf{show} that
there exists an element $x$ in $X$ such that $y = f(x)$.
\subsection{One-to-one correspondences and Inverse Functions}
\begin{definition}{Bijection}{label}
A \textbf{one-to-one correspondence} (or \textbf{bijection}) from a set $X$ to a set $Y$ is
a function that is both one-to-one and onto.
\end{definition}
\begin{theorem}{Inverse Functions}{label}
	Suppose $F: X \to Y$ is a one-to-one correspondence. Then there is a function $F^{-1}: Y \to X$
	defined by letting $F^{-1}(y)$ be the unique $x \in X$ such that $F(x)=y$. Note that $F^{-1}$ is also a one-to-one correspondence.
Additionally, note that \[
f^{-1}(b) = a \Leftrightarrow f(a) = b
.\]
\end{theorem}
A formula for an inverse function typically emerges while proving that the function $F$ is onto: the element
$x$ found for a given $y$ is exactly $F^{-1}(y)$.
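For example, consider $f: \mathbb{R} \to \mathbb{R}$ defined by $f(x) = 3x+2$. It is one-to-one because
$3x_1+2 = 3x_2+2$ implies $x_1 = x_2$, and it is onto because for any $y \in \mathbb{R}$ the element
$x = (y-2)/3$ satisfies $f(x) = y$. The onto argument also produces the inverse: $f^{-1}(y) = (y-2)/3$.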
\section{Composition of Functions}
Given two functions $f: X \to Y$ and $g: Y \to Z$, their composition $g \circ f: X \to Z$ is defined by
$(g \circ f)(x) = g(f(x))$ for all $x \in X$.
Two compositions are equivalent if they have the same output for every input.
\begin{theorem}{Composition of a Function with Its Inverse}{label}
$f^{-1} \circ f = I_X$ and $f \circ f^{-1} = I_Y$. This can be proved directly using the
definition of inverse.
\end{theorem}
\begin{theorem}{One-to-one/Onto Compositions}{label}
If $f: X \to Y$ and $g: Y \to Z$ are both one-to-one, then $g \circ f$ is one-to-one.
The same line of reasoning applies for onto functions.
\end{theorem}
\section{Cardinality and Sizes of Infinity}
\begin{definition}{Cardinality}{label}
Let $A$ and $B$ be any sets. \textbf{$A$ has the same cardinality as $B$} if, and only if, there is
	a one-to-one correspondence from $A$ to $B$. The relation of having the same cardinality is reflexive,
	symmetric, and transitive.
\end{definition}
Note that if a set has the same cardinality as a set that is countably infinite, then the set is
also countably infinite. Surprisingly, the set of all rational numbers is countably infinite.
\subsection{Larger Infinities}
The set of all real numbers is uncountable. This can be proved using the Cantor diagonalization process.
Note that any subset of a countable set is countable, and any set that contains an uncountable subset is uncountable.
\end{document}
\documentclass[preprint]{sigplanconf}
%% I am getting an error about too many math packages used!!!
%% I commented the ones we don't seem to be using.
\usepackage{graphicx}
%%\usepackage{longtable}
\usepackage{comment}
\usepackage{amsmath}
%%\usepackage{mdwlist}
%%\usepackage{txfonts}
\usepackage{xspace}
%%\usepackage{amstext}
\usepackage{amssymb}
\usepackage{stmaryrd}
\usepackage{proof}
\usepackage{multicol}
\usepackage[nodayofweek]{datetime}
\usepackage{etex}
\usepackage[all, cmtip]{xy}
\usepackage{xcolor}
\usepackage{listings}
\newcommand\hmmax{0} % default 3
\newcommand\bmmax{0} % default 4
\usepackage{bm}
\usepackage{cmll}
\newcommand{\fname}[1]{\ulcorner #1 \urcorner}
\newcommand{\fconame}[1]{\llcorner #1 \lrcorner}
%% \newtheorem{theorem}{Theorem}[section]
%% \newtheorem{lemma}[theorem]{Lemma}
%% \newtheorem{proposition}[theorem]{Proposition}
%% \newtheorem{corollary}[theorem]{Corollary}
\newcommand{\xcomment}[2]{\textbf{#1:~\textsl{#2}}}
\newcommand{\amr}[1]{\xcomment{Amr}{#1}}
\newcommand{\roshan}[1]{\xcomment{Roshan}{#1}}
\newcommand{\ie}{\textit{i.e.}\xspace}
\newcommand{\eg}{\textit{e.g.}\xspace}
\newcommand{\lcal}{\ensuremath{\lambda}-calculus\xspace}
\newcommand{\G}{\ensuremath{\mathcal{G}}\xspace}
\newcommand{\code}[1]{\lstinline[basicstyle=\small]{#1}\xspace}
\newcommand{\name}[1]{\code{#1}}
\def\newblock{}
\newenvironment{floatrule}
{\hrule width \hsize height .33pt \vspace{.5pc}}
{\par\addvspace{.5pc}}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{proposition}[theorem]{Proposition}
\newenvironment{proof}[1][Proof.]{\begin{trivlist}\item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}}
\newcommand{\arrow}[1]{\mathtt{#1}}
%subcode-inline{bnf-inline} name langRev
%! swap+ = \mathit{swap}^+
%! swap* = \mathit{swap}^*
%! dagger = ^{\dagger}
%! assocl+ = \mathit{assocl}^+
%! assocr+ = \mathit{assocr}^+
%! assocl* = \mathit{assocl}^*
%! assocr* = \mathit{assocr}^*
%! identr* = \mathit{uniti}
%! identl* = \mathit{unite}
%! dist = \mathit{distrib}
%! factor = \mathit{factor}
%! eta = \eta
%! eps = \epsilon
%! eta+ = \eta^+
%! eps+ = \epsilon^+
%! eta* = \eta^{\times}
%! eps* = \epsilon^{\times}
%! trace+ = trace^+
%! trace* = trace^{\times}
%! ^^^ = ^{-1}
%! (o) = \fatsemi
%! (;) = \fatsemi
%! (*) = \times
%! (+) = +
%! LeftP = L^+
%! RightP = R^+
%! LeftT = L^{\times}
%! RightT = R^{\times}
%! alpha = \alpha
%! bool = \textit{bool}
%! color = \textit{color}
%! Gr = G
%subcode-inline{bnf-inline} regex \{\{(((\}[^\}])|[^\}])*)\}\} name main include langRev
%! Gx = \Gamma^{\times}
%! G = \Gamma
%! [] = \Box
%! |-->* = \mapsto^{*}
%! |-->> = \mapsto_{\ggg}
%! |--> = \mapsto
%! <--| = \mapsfrom
%! |- = \vdash
%! <><> = \approx
%! ==> = \Longrightarrow
%! <== = \Longleftarrow
%! <=> = \Longleftrightarrow
%! <-> = \leftrightarrow
%! ~> = \leadsto
%! -o+ = \multimap^{+}
%! -o* = \multimap^{\times}
%! -o = \multimap
%! ::= = &::=&
%! /= = \neq
%! @@ = \mu
%! forall = \forall
%! exists = \exists
%! empty = \epsilon
%! langRev = \Pi
%! langRevT = \Pi^{o}
%! langRevEE = \Pi^{\eta\epsilon}
%! theseus = Theseus
%! sqrt(x) = \sqrt{#x}
%! surd(p,x) = \sqrt[#p]{#x}
%! inv(x) = \frac{1}{#x}
%! frac(x,y) = \frac{#x}{#y}
%! * = \times
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\conferenceinfo{ICFP'12}{}
\CopyrightYear{}
\copyrightdata{}
\titlebanner{}
\preprintfooter{}
\title{The Two Dualities of Computation: Negative and Fractional Types}
\authorinfo{Roshan P. James}
{Indiana University}
{[email protected]}
\authorinfo{Amr Sabry}
{Indiana University}
{[email protected]}
\maketitle
\begin{abstract}
Every functional programmer knows about sum and product types, {{a+b}} and
{{a*b}} respectively. Negative and fractional types, {{a-b}} and {{a/b}}
respectively, are much less known and their computational interpretation is
unfamiliar and often complicated. We show that in a programming model in
which information is preserved (such as the model introduced in our recent
paper on \emph{Information Effects}), these types have particularly natural
computational interpretations. Intuitively, values of negative types are
values that flow ``backwards'' to satisfy demands and values of fractional
types are values that impose constraints on their context. The combination
of these negative and fractional types enables greater flexibility in
programming by breaking global invariants into local ones that can be
autonomously satisfied by a subcomputation. Theoretically, these types give
rise to \emph{two} function spaces and to \emph{two} notions of
continuations, suggesting that the previously observed duality of
computation conflated two orthogonal notions: an additive duality that
corresponds to backtracking and a multiplicative duality that corresponds
to constraint propagation.
\end{abstract}
\category{D.3.1}{Formal Definitions and Theory}{}
\category{F.3.2}{Semantics of Programming Languages}{}
\category{F.3.3}{Studies of Program Constructs}{Type structure}
\terms
Languages, Theory
\keywords continuations, information flow, linear logic, logic programming,
quantum computing, reversible logic, symmetric monoidal categories, compact
closed categories.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
In a recent paper~\cite{infeffects}, we argued that, because they include
irreversible physical primitives, conventional abstract models of computation
have inadvertently included some \emph{implicit} computational effects which
we called \emph{information effects}. We then developed a pure reversible
model of computation that is obtained from the type isomorphisms and
categorical structures that underlie models of linear logic and quantum
computing and that treats information as a linear resource that can neither
be erased nor duplicated. In this paper, we show that our pure reversible
model unveils deeper and more elegant symmetries of computation than have
previously been reported. In particular, we expose two notions of duality of
computation: an additive duality and a multiplicative duality that give rise
to negative types and fractional types respectively. Although these types
have previously appeared in the literature (see Sec.~\ref{sec:related}), they
have typically appeared in the context of conventional languages with
information effects, which limited their appeal and obscured their properties.
\paragraph*{Negative Types.}
Consider the following algebraic manipulation relating a natural number {{a}}
to itself (ignoring the dotted line for a moment):
\begin{center}
\scalebox{1.1}{
\includegraphics{diagrams/thesis/algebra-wire1.pdf}
}
\end{center}
Although seemingly pointless, this algebraic proof corresponds, in our model,
to an isomorphism of type {{a <-> a}} with a non-trivial and interesting
computational interpretation. The witness for this isomorphism is a
computation that takes a value of type~{{a}}, say \$20.00, and eventually
produces another \$20.00 value as its output. As the semantics of
Sec.~\ref{sec:rat} formalizes, this computation flows along the dotted line
with the following intermediate steps:
\begin{itemize}
\item We start at line (0) with \$20.00;
\item We proceed to line (1) with the same \$20.00 but tagged as being
in the left summand of the sum type {{a+0}}; we indicate this value
as {{left 20}};
\item We continue to line (2) with the same value {{left 20}};
\item At line (3), as a result of re-association the tag on the \$20.00
changes to indicate that it is in the left-left summand, i.e., the value is
now {{left (left 20)}};
\item At line (4), we find ourselves needing to produce a value of type 0
which is impossible; this signals the beginning of a reverse execution
which sends us back to line (3) with a value {{left (right 20)}};
\item Execution continues in reverse to line (2) with the value
{{right (left 20)}};
\item At line (1) we find ourselves again facing an empty type so we reverse
execution again; we go to line (2) with a value {{right (right 20)}};
\item We proceed to line (3) and (4) with the value {{right 20}};
\item We finally reach line (5) with the value {{20}}.
\end{itemize}
The example illustrates that the empty type and negative types have a
computational interpretation related to continuations: negative types denote
values that backtrack to satisfy dependencies, or in other words act as debts
that are satisfied by the backward flow of information.
\paragraph*{Fractional Types.}
Consider a similar algebraic manipulation involving fractional types.
\begin{center}
\scalebox{1.1}{
\includegraphics{diagrams/thesis/algebra-wire2.pdf}
}
\end{center}
In the case of negatives, the dotted line indicates the flow of control
whereas for fractionals it indicates the flow of constraints. At the heart of
logic programming is the idea of variables that capture constraints. Hence it
is useful to trace the computation corresponding to the algebraic proof
above, with the analogy to logic variables in mind.
As before, the execution begins at line~(0) with the value {{20}}. At
line~(1) two values, {{20}} and {{()}}, flow forward. One can think of the
value {{()}} (of type {{1}}) as ``having a credit card.'' The credit card
isn't money, nor is it debt, but is the option to generate a credit-debt
constraint. At line~(2) we exercise this option and hence have three values:
the initial value {{20}} flowing from line (1) and two entangled values,
{{1/alpha}} and {{alpha}}. The {{alpha}} and {{1/alpha}} are unspecified
values, i.e., we don't yet know how much money we need to borrow, but we do
know that what is borrowed must be what is returned. Hence~{{alpha}} denotes
the presence of an unknown quantity and dually {{1/alpha}} should be thought
of as the absence of an unknown quantity. At line (3), the missing unknown
{{1/alpha}} is brought together with a value {{20}} and at line (4) we use
the {{20}} to satisfy the constraint {{1/alpha}}. In other words, this branch
of the computation succeeded in borrowing {{20}} which immediately
communicates the {{20}} to the rightmost branch.
Unlike with negative types, wherein only one value existed at a time and the
computation backtracked, here we have three values that \emph{exist at the
same time}. In other words, the computation with fractions is realized with
a schedule in which every value independently and concurrently proceeds
through its subcomputation. The example illustrates that fractional types
also have a computational interpretation that has some flavor of
continuations: the fractional types denote values ({{1/alpha}}) that
represent missing information that must be supplied in much the same sense
that continuations denote evaluation contexts with holes that must be filled.
There are at least four fundamental points about the examples above that must
be emphasized:
\begin{itemize}
\item As the examples illustrate, both negative types and fractional types
  correspond to ``debts'' but in different ways: negatives are satisfied by
backtracking and fractionals are satisfied by constraint propagation.
\item It would clearly be disastrous if debts could be deleted or
duplicated. This simple observation explains why these types are much
simpler and much more appealing in a framework where information is
guaranteed to be preserved. In previous work that used negative types (see
Sec.~\ref{sec:related}), complicated mechanisms are typically needed to
constrain the propagation and use of negative values because the
surrounding computational framework is, generally speaking, careless in its
treatment of information.
\item Each of the values {{-a + b}} and {{(1/a) * b}} can be viewed as a
function that asks for an {{a}} and then produces a {{b}}. When viewed as
functions, we write these types as {{a -o+ b}} and {{a -o* b}}
respectively. Alternatively we can view these values as first producing a
value of type {{b}} and then demanding an {{a}} and in that perspective
they correspond to delimited continuations. Evidently, as the discussion
above suggests, these two notions of functions are not the same at all and
should not be conflated. Sec.~\ref{sub:hof} discusses this point in detail.
\item The main reason credit card transactions are convenient is because they
disentangle the propagation of the resources (money) from the propagation
of the services. Not every transaction needs both the resources and
services to be brought together: it is sufficient to have a promise that
the demand for resources will be somehow satisfied, as long as the
infrastructure can be trusted with such promises. This idea that
dependencies can be freely decoupled and propagated can be a powerful
programming tool and we leverage this in the construction of a novel
SAT-solver (see Sec.~\ref{sec:sat-solver}).
\end{itemize}
\paragraph*{Contributions and Outline.}
To summarize, in a computational framework that guarantees that information
is preserved, negative and fractional types provide fascinating mechanisms in
which computations can be sliced and diced, decomposed and recomposed, run
forwards and backwards, in arbitrary ways. The remainder of the paper
formalizes these informal observations. Specifically our main contributions
are:
\begin{itemize}
\item We extend {{langRev}}, our reversible programming language of type
  isomorphisms~\cite{rc2011,infeffects} (reviewed in Sec.~\ref{sec:pi}), with a notion
of negative types, that satisfies the isomorphism {{a + (-a) <-> 0}}. The
semantics of this extension is expressed by having a \emph{dual} evaluator
that reverses the flow of execution for negative
values. (Sec.~\ref{sec:neg})
\item We independently extend {{langRev}} with a notion of fractional types,
that satisfies the isomorphism {{a * (1/a) <-> 1}}. The semantics of this
extension is expressed by introducing logic variables and a unification
mechanism to model and resolve the constraints introduced by the fractional
types. (Sec.~\ref{sec:frac})
\item We combine the above two extensions into a language, which we
call {{langRevEE}}, whose type system allows any rational number to
be used as a type. Moreover the types satisfy the same familiar and
intuitive isomorphisms that are satisfied in the mathematical field
of rational numbers. (Sec.~\ref{sec:rat})
\item We develop programming intuition and argue that negative and fractional
types ought to be part of the vocabulary of every
programmer. (Sec.~\ref{sec:prog})
\item We relate our notions of negative and fractional types to previous work
on continuations. Briefly, we argue that conventional continuations
conflate negative and fractional components. This observation allows us to
relate two apparently unrelated lines of work: the first pioneered by
Filinski~\cite{Filinski:1989:DCI:648332.755574} relating continuations to
negative types and the second~\cite{Bernardi:2010:CSL:1749618.1749689}
relating continuations to the fractional types of the Lambek-Grishin
calculus. (Sec.~\ref{sec:related})
\end{itemize}
\textbf{Note:} All the constructions, semantics, and examples in this paper
have been implemented and tested in Haskell. We will make the URL available
once the code is organized for better presentation.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The Core Reversible Language: {{langRev}} }
\label{sec:pi}
We review our reversible language {{langRev}}: the presentation in this
section differs from the one in our previous paper~\cite{infeffects} in two
aspects. First, we add the empty type {{0}} which is necessary to express the
additive duality. Second, instead of explaining evaluation using a natural
semantics, we give a small-step operational semantics that is more
appropriate for the connections with continuations explored in this paper.
%%%%%%%%%%%%%%%%%%%%
\subsection{Syntax and Types}
\label{sec:pi-syntax}
The set of types includes the empty type {{0}}, the unit type {{1}}, sum
types {{b1+b2}}, and product types {{b1*b2}}. The set of values {{v}}
includes {{()}} which is the only value of type {{1}}, {{left v}} and {{right v}}
which inject {{v}} into a sum type, and {{(v1,v2)}} which builds a
value of product type. There are no values of type {{0}}:
%subcode{bnf} include main
% value types, b ::= 0 | 1 | b + b | b * b
% values, v ::= () | left v | right v | (v, v)
The combinators of {{langRev}} are witnesses to the following type
isomorphisms:
%subcode{bnf} include main
%! columnStyle = r@{\hspace{-0.5pt}}c@{\hspace{-0.5pt}}l
%zeroe :& 0 + b <-> b &: zeroi
%swap+ :& b1 + b2 <-> b2 + b1 &: swap+
%assocl+ :& b1 + (b2 + b3) <-> (b1 + b2) + b3 &: assocr+
%identl* :& 1 * b <-> b &: identr*
%swap* :& b1 * b2 <-> b2 * b1 &: swap*
%assocl* :& b1 * (b2 * b3) <-> (b1 * b2) * b3 &: assocr*
%dist0 :& 0 * b <-> 0 &: factor0
%dist :&~ (b1 + b2) * b3 <-> (b1 * b3) + (b2 * b3)~ &: factor
Each line of the above table introduces one or two combinators that witness
the isomorphism in the middle. Collectively the isomorphisms state that the
structure {{(b,+,0,*,1)}} is a \emph{commutative semiring}, i.e., that each
of {{(b,+,0)}} and {{(b,*,1)}} is a commutative monoid and that
multiplication distributes over addition. The isomorphisms are extended to
form a congruence relation by adding the following constructors that witness
equivalence and compatible closure:
%subcode{proof} include main
%@ ~
%@@ id : b <-> b
%
%@ c : b1 <-> b2
%@@ sym c : b2 <-> b1
%
%@ c1 : b1 <-> b2
%@ c2 : b2 <-> b3
%@@ c1(;)c2 : b1 <-> b3
%---
%@ c1 : b1 <-> b3
%@ c2 : b2 <-> b4
%@@ c1 (+) c2 : b1 + b2 <-> b3 + b4
%
%@ c1 : b1 <-> b3
%@ c2 : b2 <-> b4
%@@ c1 (*) c2 : b1 * b2 <-> b3 * b4
To summarize, the syntax of {{langRev}} is given as follows.
\begin{definition}{(Syntax of {{langRev}})}
\label{def:langRev}
We collect our types, values, and combinators, to get the full language
definition.
%subcode{bnf} include main
% value types, b ::= 0 | 1 | b+b | b*b
% values, v ::= () | left v | right v | (v,v)
%
% comb.~types, t ::= b <-> b
% iso ::= zeroe | zeroi
% &|& swap+ | assocl+ | assocr+
% &|& identl* | identr*
% &|& swap* | assocl* | assocr*
% &|& dist0 | factor0 | dist | factor
% comb., c ::= iso | id | sym c | c (;) c | c (+) c | c (*) c
\end{definition}
\paragraph*{Adjoint.}
An important property of the language is that every combinator {{c}} has an
adjoint {{c{dagger}}} that reverses the action of {{c}}. This is evident by
construction for the primitive isomorphisms. For the closure combinators, the
adjoint is homomorphic except for the case of sequencing in which the order
is reversed, i.e., {{(c1 (;) c2){dagger} = (c2{dagger}) (;) (c1{dagger}) }}.
%%%%%%%%%%%%%%%
\subsection{Graphical Language}
The syntactic notation above is often obscure and hard to read. Following
the tradition established for monoidal
categories~\cite{springerlink:10.1007/978-3-642-12821-94}, we present a
graphical language that conveys the intuitive semantics of the language
(which is formalized in the next section).
The general idea of the graphical notation is that combinators are modeled by
``wiring diagrams'' or ``circuits'' and that values are modeled as
``particles'' or ``waves'' that may appear on the wires. Evaluation therefore
is modeled by the flow of waves and particles along the wires.
%% In fact, when talking about {{langRev}} circuits, we will often use the
%% words {{value}} and {{particle}} interchangeably.
\begin{itemize}
\item The simplest sort of diagram is the {{id : b <-> b}} combinator which
is simply represented as a wire labeled by its type {{b}}, as shown on the
left. In more complex diagrams, if the type of a wire is obvious from the
context, it may be omitted. When tracing a computation, one might imagine a
value {{v}} of type {{b}} on the wire, as shown on the right.
\begin{multicols}{2}
\begin{center}
\scalebox{0.95}{
%%subcode-line{pdfimage}[diagrams/thesis/b-wire.pdf]
\includegraphics{diagrams/thesis/b-wire.pdf}
}
\end{center}
\begin{center}
\scalebox{0.95}{
%%subcode-line{pdfimage}[diagrams/thesis/b-wire.pdf]
\includegraphics{diagrams/thesis/b-wire-value.pdf}
}
\end{center}
\end{multicols}
\item The product type {{b1*b2}} may be represented using either one wire
labeled {{b1*b2}} or two parallel wires labeled {{b1}} and {{b2}}. In the
case of products represented by a pair of wires, when tracing execution
using particles, one should think of one particle on each wire or
alternatively as in folklore in the literature on monoidal categories as a
``wave.''
\begin{multicols}{2}
\begin{center}
\scalebox{0.95}{
%%subcode-line{pdfimage}[diagrams/thesis/pair-one-wire.pdf]
\includegraphics{diagrams/thesis/product-one-wire.pdf}
}
\end{center}
\begin{center}
\scalebox{0.95}{
\includegraphics{diagrams/thesis/product-one-wire-value.pdf}
}
\end{center}
\end{multicols}
\begin{multicols}{2}
\begin{center}
\scalebox{0.95}{
%%%subcode-line{pdfimage}[diagrams/thesis/pair-of-wires.pdf]
\includegraphics{diagrams/thesis/product-two-wires.pdf}
}
\end{center}
\begin{center}
\scalebox{0.95}{
\includegraphics{diagrams/thesis/product-two-wires-value.pdf}
}
\end{center}
\end{multicols}
\item Sum types may similarly be represented by one wire or using
parallel wires with a {{+}} operator between them. When tracing the
execution of two additive wires, a value can reside on only one of the two
wires.
\begin{multicols}{2}
\begin{center}
\scalebox{0.95}{
%%subcode-line{pdfimage}[diagrams/thesis/sum-one-wire.pdf]
\includegraphics{diagrams/thesis/sum-one-wire.pdf}
}
\end{center}
\begin{center}
\scalebox{0.95}{
\includegraphics{diagrams/thesis/sum-two-wires-left-value.pdf}
}
\end{center}
\end{multicols}
\begin{multicols}{2}
\begin{center}
\scalebox{0.95}{
%%subcode-line{pdfimage}[diagrams/thesis/sum-of-wires.pdf]
\includegraphics{diagrams/thesis/sum-two-wires.pdf}
}
\end{center}
\begin{center}
\scalebox{0.95}{
\includegraphics{diagrams/thesis/sum-two-wires-right-value.pdf}
}
\end{center}
\end{multicols}
%% \item
%% When representing complex types like {{(b1*b2)+b3}} some visual
%% grouping of the wires may be done to aid readability. The exact type
%% however will always be clarified by the context of the diagram.
%% \begin{center}
%% \scalebox{0.95}{
%% %subcode-line{pdfimage}[diagrams/thesis/complex-type-crop.pdf]
%% }
%% \end{center}
\item Associativity is implicit in the graphical language. Three parallel
wires represent {{b1*(b2*b3)}} or {{(b1*b2)*b3}}, based on the context.
\begin{center}
\scalebox{1.5}{
\includegraphics{diagrams/thesis/assoc.pdf}
}
\end{center}
\item Commutativity is represented by crisscrossing wires.
\begin{multicols}{2}
\begin{center}
\scalebox{0.95}{
\includegraphics{diagrams/thesis/swap_times.pdf}
}
\end{center}
\begin{center}
\scalebox{0.95}{
\includegraphics{diagrams/thesis/swap_plus.pdf}
}
\end{center}
\end{multicols}
By visually tracking the flow of particles on the wires, one can
verify that the expected types for commutativity are satisfied.
\begin{multicols}{2}
\begin{center}
\scalebox{0.95}{
\includegraphics{diagrams/thesis/swap_times_value.pdf}
}
\end{center}
\begin{center}
\scalebox{0.95}{
\includegraphics{diagrams/thesis/swap_plus_value.pdf}
}
\end{center}
\end{multicols}
\item The morphisms that witness that {{0}} and {{1}} are the additive and
multiplicative units are represented as shown below. Note that since there
is no value of type 0, there can be no particle on a wire of type {{0}}.
Also since the monoidal units can be freely introduced and eliminated, in
many diagrams they are omitted and dealt with explicitly only when they are
of special interest.
\begin{multicols}{2}
\begin{center}
\scalebox{0.95}{
%%subcode-line{pdfimage}[diagrams/thesis/identr1.pdf]
\includegraphics{diagrams/thesis/uniti.pdf}
}
\end{center}
\begin{center}
\scalebox{0.95}{
%%subcode-line{pdfimage}[diagrams/thesis/identl1.pdf]
\includegraphics{diagrams/thesis/unite.pdf}
}
\end{center}
\end{multicols}
\begin{multicols}{2}
\begin{center}
\scalebox{0.95}{
%%subcode-line{pdfimage}[diagrams/thesis/identr0.pdf]
\includegraphics{diagrams/thesis/zeroi.pdf}
}
\end{center}
\columnbreak
\begin{center}
\scalebox{0.95}{
%%subcode-line{pdfimage}[diagrams/thesis/identl0.pdf]
\includegraphics{diagrams/thesis/zeroe.pdf}
}
\end{center}
\end{multicols}
\item Finally, distributivity and factoring are represented using the dual
boxes shown below:
\begin{multicols}{2}
\begin{center}
\includegraphics{diagrams/thesis/dist.pdf}
\end{center}
\begin{center}
\includegraphics{diagrams/thesis/factor.pdf}
\end{center}
\end{multicols}
Distributivity and factoring are interesting because they represent
interactions between sum and pair types. Distributivity should
essentially be thought of as a multiplexer that redirects the flow of
{{v:b}} depending on what value inhabits the type {{b1+b2}}, as shown
below. Factoring is the corresponding adjoint operation.
\begin{multicols}{2}
\begin{center}
\includegraphics{diagrams/thesis/dist-wire-value1.pdf}
\end{center}
\begin{center}
\includegraphics{diagrams/thesis/dist-wire-value2.pdf}
\end{center}
\end{multicols}
\end{itemize}
\noindent
\textit{Example.} We use the type {{bool}} as a shorthand to denote
the type {{1+1}} and use {{left ()}} to be {{true}} and {{right ()}}
to be {{false}}. The following combinator is represented by the given
diagram:
{{c : b * bool <-> b + b}}
{{c = swap* (;) dist (;) (identl* (+) identl*)}}
\begin{center}
\scalebox{0.95}{
%%subcode-line{pdfimage}[diagrams/thesis/example1-crop.pdf]
\includegraphics{diagrams/thesis/example1.pdf}
}
\end{center}
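As a small worked example, the adjoint of this combinator can be read off from the adjoint
construction in the previous subsection: each primitive isomorphism is replaced by its inverse
and the order of sequencing is reversed:
{{c{dagger} : b + b <-> b * bool}}
{{c{dagger} = (identr* (+) identr*) (;) factor (;) swap*}}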
%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Semantics}
The semantics of the primitive combinators is given by the following
single-step reductions. Since there are no values of type {{0}}, the rules
omit the impossible cases:
\begin{scriptsize}
%subcode{opsem} include main
%! columnStyle = rlcl
% swap+ & (left v) &|-->& right v
% swap+ & (right v) &|-->& left v
% assocl+ & (left v1) &|-->& left (left v1)
% assocl+ & (right (left v2)) &|-->& left (right v2)
% assocl+ & (right (right v3)) &|-->& right v3
% assocr+ & (left (left v1)) &|-->& left v1
% assocr+ & (left (right v2)) &|-->& right (left v2)
% assocr+ & (right v3) &|-->& right (right v3)
% identl* & ((), v) &|-->& v
% identr* & v &|-->& ((), v)
% swap* & (v1, v2) &|-->& (v2, v1)
% assocl* & (v1, (v2, v3)) &|-->& ((v1, v2), v3)
% assocr* & ((v1, v2), v3) &|-->& (v1, (v2, v3))
% dist & (left v1, v3) &|-->& left (v1, v3)
% dist & (right v2, v3) &|-->& right (v2, v3)
% factor & (left (v1, v3)) &|-->& (left v1, v3)
% factor & (right (v2, v3)) &|-->& (right v2, v3)
\end{scriptsize}
The reductions for the primitive isomorphisms above are exactly the same as
have been presented before~\cite{infeffects}. The reductions for the closure
combinators are however presented in a small-step operational style using the
following definitions of evaluation contexts and machine states:
\begin{scriptsize}
%subcode{bnf} include main
% Combinator Contexts, C = [] | Fst C c | Snd c C
% &|& LeftT C c v | RightT c v C
% &|& LeftP C c | RightP c C
% Machine states = <c, v, C> | {[c, v, C]}
% Start state = <c, v, []>
% Stop State = {[c, v, []]}
\end{scriptsize}
The machine transitions below track the flow of particles through a
circuit. The start machine state, {{<c,v,[]>}}, denotes the
particle~{{v}} about to be evaluated by the circuit {{c}}. The end
machine state, {{[c, v, [] ]}}, denotes the situation where the particle
{{v}} has exited the circuit {{c}}.
\begin{scriptsize}
%subcode{opsem} include main
%! columnStyle = rclr
% <iso, v, C> &|-->& {[iso, v', C]} & (1)
% & & where iso v |--> v' &
% <c1(;)c2, v, C> &|-->& <c1, v, Fst C c2> & (2)
% {[c1, v, Fst C c2]} &|-->& <c2, v, Snd c1 C> & (3)
% {[c2, v, Snd c1 C]} &|-->& {[ c1(;)c2, v, C ]} & (4)
% <c1(+)c2, left v, C> &|-->& <c1, v, LeftP C c2> & (5)
% {[ c1, v, LeftP C c2 ]} &|-->& {[c1 (+) c2, left v, C ]} & (6)
% <c1(+)c2, right v, C> &|-->& <c2, v, RightP c1 C> & (7)
% {[ c2, v, RightP c1 C ]} &|-->& {[c1 (+) c2, right v, C ]} & (8)
% <c1(*)c2, (v1, v2), C> &|-->& <c1, v1, LeftT C c2 v2> & (9)
% {[ c1, v1, LeftT C c2 v2 ]} &|-->& <c2, v2, RightT c1 v1 C> & (10)
% {[ c2, v2, RightT c1 v1 C ]} &|-->& {[ c1 (*) c2, (v1, v2), C ]} & (11)
\end{scriptsize}
Rule (1) describes evaluation by a primitive isomorphism. Rules (2), (3) and
(4) deal with sequential evaluation. Rule (2) says that for the value {{v}}
to flow through the sequence {{c1 (;) c2}}, it should first flow through
{{c1}} with {{c2}} pending in the context ({{Fst C c2}}). Rule (3) says the
value {{v}} that exits from {{c1}} should proceed to flow
through~{{c2}}. Rule (4) says that when the value {{v}} exits {{c2}}, it also
exits the sequential composition {{c1(;)c2}}. Rules (5) to (8) deal with
{{c1 (+) c2}} in the same way. In the case of sums, the shape of the value,
i.e., whether it is tagged with {{left}} or {{right}}, determines whether
path {{c1}} or path {{c2}} is taken. Rules (9), (10) and (11) deal with
{{c1 (*) c2}} similarly. In the case of products the value should have the
form {{(v1, v2)}} where {{v1}} flows through {{c1}} and {{v2}} flows through
{{c2}}. Both these paths are entirely independent of each other and we could
evaluate either first, or evaluate both in parallel. In this presentation we
have chosen to follow {{c1}} first, but this choice is entirely arbitrary.
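To illustrate these rules, consider running the example combinator
{{c = swap* (;) dist (;) (identl* (+) identl*)}} of the previous subsection on the input
{{(v, left ())}}. The value first flows through {{swap*}}, yielding {{(left (), v)}}, and then
through {{dist}}, yielding {{left ((), v)}}; rules (5) and (6) then route it through the left
branch of the sum, where {{identl*}} strips the unit, so the final output is {{left v}}.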
The interesting thing about the semantics is that it represents a reversible
abstract machine. In other words, we can compute the start state from the
stop state by changing the reductions {{|-->}} to run backwards
{{<--|}}. When running backwards, we use the isomorphism represented by a
combinator {{c}} in the reverse direction, i.e., we use the adjoint
{{c{dagger}}}.
\begin{proposition}[Logical Reversibility]
\label{prop:logrev}
{{<c,v,[]> |--> [c,v',[]]}} iff
{{<c{dagger},v',[]> |--> [c{dagger},v,[]]}}
\end{proposition}
% \roshan{We really have to rethink this prop in the presence of
% negatives and fractionals. With negatives, the program can actually
% end at the beginning of the circuit -- i.e. the program {{c:b1<->b2}}
% can stop with a value of type {{b1}}. With fractionals, there can be
% several values of type {{b2}} produced for a value of type {{b1}}. }
% \begin{proposition}[Groupoid]
% \label{prop:groupoid}
% {{langRev}} is a groupoid.
% \end{proposition}
% \begin{proposition}
% \label{prop:category}
% {{langRev}} is a dagger symmetric monoidal category.
% \end{proposition}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Negative and Fractional Types}
\label{sec:neg}
\label{sec:frac}
This section introduces the syntax and graphical languages for
negatives and fractionals. The general outline is as follows:
\footnote{We refer the reader to Selinger's excellent survey
article of monoidal
categories~\cite{springerlink:10.1007/978-3-642-12821-94} for the precise
definitions.}
\begin{itemize}
\item As established in our earlier work~\cite{rc2011,infeffects}, the
underlying categorical semantics of our core reversible language
{{langRev}} is based on \emph{two} distinct symmetric monoidal structures,
one with {{+}} as the monoidal tensor, and one with {{*}} as the monoidal
tensor.
\item We extend each underlying symmetric monoidal structure to a
\emph{compact closed} structure by adding a dual for each object and two
special morphisms traditionally called {{eta}} and {{eps}}.
\end{itemize}
The extended language is referred to as {{langRevEE}}. After
presenting the extension at the syntactic level, we discuss the
categorical semantics informally via the graphical language. The
operational semantics is presented in Sec.~\ref{sec:rat}.
As will be detailed in the remainder of this section, the compact closed
structure provides several properties of interest: (i) morphisms or wires are
allowed to run from right to left, (ii) the structure admits a trace that,
depending on the situation, can be used to implement various notions of loops
and
recursion~\cite{joyal1996traced,Hasegawa:2009:TMC:1552068.1552069,Hasegawa:1997:RCS:645893.671607},
(iii) the structure includes an isomorphism showing that the dual operator is
an involution, and (iv) the structure is equipped with objects representing
higher-order functions and a morphism \textit{eval} that applies these
functional objects. Interestingly each monoidal structure provides the same
ingredients, resulting in two dualities, two traces, two involutions, and two
notions of higher-order functions.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Syntax and Types}
Describing the syntax and types of an extension to {{langRev}} with additive
and multiplicative duality is fairly straightforward. Basically, for the
additive case, we extend the language with negative types denoted {{-b}},
negative values denoted {{-v}}, and two isomorphisms {{eta+}} and
{{eps+}}. Similarly, for the multiplicative case, we extend the language with
fractional types denoted {{1/b}}, fractional values denoted {{1/v}}, and two
isomorphisms {{eta*}} and {{eps*}}.
%subcode{bnf} include main
% Value Types, b = ... | -b | 1/b
% Values, v = ... | -v | 1/v
%
% Isomorphisms, iso &=& ... | eta+ | eps+ | eta* | eps*
For convenience, we sometimes use the notations {{b1 - b2}} and
{{b1/b2}} to indicate the types {{b1 + (-b2)}} and {{b1 * (1/b2)}}
respectively. The types of the new constructs are:
\vspace{-15pt}
\begin{multicols}{2}
%subcode{opsem} include main
% eta+ &: 0 <-> (-b) + b :& eps+
% eta* &: 1 <-> (1/b) * b :& eps*
%subcode{proof} include main
%@ |- v : b
%@@ |- -v : -b
%
%@ |- v : b
%@@ |- 1/v : 1/b
\end{multicols}
For the graphical language, we visually represent {{eta+}}, {{eps+}},
{{eta*}}, and {{eps*}} as U-shaped connectors. On the left below are {{eta+}}
and {{eta*}} showing the maps from {{0}} to {{-b+b}} and {{1}} to {{1/b * b}}.
On the right are {{eps+}} and {{eps*}} showing the maps from {{-b+b}} to~0 and
{{1/b * b}} to~1. Even though the diagrams below show the {{0}} and {{1}}
wires for completeness, later diagrams will always drop them in contexts
where they can be implicitly introduced and eliminated.
\begin{multicols}{2}
\begin{center}
\includegraphics{diagrams/eta.pdf}
\end{center}
\begin{center}
\includegraphics{diagrams/eps.pdf}
\end{center}
\end{multicols}
\begin{multicols}{2}
\begin{center}
\includegraphics{diagrams/eta_times.pdf}
\end{center}
\begin{center}
\includegraphics{diagrams/eps_times.pdf}
\end{center}
\end{multicols}
The usual interpretation of the type {{b1+b2}} is that we either have an
appropriately tagged value of type {{b1}} or an appropriately tagged value of
type {{b2}}. This interpretation is maintained in the presence of {{eta+}}
and {{eps+}} in the following sense: a value of type {{right v : -b + b}}
flowing into an {{eps+}} on the lower wire switches to a value
{{left (-v): -b + b}} that flows backwards on the upper wire. The inversion of
direction is captured by the negative sign on the value.
\begin{multicols}{2}
\begin{center}
\scalebox{1.5}{
\includegraphics{diagrams/eps_plus1.pdf}
}
\end{center}
\begin{center}
\scalebox{1.5}{
\includegraphics{diagrams/eps_plus2.pdf}
}
\end{center}
\end{multicols}
As an example, the diagram representing the first isomorphism in the
introduction is:
\begin{center}
\scalebox{1.2}{
\includegraphics{diagrams/algebra1.pdf}
}
\end{center}
Similarly, in the usual interpretation of {{b1*b2}} we have both a value of
type {{b1}} and a value of type {{b2}}. This interpretation is maintained in
the presence of fractionals. Hence an {{eta* : 1 <-> (1/b) * b}} is to be
viewed as a fission point for a value of type {{b}} and its multiplicative
inverse {{1/b}}. Operationally, this corresponds to the creation of two
values {{alpha}} and {{1/alpha}} where {{alpha}} is a fresh logic
variable. The operator {{eps*}} then becomes a unification site for these
logic variables:
\begin{multicols}{2}
\begin{center}
\scalebox{1.5}{
\includegraphics{diagrams/eta_times1.pdf}
}
\end{center}
\begin{center}
\scalebox{1.5}{
\includegraphics{diagrams/eps_times1.pdf}
}
\end{center}
\end{multicols}
%%%%%%%%%%%%%%%%%%%%
\subsection{Categorical Constructions}
All the constructions below are standard: they are collected from Selinger's
survey paper on monoidal
categories~\cite{springerlink:10.1007/978-3-642-12821-94} and presented in
the context of our language.
For a monoidal category to be compact closed the maps~{{eta}}
and~{{eps}} must satisfy a coherence condition that is usually
visualized as follows:
\begin{center}
\includegraphics{diagrams/coherence.pdf}
\end{center}
where $b^*$ represents the dual of {{b}}. In the case of negatives,
the condition amounts to checking that reversing direction twice is a
no-op. In the case of fractionals, the condition amounts to checking
that creating values {{alpha}} and {{1/alpha}} and immediately
unifying them is also a no-op. Both checks are straightforward and are
essentially the constructions in the introduction.
We now review several interesting constructions related to looping,
involution, and higher order functions.
\paragraph*{Trace.}
Every compact closed category admits a trace. For the additive case, we get
the following definition. Given {{f : b1+b2 <-> b1+b3 }},
define {{trace+ f : b2 <-> b3}} as:
{{ trace+ f = zeroi (;) (id (+) eta+) (;) (f (+) id) (;) (id (+) eps+) (;) zeroe }}
\begin{center}
\includegraphics{diagrams/thesis/trace_plus.pdf}
\end{center}
\noindent We have omitted some of the commutativity and associativity
shuffling to communicate the main idea. We are given a value of type
{{b2}} which we embed into {{0+b2}} and then {{(-b1+b1)+b2}}. This
can be re-associated into {{-b1+(b1+b2)}}. The component {{b1+b2}},
which until now is just an appropriately tagged value of type {{b2}},
is transformed to a value of type {{b1+b3}} by {{f}}.
If the result is in the {{b3}}-summand, it is produced as
the answer; otherwise the result is in the {{b1}}-summand; {{eps+}} is
used to make it flow backwards to be fed to the {{eta+}} located at
the beginning of the sequence. Iteration continues until a {{b3}} is
produced.
%%%%%
\paragraph*{Involution (Principium Contradictiones).}
In a symmetric compact closed category, we can build isomorphisms showing that the
dual operation is an involution. Specifically, we get the isomorphisms
{{b <-> -(-b)}} and {{b <-> (1/(1/b))}}. For the additive case, the
isomorphism is defined as follows:
{{ (id (+) eta+) (;) (swap+ (+) id) (;) (id (+) eps+) }}
\noindent where we have omitted the 0 introduction and elimination. The idea
is as follows: we start with a value of type {{b}}, embed it into {{b+0}} and
use {{eta}} to create something of type {{b + (-(-b) + (-b))}}. This is possible
because {{eta}} has the polymorphic type {{-a + a}} which can be instantiated
to {{-b}}. We then reshuffle the type to produce {{-(-b) + (-b + b)}} and cancel
the right hand side using {{eps+}}. The construction for the multiplicative
case is identical and omitted.
%% {{b <-> -(-b)}}
%%
%% this is the wrong diagram: see lemma 4.17 in selinger's paper
%% \begin{center}
%% \includegraphics{diagrams/double_neg.pdf}
%% \end{center}
\paragraph*{Duality preserves the monoidal tensor. }
As with compact closed categories, the dual on the objects distributes
over the tensor. In terms of {{langRevEE}} we have that {{-(b1+b2)}}
can be mapped to {{(-b1)+(-b2)}} and that {{1/(b1*b2)}} can be mapped
to {{(1/b1)*(1/b2)}}. The isomorphism {{-(b1 + b2) <-> (-b1) + (-b2)}}
can be realized as follows:
\begin{center}
\includegraphics{diagrams/dist_neg_plus.pdf}
\end{center}
\noindent
The multiplicative construction is similar.
%% This is a distinguishing difference from *-autonomous categories that
%% are models for Linear Logic, where the dual does not preserve the
%% tensor but maps to a dual tensor \cite{curien2009interactive}.
\paragraph*{Duality is a functor.}
Duality in {{langRevEE}} maps objects to their duals and morphisms
to morphisms on the dual objects. In other words, it maps {{c : b1 <-> b2}} to
{{neg~c:-b1<->-b2}} in the additive monoid and to
{{inv~c:1/b1<->1/b2}} in the multiplicative monoid.
%% Any operation on types can be applied to the negative versions of these
%% types.
The idea is simply to reverse the flow of values and use the
adjoint of the operation:
\begin{multicols}{2}
\begin{center}
\includegraphics{diagrams/neg_lift.pdf}
\end{center}
%subcode{proof} include main
%@ c : b1 <-> b2
%@@ neg c : (-b1) <-> (-b2)
\end{multicols}
This construction relies on the fact that every {{langRevEE}} morphism
has an adjoint. The {{inv}} construction is similar.
%% inverse morphism, whereas in compact closed categories with no
%% underlying dagger structure we can only construct an op-functor.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Functions and Delimited Continuations}
\label{sub:hof}
Although these constructions are also standard, they are less known and they
are particularly important in our context: we devote a little more time to
explain them. Our discussion is mostly based on Abramsky and Coecke's article
on categorical quantum mechanics~\cite{abramsky-2008}.
In a compact closed category, each morphism {{f : b1 <-> b2 }} can be given a
\emph{name} and a \emph{coname}. For the additive fragment, the name
$\fname{f}$ has type {{0 <-> (-b1 + b2)}} and the coname $\fconame{f}$ has type
{{b1 + (-b2) <-> 0}}. For the multiplicative fragment, the name $\fname{f}$ has
type {{1 <-> ((1/b1) * b2)}} and the coname $\fconame{f}$ has type
{{(b1 * (1/b2)) <-> 1}}. Intuitively, this means that for each morphism,
it is possible to construct, from ``nothing,'' an object in the category that
denotes this morphism, and dually it is possible to eliminate this object.
The construction of the name and coname of {{c : b1 <-> b2}} in the additive case
can be visualized as follows:
\begin{multicols}{2}
\begin{center}
\includegraphics{diagrams/function.pdf}
\end{center}
\begin{center}
\includegraphics{diagrams/delimc.pdf}
\end{center}
\end{multicols}
Intuitively the name consists of viewing {{c}} as a function and the coname
consists of viewing {{c}} as a delimited continuation.
In addition to being able to represent morphisms, it is possible to express
function composition. For the additive case, the composition is depicted
below:
\begin{multicols}{2}
\scalebox{1.1}{
\includegraphics{diagrams/compose1.pdf}
}
\scalebox{0.8}{
\includegraphics{diagrams/compose.pdf}
}
\end{multicols}
which is essentially equivalent to sequencing the two computation blocks as
shown below:
\begin{center}
\includegraphics{diagrams/compose2.pdf}
\end{center}
Applying a function to an argument consists of making the argument flow
backwards to satisfy the demand of the function:
\begin{multicols}{2}
\begin{center}
\scalebox{1.0}{
\includegraphics{diagrams/apply1.pdf}
}
\end{center}
\begin{center}
\scalebox{0.8}{
\includegraphics{diagrams/apply2.pdf}
}
\end{center}
\end{multicols}
Having reviewed the representation of functions, we now discuss the
similarities and differences between the two notions of functions and their
relation to conventional (linear) functions which mix additive and
multiplicative components. For that purpose, we use a small example.
Consider a datatype {{color = R|Gr|B}} and the following
manipulations:
\begin{itemize}
\item Using the fact that {{1}} is the multiplicative unit, generate
from the input {{()}} the value {{((),())}} of type {{1 * 1}};
\item Apply the isomorphism {{1 <-> (1/b) * b}} in parallel to each of
the components of the above tuple. The resulting value is
{{((1/alpha1,alpha1),(1/alpha2,alpha2))}} where {{alpha1}} and
{{alpha2}} are fresh logic variables;
\item Using the fact that {{*}} is associative and commutative, we can
rearrange the above tuple to produce the value:
{{((1/alpha1,alpha2),(1/alpha2,alpha1))}}.
\end{itemize}
At this point we have constructed a strange mix of two {{b-o*b}} functions;
inputs of one function manifest themselves as outputs of the other. If
{{(1/alpha1,alpha2)}} is held by one subcomputation and {{(1/alpha2,alpha1)}}
is held by another subcomputation, these remixed functions form a
communication channel between the two concurrent subcomputations. Unifying
{{1/alpha1}} with the {{color}} value {{R}} in one subcomputation fixes {{alpha1}} to
be {{R}} in the other. The type {{b}} thus plays the role of the type of the
communication channel, indicating how much information can be communicated
between the two subcomputations. Depending on the choice of the type {{b}},
an arbitrary number of bits may be communicated.
Dually, the additive reading of the above manipulations corresponds to
functions of the form {{b-o+b}}, witnessing isomorphisms of the form
{{0<->(-b)+b}}. The remixed additive functions express control flow transfer
between two subcomputations, \emph{only one of which exists} at any point,
i.e., they capture the essence of coroutines.
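As a very rough operational analogy for the multiplicative reading only (the
actual semantics in Sec.~\ref{sec:rat} is based on unification and remains
reversible), one can mimic this channel behaviour in Haskell by letting a
shared cell stand in for a fresh logic variable: writing to the cell plays the
role of the {{1/alpha}} end and reading from it the role of the {{alpha}} end.
The names \texttt{etaMul} and \texttt{Color} below are ours.
\begin{verbatim}
-- Rough analogy only: a shared MVar stands in for the fresh logic variable
-- alpha; the writing end plays the role of 1/alpha, the reading end of alpha.
import Control.Concurrent.MVar

data Color = R | Gr | B deriving Show

etaMul :: IO (Color -> IO (), IO Color)   -- (1/alpha , alpha)
etaMul = do
  cell <- newEmptyMVar
  return (putMVar cell, readMVar cell)

main :: IO ()
main = do
  (negAlpha1, alpha1)   <- etaMul
  (_negAlpha2, _alpha2) <- etaMul
  -- One subcomputation holds (1/alpha1, alpha2), the other (1/alpha2, alpha1);
  -- "unifying" 1/alpha1 with R in the first fixes alpha1 to R in the second.
  negAlpha1 R
  alpha1 >>= print    -- prints R
\end{verbatim}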
It should be evident that in a universe in which information is not
guaranteed to be preserved by the computational infrastructure, the above
slicing and dicing of functions would make no sense. But linearity is not
sufficient: one must also recognize that the additive and multiplicative
spaces are different.
%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Additional Constructions}
\label{sec:specific-constructions}
The additional constructions below (presented with minimal commentary)
confirm that conventional algebraic manipulations in the mathematical field
of rationals do indeed correspond to realizable type isomorphisms in our
setting. The constructions involving both negative and fractional types are
novel.
\paragraph*{Lifting negation out of {{*}}.}
The isomorphisms below state that the direction is \emph{relative}. If {{b1}}
and {{b2}} are flowing opposite to each other then it doesn't matter which
direction is forwards and which is backwards. More interestingly, as {{b1}}
moves backwards, it can ``see the past'' of {{b2}}, which is equivalent to
both particles moving backwards.
{{(-b1) * b2 <-> -(b1 * b2) <-> b1 * (-b2)}}
To build these isomorphisms, we first build an intermediate
construction which we call {{eps_{fst} : (-b1)*b2 + b1*b2 <-> 0}}.
\begin{center}
\scalebox{0.8}{
\includegraphics{diagrams/eps_fst.pdf}
}
\end{center}
The isomorphism {{(-b1) * b2 <-> -(b1 * b2)}} can be constructed in
terms of {{eps_{fst} }} as shown below.
\begin{center}
\scalebox{0.9}{
\includegraphics{diagrams/mult_neg.pdf}
}
\end{center}
The second isomorphism can be built in the same way by merely swapping
the arguments.
\paragraph*{Multiplying Negatives.}
{{b1 * b2 <-> (-b1)*(-b2)}}
This isomorphism is a consequence of the fact that $-$ is an involution: it
corresponds to the algebraic manipulation:
%% %subcode{opsem} include main
%% %! columnStyle = rc
%% % & b1 * b2
%% % = & -(-(b1*b2))
%% % = & -((-b1)*b2)
%% % = & (-b1)*(-b2)
{{b1 * b2 = -(-(b1*b2)) = -((-b1)*b2) = (-b1)*(-b2)}}
\paragraph*{Multiplying and Adding Fractions.}
An isomorphism witnessing:
{{b1/b2 * b3/b4 <-> (b1*b3)/(b2*b4)}}
is straightforward. More surprisingly, it is also possible to construct
isomorphisms witnessing:
{{b1/b + b2/b <-> (b1+b2)/b}}
{{b1/b2 + b3/b4 <-> (b1*b4+b3*b2)/(b2*b4) }}
% \paragraph*{Unification.}
% A computational interpretation of this is that {{langRevEE}} acts like
% a logic programming system and {{eta*}} acts as the site of creation
% of a new typed logic variable of type {{b}} and its corresponding dual
% of type {{1/b}}. These two values are free to move independently
% through a circuit, but any operations on value affect what operations
% are possible on the other since they are both related by an underlying
% logic variable. Dually, {{eps*}} acts as a unification site for values
% flowing along both branches. If the values are not exactly duals, then
% the system fails to unify and hence the program is stuck. If the
% values are exactly duals, then unification succeeds and {{eta*}}
% returns {{1}}.
% \paragraph*{Entanglement and Anti-Particles.}
% A dual view of the operation of {{eta*}} and {{eps*}} is inspired by
% quantum mechanics. The {{eta*}} is a site of fission that creates an
% information particle and its anti-particle. Both flow in the same
% direction in time. The produce particles are entangles in that actions
% on one (such as transformation by application of an isomorphism)
% affects the other. Further the fission site does not create a particle
% in any specific state, but in a superposition of all possible states
% as dictated by its type. In each possible world that the particle
% exists, it takes on a specific value inhibiting its type and
% correspondingly its entangled anti-particle takes the dual of the
% specific value.
% This also leads to the exciting point of view that entangled
% particle/anti-particle pairs act as functions in a physical
% first-class sense. We can ask the function to be applied to a value by
% unifying the anti-particle with a particle in a well-known state and
% examining is pair. We shall use this point of view in the development
% our SAT solver (see Sec. \ref{sec:sat-solver}).
% \paragraph*{Entanglement and Anti-Particles.}
% \paragraph*{Annihilation.}
% As with {{(+, 0)}} we can construct a {{trace*}} operation. Unlike the
% additive {{trace+}}, {{trace*}} gives us the ability to find the
% fixpoint of the constraint (expressed with combinators) that is
% traced. If the constraint has not satisfying values, there is not
% fixpoint possible -- we use the term `annihilation' to describe the
% corresponding undefined state of the program. Annihilation and a
% non-termination both describe undefined program execution -- while
% non-termination characterizes an iteration that fails to terminate,
% annihilation characterizes a constraint that has no fixpoint.
%% An analogy inspired by quantum mechanics is also applicable here. The
%% operator {{eta*}} is a site of fission that creates \emph{an entangled
%% particle and anti-particle}. These particles can be thought to be in
%% a \emph{superposition} of states determined by their type. Both flow
%% in the same direction in time. In each possible world that the
%% particle exists, it takes on a specific value inhibiting its type and
%% correspondingly its entangled anti-particle takes the dual of the
%% specific value. Since the particles are entangled, actions on one
%% (such as transformation by application of an isomorphism) affects the
%% other in much the same sense as \emph{action at a distance}. The
%% `{{a-o*b}}' functions mentioned in the introduction, are first-class
%% values and they correspond to these entagled pairs. Finally {{eps*}}
%% operations are sites where corresponding particle and anti-particles
%% annihilate each other. While appealing as an analogy, this description
%% does not imply (or preclude) any formal connection with Physics.
% \paragraph*{Zero Information.}
% As with {{eta+}}, we argue that {{eta*}} constructs no new
% information, instead it gives us the ability to explore as many worlds
% as permitted by the type {{b}}. So situate ourselves in any specific
% world, we have to supply additional information in the form of
% unifying the particle or anti-particle with a particle representing
% known information. Hence only by supplying information to a an
% entangled pair can we read information from it -- and always exactly
% as much information as we supplied.
%% We can ask the function to be applied to a value by
%% unifying the anti-particle with a particle in a well-known state and
%% examining is pair. We shall use this point of view in the development
%% our SAT solver (see Sec. \ref{sec:sat-solver}).
%% This also leads to the exciting point of view that
%% entangled such particle/anti-particle pairs, which are first class
%% values, act as functions in a physical first-class sense.
%% Further the fission site does not create a particle in any specific
%% state, but in a superposition of all possible states as dictated by
%% its type.
% Since such pairs are first-class entities, they correspond to
%% \emph{Coherence for Compact Closure with Multiplicative Duality.} As
%% before, the triangle axiom which is the coherence condition on compact
%% closed categories is applicable and is expressed below in terms of
%% {{eta*}}/{{eps*}} and the multiplicative monoid.
%% %% The section follows the outline of the previous one, first presenting
%% %% the new syntax, and discussing the graphical language and the
%% %% properites implied by the underlying categorical structure.
%% The usual interpretation of {{b1*b2}} we have both a value of type
%% {{b1}} and a value of type {{b2}}. This interpretation is maintained
%% in the presence of fractionals. Hence an {{eta* : 1 <-> (1/b) * b}} is
%% to be viewed as a fission point for a value of type {{b}} and its
%% multiplicative inverse {{1/b}}. This naturally raises the question --
%% which value of type {{b}} is produced? The isomorphism {{eta*}} does
%% not produce any specific value of type {{b}}, instead it produces a
%% placeholder for an unspecified value and dually, {{1/b}} is inhabited
%% by the placeholder that indicates the absence of the unspecified
%% value.
%% \begin{enumerate}
%% \item
%% \emph{Trace}
%% % \item
%% % Multiplicative functions, of the form {{b1 -o* b2}}, on the other hand
%% % are indeed first-class values. However these functions can only be
%% % linearly used, i.e., they represent a constraint that can be satisfied
%% % at most once.
%% % %% in one `computational world'. It may be possible that to
%% % %% repeatedly invoke multiplicative functions with an appropriate
%% % %% recursive type whose unfoldings allow for repeated unification.
%% % \end{itemize}
% The exact encoding on HOF in {{langRevEE}} should shed light on
% several problems, including the connections to continuations discussed
% in Sec. \ref{sec:related}. They should also allow for a natural
% encoding of HOFs in the translations of \lcal to {{langRevT}} and
% indicate a monadic encapsulation of of information effects which are
% currrently structured as an arrow.
%% If one branch uses a clause that matches one part of the input to
%% produce one part of the output, all other clauses are automatically
%% adjusted to account for this to ensure that this input is not matched
%% again and that this output is not produced again.
%% \item At this point, we have produced the skeleton for our desired function.
%% Assume now that a client uses the first clause of the function in a context
%% that contains the value \verb|R|, i.e., the client has the value
%% {{((1/alpha1,alpha2),R)}} consiting of one of the clauses of the function
%% and an argument. This value can be rearranged to {{((1/alpha1,R),alpha2)}}.
%% At this point, we can use the isomorphism {{1 <-> (1/a) * a}} in reverse on
%% the first component of the pair which has the following effects:
%% \begin{itemize}
%% \item the logic variable {{alpha1}} is unified with {{R}};
%% \item the first component of the pair then reduces to () and can be
%% absorbed by the fact that {{1}} is the multiplicative unit. In other
%% words, the value reduces to simply {{alpha2}} reflecting that the
%% pattern-matching was successful and the result of the clause is now
%% available;
%% \item the first clause of the function has now disappeared and the
%% remaining two clauses become {{((1/alpha2,alpha3),(1/alpha3,R))}}.
%% \item The same client or another client can now use the second clause
%% applying it to {{Gr}} which would produce the result {{alpha3}} for the
%% second client and ground the result of the first client to {{Gr}}.
%% \item Finally the remaining clause {{(1/alpha3,R)}} can be applied to {{B}}
%% which grounds the result of the second client to {{B}}.
%% \end{itemize}
%% Furthermore, just like the case for negative types, a dual reading of
%% the pattern-matching clauses as continuations allows the context to
%% use the result of the clause and provide the matching argument later.
%% \begin{itemize}
%% \item Using the fact that {{1}} is the multiplicative unit, generate from the
%% input {{()}} the value {{((),((),()))}} of type {{1 * (1 * 1)}};
%% \item Apply the isomorphism {{1 <-> (1/a) * a}} in parallel to each of the
%% components of the above tuple. The resulting value is
%% {{((1/alpha1,alpha1),((1/alpha2,alpha2),(1/alpha3,alpha3)))}} where {{alpha1}},
%% {{alpha2}}, and {{alpha3}} are fresh logic variables.
%% \item Using the fact that {{*}} is associative and commutative, we can
%% rearrange the above tuple to produce the value:
%% {{((1/alpha1,alpha2),((1/alpha2,alpha3),(1/alpha3,alpha1)))}}.
%% \item At this point, we have produced the skeleton for our desired function.
%% Assume now that a client uses the first clause of the function in a context
%% that contains the value \verb|R|, i.e., the client has the value
%% {{((1/alpha1,alpha2),R)}} consiting of one of the clauses of the function
%% and an argument. This value can be rearranged to {{((1/alpha1,R),alpha2)}}.
%% At this point, we can use the isomorphism {{1 <-> (1/a) * a}} in reverse on
%% the first component of the pair which has the following effects:
%% \begin{itemize}
%% \item the logic variable {{alpha1}} is unified with {{R}};
%% \item the first component of the pair then reduces to () and can be
%% absorbed by the fact that {{1}} is the multiplicative unit. In other
%% words, the value reduces to simply {{alpha2}} reflecting that the
%% pattern-matching was successful and the result of the clause is now
%% available;
%% \item the first clause of the function has now disappeared and the
%% remaining two clauses become {{((1/alpha2,alpha3),(1/alpha3,R))}}.
%% \item The same client or another client can now use the second clause
%% applying it to {{Gr}} which would produce the result {{alpha3}} for the
%% second client and ground the result of the first client to {{Gr}}.
%% \item Finally the remaining clause {{(1/alpha3,R)}} can be applied to {{B}}
%% which grounds the result of the second client to {{B}}.
%% \end{itemize}
% \begin{itemize}
% \item
% Additive functions are not really values, but are possible paths of
% execution. However, they can be encoded as values as follows: Consider
% two functions {{f:b1 -o+ b2}} and {{g:b1-o+b2}}. A value of type
% {{bool}} can discriminate them and hence a value of type {{bool}} acts
% as the address of the function and is a first class value
% representing one of the {{b1 -o+ b2}} functions. %% This is similar to
% %% how first classes functions a'la \lcal are represented by their
% %% addresses when compiled to conventional hardware.
% \item
% Multiplicative functions, of the form {{b1 -o* b2}}, on the other hand
% are indeed first-class values. However these functions can only be
% linearly used, i.e., they represent a constraint that can be satisfied
% at most once.
% %% in one `computational world'. It may be possible that to
% %% repeatedly invoke multiplicative functions with an appropriate
% %% recursive type whose unfoldings allow for repeated unification.
% \end{itemize}
% ADD MATH.
% %%%%%%%%%%%%%%%%%%%%%%%%
% \subsection{Conventional Higher-Order Functions}
% \label{sec:hof}
% In a conventional language, higher-order functions capture scope and can be
% represented as first-class values. In {{langRevEE}}, we have two notions of
% functions both of which share similarities with conventional higher-order
% functions.
% \begin{itemize}
% \item
% Additive functions are not really values, but are possible paths of
% execution. However, they can be encoded as values as follows: Consider
% two functions {{f:b1 -o+ b2}} and {{g:b1-o+b2}}. A value of type
% {{bool}} can discriminate them and hence a value of type {{bool}} acts
% as the address of the function and is a first class value
% representing one of the {{b1 -o+ b2}} functions. %% This is similar to
% %% how first classes functions a'la \lcal are represented by their
% %% addresses when compiled to conventional hardware.
%% \emph{Non-termination.} Compact closed categories contain {{trace}}
%% operations and this implies that {{eta+}}/{{eps+}} allow the
%% construction of a {{trace+}} operator (shown in Section
%% \ref{sec:monoidal-constructions}). As shown in previous work
%% \cite{infeffects}, non-terminating computation can be constructed from
%% {{trace+}} and recursive types.
% \begin{multicols}{2}
% \begin{center}
% \includegraphics{diagrams/trace.pdf}
% \end{center}
% %subcode{proof} include main
% %@ c : b2 + b1 <-> b2 + b3
% %@@ trace c : b1 <-> b3
% \end{multicols}
% \paragraph*{Zero Information.}
% The fact that {{eta+}} (and hence {{eps+}}) introduces no information
% can be understood is by viewing {{-b+b}} as a type whose value is
% observable only if you `pay' {{b}} amount of information to the {{-b}}
% branch. When {{b}} amount of information is supplied {{(-b+b)+b=b}}
% amount of information is returned to you. (TODO. more needs to be said
% here.)
% For this reason, our treatment of negative values in the context of
% {{langRev}} is particularly simple. We will have a type $0$ and an
% isomorphism {{0 <-> a + (-a)}} that when used in the left-to-right direction
% allows us to create a value and a corresponding debt out of nothing. Both the
% value and its negative counterpart can flow anywhere in the system. Because
% information is preserved, a closed program (which does not have any
% ``dangling'' negative values) will eventually have to match the negative
% value with some corresponding value, effectively using the isomorphism in the
% right-to-left direction.
%% Hence, the intuitive way to read the diagrams is that they reverse the
%% flow of information -- i.e. they capture a notion of \emph{negative
%% information flow}. In normal {{langRev}} circuits, information flows
%% from left-to-right. The operator {{eps+}} causes information to flow
%% from right-to-left. The interpretation of {{eta+}} is the opposite:
%% it turns a right-to-left flow of information in a circuit into a
%% left-to-right flow. The diagram below is an example of using {{eta+}}
%% and {{eps+}} and corresponds to the additive {{a <-> a}} isomorphism
%% from the introduction. (Here the {{0}} wires have been shown for
%% completeness.)
%% \paragraph*{Negative Information Flow.}
% A time traveling intuition is also applicable here: if view the
% left-to-right direction of information flow as the canonical forward
% direction of information, then the action of {{eps+}} allows values,
% aka information particles, to flow backwards in time. Particles that
% flow backwards in time get to see and interact with the history of
% particles that they coexist with, i.e., values that they exist
% multiplicatively with. Hence the isomorphism {{(-b1)*b2<->-(b1*b2)}}
% (whose construction is shown in Section \ref{sec:neg-constructions}).
%\subsection{Compact Closure with Additive Inverses}
%% \emph{Coherence for Compact Closure with Additive Duality.} The
%% appropriate coherence condition for compact closed categories,
%% expressed as the triangle axiom, is satisfied by {{eta+}}/{{eps+}} and
%% may be visualized (modulo {{0}} wires, swapping etc.) as explained by
%% Selinger \cite{springerlink:10.1007/978-3-642-12821-94} as the
%% operational equivalence:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Computing in the Field of Rationals : {{langRevEE}} }
\label{sec:rat}
In this section we develop an operational semantics for {{langRev}} extended
with negative and fractional types, {{langRevEE}}. The operational semantics
takes the abstract categorical diagrams, which were previously known, and gives
them a computational interpretation based on reverse execution for negative
types and unification for fractional types. Even though this computational
interpretation appears straightforward in retrospect, it constitutes a
breakthrough because it overturns the preconceived idea that there is only one
notion of duality and hence that both negative and fractional types should be
implemented using the same underlying mechanism. In some sense, the folklore in
the monoidal-categories literature, which describes evaluation in the additive
and multiplicative fragments as ``particle-style'' and ``wave-style''
respectively, can be read as suggesting either a unified implementation or
separate ones. But, since de Morgan, we have been predisposed to think of only
one notion of duality, a predisposition that persisted even into linear
logic. (See Sec.~\ref{sec:related} for further
discussion of duality.)
For the purposes of this section, we start with the semantics for {{langRev}}
defined in Sec.~\ref{sec:pi}, and we will systematically rewrite it to achieve
the desired {{langRevEE}} semantics. This rewriting consists of two main steps:
\begin{enumerate}
\item We rewrite the semantics in Sec.~\ref{sec:pi} using unification of the
values instead of direct pattern matching. This gives us the necessary
infrastructure for fractional types. We assume the reader is familiar with
the idea of unification as realized using logic variables, substitutions,
and reification as presented in any standard text on logic programming.
\item We write a reverse interpreter by reversing the direction of the
{{|-->}} reductions. This gives us the necessary infrastructure for
negative types.
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%
\subsection{Unification}
Previously the reductions of the primitive isomorphisms were specified in the
following form:
{{ iso v_{input pattern} |--> v_{output pattern} }}
\noindent
Instead of relying on pattern-matching, we rewrite the rules to accept an
incoming substitution to which the required pattern-matching rules are added
as constraints. The general case is:
{{ iso s v' |--> (v_{output pattern}, s[v' <><> v_{input pattern}]) }}
Using the same idea, the entire semantics can be extended to thread
the substitution as shown below:
%subcode{opsem} include main
%! columnStyle = rclr
%! fwd = \triangleright
% <iso, v, C, s>_{fwd} &|-->& {[iso, v', C, s']}_{fwd}
% & & where iso s v |--> (v', s')
% <c1(;)c2, v, C, s>_{fwd} &|-->& <c1, v, Fst C c2, s>_{fwd}
% {[c1, v, Fst C c2, s]}_{fwd} &|-->& <c2, v, Snd c1 C, s>_{fwd}
% {[c2, v, Snd c1 C, s]}_{fwd} &|-->& {[ c1(;)c2, v, C,s ]}_{fwd}
% <c1(+)c2, v', C, s>_{fwd} &|-->& <c1, v, LeftP C c2, s[v' <><> left v]>_{fwd}
% {[ c1, v, LeftP C c2, s ]}_{fwd} &|-->& {[c1 (+) c2, left v, C,s ]}_{fwd}
% <c1(+)c2, v', C, s>_{fwd} &|-->& <c2, v, RightP c1 C, s[v' <><> right v]>_{fwd}
% {[ c2, v, RightP c1 C,s ]}_{fwd} &|-->& {[c1 (+) c2, right v, C,s ]}_{fwd}
% <c1(*)c2, v', C, s>_{fwd} &|-->& <c1, v1, LeftT C c2 v2, s'>_{fwd}
% & & where s' = s[v <><> (v1, v2)]
% {[ c1, v1, LeftT C c2 v2,s ]}_{fwd} &|-->& <c2, v2, RightT c1 v1 C, s>_{fwd}
% {[ c2, v2, RightT c1 v1 C,s ]}_{fwd} &|-->& {[ c1 (*) c2, (v1, v2), C,s ]}_{fwd}
Most of the rules look obviously correct but there is a subtle
point. Consider the case for {{c1 (+) c2}}. Previously the shape of the value
determined which of the combinators {{c1}} or {{c2}} to use. Now the incoming
value could be a fresh logic variable, and indeed we have two rules with
the same left-hand side; only the success or failure of some further
unification determines which of them applies. This situation is
common in logic programming, where evaluation is regarded as a
non-deterministic process that searches for a satisfying substitution, if
one exists. Typical implementations of logic programming languages further provide
top-level mechanisms to manipulate this non-deterministic search, for example
by returning one answer and giving the user the option to ask for more
answers. We abstract from this top-level semantics and view the rules above
as being applied non-deterministically.
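For concreteness, the following self-contained Haskell sketch shows one
possible representation of values with logic variables and of the unification
step {{s[v <><> v']}} that the rules above thread through the machine; it is
illustrative only (no occurs-check, no triangular substitutions) and is not
the implementation used in the rest of the paper.
\begin{verbatim}
-- Illustrative sketch: values with logic variables, substitutions, and the
-- unification step used when a reduction rule adds a constraint to s.
data Val = Unit | L Val | R Val | Pair Val Val
         | Neg Val | Recip Val | Var Int
  deriving (Eq, Show)

type Subst = [(Int, Val)]        -- logic-variable bindings

walk :: Subst -> Val -> Val      -- chase a variable to its binding, if any
walk s v@(Var i) = maybe v (walk s) (lookup i s)
walk _ v         = v

unify :: Subst -> Val -> Val -> Maybe Subst
unify s a b = go (walk s a) (walk s b)
  where
    go (Var i) (Var j) | i == j  = Just s
    go (Var i) v                 = Just ((i, v) : s)
    go v (Var i)                 = Just ((i, v) : s)
    go Unit Unit                 = Just s
    go (L x) (L y)               = unify s x y
    go (R x) (R y)               = unify s x y
    go (Neg x) (Neg y)           = unify s x y
    go (Recip x) (Recip y)       = unify s x y
    go (Pair x1 x2) (Pair y1 y2) = unify s x1 y1 >>= \s1 -> unify s1 x2 y2
    go _ _                       = Nothing   -- clash: this rule does not apply
\end{verbatim}
In these terms, a rule such as the one for {{c1 (+) c2}} succeeds along the
{{left}} branch exactly when \texttt{unify} of the incoming value against a
left-tagged fresh variable returns a new substitution, and fails (forcing the
other branch to be explored) when it returns \texttt{Nothing}.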
%%%%%%%%%%%%%%%%%
\subsection{Reverse Execution}
Given that our language is reversible (Prop.~\ref{prop:logrev}), a backward
evaluator is relatively straightforward to implement: using the backward
evaluator to calculate {{c v}} is equivalent to calculating {{c^{dagger} v}} in the forward
evaluator.
%subcode{opsem} include main
%! columnStyle = rclr
%! bck = \triangleleft
% {[iso, v, C, s]}_{bck} &|-->& <iso, v', C, s'>_{bck}
% & & where iso^{dagger} s v |--> (v', s')
% <c1, v, Fst C c2, s>_{bck} &|-->& <c1(;)c2, v, C, s>_{bck}
% <c2, v, Snd c1 C, s>_{bck} &|-->& {[c1, v, Fst C c2, s]}_{bck}
% {[ c1(;)c2, v, C, s ]}_{bck} &|-->& {[c2, v, Snd c1 C, s]}_{bck}
% <c1, v, LeftP C c2,s >_{bck} &|-->& <c1(+)c2,left v, C>_{bck}
% {[c1 (+) c2, v', C, s ]}_{bck} &|-->& {[ c1, v, LeftP C c2, s[v' <><> left v] ]}_{bck}
% <c2, v, RightP c1 C, s>_{bck} &|-->& <c1(+)c2, right v, C>_{bck}
% {[c1 (+) c2, v', C,s ]}_{bck} &|-->& {[ c2, v, RightP c1 C, s[v' <><> right v] ]}_{bck}
% <c1, v1, LeftT C c2 v2, s>_{bck} &|-->& <c1(*)c2, (v1, v2), C, s>_{bck}
% <c2, v2, RightT c1 v1 C, s>_{bck} &|-->& {[ c1, v1, LeftT C c2 v2, s ]}_{bck}
% {[ c1 (*) c2, v, C, s ]}_{bck} &|-->& {[ c2, v2, RightT c1 v1 C, S' ]}_{bck}
% && where s' = s[v <><> (v1, v2)]
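The adjoint used here is purely syntactic. As a sketch, and assuming an
illustrative \texttt{Comb} syntax tree of our own with sequencing, sum, and
product composition, it simply reverses sequential composition and daggers
each primitive isomorphism:
\begin{verbatim}
-- Illustrative sketch of the syntactic adjoint: reverse sequencing, dagger
-- the primitives, and leave parallel (sum/product) composition in place.
data Comb = Prim String            -- a primitive isomorphism, named by a string
          | Dagger Comb            -- its inverse
          | Seq Comb Comb | Plus Comb Comb | Times Comb Comb

adjoint :: Comb -> Comb
adjoint (Prim p)      = Dagger (Prim p)
adjoint (Dagger c)    = c
adjoint (Seq c1 c2)   = Seq (adjoint c2) (adjoint c1)
adjoint (Plus c1 c2)  = Plus (adjoint c1) (adjoint c2)
adjoint (Times c1 c2) = Times (adjoint c1) (adjoint c2)
\end{verbatim}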
%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Semantics of {{langRevEE}} }
Combining the two constructions above, we get the semantics of the full
{{langRevEE}} language by adding the rules for the two variants of~{{eta}}
and~{{eps}}.
\begin{enumerate}
\item
To add multiplicative duality, we add the following rules to the set
of primitive isomorphisms:
\begin{enumerate}
\item {{eta* s () |--> ((1/v, v), s)}}
where {{v}} is a fresh logic variable.
As explained previously, {{eta*}} creates the two values {{1/v}} and {{v}}
from a single fresh logic variable.
\item {{eps* s v |--> (1, s[v <><> (1/v',v') ])}}
where {{v'}} is a fresh logic variable.
Similarly, {{eps*}} unifies the values on the incoming wires. The incoming
value {{v}} represents the values of both wires and hence unifying
them is accomplished in terms of an intermediate logic variable
{{v'}}.
\end{enumerate}
\item To add negative types we add the following rules to the
reductions above. The additions formalize our previous discussions
and should not be surprising at this point.
\begin{enumerate}
\item The rules for {{eps+}} essentially transfer control from the
forward evaluator (whose states are tagged by $\triangleright$) to
the backward evaluator (whose states are tagged by
$\triangleleft$). In other words, after an {{eps+}} the direction
of execution is reversed. The pattern matching done by the
unification ensures that a value on the {{right}} wire is tagged
to be negative and transferred to the {{left}} wire, and vice versa.
%subcode{opsem} include main
%! columnStyle = rclr
%! fwd = \triangleright
%! bck = \triangleleft
% <eps+, v, C, s>_{fwd} |--> <eps+, left (-v'), C, s[v <><> right v']>_{bck}
% <eps+, v, C, s>_{fwd} |--> <eps+, right v', C, s[v <><> left (-v')]>_{bck}
%
Note that there is no evaluation rule for {{eta+}} in the forward
evaluator. This corresponds to the fact that there is no value of
type {{0}} and hence the forward evaluator can never execute an
{{eta+}}.
\item The rules for {{eta+}} are added to the backward evaluator. A
program executing backwards starts executing forwards after the
execution of the {{eta+}}. Dual to the previous case, there is
no rule for {{eps+}} in the backward evaluator since the output
type of {{eps+}} is {{0}}.
%subcode{opsem} include main
%! columnStyle = rclr
%! fwd = \triangleright
%! bck = \triangleleft
% <eta+, v, C, s>_{bck} |--> <eta+, left (-v'), C, s[v <><> right v']>_{fwd}
% <eta+, v, C, s>_{bck} |--> <eta+, right v', C, s[v <><> left (-v')]>_{fwd}
\end{enumerate}
\end{enumerate}
%% %%%%%%%%%%%%%%%%%%%%%%
%% \subsection{Constructions in {{langRevEE}} }
%% \label{sec:monoidal-constructions}
% \emph{Zero Information.} The intuitive explanation that {{eta+}} and
% hence {{eps+}} introduces no information effects is that {{-b+b}} as a
% type whose value is observable only if you `pay' {{b}} amount of
% information to the {{-b}} branch. When {{b}} amount of information is
% supplied {{(-b+b)+b=b}} amount of information is returned to
% you. (TODO. more needs to be said here.)
% For this reason, our treatment of negative values in the context of
% {{langRev}} is particularly simple. We will have a type $0$ and an
% isomorphism {{0 <-> a + (-a)}} that when used in the left-to-right direction
% allows us to create a value and a corresponding debt out of nothing. Both the
% value and its negative counterpart can flow anywhere in the system. Because
% information is preserved, a closed program (which does not have any
% ``dangling'' negative values) will eventually have to match the negative
% value with some corresponding value, effectively using the isomorphism in the
% right-to-left direction.
\paragraph*{Observability.}
The reductions above allow us to apply a program {{c : b1 <-> b2}} to an
input {{v1 : b1}} to produce a result {{v2 : b2}} on termination. Execution
is well defined only if {{b1}} and {{b2}} are entirely positive types. If
either {{b1}} or {{b2}} is a negative or fractional type, the system has
``dangling'' unsatisfied demands or constraints. For this reason, we
constrain entire programs to have positive non-fractional types. This is
similar to the constraint that Zeilberger imposes to explain intuitionistic
polarity and delimited control~\cite{10.1109/LICS.2010.23}.
% (TODO: So this is a really bad explanation. But this section is really
% important and we need to explain carefully. Maybe an analogy can be
% drawn to Cayley diagrams for groups to point out that the notion of
% `i' is entirely artificial, but it is essential to the discourse.)
% \begin{center}
% \includegraphics{diagrams/shuffle.pdf}
% \end{center}
% We define observables to be only positive types. The outputs of
% programs that output mixed positive and negative types are not
% observable. Also, programs that input mixed positive and negatives
% types are not executable.
% It is not fair to say that negative types flow backwards. The
% following circuits are valid in {{langRevEE}}. It is however proper to
% say that for any type {{b}} that flows in a direction, the type {{-b}}
% flows in the reverse direction.
% Several things we don't know how to do, or are sloppy about. Some of
% these are deep questions to which there is no immeidate answer.
% \begin{itemize}
% \item Is the language without recursive types always terminating? For
% the right symmetry with annihilation, it is desirable to have
% non-termination.
% \item What is the correct lemma for logical reversibility now?
% \item We should say things about information preservation. At least
% more so than what is being said now.
% \item Multiplicative duality gives us a means of talking about
% equivalence classes of values. This should correspond to all sorts
% of other things -- groupoids, Lawvere theories etc. We barely even
% understand these connections.
% \item How does this connect to QC? Is there any strong connection such
% that the SAT solver here is an alternate QC algorithm to Shor's?
% \end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Advanced Example: SAT Solver in {{langRevEE}} }
\label{sec:prog}
\label{sec:sat-solver}
% Additive traces correspond to iteration and multiplicative traces
% correspond to fixpoint of constraints. Addtive traces have been
% studied in the past \cite{infeffects} and here we focus on the
% multiplicative trace.
% The construction presented here is a novel SAT-solver that relies on
% {{trace*}} for its solution. It varies from previous classical SAT solvers
% \amr{must cite ???} in that there is no explicit search operation on the
% boolean space. It also varies from previous quantum SAT solvers \amr{must
% cite ???} in that it does uses a {{trace}} operation to compute a fixpoint
% which is the solution following which an isomorphic clone operation allows us
% to examine the result. Though we focus on SAT here, this construction can be
% generalized to any constraint satisfaction problem whose search space can be
% represented by a recursive type and for which an isomorphic clone operation
% can be constructed.
% can be constructed. Before we start, let us recall two constructions:
We illustrate the expressive power of first-class constraints represented by
fractional types. To understand the intuition, recall the definition of
{{trace}} for the multiplicative fragment of {{langRevEE}}.
Given {{f : a*c <-> b*c }}, we have {{trace* f : a <-> b}}:
{{ trace* f = uniti (;) (id (*) eta*) (;) (f (*) id) (;) (id (*) eps*) (;) unite }}
\noindent This circuit uses {{eta*}} to generate all possible {{c}}-values
together with an associated {{(1/c)}}-constraint. It then applies {{f}} to the
pair {{(a,c)}}. The function {{f}} must produce an output {{(b,c')}} for each
such input. If the input {{c}} and the output {{c'}} are the same they can be
annihilated by {{eps*}}; otherwise the execution gets stuck and this
particular choice of {{c}} is pruned.
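The following toy Haskell model captures this behaviour extensionally over a
finite type {{c}}: enumeration plays the role of {{eta*}} and the fixpoint
filter the role of {{eps*}}. It is a brute-force reading of the description
above, not the unification-based semantics of Sec.~\ref{sec:rat}; the name
\texttt{traceMul} is ours.
\begin{verbatim}
-- Toy, extensional model of trace* over a finite type c: enumerate all
-- c-values (eta*) and keep only the runs where the input c and the output c'
-- coincide (eps*); all other choices of c are pruned.
traceMul :: (Bounded c, Enum c, Eq c) => ((a, c) -> (b, c)) -> a -> [b]
traceMul f a =
  [ b | c <- [minBound .. maxBound]
      , let (b, c') = f (a, c)
      , c == c' ]
\end{verbatim}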
% Annihilation and non-termination describe undefined program
% execution. While non-termination characterizes an iteration that fails
% to terminate, annihilation characterizes a constraint that has no
% fixpoint. The SAT-solver of Sec. \ref{sec:sat-solver} works by
% annihilating only the program states that do not match the boolean
% satisfiability constraint.
% \emph{Annihilation.}
% With {{eta*}} and {{eps*}}, we can construct a {{trace*}} operation
% which gives us the ability to find the fixpoint of a constraint. If
% the constraint has no satisfying values, there is no fixpoint possible
% -- we use the term \emph{annihilation} to describe the corresponding
% undefined state of the program.
A large class of constraint satisfaction problems can be expressed
using {{trace*}}. We illustrate the main ideas with the implementation
of a SAT-solver. We proceed in small steps, reviewing some of the
necessary constructions presented in our earlier
paper~\cite{infeffects}.
%%%%%%%%%%%%%%%%%%%
\subsection{Booleans and Conditionals}
Given any combinator {{c : b <-> b}} we can construct a combinator called
{{if_c : bool*b <->bool*b}} in terms of {{c}}, where {{if_c}} behaves like a
one-armed $\mathit{if}$-expression. If the supplied boolean is {{true}} then
the combinator {{c}} is used to transform the value of type~{{b}}. If the
boolean is {{false}}, then the value of type {{b}} remains unchanged. We can
write down the combinator for {{if_c}} in terms of {{c}} as
{{ dist (;) ((id (*) c) (+) id) (;) factor }}.
\noindent The diagram below shows the input value of type {{(1+1)*b}}
processed by the distribute operator {{dist}}, which converts it into a value
of type {{(1*b)+(1*b)}}. In the {{left}} branch, which corresponds to the
case when the boolean is {{true}} (i.e., the value was {{left ()}}), the
combinator~{{c}} is applied to the value of type~{{b}}. The right branch,
which corresponds to the boolean being {{false}}, passes along the value of
type {{b}} unchanged.
\begin{center}
\scalebox{1.0}{
%%subcode-line{pdfimage}[diagrams/if_c.pdf]
\includegraphics{diagrams/thesis/cnot.pdf}
}
\end{center}
% We will be seeing many more such wiring diagrams in this paper and it is
% useful to note some conventions about them. Wires indicate a
% value that can exist in the program. Each wire, whenever possible, is
% annotated with its type and sometimes additional information to help clarify
% its role. When multiple wires run in parallel, it means that those values
% exist in the system at the same time, indicating pair types. When there is a
% disjunction, we put a {{+}} between the wires.
% Combinators for distribution {{dist}} and factoring {{factor}}
% are represented as triangles with their operator symbols in them. Other
% triangles may be used and, in each case, types or labels will be used to
% clarify their roles. Finally, we don't draw boxes for combinators such as
% {{id}}, commutativity, and associativity, but instead just shuffle the wires
% as appropriate.
The combinator {{if_{not} }} has type {{bool*bool<->bool*bool}} and
negates its second argument if the first argument is {{true}}. This
gate {{if_{not} }} is often referred to as the {{cnot}} gate. A
similar construction that will be useful is {{else_{not} }}, where we
negate the second argument only if the first is {{false}}.
Similarly, we can iterate the construction of {{if_c}} to check several
bits. The gate {{if_{cnot} }}, which we may also write as {{if^2_{not} }},
checks two booleans and negates the result wire only if they are both
{{true}}. The gate {{if^2_{not} }} is well known as the Toffoli gate and is a
universal reversible gate. We can generalize this construction to
{{if^n_{not} }} which checks {{n}} bits and negates the result wire only if
they are all {{true}}.
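As a sanity check on these definitions, here is how the forward directions of
{{if_c}}, {{cnot}}, and the Toffoli gate look when modelled directly on
Haskell booleans; the names are ours and only the forward direction is shown.
\begin{verbatim}
-- Forward direction of if_c on ordinary Haskell values: apply c only when
-- the control boolean is True; cnot and Toffoli arise by iterating it.
ifC :: (b -> b) -> (Bool, b) -> (Bool, b)
ifC c (True,  x) = (True,  c x)
ifC _ (False, x) = (False, x)

cnot :: (Bool, Bool) -> (Bool, Bool)                      -- if_not
cnot = ifC not

toffoli :: (Bool, (Bool, Bool)) -> (Bool, (Bool, Bool))   -- if^2_not
toffoli = ifC cnot   -- negates the innermost bit only if both controls are True
\end{verbatim}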
%%%%%%%%%%%%%%%%%%
\subsection{Cloning}
Although cloning is generally not allowed in reversible languages, it is
possible at the cost of having additional constant inputs. For example,
consider the gate {{else_{not} }}. Generally, the gate maps
{{(false,a)}} to {{(false,not a)}} and {{(true,a)}} to {{(true,a)}}. Focusing
on the cases in which the second input is {{true}}, we get that the gate maps
{{(false,true)}} to {{(false,false)}} and {{(true,true)}} to {{(true,true)}},
i.e., the gate clones the first input. A circuit of {{n}} parallel
{{else_{not} }} gates can hence clone {{n}} bits, consuming {{n}}
{{true}} inputs in the process. Let us call this construction
{{clone^n_{bool} }}.
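A one-bit instance, in the same toy Haskell model used above (forward
direction only), makes the bookkeeping explicit:
\begin{verbatim}
-- else_not negates its second argument only when the first is False;
-- feeding it a constant True ancilla clones the first bit.
elseNot :: (Bool, Bool) -> (Bool, Bool)
elseNot (False, x) = (False, not x)
elseNot (True,  x) = (True,  x)

cloneBool :: Bool -> (Bool, Bool)   -- consumes one True ancilla
cloneBool a = elseNot (a, True)
-- cloneBool False == (False, False);  cloneBool True == (True, True)
\end{verbatim}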
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Construction of the Solver}
The key insight underlying the construction comes from the fact that we can
build \emph{annihilation circuits} such as the one below:
\begin{center}
\scalebox{1.2}{
\includegraphics{diagrams/not_trace.pdf}
}
\end{center}
The circuit constructs a boolean {{b}} and its dual {{1/b}}, negates one of
them, and attempts to satisfy the constraint that they are equal, which
evidently fails.
With a little work, we can modify this circuit so that it annihilates only the
values that fail to satisfy the constraints represented by a SAT-instance
{{f}}. In more detail, an instance of SAT is a function/circuit~{{f}} that,
given some boolean inputs, returns {{true}} or {{false}}, which we interpret
as whether the inputs satisfy the constraints imposed by the structure of
{{f}}. Because we are in a reversible world, our instance of SAT must be
expressed as an isomorphism: this is easily achieved as shown in
Sec.~\ref{sub:f} below. Assuming that {{f}} is expressed as an isomorphism, we
have enough information to reconstruct the input from the output, which can be
done using the adjoint of {{f}}. At this point, we have the top half of the
construction below:
\begin{center}
\scalebox{1.2}{
\includegraphics{diagrams/sat1.pdf}
}
\end{center}
To summarize, the top half of the circuit is the identity function except
that we have also managed to produce a boolean wire labeled
\textsf{satisfied?} that tells us if the inputs satisfy the desired
constraints. We can take this boolean value and use it to decide whether to
negate the control wire or not. Thus, the circuit achieves the following
goal: if the inputs do not satisfy {{f}}, the control wire is negated. We
can now use {{trace*}} to annihilate all these bad values because the control
wire acts like the closed-loop {{not}} in the previous construction.
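The role of the control wire can be made concrete in the same toy,
extensional style used for {{trace*}} above. In the sketch below, which is
ours and takes the SAT instance directly as a boolean function rather than as
the compiled isomorphism of Sec.~\ref{sub:f}, the step function flips the
control bit exactly when the inputs fail {{f}}, so keeping only the runs in
which the control bit returns unchanged prunes precisely the failing inputs.
\begin{verbatim}
-- Toy reading of the annihilation step: satStep flips the control wire
-- exactly when the inputs fail f; traceCtrl keeps only the runs in which
-- the control wire comes back unchanged.
satStep :: ([Bool] -> Bool) -> ([Bool], Bool) -> ([Bool], Bool)
satStep f (bs, ctrl) = (bs, if f bs then ctrl else not ctrl)

traceCtrl :: (([Bool], Bool) -> ([Bool], Bool)) -> [Bool] -> [[Bool]]
traceCtrl g bs = [ bs' | c <- [False, True]
                       , let (bs', c') = g (bs, c)
                       , c == c' ]

-- traceCtrl (satStep and) [True, True]  is non-empty: the inputs satisfy f
-- traceCtrl (satStep and) [True, False] is []       : the run is annihilated
\end{verbatim}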
%%%%%%%%%
\subsection{Final Details}
\label{sub:f}
Any boolean expression {{f : bool^n -> bool}} can be compiled into the
isomorphism {{iso_f : bool^h*bool^n<->bool^g*bool}} where the extra bits
{{bool^h}} and {{bool^g}} are considered as heap and garbage. Constructing
such an isomorphism has been detailed
before~\cite{Toffoli:1980,infeffects}. The important relation to note is that
applying {{iso_f}} to some special heap values and an input \textit{bs}
produces some bits that can be ignored and the same output that {{f}} would
have produced on {{bs}}. We can ensure that the heap has the appropriate
initial values by checking the heap and negating a second control wire if
the values do not match (the dotted part in the diagram below).
\begin{center}
\scalebox{1.2}{
\includegraphics{diagrams/sat2.pdf}
}
\end{center}
% \roshan{Explain how the dotted part is implemented.}
Let us call the above construction, which maps inputs, heap, and
control wires to inputs, heap, and control wires, {{sat_f}}. The
SAT-solver is completed by tracing {{sat_f}} and cloning the
inputs using {{clone^n_{bool} }}.
\begin{center}
\scalebox{1.5}{
\includegraphics{diagrams/sat3.pdf}
}
\end{center}
When the solver is fed inputs initialized to {{true}}, it clones only
those inputs to {{sat_f}} that satisfy {{f}} and the heap
constraints. In the case of unique-SAT the solver will produce exactly
0 or 1 solutions. In the case of general SAT, the solver will produce
solutions as determined by the semantics of the top-level interaction
(see discussion in Sec. \ref{sec:rat}).
%% %%%%%%%%%%%%%%%%%%%
%% \subsection{GoI machine}
%% We can now encode the GoI machine of
%% Mackie~\cite{Mackie2011,DBLP:conf/popl/Mackie95}. \amr{The machine uses bang
%% which allows arbitrary duplication. Can we really do that???}
%% the final equivalence is valid and can be used to actually
%% construct an isomorphism between $x^7$ the type of seven binary
%% trees and $x$ the type of binary trees.
%% Here is one way to view computations involving irrationals that might
%% shed some insight into how we should view recursive data types
%% also. For finite datatypes say of size {{n}} we can say that an
%% element specifies a choice of {{1/n}} and hence contains as much
%% information. For finite things, each value is computationally equal to
%% any other value. Any permutation is possible among the values. Hence
%% there is no metric possible on the values. Recursive data types, such
%% as binary trees, have an infinite number of elements and hence the
%% argument that says that value specifies {{1/n}} units of information
%% is not longer valid. Instead to talk meaningfully about information
%% content, we should talk about a probability distribution on the values
%% of the type. However what shall we base this distribution on? Is there
%% a natural order beyond the unfolding order of the values of a type?
%% In this context, let us go back to the thought that recursive such as
%% {{nat=nat+1}} have no meaningful solutions, whereas trees do.
%% Real numbers in that sense really are a series of bits of unknown
%% information theoretic interpretation -- a sort of {{top}}. Numbers
%% such as {{sqrt(2) }} are very different in the sense that even though
%% they contain an infinite sequence of bits, there is a finite program
%% that generates those bits and its only the cost of computing bits that
%% is infinite. The consistency of arithmetic isn't affected by
%% extending the rational field by adding any specific computable
%% irrational. So it might be possible to design a model of computing
%% where we can indeed deal with them. Such a model will have to explicit
%% thunks for the delayed computation represented by trees and never
%% equate and cancel two infinite computations unless the represent
%% exactly the same infinite computation.
%% TODO.
%% Square root and imaginary types have also appeared in the literature: we
%% relegate the connections to Sec.~\ref{sec:related} and proceed with a simple
%% explanation. We have so far extended the set of types to be the rational
%% numbers. Now we will push this and extend the set of types to algebraic
%% numbers. In other words, we will allow datatypes defined by arbitrary
%% polynomials and allow the roots of such polynomials to be types.
%% Consider first an example in which we want to compute with the sides of a
%% rectangle whose area is 91 and whose length is longer than its width by 6
%% units. One can solve the quadratic equation to determine that the sides are 7
%% and 13 and proceed. This however prematurely forces us to globally solve the
%% constraint. Instead we can let the two sides of the rectangle be $x$ and
%% $x+6$ and use the following equation to capture the desired constraint:
%% \[
%% x^2 + 6x - 91 = 0
%% \]
%% The equation introduces an isomorphism between the type $x^2 + 6x - 91$ and
%% the type $0$. We can now proceed to compute with the unknown $x$, being
%% assured that in a closed program, our computation will eventually be
%% consistent with the solution of the algebraic equation.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Related Work}
\label{sec:related}
The idea of ``negative types'' has appeared many times in the literature and
has often been related to some form of continuations. Fractional types are
less common but have also appeared in relation to parsing natural
languages. Although each of these previous occurrences of negative and
fractional types is somewhat related to our work, our results are
substantially different. To clarify this point, we start by reviewing the
salient points of the major pieces of related work and conclude this section
with a summary contrasting our approach with previous work.
\paragraph*{Declarative Continuations.}
In his Master's thesis~\cite{Filinski:1989:DCI:648332.755574}, Filinski
proposes that continuations are a \emph{declarative} concept. He furthermore
introduces a symmetric extension of the $\lambda$-calculus in
which call-by-value is dual to call-by-name and values are dual to
continuations. In more detail, the symmetric calculus contains a ``value''
fragment and a ``continuation'' fragment which are mirror images. Pairs and
sums are treated as duals in the sense that the ``value'' fragment includes
pairs whose mirror image in the ``continuation'' fragment are sums. In
contrast, our language includes pairs and sums in the value fragment and two
symmetries: one that maps the pairs to fractions and another that maps the
sums to subtractions.
\paragraph*{The Duality of Computation.}
The duality between call-by-name and call-by-value was further investigated
by Selinger using control
categories~\cite{Selinger:2001:CCD:966910.966911}. Curien and
Herbelin~\cite{Curien:2000} also introduce a calculus that exhibits
symmetries between values and continuations and between call-by-value and
call-by-name. The calculus includes the type $A-B$ which is the dual of
implication, i.e., a value of type $A-B$ is a context expecting a function of
type $A \rightarrow B$. Alternatively a value of type $A-B$ is also explained
as a \emph{pair} consisting of a value of type $A$ and a continuation of type
$B$. This is to be contrasted with our interpretation of a value of that type
as \emph{either} a value of $A$ or a demand for a value of type $B$. This
calculus was further analyzed and extended by
Wadler~\cite{Wadler:2003,DBLP:conf/rta/Wadler05}. The extension gives no
interpretation to the subtraction connective and, like the original symmetric
calculus of Filinski, introduces a duality that relates sums to products and
vice-versa.
\paragraph*{Subtractive Logic.}
Rauszer~\cite{springerlink:10.1007/BF02120864,rauszer,rauszer2} introduced a
logic which contains a dual to implication. Her work has been distilled in
the form of \emph{subtractive logic}~\cite{Crolard01} which has recently been
related to coroutines~\cite{Crolard01082004} and delimited
continuations~\cite{Ariola:2009:TFD:1743339.1743381}. In more detail,
Crolard explains the type $A-B$ as the type of \emph{coroutines} with a local
environment of type $A$ and a continuation of type $B$. The description is
complicated by what is essentially the desire to enforce linearity
constraints so that coroutines cannot access the local environment of other
coroutines.
\paragraph*{Negation in Classical Linear Logic.}
Filinski~\cite{Filinski92} uses the negative types of linear logic to model
continuations. Reddy~\cite{Reddy91} generalizes this idea by interpreting the
negative types of linear logic as \emph{acceptors}, which are like
continuations in the sense that they take an input and return no
output. Acceptors however are also similar in flavor to logic variables:
they can be created and instantiated later once their context of use is
determined. Although a formal connection is lacking, it is clear that, at an
intuitive level, acceptors are entities that combine elements of our negative
and fractional types.
\paragraph*{The Lambek-Grishin Calculus.} The ``parsing-as-deduction'' style
of linguistic analysis uses the Lambek-Grishin calculus with the following
types: product, left division, right division, sum, right difference, and
left difference~\cite{Bernardi:2010:CSL:1749618.1749689}. The division and
difference types are similar to our types but because the calculus lacks
commutativity and associativity and only has limited notions of
distributivity, each connective needs a left and right version. The
Lambek-Grishin calculus exhibits two notions of symmetry, but they are unrelated to
ours. In particular, the first notion of symmetry expresses commutativity
and the second relates products to sums and divisions to subtractions. In
contrast, our two symmetries relate sums to subtractions and products to
divisions.
\paragraph*{Our Approach.} The salient aspects of our approach are the
following:
\begin{itemize}
\item Negative and fractional types have an elementary and familiar
interpretation borrowed from the algebra of rational numbers. One can write
any algebraic identity that is valid for the rational numbers and interpret
it as an isomorphism with a clear computational interpretation: negative
values flow backwards and fractional values represent constraints on the
context. None of the systems above has such a natural interpretation of
negative and fractional types.
\item Because we are \emph{not} in the context of the full
$\lambda$-calculus, which allows arbitrary duplication and erasure of
information, values of negative and fractional types are first-class values
that can flow anywhere. The information-preserving computational
infrastructure guarantees that, in a complete program, every negative
demand will be satisfied exactly once, and every constraint imposed by a
fractional value will also be satisfied exactly once. This property is
shared with systems that are based on linear logic; other systems must
impose ad hoc constraints to ensure negative and fractional values are used
exactly once.
\item In contrast to all the work that takes continuations as primitive
entities of negative types, we view continuations as a derived notion that
combines a demand for a value with constraints on how this value will be
used to proceed with the evaluation (to the closest delimiter or to the end
of the program). In other words, we view a continuation as a non-elementary
notion that combines the negative types to demand a value and the
fractional types to explain how this value will be used to continue the
evaluation. As a consequence, the previously observed duality between
values and continuations can be teased apart into two dualities: a duality
between values flowing in one direction or the other, and a duality between
aggregate values composing and decomposing into smaller values. Arguably,
each of the dualities is more natural than a duality that maps regular
values to a conflated notion of negative and fractional types, and hence
requires notions like ``additive pairs'' and ``multiplicative sums.''
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion and Future Work}
\label{sec:conc}
We have extended the language {{langRev}} that expressed computation
in the commutative semiring of whole numbers to {{langRevEE}} that
expresses computation in the field of rationals. Every algebraic
identity that holds for the rational numbers corresponds to a type
isomorphism with a computational interpretation in our model. We have
examined the two function spaces that arise in this model and
developed non-trivial constructions such as a SAT-solver that relies
on a multiplicative trace.
In another sense, however, this paper is about the nature of duality in
computation.
% The concept of duality is deep and significant and in many cases ---
% probably most famously in the divide between classical and intuitionistic
% mathematics --- contain non-trivial assumptions about our worldview. The
% problem of duality in computation and logic, has many facets of which
% `continuation' and `higher order functions' are only one.
The concept of duality is deep and significant: we have opened the
door to considering not one but two notions of duality. Surprisingly,
this makes things substantially simpler. In particular, instead of
conflating pairs as dual to sums, we follow the long-standing tradition
in mathematics of considering fields with two notions of duality: one
for sums and one for pairs. This double notion of duality has a crisp
semantics, a clear computational interpretation, and an
information-theoretic basis.
Our work has barely scratched the surface of an area of computing which has
been explored in depth before, but without the combined reversible
information-preserving framework and the two notions of duality. The new
insights point to several new areas of investigation, of which we mention
the three we consider most significant.
%% \paragraph*{Int Construction} How are we preserving two compact
%% closed structures -- what does this imply about the Int
%% construction.
\paragraph*{Geometry of Interaction (GoI).}
Geometry of Interaction was developed by Girard~\cite{girard1989geometry} as part of
the development of linear logic. It was given a computational interpretation
by Abramsky and Jagadeesan~\cite{Abramsky:1994:NFG:184662.184664}, and was
developed into a reversible model of computing by
Mackie~\cite{Mackie2011,DBLP:conf/popl/Mackie95}. Preliminary investigations
suggest that many of the GoI machine constructions can be simulated in
{{langRevEE}} by treating Mackie's bi-directional wires as pairs of wires in
{{langRevEE}} and replacing the machine's global state with a typed value on
the wire that captures the appropriate state. This connection is exciting
because when viewed through a Curry-Howard lens it suggests that the logical
interpretation of {{langRevEE}} would be a linear-like logic with a notion of
resource preservation and with a natural computational interpretation.
\paragraph*{Computing in the Field of Algebraic Numbers.}
\label{sec:algebraic-field}
Algebraically, the move from {{langRev}} to {{langRevEE}} corresponds to a
move from a ring-like structure to a full field. Our language {{langRevEE}}
captures the structure of one particular field: that of the rational
numbers. As we have seen, computation in this field is quite expressive and
interesting, and yet it has two fundamental limitations: first, it cannot
express any recursive type, and second, it cannot express any datatype
definitions. We believe these to be two orthogonal extensions: recursive
types were considered in our previous paper~\cite{infeffects}; arbitrary
datatypes are, however, even more exciting than plain rationals, as each
datatype definition can be viewed as a polynomial (see below), which
essentially means that we start computing in the field of algebraic numbers,
which includes square roots and imaginary numbers. As crazy as it might seem,
the type $\sqrt{2}$ and even the type $(1/2)+i(\sqrt{3}/2)$ ``make sense.''
In fact the latter type is the solution to the polynomial {{x^2-x+1=0}} which
if re-arranged looks like {{x=1+x*x}} and perhaps more familiarly as the
datatype of binary trees {{@@x.(1+x*x)}}. These types happen to have been
studied extensively following a paper by Blass~\cite{seventrees} which used
the above datatype of trees to infer an isomorphism between seven binary
trees and one!
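For concreteness, the defining equation {{x=1+x*x}} can be written as a pair
of mutually inverse functions. The following Haskell fragment (given purely as
an illustration in a mainstream functional language, not in {{langRevEE}}) is
one such witness:
\begin{verbatim}
-- Illustrative Haskell (not the paper's language): binary trees
-- satisfying t = 1 + t*t, with the evident isomorphism witnessing it.
data T = Leaf | Node T T

unroll :: T -> Either () (T, T)      -- t  ->  1 + t*t
unroll Leaf       = Left ()
unroll (Node l r) = Right (l, r)

roll :: Either () (T, T) -> T        -- 1 + t*t  ->  t
roll (Left ())      = Leaf
roll (Right (l, r)) = Node l r
\end{verbatim}
The ``seven trees in one'' bijection is obtained by composing such rolls and
unrolls with the usual commutativity, associativity, and distributivity
isomorphisms until seven copies of the tree type are rearranged into one.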
We have confirmed that we can extend {{langRevEE}} with the datatype
declaration for binary trees and build a witness for this isomorphism
that works as expected. However, not every isomorphism constructed from
algebraic manipulation is computationally meaningful. To understand the
issue in more detail, consider the following algebraically valid proof
of the isomorphism in question (which repeatedly uses the defining
equation $x^2 = x - 1$):
\[\begin{array}{rclclclcl}
x^3 &=& x^2 x &=& (x-1) x &=& x^2 - x &=& -1 \\
x^6 &=& 1 \\
x^7 &=& x^6 x &=& x
\end{array}\]
The question is why such an algebraic manipulation makes sense type
theoretically, even though the intermediate step asking for an isomorphism
between $x^6$ and $1$ has no computational content. In the setting of
{{langRevEE}}, this isomorphism can be constructed, but it diverges on all
inputs (in both directions). This suggests that, in the field of algebraic numbers,
some algebraic manipulations are somehow
``more constructive'' than others.
A related issue is that not all meaningful recursive types are
meaningful polynomials. For instance, {{nat=@@x.(1+x)}} implies the
polynomial {{x=1+x}}, which has no algebraic solutions without appeal
to more complex structures involving limits, etc.
%% or dually that {{langRevEE}} with recursive types
%%somehow lacks the ability to express all computations that are otherwise
%%algebraically meaningful.
% \begin{itemize}
% \item Maybe the problem is that there is cancellation in the second
% case and cancellation with recursive types is problematic. However
% not all cancellation is. So what gives?
% \item One idea is that irrationals correspond to an infinite amount of
% computational work. If what we are cancelling is not the same
% infinity, then things go wrong.
% \item Chaitin has already shown that reals have inifite amount of
% information which is paradoxical and problematic. This is a
% different issue from that of the rational being infinite
% computations.
% \item Finally, not all meaningful types are meaningful polynomials --
% ex. nat. So this is not the whole story.
% \end{itemize}
\paragraph*{Quantum Computing.}
One understanding of quantum computing is that it exploits the laws of physics
to build faster machines (perhaps). Another more foundational understanding
is that it provides a computational interpretation of physics, and in
particular directly addresses the question of interpretation of quantum
mechanics. In a little-known document, Rozas~\cite{Rozas:1987:CMO:889539}
uses continuations to implement the transactional interpretation of quantum
mechanics~\cite{transactional}, which includes as its main ingredient a
fixpoint calculation between waves or particles traveling forwards and
backwards in time. Our work sheds no light on whether this interpretation is
the ``right one,'' but it is interesting that we can directly realize it using
the primitives of {{langRevEE}}.
% A \emph{time
% traveling} intuition is also applicable here. In a normal cicruit, as
% computational steps are taken values flow from left to right,
% i.e. computational time progresses from left to right. The action of
% {{eps+}} causes values, aka information particles, to flow from right to left
% i.e. backwards in computational time. The interesting thing about this
% interpretation is that particles that travel backwards in time get to see and
% interact with the history of particles that they coexist with. This gives us
% an intuitive interpretation of the isomorphism {{(-b1)*b2<->-(b1*b2)}} (see
% Sec. \ref{sec:specific-constructions}) where it would otherwise seem that
% {{-b1}} and {{b2}} move in opposite directions. The backward flow of the
% {{-b1}} allows it to `see the past' of {{b2}} and is thus equivalent to both
% particles moving backward in time.
The multiplicative structure of {{langRevEE}} also has a direct
connection to entangled quantum particles, or perhaps entangled
particles and anti-particles. The idea of entanglement, that an
action on one particle is ``instantaneously'' communicated to the
other, is analogous to how unifying one value affects its dual pair,
which may be in another part of the computation. Again, our model
sheds no light on whether this is related to how nature computes, but
it is interesting that we can directly realize the idea using
the primitives of {{langRevEE}}.
% natural An analogy
% inspired by quantum mechanics is also applicable here. The operator {{eta*}}
% is a site of fission that creates \emph{an entangled particle and
% anti-particle}. These particles can be thought to be in a
% \emph{superposition} of states determined by their type. Both flow in the
% same direction in time. In each possible world that the particle exists, it
% takes on a specific value inhibiting its type and correspondingly its
% entangled anti-particle takes the dual of the specific value. Since the
% particles are entangled, actions on one (such as transformation by
% application of an isomorphism) affects the other in much the same sense as
% \emph{action at a distance}. The `{{a-o*b}}' functions mentioned in the
% introduction, are first-class values and they correspond to these entagled
% pairs. Finally {{eps*}} operations are sites where corresponding particle and
% anti-particles annihilate each other. While appealing as an analogy, this
% description does not imply (or preclude) any formal connection with Physics.
% Are constructions are very similar to those developed in the context
% of QC. Is this how nature computes? Are the analogies to time
% traveling particles and anti-particles more concrete in some way?
\begin{comment}
--- END --
%% \begin{center}
%% \includegraphics{diagrams/dispatch.pdf}
%% \end{center}
% \subsection{Other}
%% To summarize negative, fractional, square root, and imaginary types all make
%% sense. What they help you accomplish as a programmer is to disassociate
%% global invariants into local ones that can be satisfied independently by
%% subcomputations with no synchronization or communication. A computation
%% producing something of type $a/b$ does not need to concern itself with who is
%% going to supply the missing $b$: it just does its part. Conversely faced with
%% a complicated task, a computation might decide to break it into pieces and
%% demand these pieces using negative types.
%% It is no surprise that these types are closely related to quantum mechanics
%% and that they give us the feel that this is how nature computes. This is
%% speculation however.
%% In any case, in a framework where information can be copied and deleted, none
%% of this makes much sense. It is critical that these constraints and demands
%% can neither be duplicated nor erased. This gives us the maximum
%% ``parallelism'' possible.
%% Say we have not considered recursion in this paper.
%% The simplest way to connect the In a conventional computational model, one
%% might realize this situation by simply writing the identity function: the
%% buyer hands the money to the seller to finish the transaction. The above
%% sequence of isomorphisms implements this identity function in a much more
%% interesting way, however. of the above series of
%% On the producer side, the debt is paid for by the money computation with the
%% identity function. the producer and consumer must somehow share an explicit
%% dependency that allows the value. However in our model, the presence of
%% negative types allows the produced value to satisfy the demand without the
%% producer or consumer even knowing about each other. As is explained in detail
%% in
%% Furthermore, we illustrate their true appeal and expressiveness is brought
%% forth by viewing them in the context of an information-preserving
%% computational model.
%% In addition, we show how these types enrich our computational model, they
%% obey the same laws as the rational numbers.
%% Specifically, isomorphisms enrich our computational model with have an
%% interesting computational interpretation
In particular, linear logic~\cite{Girard87tcs}, among other contributions,
exposed an additive/multiplicative distinction in logical connectives and
rules. In particular, linear logic includes additive disjunctions $\oplus$
and conjunctions $\with$ as well as multiplicative disjunctions $\parr$ and
conjunctions $\otimes$. Duality is also prominent in linear logic: it relates
the additive connectives to each other (the dual of $\oplus$ is $\with$ and
vice-versa) and the multiplicative connectives to each other (the dual of
$\otimes$ is $\parr$ and vice-versa).
%% We furthermore demonstrate that, in our model,
%% programming with these negative and fractional types, is a new ``revolution''
%% breaking dependencies...
Since Filinski, we've had the
idea that values and continuations are like mirror images. In a conventional
language, the negative (continuation) side is implicit and we introduce
information effects on the positive. Trying to recover the duality from this
distorted positive side has always been messy. Now it looks clean because we
have kept the positive side pure.
Continuations made their introduction to the world of programming language
semantics as a mathematical device to model first-class labels and
jumps~\cite{springerlink:10.1023/A:1010026413531}. In a remarkable
development, Filinski~\cite{Filinski:1989:DCI:648332.755574} observed that
--- with the right perspective --- this highly imperative concept was
actually the symmetric dual of values. The heart of the observation is that
values represent entities that flow from producers to consumers while
continuations represent \emph{demands} for such entities, that flow from
consumers to producers. In more detail, Filinski describes continuations as
representing ``the \emph{lack} or \emph{absence} of a value, just as having a
negative amount of money represents a debt.'' He then proceeds to construct a
language where values and continuations are treated truly symmetrically. To
that end, he abandons the $\lambda$-calculus amalgamation of functions and
values and distinguishes between three different syntactic classes:
functions, values, and continuations, with the property that any function can
be used either as a value transformer or as a continuation transformer.
This highly intuitive and appealing idea was further explored and refined by
many authors~\cite{Griffin:1989:FNC:96709.96714, Curien:2000,
Wadler:2003, DBLP:conf/rta/Wadler05}. Yet, despite its appeal, the duality
between values and continuations
%% Computationally, symmetry exhibits itself as a duality between two concepts.
%% Symmetry is pervasive in both natural and man-made environments.
In 1989, Filinski~\cite{Filinski:1989:DCI:648332.755574} observed that values
and continuations are dual notions. This observation was followed by numerous
%% We introduced this thesis that computation should be based on isomorphisms
%% that preserve information~\cite{infeffects}. Since Filinski, we've had the
%% idea that values and continuations are like mirror images. In a conventional
%% language, the negative (continuation) side is implicit and we introduce
%% information effects on the positive. Trying to recover the duality from this
%% distorted positive side has always been messy. Now it looks clean because we
%% have kept the positive side pure.
%% In a technical sense, this paper extends the language of isomorphisms
%% {{langRev}}, with duality. Unlike Linear logic \cite{Girard87tcs} and
%% other systems which have one notion of duality over additive and
%% multiplicative components, {{langRevEE}} has two notions of duality
%% -- an additive duality over the monoid {{(0, +)}} and a multiplicative
%% duality over the monoid {{(1, *)}}. Each axis of duality also give us
%% a function space and hence {{langRevEE}} has an additive function
%% space corresponding to a notion of control or backtracking and a
%% multiplicative function space corresponding to a notion of unification
%% or constraint satisfaction.
The world of computation we are describing has:
\begin{itemize}
\item suppliers,
\item consumers, and
\item bi-directional transformations
\end{itemize}
This is the same world described by the papers on the duality of computation
but that work only scratched the surface! We have the following features:
\begin{itemize}
\item we can start from the supplier and push the values towards the
consumer (call-by-value in the duality of computation papers)
\item we can start from the consumer and pull the values from the suppliers
(call-by-name in the duality of computation papers)
\item we can combine the pushing and pulling and values using eta/epsilon for
sum types; these allow us to at any point in the middle of the computation
create out of nothing a value to send to the consumers and a demand to send
to the suppliers.
\item we can break a big data structure into fragments described by
fractional types; the suppliers and consumers can produce and consume the
pieces completely independently of each other. Eventually the pieces will
fit together at the consumer to produce the desired output.
\item we can break a bi-directional transformation into pieces using square
roots
\item we can take into account that values have phase (complex numbers),
i.e., it is not that they flow towards the consumer or just towards the
suppliers; they can be flowing in direction that ``30 degrees'' towards the
consumer for example.
\end{itemize}
%% So it is all about breaking dependencies in some sense to allow for maximum
%% autonomy (parallelism) of subcomputations. It is probably the case that to
%% make full use of square root types and imaginary types, we have to move to a
%% vector space. If that's the case, we should probably leave this stuff out and
%% focus on negative and fractional types and only have a short discussion of
%% the polynomials restricted to seven trees in one and similar issues.
%% The conventional idea is to divide the world into a ``real'' one and a
%% ``virtual'' one. In the ``real'' world, we can define datatypes like
%% \verb|t=t^2+1| but we don't have additive inverses so it makes no sense to
%% talk of negative types and we can't rearrange the terms in the datatype
%% definition. However the observation is that we can map these datatypes to a
%% virtual world that has more structure (a ring that provides additive inverses
%% or a field that also provides multiplicative inverses) and then perform
%% computations in the ring/field. If we perform computations in the ring, then
%% some of these will use additive inverses in ways that cannot be mapped back
%% to the ``real'' world. Much of current research attempts to characterize
%% which computations done in the ring are valid isomorphisms between datatypes
%% in the ``real'' world. This is nice but is not what I am after. In fact I am
%% not interested in the ring or the semiring at all. I am interested in the
%% field and I want this field to be \textbf{the real world.} This is partly
%% motivated by the fact that Quantum Mechanics seems to demand an underlying
%% field and more generally that the field provides the maximum generality in
%% slicing and dicing computations. So assuming I live in a field and that the
%% negative, fractional, square root, and imaginary types are all ``real,'' how
%% do I compute in this field? Clearly there will be constraints on
%% ``measurement'' in the sense that a full program cannot produce any of the
%% crazy types but that's done outside the formalism in some sense just as in
%% Quantum Computing. The main question I am after is how to compute in this
%% field with first-class negative, fractional, etc. types. As I mentioned in my
%% previous email, we can produce programs that have types \verb|t^3 <-> -1| and
%% they ``run'' (but only to give infinite loops).
%% So when a programmer writes the datatype declaration \verb|t = t^2+1|,
%% if we allow negative etc. then this is effectively writing
%% \verb|t = cubicroot{-1}|. If we are in the field then computations
%% that manipulate these trees can be sliced and diced even at interfaces
%% that expose the cubic root and the imaginary types.
%% Future work: develop a type system for a ``normal language'' that has
%% negative, fractional, etc. types as first-class types. More long term,
%% instead of adding one polynomial at a time, we can go to an algebraically
%% closed field. The complex numbers is an obvious choice but I would rather go
%% to something computable like the field of algebraic numbers. Is the adele
%% ring or the p-adics relevant here?
We show a deep symmetry between functions and delimited continuations, values
and continuations that arises in {{langRev}} in a manner that is reminiscent
of Filinski's Symmetric \lcal ~\cite{Filinski:1989:DCI:648332.755574}. The
symmetry arises by extending {{langRev}} with a notion of additive duality
over the monoid {{(+, 0)}} by including {{eta+}} and {{eps+}} operators of
Compact Closed Categories. The resulting dual types, which we denote {{-b}},
have a time traveling ``backward information flow'' interpretation and allow
for the encoding of higher-order function and iteration via the construction
of trace operators, thereby making the extended language {{langRevEE}} a
Turing-complete reversible programming language with higher-order functions
and first-class delimited continuations.
%% We introduced this thesis that computation should be based on isomorphisms
%% that preserve information~\cite{infeffects}. Since Filinski, we've had the
%% idea that values and continuations are like mirror images. In a conventional
%% language, the negative (continuation) side is implicit and we introduce
%% information effects on the positive. Trying to recover the duality from this
%% distorted positive side has always been messy. Now it looks clean because we
%% have kept the positive side pure.
%% The way to think about something of type $A$ is that it is a value we have
%% produced. The way to think about something of type $-A$ is that is a value we
%% have already consumed.
Other interpretations of the types of think about. The first one is
arithmetic obviously. Another one is languages consisting of sets of
string. The type 0 is the empty set, the type 1 is the set containing the
empty word, the $+$ constructor corresponds to union, and the $*$ constructor
corresponds to concatenation. The constructor $-$ would not correspond to set
difference however. It would correspond to marking the elements in the set as
``consumed'' so that if we take the union and a ``consumed'' element appears
in the other set, the two cancel. This makes it clear that concatenating a
produced $a$ and a consumed $b$ is not the same as concatenating a consumed
$a$ and a produced $b$. They really need to be kept separate. Incidentally,
division would be defined as follows:
\[
L_1 / L_2 = \{ x ~|~ xy \in L_1 \mbox{~for~some~} y \in L_2 \}
\]
Filinski proposes that continuations are a \emph{declarative} concept. He,
furthermore, introduces a symmetric extension of the $\lambda$-calculus in
which values and continuations are treated as opposites. This is essentially
what we are proposing with one fundamental difference: our underlying
language is not the $\lambda$-calculus but a language of pure isomorphisms in
which information is preserved. This shift of perspective enables us to
distill and generalize the duality of values and continuations: in
particular, in the conventional $\lambda$-calculus setting values and
continuations can be erased and duplicated which makes it difficult to
maintain the correspondence between a value and its negative counterpart.
The idea of using negative types to model information flowing backwards,
demand for values, continuations, etc. goes back to at Filinski's thesis. We
recall these connections below but we first note that all these systems are
complicated because in all these systems information can be ignored,
destroyed, or duplicated. Clearly the possibility of erasure of information
would mean that our credit card transaction is incorrect. In our work,
information is maintained and hence we have a guarantee that, in a closed
program, the debt must be accounted and paid for.
Much of previous work builds on the idea that there is one duality in
computation between values and continuations which manifests itself as a
duality between the call-by-value and call-by-name parameter-passing
mechanisms. The former mechanism focuses on evaluating expressions to values
even if these values are not demanded by the context; the latter focuses on
evaluating expressions to continuations even if these continuations might be
aborted. The idea that a continuation is dual to a value is intuitive but
then one would expect that the sum of two values naturally corresponds to the
sum of two continuations and that the product of two values naturally
corresponds to the product of two continuations.
check and say that our work teases the continuations into a negative part
(which simply demands a value) and a fractional part (which imposes
constraints on how this value will be used). So something like $-1/c$ is
needed to express a conventional continuation. Having two dualities makes the
whole calculus natural and symmetric.
Consider a continuation that takes $x$ and $y$ and swaps them. It can't be
expressed using two conventional continuations because the demand and the way
it is used are entangled together.
In accounts that are linear, the value and continuation that comprise the
subtractive type need to be constrained to ``stay together.'' This can be
achieved by various restrictions. In this work we have no such constraints,
the negative value can flow anywhere. The entire system guarantees that any
closed program would have to account for it. We don't have to introduce
special constraints to achieve that. Zeilberger in the paper on polarity and
the logic of delimited continuations uses polarized logic: he shows that if
positive and negative values are completely symmetric except that answer
types are positive, then the framework accommodates delimited
continuations. But he interprets negative values are control operators, or as
values defined by the shape of their continuations. We simply interpret
values of negative type as values flowing in the ``other'' direction.
This is essentially what we are proposing with one fundamental difference:
our underlying language is not the $\lambda$-calculus but a language of pure
isomorphisms in which information is preserved. This shift of perspective
enables us to distill and generalize the duality of values and continuations:
in particular, in the conventional $\lambda$-calculus setting values and
continuations can be erased and duplicated which makes it difficult to
maintain the correspondence between a value and its negative counterpart. In
contrast, in our setting, one can start from the empty type $0$, introduce a
positive value and its negative counterpart, and let each of these flow in
arbitrary ways. The entire framework guarantees that neither the value nor
its negative counterpart will be deleted or duplicated and hence that, in any
closed program, the ``debt'' corresponding to the negative value is paid off
exactly once. The forward and backward executions in our framework correspond
to call-by-value and call-by-name. This duality was observed by Filinski and
others following him but it is particularly clean in our framework.
\paragraph*{Logic Programming and Backtracking.}
This is a constrained form of backtracking and a constrained form of
logic programming.
\paragraph*{Linear Logic and GoI.}
Say something.
\paragraph*{Int Construction.}
For a traced monoidal category {{C}} the Int construction produces a Compact
Closed Category called Int {{C}} \cite{joyal1996traced}. Further we know
that the target of the Int construction is isomorphic to the target of \G
construction of Abramsky \cite{Abramsky96:0} from Haghverdi. However, note
that the {{langRevEE}} is not the same as the image of the Int construction
on {{langRevT}}, since the later lacks a multiplicative tensor that
distributes over the additive tensor in Int {{langRevT}}.
once we combine the two structures, we seem to retain the monoidal
structure. How!?
\end{comment}
\acks We thank Jacques Carette for stimulating discussion, and Michael Adams,
Will Byrd, Lindsey Kuper, and Yin Wang for helpful comments and
questions. This project was partially funded by Indiana University's Office
of the Vice President for Research and the Office of the Vice Provost for
Research through its Faculty Research Support Program. We also acknowledge
support from Indiana University's Institute for Advanced Study.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{small}
\bibliographystyle{abbrvnat}
\bibliography{cites}
\end{small}
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Declaring packages and formatting
\documentclass[a4paper]{article}
\usepackage{graphicx} % For graphics
\usepackage{amsmath} % For mathematics
\usepackage{setspace} % For doublespacing
\usepackage{fancyhdr} % For headers
\usepackage{siunitx} % For SI units
\usepackage[scale=0.75]{geometry} % For wider pages
\usepackage[colorlinks = true, urlcolor = blue, linkcolor = black]{hyperref} % For hyperlinks
% Let's begin the document
\begin{document}
% Let's set up our headers, footers and doublespacing
\pagestyle{fancy}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0.4pt}
\renewcommand{\subsectionmark}[1]{}
\renewcommand{\sectionmark}[1]{\markboth{#1}{}}
\cfoot{Last Updated -- \today \\ Tim Snow -- \href{http://www.cunninglemon.com}{http://www.cunninglemon.com} \\ \vspace{1em} \includegraphics[height=2em]{Graphics/License}}
\rfoot{\thepage}
\doublespacing
\begin{centering}
\section*{UV Analyser}
\label{sec:uv_analyser}
\end{centering}
\begin{figure}[ht!]
\centering
\includegraphics[height=6em]{Graphics/UVIcon}
\end{figure}
\noindent This program has been designed to be small and simple; it uses the Beer-Lambert law, equation \ref{eq:concentrationToAbs}, to calculate the absorbance, A, for a sample with a known extinction coefficient, $\varepsilon$, and molecular weight, M$_{\text{w}}$.
\begin{equation}
\text{A} = \varepsilon \times l \times \left( \frac{c}{\text{M}_{\text{w}}} \right)
\label{eq:concentrationToAbs}
\end{equation}
\noindent It can also calculate the concentration of a sample from its absorbance, using equation \ref{eq:absToConcentration}.
\begin{equation}
c = \left( \frac{\text{A}}{l \times \varepsilon} \right) \times \text{M}_{\text{w}}
\label{eq:absToConcentration}
\end{equation}
\noindent It should be noted that the output from the Beer-Lambert law is usually in \si{\mole\per\liter}; however, this program requests a molecular weight so that it can convert the result to \si{\gram\per\liter}, which is directly equivalent to \si{\milli\gram\per\milli\liter}, commonly the unit of interest. If a result in \si{\mole\per\liter} is desired, simply enter 1 as the molecular weight.
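\noindent As a worked example (the numbers here are purely illustrative and are not taken from the program): for a sample with an assumed extinction coefficient of $\varepsilon = 44000$ \si{\liter\per\mole\per\centi\meter}, molecular weight M$_{\text{w}} = 66000$ \si{\gram\per\mole}, path length $l = 1$ \si{\centi\meter} and measured absorbance A $= 0.5$, equation \ref{eq:absToConcentration} gives $c = \left( \frac{0.5}{1 \times 44000} \right) \times 66000 = 0.75$ \si{\milli\gram\per\milli\liter}.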
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{Graphics/ScreenOne} \includegraphics[width=0.4\textwidth]{Graphics/ScreenTwo}
\end{figure}
\noindent The figure on the left shows the program startup screen. Simply entering numbers, as shown in the figure on the right, will provide the requested result in either Abs or \si{\milli\gram\per\milli\liter}, depending on which tab the user has selected, for a given extinction coefficient and molecular weight.
\clearpage
\begin{centering}
\section*{Requirements and License}
\label{sec:requirements_and_license}
\end{centering}
\noindent This software should work with any Macintosh computer running Mac OS 10.10 or later and is released under the BSD 3-Clause license, as follows:
\begin{figure}[ht!]
\centering
\includegraphics[height=6em]{Graphics/BSD}
\end{figure}
\begin{quote}
{\it
Copyright \textcopyright 2015, Tim Snow\\
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the copyright notice, this list of conditions and the following disclaimer.\\
2. Redistributions in binary form must reproduce the copyright notice, this list of conditions and the disclaimer found in the license file and/or other materials provided with the distribution.\\
3. Neither authors' names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
}
\end{quote}
\end{document}
\documentclass[12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{lscape}
\usepackage{graphicx}
\usepackage{stfloats}
\usepackage{float}
\usepackage{import}
\usepackage{adjustbox}
\usepackage{hyperref}
\usepackage{apacite}
\usepackage{fancyhdr}
\pagestyle{fancy}
\lhead{Niklas Lundberg}
\rhead{[email protected]}
\setlength{\parindent}{0em}
\setlength{\parskip}{1em}
\title{Structural Operational Semantics}
\author{Niklas Lundberg \\ [email protected]}
\date{\today}
\begin{document}
\maketitle
\newpage
\section{SOS}
\subsection{i32}
\begin{equation}
\langle e, \sigma \rangle\Downarrow n
\end{equation}
\subsection{bool}
\begin{equation}
\langle e, \sigma \rangle\Downarrow b
\end{equation}
\subsection{Unop}
\subsubsection{Not}
\begin{equation}
\frac{\langle b, \sigma \rangle\Downarrow false}
{\langle !b, \sigma \rangle\Downarrow true }
\end{equation}
\begin{equation}
\frac{\langle b, \sigma \rangle\Downarrow true}
{\langle !b, \sigma \rangle\Downarrow false }
\end{equation}
\subsubsection{Sub}
\begin{equation}
\frac{\langle e, \sigma \rangle\Downarrow n}
{\langle -e, \sigma \rangle\Downarrow - n }
\end{equation}
\subsection{Binop}
\subsubsection{Add}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2}
{\langle e1 + e2, \sigma \rangle\Downarrow n1 + n2}
\end{equation}
\subsubsection{Sub}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2}
{\langle e1 - e2, \sigma \rangle\Downarrow n1 - n2}
\end{equation}
\subsubsection{Div}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2}
{\langle e1 / e2, \sigma \rangle\Downarrow n1 / n2}
\end{equation}
\subsubsection{Multiplication}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2}
{\langle e1 * e2, \sigma \rangle\Downarrow n1 * n2}
\end{equation}
\subsubsection{Mod}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2}
{\langle e1 \% e2, \sigma \rangle\Downarrow n1 \% n2}
\end{equation}
\subsubsection{And}
\begin{equation}
\frac{\langle b1, \sigma \rangle\Downarrow false \: \langle b2, \sigma \rangle\Downarrow false}
{\langle b1 \&\& b2, \sigma \rangle\Downarrow false}
\end{equation}
\begin{equation}
\frac{\langle b1, \sigma \rangle\Downarrow true \: \langle b2, \sigma \rangle\Downarrow false}
{\langle b1 \&\& b2, \sigma \rangle\Downarrow false}
\end{equation}
\begin{equation}
\frac{\langle b1, \sigma \rangle\Downarrow false \: \langle b2, \sigma \rangle\Downarrow true}
{\langle b1 \&\& b2, \sigma \rangle\Downarrow false}
\end{equation}
\begin{equation}
\frac{\langle b1, \sigma \rangle\Downarrow true \: \langle b2, \sigma \rangle\Downarrow true}
{\langle b1 \&\& b2, \sigma \rangle\Downarrow true}
\end{equation}
\subsubsection{Or}
\begin{equation}
\frac{\langle b1, \sigma \rangle\Downarrow false \: \langle b2, \sigma \rangle\Downarrow false}
{\langle b1 || b2, \sigma \rangle\Downarrow false}
\end{equation}
\begin{equation}
\frac{\langle b1, \sigma \rangle\Downarrow true \: \langle b2, \sigma \rangle\Downarrow false}
{\langle b1 || b2, \sigma \rangle\Downarrow true}
\end{equation}
\begin{equation}
\frac{\langle b1, \sigma \rangle\Downarrow false \: \langle b2, \sigma \rangle\Downarrow true}
{\langle b1 || b2, \sigma \rangle\Downarrow true}
\end{equation}
\begin{equation}
\frac{\langle b1, \sigma \rangle\Downarrow true \: \langle b2, \sigma \rangle\Downarrow true}
{\langle b1 || b2, \sigma \rangle\Downarrow true}
\end{equation}
\subsubsection{Not equal}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2 \: n1 \neq n2}
{\langle e1 != e2, \sigma \rangle\Downarrow true}
\end{equation}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n \: \langle e2, \sigma \rangle\Downarrow n}
{\langle e1 != e2, \sigma \rangle\Downarrow false}
\end{equation}
\subsubsection{Equal}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2 \: n1 \neq n2}
{\langle e1 == e2, \sigma \rangle\Downarrow false}
\end{equation}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n \: \langle e2, \sigma \rangle\Downarrow n}
{\langle e1 == e2, \sigma \rangle\Downarrow true}
\end{equation}
\subsubsection{Less than or equal}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2}
{\langle e1 <= e2, \sigma \rangle\Downarrow n1 <= n2}
\end{equation}
\subsubsection{Greater than or equal}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2}
{\langle e1 >= e2, \sigma \rangle\Downarrow n1 >= n2}
\end{equation}
\subsubsection{Less than}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2}
{\langle e1 < e2, \sigma \rangle\Downarrow n1 < n2}
\end{equation}
\subsubsection{Greater than}
\begin{equation}
\frac{\langle e1, \sigma \rangle\Downarrow n1 \: \langle e2, \sigma \rangle\Downarrow n2}
{\langle e1 > e2, \sigma \rangle\Downarrow n1 > n2}
\end{equation}
\subsection{Assignment}
\begin{equation}
\frac{\langle e, \sigma \rangle\Downarrow n}
{\langle x := e, \sigma \rangle\Downarrow \sigma [ x := n ]}
\end{equation}
\subsection{Variable}
\begin{equation}
\sigma [ x := n ] = n
\end{equation}
\subsection{If}
\begin{equation}
\frac{\langle b, \sigma \rangle\Downarrow true \: \langle c1, \sigma \rangle\Downarrow \sigma'}
{\langle \textbf{if } b \textbf{ then } c1 \textbf{ else } c2, \sigma \rangle\Downarrow \sigma'}
\end{equation}
\begin{equation}
\frac{\langle b, \sigma \rangle\Downarrow false \: \langle c2, \sigma \rangle\Downarrow \sigma''}
{\langle \textbf{if } b \textbf{ then } c1 \textbf{ else } c2, \sigma \rangle\Downarrow \sigma''}
\end{equation}
\subsection{While}
\begin{equation}
\frac{\langle b, \sigma \rangle\Downarrow false }
{\langle \textbf{while } b \textbf{ do } c, \sigma \rangle\Downarrow \sigma}
\end{equation}
\begin{equation}
\frac{\langle b, \sigma \rangle\Downarrow true \: \langle c, \sigma \rangle\Downarrow \sigma' \langle \textbf{while } b \textbf{ do } c, \sigma' \rangle\Downarrow \sigma'' }
{\langle \textbf{while } b \textbf{ do } c, \sigma \rangle\Downarrow \sigma''}
\end{equation}
\subsection{Function call}
\begin{equation}
\frac{\langle c, \sigma \rangle\Downarrow \sigma' }
{\langle \textbf{call } c, \sigma \rangle\Downarrow \sigma'}
\end{equation}
\subsection{Return}
\begin{equation}
\frac{\langle c, \sigma \rangle\Downarrow \sigma' }
{\langle \textbf{return } c, \sigma \rangle\Downarrow \sigma'}
\end{equation}
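\subsection{Interpreter sketch (illustrative)}
The rules above can be read as a recursive evaluator. The following Haskell
sketch implements a representative subset of them (i32, bool, the unary
operators, Add, Sub, Mul, less-or-equal, assignment, if and while) as a
big-step interpreter. All names here are invented for illustration; this is
not part of any compiler, and a sequencing command is added only so that a
small example program can be written.
\begin{verbatim}
-- Illustrative big-step interpreter for a subset of the rules above.
import qualified Data.Map as Map

type State = Map.Map String Int       -- sigma: variables to i32 values

data AExpr = Num Int | Var String | Neg AExpr
           | Add AExpr AExpr | Sub AExpr AExpr | Mul AExpr AExpr

data BExpr = BTrue | BFalse | Not BExpr
           | And BExpr BExpr | Or BExpr BExpr | Leq AExpr AExpr

data Cmd = Assign String AExpr | If BExpr Cmd Cmd
         | While BExpr Cmd | Seq Cmd Cmd   -- Seq added for the example

evalA :: AExpr -> State -> Int        -- <e, sigma> evaluates to n
evalA (Num n)   _ = n
evalA (Var x)   s = Map.findWithDefault (error ("unbound " ++ x)) x s
evalA (Neg e)   s = negate (evalA e s)
evalA (Add a b) s = evalA a s + evalA b s
evalA (Sub a b) s = evalA a s - evalA b s
evalA (Mul a b) s = evalA a s * evalA b s

evalB :: BExpr -> State -> Bool       -- <b, sigma> evaluates to true/false
evalB BTrue     _ = True
evalB BFalse    _ = False
evalB (Not b)   s = not (evalB b s)
evalB (And a b) s = evalB a s && evalB b s
evalB (Or a b)  s = evalB a s || evalB b s
evalB (Leq a b) s = evalA a s <= evalA b s

evalC :: Cmd -> State -> State        -- <c, sigma> evaluates to sigma'
evalC (Assign x e) s = Map.insert x (evalA e s) s
evalC (If b c1 c2) s = if evalB b s then evalC c1 s else evalC c2 s
evalC (While b c)  s = if evalB b s
                         then evalC (While b c) (evalC c s)  -- unfold once
                         else s
evalC (Seq c1 c2)  s = evalC c2 (evalC c1 s)

-- x := 0; while x <= 4 do x := x + 1   yields   x = 5
main :: IO ()
main = print (Map.toList (evalC prog Map.empty))
  where prog = Seq (Assign "x" (Num 0))
                   (While (Leq (Var "x") (Num 4))
                          (Assign "x" (Add (Var "x") (Num 1))))
\end{verbatim}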
\end{document}
% !TeX root = constructions.tex
\part{Origami}\label{p.origami}
\chapter{Axioms}\label{c.axioms}
Each axiom states that a \emph{fold} exists that will place given points and lines onto points and lines, such that certain properties hold. The term fold comes from the origami operation of folding a piece of paper, but here it is used to refer to the geometric line that would be created by folding the paper.
The axioms are called the \emph{Huzita-Hatori axioms} \cite{wiki:hh}, although their final form resulted from the work of several mathematicians. Lee \cite[Chapter~4]{hwa} is a good overview of the mathematics of origami, while Martin \cite[Chapter~10]{martin} is a formal development. The reader should be aware that, \emph{by definition}, folds result in \emph{reflections}. Given a point $p$, its reflection around a fold $l$ results in a point $p'$, such that $l$ is the perpendicular bisector of the line segment $\overline{pp'}$:
\medskip
\begin{center}
\begin{tikzpicture}[scale=1.3]
\coordinate (P1) at (2,2);
\coordinate (P1P) at (6,4);
\coordinate (mid) at (4,3);
\draw[rotate=30] (mid) rectangle +(8pt,8pt);
\coordinate (m1) at ($(P1)!.5!(mid)$);
\coordinate (m2) at ($(mid)!.5!(P1P)$);
\draw[thick] (m1) -- +(120:4pt);
\draw[thick] (m1) -- +(-60:4pt);
\draw[thick] (m2) -- +(120:4pt);
\draw[thick] (m2) -- +(-60:4pt);
\draw[thick] (P1) -- (P1P);
\draw[very thick,dashed] (4.7,1.6) -- node[very near end,right,yshift=4pt] {$l$} (3.5,4);
\fill (P1) circle(1.2pt) node[above left] {$p$};
\fill (P1P) circle(1.2pt) node[above left] {$p'$};
\fill (mid) circle(1.2pt);% node[below,xshift=-2pt,yshift=-8pt] {$p_i$};
\draw[very thick,dotted,->,bend right=50] (2,1.8) to (6,3.8);
\end{tikzpicture}
\end{center}
\medskip
In the diagrams, given lines are solid, folds are dashed, auxiliary lines are dotted, and dotted arrows indicate the direction of folding the paper.
\newpage
\section{Axiom 1}\label{s.ax1}
\textbf{Axiom}
Given two distinct points $p_1=(x_1,y_1)$, $p_2=(x_2,y_2)$, there is a unique fold $l$ that passes through both of them.
%\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=1.2]
\draw[step=10mm,white!50!black,thin] (-1,-1) grid (8,6);
\draw[thick] (-1,0) -- (8,0);
\draw[thick] (0,-1) -- (0,6);
\foreach \x in {0,...,8}
\node at (\x-.2,-.2) {\sm{\x}};
\foreach \y in {1,...,6}
\node at (-.2,\y-.3) {\sm{\y}};
\coordinate (P1) at (2,2);
\coordinate (P2) at (6,4);
\draw[very thick,dashed] ($(P1)!-.75!(P2)$) -- node[very near end,below] {$l$} ($(P1)!1.5!(P2)$);
\fill (P1) circle(2pt) node[above left] {$p_1$};
\fill (P2) circle(2pt) node[above left] {$p_2$};
\draw[very thick,dotted,->,bend left=30] (2,5) to (4,1);
\end{tikzpicture}
\end{center}
%\end{figure}
\textbf{Derivation of the equation of the fold}
The equation of fold $l$ is derived from the coordinates of $p_1$ and $p_2$: the slope is the quotient of the differences of the coordinates and the $y$-intercept is derived from $p_1$:
\begin{equation}
y - y_1 = \disfrac{y_2-y_1}{x_2-x_1}(x-x_1)\,.
\end{equation}
\vspace*{-3ex}
%\newpage
\textbf{Example}
Let $p_1=(2,2), p_2=(6,4)$. The equation of $l$ is:
\begin{form}{1.4}
y-2&=&\disfrac{4-2}{6-2}(x-2)\\
y&=&\disfrac{1}{2}x+1\,.
\end{form}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{Axiom 2}\label{s.ax2}
\textbf{Axiom}
Given two distinct points $p_1=(x_1,y_1)$, $p_2=(x_2,y_2)$, there is a unique fold $l$ that places $p_1$ onto $p_2$.
%\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=1.2]
\draw[step=10mm,white!50!black,thin] (-1,-1) grid (8,6);
\draw[thick] (-1,0) -- (8,0);
\draw[thick] (0,-1) -- (0,6);
\foreach \x in {0,...,8}
\node at (\x-.2,-.2) {\sm{\x}};
\foreach \y in {1,...,6}
\node at (-.2,\y-.3) {\sm{\y}};
\coordinate (P1) at (2,2);
\coordinate (P2) at (6,4);
\coordinate (mid1) at ($(P1)!.5!(P2)$);
\coordinate (mid2) at ($(P1)!.5!(P2)+(-1,2)$);
\draw[rotate=29] (mid1) rectangle +(8pt,8pt);
\draw[very thick,dotted] (P1) -- (P2);
\draw[very thick,dashed] ($(mid1)!-1.6!(mid2)$) --
node[very near end,left,yshift=-12pt] {$l$} ($(mid1)!1.4!(mid2)$);
\fill (P1) circle(2pt) node[above left] {$p_1$};
\fill (P2) circle(2pt) node[above left] {$p_2$};
\draw[very thick,dotted,->,bend right=50] (2,1.8) to (6,3.8);
\end{tikzpicture}
\end{center}
%\end{figure}
\textbf{Derivation of the equation of the fold}
The fold $l$ is the perpendicular bisector of $\overline{p_1p_2}$. Its slope is the negative reciprocal of the slope of the line connecting $p_1$ and $p_2$. $l$ passes through the midpoint between the points:
\begin{equation}
y - \disfrac{y_1+y_2}{2} = -\disfrac{x_2-x_1}{y_2-y_1}\left(x-\disfrac{x_1+x_2}{2}\right)\,.\label{eq.midpoint1}
\end{equation}
\textbf{Example}
Let $p_1=(2,2), p_2=(6,4)$. The equation of $l$ is:
\begin{form}{1.4}
y-\left(\disfrac{2+4}{2}\right)&=&-\disfrac{6-2}{4-2}\left(x-\left(\disfrac{2+6}{2}\right)\right)\\
y&=&-2x+11\,.
\end{form}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{Axiom 3}\label{s.ax3}
\textbf{Axiom}
Given two lines $l_1$ and $l_2$, there is a fold $l$ that places $l_1$ onto $l_2$.
%\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=1.2]
\draw[step=10mm,white!50!black,thin] (-1,-1) grid (8,7);
\draw[thick] (-1,0) -- (8,0);
\draw[thick] (0,-1) -- (0,7);
\foreach \x in {0,...,8}
\node at (\x-.2,-.2) {\sm{\x}};
\foreach \y in {1,...,7}
\node at (-.2,\y-.3) {\sm{\y}};
\coordinate (L1a) at (2,2);
\coordinate (L1b) at (4,6);
\draw[very thick] (L1a) --
node[very near start,right,yshift=-4pt] {$l_1$} (L1b);
\draw[thick,dotted,name path=l1] ($(L1a)!-.75!(L1b)$) --
($(L1a)!1.25!(L1b)$);
\coordinate (L2a) at (7,1);
\coordinate (L2b) at (4,4);
\draw[very thick] (L2a) -- (L2b);
\draw[thick,dotted,name path=l2] ($(L2a)!-.3!(L2b)$) --
node[very near start,above,xshift=-6pt,yshift=8pt] {$l_2$}
($(L2a)!2!(L2b)$);
\path [name intersections = {of = l1 and l2, by = {PM}}];
\fill (PM) circle(2pt) node[below left,xshift=-9pt,yshift=-7pt] {$p_i$};
\node[above right,xshift=10pt,yshift=4pt] at (PM) {$\alpha$};
\node[below right,xshift=10pt] at (PM) {$\alpha$};
\node[above left,xshift=-3pt,yshift=12pt] at (PM) {$\beta$};
\node[above right,xshift=-3pt,yshift=12pt] at (PM) {$\beta$};
\coordinate (B1a) at (0,4.13);
\coordinate (B1b) at (6,5.1);
\draw[very thick,dashed] ($(B1a)!-.15!(B1b)$) -- node[very near start,above] {$l_{f_1}$} ($(B1a)!1.35!(B1b)$);
\coordinate (B2a) at (3,6.73);
\coordinate (B2b) at (4,.57);
\draw[very thick,dashed] ($(B2a)!-.05!(B2b)$) --
node[very near end,right,xshift=4pt,yshift=6pt] {$l_{f_2}$}
($(B2a)!1.25!(B2b)$);
\draw[very thick,dotted,->,bend right=50] (6,2.2) to (4.5,6.7);
\draw[very thick,dotted,->,bend left=50] (6.2,1.6) to (1.8,1.3);
\end{tikzpicture}
\end{center}
%\end{figure}
If the lines are parallel, let $l_1$ be $y=mx+b_1$ and let $l_2$ be $y=mx+b_2$. The fold is the line parallel to $l_1,l_2$ and halfway between them: $y=mx+\disfrac{b_1+b_2}{2}$.
If the lines intersect, let $l_1$ be $y=m_1x+b_1$ and let $l_2$ be $y=m_2x+b_2$.
\textbf{Derivation of the point of intersection}
$p_i=(x_i,y_i)$, the point of intersection of the two lines, is:
\vspace{-2ex}
\begin{form}{1.3}
m_1x_i+b_1&=&m_2x_i+b_2\\
x_i &=& \disfrac{b_2-b_1}{m_1-m_2}\\
y_i &=&m_1x_i+b_1\,.
\end{form}
\vspace{-2ex}
\textbf{Example}
Let $l_1$ be $y=2x-2$ and let $l_2$ be $y=-x+8$. The point of intersection is:
\begin{form}{1.4}
x_i&=&\disfrac{8-(-2)}{2-(-1)}=\disfrac{10}{3}\\
y_i &=& 2\cdot\disfrac{10}{3}-2=\disfrac{14}{3}\,.
\end{form}
\textbf{Derivation of the equation of the slope of the angle bisector}
The two lines form an angle at their point of intersection, actually, two pairs of vertical angles. The folds are the bisectors of these angles.
If the angle of line $l_1$ relative to the $x$-axis is $\theta_1$ and the angle of line $l_2$ relative to the $x$-axis is $\theta_2$, then the fold is the line which makes an angle of $\theta_b=\disfrac{\theta_1+\theta_2}{2}$ with the $x$-axis.
$\tan\theta_1=m_1$ and $\tan\theta_2=m_2$ are given so $m_b$, the slope of the angle bisector, is:
\[
m_b=\tan\theta_b=\tan\disfrac{\theta_1+\theta_2}{2}\,.
\]
The derivation requires the use of two trigonometric identities that we derive here:
\vspace{-2ex}
\begin{form}{1.4}
\tan (\theta_1+\theta_2) &=& \disfrac{\sin(\theta_1+\theta_2)}{\cos(\theta_1+\theta_2)}\\
&=&\disfrac{\sin\theta_1\cos\theta_2+\cos\theta_1\sin\theta_2}{\cos\theta_1\cos\theta_2-\sin\theta_1\sin\theta_2}\\
&=&\disfrac{\sin\theta_1+\cos\theta_1\tan\theta_2}{\cos\theta_1-\sin\theta_1\tan\theta_2}\\
&=&\disfrac{\tan\theta_1+\tan\theta_2}{1-\tan\theta_1\tan\theta_2}\,.
\end{form}
We use this formula to obtain a quadratic equation in $\tan(\theta/2)$:
\vspace{-3ex}
\begin{form}{1.4}
\tan \theta=\disfrac{\tan(\theta/2)+\tan(\theta/2)}{1-\tan^2(\theta/2)}\\
\tan\theta \,(\tan(\theta/2))^2 \;+\; 2\,(\tan (\theta/2)) \;-\;\tan \theta = 0\,,
\end{form}
whose solutions are:
\[
\tan(\theta/2) = \disfrac{-1\pm\sqrt{1+\tan^2\theta}}{\tan\theta}\,.
\]
First derive $m_s$, the slope of $\theta_1+\theta_2$:
\[
m_s=\tan(\theta_1+\theta_2)= \disfrac{m_1+m_2}{1-m_1m_2}\,.
\]
Then derive $m_b$, the slope of the angle bisector:
\vspace{-2ex}
\begin{form}{1.4}
m_b&=& \tan\disfrac{\theta_1+\theta_2}{2}\\
&=&\disfrac{-1\pm\sqrt{1+\tan^2(\theta_1+\theta_2)}}{\tan (\theta_1+\theta_2)}\\
&=&\disfrac{-1\pm\sqrt{1+m_s^2}}{m_s}\,.
\end{form}
\textbf{Example}
For the lines $y=2x-2$ and $y=-x+8$, the slope of the angle bisector is:
\begin{form}{1.4}
m_s=\disfrac{2+(-1)}{1-(2 \cdot -1)}=\disfrac{1}{3}\\
m_b=\disfrac{-1\pm\sqrt{1+(1/3)^2}}{1/3}=-3\pm \sqrt{10}\,.
\end{form}
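As a quick numerical check: the line $y=2x-2$ makes an angle $\theta_1=\arctan 2\approx 63.4^{\circ}$ with the $x$-axis, and $y=-x+8$ makes an angle of $135^{\circ}$ (equivalently $-45^{\circ}$). Averaging $63.4^{\circ}$ and $135^{\circ}$ gives $\theta_b\approx 99.2^{\circ}$ with $\tan\theta_b\approx -6.16\approx -3-\sqrt{10}$, while averaging $63.4^{\circ}$ and $-45^{\circ}$ gives $\theta_b\approx 9.2^{\circ}$ with $\tan\theta_b\approx 0.16\approx -3+\sqrt{10}$, so the two signs correspond to the two perpendicular angle bisectors $l_{f_2}$ and $l_{f_1}$.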
\textbf{Derivation of the equation of the fold}
Let us derive the equation of the fold $l_{f_1}$ with the positive slope; we know the coordinates of the point of intersection of the two lines, $p_i=\left(\disfrac{10}{3},\disfrac{14}{3}\right)$:
\begin{form}{1.3}
\disfrac{14}{3} &=& (-3+\sqrt{10}) \cdot \disfrac{10}{3} + b\\ b&=&\disfrac{44-10\sqrt{10}}{3}\\
y&=& (-3+\sqrt{10})x + \disfrac{44-10\sqrt{10}}{3}\approx 0.162x+4.13\,.
\end{form}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{Axiom 4}\label{s.ax4}
\textbf{Axiom}
Given a point $p_1$ and a line $l_1$, there is a unique fold $l$ perpendicular to $l_1$ that passes through point $p_1$.
%\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=1.2]
\draw[step=10mm,white!50!black,thin] (-1,-1) grid (8,7);
\draw[thick] (-1,0) -- (8,0);
\draw[thick] (0,-1) -- (0,7);
\foreach \x in {0,...,8}
\node at (\x-.2,-.2) {\sm{\x}};
\foreach \y in {1,...,7}
\node at (-.2,\y-.3) {\sm{\y}};
\coordinate (L1a) at (2,0);
\coordinate (L1b) at (5,6);
\draw[thick] (L1a) -- node[very near start,right,yshift=-4pt] {$l_1$} ($(L1a)!1.15!(L1b)$);
\fill (2,6) circle (2pt) node[above right] {$p_1$};
\draw[thick,dashed] (0,7) -- node[very near end,above right] {$l$} (8,3);
\coordinate (intersection) at (4.4,4.8);
\draw[rotate=-30] (intersection) rectangle +(8pt,8pt);
\draw[very thick,dotted,->,bend left=50] (5.4,6.3) to (3.7,3);
\end{tikzpicture}
\end{center}
%\end{figure}
\textbf{Derivation of the equation of the fold}
Let $l_1$ be $y = m_1x + b_1$ and let $p_1=(x_1,y_1)$. $l$ is perpendicular to $l_1$ so its slope is $-\disfrac{1}{m_1}$. Since it passes through $p_1$, we can compute the intercept $b$ and write down its equation:
\vspace{-2ex}
\begin{form}{1.4}
y_1=-\disfrac{1}{m_1} x_1 + b\\
b= \disfrac{(m_1y_1+x_1)}{m_1}\\
y=-\disfrac{1}{m_1} x +\disfrac{(m_1y_1+x_1)}{m_1}\,.
\end{form}
\textbf{Example}
Let $p_1=(2,6)$ and let $l_1$ be $y=2x-4$. The equation of the fold $l$ is:
\[
y=-\disfrac{1}{2}x + \disfrac{2\cdot 6 + 2}{2}=-\disfrac{1}{2}x + 7\,.
\]
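As a quick numerical check, the slope and intercept of this fold can be recomputed directly from the formula above with plain Python arithmetic:
\begin{verbatim}
# Axiom 4 example: p1 = (2, 6), l1: y = 2x - 4.
x1, y1 = 2, 6
m1 = 2

m = -1 / m1          # slope of the fold, perpendicular to l1
b = y1 - m * x1      # the fold passes through p1, so b = (m1*y1 + x1)/m1 = 7
print(m, b)          # -0.5  7.0
\end{verbatim}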
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{Axiom 5}\label{s.ax5}
\textbf{Axiom}
Given two points $p_1,p_2$ and a line $l_1$, there is a fold $l$ that places $p_1$ onto $l_1$ and passes through $p_2$.
%\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=1.1]
\draw[step=10mm,white!50!black,thin] (-1,-1) grid (9,9);
\draw[thick] (-1,0) -- (9,0);
\draw[thick] (0,-1) -- (0,9);
\foreach \x in {0,...,9}
\node at (\x-.2,-.2) {\sm{\x}};
\foreach \y in {1,...,9}
\node at (-.2,\y-.3) {\sm{\y}};
\coordinate (L1a) at (0,3);
\coordinate (L1b) at (8,-1);
\draw[thick] (L1a) -- node[near end,below right,xshift=8pt,yshift=-8pt] {$l_1$} (L1b);
\coordinate (P1) at (2,8);
\fill (P1) circle (2pt) node[above left] {$p_1$};
\coordinate (P2) at (4,4);
\fill (P2) circle (2pt) node[above left,yshift=4pt] {$p_2$};
\draw[thick,dotted,name path=L1] (8,-1) -- (-1,3.5);
\node[very thick,dotted,draw, name path = circle] at (P2)
[circle through = (P1)] {};
\path [name intersections = {of = circle and L1, by = {P1P,P1PP}}];
\fill (P1P) circle (2pt) node[above left,xshift=-2pt,yshift=4pt] {$p_1'$};
\fill (P1PP) circle (2pt) node[above left,yshift=6pt] {$p_1''$};
\coordinate (f1) at (0,6);
\draw[thick,dashed] ($(f1)!-.25!(P2)$) -- node[very near end,above] {$l_{f_2}$} ($(f1)!2.25!(P2)$);
\coordinate (f2) at (0,2);
\draw[thick,dashed] ($(f2)!-.25!(P2)$) -- node[very near end,below,yshift=-2pt] {$l_{f_1}$} ($(f2)!2.25!(P2)$);
\draw[very thick,dotted,->,bend left=50] (2.2,7.8) to (-.2,3.2);
\draw[very thick,dotted,->,bend left=50] (2.4,7.85) to (6.1,.2);
\end{tikzpicture}
\end{center}
%\end{figure}
For a given pair of points and a line, there may be zero, one or two folds.
\textbf{Derivation of the equations of the reflections}
Let $l$ be a fold through $p_2$ and $p_1'$ be the reflection of $p_1$ around $l$. The length of $\overline{p_1p_2}$ equals the length of $\overline{p_1'p_2}$. The locus of points at distance $\overline{p_1p_2}$ \emph{from} $p_2$ is the circle centered at $p_2$ whose radius is the length of $\overline{p_1p_2}$. The intersections of this circle with the line $l_1$ give the possible points $p_1'$.
Let $l_1$ be $y=m_1x + b_1$ and let $p_1=(x_1,y_1)$, $p_2=(x_2,y_2)$. The equation of the circle centered at $p_2$ with radius the length of $\overline{p_1p_2}$ is:
\vspace{-2ex}
\begin{form}{1.2}
(x-x_2)^2 + (y-y_2)^2 = r^2\,,\quad \textrm{where}\\
r^2= (x_2-x_1)^2 + (y_2-y_1)^2\,.
\end{form}
Substituting the equation of the line into the equation for the circle:
\[
(x-x_2)^2+((m_1x+b_1)-y_2)^2=(x-x_2)^2+(m_1x+(b_1-y_2))^2=r^2\,,
\]
we obtain a quadratic equation for the $x$-coordinates of the possible intersections:
\begin{equation}
x^2(1+m_1^2) \,+\, 2(-x_2+m_1b_1-m_1y_2)x \,+\, (x_2^2 + (b_1^2 - 2b_1y_2+y_2^2)-r^2)=0\,.\label{eq.intersections}
\end{equation}
The quadratic equation has at most two solutions $x_1',x_1''$ and we can compute $y_1',y_1''$ from $y=m_1x+b_1$. The reflected points are $p_1'=(x_1',y_1')$, $p_1''=(x_1'',y_1'')$.
\textbf{Example}
Let $p_1=(2,8)$, $p_2=(4,4)$ and let $l_1$ be $y=-\disfrac{1}{2}x +3$. The equation of the circle is:
\[
(x-4)^2 + (y-4)^2 = r^2=(4-2)^2+(4-8)^2=20\,.
\]
Substitute the equation of the line into the equation of the circle and simplify to obtain a quadratic equation for the $x$-coordinates of the intersections (or use Equation~\ref{eq.intersections}):
\vspace{-2ex}
\begin{form}{1.4}
(x-4)^2 + \left(\left(-\disfrac{1}{2}x+3\right)-4\right)^2&=&20\\
%\disfrac{5}{4}x^2-7x-3 &=&0\\
5x^2 -28x -12&=&0\\
(5x+2)(x-6)&=&0\,.
\end{form}
The two points of intersection are:
\[
p_1'=\left(-\disfrac{2}{5},\disfrac{16}{5}\right) = (-0.4,3.2)\,,\quad\quad p_1''=(6,0)\,.
\]
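These intersection points can also be checked numerically by solving the quadratic equation for the intersections in a few lines of Python (standard \texttt{math} module only):
\begin{verbatim}
# Axiom 5 example: circle centred at p2 = (4, 4) with r^2 = 20,
# intersected with the line l1: y = -x/2 + 3.
import math

x1, y1 = 2, 8
x2, y2 = 4, 4
m1, b1 = -0.5, 3
r2 = (x2 - x1)**2 + (y2 - y1)**2

# Coefficients of the quadratic in x derived above.
A = 1 + m1**2
B = 2 * (-x2 + m1 * b1 - m1 * y2)
C = x2**2 + (b1**2 - 2 * b1 * y2 + y2**2) - r2

d = math.sqrt(B**2 - 4 * A * C)
xs = [(-B - d) / (2 * A), (-B + d) / (2 * A)]
print([(x, m1 * x + b1) for x in xs])   # [(-0.4, 3.2), (6.0, 0.0)]
\end{verbatim}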
\textbf{Derivation of the equations of the folds}
The folds will be the perpendicular bisectors of $\overline{p_1p_1'}$ and $\overline{p_1p_1''}$. The equation of a perpendicular bisector is given by Equation~\ref{eq.midpoint1}, repeated here for $p_1'$:
\begin{equation}
y - \disfrac{y_1+y_1'}{2} = -\disfrac{x_1'-x_1}{y_1'-y_1}\left(x-\disfrac{x_1+x_1'}{2}\right)\,.\label{eq.midpoint2}
\end{equation}
\textbf{Example}
For $p_1=(2,8)$ and $p_1'=\left(-\disfrac{2}{5},\disfrac{16}{5}\right)$, the equation of the fold $l_{f_1}$ is:
\vspace{-2ex}
\begin{form}{1.4}
y-\disfrac{8+(16/5)}{2}&=&-\disfrac{(-2/5)-2}{(16/5)-8}\left(x-\disfrac{2+\left(-2/5\right)}{2}\right)\\
y&=&-\disfrac{1}{2}x+6\,.
\end{form}
For $p_1=(2,8)$ and $p_1''=(6,0)$, the equation of the fold $l_{f_2}$ is:
\vspace{-2ex}
\begin{form}{1.4}
y-\disfrac{8+0}{2}&=&-\disfrac{6-2}{0-8}\left(x-\disfrac{2+6}{2}\right)\\
y&=&\disfrac{1}{2}x+2\,.
\end{form}
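Both folds can be verified by computing the perpendicular bisectors numerically, for example with the following short Python function:
\begin{verbatim}
# Perpendicular bisectors of p1p1' and p1p1'' (the two folds of the example).
def perpendicular_bisector(p, q):
    (px, py), (qx, qy) = p, q
    mx, my = (px + qx) / 2, (py + qy) / 2   # midpoint of the segment
    m = -(qx - px) / (qy - py)              # negative reciprocal of its slope
    return m, my - m * mx                   # slope and intercept of the bisector

print(perpendicular_bisector((2, 8), (-0.4, 3.2)))   # (-0.5, 6.0)
print(perpendicular_bisector((2, 8), (6, 0)))        # (0.5, 2.0)
\end{verbatim}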
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{Axiom 6}\label{s.ax6}
\textbf{Axiom}
Given two points $p_1$ and $p_2$ and two lines $l_1$ and $l_2$, there is a fold $l$ that places $p_1$ onto $l_1$ and $p_2$ onto $l_2$.
%\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=.9]
\draw[step=10mm,white!50!black,thin] (-7,-7) grid (7,5);
\draw[thick] (-7,0) -- (7,0);
\draw[thick] (0,-7) -- (0,5);
\foreach \x in {-6,...,7}
\node at (\x-.3,-.2) {\sm{\x}};
\foreach \y in {1,...,5}
\node at (-.2,\y-.3) {\sm{\y}};
\foreach \y in {-6,...,-1}
\node at (-.3,\y-.3) {\sm{\y}};
\coordinate (P1) at (0,4);
\fill (P1) circle (3pt) node[above left,xshift=-2pt] {$p_1$};
\coordinate (P2) at (0,-6);
\fill (P2) circle (3pt) node[above left,xshift=-2pt,yshift=2pt] {$p_2$};
\coordinate (P1P) at (3.1,2);
\fill (P1P) circle (3pt) node[above right,xshift=-2pt] {$p_1'$};
\coordinate (P2P) at (-6.12,-2);
\fill (P2P) circle (3pt) node[below right,xshift=2pt] {$p_2'$};
\coordinate (P1PP) at (-3.1,2);
\fill (P1PP) circle (3pt) node[above right,xshift=-2pt] {$p_1''$};
\coordinate (P2PP) at (6.12,-2);
\fill (P2PP) circle (3pt) node[below right,xshift=2pt] {$p_2''$};
\draw[very thick] (-7,2) -- node[very near start,above,xshift=-34pt] {$l_1$} (7,2);
\draw[very thick] (-7,-2) -- node[very near start,below,xshift=-34pt] {$l_2$} (7,-2);
\draw[domain=-4.8:4.8,samples=50,very thick,dotted] plot (\x,{-.13*\x*\x-4});
\draw[domain=-2.8:2.8,samples=50,very thick,dotted] plot (\x,{.25*\x*\x+3});
\draw[very thick,dashed] (-5,-7) -- (2.95,5);
\draw[very thick,dashed] (-3,5) -- (4.95,-7);
\draw[very thick,dotted,->,bend left=50] (.2,4.1) to (3,2.2);
\draw[very thick,dotted,->,bend left=50] (-.3,-6.1) to (-6.1,-2.2);
\draw[very thick,dotted,->,bend right=50] (-.2,4.1) to (-3,2.2);
\draw[very thick,dotted,->,bend right=50] (.3,-6.1) to (6.1,-2.2);
\end{tikzpicture}
\end{center}
%\end{figure}
For a given pair of points and pair of lines, there may be zero, one, two or three folds.
If a fold places $p_i$ onto $l_i$, the distance from $p_i$ to the fold is equal to the distance from $l_i$ to the fold. The locus of points that are equidistant from a point $p_i$ and a line $l_i$ is a parabola with focus $p_i$ and directrix $l_i$. A fold is any line tangent to that parabola. A detailed justification of this claim is given below.
For a fold to simultaneously place $p_1$ onto $l_1$ and $p_2$ onto $l_2$, it must be a tangent common to the two parabolas.
The formula for an arbitrary parabola is quite complex, so we limit the presentation to parabolas with the $y$-axis as the axis of symmetry. This is not a significant limitation because for any parabola there is a rigid motion that moves the parabola so that its axis of symmetry is the $y$-axis. An example will also be given where one of the parabolas has the $x$-axis as its axis of symmetry.
\newpage
\textbf{Derivation of the equation of a fold}
Let $(0,f)$ be the focus of a parabola with directrix $y=d$. Define $p=f-d$, the signed length of the line segment between the focus and the directrix.\footnote{We have been using the notation $p_i$ for points; the use of $p$ here might be confusing but it is the standard notation. The formal name for $p$ is one-half the \emph{latus rectum}.} If the vertex of the parabola is on the $x$-axis, the equation of the parabola is $y=\disfrac{x^2}{2p}$. To move the parabola up or down the $y$-axis so that its vertex is at $(0,h)$, add $h$ to the equation of the parabola: $y=\disfrac{x^2}{2p}+h$.
\begin{center}
\begin{tikzpicture}
\draw (-6,0) --
node[very near start,below,xshift=-32pt] {$x$-\textsf{axis}} (6,0);
\draw (0,-3) --
node[very near start,right,yshift=-20pt] {$y$-\textsf{axis}} (0,4.5);
\draw[very thick] (-6,-2) --
node[near start,below]
{\textsf{directrix} $\quad y=d$ \textsf{is} $y=-2$} (6,-2);
\draw[domain=-5:5,samples=50,very thick,dotted]
plot (\x,{\x*\x/12+1});
\coordinate (F) at (0,4);
\fill (F) circle (2pt)
node[left,xshift=-2pt,yshift=0pt] {$(0,f)\textsf{\ is\ }(0,4)$}
node[above left,xshift=-2pt,yshift=4pt] {\textsf{focus}};
\fill (0,-2) circle (2pt);
\fill (0,1) circle (2pt)
node[below left] {\textsf{vertex}}
node[below left,yshift=-10pt] {$(0,1)$};
\draw[<->] (2,-1.9) --
node[fill=white,xshift=4pt] {$p=6$} +(0,5.8);
\draw[<->] (1,.1) --
node[fill=white] {$h=1$} +(0,.8);
\end{tikzpicture}
\end{center}
Define $a=2ph$ so that the equation of the parabola is:
\vspace{-2ex}
\begin{form}{1.2}
y=\disfrac{x^2}{2p}+\disfrac{a}{2p}\\
x^2-2py+a=0\,.
\end{form}
The equation of the parabola in the diagram above is:
\begin{form}{.9}
x^2-2\cdot 6\,y + 2\cdot 6 \cdot 1=0\\
x^2-12y +12=0\,.
\end{form}
Substitute the equation of an \emph{arbitrary} line $y=mx+b$ into the equation for the parabola to obtain an equation for the points of intersection of the line and the parabola:
\vspace{-2ex}
\begin{form}{1}
x^2-2p(mx+b)+a=0\\
x^2+(-2mp)x+(-2pb+a)=0\,.
\end{form}
The line will be a \emph{tangent} to the parabola \emph{if and only if} this quadratic equation has \emph{exactly one} solution \emph{if and only if} its discriminant is zero:
\[
(-2mp)^2\:-\:4\cdot 1\cdot (-2pb+a)=0\,,
\]
which simplifies to:
\begin{equation}
m^2p^2+2pb-a=0\,.\label{eq.disc}
\end{equation}
This is a quadratic equation relating the slope $m$ and the intercept $b$ of a tangent to the parabola. There are infinitely many tangents because for each $m$ there is some $b$ that makes the line a tangent by moving it up or down.\footnote{Except of course for a line parallel to the axis of symmetry.}
To obtain the tangents common to both parabolas, write Equation~\ref{eq.disc} once for each parabola; this gives two equations in the two unknowns $m$ and $b$, which can be solved.
\textbf{Example}
Parabola 1: focus $(0,4)$, directrix $y=2$, vertex $(0,3)$, $p=2$, $a=2\cdot 2\cdot 3=12$. The equation of the parabola is:
\[
\begin{array}{l}
x^2-2\cdot 2y +12=0\,.
\end{array}
\]
Substituting into Equation~\ref{eq.disc} and simplifying:
\begin{equation}
m^2+b-3=0\,.\label{eq.parabola1}
\end{equation}
Parabola 2: focus $(0,-4)$, directrix $y=-2$, vertex $(0,-3)$, $p=-2$, $a=2\cdot -2\cdot -3=12$. The equation of the parabola is:
\[
x^2-2\cdot (-2)y+12=0\,.
\]
Substituting into Equation~\ref{eq.disc} and simplifying:
\[
m^2-b-3=0\,.
\]
The solutions of the two equations:
\vspace{-3ex}
\begin{form}{1}
m^2+b-3=0\\
m^2-b-3=0\,,
\end{form}
are $m=\pm\sqrt{3}\approx \pm 1.73$ and $b=0$. There are two common tangents that are the folds:
\[
y=\sqrt{3}x\,,\quad y=-\sqrt{3}x\,.
\]
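The tangency of these two lines to both parabolas can be confirmed numerically by evaluating the left-hand side of Equation~\ref{eq.disc} for each parabola; a small Python check gives values that vanish up to floating-point error:
\begin{verbatim}
# Tangency check: m^2 p^2 + 2 p b - a should vanish for a tangent line.
import math

def discriminant(m, b, p, a):
    return m**2 * p**2 + 2 * p * b - a

m, b = math.sqrt(3), 0
print(discriminant(m, b, p=2, a=12))    # parabola 1: ~0
print(discriminant(m, b, p=-2, a=12))   # parabola 2: ~0
\end{verbatim}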
\textbf{Example}
Parabola 1 is unchanged.
Parabola 2: focus $(0,-6)$, directrix $y=-2$, vertex $(0,-4)$, $p=-4$, $a=2\cdot -4\cdot -4=32$. The equation of the parabola is:
\[
x^2-2\cdot (-4)y +32=0\,.
\]
Substituting into Equation~\ref{eq.disc} and simplifying:
\[
2m^2-b-4=0\,.
\]
The solutions of the two equations (using Equation~\ref{eq.parabola1} for the first parabola):
\vspace{-2ex}
\begin{form}{1}
m^2+b-3=0\\
2m^2-b-4=0\,,
\end{form}
are $m=\pm\sqrt{\disfrac{7}{3}}\approx \pm 1.53$ and $b=\disfrac{2}{3}$. There are two common tangents that are folds:
\[
y=\sqrt{\disfrac{7}{3}}x+\disfrac{2}{3}\,,\quad y=-\sqrt{\disfrac{7}{3}}x+\disfrac{2}{3}\,.
\]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\textbf{Example}
Let us now define a parabola whose axis of symmetry is the $x$-axis.
Parabola 1 is unchanged.
Parabola 2: focus $(4,0)$, directrix $x=2$, vertex $(3,0)$, $p=2$, $a=2\cdot 2\cdot 3=12$. The equation of the parabola is:
\[
y^2-4x+12 = 0\,.
\]
This is an equation with $y^2$ and $x$ instead of $x^2$ and $y$, so we can't use Equation~\ref{eq.disc} and we must perform the derivation again.
Substitute the equation for a line:
\vspace{-2ex}
\begin{form}{1}
(mx+b)^2-4x+12=0\\
m^2x^2+(2mb-4)x+(b^2+12)=0\,,
\end{form}
set the discriminant equal to zero and simplify:
\vspace{-2ex}
\begin{form}{1}
(2mb-4)^2\:-\:4m^2(b^2+12)=0\\
-3m^2-mb+1=0\,.
\end{form}
If we try to solve the two equations (using Equation~\ref{eq.parabola1} for the first parabola):
\vspace{-2ex}
\begin{form}{1}
m^2+b-3=0\\
-3m^2-mb+1=0\,,
\end{form}
we obtain a cubic equation with variable $m$:
\begin{equation}
m^3-3m^2-3m+1=0\,.\label{eq.cubic}
\end{equation}
Since a cubic equation has at least one and at most three (real) solutions, there can be one, two or three common tangents. There can also be no common tangents if the two equations have no solution, for example, if one parabola is ``contained'' within another: $y=x^2$, $y=x^2+1$.
The formula for solving cubic equations is quite complicated, so we used an online calculator and obtained three (approximate) solutions:
\[
m=3.73\,, \;m=-1\,, \; m=0.27\,.
\]
Choosing $m=0.27$ gives $b=3-m^2=2.93$, and the equation of the fold is:
\[
y=0.27x+2.93\,.
\]
From the form of Equation~\ref{eq.cubic}, we might guess that $1$ or $-1$ is a solution:
\vspace{-2ex}
\begin{form}{1}
1^3-3\cdot 1^2-3\cdot 1+1=-4\\
(-1)^3-3\cdot (-1)^2-3\cdot(-1)+1=0\,.
\end{form}
Divide Equation~\ref{eq.cubic} by $m-(-1)=m+1$ to obtain the quadratic equation $m^2-4m+1=0$, whose roots are $2\pm\sqrt{3}\approx 3.73, 0.27$.
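These roots are easy to verify by substituting them back into Equation~\ref{eq.cubic}, for instance with a short Python check:
\begin{verbatim}
# Roots of m^3 - 3m^2 - 3m + 1 = 0: m = -1 and m = 2 +/- sqrt(3).
import math

def cubic(m):
    return m**3 - 3 * m**2 - 3 * m + 1

for m in (-1, 2 + math.sqrt(3), 2 - math.sqrt(3)):
    print(m, cubic(m))    # each value is zero up to floating-point error
\end{verbatim}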
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\textbf{Derivation of the equations of the reflections}
We derive the position of the reflection $p_1'=(x_1',y_1')$ of $p_1=(x_1,y_1)$ around some tangent line $l_t$ whose equation is $y=m_tx+b_t$. The derivation is identical for any tangent and for $p_2$.
To reflect $p_1$ around $l_t$, we find the line $l_p$ with equation $y=m_px+b_p$ that is perpendicular to $l_t$ and passes through $p_1$:
\vspace{-2ex}
\begin{form}{1.2}
y=-\disfrac{1}{m_t}x+b_p\\
y_1=-\disfrac{1}{m_t}x_1+b_p\\
y=\disfrac{-x}{m_t}+\left(y_1+\disfrac{x_1}{m_t}\right)\,.
\end{form}
Next we find the intersection $p_t=(x_t,y_t)$ of $l_t$ and $l_p$:
\vspace{-3ex}
\begin{form}{1.7}
m_tx_t+b_t=\disfrac{-x_t}{m_t}+\left(y_1+\disfrac{x_1}{m_t}\right)\\
x_t=\disfrac{\left(y_1+\disfrac{x_1}{m_t}-b_t\right)}{\left(m_t+\disfrac{1}{m_t}\right)}\\
y_t=m_tx_t+b_t\,.
\end{form}
The reflection $p_1'$ is easy to derive because the intersection $p_t$ is the midpoint between $p_1$ and its reflection $p_1'$:
\vspace{-2ex}
\begin{form}{1.3}
x_t=\disfrac{x_1+x_1'}{2}\,,\quad y_t=\disfrac{y_1+y_1'}{2}\\
x_1'=2x_t-x_1\,,\quad y_1'=2y_t-y_1\,.
\end{form}
\textbf{Example}
$p_1=(0,4)$ and the tangent is $y=\sqrt{3}x$, that is, $m_t=\sqrt{3}$ and $b_t=0$:
\vspace{-2ex}
\begin{form}{1.3}
x_t=\disfrac{\left(4+\disfrac{0}{\sqrt{3}}-0\right)}{\left(\sqrt{3}+\disfrac{1}{\sqrt{3}}\right)}=\sqrt{3}\\
y_t=\sqrt{3}\sqrt{3}+0=3\\
x_1'=2x_t-x_1=2\sqrt{3}-0=2\sqrt{3}\approx 3.46\\
y_1'=2y_t-y_1=2\cdot 3 - 4 = 2\,.
\end{form}
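The same reflection can be computed numerically; the small Python function below follows the derivation step by step:
\begin{verbatim}
# Reflection of p1 = (0, 4) around the tangent y = sqrt(3) x.
import math

def reflect(x1, y1, mt, bt):
    # Intersection of the tangent with the perpendicular through (x1, y1).
    xt = (y1 + x1 / mt - bt) / (mt + 1 / mt)
    yt = mt * xt + bt
    # The intersection is the midpoint of the point and its reflection.
    return 2 * xt - x1, 2 * yt - y1

print(reflect(0, 4, math.sqrt(3), 0))   # (3.464..., 2.0) = (2*sqrt(3), 2)
\end{verbatim}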
\bigskip
\textbf{A fold is any tangent to a parabola}
Students are usually introduced to parabolas as the graphs of second degree equations $y=ax^2+bx+c$. However, parabolas can be defined geometrically: given a point, the \emph{focus}, and a line, the \emph{directrix}, the locus of points equidistant from the focus and the directrix defines a parabola.
The following diagram shows the focus---the large dot at $p=(0,f)$, and the directrix---the thick line $d$ whose equation is $y=-f$. The resulting parabola is shown as a dotted curve. Its \emph{vertex} $p_2$ is at the origin of the axes.
\begin{center}
\begin{tikzpicture}[scale=1]
\draw (-6,0) -- node[very near start,below,xshift=-32pt] {$x$-\textsf{axis}} (6,0);
\draw (0,-3) -- node[very near start,right,yshift=-20pt] {$y$-\textsf{axis}} (0,4.5);
\draw[ultra thick] (-6,-2) -- node[near start,below] {\textsf{directrix} $d: \quad y=-f$} (6,-2);
\draw[domain=-6:6,samples=50,very thick,dotted] plot (\x,{\x*\x/8});
\coordinate (F) at (0,2);
\fill (F) circle (3pt) node[above left,xshift=-2pt,yshift=15pt] {$(0,f)$} node[above left,xshift=-2pt,yshift=30pt] {\textsf{focus}} node[above right] {$p$};
\fill (0,0) circle (1.5pt) node[below right] {$p_2$};
\fill (0,-2) circle (1.5pt);
\fill (2,-2) circle (1.5pt);
\fill (3,-2) circle (1.5pt);
\fill (5,-2) circle (1.5pt);
\coordinate (FP) at (-5,-2);
\fill (FP) circle (1.5pt) node[below] {$p'$};
\coordinate (F1) at (2,.5);
\fill (F1) circle (1.5pt) node[below right] {$p_3$};
\coordinate (F2) at (3,1.125);
\fill (F2) circle (1.5pt) node[below right] {$p_4$};
\coordinate (F3) at (5,3.125);
\fill (F3) circle (1.5pt) node[below right] {$p_5$};
\coordinate (F4) at (-5,3.125);
\fill (F4) circle (1.5pt) node[above right] {$p_1$};
\draw (F) -- node[left] {$a_2$} (0,0) -- node[left] {$a_2$} (0,-2);
\draw (F) -- node[near end,left] {$a_3$} (F1) -- node[left] {$a_3$} (2,-2);
\draw (F) -- node[near end,above] {$a_4$} (F2) -- node[left] {$a_4$} (3,-2);
\draw (F) -- node[above] {$a_5$} (F3) -- node[left] {$a_5$} (5,-2);
\draw (F) -- node[above] {$a_1$} (F4) -- node[left] {$a_1$} (FP);
\draw[thick,dashed] ($(F4)!-.4!(-2.5,0)$) -- node[very near end,right,xshift=2pt] {$l_1$} ($(F4)!1.8!(-2.5,0)$);
\draw (0,-2) rectangle +(8pt,8pt);
\draw (2,-2) rectangle +(8pt,8pt);
\draw (3,-2) rectangle +(8pt,8pt);
\draw (5,-2) rectangle +(8pt,8pt);
\draw (-5,-2) rectangle +(8pt,8pt);
\end{tikzpicture}
\end{center}
We have selected five points $p_i$, $i=1,\ldots,5$ on the parabola. Each point $p_i$ is at a distance of $a_i$ both from the focus and from the directrix. Drop a perpendicular from $p_i$ to the directrix and let $p_i'$ be the intersection of the perpendicular and the directrix. Using Axiom~2, construct the line $l_i$ by folding $p$ onto $p_i'$. Since $p_i$ is on the parabola, $\overline{p_i'p_i}=\overline{p_{i}p}=a_i$. The diagram shows the fold $l_1$ through $p_1$.
\newpage
\textbf{Theorem} The folds are tangents to the parabola.
\textbf{Proof (Oriah Ben Lulu)}
In the following diagram, the focus is $p$, the directrix is $l$, $p'$ is a point on the directrix and $m$ is the fold that places $p$ on $p'$. By definition, $m$ is the perpendicular bisector of $\overline{pp'}$. Let $s$ be the intersection of $\overline{pp'}$ and $m$; then $\overline{ps}=\overline{p's}=a$ and $m\perp \overline{pp'}$.
\begin{center}
\begin{tikzpicture}[scale=1.1]
\draw[ultra thick] (-6,-2) -- node[near end, below] {$l$} (6,-2);
\draw[domain=-5.5:5.5,samples=50,very thick,dotted] plot (\x,{\x*\x/8});
\coordinate (F) at (0,2);
\fill (F) circle (2pt) node[above right] {$p$};
\coordinate (FP) at (-3,-2);
\fill (FP) circle (1.5pt) node[below] {$p'$};
\coordinate (F4) at (-3,1.125);
\fill (F4) circle (1.5pt) node[above right] {$r$};
\coordinate (F5) at (-5,2.775);
\fill (F5) circle (1.5pt) node[left,yshift=-4pt] {$q$};
\coordinate (F5p) at (-5,-2);
\fill (F5p) circle (1.5pt) node[below] {$p''$};
\draw (F) -- node[above] {$b$} (F4) -- node[left] {$b$} (FP);
\draw (F) -- node[above] {$c$} (F5);
\draw (F5) -- node[left] {$d$} (F5p);
\draw[thick,dashed,name path=fold] ($(F4)+(140:4)$) -- (F4) -- node[below,xshift=-2pt,yshift=-2pt] {$m$} ($(F4)+(-40:5.1)$);
\draw (FP) rectangle +(8pt,8pt);
\draw (F5p) rectangle +(8pt,8pt);
\draw (F5) -- (FP);
\draw[name path=base] (F) -- (FP);
\path [name intersections = {of = base and fold, by = {G}}];
\fill (G) circle (1.5pt) node[below,yshift=-4pt] {$s$};
\draw[rotate=140] (G) rectangle +(6pt,6pt);
\path (FP) -- node[left] {$c$} (F5);
\path (F) -- node[below] {$a$} (G) -- node[below] {$a$} (FP);
\end{tikzpicture}
\end{center}
Let $r$ be the intersection of a perpendicular to $l$ through $p'$ and the fold $m$. Then $\triangle psr\cong \triangle p'sr$ by side-angle-side, since $\overline{ps}=\overline{p's}$, $\angle psr=\angle p'sr=90^\circ$ and $\overline{rs}$ is a common edge. It follows that $\overline{pr}=\overline{p'r}=b$ and therefore $r$ must be on the parabola.
Now choose a point $p''$ on the directrix that is \emph{distinct} from $p'$, and let $q$ be the intersection of the perpendicular to $l$ through $p''$ and the fold $m$. Since $q$ lies on $m$, the perpendicular bisector of $\overline{pp'}$, the same argument as before shows that $\overline{pq}=\overline{p'q}=c$. Let $\overline{qp''}=d$. If $q$ were on the parabola then $d=\overline{qp''}=\overline{qp}=c$. But $c$ is the hypotenuse of the right triangle $\triangle qp''p'$ and cannot be equal to one of its legs $d$.
We have proved that $m$ intersects the parabola in only one point, so it is a tangent to the parabola.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{Axiom 7}\label{s.ax7}
\textbf{Axiom}
Given one point $p_1$ and two lines $l_1$ and $l_2$, there is a fold $l$ that places $p_1$ onto $l_1$ and is perpendicular to $l_2$.
%\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=1.2]
\draw[step=10mm,white!50!black,thin] (-1,-1) grid (9,8);
\draw[thick] (-1,0) -- (9,0);
\draw[thick] (0,-1) -- (0,8);
\foreach \x in {0,...,9}
\node at (\x-.2,-.2) {\sm{\x}};
\foreach \y in {1,...,8}
\node at (-.2,\y-.3) {\sm{\y}};
\coordinate (P1) at (5,3);
\fill (P1) circle (2pt) node[below left] {$p_1$};
\coordinate (P1P) at (2.75,5.25);
\fill (P1P) circle (2pt) node[left,xshift=-4pt] {$p_1'$};
\draw[very thick] (1,0) -- node[very near start,right,xshift=2pt] {$l_1$} (3,6);
\draw[very thick,name path=l2] (8,3) -- node[very near start,right,xshift=-2pt,yshift=6pt] {$l_2$} (5,6);
\draw[thick,dotted] ($(1,0)!-.15!(3,6)$) -- ($(1,0)!1.34!(3,6)$);
\draw[thick,dotted] ($(8,3)!-.3!(5,6)$) -- ($(8,3)!1.65!(5,6)$);
\draw[thick,dotted] ($(P1)!-.4!(P1P)$) -- node[very near start,below,yshift=-4pt] {$l_p$} ($(P1)!1.5!(P1P)$);
\draw[very thick,dashed,name path=fold] (-.5,-.25) -- node[very near end,above,xshift=4pt,yshift=6pt] {$l$} (7.5,7.75);
\coordinate (mid) at ($(P1)!.5!(P1P)$);
\fill (mid) circle (2pt) node[below,yshift=-8pt] {$p_m$};
\path [name intersections = {of = fold and l2, by = {perp}}];
\draw[rotate=-45] (mid) rectangle +(8pt,8pt);
\draw[rotate=-45] (perp) rectangle +(8pt,8pt);
\draw[very thick,dotted,->,bend right=50] (5.1,3.2) to (3,5.2);
\end{tikzpicture}
\end{center}
%\end{figure}
\textbf{Derivation of the equation of the fold}
Let $p_1=(x_1,y_1)$, let $l_1$ be $y = m_1x + b_1$ and let $l_2$ be $y=m_2x+b_2$.
Since the fold $l$ is perpendicular to $l_2$, and the line $l_p$ containing $\overline{p_1p_1'}$ is perpendicular to $l$, it follows that $l_p$ is parallel to $l_2$:
\[
y=m_2x+b_p\,.
\]
$l_p$ passes through $p_1$ so $y_1=m_2x_1+b_p$ and its equation is:
\[
y=m_2x+(y_1-m_2x_1)\,.
\]
$p_1'=(x_1',y_1')$, the reflection of $p_1$ around the fold $l$, is the intersection of $l_1$ and $l_p$:
\vspace{-2ex}
\begin{form}{1.2}
m_1x_1'+b_1=m_2x_1'+(y_1-m_2x_1)\\
x_1'=\disfrac{y_1-m_2x_1-b_1}{m_1-m_2}\\
y_1'=m_1x_1'+b_1\,.
\end{form}
The midpoint $p_m=(x_m,y_m)$ of $\overline{p_1p_1'}$ is on the fold $l$:
\[
(x_m,y_m)=\left(\disfrac{x_1+x_1'}{2},\disfrac{y_1+y_1'}{2}\right)\,.
\]
The fold $l$ is the perpendicular bisector of $\overline{p_1p_1'}$, with slope $-\disfrac{1}{m_2}$ since it is perpendicular to $l_2$. First compute the intercept $b_m$ of $l$, which passes through $p_m$:
\vspace{-2ex}
\begin{form}{1.3}
y_m=-\disfrac{1}{m_2}x_m+b_m\\
b_m=y_m+\disfrac{x_m}{m_2}\,.
\end{form}
The equation of the fold $l$ is:
\[
y=-\disfrac{1}{m_2}x+\left(y_m+\disfrac{x_m}{m_2}\right)\,.
\]
\vspace*{-1ex}
\textbf{Example}
Let $p_1=(5,3)$, let $l_1$ be $y=3x-3$ and let $l_2$ be $y=-x+11$.
\vspace{-2ex}
\begin{form}{1.3}
x_1'=\disfrac{3-(-1)\cdot 5-(-3)}{3-(-1)}=\disfrac{11}{4}\\
y_1'=3\cdot \disfrac{11}{4} + (-3)=\disfrac{21}{4}\\
p_m=\left(\disfrac{5+\disfrac{11}{4}}{2},\disfrac{3+\disfrac{21}{4}}{2}\right)=\left(\disfrac{31}{8},\disfrac{33}{8}\right)\,.
\end{form}
The equation of the fold $l$ is:
\[
y=-\disfrac{1}{-1}\cdot x+\left(\disfrac{33}{8}+\disfrac{\disfrac{31}{8}}{-1}\right)=x+\disfrac{1}{4}\,.
\]
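Again, the computation can be reproduced with a few lines of Python:
\begin{verbatim}
# Axiom 7 example: p1 = (5, 3), l1: y = 3x - 3, l2: y = -x + 11.
x1, y1 = 5, 3
m1, b1 = 3, -3
m2, b2 = -1, 11

x1p = (y1 - m2 * x1 - b1) / (m1 - m2)     # reflection p1' on l1: x = 11/4
y1p = m1 * x1p + b1                       #                       y = 21/4
xm, ym = (x1 + x1p) / 2, (y1 + y1p) / 2   # midpoint (31/8, 33/8)

m = -1 / m2                               # the fold is perpendicular to l2
b = ym - m * xm                           # and passes through the midpoint
print((x1p, y1p), (xm, ym), m, b)         # fold: y = x + 0.25
\end{verbatim}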
\chapter{Narrative Writing}
\section{Unit 1}
\section{Applications: Judging Wine Quality}
As a sort of fun application, I found a dataset in UCI's archive from a paper by
Cortez et al. \cite{2009CCAMR}, which charted various physicochemical properties
of a set of white wines together with a corresponding 'quality' level (judged by
a committee, the CVRVV) for each data point. I decided to try this dataset not
only because knowing what makes for quality alcohol is clearly a very important
task, but also because the original paper noted the authors' uncertainty
regarding the relevancy of all of the input variables. That is, this is an
interesting opportunity to try a feature-selection method: my method of choice
this time was ridge regression, the $\ell^2$-penalized counterpart of LASSO. See Figures
\ref{fig:linearwine} and \ref{fig:ridgewine} for the linear and ridge regression
variants, respectively.
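For concreteness, ridge regression only changes the objective by adding an $\ell^2$ penalty $\lambda\|w\|^2$ to the squared error; a minimal \emph{serial} SGD sketch of such an update is shown below (the function and variable names are purely illustrative and are not taken from my \hogwild\ implementation):
\begin{verbatim}
# Minimal serial SGD for ridge regression (illustrative sketch only; the
# experiments themselves used a parallel, Hogwild-style implementation).
import numpy as np

def ridge_sgd(X, y, lam=0.1, lr=1e-3, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Gradient of (x_i . w - y_i)^2 + lam * ||w||^2 at one sample.
            grad = 2 * (X[i] @ w - y[i]) * X[i] + 2 * lam * w
            w -= lr * grad
    return w

# e.g. w = ridge_sgd(features, quality) on the standardised wine data.
\end{verbatim}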
\begin{figure}[!htb]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{./resources/linear_wine}
\caption{}\label{fig:linearwine}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{./resources/ridge_wine}
\caption{}\label{fig:ridgewine}
\end{subfigure}
\caption{
Both converge in roughly four to eight epochs, but it should be noted
that neither converges to the true solution. Regardless of how I chose the
learning rate, it appeared that both runs of \hogwild\ would get stuck in
a noise ball somewhere close to the true solution. This is an important
failing of stochastic gradient methods in general: sometimes the noise
generated will prevent us from reaching the 'true' solution, and indeed the
ridge-regressed version didn't properly feature-select, whereas the solution
computed via CVX weighted the 11th feature most heavily (the 11th feature,
to my amusement, is alcohol content; certainly an important part of what
makes a good wine...).
}
\end{figure}
\section{Testing} \label{sec:Testing}
As can be seen in the test suites implemented, the testing for this project was
an attempt at Behaviour-Driven Development (BDD), a form of Test-Driven
Development in which tests are written to include descriptions of the expected
behaviour, an approach which also helps to make the code more readable
(\cite[][Ch.~1]{wynne2017cucumber}). The main library used for testing was
\texttt{ScalaTest}, ``an extensive BDD suite with numerous built-in specs''
(\cite[][p.~21]{hinojosa2013testing}). Some of Scala's native features also
make it easy to write more readable tests, such as its infix notation, which
works well with the members of the \texttt{FlatSpec} class and
\texttt{Matchers} trait.
Most of the automated testing consists of integration tests, since groups
of objects are tested together but not necessarily the whole application.
This approach was chosen where it was felt that it would be beneficial to test
how well the more heavily integrated objects interacted. Whenever it was appropriate, mocks of
other objects were used, as can be seen in the \texttt{StringClassifierTester}
class. The \emph{Mockito} libraries were used for object mocking, as they work
well with \texttt{ScalaTest}, with the aid of Scala's \texttt{MockitoSugar}
trait, to aid with the syntax (\cite[][pp.~102-106]{hinojosa2013testing}).
The automated tests cover mostly the persistence and business logic layers, but
unfortunately not much was done with the presentation layer due to time
constraints. For the persistence layer, a test database schema was created, and
it needs to be loaded into a \emph{MySQL} database before it can be used. The
specifications for these need to be entered into \texttt{.properties} files
before running the tests, otherwise they will fail. Each time they run, any
changes made to the test database will be overwritten by the persistence
helper, which ensures consistency. The only reason this was done with
\emph{MySQL} for the current implementation is that the database is local,
therefore performance is not significantly affected. However, for future
iterations a portable database would be more suitable for persistence layer
testing.
Since TDD was used, in the vast majority of cases the tests were implemented
before the classes being tested were written. The classes were then written and
refactored until the tests were satisfied. In some cases, further tests were
written where it was noticed that more specific behaviours were required of
particular classes. After the tests were written, they behaved like a harness
which allowed for any further necessary changes to be made with confidence, and
helped to speed up the implementation phase
(\cite[][pp.~x-xi]{hinojosa2013testing}). The result of running the tests at
the time of writing can be seen in \hyperref[appendix4]{Appendix IV}.
Whenever `on-the-fly' manual tests were needed, these were done using the
Simple Build Tool (SBT) console feature. This feature is a very interesting
combination of Scala's Read-Eval-Print-Loop (REPL) with the build tool's
ability to quickly integrate all necessary dependencies into a REPL console
session (\cite[][p.~1]{hinojosa2013testing}). The advantage of testing in this
manner was that it was possible to run a session with the compiled code of
incomplete classes and test different aspects of their behaviour which did
not seem suited to automated testing.
\documentclass[11pt,a4paper]{article}
\usepackage{od}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{tikz}
\usetikzlibrary{shapes.misc}
\usetikzlibrary{arrows.meta}
\pagestyle{headings}
\title{TRIZ-Summit Cup – 2019/2020}
\author{The official English version, moved to \LaTeX\ by Hans-Gert Gräbe,
Leipzig.}
\date{November 10, 2019}
\newcommand{\video}{Videos should be short (from 2 to 5 minutes). All authors
of the given video should be mentioned: screenplay writer, film editor,
actors, etc.
This work is directed at forming methodological material for TRIZ teaching.
TRIZ Summit web-site\footnote {\url
{http://triz-summit.ru/en/contest/competition/video/}, \\ \url
{https://www.youtube.com/channel/UCjMNOjboWRBQA72DJvaC7ew/featured}}
contains videos, presented to the last TRIZ Summit Cup.
Tasks for the TRIZ Summit Cup-2018/2019 were prepared by M.S. Rubin and N.V. Rubina;
the nomination «fantasizing» was prepared by P.R. Amnuel.
}
\newcommand{\melies}{
\begin{minipage}{.6\textwidth}
\paragraph{1.}
The founder of the genre of science-fiction film is the French filmmaker
George Méliès. Each of more than 500 short-length films shot by Méliès, are
distinguished by inimitable «filmmaker’s mannerism». While shooting the film
«A visit to the wreckage of cruiser «Man» (1897--1898) it was necessary to
show a distinct picture of people working under water in the episode «divers
at work», and also a distinct picture of wreckage of the cruiser. Underwater
photography existed in those times, however, they were looked upon as
something exotic: sufficiently good quality could not be attained even in a
rather transparent water. For example, look at one of the first photographs
made under water, made in France in 1893 (we hope, you’ll agree that the
quality is insufficient for a film). How did they manage to shoot a good
quality underwater sequence?
\end{minipage} \hfill
\begin{minipage}{.35\textwidth}
\includegraphics[width =\textwidth]{Bild-1.jpg}
\end{minipage}
\medskip
The film by George Méliès «In the Kingdom of
Fairies»\footnote{\url{https://www.youtube.com/watch?v=paM_BPyRT1o&t=542s}}
uses a similar effect. It is surprising, but the same special effect was used
by the filmmaker Alexander Ptushko in the screened fairy tale «Sadko» in
sequences, where the action takes place at the bottom of the sea, in the realm
of the Sea King.
What other special effects are used in cinema and for solving what problems
are they used? What contradictions appear thereby and what techniques are used
for resolving these contradictions?
}
\newcommand{\hollywood}{
\paragraph{2.}
In the history of the biggest corporate film company – Hollywood – there are
episodes, which are associated neither with the creative process, nor with the
invention of new technical means for film-making, but with a purely law
problem. By the turn of the 20$^\mathrm{th}$ century the man, who possessed
main patents for film-making equipment was Thomas Edison (that very inventor
of a filament lamp). However, the majority of film-makers of that time were
main owners of independent companies. Besides, they all worked with equipment,
which they themselves improved and adapted to making films based on their own
ideas. Because of that they did not consider it to be necessary to pay
interest to Edison for using his patents. On December 18, 1908 Edison
announced the creation of Motion Picture Patents Company (MPPC), which entered
the history of cinema as the Edison Trust. Several film companies entered this
Trust, but the majority preferred to remain independent. The day of December
24, 1908, when the jurisdiction of the Trust entered into force, became known
in the history of American cinematography as «black Christmas». According to
the rules introduced by Edison, independent companies had to pay the Trust
huge amounts of money for their right to shoot and demonstrate films. The
police closed 500 out of 800 film theaters in New York at the pretext of
violation of patent rights. What decision did the independent companies take,
in order not to depend upon Edison Trust?
Do you know of any examples of commercial marketing solutions, which
contributed to the evolution of film art or braked it? }
\newcommand{\kinogenres}{
\paragraph{1.}
What genres of cinema do you know? Create the chronology of origination of
different genres of the cinema. Is it possible to use the examples of genres
of film art for illustrating the Lines of system evolution, known in TRIZ (for
example, dynamization, mono-bi-poly-trimming, transition to micro-level)? What
genres of film art do you like best of all and why?
\paragraph{2.}
In film history there are people whose achievements and contribution to the
development of cinema are rather significant, while their names are forgotten.
Collect information concerning the biographies of these film-makers. What
problems did they solve and what techniques did they use? What do you think,
what Qualities of a Creative Personality helped them to achieve their goals?
Often, having solved a particular problem the person stops and, being
satisfied with his achievements, cannot create anything new. It is
surprising, but the company of Lumière brothers, who have entered history as
creators of cinematography, ceased to produce films in 1900, only 5 years
after the first commercial show of films on December 28, 1895 – the birthday
of cinema. What do you think, what can be an obstacle for a person in
attainment of significant results in creative activity?}
\newcommand{\kinotools}{
\paragraph{1.}
Select several techniques, which are used by film directors, cameramen, sound
engineers and screenplay writers (editing, pauses, special effects,
slow-motion and quick-motion shooting, etc.) for obtainment of certain effects
in cinema. Use these techniques in your videos. Describe your idea in greater
detail in the description accompanying the video. Is it possible to intensify
these effects using TRIZ methods?
\paragraph{2.}
Videos on history of photography, cinema and inventions, which were made in
these fields. }
\newcommand{\contradictions}{ In creating an animated cartoon film it is
necessary to resolve several contradictions:
\begin {itemize}
\item The number of drawings of which the film consists, should be as large as
possible, so that the picture should be continuous, without jumps, and it
should be as small as possible, in order to reduce time for creating an
animated cartoon film.
\item The drawings, of which the cartoon film consists, should have the
largest possible number of details, so that the characters should be
emotional, and should contain the smallest number of details in order to
facilitate the process of creating the animated cartoon film.
\item The camera filming individual drawings, should be immobile, so that the
picture should be distinct and include the objects, which gradually change
their position, and should be mobile, in order to include different shooting
angles and different volume of space, surrounding the object.
\end{itemize}
How were these contradictions solved in different techniques of creating
animated cartoon films?
}
\begin{document}
\maketitle
\section{Category: 8--10 years }
\subsection*{Nomination «Inventive activity»}
\paragraph{1.}
\contradictions
\paragraph{2.}
G.S. Altshuller in his book «Find the idea» proposed a method of creating
plots for an animated cartoon film (or a tale), which could be called «Fairy
Tales with contradictions». An algorithm for creating such a tale (a plot for
an animated cartoon film) is as follows:
\begin{enumerate}
\item Select a character or an object of a fairy tale plot
\item Imagine the environment of the character in a nutshell.
\item Apply the technique of fantasizing (increase, decrease, do-it-reverse,
animate, imagination binomial, etc.).
\item Formulate an anti-idea and contradiction. Create a plot based on solving
of this contradiction.
\item Introduce a restriction or create a new contradiction in the plot using
cl. 3. Solutions of the contradiction-development of plot.
\item The result is a very fast moving plot.
\end{enumerate}
Invent a plot for an animated cartoon film, using this algorithm.
\subsection*{Nomination «Fantasizing»}
\paragraph{1.}
In 1974 American science fiction author James Teeptry wrote a story «The girl,
who was plugged in». There is such an episode in this story: the film is
projected with the aid of lasers to the clouds above the city and all
citizens, looking upwards, can watch the film. Nowadays it almost ceased to be
science fiction – laser shows on the clouds have become popular. However,
these are not exactly films. Invent a new fantastic method for demonstrating
films.
\paragraph{2.}
Soviet science fiction author Nickolay Vassilenko in his story «The Curved
mirror» (1977) described a fantastic mirror: when a person looks into it, he
sees not the reflection, but a small film, in which the essence of the human
being, who looks into the mirror, is presented in the image of a certain
animal. Write a science fiction story, based on this idea, however, change the
idea with the help of one of the techniques for fantasizing.
\subsection*{Nomination «TRIZ Tools»}
\paragraph{1.}
What elements does a cartoon camera consist of? In what way are they
interconnected? Which functions are played by each element? Of what materials
is it possible to manufacture characters of an animated cartoon film? Invent
your own character for an animated cartoon film and a small story about him.
\paragraph{2.}
Compile a collection of cards on using the techniques of fantasizing in
animated cartoon films and screen adaptations of fairy tales. Analyze this
collection of cards. What techniques are most often used for creating typical
characters or images (for example, imagine how the image of a curious,
mischievous and courageous Buratino is created: long (increased) nose seems to
be created «to be peeped into other people’s affairs»; «personalized» wood log
«does not drown in water and cannot be burnt by fire»). Compare, how similar
images are created in different animated cartoon films (screened fairy tales):
kind and evil magicians, hero warriors, beautiful princesses, capricious
princesses, etc. What is your favorite animated cartoon film? Which particular
episodes in this animated cartoon film do you like? Are any techniques for
fantasizing used in them?
\subsection*{Nomination «Research»}
Compile a collection of cards on inventors (or scientists) and their
inventions (discoveries). Analyze this cards collection. What problems are
solved by the characters of these animated cartoon films? What techniques were
used for solving problems? How are they found solutions (discoveries) used in
the life of people?
\subsection*{Nomination «TRIZ Videos»}
\kinotools
\video
\clearpage
\section{Category: 11--14 years}
\subsection*{Nomination «Inventive activity»}
\paragraph{1.}
In 1909 the American film-maker David Griffith shot a mute short film «Lonely
villa»\footnote{\url{https://www.youtube.com/watch?time_continue=13&v=5RdbnyNYAv8}}
(which lasted only 7 minutes). This is an action film about an attempt at
robbery of a lonely villa. Most probably, this was the first thriller movie
(action film causing tension and excitement with the spectator). Three points
of view are presented in this film:
\begin{itemize}
\item the family staying inside the house and trying to rescue from robbery;
\item the gangsters attacking a rich house;
\item head of the family and policemen, which appear «at the last moment».
\end{itemize}
Griffith had an objective to show (during 7 minutes) the starting of this
story, penetration of gangsters into the house, the fear of the housewife and
her three daughters and (the most important) – the rescue. Prior to creation
of «Lonely villa» the films contained such plots, which embraced one event and
reflected the sequence of actions of the characters chronologically (the time
in the movie coincided with the time of the event in life). What’s to be done,
in order to show different lines of the plot during a short time and at the
same time to intensify the impression (tension) with the spectators? What film
technique did David Griffith use for the first time? What other film
techniques did Griffith invent and for the creation of what effects were they
used? Select a problem or several problems, which were solved by D. Griffith
and analyze them according to the following pattern: formulate contradictions,
IFR, enumerate resources, which are present in the problem, techniques for
resolving contradictions, which could be applied to this problem. Which
techniques of filming do you know and for the solving of what problems are
they used?
\paragraph{2.}
\contradictions
\subsection*{Nomination «Fantasizing»}
\paragraph{1.}
The American science fiction author Thomas Sherred in his story «The Attempt»
described a film, which was shot with the aid of a time machine. Invent a
fantastic method for creating films. How will the films be shot in future?
\paragraph{2.}
In the story «The Flying Eagle» by Pavel Amnuel (1970) a dramatist, inventing
the plot for a would-be play, does not write the text, as they do now, but
creates a video film, imagining that the play has already been staged. He does
not need actors, he himself plays the characters embodying their acting manner
and their behavior. Invent a story based on this new idea, but change it using
some technique of fantasizing.
\subsection*{Nomination «TRIZ Tools»}
Describe a technological operation principle of cinema. Of what parts does the
CINEMA consist? How do these parts interact for the purpose of obtainment of a
cinematographic effect? What effects (physical, chemical and physiological)
are used in film shooting and demonstration? How did perception and mentality
of people change during decades of existence of cinema? What do you think:
what effects will be used in the cinema in future?
\subsection*{Nomination «Research»}
\paragraph{1.}
What genres of cinema do you know? Create the chronology of origination of
different genres of the cinema. Is it possible to use the examples of genres
of film art for illustrating the Lines of system evolution, known in TRIZ (for
example, dynamization, mono-bi-poly-trimming, transition to micro-level)? What
genres of film art do you like best of all and why?
\paragraph{2.}
In 1927 the first show of the film «The Jazz
Singer»\footnote{\url{https://my.mail.ru/mail/wikki0508/video/5622/29304.html}}
took place. It was the first sound film in the history of cinematography. It
is difficult for a modern reader to imagine a film without words and sound
effects. However, by far not all the film-makers immediately accepted this
revolutionary change. «Muteness, two-dimensionality and monochrome nature of
the film are not its shortcomings, but its «structural essence». Film art does
not need to overcome them, since new expressive means will only hinder its
further improvement», -- that is what Yury Tynyanov wrote about peculiarities
of film art in 1930-ies. Try to prove the righteousness of opposing
viewpoints, first from the standpoint of those who oppose to introduction of
sound in the cinema, and then from the standpoint of followers of this
technology. What do you think: what new sound effects will appear in the film
art in future?
\subsection*{Nomination «TRIZ Videos»}
\kinotools
\video
\clearpage
\section{Category: 15--17 years}
\subsection*{Nomination «Inventive activity»}
\melies
\hollywood
\subsection*{Nomination «Fantasizing»}
\paragraph{1.}
A device is described in the novel by American science fiction author Philip
Dick «Do androids dream of electric
sheep?»\footnote{\url{http://lib.ru/INOFANT/DICKP/sheeps.txt}}, which enables
a group of people to experience feelings of a selected person and to perceive
the world in the same way as he perceives it. Thus it is possible to shoot
films and the spectators will feel everything that the actor felt during the
shooting process. Invent a new fantastic idea, changing the idea of Dick with
the aid of a «staged matrix» of G.S. Altshuller.
\paragraph{2.}
Alexander Belyaev in his novel «The Ruler of the World»
(1929\footnote{\url{https://www.litmir.me/br/?b=3062&p=1}.}) described
«theater of thought», in which the actors don’t act on the stage, but only
imagine their acting, while the spectator «catches» these thoughts and
«watches» the performance mentally. Write a science fiction story about the
shooting of such a «thought-based film», changing the idea of Belyaev by using
one of the techniques of fantasizing.
\subsection*{Nomination «TRIZ Tools»}
It is possible to single out several Lines of development in the evolution of
technical systems, for example, «mono-bi-poly-trimming»; «dynamization»;
«emptiness»; «transition to microlevel»; «personal – personal-corporate –
corporate». Compile a collection of cards, illustrating these Lines of
development in the field of cinema or photography. Take into account that both
cinema and photography are at the same time art and production technology as
well as a commercial product (film theaters, advertizing, attending consumer
services, etc.).
\subsection*{Nomination «Research»}
\kinogenres
\subsection*{Nomination «TRIZ Videos»}
\kinotools
\video
\clearpage
\section{Category: students}
\subsection*{Nomination «Inventive activity»}
\melies
\hollywood
\subsection*{Nomination «Fantasizing»}
\paragraph{1.}
In science fiction stories by the Japanese author Kobo Abe «Totaloscope» and
the Italian writer Lino Aldani «Onirofilm» (both stories were published in
1965) films are described, which were shot with the full participation effect
and feedback with the spectator. Change this idea using such techniques of
fantasizing, using several techniques instead of one. Describe the result.
\paragraph{2.}
The American science fiction author Frank Herbert in his space epic «The Dune»
(1965\footnote{\url{https://www.litmir.me/br/?b=196731&p=1}.}) described how
one of the characters controls the behavior of other characters using special
voice intonations. At that the person, who is being controlled, understands it
fairly well but cannot do anything: he has to obey. Think of a plot for an
adventure film of the future taking the idea of Herbert as a basis, but
changing it with the development of plot using the techniques of fantasizing.
\subsection*{Nomination «TRIZ Tools»}
Trends of technical systems evolution is the basis of TRIZ. There are several
possible classifications of these trends. We offer you one of such
classifications.
\begin{center}
\newcommand{\law}[2]{\parbox{#1cm}{\small\centering #2}}
\tikz[>={Triangle[length=3pt 9, width=3pt 3]}] {
\node[draw] at (5,7) [rectangle]
(A1) {\law{3}{Trend of increasing ideality}};
\node[draw] at (0,6) [rectangle]
(A2) {\law{4}{Trend of increasing completeness (Trend of completeness of
performing operation)}};
\node[draw] at (0,2) [rectangle]
(A3) {\law{4}{Trend of increasing coordination of system parts}};
\node[draw] at (0,4) [rectangle]
(A4) {\law{4}{Trend of conductivity (of energy, flows, etc.)}};
\node[draw] at (5,5) [rectangle]
(A5) {\law{3}{Trend of irregular evolution of system parts}};
\node[draw] at (10,0) [rectangle]
(A6) {\law{4}{Tendency of system S-curve evolution}};
\node[draw] at (0,0) [rectangle]
(A7) {\law{4}{Tendency of elimination of human involvement}};
\node[draw] at (10,4) [rectangle]
(A8) {\law{4}{Trend of increasing controllability and Su-Field interactions}};
\node[draw] at (5,1) [rectangle]
(A9) {\law{3}{Trend of increasing dynamicity}};
\node[draw] at (5,3) [rectangle]
(A10) {\law{3}{Trend of transition to the super-system}};
\node[draw] at (10,6) [rectangle]
(A11) {\law{4}{Trend of transition from macro-level to micro-level}};
\node[draw] at (10,2) [rectangle]
(A12) {\law{4}{Tendency of system evolution due to trimming or ??}};
\draw[->] (A1) -- (2.8,7) |- (A2) ;
\draw[->] (2.8,7) |- (A3);
\draw[->] (2.8,7) |- (A4);
\draw[->] (2.8,7) |- (A5);
\draw[->] (2.8,7) |- (A7);
\draw[->] (2.8,7) |- (A9);
\draw[->] (2.8,7) |- (A10);
\draw[->] (A2) -- (-2.7,6) |- (A7);
\draw[->] (A5) -- (7.2,5) |- (A9);
\draw[->] (7.2,5) |- (A10);
\draw[->] (7.2,5) |- (A6);
\draw[->] (A1) -- (7.5,7) |- (A8) ;
\draw[->] (7.5,7) |- (A11) ;
\draw[->] (7.5,7) |- (A12) ;
\draw[->] (A8) -- (12.5,4) |- (A12) ;
}
\end{center}
Analyze the stages of cinematography evolution using the system of Trends of
Technical Systems Evolution. How can one forecast further development of film
art taking into account the tendencies in cinematography evolution?
\subsection*{Nomination «Research»}
\kinogenres
\subsection*{Nomination «TRIZ Videos»}
\kinotools
\video
\end{document}
% !TEX root = ../zeth-protocol-specification.tex
\section{Data structures and representation}\label{preliminaries:structured-data}
\newcommand{\STR}{\mathsf{STR}}
\subsection{Structured data}\label{preliminaries:data-types:formulation}
When describing the operations to be performed and the data to be manipulated as part of the protocol, we commonly employ tuples of related data where each element of the tuple has some associated semantic meaning and which must often satisfy some conditions. In this section, we develop a framework to reason about such \emph{structured} data, where a single datum may consist of one or more logical parts (called \emph{fields}). The framework is built on top of simple mathematical concepts such as sets, and mappings between them, ensuring that we can always reason about structured data in a rigorous way. We also define notation to aid the specification of structured data, and to refer to specific components of a datum. This will be used extensively in the specification of the protocol.
As a simple motivating example, consider a protocol that processes data relating to individual people. This fictional system may send and receive data such as \emph{name}, \emph{age} and \emph{address} for a single person, grouping this data into a logical unit. Further, each piece of data must satisfy specific conditions (\emph{name} must be a series of characters from some alphabet, \emph{age} must be a positive integer, etc.). We shall make use of this example several times during the formulation below.
In what follows, let $\STR = \smallset{a, b, \ldots, y, z}^{*}$ (the Kleene star of the \emph{Roman alphabet}). In our formulation, field names $f_i$ will be elements in this set.
\begin{remark}
Note that a similar formulation could be made using an arbitrary set, such as the same alphabet augmented with specific symbols, or the alphabet of a different language. Our choice of $\STR{}$ here is for simplicity.
\end{remark}
We begin by defining a data type as a set of values called ``fields'', each with a ``name'' from $\STR$. Abstract sets are used to constrain the values of each field.
\begin{definition}[Structured Data Type]\label{preliminaries:def:datatype}
Let $f_0, \ldots, f_{n-1}$ be $n$ distinct elements of $\STR$ and let $V_0, \ldots, V_{n-1}$ be sets, for some $n \in \NN$. We define \emph{the structured data type $\datatypestyle{T}$ with fields $\smallset{(f_i, V_i)}_{i \in [n]}$} to be a set of values:
\[
\datatypestyle{T} = V_0 \times \cdots \times V_{n-1}
\]
with associated post-fix ``dot'' operators $.f_i : \datatypestyle{T} \to V_i$ for $i = 0, \ldots, n-1$, acting on values $\mathbf{x} \in \datatypestyle{T}$ to extract the individual elements:
\[
\mathbf{x}.f_i = v_i \text{, where } \mathbf{x} = (v_0, \ldots, v_{n-1}) \in \datatypestyle{T}
\]
Here, we say that the $i$-th field has \emph{field name} $f_i$, with \emph{value set} $V_i$. Each ``dot'' operator $.f_i$ \emph{extracts} the $i$-th component, or the \emph{value with field name $f_i$}.
\end{definition}
\begin{example}\label{preliminaries:eg:datatype-person}
Consider our example protocol that processes information about people. A potentially useful structured data type $\datatypestyle{Person}$ may be defined with fields:
\[
\smallset{ (\varstyle{name}, \STR), (\varstyle{age}, \NN), (\varstyle{height}, \RR^+) }
\]
Values $\mathbf{p}$ in $\datatypestyle{Person}$ are simply tuples in $\STR{} \times \NN \times \RR^+$, with semantic meaning (name, age, height) assigned to each component of $\mathbf{p}$.
Examples of valid elements in $\datatypestyle{Person}$ include $\mathbf{a} = (\varstyle{alice}, 28, 1.65)$ and $\mathbf{b} = (\varstyle{bob}, 31, 1.74)$, where the following equalities hold:
\begin{align*}
\mathbf{a}.\varstyle{name} &= \varstyle{alice}, \\
\mathbf{b}.\varstyle{age} &= 31, \\
\mathbf{b}.\varstyle{height} &= 1.74;
\end{align*}
\end{example}
For clarity, structured data types may be specified using tables of names, descriptions and value sets, rather than sets of the form $\smallset{(f_i, V_i)}_{i \in [n]}$. Similarly, it is frequently convenient to include the \emph{field names} alongside values when specifying structured data values.
\begin{example}\label{preliminaries:eg:datatype-person-table}
$\datatypestyle{Person}$ from \cref{preliminaries:eg:datatype-person} might be described in table-form as follows:
\begin{table}[H]
\centering
\begin{tabular}{cp{15em}c}
Field & Description & Data type \\ \toprule
$\varstyle{name}$ & Name of the person & $\STR{}$ \\ \midrule
$\varstyle{age}$ & Age in years & $\NN$ \\ \midrule
$\varstyle{height}$ & Height in meters & $\RR^+$ \\ \midrule
\end{tabular}
\end{table}
\end{example}
\begin{example}\label{preliminaries:eg:datatype-person-value-with-field-names}
The values $\mathbf{a}$ and $\mathbf{b}$ in \cref{preliminaries:eg:datatype-person} might be written as follows:
\begin{align*}
\mathbf{a} &=
\{\varstyle{name}:\: \varstyle{alice}, \; \varstyle{age}:\: 28, \; \varstyle{height}:\: 1.65\} \\
\mathbf{b} &=
\{\varstyle{name}:\: \varstyle{bob}, \; \varstyle{age}:\: 31, \; \varstyle{height}:\: 1.74\}
\end{align*}
\end{example}
\begin{remark}[``dot'' operators in assignment]\label{preliminaries:rem:dot-assignment}
The ``dot'' operators may be used in algorithm descriptions to indicate \emph{assignment to a specific component}. For example $\mathbf{a}.\varstyle{age} \gets 29$ means that the value of the $\varstyle{age}$ field of $\mathbf{a}$ is replaced by the value $29$.
Formally, for a structured data type $\datatypestyle{T}$ with fields $\smallset{(f_i, V_i)}_{i \in [n]}$ where $\mathbf{x} = (v_0, \ldots, v_{n-1}) \in \datatypestyle{T}$ and $v_i^\prime \in V_i$:
\[
\mathbf{x}.f_i \gets v_i^\prime
\]
is equivalent to:
\[
\mathbf{x} \gets (v_0, \ldots, v_{i-1}, v_i^\prime, v_{i+1}, \ldots, v_{n-1})
\]
\end{remark}
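As an informal aside (not part of the specification), the structured data type $\datatypestyle{Person}$ of \cref{preliminaries:eg:datatype-person}, together with the ``dot'' extraction and assignment operators, could be realised in code roughly as follows; all names in this sketch are illustrative only.
\begin{verbatim}
# Illustrative sketch only -- not part of the specification.
from dataclasses import dataclass

@dataclass
class Person:
    name: str      # value set STR
    age: int       # value set NN
    height: float  # value set RR^+

a = Person(name="alice", age=28, height=1.65)
assert a.name == "alice"   # extraction with the dot operator
a.age = 29                 # assignment to a specific component
assert a.age == 29
\end{verbatim}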
We define one further operator and related assignment notation, convenient in cases where $V_i = X^m$ for sets $X$ and $m \in \NN$.
\begin{definition}[Square bracket operator]\label{def:squareb-operator}
For $m \in \NN$ and set $X$, define the operator $[\ ] : X^m \times [m] \to X$ as:
\[
  \mathbf{x}[i] = x_i \text{ where } \mathbf{x} = (x_0, \ldots, x_{m-1})
\]
For the set $X^*$, the operator takes the form $[\ ] : X^* \times \NN \to X$, defined as:
\[
\mathbf{x}[i] =
\begin{cases}
x_i & \text{ if } \len{\mathbf{x}} > i \text{ where } \mathbf{x} = (x_0, \ldots) \\
\bot & \text{otherwise}
\end{cases}
\]
\end{definition}
\begin{remark}[Square bracket operators in assignment]
Similarly to \cref{preliminaries:rem:dot-assignment}, we develop assignment notation for the square bracket operator $[\ ]$.
Let $\mathbf{x} = (x_0, \ldots, x_{m-1})$ be a member of $X^m$, and $x_i^\prime$ be some element in $X$. The statement:
\[
\mathbf{x}[i] \gets x_i^\prime
\]
is equivalent to:
\[
    \mathbf{x} \gets (x_0, \ldots, x_{i-1}, x_i^\prime, x_{i+1}, \ldots, x_{m-1})
\]
Informally, this can be interpreted as replacing the $i$-th component of $\mathbf{x}$ with $x_i^\prime$.
\end{remark}
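Again informally (and not part of the specification), the square bracket operator on $X^*$ could be sketched in code as follows, with \texttt{None} standing in for $\bot$; the function name is illustrative only.
\begin{verbatim}
# Illustrative sketch only -- not part of the specification.
from typing import Optional, Sequence, TypeVar

X = TypeVar("X")

def index(xs: Sequence[X], i: int) -> Optional[X]:
    """Return the i-th component if len(xs) > i, else None (modelling bottom)."""
    return xs[i] if 0 <= i < len(xs) else None

assert index((4, 8, 15), 1) == 8
assert index((4, 8, 15), 5) is None
\end{verbatim}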
\begin{remark}[Deep structures and chained ``dot'' operators]
Consider the case of structured data $\datatypestyle{T}$ with fields $\{(f_i, V_i)\}_{i \in [n]}$ for $n \in \NN$. Let $\datatypestyle{T^\prime}$ be another structured data type with fields $\{(f^\prime_i, V^\prime_i)\}_{i \in [n^\prime]}$ for $n^\prime \in \NN$, and assume that $V_j = \datatypestyle{T^\prime}$ for some $j \in [n]$. Informally, the values of the $j$-th field of elements of $\datatypestyle{T}$ are themselves structured data of type $\datatypestyle{T^\prime}$.
In this case, ``dot'' operators may be \emph{chained}, so that $\mathbf{x}.f_j.f^\prime_k$ refers to the $k$-th field $v^\prime_k$ of the $j$-th field $v_j$ of $\mathbf{x} \in \datatypestyle{T}$.
\end{remark}
\begin{example}\label{preliminaries:eg:datatype-person-deep}
Define a structured data type $\datatypestyle{Address}$ with fields $(\varstyle{country}, \STR{}), (\varstyle{zipcode}, \STR{})$. We redefine the structured data type $\datatypestyle{Person}$ from \cref{preliminaries:eg:datatype-person}, with an extra field $\varstyle{address}$ of type $\datatypestyle{Address}$. That is, $\datatypestyle{Person}$ is the structured data type with fields:
\begin{table}[H]
\centering
\begin{tabular}{cp{15em}c}
Field & Description & Data type \\ \toprule
$\varstyle{name}$ & Name of the person & $\STR{}$ \\ \midrule
$\varstyle{age}$ & Age in years & $\NN$ \\ \midrule
$\varstyle{height}$ & Height in meters & $\RR^+$ \\ \midrule
$\varstyle{address}$ & Address of the person & $\datatypestyle{Address}$ \\ \midrule
\end{tabular}
\end{table}
An example element $\mathbf{a}$ in $\datatypestyle{Person}$ is:
\begin{align*}
\mathbf{a} =
\{ &\\
& \varstyle{name}:\: \varstyle{alice}, \\
& \varstyle{age}:\: 28, \\
& \varstyle{height}:\: 1.65, \\
& \varstyle{address}:\: (\varstyle{country}:\: \varstyle{UK}, \varstyle{zipcode}:\: \varstyle{SW1A})\\
\}
\end{align*}
In this case, the following equalities using the dot and square bracket operators all hold:
\begin{align*}
& \mathbf{a}.\varstyle{name} = \varstyle{alice} \\
& \mathbf{a}.\varstyle{height} = 1.65 \\
& \mathbf{a}.\varstyle{address}.\varstyle{country} = \varstyle{UK} \\
& \mathbf{a}.\varstyle{address}.\varstyle{zipcode} = \varstyle{SW1A} \\
& \mathbf{a}.\varstyle{address}.\varstyle{country}[1] = \varstyle{K}
\end{align*}
\end{example}
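Informally, and purely for illustration, the nested structure of this example, with chained ``dot'' operators and the square bracket operator, could be sketched as follows (all names illustrative only):
\begin{verbatim}
# Illustrative sketch only -- not part of the specification.
from dataclasses import dataclass

@dataclass
class Address:
    country: str
    zipcode: str

@dataclass
class Person:
    name: str
    age: int
    height: float
    address: Address

a = Person("alice", 28, 1.65, Address("UK", "SW1A"))
assert a.address.zipcode == "SW1A"  # chained dot operators
assert a.address.country[1] == "K"  # indexing into the string value
\end{verbatim}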
\subsection{Representations}\label{preliminaries:data-types:representation}
The binary alphabet $\bin$, denoted $\BB$, is used to represent the presence or absence of an electrical signal in a computer. In fact, every piece of information in a computer is represented as a string of bits.
We assume the existence of an efficient binary representation for some set of primitive datatypes (such as the natural numbers $\NN$, or alphanumeric characters). Structured data types built up from primitive types (as described above) can then recursively be assigned similarly efficient representations.
This is used to define the following functions to \emph{encode} data to its bit-string representation, and to \emph{decode} such bit-strings back to elements of the original type.
\begin{definition}
For a set $X$ of values which are to be represented as bit strings, we define functions:
\begin{align*}
\encode{}{X} &: X \to \BB^* \\
\decode{}{X} &: \BB^* \to X \cup \bot
\end{align*}
satisfying
\[
\decode{\encode{x}{X}}{X} = x\ \forall x \in X
\]
to be the functions which encode (resp.~decode) elements of $X$ into (resp.~from) the bit-string representations chosen above.
We note that $\decode{}{X}$ may return $\bot$ in the case that the input bit-string is malformed.
\end{definition}
Without ambiguity, we overload the functions $\encode{}{}$ and $\decode{}{}$ to mean $\encode{}{X}$ and $\decode{}{X}$ where the set $X$ is clear from context.
In the following sections, we assume that elements of $\NN$ are encoded as big-endian binary numbers in the natural way. We denote by $\NN_{b}$ the set of natural numbers that can be uniquely encoded in this way using $b$ bits (possibly with padding). In other words,
\[
\NN_{b} = \smallset{x \in \NN\ \suchthat\ \encode{x}{\NN} \in \BB^{b}}
\]
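As an informal illustration of this convention (not part of the specification), big-endian encoding and decoding of elements of $\NN_{b}$ could be sketched as follows, with \texttt{None} standing in for $\bot$; all names are illustrative only.
\begin{verbatim}
# Illustrative sketch only -- not part of the specification.
from typing import Optional

def encode_nat(x: int, b: int) -> str:
    """Encode x in NN_b as a big-endian bit string of exactly b bits."""
    if x < 0 or x >= 2 ** b:
        raise ValueError("x is not an element of NN_b")
    return format(x, "b").zfill(b)

def decode_nat(bits: str) -> Optional[int]:
    """Decode a big-endian bit string; None models the bottom value."""
    if bits == "" or any(c not in "01" for c in bits):
        return None
    return int(bits, 2)

assert decode_nat(encode_nat(13, 8)) == 13   # 13 -> "00001101" -> 13
\end{verbatim}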
| {
"alphanum_fraction": 0.6834637544,
"avg_line_length": 64.6055555556,
"ext": "tex",
"hexsha": "2ef683a9783ef692eb45ff64b5311a37b1d66175",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-07-26T04:51:29.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-07-26T04:51:29.000Z",
"max_forks_repo_head_hexsha": "ba29c67587395f5c7b26b52ee7ab9cba12f1cc6b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "clearmatics/zeth-specifications",
"max_forks_repo_path": "chapters/chap01-sec01.tex",
"max_issues_count": 13,
"max_issues_repo_head_hexsha": "ba29c67587395f5c7b26b52ee7ab9cba12f1cc6b",
"max_issues_repo_issues_event_max_datetime": "2021-04-16T10:57:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-10-27T10:41:50.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "clearmatics/zeth-specifications",
"max_issues_repo_path": "chapters/chap01-sec01.tex",
"max_line_length": 794,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "ba29c67587395f5c7b26b52ee7ab9cba12f1cc6b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "clearmatics/zeth-specifications",
"max_stars_repo_path": "chapters/chap01-sec01.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-29T18:22:00.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-04-29T18:22:00.000Z",
"num_tokens": 3487,
"size": 11629
} |
\section*{Postdoctoral Researcher Mentoring Plans}
%\ (1 page limit)
% \subsection*{Postdoctoral Researcher Mentoring Plan}
This Postdoctoral Researcher Mentoring plan establishes guidelines for work to be performed by the two Postdoctoral Researchers (PDR) in support of OpenEarthscape. One PDR, based at the University of Colorado, will be responsible for producing use-case demonstrations that illustrate how OpenEarthscape technology can be applied in the scientific domains represented by the project. The other PDR, based at Tulane University, will be responsible for building a source-to-sink modeling framework, using OpenEarthscape tools, as part of the Source-to-Sink demonstration project.
% We propose to employ one PDF to be jointly supervised between CSDMS and individual PIs in the community at large. Over year 2-5 of the proposed project period, CSDMS will solicit concise research proposals to advance earth surface process modeling science. Project teams are to consist of a PDF with a faculty mentor at an academic institution in the US. CSDMS will offer 50\% funding for 2 years, whereas the partnering faculty mentor needs to match the other 50\% for those years. A support letter of each applicant’s mentor ensures that this mentoring plan is agreed upon. The CSDMS Executive Committee as an independent entity will review proposals for intellectual merit, novelty of research, match with current community science themes, and fit with CSDMS science teams priorities.
The postdoctoral researchers will be provided with a unique training opportunity: they will participate in the OpenEarthscape annual meeting, be part of cyber-skill building sessions through webinars, and participate in the Earth Surface Processes Summer Institute (ESPIn). The project will allow them to apply these skills and their familiarity with cyberframework tools to make substantial science advances on topics of interest as developed jointly with independent advisors from the larger community through the OpenEarthscape Computational Summer Science Program. The PDRs will come away from their postdoctoral research projects with a broader skill set in earth surface and environmental computation. The University of Colorado PDR will work primarily with a faculty member (PI G. Tucker) and a research professional mentor (senior software engineer and Co-PI E. Hutton). The Tulane PDR will work primarily with Tulane faculty member and Co-PI N. Gasparini.
Essential elements of PDR mentoring include:
\begin{compactenum}
\item Orientation, including in-depth conversations between the PDR and OpenEarthscape mentors. Expectations will be discussed and agreed upon in advance. Orientation topics include: a) the amount of independence the PDR requires, b) interaction with research group members, c) productivity, including publications, and d) work habits.
\item Career Counseling will be directed at providing the PDR with the skills, knowledge, and experience needed to excel in their career. Mentors will engage in career planning by pointing out job opportunities and helping to evaluate advertisements and application materials.
\item Proposal Writing Experience will be gained by involvement of the PDR in proposals prepared by their mentors. The PDR will have an opportunity to learn best practices in proposal preparation including identification of key research questions, definition of objectives, description of approach and rationale, and construction of a work plan, timeline, and budget.
\item Publications and Presentations are expected from their work. These will be prepared under the direction of the PDR and in collaboration with their mentors. Attendance of the OpenEarthscape Annual Meeting is expected. The PDR will receive guidance and training in the preparation of manuscripts for scientific journals and presentations at conferences.
\item Teaching and Mentoring Skills will be developed in the context of regular meetings within their local research group. The PDR will lead webinars to share across the OpenEarthscape community.
\item Instruction in Professional Practices will be provided by attending the 10-day Earth Surface Processes Summer Institute, with emphasis on modeling, reproducibility of modeling projects, software development, high-performance computing, and uncertainty and sensitivity testing.
% \item Success of the Mentoring Plan will be assessed by biennial interviews by a volunteer chair of one of the Working Groups or Focus Research groups who is not the supervisor of the postdoctoral researcher.
\end{compactenum}
% \subsection*{Postdoctoral Researcher Mentoring Plan (Gasparini)}
% Gasparini will supervise a postdoctoral researcher at Tulane University. The mentoring plan discussed here is designed to create a productive work environment for the postdoctoral researcher (PDR) so that they can both excel in their research while at Tulane and also gain the skills and experience needed to continue on their desired career path.%
% \begin{compactenum}
% \item Within the first week of employment or switching on to this project, the PDR and Gasparini will discuss and document expectations and goals for the postdoctoral researcher’s period of employment/this project. They will develop an Individual Development Plan (IDP) to define their research and career goals, identify gaps in training, develop expectations for communication with and input from the advisor, and set a realistic schedule for completion of publications and other research products. The PDR will be encouraged to use an on-line tool for IDPs developed by the AAAS (American Association for the Advancement of Science). This document will include both Gasparini’s and the PDR’s goals and expectations. Target deadlines will be set when appropriate.
% \item The document will be revisited at least once every six months to assess progress and reassess goals and target dates.
% \item The PDR will be encouraged to take advantage of on-campus resources for career planning and application support, such as those available through the Tulane Office of Graduate and Postdoctoral Studies.
% \item The PDR will be encouraged to attend workshops focused on career development and proposal writing. For example, proposal-writing workshops or public speaking workshops might be applicable, depending the PDR's career goals.
% \item Gasparini will also work with the PDR to help them prepare for job interviews and seminars, evaluate potential positions, and negotiate start-up packages.
% \item The PDR will be offered the opportunity to be a guest lecturer in a class period to present and discuss a topic falling under their expertise. This will be most useful if the PDR is interested in an academic career path, although teaching also improves public speaking skills.
% \item The PDR will participate in all reports and scientific publications related to the project. They will take the lead when appropriate.
% \item The PDR will participate in all virtual meetings related to this project.
% \item The PDR will be encouraged to present work at conferences beyond those specifically mentioned in the proposal. This includes presenting work from their research prior to coming to Tulane, as well as work from this project.
% \item The PDR will be given opportunities to contribute to proposal writing.
% \item The PDR will be encouraged to attend workshops on unconscious bias and inclusion, when held at Tulane or at national meetings.
% \item The PDR will be given the opportunity to work with and mentor Gasparini’s graduate and undergraduate students.
% \item The PDR will be encouraged to attend the weekly research seminars at Tulane in both the department of Earth and Environmental Sciences and the Center for Computational Sciences (CCS). They will also be encouraged to participate in activities through CSDMS, such as webinars and virtual trainings.
%\end{compactenum}
| {
"alphanum_fraction": 0.8192375663,
"avg_line_length": 127.7741935484,
"ext": "tex",
"hexsha": "c54d0adc95bec63d5bd42044d384133f1262ea16",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9296d0eabc2cc8b6888df9bd09929b7b13723dcd",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mdpiper/nsf-proposal-template",
"max_forks_repo_path": "sections/mentoring.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9296d0eabc2cc8b6888df9bd09929b7b13723dcd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mdpiper/nsf-proposal-template",
"max_issues_repo_path": "sections/mentoring.tex",
"max_line_length": 964,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9296d0eabc2cc8b6888df9bd09929b7b13723dcd",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mdpiper/nsf-proposal-template",
"max_stars_repo_path": "sections/mentoring.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1550,
"size": 7922
} |
%!TEX root = ./ERL Industrial Robots.tex
%--------------------------------------------------------------------
%--------------------------------------------------------------------
%--------------------------------------------------------------------
\clearpage\phantomsection
\section{\erlir Award Categories}
\label{sec:AwardCategories}
Awards will be given to the best teams in each of the \erlir \emph{task benchmarks} and \emph{functionality benchmarks} that are described in Sections \ref{sec:TaskBenchmarks} and \ref{sec:FunctionalityBenchmarks}.
For every Local/Major tournament, and for every task and functionality benchmark, a score is computed by taking the median of the best (up to 5) trials. The final end-of-year score is computed by taking the median of the trials pooled from the two best Local/Major tournaments, and teams are ranked based on this score.
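For illustration only (the following sketch is not part of the rulebook), the scoring rule described above could be computed as follows, under the assumption that higher trial scores are better; all names are illustrative.
\begin{verbatim}
# Illustrative sketch only -- not part of the rulebook.
from statistics import median

def best_trials(trials):
    """The best (up to 5) trial scores of one tournament."""
    return sorted(trials, reverse=True)[:5]

def tournament_score(trials):
    """Per-tournament score: median of the best (up to 5) trials."""
    return median(best_trials(trials))

def end_of_year_score(tournaments):
    """Median of the trials pooled from the two best tournaments."""
    ranked = sorted(tournaments, key=tournament_score, reverse=True)[:2]
    pooled = [t for trials in ranked for t in best_trials(trials)]
    return median(pooled)

print(end_of_year_score([[70, 80, 90, 60, 85, 40], [75, 65, 95]]))
\end{verbatim}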
The \erl Competition awards will be given in the form of cups for the best teams. Every team will also receive a plaquette with the \erl logo and a certificate.
Please note that teams need to participate in a minimum of two tournaments (Local and/or Major) per year in order to obtain a score for the TBMs and/or FBMs that they intend to enter.
%\erlir competition awards will be given in the form of cups for the best teams, as specified below. Every team will receive a plaquette with the \erlir logo and a certificate.
%Awards will be given to the best teams in \erlir \emph{task benchmarks}, \emph{functional benchmarks} and overall.
%--------------------------------------------------------------------
%--------------------------------------------------------------------
\subsection{Awards for Task Benchmarks}
\label{sec:AwardTBMs}
The team with the highest score in each of the three \emph{task benchmarks} will be awarded a cup (``\erlir Best-in-class Task Benchmark <\emph{task benchmark} title>'').
When a single team participates in a given \emph{task benchmark}, the corresponding \emph{task benchmark} award will only be given to that team if the Executive and Technical Committees consider the team performance of exceptional level.
%The team with the highest score in each of the three \emph{task benchmarks} will be awarded a cup (''\erlir Best-in-class Task Benchmark <\emph{task benchmark} title>'').
%When a single team participates in a given \emph{task benchmark}, the corresponding \emph{task benchmark} award will only be given to that team if the Executive and Technical Committees consider the team performance of exceptional level.
%--------------------------------------------------------------------
%--------------------------------------------------------------------
\subsection{Awards for Functionality Benchmarks}
\label{sec:AwardFBMs}
The two top-ranked teams for each of the three \emph{functionality benchmarks} will be awarded cups (``\erlir Best-in-class Functionality Benchmark <\emph{functionality benchmark} title>'' and ``\erlir Second-Best-in-class Functionality Benchmark <\emph{functionality benchmark} title>'').
When fewer than three teams participate in a given \emph{functionality benchmark}, only the ``\erlir Best-in-class Functionality Benchmark <\emph{functionality benchmark} title>'' award will be given, and only if the Executive and Technical Committees consider that team's performance excellent.
%The two top teams in the score ranking for each of the three \emph{functionality benchmarks} will be awarded a cup (''\erlir Best-in-class Functionality Benchmark <\emph{functionality benchmark} title>'' and ''\erlir Second-Best-in-class Functionality Benchmark <\emph{functionality benchmark} title>'').
%When less than three teams participate in a given \emph{functionality benchmark}, only the ''\erlir Best-in-class Functionality Benchmark <\emph{functionality benchmark} title>'' award will be given to a team, and this will occur only if the Executive and Technical Committees consider that team's performance as excellent.
%--------------------------------------------------------------------
%--------------------------------------------------------------------
%\subsection{Competition Winners}
%\label{sec:AwardOverall}
%Teams participating in \erlir competitions will be ranked taking into account their overall rank in all the \emph{task benchmarks}.
%\todo{LaberLaberLaber entfernen}
%The overall ranking will be obtained by combining task benchmark rankings using the Social Welfare principle (see \url{http://en.wikipedia.org/wiki/Social_welfare_function}); the overall winning team of \roaw Competition 2015 will be the top team in this combined ranking, and will receive the corresponding award cup (''Best Team \roaw Competition 2015''). The second and third placed teams in the ranking will also receive award cups (respectively ''2nd place \roaw Competition 2015'' and ''3rd place \roaw Competition 2015'')).
%The three awards will be given only if more than 5 teams participate in the competition. Otherwise, only the best team will be awarded, except if it is the single team participating, in which case the Executive and Technical Committees must consider that team performance of exceptional level so as for the team to be awarded. Only teams performing the total of the three tasks will be considered for the ''Best Team \roaw Competition 2015'' award.
%--------------------------------------------------------------------
% EOF
%--------------------------------------------------------------------
| {
"alphanum_fraction": 0.6818181818,
"avg_line_length": 102.2592592593,
"ext": "tex",
"hexsha": "0606ba42862e44e2a653788fd6d3b5b2b3a34a7d",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-10-09T09:08:01.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-10-09T09:08:01.000Z",
"max_forks_repo_head_hexsha": "32a10713fcd6de11042853dafcb1275c301f5f82",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "mhwasil/ProfessionalServiceRobots",
"max_forks_repo_path": "rulebook/secErlirRulebookAwards.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "32a10713fcd6de11042853dafcb1275c301f5f82",
"max_issues_repo_issues_event_max_datetime": "2020-10-09T09:23:13.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-07-06T16:03:20.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "mhwasil/ProfessionalServiceRobots",
"max_issues_repo_path": "rulebook/secErlirRulebookAwards.tex",
"max_line_length": 531,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "32a10713fcd6de11042853dafcb1275c301f5f82",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "mhwasil/ProfessionalServiceRobots",
"max_stars_repo_path": "rulebook/secErlirRulebookAwards.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1128,
"size": 5522
} |
%\addtocounter{chapter}{-1}
\chapter{Advice for the reader}
\section{Prerequisites}
As explained in the preface,
the main prerequisite is some amount of mathematical maturity.
This means I expect the reader to know how
to read and write a proof, follow logical arguments, and so on.
I also assume the reader is familiar with basic terminology
about sets and functions (e.g.\ ``what is a bijection?'').
If not, one should consult \Cref{ch:sets_functions}.
\section{Deciding what to read}
There is no need to read this book in linear order:
it covers all sorts of areas in mathematics,
and there are many paths you can take.
In \Cref{ch:sales}, I give a short overview for each part
explaining what you might expect to see in that part.
For now, here is a brief chart showing
how the chapters depend on each other;
again see \Cref{ch:sales} for details.
Dependencies are indicated by arrows;
dotted lines are optional dependencies.
\textbf{I suggest that you simply pick a chapter you find interesting,
and then find the shortest path.}
With that in mind, I hope the length of the entire PDF is not intimidating.
\input{tex/frontmatter/digraph}
\newpage
\section{Questions, exercises, and problems}
In this book, there are three hierarchies:
\begin{itemize}
\ii An inline \vocab{question} is intended to be offensively easy,
mostly a chance to help you internalize definitions.
If you find yourself unable to answer one or two of them,
it probably means I explained it badly and you should complain to me.
But if you can't answer many,
you likely missed something important: read back.
\ii An inline \vocab{exercise} is more meaty than a question,
but shouldn't have any ``tricky'' steps.
Often I leave proofs of theorems and propositions as exercises
if they are instructive and at least somewhat interesting.
\ii Each chapter features several trickier \vocab{problems} at the end.
Some are reasonable, but others are legitimately
difficult olympiad-style problems.
\gim Harder problems are marked with up to
three chili peppers (\scalebox{0.7}{\chili}), like this paragraph.
In addition to difficulty annotations,
the problems are also marked by how important they are to the big picture.
\begin{itemize}
\ii \textbf{Normal problems},
which are hopefully fun but non-central.
\ii \textbf{Daggered problems},
which are (usually interesting) results that one should know,
but won't be used directly later.
\ii \textbf{Starred problems},
which are results which will be used later
on in the book.\footnote{This is to avoid the classic
``we are done by PSet 4, Problem 8''
that happens in college sometimes,
as if I remembered what that was.}
\end{itemize}
\end{itemize}
Several hints and solutions can be found in \Cref{app:hints,app:sol}.
\begin{center}
\includegraphics[width=14cm]{media/abstruse-goose-exercise.png}
\\ \scriptsize Image from \cite{img:exercise}
\end{center}
% I personally find most exercises to not be that interesting, and I've tried to keep boring ones to a minimum.
% Regardless, I've tried hard to pick problems that are fun to think about and, when possible, to give them
% the kind of flavor you might find on the IMO or Putnam (even when the underlying background is different).
\section{Paper}
At the risk of being blunt,
\begin{moral}
Read this book with pencil and paper.
\end{moral}
Here's why:
\begin{center}
\includegraphics[width=0.5\textwidth]{media/read-with-pencil.jpg}
\\ \scriptsize Image from \cite{img:read_with_pencil}
\end{center}
You are not God.
You cannot keep everything in your head.\footnote{
See also \url{https://usamo.wordpress.com/2015/03/14/writing/}
and the source above.}
If you've printed out a hard copy, then write in the margins.
If you're trying to save paper,
grab a notebook or something along for the ride.
Somehow, some way, make sure you can write. Thanks.
\section{On the importance of examples}
I am pathologically obsessed with examples.
In this book, I place all examples in large boxes to draw emphasis to them,
which leads to some pages of the book simply consisting of sequences of boxes
one after another. I hope the reader doesn't mind.
I also often highlight a ``prototypical example'' for some sections,
and reserve the color red for such a note.
The philosophy is that any time the reader sees a definition
or a theorem about such an object, they should test it
against the prototypical example.
If the example is a good prototype, it should be immediately clear
why this definition is intuitive, or why the theorem should be true,
or why the theorem is interesting, et cetera.
Let me tell you a secret. Whenever I wrote a definition or a theorem in this book,
I would have to recall the exact statement from my (quite poor) memory.
So instead I often consider the prototypical example,
and then only after that do I remember what the definition or the theorem is.
Incidentally, this is also how I learned all the definitions in the first place.
I hope you'll find it useful as well.
\section{Conventions and notations}
This part describes some of the less familiar notations and definitions
and settles for once and for all some annoying issues
(``is zero a natural number?'').
Most of these are ``remarks for experts'':
if something doesn't make sense,
you probably don't have to worry about it for now.
A full glossary of notation used can be found in the appendix.
\subsection{Natural numbers are positive}
The set $\NN$ is the set of \emph{positive} integers, not including $0$.
In the set theory chapters, we use $\omega = \{0, 1, \dots\}$
instead, for consistency with the rest of the book.
\subsection{Sets and equivalence relations}
This is brief, intended as a reminder for experts.
Consult \Cref{ch:sets_functions} for full details.
An \vocab{equivalence relation} on a set $X$ is a relation $\sim$
which is symmetric, reflexive, and transitive.
An equivalence relation partitions $X$
into several \vocab{equivalence classes}.
We will denote this by $X / {\sim}$.
An element of such an equivalence class is a
\vocab{representative} of that equivalence class.
I always use $\cong$ for an ``isomorphism''-style relation
(formally: a relation which is an isomorphism in a reasonable category).
The only time $\simeq$ is used in the Napkin is for homotopic paths.
I generally use $\subseteq$ and $\subsetneq$ since these are non-ambiguous,
unlike $\subset$. I only use $\subset$ on rare occasions in which equality
obviously does not hold yet pointing it out would be distracting.
For example, I write $\QQ \subset \RR$
since ``$\QQ \subsetneq \RR$'' is distracting.
I prefer $S \setminus T$ to $S - T$.
The power set of $S$ (i.e.,\ the set of subsets of $S$),
is denoted either by $2^S$ or $\mathcal P(S)$.
\subsection{Functions}
This is brief, intended as a reminder for experts.
Consult \Cref{ch:sets_functions} for full details.
Let $X \taking f Y$ be a function:
\begin{itemize}
\ii By $f\pre(T)$ I mean the \vocab{pre-image}
\[ f\pre(T) \defeq \left\{ x \in X \mid f(x) \in T \right\}. \]
This is in contrast to the $f\inv(T)$ used in the rest of the world;
I only use $f\inv$ for an inverse \emph{function}.
By abuse of notation, we may abbreviate $f\pre(\{y\})$ to $f\pre(y)$.
We call $f\pre(y)$ a \vocab{fiber}.
\ii By $f\im(S)$ I mean the \vocab{image}
\[ f\im(S) \defeq \left\{ f(x) \mid x \in S \right\}. \]
Almost everyone else in the world uses $f(S)$
(though $f[S]$ sees some use, and $f``(S)$ is often used in logic)
but this is abuse of notation,
and I prefer $f\im(S)$ for emphasis.
This image notation is \emph{not} standard.
\ii If $S \subseteq X$, then the \vocab{restriction} of $f$ to $S$
is denoted $f \restrict{S}$,
i.e.\ it is the function $f \restrict{S} \colon S \to Y$.
\ii Sometimes functions $f \colon X \to Y$
are \emph{injective} or \emph{surjective};
I may emphasize this sometimes by writing
$f \colon X \injto Y$ or $f \colon X \surjto Y$, respectively.
\end{itemize}
\subsection{Cycle notation for permutations}
\label{subsec:cycle_notation}
Additionally, a permutation on a finite set may be denoted
in \emph{cycle notation},
as described in say \url{https://en.wikipedia.org/wiki/Permutation#Cycle_notation}.
For example the notation $(1 \; 2 \; 3 \; 4)(5 \; 6 \; 7)$
refers to the permutation with
$1 \mapsto 2$, $2 \mapsto 3$, $3 \mapsto 4$, $4 \mapsto 1$,
$5 \mapsto 6$, $6 \mapsto 7$, $7 \mapsto 5$.
Usage of this notation will usually be obvious from context.
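If it helps, the following short sketch (purely illustrative, and not needed for anything later) expands cycle notation into the explicit mapping above:
\begin{verbatim}
# Purely illustrative: expand cycle notation into an explicit mapping.
def permutation_from_cycles(cycles):
    perm = {}
    for cycle in cycles:
        for i, x in enumerate(cycle):
            perm[x] = cycle[(i + 1) % len(cycle)]
    return perm

assert permutation_from_cycles([(1, 2, 3, 4), (5, 6, 7)]) == {
    1: 2, 2: 3, 3: 4, 4: 1, 5: 6, 6: 7, 7: 5}
\end{verbatim}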
\subsection{Rings}
All rings have a multiplicative identity $1$ unless otherwise specified.
We allow $0=1$ in general rings but not in integral domains.
\textbf{All rings are commutative unless otherwise specified.}
There is an elaborate scheme for naming rings which are not commutative,
used only in the chapter on cohomology rings:
\begin{center}
\small
\begin{tabular}[h]{|c|cc|}
\hline
& Graded & Not Graded \\ \hline
$1$ not required & graded pseudo-ring & pseudo-ring \\
Anticommutative, $1$ not required & anticommutative pseudo-ring & N/A \\
Has $1$ & graded ring & N/A \\
Anticommutative with $1$ & anticommutative ring & N/A \\
Commutative with $1$ & commutative graded ring & ring \\ \hline
\end{tabular}
\end{center}
On the other hand, an \emph{algebra} always has $1$,
but it need not be commutative.
\subsection{Choice}
We accept the Axiom of Choice, and use it freely.
\section{Further reading}
The appendix \Cref{ch:refs} contains a list of resources I like,
and explanations of pedagogical choices that I made for each chapter.
I encourage you to check it out.
In particular, this is where you should go for further reading!
There are some topics that should be covered in the Napkin,
but are not, due to my own ignorance or laziness.
The references provided in this appendix should hopefully help partially
atone for my omissions.
| {
"alphanum_fraction": 0.7487559663,
"avg_line_length": 40.356557377,
"ext": "tex",
"hexsha": "0551e9d9db3a19cb6a5917c3d755476afe1d1622",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e",
"max_forks_repo_licenses": [
"Apache-2.0",
"MIT"
],
"max_forks_repo_name": "aDotInTheVoid/ltxmk",
"max_forks_repo_path": "corpus/napkin/tex/frontmatter/advice.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0",
"MIT"
],
"max_issues_repo_name": "aDotInTheVoid/ltxmk",
"max_issues_repo_path": "corpus/napkin/tex/frontmatter/advice.tex",
"max_line_length": 111,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e",
"max_stars_repo_licenses": [
"Apache-2.0",
"MIT"
],
"max_stars_repo_name": "aDotInTheVoid/ltxmk",
"max_stars_repo_path": "corpus/napkin/tex/frontmatter/advice.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2638,
"size": 9847
} |
\chapter{Answers}
Answers to selected exercises.\\
\begin{multicols}{2}%\raggedcolumns
\setlength{\columnseprule}{0pt}
\shipoutAnswer
\end{multicols}
% END answers
| {
"alphanum_fraction": 0.7398843931,
"avg_line_length": 21.625,
"ext": "tex",
"hexsha": "42c364e626a58fb1a418a779d63f24f778760f98",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ecb6fddf196353bac375c2c2f585d2e02d87605f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "millecodex/ENGE401",
"max_forks_repo_path": "7Answers.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ecb6fddf196353bac375c2c2f585d2e02d87605f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "millecodex/ENGE401",
"max_issues_repo_path": "7Answers.tex",
"max_line_length": 36,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ecb6fddf196353bac375c2c2f585d2e02d87605f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "millecodex/ENGE401",
"max_stars_repo_path": "7Answers.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 56,
"size": 173
} |
\documentclass{l3proj}
\begin{document}
\setlength{\parindent}{10ex}
\title{Team CS25 - Nutriplotter App}
\author{Samuel Owen-Hughes \\
Max Kirker Burton \\
Joe Kadi \\
Lucy Conaghan \\
Max Kolle}
\date{25 March 2019}
\maketitle
\begin{abstract}
This paper presents a case study of the Nutriplotter App, a healthy eating app with a focus on gamification and user interaction, developed by a group of five Computer Science students. It reflects on the software development process and the issues faced, and offers suggestions on how to resolve them.
\end{abstract}
\educationalconsent
\newpage
%==============================================================================
\section{Introduction}
This paper presents a case study of the project undertaken by Team CS25 to create the Nutriplotter mobile app. Our customer for this project was the School of Medicine, Dentistry and Nursing, and the aim was to design a game that helped teach users about healthy nutrition. The customer had done research into public perception of a balanced diet and was looking for an interactive tool to teach users what a balanced plate of food should look like, presented in an aesthetically pleasing and easy-to-understand way.
This case study will describe the progress and development of our app as well as discussing the challenges encountered over its duration. This project was a chance to implement the modern software development methods taught to us in the Professional Software Development module (PSD3). We shall explain techniques used, their effectiveness and reasoning for not using certain techniques.
The app is designed using the Expo and React Native frameworks and is an offline-focused app, with the only online features being Facebook connectivity and leaderboards. The app allows users to construct a plate of food using a database of over 3000 foods \cite{mccancewiddowson} and compares the plate's nutritional values against daily recommended values \cite{drv}. The user is then given a score and told which nutrients their plate is lacking or has a surplus of.
\noindent The rest of the case study is structured as follows:
\noindent \ref{sec:background}: \nameref{sec:background} presents the background of the case study discussed, describing the customer and project context, aims, objectives, initial targets and project state at the time of writing.
\noindent \ref{sec:team}: \nameref{sec:team} is a brief section on the roles taken on by each member, how these evolved over the course of development and the effect on the project caused by some members being absent for extended periods of time during both semesters.
\noindent \ref{sec:development}: \nameref{sec:development} is split into 6 sections. The first is for the technical decisions made before development began and the reasoning behind them. Then one for each of the 5 development cycles. For each cycle we go into the progress made, changes to planned features, any challenges we encountered, how we overcame them and what we learned. We also reflect on the success of each stage.
\noindent \ref{sec:conclusions}: \nameref{sec:conclusions} outlines the lessons that the team learned over the course of the project, the key changes that would be made in hindsight, thoughts on key areas of agile development and how these lessons can be generalized for use in other projects.
%==============================================================================
\section{Case Study Background}
\label{sec:background}
\subsection{Customer}
The School of Medicine, Dentistry and Nursing is a branch of the University of Glasgow, which was looking for a team in the Computer Science department to help out with a long-envisioned idea. We were working for Prof. Mike Lean, whose peer-reviewed book \cite{lean} made reference to the lack of professional input on nutritional apps, together with Mrs Janette McBride and Dr Laura Moss. Only Mrs McBride was able to attend the demonstrations on campus, so there was a small break in communication between the parties, which we addressed by visiting their offices at the Royal Infirmary on a few occasions.
The customer had done research into public perception of what a healthy meal looks like based on proportions/percentages of food on a plate. The findings suggested that most people were eating excessive amounts of meat (up to half a plate) and too few vegetables. Unless the person is an athlete or dieting, they should only be trying to meet the daily recommended values for nutrients.
An easy way to achieve this is the well-known 5-a-day fruit and veg recommendation. Over 3 meals this means you should have 2 fruit or veg with most of your meals, but the general public struggle to get 1 \cite{fruitveg}. In general, a healthy meal has half of the plate taken up by vegetables, providing 1 or 2 portions of the daily recommended amount. Carbohydrates should fill out the plate, with enough dairy to meet daily calcium requirements. However, we were told that the average Briton normally fills half their plate with meat, then carbohydrates, with only a small amount of vegetables.
\subsection{Aim}
The customer imagined the app to not only have correct nutritional data, but also encourage healthier eating through graphics and coaching. Their goal was to have an app that could be used to help correct the public's view of what a healthy plate of food should look like. They also wanted the app to have a score system to make use of Gamification theory \cite{gamification} to increase user interaction.
\subsection{Market Research}
After receiving the initial brief, we looked at similar apps to find common trends (Nutrients \cite{nutr}, MyFitnessPal \cite{myfitpal} and Make My Plate \cite{plate}). A lot of healthy eating and diet apps already exist, but most of these force their own meals upon users and often rely on user-submitted data rather than data taken from trusted, scientific sources. We found no apps that took the design of a game and also gave correct nutritional data. This gave us a clear target to aim for, as well as inspiration from the features we thought worked well in similar apps.
\subsection{Initial Requirements}
From the aim of the project, we negotiated a set of initial requirements with the customer to guide the development of the project.
\begin{itemize}
\item Users will be able to pick foods they want (from a small group of commonly eaten foods), which will then be shown on the plate as a graphic.
\item Foods on the plate will be shown as groups of their respective types. (Fruit, veg, pasta, etc.)
\item Upon completion of the meal, users will be shown accurate nutritional data and given a score.
\item Users will be able to compare scores to hopefully induce competition to increase engagement in app.
\item Users can edit their plates after submission with a score penalty.
\item A login system should be implemented.
\end{itemize}
\subsection{Requirements Changes}
When we went to the Royal Infirmary, Prof. Lean gave us a much clearer idea of exactly how he wanted the data to be presented so that the features he wanted could be displayed easily. This happened in the last cycle, so we endeavoured to meet their new requirements to the best of our abilities with the time remaining. Things that he brought up included having some standard foods on the side of the plate that could be toggled. These were meant to represent snacks between meals that should also be counted in nutritional totals. Initially he requested being able to add arbitrary foods, but we negotiated the request down to a small set of standard foods set around the plate. Mrs. McBride also had a request for additional functionality: she proposed that we add the ability to save a plate for viewing/editing at a later time, as well as adding default plates to give users an idea of how a plate should be built.
\subsection{Final Product Features}
The final release given to the customer is the working app meeting all initially agreed functionality as well as requirements set out in later meetings. We were also able to include the entirety of the provided McCance and Widdowson \cite{mccancewiddowson} composition of foods dataset instead of the originally suggested small set of foods. The customer was very pleased by the flexibility and replay value this offered and encouraged us to include this.
The app opens up to the plate screen, with navigation options to get to other screens, and presents the user with an empty plate and a search bar to add foods to the plate. Recently searched foods are listed allowing users to easily re-add commonly eaten foods. There are snacks around the plate that can be toggled on or off. Once added to the plate, food quantities can be changed by tapping on the plate and using sliders or buttons when the changes need to be incremental.
When finished, the plate can be submitted to receive its nutritional breakdown and score. Feedback is given based on the user's performance. The user may then go back and edit their plate based on the feedback, start anew or save it under a friendly name for later use. Other features include a help screen for instructions with links to healthy eating websites, a saved plates screen where previously made plates can be reloaded and Facebook login with access to online leaderboards.
\subsection{Future Features}
Several features were discussed early in development that were eventually cut. These features included:
\begin{itemize}
\item Push Notifications (Difficulties with Expo proved this to be tiresome)
\item Food Icons Being Draggable (Time constraints and complications with cross-platform/differing screen sizes)
\item Non-Facebook Login (Seemed unnecessary as the user would only be missing out on leaderboards)
\end{itemize}
\noindent During a post-project discussion, we decided if we were to do the project again we would also like to include these features:
\begin{itemize}
\item Add more Goals aside from daily recommended nutrients (e.g. A more protein weighted goal for muscle-gain)
\item Healthier Replacement Suggestions (e.g. Instead of Chips, Sweet Potato Wedges)
\item Option to hide certain categories of food (e.g. Vegetarian would hide all meats)
\end{itemize}
The first priority for another release, however, would be to fix the known (minor) bugs and to take the density of each food into account, as this affects the accuracy of the plate's nutrient breakdown.
%==============================================================================
\section{Team Dynamics}
\label{sec:team}
\subsection{Initial Team roles}
We used the spare time at the start of the year before project allocation to give each member a role to manage a part of the process, with the idea being we would all be working on the code in addition to these responsibilities. In theory this would mean that there would be someone in charge of each area of the project to ensure nothing was forgotten about.
The roles assigned were:
\begin{itemize}
\item Chief Architect - Joe Kadi
\item Quality Assurance - Lucy Conaghan
\item Project Manager - Max Kirker Burton
\item Customer Liaison - Samuel Owen-Hughes
\item Tool Smith - Max Kolle
\end{itemize}
\subsection{Participation}
Almost all members of the team missed various parts or extended stretches of the project for various reasons.
\subsubsection{Semester 1}
All members were present for the initial meeting at Kelvin Gallery, but this would be the only time throughout the year that all 5 of us were in the same room. One member was unable to attend any lab sessions or meetings in the first semester due to housing difficulties. Others had Wednesday afternoon commitments such as work and sport that meant they couldn't attend for very long.
\subsubsection{Semester 2}
In the second semester, one member stopped responding to any messages from the team. They replied once to say they were having health issues but never did any work or offered to make up for lost time. The member with housing issues did start to contribute in the latter parts of the second semester.
\subsection{Results of these absences}
Due to the way the team was structured at the start of the year, the long absences forced a massively increased workload onto a few members of the team. One member was made to take up nearly all front end development by themselves, after already handling all back-end issues themselves in the first semester. As the Quality Assurance member was one of the absentees, testing was only done at completion of the project and the continuous integration suite took a long time to be configured. Due to one member being forced to take up a huge workload near the end of the project, some quality control and programming practices were ignored as there was no-one to handle merge requests, etc.
In retrospect, the team roles shouldn't have been so firmly set at the beginning of the year. We should have spread out the testing responsibility between more than one person and responsibilities could have been spread to the remaining members earlier, or we could have made a simplified version of the app and added in additional functionality as the costs became clearer. This is in line with the key ideas behind continuous integration \cite{ci} and agile development, and probably should have been the initial plan even without these absences.
%==============================================================================
\section{Development}
\label{sec:development}
\subsection{Technical Decisions}
At the start of the project we were offered the choice of making a web app or a purely native mobile app. We decided a mobile app would be more beneficial for this small-scale project, given the low amount of data being input by the user. From there we chose React Native as our development framework. We chose this as it allowed cross-platform portability between iOS and Android devices (the customer wanted as large a market share as possible), and it advertised easy support for features such as push notifications and Facebook login. We wanted Facebook login as it seemed to be the easiest way to allow users to have profiles without us having to worry about security and GDPR.
Using React Native also allowed us to use Expo to run the app on our phones during development to track the progress we were making and test for any errors or warnings. Near the end of our first functional release we published the app on Expo so that the customer had access to it, and could view changes as soon as we made them.
We needed a way to store persistent data, and decided to use a database rather than local asynchronous storage, as databases provided a way to interrogate the large amount of data involved. At first we used SQLite, but since it didn't work as intended on certain versions of Android, we switched over to MongoDB. We also had several local JSON arrays that would read from MongoDB and save session data. Some of the data we stored included the dataset of all 3000+ foods, a user's saved plates, the current plate and their recently searched foods, as well as others.
Testing was run with Jest, the built-in testing framework for React Native, using snapshots for regression testing.
React Native was good for our needs; very little platform-specific code and few changes were required. However, we would not use Expo if we were to re-develop the app. The CI and push notification issues (described in 4.5.2) were not worth the benefit of testing the app live on our phones. Instead, we would use pure React Native.
\subsection{Cycle 1 - Deadline 14/11}
\subsubsection{Progress}
This cycle had no project development. Instead we spent time choosing, setting up and getting used to the development environment; planning specification requirements and looking into future features that could be implemented.
We spent time trying to get React Native environments working on the University lab computers but soon realized that it was too much effort compared with setting it up individually on our own laptops. We then started doing tutorials to get to grips with the language and software, committing to no development in the first cycle.
We did however work on researching similar apps, writing documentation, negotiating the requirements shown earlier in \nameref{sec:background} and drawing up wire-frames to give the customer an idea of how we envisioned the app to look. This helped greatly in understanding how our ideas differed and ensured our aims were aligned early on.
We also got acquainted with the version control system that we would be using for this project (GitLab). From early in the project we ensured that branches were used for all work and issues were created any time new work was assigned or discovered. This gave the team a clear view of the work they were all expected to do and when it was expected to be done. The issue allocation could have been improved by going over all issues that breached their deadlines to ensure members weren't having problems. Later on in the project we would also include labels and deadlines for each issue, which would help us prioritise issues by importance.
\subsubsection{Challenges Encountered}
The provided system for the virtual machine (VM) wasn't enabled at the start of the year due to issues at the University. This was the first of multiple issues outside our control. It delayed the setup of our VM, but the system came online shortly after this cycle concluded. We didn't have much work to do in these weeks before we met the customer, but having an unsolvable issue so early in the project was a humbling experience. The consequence was that we couldn't get our continuous integration up and working before work on the project began, which led to larger problems down the road.
Little developmental progress was made (as intended) during this cycle. If we were to do something differently, it would be to give all members tutorials on different technologies in the first week, then begin bare-bones production towards the end of the cycle as a means to put our learning into practice. Having a slow start meant that there was always going to be a constant increase in the amount of work to be done, which goes against the continuous deployment model we were aiming for.
There was no way to have the game mechanics working for the first presentation, even if the concept was simplified as much as possible and we ignored all minor features. We should however have created the base of the app so we had something to build on come the next cycle. Building in these small steps would give us a greater idea of the costs of the features we wanted to add and give the customer a better opportunity to provide feedback.
%%----------------------------------
\subsection{Cycle 2 - Deadline 5/12}
\subsubsection{Progress}
The plan for this iteration was to develop a basic version of the app that would have all basic functionality except for the gameplay itself. As we thought figuring out and implementing a good gameplay solution would require much time and effort, we wanted to collaborate on development as a team in future cycles, and not have to worry about the functionality of the rest of the app.
The navigation bar was added to the app as well as all the screens associated with it. The main plate screen was added with a simple background and a text input field. A help screen was given simple text and a link to a government healthy eating page was added. The settings screen was left blank as we were unsure what features would be going there.
\subsubsection{Challenges Encountered}
To import all the foods' nutritional data from the provided database \cite{mccancewiddowson}, a Python script was used to generate a SQLite command for each food. These were put into a JavaScript file which was imported on the app's first launch and saved to a local JSON array. This worked seamlessly on iOS devices, but when tested on certain Android devices foods would not be stored properly. To solve this we switched from SQLite to MongoDB, which also provided a streamlined method of fetching data (as we no longer needed to rely on individual promise functions).
The navigation bar implementation was easy to figure out from other React Native examples, but screen-to-screen navigation proved difficult to solve. This problem became a major time sink, with three members struggling to solve it before the fourth found a solution. The issue was a lack of understanding of React's navigation system, and it was solved by including routing paths in the AppNavigator file in the navigation folder. The problem lasted over a month, and better communication would have resulted in less time being wasted.
We used instant messaging (Slack, Facebook) as our main communication channel throughout the project; however, during this cycle it started to become clear that this was not effective, as messages were often ignored. A more effective approach would have been to discuss issues at the start of every meeting, to identify the major issues that required collaboration to solve. A weekly stand-up \cite{standup}, where members would talk about their progress and any challenges, would have provided motivation and assistance to those struggling.
%%-----------------------------------
\subsection{Cycle 3 - Deadline 22/1}
\subsubsection{Progress}
The goal of this iteration was to deliver a working game to the customer, with basic features such as a score system. We told the customer we could deliver this and would focus on the aesthetics and supplementary features towards the end of our project.
\subsubsection{Challenges Encountered}
There were problems with the Computer Science department's security over Christmas and access to the project repository was revoked until the problem was corrected in the new year. This delayed progress but our initial goal of delivering a working app had been overly ambitious as well.
This sort of overcommitment could have been avoided using methods such as planning poker \cite{poker} or MoSCoW prioritisation \cite{moscow}. Planning poker would have helped us understand the issues we were facing and the actual time required to meet our goals. MoSCoW prioritisation would have helped us focus on the key features, as many of the remaining ones were of low importance, yet members were committing to completing them before switching to newer, higher-priority tasks.
We underestimated the difficulty of integrating each of the components (score system, plate data, local and session databases, etc.). Furthermore, the lack of a version control system in the first half of the cycle resulted in members working on different solutions to the same problems. When we collated our work, additional time had to be spent fixing merge conflicts and deciding whose work would actually be used. This lowered team morale, as some members' work was not used, and time was wasted.
This could have been prevented via merge requests, however being forced to migrate to another version control system and a lack of communication made this difficult. Perhaps a simple prototype could have been developed as a proof of concept but this would have taken away from our already limited time. Instead we admitted to the customer that we hadn't met our targets and would be more careful when setting goals so we wouldn't promise features we couldn't complete.
As a temporary workaround, the member with the most recent copy of the app created a repository on git for the team to use while the system was down. We also all signed up for Trac accounts to manage our issues. These two solutions were enough for us to be able to continue our work without the security concern being much of a problem.
%%-----------------------------------
\subsection{Cycle 4 - Deadline 18/2}
\subsubsection{Progress}
The key focus for this cycle was to get the game working before the last customer meeting. Those not involved in this worked on setting up the online database for tracking Facebook users logging into the app and placing them onto the leaderboard.
Because few tasks needed to be worked on concurrently, some members tried pair programming to complete work more efficiently. The cost of pair programming was that it required two members to work on one issue, but when there were no other pressing issues this cost could be ignored in exchange for faster completion times and higher code quality \cite{pairprogramming}. This exercise proved effective on simple tasks, but when we moved onto more complex tasks the programmers would have different ideas, which led to lower productivity. It would have been useful to experiment more with pair programming throughout the year, as it did result in an overall increase in quality.
For the demonstration we were unable to complete the submission feature but the plate system and results screen were fully implemented.
\subsubsection{Challenges Encountered}
Months were spent trying to get CI/CD up and running with Expo. The main issue was that Expo hosts its APKs on the cloud, and fetching a local version of it to test in an automation script proved very finicky, as the tool to fetch it required several dependencies not included in the VM's image. There was also no way to set up CI for iOS builds as this required an Apple Developer account. Eventually it was decided to test the app locally for logic errors and have Expo check for syntax errors, before automatically publishing the APK.
The benefits of having a CI/CD system running were noticed immediately. Changes to external artefacts such as app icons and splash screens (which were not covered by our Jest and snapshot testing) caused the build to fail (certain images needed to be square). This might not have been noticed if we did not have CI/CD set up, and it showed how critical a system it was. Throughout the rest of the project this system would rapidly alert us to any unforeseen consequences and allow us to fix issues in minutes. Had we focused on setting this up in the first cycle, we could have saved a lot of the time wasted searching for issues.
We also wanted to include draggable food icons to the plate screen. These would appear when a food is searched, and could be dragged onto the plate to add it, or flicked off to delete it. Unfortunately the member assigned to this stopped responding to messages two days prior to a meeting at the Royal Infirmary and we were forced to scrap this and replace it with the current, simpler solution.
Originally the customer also wanted users to be able to drag segments of foods on the plate to increase their amounts. We decided that, given the small size of an average mobile phone screen, the ergonomics would be poor and users would become frustrated. We instead implemented sliders and buttons, for large and small changes respectively, which the customer agreed was the better approach.
%%-----------------------------------
\subsection{Cycle 5 - Deadline 15/3}
\subsubsection{Progress}
In the last iteration we managed to finally get the game system working properly and added the save plate feature. Due to the slow start, we now felt the pressure to get as much work done as possible. We decided to cut some supplementary features that were low priority, such as push notifications, sounds, non-Facebook login and most of the options page. This increase in workload could have been avoided if work had been managed in a more organised fashion from the start of the year.
\subsubsection{Challenges Encountered}
The pressure was increased when the University shut down the computing labs due to a bomb threat on the day we were planning our last meeting before the code deadline. This removed a huge percentage of our remaining time and forced us to stop working on features and to instead move onto documentation. This kind of unpredictable problem is why development teams should try to avoid a build up of work.
However, despite all these difficulties, all major features requested by the customer in the specification were completed. The Project Manager also kept in regular contact with the customer in the last few weeks, providing them with updated releases to test and give feedback on via a Google Sheets document. This feedback allowed us to implement some supplementary features quickly, fix bugs and change features the customer did not like. This included adding the recently searched foods list (the customer was very keen on this feature), adding tips to the score screen and colour coding scores based on performance.
As mentioned earlier, testing was left to the end of the project due to members being absent. The snapshot tests worked fine and handled regression well. The problems came when trying to create unit tests for state in components. Due to the amount of coupling between the components making the game work, it was difficult to render the components so that their functions could be tested. Some of these issues were solved by commenting out lines of code from the component being tested. While not a complete fix, this allowed us to test the functions, at the cost of not being able to have the CI run our test suite on new builds. In hindsight, we could have utilised test-driven development \cite{tdd}, so each member would have been responsible for writing tests for the code they would then implement.
%==============================================================================
\section{Conclusions}
\label{sec:conclusions}
%Explain the wider lessons that you learned about software engineering,
%based on the specific issues discussed in previous sections. Reflect
%on the extent to which these lessons could be generalized to other
%types of software project.
The key challenge for our project was a lack of time, which was felt throughout the second half of development. A number of issues arose that compounded this problem.
A big time sink that was evident throughout the project was merge conflicts. In many instances, multiple members pushed their changes to the master branch or merged their development branches to master. As a result, merge conflicts arose and would have to be corrected. The simple solution to this was to use merge requests and have an independent member review any conflicts and approve of the merge. Although we did very briefly experiment with merge requests at the start of the project, we didn't really understand how they worked or their purpose so decided not to utilise them. It wasn't until the end of the project that we realised how useful they would have been.
What helped us massively in the project was keeping all our work well documented. Although team communication was lacking, by thoroughly documenting changes and code explanations on the wiki and in the code itself, we were able to understand each other's code quickly. As far as possible we wanted to write self-documenting code and keep comments to a minimum; however, since this was a completely new language and environment for all of us, and given the number of external modules and APIs used, comments were more often than not necessary.
We completed a team retrospective after every cycle, with the key issues discussed using stop, start and continue prompts. This increased the team's use of issues and branching at the start of development, as these were stressed heavily. A downside was that we didn't always focus on software practices; clearly explaining the purpose of the retrospective early in the project would have helped.
Our thorough issue management helped us keep track of what needed to be done. We would immediately create issues for any problems that arose or that we anticipated. At the start of the year we simply assigned issues to members, but starting around cycle three we also added priority and feature labels (e.g. bug, supplementary feature) to each issue, as well as deadlines. This let us easily sort out which issues were most urgent. The system proved vital to the success of our project, but it would be most effective if the entire team could be relied on to contribute.
We also kept minutes of each customer meeting, our subsequent retrospectives and our reflections on previous cycles, as well as evaluations of our coding practices (pair programming and code reviews). This gave us a handbook that we could refer to, helping ensure we never made the same mistakes twice.
To summarise, the use of modern agile development techniques greatly improved efficiency whenever they were applied. We should have strived to use them more often, but issues with team availability limited these opportunities. If we can see the impact of these techniques in our small-scale project, the benefits for real-world applications are clear.
%==============================================================================
\newpage
\bibliographystyle{plain}
\bibliography{dissertation}
\end{document}
% ---------------------------------------------------
% CHAPTER SUMMARY MESSAGE: We want to identify the science drivers and the primary goals
% and key areas for using ML
% ---------------------------------------------------
%As particle physics enters the post-Higgs boson discovery era, one of the main objectives is
One of the main objectives of particle physics in the post-Higgs boson discovery era is to exploit the full physics potential of both the Large Hadron Collider (LHC) and its upgrade, the high luminosity LHC (HL-LHC), in addition to present and future neutrino experiments.
The HL-LHC will deliver an integrated luminosity 20 times larger than that of the present LHC dataset, bringing quantitatively and qualitatively new challenges due to event size, data volume, and complexity. The physics reach of the experiments will be limited by the physics performance of algorithms and by computational resources. Machine learning (ML) applied to particle physics promises to provide improvements in both of these areas.\\
Incorporating machine learning in particle physics workflows will require significant research and development over the next five years. Areas where significant improvements are needed include:
\begin{itemize}
\item \textbf{Physics performance} of reconstruction and analysis algorithms;
\item \textbf{Execution time} of computationally expensive parts of event simulation, pattern recognition, and calibration;
\item \textbf{Realtime implementation} of machine learning algorithms;
\item \textbf{Reduction of the data footprint} with data compression, placement and access.
\end{itemize}
\noindent
\subsection{Motivation}
The experimental high-energy physics (HEP) program revolves around two main objectives that go hand in hand: probing the Standard Model (SM) with increasing precision and searching for new particles associated with physics beyond the SM. Both tasks require the identification of rare signals in immense backgrounds. Substantially increased levels of pile-up from additional proton--proton interactions in each bunch crossing at the HL-LHC will make this a significant challenge.\\
%Machine learning community benefits from rapid development and experimentation. Modern machine-learning tools are themselves undergoing a process of rapid evolution. Noteworthy are the recent machine-learning tools originating from industry. The HEP community can benefit from these advancements by developing flexible software and workflows. A major challenge is how to engage the ML community and maximally benefit from these developments. Common to the HEP and ML communities is the desire to interpret machine learning models~\cite{interpretability}.
Machine learning algorithms are already the state-of-the-art in event and particle identification, energy estimation and pile-up suppression applications in HEP. Despite their present advantage, machine-learning algorithms still have significant room for improvement in their exploitation of the full potential of the dataset.
% an open area of research expected to require a substantial R\&D effort. %ML developments for HEP are likely to have applications for other scientific domains which are similarly exposed to large amounts of data.
\subsection{Brief Overview of Machine Learning Algorithms in HEP}
This section provides a brief introduction to the most important machine learning algorithms in HEP, introducing key vocabulary (in \textit{italic}).\\
%Specific application areas of machine learning in HEP are detailed in Chapter ~\ref{sec:applications}.
Machine learning methods are designed to exploit large datasets in order to reduce complexity and find new features in data. The current most frequently used machine learning algorithms in HEP are Boosted Decision Trees (BDTs) and Neural Networks (NN).\\
%\textcolor{red}{DR: PROBABLY NEED DIAGRAMS}
Typically, variables relevant to the physics problem are selected and a machine learning \textit{model} is \textit{trained} for \textit{classification} or \textit{regression} using signal and background events (or \textit{instances}).
Training the model is the most human- and CPU-time consuming step, while the application, the so called \textit{inference} stage, is relatively inexpensive.
BDTs and NNs are typically used to classify particles and events.
They are also used for regression, where a continuous function is learned, for example to obtain the best estimate of a particle's energy based on the measurements from multiple detectors.\\
Neural Networks have been used in HEP for some time; however, improvements in training algorithms and computing power have in the last decade led to the so-called deep learning
revolution, which has had a significant impact on HEP. Deep learning is particularly promising when there is a large amount of data and features, as well as symmetries and complex non-linear dependencies between inputs and outputs.\\
There are different types of deep neural network (DNN) used in HEP: fully-connected (FCN), convolutional (CNN) and recurrent (RNN). Additionally, neural networks are used in the context of generative models, where a network is trained to reproduce the multidimensional distribution of the set of training instances. Variational AutoEncoders (VAE) and the more recent Generative Adversarial Networks (GAN) are two examples of such generative models used in HEP.\\
A large set of machine learning algorithms is devoted to time series analysis and prediction. These are in general not relevant for HEP data analysis, where events are independent of each other. However, there is growing interest in these algorithms for Data Quality, Computing and Accelerator Infrastructure monitoring, as well as for physics processes and event reconstruction tasks where time is an important dimension.
\subsection{Structure of the Document}
%With many machine learning tools available and a long standing tradition of homegrown HEP software,
Applications of machine learning algorithms motivated by HEP drivers are detailed in Section~\ref{sec:applications}, while Section~\ref{sec:collaboration} focuses on outreach and collaboration with the machine learning community. Section~\ref{sec:software} focuses on the machine learning software in HEP and discusses the interplay between internally and externally developed machine learning tools. Recent progress in machine learning was made possible in part by emergence of suitable hardware for training complex models, thus in Section~\ref{sec:resources} the resource requirements of training and applying machine learning algorithms in HEP are discussed. Section~\ref{sec:training} discusses strategies for training the HEP community in machine learning. Finally, Section~\ref{sec:roadmap} presents the roadmap for the near future.\\
%Defining the best algorithms for the challenges that will be faced (Section~\ref{sec:applications}), their software implementation in production framework (Section~\ref{sec:software}), the computing resource needs (Section~\ref{sec:resources}), reaching out efficiently to the data science community (Section~\ref{sec:bridges}) and educating HEP researcher on these new technique (Section~\ref{sec:training}).
%Many of the topics that will require work in the coming years are also addressed in several other sections of the CWP, mostly data management, work-flow management, framework, computing model, training, and software integration.
%\subsection{Machine Learning and High-Energy Physics}
%Efforts are underway to bring together HEP and ML experts but more work is needed to bring the two communities together.
%The applications described are relevant beyond the LHC experiments and have been used and achieved ground-breaking potential in neutrino physics.
%the specific requirements of training large models over large datasets.% In particular the R\&D required for efficient inference in production environment.
%Even though applying most ML techniques does not require external expertise when the problems are simple, some of the challenges exposed in this document are very specific to the field of HEP and will require significant R\&D in the methods themselves.
%In view to the adoption of new mathematical and statistical techniques, widely used industry for the benefit of tackling HEP challenges, as well as training newcomers to technique that can be re-used in future steps in their curriculum, a detailed teaching plan will have to be elaborated.
\documentclass{beamer}
\usetheme{Boadilla}
\usepackage{tikz}
\usepackage{graphicx}
\usepackage{amsmath}
\usetikzlibrary{shapes.geometric,arrows, positioning, fit}
\title{Weekly Presentation}
\subtitle{Week 41}
\author{}
\institute{Luleå University of Technology}
\date{\today}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\begin{frame}
\frametitle{Overview}
\tableofcontents
\end{frame}
%%%%%%%%%%%% Add new frames below this line %%%%%%%%%
\section{Status update}
\input{frames/timeplan.tex}
\input{frames/report_flow.tex}
\input{frames/git_milestone.tex}
\section{Positioning - Arrowhead}
\begin{frame}
\frametitle{Positioning - Arrowhead}
\centering
\input{frames/positioning_flow.tex}
\end{frame}
\section{Robot arm movement}
\begin{frame}
\centering
\Huge Robot Arm
\end{frame}
%%%%%%%%%%%% Add new frames above this line %%%%%%%%%
\begin{frame}
\begin{center}
\Huge Questions?
\end{center}
\end{frame}
\end{document}
\documentclass[11pt,a4paper]{beamer}
\usetheme[progressbar=frametitle]{metropolis}
\setbeamertemplate{frame numbering}[fraction]
\useoutertheme{metropolis}
\useinnertheme{metropolis}
\usefonttheme{metropolis}
\usecolortheme{spruce}
\setbeamercolor{background canvas}{bg=white}
\definecolor{mygreen}{rgb}{.125,.5,.25}
\usecolortheme[named=mygreen]{structure}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{listings}
\lstset{numbers=left, numberstyle=\tiny, numbersep=5pt}
\lstset{language=R}
\logo{\includegraphics[width=1.5cm]{553400.jpg}}
\author{Bradley Mackay, Clemens Ehlich}
\title[Short Title]{R-Packages}
\subtitle{Statistics, Visualization and more using R}
\institute{NAWI PLUS}
\date{}
\begin{document}
%\metroset{block=fill}
%Titelblatt
\begin{frame}
\titlepage
\end{frame}
%Folie_2
\begin{frame}[t]{Roadmap}
\begin{enumerate}
\item[I.] The Theory
\end{enumerate}
\begin{enumerate}
\item Basics
\begin{enumerate}
\item What's a package?
\item Why Packages?
\item Where can you find packages?
\item Install, Uninstall, Update
\begin{enumerate}
\item Installing packages via devtools
\item Using devtools commands for installation
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{frame}
\begin{frame}[t]{Roadmap}
\begin{enumerate}
\item [2.] Making our own package
\begin{enumerate}
\item [2.1] Good Guide for starting
\item [2.2]Components of a package
\item [2.3]Description File
\item [2.4]Namespace
\item [2.5]R-Functions
\item [2.6]Object Documentation and Manuals
\item [2.7]Data, Vignettes, Demo
\end{enumerate}
\item [3] Sources
\item [II.]Creating a Package Yourself
\end{enumerate}
\end{frame}
\section{I. The Theory}
\begin{frame}[t]{1. Basics}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{packages}
\caption{Package Universe}
% \label{fig:packages}
\end{figure}
\end{frame}
%Folie_3
\begin{frame}[t]{1.1 What's a package?}
\begin{itemize}
\item R packages are collections of functions and data sets
\item they extend the basic functionalities or add new ones
\item mostly developed by the community itself
\end{itemize}
\end{frame}
%Folie_4
\begin{frame}[t]{1.2 Why Packages?}
\begin{itemize}
\item easy method for sharing your code with others
\item recurring tasks - no need to reinvent the wheel
\begin{itemize}
\item[--] to load data
\item[--] to manipulate data
\item[--] to visualize data
\item[--] etc. (more than 19,000 packages exist)
\end{itemize}
\item packages form interfaces to:
\begin{itemize}
\item[--] other software and their file formats (foreign, caffe, RQGIS, ...)
\item[--] databases (RODBC, RPostgreSQL, ...)
\item[--] other programming languages (Rcpp, RPython, ...)
\item[--] webservices (Rfacebook, RGoogleAnalytics, ...)
\end{itemize}
\end{itemize}
\end{frame}
%Folie_5
\begin{frame}[t]{1.3 Where can you find packages?}
\begin{itemize}
\item CRAN - The Comprehensive R Archive Network
\begin{itemize}
\item[] https://cran.r-project.org
\end{itemize}
\item BioConductor
\begin{itemize}
\item[] http://bioconductor.org/
\end{itemize}
\item GitHub
\begin{itemize}
\item[] https://github.com/
\end{itemize}
\item The most comfortable way is to use \textit{RDocumentation}, because you can search more than 19,000 CRAN, Bioconductor and GitHub packages at once.
\begin{itemize}
\item[] https://www.rdocumentation.org/ also available as a package
\end{itemize}
\end{itemize}
\end{frame}
%%Folie_6
\begin{frame}[t]{1.4 Install, Uninstall, Update}
\begin{block}{Install:}
\begin{itemize}
\item[] install.packages("xyz")
\item[]install.packages(c("xyz", "123"))
\end{itemize}
\end{block}
\begin{block}{Uninstall:}
\begin{itemize}
\item[] remove.packages("xyz")
\end{itemize}
\end{block}
\begin{block}{Update:}
\begin{itemize}
\item[] old.packages() $ \rightarrow $ check what packages need an update
\item[] update.packages() $ \rightarrow $ update packages
\end{itemize}
\end{block}
\end{frame}
%%Folie_7
\begin{frame}[t]{1.4.1 Installing packages via devtools}
\begin{itemize}
\item one big problem: each repository has its own way to install a package from them
\item to simplify this process you can use the package "devtools"
\item but you might also need to install:
\begin{itemize}
\item[--]"Rtools" for Windows
\item[--]"Xcode" for Mac
\item[--]"r-devel and r-devel" for Linux
\end{itemize}
\end{itemize}
\end{frame}
%%Folie_7
\begin{frame}[t]{1.4.2 Installing packages via devtools}
After devtools is installed, you will be able to use its utility functions to install other packages. Some options are:
\begin{itemize}
\item[--]install\_bioc() from Bioconductor
\item[--]install\_cran() from CRAN
\item[--]install\_github() from GitHub
\item[--]install\_local() from a local file
\item[--]install\_url() from a URL
\end{itemize}
Example:
devtools::install\_github("hadley/babynames")
\end{frame}
\begin{frame}[t]{2. Making our own package}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{user}
\caption{pack a pack}
\label{fig:packages}
\end{figure}
\end{frame}
\begin{frame}[t]{2.1 Making our own package}
\begin{itemize}
\item Hadley Wickham - Chief Scientist at RStudio, Adjunct Professor of Statistics at the University of Auckland, Stanford University, and Rice University
\item Free to read Book "R Packages": \textbf{http://r-pkgs.had.co.nz/}
\item https://cran.r-project.org/doc/manuals/r-release/R-exts.html
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.25\linewidth]{hadley}
\includegraphics[width=0.25\linewidth]{cover}
\caption{H. Wickham and his Book}
\label{fig:packages}
\end{figure}
\end{frame}
%%Folie_7
\begin{frame}[t]{2.2 Components of a package}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{stott}
\caption{package structure}
\label{fig:packages}
\end{figure}
\end{frame}
\begin{frame}[t]{2.3 Description File}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Desc}
\caption{Description File}
\label{fig:packages}
\end{figure}
\end{frame}
\begin{frame}[t]{2.3 Description File}
\begin{itemize}
\item Title
\item Description
\item Dependencies
\begin{itemize}
\item list of necessary packages (and also package versions)
\item "LinkingTo" a Package - if you want to use c or c++ code from another package
\end{itemize}
\item Author
\item License (MIT, GPL-2, ...)
\end{itemize}
\end{frame}
\begin{frame}[t]{2.4 Namespace}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Names}
\caption{Namespace}
\label{fig:packages}
\end{figure}
\end{frame}
\begin{frame}[t]{2.4 Namespace}
\begin{itemize}
\item With NAMESPACE you define the way in which your package interacts with other packages. For Example, to:
\begin{itemize}
\item ensure that other packages won’t interfere with the code you include
\item your code won’t interfere with other packages
\item and that your package works regardless of the
environment in which it’s run
\end{itemize}
\item Practical example: two loaded packages both provide a summarize() function. Without the ":: operator", which function is called depends on the order in which the packages are loaded; writing pkg::summarize() removes that ambiguity.
\end{itemize}
\end{frame}
\begin{frame}[t]{2.5 R-Functions}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Rfunc}
\caption{R-Functions}
\label{fig:packages}
\end{figure}
\end{frame}
\begin{frame}[t]{2.5 R-Functions}
\begin{itemize}
\item include the functions and the R-Code itself
\item with the Rcpp package it is possible to write C or C++ code
\end{itemize}
\end{frame}
\begin{frame}[t]{2.6 Object Documentation and Manuals}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Manu}
\caption{Object Documentation and Manual}
\label{fig:packages}
\end{figure}
\end{frame}
\begin{frame}[t]{2.6 Object Documentation and Manuals}
\begin{itemize}
\item Good Documentation is one of the most important aspects of a good package
\begin{itemize}
\item Documenting Functions, Classes, Generics and Methods
\item Documenting Datasets
\item Documenting Packages
\item ...
\end{itemize}
\item R provides a standard way of documenting the objects in a package: you write .Rd files in the man/ directory.
\item These files use a custom syntax, loosely based on LaTeX, and are
rendered to HTML, plain text, and PDF for viewing.
\item there are two ways to do this:
\begin{itemize}
\item writing these files by hand
\item use the package "roxygen2" (recommended)
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}[t]{2.7 Data, Vignettes, Demo}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{opt}
\caption{Data, Vignettes, Demo}
\label{fig:packages}
\end{figure}
\end{frame}
\begin{frame}[t]{2.7 Data - External Input (optional)}
\begin{itemize}
\item to include own data in a package
\item three main ways to include data in your package:
\begin{itemize}
\item store binary data (available for the user) in \textbf{data/}
\item store parsed data (not available for the user) in
\textbf{R/sysdata.rda}
\item store raw data in \textbf{inst/extdata}
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}[t]{2.7 Vignettes (optional)}
\begin{itemize}
\item vignettes are a long-form guide to your package
\item describes a problem like a book chapter or an academic paper
\item For Example: https://cran.r-project.org/web/packages/dplyr/vignettes/colwise.html
\item The elegant way to build a vignette is to use RMarkdown and Knitr.
\item Workflow:
\begin{enumerate}
\item Create a vignettes/ directory.
\item Add the necessary dependencies to DESCRIPTION
\item Draft a vignette, vignettes/my-vignette.Rmd.
\end{enumerate}
\end{itemize}
\end{frame}
\begin{frame}[t]{2.7 Demo (optional)}
\begin{itemize}
\item Demos are like examples
\item A demo is an .R file that lives in \textbf{demo/}
\item Largely superseded by the introduction of vignettes
\end{itemize}
\end{frame}
%%Folie_x
\begin{frame}[t]{3. Sources }
\begin{enumerate}
\leftskip3em \item [Fig.1:] https://www.elasticfeed.com/\linebreak ab4da588234cfa0cabc5373f0b69ae5e/
\item[Fig.2:] https://rtask.thinkr.fr/wp-content/\linebreak uploads/user2019\_create\_package\_with\_pkg.png
\item [Fig.3:] http://r-pkgs.had.co.nz/cover.png
\item [Fig.3:]https://www.mango-solutions.com/wp-content/\linebreak uploads/2018/03/Hadley-Wickham\_web.png
\item [Fig.4-9:] https://methodsblog.files.wordpress.com/2015/11/\linebreak stott-1.jpg?w=1024\&h=464
\end{enumerate}
\end{frame}
\section{II. Creating a Package Yourself}
\begin{frame}{Packages we will be using}
\begin{columns}
\begin{column}{0.20\textwidth}
\begin{center}
\includegraphics[width=1\textwidth]{devtools}
\end{center}
\end{column}
\begin{column}{0.20\textwidth}
\begin{center}
\includegraphics[width=1\textwidth]{usethis}
\end{center}
\end{column}
\begin{column}{0.20\textwidth}
\begin{center}
\includegraphics[width=1\textwidth]{roxygen2}
\end{center}
\end{column}
\pause
\begin{column}{0.20\textwidth}
\begin{center}
\includegraphics[width=1\textwidth]{newnice}
\end{center}
\end{column}
\begin{column}{0.20\textwidth}
\begin{center}
\includegraphics[width=1\textwidth]{shrek_hex}
\end{center}
\end{column}
\end{columns}
\end{frame}
\end{document}
\documentclass[11pt]{beamer}
\usepackage{hyperref}
\usepackage{color}
\usepackage{amsmath}
\usepackage{listings}
\lstset{numbers=none,language=[ISO]C++,tabsize=4,
frame=single,
basicstyle=\small,
showspaces=false,showstringspaces=false,
showtabs=false,
keywordstyle=\color{blue}\bfseries,
commentstyle=\color{red},
}
\usepackage{verbatim}
\usepackage{fixltx2e}
\usepackage{graphicx}
\usepackage{longtable}
\usepackage{float}
\usepackage{wrapfig}
\usepackage{soul}
\usepackage{textcomp}
\usepackage{marvosym}
\usepackage{wasysym}
\usepackage{latexsym}
\usepackage{amssymb}
\usepackage{hyperref}
\tolerance=1000
\usepackage{minted}
\providecommand{\alert}[1]{\textbf{#1}}
\title{module1}
\author{gar}
\date{}
\hypersetup{
pdfkeywords={},
pdfsubject={},
pdfcreator={Emacs Org-mode version 7.9.3f}}
\begin{document}
\maketitle
\begin{frame}
\frametitle{Outline}
\setcounter{tocdepth}{3}
\tableofcontents
\end{frame}
\section{Introduction to C Programming}
\label{sec-1}
\begin{frame}[fragile]\frametitle{Extras -- Introduction to Unix-like OSes}
\label{sec-1-1}
Useful commands
\begin{itemize}
\item \verb~man~: This is a command to know about other commands. E.g. \verb~man ls~ gives the manual of the \verb~ls~ command.
\item \verb~ls~ : List contents in current directory (folder)
\item \verb~cal~: Display the calendar
\item \verb~rm~: Remove (delete) any file
\item \verb~mv~: Move a file from one location to another, also useful for renaming files
\item and many more -- explore the directories \verb~/bin~ and \verb~/usr/bin~
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Extras -- Getting started with C}
\label{sec-1-2}
\begin{itemize}
\item C is a general-purpose programming language
\item Used mainly for implementing operating systems and application software for computers and embedded systems
\item Developed by Dennis Ritchie and used to re-implement the Unix OS
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Basic Structure of a C Program}
\label{sec-1-3}
\begin{verbatim}
Preprocessor Directives
Global Declarations
main()
{
Local Declarations
Statements
}
User defined functions
\end{verbatim}
\end{frame}
\begin{frame}[fragile]\frametitle{Basic Structure of a C Program}
\label{sec-1-4}
\begin{itemize}
\item Print the words ``Hello, world!''
\begin{minted}[]{C}
#include <stdio.h> /* include information about the
standard input/output library */
int main() /* define function named main
and returns an integer */
{
printf("Hello, world!\n"); /* prints the words,
\n is the newline character */
return 0; /* The value 0 is returned
to the OS on completion */
}
\end{minted}
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Extras -- Getting started with C}
\label{sec-1-5}
Code compilation:
\begin{itemize}
\item Compilation is a process of converting the source code to machine code
\begin{itemize}
\item i.e. converting from human-readable code to a code which the machine understands
\item The output will be a binary file (0's and 1's)
\item They encode instructions regarding the action to be performed by the CPU (e.g. copy from one location to another, multiply two numbers etc.)
\end{itemize}
\end{itemize}
Compiling and executing in linux:
\begin{itemize}
\item \verb~gcc hello.c~
\item \verb~./a.out~
\item Program consists of \emph{functions} and \emph{variables}
\begin{itemize}
\item functions contain \emph{statements} that specify operations to be done
\item variables store the values used during the operation
\end{itemize}
\item The main function is the beginning of the execution
\item That may call other functions
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Getting started}
\label{sec-1-6}
\begin{itemize}
\item \begin{minted}[]{C}
"Hello, world!\n"
\end{minted}
is called a string constant
\item \begin{minted}[]{C}
printf("Hello, world!
");
\end{minted}
would cause an error
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Extras -- Comments and Programming Style}
\label{sec-1-7}
\begin{itemize}
\item Comments are statements which describe the function of the code
\begin{itemize}
\item A single-line comment (C++ style, not allowed in the original ANSI C89 standard, added in C99) -- \verb~//~
\item A multi-line comment -- \verb~/* ... */~
\item \verb~/*~ and \verb~*/~ are called the comment delimiters
\end{itemize}
\item All the characters within the comments are ignored during compilation
\item Use an editor which supports
\begin{itemize}
\item syntax highlighting
\item automatic indentation
\item autocompletion
\end{itemize}
\item All these features will minimize the chances of errors and bugs
\item e.g. emacs, geany, vi etc.
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Variables and arithmetic expressions}
\label{sec-1-8}
\begin{itemize}
\item Declaration statement declares a variable to be used in the program
\begin{itemize}
\item E.g. \verb~int num;~
\end{itemize}
\item Assignment statement
\begin{itemize}
\item E.g. \verb~int num = 4;~
\item Value of 4 is assigned to a variable called \verb~num~
\end{itemize}
\item Increment the value of \verb~num~ by 2
\begin{itemize}
\item \verb~num = num + 2;~
\item Don't think of it as a mathematical equation
\item It means whatever value \verb~num~ contains, 2 will be added to it, and then stored back in \verb~num~
\end{itemize}
\end{itemize}
\begin{itemize}
\item To compute square of a number
\begin{itemize}
\item \verb~sqnum = num * num;~
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Variable Names}
\label{sec-1-9}
\begin{itemize}
\item Consists of letters (underscores allowed) and digits, must begin with a letter
\item Usually lower case letters are used for variable names and all upper case for symbolic constants
\item Keywords like \verb~if~, \verb~else~, \verb~int~, \verb~char~ etc. can't be used as variable names
\item Use meaningful names to indicate the purpose of the variable
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Data Types and Sizes}
\label{sec-1-10}
\begin{center}
\begin{tabular}{lp{9cm}}
\verb~char~ & single byte capable of holding one character \\
\verb~int~ & an integer \\
\verb~float~ & single-precision floating point \\
\verb~double~ & double-precision floating point \\
\end{tabular}
\end{center}
Qualifiers:
\begin{itemize}
\item E.g.
\begin{minted}[]{C}
short int a;
long int c;
unsigned int d;
\end{minted}
\item \verb~short~ modifies the size taken by an \verb~int~: on most current platforms the integer shrinks from 32 bits to 16 bits, though the exact sizes are implementation-defined (the \verb~sizeof~ check on the next slide shows how to verify this)
\end{itemize}
\end{frame}
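\begin{frame}[fragile]\frametitle{Data Types and Sizes -- checking with sizeof}
A small sketch to check the sizes on a given platform; the values in the comments are typical for 64-bit Linux with gcc and may differ elsewhere:
\begin{minted}[]{C}
#include <stdio.h>

int main()
{
    /* sizes are implementation-defined; sizeof(char) is 1 by definition */
    printf("char   : %zu\n", sizeof(char));   /* 1 */
    printf("short  : %zu\n", sizeof(short));  /* 2 */
    printf("int    : %zu\n", sizeof(int));    /* 4 */
    printf("long   : %zu\n", sizeof(long));   /* 8 */
    printf("float  : %zu\n", sizeof(float));  /* 4 */
    printf("double : %zu\n", sizeof(double)); /* 8 */
    return 0;
}
\end{minted}
\end{frame}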
\begin{frame}[fragile]\frametitle{Constants}
\label{sec-1-11}
\begin{center}
\begin{tabular}{|l|l|}
1234 & \verb~int~ \\
1234566789L & \verb~long~ \\
1234566789ul & \verb~unsigned long~ \\
1.1 or 11e-1 & \verb~double~ \\
0x3f & \verb~hexadecimal~ \\
037 & \verb~octal~ \\
`a' & character constant, ASCII value 97 \\
\end{tabular}
\end{center}
\begin{itemize}
\item ASCII -- American Standard Code for Information Interchange
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Constants}
\label{sec-1-12}
\begin{itemize}
\item Constant expression:
\begin{minted}[]{C}
#define LEN 100
char line[LEN+1];
\end{minted}
\item String constant or string literal
\begin{minted}[]{C}
"this is a string"
\end{minted}
\item can be concatenated at compile time:
\begin{minted}[]{C}
"this is" "a string"
\end{minted}
\item enumeration constant:
\begin{minted}[]{C}
enum boolean {NO, YES};
\end{minted}
\item $\backslash$ followed by another character is called an escape sequence,
which is translated to another character when used in a string literal
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Declarations}
\label{sec-1-13}
\begin{itemize}
\item Declaration specifies a type, and a list of one or more variables of that type:
\begin{minted}[]{C}
int high, mid, low;
char a,c;
\end{minted}
\item It can also be split into separate lines
\item \verb~const~ qualifier specifies that its value will not be changed:
\begin{minted}[]{C}
const double e = 2.71828182845905;
const char st[] = "Test String";
\end{minted}
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Arithmetic Operators}
\label{sec-1-14}
\begin{itemize}
\item \verb~x % y~ gives remainder when x is divided by y
\item \verb~%~ can't be applied to float or double
\item + and - have same precedence, but lower than * / and \verb~%~
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Relational and Logical operators}
\label{sec-1-15}
\begin{itemize}
\item \verb~>, >=, <, <=~ have the same precedence
\item Outcome is true or false, indicated by digits 1 or 0, respectively
\item \verb~a < b+1~ means \verb~a < (b+1)~
\item \verb~&&~ (logical AND) and \verb~||~ (logical OR) operations are evaluated left to right
\item E.g. for \verb~int a = 1, b = 2;~ -- outputs for different cases are shown:
\end{itemize}
\begin{center}
\begin{tabular}{|l|l|}
\verb~a > b~ & 0 (false) \\
\verb~a < b~ & 1 (true) \\
\verb~a+1 >= b~ & 1 \\
\end{tabular}
\end{center}
\begin{itemize}
\item Any non-zero (positive or negative) value is considered true
\item For \verb~a=0, b=10~
\end{itemize}
\begin{center}
\begin{tabular}{|l|l|}
\verb~a && b~ & 0 \\
\verb~a~ $\vert\vert$ \verb~b~ & 1 \\
\end{tabular}
\end{center}
\end{frame}
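\begin{frame}[fragile]\frametitle{Relational and Logical operators -- a short check}
A small program reproducing the two tables (note that \verb~&&~ and \verb~||~ also stop evaluating as soon as the result is known -- so-called short-circuit evaluation):
\begin{minted}[]{C}
#include <stdio.h>

int main()
{
    int a = 1, b = 2;
    printf("%d %d %d\n", a > b, a < b, a+1 >= b); /* 0 1 1 */

    a = 0; b = 10;
    printf("%d %d\n", a && b, a || b);            /* 0 1 */

    if (-5) /* any non-zero value is considered true */
        printf("non-zero is true\n");
    return 0;
}
\end{minted}
\end{frame}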
\begin{frame}[fragile]\frametitle{Type conversions}
\label{sec-1-16}
\begin{itemize}
\item When an expression has operands of different types, they are converted to a common type
\item Automatic conversions convert a narrower data type to a wider one or vice versa
\begin{itemize}
\item E.g. f = f + i;
\item An implementation of \verb~atoi~ to convert a character string of digits to its numeric equivalent (a minimal sketch follows on the next slide)
\end{itemize}
\end{itemize}
\begin{itemize}
\item Type conversions can also be forced with a unary operator called a \emph{cast}
E.g. to convert `i' from an integer to a double, we may use \verb~(double)i~
\end{itemize}
\end{frame}
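\begin{frame}[fragile]\frametitle{Type conversions -- an atoi-style sketch}
A minimal sketch of such a conversion (the name \verb~my_atoi~ is ours; the standard library version lives in \verb~<stdlib.h>~; no sign, overflow or error handling):
\begin{minted}[]{C}
#include <stdio.h>

/* convert a string of ASCII digits to its numeric value */
int my_atoi(const char s[])
{
    int i, n = 0;
    for (i = 0; s[i] >= '0' && s[i] <= '9'; ++i)
        n = 10 * n + (s[i] - '0'); /* char converted to int */
    return n;
}

int main()
{
    printf("%d\n", my_atoi("1234")); /* prints 1234 */
    return 0;
}
\end{minted}
\end{frame}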
\begin{frame}[fragile]\frametitle{Type conversions}
\label{sec-1-17}
E.g.
\begin{itemize}
\item Implicit type conversion
\begin{minted}[]{C}
int i = 3, j;
float f = 4.0;
f = i+f;
\end{minted}
\begin{itemize}
\item On the RHS, \verb~i~ is converted to float first, then addition is performed, and finally assigned to \verb~f~
\item If \verb~i~ was on the LHS instead of \verb~f~, all the above steps occur, but during assignment the result is converted to an integer
\end{itemize}
\item Explicit type conversion (casting)
\begin{itemize}
\item This is useful when dealing with fractions having integer data type
\begin{minted}[]{C}
int i=11, j=12;
float f=(float)i/j;
\end{minted}
\end{itemize}
\begin{itemize}
\item If we had left out `(float)', the result would have been 0
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Extras -- Floating Point Representation}
\label{sec-1-18}
The \verb~float~ and \verb~double~ data types are represented using IEEE 754 floating point format
\begin{itemize}
\item \verb~float~ takes 32 bits of memory
\item Its format is given by
\begin{center}
\begin{tabular}{|p{1ex}|p{8ex}|p{23ex}|}
\hline
S & E (8) & Mantissa (23) \\
\hline
\end{tabular}
\end{center}
\item Value is : $(-1)^{S} \times 2^{E-127}\times$ Mantissa
\item \verb~double~ takes 64 bit of memory and the format is given below
\end{itemize}
\begin{center}
\begin{tabular}{|p{1ex}|p{11ex}|p{42ex}|}
\hline
S & E (11) & Mantissa (52) \\
\hline
\end{tabular}
\end{center}
\begin{itemize}
\item Value is given by: $(-1)^{S}\times 2^{E-1023}\times$ Mantissa
\item Search around for examples; a worked example follows on the next slide
\end{itemize}
\end{frame}
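\begin{frame}[fragile]\frametitle{Extras -- Floating Point: a worked example}
A worked example for single precision (for normalised numbers the stored mantissa is the fraction after an implicit leading 1):
\begin{itemize}
\item $5.75_{10} = 101.11_2 = 1.0111_2 \times 2^2$
\item Sign: $S = 0$; Exponent field: $E = 2 + 127 = 129 = 10000001_2$
\item Stored mantissa: $0111\,0000\,0000\,0000\,0000\,000$
\item Bit pattern: \verb~0 10000001 01110000000000000000000~, i.e. \verb~0x40B80000~
\item Check: $(-1)^{0} \times 2^{129-127} \times 1.0111_2 = 4 \times 1.4375 = 5.75$
\end{itemize}
\end{frame}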
\begin{frame}[fragile]\frametitle{Increment and Decrement operators}
\label{sec-1-19}
\begin{itemize}
\item \verb~++~ adds 1 to operand
\item \verb~--~ subtracts 1 from the operand
\item Can be used as postfix or prefix
\item \verb~x++;~ and \verb~++x;~ is same as \verb~x = x+1;~
\item The following table shows the difference when using the increment operator as a postfix and a prefix to the variable \verb~x~
\end{itemize}
\begin{center}
\begin{tabular}{|p{5cm}|l|}
\hline
Using increment operator & Equivalent statements \\
\hline
\verb~int x = 5;~ & \verb~int x = 5;~ \\
\verb~int a = x++;~ & \verb~int a = x;~ \\
& \verb~x = x+1;~ \\
\hline
\verb~int x = 5;~ & \verb~int x = 5;~ \\
\verb~int a = ++x;~ & \verb~x = x+1;~ \\
& \verb~int a = x;~ \\
\hline
\end{tabular}
\end{center}
\begin{itemize}
\item The same applies to the decrement operator too
\end{itemize}
\end{frame}
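\begin{frame}[fragile]\frametitle{Increment and Decrement operators -- example}
A short program to check the table on the previous slide (variable names as in the table):
\begin{minted}[]{C}
#include <stdio.h>

int main()
{
    int x = 5;
    int a = x++;                      /* a gets the old value of x */
    printf("a = %d, x = %d\n", a, x); /* a = 5, x = 6 */

    x = 5;
    a = ++x;                          /* x is incremented first */
    printf("a = %d, x = %d\n", a, x); /* a = 6, x = 6 */
    return 0;
}
\end{minted}
\end{frame}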
\begin{frame}[fragile]\frametitle{Bitwise Operators}
\label{sec-1-20}
\begin{itemize}
\item Has six operators for bit manipulation
\end{itemize}
\begin{center}
\begin{tabular}{|l|l|}
\& & bitwise AND \\
$\vert$ & bitwise inclusive OR \\
\verb~^~ & bitwise exclusive OR \\
\verb~<<~ & left shift \\
\verb~>>~ & right shift \\
\~{} & one's complement (unary) \\
\end{tabular}
\end{center}
\begin{itemize}
\item \& masks off some bits, e.g.
\begin{itemize}
\item \verb~n = n & 0177;~
\item the last 7 bits retain their previous values, all higher bits are set to 0 (a short demo of all six operators follows on the next slide)
\end{itemize}
\end{itemize}
\end{frame}
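\begin{frame}[fragile]\frametitle{Bitwise Operators -- a short demo}
A short demonstration of the six operators (the value of \verb~n~ is arbitrary; \verb~%x~ prints in hexadecimal):
\begin{minted}[]{C}
#include <stdio.h>

int main()
{
    unsigned n = 0x3f5;         /* 0011 1111 0101 */

    printf("%x\n", n & 0177);   /* 75  : keep only the low 7 bits   */
    printf("%x\n", n | 0x0f);   /* 3ff : set the low 4 bits         */
    printf("%x\n", n ^ 0xff);   /* 30a : flip the low 8 bits        */
    printf("%x\n", n << 2);     /* fd4 : shift left, multiply by 4  */
    printf("%x\n", n >> 2);     /* fd  : shift right, divide by 4   */
    printf("%x\n", ~n & 0xfff); /* c0a : one's complement (12 bits) */
    return 0;
}
\end{minted}
\end{frame}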
\begin{frame}[fragile]\frametitle{Extras -- Experiment with debugger (gdb)}
\label{sec-1-21}
\begin{itemize}
\item \verb~gdb~ is a standard debugger available in GNU/Linux systems
\item A debugger can be used to pause a running program and check the state of program (values in variables, trace of functions called etc)
\item Along with being a debugger, it can be used as a programmer's calculator
\item Run \verb~gdb~ without arguments
\item Set a variable in \verb~gdb~ and try all the operators
\begin{itemize}
\item \verb~(gdb) set $a = 10~
\item \verb~(gdb) p/t $a~
\item \verb,(gdb) p/t ~$a,
\item \verb~(gdb) p/t $a&10~
\end{itemize}
\item \verb~/t~ is a switch to the print command which tells the debugger to display the variable in binary
\item Other switches: \verb~/x~, \verb~/o~, \verb~/d~
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Assignment operators and expressions}
\label{sec-1-22}
\begin{itemize}
\item Expressions where a variable on LHS is repeated immediately on the RHS can be written in a compact form
\begin{itemize}
\item \verb~i = i+2;~ $\implies$ \verb~i += 2;~
\item \verb~+=~ is called an assignment operator
\end{itemize}
\item Thus
expr$_1$ op= expr$_2$ is equivalent to
expr$_1$ = (expr$_1$) op (expr$_2$)
\begin{itemize}
\item \verb~x *= y+1;~ means \verb~x = x * (y+1);~
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Operator Precedence}
\label{sec-1-23}
\begin{center}
\begin{tabular}{|p{7cm}|l|}
\hline
Operators & Associativity \\
\hline
\verb~() [] -> .~ & left to right \\
\verb,! ~ ++ -- + - * & (type) sizeof, & right to left \\
\verb~* / %~ & left to right \\
\verb~+ -~ & left to right \\
\verb~<< >>~ & left to right \\
\verb~< <= >= >~ & left to right \\
\verb~== !=~ & left to right \\
\verb~&~ & left to right \\
\verb~^~ & left to right \\
$\vert$ & left to right \\
\verb~&&~ & left to right \\
$\vert\vert$ & left to right \\
\verb~?:~ & right to left \\
\verb~= += -=~ etc. & right to left \\
, & left to right \\
\hline
\end{tabular}
\end{center}
\end{frame}
\begin{frame}[fragile]\frametitle{Operator Precedence -- Examples}
\label{sec-1-24}
\begin{itemize}
\item Using the table, work out the value of each expression below when \verb~a = 2~, \verb~b = 3~, \verb~c = 4~; a program to check your answers follows on the next slide
\begin{itemize}
\item \verb~a + b<<3 + c~
\item \verb~a ^ b & 5 + c * 3~
\item \verb~(a ^ b) & (5 + c) * 3~
\end{itemize}
\end{itemize}
\end{frame}
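\begin{frame}[fragile]\frametitle{Operator Precedence -- checking the examples}
One way to check your answers is simply to print each expression from the previous slide; the comments indicate how the precedence table groups the operands:
\begin{verbatim}
#include <stdio.h>

int main(void)
{
    int a = 2, b = 3, c = 4;

    printf("%d\n", a + b<<3 + c);           /* parsed as (a+b) << (3+c)    */
    printf("%d\n", a ^ b & 5 + c * 3);      /* parsed as a ^ (b & (5+c*3)) */
    printf("%d\n", (a ^ b) & (5 + c) * 3);  /* explicit parentheses        */
    return 0;
}
\end{verbatim}
\end{frame}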
\end{document}
| {
"alphanum_fraction": 0.6613677265,
"avg_line_length": 28.6919104991,
"ext": "tex",
"hexsha": "1e6c10d2cabcf3ae389ed859923cf4f1a74607b1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a0812ff9a2758d8029a69dfe6ef0c320f9885554",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "VTUGargoyle/PCD",
"max_forks_repo_path": "module1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a0812ff9a2758d8029a69dfe6ef0c320f9885554",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "VTUGargoyle/PCD",
"max_issues_repo_path": "module1.tex",
"max_line_length": 146,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a0812ff9a2758d8029a69dfe6ef0c320f9885554",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "VTUGargoyle/PCD",
"max_stars_repo_path": "module1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5146,
"size": 16670
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Friggeri Resume/CV
% XeLaTeX Template
% Version 1.2 (3/5/15)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original authors:
% Adrien Friggeri ([email protected]): https://github.com/afriggeri/CV
% Antoine Marandon ([email protected]): https://github.com/ntnmrndn/Resume
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
% Important notes:
% This template needs to be compiled with XeLaTeX and the bibliography, if used,
% needs to be compiled with biber rather than bibtex.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[]{friggeri-cv} % Add 'print' as an option into the square bracket to remove colors from this template for printing
\begin{document}
\header{Tristan}{Le Guern}{Freelance DevOps/SRE}{Gwir ha leal}
%-------------------------------------------------------------------------------
% SIDEBAR SECTION
%-------------------------------------------------------------------------------
\begin{aside} % In the aside, each new line forces a line break
\section{Contact}
~
\textbf{Location}: France
~
\textbf{Phone}: \href{tel:0033665630252}{\underline{+33 (0)6 65 63 02 52}}
~
\textbf{Email/XMPP}: \href{mailto:[email protected]}{\underline{[email protected]}}
\section {Links}
~
\href{https://github.com/Aversiste}{Github: \underline{Aversiste}}
\href{https://www.bouledef.eu/~tleguern}{Website: \underline{WWW}}
\section{Personal}
~
\textbf{Nationality}
French
\section{Skills}
~
\textbf{Programming Languages:} C, C++, Shell, Python
\textbf{Operating Systems:} OpenBSD, Linux
\textbf{DevOps:} Ansible, Terraform, Packer, Cloud-init, Jenkins
\textbf{Human Languages:} French (Mother Tongue), English (Fluent), Breton (Beginner)
\textbf{Others:} \LaTeX
\section{Hobbies}
~
Brewing, wine making, cheese making, DIY
\end{aside}
%-------------------------------------------------------------------------------
% WORK EXPERIENCE SECTION
%-------------------------------------------------------------------------------
\section{Experience}
\subsection{Gwern (my freelance company)}
\begin{entrylist}
\entry {2018--2019} {Remote DevOps for Deveryware} {Limerick, Ireland} {
\emph{Responsible for the deployment of a mobile app's backend.}
\begin{itemize}
\item Handled 240 servers split between five environments.
\item Overhauled and cleaned up many Ansible roles and playbooks, amounting to roughly 12000 lines of code.
\item Rewrote most shell scripts to improve their safety and performance.
\item Contributed to open source projects (ansible-haproxy, ansible-wazuh-agent, ansible-rcron, ansible-matomo).
\item Deployed the Wazuh security tool.
\item Performed security tests on the VRRP protocol.
\item Handled image creation with Packer and deployment with Terraform.
\item Became maintainer of public role Deveryware/ansible-haproxy.
\item Wrote extensive documentation dedicated to oncall ops.
\item Trained an internal DevOps engineer on the job.
\end{itemize}
}
\entry {2017--2018} {Remote DevOps for Ahrefs} {Paris, France} {
\emph{Worked in an Ops team.}
\begin{itemize}
\item Managed multiple Elasticsearch clusters reaching thousands of nodes.
\item Configuration management with Puppet.
\end{itemize}
}
\end{entrylist}
\subsection{Orange}
\begin{entrylist}
\entry {2014--2017} {Sysadmin/DevOps} {Paris, France} {
\emph{Full time in the subsidiary hosting services for the rest of the company.}
\begin{itemize}
\item Administered a Xymon-based monitoring solution handling more than 5000 resources (servers, services, applications, URLs...) and 500 agents.
\item Managed the internal Xymon fork and contributions to upstream.
\item Developed custom monitoring agents (Perl, Python, shell).
\item Wrote the documentation for each and every tool.
\item Wrote the disaster recovery procedure.
\item Fixed Ansible playbooks to ensure idempotence during deployments.
\item Implemented dynamic DNS updates in the internal DNS API.
\item Various improvements to the DNS management tools.
\item Contributed to nsdiff (https://github.com/fanf2/nsdiff).
\end{itemize}
}
\end{entrylist}
%------------------------------------------------
\subsection{Mind Technologies}
\begin{entrylist}
\entry {2014--2014} {Network engineer} {Paris, France} {
\emph{Full time in a company offering expertise in managed network infrastructures.}
}
\end{entrylist}
%------------------------------------------------
\pagebreak
\subsection{Linagora}
\begin{entrylist}
\entry {2012--2014} {System engineer} {Paris, France} {
\begin{itemize}
\item Administered a Xen cluster and helped clients build their hosted services.
\item Migrated old clients' resources from dedicated or shared hosting to virtual machines, with Puppet as the deployment tool.
\item Implemented p2v (physical to virtual) migration solutions.
\item Developed an internal domain name monitoring solution.
\item Administered multiple Nagios-based monitoring solutions for various clients.
\item Handled a three-month classified mission for the French military involving documentation, training and monitoring.
\item Replaced a team of four sysadmins at Cergy-Pontoise University, worked on the complete overhaul of the SMTP, LDAP and DNS services.
\end{itemize}
}
\end{entrylist}
%------------------------------------------------
\subsection{Lab'Free}
\begin{entrylist}
\entry {2010--2011} {C++ Developer} {Paris, France} {
\emph{Internship. Developed and oversaw implementation of a video streaming software, in C++ and JavaScript.}
}
\end{entrylist}
%-------------------------------------------------------------------------------
% EDUCATION SECTION
%-------------------------------------------------------------------------------
\section{Education}
\begin{entrylist}
\entry
{2008--2013}
{Master's degree {\normalfont in Computer Science at EPITECH}}
{Paris, France}
{\emph{5 year course}}
\entry
{2011--2012}
{Exchange Student {\normalfont at Keele University}}
{Newcastle-under-Lyme, England}
{\emph{Computer Science course}}
\entry
{2008}
{Baccalauréat {\normalfont at Lycée Colbert}}
{Lorient, France}
{}
\end{entrylist}
%-------------------------------------------------------------------------------
% Personal Projects
%-------------------------------------------------------------------------------
\section{Personal Projects}
\begin{entrylist}
\entry
{2018}
{ansible modules}
{Developer}
{Various ansible modules for matomo, keepalived, haproxy, ...}
\entry
{2018}
{www.libravatar.org}
{Q/A}
{Team member dedicated to Q/A.}
\entry
{2018}
{libravatar.cgi}
{Developer}
{Small, secure implementation of the Libravatar protocol for OpenBSD}
\entry
{2011}
{libtuntap}
{Developer}
{Popular portable library for the creation and management of virtual network interfaces.}
\entry
{2011--2013}
{tNETacle (decentralized VPN)}
{Developer}
{C/C++ solution developed with a 8 students team \\
\emph{Asymmetric cryptography, UPNP, IPC, UDP, TCP, TLS, DTLS, portability, compression...}}
\end{entrylist}
\end{document}
| {
"alphanum_fraction": 0.6594911937,
"avg_line_length": 34.5603864734,
"ext": "tex",
"hexsha": "3fd26f135c9705bd8cfd6b96f6e36967423216b0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d583b51dd600b117c454148e0c788452cd075ee0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Aversiste/Resume",
"max_forks_repo_path": "tleguern_resume.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d583b51dd600b117c454148e0c788452cd075ee0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Aversiste/Resume",
"max_issues_repo_path": "tleguern_resume.tex",
"max_line_length": 150,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d583b51dd600b117c454148e0c788452cd075ee0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Aversiste/Resume",
"max_stars_repo_path": "tleguern_resume.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1790,
"size": 7154
} |
\subsection{Discounting}
| {
"alphanum_fraction": 0.75,
"avg_line_length": 5.6,
"ext": "tex",
"hexsha": "9e4da2759b30657b622a448dc13ce94ae5dd07cf",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/ai/repeated/01-01-discounting.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/ai/repeated/01-01-discounting.tex",
"max_line_length": 24,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/ai/repeated/01-01-discounting.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7,
"size": 28
} |
\documentclass[11pt]{beamer}
%\usetheme{Dresden}%{Berkeley}
\usetheme{metropolis}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{wrapfig}
%\usepackage[font=scriptsize,labelfont=bf]{subcaption}
%\usepackage[font=scriptsize,labelfont=bf]{caption}
\usepackage{FiraSans}
\usepackage{FiraMono}
\usepackage{pgfpages}
\usepackage{lmodern}
\usepackage[binary-units = true]{siunitx}
% externalize tikz images
\usepackage{tikz} % externalize tikz images
\usepackage{pgfplots} % create plots with pgf/tikz
\pgfplotsset{compat=1.14} % README here http://pgfplots.sourceforge.net/pgfplots.pdf
%\usepgfplotslibrary{external}
\usetikzlibrary{external} %
\makeatletter
\newcommand*{\overlaynumber}{\number\beamer@slideinframe}
\tikzset{
beamer externalizing/.style={%
execute at end picture={%
\tikzifexternalizing{%
\ifbeamer@anotherslide
\pgfexternalstorecommand{\string\global\string\beamer@anotherslidetrue}%
\fi
}{}%
}%
},
external/optimize=false
}
\let\orig@tikzsetnextfilename=\tikzsetnextfilename
\renewcommand\tikzsetnextfilename[1]{\orig@tikzsetnextfilename{#1-\overlaynumber}}
\makeatother
\tikzset{every picture/.style={beamer externalizing}}
\tikzexternalize % activate!
\tikzsetexternalprefix{tikz/} % set subfolder
\usepgfplotslibrary{fillbetween,dateplot} % need this to fill between functions
\usetikzlibrary{ patterns, decorations, decorations.markings, decorations.pathreplacing,
shapes, shapes.geometric, shapes.misc, arrows, arrows.meta,
positioning, intersections,
overlay-beamer-styles,
mindmap,trees,shadows}
\usepackage[
backend = biber,
style = phys,
autocite = superscript,
sortcites = true,
]{biblatex}
\addbibresource{../tex/bib_articles.bib}
\addbibresource{../tex/bib_books.bib}
\addbibresource{../tex/bib_websites.bib}
\usepackage{csquotes}
\renewcommand{\footnotesize}{\tiny}%{\scriptsize}
%\setbeamerfont{bibliography entry note}{size=\tiny}
%\renewcommand*{\bibfont}{\scriptsize}
%\setbeamertemplate{bibliography item}{\insertbiblabel}
\setbeameroption{show notes on second screen}
\author[D. Bazzanella]{Davide Bazzanella}
\title[All-Optical Neural Networks]{Optical Bistability As Neural Network Nonlinear Activation Function}
%\subtitle[short subtitle]{long subtitle}
%\logo{\includegraphics[height=2.3cm]{pics/unitn_logo.png} }
\institute[UNITN]{Università degli studi di Trento}
\date{20th March 2018}
\subject{Master Thesis in Physics}
\setbeamercovered{transparent}
\setbeamertemplate{navigation symbols}{}
\metroset{numbering=fraction, background=light}
%\newcommand{\frameofframes}{/}
%\newcommand{\setframeofframes}[1]{\renewcommand{\frameofframes}{#1}}
%\setframeofframes{of}
%\setbeamertemplate{footline}{
% \begin{beamercolorbox}[ht=2.5ex, dp=1.125ex, leftskip=.3cm, rightskip=.3cm plus1fil]
% {title in head/foot}
% {\insertshorttitle} \hfill {\insertframenumber~\frameofframes~\inserttotalframenumber}
%% {author in head/foot}
%% {\insertshortauthor} \hfill {\insertinstitute}
% \end{beamercolorbox}
%}
% set automatic fade transition
\addtobeamertemplate{background canvas}{\transfade}{}
% \section[Text]{Long Text}: Long Text is used in TOC, Text in navigation.
% \footfullcite{ <citation> }
\begin{document}
\frame{\titlepage}
%%%%% %%%%% %%%%% %%%%% %%%%% %%%%%
\section[Introduction]{Introduction}
\begin{frame}{All-optical Artificial Neural Networks}
Applying integrated photonics to the design of artificial neural network architectures

Developing simulations on standard software libraries that enable performance comparisons
\end{frame}
%%%%% %%%%% %%%%% %%%%% %%%%% %%%%%
\begin{frame}{Outline}
\tableofcontents[pausesections]
\end{frame}
%%%%% %%%%% %%%%% %%%%% %%%%% %%%%%
\section{Artificial Neural Networks}
%
\begin{frame}{ANNs}
Artificial Neural Networks are computation systems composed of a collection of nodes that loosely mimic biological neurons.
\end{frame}
%
\begin{frame}{ANNs blocks}
ANNs are composed of single units, \textit{nodes}, which process information in a way loosely similar to biological neurons.
\begin{figure}
\centering
\input{tikz/node.tex}
\end{figure}
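In the simplest formulation (the weights $w_i$, inputs $x_i$, bias $b$ and activation function $f$ are generic symbols, not tied to a particular implementation), each node computes
\begin{equation*}
y = f\left( \sum_{i} w_{i} x_{i} + b \right)
\end{equation*}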
\end{frame}
%
%\begin{frame}[standout]
% What can they do?
%\end{frame}
\begin{frame}[c]{What can they do?}
\center \huge What can they do?
\end{frame}
%
\begin{frame}{What can they do?}
\begin{columns}
\column{0.6\textwidth}
ANNs can solve complex problems:
\begin{itemize}
\item \alert<2>{\textbf<2>{classification}}
\item \alert<3>{\textbf<3>{clustering}}
\item \alert<4>{\textbf<4>{pattern recognition}}
\item \alert<5>{\textbf<5>{time series prediction}}
\end{itemize}
\column{0.35\textwidth}
\begin{figure}
\centering
\only<2>{\includegraphics[draft,width=3cm,height=2cm]{figures/foo.png}}
\only<3>{\includegraphics[draft,width=3cm,height=3cm]{figures/foo.png}}
\only<4>{\includegraphics[draft,width=3cm,height=4cm]{figures/foo.png}}
\only<5>{\includegraphics[draft,width=3cm,height=5cm]{figures/foo.png}}
\end{figure}
\end{columns}
\end{frame}
%
\begin{frame}[c]{How do they work?}
\center \huge How do they work?
\tikzset{external/export next=false}
\begin{tikzpicture}[remember picture, overlay]
\draw [line width=10mm, opacity=.25] (current page.center)++(-\textwidth/4,0) circle [radius=2cm];
\node (c) at (current page.center) {};
% \draw [->, very thick,red,opacity=0.5] (c) to[bend right] (current page.south east);
% \draw [overlay , ->, very thick,red,opacity=0.5] (c) to[bend right] (n1);
%\node [rotate=60,scale=2,text opacity=0.2] at (current page.center) {Example};
\end{tikzpicture}
\end{frame}
%\begin{frame}{How do they work?}
% ANNs can obtain arbitrary decision regions\footnotemark
%
% \vspace*{2em}
% The amount of free parameters in an ANN, allow ..
% \footnotetext{\fullcite{duda2012pattern}}%\footfullcite{duda2012pattern}
%\end{frame}
\begin{frame}{How do they work?}
ANNs can obtain arbitrary decision regions\footnotemark
\vspace*{2em}
The large number of free parameters in an ANN allows it to form such regions
\tikzset{external/export next=false}\tikz[remember picture] \node [circle,fill=red!50] (n1) {};
?
\footnotetext{\fullcite{duda2012pattern}}%\footfullcite{duda2012pattern}
\tikzset{external/export next=false}\tikz[remember picture,overlay] \draw [->, very thick,red,opacity=0.5] (c)++(-\textwidth,0) to[bend right] (n1);
\end{frame}
\begin{frame}{How do they work?}
\begin{itemize}[<+->]
\item training
\begin{itemize}
\item evaluate loss
\item adjust parameters
\end{itemize}
\item validation
\item test
\end{itemize}
\end{frame}
%
%%%%% %%%%% %%%%% %%%%% %%%%% %%%%%
\section{Microring Resonator}
\begin{frame}{MRR}
\begin{columns}
\column{0.45\textwidth}
Consider a MRR in the \textit{Add-Drop Filter} configuration
\begin{align*}
\texttt{T} \left( \omega \right) &= f \left[ ~\texttt{I} \left( \omega \right) ~\right] \\
\texttt{D} \left( \omega \right) &= f \left[ ~\texttt{I} \left( \omega \right) ~\right]
\end{align*}
\visible<3->{Coupling is governed by
\[\tau \quad \mathrm{and} \quad \kappa.\]}
\column{0.45\textwidth}
\begin{figure}
\centering
\input{tikz/MRR.tex}
% \includegraphics[draft,width=4.5cm,height=3cm]{figures/foo.png}
\end{figure}
\end{columns}
\end{frame}
\begin{frame}{Theory}
Linear
\end{frame}
\begin{frame}{Theory}
Nonlinear
\end{frame}
\begin{frame}[plain]{Experiments}
\begin{figure}[htbp]
\input{tikz/pump_setup.tex}
\end{figure}
\end{frame}
%%%%% %%%%% %%%%% %%%%% %%%%% %%%%%
\section{ANN Simulations}
\begin{frame}{Simulation Framework}
What does simulating mean?

PyTorch library
\end{frame}
\begin{frame}{Fundamental blocks}
model ($FF[f_a]$)

loss criterion (CEL)

weight update criterion (SGD)
\end{frame}
\begin{frame}
model ($FF[f_a]$)
\end{frame}
\begin{frame}
\textit{Cross-Entropy Loss} (also known as negative log likelihood),
\begin{equation*}
L(y, \hat{y}) = f_{CEL}(y, \hat{y}) = - \frac{1}{N} \sum_{n=1}^N \sum_{i=1}^C y_{n,i} \log \left( \hat{y}_{n,i} \right)
\end{equation*}
\end{frame}
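\begin{frame}{Cross-Entropy Loss -- a tiny worked example}
A minimal numerical sketch (the values are invented for illustration): one sample ($N=1$), two classes ($C=2$), target $y = (1, 0)$ and prediction $\hat{y} = (0.8, 0.2)$,
\begin{equation*}
L(y, \hat{y}) = - \left[ 1 \cdot \log(0.8) + 0 \cdot \log(0.2) \right] \approx 0.22
\end{equation*}
The more confident the correct prediction, the closer the loss gets to zero.
\end{frame}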
\begin{frame}
\textit{Stochastic Gradient Descent}
with \textit{momentum}
and \textit{learning rate scheduler}.
\end{frame}
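\begin{frame}{SGD with momentum -- one common update rule}
In one common formulation (the symbols are generic: $\mu$ momentum, $\eta$ learning rate as set by the scheduler, $v$ velocity), the update reads
\begin{align*}
v_{t+1} &= \mu \, v_{t} - \eta \, \nabla_{\theta} L(\theta_{t}) \\
\theta_{t+1} &= \theta_{t} + v_{t+1}
\end{align*}
Plain SGD is recovered for $\mu = 0$; libraries may use slightly different conventions.
\end{frame}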
\begin{frame}{Operation Tests}
ReLU vs $f_{fit}$
\end{frame}
%%%%% %%%%% %%%%% %%%%% %%%%% %%%%%
\section{Conclusion}
\begin{frame}{Overview}
{I assembled an experimental setup from scratch}
\vspace{1em}
{I characterized the response of the MRR in several aspects}
\vspace{1em}
{I implemented the bistable response in standard software libraries}
\end{frame}
\begin{frame}{Future Perspectives}
\visible<1->{Continue the current work with a quantitative analysis of specific features}
\vspace{1em}
\visible<2->{Enhance the physical theory to describe time dependent phenomena}
\vspace{1em}
\visible<3->{Proceed with the development of the simulations to include all the characteristics of the physical system}
% Proceed with the implementation of physical characteristics in the simulations
\end{frame}
%\section[]{References}
%\begin{frame}[allowframebreaks]{References}
%\printbibliography
%\end{frame}
\begin{frame}{Mindmap}
\tikzsetexternalprefix{tikz/} % set subfolder
\tikzsetnextfilename{mindmap}
\begin{tikzpicture}[mindmap, concept color=gray!50!violet, font=\sf, text=white]
\tikzstyle{level 1 concept}+=[font=\sf, sibling angle=90,level distance = 30mm]
\node[concept,scale=0.7] {Center}
[clockwise from={90+45}]
child[concept color=red, visible on=<2->]{ node[concept,scale=0.7]{NW} }
child[concept color=orange, visible on=<3->]{ node[concept,scale=0.7]{NE} }
child[concept color=yellow, visible on=<4->]{
node[concept,scale=0.7]{SE}
% [clockwise from={45}]
% child[concept color=yellow!50!orange, scale=0.3, visible on=<5->] {SE-to-NE}
% child[concept color=yellow!, scale=0.3, visible on=<7->] {SE-to-SE}
% child[concept color=yellow!50!green, scale=0.3, visible on=<6->] {SE-to-SW}
}
child[concept color=green, visible on=<5->]{ node[concept,scale=0.7]{SW} };
\end{tikzpicture}
\end{frame}
\end{document}
%\begin{itemize}
% \item<1-> Text visible on slide 1
% \item<2-> Text visible on slide 2
% \item<3> Text visible on slide 3
% \item<4-> Text visible on slide 4
%\end{itemize} | {
"alphanum_fraction": 0.7025894897,
"avg_line_length": 31.4491017964,
"ext": "tex",
"hexsha": "32f10727344080abb5ba07cfd04e7df7acc539a7",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-04-10T15:29:44.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-03-25T08:26:41.000Z",
"max_forks_repo_head_hexsha": "634a2231f02586bfac4ddfa0c39f12bcd58a820f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "DejavuITA/AOD-ANN",
"max_forks_repo_path": "viva/viva.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "634a2231f02586bfac4ddfa0c39f12bcd58a820f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "DejavuITA/AOD-ANN",
"max_issues_repo_path": "viva/viva.tex",
"max_line_length": 149,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "634a2231f02586bfac4ddfa0c39f12bcd58a820f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "DejavuITA/AOD-ANN",
"max_stars_repo_path": "viva/viva.tex",
"max_stars_repo_stars_event_max_datetime": "2019-11-25T11:04:59.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-11-25T11:04:59.000Z",
"num_tokens": 3409,
"size": 10504
} |
\section{Feature Evaluation}
Successful classification largely depends on the predictive value of the features. We therefore evaluate, for each feature, whether it will be useful in discriminating between the formal and informal classes. The evaluation requires the satellite image to be divided into the formal and informal classes. This division is represented using a ground truth, which is a mask that marks on the satellite image the location and shape of the slums. The mask is used to assign each block of pixels to one of the two classes: either formal or informal. Each block contains a vector of features characterizing that particular block of pixels. All blocks of the same class are grouped, allowing for an analysis of the distribution of values in the two classes. If the distribution of the values of a particular feature varies substantially between the two classes, the feature should be a suitable candidate for classification.
The ground truth mask is constructed using a vector file containing the boundaries of the informal areas. The boundary file is rasterized and applied on top of the satellite image, creating a mask of the location of the informal areas. Because the feature calculation is performed on blocks instead of pixels, we transform the pixel-based mask into a block-based mask where every pixel represents a block of pixels in the original image. In the mask, pixels with the value zero represent blocks in the image designated as formal, while pixels with the value one represent blocks of slums.
We visualize the predictive value of the features using a boxplot and a kernel density estimation plot. When a feature is distinctive, the distributions of the values in the two classes will differ, creating a visible difference in both plots. We attempted to use the KL divergence as an objective measurement, as an alternative to the visually oriented boxplot and kernel density estimate. However, we were not able to extract reasonable KL divergence values from our data, as many outcomes indicated a divergence of infinite value. We therefore visually evaluate the differences between the formal and informal value distributions in the two types of plots.
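For reference, the discrete KL divergence between the informal distribution $P$ and the formal distribution $Q$ is, in its standard form,
\begin{equation*}
D_{KL}(P \parallel Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)},
\end{equation*}
which is infinite whenever $Q(x) = 0$ for some value $x$ with $P(x) > 0$. With histogram-style estimates of the two class distributions, empty bins in one class immediately yield an infinite divergence, which would explain the behavior we observed.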
The features will be calculated for all combinations of image sections, color bands, block sizes, and scales, resulting in large amounts of data of which only a selection is shown in the following sections. For HoG, LSR and RID the features are extracted from the three sections presented in Figure
\ref{fig:sections} for all three color bands. We analyze the impact of the block size and scale parameters by using the block sizes 20, 40, and 60 in combination with the scales 50, 100, and 150. The individual
impact of each of the two parameters is evaluated by comparing the baseline against the increased parameter, for example, comparing the results obtained with a block size of 20 to those obtained with a block size of 60, with all other parameters held constant. The evaluation rests on the assumption that the parameters are independent.
Because every combination of image section, block size, and scale yields three different results, one per RGB color band, we refrain from visualizing all bands and pick the first band, which is red. The results vary slightly between the red, green, and blue bands, although the differences are not significant. Furthermore, we select the most expressive feature for each extraction method to reduce the complexity of the visualization.
\subsection{Histogram of Oriented Gradients}
\begin{figure}
\centering
\begin{tabular}{cc}
\subfloat{\includegraphics[width=7cm]{images/HoG/inc_bk/section_1_boxplot_hog_BK20_SC100_F2}}&
\subfloat{\includegraphics[width=7cm]{images/HoG/inc_bk/section_1_boxplot_hog_BK60_SC100_F2}}\\
\subfloat{\includegraphics[width=7cm]{images/HoG/inc_bk/section_1_kde_hog_BK20_SC100_F2}}&
\subfloat{\includegraphics[width=7cm]{images/HoG/inc_bk/section_1_kde_hog_BK60_SC100_F2}}
\end{tabular}
\caption{The effect of increased block size on the HoG features. From left to
right: a block size of 20 and 60 respectively.}
\label{hog_inc_bk}
\end{figure}
For the Histogram of Oriented Gradients, the third feature, the variance, is selected for visualization in this section. Figure \ref{hog_inc_bk} shows the effect of increased block size on the distribution of values in both classes. This example uses a constant scale of 100 pixels and a variable block size of 20 and 60 pixels. The block size, in this case, does not appear to influence either distribution significantly. As a result, an increase in the size of a block within the tested range should have little to no influence on the predictive value of the feature.
\begin{figure}
\centering
\begin{tabular}{cc}
\subfloat{\includegraphics[width=7cm]{images/HoG/inc_sc/section_1_boxplot_hog_BK20_SC50_F2}}&
\subfloat{\includegraphics[width=7cm]{images/HoG/inc_sc/section_1_boxplot_hog_BK20_SC150_F2}}\\
\subfloat{\includegraphics[width=7cm]{images/HoG/inc_sc/section_1_kde_hog_BK20_SC50_F2}}&
\subfloat{\includegraphics[width=7cm]{images/HoG/inc_sc/section_1_kde_hog_BK20_SC150_F2}}\\
\end{tabular}
\caption{The effect of increased scale on the HoG features. From left to
right: a scale of 50 and 150 respectively}
\label{hog_inc_sc}
\end{figure}
Figure \ref{hog_inc_sc} illustrates the effect of increased scale at a constant block size. It seems to indicate that an increased scale leads to increased separation of the two distributions, thus increasing the predictive value of the feature. Scales larger than 150 do not seem to improve the performance; in contrast, the performance seems to decrease with scales larger than 150. The paper of Graesser \textit{et al.} combined multiple scales to improve the detection of informal settlements. We will experiment with multiple scales and scale combinations in the results section.
\subsection{Line Support Region}
The Line Support Region feature visualized is the line mean entropy, which appeared to be the most expressive of the features. We show the results from the first section because it produced the most distinct results of the three sections. The individual performance of the different sections will be evaluated in the results section. Similar to the analysis of HoG, an increase of the block size does not seem to influence the distributions of the two classes. The observed difference is as negligible as in the HoG comparison in Figure \ref{hog_inc_bk}.
\begin{figure}
\centering
\begin{tabular}{cc}
\subfloat{\includegraphics[width=7cm]{images/LSR/inc_sc/section_1_boxplot_lsr_BK20_SC50_F1}}&
\subfloat{\includegraphics[width=7cm]{images/LSR/inc_sc/section_1_boxplot_lsr_BK20_SC150_F1}}\\
\subfloat{\includegraphics[width=7cm]{images/LSR/inc_sc/section_1_kde_lsr_BK20_SC50_F1}}&
\subfloat{\includegraphics[width=7cm]{images/LSR/inc_sc/section_1_kde_lsr_BK20_SC150_F1}}
\end{tabular}
\caption{The effect of increased scale on the second LSR feature. From left to
right: a scale of 50 and 150 respectively.}
\label{lsr_inc_sc}
\end{figure}
The distributions for different scales of the LSR feature are visualized in Figure \ref{lsr_inc_sc}. Similar to the features from the Histogram of Oriented Gradients, an increase in scale appears to increase the difference between the distributions of the formal and informal classes. Even though this is not as clear in the other two sections, it suggests that an increased scale might improve performance.
\subsection{Road Intersection Density}
\begin{figure}
\centering
\begin{tabular}{ccc}
\subfloat{\includegraphics[height=3.8cm]{images/RID/1}}&
\subfloat{\includegraphics[height=3.8cm]{images/RID/2}}&
\subfloat{\includegraphics[height=3.8cm]{images/RID/3}}\\
\subfloat{\includegraphics[height=3.8cm]{images/RID/4}}&
\subfloat{\includegraphics[height=3.8cm]{images/RID/5}}&
\subfloat{\includegraphics[height=3.8cm]{images/RID/6}}\\
\end{tabular}
\caption{The distribution of values for the Road Intersection Density feature. Top: Scale 150; Bottom: Scale 50}
\label{rid}
\end{figure}
The Road Intersection Density method generates a single feature produced by the Getis and Ord local G function. We have evaluated different block sizes and scales for RID, although, in this case, the meaning of scale and block size is slightly different from that in HoG and LSR. In the calculation of the G function, we rasterize the locations of the intersections onto a grid, where the size of the blocks in the grid is the block size. Each block in the grid contains a counter of the number of intersections that fell within that block. The G function is calculated over this grid, where the scale is the neighborhood around a block, in the same manner as for the HoG and LSR features, although the neighborhood is a circle instead of a square.
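In its simplest (unstandardized) form, the local $G$ statistic of Getis and Ord for a block $i$ can be written as
\begin{equation*}
G_{i}(d) = \frac{\sum_{j \neq i} w_{ij}(d)\, x_{j}}{\sum_{j \neq i} x_{j}},
\end{equation*}
where $x_{j}$ is the intersection count in block $j$ and $w_{ij}(d)$ equals 1 if block $j$ lies within distance $d$ (the scale) of block $i$ and 0 otherwise. In practice the $z$-scored variant of this statistic is usually reported; the exact variant depends on the implementation, so the formula above should be read as a sketch of the idea rather than the precise computation.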
Figure \ref{rid} shows the differences in distribution for the RID feature for scales 150 and 50. The low scale seems to create artifacts and does not produce a distinctly different distribution. The higher scale, on the other hand, seems to show a difference, although not quite a significant one. Applying RID to the other two sections results in performance that is similar to the bottom right image in the figure. In the evaluation of RID, we only used the parameters that worked well with the first section and applied them to the second and third sections; this is likely the cause of the low performance in the other sections. We would have to use three different sets of parameters to increase performance for the other two sections. It is clear that this approach does not seem scalable to large images where there is significant variance in the types of roads.
\subsection{Conclusion}
With the right parameters, there is a clear distinction between the two classes for both the HoG and LSR features. RID, in contrast, does not show a clear distinction, likely because of the lack of section-specific parameters for sections 2 and 3. For HoG and LSR, it seems that an increase in scale improves the distinctness of the two classes, although an increased scale is computationally expensive. On the three sections, the time to compute features at a high scale is measured in hours rather than minutes. Using features with high scales on large images is therefore discouraged. Moreover, increasing the scale seems to give diminishing returns while taking significantly more time to compute. We use 150 as the maximum scale in the classification experiments to reduce the time needed to compute the features.
For HoG and LSR, the block size did not seem to improve or decrease performance. As a result, we would get the same performance with large block sizes as with small block sizes while using less computation. For now, the block size remains 20 pixels, as the paper by Graesser \textit{et al.} describes. We will test our hypothesis on the effect of block size and scale during the classification experiments in the results section.
| {
"alphanum_fraction": 0.8061029879,
"avg_line_length": 115.9052631579,
"ext": "tex",
"hexsha": "0d31de61a5ff64bffcd1e98f27790c4afebd9af5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8ae38623454dc3467333f07571401073d9c40616",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "DerkBarten/SlumDetection",
"max_forks_repo_path": "thesis/evaluation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8ae38623454dc3467333f07571401073d9c40616",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "DerkBarten/SlumDetection",
"max_issues_repo_path": "thesis/evaluation.tex",
"max_line_length": 907,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8ae38623454dc3467333f07571401073d9c40616",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "DerkBarten/SlumDetection",
"max_stars_repo_path": "thesis/evaluation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2551,
"size": 11011
} |