\section{Binary positive and non-negative integers: NArith}\label{NArith}
Here are defined various arithmetical notions and their properties,
similar to those of {\tt Arith}.
\chapter{JSON Format}\label{json.chapter}
\noindent
\CmdStan can use JSON format for input, both for
model data and for model parameters. Model data is read in by the model
constructor. Model parameters are used to initialize the sampler and
optimizer.
\section{JSON Syntax Summary}
JSON is a data interchange notation, defined by an
\href{http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf}{ECMA standard}.
JSON data files must be encoded in Unicode.
JSON data is a series of structural tokens, literal tokens, and values:
\begin{itemize}
\item Structural tokens are the left and right curly bracket, left and right square bracket, the colon, and the comma: \{\}[]:,
\item Literal tokens must always be in lowercase. There are three literal tokens: \code{true}, \code{false}, \code{null}
\item A primitive value is a single token which is either a literal, a string, or a number.
\item A string consists of zero or more Unicode characters enclosed in
double quotes, e.g. \code{"foo"}. A backslash is used to escape the double quote
character as well as the backslash itself. JSON allows the use of
Unicode character escapes, e.g. \code{\\uHHHH} where \code{HHHH} is
the Unicode code point in hex.
\item All numbers are decimal numbers. Scientific notation is
allowed. The following are examples of numbers: \code{17 17.2 -17.2
-17.2e8 17.2e-8}. The concepts of positive and negative infinity as well as
``not a number'' cannot be expressed as numbers in JSON, but they can
be encoded as strings, which can be mixed with numbers.
\item A JSON array is an ordered, comma-separated list of zero or more
JSON values enclosed in square brackets. The elements of an array
can be of any type. The following are examples of arrays: \code{[] [1] ["a","b",true]}
\item A name-value pair consists of a string followed by a colon followed by a value, either primitive or compound.
\item A JSON object is a comma-separated series of zero or more
name-value pairs enclosed in curly brackets. Each name-value pair is
a member of the object. Membership is unordered. Member names are
not required to be unique. The following are examples of objects:
\code{\{\} \{"foo": null\} \{"bar" : 17, "baz" : [14,15,16.6] \}}
\end{itemize}
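For example, the following object (illustrative only, not a schema required by \CmdStan) combines literals, strings, escapes, numbers, arrays, and a nested object:
\begin{quote}
\begin{Verbatim}
{ "flag" : true,
  "label" : "caf\u00e9",
  "values" : [ 17, -17.2e8, "NaN" ],
  "nested" : { "empty" : [] } }
\end{Verbatim}
\end{quote}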
\section{Stan Data Types in JSON Notation}
Stan follows the JSON standard.
A Stan input file in JSON notation consists of a single JSON object which contains zero
or more name-value pairs. This structure corresponds to a Python
dictionary. The following is an example of JSON data for the
simple Bernoulli example model:
\begin{quote}
\begin{Verbatim}
{ "N" : 10, "y" : [0,1,0,0,0,0,0,0,0,1] }
\end{Verbatim}
\end{quote}
Matrix data and multi-dimensional arrays are indexed in row-major
order. For a Stan program with the data block
\begin{quote}
\begin{Verbatim}
data {
int d1;
int d2;
int d3;
int ar[d1, d2, d3];
}
\end{Verbatim}
\end{quote}
\noindent
the following JSON input file would be valid:
\begin{quote}
\begin{Verbatim}
{ "d1" : 2,
"d2" : 3,
"d3" : 4,
"ar" : [[[0,1,2,3], [4,5,6,7], [8,9,10,11]],
[[12,13,14,15], [16,17,18,19], [20,21,22,23]]]
}
\end{Verbatim}
\end{quote}
JSON ignores whitespace. In the above examples, the spaces and
newlines are only used to improve readability and can be omitted.
All data inputs are encoded as name-value pairs.
The following table provides more examples of JSON data.
The left column contains a Stan data variable declaration
and the right column contains valid JSON data inputs.
%
\begin{center}
\begin{tabular}{r||l}
{\it Stan variable} & {\it JSON data} \\ \hline \hline
\code{int i;} & \code{"i" : 17} \\
\\
\code{real a;} & \code{"a" : 17} \\
& \code{"a" : 17.2} \\
& \code{"a" : "NaN"} \\
& \code{"a" : "+inf"} \\
& \code{"a" : "-inf"} \\
\\
\code{int a[5];} & \code{"a" : [1, 2, 3, 4, 5]} \\
\\
\code{real a[5];} & \code{"a" : [ 1, 2, 3.3, "NaN", 5 ]} \\
\code{vector[5] a;} & \code{"a" : [ 1, 2, 3.3, "NaN", 5 ]} \\
\code{row\_vector[5] a;} & \code{"a" : [ 1, 2, 3.3, "NaN", 5 ]} \\
\\
\code{matrix[2,3] a;} & \code{"a" : [ [ 1, 2, 3 ], [ 4, 5, 6] ]} \\
\end{tabular}
\end{center}
%
\begin{problem}
From page 320, number 6. For the 95\% upper confidence bound with $n=60$, $s^2=12.5$, and $\overline{x}=18.6$, the formula we are going to use is $\overline{x}+Z_\alpha\cdot \frac{s}{\sqrt{n}}$.
\begin{equation}
18.6+1.645 \frac{\sqrt{12.5}}{\sqrt{60}}\approx 19.35
\end{equation}
Because our result was 19.35, our interval is $(-\infty, 19.35)$.
\end{problem}
For a bound on $\overline{x_1}-\overline{x_2}$, we would instead use
\begin{equation}
\overline{x_1}-\overline{x_2}+Z_\alpha\cdot \sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}.
\end{equation}
Similarly, the lower bound for $\hat{p_1}-\hat{p_2}$ is
\begin{equation}
\hat{p_1}-\hat{p_2}-Z_\alpha\sqrt{\frac{\hat{p_1}\hat{q_1}}{n_1}+\frac{\hat{p_2}\hat{q_2}}{n_2}}.
\end{equation}
\begin{problem}
Let's take a look at a problem where 55\% of 2000 American adults surveyed said they have watched digitally streamed TV programming on some type of device. What sample size would be required for the width of a 99\% CI to be at most 0.05, irrespective of the value of $\hat{p}$? Although the survey gives $\hat{p}=0.55$, the bound must hold for every $\hat{p}$, so we work with the worst case.
The width of the CI is
$$
\begin{aligned}
\left(\hat{p}+2.576\sqrt{\frac{\hat{p}\hat{q}}{n}}\right)-\left(\hat{p}-2.576\sqrt{\frac{\hat{p}\hat{q}}{n}}\right)
&=2\cdot 2.576\sqrt{\frac{\hat{p}\hat{q}}{n}},
\end{aligned}
$$
so we require $2\cdot 2.576\sqrt{\hat{p}\hat{q}/n}<0.05$.
Consider the worst case $\hat{p}=\hat{q}=\frac{1}{2}$:
$$
\begin{aligned}
2\cdot 2.576\sqrt{\frac{\frac{1}{2}\cdot\frac{1}{2}}{n}}&<0.05\\
\frac{2.576}{\sqrt{n}}&<0.05\\
\sqrt{n}&>\frac{2.576}{0.05}\\
n&>\left(\frac{2.576}{0.05}\right)^2\approx 2654.3\\
n&=2655
\end{aligned}
$$
\end{problem}
\chapter{Design and Implementation}
\label{cha:design}
As in the previous chapter, give the motivation and the high-level
picture of this chapter to readers, and introduce the sections of this
chapter.
\section{Smart Design}
\label{sec:des-hotpath}
\section{Summary}
As in the previous chapter, summarize what you discussed in this chapter and
provide a bridge to the next chapter.
\section{Code listing}
The code below is from my Text Similarity assignment in the Data Science class \parencite{dsc-text-similarity}.\\
\begin{lstlisting}[language=Python]
# # Vectorisation of Text Data
# The process of converting or transforming a data set into a set of
# vectors is called vectorisation.

import math

# Calculate the dot product of two vectors, then divide it by the
# product of their magnitudes to find the cos(angle between them).
# Use the result as a correlation coefficient.
def cosine(vector1, vector2):
    # calculate the numerator as a dot product over the shared words
    intersect = set(vector1.keys()) & set(vector2.keys())
    numerator = sum(vector1[x] * vector2[x] for x in intersect)
    # calculate the denominator as the product of the two magnitudes
    sum1 = sum(vector1[x] ** 2 for x in vector1)
    sum2 = sum(vector2[x] ** 2 for x in vector2)
    denominator = math.sqrt(sum1) * math.sqrt(sum2)
    if not denominator:
        return 0.0
    return float(numerator) / denominator

# # Assignment
# Read the three texts; replace newlines with spaces so that words at
# line breaks do not merge together.
with open('textfiles/A.txt', 'r', encoding='utf8', errors='ignore') as a:
    A = a.read().replace('\n', ' ')
with open('textfiles/B.txt', 'r', encoding='utf8', errors='ignore') as b:
    B = b.read().replace('\n', ' ')
with open('textfiles/C.txt', 'r', encoding='utf8', errors='ignore') as c:
    C = c.read().replace('\n', ' ')

A1 = A.split(" ")
B1 = B.split(" ")
C1 = C.split(" ")

# join the sets of words to remove duplicates
# (named vocabulary to avoid shadowing the built-in all())
vocabulary = set(A1).union(set(B1)).union(set(C1))

def convertTextToVector(text):
    # word-count vector over the shared vocabulary
    x = dict.fromkeys(vocabulary, 0)
    for word in text:
        x[word] += 1
    return x

def compareVectors():
    AVector = convertTextToVector(A1)
    BVector = convertTextToVector(B1)
    CVector = convertTextToVector(C1)
    corrAB = cosine(AVector, BVector)
    print("Similarity on A and B: ", corrAB)
    corrAC = cosine(AVector, CVector)
    print("Similarity on A and C: ", corrAC)
    corrBC = cosine(BVector, CVector)
    print("Similarity on B and C: ", corrBC)
    # the most similar pair determines the suggested grouping
    highest = max(corrAB, corrAC, corrBC)
    if highest == corrAB:
        suggestion = "Text A = X, Text B = X, Text C = Y"
    elif highest == corrAC:
        suggestion = "Text A = X, Text B = Y, Text C = X"
    else:
        suggestion = "Text A = Y, Text B = X, Text C = X"
    return suggestion

suggestion = compareVectors()
print(suggestion)
\end{lstlisting}
\documentclass[10pt,letterpaper,twoside]{book}
\usepackage{etex} % http://tex.stackexchange.com/questions/7896/no-room-for-a-new-dimen-when-including-tikz
\setlength{\parindent}{0em}
\hyphenpenalty=5000
\tolerance=1000
\pdfpageheight 11in
\pdfpagewidth 8.5in
\setlength{\textwidth}{6.5in}
\setlength\fboxsep{0pt}
\setlength\fboxrule{0.5pt}
\usepackage[usenames]{color}
\usepackage[T1]{fontenc}
\usepackage{%amsfonts,
%amsmath,
%amssymb,
%cancel,
caption,
colortbl,
enumerate,
enumitem,
fancyhdr,
graphicx,
lipsum,
listings,
lmodern,
lscape,
mathpazo, % Palatino in Math Mode
makeidx,
mdframed,
minitoc,
morewrites,
multicol,
palatino,
%pgfplots,
subcaption,
tabularx,
tcolorbox,
tikz,
tikz-qtree,
titlesec,
url
}
\usepackage[newfloat]{minted}
%\captionsetup[listing]{position=top}
%\usepackage{imakeidx}
\tcbuselibrary{listings,minted,xparse}
\setminted[java]{%
linenos,
numbersep=0.5em,
xleftmargin=1em,
frame=leftline,
framesep=1em,
rulecolor=\color{nccblue},
style=colorful
}
%
% \newmintedfile[javafile]{java}{%
% linenos,
% numbersep=0.5em,
% xleftmargin=1em,
% frame=single,
% framesep=1em
% }
\setminted[shell-session]{%
xleftmargin=0.25em,
frame=leftline,
framesep=1em,
rulecolor=\color{nccorange},
style=vim,
}
%\usemintedstyle{colorful}
\definecolor{nccblue}{cmyk}{1,0.72,0,0.38}
\definecolor{nccorange}{cmyk}{0,0.65,100,0.115}
\definecolor{ncclightblue}{cmyk}{0.76,0.085,0,0}
\definecolor{nccmuteblue}{cmyk}{1.0,0.06,0,0.034}
\definecolor{coolgray4}{cmyk}{0,0,0,0.25}
\definecolor{nccred}{cmyk}{0,0.91,0.94,0.305}
\definecolor{nccpurple}{cmyk}{0.94,0.94,0,0}
\definecolor{nccviolet}{cmyk}{0.43,0.56,0,0}
\definecolor{nccgreen}{cmyk}{1,0,0.69,0.6}
\definecolor{ncclightgreen}{cmyk}{0.305,0,0.6,0}
\definecolor{figureback}{gray}{0.95}
\usepackage[colorlinks=true,
linkcolor=nccblue,
urlcolor=nccorange,
linkbordercolor=nccblue,
urlbordercolor=nccorange,
pdfborderstyle={/S/U/W 1}]{hyperref}
\urlstyle{same}
\usepackage[scaled]{beramono}
%\usepgfplotslibrary{external}
%\tikzexternalize
\renewcommand{\sfdefault}{cmss}% cmss = Computer Modern Sans-Serif
%\usepackage{tocbibind}
\usepackage[lmargin=1.5in,rmargin=0.75in,tmargin=1in,bmargin=1in]{geometry}
%\usepackage{floatrow}% http://ctan.org/pkg/floatrow
%\DeclareColorBox{shaded}{\colorbox{green!15}}% Shade is 15% black
%\floatsetup{framestyle=colorbox,colorframeset=shaded,framefit=yes,heightadjust=all,framearound=all}
% TikZ stuff
%\usepackage[latin1]{inputenc}
\usetikzlibrary{arrows,backgrounds,calc,decorations,decorations.pathreplacing,decorations.shapes,decorations.text,external,fit,positioning,shadows,shapes,trees}
%\tikzexternalize[prefix=tikz/]
\input{code-colors}
\input{lstset}
\DeclareCaptionFont{white}{\color{white}\sffamily}
\DeclareCaptionFormat{listing}{\colorbox{nccblue}{\parbox{\textwidth}{~~#1#2#3}}}
\captionsetup[listing]{format=listing,labelfont=white,textfont=white}
% Figures
\let\originalfigure=\figure
\let\endoriginalfigure=\endfigure
\renewenvironment{figure}[1][]{
\begin{originalfigure}[#1]
\begin{mdframed}[linecolor=nccblue,backgroundcolor=figureback]
}{
\end{mdframed}
\end{originalfigure}
}
\DeclareCaptionFont{figure}{\color{nccblue}\sffamily}
\DeclareCaptionFormat{figure}{\colorbox{figureback}{\parbox{\textwidth}{\textbf{#1}#2#3}}}
\captionsetup[figure]{format=figure,labelfont=figure,textfont=figure}
% \input{tikz-nodes}
% Can't do chapter this way :(
\titleformat*{\section}{\Large\bfseries\sffamily\color{nccblue}}
\titleformat*{\subsection}{\large\bfseries\sffamily\color{nccblue}}
\titleformat*{\subsubsection}{\normalsize\bfseries\sffamily\color{nccblue}}
\titleformat{\chapter}[display]
{\bfseries\Large\color{nccblue}}
{\hfill \tikz[remember picture] \node[] (nr) {\fontsize{120}{70}\selectfont\color{nccorange}\textbf{\thechapter}};
\begin{tikzpicture}[overlay,remember picture]
\coordinate (leftborder) at ($(nr)-(100,0)$);
\coordinate (left) at ($(nr.west)-(2.5,0)$);
\draw[decoration={shape backgrounds,shape size=.5cm,shape=signal},signal from=west, signal to=east,decorate, draw=nccblue, fill=nccblue, decoration={shape sep=.5cm},line join=round] (leftborder) -- (left);
\end{tikzpicture}}
{-2ex}
{\filleft\fontsize{50}{70}\selectfont\scshape}
[\vspace{0ex}]
\pagestyle{empty}
%\definecolor{reviewcolor}{HTML}{F62817} % Fire engine red
\title{{\color{nccblue}\textbf{A Terse Introduction to Computer Science}}\\
\small{Using Object-Oriented Program Design in the Java Programming Language}}
\author{\textbf{Christopher R. Merlo}\\
\small{Nassau Community College}\\
\small{Garden City, NY}\\
\small{\url{http://www.matcmp.ncc.edu/~cmerlo}}\\
}
% Itemize
\newcommand{\bi}{\begin{itemize}}
\newcommand{\ei}{\end{itemize}}
\newcommand{\be}{\begin{enumerate}}
\newcommand{\ee}{\end{enumerate}}
\newcommand{\bmu}{\begin{multicols}}
\newcommand{\emu}{\end{multicols}}
%\setlength{\parskip}{3mm plus4mm minus3mm}
\makeindex
\begin{document}
\let\stdsection\section
%\renewcommand\section{\clearpage\stdsection}
% \newcommand{\code}[1]{\textcolor{blue}{\textsf{#1}}}
% \newcommand{\prop}[2]{\label{prop:#1}\textcolor{blue}{\textsf{\textbf{Proposition #1:}}} \textit{#2}}
% \newcommand{\qed}{~\hfill\textcolor{blue}{$\blacksquare$}}
\input{list-styles}
\frontmatter
\pagenumbering{roman}
\maketitle
\dominitoc
\tableofcontents
\cleardoublepage
\listoffigures
\addcontentsline{toc}{chapter}{List of Figures}
\adjustmtc
\renewcommand\lstlistlistingname{List of Code Listings}
\lstlistoflistings
\addcontentsline{toc}{chapter}{List of Code Listings}
\adjustmtc
\cleardoublepage
\listoftables
\addcontentsline{toc}{chapter}{List of Tables}
\adjustmtc
\cleardoublepage
\tcblistof[\chapter*]{defn}{List of Definitions}
\addcontentsline{toc}{chapter}{List of Definitions}
\adjustmtc
\cleardoublepage
\tcblistof[\chapter*]{javaformat}{List of Formatting Examples}
\addcontentsline{toc}{chapter}{List of Formatting Examples}
\adjustmtc
\cleardoublepage
\tcblistof[\chapter*]{trap}{List of Code Traps}
\addcontentsline{toc}{chapter}{List of Code Traps}
\adjustmtc
% Why doesn't this work??
\cleardoublepage
\tcblistof[\chapter*]{tip}{List of Programming Tips}
\addcontentsline{toc}{chapter}{List of Programming Tips}
\adjustmtc
\cleardoublepage
\pagenumbering{arabic}
\pagestyle{fancy}
% Even page: chapter on right
% Odd page: section on left
\fancyhead[RE]{\sffamily\color{nccblue}\textbf{\small\nouppercase\leftmark}}
\fancyhead[RO]{\sffamily\color{nccblue}\textbf{\thepage}}
\fancyhead[LO]{\sffamily\color{nccblue}\textbf{\small\nouppercase\rightmark}}
\fancyhead[LE]{\sffamily\color{nccblue}\textbf{\thepage}}
\fancyfoot[C]{}
\renewcommand{\headrule}{\hbox to\headwidth{%
\color{nccorange}\leaders\hrule height \headrulewidth\hfill}}
\setlength{\headheight}{25pt}%
\renewcommand{\footrulewidth}{\headrulewidth}
\renewcommand{\footrule}{\hbox to\headwidth{%
\color{nccorange}\leaders\hrule height \headrulewidth\hfill}}
\renewcommand{\labelitemi}{\textcolor{nccorange}{$\bullet$}}
\renewcommand{\labelenumi}{\color{nccorange}\Alph{enumi}\color{nccblue}.}
\renewcommand{\labelenumii}{\color{nccblue}\arabic{enumii}\color{nccorange})}
\renewcommand{\theenumiii}{\color{nccorange}\alph{enumiii}}
\renewcommand{\theenumiv}{\color{nccblue}\roman{enumiv}}
\parskip = \baselineskip
\setlist[itemize]{itemsep=0.25em, topsep=0pt}
\tcbstartrecording
\mainmatter
\setcounter{chapter}{-1}
\input{about-this-textbook}
\part{The Basics}
\input{computer-hardware}
\input{aboutjava}
\input{a-very-basic-java-program}
\input{storing-info-and-numbers}
\input{selection}
\input{strings}
\input{complex-selection}
\input{methods}
\part{Extending the Java Language}
\input{reference-variables}
\input{classes}
\input{default-constructors}
\input{mutators}
% Sometime later...
\part{I Don't Know Where This Goes Yet}
\input{joptionpane}
\input{io}
\addcontentsline{toc}{part}{Appendix}
\part*{Appendix}
\appendix
\input{appendix-compiling}
\input{appendix-numbers}
\input{appendix-ascii}
\tcbstoprecording
\tcbinputrecords
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\backmatter
\clearpage
\addcontentsline{toc}{part}{Index}
\printindex
\end{document}
% for normal presentation
\documentclass[xcolor=dvipsnames]{beamer}
% to collapse the slides with ``pause'' and other ``animation gimmicks,'' compile with
%\documentclass[handout,xcolor=pdftex,dvipsnames,table]{beamer}
\mode<presentation>
% extremely helpful site:
% http://www.math.umbc.edu/~rouben/beamer/
\definecolor{mstgreen}{RGB}{8,87,6}
\definecolor{mstgold}{RGB}{223,196,99}
%\usecolortheme[named=mstgreen]{structure} % only affects bullets
\usecolortheme[named=OliveGreen]{structure}
\useoutertheme{infolines}
\setbeamertemplate{items}[circle] % supposed to switch from 3D ball to 2D circle. Doesn't seem to work
\setbeamertemplate{navigation symbols}{} % eliminates the navigation icons which are useless
\usepackage{amsmath,amssymb}
% see http://www.pletscher.org/writings/latex/beamerthemes.php
% for a useful preview of themes
% see also http://sites.google.com/site/wdoerner/latex
%\usetheme{Rochester} % fat horizontal top single banner
%\usetheme{Antibes} % tree for current section, plus fat horizontal top banner
%\usetheme{Bergen} % fat left-vertical bar
%\usetheme{Berkeley} % top-horizontal and left-vertical banner % did not work on linux quad
%\usetheme{Berlin} % tree for current section, plus fat horizontal top banner AND bottom ID banner
%\usetheme{Boadilla} % skinny horizontal top banner for ID; small side and top margins
%\usetheme{boxes} % no banners, small margins
%\usetheme{CambridgeUS} % skinny top and bottom margins % GOOD
\usetheme{Madrid} % good, nice color
%\usetheme{Boadilla} % no topline
%\usetheme{Goettingen} % right-side horizontal info bar
%\usetheme{Progressbar} % not available
%\usecolortheme{crane}
\usefonttheme[onlymath]{serif}
\setbeamercolor{math text displayed}{fg=blue!60!black}
%\setbeamercolor{itemize item}{fg=mstgreen}
%\setbeamercolor{palette primary}{fg=mstgreen,bg=mstgold}
%\setbeamercolor{palette secondary}{fg=mstgreen,bg=mstgold}
%\setbeamercolor{palette tertiary}{fg=mstgreen,bg=mstgold}
\setbeamercolor{palette sidebar primary}{fg=mstgreen,bg=mstgold}
%\setbeamercolor{section in toc}{fg=mstgreen} % text color (foreground)
%\setbeamercolor{section in toc}{bg=mstgold} % background, no effect
\newcommand{\slf}[1]{\not{\!#1}}
\def\half{{\textstyle{\frac12}}}
\def\ii{\textrm{i}}
%\setbeamercolor{frametitle}{fg=mstgreen}
\title[Wave-based Transport]{Wave-based Description of Transport}
\author[Ben Payne]{\textbf{Ben Payne}\\ Alexey Yamilov}
\institute[MS\&T]{Missouri University of Science and Technology}
\date[Board of Curators \ 20110322]{March 22, 2011\\ \ \\Board of Curators}
%\subtitle{}
\logo{\includegraphics[height=0.8cm]{Logo_356.jpg}}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\begin{frame}
\frametitle{Outline}
\tableofcontents
\end{frame}
\section{Scattering of light}
\begin{frame}
\frametitle{Particle-based scattering}
\begin{center}
\includegraphics[height=5cm]{pictures/eastern_bluebird_snow}
%\pause
\includegraphics[height=5cm]{pictures/Dark_Side_of_the_Moon}
\end{center}
\end{frame}
\subsection{Self-consistent theory of Anderson localization}
\begin{frame}
\frametitle{Diffusive description of non-diffusive systems}
\begin{itemize}
\item Self-consistent theory of localization in infinite media predicts that a change in the transport regime results in a deviation of the diffusion coefficient: $D_0 \rightarrow D$
\end{itemize}
\end{frame}
\end{document}
\appendix
% Resetting the equation and table counters to zero.
\setcounter{equation}{0}
\setcounter{table}{0}
\setcounter{figure}{0}
% Adding an A in front of any equation numbers or table numbers in the appendix.
\renewcommand{\theequation}{A\arabic{equation}}
\renewcommand{\thetable}{A\arabic{table}}
\renewcommand{\thefigure}{A\arabic{figure}}
%\renewcommand{\thealgorithm}{A\arabic{algorithm}}
\cleardoublepage
\phantomsection
\addcontentsline{toc}{chapter}{Appendix}
\chapter*{Appendix}
\label{chap:appendix}
\Blindtext
% !Mode:: "TeX:UTF-8"
% !TEX program = xelatex
\section{Prediction of LPPL Model}
We use the historical data of 000001.SS to predict the next critical time at which the bubble ends, possibly followed by a crash or a rebound. Figure~\ref{F:LPPL-prediction} shows that the anti-bubble tends to rebound on 2019-03-20.
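For reference, the parameters reported in the examples below ($m$, $\omega$, $t_c$, $A$, $B$, $C_1$, $C_2$) match the linearized LPPL parametrization; assuming this standard form (the fitting code itself is not shown here), the log-price is modeled as
\begin{equation}
\ln p(t) = A + B(t_c-t)^m + C_1(t_c-t)^m\cos\left(\omega\ln(t_c-t)\right) + C_2(t_c-t)^m\sin\left(\omega\ln(t_c-t)\right),
\end{equation}
with $t_c-t$ replaced by $t-t_c$ for anti-bubbles after the critical time.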
\subsection{Example One}
\begin{minipage}[b]{0.5\textwidth}
\begin{itemize}
\item Data
\begin{itemize}
\item Symbol: 000001.SS
\item Start Date: 2018-10-1
\item End Date: 2019-10-1
\end{itemize}
\end{itemize}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{itemize}
\item Results
\begin{itemize}
\item $m$: 1.00
\item $\omega$: 53.05
\item $t_c$: 1505.90
\item $A$: 8.78
\item $B$: 0.00
\item $C_1$: 4.60e-5
\item $C_2$: 3.04e-6
\end{itemize}
\end{itemize}
\end{minipage}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/2019-10-22-LPPL.png}
\caption{Prediction of 000001.SS}
\label{F:LPPL-prediction}
\end{figure}
\subsection{Example Two}
\begin{minipage}[b]{0.5\textwidth}
\begin{itemize}
\item Data
\begin{itemize}
\item Symbol: 000001.SS
\item Start Date: 2018-1-13
\item End Date: 2019-10-19
\end{itemize}
\end{itemize}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{itemize}
\item Results
\begin{itemize}
\item $m$: 0.94
\item $\omega$: 4.63
\item $t_c$: 431.67
\item $A$: 7.99
\item $B$: 0.00
\item $C_1$: 0.00
\item $C_2$: 0.00
\end{itemize}
\end{itemize}
\end{minipage}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/2019-10-22-LPPL-2.png}
\caption{Prediction of 000001.SS}
\label{F:LPPL-prediction-2}
\end{figure}
\section{Appendix}
\label{sec:appendix}
The code for this project can be found on \href{https://github.com/LukasBeiske/project_Flowers-299}{GitHub}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../data/performance_plots/first_model.pdf}
\caption{The loss and accuracy curves for the initial model based on manual hyperparameter tuning.}
\label{fig:first_curves}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=0.8\textheight]{../data/first_model.png}
\caption{Network structure found via manual hyperparameter tuning.}
\label{fig:first_model}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=0.95\textheight]{../data/best_model.png}
\caption{Optimal network structure.}
\label{fig:best_model}
\end{subfigure}
\caption{Schematic overview of network structures.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../data/performance_plots/GS1.pdf}
\caption{The loss and accuracy curves for the three best performing models in the first grid search.}
\label{fig:GS1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../data/performance_plots/l2.pdf}
\caption{The loss and accuracy curves for the three best performing L2-regularization rates $r_\text{L2}$.}
\label{fig:l2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{../data/alternative_approach/example_image.pdf}
\caption{Image of a Marguerite Daisy.}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{../data/alternative_approach/example_rgb_hist.pdf}
\caption{Resulting color histogram.}
\end{subfigure}
\caption{Color histograms are used to generate suitable input data for the kNN algorithm.}
\label{fig:kNN_hist}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% CS624: Analysis of Algorithms
% Copyright 2015 Pejman Ghorbanzade <[email protected]>
% Creative Commons Attribution-ShareAlike 4.0 International License
% More info: https://github.com/ghorbanzade/beacon
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Question 2}
Show that the second smallest of $n$ elements can be found with $n + \lceil \log n \rceil - 2$ comparisons in the worst case.
\subsection*{Solution}
We begin by comparing the elements of the array in pairs, each time putting the smaller element into a new array of size $\lceil \frac{n}{2} \rceil$.
We repeat the procedure on each new array until it contains only one element, which is the minimum.
To count the comparisons needed to find the minimum, we use the analogy of a tournament tree with $n$ leaves: the number of comparisons equals the number of internal nodes, which is $n - 1$.
The second smallest element is the smallest of all elements except the minimum.
Therefore, there has been a comparison in which this element lost directly to the element with minimum value.
Since the minimum is compared $\lceil \log n \rceil$ times, we can compare all of the elements that lost to it against each other to obtain the second smallest element, a procedure which takes $\lceil \log n \rceil - 1$ comparisons.
Therefore the total number of comparisons is $n + \lceil \log n \rceil - 2$.
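For example, with $n = 8$ elements, the pairwise tournament finds the minimum in $7 = n - 1$ comparisons; the minimum wins $\lceil \log 8 \rceil = 3$ of those matches, so comparing its $3$ losers against each other takes $2$ more comparisons, for a total of $7 + 2 = 9 = 8 + \lceil \log 8 \rceil - 2$.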
\chapter{Brief Use Cases}
\begin{table}[]
\centering
\caption{Brief Use Cases for Drivhus Effekten 9000}
\label{DE:BU}
\begin{tabularx}{360pt}{ TX TX TX}
\hline
\textbf{Actors} & \textbf{Goal} & \textbf{Descriptions} \\
\hline\\
User, Temp. sensor & Readout temperature & The user wishes to monitor for the correct temperature.
The user accesses the UI and reads the temperature. \\
\hline \\
User & Temperature regulation & The user notices from the temperature readout
that the temperature is incorrect. The user accesses the UI and corrects the temperature.\\
\hline \\
System, UPS & Prompt message about power failure & The system experiences a power failure, and the UPS is activated. The system prompts a message via the UI to the admin and user about the power failure, including an estimate of the remaining UPS battery time. \\
\hline\\
User, moisture sensor & Monitor the ground moisture levels & The user wishes to read off the ground's moisture levels. The user accesses via the UI the readouts from the sensor(s) in the ground. \\
\hline\\
User & Regulating ground-moisture levels & From the ground sensor(s) readout, the user wishes to regulate/modify the watering intervals. The user accesses the UI and regulates the moisture levels of the ground by deactivating or reactivating the watering system, and/or by increasing or decreasing the watering intervals. \\
\hline\\
User, PH sensor & Readout ground PH levels & The user wishes to read from the UI whether the ground's PH levels are correct. The user accesses via the UI the readout from the PH sensor in the ground. The user can then see if too much or too little manure is being used. \\
\end{tabularx}
\end{table}
\chapter{Gallery Installation}
\section{Concept}
\section{Implementation}
\begin{figure}[htp]\centering
\includegraphics[width=.99\textwidth]{images/gallery_installation.png}
\caption{The layout of the gallery installation as implemented. All computers were networked with an Ethernet connection.}\label{fig:gallery}
\end{figure}
See the video included in the software package for a tour of the installation as it stood \cite{PACK}.
\documentclass[letter]{article}
\renewcommand{\baselinestretch}{1.25}
\usepackage[margin=1in]{geometry}
\usepackage{physics}
\usepackage{amsmath}
\usepackage{graphicx}
%\usepackage{pythonhighlight}
\usepackage{hyperref}
\usepackage{fancyvrb}
% MATLAB Formatting Code
\usepackage[numbered,framed]{matlab-prettifier}
\lstset{style=Matlab-editor,columns=fullflexible}
\renewcommand{\lstlistingname}{Script}
\newcommand{\scriptname}{\lstlistingname}
% Command for easier minimization problem def
\newcommand{\optpblm}[3][eq:default]{
\begin{equation}\label{#1}
% Array method... more centered
% \begin{array}{rl}
% \text{minimize} \hspace{0.2in} & #2 \vspace{5pt}\\
% \text{subject to} \hspace{0.2in} & #3
% \end{array}
% Aligned method... left aligned... idk if its better
\begin{aligned}
\text{minimize} \hspace{0.5in} & #2 \vspace{5pt}\\
\text{subject to} \hspace{0.5in} & #3
\end{aligned}
\end{equation}
}
\allowdisplaybreaks
\title{MECH 6327 - Homework 3}
\author{Jonas Wagner}
\date{2021, March 24}
\begin{document}
\maketitle
\newpage
\tableofcontents
\newpage
\section*{BV Textbook Problems}
\subsection{Problem 4.11}
\textbf{Problem:}
Formulate each problem as an LP and explain the relationship between the optimal solution of each problem and the solution of its equivalent LP.\\
\textbf{Solution:}
\subsubsection{Part a: Minimize $\norm{Ax-b}_\infty$}
Define the following minimization problem:
\optpblm{\norm{Ax-b}_\infty}{\text{(no constraints)}}
From the definition of an $\infty$-norm as $$\norm{x}_\infty = \max_i \abs{x_i}$$ the following can be derived:
\optpblm{t}{\qty(Ax - b)_i\leq t, \ \forall i = 1,\dots,n\\
- & \qty(Ax - b)_i\leq t, \ \forall i = 1,\dots,n}
which is equivalent to the following linear program:
\optpblm{t}{-\vb{1}t \leq Ax - b \leq \vb{1}t}
The optimal value $t^*$ of this LP equals the optimal value $\norm{Ax^*-b}_\infty$ of the original problem, and the minimizer $x^*$ is read off directly from the LP solution, since $x$ is a decision variable of the transformed problem.
\newpage
\subsubsection{Part b: Minimize $\norm{Ax-b}_1$}
Define the following minimization problem:
\optpblm{\norm{Ax-b}_1}{\text{(no constraints)}}
From the definition of an $1$-norm as $$\norm{x}_1 = \sum_i \abs{x_i}$$ the following can be derived:
\optpblm{t_1 + \cdots + t_n}{\qty(Ax - b)_i\leq t_i, \ \forall i = 1,\dots,n\\
- & \qty(Ax - b)_i\leq t_i, \ \forall i = 1,\dots,n}
which is equivalent to the following linear program:
\optpblm{\vb{1}^T t}{-t \leq Ax - b \leq t}
The optimal value $\vb{1}^T t^*$ of this LP equals the optimal value $\norm{Ax^*-b}_1$ of the original problem, and the minimizer $x^*$ is read off directly from the LP solution; at the optimum, $t^*_i = \abs{(Ax^*-b)_i}$.
\newpage
\subsubsection{Part c: Minimize $\norm{Ax-b}_1$ subject to $\norm{x}_{\infty}\leq 1$}
Define the following minimization problem:
\optpblm{\norm{Ax-b}_1}{\norm{x}_\infty \leq 1}
From the definition of an $1$-norm as $$\norm{x}_1 = \sum_i \abs{x_i}$$ and the definition of an $\infty$-norm as $$\norm{x}_\infty = \max_i \abs{x_i}$$ the following can be derived:
\optpblm{t_1 + \cdots + t_n}{\qty(Ax - b)_i\leq t_i, \ \forall i = 1,\dots,n\\
-& \qty(Ax - b)_i\leq t_i, \ \forall i = 1,\dots,n\\
& x_i \leq 1, \forall i = 1,\dots,n\\
-& x_i \leq 1, \forall i = 1,\dots,n}
which is equivalent to the following linear program:
\optpblm{\vb{1}^T t}{-t \leq Ax - b \leq t\\
&-\vb{1} \leq x \leq \vb{1}}
The optimal value $\vb{1}^T t^*$ of this LP equals the optimal value $\norm{Ax^*-b}_1$ of the original problem, and the minimizer $x^*$ is read off directly from the LP solution.
\newpage
\subsubsection{Part d: Minimize $\norm{x}_1$ subject to $\norm{Ax-b}_\infty \leq 1$}
Define the following minimization problem:
\optpblm{\norm{x}_1}{\norm{Ax-b}_\infty \leq 1}
From the definition of an $1$-norm as $$\norm{x}_1 = \sum_i \abs{x_i}$$ and the definition of an $\infty$-norm as $$\norm{x}_\infty = \max_i \abs{x_i}$$ the following can be derived:
\optpblm{t_1 + \cdots + t_n}{x_i \leq t_i, \ \forall i = 1,\dots,n\\
&-x_i \leq t_i, \ \forall i = 1,\dots,n\\
&(Ax-b)_i \leq 1, \ \forall i = 1,\dots,n\\
&-(Ax-b)_i \leq 1, \ \forall i = 1,\dots,n}
From this a linear program can be defined as:
\optpblm{\vb{1}^T t}{-t \leq x \leq t\\
&-\vb{1} \leq Ax - b \leq \vb{1}}
The optimal value $\vb{1}^T t^*$ of this LP equals the optimal value $\norm{x^*}_1$ of the original problem, and the minimizer $x^*$ is read off directly from the LP solution; at the optimum, $t^*_i = \abs{x^*_i}$.
\newpage
\subsubsection{Part e: Minimize $\norm{Ax-b}_1 + \norm{x}_\infty$}
Define the following minimization problem:
\optpblm{\norm{Ax-b}_1 + \norm{x}_\infty}{\text{(no constraints)}}
From the definition of an $1$-norm as $$\norm{x}_1 = \sum_i \abs{x_i}$$ and the definition of an $\infty$-norm as $$\norm{x}_\infty = \max_i \abs{x_i}$$ the following can be derived:
\optpblm{t_1 + \cdots + t_n + s}{
\qty(Ax - b)_i\leq t_i, \ \forall i = 1,\dots,n\\
-& \qty(Ax - b)_i\leq t_i, \ \forall i = 1,\dots,n\\
&x_i \leq s, \ \forall i = 1,\dots,n\\
- &x_i \leq s, \ \forall i = 1,\dots,n}
This can be written as a standard linear program:
\optpblm{\vb{1}^T t + s}{
- t \leq Ax-b \leq t\\
&-\vb{1}s \leq x \leq \vb{1}s}
The optimal value $\vb{1}^T t^* + s^*$ of this LP equals the optimal value $\norm{Ax^*-b}_1 + \norm{x^*}_\infty$ of the original problem; here $t$ tracks $\abs{Ax-b}$ elementwise and $s$ tracks $\norm{x}_\infty$. The minimizer $x^*$ is read off directly from the LP solution.
\newpage
\subsection{Problem 4.16}
Consider the system given as
\begin{equation}\label{eq:dyn_sys_def}
x(t+1) = A x(t) + b u(t), \ t = 0,\dots,N-1
\end{equation}
with $x(t) \in \real^n, u(t) \in \real, \forall t = 0,\dots,N-1$ and $A \in \real^{n\cross n}, b \in \real^n$, and $x(0) = 0$.\\
The minimum-fuel optimal control problem is to choose the inputs that minimize the total fuel used, given as
\begin{equation}\label{eq:min_fuel_problem_def}
\begin{aligned}
\text{minimize} \hspace{0.5in} &F = \sum_{t=1}^{N-1} f(u(t))\\
\text{subject to \hspace{0.5in}} & x(t+1) = A x(t) + b u(t), \ t = 0,\dots,N-1\\
& x(N) = x_{des}
\end{aligned}
\end{equation}
with $N$ as the time-horizon, $x_{des} \in \real^n$ as the desired final state, and $f: \real \to \real$ given as
\begin{equation}\label{eq:fuel_usage_def}
f(a) =
\begin{cases}
\abs{a} & \abs{a} \leq 1 \\
2 \abs{a} - 1 & \abs{a} > 1
\end{cases}
\end{equation}
\textbf{Problem:}
Formulate this problem as a Linear Program.\\
\textbf{Solution:}
First, \eqref{eq:min_fuel_problem_def} can be rewritten in epigraph form (using the fact that $f$ is nonnegative):
\optpblm[eq:min_fuel_problem_epigraph]{F_1 + \cdots + F_{N-1}}{
f(u(t)) \leq F_t, \ \forall t = 1, \dots, N-1\\
& x(t+1) = A x(t) + b u(t), \ \forall t = 0,\dots,N-1\\
&x(N) = x_{des}}
Now looking at the nonlinear component: the fuel usage defined by \eqref{eq:fuel_usage_def} satisfies $f(a) = \max\qty{\abs{a},\ 2\abs{a}-1}$, so $f(a) \leq g$ is equivalent to
\begin{equation}
\begin{aligned}
\abs{a} \leq g\\
2 \abs{a} - 1 \leq g\\
\end{aligned}
\end{equation}
or equivalently,
\begin{equation}
\begin{aligned}
-g \leq a \leq g\\
-(g+1) \leq 2a \leq g+1
\end{aligned}
\end{equation}
Each inequality pair represents an intersection of half-spaces, which is a simple polyhedral (convex) restriction.\\
This can now be combined with \eqref{eq:min_fuel_problem_epigraph} to produce the linear program:
\optpblm[eq:min_fuel_problem_result]{F_1 + \cdots + F_{N-1}}{
-F_t \leq u(t) \leq F_t, \ \forall t = 1, \dots, N-1\\
&-(F_t+1) \leq 2u(t) \leq F_t+1, \ \forall t = 1, \dots, N-1\\
& x(t+1) = A x(t) + b u(t), \ \forall t = 0,\dots,N-1\\
&x(N) = x_{des}}
which can then be vectorized as:
\optpblm[eq:min_fuel_problem_vector]{\vb{1}^T F}{
-F \leq \vb{u} \leq F\\
&-(F + \vb{1}) \leq 2\vb{u} \leq F + \vb{1}\\
& x(t+1) = A x(t) + b u(t), \ \forall t = 0,\dots,N-1\\
&x(N) = x_{des}
}
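Though the problem only asks for the formulation, a minimal CVX sketch of this LP could look as follows (a sketch only: the problem data \texttt{A}, \texttt{b}, \texttt{n}, \texttt{N}, and \texttt{xdes} are assumed to be defined, all $N$ inputs are penalized, and CVX's \texttt{abs} handles the epigraph constraints):
\begin{lstlisting}
% Hypothetical CVX sketch of the minimum-fuel LP
% (problem data A, b, n, N, xdes assumed given)
cvx_begin
    variables x(n,N+1) u(N) F(N)
    minimize( sum(F) )                       % total fuel via epigraph vars
    subject to
        x(:,1) == zeros(n,1);                % x(0) = 0
        for t = 1:N
            x(:,t+1) == A*x(:,t) + b*u(t);   % dynamics
        end
        abs(u) <= F;                         % |u_t| <= F_t
        2*abs(u) - 1 <= F;                   % 2|u_t| - 1 <= F_t
        x(:,N+1) == xdes;                    % terminal constraint
cvx_end
\end{lstlisting}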
\newpage
\subsection{Problem 4.28}
Consider the convex quadratic program given as
\begin{equation}\label{eq:convex_quadratic_program}
\begin{aligned}
\text{minimize} \ \ & \frac{1}{2} x^T P x + q^T x + r\\
\text{subject to} \ \ & Ax \leq b
\end{aligned}
\end{equation}
with a robust equivalent defined as
\begin{equation}\label{eq:robust_convex_quadratic_program}
\begin{aligned}
\text{minimize} \ \ & \sup_{P\in \mathcal{E}}\{\frac{1}{2} x^T P x + q^T x + r\}\\
\text{subject to} \ \ & Ax \leq b
\end{aligned}
\end{equation}
where $\mathcal{E}$ is the set of possible values of $P$.
\subsubsection{Part a}
\textbf{Problem:}
Express the robust QP as a convex problem given $\mathcal{E} = \{P_1,\dots,P_k\}$ where $P_i\in S^n_+, \ \forall i=1,\dots,k$.\\
\textbf{Solution:}
Since each $P_i \in S^n_+$, each quadratic objective is convex, and a pointwise supremum of convex functions is also convex.
Thus, the supremum over the finite set $\mathcal{E}$ defines a convex problem.\\
First, we can redefine the problem as
\optpblm{\sup \{t_1, \dots, t_k\}}{
\frac{1}{2} x^T P_i x + q^T x + r \leq t_i, \ i = 1, \dots, k\\
&Ax \leq b}
A further epigraph transformation then gives the following convex optimization problem:
\optpblm{s}{
t_i \leq s, \ i = 1, \dots, k\\
&\frac{1}{2} x^T P_i x + q^T x + r \leq t_i, \ i = 1, \dots, k\\
&Ax \leq b}
% no part b,c for 4.28
\newpage
\subsection{Problem 4.43}
Suppose $A: \real^n \to S^m$ is affine such that
\begin{equation}
A(x) = A_0 + x_1 A_1 + \cdots + x_n A_n
\end{equation}
where $A_i \in S^m$. Let $\lambda_1(x) \geq \lambda_2(x) \geq \cdots \geq \lambda_m(x)$ be the eigenvalues of $A(x)$.\\
For each of the following minimization criteria, formulate the problem as an SDP.\\
\subsubsection{Part a}
\textbf{Problem:}
Minimize the maximum eigenvalue of $A$: $$\text{minimize} \ \ \lambda_1(x)$$
\textbf{Solution:}
This can be rewritten in epigraph form as
\optpblm{t}{\lambda_1 \leq t}
or equivalently as the SDP:
\optpblm{t}{A(x) \preceq t I}
%It is known that the eigenvalues of a sum of matrices is bounded below by the sum of the minimum eigenvalues of each and bounded above by the sum of the maximum eigenvalues.
%\cite{eigvalueBound}
%i.e.
%$$ \lambda(A)_m + \lambda(B)_m \leq \lambda(A+B)_m \leq \lambda(A+B)_1 \leq \lambda(A)_1 + \lambda(B)_1$$
%If this is to be expanded to the entire affine sum, $A(x)$, the objective of minimizing the eigenvalues of the weighted sum of symetric matrices can be done by minimizing the weighted sum of the largest eigenvalues of individual matrices.
%This means this problem can be redefined as:
%\optpblm{t^T x}{
% t_i = \lambda_1(A_i), \ \forall i = 1,\dots,m\\
% &s = \lambda_1(A_0)}
%Since $s$ will remain constant regrdless of $x$, this is equivalent to:
%\optpblm{t^T x}{
% t_i = \lambda_1(A_i), \ \forall i = 1,\dots,m}
%
%????????????????????????????????????????????????????
\subsubsection{Part b}
\textbf{Problem:}
Minimize the spread of the eigenvalues of $A$: $$\text{minimize} \ \ \lambda_1(x) - \lambda_m(x)$$\\
\textbf{Solution:}
This can be rewritten in epigraph form as
\optpblm{t_1 - t_2}{\lambda_1 \leq t_1\\ & \lambda_m \geq t_2}
or equivalently as the SDP:
\optpblm{t_1 - t_2}{A(x) \preceq t_1 I\\ &A(x) \succeq t_2 I}
\newpage
\subsubsection{Part c}
\textbf{Problem:}
Minimize the condition number of $A(x)$ while it remains positive definite:
\begin{equation*}
\begin{aligned}
\text{minimize} \ \ &k(A(x)) = \frac{\lambda_1(x)}{\lambda_m(x)} \ \forall \ x \in \{x \ | \ A(x) \succ 0\}\\
\text{subject to} \ \ &A(x) \succ 0
\end{aligned}
\end{equation*}
\textbf{Solution:}
This can be rewritten in epigraph form as
\optpblm{t_1 / t_2}{
	\lambda_1 \leq t_1\\
	& \lambda_m \geq t_2\\
	&A(x) \succ 0
}
or equivalently, with matrix inequalities:
\optpblm{t_1 / t_2}{
	A(x) \preceq t_1 I\\
	&A(x) \succeq t_2 I\\
	&t_2 > 0
}
The ratio objective is quasiconvex rather than convex, but since the constraints are homogeneous in $(x, t_1, t_2)$ we may scale by $1/t_2$ (substituting $t = t_1/t_2$, $s = 1/t_2$, $y = x/t_2$) to obtain a genuine SDP:
\optpblm{t}{
	I \preceq s A_0 + y_1 A_1 + \cdots + y_n A_n \preceq t I\\
	&s \geq 0
}
% Only Part a,b,c for 4.43
\newpage
\section{Problem 1: Open-loop optimal control with $1-$ and $\infty-$ norms.}
The following open-loop optimal regulation problem is given as:
\begin{equation}\label{eq:open-loop_opt-control_def}
\begin{aligned}
	\text{minimize} \hspace{0.5in}
	&\norm{x_T}_p + \sum_{t = 0}^{T-1} \left( \norm{x_t}_p + \gamma\norm{u_t}_q \right)\\
\text{subject to} \hspace{0.5in}
& x_{t+1} = A x_t + B u_t, \ t = 0,\dots,T-1\\
& \norm{x_t}_\infty \leq \bar{x}, \ t = 0,\dots,T\\
& \norm{u_t}_\infty \leq \bar{u}, \ t = 0,\dots,T
\end{aligned}
\end{equation}
with $x_t \in \real^n$ and $u_t \in \real^m$ as the system state and control input respectively and parameter $\gamma > 0$ governing the actuator and state regulation performance.\\
\textbf{Problem:}
Express this problem as a linear program for (i) $p=q=\infty$ and (ii) $p=q=1$. Implement both in CVX for the problem data provided. Verify the equivalence between the original optimization problem and the transformed linear program, and plot the optimal state and input trajectories for each.\\
\textbf{Solution:}
\subsection{Linear program for $p = q = \infty$}
With $p = q = \infty$, the problem is defined as:
\optpblm{\norm{x_T}_\infty + \sum_{t = 0}^{T-1} \left( \norm{x_t}_\infty + \gamma\norm{u_t}_\infty \right)}{
x_{t+1} = A x_t + B u_t, \ t = 0,\dots,T-1\\
& \norm{x_t}_\infty \leq \bar{x}, \ t = 0,\dots,T\\
& \norm{u_t}_\infty \leq \bar{u}, \ t = 0,\dots,T}
The epigraph form of this problem is
\optpblm{r_T + \sum_{t=0}^{T-1} \left( r_t + \gamma s_t \right)}{
	\norm{x_t}_\infty \leq r_t, \ t = 0, \dots, T\\
	&\norm{u_t}_\infty \leq s_t, \ t = 0, \dots, T-1\\
&x_{t+1} = A x_t + B u_t, \ t = 0,\dots,T-1\\
& \norm{x_t}_\infty \leq \bar{x}, \ t = 0,\dots,T\\
& \norm{u_t}_\infty \leq \bar{u}, \ t = 0,\dots,T
}
From the definition $\norm{x}_\infty = \max_i |x_i|$ and through vectorization, we can redefine this as the following linear program:
\optpblm{
	\mqty[\vb{1}^T & \gamma \vb{1}^T] \mqty[r\\s]
	=\vb{1}^T r + \gamma \vb{1}^T s}{
	x_{t+1} = A x_t + B u_t, \ t = 0,\dots,T-1\\
	&-r_t \vb{1} \leq x_t \leq r_t \vb{1}, \ r_t \leq \bar{x}, \ t = 0, \dots, T\\
	&-s_t \vb{1} \leq u_t \leq s_t \vb{1}, \ s_t \leq \bar{u}, \ t = 0, \dots, T-1
}
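A minimal CVX sketch of this LP is given below, assuming the problem data \texttt{A}, \texttt{B}, \texttt{x0}, \texttt{T}, \texttt{gam}, \texttt{xbar}, and \texttt{ubar} are already defined (the appendix code is the authoritative version); the 1-norm variant of the next subsection follows by replacing the elementwise epigraph bounds accordingly:
\begin{Verbatim}
% Infinity-norm open-loop LP (illustrative sketch)
[n, m] = size(B);
cvx_begin
    variables X(n,T+1) U(m,T) r(T+1) s(T)
    minimize( sum(r) + gam*sum(s) )
    subject to
        X(:,1) == x0;
        for t = 1:T
            X(:,t+1) == A*X(:,t) + B*U(:,t);   % dynamics
            abs(U(:,t)) <= s(t);               % ||u_t||_inf <= s_t
            s(t) <= ubar;
        end
        for t = 1:T+1
            abs(X(:,t)) <= r(t);               % ||x_t||_inf <= r_t
            r(t) <= xbar;
        end
cvx_end
\end{Verbatim}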
\subsection{Linear program for $p = q = 1$}
With $p = q = 1$, the problem is defined as:
\optpblm{\norm{x_T}_1 + \sum_{t = 0}^{T-1} \left( \norm{x_t}_1 + \gamma\norm{u_t}_1 \right)}{
x_{t+1} = A x_t + B u_t, \ t = 0,\dots,T-1\\
& \norm{x_t}_\infty \leq \bar{x}, \ t = 0,\dots,T\\
& \norm{u_t}_\infty \leq \bar{u}, \ t = 0,\dots,T}
The epigraph form of this problem is
\optpblm{r_T + \sum_{t=0}^{T-1} \left( r_t + \gamma s_t \right)}{
	\norm{x_t}_1 \leq r_t, \ t = 0, \dots, T\\
	&\norm{u_t}_1 \leq s_t, \ t = 0, \dots, T-1\\
&x_{t+1} = A x_t + B u_t, \ t = 0,\dots,T-1\\
& \norm{x_t}_\infty \leq \bar{x}, \ t = 0,\dots,T\\
& \norm{u_t}_\infty \leq \bar{u}, \ t = 0,\dots,T
}
From the definition $\norm{x}_1 = \sum_{i} |x_i|$, the absolute values can be handled with elementwise bounding vectors $y_t$ and $v_t$, giving the following linear program:
\optpblm{
	\mqty[\vb{1}^T & \gamma \vb{1}^T] \mqty[r\\s]
	=\vb{1}^T r + \gamma \vb{1}^T s}{
	x_{t+1} = A x_t + B u_t, \ t = 0,\dots,T-1\\
	& -y_t \leq x_t \leq y_t, \ \vb{1}^T y_t \leq r_t, \ t = 0,\dots,T\\
	& -v_t \leq u_t \leq v_t, \ \vb{1}^T v_t \leq s_t, \ t = 0,\dots,T-1\\
	&-\bar{x} \vb{1} \leq x_t \leq \bar{x} \vb{1}, \ t = 0, \dots, T\\
	&-\bar{u} \vb{1} \leq u_t \leq \bar{u} \vb{1}, \ t = 0, \dots, T-1
}
\newpage
\subsection{CVX Formulation and Results:}
The code used to solve the linear programs and the direct-norm CVX formulations can be found in \appendixname~\ref{apx:pblm1_matlab}.\\
\subsubsection{$\infty$-norm Solution}
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{fig/pblm1_inftyn_x}
\caption{States for Open-loop control comparing methods for $\infty$-norm.}
\label{fig:pblm1inftynx}
\end{figure}\newpage
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{fig/pblm1_inftyn_u}
\caption{Inputs for Open-loop control comparing methods for $\infty$-norm.}
\label{fig:pblm1inftynu}
\end{figure}\newpage
\subsubsection{1-norm Solution}
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{fig/pblm1_1n_x}
\caption{States for Open-loop control comparing methods for 1-norm.}
\label{fig:pblm11nx}
\end{figure}\newpage
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{fig/pblm1_1n_u}
\caption{Inputs for Open-loop control comparing methods for 1-norm.}
\label{fig:pblm11nu}
\end{figure}
\newpage
\section{Problem 2: Minimum time state transfer via quasiconvex optimization.}
Consider the LTI system:
\begin{equation}\label{eq:quasiconvex_opt_def}
\begin{aligned}
x_{t+1} &= Ax_t + B u_t, \ \forall t = 0,\dots,T\\
\underline{u} &\leq u_t \leq \bar{u}, \ \forall t = 0,\dots,T
\end{aligned}
\end{equation}
with $x_0$ as the initial state.\\
\textbf{Problem:}
Show that the minimum time required to transfer the system from $x_0$ to $x_{des}$, given as
\begin{equation}\label{eq:qualiconvex_problem_result}
f(u_0,\dots,u_T) = \min \{\tau \ | \ x_t = x_{des} \ \text{for} \ \tau \leq t \leq {T+1}\}
\end{equation}
is a quasiconvex function of the control input sequence. Implement a bisection algorithm to solve the problem for the given data.\\
\textbf{Solution:}
For any fixed $\tau$, checking whether the transfer can be completed is a convex feasibility problem, since all of the constraints are affine in the input sequence. Each sublevel set $\{(u_0,\dots,u_T) \ | \ f(u_0,\dots,u_T) \leq \tau\}$ is therefore convex, so $f$ is a quasiconvex function of the control inputs, and the problem can be written as:
\optpblm{\tau}{
x_{t+1} = Ax_t + B u_t \ \forall t = 0,\dots,T\\
&\underline{u} \leq u_t \leq \bar{u} \ \forall t = 0,\dots,T\\
&x(0) = x_0\\
&x_t = x_{des} \ \forall t \in \{t \ | \ \tau \leq t \leq {T+1}\}
}
For simplicity, we relax the terminal condition to reaching the target at time $\tau$, rather than also requiring the state to remain there:
$$x_\tau = x_{des}$$
A bisection algorithm can then be implemented to solve this, as done in the MATLAB code shown in \appendixname~\ref{apx:pblm2_matlab}. The result was a minimum value $$t = 51, \text{ or } \tau = 10.2$$
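A sketch of the bisection itself is shown below, where \texttt{feas(tau)} stands in for a hypothetical helper that solves the convex feasibility problem above for a fixed $\tau$ and returns true when $x_\tau = x_{des}$ is reachable:
\begin{Verbatim}
% Bisection over the horizon (illustrative sketch; feas() is assumed)
lo = 0; hi = T;              % minimum time known to lie in [0, T]
while hi - lo > 1
    mid = floor((lo + hi)/2);
    if feas(mid)             % convex feasibility test at tau = mid
        hi = mid;            % feasible: minimum time <= mid
    else
        lo = mid;            % infeasible: minimum time > mid
    end
end
tau_min = hi;
\end{Verbatim}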
\newpage
The resulting system response and control sequence are provided as:
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{fig/pblm2}
\caption{Results for problem 2.}
\label{fig:pblm2}
\end{figure}
\newpage
\section{Problem 3: State feedback control design via SDP}
Feedback control problems can be formulated using a semidefinite program, such as
\begin{equation}\label{eq:feedback_control_def}
\begin{aligned}
\text{maximize} \hspace{0.5in}& \trace \{P\}\\
\text{subject to} \hspace{0.5in}
& \mqty [R + B^T PB & B^T PA\\
A^T PB & Q + A^T P A - P] \succeq 0\\
& P \succeq 0
\end{aligned}
\end{equation}
with variable $P \in S^n$ and problem data $A\in \real^{n\cross n}, B \in\real^{n\cross m}, Q \in S^n_+, R \in S^m_{++}$.\\
The solution of this problem is equivalent to the optimal solution of the infinite-horizon LQR problem:
\begin{equation}\label{eq:LQR_control_def}
\begin{aligned}
	\text{minimize} \hspace{0.5in}& \sum_{t=0}^\infty x_t^T Q x_t + u_t^T R u_t\\
\text{subject to} \hspace{0.5in}
& x_{t+1} = Ax_t + B u_t, \ t \geq 0, \ x(t=0) = x_0
\end{aligned}
\end{equation}
This is also equivalent to the solution of the discrete-time algebraic Riccati equation (DARE) and can be solved in MATLAB with \texttt{dare(A,B,Q,R)}. The resulting feedback controller is
\begin{equation}\label{eq:LQR_control_solution}
\begin{aligned}
	u_t &= K x_t\\
	K &= -\qty(R + B^T P^* B)^{-1} B^T P^* A
\end{aligned}
\end{equation}
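A minimal sketch of how \eqref{eq:feedback_control_def} can be checked against \texttt{dare} numerically (assuming the problem data $A, B, Q, R$ are defined and CVX is installed; the appendix code is the full version):
\begin{Verbatim}
% Compare the SDP solution with dare (illustrative sketch)
n = size(A,1);
cvx_begin sdp
    variable P(n,n) symmetric
    maximize( trace(P) )
    subject to
        [R + B'*P*B, B'*P*A; A'*P*B, Q + A'*P*A - P] >= 0;
        P >= 0;
cvx_end
P_dare = dare(A, B, Q, R);
K = -(R + B'*P_dare*B)\(B'*P_dare*A);   % feedback gain from P*
norm(P - P_dare)                        % should be near zero
\end{Verbatim}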
\textbf{Problem:}
Confirm the solution to the SDP given in \eqref{eq:feedback_control_def} is equivalent to the LQR problem given in \eqref{eq:LQR_control_def} for multiple randomly generated problems.
\textbf{Solution:}
CVX in MATLAB was used and the code can be found in \appendixname~\ref{apx:pblm3_matlab}. The full set of results is provided in \appendixname~\ref{apx:pblm3_results} for various randomly generated problems. The following are a few of the matching $P$ results.
\begin{Verbatim}
P_cvx =
20.3336 10.5025 -3.3125 37.1904
10.5025 12.9464 -1.8618 32.8376
-3.3125 -1.8618 2.3441 -4.8914
37.1904 32.8376 -4.8914 97.6353
P_dare =
20.3336 10.5025 -3.3125 37.1904
10.5025 12.9464 -1.8618 32.8376
-3.3125 -1.8618 2.3441 -4.8914
37.1904 32.8376 -4.8914 97.6353
\end{Verbatim}
\begin{Verbatim}
P_cvx =
9.2752 0.3867 1.2843 -2.9794
0.3867 4.4079 0.6458 1.1789
1.2843 0.6458 1.9253 0.5430
-2.9794 1.1789 0.5430 4.4857
P_dare =
9.2752 0.3867 1.2843 -2.9794
0.3867 4.4079 0.6458 1.1789
1.2843 0.6458 1.9253 0.5430
-2.9794 1.1789 0.5430 4.4857
\end{Verbatim}
\begin{Verbatim}
P_cvx =
101.8040 -50.0081 -0.8792 117.2149
-50.0081 28.3481 2.3037 -57.6749
-0.8792 2.3037 4.1602 0.2983
117.2149 -57.6749 0.2983 136.1756
P_dare =
101.8040 -50.0081 -0.8792 117.2149
-50.0081 28.3481 2.3037 -57.6749
-0.8792 2.3037 4.1602 0.2983
117.2149 -57.6749 0.2983 136.1756
\end{Verbatim}
\newpage
\appendix
\section{MATLAB Code:}\label{apx:matlab}
All code I write in this course can be found on my GitHub repository:\\
\href{https://github.com/jonaswagner2826/MECH6327}{https://github.com/jonaswagner2826/MECH6327}
\lstinputlisting[caption={MECH6327\_HW3},label={script:HW3}]{MECH6327_HW3.m}
\newpage
\section{Problem 1 MATLAB Code:}\label{apx:pblm1_matlab}
All code I write in this course can be found on my GitHub repository:\\
\href{https://github.com/jonaswagner2826/MECH6327}{https://github.com/jonaswagner2826/MECH6327}
% MECH6313_HW3_pblm1
\lstinputlisting[caption={MECH6327\_HW3\_pblm1},label={script:HW3_pblm1}]{MECH6327_HW3_pblm1.m}
\newpage
\section{Problem 2 MATLAB Code:}\label{apx:pblm2_matlab}
All code I write in this course can be found on my GitHub repository:\\
\href{https://github.com/jonaswagner2826/MECH6327}{https://github.com/jonaswagner2826/MECH6327}
% MECH6313_HW3_pblm2
\lstinputlisting[caption={MECH6327\_HW3\_pblm2},label={script:HW3_pblm2}]{MECH6327_HW3_pblm2.m}
\newpage
\section{Problem 3 MATLAB Code:}\label{apx:pblm3_matlab}
All code I write in this course can be found on my GitHub repository:\\
\href{https://github.com/jonaswagner2826/MECH6327}{https://github.com/jonaswagner2826/MECH6327}
% MECH6313_HW3_pblm3
\lstinputlisting[caption={MECH6327\_HW3\_pblm3},label={script:HW3_pblm3}]{MECH6327_HW3_pblm3.m}
\newpage
\section{Problem 3 MATLAB Results:}\label{apx:pblm3_results}
\begin{Verbatim}
>> MECH6327_HW3_pblm3
A =
1.1002 -1.1372 1.1077 0.2641
0.1751 0.6430 0.8205 3.1585
1.0036 -0.0128 -0.8176 1.2266
1.5110 0.9143 -0.1265 2.3206
B =
0.4145 1.2416
0.2118 -0.1576
0.6132 -1.3736
-0.5278 0.8708
Q =
0.4766 0.4525 0.2565 0.4911
0.4525 0.5808 0.3777 0.7950
0.2565 0.3777 0.4440 0.4396
0.4911 0.7950 0.4396 1.2997
R =
1.1591 0.8154
0.8154 0.7716
Calling SDPT3 4.0: 31 variables, 10 equality constraints
For improved efficiency, SDPT3 is solving the dual problem.
------------------------------------------------------------
num. of constraints = 10
dim. of sdp var = 10, num. of sdp blk = 2
*******************************************************************
SDPT3: Infeasible path-following algorithms
*******************************************************************
version predcorr gam expon scale_data
HKM 1 0.000 1 0
it pstep dstep pinfeas dinfeas gap prim-obj dual-obj cputime
-------------------------------------------------------------------
0|0.000|0.000|8.4e+01|1.2e+01|1.5e+03| 4.731862e+01 0.000000e+00| 0:0:00| chol 1 1
1|0.894|0.821|8.9e+00|2.3e+00|2.2e+02| 2.141617e+01 1.453677e+01| 0:0:00| chol 1 1
2|0.833|0.841|1.5e+00|3.7e-01|5.1e+01| 2.431086e+01 1.330905e+01| 0:0:00| chol 1 1
3|0.515|0.846|7.2e-01|5.7e-02|2.1e+01| 2.140755e+01 2.132680e+01| 0:0:00| chol 1 1
4|0.187|0.237|5.9e-01|4.4e-02|1.9e+01| 2.434592e+01 7.419460e+01| 0:0:00| chol 1 1
5|0.061|0.042|5.5e-01|4.2e-02|2.7e+01| 3.321322e+01 1.216786e+02| 0:0:00| chol 1 1
6|0.104|0.026|4.9e-01|4.1e-02|5.4e+01| 6.400240e+01 6.740367e+01| 0:0:00| chol 1 1
7|0.129|0.433|4.3e-01|2.3e-02|5.0e+01| 7.772307e+01 1.153429e+02| 0:0:00| chol 1 1
8|1.000|0.825|1.4e-08|4.0e-03|4.5e+01| 1.616373e+02 1.181693e+02| 0:0:00| chol 1 1
9|0.962|0.980|2.6e-07|7.9e-05|1.8e+00| 1.344089e+02 1.326294e+02| 0:0:00| chol 1 1
10|0.965|0.977|9.4e-09|1.8e-06|5.8e-02| 1.333014e+02 1.332444e+02| 0:0:00| chol 1 1
11|0.958|1.000|3.9e-10|1.9e-09|3.6e-03| 1.332623e+02 1.332587e+02| 0:0:00| chol 1 1
12|0.987|1.000|7.9e-12|7.9e-11|2.8e-04| 1.332596e+02 1.332594e+02| 0:0:00| chol 1 1
13|0.952|0.987|1.2e-11|2.6e-12|1.3e-05| 1.332595e+02 1.332595e+02| 0:0:00| chol 1 1
14|1.000|1.000|5.2e-12|2.3e-12|1.2e-06| 1.332595e+02 1.332595e+02| 0:0:00|
stop: max(relative gap, infeasibilities) < 1.49e-08
-------------------------------------------------------------------
number of iterations = 14
primal objective value = 1.33259457e+02
dual objective value = 1.33259456e+02
gap := trace(XZ) = 1.15e-06
relative gap = 4.31e-09
actual relative gap = 4.31e-09
rel. primal infeas (scaled problem) = 5.20e-12
rel. dual " " " = 2.32e-12
rel. primal infeas (unscaled problem) = 0.00e+00
rel. dual " " " = 0.00e+00
norm(X), norm(y), norm(Z) = 1.7e+02, 1.1e+02, 1.7e+03
norm(A), norm(b), norm(C) = 3.5e+01, 3.0e+00, 3.9e+00
Total CPU time (secs) = 0.47
CPU time per iteration = 0.03
termination code = 0
DIMACS: 7.8e-12 0.0e+00 4.0e-12 0.0e+00 4.3e-09 4.3e-09
-------------------------------------------------------------------
------------------------------------------------------------
Status: Solved
Optimal value (cvx_optval): -133.259
P_cvx =
20.3336 10.5025 -3.3125 37.1904
10.5025 12.9464 -1.8618 32.8376
-3.3125 -1.8618 2.3441 -4.8914
37.1904 32.8376 -4.8914 97.6353
K_cvx =
1.3793 2.0277 -0.1335 4.7523
-1.0233 0.0284 -0.4975 -1.2125
P_dare =
20.3336 10.5025 -3.3125 37.1904
10.5025 12.9464 -1.8618 32.8376
-3.3125 -1.8618 2.3441 -4.8914
37.1904 32.8376 -4.8914 97.6353
K_dare =
-1.3793 -2.0277 0.1335 -4.7523
1.0233 -0.0284 0.4975 1.2125
P_cvx =
20.3336 10.5025 -3.3125 37.1904
10.5025 12.9464 -1.8618 32.8376
-3.3125 -1.8618 2.3441 -4.8914
37.1904 32.8376 -4.8914 97.6353
P_dare =
20.3336 10.5025 -3.3125 37.1904
10.5025 12.9464 -1.8618 32.8376
-3.3125 -1.8618 2.3441 -4.8914
37.1904 32.8376 -4.8914 97.6353
>> MECH6327_HW3_pblm3
A =
-0.8568 1.3798 -0.6563 1.1284
0.0484 0.0951 -0.1250 0.7425
-0.6649 -0.4271 -0.5305 1.1436
1.4527 0.5108 0.1056 -0.9147
B =
0.1798 1.2963
-0.9833 1.0992
0.3848 0.6532
0.3257 -0.5051
Q =
0.3838 0.3714 0.4771 0.6771
0.3714 1.2344 1.2453 1.0731
0.4771 1.2453 1.4076 1.4208
0.6771 1.0731 1.4208 2.1397
R =
0.5081 0.7407
0.7407 1.1909
Calling SDPT3 4.0: 31 variables, 10 equality constraints
For improved efficiency, SDPT3 is solving the dual problem.
------------------------------------------------------------
num. of constraints = 10
dim. of sdp var = 10, num. of sdp blk = 2
*******************************************************************
SDPT3: Infeasible path-following algorithms
*******************************************************************
version predcorr gam expon scale_data
HKM 1 0.000 1 0
it pstep dstep pinfeas dinfeas gap prim-obj dual-obj cputime
-------------------------------------------------------------------
0|0.000|0.000|4.4e+01|5.2e+00|1.0e+03| 6.864540e+01 0.000000e+00| 0:0:00| chol 1 1
1|0.886|0.852|5.0e+00|8.2e-01|1.7e+02| 5.707109e+01 1.265425e+01| 0:0:00| chol 1 1
2|0.782|1.000|1.1e+00|5.5e-03|5.2e+01| 5.048488e+01 1.175262e+01| 0:0:00| chol 1 1
3|0.752|1.000|2.7e-01|5.5e-04|1.6e+01| 2.602445e+01 1.535157e+01| 0:0:00| chol 1 1
4|1.000|0.756|1.8e-07|1.8e-04|7.3e+00| 2.578672e+01 1.846254e+01| 0:0:00| chol 1 1
5|0.933|1.000|1.8e-08|5.6e-06|5.8e-01| 2.048293e+01 1.990168e+01| 0:0:00| chol 1 1
6|0.964|0.970|2.0e-09|7.1e-07|2.9e-02| 2.011379e+01 2.008495e+01| 0:0:00| chol 1 1
7|0.965|1.000|6.3e-10|5.6e-08|1.7e-03| 2.009536e+01 2.009368e+01| 0:0:00| chol 1 1
8|0.994|1.000|1.8e-10|1.3e-10|1.2e-04| 2.009415e+01 2.009404e+01| 0:0:00| chol 1 1
9|0.953|0.987|6.7e-11|3.7e-11|5.3e-06| 2.009409e+01 2.009408e+01| 0:0:00| chol 1 1
10|1.000|1.000|6.7e-15|1.3e-11|1.2e-06| 2.009408e+01 2.009408e+01| 0:0:00| chol 1 1
11|1.000|1.000|1.7e-14|1.0e-12|1.3e-08| 2.009408e+01 2.009408e+01| 0:0:00|
stop: max(relative gap, infeasibilities) < 1.49e-08
-------------------------------------------------------------------
number of iterations = 11
primal objective value = 2.00940824e+01
dual objective value = 2.00940824e+01
gap := trace(XZ) = 1.33e-08
relative gap = 3.22e-10
actual relative gap = 3.22e-10
rel. primal infeas (scaled problem) = 1.70e-14
rel. dual " " " = 1.00e-12
rel. primal infeas (unscaled problem) = 0.00e+00
rel. dual " " " = 0.00e+00
norm(X), norm(y), norm(Z) = 8.9e+00, 1.2e+01, 8.9e+01
norm(A), norm(b), norm(C) = 1.8e+01, 3.0e+00, 5.7e+00
Total CPU time (secs) = 0.42
CPU time per iteration = 0.04
termination code = 0
DIMACS: 2.6e-14 0.0e+00 1.8e-12 0.0e+00 3.2e-10 3.2e-10
-------------------------------------------------------------------
------------------------------------------------------------
Status: Solved
Optimal value (cvx_optval): -20.0941
P_cvx =
9.2752 0.3867 1.2843 -2.9794
0.3867 4.4079 0.6458 1.1789
1.2843 0.6458 1.9253 0.5430
-2.9794 1.1789 0.5430 4.4857
K_cvx =
0.5888 -0.3683 0.2601 -0.1295
0.7294 -0.5856 0.4291 -0.9410
P_dare =
9.2752 0.3867 1.2843 -2.9794
0.3867 4.4079 0.6458 1.1789
1.2843 0.6458 1.9253 0.5430
-2.9794 1.1789 0.5430 4.4857
K_dare =
-0.5888 0.3683 -0.2601 0.1295
-0.7294 0.5856 -0.4291 0.9410
P_cvx =
9.2752 0.3867 1.2843 -2.9794
0.3867 4.4079 0.6458 1.1789
1.2843 0.6458 1.9253 0.5430
-2.9794 1.1789 0.5430 4.4857
P_dare =
9.2752 0.3867 1.2843 -2.9794
0.3867 4.4079 0.6458 1.1789
1.2843 0.6458 1.9253 0.5430
-2.9794 1.1789 0.5430 4.4857
>> MECH6327_HW3_pblm3
A =
-1.5312 0.7880 1.6345 -0.9443
0.5046 0.2982 -0.6235 -0.6712
-0.8642 -0.1637 -1.3501 0.5767
-0.3766 0.6067 -1.1622 -2.0858
B =
0.2360 0.0076
-0.7784 -0.9376
1.0996 -0.6816
-0.8556 -0.2601
Q =
1.5071 1.3058 1.2427 1.5227
1.3058 2.0018 1.1626 1.6402
1.2427 1.1626 1.9380 2.0005
1.5227 1.6402 2.0005 2.2226
R =
0.1582 0.2334
0.2334 0.4393
Calling SDPT3 4.0: 31 variables, 10 equality constraints
For improved efficiency, SDPT3 is solving the dual problem.
------------------------------------------------------------
num. of constraints = 10
dim. of sdp var = 10, num. of sdp blk = 2
*******************************************************************
SDPT3: Infeasible path-following algorithms
*******************************************************************
version predcorr gam expon scale_data
HKM 1 0.000 1 0
it pstep dstep pinfeas dinfeas gap prim-obj dual-obj cputime
-------------------------------------------------------------------
0|0.000|0.000|4.4e+01|3.9e+00|1.0e+03| 8.266937e+01 0.000000e+00| 0:0:00| chol 1 1
1|0.818|0.880|8.1e+00|5.1e-01|1.7e+02| 3.173665e+01 9.183288e+00| 0:0:00| chol 1 1
2|0.761|0.758|1.9e+00|1.3e-01|6.0e+01| 2.745402e+01 1.125760e+01| 0:0:00| chol 1 1
3|0.553|0.897|8.6e-01|1.3e-02|2.5e+01| 2.195296e+01 1.974743e+01| 0:0:00| chol 1 1
4|0.169|0.214|7.2e-01|1.1e-02|2.1e+01| 2.472518e+01 1.029151e+02| 0:0:00| chol 1 1
5|0.017|0.022|7.1e-01|1.0e-02|3.3e+01| 3.109723e+01 1.695860e+02| 0:0:00| chol 1 1
6|0.061|0.035|6.6e-01|1.0e-02|7.8e+01| 8.019566e+01 2.054055e+02| 0:0:00| chol 1 1
7|0.423|0.391|3.8e-01|6.1e-03|1.4e+02| 2.228571e+02 2.222143e+02| 0:0:00| chol 1 1
8|1.000|0.531|8.3e-06|2.8e-03|1.1e+02| 3.445733e+02 2.448139e+02| 0:0:00| chol 2 1
9|0.952|1.000|1.7e-06|1.7e-06|1.6e+01| 2.823759e+02 2.660703e+02| 0:0:00| chol 1 1
10|0.950|0.991|8.3e-08|3.5e-07|1.1e+00| 2.712737e+02 2.701719e+02| 0:0:00| chol 1 1
11|1.000|1.000|2.1e-10|1.7e-08|1.1e-01| 2.705570e+02 2.704431e+02| 0:0:00| chol 1 1
12|0.959|0.980|1.7e-10|4.8e-10|4.0e-03| 2.704909e+02 2.704869e+02| 0:0:00| chol 1 1
13|0.993|1.000|1.0e-10|3.4e-11|2.4e-04| 2.704880e+02 2.704878e+02| 0:0:00| chol 1 1
14|0.954|0.989|8.0e-11|2.1e-11|1.1e-05| 2.704879e+02 2.704879e+02| 0:0:00| chol 1 1
15|1.000|1.000|1.7e-10|1.6e-11|1.0e-06| 2.704879e+02 2.704879e+02| 0:0:00|
stop: max(relative gap, infeasibilities) < 1.49e-08
-------------------------------------------------------------------
number of iterations = 15
primal objective value = 2.70487886e+02
dual objective value = 2.70487885e+02
gap := trace(XZ) = 1.04e-06
relative gap = 1.92e-09
actual relative gap = 1.87e-09
rel. primal infeas (scaled problem) = 1.72e-10
rel. dual " " " = 1.59e-11
rel. primal infeas (unscaled problem) = 0.00e+00
rel. dual " " " = 0.00e+00
norm(X), norm(y), norm(Z) = 8.9e+02, 2.2e+02, 1.4e+03
norm(A), norm(b), norm(C) = 2.2e+01, 3.0e+00, 7.5e+00
Total CPU time (secs) = 0.45
CPU time per iteration = 0.03
termination code = 0
DIMACS: 2.6e-10 0.0e+00 3.7e-11 0.0e+00 1.9e-09 1.9e-09
-------------------------------------------------------------------
------------------------------------------------------------
Status: Solved
Optimal value (cvx_optval): -270.488
P_cvx =
101.8040 -50.0081 -0.8792 117.2149
-50.0081 28.3481 2.3037 -57.6749
-0.8792 2.3037 4.1602 0.2983
117.2149 -57.6749 0.2983 136.1756
K_cvx =
-4.7337 3.0098 1.0219 -6.7728
0.2527 -0.0799 -1.2428 0.2020
P_dare =
101.8040 -50.0081 -0.8792 117.2149
-50.0081 28.3481 2.3037 -57.6749
-0.8792 2.3037 4.1602 0.2983
117.2149 -57.6749 0.2983 136.1756
K_dare =
4.7337 -3.0098 -1.0219 6.7728
-0.2527 0.0799 1.2428 -0.2020
P_cvx =
101.8040 -50.0081 -0.8792 117.2149
-50.0081 28.3481 2.3037 -57.6749
-0.8792 2.3037 4.1602 0.2983
117.2149 -57.6749 0.2983 136.1756
P_dare =
101.8040 -50.0081 -0.8792 117.2149
-50.0081 28.3481 2.3037 -57.6749
-0.8792 2.3037 4.1602 0.2983
117.2149 -57.6749 0.2983 136.1756
\end{Verbatim}
%\newpage
%\section*{}
%\bibliographystyle{ieeetr}
%\bibliography{mybib.bib}
\end{document}
| {
"alphanum_fraction": 0.6159670782,
"avg_line_length": 37.0426829268,
"ext": "tex",
"hexsha": "1534d19898125223271560b87da985a008273241",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2b55aaf6f9e1bcf5cc684f5c853cadec26acf9d2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jonaswagner2826/MECH6327",
"max_forks_repo_path": "Homework/HW3/MECH6327-HW3.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2b55aaf6f9e1bcf5cc684f5c853cadec26acf9d2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jonaswagner2826/MECH6327",
"max_issues_repo_path": "Homework/HW3/MECH6327-HW3.tex",
"max_line_length": 329,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2b55aaf6f9e1bcf5cc684f5c853cadec26acf9d2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jonaswagner2826/MECH6327",
"max_stars_repo_path": "Homework/HW3/MECH6327-HW3.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 15943,
"size": 36450
} |
\section{Trim and linearisation}
\subsection{Trimming}
The first step when designing the control system of an aircraft is to study the behavior of the aircraft due to control inputs or external disturbances from an equilibrium condition. If the aircraft were not in equilibrium, deviations from the initial conditions unrelated to the control inputs would occur, making the analysis more difficult.
In the case of an aircraft this equilibrium condition is known as a trimmed flight condition. In order to determine the trimmed flight condition, the states and inputs must be chosen such that the linear and angular accelerations are zero. For this assignment this is done by minimizing the following cost function.
\begin{equation}
\label{eq:trim_cost}
cost = 5\dot{h}^2 +
W_{\phi}\dot{\phi}^2 +
W_{\theta}\dot{\theta}^2 +
W_{\psi}\dot{\psi}^2 +
2\dot{V_{tot}}^2 +
10\dot{\alpha}^2 +
10\dot{\beta}^2 +
10\dot{P}^2 +
10\dot{Q}^2 +
10\dot{R}^2
\end{equation}
All the state derivatives in the cost function are squared, so the cost is non-negative and equals zero exactly at an equilibrium; this is what allows the trim condition to be found by minimizing it. Once the cost function returns zero, the trimmed flight condition states and inputs have been found.
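A minimal sketch of such a cost function in MATLAB is shown below; the ordering of the state-derivative vector and the weight names are assumptions for illustration, not the actual assignment code:
\begin{verbatim}
% Trim cost (illustrative sketch): xdot holds the derivatives in the
% order [hdot phidot thetadot psidot Vdot alphadot betadot Pdot Qdot Rdot]
function cost = trim_cost(xdot, Wphi, Wtheta, Wpsi)
    w = [5; Wphi; Wtheta; Wpsi; 2; 10; 10; 10; 10; 10];
    cost = sum(w .* xdot.^2);   % weighted sum of squared derivatives
end
\end{verbatim}
A minimizer such as \texttt{fminsearch} can then drive this cost towards zero over the free states and inputs.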
Performing the trimming procedure for 5 iterations in level flight for both flight conditions and both the high and low fidelity models yields the following final values for the cost function.
\begin{center}
\begin{tabular}{ r | c | c }
& high fidelity & low fidelity \\ \hline \hline
Assigned flight condition & $0.0506$ & $4.2997\cdot10^{-29}$ \\
APA flight condition & $7.1856\cdot10^{-6}$ & $5.1572\cdot10^{-29}$
\end{tabular}
\end{center}
The results for the low fidelity models are smaller than the machine epsilon $2.2204\cdot10^{-16}$ of the computer used to calculate these, so it's safe to assume the trimmed flight condition has been successfully found.
The final costs for the high fidelity model are higher. The results for the APA flight conditions are on the order of $10^{-6}$ and for the assigned flight conditions on the order of $10^{-2}$. This indicates that the resulting state and input values are not perfect, but since the cost is close to zero, it may be close enough for further analysis.
\subsection{Accelerometer Position Analysis}
No changes are seen in the $A$ and $B$ matrices after adding the new vertical accelerometer output to the Simulink model. This is expected since the addition of the accelerometer output didn't change the dynamics of the aircraft. However, the $C$ and $D$ matrices now have an extra row which corresponds to the new output. Equation~\ref{eq:anss} shows the linearized output equation of the normal acceleration at the center of gravity ($x_a=0$).
\begin{equation}
\label{eq:anss}
y = \begin{bmatrix}
		0 \\ 0 \\ -3.24322220018077 \times 10^{-5} \\ 0 \\ -9.67969758700180 \times 10^{-6} \\ 0 \\
		0.00398736185031356 \\ 9.92978181907083 \\ 0 \\ 0 \\ 0.966415396225217 \\
		0 \\ 0 \\ 0.0208407616972783 \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}^T x +
\begin{bmatrix}
0 & 0 & 0 & 0
\end{bmatrix} u
\end{equation}
The accelerometer output primarily depends on the velocity, angle of attack, pitch rate and the normal load factor. The altitude and pitch angle also have a minor contribution; however, since these terms are small, they could be caused by errors in the model or the linearization process. Equation~\ref{eq:tf_el_an} shows the transfer function from the elevator to normal acceleration.
\begin{equation}
\label{eq:tf_el_an}
\frac{
	\begin{matrix}
	0.421 s^{17} + 22.9 s^{16} + 404.6 s^{15} + 1728 s^{14} - 2.376\times10^{4} s^{13} - 3.177\times10^{5} s^{12} \\
	- 1.64\times10^{6} s^{11} - 5.127\times10^{6} s^{10} - 1.168\times10^{7} s^{9} - 1.574\times10^{7} s^{8} - 7.901\times10^{6} s^{7} \\
	- 5.914\times10^{4} s^{6} + 746.3 s^{5} + 4.87 s^{4} - 1.464\times10^{-11} s^{3}
	\end{matrix}
}{
	\begin{matrix}
	s^{18} + 80.51 s^{17} + 2581 s^{16} + 4.234\times10^{4} s^{15} + 3.876\times10^{5} s^{14} + 2.115\times10^{6} s^{13} \\
	+ 7.644\times10^{6} s^{12} + 2.059\times10^{7} s^{11} + 3.952\times10^{7} s^{10} + 5.029\times10^{7} s^{9} + 4.057\times10^{7} s^{8} \\
	+ 1.553\times10^{7} s^{7} + 5.631\times10^{5} s^{6} + 1.073\times10^{5} s^{5} + 1159 s^{4} - 8.86\times10^{-11} s^{3}
	\end{matrix}
}
\end{equation}
The transfer function has a zero on the right-hand side of the imaginary axis at $9.76+0j$. This zero is the one responsible for the non-minimum-phase behavior. The physical explanation is that a pitch-up manoeuvre with the elevator produces a downwards force which causes the aircraft to accelerate downwards. Eventually the angle of attack will increase as the nose pitches up and the extra lift will accelerate the aircraft upwards.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{figures/an_elev_step}
	\caption{Normal acceleration after a negative step input on the elevator.}
	\label{fig:an_elev_step}
\end{figure}
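The same wrong-way effect can be reproduced with a toy transfer function having a right-half-plane zero at $s=9.76$ (this is only an illustration, not the aircraft model):
\begin{verbatim}
% Toy non-minimum-phase example (illustrative sketch)
G = tf([-1/9.76 1], [1 2 2]);  % RHP zero at +9.76, stable poles
step(G)                        % response initially moves the wrong way
\end{verbatim}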
Repeating the simulation for increasing values of $x_a$ shows that the zero moves away from the origin, indicating that its influence becomes less dominant. When $x_a=5.9$ the zero has crossed to the left side of the imaginary axis and the non-minimum-phase behavior disappears. Since the zero is then much further to the left than the other zeros and poles, its effect on the response is negligible. Increasing $x_a$ further moves the zero back toward the origin and the initial motion reappears, this time in the direction of the reference signal.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{figures/an_elev_step_mult}
	\caption{Normal acceleration after a negative step input on the elevator for different values of $x_a$.}
\label{fig:an_elev_step_mult}
\end{figure}
One property of the \emph{instantaneous center of rotation} is that the linear accelerations at that point are the same as those of the overall aircraft. A point in front of it will accelerate upwards with a pitch-up manoeuvre and a point behind it will accelerate downwards during the same manoeuvre. Thus, based on the effects of moving $x_a$, it is fair to conclude that the instantaneous center of rotation must be near $x_a=5.9$. Another way to arrive at the same conclusion is that the acceleration due to rotation is caused by the zero mentioned in the previous section, since this is the only zero that changes significantly with $x_a$. Thus the instantaneous center of rotation will be at the location where this zero is infinitely far away from the origin.
The pilot should be placed at or behind the instantaneous center of rotation ($x_a\geq5.9\ ft$). If placed in front of it, the pilot will initially feel the aircraft moving in the direction opposite to the one being commanded, which might lead to compensating for an error that is not there. This could make the combined human-aircraft system unstable or at least difficult to control.
It is important to place the accelerometer close to a node of the dominant fuselage bending mode. The reason is that, as the aircraft flies, loads and vibrations acting on the aircraft will make the fuselage bend. The nodes are locations where the displacements due to vibrations or bending are zero. Thus, placing the accelerometer far away from a node will result in it measuring extra accelerations due to vibrations or bending.
\clearpage
| {
"alphanum_fraction": 0.7246279662,
"avg_line_length": 75.3434343434,
"ext": "tex",
"hexsha": "144418f82c3317cd2a32c3560d1bf3036182884d",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-03-04T15:55:23.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-03-04T15:55:23.000Z",
"max_forks_repo_head_hexsha": "ef8e368c9c81c3dcba4193bd2193a68d5e2bd2f6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "aarondewindt/afcs_assignment",
"max_forks_repo_path": "report/1_trim_and_linearization.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ef8e368c9c81c3dcba4193bd2193a68d5e2bd2f6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "aarondewindt/afcs_assignment",
"max_issues_repo_path": "report/1_trim_and_linearization.tex",
"max_line_length": 758,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ef8e368c9c81c3dcba4193bd2193a68d5e2bd2f6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "aarondewindt/afcs_assignment",
"max_stars_repo_path": "report/1_trim_and_linearization.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1983,
"size": 7459
} |
% Autogenerated translation of about.md by Texpad
% To stop this file being overwritten during the typeset process, please move or remove this header
\documentclass[12pt]{book}
\usepackage{graphicx}
\usepackage[utf8]{inputenc}
\usepackage[a4paper,left=.5in,right=.5in,top=.3in,bottom=0.3in]{geometry}
\setlength\parindent{0pt}
\setlength{\parskip}{\baselineskip}
\renewcommand*\familydefault{\sfdefault}
\usepackage{hyperref}
\pagestyle{plain}
\begin{document}
\Large
% Original Jekyll front matter:
% permalink: /
% title: "Bogdan Mazoure - Math & Stats graduate student @ McGill University"
% excerpt: "About me"
% author_profile: true
% redirect_from:
%   - /about/
%   - /about.html
\chapter*{About me}
I am currently a PhD student at the Montreal Institute for Learning Algorithms (MILA) and McGill University, co-supervised by Devon Hjelm and Doina Precup. My research interests include deep reinforcement learning, probabilistic modeling, variational inference and representation learning.
I completed my Master's in Statistics at McGill University under the supervision of Prof. \href{http://www.math.mcgill.ca/neslehova/}{Johanna Neslehova}. My thesis focuses on reconstructing graphical models from discrete data with variational inference and multi-armed bandits. It can be found here: \href{https://bmazoure.github.io/files/thesis_Msc_2018.pdf}{link}.
I was also a research intern at Nuance during the summer of 2018 where I collaborated with Dr. \href{https://scholar.google.ca/citations?user=KRPMXqYAAAAJ&hl=en}{Atta Norouzian}. My work there focused on modeling acoustic signals such as speech with deep neural architectures.
Previously, I obtained a Bachelor's in Computer Science and Statistics in 2017 from McGill University.
\chapter*{Research interests}
\begin{itemize}
\item Deep and distributional reinforcement learning;
\item Multivariate statistics;
\item Parametric and Non-parametric Bayesian methods (Gaussian processes and variational inference);
\item Probabilistic graphical models;
\item Uncertainty representation in neural networks;
\item Generative models (auto-encoding variational Bayes and generative nets);
\item Dependence modeling for discrete marginals.
\end{itemize}
\end{document}
| {
"alphanum_fraction": 0.8008111762,
"avg_line_length": 45.2857142857,
"ext": "tex",
"hexsha": "4970cc02da0084fa24de0f93c16cf0770b8249c7",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2018-09-04T03:53:27.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-09-04T03:53:27.000Z",
"max_forks_repo_head_hexsha": "02af7f10a4432d0966fca63d85d40587b9773594",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "bmazoure/bmazoure.github.io",
"max_forks_repo_path": "_pages/about.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "02af7f10a4432d0966fca63d85d40587b9773594",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "bmazoure/bmazoure.github.io",
"max_issues_repo_path": "_pages/about.tex",
"max_line_length": 370,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "02af7f10a4432d0966fca63d85d40587b9773594",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "bmazoure/bmazoure.github.io",
"max_stars_repo_path": "_pages/about.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 547,
"size": 2219
} |
\subsection{Cauchy's integral theorem}
| {
"alphanum_fraction": 0.7804878049,
"avg_line_length": 10.25,
"ext": "tex",
"hexsha": "c61d820f3a21336bcd9d4885e9fbb4a950e7e6fa",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/analysis/complexAnalysis/04-04-Cauchy's_integral_theorem.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/analysis/complexAnalysis/04-04-Cauchy's_integral_theorem.tex",
"max_line_length": 38,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/analysis/complexAnalysis/04-04-Cauchy's_integral_theorem.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 11,
"size": 41
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\title{Probability Info Write-Up}
\author{kyjeckland }
\date{October 2021}
\begin{document}
\maketitle
\section{Desert Tiles}
Here, we look at the probabilities and expected values associated with desert tiles in the simplest case of only one camel. In this instance, the camel can move at most 3 spaces, so we will restrict the distance of the desert tile to within 3 tiles. Notice that each possible movement value is equiprobable, so the probability that the camel lands on the desert tile is exactly $\frac{1}{3}$ regardless of the spot selected for the desert tile. Next, we will take a look at the expected value in the three different cases.
\textbf{Case 1:} The desert tile is immediately in front of the camel. In this case, if a $1$ is rolled the camel will move $2$ spaces. If we let $Z$ be the random variable that represents the distance the camel moves, then the expected value is calculated as follows:
\begin{center} $E(Z) = \frac{1}{3}\cdot2 + \frac{1}{3}\cdot2+\frac{1}{3}\cdot3 = \frac{2}{3}+\frac{2}{3}+\frac{3}{3} = \frac{7}{3} \approx 2.33$
\end{center}
\textbf{Case 2:} The desert tile is two tiles in front of the camel. If a $2$ is rolled here, then the camel will move $3$ spaces. As above, we find the expected value as follows:
\begin{center} $E(Z) = \frac{1}{3}\cdot1+\frac{1}{3}\cdot3+\frac{1}{3}\cdot3 = \frac{7}{3} \approx 2.33$
\end{center}
\textbf{Case 3:} The third and final case is when the desert tile is 3 spaces away. Here, if 3 is rolled, the camel will move 4 spaces. The expected value is then:
\begin{center} $E(Z) = \frac{1}{3}\cdot1+\frac{1}{3}\cdot2+\frac{1}{3}\cdot4 = \frac{7}{3} \approx 2.33$
\end{center}
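These three cases can be verified with a quick enumeration (a sketch; it assumes only what is stated above, namely that the die is uniform on $\{1,2,3\}$ and that landing exactly on the tile adds one extra step):
\begin{verbatim}
% Desert tile expected movement (illustrative enumeration)
for d = 1:3                      % tile placed d spaces ahead
    moves = 1:3;                 % equiprobable die outcomes
    moves(moves == d) = d + 1;   % landing on the tile moves +1 extra
    fprintf('d = %d: E[Z] = %.4f\n', d, mean(moves))   % 7/3 each time
end
\end{verbatim}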
\section{Crazy Camels}
The crazy camels move in the opposite direction of the other camels, but only pose a threat if a camel lands on top of them. Here we will go over the simplest case of one normal camel and one crazy camel, and evaluate the probabilities and expected values in each case. The probabilities in this scenario are more complex than those in the desert tile scenario. We will assume that both the crazy camel and normal camel's dice are the only ones that have yet to be rolled. Notice that if the crazy camel goes first, the probability that the normal camel moves backward is exactly 0. Furthermore, the probability of this specific crazy camel moving upon the crazy die being rolled is $.5$, because there are $3$ ways it can move out of a total of $6$ possibilities. Suppose $x$ is the distance of the crazy camel from the normal camel. Given this information, it follows that the probability the normal camel will move backward is
\begin{center}
	$\frac{1}{3}\cdot\frac{1}{2}\cdot\frac{1}{2} = \frac{1}{12}$
\end{center}
The probability that the normal camel lands on the crazy camel but the crazy camel doesn't move would then be
\begin{center}$\frac{1}{2}\frac{1}{3}\frac{1}{2}+\frac{1}{2}\frac{1}{3} = \frac{1}{12}+\frac{1}{6} = \frac{1}{12}+\frac{2}{12} = \frac{3}{12} = \frac{1}{4}$
\end{center}
With this information, we can then calculate the expected values of each possible distance.
\textbf{1 tile away:}
\\\begin{center} $E(Z) = 1\cdot\frac{1}{4} + 2\cdot\frac{1}{3}+3\cdot\frac{1}{3} -\frac{1}{36}+\frac{2}{36} = \frac{11}{6} \approx 1.83$
\end{center}
\textbf{2 tiles away:}
\\\begin{center} $E(Z) = 1\cdot\frac{1}{3} + 2\cdot\frac{1}{4}+3\cdot\frac{1}{3} -\frac{1}{36}+\frac{1}{36} = \frac{11}{6} \approx 1.83$
\end{center}
\textbf{3 tiles away:}
\\\begin{center} $E(Z) = 1\cdot\frac{1}{3} + 2\cdot\frac{1}{3}+3\cdot\frac{1}{4} +\frac{2}{36}+\frac{1}{36} = \frac{11}{6} \approx 1.83$
\end{center}
And so the expected values are all the same, as with the desert tiles. How, then, do we judge which position is ``better'' or ``worse'', and can we generalize this sort of thinking to apply to more complex scenarios?
\end{document}
| {
"alphanum_fraction": 0.7097883598,
"avg_line_length": 66.3157894737,
"ext": "tex",
"hexsha": "45e577bb46655e0f72e6675e5ba9e7b318cf82cf",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2021-10-08T02:25:10.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-10-01T12:42:42.000Z",
"max_forks_repo_head_hexsha": "f3ef2667a8a5482007e5b9b95c39e20578bee22a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "keckl99/CamelUp",
"max_forks_repo_path": "writeup/probability_info/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f3ef2667a8a5482007e5b9b95c39e20578bee22a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "keckl99/CamelUp",
"max_issues_repo_path": "writeup/probability_info/main.tex",
"max_line_length": 930,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f3ef2667a8a5482007e5b9b95c39e20578bee22a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "keckl99/CamelUp",
"max_stars_repo_path": "writeup/probability_info/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1174,
"size": 3780
} |
\subsection{The Space of Germs}
Let $D\subset\mathbb C$ be a domain.
\begin{definition}
Let $(f,U)$ and $(g,V)$ be function elements on $D$.
For any $z\in U\cap V$, write $(f,U)\equiv_z(g,V)$ if $f,g$ agree on a neighbourhood of $z$.
\end{definition}
It is easy to check that $\equiv_z$ is an equivalence relation.
\begin{definition}
Let $(f,U)$ be a function element and $z\in U$.
The equivalence class of $(f,U)$ under $\equiv_z$ is called the germ of $f$ at $z$ and is denoted by $[f]_z$.
\end{definition}
So two germs $[f]_z$ and $[g]_w$ are equal iff $z=w$ and $f=g$ on a neighbourhood of $z=w$.
We want to study all possible germs on a domain $D$.
\begin{definition}
	The space of germs over $D$ is
	$$\mathcal G=\{[f]_z:z\in D,\ (f,U)\text{ a function element on $D$ with }z\in U\}$$
\end{definition}
Now we defined it as a set, it is natural to endow a topology on it.
For any function element $(f,U)$ on $D$, let $[f]_U=\{[f]_z:z\in U\}$.
\begin{lemma}
The collection $\{[f]_U\}$ for $U$ open in $D$ is a basis of a topology on $\mathcal G$.
\end{lemma}
\begin{proof}
Let $(f,U)$ and $(g,V)$ be function elements on $D$.
	If $[h]_z\in [f]_U\cap [g]_V$, then $h$ agrees with $f,g$ on a neighbourhood $W\subset U\cap V$ of $z$, therefore $[h]_W\subset [f]_U\cap[g]_V$.
\end{proof}
This is the topology we want.
\begin{lemma}
$\mathcal G$ is Hausdorff.
\end{lemma}
\begin{proof}
	Consider elements $[f]_z,[g]_w\in\mathcal G$ with $[f]_z\neq [g]_w$.\\
	If $z\neq w$ then we can choose function elements $(f,U)\in [f]_z,(g,V)\in[g]_w$ such that $U\cap V=\varnothing$, therefore $[f]_U$ and $[g]_V$ are disjoint.\\
	If $z=w$, then we can choose an open neighbourhood $U$ such that $(f,U)\in[f]_z$ and $(g,U)\in [g]_z$.
	Unless $[f]_U\cap [g]_U=\varnothing$, there is a germ $[h]_v\in [f]_U\cap[g]_U$ for some $v\in U$.
	Then $f$ and $g$ both agree with $h$ on a neighbourhood of $v$, so by the identity principle and the connectedness of $U$ we get $f|_U=g|_U$, which means $[f]_z=[g]_z$, contradiction.
\end{proof}
\begin{definition}
Let $\mathcal G$ be the space of germs over a domain $D$.
The forgetful map $\pi:\mathcal G\to D$ is defined by $\pi([f]_z)=z$.
\end{definition}
\begin{lemma}
For each component $G\subset\mathcal G$, the restriction $\pi:G\to D$ is a covering map.
\end{lemma}
\begin{proof}
Take an open $U\subset D$.
Then the pre-image of $U$ has to be
$$\pi^{-1}(U)=\bigcup_{(f,V)\text{ function element on }U}[f]_V$$
which is open.
So $\pi$ is continuous.\\
For each open set in the form $[f]_U$, we have
$$(\pi|_{[f]_U})^{-1}(z)=[f]_z$$
which is a continuous inverse of $\pi|_{[f]_U}$.
This shows that $\pi$ is a local homeomorphism, hence a covering map.
\end{proof}
Hence, by Lemma \ref{covering_conformal}, $\pi$ induces a well-defined conformal structure on $\mathcal G$ (well, on each of its connected components) such that $\pi$ is analytic.
Explicitly, the atlas we have in mind consists of charts $(\pi|_{[f]_U},[f]_U)$ across all the function elements $(f,U)$ on $D$.
\begin{definition}
Let $\mathcal G$ be the space of germs on a domain $D$.
The evaluation map $\mathcal E:\mathcal G\to\mathbb C$ is defined by $\mathcal E([f]_z)=f(z)$.
\end{definition}
In the chart $(\pi|_{[f]_U},[f]_U)$, we have
$$\mathcal E\circ(\pi|_{[f]_U})^{-1}(z)=\mathcal E([f]_z)=f(z)$$
Therefore $\mathcal E$ is analytic. | {
"alphanum_fraction": 0.6321060383,
"avg_line_length": 53.046875,
"ext": "tex",
"hexsha": "662d66ec7ca36471e730aa03e9c52ff401588aa4",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "david-bai-notes/II-Riemann-Surfaces",
"max_forks_repo_path": "8/germs.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "david-bai-notes/II-Riemann-Surfaces",
"max_issues_repo_path": "8/germs.tex",
"max_line_length": 180,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "david-bai-notes/II-Riemann-Surfaces",
"max_stars_repo_path": "8/germs.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1234,
"size": 3395
} |
% !TEX program = xelatex
\documentclass{resume}
%\usepackage{zh_CN-Adobefonts_external} % Simplified Chinese Support using external fonts (./fonts/zh_CN-Adobe/)
%\usepackage{zh_CN-Adobefonts_internal} % Simplified Chinese Support using system fonts
\usepackage{hyperref}
\begin{document}
\pagenumbering{gobble} % suppress displaying page number
\name{Yanzhe Chen}
% {E-mail}{mobilephone}{homepage}
% be careful of _ in emaill address
\contactInfo{[email protected]}{(+86) 185-1669-3610}{}
% {E-mail}{mobilephone}
% keep the last empty braces!
%\contactInfo{[email protected]}{(+86) 131-221-87xxx}{}
\section{\faGraduationCap\ Education}
\datedsubsection{\normalsize \textbf{Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University}}{2014.9 - 2017.3}
\datedline{\small \textit{M.S. in Software Engineering, Advisor: Prof. Binyu Zang}}{\textbf{GPA}: 2.81\ /\ 3.3, \textbf{Rank}: 2\ /\ 96}
\datedsubsection{\normalsize \textbf{Shanghai Jiao Tong University}}{2010.9 - 2014.6}
\datedline{\small \textit{B.Eng. in Software Engineering}}{\textbf{GPA}: 3.79\ /\ 4.3, \textbf{Rank}: 8\ /\ 99}
\section{\faBriefcase\ Internship}
\datedsubsection{\textbf{Microsoft, China}}{ 2016.7 - 2016.9 }
\role{Cloud \& Enterprise Group}{SDE Intern}
Designed and implemented an Azure cost monitoring service to help reduce the team's expenses.
\begin{itemize}[leftmargin=*]
\item {Built as a SPA using the MEAN stack, shipping with the Docker.}
\item {Recommendation (4 in 14) from judges in \textbf{Microsoft Young Hackathon} semi-final.}
\end{itemize}
Reworked the time adjustment policy in the Hyper-V timesync module.
\begin{itemize}[leftmargin=*]
  \item {Refined timesync request handling to avoid sudden changes of the guest time.}
\item {Related patch accepted by the \textbf{FreeBSD kernel}.}
\end{itemize}
\section{\faUsers\ Projects}
\datedsubsection{\textbf{DrTM}}{ 2015 - 2016 }
\role{Distributed Transaction Processing}{Research Project, C++}
DrTM is a fast and general in-memory transaction system which provides high throughput and low latency.
\\[5pt]
DrTM leverages two hardware features, HTM and RDMA, and uses a hardware-friendly protocol to boost distributed transaction processing.
\\[5pt]
Related research papers were accepted by \textbf{SOSP’15} and \textbf{EuroSys’16} (\emph{top conferences in system}).
\\[5pt]
DrTM is a \textit{cooperated} project, my contributions are:
\begin{itemize}[leftmargin=*]
\item {Designed a lease-based shared lock using RDMA.}
\item {Implemented a hybrid OCC protocol to preserve transaction generality.}
\item {Designed an optimistic replication scheme to enable transaction recovery.}
\end{itemize}
\datedsubsection{\textbf{PowerLyra}}{ 2014 - 2015 }
\role{Distributed Graph Computation}{Research Project, C++}
Graph computation is widely used to reason about large-scale complex data in machine learning and data mining. PowerLyra is a performance extension to GraphLab, which is an open-source graph computation framework.
\\[5pt]
PowerLyra considers two ways of partitioning a graph, vertex-cut and edge-cut, and analyses their merits and demerits. It introduces a hybrid partitioning that reduces redundant messages and improves performance dramatically.
\\[5pt]
Won the \textbf{Best Paper Award} from \textbf{EuroSys'15}.
\\[5pt]
PowerLyra is a \textit{cooperated} project, my contributions are:
\begin{itemize}[leftmargin=*]
\item {Analysed the replication factor for existing partitioning strategies.}
\item {Implemented a hybrid partitioning strategy which leverages both locality and parallelism.}
\end{itemize}
\datedsubsection{\textbf{GENE-MAP}}{ 2013 - 2014 }
\role{Map Generalization}{Contest Project, C++}
Map generalization is one of the core technologies for online map services. GENE-MAP provides an efficient generalization algorithm that makes a trade-off between precision and speed.
\\[5pt]
GENE-MAP exploits \textit{iteration} and \textit{greedy} strategy to ensure a good balance between precision and speed.
\\[5pt]
Won the \textbf{3rd Place Prize} from the 2014 ACM GISCUP Competition
\\[5pt]
GENE-MAP is a \textit{personal} project, my contributions are:
\begin{itemize}[leftmargin=*]
\item {Designed and implemented an iterative and greedy algorithm for map generalization.}
\item {Parallelized the algorithm using the OpenMP library.}
\end{itemize}
\section{\faFile\ Publications}
\titleformat{\subsection}
{\normalsize\raggedright}
{}{0em}
{}
\datedsubsection{\textbf{Fast and General Distributed Transactions using RDMA and HTM}}{\textbf{EuroSys} 2016}
\datedline{\small\underline{Yanzhe Chen}, Xinda Wei, Jiaxin Shi, Rong Chen and Haibo Chen.}{\href{http://ob88vwut3.bkt.clouddn.com/papers/drtmr-eurosys16.pdf}{\faFilePdfO}\ \ \href{http://ob88vwut3.bkt.clouddn.com/slides/drtmr-eurosys16-slides.pptx}{\faFilePowerpointO}}
\datedsubsection{\textbf{Fast In-memory Transaction Processing using RDMA and HTM}}{\textbf{SOSP} 2015}
\datedline{\small Xingda Wei, Jiaxin Shi, \underline{Yanzhe Chen}, Rong Chen, Haibo Chen.}{\href{http://ob88vwut3.bkt.clouddn.com/papers/drtm-sosp15.pdf}{\faFilePdfO}\ \ \href{http://ob88vwut3.bkt.clouddn.com/slides/drtm-sosp15-slides.pptx}{\faFilePowerpointO}}
\titlespacing*{\subsection}{0cm}{*1.8}{*0.6}
\datedsubsection{\textbf{PowerLyra: Differentiated Graph Computation and Partitioning on Skewed Graphs}}{\textbf{EuroSys} 2015}
\datedline{\small Rong Chen, Jiaxin Shi, \underline{Yanzhe Chen}, Haibo Chen.}{\href{http://ob88vwut3.bkt.clouddn.com/papers/powerlyra-eurosys15.pdf}{\faFilePdfO}\ \ \href{http://ob88vwut3.bkt.clouddn.com/slides/powerlyra-eurosys15-slides.pptx}{\faFilePowerpointO}}
\datedsubsection{\textbf{Greedy Map Generalization by Iterative Point Removal}}{\textbf{SIGSPATIAL} 2014}
\datedline{\small\underline{Yanzhe Chen}, Yin Wang, Rong Chen, Haibo Chen and Binyu Zang.}{\href{http://ob88vwut3.bkt.clouddn.com/papers/gmap-sigspatialcup14.pdf}{\faFilePdfO}\ \ \href{http://ob88vwut3.bkt.clouddn.com/slides/gmap-sigspatialcup14-slides.pptx}{\faFilePowerpointO}}
\section{\faHeartO\ Honors and Awards}
\datedsubsection{ACM EuroSys Best Paper Award}{2015}
\datedsubsection{First-class Academic Scholarship for M.S., Shanghai Jiao Tong University}{2014}
\datedsubsection{ACM SIGSPATIAL GISCUP \nth{3} Place}{2014}
\datedsubsection{Outstanding College Graduate of Shanghai Jiao Tong University}{2014}
\datedsubsection{XinDong Scholarship (second-class) of Shanghai Jiao Tong University}{2013}
\datedsubsection{Sun Hung Kai Properties Scholarship}{2012}
\datedsubsection{ShenYin and WanGuo Special Scholarship of Shanghai Jiao Tong University}{2011}
\section{\faBook\ Teaching Assistant}
\datedsubsection{Computer System Design and Implementation}{2016}
\datedsubsection{Distributed Systems}{2015}
\datedsubsection{Introduction to Programming}{2013}
\section{\faCogs\ SKILLS}
\begin{itemize}[leftmargin=*, parsep=1.0ex]
\item {\textbf{Programming Languages}: Familiar with C++; Some experience with JavaScript, Bash, Java.}
\item {\textbf{Systems and Tools}: Unix (4+ years); Familiar with Git, Vim.}
\item {\textbf{English}: CET-6 561; Gave conference talk twice (Dallas and London).}
\end{itemize}
\end{document}
| {
"alphanum_fraction": 0.7733667642,
"avg_line_length": 57.8951612903,
"ext": "tex",
"hexsha": "a41febf5ac500ff4d00b4dc3016aa13db9135cbd",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8e9a8246b790d5d3e642f02236df45189197f23f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "yanzhe-chen/resume",
"max_forks_repo_path": "resume.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8e9a8246b790d5d3e642f02236df45189197f23f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "yanzhe-chen/resume",
"max_issues_repo_path": "resume.tex",
"max_line_length": 280,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8e9a8246b790d5d3e642f02236df45189197f23f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "yanzhe-chen/resume",
"max_stars_repo_path": "resume.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2107,
"size": 7179
} |
%
\documentclass[twoside]{article}
\setlength{\oddsidemargin}{0.25 in}
\setlength{\evensidemargin}{-0.25 in}
\setlength{\topmargin}{-0.6 in}
\setlength{\textwidth}{6.5 in}
\setlength{\textheight}{8.5 in}
\setlength{\headsep}{0.75 in}
\setlength{\parindent}{0 in}
\setlength{\parskip}{0.1 in}
%
% ADD PACKAGES here:
%
\usepackage{amsmath,amsfonts,graphicx}
%
\newcounter{lecnum}
\renewcommand{\thepage}{\thelecnum-\arabic{page}}
\renewcommand{\thesection}{\thelecnum.\arabic{section}}
\renewcommand{\theequation}{\thelecnum.\arabic{equation}}
\renewcommand{\thefigure}{\thelecnum.\arabic{figure}}
\renewcommand{\thetable}{\thelecnum.\arabic{table}}
%
% The following macro is used to generate the header.
%
\newcommand{\lecture}[2]{
\pagestyle{myheadings}
\thispagestyle{plain}
\newpage
\setcounter{lecnum}{#1}
\setcounter{page}{1}
\noindent
\begin{center}
\framebox{
\vbox{\vspace{2mm}
\hbox to 6.28in { {\bf EE302 - Feedback Systems
\hfill Spring 2019} }
\vspace{4mm}
\hbox to 6.28in { {\Large \hfill Lecture #1 \hfill} }
\vspace{2mm}
\hbox to 6.28in { {\it Lecturer: #2 \hfill } }
\vspace{2mm}}
}
\end{center}
\markboth{Lecture #1}{Lecture #1}
\vspace*{4mm}
}
%
\renewcommand{\cite}[1]{[#1]}
\def\beginrefs{\begin{list}%
{[\arabic{equation}]}{\usecounter{equation}
\setlength{\leftmargin}{2.0truecm}\setlength{\labelsep}{0.4truecm}%
\setlength{\labelwidth}{1.6truecm}}}
\def\endrefs{\end{list}}
\def\bibentry#1{\item[\hbox{[#1]}]}
%Use this command for a figure; it puts a figure in wherever you want it.
%usage: \fig{NUMBER}{SPACE-IN-INCHES}{CAPTION}
\newcommand{\fig}[3]{
\vspace{#2}
\begin{center}
Figure \thelecnum.#1:~#3
\end{center}
}
% Use these for theorems, lemmas, proofs, etc.
\newtheorem{theorem}{Theorem}[lecnum]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newenvironment{proof}{{\bf Proof:}}{\hfill\rule{2mm}{2mm}}
% **** IF YOU WANT TO DEFINE ADDITIONAL MACROS FOR YOURSELF, PUT THEM HERE:
\begin{document}
% Lecture Details
\lecture{18}{Asst. Prof. M. Mert Ankarali}
\par
\section{The Bode Plot}
Previously, we showed how to illustrate the frequency response
function of an LTI system, $G(j \omega)$, using the polar plot
and the Nyquist plot. In the Bode plot, the gain $| G(j \omega) |$ and the phase
response $\angle [ G(j \omega) ]$ of the system are illustrated
separately as functions of frequency, $\omega$. In both diagrams a
logarithmic scale is used for the frequency axis. On the magnitude axis
we use a logarithmic scale in dB units, whereas for the phase axis we
use a linear scale. Specifically, the magnitude in dB scale, $M_{dB}$,
and the phase response, $\phi$, of a transfer
function $G( j \omega)$ are computed as
\begin{align*}
M_{dB}(\omega) &= 20 \mathrm{log}_{10} | G( j \omega ) |
\\
\phi(\omega) &= \angle [ G( j \omega ) ]
\end{align*}
Now let's write $G(s)$ in pole-zero-gain form and analyze the
magnitude (dB) and phase functions
\begin{align*}
G(s) =& K \frac{ (s - z_1) \cdots (s - z_N) }{ (s - p_1) \cdots (s -
p_M) }
\\
M_{dB}\lbrace G(s) \rbrace =& 20 \mathrm{log}_{10} | G( j \omega ) |
\\
=& 20 \mathrm{log}_{10} |K| + \left[ 20 \mathrm{log}_{10} | j \omega
- z_1 | + \cdots + 20 \mathrm{log}_{10} | j \omega - z_N | \right]
\\
& - \left[ 20 \mathrm{log}_{10} | j \omega
- p_1 | + \cdots + 20 \mathrm{log}_{10} | j \omega - p_M | \right]
\\
=& K_{dB} + \left[ M_{dB} \lbrace s - z_1 \rbrace + \cdots + M_{dB}
\lbrace s - z_N \rbrace \right]
- \left[ M_{dB} \lbrace s - p_1 \rbrace + \cdots + M_{dB}
\lbrace s - p_M \rbrace \right]
\\
\phi \lbrace G(s) \rbrace =& \angle [ G( j \omega ) ]
\\
=& \left( \angle [ j \omega - z_1 ] + \cdots + \angle [ j
\omega - z_N ] \right) - \left( \angle [ j \omega - p_1 ] +
\cdots + \angle [ j \omega - p_M ] \right)
\\
=& \left( \phi \lbrace s - z_1 \rbrace + \cdots + \phi \lbrace s -
z_N \rbrace \right) - \left( \phi \lbrace s - p_1 \rbrace +
\cdots + \phi \lbrace s - p_M \rbrace \right)
\end{align*}
In conclusion, in order to obtain a Bode diagram, we can
first find the phase and magnitude (dB) contributions associated
with each pole/zero/gain separately for a given frequency.
After that, the final magnitude (dB) and phase of $G(s)$
are found by simply adding (and subtracting) the individual components.
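As a quick numeric check (an illustrative example added here; the
transfer function is chosen arbitrarily), consider $G(s) = \frac{10}{s+1}$
at $\omega = 1 \ rad/s$:
\begin{align*}
M_{dB}(1) &= 20 \, \mathrm{log}_{10} 10 - 20 \, \mathrm{log}_{10} | j + 1 |
= 20 - 20 \, \mathrm{log}_{10} \sqrt{2} \approx 20 - 3 = 17 \ \mathrm{dB}
\\
\phi(1) &= 0^o - \angle [ j + 1 ] = -45^o
\end{align*}
which matches the direct computation, since $| G(j 1) | = 10 / \sqrt{2}
\approx 7.07$ and $20 \, \mathrm{log}_{10} 7.07 \approx 17 \ \mathrm{dB}$.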
\newpage
\textbf{Bode Plots of $s$ and $\frac{1}{s}$:} Let's write the
magnitude and phase functions
\begin{align*}
s \ \Rightarrow \ M_{dB}(\omega) &= 20 \mathrm{log}_{10} ( \omega ) \quad \& \quad \phi (\omega) = 90^o
\\
\frac{1}{s} \ \Rightarrow \ M_{dB} ( \omega ) &= - 20 \mathrm{log}_{10} ( \omega ) \quad \& \quad \phi (\omega) = -90^o
\end{align*}
If we illustrate these responses in a Bode plot, we obtain the following figure.
\vspace{6 pt}
\begin{minipage}[h]{1\linewidth}
\begin{center}
\includegraphics[width=0.8\textwidth]{bode_DandI}
\end{center}
\end{minipage}
\vspace{6pt}
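Both gain curves are straight lines with slopes of $\pm 20$ dB/decade that
cross $0$ dB at $\omega = 1 \ rad/s$. As a quick check (added here): at
$\omega = 100 \ rad/s$ the differentiator $s$ gives
$M_{dB} = 20 \, \mathrm{log}_{10} 100 = 40 \ \mathrm{dB}$, while the
integrator $\frac{1}{s}$ gives $-40 \ \mathrm{dB}$.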
\subsection*{Bode Plots of First-Order Forms}
First let's analyze the phase and magnitude (dB) response of $G(s) = s + 1$
%
\begin{align*}
M_{dB} (\omega) &= 20 \, \mathrm{log}_{10} | G( j \omega) | = 20 \, \mathrm{log}_{10} \left( \omega^2 + 1 \right)^{1/2} = 10 \, \mathrm{log}_{10} \left( \omega^2 + 1 \right)
\\
\phi(\omega) &= \arctan \omega
\end{align*}
%
Now we will approximate the gain and phase curves using piece-wise continuous straight lines.
First, let's approximate the magnitude response
%
\begin{align*}
\mathrm{Low-Frequency} \ \Rightarrow \ M_{dB}(\omega) &\approx 0 \
\mathrm{dB} \\
\mathrm{High-Frequency} \ \Rightarrow \ M_{dB}(\omega) &\approx 20 \,
\mathrm{log}_{10} ( \omega )
\end{align*}
%
Note that the high-frequency and low-frequency approximations intersect at
the point $\omega = 1 \ rad/s$, $M_{dB} = 0 \ \mathrm{dB}$. Now
let's approximate the phase response
%
\begin{align*}
\mathrm{Low-Frequency} \ \Rightarrow \ \phi &\approx 0^o \\
\mathrm{High-Frequency} \ \Rightarrow \ \phi &\approx 90^o \\
\mathrm{Medium-Frequency} \ \Rightarrow \ \phi & \approx 45^o + 45^o \, \mathrm{log}_{10} ( \omega )
\end{align*}
%
Note that the low-frequency and mid-frequency approximations intersect when $\omega = 0.1 \ rad/s$,
whereas the high-frequency and mid-frequency approximations intersect when $\omega = 10 \ rad/s$.
The corner frequency of this ``system'' is $\omega_c = 1 \ rad/s$. Note that it is very easy to obtain
the Bode approximations of $G(s) = \frac{1}{s+1}$ if we know the Bode approximations of $G(s) = s+1$: we
simply multiply both the magnitude (dB) and phase responses of $s+1$ by $(-1)$. The figure below
illustrates the original Bode plots (solid curves) of $G_1(s) = (s+1)$ and $G_2(s) = \frac{1}{s+1}$ as well as
their approximations (dashed lines).
\vspace{6 pt}
\begin{minipage}[h]{1\linewidth}
\begin{center}
\includegraphics[width=0.9\textwidth]{sp1}
\end{center}
\end{minipage}
\vspace{6 pt}
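A useful numeric remark (added here; these are standard facts about
first-order asymptotes): the worst-case magnitude error of the
straight-line approximation occurs at the corner frequency, where the
actual value is $10 \, \mathrm{log}_{10} 2 \approx 3 \ \mathrm{dB}$ while
the approximation gives $0 \ \mathrm{dB}$. The phase approximation is exact
at the corner ($45^o$), and its largest error, about $5.7^o$, occurs near
$\omega = 0.1$ and $\omega = 10 \ rad/s$, since $\arctan (0.1) \approx 5.7^o$.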
Now let's analyze the phase and magnitude (dB) response of
%
\begin{align*}
G(s) = T s + 1 = \frac{s + a}{a} \ \mathrm{where} \ a = \frac{1}{T}
\end{align*}
%
Magnitude and phase functions can be obtained as
\begin{align*}
M_{dB} &= 20 \, \mathrm{log}_{10} | G( j \omega) | = 20 \, \mathrm{log}_{10} \left( T^2 \omega^2 + 1 \right)^{1/2} = 10 \, \mathrm{log}_{10} \left( T^2 \omega^2 + 1 \right)
\\
\phi &= \arctan T \omega
\end{align*}
%
If we follow a similar approach, we can approximate the gain curves as
%
\begin{align*}
\mathrm{Low-Frequency}: \omega \leq \frac{1}{T} = a \ \Rightarrow \ M_{dB} &\approx 0 \ \mathrm{dB} \\
\mathrm{High-Frequency}: \omega \geq \frac{1}{T} = a \ \Rightarrow \ M_{dB} &\approx 20 \, \mathrm{log}_{10} ( T \omega )
\end{align*}
%
Note that the high-frequency and low-frequency approximations intersect at
the point $\omega = a \ rad/s = \frac{1}{T} \ rad/s$, $M_{dB} = 0 \ \mathrm{dB}$. Now
let's approximate the phase response
%
\begin{align*}
\mathrm{Low-Frequency}: \omega \leq \frac{0.1}{T} = 0.1 a \ \Rightarrow \ \phi &\approx 0^o \\
\mathrm{High-Frequency}: \omega \geq \frac{10}{T} = 10 a \ \Rightarrow \ \phi &\approx 90^o \\
\mathrm{Medium-Frequency} \ \Rightarrow \ \phi & \approx 45^o + 45^o \, \mathrm{log}_{10} ( T \omega )
\end{align*}
%
Note that the low-frequency and mid-frequency approximations intersect
when $\omega = 0.1 a \ rad/s$,
whereas the high-frequency and mid-frequency approximations intersect when
$\omega = 10 a \ rad/s$.
The corner frequency of this system is $\omega_c = a \ rad/s = 1/T \
rad/s$. Note that in order to obtain the Bode plot of $T s + 1$, we
simply shift the Bode plot of $s + 1$ along the $\omega$ axis.
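For instance (a remark added here to connect with the example below):
$s + 10 = 10 \, ( 0.1 s + 1 )$, so the magnitude plot of $s + 10$ is that
of $0.1 s + 1$ (corner at $10 \ rad/s$) shifted up by
$20 \, \mathrm{log}_{10} 10 = 20 \ \mathrm{dB}$, while the phase plot is
simply that of $0.1 s + 1$.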
\vspace{6pt}
\textbf{Ex:} The figure below
illustrates the original Bode plots (solid curves) of $G_1(s) = (s+10)$ and $G_2(s) = \frac{1}{s+10}$ as well as
their approximations (dashed lines).
\vspace{6 pt}
\begin{minipage}[h]{1\linewidth}
\begin{center}
\includegraphics[width=0.9\textwidth]{spa}
\end{center}
\end{minipage}
\vspace{6 pt}
\subsection*{Bode Plots of Second-Order Forms}
A unity-gain standard second-order system can be written in the form
%
\begin{align*}
G(s) = \frac{\omega_n^2}{s^2 + 2 \zeta \omega_n s + \omega_n^2}
= \frac{1}{\frac{1}{\omega_n^2} s^2 + \frac{2 \zeta}{\omega_n} s + 1}
\end{align*}
%
\textbf{Case 1: $\zeta = 1$} First, let's analyze the Bode plots for the critically damped case,
%
\begin{align*}
G(s) = \frac{1}{\frac{1}{\omega_n^2} s^2 + \frac{2 \zeta}{\omega_n} s + 1}
= \frac{1}{(\frac{s}{\omega_n} + 1)^2}
\end{align*}
%
We can easily observe that the magnitude and phase functions can be
obtained as
\begin{align*}
M_{dB} \lbrace G(s) \rbrace &= 20 \, \mathrm{log}_{10} | G( j \omega) | = 20 \, \mathrm{log}_{10} \left| \frac{1}{\frac{j \omega}{\omega_n} + 1} \right|^2
\\
&= 2 M_{dB} \left\lbrace \frac{1}{\frac{s}{\omega_n} + 1} \right\rbrace
\\
\phi \lbrace G(s) \rbrace &= 2 \phi \left\lbrace \frac{1}{\frac{s}{\omega_n} + 1} \right\rbrace
\end{align*}
\textbf{Ex:} The figure below
illustrates the original Bode plots (solid curves) of $G_1(s) = \frac{10}{s+10}$ and $G_2(s) = \frac{100}{(s+10)^2}$ as well as
their approximations (dashed lines).
\vspace{6 pt}
\begin{minipage}[h]{1\linewidth}
\begin{center}
\includegraphics[width=0.9\textwidth]{second}
\end{center}
\end{minipage}
\vspace{6 pt}
\textbf{Case 2: $\zeta > 1$} The over-damped case is simply the combination of two distinct first-order
systems.
\textbf{Ex:} Let's analyze the Bode plots for the following system
%
\begin{align*}
G(s) = \frac{10}{(s+1)(s+10)}
= \frac{1}{ s + 1 } \frac{10}{ s + 10 }
\end{align*}
%
We can easily observe that
\begin{align*}
M_{dB} \lbrace G(s) \rbrace &= M_{dB} \left\lbrace \frac{1}{s+1} \right\rbrace +
M_{dB} \left\lbrace \frac{10}{s+10} \right\rbrace
\\
\phi \lbrace G(s) \rbrace &= \phi \left\lbrace \frac{1}{s + 1} \right\rbrace +
\phi \left\lbrace \frac{10}{s + 10} \right\rbrace
\end{align*}
%
The figure below
illustrates the original Bode plots (solid curves) of $G_1(s) = \frac{1}{s+1}$, $G_2(s) = \frac{10}{s+10}$ and $G_3(s) = \frac{10}{(s+1) (s+10)}$ as well as their approximations (dashed lines).
\vspace{6 pt}
\begin{minipage}[h]{1\linewidth}
\begin{center}
\includegraphics[width=0.9\textwidth]{over}
\end{center}
\end{minipage}
\newpage
\textbf{Case 3: $\zeta < 1$} For under-damped systems, the corner frequencies
of the piece-wise linear approximations are unchanged. Thus, we use the same
approximation as in the critically damped case. However, as the damping ratio
decreases, we may observe larger differences between the actual Bode plot
and the approximation.
\textbf{Ex:} The figure below
illustrates the original Bode plots (solid curves) for several values of the
damping ratio as well as their approximations (dashed lines).
\vspace{6 pt}
\begin{minipage}[h]{1\linewidth}
\begin{center}
\includegraphics[width=0.9\textwidth]{under}
\end{center}
\end{minipage}
\vspace{6 pt}
We can see that the best phase matching between the actual Bode plot and the approximation is achieved when
$\zeta = 1$; however, somewhat surprisingly, the best magnitude matching between the actual Bode plot and the
approximation is not achieved at $\zeta = 1$: the best match in magnitude is achieved when $\zeta = 1/\sqrt{2}$.
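For reference (standard second-order facts added here, not part of the
original figure discussion): for $\zeta < 1/\sqrt{2}$ the actual magnitude
response exhibits a resonant peak that the straight-line approximation
cannot capture,
\begin{align*}
M_r = \frac{1}{2 \zeta \sqrt{1 - \zeta^2}} \quad \mathrm{at} \quad
\omega_r = \omega_n \sqrt{1 - 2 \zeta^2} \ ,
\end{align*}
and at the corner frequency $| G(j \omega_n) | = \frac{1}{2 \zeta}$,
i.e., $-20 \, \mathrm{log}_{10} ( 2 \zeta )$ dB relative to the $0$ dB
asymptote. This is why the approximation degrades as $\zeta$ decreases.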
\newpage
\section{Gain \& Phase Margin from Bode Plots}
We already know that for a feedback system, phase and
gain margins can be computed based on the Frequency Response
function of the open-loop transfer function, $G_{OL}(j \omega)$ (under
some assumption regarding the system properties).
Specifically, the phase crossover frequency, $\omega_{pc}$ and the gain margin,
$g_m$ (linear scale) and $G_m$ (dB scale), can be computed as
%
\begin{align*}
\angle [ G_{OL}(j \omega_{pc}) ] &= \pm 180^o
\quad \Rightarrow \quad
g_m = \frac{1}{| G_{OL}(j \omega_{pc})| } \quad \mathrm{or} \quad G_m
= -20 \, \mathrm{log}_{10} | G_{OL}(j \omega_{pc} ) |
\end{align*}
%
whereas the gain crossover frequency, $\omega_{gc}$, and the
phase margin can be computed as
%
\begin{align*}
| G_{OL}(j \omega_{gc}) | &= 1 \quad \mathrm{or} \quad
M_{dB} \lbrace G_{OL}(j \omega_{gc}) \rbrace = 0 \ \mathrm{dB}
\quad \Rightarrow \quad
\phi_m = \pi + \angle G_{OL} (j \omega_{gc})
\end{align*}
%
Indeed, it is generally easier to derive the phase and gain margins of
a system from the Bode plots than from the Nyquist plot.
\textbf{Ex:} Compute the phase margin
for the following closed-loop system for $K = 2$ and $K = 4$,
both from the approximate and actual Bode plots.
\begin{center}
\begin{minipage}[h]{\linewidth}
\begin{center}
\includegraphics[width=0.45\textwidth]{ex2block}
\end{center}
\end{minipage}
\end{center}
The actual Bode plots for both gains are illustrated
below.
\begin{center}
\begin{minipage}[h]{\linewidth}
\begin{center}
\includegraphics[width=0.7\textwidth]{marginactual}
\end{center}
\end{minipage}
\end{center}
If we label the gain crossover frequencies and find the
corresponding phase values, we can easily compute the phase
margins as
%
\begin{align*}
K &= 2 \Rightarrow \phi_m = 90^o \quad (\omega_{gc} = 1 \ rad/s)
\\
K &= 4 \Rightarrow \phi_m = 60^o \quad (\omega_{gc} \approx 1.8 \ rad/s)
\end{align*}
%
These results verify the actual phase margin values that we previously computed
using the Nyquist plot. Note that $G_m$ is infinity for both cases.
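As a sanity check (an inference added here, since the loop transfer
function appears only in the block diagram figure): these numbers are
consistent with an open loop of the form $G_{OL}(s) = \frac{K}{(s+1)^2}$.
For $K = 2$, solving $| G_{OL}(j \omega) | = \frac{2}{1 + \omega^2} = 1$
gives $\omega_{gc} = 1 \ rad/s$ and
$\phi_m = 180^o - 2 \arctan (1) = 90^o$; for $K = 4$,
$\omega_{gc} = \sqrt{3} \approx 1.8 \ rad/s$ and
$\phi_m = 180^o - 2 \arctan ( \sqrt{3} ) = 60^o$.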
Now let's overlay the approximate Bode plots (dashed lines)
on top of the actual ones, as shown in the figure below.
\begin{center}
\begin{minipage}[h]{\linewidth}
\begin{center}
\includegraphics[width=0.7\textwidth]{margin}
\end{center}
\end{minipage}
\end{center}
If we compute the phase margins based
on the approximate Bode plots, we obtain
%
\begin{align*}
K &= 2 \Rightarrow \phi_m \approx 100^o
\\
K &= 4 \Rightarrow \phi_m \approx 63^o
\end{align*}
\newpage
\textbf{Ex:} Compute the gain margin and phase margin
for the following closed-loop system for $K = 1$ and $K = 8$
both from the actual and approximate Bode plots.
\begin{center}
\begin{minipage}[h]{\linewidth}
\begin{center}
\includegraphics[width=0.5\textwidth]{ex3block}
\end{center}
\end{minipage}
\end{center}
First, let's analyze $K = 1$. The figure below illustrates the
actual and approximate Bode plots of $G(s) = \frac{1}{(s+1)^3}$.
\begin{center}
\begin{minipage}[h]{\linewidth}
\begin{center}
\includegraphics[width=0.75\textwidth]{margin2}
\end{center}
\end{minipage}
\end{center}
We can see that the phase margin is $180^o$ for the given system,
since $\omega_{gc} = 0 \ \rightarrow \ \phi (\omega_{gc} ) = 0^o$.
On the other hand, we can derive the following gain margin estimates
from the actual and approximate Bode plots
%
\begin{align*}
&\mathrm{Actual:} \quad G_m \approx 18 \ \mathrm{dB} \quad \rightarrow \quad g_m \approx 8
\\
&\mathrm{Approximate:} \quad G_m \approx 20 \ \mathrm{dB} \quad \rightarrow \quad g_m \approx 10
\end{align*}
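The actual value can also be verified analytically (a check added here):
the phase crossover satisfies $3 \arctan ( \omega_{pc} ) = 180^o$, so
$\omega_{pc} = \tan ( 60^o ) = \sqrt{3} \ rad/s$, and
\begin{align*}
| G(j \omega_{pc}) | = \frac{1}{( 1 + \omega_{pc}^2 )^{3/2}} = \frac{1}{8}
\quad \Rightarrow \quad
g_m = 8 \ , \quad G_m = 20 \, \mathrm{log}_{10} 8 \approx 18 \ \mathrm{dB}
\end{align*}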
%
Now let's analyze $K = 8$. The figure below illustrates the
actual Bode plots of $G(s) = \frac{8}{(s+1)^3}$.
\begin{center}
\begin{minipage}[h]{\linewidth}
\begin{center}
\includegraphics[width=0.7\textwidth]{margin3actual}
\end{center}
\end{minipage}
\end{center}
We can see from the actual Bode plot that $G_m = 0 \ \mathrm{dB}$
and $\phi_m = 0^o$.
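This can also be verified directly (a check added here): with $K = 8$, the
gain and phase crossover frequencies coincide at $\omega = \sqrt{3} \ rad/s$,
since $\frac{8}{( 1 + 3 )^{3/2}} = 1$ and $3 \arctan ( \sqrt{3} ) = 180^o$;
hence both margins vanish and the closed-loop system is marginally stable.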
However, if we draw the approximate Bode plots
\begin{center}
\begin{minipage}[h]{\linewidth}
\begin{center}
\includegraphics[width=0.7\textwidth]{margin3}
\end{center}
\end{minipage}
\end{center}
and estimate these margins we obtain
%
\begin{align*}
&G_m \approx 2 \ \mathrm{dB} \quad \rightarrow \quad g_m \approx 1.2
\\
&\phi_m = 2^o
\end{align*}
%
As we can see, from the approximate plots we compute positive
phase and gain margins and thus conclude that the system is stable.
However, these margins are very small. This indicates that if one
analyzes the stability of a system using approximate
Bode plots, one should require significant phase and gain margins
before commenting on stability.
\textbf{Ex:} Compute the phase margin for the following closed-loop system
using the actual and approximate Bode plots.
\begin{center}
\begin{minipage}[h]{\linewidth}
\begin{center}
\includegraphics[width=0.3\textwidth]{ex4block}
\end{center}
\end{minipage}
\end{center}
The figure below illustrates the actual and approximate Bode plots.
\begin{minipage}[h]{1\linewidth}
\begin{center}
\includegraphics[width=0.75\textwidth]{type1}
\end{center}
\end{minipage}
We can derive the following phase margin computations from the
actual and approximate Bode plots
%
%
\begin{align*}
&\mathrm{Actual:} \quad \phi_m \approx 52.5^o \quad (\omega_{gc} \approx 0.8 \ rad/s)
\\
&\mathrm{Approximate:} \quad \phi_m \approx 45^o \quad (\omega_{gc} = 1 \ rad/s)
\end{align*}
%
% **** This ENDS THE EXAMPLES. DON'T DELETE THE FOLLOWING LINE:
\end{document}
%%% PLEASE RUN A SPELL CHECKER BEFORE COMMITTING YOUR CHANGES!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Development Release Series 7.1}\label{sec:History-7-1}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
This is the development release series of Condor.
The details of each version are described below.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-7-1-4}Version 7.1.4}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\noindent Release Notes:
\begin{itemize}
\item The owner of the log file for the \Condor{vm-gahp}
has changed to the \Login{condor} user.
In Condor 7.1.2 and previous versions, it was owned by the
user that the virtual machine is started under.
Therefore, the owner of and permissions on an existing log file
are likely to be incorrect.
%Condor issues an error if the \Condor{gridmanager} is unable
%to read and write the existing file.
To correct the problem, an administrator may modify file
permissions such that the \Login{condor} user may read and
write the log file.
Alternatively, an administrator may delete the file, and
Condor will create a new file with the expected owner and
permissions.
In addition, the definition for \Macro{VM\_GAHP\_LOG}
in the \File{condor\_config.generic} file has changed for
Condor 7.1.3.
\item The \SubmitCmd{vm} universe no longer supports the use of
the \SubmitCmd{xm}
command for running Xen virtual machines. The \SubmitCmd{virsh} tool
should be used instead.
\item Condor no longer supports the standard universe feature in its
ports to Solaris. We may resurrect this feature in the future if demand
for it on this port grows again to sufficient levels.
\end{itemize}
\noindent New Features:
\begin{itemize}
\item Local entries in the configuration file may now be specified
by pre-pending a local name and a period to the normal name.
Local settings take precedence over the other settings.
The local name can be specified on the command line to all daemons via
the new \Opt{-local-name} command line option.
See section~\ref{sec:Config-File-Macros}
for more details on how the local name will be used in the configuration,
and section~\ref{sec:DaemonCore-Arguments}
for more details on the command line parameters. A short illustrative
sketch is given just after this list.
\item Dynamic Startd Provisioning: New configuration options allow for slots
to be broken into job-sized pieces. While this feature is still under
ongoing development, we felt that what we had so far, although not yet
fulfilling our complete vision, is useful enough in its present form to
bring value to some installations.
% PR 947
\item \Condor{submit\_dag} is now automatically run recursively on
nested DAGs (unless the new \Opt{-no\_recurse} option is specified).
See \pageref{sec:DAGsinDAGs} for details.
% PR 947
\item Added the new \MacroNI{SUBDAG EXTERNAL} keyword (for specifying nested
DAGs) to \Condor{dagman}. See \pageref{sec:DAGsinDAGs} for details.
\item It is now possible to have multiple rotations of the ``event
log'' file, such as ``EventLog'', ``EventLog.1'', ``EventLog.2'', ...
\item The VM universe can now run VMware virtual machines on machines using
privilege separation without requiring the \Condor{vm-gahp} binary to be
setuid root. Running the \Condor{vm-gahp} as setuid root is no longer
supported for VMware or Xen.
\item Condor now supports the ability for the \Condor{master} to run a
program as it shuts down. This can be particularly useful for doing
a graceful shutdown followed by a reboot. This is
accomplished through the new
\MacroNI{MASTER\_SHUTDOWN\_\lt{}Name\gt{}} configuration variable.
The configuration variable \MacroNI{MASTER\_SHUTDOWN\_\lt{}Name\gt{}}
is defined on page~\pageref{param:MasterShutdownProgram},
and the manual page for \Condor{set\_shutdown}
is on page~\pageref{man-condor-set-shutdown}.
\item The \Condor{lease\_manager} is a new daemon. It
provides a mechanism for managing leases to resources described by
Condor's ClassAd mechanism. These resources and leases are
persistent.
\item VM universe now works with privilege separation (PrivSep)
for VMware jobs. Xen is still not supported in PrivSep mode.
\item Added the \Arg{DIR} directive for the \MacroNI{SPLICE} keyword in
the DAGMan language.
Please read section~\ref{sec:DAGSplicing} on page \pageref{sec:DAGSplicing} for
more information.
\item For gt4 type grid jobs (that is, WS GRAM), Condor now includes a
request in the RSL job description to retry failed attempts at file clean-up.
\item Improved the scalability of some algorithms used by the
\Condor{schedd} and \Condor{negotiator} when dealing with large
numbers of startds.
\item Added the ability for the \Condor{master} (actually, any
DaemonCore process with children) to kill child
processes that have stopped responding with SIGABRT instead of SIGKILL.
This is for debugging purposes on UNIX systems, and is controlled by
the new \MacroNI{NOT\_RESPONDING\_WANT\_CORE} configuration
parameter. If the child process is configured with
\MacroNI{CREATE\_CORE\_FILES} enabled, the child process will then
generate a core dump.
This feature is currently implemented only on UNIX systems.
See
\MacroNI{NOT\_RESPONDING\_WANT\_CORE}
on page \pageref{param:NotRespondingWantCore},
\MacroNI{NOT\_RESPONDING\_TIMEOUT}
on page~\pageref{param:NotRespondingTimeout},
and
\MacroNI{CREATE\_CORE\_FILES}
on page \pageref{param:CreateCoreFiles}
for more details.
\item Condor can now be configured to keep a backup of the job queue
log on a local file system in case \Condor{schedd} operations
involving writes, flushes, or syncs to the job queue log fail. This
is most likely to happen when the job queue log is stored on a
network file system like NFS. Such a backup enables an administrator
to see that a job failed to submit, but does not perform any
automatic recovery. See below for these configuration parameters.
\item Added preliminary support for ``Green Computing''. This is
supported only on Linux and Windows.
See section~\ref{sec:power-man} on page~\pageref{sec:power-man} on
``Power Management'' for more details.
\end{itemize}
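As a short illustrative sketch of the local-name feature above (the file
names and values here are hypothetical, chosen only for illustration; the
\Expr{name.parameter} syntax and the \Opt{-local-name} option are as
described in the entry):
\begin{verbatim}
  # In the configuration file:
  LOG         = /var/log/condor
  TESTING.LOG = /var/log/condor-testing

  # A daemon started with "-local-name TESTING" sees
  # LOG = /var/log/condor-testing; all other daemons
  # see LOG = /var/log/condor.
\end{verbatim}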
\noindent Configuration Variable Additions and Changes:
\begin{itemize}
\item Local versions of configuration parameters can now be specified
via the use of the \Opt{-local-name} command line parameter (see the
``New Features'' entry above).
\item A new configuration parameter
\MacroNI{EVENT\_LOG\_MAX\_ROTATIONS} has been added to allow
multiple rotations of the event log file.
See \pageref{param:EventLogMaxRotations} for details.
\item A new configuration parameter
\MacroNI{EVENT\_LOG\_ROTATION\_LOCK} has been added to allow
configuration of an alternate file for Condor to use while
rotating event log files.
See \pageref{param:EventLogRotationLock} for details.
\item The configuration parameter \MacroNI{MAX\_EVENT\_LOG} has been
renamed to \MacroNI{EVENT\_LOG\_MAX\_SIZE}. For backward
compatibility, if \MacroNI{EVENT\_LOG\_MAX\_SIZE} is not defined,
Condor will also try \MacroNI{MAX\_EVENT\_LOG}.
See \pageref{param:EventLogMaxSize} for details.
\item The \Condor{vm-gahp} no longer requires its own configuration
file. It now uses the normal Condor configuration file. Parameters
that used to reside in the \Condor{vm-gahp}'s file should now be placed
in the Condor configuration file.
\item The following VM universe-related configuration parameters have
been removed:
\begin{itemize}
\item \MacroNI{VM\_GAHP\_CONFIG}
\item \MacroNI{VM\_MAX\_MEMORY}
\item \MacroNI{XEN\_CONTROLLER}
\item \MacroNI{XEN\_VIF\_PARAMETER}
\item \MacroNI{XEN\_NAT\_VIF\_PARAMETER}
\item \MacroNI{XEN\_BRIDGE\_VIF\_PARAMETER}
\item \MacroNI{XEN\_IMAGE\_IO\_TYPE}
\end{itemize}
\MacroNI{VMWARE\_LOCAL\_SETTINGS\_FILE}
and \MacroNI{XEN\_LOCAL\_SETTINGS\_FILE} have been added. They allow
a machine administrator to add settings to the virtual machine
configuration files written by Condor for VMware and Xen.
See \pageref{param:VMwareLocalSettingsFile} and
\pageref{param:XenLocalSettingsFile} for details.
\item The configuration parameter family
\MacroNI{MASTER\_SHUTDOWN\_\lt{}Name\gt{}} can be used in conjunction
with \Condor{set\_shutdown} to cause the \Condor{master} to execute
a specified program as it shuts down. See
\pageref{param:MasterShutdownProgram} and \Condor{set\_shutdown}
manual page for more details.
\item The configuration parameter
\MacroNI{NOT\_RESPONDING\_WANT\_CORE} controls the type of signal
sent to child processes that DaemonCore has determined are no longer
responding. See the above discussion of the addition of this
feature and \MacroNI{NOT\_RESPONDING\_WANT\_CORE} on page
\pageref{param:NotRespondingWantCore} for details.
\item The configuration parameter \Macro{LOCAL\_QUEUE\_BACKUP\_DIR}
should be set to the pathname of a directory that is writable by
the Condor user and is located on a non-network file system.
This is part of the ``Job Queue Backup'' feature, above.
\item The configuration parameter
\Macro{LOCAL\_XACT\_BACKUP\_FILTER} controls whether or not the
\Condor{schedd} will attempt to keep backups of transactions that
were not written to the job queue log. If it is set to
\Expr{FAILED}, the \Condor{schedd} will attempt to keep a backup
of the transaction in the local queue backup directory,
defined by \MacroNI{LOCAL\_QUEUE\_BACKUP\_DIR},
only if operations fail on the job queue log. If it is set
to \Expr{NONE}, no backups are performed even in the
event of failure. If it is set to \Expr{ALL}, then all
transactions are backed up. The \Expr{ALL} value will
create quite a large number of files and slow the \Condor{schedd}
substantially; it is only likely to be useful for users who are
developing or debugging Condor.
This is part of the Job Queue Backup feature.
\end{itemize}
\noindent Bugs Fixed:
\begin{itemize}
\item In some rare cases, the \Condor{startd} failed to fully preempt jobs.
The job itself was killed, but the \Condor{starter} process watching over
it would not be killed. The slot would then stay in the Preempting state
indefinitely.
\item \Condor{q} performed poorly when querying a remote pool, using
\Opt{-pool}. It was using an older latency-bound protocol even when
the remote \Condor{schedd} was new enough to use the improved protocol
that first appeared in version 6.9.3.
\item When using \Macro{USE\_VISIBLE\_DESKTOP}, the user's (slot or owner)
access-control entry is now removed from the Desktop's access-control list. This
fixes the previous behavior where users were added and never removed,
resulting in an overflow of the access-control list, which can only contain
a fixed number of access-control entries.
\item Fixed a bug where if log line caching was enabled in \Condor{dagman}
and \Condor{dagman} failed during the recovery process, the cache would
stay active. Now the cache is disabled in all cases at the end of recovery.
\item Fixed a couple of bugs relevant only to the \Macro{GLEXEC\_STARTER}
mode of operation. One bug would result in the \Macro{SPOOL} directory being
deleted if local universe jobs (which are not supported in
\MacroNI{GLEXEC\_STARTER} mode) were submitted. The other bug prevented
COD jobs from running. Neither of these are problems for the newer
recommended \Macro{GLEXEC\_JOB} mode.
\item Fixed a bug that could cause the \Condor{procd} to crash, depending
on the timing of its process snapshots.
\item Fixed a bug that caused job status notifications from WS GRAM 4.2
servers to be lost.
\item Fixed a file descriptor leak in the \Condor{vm-gahp}.
\item Jobs now go on hold with a clear hold reason if a path to a
directory is put in the transfer files list. Previously, the attempt
to run the job would simply fail and return to the idle state.
\item If \Macro{MAX\_EVENT\_LOG} is set to 0, the event log grows without
bound. Previously this behavior was broken, and setting
\Macro{MAX\_EVENT\_LOG} to 0 resulted in the log rotating with every
event. Now it works as documented.
\end{itemize}
\noindent Known Bugs:
\begin{itemize}
\item When fixing the \Macro{USE\_VISIBLE\_DESKTOP} bug, a new one was
inadvertently introduced. The bug manifests irrespective of the definition
of \Macro{USE\_VISIBLE\_DESKTOP}: the new code attempts to remove the current
user's access-control entry from the Desktop's access-control list even when
it was not added by Condor. This has the effect of inhibiting the creation
of new processes for the logged-on user.
\end{itemize}
\noindent Additions and Changes to the Manual:
\begin{itemize}
\item The extra space character injected into the names of Condor
daemons and programs has been removed.
\item Previously undocumented Condor Perl module subroutines have
been documented.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-7-1-3}Version 7.1.3}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\noindent Release Notes:
\begin{itemize}
\item This developer release includes the majority of the bug fixes released
in stable version 7.0.5, including the security patches documented in that
release. See section~\ref{sec:New-7-0-5} below.
\item Updated the version of Globus Toolkit: The Condor binaries are now
linked against Globus v4.2.0.
\item Updated the version of OpenSSL: The Condor binaries are now linked
against OpenSSL 0.9.8h.
\item Updated the version of GCB: The Condor binaries are now linked
against GCB 1.5.6.
\item Changes to the \MacroNI{ALLOW\_*} and \MacroNI{DENY\_*} configuration
variables no longer require the use of the \Opt{-full} option to
\Condor{reconfig} upon reconfiguration.
\end{itemize}
\noindent New Features:
\begin{itemize}
\item Added a new mechanism termed \Term{Concurrency Limits}. This
mechanism allows the Condor pool administrator to define an arbitrary
number of consumable resources in the configuration file of the
matchmaker. The availability of these consumable resources will be taken
into account during the matchmaking process. Individual jobs can specify
how many of each type of consumable resource is required.
Typical applications of Concurrency Limits could include management of
software licenses, database connections, or any other consumable resource
that is external to Condor.
See section~\ref{sec:Concurrency-Limits} for documentation.
\item Added support for Condor to manage serial high throughput computing
workloads on the IBM Blue Gene supercomputer. The IBM Blue Gene/P is now
a supported platform.
\item Extended Job Hooks (see section~\ref{sec:job-hooks}) to allow for
alternate transformation and/or monitoring engines for the Job Router (see
section~\ref{sec:JobRouter}). Routing is still controlled by the Job
Router, but if Job Router Hooks are configured, then external programs or
scripts can be used to transform and monitor the job instead of Condor's
internal engine.
\item Added support for the new protocol for WS GRAM introduced in Globus
4.2. For each WS GRAM resource, Condor automatically determines whether it is
speaking the 4.0 or 4.2 version of the protocol and responds appropriately.
When setting \SubmitCmd{grid\_resource} in the submit file, use
\SubmitCmd{gt4} for both WS GRAM 4.0 and 4.2.
\item Added the ability for Windows slot users to load and run their jobs
within the context of their profile.
This includes the \File{My Documents} directory
hierarchy, its monikers, and the user's registry hive.
To use the profile, add a \SubmitCmd{load\_profile} command to the
submit description file. A current restriction prevents the use of
\SubmitCmd{load\_profile}
in conjunction with \SubmitCmd{run\_as\_owner}. Please refer to
section~\ref{sec:windows-load-profile} for further details.
\item The \File{StarterLog} file for local universe jobs now displays the job id
in each line in the file, so that interleaved messages relevant to
different jobs running concurrently can be identified.
\item Added the \Opt{-AllowVersionMismatch} command line option to
\Condor{submit\_dag} and \Condor{dagman} to (if absolutely necessary)
allow a version mismatch between \Condor{dagman} and the
\File{.condor.sub} file used to submit it.
This also permits a Condor version mismatch between
\Condor{submit\_dag} and \Condor{dagman}.
\item Streamlined the protocol between submit and execute machines; in some
instances, fewer messages will be exchanged over the network.
\item When network requests are denied because of the authorization
policy, Condor now logs an explanation in the log of the daemon that denied
the request. This helps the administrator understand why the policy
denied the request, in case it is not obvious. A similar explanation
may be logged for requests that are accepted. This is only generated
if \Macro{D\_SECURITY} is added to the daemon's debug options.
\end{itemize}
\noindent Configuration Variable Additions and Changes:
\begin{itemize}
\item Added the new configuration variable
\Macro{MAX\_PENDING\_STARTD\_CONTACTS}. This limits the
number of simultaneous connection attempts by the \Condor{schedd} when
it is requesting claims from the \Condor{startd}s. The intention is
to protect the \Condor{schedd} from being overloaded by authentication
operations. The default is 0, which indicates no limit.
\item Added the new configuration variable
\Macro{SEC\_INVALIDATE\_SESSIONS\_VIA\_TCP}, which
defaults to \Expr{True}. Previously, attempts to use an invalid security
session resulted in a UDP rather than a TCP response. In networks with
different firewall rules for UDP and TCP, the filtering of the session
invalidation messages was easily overlooked, since it would not
typically happen during the initial vetting of the pool. If these
packets were filtered out, then at the subsequent \Condor{collector}
restart, no daemons would be able to advertise themselves to the
pool until their existing security sessions expired. The old behavior
can be achieved by setting this configuration parameter to \Expr{False}.
\item Added the new configuration variable
\Macro{SEC\_ENABLE\_MATCH\_PASSWORD\_AUTHENTICATION}.
This is a special authentication mechanism designed to minimize
overhead in the \Condor{schedd} when communicating with the execute
machine. Essentially, matchmaking results in a secret being shared
between the \Condor{schedd} and \Condor{startd}, and this is used to
establish a strong security session between the execute and submit
daemons without going through the usual security negotiation protocol.
This is especially important when operating at large scale over high
latency networks, as in a glidein pool with one submit machine and thousands of
execute machines on a network with 0.1 second round trip times. See
\pageref{param:SecEnableMatchPasswordAuthentication} for
details.
\item Added configuration entry \Macro{GLEXEC\_JOB} which replaces the
functionality previously encapsulated in \Macro{GLEXEC\_STARTER}. Using
\MacroNI{GLEXEC\_JOB} enables privilege separation in Condor via glexec in a
manner much more consistent with how Condor's own privilege separation
mechanism works. Specifically, the user identity switching will now occur
between the \Condor{starter} and the actual user job.
\item Added configuration parameter \Macro{AMAZON\_GAHP\_WORKER\_MAX\_NUM}
to specify a ceiling on the number of threads spawned on the submit
machine to support jobs running on Amazon EC2. Defaults to 5.
\end{itemize}
\noindent Bugs Fixed:
\begin{itemize}
\item Includes bug fixes from Condor v7.0.5, including the security fixes.
See section~\ref{sec:New-7-0-5}.
\item Fixed a bug in the \Condor{schedd} that would cause it to
except if a crontab entry was incorrectly formatted.
\item Fixed a bug in the CondorView server (collector) that caused it
to except (crash) when it received a machine ClassAd without a valid state.
It now logs this under level \MacroNI{D\_ALWAYS} and ignores the ClassAd.
\item Fixed a bug from Condor version 7.1.2 that would cause
Condor daemons to start
consuming a lot of cpu time after rare types of communication failures
during security negotiation.
\item Fixed a bug from Condor version 7.1.2 that in rare cases could cause
Condor to fail to recognize when a call to exec() fails on Unix
platforms.
\item Fixed problems with configuration parameter
\Macro{JOB\_INHERITS\_STARTER\_ENVIRONMENT} when using PrivSep.
\item Improved the deletion of Amazon EC2 jobs when the server is
unreachable.
\item Fixed problems with Condor parallel universe jobs when recovering from
a reboot of the submit machine.
\end{itemize}
\noindent Known Bugs:
\begin{itemize}
\item None.
\end{itemize}
\noindent Additions and Changes to the Manual:
\begin{itemize}
\item None.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-7-1-2}Version 7.1.2}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\noindent Release Notes:
\begin{itemize}
\item None.
\end{itemize}
\noindent New Features:
\begin{itemize}
\item Added \Procedure{formatTime}, a built-in ClassAd function to create a
formatted representation of the time. A detailed description of this
function is available in section~\ref{sec:classadFunctions}, which
documents all of the available built-in ClassAd functions.
\item Improved Condor's authentication handshake, so that daemons such
as the \Condor{schedd}, which initiate connections to other daemons,
spend less time waiting for responses.
Authentication over high latency
networks is still rather expensive in Condor, so it still may be
necessary to scale up by running more \Condor{schedd} and \Condor{collector}
daemons than one would need for equivalent workloads on a low latency network.
Additional improvements in this area are planned.
\end{itemize}
\noindent Configuration Variable Additions and Changes:
\begin{itemize}
\item None.
\end{itemize}
\noindent Bugs Fixed:
\begin{itemize}
\item Fixed a memory leak, introduced in Condor version 7.1.1, which caused the
\Condor{startd} daemon to grow without bound.
% PR 945
\item Fixed a bug in \Condor{dagman} that caused the user log file of
the first node job in a DAG to get created with 0600 permissions,
regardless of the user's umask. Note that this fix involved removing
the \Opt{-condorlog} and \Opt{-storklog} command-line arguments from
\Condor{submit\_dag} and \Condor{dagman}.
\item Fixed a problem from Condor version 7.1.1 that in some cases caused the
\Condor{starter} to stop sending updates about the job status or
to send updates too frequently.
\end{itemize}
\noindent Known Bugs:
\begin{itemize}
\item None.
\end{itemize}
\noindent Additions and Changes to the Manual:
\begin{itemize}
\item None.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-7-1-1}Version 7.1.1}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\noindent Release Notes:
\begin{itemize}
\item None.
\end{itemize}
\noindent New Features:
\begin{itemize}
\item Added a new feature to \Condor{dagman} which caches the log lines
emitted to the \File{dagman.out} file when in recovery mode and emits the
cache as one call to the logging subsystem when the cache size limit is
reached. Under NFS conditions, this prevents an open and close per line
of the log and greatly improves performance. This feature is off by
default and is controlled by \Attr{DAGMAN\_DEBUG\_CACHE\_ENABLE}, which
takes a boolean, and \Attr{DAGMAN\_DEBUG\_CACHE\_SIZE}, which is an
integer specifying, in bytes, how large the cache may grow before flushing.
\item Included some Windows example jobs (submit files and binaries).
\item Added a new feature to the DAGMan language called splicing. Please
read section~\ref{sec:DAGSplicing} on page \pageref{sec:DAGSplicing}.
\item The Prepare Job Hook can now modify the job ClassAd before execution.
For a complete description of the new hook system, read
section~\ref{sec:job-hooks} on page~\pageref{sec:job-hooks}.
\item Condor now coerces the result of \$\$([]) expressions within
submit description files to strings.
This means that submit files can do simple arithmetic.
For example, you can describe a command-line argument as:
arguments = \$\$([\$(PROCESS)+100])
and \Condor{submit} will expand the argument to be the expected value.
\item Condor daemons now periodically update the \Code{ctime} of their
log files, instead of the \Code{mtime}, which they previously updated.
At start up, the daemons use this \Code{ctime}
to determine how long they may have been down.
\item Added the capability to the \Condor{startd} to allow it to power
down machines based on a user-specified policy. See
section~\ref{sec:power-man} on page~\pageref{sec:power-man} on
Power Management for more details.
\item \Condor{off} now supports the \Opt{-peaceful} option for the
\Condor{schedd}, in addition to the existing support that already existed for
the \Condor{startd}. When peacefully shut down,
the \Condor{schedd} stops starting new
jobs and waits for all running jobs to finish before exiting. The
default shut down behavior is still \Opt{-graceful}, which checkpoints
and stops all running standard universe jobs and gracefully
disconnects from other types of jobs in the hopes of later restarting
and reconnecting to them without any disturbance to the running job.
\item The \Condor{job\_router} now supports deletion of attributes
when transforming job ClassAds from vanilla to grid universe. It also
behaves more deterministically when choosing from multiple possible
routes. Rather than picking one at random, it uses a round-robin
selection.
% PR 941
\item \Condor{dagman} now checks that its submit file was generated by
a \Condor{submit\_dag} with the same version as \Condor{dagman} itself.
It is a fatal error for the versions to differ.
\end{itemize}
\noindent Configuration Variable Additions and Changes:
\begin{itemize}
\item Added \Attr{DAGMAN\_DEBUG\_CACHE\_ENABLE} and
\Attr{DAGMAN\_DEBUG\_CACHE\_SIZE} which allow DAGMan to maintain a
cache of log lines and write out the cache as one open/write/close
sequence. \Attr{DAGMAN\_DEBUG\_CACHE\_ENABLE} is a boolean
which turns on the ability for caching and defaults to \Expr{False}.
\Attr{DAGMAN\_DEBUG\_CACHE\_SIZE} is a positive integer and represents
the size of the cache in bytes and defaults to 5 Megabytes.
\item The existing \Macro{BIND\_ALL\_INTERFACES} configuration variable
now defaults to \Expr{True}.
\item Added the \Macro{HIBERNATE} expression, which, when evaluated in
the context of each slot, determines if a machine should enter
a low power state. See page~\pageref{param:Hibernate} for more
information.
\item Added the \Macro{HIBERNATE\_CHECK\_INTERVAL} configuration variable,
which, if set to a non-zero value, enables the \Condor{startd} to place the
machine in a low power state based on the evaluation of the
\MacroNI{HIBERNATE} expression. See
page~\pageref{param:HibernateCheckInterval} for more information.
\item The existing \Macro{VALID\_SPOOL\_FILES} configuration variable
now automatically includes \File{SCHEDD.lock},
the lock file used for high availability \Condor{schedd} fail over.
Other high availability lock files are not currently included.
\item Added the \Macro{SEC\_DEFAULT\_AUTHENTICATION\_TIMEOUT} configuration
variable, where the definition \Expr{DEFAULT} may be replaced
by the usual list of contexts for security settings
(for example, \Expr{CLIENT}, \Expr{READ}, and \Expr{WRITE}).
This specifies the number of seconds that Condor should
allow for the authentication of network connections to complete.
Previously, GSI authentication was hard-coded to allow 5 minutes
for authentication.
Now it uses the same default as all other methods: 20 seconds.
\item Added the \Macro{STARTER\_UPDATE\_INTERVAL\_TIMESLICE} configuration
variable, which
specifies the highest fraction of time that the \Condor{starter} should spend
collecting monitoring information about the job, such as disk usage.
It defaults to 0.1. If checking the disk usage of the job takes a
long time, the \Condor{starter} will monitor less frequently than
specified by \MacroNI{STARTER\_UPDATE\_INTERVAL}.
\end{itemize}
\noindent Bugs Fixed:
\begin{itemize}
\item Fixed a bug introduced in 7.1.0 affecting configurations in
which authentication of all communication between the \Condor{shadow}
and \Condor{schedd} is required. This caused failure in the final update
after the job had finished running. The result was that the job would return
to the idle state to run again.
\item Fixed a bug in Java universe where each slot would be told to
potentially use all the memory on the machine. Now, each JVM
receives the physical memory divided by the number of slots.
\item On Windows, slot users would sometimes show up in the Windows Welcome
Screen. This has now been resolved.
The slot users need to be manually
removed for this to take effect and the machine may need to be rebooted for
the setting to be honored.
\item Fixed a bug in the ClassAd \Procedure{string} function.
The function now properly converts integers and floats
to their string representation.
\item The Windows Installer is now completely internationalized: it will no
longer fail to install because of a missing ``Users'' group; instead, it
will use the regionally appropriate group.
\item Interoperability with Samba (as a PDC) has been improved. Condor
uses a fast form of login during credential validation. Unfortunately,
this login procedure fails under Samba, even if the credentials are
valid. The new behavior is to attempt the fast login, and on failure,
fall back to the slower form.
\item Windows slot users no longer have the Batch Privilege added, nor
does Condor first attempt a Batch login for slot users. This was
causing permission problems on hardened versions of Windows, such
as Windows Server 2003, in that non-interactive users lacked the
permission to run batch files (via the \Prog{cmd.exe} tool). This affected
any user submitting jobs that used batch files as the executable.
% issue [#1516]
\item If the \AdAttr{IWD} is not defined in a job
ClassAd that was either fetched by the \Condor{startd} via job hooks, or
pushed to the \Condor{startd} via COD, the \Condor{starter} no
longer treats this as a fatal error, and instead uses the temporary
job execution sandbox as the initial working directory.
% Fixes requested by LIGO
\item Made some fixes to the new-style rescue DAG feature:
\begin{itemize}
\item \Condor{submit\_dag} no longer needs the \Opt{-force} flag if a rescue
DAG will be run, even if the files generated by \Condor{submit\_dag}
already exist.
\item \Condor{submit\_dag} with the \Opt{-force} flag now renames any
existing new-style rescue DAG files, and therefore runs the original DAG.
\end{itemize}
% PR 942
\item Fixed a problem that caused new-style rescue DAGs to fail when
\Condor{submit\_dag} is invoked with the \Opt{-usedagdir} flag.
\end{itemize}
\noindent Known Bugs:
\begin{itemize}
\item None.
\end{itemize}
\noindent Additions and Changes to the Manual:
\begin{itemize}
\item The manual now contains Windows installation instructions for
controlling the configuration for the \SubmitCmd{vm} universe.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-7-1-0}Version 7.1.0}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\noindent Release Notes:
\begin{itemize}
\item Upgrading to 7.1.0 from previous versions of Condor will make
existing Standard Universe jobs that have already run fail to match to
machines running Condor 7.1.0 unless the job previously ran on a
machine using the Red Hat 5.0 release of Condor. This is because the
value of the \Attr{CheckpointPlatform} attribute of the machine
ClassAd has changed in order to better represent checkpoint
compatibility. If this affects you, you can use \Condor{qedit} to
change the \Attr{LastCheckpointPlatform} attribute of existing
Standard Universe jobs to match the new \Attr{CheckpointPlatform}
advertised by the machine ClassAd where the job last ran.
\item Condor no longer supports root configuration files
(for example, \File{/etc/condor/condor\_config.root},
\File{\~{}condor/condor\_config.root}, and
the file defined by the configuration variable
\MacroNI{LOCAL\_ROOT\_CONFIG\_FILE}). This feature was intended to
give limited powers to a Unix administrator to configure some aspects
of Condor without gaining root powers. However, given the flexibility
of the configuration system, we decided that this was not practical.
As long as Condor is started up as root, it should be clearly
understood that whoever has the ability to edit the Condor
configuration files can effectively run arbitrary programs as root.
\end{itemize}
\noindent New Features:
\begin{itemize}
\item In the past, Condor has always sent work to the execute machines
by pushing jobs to the \Condor{startd}, either from the
\Condor{schedd} or via \Condor{cod}.
As of version 7.1.0, the \Condor{startd} has the ability to pull
work by fetching jobs via a system of plug-ins or hooks.
Additional hooks are invoked by the \Condor{starter} to help manage
work (especially for fetched jobs, but the \Condor{starter} hooks
can be defined and invoked for other kinds of jobs as well).
For a complete description of the new hook system, read
section~\ref{sec:job-hooks} on page~\pageref{sec:job-hooks}.
% PR 888/921
\item Added the capability to insert commands into the \File{.condor.sub}
file produced by \Condor{submit\_dag} with the \Opt{-append} and
\Opt{-insert\_sub\_file} command-line arguments to \Condor{submit\_dag} and
the \Macro{DAGMAN\_INSERT\_SUB\_FILE} configuration variable.
See the \Condor{submit\_dag} manual page on
page~\pageref{man-condor-submit-dag}
and the configuration variable definition on
page~\pageref{param:DAGManInsertSubFile} for more information.
\item For platforms running a Windows operating system, the \Attr{Arch}
machine ClassAd attribute more correctly reflects the architectures
supported. Instead of values \AdStr{INTEL} and \AdStr{UNDEFINED},
the values will now be: \AdStr{INTEL} for x86,
\AdStr{IA64} for Intel Itanium,
and \AdStr{X86\_64} for both AMD and Intel 64-bit processors.
These values are listed in the unnumbered subsection labeled
Machine ClassAd Attributes on page~\pageref{sec:Machine-ClassAd-Attributes}.
\item The Windows MSI installer now supports extended \SubmitCmd{vm} universe
options. These new options include: the ability to set the
networking type, how much memory the \SubmitCmd{vm} universe can use
on a host, and
the ability to set the version of \Prog{VMware} installed on the host.
\item The \Condor{status} and \Condor{q} command line tools now have a
version option which prints the version of those specific tools. This
can be useful when multiple versions of Condor are installed on the
same machine.
\item The configuration variable \MacroNI{CONDOR\_VIEW\_HOST} may now
contain a port number and may (if desired) refer to a
\Condor{collector} daemon running on the same host as the
\Condor{collector} that is forwarding ads. It is also now possible to
use the forwarded ads for matchmaking purposes. For example, several
collectors could forward ads to a single aggregating collector which
a \Condor{negotiator} then uses as its source of information for
matchmaking.
% PR 598/788
\item \Condor{dagman} deals with rescue DAGs in a more sophisticated
way; this is especially helpful for nested DAGs.
See the rescue DAG subsection on page~\pageref{sec:DAGMan-rescue} of the \Condor{dagman}
manual section for more information.
\item Added additional logging details for unusual error cases to help
identify problems.
\item A new (optional) daemon named \Condor{job\_router} has been
added, so far only on Unix. It may be configured to transform vanilla
universe jobs into grid universe jobs, for example to send excess jobs
to other sites via Condor-C or Condor-G. For details, see
page~\pageref{sec:JobRouter}.
\item Previously, \Condor{q} \Opt{-better-analyze} was supported on most
but not all versions of Linux. It is now supported on all Unix platforms
but not yet on Windows.
\end{itemize}
\noindent Configuration Variable Additions and Changes:
\begin{itemize}
\item Added new configuration variables
\MacroNI{ALLOW\_CLIENT} and \MacroNI{DENY\_CLIENT} as
client-side authorization controls.
When using a mutual authentication method (such as GSI, SSL, or Kerberos),
these variables allow the specification of
which authenticated servers the Condor tools and daemons should
trust when they form a connection to the server.
Because of the addition of these variables,
the GSI-specific, client-side authorization configuration variable
\Macro{GSI\_DAEMON\_NAME} is retired, and no longer valid.
% PR 921
\item Added the \Macro{DAGMAN\_INSERT\_SUB\_FILE} variable, which allows a file
of commands to be inserted into \File{.condor.sub} files generated
by \Condor{submit\_dag}. See page~\pageref{param:DAGManInsertSubFile}
for more information.
\item The semantics of \MacroNI{CLAIM\_WORKLIFE} were previously not
clearly defined before the start of the first job. A delay between
the \Condor{schedd} claiming a slot and the \Condor{shadow} starting a
job could be caused by the submit machine being very busy or by
\MacroNI{JOB\_START\_DELAY}. Previously, such a delay would
unpredictably result in the first job being rejected if
\MacroNI{CLAIM\_WORKLIFE} expired during that time. Now,
\MacroNI{CLAIM\_WORKLIFE} is defined to apply only after the first job
has started. Therefore, setting it to zero has the effect of allowing
exactly one job per claim to run. The default is still the special
value -1, which places no limit on how long the slot may continue
accepting new jobs from the \Condor{schedd} that claimed it.
% PR 598/788
\item Added the \Macro{DAGMAN\_OLD\_RESCUE} variable, which controls whether
\Condor{dagman} writes rescue DAGs in the old way. See
page~\pageref{param:DAGManOldRescue} for more information.
% PR 598/788
\item Added the \Macro{DAGMAN\_AUTO\_RESCUE} variable, which controls
whether \Condor{dagman} automatically runs an existing rescue DAG.
See page~\pageref{param:DAGManAutoRescue} for more information.
% PR 598/788
\item Added the \Macro{DAGMAN\_MAX\_RESCUE\_NUM} variable, which
controls the maximum "new-style" rescue DAG number written or
automatically run by \Condor{dagman}.
See page~\pageref{param:DAGManMaxRescueNum} for more information.
\end{itemize}
\noindent Bugs Fixed:
\begin{itemize}
\item The Condor Build ID is now printed by \Condor{version} and placed
in the logs for machines running a Windows operating system.
\item \Condor{quill} and the \Condor{dbmsd} correctly register
themselves with the Windows firewall.
% PR 926
\item \Condor{submit\_dag} now avoids possibly running off the end
of the argument list if an argument requiring a value does not have one.
\item The \Condor{submit\_dag} \Opt{-debug} argument now must be
specified with at least \Opt{-de} to avoid conflict with the
\Opt{-dagman} argument.
\item Added missing information about the \Opt{-config} argument to
\Condor{submit\_dag}'s usage message.
% PR 927
\item \Condor{dagman} no longer considers duplicate edges in a DAG a
fatal error (it is now a warning).
\end{itemize}
\noindent Known Bugs:
\begin{itemize}
\item No hook is invoked if a fetched job does not contain enough data
to be spawned by a \Condor{starter} or if other errors prevent the
job from being run after the \Condor{startd} agrees to accept the
work.
This limitation will be addressed in a future version of Condor,
most likely via the addition of a new hook invoked whenever the
\Condor{starter} fails to spawn a job.
For more information about the new hook system included in Condor
version 7.1.0, read section~\ref{sec:job-hooks} on
page~\pageref{sec:job-hooks}.
\end{itemize}
\noindent Additions and Changes to the Manual:
\begin{itemize}
\item Added \AdStr{WINNT60} for the Vista operating system to
the documented list of possible values for the machine ClassAd
attribute \AdAttr{OpSys}.
\end{itemize}
\documentclass[11pt,oneside,a4paper]{report} % electronic version (no blank pages)
% \documentclass[11pt,openright,twoside,a4paper]{report} % print version (start on right side)
\include{DissertationDefs}
% title page
\title{Breast Cancer Detection in Mammograms using Deep Learning Techniques}
\titlepic{\includegraphics[width=0.45\linewidth]{figures/st-andrews-logo.jpeg}}
\author{Adam Jaamour\\~\\Supervised by Dr David Harris-Birtill \& Lewis McMillan}
\date{Master of Science (MSc) in Artificial Intelligence\\University of St Andrews - School of Computer Science\\August 14, 2020}
%%TC:ignore
\begin{document}
\setcounter{page}{0}
\pagenumbering{roman}
\maketitle
\newpage
\declaration{Breast Cancer Detection in Mammograms using Deep Learning Techniques}{Adam Jaamour}
\newpage
% Abstract
\abstract
\input{chapters-content/abstract.tex}
\newpage
% Tables of Content/Figures/Tables
\setcounter{tocdepth}{3}
\tableofcontents
\newpage
\listoffigures
\newpage
\listoftables
\newpage
\input{acronyms}
\printnomenclature
\newpage
% Acknowledgements
\chapter*{Acknowledgements}
\input{chapters-content/acknowledgments.tex}
\newpage
%%TC:endignore
% Begin main body
\setcounter{page}{1}
\pagenumbering{arabic}
\chapter{Introduction}
\label{ch:chapter-intro}
\input{chapters-content/introduction.tex}
\chapter{Context Survey}
\label{ch:chapter-litsurvey}
\input{chapters-content/litsurvey.tex}
\chapter{Ethics \& Datasets}
\label{ch:chapter-ethics-datasets}
\input{chapters-content/ethics-datasets}
\chapter{Design}
\label{ch:chapter-design}
\input{chapters-content/design.tex}
\chapter{Implementation}
\label{ch:chapter-implementation}
\input{chapters-content/implementation.tex}
\chapter{Results \& Evaluation}
\label{ch:chapter-evaluation}
\input{chapters-content/evaluation.tex}
\chapter{Conclusions}
\label{ch:chapter-conclusions}
\input{chapters-content/conclusions.tex}
%%TC:ignore
\bibliography{Bibliography}
% \let\cleardoublepage\clearpage % don't automatically start appendix chapters on odd pages
\appendix
\chapter{Ethical Application Approval Letter}
\label{ch:appendix-ethical-approval-letter}
\input{chapters-content/appendix/ethical-approval-letter.tex}
\chapter{Languages \& Frameworks Comparison}
\input{chapters-content/appendix/programming_languages_comparison}
\chapter{Usage Instructions}
\label{ch:appendix-usage-instructions}
\input{chapters-content/appendix/instructions.tex}
\chapter{Remote Work Environment}
\label{ch:appendix-remote-work-environment}
\input{chapters-content/appendix/remote-work-setup.tex}
\chapter{Team Meeting Summaries}
\label{ch:appendix-team-meeting-summaries}
\input{chapters-content/appendix/team_meetings.tex}
\chapter{Coding Project Structure}
\label{ch:appendix-coding-project-structure}
\input{chapters-content/appendix/project_structure.tex}
%%TC:endignore
\end{document} | {
"alphanum_fraction": 0.802240112,
"avg_line_length": 26.2110091743,
"ext": "tex",
"hexsha": "7dcb977dffb30d98deeac4e88971dd33e6365fb1",
"lang": "TeX",
"max_forks_count": 7,
"max_forks_repo_forks_event_max_datetime": "2022-03-15T10:24:03.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-10-13T01:19:51.000Z",
"max_forks_repo_head_hexsha": "a8682484886f409e10ff8ecb7326b2d0bf8b17a0",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "Adamouization/Breast-Cancer-Detection-Mammogram-Deep-Learning",
"max_forks_repo_path": "report/Dissertation.tex",
"max_issues_count": 9,
"max_issues_repo_head_hexsha": "a8682484886f409e10ff8ecb7326b2d0bf8b17a0",
"max_issues_repo_issues_event_max_datetime": "2022-03-12T00:59:33.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-07-19T11:36:44.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "Adamouization/Breast-Cancer-Detection-Mammogram-Deep-Learning",
"max_issues_repo_path": "report/Dissertation.tex",
"max_line_length": 131,
"max_stars_count": 28,
"max_stars_repo_head_hexsha": "a8682484886f409e10ff8ecb7326b2d0bf8b17a0",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "Adamouization/Breast-Cancer-Detection-Mammogram-Deep-Learning",
"max_stars_repo_path": "report/Dissertation.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-08T16:36:15.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-08-17T14:20:24.000Z",
"num_tokens": 790,
"size": 2857
} |
%!TEX program = lualatex
% Copyright (c) 2021 Thomas Jenni
% Permission is hereby granted, free of charge, to any person obtaining a copy
% of this software and associated documentation files (the "Software"), to deal
% in the Software without restriction, including without limitation the rights
% to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
% copies of the Software, and to permit persons to whom the Software is
% furnished to do so, subject to the following conditions:
% The above copyright notice and this permission notice shall be included in all
% copies or substantial portions of the Software.
% THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
% IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
% FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
% AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
% LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
% OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
% SOFTWARE.
\documentclass{article}
\usepackage{luacode}
\usepackage{siunitx}
\usepackage{amsmath}
% siunitx config
\sisetup{
output-decimal-marker = {.},
per-mode = symbol,
separate-uncertainty = false,
add-decimal-zero = true,
exponent-product = \cdot,
round-mode=off
}
% empty unit
\DeclareSIUnit\unitless{}
\DeclareSIUnit\inch{in}
% init lua-physical
\begin{luacode}
physical = require("physical")
N = physical.Number
\end{luacode}
\newcommand{\q}[1]{%
\directlua{tex.print(physical.Quantity.tosiunitx(#1,"add-decimal-zero=true,scientific-notation=fixed,exponent-to-prefix=false"))}%
}
\newcommand{\qs}[1]{%
\directlua{tex.print(physical.Quantity.tosiunitx(#1,"scientific-notation=true,exponent-to-prefix=false,round-integer-to-decimal=true"))}%
}
\newcommand{\qt}[1]{%
\directlua{tex.print(physical.Quantity.tosiunitx(#1,"scientific-notation=engineering,exponent-to-prefix=true,round-integer-to-decimal=true"))}%
}
\newcommand{\qn}[1]{%
\directlua{tex.print(physical.Quantity.tosiunitx(#1,"add-decimal-zero=true,scientific-notation=fixed,exponent-to-prefix=false",1))}%
}
\newcommand{\qu}[1]{%
\directlua{tex.print(physical.Quantity.tosiunitx(#1,nil,2))}%
}
\begin{document}
\section*{Example for the {\tt lua-physical} package}
Compile this Lua\LaTeX\ file with the command `{\tt lualatex lua-physical\_example.tex}'.
\begin{enumerate}
\begin{luacode}
a = 12 * _cm
b = 150 * _mm
c = 1.5 * _m
V = ( a * b * c ):to(_dm^3)
\end{luacode}
\item Find the volume of a cuboid with lengths $\q{a}$,
$\q{b}$ and $\q{c}$.
%
\begin{equation*}
V= a \cdot b \cdot c
= \q{a} \cdot \q{b} \cdot \q{c}
= \underline{\q{V}}
\end{equation*}
\begin{luacode}
l = 12 * _in
\end{luacode}
\item Convert $\q{l}$ to the unit $\qu{_cm}$.
%
\begin{equation*}
l = \q{l} \cdot \frac{\q{_in:to(_cm)}}{\qu{_in}} = \q{l:to(_cm)}
\end{equation*}
\begin{luacode}
N.omitUncertainty = true
d = N(1,0.0001) * ( _au ):to(_km)
v = N(1,0.0001) * ( _c ):to(_km/_s)
t = ( d/v ):to(_min)
\end{luacode}
\item Calculate the time a light ray travels from the surface of the sun to the earth.
The mean distance from the sun to the earth is $\qs{d}$. The speed of light is $\q{v}$.
%
\begin{equation*}
t = \frac{d}{v} = \frac{\qs{d}}{\q{v}} = \underline{\q{t}}
\end{equation*}
\end{enumerate}
\end{document}
| {
"alphanum_fraction": 0.7081995915,
"avg_line_length": 24.654676259,
"ext": "tex",
"hexsha": "70c729193d574713d15c7d42ca1cab9f521b7950",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2021-08-13T13:00:25.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-09-04T18:08:35.000Z",
"max_forks_repo_head_hexsha": "9fcf1d17c6929650075a6214fa1968ddf3163855",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "tjenni/lua-physical",
"max_forks_repo_path": "lua-physical_example.tex",
"max_issues_count": 3,
"max_issues_repo_head_hexsha": "9fcf1d17c6929650075a6214fa1968ddf3163855",
"max_issues_repo_issues_event_max_datetime": "2020-09-16T20:10:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-09-07T07:50:17.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "tjenni/lua-physical",
"max_issues_repo_path": "lua-physical_example.tex",
"max_line_length": 144,
"max_stars_count": 13,
"max_stars_repo_head_hexsha": "9fcf1d17c6929650075a6214fa1968ddf3163855",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "tjenni/lua-physical",
"max_stars_repo_path": "lua-physical_example.tex",
"max_stars_repo_stars_event_max_datetime": "2021-07-24T09:33:33.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-04-20T06:20:21.000Z",
"num_tokens": 1057,
"size": 3427
} |
\chapter{Evolutionary Programming}
\section{Introduction}
Evolutionary Programming is the last of the selected metaheuristics to be evaluated in these trials. Although similar in many respects to Evolutionary Strategies, it evolved independently from the work of Fogel, Owens and Walsh in the 1960s. The main differences between the two approaches are Evolutionary Programming's absence of a recombination operator and its use of a tournament-based selection method, where an individual's probability of selection is based on the number of wins achieved against 10 randomly selected opponents~\cite{back}. Like Evolutionary Strategies it uses a fixed length representation.
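As an illustration, the selection scheme can be sketched as follows (hypothetical Python, assuming higher fitness is better; not the code used in these trials):
\begin{verbatim}
import random

def ep_select(population, q=10):
    # Each individual scores a win for every one of q randomly chosen
    # opponents whose fitness it matches or beats; the top half by
    # wins forms the next parent pool.
    scored = []
    for ind in population:
        opponents = random.sample(population, q)
        wins = sum(ind.fitness >= opp.fitness for opp in opponents)
        scored.append((wins, ind))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [ind for _, ind in scored[:len(population) // 2]]
\end{verbatim}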
We look first at the search strategy options and the experimental set-up used in the trials. Some initial runs are performed to assess the impact of the selected strategy variables before the final results are presented. The chapter concludes by looking at some of the characteristics of solutions found by Evolutionary Programming.
\section{Search Strategy Options}
A population size of 50 individuals evolving over 500 generations provides the 25000 evaluations of the objective function used in previous trials. The genome length is fixed at 100 for Symbolic Integration, Santa Fe and Blocks, while a longer length of 200 was used for Symbolic Regression.
The key strategy choice for this method revolves around the selection of suitable strategy variables (see Section~\ref{strategy_variables}). Two have been chosen for these trials: \emph{mutation rate}, which determines the probability of a codon being mutated, and \emph{mutation bias}, which determines where in the genome the mutation can take place.
\section{Experimental Conditions}
Table~\ref{ep_param_table} shows the main parameters used to configure Evolutionary Programming. The initial mutation rate has been set at 12\% based on the results for this parameter value in the Evolutionary Strategies trials. The \emph{mutation rate range} value is set at 25\% (Section~\ref{es_options} provides more detail on these parameters).
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Parameter &\multicolumn{4}{l|}{Problems}\\
\cline{2-5}
& Sym Int & Santa Fe & Blocks & Sym Reg \\
\hline
Number of Trials & 1000 & 1000 & 1000 & 1000 \\
Number of Objective & & & & \\
Function Evaluations & 25000 & 25000 & 25000 & 25000 \\
Fixed Genome Length & 100 & 100 & 100 & 200 \\
Population Size & 50 & 50 & 50 & 50 \\
Number of Generations & 500 & 500 & 500 & 500 \\
Initial Mutation Rate & 12\% & 12\% & 12\% & 12\% \\
Mutation Range & 25\% & 25\% & 25\% & 25\% \\
\hline
\end{tabular}
\caption{\label{ep_param_table} Parameters used to Configure Evolutionary Programming.}
\end{center}
\end{table}
\section{Impact of Strategy Variables}
A number of preliminary runs were performed to evaluate the impact of the selected strategy variables. The first set uses mutation with no positional bias, that is, the mutation can fall anywhere within the entire genome. The second set uses a strategy variable that limits the mutation of codons to a certain point in the genome. It is expressed as a percentage, where for example 0\% means that mutation can take place anywhere in the genome, 50\% means that mutation is limited to the second half of the genome and 100\% effectively stops any mutation from taking place.
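The two strategy variables can be sketched as follows (hypothetical Python, assuming the 8-bit codons usual in Grammatical Evolution; not the trial implementation):
\begin{verbatim}
import random

def mutate(genome, rate, bias):
    # rate: per-codon mutation probability (e.g. 0.12 for 12%).
    # bias: fraction of the genome protected from mutation; 0.0 allows
    #       mutation anywhere, 0.5 only in the second half, 1.0 nowhere.
    start = int(len(genome) * bias)
    return [random.randrange(256)
            if i >= start and random.random() < rate else codon
            for i, codon in enumerate(genome)]
\end{verbatim}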
The results shown in Table~\ref{ep_strategies} indicate a significant performance benefit for Symbolic Integration, Santa Fe, Blocks and Symbolic Regression when mutation without bias is used.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
&\multicolumn{2}{|l|}{Strategy Variables}\\
\hline
Problem & Mutation & Mutation \\
& without Bias & with Bias \\
\hline
Symbolic Integration & 94\% & 42\% \\
Santa Fe Trail & 73\% & 21\% \\
Blocks & 95\% & 64\% \\
Symbolic Regression & 25\% & 11\% \\
\hline
\end{tabular}
\caption{\label{ep_strategies} Analysis of EP Strategy Variables showing Success Rates with and without Mutation Bias}
\end{center}
\end{table}
\section{Results}
Table~\ref{ep_results_table} summarises the overall results for Evolutionary Programming. These final trials use mutation without positional bias, a mutation rate of 12\% and a mutation range of 25\%. Symbolic Regression is solved with the same success rate as Evolutionary Strategies (25\%). Santa Fe is the one problem on which Evolutionary Programming (73\%) outperforms Evolutionary Strategies (63\%). Scores for the Blocks problem show no significant difference, at around 95\%. There is also no significant difference in performance on the Symbolic Integration problem, with Evolutionary Strategies scoring 95\% and Evolutionary Programming scoring 94\%.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|}
\hline
Problem & Successful Runs \\
\hline
Symbolic Integration & 94\% \\
Santa Fe Trail & 73\% \\
Blocks & 95\% \\
Symbolic Regression & 25\% \\
\hline
\end{tabular}
\caption{ \label{ep_results_table} Results from Evolutionary Programming Trials.}
\end{center}
\end{table}
\section{Characteristics of Solutions found by Evolutionary Programming}
Table~\ref{ep_results_analysis_table} provides details of the solutions found by Evolutionary Programming. A surprising aspect of these results is the emergence of wrapping in some of the solutions. This is curious when one considers that fixed length genomes of length 100 are being used in the case of Santa Fe and Blocks. This length is much longer than the number of codons required to solve the respective problems; however, Evolutionary Programming still manages to employ wrapping by finding solutions whose number of expressed codons is in excess of 100. This is in contrast to the situation with Evolutionary Strategies in the last chapter, which also employed fixed length genomes of similar lengths, yet no wrapping was used in any of the solutions found by that method.
One contributing factor towards the re-emergence of wrapping is that the solutions found by Evolutionary Programming tend to have larger numbers of expressed codons than any of the previous metaheuristics. Santa Fe, for example, uses on average 74 expressed codons as against an average of 46 for all of the previous metaheuristics. An examination of successful solutions shows very high levels of nesting of conditionals in the Santa Fe and Blocks problems. Perhaps the lack of a recombination operator forces Evolutionary Programming toward longer genome lengths in a search for successful solutions.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Feature & Sym Int & Santa Fe & Blocks & Sym Reg \\
\hline
Avg Number of Codons & & & & \\
in Solution & 100 & 100 & 100 & 200 \\
Avg Number of expressed & & & & \\
Codons in Solution & 19 & 74 & 48 & 57 \\
Percentage of Solutions & & & & \\
using Wrapping & 0\% & 2.79\% & 46\% & 1\% \\
\hline
\end{tabular}
\caption{\label{ep_results_analysis_table} Analysis of Features from Solutions found by Evolutionary Programming.}
\end{center}
\end{table}
\section{Summary}
This chapter looked at the last of our selected metaheuristics, Evolutionary Programming. Despite the absence of a recombination operator, the algorithm successfully solved all four problems. Success rates are not significantly different from those achieved by Evolutionary Strategies, with the exception of Santa Fe, where Evolutionary Programming performed better.
A significant aspect of Evolutionary Programming solutions was their length, with the number of expressed codons exceeding those of the other metaheuristics, causing two of the problems to employ wrapping even when fixed length genomes of 100 codons were used.
| {
"alphanum_fraction": 0.7690325905,
"avg_line_length": 59.2595419847,
"ext": "tex",
"hexsha": "3facd454eb6187f6c3bfba636fff547fd69ffea8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "95b83a99bfb488281effdcd704f1802138f21dd0",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "johnosbb/Grammatical-Evolution",
"max_forks_repo_path": "Content/Masters/Chapter10/chapter10.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "95b83a99bfb488281effdcd704f1802138f21dd0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "johnosbb/Grammatical-Evolution",
"max_issues_repo_path": "Content/Masters/Chapter10/chapter10.tex",
"max_line_length": 776,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "95b83a99bfb488281effdcd704f1802138f21dd0",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "johnosbb/Grammatical-Evolution",
"max_stars_repo_path": "Content/Masters/Chapter10/chapter10.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1854,
"size": 7763
} |
\section{Conclusion and future work}
We introduced a mathematical framework to design neural networks for data that live on simplicial complexes and provided preliminary results on their ability to impute missing data.
Future work might include:
(i) comparing SNNs with state-of-the-art imputation algorithms,
(ii) using SNNs to solve vector field problems,
(iii) generalizing coarsening and pooling to simplicial complexes,
(iv) using boundaries and coboundaries to mix data structured by relationships of different dimensions,
and (v) studying the expressive power of SNNs.
% Their expressive power remains to be fully understood on non-homogeneous spaces.
Unrelated to the simplicial nature of this work, we would like to emphasize how the spectral language was key to developing and even formulating our method.
% Fourier analysis was developed to exploit symmetries in solving PDEs, and later used for data analysis and processing.
On homogeneous spaces, convolutions are defined as inner-products with filters shifted by the actions of a symmetry group of the space.
They are the most general shift-invariant linear operators.
% Convolutions are a sufficient and necessary condition for shift-invariance~\cite{kondor2018groupnn}
On non-homogeneous spaces however, the spectral language yields generalized convolutions which are inner-products with \emph{localized} filters~\cite[Sec.~2.4]{perraudin2019deepsphere}. %~\cite[Sec.~2.2]{perraudin2017stationarity}.
Those too are invariant to any symmetry the space might have.
% generality of spaces: homogeneous ⊂ global symmetries ⊂ some automorphisms (local symmetries) ⊂ asymmetric
% (homogeneous == any point is moved to any other by an automorphism)
Convolutions exploit the space's structure to reduce learning complexity by sharing learnable weights through shifts and localizations of filters.
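For concreteness, a minimal sketch of such a generalized convolution (NumPy; illustrative only, not the implementation used in our experiments): a degree-$K$ polynomial of a (Hodge) Laplacian $L$, evaluated with the Chebyshev recursion, yields a $K$-localized filter.
\begin{verbatim}
import numpy as np

def localized_filter(L, x, theta, lmax=2.0):
    # Generalized convolution: sum_k theta[k] * T_k(L_hat) @ x, where
    # L_hat rescales the spectrum of L into [-1, 1] (lmax assumed).
    # A degree-K polynomial of L only mixes K-hop neighbourhoods, so
    # the filter is localized and commutes with any symmetry of L.
    # Assumes len(theta) >= 2.
    L_hat = (2.0 / lmax) * L - np.eye(L.shape[0])
    t_prev, t_cur = x, L_hat @ x              # T_0 x and T_1 x
    out = theta[0] * t_prev + theta[1] * t_cur
    for k in range(2, len(theta)):
        t_prev, t_cur = t_cur, 2.0 * (L_hat @ t_cur) - t_prev
        out = out + theta[k] * t_cur
    return out
\end{verbatim}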
| {
"alphanum_fraction": 0.8185245019,
"avg_line_length": 84.4090909091,
"ext": "tex",
"hexsha": "dd0393864f14cd6e993e7ffdc6235bfa5256a8c7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "935658c9fa93897b4e288918e6e9c3fb0a0bee3e",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "stefaniaebli/paper-snn-neurips2020tda",
"max_forks_repo_path": "discussion.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "935658c9fa93897b4e288918e6e9c3fb0a0bee3e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "stefaniaebli/paper-snn-neurips2020tda",
"max_issues_repo_path": "discussion.tex",
"max_line_length": 231,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "935658c9fa93897b4e288918e6e9c3fb0a0bee3e",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "stefaniaebli/paper-snn-neurips2020tda",
"max_stars_repo_path": "discussion.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-30T05:09:20.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-06T18:45:39.000Z",
"num_tokens": 405,
"size": 1857
} |
\subsubsection{Rapier}\label{weapon:rapier}
Weapon, Sword, One-Handed, Regular, Melee\\
Size: M\\
Cost: 250 Gold
\textbf{Draw/Sheath}\\
2 AP draw, 4 AP sheath
\textbf{Thrust}\\
4 AP, AG to hit, Critical on 18, 19 and 20, \passus{1} Reach\\
2d8 + \sfrac{1}{3}\texttimes AG Piercing Damage
\textbf{Slash}\\
3 AP, AG to hit, Critical on 18, 19 and 20, \passus{1} Reach\\
1d8 + \sfrac{1}{3}\texttimes AG Cutting Damage | {
"alphanum_fraction": 0.6987951807,
"avg_line_length": 27.6666666667,
"ext": "tex",
"hexsha": "baa0ab75dede085e3075295e50f4184a7e8fb24d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_forks_repo_path": "items/equipment/weapons/regular/rapier.tex",
"max_issues_count": 155,
"max_issues_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_issues_repo_issues_event_max_datetime": "2022-03-03T13:49:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-03-18T13:19:57.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_issues_repo_path": "items/equipment/weapons/regular/rapier.tex",
"max_line_length": 62,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_stars_repo_path": "items/equipment/weapons/regular/rapier.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-03T09:32:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-13T09:33:31.000Z",
"num_tokens": 165,
"size": 415
} |
% \documentclass{article}
% \usepackage{graphicx}
% \usepackage[a4paper, margin=0.5in]{geometry}
% \usepackage{subcaption}
% \usepackage{printlen}
% \uselengthunit{cm}
% \newlength\imageheight
% \newlength\imagewidth
% \begin{document}
\section{MSP\_A TX1 MSP\_C RX11 Minipod Loopback}\label{sec:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}
\begin{figure}[h] % "[t!]" placement specifier just for this example
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX100RX1100MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-00--RX11-00-MSP_C_FPGA.pdf}}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX101RX1101MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-01--RX11-01-MSP_C_FPGA.pdf}}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX102RX1102MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-02--RX11-02-MSP_C_FPGA.pdf}}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX103RX1103MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-03--RX11-03-MSP_C_FPGA.pdf}}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX104RX1104MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-04--RX11-04-MSP_C_FPGA.pdf}}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX105RX1105MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-05--RX11-05-MSP_C_FPGA.pdf}}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX106RX1106MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-06--RX11-06-MSP_C_FPGA.pdf}}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX107RX1107MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-07--RX11-07-MSP_C_FPGA.pdf}}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX108RX1108MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-08--RX11-08-MSP_C_FPGA.pdf}}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX109RX1109MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-09--RX11-09-MSP_C_FPGA.pdf}}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX110RX1110MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-10--RX11-10-MSP_C_FPGA.pdf}}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.33\textwidth}
\hyperref[sec:MSPAFPGATX111RX1111MSPCFPGA12.8-optimized]{\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-11--RX11-11-MSP_C_FPGA.pdf}}
\end{subfigure}
\caption{MSP\_A TX1 MSP\_C RX11 Minipod Loopback} \label{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}
\end{figure}
A cross-reference to Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPATX1MSPCRX11MinipodLoopback6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPATX1MSPCRX11MinipodLoopback9.6-optimized]{9.6-optimized}. \\
Next summary Figure~\ref{fig:MSPATX2MSPCRX10MinipodLoopback12.8-optimized}.
\clearpage
% \end{document}
\subsection{MSP\_A\_FPGA-TX1-00--RX11-00-MSP\_C\_FPGA}\label{sec:MSPAFPGATX100RX1100MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-00--RX11-00-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX100RX1100MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 00:57:58} & \multicolumn{2}{l|}{2018-Jan-24 00:58:40} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 17249 & 80 & 61.24\% & 255 & 100.00\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-00--RX11-00-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-00--RX11-00-MSP\_C\_FPGA} \label{fig:MSPAFPGATX100RX1100MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX100RX1100MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX100RX1100MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-01--RX11-01-MSP\_C\_FPGA}\label{sec:MSPAFPGATX101RX1101MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-01--RX11-01-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX101RX1101MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 00:59:22} & \multicolumn{2}{l|}{2018-Jan-24 01:00:03} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 16714 & 75 & 57.36\% & 255 & 100.00\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-01--RX11-01-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-01--RX11-01-MSP\_C\_FPGA} \label{fig:MSPAFPGATX101RX1101MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX101RX1101MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX101RX1101MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-02--RX11-02-MSP\_C\_FPGA}\label{sec:MSPAFPGATX102RX1102MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-02--RX11-02-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX102RX1102MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 01:00:03} & \multicolumn{2}{l|}{2018-Jan-24 01:00:45} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 17537 & 79 & 61.24\% & 255 & 100.00\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-02--RX11-02-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-02--RX11-02-MSP\_C\_FPGA} \label{fig:MSPAFPGATX102RX1102MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX102RX1102MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX102RX1102MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-03--RX11-03-MSP\_C\_FPGA}\label{sec:MSPAFPGATX103RX1103MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-03--RX11-03-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX103RX1103MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 00:56:34} & \multicolumn{2}{l|}{2018-Jan-24 00:57:16} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 17809 & 78 & 60.47\% & 255 & 100.00\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-03--RX11-03-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-03--RX11-03-MSP\_C\_FPGA} \label{fig:MSPAFPGATX103RX1103MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX103RX1103MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX103RX1103MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-04--RX11-04-MSP\_C\_FPGA}\label{sec:MSPAFPGATX104RX1104MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-04--RX11-04-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX104RX1104MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 01:02:08} & \multicolumn{2}{l|}{2018-Jan-24 01:02:50} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 17296 & 83 & 64.34\% & 250 & 97.65\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-04--RX11-04-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-04--RX11-04-MSP\_C\_FPGA} \label{fig:MSPAFPGATX104RX1104MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX104RX1104MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX104RX1104MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-05--RX11-05-MSP\_C\_FPGA}\label{sec:MSPAFPGATX105RX1105MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-05--RX11-05-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX105RX1105MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 00:55:52} & \multicolumn{2}{l|}{2018-Jan-24 00:56:34} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 17106 & 77 & 58.91\% & 255 & 100.00\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-05--RX11-05-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-05--RX11-05-MSP\_C\_FPGA} \label{fig:MSPAFPGATX105RX1105MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX105RX1105MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX105RX1105MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-06--RX11-06-MSP\_C\_FPGA}\label{sec:MSPAFPGATX106RX1106MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-06--RX11-06-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX106RX1106MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 01:03:31} & \multicolumn{2}{l|}{2018-Jan-24 01:04:12} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 16989 & 80 & 62.02\% & 255 & 100.00\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-06--RX11-06-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-06--RX11-06-MSP\_C\_FPGA} \label{fig:MSPAFPGATX106RX1106MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX106RX1106MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX106RX1106MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-07--RX11-07-MSP\_C\_FPGA}\label{sec:MSPAFPGATX107RX1107MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-07--RX11-07-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX107RX1107MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 00:57:16} & \multicolumn{2}{l|}{2018-Jan-24 00:57:58} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 17159 & 80 & 61.24\% & 255 & 100.00\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-07--RX11-07-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-07--RX11-07-MSP\_C\_FPGA} \label{fig:MSPAFPGATX107RX1107MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX107RX1107MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX107RX1107MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-08--RX11-08-MSP\_C\_FPGA}\label{sec:MSPAFPGATX108RX1108MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-08--RX11-08-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX108RX1108MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 01:02:50} & \multicolumn{2}{l|}{2018-Jan-24 01:03:31} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 16678 & 79 & 61.24\% & 248 & 97.25\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-08--RX11-08-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-08--RX11-08-MSP\_C\_FPGA} \label{fig:MSPAFPGATX108RX1108MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX108RX1108MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX108RX1108MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-09--RX11-09-MSP\_C\_FPGA}\label{sec:MSPAFPGATX109RX1109MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-09--RX11-09-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX109RX1109MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 00:58:40} & \multicolumn{2}{l|}{2018-Jan-24 00:59:22} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 16654 & 77 & 58.91\% & 255 & 100.00\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-09--RX11-09-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-09--RX11-09-MSP\_C\_FPGA} \label{fig:MSPAFPGATX109RX1109MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX109RX1109MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX109RX1109MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-10--RX11-10-MSP\_C\_FPGA}\label{sec:MSPAFPGATX110RX1110MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-10--RX11-10-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX110RX1110MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 01:01:27} & \multicolumn{2}{l|}{2018-Jan-24 01:02:08} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 16396 & 77 & 59.69\% & 255 & 99.61\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-10--RX11-10-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-10--RX11-10-MSP\_C\_FPGA} \label{fig:MSPAFPGATX110RX1110MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX110RX1110MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX110RX1110MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
\subsection{MSP\_A\_FPGA-TX1-11--RX11-11-MSP\_C\_FPGA}\label{sec:MSPAFPGATX111RX1111MSPCFPGA12.8-optimized}
% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table}[h]
\centering
\caption{MSP\_A\_FPGA-TX1-11--RX11-11-MSP\_C\_FPGA}
\label{tab:MSPAFPGATX111RX1111MSPCFPGA12.8-optimized}
\begin{tabular}{@{}|l|l|l|l|l|l|@{}}
\toprule
\textbf{SW Version} & \textbf{GT Type} & \multicolumn{2}{l|}{\textbf{Date and Time Started}} & \multicolumn{2}{l|}{\textbf{Date and Time Ended}} \\ \midrule
2017.2 & UltraScale GTY & \multicolumn{2}{l|}{2018-Jan-24 01:00:45} & \multicolumn{2}{l|}{2018-Jan-24 01:01:27} \\ \midrule
\textbf{Reset RX} & \textbf{OA} & \textbf{HO} & \textbf{HO (\%)} & \textbf{VO} & \textbf{VO (\%)} \\ \midrule
true & 17919 & 81 & 62.02\% & 255 & 100.00\% \\ \midrule
\textbf{Dwell Type} & \textbf{Dwell BER} & \textbf{Horizontal Increment} & \textbf{Vertical Increment} & \multicolumn{2}{l|}{\textbf{Misc Info}} \\ \midrule
BER & 1e-7 & 1 & 1 & \multicolumn{2}{l|}{ELF Version: 0x4002 SVN: 0} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\includegraphicsmaybe{../scans/pdf/12.8-optimized/MSP_A_FPGA-TX1-11--RX11-11-MSP_C_FPGA.pdf}
\caption{MSP\_A\_FPGA-TX1-11--RX11-11-MSP\_C\_FPGA} \label{fig:MSPAFPGATX111RX1111MSPCFPGA12.8-optimized}
\end{figure}
Call back to summary Figure~\ref{fig:MSPATX1MSPCRX11MinipodLoopback12.8-optimized}.
Sibling eye diagrams: \hyperref[sec:MSPAFPGATX111RX1111MSPCFPGA6.4-optimized]{6.4-optimized}, \hyperref[sec:MSPAFPGATX111RX1111MSPCFPGA9.6-optimized]{9.6-optimized}.
\clearpage
\newpage
| {
"alphanum_fraction": 0.642975974,
"avg_line_length": 59.4183908046,
"ext": "tex",
"hexsha": "3212234fd64dad49a3b83977f711e7223411ad3e",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-12-04T21:03:53.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-05-16T03:47:42.000Z",
"max_forks_repo_head_hexsha": "7d702ed87f0c8fbe90f4ef0445e2d4f77a79ec02",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mvsoliveira/IBERTpy",
"max_forks_repo_path": "out/tex/MSP_A_TX1_MSP_C_RX11_Minipod_Loopback_12.8-optimized.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7d702ed87f0c8fbe90f4ef0445e2d4f77a79ec02",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mvsoliveira/IBERTpy",
"max_issues_repo_path": "out/tex/MSP_A_TX1_MSP_C_RX11_Minipod_Loopback_12.8-optimized.tex",
"max_line_length": 191,
"max_stars_count": 7,
"max_stars_repo_head_hexsha": "7d702ed87f0c8fbe90f4ef0445e2d4f77a79ec02",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mvsoliveira/IBERTpy",
"max_stars_repo_path": "out/tex/MSP_A_TX1_MSP_C_RX11_Minipod_Loopback_12.8-optimized.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-04T17:49:39.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-04-22T14:22:42.000Z",
"num_tokens": 9787,
"size": 25847
} |
\section{Stochastic Collocation with Gamma Distribution}
Associated external model: \texttt{poly\_scgpc\_gamma.py}
Recall that the \textit{Gamma} distribution has the probability density function
\begin{align}
f(x) = \frac{\beta^{\alpha}x^{\alpha-1}e^{-\beta x}}{\Gamma\left(\alpha\right)}, \quad \alpha > 0, \beta > 0
\end{align}
The following two polynomials are used to compute the analytic statistical moments for the \textit{Gamma} distribution:
\begin{align}
u_1(x, y) = x + y \\
u_2(x, y) = x^2 + y^2
\end{align}
where $x$ and $y$ are two mutually independent \textit{Gamma} variates, i.e.
\begin{align}
x \thicksim \Gamma\left(\alpha_{1},\beta_{1}\right) \notag \\
y \thicksim \Gamma\left(\alpha_{2},\beta_{2}\right) \notag
\end{align}
\subsection{Mean and Variance}
The first two statistical moments of $u_1(x,y)$ and $u_2(x,y)$ are:
\begin{align}
\expv{u_1(x,y)} &= \int_{0}^\infty dxdyP\left(x,y\right)u_1\left(x,y\right), \notag \\
&= \int_{0}^\infty dxdy \Gamma\left(\alpha_{1},\beta_{1}\right) \Gamma\left(\alpha_{2},\beta_{2}\right) u_1\left(x,y\right), \notag \\
&= \frac{\alpha_{1}}{\beta_{1}} + \frac{\alpha_{2}}{\beta_{2}}
\end{align}
\begin{align}
\expv{u_2(x,y)} &= \int_{0}^\infty dxdyP\left(x,y\right)u_2\left(x,y\right), \notag \\
&= \int_{0}^\infty dxdy \Gamma\left(\alpha_{1},\beta_{1}\right) \Gamma\left(\alpha_{2},\beta_{2}\right) u_2\left(x,y\right), \notag \\
&= \frac{\left(\alpha_{1} + 1\right)\alpha_1}{\beta_{1}^2} + \frac{\left(\alpha_{2}+1\right)\alpha_2}{\beta_{2}^2}
\end{align}
\begin{align}
\text{var}[u_1(x,y)] &= \int_{0}^\infty dxdyP\left(x,y\right)\left[u_1\left(x,y\right) - \expv{u_1(x,y)}\right]^2, \notag \\
&= \frac{\alpha_{1}}{\beta_{1}^2} + \frac{\alpha_{2}}{\beta_{2}^2}
\end{align}
\begin{align}
\text{var}[u_2(x,y)] &= \int_{0}^\infty dxdyP\left(x,y\right)\left[u_2\left(x,y\right) - \expv{u_2(x,y)}\right]^2, \notag \\
&= \frac{\left(4\alpha_{1} + 6.0\right)\left(\alpha_{1} + 1\right)\alpha_1}{\beta_{1}^4} + \frac{\left(4\alpha_{2} + 6.0\right)\left(\alpha_{2}+1\right)\alpha_2}{\beta_{2}^4}
\end{align}
\subsection{Numeric Values}
Some numeric values for the mean and variance are listed below for the given distributions:
\begin{align}
x \thicksim \Gamma\left(11, 5\right) \notag \\
y \thicksim \Gamma\left(2, 0.8\right) \notag
\end{align}
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c}
$Function$ & mean & variance \\ \hline
$u_1$ & 4.7 & 3.565 \\
$u_2$ & 14.655 & 215.638125 \\
\end{tabular}
\end{table}
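These values can be reproduced with a quick Monte Carlo check (an illustrative sketch, not part of the associated test; note that NumPy parameterizes the Gamma sampler by shape and scale, so the scale argument is $1/\beta$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 10_000_000
x = rng.gamma(shape=11.0, scale=1.0 / 5.0, size=n)
y = rng.gamma(shape=2.0, scale=1.0 / 0.8, size=n)

u1 = x + y
u2 = x**2 + y**2
print(u1.mean(), u1.var())  # ~4.7, ~3.565
print(u2.mean(), u2.var())  # ~14.655, ~215.638125
\end{verbatim}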
| {
"alphanum_fraction": 0.6486271389,
"avg_line_length": 42.593220339,
"ext": "tex",
"hexsha": "803d47d3d9a3017aa56986a599db15775829718f",
"lang": "TeX",
"max_forks_count": 95,
"max_forks_repo_forks_event_max_datetime": "2022-03-08T17:30:22.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-03-24T21:05:03.000Z",
"max_forks_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "rinelson456/raven",
"max_forks_repo_path": "doc/tests/gamma_scgpc.tex",
"max_issues_count": 1667,
"max_issues_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef",
"max_issues_repo_issues_event_max_datetime": "2022-03-31T19:50:06.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-03-27T14:41:22.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "rinelson456/raven",
"max_issues_repo_path": "doc/tests/gamma_scgpc.tex",
"max_line_length": 176,
"max_stars_count": 159,
"max_stars_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "rinelson456/raven",
"max_stars_repo_path": "doc/tests/gamma_scgpc.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-20T13:44:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-03-24T21:07:06.000Z",
"num_tokens": 1021,
"size": 2513
} |
\input{permve-ntnu-latex-assignment.tex}
\usepackage{float}
\usepackage{tabularx}
\title{
\normalfont \normalsize
\textsc{Norwegian University of Science and Technology\\IT3105 -- Artificial Intelligence Programming}
\horrule{0.5pt} \\[0.4cm]
\huge Module 6:\\ Deep Learning for Game Playing\\
\horrule{2pt} \\[0.5cm]
}
\author{Per Magnus Veierland\\[email protected]}
\date{\normalsize\today}
\newacro{ANN}{Artificial Neural Network}
\newacro{SGD}{Stochastic Gradient Descent}
\begin{document}
\fancyfoot[C]{}
\maketitle
\newpage
\fancyfoot[C]{\thepage~of~\pageref{LastPage}} % Page numbering for right footer
\setcounter{page}{1}
\section*{Knowledge Representation}
The goal of the module is to find a way to train an \ac{ANN} with supervised learning such that it can beat a random player at the game \textsc{2048}. The assignment states the difficulty of this task clearly, and heavily suggests preprocessing the board state to expose features in a form which is easier for the network to learn.
Based on this, two preprocessing schemes have been developed to transform a board state into features which are used as inputs to the \ac{ANN}. To construct the training data for the two schemes, an evaluation function is needed to decide which move is correct for each network input.
During prototyping, a small reference algorithm was written in Python to find a set of working features upon which to base a heuristic for satisfactory play. An important factor in supervised learning is that the answer in each training example must correspond well to its inputs. It was found when building the reference implementation that greedily selecting the move which maximizes the number of open cells results in a mean score of $\approx 230$, which significantly beats the mean random play score of $\approx 110$ (see Table~\ref{table:results}). This same heuristic is used to produce the correct move for both approaches.
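A minimal sketch of this greedy evaluation (hypothetical Python in the spirit of the reference implementation; the \texttt{simulate} helper, assumed to apply a move without spawning a new tile and to return \texttt{None} for illegal moves, is not shown):
\begin{verbatim}
def best_move(board, simulate, moves=("up", "down", "left", "right")):
    # Greedy heuristic: choose the legal move leaving the most open cells.
    scored = []
    for move in moves:
        after = simulate(board, move)  # assumed helper, see above
        if after is not None:
            open_cells = sum(cell == 0 for row in after for cell in row)
            scored.append((open_cells, move))
    return max(scored)[1] if scored else None
\end{verbatim}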
The first knowledge engineering approach is the most basic of the two and involves feeding the number of possible merges for each row and each column in the current state as inputs to the network, resulting in 8 network inputs. Intuitively this should work well, as knowing how many merges are possible in each direction should be sufficient to correctly choose the optimal move according to the heuristic of maximizing the number of free cells.
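This preprocessing can be sketched as follows (hypothetical code, not the delivered implementation):
\begin{verbatim}
def merges_in_line(line):
    # Merges a move along this line could perform: adjacent equal
    # tiles pair up once zeros are squeezed out, and a merged pair
    # cannot merge again within the same move.
    tiles = [v for v in line if v != 0]
    count, i = 0, 0
    while i < len(tiles) - 1:
        if tiles[i] == tiles[i + 1]:
            count += 1
            i += 2
        else:
            i += 1
    return count

def merge_features(board):
    # 4 row counts + 4 column counts = the 8 network inputs.
    return ([merges_in_line(row) for row in board] +
            [merges_in_line(list(col)) for col in zip(*board)])
\end{verbatim}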
The second knowledge engineering approach is more complex. An obvious solution would be to feed the board state directly into the network. However, this representation is complex and involves a range of different values for each cell. In an effort to simplify the input, while still allowing the network to learn when cells can be merged, a reduction is performed on the input values. For a given board state, the number of distinct non-zero values in the state is first counted. If a row in the board state has the values \texttt{[0 2 4 2]}, then the row has two distinct non-zero values. After counting, each non-zero value on the board is reassigned to be equal to its index in the sorted list of unique non-zero values. This reduction is meant to make the network blind to the magnitude of values in a board state and only treat values according to mergeability. The number of network inputs for the second approach is 16, corresponding to the reduced board cell values.
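A sketch of the reduction (again hypothetical); the example row \texttt{[0 2 4 2]} from above reduces to \texttt{[0 1 2 1]} when 2 and 4 are the only distinct values on the board:
\begin{verbatim}
def rank_reduce(board):
    # Replace every non-zero cell by its 1-based rank among the
    # distinct non-zero values on the board; zeros stay zero. The
    # network then sees mergeability rather than tile magnitudes.
    distinct = sorted({v for row in board for v in row if v})
    rank = {v: i + 1 for i, v in enumerate(distinct)}
    return [rank.get(v, 0) for row in board for v in row]  # 16 inputs
\end{verbatim}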
An important detail when producing training examples was to leave out examples where the heuristic was not able to distinguish one move as better than any other move. Unless the heuristic clearly knows that one move is the right move for the given state, it is assumed to not be beneficial to insist that the training examples follows the same ``random'' move that the example generator ends up making.
\begin{table}
\centering
{\small
\begin{tabular}{ccccccc}
\toprule
Player & Dimensions & Hidden $f$ & Output $f$ & $\varepsilon$ [\%] & Mean score & $\sigma~\text{score}$ \\
\midrule
Random & N/A & N/A & N/A & N/A & 107.26 & 54.4479 \\
Reference & N/A & N/A & N/A & N/A & 231.87 & 125.5831 \\
Network A & $8 \times 4$ & \textsc{ReLu} & \textsc{Softmax} & 0.0000 & 250.56 & 129.0481 \\
Network B & $16 \times 512 \times 512 \times 4$ & \textsc{ReLu} & \textsc{Softmax} & 0.2563 & 221.31 & 117.8580 \\
\bottomrule
\end{tabular}
}
\caption{Statistics comparing the two chosen networks and the reference implementation. All networks use cross-entropy cost functions and are trained with learning~rate~0.08. Network~A is trained with minibatch size 40 and network~B with minibatch size 50. All scores are based on highest tile present at end of game; averaged over 1000 games. $\varepsilon~=~\text{Training set error}$. $\sigma~=~\text{standard deviation}$.}
\label{table:results}
\end{table}
\section*{Network Design}
The network inputs chosen for both representations have the same range within each representation. By using the \textit{rectifier} activation function, scaling the inputs is not required, as the activation function cannot be saturated and all values are already close to zero. The \textit{softmax} function is used for the output nodes to rank the most probable move.
The first network, using the basic knowledge representation scheme, does not require any hidden layer. It is able to train to 0\% error within the first couple of epochs. It is, however, necessary to provide a large enough number of training examples: with training examples based on 100 games played, a training error of 0\% was not achieved, but increasing the number of training examples to 1000 games was sufficient. It is clear that the first knowledge representation scheme requires little ``intelligence'' from the network. This is intuitive, as it should be straightforward to select an optimal move to maximize the number of free cells based on the number of possible merges for each row and column.
The second network requires a much larger topology. Achieving good training errors required large hidden layers, and the final network uses two hidden layers of 512 nodes each. It should be possible to train a network to a training error of 0\%, since the preprocessing used for the first network is itself a simple algorithm; however, this would likely require more training data and possibly an even larger topology.
Both networks use \textit{softmax} output nodes and the cross-entropy loss function as these yielded good results for the MNIST dataset in module~5.
Running each trained network for 1000 games and performing a \textit{Welch} $t$-test produces a $p$-value of 0.0. It is clear that network~A, with the much simpler input and topology, beats network~B, with the more complex input and topology. It is also clear that the reduction algorithm used to produce feature input for the more complex network yields good results, which could likely be improved further using the same input.
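For reference, such a test can be reproduced with SciPy as follows (the score arrays below are simulated from the reported means and standard deviations, not the actual measurements):
\begin{verbatim}
import numpy as np
from scipy import stats

# simulated per-game scores for networks A and B
scores_a = np.random.normal(250.56, 129.05, 1000)
scores_b = np.random.normal(221.31, 117.86, 1000)

# Welch's t-test does not assume equal variances
t, p = stats.ttest_ind(scores_a, scores_b, equal_var=False)
\end{verbatim}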
\section*{Play Analysis}
Both network configurations and their associated preprocessing stages are able to approximate the heuristic function well, achieving a low training error. When the training error is 0\%, bad moves occur either because the training set is not large enough for the \ac{ANN} to capture the information necessary to model the heuristic function, or because the heuristic function simply made an evaluation with poor results.
During observed play, the only bad moves seen were those caused by the simplistic heuristic function chosen.
\begin{figure}[!h]
\centering
\begin{tabularx}{\textwidth}{cXc}
\includegraphics[scale=0.35]{bad_1} & ~ & \includegraphics[scale=0.35]{bad_2} \\
\end{tabularx}
\caption{\textit{Example of poor gameplay:} Instead of moving up to gather the 4-tile, the 8-tile and the 16-tile to the right -- while keeping the 2-tiles in the upper left gathered -- the \ac{ANN} chooses to greedily merge two 2-tiles while risking worsening the board state by permitting tiles to spawn at the top of the board.}
\label{fig:N1}
\end{figure}
\begin{figure}[!h]
\centering
\begin{tabularx}{\textwidth}{cXc}
\includegraphics[scale=0.35]{good_1} & ~ & \includegraphics[scale=0.35]{good_2} \\
\end{tabularx}
\caption{\textit{Example of good gameplay:} In this scenario the board state offers the possibility to merge four tile pairs. The greedy heuristic which the \ac{ANN} models recognizes this possibility and performs four simultaneous merges while at the same time lining up two new merges.}
\label{fig:N2}
\end{figure}
\end{document}
| {
"alphanum_fraction": 0.7589084348,
"avg_line_length": 89.5757575758,
"ext": "tex",
"hexsha": "5da3d76ee61ac8872e56df9c12ee0ff9c86e1ac8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6a7e4751de47b091c1c9c59560c19a8452698d81",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "pveierland/permve-ntnu-it3105",
"max_forks_repo_path": "module_6/report/permve-ntnu-it3105-module-6.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6a7e4751de47b091c1c9c59560c19a8452698d81",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "pveierland/permve-ntnu-it3105",
"max_issues_repo_path": "module_6/report/permve-ntnu-it3105-module-6.tex",
"max_line_length": 1008,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6a7e4751de47b091c1c9c59560c19a8452698d81",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "pveierland/permve-ntnu-it3105",
"max_stars_repo_path": "module_6/report/permve-ntnu-it3105-module-6.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2027,
"size": 8868
} |
\section{lowmsg}
\index{lowmsg}
Prints its arguments to the screen and to the default logfile, so that they are seen even at the lowest verbosity. Use it instead of \texttt{puts}.
\subsection{Examples}
\begin{itemize}
\item \verb+lowmsg "Here we are"+ Puts "Here we are"
\item \verb+lowmsg "There are [extract count.particles] defects"+ Puts the total number of defects.
\end{itemize}
| {
"alphanum_fraction": 0.7533512064,
"avg_line_length": 33.9090909091,
"ext": "tex",
"hexsha": "9fd21734a4a46cff2d993c768138f5f57d26f364",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2022-03-09T10:38:14.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-12-04T03:28:14.000Z",
"max_forks_repo_head_hexsha": "df279c2103484e89898ff4e81b45fb9ad43bcb9e",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Warmshawn/MMonCa",
"max_forks_repo_path": "doc/commands/lowmsg.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "df279c2103484e89898ff4e81b45fb9ad43bcb9e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Warmshawn/MMonCa",
"max_issues_repo_path": "doc/commands/lowmsg.tex",
"max_line_length": 133,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "126744a90253d7d7884c6dc7ec100db00a106a66",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "imartinbragado/MMonCa",
"max_stars_repo_path": "doc/commands/lowmsg.tex",
"max_stars_repo_stars_event_max_datetime": "2020-05-15T09:13:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-11-23T16:20:09.000Z",
"num_tokens": 106,
"size": 373
} |
\subsection{Relations and equality}
\subsubsection{Relations}
A special type of predicate is a relation. A relation takes two terms and can be written in infix notation:
$P(x,y)\Leftrightarrow x\oplus y$
\subsubsection{Equality}
In preterite logic we define the relation for equality.
\(a=b\)
It is defined by the following:
\begin{itemize}
\item Reflexivity : \(x=x\)
\item Symmetry: \(x=y\leftrightarrow y=x\)
\item Transitivity: \(x=y\land y=z \rightarrow x=z\)
\item Substitution for functions: \(x=y\rightarrow f(x)=f(y)\)
\item Substitution for formulae: \(x=y\land P(x)\rightarrow P(y)\)
\end{itemize}
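For illustration, these properties can be checked mechanically; the following sketch states them in Lean~4, specialised to natural numbers:
\begin{verbatim}
-- reflexivity
example (x : Nat) : x = x := rfl
-- symmetry
example (x y : Nat) (h : x = y) : y = x := h.symm
-- transitivity
example (x y z : Nat) (hxy : x = y) (hyz : y = z) : x = z :=
  hxy.trans hyz
-- substitution for functions
example (f : Nat → Nat) (x y : Nat) (h : x = y) : f x = f y :=
  congrArg f h
-- substitution for formulae
example (P : Nat → Prop) (x y : Nat) (h : x = y) (hp : P x) : P y :=
  h ▸ hp
\end{verbatim}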
| {
"alphanum_fraction": 0.7305785124,
"avg_line_length": 24.2,
"ext": "tex",
"hexsha": "fe2496e1f4129c4a7186a40d74a1aedb1b60b828",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/logic/preteriteLogic/01-02-relations.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/logic/preteriteLogic/01-02-relations.tex",
"max_line_length": 96,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/logic/preteriteLogic/01-02-relations.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 172,
"size": 605
} |
%% Preamble
% ---------
% Document class
\documentclass[%
paper=A4,portrait,%
fontsize=11pt,%
]{scrreprt}
% Font
\usepackage{lmodern}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
% Language and typography
\usepackage[main=english, french]{babel}
\usepackage[autostyle=true]{csquotes}
\usepackage[babel=true]{microtype}
% Hyperref
\usepackage[hidelinks]{hyperref}
% Lorem Ipsum (for test purpose)
\usepackage{lipsum}
%% Document
% ---------
% Information
\title{A \LaTeX{} KOMA-Script report example}
\subtitle{Compiled with PDF\LaTeX{}}
\author{Alexandre Quenon}
\date{\today}
\dedication{Dedicated to all \LaTeX{} users.}
% Text
\begin{document}
%******begin******
\maketitle
\tableofcontents
\begin{abstract}
An abstract of the document.
\lipsum[1-2]
\end{abstract}
\chapter{A chapter}
\lipsum[1]
\section{A section}
\lipsum[2]
\subsection{A subsection}
\lipsum[3]
\subsubsection{A subsubsection}
\lipsum[4]
\paragraph{A paragraph}
\lipsum[5]
\subparagraph{A subparagraph}
\lipsum[6]
\addchap{An unnumbered chapter}
\lipsum[1]
\addsec{An unnumbered section}
\lipsum[2]
\subsection*{An unnumbered subsection}
\addcontentsline{toc}{subsection}{An unnumbered subsection}
\lipsum[3]
%******end******
\end{document} | {
"alphanum_fraction": 0.6530612245,
"avg_line_length": 13.1923076923,
"ext": "tex",
"hexsha": "15ccef51b9feb54dd8b65242fe5a37eacf566b70",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fb17aab27bae727267605897c6d00ab65b097f23",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Arkh42/LaTeX_magic",
"max_forks_repo_path": "Tutorials/B004__Choosing_Document_Class/Examples/pdflatex_compiler__komascript_report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fb17aab27bae727267605897c6d00ab65b097f23",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Arkh42/LaTeX_magic",
"max_issues_repo_path": "Tutorials/B004__Choosing_Document_Class/Examples/pdflatex_compiler__komascript_report.tex",
"max_line_length": 61,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "fb17aab27bae727267605897c6d00ab65b097f23",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Arkh42/LaTeX_magic",
"max_stars_repo_path": "Tutorials/B004__Choosing_Document_Class/Examples/pdflatex_compiler__komascript_report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 439,
"size": 1372
} |
\chapter{Implementation}
| {
"alphanum_fraction": 0.8076923077,
"avg_line_length": 8.6666666667,
"ext": "tex",
"hexsha": "03f2c886f46baba61af7d6a7d41569aca9b59f65",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a5622e15c5a3fbc45652ac0ee5ca0db26ecf5e44",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "yszheda/nthu-master-thesis",
"max_forks_repo_path": "data/chap4.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a5622e15c5a3fbc45652ac0ee5ca0db26ecf5e44",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "yszheda/nthu-master-thesis",
"max_issues_repo_path": "data/chap4.tex",
"max_line_length": 24,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "80b12a918514ff0babf5fce9b30cdc77c1cc6357",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "david-volz/Bachelorarbeit",
"max_stars_repo_path": "implementation.tex",
"max_stars_repo_stars_event_max_datetime": "2016-05-03T05:09:47.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-02-16T17:53:49.000Z",
"num_tokens": 5,
"size": 26
} |
\begin{frame}{\ft{LTS Team Members}}
\section{Team}
\definecolor{dclr}{RGB}{103,84,17}
\setbeamercolor{description item}{fg=dclr}
\setbeamersize{description width=10pt}
\begin{center}
\begin{minipage}{0.94\textwidth}
{\LARGE \setlength{\leftmargini}{3pt}\begin{description}
\item[Lead Software Architect] {\llsep} \\Nathaniel Christen, Doctoral Candidate, University of Ottawa.
Specializations: C++, Programming Language
Implementation, Cognitive and Computational
Linguistics, Scientific Computing, Philosophy
of Science, Digital Humanities.{\thrule}
\item[Quality Assurance and User Acceptance
Director] {\llsep} \\Ara Mehetarian,
former head of Quality Assurance at Random House and AIG.{\thrule}
\item[Medical Imaging and Data Communications Consultant] {\llsep} \\Alan H. Rowberg, M.D., formerly
RIS/PACS Manager at Northwest Hospital; Co-Developer of the DICOM protocol; formerly Co-Chair
of the DICOM Standards Committee.{\thrule}
\item[Company Founder and CEO] {\llsep} \\Amy Neustein, Ph.D., Editor-in-Chief of the \textit{International Journal of
Speech Technology}; Editor of De Gruyter Series in Text Mining in Medicine and Health Care;
Editor of SpringerBriefs in Speech Technology; Author/Editor of 12 academic books on
natural language processing, speech recognition,
text mining, speech and
automata, forensic speaker recognition,
mobile speech, and cyber-physical systems
and smart homes.\vspace{1em}
\end{description}}
\end{minipage}
\end{center}
\end{frame}
| {
"alphanum_fraction": 0.7805369128,
"avg_line_length": 40.2702702703,
"ext": "tex",
"hexsha": "43d5ea1e678eedc6157bc31a6de13f0752cc9e95",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b",
"max_forks_repo_licenses": [
"BSL-1.0"
],
"max_forks_repo_name": "ScignScape-RZ/ntxh",
"max_forks_repo_path": "NA3/presentation/slide-team.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSL-1.0"
],
"max_issues_repo_name": "ScignScape-RZ/ntxh",
"max_issues_repo_path": "NA3/presentation/slide-team.tex",
"max_line_length": 119,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b",
"max_stars_repo_licenses": [
"BSL-1.0"
],
"max_stars_repo_name": "ScignScape-RZ/ntxh",
"max_stars_repo_path": "NA3/presentation/slide-team.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 395,
"size": 1490
} |
\chapter{$\mathrm{t\bar{t}H~(H \to \gamma \gamma)}$ Analysis}
\section{Introduction} \label{sec:tth_intro}
\input{tth_analysis/introduction.tex}
\section{Overview of Analysis Strategy} \label{sec:tth_analysis_strategy}
\input{tth_analysis/analysis_strategy.tex}
\section{Preselection} \label{sec:tth_presel}
\input{tth_analysis/presel.tex}
\section{Background Description} \label{sec:tth_background_description}
\input{tth_analysis/bkg.tex}
\section{Machine Learning Algorithms} \label{sec:tth_mvas}
\input{tth_analysis/mvas.tex}
\section{Event Categorization} \label{sec:tth_event_categorization}
\input{tth_analysis/evt_cat.tex}
\section{Signal \& Background Models} \label{sec:tth_sig_bkg_models}
\input{tth_analysis/sig_bkg_models.tex}
\section{Systematic Uncertainties} \label{sec:tth_systematic_uncertainties}
\input{tth_analysis/systematic_uncertainties.tex}
\section{Results} \label{sec:tth_results}
\input{tth_analysis/results.tex}
\section{Acknowledgements} \label{sec:tth_ack}
\input{tth_analysis/ack.tex}
| {
"alphanum_fraction": 0.8101265823,
"avg_line_length": 32.09375,
"ext": "tex",
"hexsha": "47e5d1f1d05051a479670ab56c5fef2741197025",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "acd61f340e5677deba412b1b3baecd124c32440f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "sam-may/phd_thesis",
"max_forks_repo_path": "tth_analysis.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "acd61f340e5677deba412b1b3baecd124c32440f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "sam-may/phd_thesis",
"max_issues_repo_path": "tth_analysis.tex",
"max_line_length": 75,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "acd61f340e5677deba412b1b3baecd124c32440f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "sam-may/phd_thesis",
"max_stars_repo_path": "tth_analysis.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 297,
"size": 1027
} |
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{xcolor}
\usepackage{subcaption}
\usepackage{siunitx}
\addtolength{\oddsidemargin}{-.875in}
\addtolength{\evensidemargin}{-.875in}
\addtolength{\textwidth}{1.75in}
\addtolength{\topmargin}{-.875in}
\addtolength{\textheight}{1.75in}
\title{Model Predictive Control}
\date{2020}
\author{sentry5588, MIT License}
\begin{document}
\pagecolor{lightgray}
\maketitle
\section{Introduction}
This note documents the Model Predictive Control (MPC) method for the two
wheel balancing robot. Many people have built similar self-balancing robots.
Almost all of them use PID control as the control strategy. For position
estimation, some use Kalman filters, and others use complementary filters.
The purpose of using MPC in this robot is not to invent a new MPC
technique, which is usually the case for research papers,
but rather to 1) practice MPC and 2) test how well MPC behaves
compared to other control schemes.
\section{Robot Coordinates}
As shown in Figure~\ref{fig_coordinates}, $\theta_k$ and $\omega_k$ denote
the angular position and angular velocity at time step $k$, respectively.
Counter-clockwise rotation is positive. Angular acceleration is denoted
by $\dot{\omega}_k$.
$u_k$ is the horizontal force,
positive to the right. Robot specifications can be found in
Table~\ref{tab_robot_specification}.
\begin{figure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{./figures/coordinates.png}
\caption{Robot Coordinates} \label{fig_coordinates}
\end{subfigure}
\hspace*{\fill} % separation between the subfigures
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{./figures/one_d_rotation.png}
\caption{1D Rotation} \label{fig_one_d_rotation}
\end{subfigure}
\hspace*{\fill} % separation between the subfigures
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{./figures/free_body_diagram.png}
\caption{1D Free Body Diagram} \label{fig_1d_free_body_diagram}
\end{subfigure}
\caption{Two Wheel Balancing Robot} \label{fig_robots}
\end{figure}
\begin{table}
\centering
\begin{tabular}{l|c|c}
\hline
Specification & Notation & Value \\ \hline
Center of Gravity (CoG) & $h$ & ?? 0.2 m \\
Mass & $m$ & ?? 1.1 kg \\
Moment of Inertia & I & ?? 0.8 \si{\kilogram\cdot\meter^2} \\ \hline
\end{tabular}
\caption{Robot specification}
\label{tab_robot_specification}
\end{table}
\section{Problem Formulation}
The model is a 1-input-2-output model.
\begin{align}
\label{equ_orig_nonlinear_dynamics}
x_{k+1} & = f(x_k, u_k) \\
y_k & = C(x_k)x_k
\end{align}
$x_k=[\theta_k\;\;\omega_k\;\;\dot{\omega}_k]^T$ is the system state.
Linearizing Equation~(\ref{equ_orig_nonlinear_dynamics}), we have
\begin{align}
x_{k+1} & = A(x_k)x_k + B(x_k)u_k \\
y_k & = C(x_k)x_k
\end{align}
where $A(x_k)$ and $B(x_k)$ are Jacobians and given by
\begin{align}
A(x_k) = \frac{\partial f}{\partial x_k},
B(x_k) = \frac{\partial f}{\partial u_k}
\end{align}
Figure~\ref{fig_1d_free_body_diagram} is the 1-D free body diagram.
In the next subsection, I will develop the nonlinear continuous-time
dynamics.
\subsection{Nonlinear Continuous-Time Model}
\subsubsection{Derived from Newton's 2nd law}
Summing the forces in figure~\ref{fig_1d_free_body_diagram} in the horizontal
direction we have
\begin{align}
F_x=m\ddot{p}_x
\end{align}
where $p_x$ is the horizontal position of the CoG.
Summing the forces in figure~\ref{fig_1d_free_body_diagram} in the vertical
direction we have
\begin{align}
mg-F_z = m\ddot{p}_z
\label{equ_vertical_newton}
\end{align}
where $p_z$ is the vertical position of the CoG.
Summing the torques in figure~\ref{fig_1d_free_body_diagram} around the
center of gravity we have
\begin{align}
F_x h\cos(\theta) + F_z h\sin(\theta) + mg\cdot 0= \ddot{\theta}I
\label{equ_rotation_newton}
\end{align}
\textcolor{blue}{
The contact point between the robot and the ground has $0$ vertical velocity.
Its vertical velocity is a combination of the rod rotation around the
CoG, $\dot{\theta}h\sin(\theta)$,
and the vertical velocity of the CoG, $\dot{p}_z$. Therefore
\begin{align}
-\dot{\theta}h\sin(\theta) + \dot{p}_z = 0
\label{equ_velocity_equation}
\end{align}
Taking the time derivative of~(\ref{equ_velocity_equation}) gives
\begin{align}
-\ddot{\theta}h\sin(\theta) - \dot{\theta}^2h\cos(\theta) + \ddot{p}_z = 0\\
\ddot{p}_z = \ddot{\theta}h\sin(\theta) + \dot{\theta}^2h\cos(\theta)
\label{equ_ddot_p_z}
\end{align}}
Substitute $\ddot{p}_z$ in~(\ref{equ_vertical_newton})
with~(\ref{equ_ddot_p_z})
\begin{align}
mg-F_z = m(\ddot{\theta}h\sin(\theta) + \dot{\theta}^2h\cos(\theta)) \\
mg-F_z = m\ddot{\theta}h\sin(\theta) + m\dot{\theta}^2h\cos(\theta) \\
F_z = mg - m\ddot{\theta}h\sin(\theta) - m\dot{\theta}^2h\cos(\theta)
\end{align}
Substitute $F_z$ with above equation in~(\ref{equ_rotation_newton})
\begin{align}
F_xh\cos(\theta) + (mg - m\ddot{\theta}h\sin(\theta)
- m\dot{\theta}^2h\cos(\theta))h\sin(\theta) + mg\cdot 0= \ddot{\theta}I\\
F_xh\cos(\theta) + mgh\sin(\theta) - m\ddot{\theta}h^2\sin^2(\theta)
- m\dot{\theta}^2h^2\cos(\theta)\sin(\theta) = \ddot{\theta}I \\
(I+mh^2\sin^2(\theta))\ddot{\theta}+mh^2\cos(\theta)\sin(\theta)\dot{\theta}^2
=F_xh\cos(\theta) + mgh\sin(\theta)
\end{align}
So the dynamics can be written as
\begin{align}
F_x &= m\ddot{p}_x \\
(I+mh^2\sin^2(\theta))\ddot{\theta}+mh^2\cos(\theta)\sin(\theta)\dot{\theta}^2
&= F_xh\cos(\theta) + mgh\sin(\theta)
\label{equ_nonlinear_CT_dynamics}
\end{align}
\textbf{Check dynamics at special points
for~(\ref{equ_nonlinear_CT_dynamics})}
When $\theta=0$, i.e. the vertical up position,
(\ref{equ_nonlinear_CT_dynamics}) becomes
\begin{align}
I\ddot{\theta} = F_xh
\end{align}
When $\theta=\pi/2$, i.e. the horizontal position pointing to the left
(assuming a single contact point with the ground),
(\ref{equ_nonlinear_CT_dynamics}) becomes
\begin{align}
(I+mh^2)\ddot{\theta} = mgh
\end{align}
When $\theta=-\pi/2$, i.e. the horizontal position pointing to the right
(assuming a single contact point with the ground),
(\ref{equ_nonlinear_CT_dynamics}) becomes
\begin{align}
(I+mh^2)\ddot{\theta} = -mgh
\end{align}
When $\theta=\pi/4$, i.e. the position tilted $45^\circ$ to the left
(assuming a single contact point with the ground),
(\ref{equ_nonlinear_CT_dynamics}) becomes
\begin{align}
(I+\frac{1}{2}mh^2)\ddot{\theta}+\frac{1}{2}mh^2\dot{\theta}^2
&= \frac{\sqrt{2}}{2}F_xh + \frac{\sqrt{2}}{2}mgh
\end{align}
\subsubsection{Derived from Lagrangian mechanics}
The derivation follows~\cite{peacock_2007_mit_lagrange}.
The Lagrangian is $L=KE-PE$ where $KE$ and $PE$ are the
kinetic energy and potential energy, respectively.
$W$ is the virtual work.
The equation of motion can be determined by applying
Lagrange mechanics in two generalized coordinate $p_x$ and $\theta$
\begin{align}
\label{equ_lagrangian_original}
\frac{\mathrm{d}}{\mathrm{d}t}\bigg(
\frac{\partial L}{\partial \dot{p}_x}\bigg)
-\frac{\partial L}{\partial p_x}=
\frac{\partial W}{\partial p_x}, \quad
\frac{\mathrm{d}}{\mathrm{d}t}\bigg(
\frac{\partial L}{\partial \dot{\theta}}\bigg)
-\frac{\partial L}{\partial \theta}=
\frac{\partial W}{\partial \theta}
\end{align}
The contact point displacement relative to the center of gravity is
$h\theta \cos(\theta)$. Therefore the absolute velocity of
the contact point is $p_x + h\theta \cos(\theta)$.
The kinetic energy, potential energy and virtual work are
\begin{align}
KE = \frac{1}{2}m\dot{p}_x^2
+\frac{1}{2}I\dot{\theta}^2,\quad
PE = mgh\cos(\theta), \quad
W = F_x (p_x+ h\theta \cos(\theta)) + F_x h \theta \cos(\theta)
= F_x p_x + 2F_x h \theta \cos(\theta)
\label{equ_lagrangian_energy_work}
\end{align}
Substitute~(\ref{equ_lagrangian_original})
with~(\ref{equ_lagrangian_energy_work}), in $p_x$ direction we have
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}\bigg(
\frac{\partial}{\partial \dot{p}_x}\bigg(
\frac{1}{2}m\dot{p}_x^2 + \frac{1}{2}I\dot{\theta}^2\bigg)\bigg) \quad &\\
-\frac{\partial}{\partial p_x}\bigg(
\frac{1}{2}m\dot{p}_x^2+\frac{1}{2}I\dot{\theta}^2
-mgh\cos(\theta)\bigg)=&
\frac{\partial}{\partial p_x}F_x p_x
+\frac{\partial}{\partial p_x} 2F_x h \theta \cos(\theta) \\
\frac{\mathrm{d}}{\mathrm{d}t}
(m\dot{p}_x + 0)
-(0+0-0)=&
F_x + 0\\
\frac{\mathrm{d}}{\mathrm{d}t}
m\dot{p}_x = & F_x \\
m\ddot{p}_x =& F_x
\end{align}
Substitute~(\ref{equ_lagrangian_original})
with~(\ref{equ_lagrangian_energy_work}), in $\theta$ direction we have
\begin{align}
&\frac{\mathrm{d}}{\mathrm{d}t}\bigg(
\frac{\partial}{\partial\dot{\theta}}\bigg(
\frac{1}{2}m\dot{p}_x^2+\frac{1}{2}I\dot{\theta}^2
-mgh\cos(\theta)\bigg)\bigg)
-\frac{\partial}{\partial \theta}\bigg(
\frac{1}{2}m\dot{p}_x^2+\frac{1}{2}I\dot{\theta}^2
-mgh\cos(\theta)\bigg) \\
=&\frac{\partial}{\partial \theta}F_x p_x
+\frac{\partial}{\partial \theta} 2F_x h \theta \cos(\theta)
\end{align}
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}
(0+I\dot{\theta}-0)
-(0+0+mgh\sin(\theta))=&-2 F_x h\theta \sin(\theta)
+2F_x h\cos(\theta)\\
(0+I\ddot{\theta}-0)
-(0+0+mgh\sin(\theta))=& -2 F_x h\theta \sin(\theta)
+2F_x h\cos(\theta) \\
I\ddot{\theta}-mgh\sin(\theta)=&-2 F_x h\theta \sin(\theta)
+2F_x h\cos(\theta) \\
I\ddot{\theta}
=& 2 F_x h\cos(\theta)+mgh\sin(\theta)
-2 F_x h\theta \sin(\theta)
\end{align}
\subsection{Model Linearization and discretization}
I follow~\cite{zhakatayev_2017_successive_linearize_MPC}
to linearize and discretize the nonlinear continuous-time dynamics
in~(\ref{equ_nonlinear_CT_dynamics}).
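As a sketch of what this step produces (assuming, for illustration, a simple forward-Euler scheme with sampling time $\Delta t$; \cite{zhakatayev_2017_successive_linearize_MPC} may use a different discretization), the dynamics linearized around the current state become
\begin{align}
x_{k+1} \approx \left(I + \Delta t\, A_c(x_k)\right)x_k + \Delta t\, B_c(x_k)u_k
\end{align}
where $I$ here denotes the identity matrix and $A_c$, $B_c$ are the Jacobians of the continuous-time dynamics with respect to the state and the input, evaluated at the current operating point.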
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliographystyle{apalike}
\bibliography{MPC}{}
\end{document}
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
| {
"alphanum_fraction": 0.7114142887,
"avg_line_length": 32.32996633,
"ext": "tex",
"hexsha": "88e187a2c48201f6917b3ab37ad7a56fc9ad3311",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ba62de060af3bb47052e21157bb440ba55d3a7a5",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "sentry5588/two_wheeler",
"max_forks_repo_path": "control/tex/MPC.tex",
"max_issues_count": 3,
"max_issues_repo_head_hexsha": "ba62de060af3bb47052e21157bb440ba55d3a7a5",
"max_issues_repo_issues_event_max_datetime": "2019-06-11T19:27:44.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-04-04T14:43:45.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "sentry5588/two_wheeler",
"max_issues_repo_path": "control/tex/MPC.tex",
"max_line_length": 78,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "ba62de060af3bb47052e21157bb440ba55d3a7a5",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "sentry5588/two_wheeler",
"max_stars_repo_path": "control/tex/MPC.tex",
"max_stars_repo_stars_event_max_datetime": "2019-10-30T23:05:04.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-04-18T23:29:09.000Z",
"num_tokens": 3381,
"size": 9602
} |
\documentclass[a4paper,12pt]{article}
\usepackage[a4paper]{geometry}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{authblk}
\begin{document}
\title{Beam Hardening Correction CarouselFit: \\ User Guide}
\author{Ronald Fowler}
\affil{STFC Rutherford Appleton Laboratory}
\maketitle
\begin{abstract}
This document is a brief user guide to the Python software package CarouselFit. This software takes image
data from a number of known samples, e.g. from an X-ray CT machine, and fits them to a model of
the beam hardening process which occurs when a broad spectrum source is used to image a sample.
This model can then be used to generate corrections appropriate for a single material to convert
the observed attenuation values into the actual attenuation that would be observed for that material
with monochromatic X-rays at a given energy.
This software is based on the IDL package and ideas described in \cite{davis}.
\end{abstract}
\section{Introduction}
Beam hardening is well known problem that is described in many works, see \cite{davis}.
This software takes as input a number of images of well characterised samples and uses these
to fit a simple model of the expected beam hardening (BH) to the observed data.
The result is an estimate of the ``response function'', $R(E)$, which gives the expected output signal
from the detector as a function of X-ray energy for the selected combination of X-ray source, voltage, filters
and detector.
Note that due to aging effects of the X-ray source, detector, etc., this function may change over time,
so ideally the calibration measurements should be made before and after each CT scan.
In addition the model allows for variation in the form of $R(E)$ with the number of the scan line.
This can occur due to the way the emitted X-ray spectra is known to depend on the ``take-off'' angle\cite{davis}.
Use of pre-filtering the X-ray reduces both beam hardening effects and the variation of these with take-off
angle. Low energy X-rays show the greatest variation with take-off angle.
Davis recommends using both pre-filtering and the software correction described in this guide
to best minimise beam hardening artifacts.
Using the fitted response function of the system it is then possible to determine a correction curve
that will map from the observed attenuation to the true attenuation that would be seen at a given
monochromatic X-ray energy.
This correction curve is calculated assuming the sample is composed of a single material type
for which the X-ray attenuation coefficient can be determined, using a program such as XCOM\cite{xcom}.
This allows for compound materials, as long as the composition is constant.
In the case of samples made of more than one compound the correction curve will only be applicable
if one material is the dominant absorber and corrections made to that material.
The next section describes how to download and run the software.
In section 3 we describe how to set up the necessary files that give
information on the number and type of test images that are used for calibration and the
image formats.
Section 4 details how to run the fitting and post-processing image modules of the software.
The first of these fits the model to the data while the second applies the correction
directly to the CT image data.
An alternative to processing all the images is to generate a look-up table or a 4th order polynomial
to map observed attenuation values to monochromatic attenuation.
Such data can be used as input to some reconstruction software, such as the standard XTek filtered back
projection package and the CCPi CGLS iterative code.
\section{Downloading and running the software}
The software is available from the CCPForge repository.
It consists of a Python software package along with a number of data files that are used to help model the X-ray
beams and the material attenuation.
As well as a Python environment the software depends on a number of additional packages being available.
An easy way to access most of the required packages is to download the Anaconda Python environment which is
available for Linux, MacOS and Windows systems from \url{https://www.continuum.io/downloads}.
The software has been developed using Python version 2.7.
It is recommended that the user installs this before installing the CarouselFit software.
Alternatively the user may install the required packages in their local Python installation, if they are not
already available.
The main Python modules that may need to be added to a local installation are:
\begin{itemize}
\item numpy - needed for array operations
\item matplotlib - needed for plotting
\item scipy - needed for optimization
\item tifffile - only needed if corrections are to be applied to tiff images
\end{itemize}
The CarouselFit software can be checked out to a suitable directory using the command:
\begin{verbatim}
svn co https://ccpforge.cse.rl.ac.uk/svn/tomo_bhc/branches/release01 carouselFit
\end{verbatim}
This will create a set of three directories under \texttt{carouselFit}:
\begin{itemize}
\item \texttt{src:} this contains the Python source code
\item \texttt{doc:} this contains documentation of the software
\item \texttt{test:} this contains several sub-directories with information on attenuation and X-ray spectra.
The source code must be executed from this directory and any updates to the carousel or crown information
should be made in the \texttt{carouselData} sub-directory.
\end{itemize}
After downloading the software the installation can be checked by running Python in the \texttt{test} directory
and reading the example script file.
On Linux and MacOS this could be done from a command prompt, assuming that a suitable version of Python is in the system PATH by typing:
\begin{verbatim}
python ../src/runCarouselFit.py
read script.short
quit
\end{verbatim}
This set of commands should run without generating any error messages, such as failure to import modules.
If missing modules are reported it will be necessary to add these to the Python system and run the test script again.
Check the documentation for your Python system to see how to add modules.
On Windows systems Anaconda python can be accessed from the Start Menu after it has been installed.
The software can be downloaded using the command line version of svn, e.g. via Cygwin, or using the GUI
provided by TortoiseSVN \url{https://tortoisesvn.net/}.
Once the software is installed, start an IP Python window (or similar) from the Start Menu and navigate to the
\texttt{test} directory as mentioned above. Then use the \texttt{\%run} command to execute the code, e.g.:
\begin{verbatim}
cd c:/svnpath/test
%run ../src/runCarouselFit.py
read script.short
quit
\end{verbatim}
\section{Configuration files}
The original calibration device described in \cite{davis} was called a carousel, as it was built from a set of 9 test samples
arranged between two circular supports allowing for each of the samples to be imaged individually by the scanner.
The samples would cover the full range of lines in the scanner, but not the full range of each row; typically only
the centre half of each row would be covered by the sample.
A more recent calibration device has been developed at staff at the Research Centre at Harwell (RCaH) which is
known as a crown. This device allows a larger number of samples to be mounted.
In this case the sample usually covers all lines and rows of the image.
\subsection{Carousel sample definition file}
The materials mounted on the carousel, or crown, must be described in a simple ASCII file which is stored
in the \texttt{test/carouselData} directory.
An example of the format that was used for the carousel from QMUL is shown below.
\begin{verbatim}
# carousel definition file based on data from QMUL 17/11/14
10
Cu,Ti,Ti,Ti,Al,Al,Al,Al,Al,NOTHING
8.92,4.506,4.506,4.506,2.698,2.698,2.698,2.698,2.698,1
0.2093,0.4420,0.2210,0.1105,0.3976,0.1988,0.0994,0.0497,0.02,0.
\end{verbatim}
This illustrates a case where there are 9 sample materials in the carousel.
In this case all the samples are pure metals of known thickness and density.
It is important to emphasize that the calibration depends on the sample materials
being very well characterised.
If a large error exists in either the thickness or purity of a sample this can undermine
the accuracy of the fitting process.
No exact guidelines have yet been defined on the best set of test materials to use, but obviously
samples of the material the forms the dominant absorber in the imaged target would be ideal.
However, this is often not practical in many cases, such as bone and teeth studies, where calcium metal
is the prime absorber, but samples of the pure metal are subject chemical reactions in air.
As long as the energy dependence of the sample attenuation coefficient, $\mu(E)$, is not too different to that of
target dominant absorber then the calibration method should work.
Some possible problems may occur if the sample has sharp steps in $\mu(E)$ due to band edges that lie in the
response range of the system which are not seen in the target material.
For example, compare the attenuation of Sn with that of Ca in the range 0 to 75KeV.
The above file uses the simple format:
\begin{itemize}
\item{line1:} a comment line, starting with \#, to describe the file
\item{line2:} a single integer giving the number of sample materials plus 1
\item{line3:} a set of comma separated strings giving the names of each sample, with no spaces. The
number of names must be the same as the previous number, with the final one named "NOTHING".
In this case the samples are all pure metals and the chemical symbol has been used as the name.
However any name be used as long as a corresponding file with the extension \texttt{.txt} exists
in the directory \texttt{test/xcom}. This file gives the energy dependent $\mu(E)$ for this sample
in steps of 0.5KeV from 0 to the maximum expected energy.
\item{line4:} a set of comma separated values giving the density (in g/cm3) of each sample. A dummy
value of 1 is used for the final material.
\item{line5:} a set of comma separated values giving the thickness of each sample in cm. A dummy value of
0. is added on the end.
\end{itemize}
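As an illustration of the format (a sketch, not part of the CarouselFit package), a minimal Python reader for such a definition file might look like:
\begin{verbatim}
def read_carousel_def(path):
    # skip blank lines; line 1 is a '#' comment
    with open(path) as f:
        lines = [l.strip() for l in f if l.strip()]
    n = int(lines[1])                       # number of samples + 1
    names = lines[2].split(',')             # sample names
    density = [float(x) for x in lines[3].split(',')]  # g/cm3
    width = [float(x) for x in lines[4].split(',')]    # cm
    assert len(names) == len(density) == len(width) == n
    return names, density, width
\end{verbatim}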
If a sample type other than the ones already described in \texttt{test/xcom} is used, it is necessary to
create a file of the attenuation values of that sample.
See the Readme file in that directory for details.
The thickness range of the samples should aim to cover the range of attenuations that are expected in the test sample.
\subsection{Sample image data file}
In addition to a description of the samples in the carousel it is also necessary to define the format of the sample
images and details of the X-ray source, filters and detector.
This is done via another file in the directory \texttt{test/carouselData} which has the default extension \texttt{.data}.
One such file must be generated for each calibration case, while the above carousel definition file will only change
if the samples are changed.
Again a simple ASCII format is used to define the necessary values.
An example is shown below:
\begin{verbatim}
# data for one QMUL calibration run
80 # voltage
22 # take of angle [not used by default]
W # target material
19.25 # target density
600 # image res rows
800 # image res lines
carouselData/run001.img # image file
float32 # data type in image file
2 # number of filters
Al # filter material
0.12 # filter width
2.698 # filter density
Cu # filter material
0.1 # filter width - 0.1
8.92 # filter density
CsI # detector material
0.01 # starting value for detector thickness
4.51 # detector density
\end{verbatim}
The format has one value per line with a comment to described the value.
Most of these are self describing, such as the accelerating voltage, the take-off angle,
the target material (tungsten, W) and its density, for the X-ray source.
The path to the file containing the sample images must be included in this file.
All the images must currently be in a single file.
The format used above, \texttt{float32}, assumes a binary format with 9 separate images of $600 \times 800$ 32bit floating
point values.
Each value is $\log(I_0/I)$ for that pixel, with flat/dark field corrections.
Another supported format is \texttt{uint16}. In this case the sample images values are unsigned 16 bit values of the $I$ value.
Again these are all packed in order in a single file. The first image of the file is the (shading corrected) flat field image.
The $I_0$ value is taken as the average of this initial image.
A variation on \texttt{uint16} format, which is slightly more compact, is labelled as \texttt{uint64\_65535}.
This format is again unsigned 16 bit images, but it assumes that the data has been corrected for flat and dark fields
and that it has been normalised to a white level of 65535.
This means that the raw binary file no longer needs an initial image giving the white level.
This is the format that is generated by the Python script \texttt{average\_mat.py}, which converts TIFF image files into this format.
See Appendix A for details of using this program.
Usually a set of filters are used to limit the energy range of the X-ray beam. In the case of the QMUL data they
normally employ two filters with 0.12cm of Al and 0.1cm of Cu, as shown in the above file.
As the fitting process includes varying the exact Cu filter width it is recommended that a zero width Cu filter element is included
even if no Cu was used in the actual imaging.
The definition of the detector material is important and tests to date have been made with CsI. However other materials may be used
if their attenuation profile is included in the \texttt{text/xcom} directory.
Since the width of the detector maybe used as a fitting parameter it is not essential to specify an accurate value, though this
will be used in the command \texttt{showspec}, if it is run before a fit has been performed.
\section{The command line interface}
\subsection{Command list}
When the Python software is started from Python or a similar environment, a simple command prompt is issued.
Typing \texttt{help} will give a list of the available commands.
The commands are:
\begin{itemize}
\item{\bf read \it{filename}} This command opens the given file and reads commands from it until end of file.
Control is then returned to the command line. Do not include blank lines in the command script.
\item{\bf load \it{file.def} \it{file.data}} This reads the definition file for the carousel and the data
relating to the actual calibration images. These two files must exist and are described in the previous section.
they are normally located in the \texttt{test/carouselData} directory.
This is usually the first command to issue since most others need this data to be present.
\item{\bf quit} Exit the program.
\item{\bf help} Give a list of available commands.
\item{\bf showcor \it{[l1 l2...]}} This command will plot the attenuation correction curve for any one or more lines. If no arguments are given
it will plot the first, middle and last correction lines. The matplotlib zoom feature can be used to focus on a particular region of the
plot. It can only be used after a fit has been performed.
The correction is shown in the space of $\log(I_0/I)$.
\item{\bf showimag} This command will plot the images of each sample in one window. It may be useful to check for problems with the samples.
It can only be used after data has been loaded.
\item{\bf fitatt nlines \it{[linestep]}} This command attempts to fit the model to the selected samples (see mask command). The
number of lines of data to fit must be given. This may be followed by a ``step'', e.g. 10 to use every 10th line.
This can be useful when using many lines as fitting all of them can be very slow and the
fit may not be improved using more data.
The time to fit also increases with the number of variables that have been selected with the ``vary'' command.
Fitting to a few lines can be a good way to see if the model fits and give a better initial guess for a fit
to a larger subset of the data.
\item{\bf vary \it{[target|detector|filter|energy|spectra npoly]}} On its own this command lists the order of polynomial used in fitting the line wise
dependence of each of the three main parameters, \textit{target width},
\textit{detector width} and \textit{filter width}.
The setting "-1" indicates that the value should be held constant, as set by the initguess command.
Using "0" indicates the value will be fitted, but is independent of the line number.
Setting to "1" gives a fit allowing a linear variation of the value with line number.
For example:
\begin{verbatim}
vary filter 0
vary detector -1
vary target 1
\end{verbatim}
will allow a single fitted value for the filter width, the detector width held constant, and
the target (filter) width to vary linearly with the line number.
The fit time increases significantly with the order used and values greater than 1 are not recommended.
An experimental option is to allow extra terms to be added to the normally linear dependence of the detector response to the photon energy,
e.g. $E+\alpha E^2$.
Note that energy dependence is NOT related to line number in this case.
However this polynomial is not constrained to be positive and the fit may fail.
Keeping energy variation off (-1) is recommended.
The final option, ``spectra'', defaults to 0, i.e. on, when no pre-defined
spectra are present, which is the case for the open source release of the package.
Setting spectra to 0 causes the calculated spectra to be modelled as a simple non-symmetric Gaussian form with 3 parameters,
\textit{peak}, \textit{inverse left width} and \textit{inverse right width}.
If pre-computed spectra are available, e.g. from spekCalc, these can be used in preference to the
Gaussian by setting vary spectra -1.
\item{\bf initguess \it{[$s_1$ $s_2$ $s_3$}[$s_4$ $s_5$ $s_6$ $s_7$]]} Set the initial guess to be used by fitatt.
$s_1$ is the width of the target filter (usually tungsten), $s_2$ is the log width of the detector (usually CsI) and $s_3$ is the width of the
fitted filter (usually copper). Commonly used values for the initial guess are 0.01 -6.0 0.01.
If using the experimental feature ``vary spectra 0'' then 4 additional values can be given, which are the initial value of the
energy term (should be zero) plus the Gaussian centre and widths, e.g. 0.01 -6.0 0.01 0.0 40.0 0.05 0.05.
When loading data the Gaussian peak is set to half the maximum X-ray energy.
Using this command with no parameters gives the current settings on the values.
\item{\bf mask \it{[n1 n2..]}} Without arguments this shows the set of masks that control if a given sample will be used in the next fit operation.
By default all values are true, which means that sample will be used in the fit. Samples are labeled from 1 to $n$, and to mask the $m$th sample
that number should be given as an argument to the mask command. A negative value can be used to unmask a previously masked sample.
\item{\bf setcormat \it{material energy}} This command must be used before a fit operation to define the material and energy to which the correction
curve should be determined. For example \texttt{setcormat Al 40} sets the correction curve to be calculated for Aluminium at 40KeV.
At present if the correction material or energy are altered it is necessary to rerun the fit command.
\item{\bf transform} This is an experimental command which will be removed in future.
\item{\bf showspec \it{[line]}} - plot three spectra, the input X-ray spectrum, the filtered spectrum and the response spectrum.
Should only be used after a fit has been made. This command needs improving since the ``filtered'' plot is not meaningful.
Also the printed attenuation values are not useful since these are not fitted to.
If a line number is given, plots are for that line. The default is line 0.
\item{\bf showatt \it[nsamp nline]} - plot the sample attenuations along a specific line. By default this shows the
attenuation for all samples at line 400. Samples are labeled 0 to $n-1$ in this case.
\item{\bf debug} - set debugging option, for diagnostic purposes only.
\item{\bf showconf} - list some of the settings, such as the filters, detector, source and voltage.
\item{\bf setwidth \it[width]} - without arguments, prints the width, in pixels, used to average over each line to get the mean
attenuation. For the QMUL data, where the sample does not cover the whole image, it is important to ensure this does not
exceed the true sample width. For the RCaH data, where the image does cover the whole width, a larger value can be used.
\item{\bf setfilter \it[material width]} - without arguments lists the filters defined. Can also be used
to change the width of existing filters, though not add new ones. Used for debugging.
\item{\bf setoptions \it[solver=old$\vert$new]} - set option. Currently only allows switching
between old and new least square solvers in scipy. The old version is more widely available
and is the default.
\end{itemize}
\subsection{Using the software}
As described in section 3 it is necessary to write the definition files that describe the carousel and the particular
test case that is being treated.
The latter file must also point to the data file that contains the sample images in a suitable format.
It is assumed that corrections for dark and flat field images have been applied to the images before they are
passed to the software.
A simple partial analysis might consist of the following steps:
\begin{verbatim}
load carouselData/carousel0.def carouselData/run001.data
showimg
showatt
\end{verbatim}
The first command loads the definition and run data from files, while the next two commands
plot the 2D images and 1D cuts along line=400.
\begin{verbatim}
setcormat CaHydro 40
vary target 1
vary detector 0
vary filter 1
initguess .01 -6 .01
fitatt 800 10
showspec
showcor
\end{verbatim}
These commands then set the material and energy to which we wish to correct the data via the \texttt{setcormat}
command, and then alter the default orders of the fit variables.
The \texttt{fitatt} command fits the given initial guess using the lines of the image data, 800, but only every
10th line.
This fit may take 60 seconds. Finally the fitted spectrum and correction curves are plotted.
The correction curves are stored in the same format as used in the earlier IDL code as separate 8th order polynomial
fits to the correction data in a file called polyfit.npz.
These curves are the ones shown by the \texttt{showcor} command above.
To actually apply the correction to image data requires the use of another Python program, \texttt{applyTrans.py}.
In addition to the above 8th order polynomials, 4th order fits are also written to the
output file \textit{param.log}.
The 4th order polynomial values are written at the end of the file, one set per line if the solution includes
variation with line number.
These values can be used in the xtekct file for the parameters X0 to X4, X0 being the rightmost value in
\textit{param.log}.
If the variation of the correction with the line number is significant it would be better to correct
each projection individually, as described in the next section.
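For illustration, applying a 4th order correction polynomial of this form to observed attenuation values can be sketched in Python as follows (the coefficient values are placeholders, not fitted results):
\begin{verbatim}
import numpy as np

# placeholder coefficients X0..X4 read from the end of param.log
X = [0.0, 1.05, 0.12, -0.003, 0.0001]

def correct(att_observed):
    # evaluate X0 + X1*a + X2*a^2 + X3*a^3 + X4*a^4;
    # np.polyval expects the highest-order coefficient first
    return np.polyval(X[::-1], att_observed)
\end{verbatim}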
\subsection{applyTrans.py}
The Python script applyTrans.py can be used to update image files using the correction curves calculated by
the above fitting process.
It can also calculate a file of type \texttt{.bht} which can be used by XtekCT machines to correct the image
data used in CT analysis. In the latter case only one correction curve is applied to all the data, in the same
way that using the 4th order polynomial fit does.
The syntax of the command can be seen using the \texttt{-h} option, which gives:
\begin{verbatim}
applyTrans.py [-r rows -l lines -p poly.npz -w whiteLevel -x file.bht]
[-d] [file1.ext] [filen.ext]
\end{verbatim}
In the above data it is usually necessary to specify the image size in rows and lines.
If all the image data is stored in a single file with data type float32, as used for
some data from QMUL, then the following command can be used to process it:
\begin{verbatim}
python ../src/applyTrans.py -r 600 -l 800 images.raw
\end{verbatim}
In this case the default file \texttt{polyfit.npz} is read to find the correction curves.
If 800 curves are present then one will be applied to each line in the image.
If only one correction curve is present then this one correction will be used on all image lines.
The processed output will be written to \texttt{bhc\_images.raw}.
Note that the \texttt{whiteLevel} parameter is not needed in this case as the \texttt{.raw} extension
is taken to imply \texttt{float32} data of $log(I_0/I)$.
To generate a \texttt{.bht} correction file the following command can be used:
\begin{verbatim}
python ../src/applyTrans.py -b -x xtekct.bht -w 59200
\end{verbatim}
In this case only the file \texttt{xtekct.bht} is generated. It is necessary to provide an accurate estimate
of the white level since any pixels above this are mapped to no attenuation.
\begin{thebibliography}{9}
\bibitem{davis}
Graham R. Davis, Anthony N.Z. Evershed, and David Mills,
\emph{Quantitative high contrast X-ray microtomography for dental research},
Journal of Dentistry,
Volume 41, Issue 5, May 2013, Pages 475–482.
\bibitem{xcom}
M.J. Berger, J.H. Hubbell, S.M. Seltzer, J. Chang, J.S. Coursey, R. Sukumar, D.S. Zucker, and K. Olsen,
\emph{https://www.nist.gov/pml/xcom-photon-cross-sections-database}
\end{thebibliography}
\end{document}
| {
"alphanum_fraction": 0.7761978362,
"avg_line_length": 57.1302428256,
"ext": "tex",
"hexsha": "1538251dfa1b5e7fa79ba5afa504ee82b5e7dd4e",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-08-02T22:11:56.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-01-11T09:10:21.000Z",
"max_forks_repo_head_hexsha": "f498cb2c9a454ae7fd74ee6ee6f8c9dfdfe51a7c",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "TomasKulhanek/CCPi-PreProcessing",
"max_forks_repo_path": "Wrappers/Python/doc/beamhardening/userguide.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "f498cb2c9a454ae7fd74ee6ee6f8c9dfdfe51a7c",
"max_issues_repo_issues_event_max_datetime": "2019-01-09T10:42:11.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-01-09T09:39:50.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "TomasKulhanek/CCPi-PreProcessing",
"max_issues_repo_path": "Wrappers/Python/doc/beamhardening/userguide.tex",
"max_line_length": 150,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "f498cb2c9a454ae7fd74ee6ee6f8c9dfdfe51a7c",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "TomasKulhanek/CCPi-PreProcessing",
"max_stars_repo_path": "Wrappers/Python/doc/beamhardening/userguide.tex",
"max_stars_repo_stars_event_max_datetime": "2019-03-22T16:23:29.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-03-22T16:23:29.000Z",
"num_tokens": 6143,
"size": 25880
} |
\subsection{Depth-limited search}
Depth-first search with a depth limit. This is useful if we know the solution is no deeper than the limit \(l\).
Informed: No
Time: \(O(b^l)\), where \(b\) is the branching factor
Space: \(O(bl)\)
Complete: No (yes if a solution lies within depth \(l\))
Optimal: No
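A minimal recursive sketch in Python (illustrative only):
\begin{verbatim}
def dls(node, goal, limit, children):
    # returns the goal node, 'cutoff' if the depth limit was hit,
    # or None if no solution exists within the limit
    if node == goal:
        return node
    if limit == 0:
        return 'cutoff'
    cutoff = False
    for child in children(node):
        result = dls(child, goal, limit - 1, children)
        if result == 'cutoff':
            cutoff = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff else None
\end{verbatim}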
| {
"alphanum_fraction": 0.7368421053,
"avg_line_length": 11.875,
"ext": "tex",
"hexsha": "b20a9e3c63ec05349e4b884ad723b7f61ecdb67c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/computer/nodes/02-03-DLS.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/computer/nodes/02-03-DLS.tex",
"max_line_length": 102,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/computer/nodes/02-03-DLS.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 47,
"size": 190
} |
\section{Decompositions}
%!TeX program=xelatex
\documentclass[a4paper]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[margin=1in]{geometry}
\usepackage{xcolor}
\usepackage{hyperref}
\hypersetup{
colorlinks,
linkcolor={red!50!black},
citecolor={blue!50!black},
urlcolor={green!50!black}
}
\usepackage{subcaption}
\usepackage[numbers]{natbib}
\usepackage{fontspec}
\setmainfont[Ligatures=TeX]{Latin Modern Roman}
\newcommand{\todo}[1]{{\color{red}{[\textbf{TODO:} \emph{#1}]}}}
\renewcommand{\vec}{\boldsymbol}
\newcommand{\vectilde}[1]{\tilde{\boldsymbol{#1}}}
\begin{document}
\title{Radio~Galaxy~Zoo Classification Pipeline}
\author{Matthew Alger \\ \emph{The Australian National University}}
\maketitle
In this document, I will describe the Radio Galaxy Zoo (RGZ) classification pipeline that I have implemented.
\section{Definitions}
% A \emph{radio source} is a single area of the sky emitting radio waves. This may be a black hole, or a jet from a black hole.
% A \emph{host galaxy}
A \emph{Radio Galaxy Zoo subject} is a representation of one radio source. It consists of a location in the sky (specified in right ascension/declination coordinates), an image of the sky at this location in radio wavelengths, and an image of the sky at this location in infrared wavelengths. The subject may contain other nearby radio sources.
The Radio Galaxy Zoo \emph{crowd classifications} are crowdsourced solutions to the classification task. Each crowd classification contains the combination of radio sources in a subject that a volunteer associates with the same active galactic nucleus (AGN), as well as the location where the volunteer believes the host galaxy is located. There are multiple crowd classifications for each RGZ subject.
\section{The Classification Task}
The goal of the classification task is to locate the host galaxy of the subject.
\subsection{Pipeline Inputs and Outputs}
As input to the classification pipeline we take a RGZ subject and (for training) a set of associated crowd classifications. The output of the pipeline is the location of the host galaxy associated with the subject.
\subsection{Assumptions and Limitations}
I am ignoring the fact that a RGZ subject may contain multiple host galaxies, and instead assuming that there is only one host galaxy per subject.
I am exclusively working with the Australia Telescope Large-Area Survey (ATLAS)\citep{norris06} data set for now, though I expect my results to generalise to both Faint Images of the Radio Sky at Twenty-Centimeters (FIRST)\citep{becker95} and the upcoming Evolutionary Map of the Universe (EMU)\citep{norris11}. This is important as the majority of RGZ subjects are from FIRST, and the vast majority of subjects to be classified in future will be from EMU. The reason for this limitation is twofold: the ATLAS data set is small and well-known (containing $2443$ radio subjects), and thus provides a good data set for exploring machine learning techniques; and the ATLAS data set is similar to the data that will be collected in EMU\citep{banfield15}.
I am assuming that radio sources associated with a host galaxy will be ``small'', i.e., that they are less than $2$ arcminutes in diameter. $2$ arcminutes is the width of an image presented to RGZ volunteers. This assumption does not hold in general, as some radio sources can be spread over a very large area and these are known to be present in the RGZ data\citep{banfield16}.
\section{Collating Crowd Classifications}
Raw crowd classifications are not immediately useful. There are multiple classifications for the same subject, and these may not agree. The first step in the pipeline is thus to collate the crowd classifications into labels for training. There are two components to collation. The first is collating the radio components associated with the same AGN, and the second is collating the locations of the host galaxies associated with each such set of radio components. The collated radio components are called the \emph{consensus radio combination} and the collated host galaxy locations are called the \emph{consensus host galaxy locations}.
After collation, the crowd classifications provide us with a map between RGZ subjects and host galaxies.
\subsection{Radio Components}
\todo{Detail the method of collating radio components. My new method differs from Kyle's.}
Collating the radio components is straightforward and I loosely follow the method of \citet{banfield15}. I count the occurrences of each unique radio combination, and then the most popular radio combination is considered the consensus radio combination. \todo{Elaborate.}
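As a rough illustration, this majority vote can be implemented with a counter. The sketch below makes an assumption about data representation, not about the pipeline's actual code: each volunteer's radio combination is encoded as a frozenset of radio component IDs.
\begin{verbatim}
from collections import Counter

def consensus_radio_combination(combinations):
    # `combinations` is an iterable of frozensets of radio component
    # IDs, one per volunteer classification of the same subject.
    counts = Counter(combinations)
    combination, _ = counts.most_common(1)[0]
    return combination
\end{verbatim}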
\subsection{Locations}
Collating the locations of the AGNs associated with each radio combination is more complicated. \citet{banfield15} use kernel density estimation to find the most common location chosen by volunteers, however this is not robust and does not allow us to find which galaxy was intended to be chosen by each volunteer (which is useful if we want to estimate the uncertainty in the consensus). Instead, I cluster the volunteers' locations using PG-means\citep{hamerly07} and choose the cluster with the most members as the consensus host galaxy location. This results in \todo{statistics}.
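PG-means chooses the number of clusters automatically and is not part of standard Python libraries, so the sketch below substitutes scikit-learn's Gaussian mixture with a fixed number of components purely to illustrate the ``largest cluster wins'' step; it is not the pipeline's actual implementation.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def consensus_location(clicks, n_components=3):
    # `clicks` is an (n, 2) array of volunteer click positions.
    gmm = GaussianMixture(n_components=n_components).fit(clicks)
    assignments = gmm.predict(clicks)
    # Return the centre of the most populated cluster.
    largest_cluster = np.argmax(np.bincount(assignments))
    return gmm.means_[largest_cluster]
\end{verbatim}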
\section{Locating Host Galaxies as Binary Classification}
Each subject contains a number of potential host galaxies. We can cast the problem of finding the true host galaxy as binary classification by labelling each potential host galaxy with $0$ if it is not the true host and $1$ if it is.
To find potential host galaxies, we use the Spitzer Wide-Area Infrared Extragalactic Survey (SWIRE) Chandra Deep Field South (CDFS) Region Fall '05 Spitzer Catalog\todo{Cite.} and the SWIRE European Large Area ISO Survey --- South 1 (ELAIS-S1) Region Fall '05 Spitzer Catalog\todo{Cite.}, available through the Infrared Science Archive's GATOR interface\footnote{\url{http://irsa.ipac.caltech.edu/applications/Gator/}}. These catalogues contain all infrared galaxies detected in the CDFS and ELAIS-S1 regions, which are the regions covered by ATLAS. \todo{Show a diagram.}
Finding the labels for each potential host amounts to finding the nearest SWIRE potential host for each true host identified by the crowd classifications. For the location in each crowd classification, the nearest SWIRE potential host is found and its label is set to $1$. All other labels are set to $0$.
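The nearest-neighbour matching can be sketched with astropy's catalogue matching; the function below is illustrative, and the array inputs (in degrees) are assumptions about how the catalogue is stored.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def label_swire_hosts(host_ra, host_dec, swire_ra, swire_dec):
    # Label each SWIRE object 1 if it is the nearest object to a
    # consensus host location, and 0 otherwise.
    hosts = SkyCoord(ra=host_ra * u.deg, dec=host_dec * u.deg)
    catalog = SkyCoord(ra=swire_ra * u.deg, dec=swire_dec * u.deg)
    idx, _, _ = hosts.match_to_catalog_sky(catalog)
    labels = np.zeros(len(swire_ra), dtype=int)
    labels[idx] = 1
    return labels
\end{verbatim}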
\bibliographystyle{abbrvnat}
\bibliography{papers}
\end{document}
%% LyX 2.3.3 created this file. For more info, see http://www.lyx.org/.
%% Do not edit unless you really know what you are doing.
\documentclass[11pt]{article}
\usepackage[latin9]{inputenc}
\usepackage{url}
\usepackage{mathtools}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{listings}
\usepackage{booktabs}
\makeatletter
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands.
%Gummi|065|=)
\usepackage{url}
\title{\textbf{Capstone Project}}
\author{Allyson Julian}
\date{February 8, 2020}
\makeatother
\begin{document}
\maketitle
\section{Definition}
\subsection{Project Overview}
Advances in agricultural technology over the past 30 years have made
it easier for farmers to manage their farms, particularly when the
farms are comprised of multiple fields greater than 1000 acres in
size. The use of GPS, aerial imagery, drones, and similar technologies
has been instrumental in precision agriculture \cite{liakos_machine_2018}.
But one of the challenges of modern farming is the detection of plant
diseases. For large farms in particular, it is time-consuming for
a farmer to manually check each growing plant for disease. It can
potentially be more cost-effective to diagnose plant diseases with
automated tools \cite{fujita_basic_2016}.
For this project, I built the Python library \textbf{ikapati} (named after the Filipino goddess of agriculture), which implements a plant disease detector and then deployed the best-performing model to an embedded system built specifically for use in Artificial Intelligence applications, a Jetson Nano robot.
The library also provides utility tools for many of the tasks required in machine learning research, including the image preprocessing necessary to prepare the data for classification, scripts to train the classifier itself, and utility functions to help evaluate performance.
\subsection{Problem Statement}
The main objective for this project was to build a library that can be used for training a plant disease classifier and in doing so act as a framework with which to do further research in this subject area. It can be used to complete an entire machine learning pipeline from training to deployment to an embedded system.
To build this library, I needed to:
\begin{enumerate}
\item Retrieve the PlantVillage Dataset (\url{https://github.com/spMohanty/PlantVillage-Dataset}).
\item Prepare the raw color images from the PlantVillage Dataset for consumption by the model.
\item Train the model to classify the different diseases for a given species of plant.
\item Save the model as a TensorFlow Lite object for use in an embedded system.
\end{enumerate}
\subsection{Metrics}
Accuracy and loss were the main metrics used in this project to evaluate overall model performance.
Accuracy is generally defined as such:
\begin{equation}
\frac{TP + TN}{TP + TN + FP + FN}
\end{equation}
Where $TP$ = true positives, $TN$ = true negatives, $FP$ = false positives, $FN$ = false negatives.
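For example, with $TP = 90$, $TN = 850$, $FP = 10$, and $FN = 50$, the accuracy would be $(90 + 850)/1000 = 0.94$.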
Validation and training loss were used to determine whether the model was overfitting, underfitting, or fitting adequately.
If the validation loss curve decreases until a certain point and then starts to increase again, it is indicative of overfitting.
If the curve of the training loss remains flat, the model may be underfit.
\clearpage
\section{Analysis}
\subsection{Data Exploration}
The main datasets used were obtained from the PlantVillage Dataset: \url{https://github.com/spMohanty/PlantVillage-Dataset}
The dataset consists of 52,803 JPEG images with the dimensions 256x256. All of the images are colored, so there are 3 channels corresponding to the RGB mode.
The images are cropped in such a way that the leaves are centered in the picture, and have a fairly uniform-looking, solid background.
The images are split up into several folders, labelled according to the plant species (e.g. Peach) and disease class (e.g. Bacterial spot).
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=\linewidth]{figures/examples/corn.jpg}
\caption{Corn leaf spot}
\end{subfigure}
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=\linewidth]{figures/examples/pepper.jpg}
\caption{Pepper}
\end{subfigure}
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=\linewidth]{figures/examples/scab.jpg}
\caption{Apple Scab}
\end{subfigure}
\caption{Example images in dataset.}
\label{fig:examples}
\end{figure}
\input{figures/data_exploration.tex}
Further data exploration can be viewed in the 1.0-agj-data-exploration.ipynb notebook.
\subsection{Algorithms and Techniques}
\subsubsection{An Overview of Convolutional Neural Networks}
%% A-convolutional-neural-networks-CNN.png
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{figures/A-convolutional-neural-networks-CNN.png}
\caption{Convolutional Neural Network diagram from \cite{kamencay_new_2017}}
\label{fig:cnn}
\end{figure}
Plant disease detection, when done by a human expert, often involves their active observation of a given leaf and discernment of any patterns in the visual appearance of the leaf that may be indicative of disease. In this project, the plant disease detector is meant to mimic the operations done by a human expert in plant disease diagnosis.
Diagnosing disease in the context of machine learning can be described as the task of receiving a visual input, in this case a colored image, and producing an output that corresponds to what disease the plant has (if any) based on patterns observed in the image. With that perspective in mind, the problem of plant disease detection can be approached as a classification problem where there are multiple classes to which an input can be assigned.
\textbf{Artificial Neural Networks (ANN)}, which attempt to mimic human brain function in a digital fashion, are well-suited to classification problems. They can be described as inter-connected nodes (neurons) that process some input as a distributed unit (network) in sequential layers, which then discern patterns based on the characteristics of that input, and then produce an output corresponding to the class to which the input belongs.
ANNs have multiple connected layers (or sequences of neurons) that look at the characteristics of its inputs, then uses an activation function like sigmoid to produce outputs for each neuron that can be passed onto the next layer as its input. A simple ANN consists of an input layer that holds the values of an input (e.g. pixel values for an image), a hidden layer, and an output layer that coalesces the activation results of previous layers into an output corresponding to some class \cite{oshea_introduction_2015}.
A particular type of ANN, a \textbf{Convolutional Neural Network (CNN)}, is particularly useful for image classification tasks. Like standard ANNs, CNNs are comprised of multiple layers of neurons, but consist of \textbf{convolution}, \textbf{pooling}, and \textbf{fully-connected} layers (Figure ~\ref{fig:cnn}).
A \textbf{convolution} layer is an alternative to a fully-connected layer in the sense that it localizes the analysis to particular regions of the input rather than analyzing the whole input which can be helpful in reducing complexity and improving the efficiency of the model \cite{albawi_understanding_2017}. It does so by sliding a \textbf{kernel} (a filter) with a \textbf{receptive field size} (dimensions) smaller than the full width and height of the input across the input in \textbf{strides} (steps within the input to slide the window across) \cite{oshea_introduction_2015}.
\textbf{Pooling} layers reduce the dimensionality (resolution) of the convolution output (i.e. it downsamples the output) so that subsequent layers will receive inputs of less complexity \cite{albawi_understanding_2017} \cite{koushik_understanding_2016}.
\textbf{Fully-connected} layers are much like traditional ANNs in that they are densely-connected and look at received inputs from previous layers as a whole entity. They usually are placed after convolution and pooling sequences \cite{oshea_introduction_2015} \cite{albawi_understanding_2017}.
These layers allow for the initial input to be more complex and as such are used for learning from 2-dimensional images with 3 channels without adding more complexity to the network itself \cite{albawi_understanding_2017} \cite{oshea_introduction_2015}.
As such, CNN was selected as the classification algorithm due to its demonstrated efficacy in image-related machine learning tasks, including plant disease detection in previous studies \cite{toda_how_2019} \cite{fujita_basic_2016}.
Two CNN architectures were used in this project:
\begin{enumerate}
\item The first is based on \textbf{AlexNet} \cite{krizhevsky_imagenet_2017} and is comprised of Conv2D (convolution), MaxPooling2D (max pooling), and FCD layers (a combination of Dense and Dropout layers).
\item The second is based on the \textbf{InceptionV3}-based network built in the study by Toda et al. \cite{toda_how_2019}, which demonstrated an accuracy of 99.99\% in predicting plant disease classes on the PlantVillage Dataset.
\end{enumerate}
\subsubsection{AlexNet}
(See Figure ~\ref{fig:alexnetarch} for a visualization of the AlexNet architecture).
This describes the layer sequence (the number indicates how many of that same layer repeats until the next one):
\begin{itemize}
\item Conv2D (3) - Convolution Layer
\item MaxPooling2D (1) - Max Pooling Layer
\item Conv2D (2) - Convolution Layer
\item MaxPooling2D (1) - Max Pooling Layer
\item FCD (3) - Fully Connected Layer w/ optional Dropout.
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[width=0.2\linewidth]{figures/alexnet.png}
\caption{AlexNet Architecture}
\label{fig:alexnetarch}
\end{figure}
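A minimal tf.keras sketch of this layer sequence is shown below. It is not the exact ikapati implementation: the filter counts, kernel sizes, strides, and dense layer widths are illustrative assumptions.
\begin{lstlisting}
import tensorflow as tf
from tensorflow.keras import layers

def build_alexnet_like(num_classes, activation="relu", dropout=0.2):
    # 3x Conv2D, MaxPooling2D, 2x Conv2D, MaxPooling2D, 3x FCD.
    model = tf.keras.Sequential([
        layers.Conv2D(96, 7, strides=4, activation=activation,
                      input_shape=(256, 256, 3)),
        layers.Conv2D(96, 3, activation=activation),
        layers.Conv2D(96, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Conv2D(256, 3, activation=activation),
        layers.Conv2D(256, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(1024, activation=activation),
        layers.Dropout(dropout),
        layers.Dense(1024, activation=activation),
        layers.Dropout(dropout),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
\end{lstlisting}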
\subsubsection{InceptionV3}
This InceptionV3-based network follows what was done in a previous study \cite{toda_how_2019}, but with 10 Mixed modules instead of the 11 used in that study.
\begin{itemize}
\item Conv2D (3) - Convolution Layer with Batch Normalization
\item MaxPooling2D (1) - Max Pooling Layer
\item Conv2D (2) - Convolution Layer with Batch Normalization
\item MaxPooling2D (1) - Max Pooling Layer
\item Mixed (10) - Sets of Conv2D with batch normalization
\item GlobalAveragePooling2D (1) - Global Pooling Layer
\item Dense (1) - The Output Layer
\end{itemize}
\subsubsection{Hyperparameters}
These parameters can be tuned/specified at training time:
\begin{itemize}
\item \textbf{epochs} - the number of times to run through the training set.
\item \textbf{learning rate} - the learning rate for the Adam optimizer, an adaptive learning rate optimization method \cite{kingma_adam:_2014}.
\item \textbf{batch size} - the number of training examples in each batch.
\item \textbf{activation} - the activation function to use, e.g. "relu" - Rectified Linear Unit (ReLU) which creates a sparse representation \cite{albawi_understanding_2017}.
\item \textbf{dropout} - the dropout rate, e.g. 0.2. NOTE: This was only supported by AlexNet. Dropout refers to the technique of dropping neuron activations at the specified rate so as to reduce overfitting.
\item \textbf{architecture} - the architecture to use for the CNN, "alexnet" (for AlexNet) or "inceptionv3" (for InceptionV3).
\end{itemize}
\subsection{Benchmark}
The benchmarks for the plant disease classifier are the results obtained in previous
studies on plant disease detection by Toda et al. and Fuentes et al., both of which utilize a CNN as the classifier \cite{toda_how_2019} \cite{fuentes_robust_2017}.
\newpage
\section{Methodology}
\subsection{Data Preprocessing}
Preprocessing the data was completed using these steps:
\begin{enumerate}
\item Get a list of all image filenames.
\item Shuffle list of image filenames.
\item Get labels from the image folder names. For example: images in the Apple\_\_\_Apple\_scab folder were assigned the label Apple\_\_\_Apple\_scab.
\item Split the list of image filenames into training (60\%), validation (20\%), and testing (20\%) sets to mirror the split in a previous study \cite{toda_how_2019}.
\item Create a training example for each image file so it can be added to a TFRecord (TensorFlow dataset).
\item Write metadata describing the file counts of training, validation, and test sets and list the class names.
\item Create parser function to read each TFRecord batch by batch during training due to hardware limitations (limited VRAM, in this case).
\item Normalize image pixel values by dividing by 255 (the range of values for an RGB channel) and subtracting 0.5, so the values fall in the $-0.5$ to $0.5$ range (see the sketch after this list).
\end{enumerate}
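The parser function (step 7) and normalization (step 8) can be sketched as below; the feature names and JPEG encoding are assumptions about the TFRecord layout rather than ikapati's exact schema.
\begin{lstlisting}
import tensorflow as tf

FEATURES = {  # assumed schema; ikapati's field names may differ
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    parsed = tf.io.parse_single_example(serialized, FEATURES)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    # Scale pixel values from [0, 255] to [-0.5, 0.5].
    image = tf.cast(image, tf.float32) / 255.0 - 0.5
    return image, parsed["label"]

dataset = (tf.data.TFRecordDataset("train.tfrecord")
           .map(parse_example,
                num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(64)
           .prefetch(tf.data.experimental.AUTOTUNE))
\end{lstlisting}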
\subsection{Implementation}
The library was primarily written in Python 3. TensorFlow 2.0 with the keras API was used in the training stage.
The modules for the \textbf{ikapati} library can be broken down as such:
\begin{itemize}
\item \textbf{models} - these contain the training script and code to build the networks.
\item \textbf{data} - this has the utility functions used to preprocess the data, e.g. normalizing pixel values, reading datasets into memory.
\item \textbf{visualization} - this contains utility functions to help visualize and evaluate model performance. Figures included in this report were generated using that module.
\end{itemize}
\subsubsection{Data Preparation}
The datasets were prepared using the \textbf{ikapati/data/make\_dataset.py} script (which utilizes datasets scikit-learn and TensorFlow 2.0) and then saved as TFRecords, a format used by TensorFlow.
\begin{enumerate}
\item Get the filenames of all the images that match the specified plant species.
\item Follow the steps outlined in the \textbf{Data Preprocessing} section.
\item Put each subset of training examples (training, validation, and test) into separate TFRecords (train.tfrecord, eval.tfrecord, test.tfrecord).
\end{enumerate}
\subsubsection{Training Stage}
The training models were created using TensorFlow 2.0 with the keras functional API. As described in the section \textbf{Algorithms and Techniques}, CNN was chosen as the classifier, with neural network architectures based on AlexNet and InceptionV3 as described in a previous study \cite{toda_how_2019}. TensorFlow was configured to use a physical GPU similar to what was used in the Toda study, a GeForce GTX 1080 Ti with 11GB of VRAM, to do the training.
The code used to build the AlexNet architecture was based on:
\begin{itemize}
\item \url{https://engmrk.com/alexnet-implementation-using-keras/}
\item \url{https://github.com/tensorpack/benchmarks/blob/master/other-wrappers/keras.alexnet.py}
\end{itemize}
The code used to build the InceptionV3 architecture was based on:
\begin{itemize}
\item \url{https://github.com/keras-team/keras-applications/blob/master/keras_applications/inception_v3.py}
\end{itemize}
The training steps for each training run were:
\begin{enumerate}
\item Execute the ikapati/models/train\_model.py script with the architecture (alexnet or inceptionv3), learning rate, epochs, batch size, activation, and dropout rate (if architecture is set to alexnet) hyperparameters specified, the model and data directories set, and checkpoint saving enabled to kick off a training run. An illustrative invocation is sketched after this list. (See the \textbf{Hyperparameters} described in the section \textbf{Algorithms and Techniques} for more details on the hyperparameters).
\item Create a folder under the model dir specified at runtime, with the UUID of dataset used as the name, and a subfolder within that with the start time string as the name. This will be where the models are saved during this training run.
\item Record the start time.
\item After each epoch, if the validation loss has improved from the previous epoch, save the model to file in h5 format. Otherwise, we don't save anything and proceed with the next epoch.
\item When all epochs have concluded, save the final model to file and record the end time.
\item Write the start and end time of the training run, the learning rate, and other parameters specified at runtime to a CSV file that acts as a log of training runs.
\item Repeat steps above, tweaking the hyperparameters as needed.
\end{enumerate}
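For concreteness, a training run might look like the following; the flag names are hypothetical illustrations, not necessarily the exact interface of train\_model.py.
\begin{lstlisting}
# Flag names below are illustrative, not the verified CLI.
python ikapati/models/train_model.py \
    --architecture inceptionv3 \
    --learning-rate 0.001 --epochs 20 --batch-size 64 \
    --activation relu \
    --data-dir data/processed --model-dir models \
    --save-checkpoints
\end{lstlisting}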
\subsection{Refinement}
In the initial benchmark run, the hyperparameters were set to match those of the benchmark model from Toda et al., with the learning rate set to 0.05, the batch size to 128, and the epochs to 20. The architecture was inceptionv3 and the activation function was "relu" (rectified linear units). \textbf{NOTE:} The benchmark model input shape was 224x224, whereas in this project the input shape was 256x256.
The benchmark model achieved an accuracy of 85.68\% on the test dataset, and a loss score of 0.570 (Figure ~\ref{fig:benchmark}).
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.6\linewidth}
\includegraphics[width=\linewidth]{figures/benchmark-loss.png}
\caption{Training vs Validation Loss}
\end{subfigure}
\begin{subfigure}[b]{0.6\linewidth}
\includegraphics[width=\linewidth]{figures/benchmark-accuracy.png}
\caption{Training vs Validation Accuracy}
\end{subfigure}
\caption{Loss and accuracy plots for the benchmark model.}
\label{fig:benchmark}
\end{figure}
In subsequent training runs, certain modifications were made to the hyperparameters to see whether they would improve the model's validation loss and accuracy:
\begin{itemize}
\item The learning rate was decreased to 0.001.
\item The batch size was reduced to account for hardware limitations.
\item The epoch number was increased to see if the model's validation loss decreases over time.
\item If validation loss improved in one epoch, then a checkpoint of the model was saved to file.
\item After each iteration, the validation vs training loss was plotted to determine whether a model is underfitting or overfitting, or reaching a good fit.
\item When selected as the architecture, AlexNet demonstrated a tendency to overfit on this dataset even with dropout layers, so InceptionV3 became the de facto architecture.
\end{itemize}
\subsubsection{Visualization and Metrics Tools}
Utility functions to evaluate model performance were created using seaborn (a high-level API for matplotlib) and pandas.
\subsubsection{Deployment to an Embedded System}
Upon training completion, trained models are saved in both h5 and tflite (TensorFlow Lite) formats. Both formats can be loaded for later use with the TensorFlow library, but the tflite format is specifically used to run inference on embedded devices such as the Jetson Nano mentioned previously.
A simple web app was created using the Flask framework.
This web app has a POST endpoint located at \url{http://jetbot:5000/predict}, which loads the TensorFlow Lite model and then attempts to predict the class name for an input image uploaded to that endpoint.
To simulate a POST request to that endpoint, the curl commandline tool was used with the image file passed in:
\begin{lstlisting}
curl -F "[email protected]" http://jetbot:5000/predict
\end{lstlisting}
This endpoint expects a 256x256 image. If the request is successful, the result will be a JSON body containing the predicted class for the input image:
\begin{lstlisting}
{
"class_name": "Tomato___Septoria_leaf_spot"
}
\end{lstlisting}
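A minimal sketch of such an endpoint using the TensorFlow Lite interpreter is shown below; the model path, class list, and preprocessing details are assumptions, and the real app may differ.
\begin{lstlisting}
import io
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]
# Illustrative; the real list comes from the dataset metadata.
CLASS_NAMES = ["Tomato___healthy", "Tomato___Septoria_leaf_spot"]

@app.route("/predict", methods=["POST"])
def predict():
    image = Image.open(io.BytesIO(request.files["file"].read()))
    image = image.convert("RGB").resize((256, 256))
    # Apply the same normalization used during training.
    x = np.asarray(image, dtype=np.float32) / 255.0 - 0.5
    x = np.expand_dims(x, axis=0)  # shape (1, 256, 256, 3)
    interpreter.set_tensor(input_detail["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_detail["index"])[0]
    return jsonify({"class_name": CLASS_NAMES[int(np.argmax(scores))]})
\end{lstlisting}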
\clearpage
\section{Results}
\subsection{Model Evaluation and Visualization}
Model performance was evaluated with accuracy and loss plots (Figure ~\ref{fig:best1}).
Models that showed overfitting, such as the AlexNet-based model (Figure ~\ref{fig:alexnet}), were discarded.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.6\linewidth}
\includegraphics[width=\linewidth]{figures/alexnet-loss.png}
\caption{Training vs Validation Loss}
\end{subfigure}
\begin{subfigure}[b]{0.6\linewidth}
\includegraphics[width=\linewidth]{figures/alexnet-accuracy.png}
\caption{Training vs Validation Accuracy}
\end{subfigure}
\caption{Loss and accuracy plots for AlexNet. Validation shows overfitting.}
\label{fig:alexnet}
\end{figure}
The 2 best performing models, based on these metrics, were both InceptionV3-based and outperformed the benchmark. In both of these runs, the learning rate was reduced to 0.001 from the benchmark 0.05.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.6\linewidth}
\includegraphics[width=\linewidth]{figures/best-model-loss-4632.png}
\caption{Training vs Validation Loss}
\end{subfigure}
\begin{subfigure}[b]{0.6\linewidth}
\includegraphics[width=\linewidth]{figures/best-model-accuracy-4632.png}
\caption{Training vs Validation Accuracy}
\end{subfigure}
\caption{Model 1. Loss and accuracy plots for InceptionV3, with batch size of 64 and learning rate of 0.001.}
\label{fig:best1}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.6\linewidth}
\includegraphics[width=\linewidth]{figures/best-model-loss-5614.png}
\caption{Training vs Validation Loss}
\end{subfigure}
\begin{subfigure}[b]{0.6\linewidth}
\includegraphics[width=\linewidth]{figures/best-model-accuracy-5614.png}
\caption{Training vs Validation Accuracy}
\end{subfigure}
\caption{Model 2. Loss and accuracy plots for InceptionV3, with batch size of 100 and learning rate of 0.001.}
\label{fig:best2}
\end{figure}
In Figure ~\ref{fig:best1}, the plots show how Model 1, the training run with the batch size reduced from 128 to 64, performed on the validation and training sets. On the test dataset, this model achieved an accuracy of 97\% and a loss score of 0.089.
In Figure ~\ref{fig:best2}, the plots show how Model 2, the training run with the same batch size as the benchmark but a learning rate reduced to 0.001, performed. On the test dataset, this model achieved an accuracy of 96\% and a loss score of 0.126.
\subsection{Justification}
While Model 1 and 2 performed similarly on the training and validation datasets, Model 1 did better on the test dataset than Model 2, so it was selected as the best model.
With an accuracy of 97\%, and a loss score of 0.089, Model 1 also outperforms the benchmark model, which had 85.68\% accuracy and a loss score of 0.570 on the test data.
Model 1 was converted to TensorFlow Lite format (tflite) and then transferred to the Jetson Nano Robot. The Flask app loaded the tflite model, and then listened for any requests to the \url{/predict} endpoint.
\clearpage
\section{Conclusion}
\subsection{Free-Form Visualization}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{figures/IMG_7846.jpg}
\caption{Jetson Nano Robot with Camera}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{figures/IMG_7845.jpg}
\caption{Side view of the Jetson Nano Robot}
\end{subfigure}
\caption{Jetson Nano Robot}
\label{fig:jetson}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=\linewidth]{figures/examples/5365398.jpg}
\caption{Strawberry leaf spot }
\end{subfigure}
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=\linewidth]{figures/examples/5430722.jpg}
\caption{Septoria leaf spot }
\end{subfigure}
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=\linewidth]{figures/examples/5465560.jpg}
\caption{Common corn rust }
\end{subfigure}
\caption{Images the classifier failed on. \cite{noauthor_strawberry_2008} \cite{noauthor_septoria_2011} \cite{noauthor_common_2012} }
\label{fig:failures}
\end{figure}
\subsection{Reflection}
These steps summarize what was done to complete this project:
\begin{enumerate}
\item Initial exploration of the agricultural domain was done to see what problems need more investigation.
\item A large dataset with multiple classes of plants was downloaded and then preprocessed.
\item A benchmark model was created based on a previous study \cite{toda_how_2019} .
\item Models built in TensorFlow were trained using different hyperparameters and 2 different architectures.
\item The best model was chosen using validation loss and accuracy scores and then converted to TensorFlow Lite format.
\item A simple Flask API that loads the TensorFlow Lite model and does inference, was deployed to a Jetson Nano.
\end{enumerate}
I found steps 3--4 and 6 the most challenging because I had never used TensorFlow at the level this project required and had never built a CNN before.
It was interesting to learn about how those neural networks work and how they can be improved upon.
\subsection{Improvement}
There are some opportunities for improvement arising from this project:
\begin{itemize}
\item The dataset used here consisted of pictures with leaves set against a fairly uniform background. In a future iteration of this project, the dataset could be augmented by introducing random noise and transformations to a subset of images, which could then be used for training.
\item The machine learning model could be converted to do online learning so that learning can be done as inference is attempted. This could be especially useful when trying to improve the accuracy of the model.
\end{itemize}
\clearpage
\bibliographystyle{IEEEtran}
\bibliography{citations}
\end{document}
\section*{Introduction}
Phenotypes are the result of many overlapping genetic effects and developmental
processes \citep{Lande1983-ez,Klingenberg2008-ll,Melo2016-yw}. These different
factors create a structured pattern of phenotypic correlations, which we can
measure and use to infer something about the underlying processes. Identifying
modules, groups of traits that are more related among themselves than with other
traits, is a powerful way to describe and interpret the patterns of phenotypic
covariation.
Discussion on overlapping factors contributing to covariation (Mitteroecker2007,
Hallgrimsson2012)
Several methods for assessing the appropriateness of putative modularity
hypotheses exist in the literature. These methods use variations on some common
themes, like hypothesis testing and model comparison.
Brief review of existing methods:
\begin{itemize}
\item CR
\item EMMLi
\item Mint
\item Mantel
\end{itemize}
Most methods of comparing different modularity hypotheses do not allow for the
comparison of nested hypotheses. Most are ad hoc measures of similarity, and do
not allow for the use of proper statistical methods. The likelihood in EMMLi is
extrapolated from the expectation on measuring the correlation between two
traits, and does not actually use the data in the likelihood computation, only
the correlations.
YAMDA sets itself apart by using the likelihood of the data to compare the
different modularity hypotheses, allowing for overlapping modules, allowing
different methods for the construction of the putative covariance matrix, and
having
\begin{figure}[h]
\center
\input{factor_diagram}
\caption[Factors and modules]{ Factors can overlap in their effect on traits }
\label{fig:factor-diagram}
\end{figure}
\section*{Methods}
\subsection*{A maximum likelihood for the modularity hypothesis}
\subsection*{Carnivora sample}
\section*{Results}
\begin{figure}[h]
\includegraphics[width=5cm]{example-image-golden}
\caption[Factors and modules]{ Modulues are cool }
\label{fig:example-fig}
\end{figure}
\section*{Discussion}
The interaction between selection and multivariate covariation is a
central part of our understanding of evolution. Even simple selection
regimes can produce complex multivariate responses due to cascading
developmental effects and genetic constraints. Here, ...
\begin{notes}[Acknowledgments]
We thank Tiago Zahn for insightful discussions that contributed to the
development of this work.
\end{notes}
\begin{notes}[Funding]
This work was supported by grants...
\end{notes}
\documentclass[11pt]{article}
\newcommand{\csim}{\textsf{CSIM} }
\newcommand{\nmc}{\textsf{Circuit-Tool} }
\newcommand{\lsm}{\textsf{Learning-Tool} }
%
% common packages
%
\usepackage{graphicx}
\usepackage{color}
\PassOptionsToPackage{colorlinks=true,linkcolor=blue,citecolor=blue,urlcolor=blue}{hyperref}
\usepackage{html}
%begin{latexonly}
\usepackage{a4wide}
%\newif\ifpdf\ifx\pdfoutput\undefined\pdffalse\else\pdfoutput=1\pdftrue\fi
%\newcommand{\pdfgraphics}{\ifpdf\DeclareGraphicsExtensions{.pdf,.jpg}\else\DeclareGraphicsExtensions{.eps}\fi}
%\newcommand{\pdfgraphics}{}
%\ifpdf
%\else
% \usepackage[dvips]{hyperref}
%\fi
%end{latexonly}
\html{
\pagecolor[gray]{1.0}
% \newcommand{\pdfgraphics}{}
\newcommand{\href}[2]{\htmladdnormallink{#2}{#1}}
% we assume that the name of the hypertarget also exists as label!
\newcommand{\hyperlink}[2]{\hyperref{#2}{}{}{#1}}
\newcommand{\hypertarget}[2]{#2}
}
\newcommand{\Section}[2]{\hypertarget{#2}{\section{#1}\label{#2}}}
\newcommand{\Subsection}[2]{\hypertarget{#2}{\subsection{#1}\label{#2}}}
\newcommand{\Subsubsection}[2]{\hypertarget{#2}{\subsubsection{#1}\label{#2}}}
\newcommand{\secref}[2]{\hyperlink{#1}{#2}\latex{ (Sec.~\ref{#1})}}
\newcommand{\sect}[1]{\hyperlink{#1}{Section}~\ref{#1}}
\newcommand{\figref}[1]{\hyperlink{#1}{Figure}~\ref{#1}}
\setlength{\parindent}{0em}
\setlength{\parskip}{1ex plus 0.1ex minus 0.1ex}
\setlength{\itemsep}{-0.5ex plus 0.1ex minus 0.1ex}
\setlength{\topmargin}{0cm}
%
% since we include the detailed description from doxygen we need some
% of these definitions
%
\input{doxygen}
%
% define the \objfield which is output by reggen for each registered
% field of an object.
%
\newcommand{\objfield}[6]{\item[{\normalsize \texttt{#2}} ($ #5 $) :] {\small #6}}
\newcommand{\objfieldnu}[6]{\item[{\normalsize \texttt{#2}} :] {\small #6}}
\newcommand{\cmdref}[1]{\Subsubsection{#1}{cmd:#1}}
\begin{document}
%\pdfgraphics
\sloppy
%
% titlepage
%
\latex{\input{um-tp-paper}}
\html{\input{um-tp-html}}
%
% table of contents
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\setcounter{tocdepth}{3}
\tableofcontents
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Preliminaries}{sec:pre}
\input{prelim}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\Section{A short Tutorial}{sec:start}
\input{getstarted}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Input and Output}{sec:inout}
\input{inout}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
%\Section{Distributed Simulation}{sec:distr}
%
%\input{distributed}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Additional Topics}{sec:additional}
\input{additional}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Adding your own C++ model classes to \csim}{sec:usermodels}
\input{usermodels}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\Section{\csim command reference}{sec:cmdref}
\input{cmd_ref}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{\csim model class reference}{sec:clref}
\input{um_class_ref}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliographystyle{apalike}
\bibliography{references}
\end{document}
\subsection{Quick Introduction}
Various classes (e.g., \T{saga::file} and \T{saga::stream}) in the SAGA
API expose I/O operations, i.e., chunks of binary data can be written
to, or read from these classes. Other classes (such as \T{saga::rpc})
handle binary data as parameters. In order to unify the application
management of these data, SAGA introduces the \T{saga::buffer} class,
which is essentially a simple container class for a byte buffer, plus
a number of management methods. Various subclasses of the
\T{saga::buffer} exist, and, as described below, users are allowed, and
actually encouraged, to build their own ones.
The C++ rendering of SAGA distinguishes between mutable and
non-mutable buffers: non-mutable buffers are used for write-type
operations, and cannot be changed by the SAGA implementation; mutable
buffers are for read-type operations, and can be changed (i.e., new
data can be added to the buffer).
\subsection{Reference}
\begin{mycode}[label=Prototype: saga::buffer]
namespace saga
{
class const_buffer
: public saga::object
{
public:
const_buffer (void const * data,
saga::ssize_t size);
~const_buffer (void);
saga::ssize_t get_size (void) const;
void const * get_data (void) const;
void close (double timeout = 0.0);
};
class mutable_buffer
: public saga::const_buffer
{
public:
      typedef void buffer_deleter_type (void * data);
typedef TR1::function <buffer_deleter_type> buffer_deleter;
static void default_buffer_deleter (void * data);
mutable_buffer (saga::ssize_t size = -1);
mutable_buffer (void * data,
saga::ssize_t size);
mutable_buffer (void * data,
saga::ssize_t size,
buffer_deleter cb);
~mutable_buffer (void);
void set_size (saga::ssize_t size = -1);
void set_data (void * data,
saga::ssize_t size,
buffer_deleter cb = default_buffer_deleter);
void * get_data (void); // non-const version
};
}
\end{mycode}
\subsection{Details}
Although the concept of an I/O buffer is very simple, and the
prototype shown above is rather straight forward, the semantic
details of the SAGA buffer are relatively rich. That holds true in
particular for the memory management of the buffer data segment.
For the interested reader, the |saga::buffer| section in the SAGA
Core API specification contains quite some detail on that issue.
\subsubsection{Buffer Memory Management}
In general, buffers can operate in two different memory
management modes: the data segment can be user-managed (i.e.,
application-managed), or SAGA-managed (i.e., implementation-managed). The constructors allow the application to pass a memory
area on buffer creation: if that buffer is given, and not-NULL,
then the SAGA implementation will use that buffer, and will never
re- nor de-allocate that memory (memory management is left up to
the application). On the other hand, if that memory area is not
given, or given as NULL, then the SAGA implementation will
internally allocate the required amount of memory, which MUST NOT
be re- or de-allocated by the application (memory management is
left to the SAGA implementation).
Although the latter version is certainly convenient for the end
user, it comes with a potential performance penalty: data from the
implementation-allocated buffer may sometimes need an additional
memcopy into application memory. If that is the case, it is up to you
to decide which memory management mode works best for your application
use case.
\HINT{The most performant case is most of the time to re-use a
single (or a small set of) application allocated memory buffer(s)
over and over again. Note that you can use a larger size memory
segment for a small buffer by giving a smaller size parameter to
the constructor.}
\begin{mycode}[label=Example saga::buffer usage]
// allocate a 'large' buffer statically
char mem[1024];
// create a saga buffer object for reading (mutable)
saga::mutable_buffer buf (mem, 512);
// open a file
saga::url u ("/etc/passwd");
saga::filesystem::file f (u);
// read data into buffer - the first 512 bytes get filled
f.read (buf);
// seek the buffer, so that the next read goes into the
// second half of the buffer
buf.set_data (mem + 512, 512);
// now read again
f.read (buf);
// the complete buffer should be filled now
// print what we got
std::cout << mem << std::endl;
\end{mycode}
\subsubsection{Const versus Mutable Buffers}
On |write|-like operations, the SAGA implementation has no need to
change the buffer's data segment in any way: it only needs to read
the data, and to copy them to whatever entity the write operations
happens upon. The implementation can thus treat the buffer as
|const|, which allows a number of optimizations and memory access
safeguards to be employed.
On the other hand, |read|-like operations will usually require the
SAGA implementation to write, or even to (re-)allocate the buffers
memory segment. In such cases, |const| safeguards cannot be
employed.
\HINT{The use of \T{const\_buffer} instances is encouraged for
\T{write}-like operations, and of \T{mutable\_buffer} instances for
\T{read}-like operations.}
In order to simplify memory management and to provide optimal
memory access safeguards, the SAGA C++ bindings distinguish between
|const_buffer| and |mutable_buffer| classes. Both types can be
used for |write|-like operations, but only |mutable_buffer|
instances can be used for |read|-like operations.
\HINT{It is possible to cast \T{mutable\_buffer} instances to
\T{const\_buffer}, which allows buffers to be re-used for all I/O
operations and, at the same time, allows the implementation to use
\T{const} checking.}
| {
"alphanum_fraction": 0.6964430729,
"avg_line_length": 38.7232704403,
"ext": "tex",
"hexsha": "16fd3aba2bacf5aa613eb52b974bae16c870cbdc",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-04-10T17:23:52.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-11-17T04:38:38.000Z",
"max_forks_repo_head_hexsha": "7376c0de0529e7d7b80cf08b94ec484c2e56d38e",
"max_forks_repo_licenses": [
"BSL-1.0"
],
"max_forks_repo_name": "saga-project/saga-cpp",
"max_forks_repo_path": "docs/manuals/programming_guide/tex/saga-programming-guide_buffers.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7376c0de0529e7d7b80cf08b94ec484c2e56d38e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSL-1.0"
],
"max_issues_repo_name": "saga-project/saga-cpp",
"max_issues_repo_path": "docs/manuals/programming_guide/tex/saga-programming-guide_buffers.tex",
"max_line_length": 148,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "7376c0de0529e7d7b80cf08b94ec484c2e56d38e",
"max_stars_repo_licenses": [
"BSL-1.0"
],
"max_stars_repo_name": "saga-project/saga-cpp",
"max_stars_repo_path": "docs/manuals/programming_guide/tex/saga-programming-guide_buffers.tex",
"max_stars_repo_stars_event_max_datetime": "2021-08-12T11:05:55.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-09-15T16:24:14.000Z",
"num_tokens": 1500,
"size": 6157
} |
%% start of file `template.tex'.
%% Copyright 2006-2013 Xavier Danaux ([email protected]).
%
% This work may be distributed and/or modified under the
% conditions of the LaTeX Project Public License version 1.3c,
% available at http://www.latex-project.org/lppl/.
\documentclass[11pt,a4paper,sans]{moderncv} % possible options include font size ('10pt', '11pt' and '12pt'), paper size ('a4paper', 'letterpaper', 'a5paper', 'legalpaper', 'executivepaper' and 'landscape') and font family ('sans' and 'roman')
% moderncv themes
\moderncvstyle{banking} % style options are 'casual' (default), 'classic', 'oldstyle' and 'banking'
\moderncvcolor{red} % color options 'blue' (default), 'orange', 'green', 'red', 'purple', 'grey' and 'black'
%\renewcommand{\familydefault}{\sfdefault} % to set the default font; use '\sfdefault' for the default sans serif font, '\rmdefault' for the default roman one, or any tex font name
%\nopagenumbers{} % uncomment to suppress automatic page numbering for CVs longer than one page
% character encoding
\usepackage[utf8]{inputenc} % if you are not using xelatex ou lualatex, replace by the encoding you are using
%\usepackage{CJKutf8} % if you need to use CJK to typeset your resume in Chinese, Japanese or Korean
% adjust the page margins
\usepackage[scale=0.81, top=2cm, bottom=2cm]{geometry}
%\setlength{\hintscolumnwidth}{3cm} % if you want to change the width of the column with the dates
%\setlength{\makecvtitlenamewidth}{10cm} % for the 'classic' style, if you want to force the width allocated to your name and avoid line breaks. be careful though, the length is normally calculated to avoid any overlap with your personal info; use this at your own typographical risks...
% personal data
\name{Hin Hong (Ryan)}{Tam}
% \title{Résumé} % optional, remove / comment the line if not wanted
\address{705 Sculpture House, 4 Killick Way}{E1 3FE London}{United Kingdom}% optional, remove / comment the line if not wanted; the "postcode city" and and "country" arguments can be omitted or provided empty
\phone{+44 (0)7413668898} % optional, remove / comment the line if not wanted
\email{[email protected]} % optional, remove / comment the line if not wanted
\homepage{github.com/ryantam626}
% to show numerical labels in the bibliography (default is to show no labels); only useful if you make citations in your resume
%\makeatletter
%\renewcommand*{\bibliographyitemlabel}{\@biblabel{\arabic{enumiv}}}
%\makeatother
%\renewcommand*{\bibliographyitemlabel}{[\arabic{enumiv}]}% CONSIDER REPLACING THE ABOVE BY THIS
% bibliography with mutiple entries
%\usepackage{multibib}
%\newcites{book,misc}{{Books},{Others}}
%----------------------------------------------------------------------------------
% content
%----------------------------------------------------------------------------------
\begin{document}
%\begin{CJK*}{UTF8}{gbsn} % to typeset your resume in Chinese using CJK
%----- resume ---------------------------------------------------------
\makecvtitle
\vspace{-35px}
\section{Experience}
\subsection{Vocational}
\cventry{10/2016--Present}{Data Scientist / Software Engineer}{Smarkets}{London}{}{%
\begin{itemize}%
\item Built anti money laundering system using Python;
\item Built system to detect problem gambling patterns using Python;
\item Built data pipeline with Python ensuring data consistency and producing metrics consumed by accounting systems;
\item Maintaining and extending various other Python micro-services (Flask) and providing additional tooling;
\end{itemize}}
% \cventry{04/2015--06/2016}{Technical Partner / Python Django Developer / Ionic Framework Developer}{Aboard}{London}{}{%
% \begin{itemize}%
% \item Built a web service using Python (Django) that powers the mobile application;
% \item Rewrote part of the front-end written by other developers to improve performance;
% \end{itemize}}
\cventry{06/2014--08/2014}{Summer Analyst}{HSBC Bank Plc.}{London}{}{%
\begin{itemize}%
\item Built and maintained reconciliation systems as per project manager’s requests using PL/SQL and VBA;
\item Extended functionality to the Excel sheet that generates control files for SQL*Loader which in turn streamlined the process of loading data into the database for reconciliations;
\end{itemize}}
\subsection{Selected Projects}
\cventry{07/2018--Present}{Open Source Project}{JupyterLab Extensions}{Personal}{}{
\begin{itemize}
\item Developed a JupyterLab extension to apply Black (a Python code formatter) to code within codecell;
\item Developed a JupyterLab extension to support a slightly opinionated Sublime Text keybinding within codecell;
\end{itemize}
}
\cventry{05/2016--09/2016}{Individual Project with Stratagem Technologies Ltd.}{State-Aware TrueSkill For Tennis Match Prediction}{Imperial}{}{
\begin{itemize}
\item Compiled a more detailed report on the TrueSkill model than the original technical report;
\item Implemented the TrueSkill model in Cython and Python;
\item Extended the TrueSkill model to allow it to learn from features of the event in question;
\end{itemize}
}
\cventry{01/2015--06/2015}{Individual Academic Research Project}{Modelling Loss Given Default And Truncated Support Vector Regression }{Imperial}{}{%
\begin{itemize}%
\item Reviewed several previous publications on modelling Loss Given Default;
\item Implemented Support Vector Regression in MATLAB and compared it against other methods on a real dataset;
\item Explored the possibility of Truncated Support Vector Regression, reported some promising preliminary results;
\end{itemize}}
% \cventry{07/2014--Present }{Individual Project}{Modelling Horse Racing}{Personal}{}{%
% \begin{itemize}%
% \item Wrote a web scraper that store horse racing data into local MySQL database using Java;
% \item Experimented a few models that does not yield concrete predictions that could be used;
% \item Leveraging the experience gained in building Aboard, wrote an interactive application to scrape data into a Neo4j database in Python to speed up development cycle of new models;
% \end{itemize}}
\section{Computer Skills}
\cvitem{Selected Proficient Languages}{Python, MATLAB, R, JavaScript/TypeScript, C}
\cvitem{Selected Frameworks and Libraries}{NumPy, SciPy, Pandas, Luigi, Apache Spark}
\cvitem{Operating Systems}{Linux, Windows}
\section{Education}
\cventry{10/2015--09/2016}{MSc. Computer Science (Machine Learning)}{Imperial College London}{London}{\textbf{Merit}}{}
\cventry{10/2012--06/2015}{BSc. Mathematics with Statistics for Finance}{Imperial College London}{London}{\textbf{First Class Honours}}{}
\section{References}
\begin{itemize}
\item Dr. Tony Bellotti \hfill Department of Mathematics, Imperial College London, London SW7 2AZ
\item Dr. Marc Deisenroth \hfill Department of Computing, Imperial College London, London SW7 2AZ
\item Prof. Richard Thomas \hfill Department of Mathematics, Imperial College London, London SW7 2AZ
\end{itemize}
\clearpage
\end{document}
| {
"alphanum_fraction": 0.7176664832,
"avg_line_length": 57.6825396825,
"ext": "tex",
"hexsha": "0516d6146c53126554e491cf562004793b9c2bf5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ec63142b37a767b908678e17bab60a66b99f1903",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ryantam626/ryantam626.github.io",
"max_forks_repo_path": "cv-source/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ec63142b37a767b908678e17bab60a66b99f1903",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ryantam626/ryantam626.github.io",
"max_issues_repo_path": "cv-source/main.tex",
"max_line_length": 297,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ec63142b37a767b908678e17bab60a66b99f1903",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ryantam626/ryantam626.github.io",
"max_stars_repo_path": "cv-source/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1769,
"size": 7268
} |
\section{Traces}
In this section, we will explain how to use our PIN-based trace
recording tool to record a trace. We then show how to perform common
trace analysis tasks.
\subsection{Setup}
The PIN trace tool is located in the \cmdline{pintraces/} directory.
Because of licensing reasons, we cannot distribute PIN with the trace
tool. The PIN tool cannot be built until PIN has been extracted
inside of the \bap directory. In the following, we assume that
\cmdline{\$BAPDIR} is set to the \bap directory. For example, if you
extracted \bap to \cmdline{/home/user/Downloads/bap-x.y}, then you should
replace \cmdline{\$BAPDIR} below with
\texttt{/home/user/Downloads/bap-x.y}. Once downloaded, PIN should be
extracted to \cmdline{\$BAPDIR/pin}. On Linux, running the
\cmdline{./getpin.sh} script from the \cmdline{\$BAPDIR/pintraces}
directory will automatically download and extract PIN for Linux; the
user is responsible for accepting the PIN license agreements.
On Windows, the process is more complicated. We usually test with
Windows 7 and Windows XP SP3, but expect NT, 2000, 2003, and Vista to
work as well. First, make sure that \cmdline{\$BAPDIR} contains no
spaces. Then, install GNU Make for Windows
(\url{http://gnuwin32.sourceforge.net/packages/make.htm}) and Visual
C++ 2010 Express\footnote{This is the latest version of Visual Studio
that PIN is compatible with.}
(\url{http://www.microsoft.com/visualstudio/en-us/products/2010-editions/visual-cpp-express}).
Make sure to add the directory containing \texttt{make} (the default
is \verb!C:\Program Files\GnuWin32\bin!) to the Windows \texttt{Path}
environment variable. Also download PIN and extract it to the
\cmdline{\$BAPDIR/pin} directory. Next, we need to upgrade the Visual
Studio project for Google's Protocol Buffers library, which our PIN tool
depends on\footnote{This can be done automatically at the command line
using devenv.exe, but this tool is only part of Visual Studio
proper; it does not come with the Express versions.}. To do this,
navigate to the
\cmdline{\$BAPDIR/libtracewrap/libtrace/protobuf/vsprojects} directory
and open \cmdline{protobuf.sln}. When the Visual Studio Conversion
Wizard appears, click Finish and close the summary window. It is
normal for the conversion process to have warnings, but not errors.
Next, right click on the libprotobuf project in the Solution Explorer,
and select Properties. Select the Release configuration at the top of
the dialog box, and then navigate to the C/C++ Code Generation
settings. Change the Runtime Library to Multi-threaded
(/MT)\footnote{If you skip this step, you will get unresolved symbol
errors while linking the PIN tool}. Finally, close Visual Studio
and save the changes to the project. Open a Visual Studio command
prompt, navigate to \cmdline{\$BAPDIR/libtracewrap/libtrace/src/cpp}
and run \cmdline{make -f Makefile.windows} to build Protocol Buffers and
the libtrace library.
The PIN tool itself can be built by executing \cmdline{make} in the
\cmdline{\$BAPDIR/pintraces} directory on Linux. On Windows, execute
\begin{verbatim}
make -f Makefile.pin PIN_ROOT=$BAPDIR/pin TARGET=ia32
\end{verbatim}
instead. After compilation, the PIN tool for recording x86 programs
should exist in \cmdline{\$BAPDIR/pintraces/obj-ia32/gentrace.so} (or
\cmdline{gentrace.dll} on Windows). If you are running on a x86-64
host, you should also have a PIN tool for recording x86-64 programs in
\cmdline{\$BAPDIR/pintraces/obj-intel64/gentrace.so}. In the rest of
the chapter, we will assume Linux is being used; most interaction with
the trace tool is the same on Windows.
\subsection{Recording a trace}
To see the command line options to the trace tool, execute
\begin{verbatim}
$BAPDIR/utils/gentrace.bash -help -- /bin/ls
\end{verbatim}
on Linux. On Windows, use
\begin{verbatim}
powershell $BAPDIR/utils/gentrace.ps1 -- -help -- cmd.exe
\end{verbatim}
instead.
By default, the trace tool will only log instructions that are
\emph{tainted}, i.e., those that depend on user input. The
\cmdline{-taint-*} command line options are used to mark various
inputs as being tainted. For instance, \cmdline{-taint-files readme}
marks the file \cmdline{readme} as being tainted.
We will record a trace of a simple buffer overflow. Run
\begin{verbatim}
echo "helloooooooooooooooooooooooooooooooooo" > readme
\end{verbatim}
to create the input file. Then run
\begin{verbatim}
$BAPDIR/utils/gentrace.bash -taint-files readme \
-- $BAPDIR/tests/C/bof1 readme
\end{verbatim}
The PIN tool will output many debugging messages; this is normal. If
the trace tool detected the buffer overflow, it will print ``Stack
smashing detected'' near the end of the logs. At this point, there
should be a trace file ending with suffix bpt in the current working
directory. In the following commands, we assume this file is named
trace.bpt.
To lift the trace data and print it using \cmdline{iltrans} run:
\begin{verbatim}
iltrans -trace trace.bpt -pp-ast /dev/stdout
\end{verbatim}
\cmdline{iltrans} processes traces by lifting the entire trace into
memory, and then performing the specified operation(s).
Unfortunately, this makes it unsuitable for processing traces that do
not fit in memory. \cmdline{streamtrans} is an alternative utility
for processing traces in a \emph{streaming} model. Instead of lifting
the entire trace at once, \cmdline{streamtrans} loads only a small
part of it at a time. The downside to \cmdline{streamtrans} is that
many trace capabilities in BAP are implemented in \cmdline{iltrans}
but not \cmdline{streamtrans}.
To lift the trace data and print it using \cmdline{streamtrans} use:
\begin{verbatim}
streamtrans -tracestream trace.bpt -pp-ast /dev/stdout
\end{verbatim}
Regardless of the lifting tool, the lifted trace data should look
something like:
\begin{verbatim}
addr 0x8048639 @asm "mov 0x804a038,%eax" @tid "0"
@context "R_EAX_32" = 0x68, 1, u32, wr
@context "mem32[0x804a038]" = 0x5, 0, u8, rd
@context "mem32[0x804a039]" = 0x0, 0, u8, rd
@context "mem32[0x804a03a]" = 0x0, 0, u8, rd
@context "mem32[0x804a03b]" = 0x0, 0, u8, rd
label pc_0x8048639
R_EAX_32:u32 = mem32:?u32[0x804a038:u32, e_little]:u32
\end{verbatim}
In addition to the lifted IL, there are several trace-specific
annotations. \texttt{@tid "0"} indicates that thread 0 executed this
instruction. The \texttt{@context} attribute provides information
about this instruction's operands. The four values in the tuple refer
to the operand's initial value, its taint identifier, data type, and
access type, respectively. Positive taint identifiers indicate the
operand was only tainted by the taint source with the corresponding
number. For example, taint identifier five generally corresponds to
the fifth symbolic byte introduced into the program. An operand may
have a taint identifier of negative one if it is tainted by multiple
source bytes. A taint identifier of zero corresponds to an untainted
operand. The access type is one of read only (rd), write only (wr),
or both (rw).
It is also possible to concretize the trace after lifting, which
removes jumps and performs memory concretization, by executing
\begin{verbatim}
iltrans -trace trace.bpt -trace-concrete -pp-ast /dev/stdout
\end{verbatim}
or
\begin{verbatim}
streamtrans -tracestream trace.bpt -trace-concrete -pp-ast /dev/stdout
\end{verbatim}
Adding the \cmdline{-trace-check} option before \cmdline{-trace-concrete} causes BAP to
compare its internal evaluator's notion of state with the actual
values recorded in the trace. It can be used to check for bugs in the
IL. For example:
\begin{verbatim}
iltrans -trace trace.bpt -trace-check -trace-concrete
\end{verbatim}
or
\begin{verbatim}
streamtrans -tracestream trace.bpt -trace-check -trace-concrete-drop
\end{verbatim}
Finally, running
%
\begin{verbatim}
iltrans -trace trace.bpt -trace-formula f
\end{verbatim}
or
\begin{verbatim}
streamtrans -tracestream trace.bpt -trace-formula f
\end{verbatim}
will symbolically execute the trace and output the generated verification
condition to the file f. This can then be solved with the STP solver to
find satisfying assignments for the trace.
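For instance, assuming STP has been built and is on your path, running
\begin{verbatim}
stp f
\end{verbatim}
should report whether the generated formula is satisfiable; the exact
output format depends on your version of STP.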
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../main"
%%% End:
| {
"alphanum_fraction": 0.7742523705,
"avg_line_length": 41.3366834171,
"ext": "tex",
"hexsha": "ab3dd926fd36157611acba918b9dee35a3310723",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2021-04-20T08:38:41.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-04-20T07:44:02.000Z",
"max_forks_repo_head_hexsha": "f537d396605d1b943137b1b964c5a2ebecbf2aca",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "BambooL/AutoGrader",
"max_forks_repo_path": "ag_pathanalysis/doc/chap-examples/traces.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f537d396605d1b943137b1b964c5a2ebecbf2aca",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "BambooL/AutoGrader",
"max_issues_repo_path": "ag_pathanalysis/doc/chap-examples/traces.tex",
"max_line_length": 94,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "f537d396605d1b943137b1b964c5a2ebecbf2aca",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "BambooL/AutoGrader",
"max_stars_repo_path": "ag_pathanalysis/doc/chap-examples/traces.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-05T13:41:16.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-12-05T19:47:42.000Z",
"num_tokens": 2250,
"size": 8226
} |
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{article}
\usepackage{amsmath,amssymb}
\usepackage{lmodern}
\usepackage{iftex}
\ifPDFTeX
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={Data \& Modeling section for Lab2},
pdfauthor={Group 2: Jeremy Lan, Taehun Kim, Nicolas Loffreda},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage[margin=1in]{geometry}
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
\usepackage{longtable} \usepackage{dcolumn}
\ifLuaTeX
\usepackage{selnolig} % disable illegal ligatures
\fi
\title{Data \& Modeling section for Lab2}
\author{Group 2: Jeremy Lan, Taehun Kim, Nicolas Loffreda}
\date{}
\begin{document}
\maketitle
\hypertarget{data}{%
\subsection{2. Data}\label{data}}
The dataset we will be using for this analysis is a subset of that
collected by Moro et al.~(2016). The dataset contains a representative
sample of 500 Facebook posts from a worldwide renowned cosmetic brand,
collected between January 1st and December 31st of 2014. By the time the
data was collected, Facebook was the most used social website, with
roughly 1.28 billion monthly active users (Insights 2014).
Each observation from the dataset represents a post from this company,
for which a variety of features have been collected.
Given the large sample size, we will use a randomized sub-sample of 150
observations for exploration purposes and the remaining 350 for running
the models. The only anomaly in the variables of interest is one missing
value for the \texttt{paid} variable, which we will be removing from the
analysis, leaving us with a total of 499 observations.
\begin{center}
Randomized sub-samples
\end{center}
\begin{center}
\begin{tabular}{l r}
\hline
Split & Number of posts \\
\hline
Exploration & 150 \\
Test & 349 \\
\hline
Total & 499 \\
\hline
\end{tabular}
\end{center}
\hypertarget{engaged-users}{%
\subsubsection{2.1 Engaged users}\label{engaged-users}}
The outcome variable will be the number of unique \emph{engaged users}
the post had through its lifetime. An engaged user is defined as someone
who clicked in the post. Looking into this variable, we can see that it
is fairly skewed to the right. To make the variable easier to work with,
we will be applying a log transformation:
\includegraphics{lab2_datasplit_noba_nico_files/figure-latex/unnamed-chunk-2-1.pdf}
\hypertarget{category-and-type}{%
\subsubsection{2.2 Category and Type}\label{category-and-type}}
The main variables we want to measure the impact on engaged users are
the \texttt{type} and \texttt{category} of the post. The \texttt{type}
is categorized in Photo, Video, Link or Status, and it represents what
kind of content the post contained. We can see that most of the posts
published were photos:
\includegraphics{lab2_datasplit_noba_nico_files/figure-latex/unnamed-chunk-3-1.pdf}
On the other hand, the category describes how the content of the post
was displayed to the user. The dataset differentiates 3 distinct
categories:

\begin{itemize}
\tightlist
\item
  Action: Special offers and contests
\item
  Product: Direct advertisement or explicit brand content
\item
  Inspiration: Non-explicit brand related content
\end{itemize}
The number of posts published of each category are as follows:
\includegraphics{lab2_datasplit_noba_nico_files/figure-latex/unnamed-chunk-4-1.pdf}
\hypertarget{covariates}{%
\subsubsection{2.3 Covariates}\label{covariates}}
\hypertarget{paid}{%
\paragraph{2.3.1 Paid}\label{paid}}
Among the covariates we will be including in the model is paid
advertising. The variable \texttt{paid} will be encoded as a dummy
variable to indicate whether the post had any paid media associated with
it or not. We can see that in the exploratory dataset,
\textasciitilde32\% of all the posts had some kind of paid media
support:
\begin{center}
Paid media support
\end{center}
\begin{center}
\begin{tabular}{l r}
\hline
Media Support & Number of posts \\
\hline
No Paid support & 114 \\
Paid support & 36 \\
\hline
Total & 150 \\
\hline
\end{tabular}
\end{center}
\hypertarget{period-of-day-and-day-of-the-week}{%
\paragraph{2.3.2 Period of day and Day of the
week}\label{period-of-day-and-day-of-the-week}}
The last variables we will be including as controls are the period of
the day and the day of the week the post was published, to account for
differences that may exist in user activity at different times and days.
In particular, we will distinguish 4 periods of the day: overnight,
morning, afternoon and evening. The first runs from 12am to 6am, the
second from 6am to 12pm, the third from 12pm to 6pm, and the fourth from
6pm to 12am.
On the other hand, the days of the week will be divided into weekdays
and weekends. This will be encoded as a dummy variable set to 1 if the
post was published on a weekend.
\includegraphics{lab2_datasplit_noba_nico_files/figure-latex/unnamed-chunk-5-1.pdf}
\hypertarget{model}{%
\subsection{3. Model}\label{model}}
\hypertarget{base-model}{%
\subsubsection{3.1 Base Model}\label{base-model}}
As explained in the data section, we will be applying a log
transformation to the outcome variable, engaged users. The base model
will only include type and category as main explanatory variables:
\(\widehat{\log(engaged\_users)}=\beta_0 + \beta_1 \text{ } type + \beta_2 \text{ } category\)
\hypertarget{adding-covariates}{%
\subsubsection{3.2 Adding Covariates}\label{adding-covariates}}
With the base model established, we will be including as control
variables paid media efforts, day of the week and period of the day, all
as described in the data section:
\begin{align*}
\widehat{\log(engaged\_users)}=\beta_0 &+ \beta_1 \text{ } type + \beta_2 \text{ } category\\
&+ \beta_3 \text{ } paid +\beta_4 \text{ } day\_of\_week + \beta_5 \text{ } period\_of\_day
\end{align*}
\hypertarget{adding-interaction-term}{%
\subsubsection{3.3 Adding Interaction
term}\label{adding-interaction-term}}
As a next model, we will be including an interaction term to see how
the different types behave when paired with the different categories:
\begin{align*}
\widehat{\log(engaged\_users)}=\beta_0 &+ \beta_1 \text{ } type + \beta_2 \text{ } category + \beta_3 \text{ } paid \\
& + \beta_4 \text{ } day\_of\_week + \beta_5 \text{ } period\_of\_day + \beta_6 \text{ } type*category
\end{align*}
\hypertarget{standard-errors}{%
\subsubsection{3.4 Standard Errors}\label{standard-errors}}
We understand that certain dependencies may exist among the posts given
that they are all from the same company. This means that the people who
know the brand and interact with the social site and its posts may be
similar across the different posts.

Because of this reason, we will be using \emph{robust clustered standard
errors} to adjust the significance for any dependence that may exist.
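As an illustrative sketch (not part of the original analysis), a model
of this form with robust standard errors might be fit in R as follows.
The data frame name \texttt{posts} and the use of the \texttt{sandwich}
and \texttt{lmtest} packages are assumptions; the variable names follow
the regression output shown in the results section.

\begin{verbatim}
# Minimal sketch, assuming the data lives in a data frame `posts`
# with the variable names used in the model output below.
library(sandwich)  # robust covariance estimators
library(lmtest)    # coeftest()

m2 <- lm(log(lifetime_engaged_users) ~ 1 + type + category_str + paid +
           period_of_day + weekend, data = posts)

# Heteroskedasticity-robust standard errors; given a cluster identifier
# `id`, vcovCL(m2, cluster = ~ id) would yield clustered ones instead.
coeftest(m2, vcov = vcovHC(m2, type = "HC1"))
\end{verbatim}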
\hypertarget{results}{%
\subsection{4. Results}\label{results}}
Results of the models described are shown in the table below:
\begin{longtable}{@{\extracolsep{5pt}}lccc}
\caption{}
\label{}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
& \multicolumn{3}{c}{\textit{Dependent variable:}} \\
\cline{2-4}
\\[-1.8ex] & \multicolumn{3}{c}{log(Engaged Users)} \\
\\[-1.8ex] & (1) & (2) & (3)\\
\hline \\[-1.8ex]
Type - Photo & 0.944$^{***}$ & 0.939$^{***}$ & 0.951$^{***}$ \\
& (0.259) & (0.273) & (0.292) \\
& & & \\
Type - Status & 1.822$^{***}$ & 1.883$^{***}$ & 2.330$^{***}$ \\
& (0.329) & (0.346) & (0.504) \\
& & & \\
Type - Video & 1.696$^{***}$ & 1.594$^{***}$ & 1.609$^{***}$ \\
& (0.441) & (0.367) & (0.381) \\
& & & \\
Category - Inspiration & 0.139 & 0.094 & 0.335 \\
& (0.101) & (0.102) & (0.289) \\
& & & \\
Category - Product & $-$0.007 & $-$0.001 & $-$0.387 \\
& (0.119) & (0.122) & (0.469) \\
& & & \\
Paid Media & & 0.236$^{**}$ & 0.226$^{**}$ \\
& & (0.093) & (0.094) \\
& & & \\
Period - Evening & & $-$0.362 & $-$0.359 \\
& & (0.490) & (0.488) \\
& & & \\
Period - Morning & & $-$0.340$^{***}$ & $-$0.334$^{***}$ \\
& & (0.116) & (0.116) \\
& & & \\
Period - Overnight & & $-$0.129 & $-$0.123 \\
& & (0.117) & (0.116) \\
& & & \\
Weekend & & $-$0.156 & $-$0.166$^{*}$ \\
& & (0.100) & (0.100) \\
& & & \\
Interaction - Photo:Inspiration & & & $-$0.219 \\
& & & (0.303) \\
& & & \\
Interaction - Status:Inspiration & & & $-$1.678 \\
& & & (1.259) \\
& & & \\
Interaction - Video:Inspiration & & & \\
& & & \\
& & & \\
Interaction - Photo:Product & & & 0.371 \\
& & & (0.480) \\
& & & \\
Interaction - Status:Product & & & \\
& & & \\
& & & \\
Interaction - Video:Product & & & \\
& & & \\
& & & \\
Constant & 5.431$^{***}$ & 5.611$^{***}$ & 5.596$^{***}$ \\
& (0.249) & (0.292) & (0.307) \\
& & & \\
\hline \\[-1.8ex]
Observations & 349 & 349 & 349 \\
R$^{2}$ & 0.147 & 0.193 & 0.203 \\
Adjusted R$^{2}$ & 0.135 & 0.169 & 0.172 \\
Residual Std. Error & 0.815 (df = 343) & 0.798 (df = 338) & 0.797 (df = 335) \\
\hline
\hline \\[-1.8ex]
\textit{Note:} & \multicolumn{3}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\
\end{longtable}
All the different types of posts are significant at an \(\alpha\) level
of 0.05 across all models, with Link being the omitted type. From the
magnitude of the coefficients we can see that Video, Status and Photo
perform much better than Link, with Status yielding the highest increase
in engaged users. All these coefficients are quite stable across models
as well. As for the categories, none of them are significant, which is
quite surprising.
The second model includes the covariates paid, period of the day and
weekday. The results are similar to those of the base model. As
expected, we see that paid comes as significant with a positive
coefficient, although smaller in magnitude than any of the post types.
Posting on weekends doesn't seem to be significant in any of the models,
but some of the periods are. With the omitted day period being the
afternoon, we see that morning is highly significant and has a negative
coefficient. This implies that the period of the day associated with
higher engagement is the afternoon.
We ran a Wald test between the base and the covariate models and it
yields a low p-value, meaning that some of the covariates are helping to
explain the variability of the engaged users. This phenomenon can also
be appreciated in the coefficient of determination (the adjusted
\(R^2\)), as it jumps from 14\% in the first model to 17\% after the
addition of these covariates.
\begin{verbatim}
## Wald test
##
## Model 1: log(lifetime_engaged_users) ~ 1 + type + category_str
## Model 2: log(lifetime_engaged_users) ~ 1 + type + category_str + paid +
## period_of_day + weekend
## Res.Df Df F Pr(>F)
## 1 343
## 2 338 5 4.191 0.001042 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
\end{verbatim}
When looking at the model with the interaction term and covariates, we
see similar results to those of the previous one, but with a few
modifications.
The first thing we see is that now posting on weekends is significant
at an \(\alpha\) level of 0.1. We also see that the coefficient is
negative, implying that posting during weekdays is associated with more
engaged users.
Examining the interaction term, we notice that none of the terms are
significant, which is quite surprising. Some of the coefficients are
missing, indicating perfect collinearity: those type-category
combinations appear to be absent from the data.
Last, we examine the Wald Test between the model with covariates and the
interaction model, and the high p-value indicates that the interaction
term doesn't add explanatory power to the model. This can also be
appreciated by the fact that the \(R^2\) doesn't increase significantly
from one model to another.
\begin{verbatim}
## Wald test
##
## Model 1: log(lifetime_engaged_users) ~ 1 + type + category_str + paid +
## period_of_day + weekend
## Model 2: log(lifetime_engaged_users) ~ 1 + type * category_str + paid +
## period_of_day + weekend
## Res.Df Df F Pr(>F)
## 1 338
## 2 335 3 0.7353 0.5316
\end{verbatim}
\end{document}
| {
"alphanum_fraction": 0.6931963147,
"avg_line_length": 36.8407310705,
"ext": "tex",
"hexsha": "1799a0a358ad07d8d31dc2896d65605c20592acb",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0b3e2d5a21cdb7d9bc2f39fc8c0cfa2824678e1c",
"max_forks_repo_licenses": [
"RSA-MD"
],
"max_forks_repo_name": "nicolob88/w203_lab2_g2",
"max_forks_repo_path": "lab2_datasplit_noba_nico.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0b3e2d5a21cdb7d9bc2f39fc8c0cfa2824678e1c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"RSA-MD"
],
"max_issues_repo_name": "nicolob88/w203_lab2_g2",
"max_issues_repo_path": "lab2_datasplit_noba_nico.tex",
"max_line_length": 118,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0b3e2d5a21cdb7d9bc2f39fc8c0cfa2824678e1c",
"max_stars_repo_licenses": [
"RSA-MD"
],
"max_stars_repo_name": "nicolob88/w203_lab2_g2",
"max_stars_repo_path": "lab2_datasplit_noba_nico.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4343,
"size": 14110
} |
\documentclass[a4paper]{article}
\def\npart{III}
\def\ntitle{Symplectic Topology}
\def\nlecturer{A.\ Keating}
\def\nterm{Lent}
\def\nyear{2020}
\input{header}
\renewcommand*{\P}{\mathbb{P}}
\newcommand{\w}{\wedge} % wedge product
\DeclareMathOperator{\Vect}{Vect} % vector field
\DeclareMathOperator{\Vol}{Vol} % volume form
\DeclareMathOperator{\Symp}{Symp}
\newcommand{\immersion}{\looparrowright}
\begin{document}
\input{titlepage}
\tableofcontents
\setcounter{section}{-1}
\section{Introduction and Motivations}
Let \(M\) be a manifold. In Riemannian geometry, we put a nondegenerate symmetric bilinear form on \(T_xM\). In symplectic geometry we put a non-degenerate skew-symmetric form instead. By basic linear algebra, after a change of basis such a form has matrix
\[
\Omega =
\begin{pmatrix}
0 & 1 \\
-1 & 0 \\
& & \ddots \\
& & & 0 & 1 \\
& & & -1 & 0
\end{pmatrix}
\]
We define the \emph{symplectic group} to be
\[
\Sp_{2n}(\R) = \{A \in \GL_{2n}(\R): A^T\Omega A = \Omega\}.
\]
A symplectic manifold is a \(2n\)-manifold \(M^{2n}\) with an atlas of charts such that the derivatives of transition maps are in \(\Sp_{2n}(\R)\). We will prove that this is equivalent to \((M^{2n}, \omega)\), where \(\omega \in \Omega^2(M)\) closed (\(\d \omega = 0\)), everywhere nondegenerate (\(\omega^{\w n} \ne 0\)) at all points.
\begin{ex}
\((\R^{2n}, \sum \d x_i \w \d y_i)\) gives \(\Omega\) with respect to \(\frac{\partial }{\partial x_i}, \frac{\partial }{\partial y_i}\). We call this sympletic form \(\omega_{\text{std}}\).
\end{ex}
In fact, this example is the ``local model'' for all symplectic manifolds. In other words, they have no local invariants.
Motivation 1: mechanics. Given a particle in \(\R^n\) and a potential \(U\), define \(H = U + \frac{\dot q^2}{2}\) to be the energy. Then we can work out the flow of Hamilton's equations
\[
\frac{\partial H}{\partial p} = \dot q, \frac{\partial H}{\partial q} = - \dot p.
\]
It is a fact that the flow preserves the symplectic form \(\sum \d p_i \w \d q_i\).
Motivation 2: symmetry groups. We want to classify groups acting locally on \(\R^k\) such that
\begin{itemize}
\item act locally transitively (otherwise reduce dimension to an orbit)
\item no invariant foliations: not of the form \((x, y) \mapsto (f(x), g(x, y))\) (otherwise reduce dimension).
\end{itemize}
\begin{theorem}[Lie]
If such a group is finite dimensional, it is one of finitely many families (e.g.\ \(\SO(n), \SU(n), \SO(p, q)\) etc).
\end{theorem}
\begin{theorem}[Cartan]
If such a group is infinite dimensional, it is one of
\begin{itemize}
\item \(\operatorname{Diff}(\R^k)\): all diffeomorphisms (preserving orientation),
\item \(\operatorname{Vol}(\R^k)\): all diffeomorphisms preserving volume form,
\item \(\operatorname{Symp}(\R^{2\ell})\): symplectomorphisms, i.e.\ diffeomorphisms preserving symplectic structure,
\item \(\operatorname{Cont}(\R^{2\ell + 1})\): contactomorphisms (the odd-dimensional analogue of symplectomorphisms)
\item and their conformal analogues.
\end{itemize}
\end{theorem}
Motivation 3: difference with volume. As \(A \in \Sp_{2n}(\R)\) implies \(\det A = 1\), we have an inclusion \(\operatorname{Symp}(\R^{2n}) \subseteq \operatorname{Vol}(\R^{2n})\).
\begin{theorem}[Moser]\leavevmode
\begin{enumerate}
\item Two volume forms on a closed manifold are equivalent if and only if they have the same total volume.
\item Suppose \(U, V\) are connected open sets in \(\R^k\). There is a volume-form-preserving embedding \(U \embed V\) if and only if \(\operatorname{vol}(U) \leq \operatorname{vol}(V)\).
\end{enumerate}
\end{theorem}
By contrast
\begin{theorem}[Gromov non-squeezing]\index{Gromov non-squeezing}
There is no symplectic embedding \(B^{2n}(R) \embed B^2(r) \times \R^{2n - 2}\) if \(R > r\).
\end{theorem}
Motivation 4: complex geometry. Any smooth affine variety has a natural symplectic form.
Course outline:
\begin{itemize}
\item background: an extra bit of differential geometry, almost complex structures, the first Chern class,
\item basic symplectic geometry: distinguished submanifolds, local models, some constraints on symplectic manifolds,
\item constructions: e.g.\ new symplectic manifolds from old,
\item holomorphic curves: invariants given by a generalisation of the Cauchy-Riemann equations, proof of the non-squeezing theorem
\end{itemize}
\section{(More) Differential geometry}
\paragraph{Tensor algebra}
Let \(E\) be a vector space over \(\F\). We define the \emph{tensor algebra} of \(E\) to be
\[
T(E) = \bigoplus_{i \geq 0} E^{\otimes i}
\]
where \(E^{\otimes 0} \cong \F\). Then we define the \emph{exterior algebra} to be
\[
\Lambda^*E = T(E)/\langle v \otimes v \rangle_{\text{as algebra}}
\]
which has a natural grading \(\Lambda^*E = \bigoplus_{k \geq 0} \Lambda^kE\),
\[
\Lambda^kE = E^{\otimes k}/\langle w_1 \otimes \cdots \otimes w_k: w_i = w_j \text{ for some } i \ne j \rangle_{\text{as vector space}}
\]
If \(\dim_\F E = n\) then \(\dim \Lambda^*E = 2^n, \dim \Lambda^kE = \binom{n}{k}\). The tensor product on \(T(E)\) induces wedge product on \(\Lambda^*(E)\) which is bilinear, associative and graded commutative.
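For instance, if \(E = \R^2\) with basis \(e_1, e_2\), then \(\Lambda^*E\) has basis \(1, e_1, e_2, e_1 \w e_2\), with \(e_1 \w e_2 = -e_2 \w e_1\).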
If \(A: E \to F\) is a linear map then it induces a map \(\Lambda^k A: \Lambda^k E \to \Lambda^k F\). If \(\dim E = \dim F = n\), then \(\Lambda^nA: \Lambda^nE \to \Lambda^nF\) can be identified with \(\det A: \F \to \F\).
\paragraph{Vector fields and differential forms}
Suppose \(M^n\) is a manifold\footnote{In this course all manifolds are assumed to be smooth unless stated otherwise.}. Then we have the tangent and cotangent bundle \(TM, T^*M\). \emph{Vector fields} and \emph{\(k\)-forms} on \(M\) are defined to be
\begin{align*}
\Vect(M) &= \Gamma(TM) = \Gamma(M, TM) \\
\Omega^k(M) &= \Gamma(M, \Lambda^kT^*M)
\end{align*}
The \(0\)-forms are also the smooth functions on \(M\), \(C^\infty(M) = \Omega^0(M)\).
In local coordinates \(x_1, \dots, x_n\) on \(M\), \(X \in \Vect(M)\) can be written as
\[
X_p = \sum_{i = 1}^n X^i \frac{\partial }{\partial x_i}|_p
\]
where each \(X^i\) is a smooth function.
Given \(X \in \Vect(M), f \in C^\infty(M)\), we can differentiate \(f\) along \(X\) by \((Xf)_p = X_p(f)\). In local coordinates,
\[
(Xf)_p = \sum X^i(p) \frac{\partial f}{\partial x_i}|_p.
\]
We can check this is well-defined and it is a derivation in the sense that \(X(fg) = fX(g) + gX(f)\).
\paragraph{Pullbacks}
Suppose \(f: M \to N\) is smooth. It induces \(f^*: C^\infty(N) \to C^\infty(M), g \mapsto g \compose f\), and also induces a map on \(1\)-forms by \((f^*\varphi)_x = (D_xf)^* (\varphi_{f(x)})\). It then induces \(f^*: \Omega^*(N) \to \Omega^*(M)\) with the properties that
\begin{enumerate}
\item \(f^*\) is linear and \(f^*(\varphi \w \theta) = f^*\varphi \w f^* \theta\).
\item \((f \compose g)^* \varphi = g^*f^* \varphi\)
\end{enumerate}
\paragraph{Differential on \(\Omega^*(M)\)}
Let \(U \subseteq M\) be a chart with coordinates \(x_i\). Then a local basis for \(\Lambda^kT^*M\) over \(U\) is \(\{\d x_I = \d x_{i_1} \w \cdots \w \d x_{i_k}\}\) where \(I = \{i_1 < \dots < i_k\}\). If \(\varphi: U \to \R\) then we have
\begin{align*}
\d: C^\infty(M) &\to \Omega^1(M) \\
\varphi &\mapsto \d \varphi = \sum \frac{\partial \varphi}{\partial x_i} \d x_i
\end{align*}
Note that \((\d \varphi)X = X\varphi \in C^\infty(M)\). In general, if \(\varphi = \sum \varphi_I \d x_I \in \Omega^k(M)\) where the \(\varphi_I\) are smooth functions, then \(\d \varphi = \sum \d \varphi_I \w \d x_I\). One can check this is well-defined and satisfies
\begin{enumerate}
\item \(\d (\varphi_1 + \varphi_2) = \d \varphi_1 + \d \varphi_2\),
\item \(\d (\varphi_1 \w \varphi_2) = \d \varphi_1 \w \varphi_2 + (-1)^k \varphi_1 \w \d \varphi_2\) if \(\varphi_1 \in \Omega^k\),
\item \(\d^2 = 0\),
\item \(\d (f^*\varphi) = f^*(\d \varphi)\).
\end{enumerate}
Moreover we can show these properties uniquely determine \(\d\).
It follows that we have the \emph{de Rham complex}
\[
\begin{tikzcd}
0 \ar[r] & \Omega^0(M) \ar[r, "\d"] & \Omega^1(M) \ar[r, "\d"] & \Omega^2(M) \ar[r] & \cdots \ar[r, "\d"] & \Omega^n(M) \ar[r] & 0
\end{tikzcd}
\]
which gives rise to de Rham cohomology \(H^*_{\text{dR}}(M)\). By the de Rham theorem this is isomorphic to \(H^*(M; \R)\), singular cohomology with coefficients in \(\R\). \((\Omega^*(M), \w, \d)\) is the de Rham algebra. Morgan (1978) showed that it characterises the rational homotopy type of algebraic varieties.
\paragraph{Isotopies and vector fields}
\begin{definition}[isotopy]\index{isotopy}
A smooth map \(\rho: M \times \R \to M\) is an \emph{isotopy} if \(\rho_t = \rho(-, t): M \to M\) is a diffeomorphism for each \(t\) and \(\rho_0 = \id_M\).
\end{definition}
We could replace \(\R\) with open intervals containing \(0\).
Given an isotopy \(\rho\), we get a time-dependent vector field, say \(v_t\), as follows:
\[
v_t|_p = \frac{d}{ds} \rho_s(q)|_{s = t}
\]
where \(q = \rho_t^{-1}(p)\), i.e.
\[
\frac{d \rho_t}{dt} = v_t \compose \rho_t.
\tag{\ast}
\]
Conversely, given a time-dependent vector field \(v_t\), if \(M\) is compact or if \(v_t\) is compactly supported, by Picard's theorem on existence of solutions to ODEs, there is an isotopy \(\rho\) such that \(\rho_0 = \id\) and the ODE (\(\ast\)) is satisfied. For compact \(M\) we have a one-to-one correspondence
\[
\{\text{isotopies of } M\} \longleftrightarrow \{\text{time-dependent vector fields on } M\}.
\]
For non-compact \(M\), the flow still exists locally (i.e.\ at each point \(p\) for sufficiently small interval of time) by Picard(-Lindelöf).
\begin{definition}\index{exponential map}
If \(v_t = v\) (independent of \(t\)), its flow is called the \emph{exponential map} of \(v\), denoted \(\exp(tv)\).
\end{definition}
Useful formula (III Differential Geometry Example sheet 2 question 3): for \(\theta \in \Omega^1(M), X, Y \in \Vect(M)\), have
\[
\d \theta(X, Y) = X \theta(Y) - Y \theta(X) - \theta([X, Y]).
\]
\paragraph{Interior product}
Suppose \(\alpha \in \Omega^{p + 1}(M), X \in \Gamma(TM)\), then we define the \emph{interior product} \(X \lrcorner \alpha = \iota_X \alpha \in \Omega^p(M)\) to be
\[
\iota_X(\alpha)(u) = \alpha(X \w u)
\]
for \(u \in \Gamma(\Lambda^pTM)\).
\paragraph{Lie derivatives}
(See also III Differential Geometry Example Sheet 2 Question 11*.)
Let \(M\) be a manifold, \(X \in \Gamma(TM)\) a vector field. Then we have a local flow \(\varphi_t: M \to M\) for \(t \in (-\delta, \delta)\). Given \(\alpha \in \Omega^*(M)\), the \emph{Lie derivative} of \(\alpha\) with respect to \(X\) is
\[
\mathcal L_X(\alpha) = \frac{d}{dt} (\varphi_t^*\alpha)|_{t = 0} \in \Omega^*(M)
\]
and has the same degree as \(\alpha\) if \(\alpha\) has pure degree. For \(V \in \Gamma(\Lambda^kTM)\), we similarly define
\[
\mathcal L_XV = \frac{d }{d t}((\varphi_{-t})_*V)|_{t = 0}
\]
where \((\varphi_t)_* = \Lambda^k D\varphi_t\).
Properties:
\begin{enumerate}
\item \(\mathcal L_Xf = Xf\) for \(f \in C^\infty(M)\).
\item \(\mathcal L_X(Y) = [X, Y]\) for \(X, Y \in \Gamma(TM)\).
\item \(\mathcal L_X(\d \alpha) = \d \mathcal L_X \alpha\) for \(\alpha \in \Omega^*(M)\).
\item Cartan's formula: \(\mathcal L_X = \iota_X \compose \d + \d \compose \iota_X\).
\item For a time-dependent \(X_t\) with flow \(\varphi_t\), \(\frac{d}{dt}(\varphi_t^*\alpha) = \varphi_t^* \mathcal L_{X_t}\alpha\) for \(\alpha \in \Omega^*(M)\).
\end{enumerate}
\begin{proof}[Sketch proof]\leavevmode
\begin{enumerate}
\item \(\mathcal L_X f = \frac{d}{dt}|_{t = 0} (f \compose \varphi_t) = Xf\).
\item Let \(\varphi_t\) be the flow of \(X\). Use the slightly unusual notation \(\varphi_t^*(Y)|_p = (D\varphi_t^{-1})(Y_{\varphi_t(p)})\). Check
\[
\varphi_t^*(Y)(f \compose \varphi_t) = Y(f) \compose \varphi_t
\]
so have
\[
\frac{\varphi_t^*(Y)(f \compose \varphi_t) - \varphi_t^*(Y)(f)}{t} + \frac{\varphi_t^*(Y)(f) - Y(f)}{t} = \frac{Y(f) \compose \varphi_t - Y(f)}{t}
\]
take limit as \(t \to 0\),
\[
YX(f) + \mathcal L_X(Y)(f) = XY(f).
\]
\item Omitted.
\item General strategy: check the formula holds for \(0\)-forms, both sides commute with \(\d\), both sides are derivations for \((\Omega^*(M), \w)\), and use the fact that the equations are local and for a local coordinate patch \(U\), \(\Omega^*(U)\) is generated as an algebra by \(\Omega^0(U)\) and \(\d \Omega^0(U)\).
\item Same as 4.
\end{enumerate}
\end{proof}
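As a sanity check of Cartan's formula, take \(M = \R^2\), \(\omega = \d x \w \d y\) and the radial field \(X = x \frac{\partial }{\partial x} + y \frac{\partial }{\partial y}\), whose flow is \(\varphi_t(p) = e^tp\). Then \(\iota_X \omega = x \d y - y \d x\), so
\[
\mathcal L_X \omega = \d \iota_X \omega + \iota_X \d \omega = 2 \d x \w \d y + 0 = 2\omega,
\]
which agrees with computing directly: \(\varphi_t^*(\d x \w \d y) = e^{2t} \d x \w \d y\), whose derivative at \(t = 0\) is \(2 \d x \w \d y\).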
\begin{lemma}
For a smooth family \(\alpha_t \in \Omega^k(M)\),
\[
\frac{d}{dt}(\varphi_t^* \alpha_t) = \varphi_t^*(\mathcal L_{X_t} \alpha_t + \frac{d \alpha_t}{dt}).
\]
\end{lemma}
\begin{proof}
Treat LHS as the derivative of a function of two variables,
\[
\frac{d}{dt}(\varphi_t^* \alpha_t)
= \frac{d}{dx}(\varphi_x^* \alpha_t)|_{x = t} + \frac{d}{dy}(\varphi_t^* \alpha_y)|_{y = t}
= \varphi_t^*\mathcal L_{X_t}\alpha_t + \varphi_t^* \frac{d\alpha_t}{dt}.
\]
\end{proof}
\paragraph{Orientations}
Let \(E\) be an \(n\)-dimensional real vector space. Then an orientation on \(E\) is an equivalence class of ordered bases \((e_1, \dots, e_n)\) under the equivalence relation \((e_1, \dots, e_n) \sim (f_1, \dots, f_n)\) if and only if the endomorphism \(A: e_i \mapsto f_i\) has \(\det A > 0\).
Let \(\pi: E \to B\) be a rank \(k\) real vector bundle. An orientation on \(E\) is a coherent choice of orientations on each fibre \(E_b\), where ``coherent'' means that for local trivialisation \(\pi^{-1}(U) \cong \R^k \times U\), the choice is constant.
Let \(M^n\) be a manifold. An orientation of \(M\) is an orientation of \(TM\) (if exists). We will denote by \(\overline M\) the manifold \(M\) with opposite orientation. If \(M^n\) is a manifold with boundary \(\p M\), an orientation of \(M\) induces an orientation on \(\p M\): a basis \((e_1, \dots, e_{n - 1})\) for \(T_x(\p M)\) is positively oriented if \((n_x, e_1, \dots, e_{n - 1})\) is for ``\(T_xM\)'', where \(n_x\) is the outward pointing normal vector.
Note if \(M\) is a compact oriented \(1\)-manifold with boundary then \(\sum_{p \in \p M} \operatorname{or}(p) = 0\).
\paragraph{Integration}
In vector calculus, we have if \(f: (U, x_i) \to (V, y_i)\) is a diffeomorphism of open subsets of \(\R^k\), then
\[
\int_V a dy_1 \cdots dy_k = \int_U (a \compose f) |\det(Df)| dx_1 \cdots dx_k.
\]
In differential geometry we formulate integration in this way: for \(\varphi = a \d y_1 \w \cdots \w \d y_k\), if \(f\) preserves orientation then
\[
\int_V \varphi = \int_U f^* \varphi.
\]
\begin{lemma}
If \(X\) is an oriented \(k\)-manifold, there is a well-defined integration map
\[
\int_X: \Omega^k_c \to \R
\]
where \(\Omega^k_c\) are \(k\)-forms with compact support.
\end{lemma}
A \emph{volume form} on \(M^k\) is a nowhere zero section \(\d \Vol \in \Omega^k(M)\), which is equivalent to a choice of trivialisation \(\Lambda^kT^*M \cong \R \times M\). \(M\) is orientable if and only if \(\Lambda^kT^*M\) is trivial. Note that \(\d (\d \Vol) = 0\).
\begin{theorem}[Stokes]
\[
\int_M \d \alpha = \int_{\p M} \alpha.
\]
\end{theorem}
\begin{corollary}
For \(X\) closed oriented, we have a surjection \(\int_X: H^k_{\text{dR}}(X) \to \R\).
\end{corollary}
\begin{proof}[Sketch proof]
Let \(U \subseteq \R^k_+\) be an open chart. Use linearity and partition of unity, it suffices to work in \(U\). Then use standard results from multivariate calculus/Fubini's theorem. See example sheet 1.
\end{proof}
\section{Symplectic linear algebra}
Recall the standard symplectic form \(\omega_{\text{std}} = \sum \d x_i \w \d y_i\) on \((\R^{2n}, (x_i, y_i))\): with respect to \(\frac{\partial }{\partial x_1}, \frac{\partial }{\partial y_1}, \dots, \frac{\partial }{\partial x_n}, \frac{\partial }{\partial y_n}\), it has matrix
\[
\Omega_0 =
\begin{pmatrix}
0 & 1 \\
-1 & 0 \\
& & \ddots \\
& & & 0 & 1 \\
& & & -1 & 0
\end{pmatrix}
\]
Define
\begin{align*}
\Sp_{2n}(\R)
&= \{A \in \GL_{2n}(\R): A^*\omega_0 = \omega_0\} \\
&= \{A \in \GL_{2n}(\R): A^T\Omega_0A = \Omega_0\}
\end{align*}
by identifying \(A\) with its matrix representation.
Recall from linear algebra
\begin{lemma}
Suppose \((V, \Omega)\) is a vector space with a non-degenerate alternating (or skew-symmetric) bilinear form \(\Omega\) (i.e.\ \(V\) is a \emph{symplectic vector space}\index{symplectic vector space}), then there is a basis \(\mathcal B = (u_1, v_1, \dots, u_n, v_n)\) of \(V\) such that \([\Omega]_{\mathcal B} = \Omega_0\).
\end{lemma}
\begin{proof}[Sketch]
By non-degeneracy there exist \(u_1, v_1\) such that \(\Omega(u_1, v_1) = 1\). \(u_1, v_1\) are linearly independent since \(\Omega\) is alternating. Then \(V = \langle u_1, v_1 \rangle \oplus \{w: \Omega(u_1, w) = \Omega(v_1, w) = 0\}\). Proceed by induction.
\end{proof}
\begin{corollary}
Symplectic vector spaces are even-dimensional and \(\Omega \in \Lambda^2V^*\) is non-degenerate if and only if \(\Omega^n \ne 0 \in \Lambda^{2n}V^*\).
\end{corollary}
\begin{definition}[symplectic complement]\index{symplectic complement}
Suppose \(U \leq (V, \Omega)\). The \emph{symplectic complement} of \(U\) in \(V\) is
\[
U^{\Omega} = \{w \in V: \Omega(w, u) = 0 \text{ for all } u \in U\}.
\]
\end{definition}
\begin{definition}[symplectic, (co)isotropic, Lagrangian subspace]\index{symplectic subspace}\index{isotropic subspace}\index{coisotropic subspace}\index{Lagrangian subspace}
Let \((V, \Omega)\) be a symplectic vector space.
\begin{itemize}
\item \(U \leq V\) is a \emph{symplectic subspace} if \(U \cap U^\Omega = 0\), i.e.\ \(\Omega|_U\) is nondegenerate.
\item \(U \leq V\) is an \emph{isotropic subspace} if \(U \leq U^\Omega\).
\item \(U \leq V\) is a \emph{coisotropic subspace} if \(U^\Omega \leq U\).
\item \(U \leq V\) is a \emph{Lagrangian subspace} if it is both isotropic and coisotropic.
\end{itemize}
\end{definition}
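\begin{eg}
In \((\R^4, \omega_{\text{std}})\): \(\langle \frac{\partial }{\partial x_1} \rangle\) is isotropic, \(\langle \frac{\partial }{\partial x_1}, \frac{\partial }{\partial x_2} \rangle\) is Lagrangian, \(\langle \frac{\partial }{\partial x_1}, \frac{\partial }{\partial y_1} \rangle\) is symplectic, and \(\langle \frac{\partial }{\partial x_1}, \frac{\partial }{\partial x_2}, \frac{\partial }{\partial y_2} \rangle\) is coisotropic, with symplectic complement \(\langle \frac{\partial }{\partial x_1} \rangle\).
\end{eg}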
\begin{proposition}
An isotropic subspace has dimension at most \(\frac{1}{2} \dim V\) and a coisotropic subspace has dimension at least \(\frac{1}{2} \dim V\). If \(U\) is isotropic of dimension \(\frac{1}{2} \dim V\), it is also coisotropic (and vice versa), in which case it is Lagrangian.
\end{proposition}
\begin{proof}
\(\Omega\) nondegenerate gives a surjection \(V \to U^*, v \mapsto \Omega(-, v)|_U\), with kernel \(U^\Omega\). Thus \(\dim U^* + \dim U^\Omega = \dim V\), so \(\dim U + \dim U^\Omega = \dim V\).
\end{proof}
\section{Symplectic manifolds: first notions}
\(\varphi \in \Omega^2(M)\) gives
\begin{align*}
\mu_\varphi: TM &\to T^*M \\
u &\mapsto (v \mapsto \varphi(u, v))
\end{align*}
i.e.\ \(\mu_\varphi u = \iota_u \varphi\). \(\varphi\) is \emph{non-degenerate} if \(\mu_\varphi\) is an isomorphism, which happens if and only if \(\varphi^n\) is nowhere zero, where \(\dim M = 2n\).
\begin{definition}[symplectic form]\index{symplectic form}
A closed and non-degenerate form \(\omega \in \Omega^2(M)\) is called a \emph{symplectic form}.
\end{definition}
Non-degeneracy is a pointwise linear algebra condition, while closedness is a local differential condition: to check that \(\d\omega\) vanishes we only have to work in local charts.
\begin{definition}[symplectic manifold]\index{symplectic manifold}
A \emph{symplectic manifold} is a pair \((M^{2n}, \omega)\) with \(\omega\) a symplectic form.
\end{definition}
\begin{definition}[symplectic structure]\index{symplectic structure}
A \emph{symplectic structure} is a symplectic form up to pullback by diffeomorphism.
\end{definition}
\begin{proposition}
A 2-fold is symplectic if and only if it is orientable.
\end{proposition}
\begin{proof}
A non-degenerate form on \(M^2\) is a volume form. By dimension reason all \(2\)-forms are closed.
\end{proof}
\begin{proposition}
Suppose a closed manifold \(M^{2n}\) is symplectic. Then \(H^{2i}_{\text{dR}}(M) \ne 0\) for \(0 \leq i \leq n\).
\end{proposition}
\begin{proof}
\([\omega] \in H^2_{\text{dR}}(M)\) and \(\omega^n\) is a volume form, say \(\d \Vol\). Thus
\[
[\omega]^n = [\omega^{\w n}] = [\d \Vol] \ne 0 \in H^{2n}_{\text{dR}}(M).
\]
\end{proof}
\begin{eg}
\(S^4\) is not symplectic.
\end{eg}
\subsection{Hamiltonian flows}
Suppose \((M^{2n}, \omega)\) is symplectic and \(f \in C^\infty(M)\). We can construct a vector field as follows: \(\d f \in \Omega^1(M)\), so using the bundle isomorphism \(\mu_\omega: TM \to T^*M\) we obtain \(X_f = (\mu_\omega)^{-1}(\d f)\). It is the unique vector field such that \(\iota_{X_f} \omega = \d f\).
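In \((\R^{2n}, \omega_{\text{std}})\) this is explicit: writing \(X_f = \sum_i a_i \frac{\partial }{\partial x_i} + b_i \frac{\partial }{\partial y_i}\), we have \(\iota_{X_f}\omega_{\text{std}} = \sum_i a_i \d y_i - b_i \d x_i\), and comparing with \(\d f\) gives
\[
X_f = \sum_i \frac{\partial f}{\partial y_i} \frac{\partial }{\partial x_i} - \frac{\partial f}{\partial x_i} \frac{\partial }{\partial y_i}.
\]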
\begin{definition}[Hamiltonian flow]\index{Hamiltonian flow}
The flow of \(X_f\) is called the \emph{Hamiltonian flow} of \(f\).
\end{definition}
\begin{proposition}
Wherever defined, the Hamiltonian flow of a function acts by symplectomorphism.
\end{proposition}
\begin{proof}
By Cartan's formula
\[
\mathcal L_{X_f} \omega = \iota_{X_f} \d \omega + \d \iota_{X_f} \omega
= 0 + \d (\d f) = 0.
\]
\end{proof}
\begin{remark}
This means that symplectic manifolds have larger spaces of symmetries than Riemannian manifolds. For example compare isometries of \(S^2\) vs. symplectomorphisms.
\end{remark}
\begin{eg}
Hamilton's equations
\[
\frac{\partial H}{\partial p} = \dot q, \frac{\partial H}{\partial q} = - \dot p
\]
where \(q\) is position and \(p\) is momentum. This is same as following the flow of \(X_H = (\frac{\partial H}{\partial p}, - \frac{\partial H}{\partial q})\), for \(\omega = \sum \d q_i \w \d p_i\). Note \(\iota_{X_H} \omega = \d H\). This shows that classical Hamiltonian flows are examples of Hamiltonian flows in the sense of symplectic geometry and are through symplectomorphisms of \(\R^{2n}\).
\end{eg}
\subsection{(Almost) complex manifolds}
\begin{definition}[complex manifolds]
A \emph{complex manifold} is a manifold \(M^{2n}\) covered by charts \(u_\alpha \subseteq \C^n\) such that the transition maps are biholomorphisms. Equivalently, for two charts \((U_\alpha, \varphi_\alpha)\) and \((U_\beta, \varphi_\beta)\), we require \(D(\varphi_\alpha \compose \varphi_\beta^{-1}) \in \GL_n(\C) \subseteq \GL_{2n}(\R)\).
\end{definition}
The (co)tangent spaces of \(M\) are naturally complex vector spaces.
\begin{definition}[almost complex structure]\index{almost complex structure}
An \emph{almost complex structure} (acs) on a smooth manifold \(M\) is an endomorphism \(J: TM \to TM\) such that \(J^2 = -\id\).
\end{definition}
\begin{definition}[integrable]\index{almost complex structure!integrable}
If an almost complex structure \(J\) comes from a complex structure then we say \(J\) is \emph{integrable}.
\end{definition}
\begin{remark}\leavevmode
\begin{enumerate}
\item A complex manifold is almost complex: the complex structure is induced by multiplication by \(i\).
\item \(J\) extends to a map \(TM \otimes_\R \C \to TM \otimes_\R \C\), which we also denote by \(J\). For complex manifolds we get
\[
J(\frac{\partial }{\partial z_j}) = i \frac{\partial }{\partial z_j}, J(\frac{\partial }{\partial \overline z_j}) = -i \frac{\partial }{\partial \overline z_j}
\]
where \(TM \otimes \C = \C \langle \frac{\partial }{\partial x_j}, \frac{\partial }{\partial y_j} \rangle\), and
\[
\frac{\partial }{\partial z_j} = \frac{1}{2}(\frac{\partial }{\partial x_j} - i \frac{\partial }{\partial y_j}),
\frac{\partial }{\partial \overline z_j} = \frac{1}{2}(\frac{\partial }{\partial x_j} + i \frac{\partial }{\partial y_j}),
\]
and their dual basis is
\[
\d z_j = \d x_j + i \d y_j, \d \overline z_j = \d x_j - i \d y_j.
\]
\item The complexified cotangent bundle splits as \(T^*M \otimes \C = T^*M^{1, 0} \oplus T^*M^{0, 1}\) where
\begin{align*}
T^*M^{1, 0} &= \{\alpha: \alpha(Jv) = i \alpha(v)\} = \C \langle \d z_j \rangle \\
T^*M^{0, 1} &= \{\alpha: \alpha(Jv) = -i \alpha(v)\} = \C \langle \d \overline z_j \rangle \\
\end{align*}
This then induces splitting on sections
\[
\Omega^1(M; \C) = \Gamma(T^*M \otimes \C) = \Omega^{1, 0} \oplus \Omega^{0, 1}.
\]
More generally, define
\[
\Omega^{p, q}(M) = \Gamma(\Lambda^pT^*M^{1, 0} \otimes \Lambda^qT^*M^{0, 1}) \leq \Gamma(\Lambda^{p + q}(T^*M \otimes \C)) = \Omega^{p + q}(M; \C).
\]
For \(M\) a complex manifold, a section of \(\Omega^{p, q}\) is given in local coordinates by
\[
\sum \alpha_{PQ} \d z_P \w \d \overline z_Q
\]
where \(\alpha_{PQ}\) are smooth functions.
\item \(\d: \Omega^k(M) \to \Omega^{k + 1}(M)\) induces \(\d: \Omega^k(M; \C) \to \Omega^{k + 1}(M; \C)\). Consider \(\d: \Omega^0(M; \C) \to \Omega^1(M; \C)\); composing with the projections to \((1, 0)\)-forms and \((0, 1)\)-forms, we get \(\d = \p + \overline \p\).
For a \emph{complex manifold}, we can
\begin{enumerate}
\item talk about holomorphic functions (functions that are holomorphic on each chart), and \(f \in C^\infty(M)\) is holomorphic if and only if \(\overline \p f = 0\).
\item \(\d(\Omega^{p, q}) \subseteq \Omega^{p + 1, q} \oplus \Omega^{p, q + 1}\) (this needn't be true for almost complex manifolds).
\item \(\d^2 = 0\) implies \(\overline \p^2 = 0\) so we can form the Dolbeault complex
\[
\begin{tikzcd}
\cdots \ar[r] & \Omega^{\bullet, k} \ar[r, "\overline \p"] & \Omega^{\bullet, k + 1} \ar[r, "\overline \p"] & \cdots
\end{tikzcd}
\]
The cohomology of this complex is Dolbeault cohomology \(H_{\overline \p}^{\bullet, k}(M)\).
\end{enumerate}
\end{enumerate}
\end{remark}
\subsection{Kähler manifold}
\begin{definition}[Kähler manifold]\index{Kähler manifold}
A \emph{Kähler manifold} is a complex manifold with a closed positive-definite real \((1, 1)\)-form: \(\omega \in \Omega^{1, 1}(M)\) such that
\begin{itemize}
\item \(\p \omega = \overline \p \omega = 0\),
\item \(\omega = \frac{i}{2} \sum_{j, k} h_{jk} \d z_j \w \d \overline z_k\) with \((h_{jk})\) a hermitian positive definite matrix.
\end{itemize}
\end{definition}
\begin{ex}
Expand the local expression in terms of \(\d x_j\) and \(\d y_j\) to show that the factor \(\frac{i}{2}\) is sensible.
\end{ex}
\begin{proposition}
Kähler forms are symplectic forms.
\end{proposition}
\begin{proof}
A Kähler form \(\omega\) is closed, and
\[
\omega^n = n! \left(\frac{i}{2} \right)^n \underbrace{\det(h_{jk})}_{> 0} \d z_1 \w \d \overline z_1 \w \cdots \w \d z_n \w \d \overline z_n
\]
and since \(\frac{i}{2} \d z_i \w \d \overline z_i = \d x_i \w \d y_i\), \(\omega^n\) is nowhere zero.
\end{proof}
\begin{definition}[plurisubharmonic]\index{plurisubharmonic}
Let \(M\) be a complex manifold. A smooth function \(\rho: M \to \R\) is strictly \emph{plurisubharmonic} (plush) if
\[
\left( \frac{\partial^2 \rho}{\partial z_j \partial \overline z_k} \right)
\]
is positive definite everywhere.
\end{definition}
Check that for such a function, \(\frac{i}{2} \p \overline \p \rho\) defines a Kähler form on \(M\).
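As a quick check in the simplest case, take \(n = 1\) and \(\rho(z) = |z|^2 = z \overline z\): then \(\overline \p \rho = z \d \overline z\), \(\p \overline \p \rho = \d z \w \d \overline z\), and \(\frac{i}{2} \d z \w \d \overline z = \d x \w \d y\), the standard area form.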
\begin{remark}
Such an \(M\) can't be closed.
\end{remark}
\begin{eg}\leavevmode
\begin{enumerate}
\item \(\rho(z) = |z|^2\) on \(\C^n \cong \R^{2n}\) gives \(\frac{i}{2} \p \conj \p \rho = \omega_{\text{std}}\).
\item \(\rho(z) = \log(1 + |z|^2)\) on \(\C^n \cong \R^{2n}\). To check this is plush, look at \((1, 0, \dots, 0)\) and use \(U(n)\)-invariance. Check also the induced volume form has finite total volume.
\item A complex submanifold of a Kähler manifold is Kähler with the pullback form.
\item \(\C \P^n\) is Kähler: \(\P^n\) can be covered by charts \(U_i = \{z_i \ne 0\}\). The transition functions are of the form
\[
\varphi: (u_1, \dots, u_n) \mapsto (\frac{1}{u_1}, \frac{u_2}{u_1}, \dots, \frac{u_n}{u_1})
\]
so
\[
\varphi^*\left(\frac{i}{2} \p \overline \p \log(1 + |z|^2)\right)
= \frac{i}{2} \p \overline \p \left(\log(1 + |z|^2) + \log \frac{1}{|z_1|^2}\right).
\]
Thus the local Kähler forms patch to give a global one. For details see III Complex Manifolds.
\end{enumerate}
\end{eg}
\begin{theorem}[Hodge]
If \(X\) is compact Kähler then
\begin{itemize}
\item \(H^k_{\mathrm{dR}}(X) \otimes \C \cong \bigoplus_{i + j = k} H^{i, j}_{\conj \p}(X)\).
\item \(H^{i, j}_{\conj \p}(X) \cong \conj{H^{j, i}_{\conj \p}(X)}\). In particular they have the same dimension.
\end{itemize}
\end{theorem}
\begin{corollary}
If \(X\) is compact Kähler then any Betti number of odd degree is even.
\end{corollary}
\begin{theorem}[Lefschetz]
For \(X\) compact Kähler the wedge product
\[
- \w [\omega]^k: H^{n - k}_{\mathrm{dR}}(X) \to H^{n + k}_{\mathrm{dR}}(X)
\]
is an isomorphism, where \(n\) is the complex dimension of \(X\).
\end{theorem}
\begin{remark}\leavevmode
\begin{enumerate}
\item It is easy to write down a compact complex manifold that is not Kähler: consider \((\C^2 \setminus \{0\})/z \sim 2z\). It inherits a complex structure from \(\C^2\). It is homeomorphic to \(S^1 \times S^3\), so \(b_2 = 0\). Thus it can't be symplectic.
\item Most examples of complex Kähler manifold we'll see are projective, but plenty are not (e.g.\ take deformations of complex surfaces in \(\P^3\). c.f.\ K3 surfaces).
\end{enumerate}
\end{remark}
\subsection{Almost complex structure on symplectic manifolds}
\begin{definition}[compatible almost complex structure]\index{almost complex structure!compatible}
An almost complex structure \(J\) on a symplectic manifold \((M, \omega)\) is \emph{compatible} with \(\omega\) if
\begin{enumerate}
\item \(\omega(Ju, Jv) = \omega(u, v)\),
\item \(\omega(v, J(v)) > 0\) unless \(v = 0\).
\end{enumerate}
\end{definition}
\begin{note}
It follows that \(\omega(\cdot, J \cdot)\) is a positive-definite symmetric bilinear form, so gives a Riemannian metric \(g(u, v) = \omega(u, Jv)\). Such a triple \((\omega, J, g)\) is sometimes called a compatible triple. We'll see any two of them determine the third.
\end{note}
\begin{proposition}
Any symplectic manifold \((M, \omega)\) admits a compatible almost complex structure.
\end{proposition}
\begin{proof}
Let \((V, \Omega)\) be a symplectic vector space. Fix any metric \(g\) on \(V\). As \(g\) and \(\Omega\) both determine isomorphisms \(V \to V^*\), there exists \(A \in \End V\) such that \(\omega(u, v) = g(Au, v)\). Note that
\[
\omega(u, v) = -\omega(v, u) = -g(Av, u) = -g(u, Av)
\]
so \(A^* = -A\) with respect to \(g\). As \(g(AA^*v, v) = g(A^*v, A^*v) > 0\) for all \(v \ne 0\) and \((AA^*)^* = AA^*\), \(AA^*\) is positive definite symmetric. Thus by choosing an orthonormal basis, \(AA^* = BDB^{-1}\), where \(D\) is the diagonal matrix with positive entries \(\lambda_1, \dots, \lambda_{2n}\). We can thus take the positive square root and define \(J = (\sqrt{AA^*})^{-1}A\).
\begin{enumerate}
\item \(J^2 = (\sqrt{-A^2})^{-1}A(\sqrt{-A^2})^{-1}A = -\id\).
\item \(J^* = -J\) (implies \(JJ^* = \id\) so \(J\) is orthogonal).
\item For compatibility,
\[
\omega(Ju, Jv) = g(AJu, Jv) = g(JAu, Jv) = g(Au, v) = \omega(u, v).
\]
\item \(\omega(u, Ju) = g(-JAu, u) = g(\sqrt{AA^*} u, u) > 0\).
\end{enumerate}
Since \(A\) is uniquely determined and we are taking the positive square root, we can use this construction on each \(T_xM\), for a choice of Riemannian metric on \(M\). Our procedure on \((V, \Omega)\) is canonical so this works globally.
\end{proof}
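\begin{eg}
For \((V, \Omega) = (\R^2, \omega_{\text{std}})\) with \(g\) the standard inner product: \(\Omega(u, v) = g(Au, v)\) forces \(Ae_1 = e_2\) and \(Ae_2 = -e_1\), so \(AA^* = \id\) and \(J = A\), rotation by \(\pi/2\): the standard complex structure on \(\R^2 \cong \C\).
\end{eg}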
\begin{note}
Given \((M, \omega)\), the construction above defines a surjection
\[
\{\text{Riemannian metrics on } M\} \longrightarrow \{\text{compatible almost complex structures } J\}
\]
with a natural section \(J \mapsto g_J = \omega(\cdot, J\cdot)\). The space of metrics is convex, hence contractible, and it follows that the space of compatible \(J\) is also contractible. We will see that for many applications, there is ``essentially no choice'' of \(J\).
\end{note}
\begin{definition}[symplectic vector bundle]\index{symplectic vector bundle}
A \emph{symplectic vector bundle} is a vector bundle \(\pi: E \to B\) with a section \(\Omega \in \Gamma(\Lambda^2E^*)\) such that \(\Omega|_{E_b} = \omega_b\) is symplectic on each fibre, and which is locally trivial, i.e.\ \((\pi^{-1}(U), \Omega) \cong (U \times \R^{2n}, \omega_0)\).
\end{definition}
\begin{corollary}
Such an \(E\) admits the structure of a complex vector bundle, uniquely determined up to a contractible choice.
\end{corollary}
Note that this is weaker than a holomorphic bundle.
\subsection{First Chern class}
What is the (essentially unique) compatible complex structure good for? We can define a topological invariant. The \emph{first Chern class} of a complex vector bundle \(E \to B\) is an element \(c_1(E) \in H^2(B; \Z)\). We present three definitions here. They are equivalent, but we will not prove so.
\begin{enumerate}
\item Algebraic topology: let \(\det E = \Lambda^{\mathrm{rank} E}E\), which is a complex line bundle over \(B\), so can be obtained as a pullback of \(\mathcal L_{\mathrm{taut}} \to BU(1) = \C\P^\infty\). Note the tautological bundle can be described as
\begin{align*}
\mathcal L_{\mathrm{taut}}
&= \{(w, z) \in \C\P^n \times \C^{n + 1}: z \in w\} \\
&= \{(w, z): z_iw_j = z_jw_i \text{ for all } i, j\} \subseteq \C\P^n \times \C^{n + 1}
\end{align*}
The advantage of the second description is that it is a smooth projective variety so a Kähler manifold.
Suppose \(\det E = \varphi^* \mathcal L_{\mathrm{taut}}\). \(H^*(\C\P^\infty) = \Z[c_1^{\mathrm{univ}}]\) where \(c_1^{\mathrm{univ}}\) has degree \(2\). We then define \(c_1(E) = \varphi^*(c_1^{\mathrm{univ}})\).
\item Algebraic topology, alternative: take \(s: B \to \det E\) a generic section. Then \(\dim_\R(s(B) \cap s_0(B)) = \dim_\R B - 2\) (recall \(\det E\) is a complex line bundle). Define \(c_1(E)\) to be the Poincaré dual of this class.
\item Differential geometry: suppose \(\d_A\) is a connection on \(E\), \(F_A = \d_A \compose \d_A \in \Omega^2(\End E)\) the curvature. Then \(c_1(E) = [\frac{1}{2\pi i} \tr F_A]\).
\end{enumerate}
Properties of \(c_1(E)\):
\begin{enumerate}
\item from the second definition, \(c_1(E) = e(\det E) \in H^2(B; \Z)\).
\item \(c_1(E) \in H^2(B; \Z)\) is unchanged if \(J_E\) changes continuously, so we get an invariant of symplectic vector bundles.
\item \(c_1(f^*E) = f^*c_1(E)\).
\item \(c_1(E^*) = -c_1(E)\).
\item For any short exact sequence of complex vector bundles \(0 \to E \to \mathcal E \to E' \to 0\), we have \(c_1(\mathcal E) = c_1(E) + c_1(E')\).
\item \(c_1(E \otimes F) = \mathrm{rk} E c_1(F) + \mathrm{rk} F c_1(E)\).
\item For \(M\) compact, the set of complex line bundles over \(M\) up to isomorphism is isomorphic to \(H^2(M; \Z)\) via \(c_1\).
\end{enumerate}
\begin{notation}
For a manifold with almost complex structure, define \(c_1(M) = c_1(TM)\).
\end{notation}
\begin{ex}\leavevmode
\begin{enumerate}
\item \(c_1(\Sigma_g) = (2 - 2g) PD(\mathrm{pt})\). This is Gauss--Bonnet: count the signed zeros of a generic vector field. As \(\Sigma_g\) has complex dimension \(1\), this is just the Euler class of \(T\Sigma_g\).
\item \(c_1(\mathcal L_{\mathrm{taut}} \to \C\P^n) = -[H] = PD(\C\P^{n -1}) \in H^2(\C\P^n) \cong \Z \langle H\rangle\). \([H]\) is the hyperplane class. In algebraic geometry we have \(\mathcal L_{\mathrm{taut}} = \mathcal O(-1)\).
\item \(c_1(T \C\P^n) = c_1(\C\P^n) = (n + 1) [H]\).
\end{enumerate}
\end{ex}
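A quick way to see the last computation, using properties 4 and 5 above together with example 2 (the Euler sequence is quoted from algebraic geometry without proof here): there is a short exact sequence of complex vector bundles
\[
0 \to \underline{\C} \to \mathcal O(1)^{\oplus (n + 1)} \to T\C\P^n \to 0
\]
where \(\mathcal O(1) = \mathcal L_{\mathrm{taut}}^*\), so \(c_1(T\C\P^n) = (n + 1)\, c_1(\mathcal O(1)) = (n + 1)[H]\).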
Constraint for \(4\)-manifolds:
\paragraph{adjunction formula}
Suppose \(C \subseteq (X^4, J)\) is an almost complex curve in an almost complex surface (\(JTC = TC \subseteq TX\)). Then
\[
2g(C) - 2 = -c_1(X) \cdot [C] + [C]^2
\]
where \([C]^2\) is the self-intersection number of \(C\). Thus the genus of \(C\) is determined by its homology class.
\begin{proof}
We have a short exact sequence
\[
\begin{tikzcd}
0 \ar[r] & TC \ar[r] & TX|_C \ar[r] & \nu_{C/X} \ar[r] & 0
\end{tikzcd}
\]
where \(\nu_{C/X}\) is the normal bundle. So \(c_1(TX|_C) = c_1(TC) + c_1(\nu_{C/X})\). Pair this with \([C] \in H_2(C; \Z)\),
\[
c_1(TX) \cdot [C] = \underbrace{c_1(TC) \cdot [C]}_{\chi(C) = 2 - 2g(C)} + c_1(\nu_{C/X}) \cdot [C].
\]
The last term is \([C]^2\), argued as follows: the self-intersection number is the signed number of intersection points of \(C\) and a small (smooth) pushoff of \(C\), say \(C'\). We can identify \(C\) with the zero section of the normal bundle and \(C'\) a small generic section of \(\nu_{C/X}\).
\end{proof}
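As a consistency check (a standard computation): for a smooth degree \(d\) curve \(C \subseteq \C\P^2\) we have \([C] = d[H]\), so \([C]^2 = d^2\) and \(c_1(\C\P^2) \cdot [C] = 3d\), and the adjunction formula gives
\[
2g(C) - 2 = -3d + d^2, \qquad g(C) = \frac{(d - 1)(d - 2)}{2},
\]
the classical degree--genus formula.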
Recall for oriented \(X^4\), \(H^2(X; \R)\) has a symmetric non-degenerate cup product pairing, giving rise to the \emph{signature} \(\sigma(X) = b_+ - b_-\). We state without proof:
\begin{theorem}[Hirzebruch signature theorem]
Let \(X^4\) be an almost complex manifold. Then \(c_1(X)^2 = 2 \chi(X) + 3\sigma(X)\). In particular this is a topological invariant.
\end{theorem}
Fact: \(c_1^2 \equiv \sigma \pmod 8\).
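For example (a check left implicit in the notes): for \(X = \C\P^2\), \(c_1 = 3[H]\) so \(c_1^2 = 9\), while \(\chi = 3\) and \(\sigma = 1\); indeed \(9 = 2 \cdot 3 + 3 \cdot 1\) and \(9 \equiv 1 \pmod 8\).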
\begin{corollary}
If \(X^4\) admits an almost complex structure then \(X \# X\), \(X \# X \# X \# X\), etc.\ do not admit almost complex structures.
\end{corollary}
\begin{proof}
\(1 - b_1 + b_+\) is even if \(X\) admits an almost complex structure (combine the signature theorem with the mod \(8\) fact above). Now \(b_1(X \# X) = 2b_1(X)\) and \(b_\pm(X \# X) = 2 b_\pm(X)\), so \(1 - b_1 + b_+\) becomes odd for \(X \# X\), and similarly for the iterated sums.
\end{proof}
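Concretely: \(\C\P^2\) has \(b_1 = 0\), \(b_+ = 1\), so \(1 - b_1 + b_+ = 2\) is even, while for \(\C\P^2 \# \C\P^2\) we get \(1 - 0 + 2 = 3\), odd; so \(\C\P^2 \# \C\P^2\) admits no almost complex structure even though each summand does.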
\paragraph{Local forms for symplectic manifolds}
\begin{theorem}[Moser stability]\index{Moser stability}
If we have a smooth family of symplectic forms \(\{\omega_t\}\) on a closed symplectic manifold \(M\) such that \([\omega_t] \in H^2_{\mathrm{dR}}(M)\) is constant, then there is a diffeomorphism \(f: M \to M\) such that \(f^*\omega_1 = \omega_0\).
\end{theorem}
Thus if we deform a symplectic form smoothly the symplectic structure remains unchanged.
\begin{remark}\leavevmode
\begin{enumerate}
\item Small deformations of the symplectic form can matter: let \(M = \C\P^1 \times \C\P^1\), \(\Omega_1 = \omega \oplus \omega\), \(\Omega_2 = \omega \oplus t \omega\), \(t > 1\). They are not in the same cohomology class.
Gromov: there is a deformation retract \(\Symp(M, \Omega_1) \simeq \SO(3) \times \SO(3)\). On the other hand \(\pi_1 \Symp(M, \Omega_2)\) is infinite.
\item A blowup of \(S^2 \times T^2 \times S^2 \times S^2\) has two symplectic forms which are cohomologous but there does not exist a diffeomorphism pulling back one to the other.
\item A note on closedness: there exist exotic symplectic structures on \(\R^{2n}\) for \(n \geq 2\), i.e.\ no symplectic embedding into \((\R^{2n}, \omega_{\mathrm{std}})\).
\end{enumerate}
\end{remark}
\begin{proof}
We are going to show there is an isotopy \(\varphi_t: M \to M\), \(\varphi_0 = \id\), \(\varphi_1 = f\), with \(\varphi_t^* \omega_t = \omega_0\). It's equivalent to looking for a (time-dependent) vector field \(X_t \in \Vect(M)\) whose flow is \(\varphi_t\). Recall that the vector field is defined by
\[
\frac{d}{dt} \varphi_t = X_t \compose \varphi_t.
\]
Differentiate the relation \(\varphi_t^* \omega_t = \omega_0\),
\[
0 = \frac{d}{dt} (\varphi_t^* \omega_t)
= \varphi_t^*(\frac{d}{dt} \omega_t + \mathcal L_{X_t} \omega_t)
= \varphi_t^* (\frac{d}{dt} \omega_t + \d \iota_{X_t} \omega_t + \iota_{X_t} \d \omega_t).
\tag{\ast}
\]
Since \([\omega_t]\) is fixed, \(\frac{d}{dt}\omega_t = \d \sigma_t\) where \(\sigma_t \in \Omega^1(M)\) depends smoothly on \(t\). Thus (\(\ast\)) holds if \(\d \sigma_t + \d \iota_{X_t} \omega_t = 0\), so it is enough to solve \(\sigma_t + \iota_{X_t} \omega_t = 0\), which uniquely determines \(X_t\) as \(\omega_t\) is nondegenerate. Since \(M\) is closed, the flow of \(X_t\) exists for all \(t \in [0, 1]\).
\end{proof}
Slogan for local form theorems: check condition infinitesimally, integrate on small neighbourhood using vector field/Moser type argument.
\begin{theorem}[Darboux]\index{Darboux theorem}
If \(p \in (M^{2n}, \omega)\), there is a local chart \(f: D_\varepsilon(0) \to M\) with \(f(0) = p\) such that \(f^*\omega = \omega_0\). In other words, symplectic manifolds are locally standard.
\end{theorem}
\begin{proof}
Fix a chart \(h: D_r(0) = D \to M\) around \(p\), with \(h(0) = p\). Both \(h^*\omega\) and \(\omega_0\) are symplectic forms on \(D\). Precomposing \(h\) with a suitable linear map (linear Darboux), wlog \(h^*\omega\) and \(\omega_0\) agree at \(0\). Let \(\omega_t = (1 - t)\omega_0 + t h^*\omega\) be the linear interpolation. Being nondegenerate is an open condition, so there is an open neighbourhood \(U \subseteq D\) containing \(0\) on which \(\omega_t\) is symplectic for all \(t \in [0, 1]\). \(H^2(D) = 0\) so
\[
\frac{d}{dt} \omega_t = -\omega_0 + h^*\omega = \d \sigma
\]
for some \(\sigma \in \Omega^1(D)\); wlog \(\sigma\) vanishes at \(0\). Let \(X_t\) be the vector field defined by \(\iota_{X_t} \omega_t = -\sigma\). Note \(X_t\) vanishes at \(0\), so for some \(\varepsilon > 0\) the trajectory of every \(x \in D_\varepsilon(0) \subseteq U\) under the flow of \(X_t\) stays inside \(U\) (and is defined) up to at least time \(1\). Thus on \(D_\varepsilon(0)\) we can integrate \(X_t\) to \(\{\psi_t\}_{t \in [0, 1]}\) such that \(\psi_1^*h^* \omega = \omega_0\), and take \(f = h \compose \psi_1\).
\end{proof}
\begin{remark}
This shows that a closed nondegenerate \(2\)-form is the same as an atlas of charts with transition functions whose derivatives are in \(\Sp_{2n}(\R)\).
\end{remark}
\begin{theorem}[Poincaré lemma]\index{Poincaré lemma}
Suppose \(M\) is a \(C^\infty\) manifold, \(Q \subseteq M\) a closed smooth submanifold, and \(\alpha_1, \alpha_2 \in \Omega^k(M)\) closed forms agreeing on \(TM|_Q\). Then there exists \(\beta \in \Omega^{k - 1}(U_Q)\), where \(U_Q\) is an open neighbourhood of \(Q\), such that \(\d \beta = \alpha_1 - \alpha_2\) and \(\beta = 0\) on \(TM|_Q\).
\end{theorem}
\begin{proof}
Example sheet 2. Use partition of unity.
\end{proof}
\begin{note}
If \(Q \subseteq (M, \omega)\) is a symplectic submanifold, then
\[
\nu_{Q/M} \cong (TQ)^\omega \subseteq TM|_Q
\]
as a subbundle. (On each fibre, \(V \cong W \oplus W^\omega\) for a symplectic subspace \(W \leq V\).) Moreover \(\nu_{Q/M} = (TQ)^\omega\) is a symplectic vector bundle.
\end{note}
\begin{theorem}
If \(Q \subseteq (M, \omega)\) is a compact symplectic submanifold, the symplectic structure of \(M\) near \(Q\) is determined by \(\omega|_Q\) and \(\nu_{Q/M}\) as symplectic vector bundle, i.e.\ if two submanifolds have the same data then they have symplectomorphic tubular neighbourhoods.
\end{theorem}
\begin{proof}
A choice of metric gives \(\exp: \nu_{Q/M} \to U_Q\), defined for \(|t| < \varepsilon\). If \(Q_i \subseteq M_i\) are symplectic and \(\varphi: Q_1 \to Q_2\) is a symplectomorphism, then it lifts to \(\Phi: \nu_{Q_1/M_1} \to \nu_{Q_2/M_2}\), an isomorphism of symplectic vector bundles. Using the metrics, we get \(\tilde \varphi = \exp_{M_2} \compose \Phi \compose \exp_{M_1}^{-1}: U_{Q_1} \to U_{Q_2}\). Now \(\omega_1\) and \(\tilde \varphi^* \omega_2\) are symplectic forms on \(U_{Q_1}\) which agree on \(TM_1|_{Q_1}\), so by the Poincaré lemma there exists \(\sigma \in \Omega^1(U_{Q_1}')\) which vanishes on \(TM_1|_{Q_1}\) and such that \(\d \sigma = \omega_1 - \tilde \varphi^* \omega_2\) on \(U_{Q_1}'\). Now apply Moser's method to the path \(\omega_t = t\omega_1 + (1 - t)\tilde \varphi^* \omega_2\). As in the proof of Darboux, we can integrate the vector field to time \(1\) on a sufficiently small neighbourhood of \(Q_1\).
\end{proof}
\begin{corollary}
A neighbourhood of a closed symplectic surface (i.e.\ real 2 dimensional) \(C \subseteq (M^4, \omega)\) is determined by
\begin{itemize}
\item topological type of \(C\) (i.e.\ genus), \(\int_C \omega\) (these determine \(C\) as symplectic manifold).
\item \([C]^2\) (determines \(\nu_{C/M}\)).
\end{itemize}
\end{corollary}
\begin{definition}\index{Lagrangian submanifold}
If \((M^{2n}, \omega)\) is symplectic, \(L \subseteq M\) is \emph{Lagrangian} if \(\dim_\R L = n\) and \(\omega|_L = 0\), i.e.\ \(i^* \omega = 0\) where \(i: L \embed M\).
\end{definition}
\begin{eg}\leavevmode
\begin{enumerate}
\item \(S^1 \subseteq \Sigma\) a surface is Lagrangian as \(\Omega^2(S^1) = 0\).
\item \((S^1)^n \subseteq (\C^n, \omega_0)\), the Clifford torus\index{Clifford torus}. Note that the radius of \(S^1\)'s can be arbitrarily small so by Darboux, these exist in any symplectic manifold.
\end{enumerate}
\end{eg}
\paragraph{Cotangent bundle}
\(\R^{2n} = T^*\R^n\). Let \(q_j\) be coordinates on \(\R^n\) and \(p_j\) the dual fibre coordinates, so that a covector at \(q\) is \(\sum_j p_j \d q_j\). Together they give coordinates on \(T^*\R^n = \R^{2n}\) and a 1-form
\[
\lambda = \sum p_j \d q_j \in \Omega^1(T^*\R^n)
\]
and \(\d \lambda = \omega_0\). A diffeomorphism \(q \mapsto q'\) of \(\R^n\) induces a diffeomorphism of \(T^*\R^n\) via pullback which preserves \(\lambda\): \(\sum p_j \d q_j \mapsto \sum p_j' \d q_j'\), where \(p_j'\) are dual to \(q_j'\). (A general diffeomorphism of \(T^*\R^n\) does not preserve \(\lambda\).) For a manifold, patch the \(\lambda\)'s together to get \(\lambda_{\mathrm{can}} \in \Omega^1(T^*X)\) with \(\d \lambda_{\mathrm{can}}\) symplectic.
Coordinate-free description: for \(v \in T_{(q, p)}T^*X\),
\[
\lambda_{\mathrm{can}, (q, p)}(v) = \langle p, D \pi(v) \rangle
\]
where \(\pi: T^*X \to X\).
\begin{ex}
If \(\sigma: M \to T^*M\) is a 1-form then
\[
\sigma^* \lambda_{\mathrm{can}} = \sigma
\]
where on LHS we interpret \(\sigma\) as a morphism and on RHS we treat it as an element of \(\Omega^1(M)\).
\end{ex}
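A coordinate verification (routine, filling in the exercise): write \(\sigma = \sum_j \sigma_j(q)\, \d q_j\), so that as a map \(\sigma(q) = (q, \sigma_1(q), \dots, \sigma_n(q))\), i.e.\ \(p_j \compose \sigma = \sigma_j\). Then
\[
\sigma^* \lambda_{\mathrm{can}} = \sigma^* \Big( \sum_j p_j \d q_j \Big) = \sum_j \sigma_j(q)\, \d q_j = \sigma.
\]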
Note \(M \subseteq (T^*M, \d \lambda_{\mathrm{can}})\), the zero section, is Lagrangian. It turns out to be the prototype for Lagrangian submanifolds.
\begin{theorem}[Weinstein tubular neighbourhood theorem]\index{Weinstein tubular neighbourhood theorem}
If \(L \subseteq (M, \omega)\) is compact Lagrangian, then there is a tubular neighbourhood \(U(L)\) of \(L\) in \(M\) which is symplectomorphic to a neighbourhood \(U'(L) \subseteq T^*L\) of the zero section.
\end{theorem}
\begin{proof}
Choose an \(\omega\)-compatible almost complex structure \(J\) and let \(g = \omega(\cdot, J \cdot)\). Note the \(g\)-orthogonal complement to \(T_qL \subseteq T_qM\) is \(JT_qL\). Let \(\Phi: T^*L \to TL\) be induced by \(g\): \(g(\Phi(f), v) = f(v)\). Define
\begin{align*}
\varphi: T^*L &\to M \\
(q, f) &\mapsto \exp_q(J \Phi_q(f))
\end{align*}
Check
\[
D \varphi_{(q, 0)}(v, f) = v + J \Phi_q(f)
\]
for \((v, f) \in T_qL \oplus (T_qL)^* \cong T_{(q, 0)}(T^*L)\).
\begin{align*}
(\varphi^* \omega_M)_{(q, 0)}((v, f), (v', f'))
&= \omega_M|_q(v + J\Phi f, v' + J\Phi f') \\
&= \omega_M|_q(v, J \Phi f') - \omega_M(v', J\Phi f) \\
&= g|_q(v, \Phi f') - g|_q(v', \Phi f) \\
&= f'(v) - f(v') \\
&= (\d \lambda_{\mathrm{can}})_{(q, 0)} ((v, f), (v', f'))
\end{align*}
so \(\varphi: T^*L \to M\) is such that \(\varphi^* \omega_M\) and \(\d \lambda_{\mathrm{can}}\) agree on the zero section, i.e.\ \(T(T^*L)|_L\). The Poincaré lemma now gives \(\sigma \in \Omega^1(T^*L)\) for which \(\d \lambda_{\mathrm{can}} - \varphi^* \omega_M = \d \sigma\) on \(U(L)\) an open neighbourhood of the \(0\)-section, \(\sigma = 0\) on \(0\)-section. Apply Moser's trick to the family of forms \(\omega_t = (1 - t) \varphi^* \omega_M + t \d \lambda_{\mathrm{can}}\).
\end{proof}
\begin{corollary}
A neighbourhood of a compact Lagrangian \(L\), as a symplectic manifold, depends only on the smooth topology of \(L\).
\end{corollary}
\begin{ex}
Let \(L \subseteq M^4\) be Lagrangian.
\begin{enumerate}
\item \(\chi(L) = - [L]^2\) (because \(T^*L \cong \nu_{L/M}\)).
\item If \(L \subseteq M\) is homologically trivial, i.e.\ \([L] = 0\), and \(L\) is connected closed then \(L\) is a torus or a Klein bottle.
\end{enumerate}
\end{ex}
\begin{proposition}
If \(f: M \to M\) is a diffeomorphism then \(f^*\omega = \omega\) if and only if \(\Gamma_f \subseteq (M \times M, \omega \oplus -\omega)\) is Lagrangian.
\end{proposition}
\begin{proof}
\(f^*\omega = \omega\) if and only if \(f^*\omega - \omega = 0\) if and only if \(i^*(\omega \oplus -\omega) = 0\).
\end{proof}
\begin{eg}
The diagonal \(\Gamma_{\id} \subseteq (M \times M, \omega \oplus -\omega)\) is Lagrangian (often called the antidiagonal, because of the sign on the second factor).
\end{eg}
\begin{proposition}
If \(f: M \to M\) is antisymplectic, i.e.\ \(f^*\omega = - \omega\) then \(\mathrm{Fix}(f)\), the fixed points of \(f\), is Lagrangian where smooth.
\end{proposition}
\begin{eg}
Complex conjugation on \(\C^n\) or \(\C\P^n\) is antisymplectic. The fixed points are \(\R^n\) and \(\R\P^n\) respectively.
More generally if a quasi-affine or projective smooth variety \(X(\C)\) (smooth submanifold of \(\C^n\) or \(\C\P^n\) cut out by polynomial equations) is defined over \(\R\), then \(X(\R) \subseteq X(\C)\) is a Lagrangian submanifold where smooth.
\end{eg}
\begin{corollary}
A neighbourhood of \(\id \in \Symp(M)\) where \(M\) compact is homeomorphic to a neighbourhood of \(0\) in the space of closed 1-forms on \(M\). In particular \(\Symp(M)\) is locally path connected.
\end{corollary}
\begin{proof}
If \(f\) is close to \(\id\) then \(\Gamma_f \subseteq M \times M\) is close to the Lagrangian \(\Gamma_{\id}\). By Weinstein, \(\Gamma_f \subseteq U\) for \(U \subseteq T^*M\) a neighbourhood of the zero section. Moreover the proof of Weinstein shows \(\Gamma_f\) gives a section of \(T^*M\) (near the \(0\)-section). This means that \(\Gamma_f \subseteq T^*M\) can be thought of as the graph of a 1-form, say \(\sigma\).
\[
\sigma^* \d \lambda_{\mathrm{can}} = \d \sigma^* \lambda_{\mathrm{can}} = \d \sigma
\]
so \(\Gamma_f\) is Lagrangian if and only if \(\sigma\) is closed.
\end{proof}
\begin{corollary}
Suppose \((M, \omega)\) is compact and \(H^1_{\mathrm{dR}}(M) = 0\). Then any \(f \in \Symp(M)\) which is \(C^1\) close to \(\id\) has at least \(2\) fixed points.
\end{corollary}
\begin{proof}
If \(f\) is \(C^1\) close to \(\id\) then \(\Gamma_f \subseteq T^*M\) can be written as the graph of a closed \(1\)-form \(\sigma\) (\(C^1\) is enough. See proof of Weinstein). But \(\sigma = \d h\) as \(H^1_{\mathrm{dR}}(M) = 0\). \(p \in \mathrm{Fix}(f)\) if and only if \(\d h_p = 0\), if and only if \(p \in \mathrm{Crit}(h)\). \(M\) is compact so \(h\) has at least 2 critical points, a minimum and a maximum.
\end{proof}
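An example showing the hypothesis \(H^1_{\mathrm{dR}}(M) = 0\) is necessary (standard, not from the notes): on \(T^{2n} = \R^{2n}/\Z^{2n}\) with \(\omega_0\), a small translation is a symplectomorphism \(C^1\)-close to \(\id\) with no fixed points; the corresponding \(\sigma\) is a closed but non-exact \(1\)-form.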
\begin{proposition}
Suppose \((M, \omega)\) is connected. Then \(\Symp(M, \omega)\) acts transitively on points.
\end{proposition}
\begin{proof}
\(M\) is path-connected so it is enough to work in a single Darboux chart. Let \(x \in \R^{2n}\). Translation from \(0\) to \(x\) is symplectic and is the time \(1\) flow of the constant vector field \(X \equiv x\). Now we need to cut off outside a chart, which we do by passing to forms. Let \(\sigma = \iota_X \omega_0\). Then
\[
\d \sigma = \d \iota_X \omega_0 = \mathcal L_X \omega_0 = 0
\]
so \(\sigma\) is exact as \(H^1(\R^{2n}) = 0\). Say \(\sigma = \d f\) for \(f \in C^\infty(\R^{2n})\). Now pick a suitable cut-off function \(\psi\) and replace \(f\) with \(\psi f\), \(X\) with \(Y\) such that \(\iota_Y \omega_0 = \d (\psi f)\).
\end{proof}
Strengthening of the Darboux chart:
\begin{theorem}[Gromov-Lees]
There is a Lagrangian immersion of \(L\) into \(\C^n \cong \R^{2n}\) if and only if \(TL \otimes_\R \C \cong L \times \C^n\) as complex vector bundles.
\end{theorem}
\begin{proof}
We prove the `only if' direction; the converse is beyond the scope of the course, so omitted. Note \(V \leq (\C^n, \omega_0)\) is a Lagrangian subspace if and only if \(V \perp i V\) (with respect to the standard metric) and \(\dim_\R V = n\) (recall this is also the observation we used for Weinstein). An immersion \(\iota: L \to \C^n\) is Lagrangian if and only if \(\Im (D \iota_x) \perp i \Im(D \iota_x)\) for all \(x \in L\). This gives a map
\begin{align*}
T_xL \otimes \C &\to \C^n \\
v \otimes (a + ib) &\mapsto a D \iota_x(v) + i b D \iota_x (v)
\end{align*}
on each fibre and varying over \(x\) induces the trivialisation.
The other direction is hard. Uses \(h\)-principle.
\end{proof}
\begin{proposition}
If \(W \subseteq (M, \omega)\) is an isotropic submanifold, i.e.\ \(\omega|_W = 0\), then a neighbourhood of \(W\) is determined symplectically by the smooth topology of \(W\) and the bundle \((TW)^\omega/TW\) (which is trivial in the Lagrangian case).
\end{proposition}
\begin{proof}
Example sheet 2.
\end{proof}
\begin{lemma}
Suppose \(W \immersion X\) is an isotropic immersion and \(\dim W < \frac{1}{2} \dim X\). Then a generic isotropic perturbation of \(W\) is embedded.
\end{lemma}
\begin{proof}
General position argument: flow one branch of \(W\) by a compactly supported vector field near individual self-intersection points to remove them locally.
Slogan: strengthen a smooth perturbation to an isotropic one.
\end{proof}
\begin{corollary}
If \(L\) compact has a Lagrangian immersion in \(\C^n\) then \(L \times S^1\) embeds in \(\C^{n + 1}\).
\end{corollary}
Note by Gromov--Lees this is in fact an if and only if.
\begin{proof}
We have an isotropic immersion \(L \immersion \C^n \times \C = \C^{n + 1}\). This can be perturbed to an isotropic embedding \(L \embed \C^{n + 1}\). Note \(TL^\omega/TL\) is trivial and hence we get a symplectic embedding of an open neighbourhood of \(L \subseteq T^*L \times \C\). This contains a Lagrangian \(L \times S^1\), taking the radius of \(S^1\) sufficiently small.
\end{proof}
Fact: every compact orientable \(3\)-manifold \(Y\) is parallelisable (the proof uses characteristic classes).
\begin{corollary}
If \(Y^3\) is compact orientable then we have a Lagrangian immersion \(Y \immersion \C^3\) and a Lagrangian embedding \(Y \times S^1 \embed \C^4\).
\end{corollary}
\begin{remark}
Any compact orientable three manifold can be Lagrangian immersed in \(\C^3\). On the other hand if \(L^4\) compact is Lagrangian immersible into \(\C^4\) then by Gromov--Lees \(\chi(L) = 0\), so \(b_1 > 0\). Thus \(\pi_1(L)\) is infinite.
\end{remark}
How do we remove double points?
\begin{proposition}
Suppose \(M \supseteq L_1, L_2\) contains 2 Lagrangian submanifolds which meet transversally at a point \(p\). Then there is a Darboux chart \(\varphi: B(\varepsilon) \to M\) such that \(\varphi(0) = p, \varphi^{-1}(L_1) = \R^n \cap B(\varepsilon), \varphi^{-1}(L_2) = i \R^n \cap B(\varepsilon)\).
\end{proposition}
\begin{proof}
Exercise.
\end{proof}
\paragraph{Polterovich surgery}
Polterovich surgery\index{Polterovich surgery} replaces \(L_1 \cup L_2\) with a Lagrangian \(L_\gamma\); the construction is local on a neighbourhood of \(p\).
Let \(\gamma: \R \to \C\) be a smooth embedded curve coinciding with \((\R_+ \times \{0\}) \cup (\{0\} \times \R_-)\) (the positive real and negative imaginary half-axes in \(\C \cong \R^2\)) away from the origin. Set
\[
L_\gamma = \{(z_1, \dots, z_n): z_j = \gamma(t) a_j, \ t \in \R, \ (a_j) \in S^{n - 1}\}
\]
where the \(z_j\) are the complex coordinates and \((a_j) \in S^{n - 1} \subseteq \R^n \subseteq \C^n\). This is a Lagrangian handle.
\begin{eg}
For \(n = 1\), \(L_\gamma = \{z = \gamma(t) a: a\in S^0 = \{\pm 1\}\}\). Away from a neighbourhood of \(0\), \(L_\gamma\) agrees with \(L_1 \cup L_2\).
For \(n = 2\), write \(\gamma = (\gamma_1(t), \gamma_2(t)) \in \R^2 \cong \C\) and parameterise \(S^1\) by \((\cos \theta, \sin \theta)\). Then
\[
L_\gamma = \{(\gamma_1 \cos \theta, \gamma_2 \cos \theta, \gamma_1 \sin \theta, \gamma_2 \sin \theta)\} \subseteq \R^4 \cong \C^2.
\]
Where \(\gamma_1 = 0\) we get \(i\R^2\) minus a neighbourhood of \(0\); where \(\gamma_2 = 0\) we get \(\R^2\) minus a neighbourhood of \(0\).
\end{eg}
This depends on the ordering of \(L_1\) and \(L_2\) (corresponding to the canonical order \(\R^n, i\R^n\)), but doesn't depend on the choice of \(\gamma\), as long as \(\gamma\) agrees with the half-axes outside a ball and \(\gamma \cap (-\gamma) = \emptyset\).
\begin{corollary}
If \(L_1, L_2\) are Lagrangian submanifolds which meet transversally at a point, there is an embedded Lagrangian submanifold diffeomorphic to the connected sum \(L_1 \# L_2\).
\end{corollary}
General case: can perform surgery separately at any finite number of transverse intersections of Lagrangians. Topologically, each surgery replaces \(B^n \amalg B^n\) with \(\R \times S^{n - 1}\).
\begin{eg}
If \(L^n \immersion M^{2n}\) is Lagrangian with a single double point, we get a Lagrangian embedding \(L \# (S^1 \times S^{n - 1}) \embed M\).
\end{eg}
\begin{corollary}
If \(Y\) is a compact orientable 3-manifold then for some \(k \geq 0\) there is a Lagrangian embedding of \(Y \# k(S^1 \times S^2)\) into \(\C^3\).
\end{corollary}
This is highly restrictive:
\begin{definition}[prime manifold]\index{prime manifold}
A closed manifold \(M^n\) is \emph{prime} if it can't be written as \(M = M_1 \# M_2\) unless \(M_1\) or \(M_2\) is \(S^n\).
\end{definition}
\begin{theorem}[Fukaya]
A compact prime orientable 3-manifold has a Lagrangian embedding in \(\C^3\) if and only if it is diffeomorphic to \(S^1 \times \Sigma_g\).
\end{theorem}
Fix a primitive \(\theta\) of \(\omega_0\) on \(\C^n\), i.e.\ \(\d \theta = \omega_0\). If \(L\) is Lagrangian then \(\d(\theta|_L) = \omega_0|_L = 0\), so \([\theta|_L] \in H^1(L)\). (By Stokes' theorem this measures the symplectic area of discs with boundary on \(L\).) Define \(L\) to be \emph{exact} if \([\theta|_L] = 0\).
\begin{theorem}[Gromov]
There is no compact exact Lagrangian in \(\C^n\).
\end{theorem}
\subsection{Symplectic submanifolds}
\begin{theorem}[Gromov]
Fix a compact symplectic manifold \((V, \omega_V)\) and a symplectic manifold \((X, \omega_X)\) with \(\dim V \leq \dim X - 4\). Suppose we are given a smooth embedding \(f: V \embed X\) such that
\begin{itemize}
\item \(f^*[\omega_X] = [\omega_V] \in H^2(V)\),
\item \(Df\) is homotopic, through bundle maps \(TV \to TX\), to a fibrewise symplectic embedding.
\end{itemize}
Then \(f\) is smoothly isotopic to an embedding \(\tilde f: V \to X\) such that \(\tilde f^*\omega_X = \omega_V\).
\end{theorem}
\begin{proof}
Omitted. This is an instance of h-principle.
\end{proof}
This fails in general in codimension 2, but:
\begin{theorem}[Donaldson]
If \([\omega] \in H^2(X; \Z)\) then for \(k \gg 0\) there are symplectic submanifolds representing \(PD (k [\omega])\).
\end{theorem}
Idea: \([\omega] \in H^2(X; \Z)\) represents a complex line bundle \(\pi: L \to X\) with \(c_1(L) = [\omega]\). If \(s: X \to L\) is a section of \(\pi\), intersecting the \(0\)-section transversally, then
\[
[s^{-1}(0)] = PD(c_1(L)) = PD[\omega].
\]
(By Sard's theorem, \(s^{-1}(0)\) is a \((2n - 2)\) dimensional manifold). If we use instead \(L^{\otimes k}\) then \(s^{-1}(0) = PD (k[\omega])\). Donaldson's idea is to construct, for sufficiently large \(k\), sections which are ``approximately holomorphic'' (they satisfy Cauchy-Riemann equations up to an error term). The error term is small enough that \(s^{-1}(0)\) is symplectic.
c.f.\ ample/very ample line bundles in algebraic geometry.
\begin{align*}
\sigma: X &\to \P(H^0(X, L^{\otimes k})^*) = \P^N \\
x &\mapsto [\varphi_x: s \mapsto s(x)]
\end{align*}
\(\sigma\) is injective for \(k \gg 0\). In that case, \(\P^{N - 1} \cap \sigma(X)\) represents \(PD(k c_1(L))\) for a generic hyperplane \(\P^{N - 1}\).
\subsection{Blow-ups}
\begin{definition}[blow-up]\index{blow-up}
The blow-up of \(0\) in \(\C^n\) is
\[
Z = \{(z, \ell) \in \C^n \times \P^{n - 1}: z \in \ell\}
\]
together with two projections \(\pi: Z \to \C^n, p: Z \to \P^{n - 1}\).
\end{definition}
\(\pi\) is one-to-one away from \(0\), and over \(0\) the fibre is \(\P^{n - 1}\). \(p: Z \to \P^{n - 1}\) is the tautological bundle, with \(c_1 = - PD[\P^{n - 2}]\).
\begin{note}
Our choice of Kähler form on \(\P^n\) is normalised so that \(\int_{\P^1} \omega = \pi\).
\end{note}
Define
\[
\omega_\lambda = \pi^* \omega_{\C^n} + \lambda^2 p^* \omega_{\P^{n - 1}}.
\]
\begin{lemma}
For \(\lambda > 0\), \(\omega_\lambda\) is Kähler and for \(\delta > 0\), let
\[
Z(\delta) = \{(z, \ell) \in Z: |z| \leq \delta\}.
\]
Then \((Z(\delta) \setminus Z(0), \omega_\lambda)\) is symplectomorphic to \((B(\sqrt{\lambda^2 + \delta^2}) \setminus B(\lambda), \omega_0)\).
\end{lemma}
\begin{proof}
Recall our definition of the Fubini--Study form: for the natural projection \(\Phi: \C^n \setminus \{0\} \to \P^{n - 1}\), we have
\[
\Phi^*\omega_{\P^{n - 1}} = \frac{i}{2} \p \conj \p \log |z|^2.
\]
Let
\[
\mu_\lambda = \frac{i}{2} \p \conj \p (|z|^2 + \lambda^2 \log |z|^2).
\]
On \(Z(\delta) \setminus Z(0)\), \(\pi^* \mu_\lambda = \omega_\lambda\). Define a bijection
\begin{align*}
F: \C^n\setminus \{0\} &\to \C^n\setminus B(\lambda) \\
z &\mapsto \frac{z}{|z|} \sqrt{|z|^2 + \lambda^2}
\end{align*}
Note \(F^*\omega_0 = \mu_\lambda\). Using the fact that \(\omega_0\) is Kähler, it is easy to deduce the same for \(\mu_\lambda\), hence \(\omega_\lambda\) is Kähler.
\end{proof}
\begin{definition}[blowup]\index{blowup}
The \emph{weight \(\lambda\) blowup} of \((M, \omega)\) at \(p\) is defined by choosing a symplectic embedding \(\varphi: B(\sqrt{\lambda^2 + \delta^2}) \embed M\), \(\varphi(0) = p\), and setting
\[
\widetilde M = (M \setminus \im \varphi) \cup Z(\delta)
\]
with \(\tilde \omega_M\) given by \(\omega\) on \(M \setminus \im \varphi\) and by \(\omega_\lambda\) on \(Z(\delta)\).
\end{definition}
\begin{note}\leavevmode
\begin{enumerate}
\item \(\Vol(\tilde M) = \Vol(M, \omega) - \Vol(B(\lambda))\) so the volume decreases under blowup.
\item \([\tilde \omega_M] = \pi^*[\omega] - \pi \lambda^2 PD(E) \in H^2_{\mathrm{dR}}(\widetilde M)\) where \(E = Z(0) \cong \P^{n - 1}\) is the exceptional divisor.
\end{enumerate}
\end{note}
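As a consistency check of the second point (using the normalisation \(\int_{\P^1} \omega_{\P^{n - 1}} = \pi\) and the standard fact that \(\nu_{E/\widetilde M} = \mathcal L_{\mathrm{taut}}\)): for a line \(\ell \subseteq E \cong \P^{n - 1}\),
\[
\int_\ell \tilde \omega_M = \lambda^2 \int_\ell p^* \omega_{\P^{n - 1}} = \pi \lambda^2,
\]
while \(\langle \pi^*[\omega] - \pi \lambda^2 PD(E), \ell \rangle = 0 - \pi \lambda^2 (E \cdot \ell) = \pi \lambda^2\), since \(\pi\) collapses \(E\) and \(E \cdot \ell = \deg(\mathcal L_{\mathrm{taut}}|_\ell) = -1\).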
\subsection{Fibre sums}
\begin{lemma}
If \((Q^{2n - 2}, \omega_Q) \embed (M^{2n}, \omega_M)\) is a closed symplectic submanifold of \(M\) then \((Q^{2n - 2} \times B(\varepsilon), \omega_Q \oplus \omega_0) \embed (M^{2n}, \omega_M)\) symplectically if and only if \(\nu_{Q/M}\) is symplectically trivial (i.e.\ \(c_1(\nu_{Q/M}) = 0\)).
\end{lemma}
Already proved!
Consider
\begin{align*}
\psi: B(\varepsilon) \setminus \{0\} &\to B(\varepsilon) \setminus \{0\} \\
(r, \theta) &\mapsto (\sqrt{\varepsilon^2 - r^2}, - \theta)
\end{align*}
(turn inside out and then flip orientation). Check
\[
\psi^*\omega_0 = \psi^*(r \d r \w \d \theta) = r \d r \w \d \theta.
\]
Thus \(\psi\) is area preserving and ``turns the annulus inside out''.
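In detail (a routine check):
\[
\psi^*(r \d r \w \d \theta) = \sqrt{\varepsilon^2 - r^2}\; \d\big(\sqrt{\varepsilon^2 - r^2}\big) \w \d(-\theta) = \sqrt{\varepsilon^2 - r^2} \cdot \frac{-r}{\sqrt{\varepsilon^2 - r^2}}\, \d r \w (-\d \theta) = r \d r \w \d \theta.
\]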
Suppose \(Q^{2n - 2} \embed M_i^{2n}\) symplectically for \(i = 1, 2\), where \(Q\) is a closed symplectic submanifold such that each \(\nu_{Q/M_i}\) is symplectically trivial. We can define:
\begin{definition}[fibre sum]\index{fibre sum}
The \emph{fibre sum} of \(M_1\) and \(M_2\) along \(Q\) is
\[
M_1 \#_Q M_2 = (M_1 \setminus Q) \cup_{\id \times \psi} (M_2 \setminus Q)
\]
where \(\id \times \psi: Q \times B^*(\varepsilon) \subseteq M_1 \setminus Q \to Q \times B^*(\varepsilon) \subseteq M_2 \setminus Q\).
\end{definition}
\begin{note}
As \(\nu_{Q/M_i}\) is trivial, \(Q \embed M_1 \#_Q M_2\) symplectically.
\end{note}
More generally, suppose \(Q \embed M_i\) symplectically and \(c_1(\nu_{Q/M_1}) = -c_1(\nu_{Q/M_2})\), then we can define \(M_1 \#_Q M_2\) completely analogously.
Note that on each local chart of \(Q\) we use \(\id \times \psi\). These patch together to give a symplectomorphism
\[
\begin{tikzcd}
\nu_{Q/M_1} \setminus \{0\} \ar[d] \ar[r] & \mathcal L \setminus \{0\} \ar[d] \\
Q \ar[r] & Q
\end{tikzcd}
\]
for some complex line bundle to be determined. This flips the signs of intersections of section and zero section, so \(c_1(\mathcal L) = - c_1(\nu_{Q/M_1})\).
\begin{remark}
There is a hidden choice: the \emph{framing} of \(Q\) needn't be unique. For example if \(\nu_{Q/M}\) is trivial, the choices of symplectic trivialisations of \(\nu_{Q/M}\) are in one-to-one correspondence with \(H^1(Q; \Z)\) (\(\nu_{Q/M}\) is trivial and choices of trivialisation up to homotopy are \([Q, U(1)] = [Q, S^1] = H^1(Q; \Z)\)). More generally, maps \(\nu_{Q/M} \embed U(Q)\) can differ (up to homotopy) by an element of \(H^1(Q; \Z)\).
\end{remark}
\begin{eg}\leavevmode
\begin{enumerate}
\item Suppose \(X^4 \supseteq E\) where \(E\) is a symplectic sphere of self-intersection number \(-1\). Pick \(Y^4 \supseteq S^2\) a symplectic sphere of self-intersection number \(+1\), for example \(H \subseteq \P^2\) a hyperplane. Then we can form \(X^4 \#_{E} \C\P^2\). This is called the \emph{blowdown} of \(X\) along \(E\).
Note if \(\widetilde W\) is the blowup of \(W^4\) at \(p\) then \(\widetilde W \#_E \C\P^2 \cong W\).
\item Recall if \(C \subseteq \P^2\) smooth degree \(d\) curve then \([C] = d [\P^1]\), \([C] \cdot [C] = d^2, g(C) = \frac{(d - 1)(d - 2)}{2}\). Suppose \(X^4 \supseteq Q\) symplectic sphere of self-intersection \(-4\), \(\C\P^2 \supseteq C\) curve of degree \(2\). Then can form \(X^4 \#_{Q/C} \C\P^2\).
Claim \(\C\P^2 \setminus C \cong D_\varepsilon^*(\R\P^2)\), a (truncated) cotangent bundle of a Lagrangian \(\R\P^2\) (because given \(C \subseteq \C\P^2\) of genus \(0\), there exists a Lagrangian \(\R\P^2\) which doesn't intersect it; check there is ``no other topology'').
We have replaced a symplectic sphere with a Lagrangian \(\R\P^2\). This is a purely symplectic operation --- we can't usually do this in the algebraic geometry world.
\end{enumerate}
\end{eg}
\subsection{Lefschetz principles}
\begin{definition}[Lefschetz pencil]\index{Lefschetz pencil}
A \emph{Lefschetz pencil} on a closed oriented 4-manifold \(X\) is a smooth map \(f: X \setminus \{b_1, \dots, b_k\} \to \P^1\) such that
\begin{itemize}
\item \(Df\) is onto except at a finite collection of points \(\{p_1, \dots, p_m\} \subseteq X \setminus \{b_1, \dots, b_k\}\),
\item Near the \(b_i\) (respectively \(p_j\)) there are centred local complex coordinates \(z, w \in B_\varepsilon(0) \subseteq \C\) such that \(f(z, w) = \frac{z}{w}\) (respectively \(f(z, w) = z^2 + w^2\)).
\end{itemize}
The \(b_i\)'s are called \emph{base points}, \(p_i\)'s \emph{critical points}.
\end{definition}
To get such a form, we use the complex Morse lemma
\begin{lemma}
If \(U \subseteq \C^n\) is an open subset and \(f: U \to \C\) is holomorphic, mapping \(0\) to \(0\), with a non-degenerate critical point at \(0\), then there exist local coordinates \(z_1, \dots, z_n\) such that \(f(z_1, \dots, z_n) = \sum_{i = 1}^n z_i^2\).
\end{lemma}
Compare with the real Morse lemma which says that we can find coordinates such that \(f(x_1, \dots, x_n) = \sum_{i = 1}^p x_i^2 - \sum_{i = p + 1}^n x_i^2\). In the complex case we can run the same proof and absorb \(-1\) into the coordinates.
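For instance (a one-variable check of ``absorbing \(-1\)''): over \(\R\), \(z^2 - w^2\) cannot be brought to \(z^2 + w^2\), but over \(\C\) the substitution \(w' = iw\) gives \(z^2 - w^2 = z^2 + (w')^2\).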
Suppose \(t \in \P^1\) is not a critical value of \(f\). Then \(\overline{f^{-1}(t)} \subseteq X\) (adding back the \(b_i\)), the fibre over \(t\), is smooth. Around each critical point \(p_j\) of the pencil, the equation for a fibre looks like \(\{z^2 + w^2 = 0\}\), i.e.\ two complex lines intersecting transversally. \(p_j\) is also called an \emph{ordinary double point} or \emph{node}.
Note all smooth fibres are closed, orientable and have fixed genus (pick a path avoiding the critical points, then the fibre varies smoothly).
Suppose \(X^4\) is compact Kähler and \(L \to X\) a very ample holomorphic line bundle. For the purpose of this course, this means there is a holomorphic embedding \(i: X \embed \P^N\) such that \(i^*\mathcal O(1) = L\), where \(\mathcal O(1)\) is the line bundle with \(c_1(\mathcal O(1)) = PD[\P^{N - 1}]\). Key fact: the intersection of \(X\) with a generic hyperplane \(\P^{N - 1}\) is \(s^{-1}(0)\) for some holomorphic section \(s\) of \(L\).
Let \(s_1, s_2\) be generic holomorphic sections. Then we can define a rational map \(\pi: X \to \P^1\), \(p \mapsto [s_1(p): s_2(p)]\). \(\{s_1 = s_2 = 0\}\) is a set of finitely many points; these are the base points of the pencil. Among critical points of a holomorphic function, non-degenerate ones are generic, so by the complex Morse lemma we have the correct local forms near critical points.
What's the local model near \(b \in \{s_1 = s_2 = 0\}\)? \((s_1, s_2)\) give local holomorphic coordinates \((z, w)\) near \(b\), in which the map is \((z, w) \mapsto [z : w]\), i.e.\ \(\frac{z}{w}\) in an affine chart.
\begin{eg}[pencil of cubics in \(\P^2\)]
Define
\begin{align*}
\P^2 &\to \P^1 \\
[x: y: z] &\mapsto [x^3 + y^3 + z^3: x^3 + y^3 + z^3 + xyz] = [f: g]
\end{align*}
\(f\) and \(g\) are sections of \(\mathcal O(3)\). The fibre
\[
\pi^{-1}([\lambda: \mu]) = \{\mu f - \lambda g = 0\} \subseteq \P^2
\]
is given by a cubic equation (so a torus if smooth). The base points are given by \(\{f = g = 0\}\), i.e.\ \(x^3 + y^3 + z^3 = 0, xyz = 0\), which is a set of 9 points (this can also be seen by \([C] \cdot [C'] = 9\) for \(C, C'\) cubic). Explicitly, they are given by \([0: 1: \xi^i]\) and their cyclic permutations, where \(\xi^3 = -1\).
When is the fibre \(\{\mu f - \lambda g = 0\} \subseteq \P^2\) singular? Either \(\mu = \lambda\), which gives \(xyz = 0\), the three coordinate lines, with 3 critical points; or \(\mu \ne \lambda\), in which case the equation can be rephrased as \(\{x^3 + y^3 + z^3 + axyz = 0\}\). The critical points are given by the common zeros of all three partial derivatives. We get \(xyz = 0\) (already seen) or \(a^3 + 27 = 0\). Thus we get 3 more critical fibres, \(a = 3 \xi^i\) with \(\xi^3 = -1\). For instance we can factorise \(x^3 + y^3 + z^3 - 3xyz\), which is again three lines.
In either case, there are three base points on each line.
(pic)
Near a base point, the map is \((z, w) \mapsto \frac{z}{w}\) (recording the direction of the line through the base point). We get an honest map to \(\P^1\) by blowing up each of the \(9\) base points:
\[
\pi: E(1) = \C\P^2 \# 9 \conj{\C\P^2} \to \P^1
\]
\(E(1)\) is the \emph{rational elliptic surface}.
\begin{itemize}
\item Smooth fibre \(C\) with \(C \cdot C = 0\) inside \(E(1)\) (the fibration is locally trivial so we can push \(C\) off itself).
\item \(\pi_1(E(1)) = 0\).
\item Example sheet 3: \(\pi_1(E(1) \setminus C) = 0\).
\end{itemize}
\end{eg}
\begin{theorem}
Suppose \(X^4\) connected closed oriented is the total space of a Lefschetz pencil with at least one basepoint. Then \(X\) admits a symplectic structure.
\end{theorem}
\begin{proof}
We first outline the general strategy. It is enough to construct a symplectic form on \(\tilde X\) obtained by blowing up basepoints. More precisely, near basepoint \(b\), we have local holomorphic coordinates \((z, w)\) such that \(\pi: X \to \P^1\) is given by \((z, w) \mapsto \frac{z}{w}\). Replace \(D_\varepsilon(0)\) with \(Z(\varepsilon)\) (local blowup model) at every basepoint to get \(\tilde X\). Now \(\pi\) induces a well-defined map \(\pi: \tilde X \to \P^1\). Each basepoint is replaced with exceptional divisor \(E\) (\(\cong \P^1\)), which gives a section of \(\pi\).
We will construct a symplectic form on \(\tilde X\) such that each smooth fibre is symplectic (ditto symplectic fibre away from critical points). We'll also see that for any \(N \geq 0\), \(\omega + N \pi^* \omega_{\P^1}\) is also symplectic. Taking \(N\) sufficiently large ensures that the sections coming from the basepoints are all symplectic. Now blow each of these down using fibre connected sum to get a symplectic form on \(X\).
Working on \(\tilde X\). Claim we can find \(\eta \in \Omega^2(\tilde X)\) closed such that
\begin{itemize}
\item \(\eta|_F\) is symplectic in a neighbourhood of any smooth point of a fibre.
\item near a critical point \(p_i\), \(\eta|_{U_i(p_i)} = \frac{i}{2}(\d z \w \d \conj z + \d w \w \d \conj w)\) in local coordinates, where \(U_i(p_i)\) is open in \(\tilde X\) (not just in the fibre).
\end{itemize}
We postpone the proof. Let \(\omega = \eta + k \pi^* \omega_{\P^1}\). Claim: for \(k \gg 0\), \(\omega\) is symplectic. This automatically gives us the sought-after form.
\begin{proof}[Proof of the claim that \(\omega\) is symplectic for \(k \gg 0\)]
Near a smooth point \(T_x \tilde X = \ker D\pi \oplus (\ker D\pi)^\eta\). \(\ker D\pi = T_xF\) is called the vertical subspace, and the symplectic complement is called a choice of horizontal space. The matrix form for \(\eta + k \pi^* \omega_{\P^1}\) is
\[
\begin{pmatrix}
\eta|_F & 0 \\
0 & \eta|_{\mathrm{hor}} + k \pi^* \omega_{\P^1}
\end{pmatrix}
\]
Since \(\tilde X\) is compact, we can choose \(k\) sufficiently large such that this is symplectic.
Near singular points, the local model is \(\pi: \C^2 \to \C, (z, w) \mapsto z^2 + w^2\). Check
\[
(\omega_{\mathrm{std}} + k\pi^* \omega_{\P^1})^2 = (1 + k |(z, w)|^2) \Vol > 0
\]
so symplectic.
\end{proof}
\begin{proof}[Proof of the claim about \(\eta\)]
Model fibre \(F\). Pick a symplectic \(2\)-form \(\sigma \in \Omega^2(F)\) with area 1. Claim: there exists a closed \(\xi \in \Omega^2(\tilde X)\) such that \(\int_{F'} \xi = 1\) for every fibre \(F'\). This is Poincaré duality: the expression says \([\xi] \cap [F'] = 1\). This requires \([F] \ne 0\), which holds if \(\#\{b_j\} > 0\), as \(E \pitchfork F = \{pt\}\) for \(E\) a section coming from a base point.
Pick open sets of \(\tilde X\), \(\{U_\alpha\}_{\alpha \in A}\) and \(\{V_i\}_{i \in I}\), such that
\begin{itemize}
\item \(U_\alpha \cong F \times B_\alpha \to B_\alpha\) for some balls \(B_\alpha \subseteq \P^1\) away from the critical points,
\item \(V_i \cong B_i^4 \to \C\), \((z, w) \mapsto z^2 + w^2\), near the critical points.
\end{itemize}
\begin{itemize}
\item On \(U_\alpha\), let \(f: F \times B_\alpha \to F\) be the projection; then \(f^*\sigma - \xi = \d \lambda_\alpha\) for some \(\lambda_\alpha \in \Omega^1(U_\alpha)\).
\item On \(V_i\), each smooth local fibre of \(\pi: B_i^4 \to \C\) is \(\{z^2 + w^2 = a\}\), \(a \ne 0\), and the standard symplectic form restricts to a symplectic form on it; write \(\omega_{\mathrm{std}} = \d \theta_i\) on \(V_i\).
\end{itemize}
Pick a partition of unity \(\{\varphi_\alpha, \psi_i\}\) on \(\P^1\) subordinate to \(\{\pi(U_\alpha)\} \cup \{\pi(V_i)\}\). Set (schematically; the \(\theta_i\) terms arrange the standard form near the critical points)
\[
\eta = \xi + \sum_\alpha \d \big((\varphi_\alpha \compose \pi) \lambda_\alpha\big) + \sum_i \d \big((\psi_i \compose \pi) \theta_i\big).
\]
Since \(\varphi_\alpha \compose \pi\) is constant on each fibre, on a fibre the \(\alpha\)-terms restrict to \(\varphi_\alpha (f^*\sigma - \xi)\), which is how \(\eta\) is arranged to be fibrewise symplectic away from the critical points.
\end{proof}
\end{proof}
As a corollary
\begin{theorem}
If \((X^4, \omega)\) is symplectic and \([\omega] \in H^2(X; \Z)\) then for all \(k \gg 0\) there exists a Lefschetz pencil on \(X\) with fibres Poincaré dual to \(k[\omega]\).
\end{theorem}
\begin{proof}
Omitted.
\end{proof}
Thus every symplectic 4-manifold with integral \([\omega]\) admits a Lefschetz pencil.
\subsection{Gompf's theorem}
\begin{theorem}[Gompf]\index{Gompf's theorem}
If \(G = \langle g_1, \dots, g_n | r_1, \dots, r_\ell \rangle\) is a finitely presented group then there exists \((Y_G, \omega)\) a symplectic 4 manifold with \(\pi_1 Y_G = G\).
\end{theorem}
This is really saying the symplectic category is big enough: the topological version is a basic result in algebraic topology. Symplectic 2-manifolds are surfaces, so symplectic 4-manifolds are the simplest objects for which the theorem could possibly hold.
\begin{proof}
The proof strategy is as follows. From example sheet 3, \(\pi_1(E(1) \setminus T) = 0\) and \(c_1(\nu_{T/E(1)}) = 0\) for a smooth fibre \(T\). We'll use this to kill off generators in a bigger space. Consider \((T^2 \times \Sigma_g, \omega_{T^2} \oplus \omega_{\Sigma_g})\). Find in here disjoint symplectic tori \(T_i\) and construct the fibre sum
\[
M = (T^2 \times \Sigma_g) \#_{T_1} E(1) \#_{T_2} E(1) \#_{T_3} \dots \#_{T_k} E(1)
\]
and then \(\pi_1M = \pi_1(T^2 \times \Sigma_g)/\langle \pi_1(T_i)\rangle\) (quotient by the normal subgroup generated by the \(\pi_1(T_i)\)).
We first find disjoint tori. Let \(M = T^2 \times \Sigma_g\), regarded as a fibration with base \(\Sigma_g\) (pic). Then
\[
\pi_1(M) = \langle u \rangle \times \langle v \rangle \times \langle a_1, b_1, \dots, a_g, b_g | \prod [a_i, b_i] \rangle.
\]
Candidate tori: given loops \(\alpha_1, \dots, \alpha_k \subseteq \Sigma_g\) and loops \(u_1, \dots, u_k \subseteq T^2\), the products \(\alpha_i \times u_i \subseteq M\) are (Lagrangian) tori.
Step 1: can we find enough tori so that \(\pi_1(M)/\langle \pi_1 T_i\rangle\) is a free group?
First let \(T_0 = T\), a torus fibre. It is an honest symplectic torus and will kill off \(\langle u \rangle \times \langle v \rangle\). Next consider the tori \(u_i \times b_i\). These kill off the generators \(b_i\) in \(\pi_1M\). Claim the resulting space has free fundamental group: killing off the \(b_i\) is the same as adding handlebodies to the ``holes'' on \(\Sigma_g\), and the resulting space is homotopy equivalent to \(g\) loops attached to \(S^2\), whose fundamental group is indeed \(F_g\).
For a finitely presented group \(G = F_g/ \langle r_1, \dots, r_\ell \rangle\): for each \(r_i\) we find a loop \(\alpha_i \subseteq \Sigma_g\) in the \(a_i\)'s which spells out the relation. We run into a problem: the \(\alpha_i\) may not be embedded (as a simple example, \(a_1^2\) in \(\Sigma_1\)). We can resolve the problem by adding more generators: for example \(F_1/\langle a_1^2 \rangle = F_2/\langle a_1a_2, a_1a_2^{-1} \rangle\) (pic). Note that it does not matter if these loops intersect one another, as we can move them off to different fibres.
So now we should be convinced that there are disjoint Lagrangian tori \(T_i \subseteq T^2 \times \Sigma_g\) such that \(\pi_1(T^2 \times \Sigma_g)/\langle\pi_1(T_i)\rangle = G\).
It remains to show there exists \(\omega \ne \omega_{T^2} \oplus \omega_{\Sigma_g}\) making the \(T_i\) symplectic. Note a Lagrangian for one symplectic form can be symplectic for another: \(S^2 \subseteq T^*S^2\) is certainly Lagrangian; but consider \(T\P^1\), which is diffeomorphic to \(T^*S^2\). \(T\P^1\) has a symplectic form compatible with the standard complex structure, and the zero section \(\P^1\) is complex, hence symplectic.
Claim exists \(\omega\) making the \(T_i\)'s symplectic.
There exists a closed form \(\beta \in \Omega^2(M; \R)\) such that the pullback \(i^*\beta \in \Omega^2(T_i^2; \R)\) is positive for each \(i\). Note that \(\beta\) is in the same class as the usual symplectic form but may not itself be symplectic.
Idea of proof: take closed \(1\)-forms \(s_1\) on \(T^2\) and \(s_2\) on \(\Sigma_g\), Poincaré dual (in each factor) to the loops \(u_1\) and \(\alpha_1\), where \(T_i = u_1 \times \alpha_1\). Then \(\beta = s_1 \w s_2\).
Suppose \(\beta\) is such a form on \(M\); we want to show we can modify it to be standard near each \(T_i\). Since the \(T_i\) are Lagrangian, we know their neighbourhoods are trivial: let \(j: B_\varepsilon(0) \times T^2 \embed M\) be a neighbourhood of \(T_i\). The pullback \(j^*\beta\) on \(B_\varepsilon(0) \times T^2\) is in the same cohomology class as \(p^*_{T^2}\omega_{T^2}\) (after scaling). By abuse of notation, write \(\beta, \omega_{T^2} \in \Omega^2(B_\varepsilon(0) \times T^2)\) with \([\beta] = [\omega_{T^2}]\).
Let \(\eta \in \Omega^1(B_\varepsilon(0) \times T^2)\) with \(\d \eta = \omega_{T^2} - \beta\). Pick \(\rho: B_\varepsilon(0) \to \R\) which is \(1\) in a neighbourhood of \(0\) and \(0\) in a neighbourhood of the boundary. Look at the form
\[
\tilde \beta = \rho \omega_{T^2} + (1 - \rho) \beta - \d \rho \w \eta
\]
where the last term is present to make sure \(\tilde \beta\) is closed:
\[
\d \tilde \beta = \d \rho \w \omega_{T^2} - \d \rho \w \beta - \d \rho \w \d \eta = 0.
\]
Substitute \(\tilde \beta\) for \(\beta\) on \(M\). Take \(\omega\) on \(T^2 \times \Sigma_g\) to be \(\omega_{T^2} \oplus \omega_{\Sigma_g} + \varepsilon \beta\) for \(\varepsilon\) small. This is nondegenerate (nondegeneracy is an open condition) and restricts positively to each \(T_i\), since the \(T_i\) are Lagrangian for \(\omega_{T^2} \oplus \omega_{\Sigma_g}\).
\end{proof}
Recall that a Lefschetz pencil is a map \(\pi: (\tilde X, \omega, J) \setminus \{b_i\} \to \P^1\) that is
\begin{enumerate}
\item \(J\)-holomorphic,
\item at the critical points of the map there exist honestly holomorphic coordinates such that \(\pi(z_1, z_2) = z_1^2 + z_2^2\).
\end{enumerate}
Goal: understand Lagrangian submanifolds in Lefschetz pencils. For today let \(F_p = \pi^{-1}(p)\).
Idea: suppose \(\ell \subseteq F_p\) is a Lagrangian and \(\gamma \subseteq \P^1\) is a Lagrangian, can we create \(L\) in \(\tilde X\) based on this data? For example if \(\tilde X = \Sigma_g \times \P^1\) then any product \(\ell \times \gamma\) is Lagrangian. We expect this to work locally in the general case.
First claim \((F_p, \omega|_{F_p})\) is symplectic. Let \(i: F_p \embed \tilde X\). To check closedness
\[
\d i^*\omega = i^* \d \omega = 0.
\]
To check nondegeneracy, note that \(F_p\) is a complex submanifold. Take \(v \in TF_p\). Then
\[
\omega(v, Jv) = g_J(v, v) > 0
\]
so \(\omega\) is nondegenerate. Thus the Lagrangian-in-the-fibre part of the data always makes sense. What about the other part?
Observe that at \(x \in \tilde X\), the tangent spaces splits as
\[
T_x \tilde X = \ker D\pi \oplus (\ker D\pi)^\omega.
\]
As a result \(\tilde X \to \P^1\) carries a connection. Notation: given \(\gamma: [0, 1] \to \P^1 \setminus \{\text{critical values of } \pi\}\), let \(f_\gamma: F_{\gamma(0)} \to F_{\gamma(1)}\) be the induced parallel transport map.
\begin{theorem}
\(f_\gamma\) is a symplectomorphism.
\end{theorem}
\begin{proof}
Need to show that if \(V\) is a horizontal vector field (i.e.\ \(V \in (\ker D\pi)^\omega\) pointwise) then \(\mathcal L_V \omega\) vanishes on the fibres.
\[
\mathcal L_V \omega = \iota_V \d \omega + \d \iota_V \omega = \d \eta
\]
with \(\eta = \iota_V \omega\). \(\eta\) vanishes on the vertical tangent space \(\ker D\pi\) as
\[
\eta(w) = \omega(V, w) = 0.
\]
Hence, extending \(v_1, v_2 \in \ker D\pi\) to vertical vector fields, \(\d \eta(v_1, v_2) = v_1 \eta(v_2) - v_2 \eta(v_1) - \eta([v_1, v_2]) = 0\), since \(\eta\) vanishes on vertical vectors and \([v_1, v_2]\) is again vertical.
\end{proof}
\begin{corollary}
If \(\ell \subseteq F_{\gamma(0)}\) is a Lagrangian submanifold for \(\omega|_{F_{\gamma(0)}}\) then its trace \(\ell \times I \subseteq \tilde X\) under parallel transport along a curve \(\gamma\) is a Lagrangian submanifold (with boundary).
\end{corollary}
Observation: if \(\gamma: [0, 1] \to \P^1\) with \(\gamma(1) \in \mathrm{CritVal}(\pi)\) and \(\gamma(t) \notin \mathrm{CritVal}(\pi)\) for \(t < 1\), there is still a (limiting) map \(F_{\gamma(0)} \to F_{\gamma(1)}\).
\begin{definition}[vanishing path]\index{vanishing path}
A \emph{vanishing path} is a path \(\gamma: [0, 1] \to \P^1\) with \(\gamma(t) \in \mathrm{CritVal}(\pi)\) if and only if \(t = 1\).
The \emph{vanishing cycle} is \(V_{\gamma(0)} = f_\gamma^{-1}(p)\), where \(p \in \mathrm{Crit}(\pi)\) is the critical point in the fibre \(F_{\gamma(1)}\) and \(f_\gamma\) is the (limiting) parallel transport map.
\end{definition}
\begin{proposition}
\(V_p\) is a sphere.
\end{proposition}
\begin{proof}
Look at the local model near the critical point: \(\C^2 \to \C\), \((z_1, z_2) \mapsto z_1^2 + z_2^2\). The regular fibre is \(\{z_1^2 + z_2^2 = a\}\), \(a \ne 0\), and the singular fibre is the union of two lines. Observe that \(\pi(\R \times \R) = \R_{\geq 0}\), and \(\R \times \R\) is Lagrangian, hence preserved by parallel transport along \(\R_{> 0}\). Thus \(V_p = \{z_1^2 + z_2^2 = 1\} \cap (\R \times \R)\), a circle, which shrinks to the critical point as the fibre degenerates. Note that we can increase the dimension of the fibration (\(z_1, z_2, z_3, \dots\)), giving \(V_p \cong S^{n - 1}\).
\end{proof}
Since parallel transport along curves gives symplectomorphisms of fibres, a loop \(\gamma\) based at \(a\) gives \(f_\gamma: F_a \to F_a\). Claim: if \(\gamma\) is contractible in \(\P^1 \setminus \mathrm{CritVal}(\pi)\) then \(f_\gamma\) is Hamiltonian isotopic to the identity.
Question: what happens to \(F_a\) if \(\gamma\) travels around a critical value? We study an easier picture first: consider the Lefschetz fibration \(\C \to \C\), \(z \mapsto z^2\). Over the regular value \(1\) the fibre consists of two points, and parallel transport around the critical value induces the (only) nontrivial symplectomorphism, which transposes them. (pic)
Consider next \(\C^2 \to \C\), \((z_1, z_2) \mapsto z_1^2 + z_2^2\): the monodromy symplectomorphism twists the hyperboloid \(\{z_1^2 + z_2^2 = a\}\); this can be visualised via the projection to the \(z_2\)-coordinate.
In general
\begin{theorem}
There exists a symplectomorphism \(\tau_{S^n}: T^*S^n \to T^*S^n\), the \emph{Dehn twist}, equal to the identity outside a compact set (``fixing the boundary'').
\end{theorem}
Idea: the (normalised) geodesic flow on \(T^*S^n\) is a symplectomorphism preserving \(S^n\); the time-\(\pi\) flow is the antipodal map.
For \(n \geq 2\), \(\tau_{S^n}^2\) is smoothly isotopic to the identity but not symplectically isotopic to it.
\section{\(J\)-holomorphic curves}
Let \((X, \omega)\) be a symplectic manifold and fix a compatible almost complex structure \(J\).
\begin{definition}[\(J\)-holomorphic curve]\index{\(J\)-holomorphic curve}
A \emph{\(J\)-holomorphic curve} \(f: \Sigma \to X\) is a triple \((\Sigma, j, f)\) where \((\Sigma, j)\) is a smooth real 2-manifold equipped with a complex structure \(j\), and \(f \in C^\infty(\Sigma, X)\) is such that \(Df \compose j = J \compose Df\), or equivalently, \(Df + J \compose Df \compose j = 0\).
\end{definition}
\begin{remark}
In real dimension 2, all almost complex structures are integrable, i.e.\ complex.
\end{remark}
\begin{remark}\leavevmode
\begin{enumerate}
\item \(Df(T_x \Sigma) \subseteq T_{f(x)}X\) is \(J\)-invariant.
\item \(f\) needn't be an embedding.
\item \(J\)-holomorphic curves are parameterised (we care about \(f\) rather than merely about \(\im f\)).
\end{enumerate}
\end{remark}
\begin{lemma}
If \(f: \Sigma \to X\) is \(J\)-holomorphic then its image is a symplectic submanifold at the smooth points and
\[
\int_{f(\Sigma)} \omega = \int_\Sigma f^*\omega \geq 0.
\]
\end{lemma}
\begin{proof}
At a smooth point \(f(x) \in f(\Sigma)\), \(J(T_{f(x)}f(\Sigma)) = T_{f(x)}f(\Sigma)\). As \(\omega\) is \(J\)-compatible, this subspace is symplectic:
a basis for \(T_{f(x)}f(\Sigma)\) is \(v, Jv\) for some \(v \ne 0\), and \(\omega(v, Jv) > 0\). Hence \(f^*\omega \geq 0\) pointwise, and the integral is nonnegative.
\end{proof}
In local coordinates: suppose \(x + iy\) is a local complex coordinate on \(\Sigma\); then the \(J\)-holomorphic condition says
\[
(\frac{\partial f}{\partial x} + J \frac{\partial f}{\partial y}) \d x + (\frac{\partial f}{\partial y} - J \frac{\partial f}{\partial x}) \d y = 0
\]
or equivalently,
\[
\p_x f + J \p_y f = 0.
\]
This is called the \emph{generalised Cauchy-Riemann equation}\index{generalised Cauchy-Riemann equation}, or \(\overline \p\)-equation.
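When \(X = \C^n\) and \(J = i\) (a sanity check): since \(\frac{\p f}{\p \conj z} = \frac{1}{2}(\p_x f + i \p_y f)\), the equation \(\p_x f + i \p_y f = 0\) is exactly \(\frac{\p f}{\p \conj z} = 0\), the classical Cauchy--Riemann equations.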
\begin{definition}[energy]\index{energy}
Suppose \((X, \omega, J)\) is a symplectic manifold with a compatible almost complex structure, with associated metric \(g_J\) and norm \(|\cdot|_J\). Let \(u: \Sigma \to X\) be smooth, where \(\Sigma\) is a surface, so that \(\d u \in \Omega^1(\Sigma, u^*(TX))\). The \emph{energy} of \(u\) is
\[
E(u) = \frac{1}{2} \int_\Sigma |\d u|^2_J \d\Vol_\Sigma
\]
where for \(L: T_z \Sigma \to T_{u(z)} X\),
\[
|L|_J = |\xi|^{-1} \sqrt{|L(\xi)|^2_J + |L(j\xi)|^2_J}
\]
where \(0 \ne \xi \in T_z\Sigma\).
\end{definition}
\begin{note}
\(E(u)\) depends on
\begin{itemize}
\item metric \(g_J\) on \(X\),
\item the complex structure on \(\Sigma\) (but not the volume form itself). More compactly, \(|\alpha|^2 \d \Vol_\Sigma = -\alpha \w (\alpha \compose j)\).
\end{itemize}
\end{note}
\begin{lemma}
\[
E(u) = \int_\Sigma \frac{1}{2} |\overline \p_J u|_J^2 \d \Vol_\Sigma + \int_\Sigma u^* \omega
\]
where \(\overline \p_J u = D u + J \compose Du \compose j\).
\end{lemma}
\begin{proof}
Use conformal coordinates \(x + iy\) on \(\Sigma\),
\begin{align*}
|\d u|^2 \d \Vol
&= (|\p_x u|^2 + |\p_y u|^2) \d x \w \d y \\
&= (|\p_x u + J \p_y u|^2 - 2 \langle \p_x u, J \p_y u \rangle) \d x \w \d y \\
&= |\overline\p_J u|^2 \d \Vol + 2 u^*\omega
\end{align*}
as \(J\) is \(\omega\)-compatible.
\end{proof}
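In particular (immediate from the lemma): if \(u\) is \(J\)-holomorphic then \(\overline \p_J u = 0\), so
\[
E(u) = \int_\Sigma u^* \omega = \langle [\omega], u_*[\Sigma] \rangle,
\]
a purely topological quantity; and for any smooth \(u\), \(E(u) \geq \int_\Sigma u^*\omega\), with equality exactly when \(u\) is \(J\)-holomorphic. This is the content of property 3 in the list below.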
First properties of \(J\)-holomorphic curves:
\begin{enumerate}
\item local existence: there are always locally defined \(J\)-holomorphic curves through any point \(p\) of an almost complex manifold. (In fact, morally there is a one-to-one correspondence with curves through \(0 \in \C^n\).)
\item unique continuation: if \(f, g: \Sigma \to X\) are \(J\)-holomorphic curves and agree to infinite order (as smooth maps) at a point \(p \in \Sigma\) then \(f = g\).
\item corollary of the energy identity: \(J\)-holomorphic curves are energy minimising within their homology class.
\item Positivity of intersection for almost complex submanifolds.
Suppose \(C\) is a \(J\)-holomorphic curve and \(Y \subseteq X\) is a codimension 2 submanifold which is also \(J\)-holomorphic. If \(C \cap Y\) is discrete and nonempty, each intersection point has a positive sign, so \([C] \cdot [Y] > 0\) (and could be infinite).
\begin{proof}
Pick local almost-complex coordinates near an intersection point so that \(Y = \{(z_1, \dots, z_{n - 1}, 0)\} \subseteq X\). Local analysis analogous to the proof of 5 (stated below) tells us that \(C\) is locally given by \(z \mapsto (f(z), a z^k + O(|z|^{k + 1}))\) for some \(a \ne 0\). Perturb this to \(z \mapsto (f(z), az^k + \varepsilon + O(|z|^{k + 1}))\). This gives transverse intersection points, each of which is positive as they are locally modelled on complex intersection points.
\end{proof}
Corollary: let \((X^4, \omega)\) be compact with \(J\) a compatible almost complex structure. Suppose \(u: C \to X\) is \(J\)-holomorphic with \([u(C)] \cdot [u(C)] < 0\). Then any other \(J\)-holomorphic curve \(v: C' \to X\) representing the same homology class must have the same image as \(u\) (it's given by a holomorphic reparameterisation of \(u\)).
\item Let \(\Sigma\) be closed. If \(u: \Sigma \to X\) is \(J\)-holomorphic and nonconstant then both \(\{z \in \Sigma \mid \d u_z = 0\}\) and \(u^{-1}(\mathrm{Crit}(u))\) are finite.
\begin{proof}
Work in a coordinate patch: let \(u: D \to \C^n\) be \(J\)-holomorphic for \(J: D \to \End_\R(\C^n)\). wlog \(u(0) = 0\), \(Du_0 = 0\) (i.e.\ \(0\) is a critical point) and \(J(0)\) is the action of \(i\) on \(\C^n\). We've assumed that \(u\) is non-constant, so there exists \(\ell \geq 2\) such that \(u(z) = O(|z|^\ell)\) near \(0\) but \(u(z) \ne O(|z|^{\ell + 1})\); here we're using the unique continuation property to say that not all derivatives can vanish. Then \(J(u(z)) = i + O(|z|^\ell)\) near \(0\). Let \(T_\ell u\) denote the \(\ell\)th order Taylor expansion of \(u\). Taking the Taylor expansion of the Cauchy-Riemann equation
\[
\frac{\partial u}{\partial s} + J(u) \frac{\partial u}{\partial t} = 0,
\]
we obtain
\[
\frac{\partial (T_\ell u)}{\partial s} + i \frac{\partial (T_\ell u)}{\partial t} = 0
\]
so \(T_\ell u\) is holomorphic in the standard sense. By Taylor's theorem \(T_\ell u(z) = c z^\ell\) for some \(c \ne 0\), so \(u(z) = c z^\ell + O(|z|^{\ell + 1})\). Thus \(0\) is an isolated critical point. Finiteness follows from closedness of \(\Sigma\).
\end{proof}
\item Two versions
\begin{itemize}
\item Monotonicity theorem: suppose \(u: (D, 0) \to (B^{2n}(r), 0)\) is \(J\)-holomorphic for \((B^{2n}(r), \omega_0, J)\) for some \(J\) compatible, and \(u(\p D) \subseteq \p B^{2n}(r)\). Then \(\mathrm{area}(u(D)) \geq \pi r^2\), where the area is \(\int_{u(D)} \omega\).
We will not prove this theorem. It is a special case of the monotonicity theorem for minimal surfaces (which are energy minimising surfaces); we saw earlier that \(J\)-holomorphic curves are minimal in this sense.
\item Weaker version: same statement with \(\mathrm{area}(u(D)) \geq \frac{\pi}{4} r^2\).
\begin{proof}
Assume we have such a \(u\). Let \(S_t = u^{-1}(B_0^{2n}(t))\), and let \(a(t) = \mathrm{area}(u(S_t))\), \(\ell(t) = \mathrm{length}(u|_{\p S_t})\). We have
\begin{itemize}
\item \(a'(t) = \ell(t)\) almost everywhere. This part does not use any property of \(J\)-holomorphicity.
\item \(a(t) \leq \frac{\ell(t)^2}{\pi}\). Proof later.
\end{itemize}
Putting these together
\[
\frac{d}{dt} \sqrt{a(t)} = \frac{a'(t)}{2\sqrt{a(t)}} = \frac{\ell(t)}{2\sqrt{a(t)}} \geq \frac{\sqrt{\pi}}{2}.
\]
Integrating gives \(\sqrt{a(t)} \geq \frac{\sqrt \pi}{2} t\).
\begin{proof}[Proof of second claim]
\(u(\p D)\) bounds a 2-complex \(C\) which can be made arbitrarily close to a union of ``flat 2-simplices'' (pieces of linear subplanes). These can be approximated by a union of small flat discs. For a disc of radius \(\varepsilon\), \(\pi \varepsilon^2 = \frac{(2\pi \varepsilon)^2}{4\pi}\), so \(\mathrm{area}(C) \leq \frac{\ell(\p D)^2}{\pi}\). By Stokes'
\[
\mathrm{area}(u(D)) = \int_D u^*\omega = \int_C \omega \leq \int_C 1 = \mathrm{area}(C)
\]
because \(\norm \omega_p \leq 1\) for any \(p \in B^{2n}(0)\).
\end{proof}
\end{proof}
\end{itemize}
\item Removal of singularities: given a \(J\)-holomorphic map \(f: D^* \to X\) of finite energy, \(f\) extends \(J\)-holomorphically over \(D\).
Sketch: one can use monotonicity to get a continuous extension (argue that \(f(z)\) must have a unique limit as \(z \to 0\), as otherwise the energy would be infinite).
\item
\begin{definition}
A \(J\)-holomorphic curve \(f: \Sigma \to X\) is \emph{simple} if it isn't multiply covered, i.e.\ it doesn't factor as \(\Sigma \to \Sigma' \to X\) where \(\Sigma \to \Sigma'\) is a nontrivial branched cover.
\end{definition}
Corollary of 5: if \(f: \Sigma \to X\) is \(J\)-holomorphic, \(\Sigma\) closed and \(f\) simple, then the set of injective points, i.e.\ \(\{x \in \Sigma: Df_x \ne 0, f^{-1}(f(x)) = \{x\}\}\), is open and dense. In fact, the non-injective points are at most countable and can only accumulate at critical points.
\item Suppose \(\Sigma_0, \Sigma_1\) are closed and \(u_j: \Sigma_j \to M\) are \(J\)-holomorphic and simple with \(u_0(\Sigma_0) = u_1(\Sigma_1)\). Then there exists a biholomorphism \(\varphi: \Sigma_0 \to \Sigma_1\) such that \(u_0 = u_1 \compose \varphi\). ``\(J\)-holomorphic curves with the same image are the same up to reparameterisation.''
\end{enumerate}
Note for 6B: area is calculated using the Riemannian metric \(g_J\), and for \(J\)-holomorphic curves it is the same as \(\int_C \omega\).
For a general Riemannian manifold \((X, g)\): if \(g\) has injectivity radius \(\geq r\), \(s < r\), and \(\gamma \subseteq B_x(s)\) is a closed \(C^\infty\) loop, then \(\gamma\) bounds a disc of area \(A_\gamma \leq \frac{\ell(\gamma)^2}{\pi}\).
Another remark: we used \(u: (D, \p D) \to (B^{2n}(r), \p B^{2n}(r))\), where \(J\) on the codomain is only required to be compatible. If instead we use the standard complex structure then we can do better using the standard isoperimetric inequality: writing \(u(\p D) = (u_1(t), \dots, u_n(t)) \in \C^n\), \(t \in S^1\), we can fill the component loops with \(C_i \subseteq \R^2\) such that \(\mathrm{area}(C_i) \leq \frac{\ell_i^2}{4\pi}\), and we actually get the stronger inequality.
\subsection{Moduli space of \(J\)-holomorphic curves}
Let \((X, \omega, J)\) be a symplectic manifold with a fixed compatible almost complex structure \(J\), and let \((\Sigma, j)\) be a closed Riemann surface.
\begin{definition}
For \(A \in H_2(X, \Z)\), define the moduli space of \(J\)-holomorphic curves to be
\[
\mathcal M_J(A, X) = \{J\text{-holomorphic curves } \Sigma \to X \text{ such that } [\Sigma] = A\}.
\]
\end{definition}
Fact: in favourable circumstances (\(J\) is regular), \(\mathcal M_J(A, X)\) is a manifold of real dimension
\[
n(2 - 2g) + 2\langle c_1, A\rangle
\]
where \(g = g(\Sigma), 2n = \dim_\R X, c_1 = c_1(X)\).
\begin{theorem}
If \(X\) is compact, \(A\) is simple in homology (i.e.\ primitive) then regular \(J\)'s are dense in the space of all almost complex structures.
\end{theorem}
Let \(S_0, S_1\) be Riemann surfaces diffeomorphic to \(S^2\). They are biholomorphic, in contrast to elliptic curves. \(\C\P^1\) admits a \(\PSL_2(\C)\)-action by reparameterisation. If \(A = [\Sigma]\) where \(\Sigma \cong S^2\), \(\PSL_2(\C)\) acts on \(\mathcal M_J(X, A)\) so we can form the quotient \(\overline{\mathcal M}_J(X, A)\). For regular \(J\), \(\overline {\mathcal M}_J(A, X)\) is a manifold of dimension \(2(n - 3 + \langle c_1, A\rangle)\).
\begin{itemize}
\item If \(X = \P^2, A = [\C\P^1]\) then the dimension is \(2 (2 - 3 + 3) = 4\). This agrees with the fact that the Grassmannian of lines in \(\P^2\) is isomorphic to \(\P^2\), which has real dimension \(4\).
\item \(X = \P^2, A = 2[\C\P^1]\) (conic, still isomorphic to \(\C\P^1\)): dimension \(2(2 - 3 + 6) = 10\). c.f. conic through 5 points in generic position.
\end{itemize}
Example of non-regular behaviour: let \(X^4\) be Kähler and \(C \subseteq X^4\) a smooth holomorphic curve. Take a local coordinate \(z\) on \(C\), and blow up \(X\) at the points \(0, t\) of \(C\). If \(t \ne 0\) we get two exceptional divisors \(E_1, E_2\); if \(t = 0\) we blow up twice (the second time at a point of the first exceptional divisor). Symplectically the results are the same, \(\P^2 \# 2 \overline \P^2\), wlog with the same symplectic form. The construction for varying \(t\) gives a \(1\)-(complex) parameter family of \(J\)'s on \(\P^2 \# 2 \overline \P^2\), say \(J_t\). For \(t = 0\) there are two curves, in the classes \(E_1 - E_2\) and \(E_2\). However for \(t \ne 0\) there is no connected smooth holomorphic curve \(C\) in the class \(E_1 + E_2\): if there were, then since \((E_1 + E_2) \cdot E_1 = -1\) we would get a contradiction to positivity of intersection unless \(E_1 \subseteq C\) or \(C \cap E_1 = \emptyset\) (impossible); similarly for \(E_2\). Thus \(J_0\) is not regular for the class \(E_1 + E_2\).
Note: blowing up a point \(p \in C \subseteq X^4\) gives \(\pi: \tilde X \to X\) with \(\pi^{-1}(C) = \tilde C + E\), where \(\tilde C\) is the strict transform of \(C\) and \(E \cong \C\P^1\) is the exceptional divisor. Then \([\tilde C]^2 = [C]^2 - 1\).
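A quick check of the last identity, using \(\pi^*[C] = [\tilde C] + [E]\), \((\pi^*[C])^2 = [C]^2\), \(\pi^*[C] \cdot [E] = 0\) and \([E]^2 = -1\):
\[
[\tilde C]^2 = \left(\pi^*[C] - [E]\right)^2 = (\pi^*[C])^2 - 2\, \pi^*[C] \cdot [E] + [E]^2 = [C]^2 - 1.
\]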
Question: suppose we have a sequence of points in \(\mathcal M_J(X, A)\). When do we have convergence?
Example: consider the family
\begin{align*}
\P^1 &\to \P^2 \\
[x: y] &\mapsto [x^2 : y^2 : txy]
\end{align*}
The image is \(\{t^2XY = Z^2\}\). If \(t \ne 0, \infty\) we have a smooth conic. As \(t \to \infty\), it becomes \(\{XY = 0\}\). This is an example of \emph{bubbling}.
\begin{eg}
More bubbling: consider
\begin{align*}
f_n: \P^1 &\to \P^1 \times \P^1 \\
z &\mapsto (z, \frac{1}{zn^2})
\end{align*}
Away from \(0\), \(|Df_n|\) is bounded and the \(f_n\) converge uniformly on compact sets. As \(n \to \infty\), \(\{|z| \leq \frac{1}{n}\}\) collapses to a point. However, \(\im f_n\) converges pointwise to \(\P^1 \times \{0\} \cup \{0\} \times \P^1\).
\end{eg}
\begin{definition}[bubble]\index{bubble}
A \(J\)-holomorphic curve \(g: \P^1 \to X\) occurs as a \emph{bubble} in a sequence \(f_n: \Sigma \to X\) of \(J\)-holomorphic curves if for some \(p \in \Sigma\) there exist holomorphic charts \(\{\varphi_n: B(R_n) \to \Sigma\}_{R_n \to \infty}\) such that for all \(z \in \C\), \(\varphi_n(z) \to p\) (for \(n \gg 0\)) on \(\Sigma\) and \(f_n \compose \varphi_n \to g\) uniformly on compact sets of \(\C\).
\end{definition}
\begin{theorem}
If \(f_n: \Sigma \to X\) is a sequence of \(J_n\)-holomorphic curves (for some sequence \(J_n \to J\) of compatible almost complex structures) with compact image, then either some subsequence converges to a \(J\)-holomorphic map, or there is bubbling.
\end{theorem}
\begin{note}
Each bubble splits off its symplectic area's worth of energy (recall that the energy is zero if and only if the \(J\)-holomorphic curve is constant).
\end{note}
\begin{definition}
We say that a class \(\alpha \in H_2(X, \Z)\) has \emph{least area} if \(\omega(\alpha) > 0\) and there does not exist \(\beta \in H_2(X, \Z)\) such that \(0 < \omega(\beta) < \omega(\alpha)\).
\end{definition}
If \(\alpha\) has least area then any bubble ``eats'' at least \(\omega(\alpha)\) energy, so a sequence of curves in a class \(\beta\) produces at most \(\omega(\beta)/\omega(\alpha)\) bubbles, so
\begin{corollary}
If \(X\) has a class of least area, any sequence in \(\mathcal M_J(X, \beta)\), for any \(\beta\), can only have finitely many bubbles.
\end{corollary}
\begin{theorem}
Let \(X^{2n}\) be a closed symplectic manifold. Suppose \(\alpha \in H_2(X, \Z)\) has least symplectic area. Then there exists a dense collection of compatible almost complex structures \(J\) which are regular for \(\alpha\). In this case \(\mathcal M_J(X, \alpha)\) is a compact manifold (every sequence has a convergent subsequence).
\end{theorem}
Fact: the dense collection is path-connected.
Idea for bubbling theorem:
\begin{itemize}
\item If there is a uniform bound on \(|Df_n|\) then there is a convergent subsequence of \(f_n\)'s by Arzela-Ascoli.
\item Convergence can fail if there exist points \(p_n \in \Sigma\) with \(|Df_n(p_n)| \to \infty\). Since \(\Sigma\) is compact, \(p_n \to p\), say, after passing to a subsequence. Slogan: energy concentrates near \(p\), i.e.\ a bubble forms at \(p\).
\end{itemize}
Sketch application: consider \(\mathrm{ev}: \mathcal M_J(\P^2, [H]) \to \mathrm{Sym}^2(\P^2)\) given by evaluation at two fixed points of \(\P^1\). Note that by adjunction the only \(J\)-holomorphic curves in \([H]\) are isomorphic to \(\P^1\). For the standard \(J\) (which is regular), \([\im \mathrm{ev}] = [\mathrm{Sym}^2 \P^2]\). For another regular \(J\), \([\im \mathrm{ev}]\) is unchanged as a cycle. In particular \(\mathrm{ev}\) must hit every point in \(\mathrm{Sym}^2(\P^2)\) (if not, \(\im \mathrm{ev}\) would not represent the fundamental class). Thus for a regular \(J\) (thought of as the standard Kähler structure deformed a bit), there is a \(J\)-holomorphic \(\P^1\) through any two points.
\subsection{Gromov non-squeezing}\index{Gromov non-squeezing}
Recall that the Gromov non-squeezing theorem says: if there is a symplectic embedding \(B^{2n}(r) \embed B^2(R) \times \R^{2n - 2}\) then \(r \leq R\).
First proof:
Assume \(B^{2n}(r) \embed B^2(R) \times \R^{2n - 2}\) is a symplectic embedding. Then we get \(\overline B^{2n}(r_0) \embed B^2(R) \times \R^{2n - 2}\) for any \(r_0 < r\). Since \(\overline B^{2n}(r_0)\) has compact image, there exists a symplectic embedding \(\overline B^{2n}(r_0) \embed \P^1 \times V\), where the area form on \(\P^1\) satisfies \(\int_{\P^1} \omega = \pi R^2 + \varepsilon\) for some small \(\varepsilon\), and \(V = \R^{2n - 2}/L \Z^{2n - 2}\) for \(L\) sufficiently large. So we have \(\varphi: B^{2n}(r_0) \embed M = \P^1 \times V\), \(0 \mapsto (a, b)\). Blow up \(\P^1 \times V\) symplectically with weight \(\rho < r_0\), i.e.\ cut out the image of the ball of radius \(\rho\) and collapse the boundary, to get \((\tilde M, \omega_\rho)\). Now \(u: \P^1 \to \P^1 \times V\), \(x \mapsto (x, b)\), is a \(J\)-holomorphic curve through \((a, b)\) in \(M\). We can lift it to \(\tilde u: \P^1 \setminus u^{-1}(a, b) \to \tilde M\), which is automatically \(\tilde J\)-holomorphic, where \(\tilde J\) is picked so that it agrees with \(J\) away from the exceptional divisor \(E\). Finiteness of energy/area means that we can apply removal of singularities to extend to a \(\tilde J\)-holomorphic map \(\tilde u: \P^1 \to \tilde M\). By positivity of intersection, \([\tilde u(\P^1)] \cdot E > 0\). But
\[
\pi R^2 + \varepsilon = \int_{\P^1} u^*\omega = \int_{\P^1} (\pi \compose \tilde u)^* \omega = \underbrace{\int_{\P^1} \tilde u^* \omega_\rho}_{> 0} + \pi\rho^2 \underbrace{[\tilde u(\P^1)] \cdot E}_{\geq 1}
\]
so \(\pi R^2 + \varepsilon > \pi \rho^2\). Since \(\varepsilon\) can be taken arbitrarily small and \(\rho\) arbitrarily close to \(r\), this gives \(\pi r^2 \leq \pi R^2\), i.e.\ \(r \leq R\).
One could also use monotonicity to get bounds on the areas of bubbles.
***
Moduli space for non-primitive classes: define \(M_J^*(X, A) \subseteq M_J(X, A)\) to be the subspace of curves which are simple (for example, a generic curve of degree \(d\) in \(\C\P^2\) represents \(d[H]\) and is simple). We have a version of the theorem before:
\begin{theorem}
For \(J \in \mathcal J\), where \(\mathcal J\) is a subspace of the space of all compatible almost complex structures which is of the second category (i.e.\ contains an intersection of countably many dense open subsets), \(M_J^*(X, A)\) is a smooth manifold of dimension \(n(2 - 2g) + 2c_1(A)\), where \(g = g(\Sigma)\) for curves \(u: \Sigma \to X\). Such a \(J\) is called ``regular''.
\end{theorem}
\subsection{Application of Gromov non-squeezing}
\begin{theorem}[Eliashberg]\label{thm:Elaishberg theorem about C0 closure of Symp}
Let \(M\) be a symplectic manifold. Then \(\Symp(M)\) is closed in \(\mathrm{Diff}(M)\) in the \(C^0\) topology (i.e.\ the topology of uniform convergence on compact sets).
\end{theorem}
\begin{lemma}
Volume (Lebesgue measure) is preserved under \(C^0\) limits. This means that if \(\{f_n\} \subseteq \Symp(M)\) and \(f_n \to f\) in \(C^0\), where \(f \in \mathrm{Diff}(M)\), then \(f\) is volume preserving.
\end{lemma}
\begin{proof}
Work near a point, wlog \(0 \in \R^{2n}\), and let \(Df|_0 = A\). Let \(B(\delta) = B_\delta(0)\). Then \(\frac{\Vol(f(B(\delta)))}{\Vol(B(\delta))} \to |\det A|\) as \(\delta \to 0\). Now the \(f_n\) are volume preserving and converge uniformly on compact sets to \(f\), so \(|\det A| = 1\).
\end{proof}
\begin{proof}[Proof of \Cref{thm:Elaishberg theorem about C0 closure of Symp}]
This is a local result. Suppose we have a sequence of symplectic embeddings \(f_n: B^{2n}(r) \embed (\R^{2n}, \omega)\) such that \(f_n \to f\) in \(C^0\), with \(f \in C^\infty\). We need to show \(A = Df|_0 \in \Sp_{2n}(\R)\). We will show \(A^*\omega = \pm \omega\); this is enough to conclude \(A^*\omega = \omega\), by considering \(f_n \times \id: B^{2n + 2}(r) \to \R^{2n + 2}\), which \(C^0\)-converges to \(f \times \id\).
Assume \(A^*\omega \ne \pm \omega\). Work with \(A^\tau = -J_0 A^T J_0\), where \(J_0\) is the standard complex structure; this is such that \(\omega(Ax, y) = \omega(x, A^\tau y)\). There exist \(u, v \in \R^{2n}\) such that \(\omega(A^\tau u, A^\tau v) \ne \pm \omega(u, v)\). Note \(|\det A| = |\det A^\tau| = 1\). We can assume
\[
0 < \lambda^2 = |\omega(A^\tau u, A^\tau v)| < \omega(u, v) = 1.
\]
Fix signs, wlog \(\omega(A^\tau u, A^\tau v) = \lambda^2\). Extend \(u, v\) to a basis \(\mathcal B_1\) for \(\R^{2n}\) with respect to which \(\omega\) is given by the standard matrix. Similarly extend \(\frac{A^\tau u}{\lambda}, \frac{A^\tau v}{\lambda}\) to a basis \(\mathcal B_2\) for \(\R^{2n}\) such that \(\omega\) has the same matrix representation. Let \(S \in \Sp_{2n}(\R)\) be the linear map taking \(\mathcal B_1\) to \(\mathcal B_2\). Now with respect to \(\mathcal B_1\), \(A \compose S\) has matrix
\[
\begin{pmatrix}
\lambda & 0 & 0 & \cdots & 0 \\
0 & \lambda & 0 & \cdots & 0 \\
* & * & * & \cdots & * \\
\end{pmatrix}
\]
\(A \compose S = D(f \compose S)_0\), and this takes \(B^{2n}(\varepsilon)\) into \(B^2(\lambda \varepsilon) \times \R^{2n - 2}\). For sufficiently small \(\varepsilon\), \(f \compose S: B^{2n}(\varepsilon) \embed B^2(\lambda \varepsilon) \times \R^{2n - 2}\). Thus there exists \(n\) such that \(f_n \compose S: B^{2n}(\varepsilon) \embed B^2(\lambda \varepsilon) \times \R^{2n - 2}\), contradicting Gromov non-squeezing since \(\lambda < 1\).
\end{proof}
\printindex
\end{document}
% Text
% Cannas da Silva, Lectures on Symplectic Geometry
% McDuff-Salamon, Introduction to Symplectic Topology
% ---, J-holomorphic curves and symplectic topology (more advanced) | {
"alphanum_fraction": 0.6369982048,
"avg_line_length": 58.3420612813,
"ext": "tex",
"hexsha": "10089939b406fec27e8769c5d7bfd06d38143a07",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2022-02-25T17:20:19.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-11-08T16:16:20.000Z",
"max_forks_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "geniusKuang/tripos",
"max_forks_repo_path": "III/symplectic_topology.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0",
"max_issues_repo_issues_event_max_datetime": "2020-10-14T21:29:15.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-10-11T20:43:21.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "geniusKuang/tripos",
"max_issues_repo_path": "III/symplectic_topology.tex",
"max_line_length": 1220,
"max_stars_count": 27,
"max_stars_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "geniusKuang/tripos",
"max_stars_repo_path": "III/symplectic_topology.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-10T15:48:31.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-01-15T05:02:27.000Z",
"num_tokens": 37412,
"size": 104724
} |
\chapter{Sequence functions}
This module was written entirely by Marco Heisig. It provides
high-performance implementations of the functions in the ``sequences''
chapter of the \hs{}. High performance is obtained by identifying
important special cases, such as the use of \texttt{eq} or \texttt{equal}
as the \texttt{:test} function, or the use of \texttt{identity} as the
\texttt{:key} function. These special cases are handled by macros
according to the technique described in our 2017 ELS paper
\cite{Durand:2017:ELS:Sequence}.
In addition to the technique described in that paper, Marco Heisig
decided to write the sequence functions as generic functions,
specialized to the type of the sequence argument. Many
implementations have specialized versions of vectors, based on element
type, and a method specialized this way can often be significantly
faster than code that uses a generic \texttt{vector} type. In order
to account for the different set of vector subclasses available in
different \commonlisp{} implementations, a macro
\texttt{replicate-for-each-vector-class} is used to generate a method
for each such subclass. Client code can customize this module by
defining this macro according to its set of vector subclasses.
This module can be used as an ``extrinsic'' module, i.e., it can be
loaded into an existing \commonlisp{} implementation without
clobbering the native sequence functions of that implementation. This
feature has been used to compare the performance of the functions in
this module to that of the native sequence functions of \sbcl{}, and
the result is very encouraging, in that many functions in this module
are as fast, or faster, than the native \sbcl{} equivalents.
The \texttt{sort} functions in this module use the technique described
in a paper by Kim and Kutzner \cite{10.1007/978-3-540-30140-0_63}.
This technique is based on merging.
\section{Future work}
\label{sec-sequence-functions-future-work}
Concerning the \emph{sorting functions} (i.e., \texttt{sort} and
\texttt{stable-sort}), there is a challenge. The current
implementation uses a merging technique where no additional space is
required. However, the current implementation is not as fast as
traditional merging with O(n) extra space. So the question is whether
there is an intermediate solution where a small amount of additional
space is used whenever there is such space available, for example on
the stack.
Currently, this module is located in the \texttt{Code/Sequence}
directory of the \sysname{} repository, but we may extract it to a
separate repository in the future.
%% LocalWords: subclasses
| {
"alphanum_fraction": 0.79348659,
"avg_line_length": 49.2452830189,
"ext": "tex",
"hexsha": "a595db1b49acc58956b31c99580f58fcc66549a1",
"lang": "TeX",
"max_forks_count": 80,
"max_forks_repo_forks_event_max_datetime": "2022-03-15T05:30:33.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-03-06T12:52:05.000Z",
"max_forks_repo_head_hexsha": "ec5cc25de783ecce373081ab72d2a04359155ad6",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "gwerbin/SICL",
"max_forks_repo_path": "Specification/chap-sequences.tex",
"max_issues_count": 85,
"max_issues_repo_head_hexsha": "ec5cc25de783ecce373081ab72d2a04359155ad6",
"max_issues_repo_issues_event_max_datetime": "2022-02-18T11:06:19.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-03-25T00:31:09.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "gwerbin/SICL",
"max_issues_repo_path": "Specification/chap-sequences.tex",
"max_line_length": 70,
"max_stars_count": 842,
"max_stars_repo_head_hexsha": "ec5cc25de783ecce373081ab72d2a04359155ad6",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "gwerbin/SICL",
"max_stars_repo_path": "Specification/chap-sequences.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-30T14:03:04.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-12T15:44:23.000Z",
"num_tokens": 614,
"size": 2610
} |
\documentclass[amsmath,amssymb,aps,pra,reprint,groupedaddress,showpacs]{revtex4-1}
\usepackage{multirow}
\usepackage{verbatim}
\usepackage{color,graphicx, verbatim, float}
\input{../Common}
\begin{document}
%%%%%%%%%%%%%%% TITLE %%%%%%%%%%%%%%%%
\title{Fibonacci Sequence Prime Factors Properties}
%%%%%%%%%%%%%%% AUTHORS %%%%%%%%%%%%%%%%
\author{Lucchi Manuele}
\email[]{[email protected]}
\affiliation{IT Department Students, Universita' degli Studi di Milano, Citta' degli Studi, Milano, Italia}
%%%%%%%%%%%%%%% DATE %%%%%%%%%%%%%%%%
\date{\today}
%%%%%%%%%%%%%%% ABSTRACT %%%%%%%%%%%%%%%%
\begin{abstract}
The purpose of this research is to find a common pattern in the decomposition into prime factors of the Fibonacci Numbers.
\end{abstract}
\maketitle
%%%%%%%%%%%%%%% INTRODUCTION %%%%%%%%%%%%%%%%
\section{Introduction}
The Fibonacci Sequence [1] is a recurrence introduced by Fibonacci in 1202. The terms of the sequence grow rapidly, and it has many interesting properties.
The recurrence is $F_n = F_{n-1} + F_{n-2}$, with $F_1 = F_2 = 1$.
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/fibonacci(100).png}
\caption{The Fibonacci Sequence}
\end{figure}
%%%%%%%%%%%%%%% POWER OF 2 BEHAVIOURS IN FIBONACCI NUMBERS %%%%%%%%%%%%%%
\section{Powers of 2 behaviours in Fibonacci Numbers}
We first tested the smallest prime. If we look at the power of 2 present in the first 100 Fibonacci Numbers, we get something like this:
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/factor(100,2).png} % Power of 2 in 100
\caption{Power of 2 as factor of the first 100 Fibonacci Numbers}
\end{figure}
It's pretty easy to see some sort of regularity in the power of 2 contained in each Fibonacci Number. We can now increase the range to 1000 to see it better:
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/factor(1000,2).png} % Power of 2 in 1000
\caption{Power of 2 as factor of the first 1000 Fibonacci Numbers}
\end{figure}
And again we can see a similar behaviour. First of all, we can notice that only the Fibonacci Numbers whose index is a multiple of 3 are divisible by 2.
So we can suppose that every index divisible by 3 yields an even Fibonacci Number:
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/factor(100,2)only(3).png} % Power of 2 in 100 divisible for 3
\caption{Power of 2 as factor of the first 100 Fibonacci Numbers but only for $n$ divisible by 3}
\end{figure}
Instead, if we look only at every $n$ not divisible by 3, we get something like this
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/factor(100,2)except(3).png} % Power of 2 in 100 not divisible for 3
\caption{Power of 2 as factor of the first 100 Fibonacci Numbers but only for $n$ not divisible by 3}
\end{figure}
So we can draw a first conclusion:
$$\mathbf{\forall n \in \mathbb{N} : 3\mid n \implies 2\mid Fibonacci(n)}$$
Going forward, we can describe the regularity for every exponent of 2.
For example, $F(n)$ is divisible by 2 starting from $n = 3$ (or $2^0 \cdot 3$) and for every $n \in \mathbb{N}$ that is a multiple of 3.
If we search for Fibonacci Numbers divisible by $2^3$, we have to start from $n = 6$ (or $2^1 \cdot 3$)
and increase by 12 (or $2^2 \cdot 3$) at a time. For $2^4$ the starting $n$ becomes 12 (or $2^2 \cdot 3$)
and the increase between equal exponents is 24 (or $2^3 \cdot 3$).\\
So, defining $E$ as a given exponent of 2 (with $E \geq 3$) inside Fibonacci Numbers, the first $n$ at which it appears is $2^{E-2} \cdot 3$, and the difference between the $n$'s whose $F(n)$ has the same exponent of 2
is $2^{E-1} \cdot 3$.\\
However, the regularity in the exponents of 2 does not hold for all exponents: 0, 1 and 2 do not follow it.
In particular, it seems that 2 never appears as an exponent at all,
so \AllBold{we will not find any $F(n)$ that is divisible by $2^2$ but not by $2^k$ for some $k \in \mathbb{N}$, $k>2$}
%%%%%%%%%%%%%%% A FIRST SPECIFIC FORMULA %%%%%%%%%%%%%%%%
\section{A first, specific Formula}
We now have all the tools to develop a simple equation that gives the power of 2 in the $n$th Fibonacci Number. Define $S_n$ as the exponent of 2 of the $n$th Fibonacci Number. When $n$ is divisible by 12 we can say that $\mathbf{S_n = S_{n/2} + 1}$. The remaining cases must be handled separately: for $n \equiv 6 \pmod{12}$ the exponent is always 3 (consistent with the observation above that the exponent 2 never appears), for $n$ odd and divisible by 3 it is always 1, and when $3 \nmid n$ (including $n = 1, 2$) or $n = 0$ it is 0.
So the final recurrent formula will be:
$$S_n = \begin{cases} S_{n/2} + 1, & \mbox{if } 12 \mid n \\ 3, & \mbox{if } n \equiv 6 \pmod{12} \\ 1, & \mbox{if } 2 \nmid n \land 3 \mid n \\ 0, & \mbox{if } 3 \nmid n \lor n = 0
\end{cases} $$
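As a sanity check, here is a short R sketch (not part of the original analysis; the function names are ours) comparing the recurrence with the true exponent of 2, computed from $F(n)$ modulo $2^{25}$, which is exact as long as the true exponent stays below 25:
\begin{verbatim}
fib_mod <- function(n, m) {          # F(n) mod m, computed iteratively
  if (n == 0) return(0)
  a <- 0; b <- 1
  for (i in seq_len(n - 1)) { t <- (a + b) %% m; a <- b; b <- t }
  b
}
v2 <- function(x) {                  # exponent of 2 in x
  e <- 0
  while (x > 0 && x %% 2 == 0) { x <- x %/% 2; e <- e + 1 }
  e
}
S <- function(n) {                   # the recurrence above
  if (n == 0 || n %% 3 != 0) return(0)
  if (n %% 2 != 0) return(1)
  if (n %% 12 == 6) return(3)
  S(n %/% 2) + 1
}
for (n in 1:200) stopifnot(S(n) == v2(fib_mod(n, 2^25)))
\end{verbatim}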
%//DEFINITELY IMPROVE THE FORMULA
%%%%%%%%%%%%%%% A RECURRENT BEHAVIOR IN PRIME FACTORS SCOMPOSITION OF FIBONACCI NUMBERS %%%%%%%%%%%%%%%%
\section{A recurrent behavior in the Prime Factor Decomposition of Fibonacci Numbers}
2 is not the only base to present a similar behavior. Every number (even non-prime ones) has a similar regularity, but with some differences.
For example the 3
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/factor(100,3).png} % Power of 3 in 100
\caption{Power of 3 as factor of the first 100 Fibonacci Numbers}
\end{figure}
Or the 5
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/factor(100,5).png} % Power of 5 in 100
\caption{Power of 5 as factor of the first 100 Fibonacci Numbers}
\end{figure}
And so on even for bigger prime numbers like 23
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/factor(10000,23).png} % Power of 23 in 10000
\caption{Power of 23 as factor of the first 10000 Fibonacci Numbers}
\end{figure}
%//BEGIN FOCUS ON STARTING POINT
\begin{comment}
But it's not exactly the same, it seems like the more we increase the base, the less the number appears as a prime factor in Fibonacci Numbers
(so with an increasing distance between the same exponents), but also this is not true, there are a lot of cases that don't follow the growing rule.
But before getting in a more generic formula, we have to name some things (since they are going to be different for every base).
\begin{enumerate}
\item $\mathbf{\Gamma}$ will be the base of the prime (or non-prime) number we are taking as example.
\item $\mathbf{\Omega}$ will be the exponent of base $\Gamma$ of $F(n)$
\item $\mathbf{\Delta \Omega}$ will be defined as the distance between the Fibonacci Number before the next equal exponent will appear
\item $\mathbf{F(n)}$ will be the $n_{th}$ Fibonacci Number
\item $\mathbf{\lambda(\Omega, \Gamma)}$ will identify the function that returns the first $n$ where the exponent of the $\Gamma$ base appears in $F(n)$
\item $\mathbf{\lambda_n(\Omega, \Gamma)}$ will, in the same way, be the $n_{th}$ Fibonacci Number where the given exponent for the given base appears
\end{enumerate}
If we look at FIG. 6 and 7, we have two examples to notice the differences between different bases.
For $\Gamma = 3$ and $\Gamma = 5$ we have $\lambda(1, 3) = 4$ and $\lambda(1, 5) = 6$.
It seems like $\lambda(1, \Gamma) = \Gamma + 1$, but that's wrong: for example,
for $\Gamma = 11$ we have $\lambda(1, 11) = 10$. For $\Gamma = 13$ it will be $n = 7$.
So let's have a look at the starting points of the first prime numbers:
\end{comment}
%//END FOCUS ON STARTING POINT
%%%%%%%%%%%%%%% FOCUS ON NON PRIME FACTORS %%%%%%%%%%%%%%%%
\section{Focus on non-prime factors}
Until now we tested only prime numbers, but we can expect a similar behavior for non-primes.
For example, if we look at the powers of 4 $(2^2)$ we notice that the exponent is exactly half that of 2 (rounded down), as it must be, since $4^k \mid m$ if and only if $2^{2k} \mid m$.
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/factor(100,[2,4]).png}
\caption{Comparison of powers of 2 and 4 of the first 1000 Fibonacci Numbers}
\end{figure}
We can get a better overview by comparing, for two different factors, the indices at which the same exponent occurs.
For example, using 2 and 4 as factors and subtracting the indices in the Fibonacci Numbers
at which each occurrence of grade 1 appears, we get this:
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/distance(1000,2,4,1,minus).png}
\caption{Difference between indexes of the same recurrence of the same grade of 2 and 4}
\end{figure}
As expected it's a linear function. If we divide them instead, we can see the ratio is exactly $\frac{1}{2}$:
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/distance(1000,2,4,1,div).png}
\caption{Division between indexes of the same recurrence of the same grade of 2 and 4}
\end{figure}
%%%%%%%%%%%%%%% COMPARISON ON DIFFERENT BASES %%%%%%%%%%%%%%%%
\section{Comparison on different bases}
But if we try to obtain a coherent behaviour for other pairs of numbers we'll be disappointed. The examples below compare 9 and 3:
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/distance(1000,3,9,1,minus).png}
\caption{Difference between indexes of the same recurrence of the same grade of 3 and 9}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/distance(1000,3,9,1,div).png}
\caption{Division between indexes of the same recurrence of the same grade of 3 and 9}
\end{figure}
While in both of them we can see some sort of logic, the ratio is neither $1/3$ nor the square root ($9 = 3^2$).
However, this sort of behavior is once again recurrent for every number, even between numbers that don't have anything in common.
For example, 2 and 3:
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/distance(500,2,3,1,minus).png}
\caption{Difference between indexes of the same recurrence of the same grade of 2 and 3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=3in]{assets/distance(500,2,3,1,div).png}
\caption{Division between indexes of the same recurrence of the same grade of 2 and 3}
\end{figure}
We tested a lot of possibilities but were not able to find what connects them all, except one thing:
in every test made by dividing the corresponding indices, the ratio stabilizes near a certain number as the iterations progress.
We can see that already in FIG. 13, where it goes somewhere between 0.44 and 0.45, and in FIG. 15, where it tends to 1.
On the other hand, subtracting the indices shows several different behaviours, from linear functions to regular curves that mimic linear functions,
or again, what we saw in FIG. 14, which is quite regular but different from everything else seen so far.
Of course the same tests can be done with different exponent grades, with the same properties.
%%%%%%%%%%%%%%% CONCLUSION %%%%%%%%%%%%%%%%
\section{Conclusion}
As we know, the Fibonacci Numbers hold a great number of interesting properties, and this one adds to the list.
%%%%%%%%%%%%%%% REFERENCES %%%%%%%%%%%%%%%%
\begin{thebibliography}{24}
\bibitem{fibonaccigeneral}
{OEIS},
\textit{Sequence A000045 (Fibonacci Numbers)}
\end{thebibliography}
\end{document}
%%https://www.overleaf.com/learn/latex/Positioning_images_and_tables | {
"alphanum_fraction": 0.7210361769,
"avg_line_length": 44.9598393574,
"ext": "tex",
"hexsha": "31e98171455428b4cdbd105ffefec5a567d06042",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e3c683dddccefc8876de0c7bb30a3375d1f94929",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "manuelelucchi/Fibonacci",
"max_forks_repo_path": "prime_factors_scomposition/fibonacci_prime_factors_scomposition.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e3c683dddccefc8876de0c7bb30a3375d1f94929",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "manuelelucchi/Fibonacci",
"max_issues_repo_path": "prime_factors_scomposition/fibonacci_prime_factors_scomposition.tex",
"max_line_length": 217,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "e3c683dddccefc8876de0c7bb30a3375d1f94929",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "manuelelucchi/Fibonacci",
"max_stars_repo_path": "prime_factors_scomposition/fibonacci_prime_factors_scomposition.tex",
"max_stars_repo_stars_event_max_datetime": "2021-02-06T23:49:23.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-02-06T23:49:23.000Z",
"num_tokens": 3266,
"size": 11195
} |
%These notes summarize the lecture notes from the Linear Modelling course at Sheffield's School of Mathematics and Statistics, MSc degree programme. The original notes were written by Dr.\ Kevin Walters and Dr.\ Jeremy Oakley. This summary is completely derived from these notes and from other MSc sources. Any errors are most probably mine.
%Everything is in matrix form unless a lower case letter with a subscript (such as $x_i$) is used (even there, I might deviate from this convention if I need to index sub-matrices; it's best to look at the context to decide what is meant).
\section{Background}
\subsection{Derivatives of combinations of functions}
\begin{equation}
(uv)' = uv' + vu'
\end{equation}
\begin{equation}
(u/v)' = \frac{vu' - uv'}{v^2}
\end{equation}
\subsection{Some key distributional results}
\begin{enumerate}
\item
If $Z_1,\dots,Z_n$ are independent $N(0,1)$ random variables then $\sum Z_i^2 \sim \chi^2_n$.
\item
If $X\sim N(0,1)$, $Y\sim \chi^2_v$ and $X, Y$ are independent then $T=\frac{X}{\sqrt{Y/v}} \sim t_v$.
\item
If $X\sim \chi_v^2$, $Y\sim \chi_w^2$ and $X, Y$ are independent then $\frac{X/v}{Y/w}\sim F_{v,w}$.
\end{enumerate}
\subsection{Some very basic matrix algebra facts}
\textbf{Inverse (2x2)}:
$\begin{pmatrix}
a & b \\
c & d\\
\end{pmatrix}^{-1}
= \frac{1}{ad-bc}
\begin{pmatrix}
d & -b \\
-c & a\\
\end{pmatrix}$
\textbf{Inverse of non-singular matrices}.
If A and B are non-singular matrices then $(AB)^{-1} = B^{-1}A^{-1}$.
\textbf{Symmetric square matrix}: $A=A^T$.
If a symmetric matrix A is non-singular then $A^{-1}$ is also symmetric.
$AA^{-1}=A^{-1}A=I$ given A is square and invertible.
\textbf{Multiplication is distributive}: Multiplication is distributive over addition and subtraction, so $(A-B)(C-D)=AC -BC-AD+BD$.
\textbf{Transpose}: $(A+B)^T =A^T+B^T$ and $(AB)^T =B^TA^T$
\textbf{Sum of squares}: $\sum x_i^2 = \mathbf{x}^T \mathbf{x}$.
\textbf{Symmetry under multiplication}: If A is $n\times p$, then $AA^T$ and $A^TA$ are symmetric.
\textbf{Trace of a square matrix}:
\begin{enumerate}
\item
$tr(A)=\sum a_{ii}$
\item
$tr(A + B) = tr(A) + tr(B)$
\item
$tr(cA) = ctr(A)$
\item
$tr(AB) = tr(BA)$
\end{enumerate}
\textbf{Idempotent}: $A^2 = AA = A$. Example: $A = I_n$; this is the only non-singular idempotent matrix.
If A is idempotent and if $A\neq I_n$, then A is singular and its trace is equal to its rank $n-p$, for some $p>0$.
\textbf{Rank}: the number of linearly independent columns or rows of A.
How to determine linear independence: the columns $a_1, \dots, a_p$ of $A$ are linearly independent if $\sum_i c_i a_i = 0$ implies $c_1 = \dots = c_p = 0$.
\section{Basic facts}
\begin{equation}
y=X\beta+\epsilon
\end{equation}
\begin{tabular}{@{}ll@{}ll@{}}
$E(y) = X\beta = \mu$ & & $E(\epsilon)=0$ \\
$Var(y) = \sigma^2 I_n $ & & $Var(\epsilon) = \sigma^2 I_n$ \\
\end{tabular}
\begin{equation}
y = X\hat{\beta} + e
\end{equation}
\begin{fmpage}{\linewidth}
Note that in the single-regressor case $X'X = S_{xx}$ is a scalar, so with
$G=(X'X)^{-1}$ we have $S_{xx}=\frac{1}{G}$.
\end{fmpage}
\begin{tabular}{@{}ll@{}ll@{}}
Results for $\hat{\beta}$ & Results for $e$\\
$E(\hat{\beta}) = \beta$ & $E(e) = 0$\\
$Var(\hat{\beta}) = \sigma^2 (X^T X)^{-1} = \frac{\sigma^2}{S_{xx}}=\sigma^2G$ & $Var(e)=\sigma^2 M$ \\
$\hat{\beta} \sim N_p(\beta,\sigma^2 (X^T X)^{-1})$ & $Var(e_i)=\sigma^2 m_{ii} $\\
& $E(e_i^2)= \sigma^2 m_{ii}$\\
$\hat{\beta} = (X^T X)^{-1} X^T y$, $X$ has full rank & $E(\sum e_i^2) = \sigma^2 (n-p)$\\
\end{tabular}
\medskip
\textbf{Sum of Squares}:
\begin{equation}
S(\hat{\beta}) = \sum e_i^2 = e^T e = (y-X\hat{\beta})^T (y-X\hat{\beta}) = y^T y - y^T X \hat{\beta} = S_r
\end{equation}
Alternatively: $S_r= y^Ty - \hat{\beta}^T X^T X\hat{\beta}=y^Ty - \hat{\beta}^T X^T y$ (see review exercises 2).
\textbf{Estimation of error variance: $e=My$}
\begin{equation}
e = y - X\hat{\beta} = y - X (X^T X)^{-1} X^T y = My
\end{equation}
\noindent
where
\begin{equation}
M = I_n - X (X^T X)^{-1} X^T
\end{equation}
M is symmetric, idempotent $n\times n$.
Note that $MX=0$, which means that
\begin{equation}
E(e)=E(My) = ME(y)= MX\beta = 0
\end{equation}
Also, $Var(e) = Var(My) = M Var(y) M^T = \sigma^2 M M^T = \sigma^2 M$.
\medskip
\textbf{Important properties of M}:
\begin{itemize}
\item $M$ is singular because every idempotent matrix except $I_n$ is singular.
\item $trace(M)=rank(M)=n-p$.
\end{itemize}
\medskip
\textbf{Residual mean square}:
\begin{equation}
\hat{\sigma}^2 = \frac{\sum e_i^ 2}{n-p} \quad E(\hat{\sigma}^2)=\sigma^2
\end{equation}
The square root of $\hat{\sigma}^2$, $\hat{\sigma}$ is the \textbf{residual standard error}.
Note: The phrase ``standard error'' here should not be misinterpreted to mean standard error in the sense of ``SE''.
\textbf{Variance-covariance matrix}:
In a model like \begin{verbatim}fm<-lm(Maint ~ Age, data = data)\end{verbatim}, the variance-covariance matrix is:
\begin{equation}
\begin{pmatrix}
Var(\hat{\beta}_0) & Cov(\hat{\beta}_0,\hat{\beta}_1) \\
Cov(\hat{\beta}_0,\hat{\beta}_1) & Var(\hat{\beta}_1)\\
\end{pmatrix}
\end{equation}
The correlation between the two parameter estimates is therefore:
\begin{equation}
Corr(\hat{\beta}_0,\hat{\beta}_1) = \frac{Cov(\hat{\beta}_0,\hat{\beta}_1)}{SE(\hat{\beta}_0) SE(\hat{\beta}_1)}
\end{equation}
Example (tractor data):
\begin{verbatim}
> vcov(fm)
(Intercept) Age
(Intercept) 21591 -4624.0
Age -4624 1267.9
\end{verbatim}
We can check the correlation calculation using
\begin{verbatim}
> cov2cor(vcov(fm))
(Intercept) Age
(Intercept) 1.00000 -0.88378
Age -0.88378 1.00000
\end{verbatim}
\subsection{Some short-cuts for hand-calculations}
\begin{tabular}{@{}ll@{}}
$S_{xx} = \sum (x_i - \bar{x})^2$ & $= \sum x_i^2 - n\bar{x}^2$\\
$S_{yy} = \sum (y_i - \bar{y})^2$ & $= \sum y_i^2 - n\bar{y}^2$\\
$S_{xy} = \sum (x_i -\bar{x})(y_i -\bar{y})$ & = $\sum x_i y_i - n\bar{x}\bar{y}$\\
\end{tabular}
\begin{equation}
\hat{\beta} = (X^T X)^{-1} X^T y =
\begin{pmatrix}
\bar{y} - \bar{x} \frac{S_{xy}}{S_{xx}}\\
\frac{S_{xy}}{S_{xx}}
\end{pmatrix}
\end{equation}
\begin{equation}
X^T X = \begin{pmatrix}
n & \sum_{i=1}^n x_i\\
\sum_{i=1}^n x_i & \sum_{i=1}^n x_i^2\\
\end{pmatrix}
\end{equation}
\begin{equation}
(X^T X)^{-1} = \frac{1}{nS_{xx}}
\begin{pmatrix}
S_{xx}+n\bar{x}^2 & -n\bar{x} \\
-n\bar{x} & n\\
\end{pmatrix}
\end{equation}
\noindent
Note that $\sum_{i=1}^n x_i = n\bar{x}$.
\begin{equation}
X^T y = \begin{pmatrix}
n\bar{y} \\
S_{xy} + n\bar{x}\bar{y}
\end{pmatrix}
\end{equation}
See \cite[25]{DraperSmith} for a full exposition.
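As a quick numerical illustration (the data below are made up, not the tractor data), the short-cut formulas agree with the matrix formula and with \texttt{lm()}:
\begin{verbatim}
x <- c(1, 3, 4, 6, 8); y <- c(2, 5, 6, 9, 11)   # toy data
n <- length(x)
Sxx <- sum(x^2) - n * mean(x)^2
Sxy <- sum(x * y) - n * mean(x) * mean(y)
beta1 <- Sxy / Sxx
beta0 <- mean(y) - mean(x) * beta1
X <- cbind(1, x)
solve(t(X) %*% X) %*% t(X) %*% y    # matches c(beta0, beta1)
fit <- lm(y ~ x)
coef(fit)                           # lm() agrees
\end{verbatim}
(The objects \texttt{x}, \texttt{y}, \texttt{n} and \texttt{fit} are reused in the later sketches.)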
\subsection{Gauss-Markov conditions}
This imposes distributional assumptions on $\epsilon = y - X \beta$:
$E(\epsilon)=0$ and $Var(\epsilon)=\sigma^2 I_n$.
\subsection{Gauss-Markov theorem}
Let $a$ be any $p \times 1$ vector and suppose that $X$ has rank $p$. Of all estimators of $\theta = a^T \beta$ that are unbiased and linear functions of $y$, the estimator $\hat{\theta} = a^T \hat{\beta}$ has minimum variance. Note that $\theta$ is a scalar.
Note: no normality assumption is required! But if $\epsilon \sim N(0,\sigma^2 I_n)$, the $\hat{\beta}$ have smaller variances than any other estimators.
\textbf{Minimum variance unbiased linear estimators}:
to-do
\subsection{$R^2$ or Coefficient of determination}
\begin{tabular}{@{}ll@{}}
$S_{TOTAL} = (y-\bar{y})^T(y-\bar{y})$ $= y^T y - n\bar{y}^2$ & \\
$S_{REG} = (X\hat{\beta}-\bar{y})^T (X\hat{\beta}-\bar{y})$ & \\
$S_r = \sum e_i^2 = (y-X\hat{\beta})^T (y-X\hat{\beta})$ & \\
\end{tabular}
\begin{equation}
S_{TOTAL} = S_{REG}+ S_r
\end{equation}
\begin{equation}
R^2 = \frac{S_{TOTAL}-S_r}{S_{TOTAL}} = \frac{S_{REG}}{S_{TOTAL}}
\end{equation}
For $y = 1_n \beta_0 + \epsilon$, we have $R^2 = \frac{S_{REG}}{S_{TOTAL}} = 0$, because $X\hat{\beta} = \bar{y}$ and so $S_{REG} = (X\hat{\beta} - \bar{y})^T (X\hat{\beta} - \bar{y}) = 0$.
In simple linear regression, $R^2 = r^2$. $R^2$ is a generalization of $r^2$.
Adjusted $R^2= R_{Adj}^2$. $R_{Adj}^2= 1-\frac{S_r/(n-p)}{S_{TOTAL}/(n-1)}$.
$R^2$ increases with increasing numbers of explanatory variables, therefore $R_{Adj}^2$ is better.
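A small R check of these formulas, reusing the toy fit from the earlier sketch:
\begin{verbatim}
Sr <- sum(resid(fit)^2)               # S_r
Stot <- sum((y - mean(y))^2)          # S_yy, the adjusted total SS
p <- 2                                # parameters: intercept + slope
c(1 - Sr / Stot, summary(fit)$r.squared)        # R^2, two ways
c(1 - (Sr / (n - p)) / (Stot / (n - 1)),
  summary(fit)$adj.r.squared)                   # adjusted R^2
\end{verbatim}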
\section{Hypothesis testing}
\subsection{Some theoretical background}
\textbf{Multivariate normal}:
Let $X^T = (X_1,\dots,X_p)$, where the $X_i$ are univariate random variables.
$X$ has a multivariate normal distribution if and only if every linear combination $a^T X$ of its components has a univariate normal distribution. (Each component being normal is necessary but not sufficient.)
\textbf{Linear transformations}:
Let $X \sim N_p(\mu, \Sigma)$, let $A$ be a constant $q \times p$ matrix and $b$ a constant $q$-vector. Then $AX + b\sim N_q (A\mu + b, A\Sigma A^T)$.
\textbf{Standardization}:
Note that $\Sigma$ is positive definite (it's a variance covariance matrix), so $\Sigma = CC^T$.
$C$ is like a square root (not necessarily unique).
It follows ``immediately'' that
\begin{equation}
C^{-1} (X-\mu) \sim N_p (0_p, I_p)
\end{equation}
If $\Sigma$ is a diagonal matrix, then $X_1,\dots,X_n$ are independent and uncorrelated.
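The standardization above can be carried out numerically with a Cholesky factor; a minimal sketch (the particular $\Sigma$ is made up):
\begin{verbatim}
Sigma <- matrix(c(2, 1, 1, 3), 2, 2)   # positive definite covariance
C <- t(chol(Sigma))                    # lower triangular, Sigma = C C^T
z <- rnorm(2)                          # z ~ N_2(0, I)
w <- as.vector(C %*% z)                # w ~ N_2(0, Sigma)
solve(C) %*% w                         # back to N_2(0, I); recovers z
\end{verbatim}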
\textbf{Quadratic forms}:
Recall distributional result: If we have $n$ independent standard normal random variables, their sum of squares is $\chi_n^2$.
Let $z = C^{-1} (X-\mu)$, with $\Sigma=CC^T$. The sum of squares $z^T z$ is:
\begin{equation}
\begin{split}
z^T z & = [C^{-1} (X-\mu)]^T [C^{-1} (X-\mu)]\\
& = (X-\mu)^T [C^{-1}]^T [C^{-1}](X-\mu) \quad \dots (AB)^T=B^T A^T\\
\end{split}
\end{equation}
Note that $ [C^{-1}]^T = [C^{T}]^{-1}$. Therefore,
\begin{equation}
\begin{split}
[C^{-1}]^T [C^{-1}] & = [C^T]^{-1} [C^{-1}]\\
& = (C^T C)^{-1}\\
& = (C C^T)^{-1}\\
& = \Sigma^{-1}\\
\end{split}
\end{equation}
Therefore: $z^T z = (X-\mu)^T \Sigma^{-1} (X-\mu)\sim \chi_p^2$, where $p$ is the dimension of $X$.
\textbf{Quadratic expressions involving idempotent matrices}
Given a matrix $K$ that is idempotent, symmetric. Then:
\begin{equation}
x^T K x = x^T K^2 x = x^T K^T K x
\end{equation}
Let $x\sim N_n(\mu,\sigma^2 I_n)$, and let $K$ be a symmetric, idempotent $n \times n$ matrix such that $K\mu=0$. Let $r$ be the rank or trace of $K$. Then we have the
\begin{fmpage}{\linewidth}
\textbf{sum of squares property}:
\begin{equation}
x^T K x \sim \sigma^2 \chi_r^2
\end{equation}
\end{fmpage}
The above generalizes the fact that if we have $n$ independent standard normal random variables, their sum of squares is $\chi_n^2$.
Two points about the sum of squares property:
\begin{itemize}
\item
Recall that the expectation of a chi-squared random variable is its degrees of freedom. It follows that:
\begin{equation}
E(x^T K x) = \sigma^2 r
\end{equation}
If $K\mu\neq 0$, $E(x^T K x) = \sigma^2 r+\mu^T K\mu$.
\item If $K$ is idempotent, so is $I-K$. This allows us to split $x^T x$ into two components sums of squares:
\begin{equation}
x^T x = x^T K x+x^T (I-K) x
\end{equation}
\end{itemize}
\textbf{Partition sum of squares}:
[helps prove independence of SSs]
\begin{enumerate}
\item Let $K_1, K_2,\dots, K_q$ be \textbf{symmetric idempotent $n \times n$ matrices} such that
\item $\sum K_i= I_n$ and
\item $K_iK_j =0$, for all $i\neq j $.
\item Let $x\sim N_n(\mu, \sigma^2I_n)$.
\end{enumerate}
Then we have the following partitioning into \textbf{independent} sums of squares:
\begin{equation}
x^T x = \sum x^T K_i x
\end{equation}
If $K_i \mu = 0$, then $ x^T K_i x\sim \sigma^2 \chi_{r_i}^2$, where $r_i$ is the rank of $K_i$.
\textbf{Example}:
\begin{equation}
y^T y = y^T M y + y^T (I-M) y
\end{equation}
Let $K_1 = M$ and $K_2 = (I-M)$. It is easy to check that all four conditions above are satisfied; therefore the sums of squares are independent.
Note that
\begin{equation}
y^T M y = e^T e \sim \sigma^2 \chi^2_{n-p}
\end{equation}
and
\begin{equation}
y^T (I-M) y = \hat{\beta}^T (X^T X) \hat{\beta} \sim \sigma^2 \chi^2_{p} \quad (\hbox{when } X\beta = 0)
\end{equation}
Recall distributional result: if $X\sim \chi_v^2, Y\sim \chi_w^2$ and $X,Y$ independent then $\frac{X/v}{Y/w}\sim F_{v,w}$.
Therefore, $\frac{y^T (I-M) y/p}{y^T M y/(n-p)}\sim F_{p,n-p}$.
\subsection{Confidence intervals for $\hat{\beta}$}
Note that $\hat{\beta} \sim N_p (\beta,\sigma^2 (X^T X)^{-1})$, and that
$\frac{\hat{\sigma}^2}{\sigma^2} \sim \frac{\chi^2_{n-p}}{n-p}$.
From distributional theory we know that $T=\frac{X}{\sqrt{Y/v}} \sim t_v$ when $X\sim N(0,1)$, $Y\sim \chi^2_{v}$ and $X, Y$ are independent.
Let
$x_i$ be a column vector containing the values of the explanatory/regressor variables for a new observation $i$. Then if we define:
\begin{equation}
X=\frac{x_i^T \hat{\beta} - x_i^T \beta}{\sqrt{\sigma^2 x_i^T (X^T X)^{-1}x_i}} \sim N(0,1)
\end{equation}
\noindent
and
\begin{equation}
Y=\frac{\hat{\sigma}^2}{\sigma^2} \sim \frac{\chi^2_{n-p}}{n-p}
\end{equation}
It follows that $T=\frac{X}{\sqrt{Y/v}}$:
\begin{equation}
T= \frac{x_i^T \hat{\beta} - x_i^T \beta}{\sqrt{\hat{\sigma}^2 x_i^T (X^T X)^{-1}x_i}} =
\frac{ \frac{x_i^T \hat{\beta} - x_i^T \beta}{\sqrt{\sigma^2 x_i^T (X^T X)^{-1}x_i}}}{\sqrt{\frac{\hat{\sigma}^2}{\sigma^2}}}
\sim t_{n-p}
\end{equation}
I.e., a $100(1-\alpha)$\% CI:
\begin{equation}
x_i^T \hat{\beta} \pm t_{n-p,1-\alpha/2}\sqrt{\hat{\sigma}^2 x_i^T(X^T X)^{-1}x_i}
\end{equation}
Cf.\ a prediction interval:
\begin{equation}
x_i^T \hat{\beta} \pm t_{n-p,1-\alpha/2}\sqrt{\hat{\sigma}^2 (1+x_i^T(X^T X)^{-1}x_i)}
\end{equation}
Note that
\begin{enumerate}
\item
A prediction interval will be wider towards the edges; this is because the term $\hat{\sigma}^2 (1+x_i^T(X^T X)^{-1}x_i)$ in the prediction interval formula is minimized at the mean value of the
predictor variable. When $x_i = \bar{x}$ the term takes its smallest value, and the further the value $x_i$ is from $\bar{x}$, the larger the interval.
\item
The width of the prediction interval stays much more constant around the range of observed values.
This is because 1 is much larger than $x_i^T(X^T X)^{-1}x_i$; so if $x_i$ is near the mean value of $x$ then this term does not change much.
\end{enumerate}
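Both intervals can be checked numerically with \texttt{predict()} on the toy fit from the earlier sketch; the widths behave as described in the two notes above:
\begin{verbatim}
new <- data.frame(x = c(mean(x), max(x)))
predict(fit, new, interval = "confidence")  # narrowest at mean(x)
predict(fit, new, interval = "prediction")  # wider, width more constant
\end{verbatim}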
\subsection{Distributions of estimators and residuals}
Covar$(\hat{\beta},e)=0$:
Var$\begin{pmatrix}
\hat{\beta} \\
e \\
\end{pmatrix}
=
\begin{pmatrix}
Var(\hat{\beta}) & 0 \\
0 & Var(e) \\
\end{pmatrix}
=
\begin{pmatrix}
\sigma^2 (X^T X)^{-1} & 0 \\
0 & \sigma^2 M \\
\end{pmatrix}
$.
\textbf{Confidence intervals for components of $\beta$}
Let $G=(X^T X)^{-1}$, and $g_{ii}$ the $i$-th diagonal element.
\begin{equation}
\hat{\beta}_i \sim N(\beta_i, \sigma^2 g_{ii})
\end{equation}
Since $\hat{\beta}$ and $S_r$ are independent, we have:
\begin{equation}
\frac{\hat{\beta}_i - \beta_i}{\hat{\sigma}\sqrt{g_{ii}}} \sim t_{n-p}
\end{equation}
The 95\% CI:
\begin{equation}
\hat{\beta}_i \pm t_{n-p,1-\alpha/2}\, \hat{\sigma} \sqrt{g_{ii}}
\end{equation}
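In R, \texttt{vcov()} returns $\hat{\sigma}^2 G$, so the standard errors and these intervals can be checked directly (toy fit from before):
\begin{verbatim}
sqrt(diag(vcov(fit)))   # hat-sigma * sqrt(g_ii), the SEs
confint(fit)            # hat-beta_i +/- t_{n-p, 1-alpha/2} * SE
\end{verbatim}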
\subsection{Maximum likelihood estimators}
For $\sigma^2$:
Let $X_i$, $i=1,\dots,n$, be iid random variables with PDF $f(x; \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp \left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$, with $\mu$ known. Find $\hat \sigma$, the MLE of $\sigma$.
\begin{equation}
L(\sigma) = \prod f(x_i; \sigma) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp \left(-\sum \frac{(x_i - \mu)^2}{2\sigma^2}\right)
\end{equation}
Let $\ell$ be log likelihood. Then:
\begin{equation}
\ell (\sigma) = - \frac{n}{2}\log (2\pi) - n\log \sigma - \sum (x_i - \mu)^2 /(2\sigma^2)
\end{equation}
Differentiating and equating to zero to find maximum:
\begin{equation}
\ell ' (\sigma) = - \frac{n}{\sigma} + \sum (x_i - \mu)^2 /\sigma^3 =
0
\end{equation}
Rearranging the above, the MLE for $\sigma$ is:
\begin{equation}
\hat \sigma^2 = \sum (x_i - \mu)^2 /n
\end{equation}
Since $S_r\sim \sigma^2\chi^2_{n-p}$, $E(S_r)=\sigma^2 (n-p)$. So we need to scale $S_r$ as $S_r/(n-p)$ to get an estimator with expectation $\sigma^2$.
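A quick numerical check of this MLE (simulated data; $\mu$ treated as known):
\begin{verbatim}
set.seed(1)
xs <- rnorm(500, mean = 0, sd = 2)
nll <- function(s) -sum(dnorm(xs, mean = 0, sd = s, log = TRUE))
optimize(nll, interval = c(0.1, 10))$minimum^2  # numerical MLE of sigma^2
mean(xs^2)                                      # closed form: sum(x - mu)^2 / n
\end{verbatim}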
\subsection{Hypothesis testing}
A general format for specifying null hypotheses: $H_0: C\beta = c$, where $C$ is a $q\times p$ matrix and $c$ is a $q\times 1$ vector of known constants. The matrix $C$ effectively asserts specific values for $q$ linear functions of $\beta$. In other words, it asserts $q$ null hypotheses stated in terms of (components of) the parameter vector $\beta$.
E.g., given:
\begin{equation}
y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2+\epsilon_i
\end{equation}
\noindent
we can test $H_0: \beta_1=1, \beta_2=2$ by setting
$C=\begin{pmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
\end{pmatrix}$
and $c=\begin{pmatrix}
1\\
2\\
\end{pmatrix}$.
The alternative is usually the negation of the null, i.e., $H_1: C\beta\neq c$, which means that at least one of the $q$ linear functions does not take its hypothesized value.
\textbf{Constructing a test}:
\begin{equation}
C\hat{\beta} \sim N_q (c,\sigma^2 C (X^T X)^{-1} C^T)
\end{equation}
So, if $H_0$ is true, by sum of squares property:
\begin{equation}
(C\hat{\beta} - c)^T [C (X^T X)^{-1} C^T]^{-1} (C\hat{\beta} - c) \sim \sigma^2 \chi_q^2
\end{equation}
In other words:
\begin{equation}
\frac{(C\hat{\beta} - c)^T [C (X^T X)^{-1} C^T]^{-1} (C\hat{\beta} - c)}{ \sigma^2} \sim \chi_q^2
\end{equation}
Note that $\hat{\beta}$ is independent of $\hat{\sigma}^2$, and recall that
\begin{equation}
\frac{\hat{\sigma}^2 }{\sigma^2} \sim \frac{\chi_{n-p}^2}{n-p} \Leftrightarrow
\frac{\hat{\sigma}^2 (n-p)}{\sigma^2} \sim \chi_{n-p}^2
\end{equation}
Recall distributional result: if $X\sim \chi_v^2, Y\sim \chi_w^2$ and $X,Y$ independent then $\frac{X/v}{Y/w}\sim F_{v,w}$.
It follows that if $H_0$ is true, and setting
$X=\frac{(C\hat{\beta} - c)^T [C (X^T X)^{-1} C^T]^{-1} (C\hat{\beta} - c)}{ \sigma^2}$,
$Y=\frac{\hat{\sigma}^2 (n-p)}{\sigma^2}$, and setting the degrees of freedom to $v=q$ and $w=n-p$:
\begin{equation}
\frac{X/v}{Y/w}=
\frac{\frac{(C\hat{\beta} - c)^T [C (X^T X)^{-1} C^T]^{-1} (C\hat{\beta} - c)}{ \sigma^2}/q}{\frac{\hat{\sigma}^2 (n-p)}{\sigma^2}/(n-p)}
\end{equation}
Simplifying:
\begin{equation}
\frac{(C\hat{\beta} - c)^T [C (X^T X)^{-1} C^T]^{-1} (C\hat{\beta} - c)}{q\hat{\sigma}^2} \sim F_{q,n-p}
\end{equation}
%The above test is the \textbf{generalized likelihood ratio test}.
This is a \textbf{one-sided test} even though the original alternative was two-sided.
\textbf{Special cases of hypothesis tests}:
When $q$ is 1, we have only one hypothesis to test, the $i$-th element of $\beta$. Given:
\begin{equation}
y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2+\epsilon_i
\end{equation}
\noindent
we can test $H_0: \beta_1=0$ by setting
$C=\begin{pmatrix}
0 & 1 & 0\\
\end{pmatrix}$
and $c=0$.
Using the fact that $X\sim t(v)\Leftrightarrow X^2 \sim F(1,v)$, we have
\begin{equation}
\frac{\hat{\beta}_i - c_i}{\hat{\sigma}\sqrt{g_{ii}}} \sim t_{n-p}
\end{equation}
\subsection{Sum of squares}
This is a very important section!
\begin{fmpage}{\linewidth}
Recall:
If $K$ is idempotent, so is $I-K$. This allows us to split $x^T x$ into two components sums of squares:
\begin{equation}
x^T x = x^T K x+x^T (I-K) x
\end{equation}
Let $K_1, K_2,\dots, K_q$ be symmetric idempotent $n \times n$ matrices such that
$\sum K_i= I_n$ and $K_iK_j =0$, for all $i\neq j $. Let $x\sim N_n(\mu, \sigma^2)$.
Then we have the following partitioning into independent sums of squares:
\begin{equation}
x^T x = \sum x^T K_i x
\end{equation}
If $K_i \mu = 0$, then $ x^T K_i x\sim \sigma^2 \chi_{r_i}^2$, where $r_i$ is the rank of $K_i$.
\end{fmpage}
We can use the sum of squares property precisely when $K$ is idempotent and $K\mu = 0$. Below, $K=M$ and $\mu=E(y)=X\beta$.
Consider the sum of squares partition:
\begin{equation}
y^T y = \explain{\underline{y^T M y}}{S_r= e^T e} + \explain{\underline{y^T (I-M) y}}{\hat{\beta}^T (X^T X)\hat{\beta}}
\end{equation}
Note that the preconditions for sums of squares partitioning are satisfied:
\begin{enumerate}
\item $M$ is idempotent (and symmetric), rank=trace=$n-p$.
\item $I-M$ is idempotent (and symmetric), rank=trace=$p$.
\item $ME(y) = 0$ because $ME(y)=MX\beta$ and $MX=0$.
\end{enumerate}
We can therefore partition the sum of squares into two independent sums of squares:
\begin{equation}
y^T y = \explain{\underline{y^T M y}}{e^T e \sim \sigma^2 \chi_{n-p}^2} \hbox{~~~~~~~~}+\hbox{~~~~~~~~}
\explain{\underline{y^T (I-M) y}}{ \sim \sigma^2 \chi_p^2 \newline \hbox{ iff } X\beta=0, i.e., \beta=0}
\end{equation}
So, iff we have $H_0: \beta=0$, we can partition the sum of squares as above. Since $X$ has rank $p$, saying that $X\beta=0$ is equivalent to saying that $\beta=0$.
\subsection{Testing the effect of a subset of regressor variables}
Let:
\begin{equation}
C= \begin{pmatrix} 0_{q\times(p-q)} & I_q \end{pmatrix} \quad c=0, \hbox{ and } \beta=\begin{pmatrix} \beta_1\\ \beta_2 \end{pmatrix}
\end{equation}
Here, $\beta_1$ (of length $p-q$) and $\beta_2$ (of length $q$) are sub-vectors of $\beta$, not individual components.
Then, $C\beta = \beta_2$ and $H_0: \beta_2=0$. Note that the order of the elements in $\beta$ is arbitrary; i.e., any subset of $\beta$ can be tested.
Since $C\beta = \beta_2$ and $c=0$, under $H_0$ we can construct a sum of squares:
\begin{equation}
(C\hat{\beta} - c)^T [C (X^T X)^{-1} C^T]^{-1} (C\hat{\beta} - c) \sim \sigma^2 \chi_q^2
\end{equation}
This becomes (since $C\hat{\beta}=\hat{\beta}_2$):
\begin{equation}
\hat{\beta}_2^T [C (X^T X)^{-1} C^T]^{-1} \hat{\beta}_2 \sim \sigma^2 \chi_q^2
\end{equation}
We can rewrite this as: $\hat{\beta}_2^T G_{qq}^{-1} \hat{\beta}_2$, where $G_{qq}= C (X^T X)^{-1} C^T$ ($G_{qq}$ should not be confused with $g_{ii}$) is a $q\times q$ submatrix of $G=(X^T X)^{-1}$.
Note that $\hat{\beta}$ is independent of $\hat{\sigma}^2$, and
recall that $\frac{\hat{\sigma}^2 (n-p)}{\sigma^2} \sim \chi_{n-p}^2$. We can now construct the F-test as before:
\begin{equation}
\frac{\hat{\beta}_2^T [C (X^T X)^{-1} C^T]^{-1} \hat{\beta}_2}{q\hat{\sigma}^2} =
\frac{\hat{\beta}_2^T G_{qq}^{-1} \hat{\beta}_2}{q\hat{\sigma}^2}
\sim F_{q,n-p}
\end{equation}
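In R this subset F-test is what \texttt{anova()} reports for nested models; a sketch with a made-up extra regressor \texttt{z} added to the toy data from before:
\begin{verbatim}
z <- c(1, 0, 2, 1, 3)       # hypothetical second regressor
full <- lm(y ~ x + z)
reduced <- lm(y ~ x)
anova(reduced, full)        # F-statistic for H0: coefficient of z = 0
\end{verbatim}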
\textbf{Sums of squares}:
We can construct three idempotent matrices:
\begin{itemize}
\item
$M = I_n - X(X^T X)^{-1} X^T$
\item
$M_1 = X(X^T X)^{-1} X^T - [X(X^T X)^{-1} C^T] [\explain{\underline{C (X^T X)^{-1} C^T}}{G_{qq}}]^{-1}
[C(X^T X)^{-1} X^T]$
(that is: $M_1 = X(X^T X)^{-1} X^T - M_2$)
\item $M_2 = [X(X^T X)^{-1} C^T] [\explain{\underline{C (X^T X)^{-1} C^T}}{G_{qq}}]^{-1}
[C(X^T X)^{-1} X^T]$
\end{itemize}
Note that $M+M_1+M_2=I_n$ and $MM_1=MM_2=M_1M_2=0$. I.e., sum of squares partition property applies. We have three independent sums of squares:
\begin{enumerate}
\item $S_r = y^T M y$
\item $S_1 = y^T M_1 y = \hat{\beta}^T X^T X \hat{\beta}- \hat{\beta}_2^T G_{qq}^{-1} \hat{\beta}_2$
\item $S_2 = y^T M_2 y = \hat{\beta}_2^T G_{qq}^{-1} \hat{\beta}_2$
\end{enumerate}
So: $y^T y = S_r + S_1 + S_2$. Then:
\begin{itemize}
\item It is unconditionally true that $S_r \sim \sigma^2 \chi^2_{n-p}$.
\item If $H_0: \beta_2=0$ is true, then $E(\hat{\beta}_2) = \beta_2 = 0$. It follows from the sum of squares property that $S_2 \sim \sigma^2 \chi_q^2$.
\item Regarding $S_1$:
We can prove that $M_1 = X_1 (X_1^T X_1)^{-1}X_1^T$, where $X_1$ contains the first $p-q$ columns of $X$. It follows that:
$S_1 = y^T M_1 y =y^T X_1 (X_1^T X_1)^{-1}X_1^T y$
Note that $X_1 (X_1^T X_1)^{-1}X_1^T$ is idempotent. If $\beta=0$, i.e., if $E(y) =X\beta = 0$, we can use the sum of squares property and conclude that
$S_1 \sim \sigma^2 \chi_{p-q}^2$
The degrees of freedom are $p-q$ because the rank (= trace) of $X_1 (X_1^T X_1)^{-1}X_1^T$ is $p-q$.
\textbf{Thus, $S_1$ is testing $\beta_1=0$ but under the assumption that $\beta_2=0$}.
\end{itemize}
\textbf{Analysis of variance}
%\begin{table}[htdp]
%\caption{default}
%\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
Sources & SS & df & MS & MS ratio\\
of variation & & & & \\
\hline
Due to $X_1$ & $S_1$ & $p-q$ & $S_1/(p-q)$ & $F_1$ \\
if $\beta_2=0$ & & & & $F_{p-q,n-p}$\\
\hline
Due to $X_2$ & $S_2$ & $q$ & $S_2/q$ & $F_2$\\
& & & & $F_{q,n-p}$\\
\hline
Residuals & $S_r$ & $n-p$ & $\hat{\sigma}^2$ & \\
\hline
Total & $y^T y$ & n & &\\
\hline
\end{tabular}
%\end{center}
%\label{default}
%\end{table}%
Note:
\begin{enumerate}
\item
The ANOVA tests are \textbf{performed in order}: First we test $H_0: \beta_2=0$. Then, if this test does not reject the null, we test $H_0: \beta_1 = 0$ \textbf{on the assumption (which may or may not be true)} that $\beta_2=0$.
\item What happens if we reject the first hypothesis?
\end{enumerate}
\textbf{The null or minimal model (constant term)}
We can set $C=I_p$ and $c=0$. This tests whether all coefficients are zero. But this states that $E(y)=0$, whereas it should have a non-zero value (e.g., reading times). We include the constant term to accommodate this desire to have $E(y)=\mu \neq 0$. In matrix format: let $\beta$ be the parameter vector; then, $\beta_1=\mu$ is the first, constant, term, and the rest of the parameters are the vector $\beta_2$ ($p-1\times 1$).
The first column of $X$ will be $X_1=1_n$.
\begin{enumerate}
\item
$S_1=y^T X_1 (X_1^T X_1)^{-1} X_1^T y = (\sum y_i)^2/n = n\bar{y}^2$
\item
$S_r = y^Ty - \hat{\beta}^T X^T X\hat{\beta}$
\item
$S_2 = y^T y -S_1 - S_r = \hat{\beta}^T X^T X\hat{\beta}-n\bar{y}^2$
\end{enumerate}
It is normal to omit the row in the ANOVA table corresponding to the constant term.
\medskip
\textbf{Testing whether all predictors (besides the constant term) are zero}
To test whether the $p-1$ predictor variables (besides the constant) have any effect on $y$, we set $q=p-1$, and our ANOVA table looks like this:
\begin{tabular}{|l|l|l|l|l|}
\hline
Sources & SS & df & MS & MS \\
of variation & & & & ratio \\
\hline
%Due to $X_1$ if $\beta_2=0$ & $S_1$ & $p-q$ & $S_1/(p-q)$ & ($F_1$) $ F_{p-q,n-p}$\\
%\hline
Due & $S_2$ & $p-1$ & $\frac{S_2}{(p-1)}$ & $F_2$\\
to regressors & & & & $F_{p-1,n-p}$\\
\hline
Residuals & $S_r$ & $n-p$ & $\hat{\sigma}^2$ & \\
\hline
Total & $S_{yy}=$ & $n-1$ & &\\
(\textbf{adjusted}) & $(y-\bar{y})^T(y-\bar{y})$ & & & \\
& $=y^T y - n\bar{y}^2$ & & & \\
\hline
\end{tabular}
Note that $S_{yy}=\sum (y_i - \bar{y})^2$ is the residual sum of squares that we get after fitting the constant $\hat{\mu}=\bar{y}$.
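For concreteness, here is a minimal R sketch (with simulated, hypothetical data; the names \texttt{x1}, \texttt{x2} are illustrative) showing that the $F_2$ statistic in this table is what \texttt{summary.lm} reports as the overall F statistic:
\begin{verbatim}
## hypothetical simulated data
set.seed(1)
n <- 50
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$y <- 1 + 2 * dat$x1 + rnorm(n)
fm <- lm(y ~ x1 + x2, data = dat)
## F_2 = (S_2/(p-1))/sigma.hat^2 on (p-1, n-p) df:
summary(fm)$fstatistic
\end{verbatim}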
\medskip
\textbf{Testing a subset of predictors $\beta_2$}
\begin{tabular}{|l|l|l|l|l|}
\hline
Sources & SS & df & MS & MS \\
of variation & & & & ratio \\
\hline
Due to $X_1$ & $S_1$ & $p-q-1$ & $\frac{S_1}{(p-q-1)}$ & ($F_1$) \\
if $\beta_2=0$ & & & & $F_{p-q-1,n-p}$\\
(test of $\beta_1$) & & & & \\
\hline
Due & $S_2$ & $q$ & $\frac{S_2}{q}$ & $F_2$\\
to $X_2$ & & & & $F_{q,n-p}$\\
(test of $\beta_2$) & & & & \\
\hline
Residuals & $S_r$ & $n-p$ & $\hat{\sigma}^2$ & \\
\hline
Total & $S_{yy}$ & $n-1$ & &\\
\hline
\end{tabular}
%Note: the lecture notes have total SS as $y^T y$ but I think that's a typo.
\section{Checking model assumptions}
\subsection{Standardized residuals (\texttt{stdres} in R)}
Recall that $Var(e)=\sigma^2 M$, where $M = I_n - X (X^T X)^{-1} X^T$ is a symmetric, idempotent $n\times n$ matrix. The diagonal entries of $M$ are all less than 1 and are not all equal (i.e., the residuals do not have equal variance), and the off-diagonal entries are not 0 (i.e., the residuals are correlated). Correcting for unequal variance is done by the \textbf{scaled residual}:
\begin{equation}
e_i^* = \frac{e_i}{\sqrt{m_{ii}}}
\end{equation}
Note: $Var(e_i^*) = \sigma^2$ because $e_i \sim N(0,\sigma^2 m_{ii})$, therefore $e_i^*=\frac{e_i}{\sqrt{m_{ii}}}\sim N(0,\sigma^2)$.
The \textbf{standardized residuals} are
\begin{equation}
s_i = \frac{e_i^*}{\hat{\sigma}}
\end{equation}
This is approximately $t_{n-p}$ (approximately because $e_i*$ and $\hat{\sigma}$ are not independent).
Since $s_i\sim t_{n-p}$, we can designate a residual as an outlier if $\mid s_i \mid > t_{crit}$ where $t_{crit}$ is the critical t-value.
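A minimal R sketch of this check, using \texttt{stdres} from the \texttt{MASS} package (the data are simulated and purely illustrative):
\begin{verbatim}
library(MASS)
## hypothetical simulated data
set.seed(1)
x <- rnorm(30); y <- 2 * x + rnorm(30)
fm <- lm(y ~ x)
s <- stdres(fm)              # standardized residuals s_i
n <- length(y); p <- length(coef(fm))
## flag |s_i| > t_crit at the 5% level:
which(abs(s) > qt(0.975, df = n - p))
\end{verbatim}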
\subsection{Standardized deletion residuals (\texttt{studres} in R)}
This is a more exact way to test for outliers than the above discussion. Define:
\begin{equation}
\hat{\beta}_{-i} = (X_{-i}^T X_{-i})^{-1} X_{-i}^T y_{-i}
\end{equation}
\noindent
where the $-i$ refers to removing data point $i$.
Standardized deletion residuals are
\begin{equation}
s_{-i} = \frac{e_i}{\hat{\sigma}_{-i}\sqrt{m_{ii}}}
\end{equation}
We can compute $s_{-i}$ from $s_{i}$:
\begin{equation}
s_{-i} = \frac{s_i \sqrt{n-p-1}}{\sqrt{n-p-s_{i}^2}} \sim t_{n-p-1}
\end{equation}
If $n$ is large, $s_{-i}\approx s_i$.
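We can verify the identity numerically; the following R sketch (simulated data) compares \texttt{studres} with the formula above:
\begin{verbatim}
library(MASS)
set.seed(1)
x <- rnorm(30); y <- 2 * x + rnorm(30)
fm <- lm(y ~ x)
s <- stdres(fm)
n <- length(y); p <- length(coef(fm))
## deletion residuals, directly and via the identity:
sd1 <- studres(fm)
sd2 <- s * sqrt(n - p - 1) / sqrt(n - p - s^2)
all.equal(sd1, sd2)          # TRUE
\end{verbatim}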
\subsection{Correcting for multiple testing}
\v{S}id\'ak correction:
Suppose we are performing $n$ tests and in each test we specify the probability of making a type I error to be $\beta$ (note: do not confuse this with the type II error probability). Then, if the tests are independent, the probability of at least one false positive claim in the $n$ tests is given by
\begin{equation}
1-(1-\beta)^n = \alpha \Leftrightarrow \beta = 1-(1-\alpha)^{1/n}
\end{equation}
This correction ``has a stronger bound [than the Bonferroni] and so has greater statistical power.''
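A one-line R illustration of the correction (the choice $\alpha=0.05$, $n=10$ is arbitrary):
\begin{verbatim}
## per-test level beta for familywise level alpha over n tests
sidak <- function(alpha, n) 1 - (1 - alpha)^(1/n)
sidak(0.05, 10)   # approx 0.00512
0.05 / 10         # Bonferroni, for comparison: 0.005
\end{verbatim}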
\subsection{Checks}
\begin{enumerate}
\item Normality: \texttt{qqnorm} etc. A histogram is a useful addition to the Q-Q plot in large samples. For small samples, use scaled or standardized residuals (not sure why).
\item Independence: index plots, i.e., residuals against observation number; not useful for small samples. Alternatively, compute the correlation between successive pairs of residuals $(e_i, e_{i+1})$.
\item Homoscedasticity: residuals against fitted values. Fanning out suggests a violation. A quadratic trend in a plot of residuals against a predictor $x$ could suggest that a quadratic term is needed; note that $X^T e = 0$ (review exercises 3), so we will never have a perfect straight line in such a plot. Alternative: Bartlett's test.
\end{enumerate}
\subsection{Formal tests of normality}
Kolmogorov-Smirnov and Shapiro-Wilk. Only useful for large samples; not very powerful and not much better than diagnostic plots. Tests may be useful as follow-ups if non-normality is suspected.
\subsection{Influence and leverage (\texttt{lm.influence\$hat} in R)}
A point can influence the parameter estimates without being an exceptional outlier. Influence does not depend on ``outlyingness''. Potential to influence (e.g., by being an extreme x value) is called leverage; once the y value is also extreme, we have influence. I.e., it takes an extreme x and y value to be influential, and it takes only an extreme x value to have leverage.
Leverage more formally defined: recall that $M = I_n - X (X^T X)^{-1} X^T$. Define a hat matrix $H=I-M=X (X^T X)^{-1} X^T$. It's called a hat matrix because it puts a hat on y: $\hat{y} = X \hat{\beta} = Hy$.
Since $x_i^T$ is the $i$-th row of $X$, we have $h_{ii} = x_i^T (X^T X)^{-1}x_i$. The measure for leverage is:
\begin{equation}
h_{ii} = 1 - m_{ii}
\end{equation}
Notice that $h_{ii}$ is a scalar, so $\hbox{trace}(h_{ii})=h_{ii}$.
So (because $\mathrm{tr}(AB)= \mathrm{tr}(BA)$ whenever both products are defined):
\begin{equation}
h_{ii} = \mathrm{tr}(x_i^T (X^T X)^{-1}x_i)=\mathrm{tr}(x_i x_i^T (X^T X)^{-1})
\end{equation}
Since $X^T X = \sum_{i=1}^n x_i x_i^T$, $h_{ii}$ represents the magnitude of $ x_i x_i^T$ relative to the sum of the values for all observations. Note that $h_{ii}$ only depends on X.
Also note that
\begin{equation}
\sum_{i=1}^n h_{ii} = \mathrm{tr}(X^T X (X^T X)^{-1}) = \mathrm{tr}(I_p)=p, \quad \mathrm{mean}(h_{ii})=p/n
\end{equation}
$h_{ii}$ measures leverage because $Var(e_i)=\sigma^2 m_{ii} = \sigma^2(1-h_{ii})$ and $Var(\hat{y}_i) = \sigma^2 h_{ii}$. Therefore $h_{ii}$ has to lie between 0 and 1. When it is close to one, the fitted value will be close to the actual value of $y_i$---signalling potential for leverage (aside by SV: the explanation sounds circular to me---this statement says it has leverage by definition. Also, I don't know why I should care that a data point has \textit{potential} to influence the estimates).
A cutoff one can use to identify high leverage points is $h_{ii} > 2p/n$ or $h_{ii} > 3p/n$.
In simple linear regression, the leverage of a data point is directly related to how far away it is from the mean:
\begin{equation}
h_{ii} = n^{-1} + \frac{(x_i - \bar{x})^2}{S_{xx}}
\end{equation}
In \texttt{lm.influence}, ``coefficients is the matrix whose i-th row contains the change in the estimated coefficients which results when the i-th case is dropped from the regression. sigma is a vector whose i-th element contains the estimate of the residual standard error obtained when the i-th case is dropped from the regression'' (p.\ 71 of lecture notes).
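A minimal R sketch (simulated data) that extracts the leverages and applies the $2p/n$ cutoff:
\begin{verbatim}
set.seed(1)
x <- rnorm(30); y <- 2 * x + rnorm(30)
fm <- lm(y ~ x)
h <- lm.influence(fm)$hat    # leverages h_ii
n <- length(y); p <- length(coef(fm))
sum(h)                       # equals p
which(h > 2 * p / n)         # high-leverage candidates
\end{verbatim}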
\subsection{Cook's distance D: A measure of influence}
Let $s_i$ be the i-th standardized residual, $\hat{\beta}_{-i}$ the estimate of the vector of parameters with the i-th row removed.
\begin{equation}
D_i = \frac{(\hat{\beta}-\hat{\beta}_{-i})^T X^T X (\hat{\beta}-\hat{\beta}_{-i})}{p\hat{\sigma}^2} = \frac{s_i^2 h_{ii}}{p(1-h_{ii})}
\end{equation}
A data point is influential if it is outlying as well as high leverage. Cutoff for Cook's distance is $\frac{4}{n}$.
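A minimal R sketch (simulated data) applying this cutoff:
\begin{verbatim}
set.seed(1)
x <- rnorm(30); y <- 2 * x + rnorm(30)
fm <- lm(y ~ x)
D <- cooks.distance(fm)
which(D > 4 / length(y))     # cutoff 4/n
\end{verbatim}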
\textbf{Procedure for checking model fit}:
to-do, see p 73
\subsection{Transformations}
Suppose $Y$ is a random variable whose variance depends on its mean. I.e., $E(y)=\mu, Var(y)=g(\mu)$.
The function $g(\cdot)$ is known.
We seek a transformation from $y$ to $z = f(y)$ such that $Var(z)$ is (approximately) constant.
Expand $f(\cdot)$ in a Taylor series expansion, keeping only the first-order term:
\begin{equation}
z= f(y)\approx f(\mu) + (y-\mu)f'(\mu)
\end{equation}
Then: $E(z)=f(\mu)$ and $Var(z)=g(\mu) f'(\mu)^2$. The variance needs to be constant at, say, $k^2$:
\begin{equation}
k^2 = g(\mu) f'(\mu)^2 \Rightarrow f'(\mu)=\frac{k}{\sqrt{g(\mu)}}
\end{equation}
So,
\begin{equation}
f(\mu) = \int f'(\mu)\, d\mu = k\int \frac{1}{\sqrt{g(\mu)}}\, d\mu
\end{equation}
\begin{fmpage}{\linewidth}
\begin{equation}
f(\mu) = k\int [g(\mu)]^{-1/2}\, d\mu
\end{equation}
\end{fmpage}
\textbf{Example 1}: Let $g(\mu)=a\mu$; then $f(\mu)=2k\sqrt{\frac{\mu}{a}}$. So, $z=\sqrt{y}$.
\textbf{Example 2}: Let $g(\mu)=a\mu^2$; then $f(\mu)=k\sqrt{\frac{1}{a}}\log \mu$. So, $z=\log y$.
\textbf{Estimating a transformation}: $\exists \lambda$ such that
\begin{equation}
f_\lambda (y_i) = x_i^T \beta + \epsilon_i \quad \epsilon_i \sim N(0, \sigma^2)
\end{equation}
We use maximum likelihood estimation to estimate $\lambda$. Note that
$L(\beta_\lambda, \sigma^2_\lambda, \lambda; y) \propto$
\begin{equation}
\left(\frac{1}{\sigma}\right)^n \exp \left[-\frac{1}{2\sigma^2} \sum [f_\lambda(y_i)- x_i^T \beta ]^2\right] \prod \explain{f'_\lambda(y_i)}{\hbox{Jacobian}}
\end{equation}
For fixed $\lambda$, we estimate $\hat{\beta}$ and $\hat{\sigma}^2$ in the usual MLE way, and then we turn our attention to $\lambda$:
\begin{equation}
L(\hat{\beta}_\lambda, \hat{\sigma}^2_\lambda, \lambda; y) \propto S_\lambda^{-n/2}\prod f'_\lambda(y_i)
\end{equation}
Taking logs:
\begin{equation}
\ell = c-\frac{n}{2} \log S_\lambda + \sum \log f'_\lambda(y_i)
\end{equation}
\textbf{Box-Cox family}:
\begin{equation}
f_\lambda (y) = \left\{
\begin{array}{l l}
\frac{y^\lambda - 1}{\lambda} & \lambda \neq 0\\
\log y & \quad \lambda=0\\
\end{array}
\right.
\end{equation}
We assume that $f_\lambda (y) \sim N(x_i^T \beta,\sigma^2)$. So we just have to estimate $\lambda$ by MLE, along with $\beta$.
\textbf{Box-Cox by hand}:
Since $f_\lambda=\frac{y^\lambda-1}{\lambda}$, it follows that $f'_\lambda(y)= y^{\lambda-1}$.
Now, for different $\lambda$ you can figure out the log likelihoods by hand by solving this equation:
\begin{equation}
\ell = c-\frac{n}{2} \log \explain{S_\lambda}{\hbox{Residual sum of squares}} + (\lambda-1)\sum \log (y_i)
\end{equation}
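In R, \texttt{boxcox} from the \texttt{MASS} package profiles this log likelihood over a grid of $\lambda$ values. A minimal sketch with hypothetical simulated data:
\begin{verbatim}
library(MASS)
## hypothetical positive response, simulated
set.seed(1)
x <- runif(50, 1, 10)
y <- exp(0.3 * x + rnorm(50, sd = 0.2))
bc <- boxcox(y ~ x, lambda = seq(-1, 2, 0.05))  # plots ell(lambda)
bc$x[which.max(bc$y)]   # lambda with the highest log likelihood
\end{verbatim}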
\section{Factors}
\subsection{Overcoming multicollinearity through parameterization}
to-do: multicollinearity explanation
If the model matrix $X$ is not full rank (e.g., when we include a column for the intercept along with a dummy column for every level of a factor), then we can put constraints on the predictors (through parameterization). E.g., treatment contrasts (corner-point constraints), sum contrasts, etc.
to-do constraints on p.\ 94-95
%\subsection{Merging factor levels}
%This is done to increase group size.
\subsection{Model selection}
\textbf{$S_r$ and $R^2$ can't be used for model selection}:
``$S_r$ will always decrease when we add more regressor variables, so the best fitting model is always the full model which contains all the possible regressor variables - so $S_r$ is not a good model selection tool. For the same reason, the coefficient of determination $R^2$ is not a useful measure in model selection. $R^2$ will not decrease as the number of parameters increases (i.e. it is a non-decreasing function of the number of parameters in the model).''
\textbf{Penalized likelihood methods for model comparison}:
We can compare models using log likelihood:
\begin{equation}
\ell = n \log \hat{\sigma}^2 + z(p)
\end{equation}
\noindent
where $z$ is some penalty function. ``Then we declare that the optimal model is that which minimizes $\ell$. We can think of $z$ as an ad hoc adjustment that tries to give simpler models credit for having fewer regressor variables.''
AIC etc.\ cannot be used to compare across datasets, but can be used to compare non-nested models (cf.\ ANOVA, which allows only nested models to be compared).
\textbf{AIC}: here, $z(p)= 2p$. To calculate AIC:
\begin{equation}
AIC = 2p + n\log \frac{S_r}{n}
\end{equation}
[Note: does not match up with the AIC function output in R.]
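The following R sketch (simulated data) computes this quantity directly; it should match the second component of \texttt{extractAIC} (the function used by \texttt{step}), while \texttt{AIC} includes additional constants:
\begin{verbatim}
set.seed(1)
x <- rnorm(30); y <- 2 * x + rnorm(30)
fm <- lm(y ~ x)
n <- length(y); p <- length(coef(fm))
Sr <- deviance(fm)           # residual sum of squares
2 * p + n * log(Sr / n)      # AIC as defined above
extractAIC(fm)               # second element should agree
AIC(fm)                      # differs by additive constants
\end{verbatim}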
\textbf{Where does AIC come from?} From the fact that the maximized likelihood, as a function of $\hat{\sigma}^2$, is:
\begin{equation}
L(\sigma) \propto (\hat{\sigma}^2)^{-n/2}
\end{equation}
$-2\times \hbox{loglik}=n\log \hat{\sigma}^2=n\log \frac{S_r}{n}$ is a good model selection tool: smaller values (smaller $S_r$) mean better fit.
\textbf{BIC}: $z(p)= p\log n$. This penalty will be large for large $n$, compared to AIC.
\textbf{Mallows' $C_p$}:
\begin{equation}
C_p = \frac{S_r}{\hat{\sigma}_f^2} - n + 2p_r
\end{equation}
where $\hat{\sigma}_f^2$ is the residual mean square of the full model, $S_r$ is the residual sum of squares of the reduced model, and $p_r$ is the number of regressors in the reduced model.
We want a small $C_p$, with $C_p \approx p_r$.
\textbf{Best subsets method}: in the \texttt{leaps} library, the \texttt{regsubsets} command.
\begin{verbatim}
library(leaps)
b <- regsubsets(y ~ x1 + x2 + x3, data = dat)  # hypothetical model
summary(b)$rsq; summary(b)$cp; summary(b)$bic
\end{verbatim}
\textbf{Backward elimination}: fit the full model and remove the predictor with the smallest t-value; refit and repeat.
Forward selection goes in the other direction.
\textbf{Stepwise selection}: the \texttt{step} function in MASS. Incrementally add a predictor as above, but once one is added, try removing other predictors with small t-values. Repeat until nothing can be added or deleted.
\begin{verbatim}
step(fm, scope = list(lower = ~1, upper = ~ x1 + x2 + x3),
     direction = "both")   # or "forward"/"backward"
\end{verbatim}
\section{Generalized least squares}
Let $Var(\epsilon)=\sigma^2 \Sigma$, where $\Sigma$ is known and non-singular. If $\Sigma\neq I_n$ then we either have correlation or non-equal variance or both. If $\Sigma$ is known, we only need to estimate $\sigma$ and we are back in least squares theory, with some modification:
\begin{equation}
y \sim N(X\beta,\sigma^2 \Sigma)
\end{equation}
Likelihood $L(\beta,\sigma^2; y)$ is now:
\begin{equation}
(2\pi)^{-n/2}\mid \sigma^2 \Sigma\mid^{-1/2} \exp[-\frac{1}{2\sigma^2}(y-X\beta)^T \Sigma^{-1} (y-X\beta)]
\end{equation}
The MLE of $\beta$ minimizes:
\begin{equation}
S=(y-X\beta)^T \Sigma^{-1} (y-X\beta)
\end{equation}
instead of $(y-X\beta)^T (y-X\beta)$.
\textbf{Least squares estimators}:
$\hat{\beta}= (X^T \Sigma^{-1} X)^{-1}X^T \Sigma^{-1} y$
$E(\hat{\beta})=\beta$
$Var(\hat{\beta})=\sigma^2 (X^T \Sigma^{-1}X)^{-1}$
Estimator of $\sigma^2$ is $\frac{S_v}{n-p}$, where
$S_v=y^T \Sigma^{-1} y - \hat{\beta}^T X^T \Sigma^{-1} X\hat{\beta} = y^T \Sigma^{-1} y - \hat{\beta}^T X^T \Sigma^{-1} y$.
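A minimal sketch of these formulas in R, assuming a known (here diagonal, hypothetical) $\Sigma$ and simulated data:
\begin{verbatim}
## known diagonal Sigma (hypothetical), simulated data
set.seed(1)
n <- 20
X <- cbind(1, rnorm(n))
Sigma <- diag(runif(n, 0.5, 2))
y <- X %*% c(1, 2) + t(chol(Sigma)) %*% rnorm(n)
Si <- solve(Sigma)
beta.hat <- solve(t(X) %*% Si %*% X, t(X) %*% Si %*% y)
Sv <- t(y) %*% Si %*% y - t(beta.hat) %*% t(X) %*% Si %*% y
sigma2.hat <- Sv / (n - ncol(X))
\end{verbatim}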
\subsection{Weighted least squares}
Suppose $\Sigma=\mathrm{diag}(c_1,\dots,c_n)$ (uncorrelated, but not homoscedastic), so that $\Sigma^{-1}=\mathrm{diag}(1/c_1,\dots,1/c_n)$. Let $w_i = 1/c_i$.
The sum of squares will be
$S=\sum_{i=1}^n w_i (y_i - x_i^T \beta)^2$. So each squared residual is weighted by the reciprocal of that observation's variance; observations with large variance are less reliable, and are down-weighted.
Compared to $X^T X = \sum x_i x_i^T$ in OLS, in WLS we have $X^T \Sigma^{-1} X = \sum w_i x_i x_i^T$.
And instead of $X^T y = \sum x_i y_i$, in WLS we have $X^T \Sigma^{-1}y = \sum w_i x_i y_i$.
So $\hat{\beta}=(X^T \Sigma^{-1} X)^{-1}X^T \Sigma^{-1}y= ( \sum w_i x_i x_i^T)^{-1} (\sum w_i x_i y_i)$.
``Weighted LS is appealing because it allows us to adjust for different (known) variances in the observations in an intuitive way that is easy to implement. The variances of the different observations might be known from pilot or previous studies'' (or are estimated from data).
[SV: I don't get this ``known variance'' business. How can we ever \textbf{know} what the variance is? Pilot or previous data will only yield estimates.]
The main disadvantage of WLS is that the weights have to be specified in advance.
\subsection{OLS vs WLS}
With dataset wls1.txt, if we fit:
\begin{verbatim}
summary(m0<-lm(y~X.x,data))
summary(m0.wls<-lm(y~X.x,data,
weights=I(1/X.x^2)))
\end{verbatim}
The coefficients will be the same in each, but the SEs of the coefficients will be smaller in the WLS fit (because $S_r$ will be down-weighted).
\textbf{Effect of scaling weights}:
Multiplying the weights by some constant will change residual standard error, but leaves SEs and coefs unchanged. This is because $Var(\hat{\beta})=\sigma^2 (X^T \Sigma^{-1} X)^{-1}$, so whatever factor $\sigma^2$ gets multiplied by, it will be cancelled out because it also appears in $ \Sigma^{-1}$.
Differently put:
``Let $w_i'$ be the scaled weights, $\hat{\sigma}'$ be the residual standard error in the analysis with the scaled weights, $S_r'$ be the residual sum of squares for the scaled analysis and $(\Sigma^{-1})'$ be the weight matrix for the scaled analysis. Then if $w_i' = 16w_i$, we see that $\hat{\sigma}' = 4\hat{\sigma}$ since $S_r' = 16S_r$.''
The SEs will not change because: ``If
$(\Sigma^{-1})'=16\Sigma^{-1}$ and
$\hat{\sigma}' = 4\hat{\sigma}$
then $Var(\hat{\beta}') =
(\hat{\sigma}^2)' (X^T (\Sigma^{-1})'X)^{-1} =
(\hat{\sigma}^2) (X^T (\Sigma^{-1})X)^{-1}$.
So the standard errors don't change.''
\begin{verbatim}
summary(m0.wls<-lm(y~X.x,data,
weights=I(16*1/X.x^2)))
\end{verbatim}
\begin{enumerate}
\item
If you get the weights wrong, SEs will increase. So, if unsure about weights use OLS.
\item
``If looking at the standardized residuals shows that there are observations that may be outliers and they reside in regions that will be given high weight then it may be safer to use OLS rather than WLS.''
\item One can estimate the weights from the data, but one needs lots of replicates for this; with fewer replicates the SEs will increase.
\item If you don't have enough replicates, you can group x values close to each other.
\item Outliers can dramatically influence estimates in WLS.
\end{enumerate}
Conclusion: WLS is a powerful tool, if weights are known, but outliers must be studied carefully.
\subsection{Using group means in WLS with replicated data}
Without replicates in the data, we have:
\begin{equation}
X^T y = \sum_i^n x_i y_i
\end{equation}
If we have $k$ replicates:
\begin{equation}
X^T y = \sum_{i=1}^k \sum_{j=1}^{n_i} x_i y_{ij} =
\sum_{i=1}^k x_i \sum_{j=1}^{n_i} y_{ij}
= \sum_{i=1}^k n_i x_i \bar{y}_i
\end{equation}
This is equivalent to having $\bar{y}_i$ as observations for the replicate sets, and $n_i$ as weights in WLS.
If we had $\bar{y}_i$ as observations, then their variances would be $\sigma^2/n_i$, so we would have unequal variances and would use WLS with $w_i = n_i$. \textbf{Example}: tractor data.
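A minimal R sketch (simulated data in the spirit of the tractor example, not the actual dataset) showing that WLS on the group means with weights $n_i$ reproduces the OLS coefficients from the raw data:
\begin{verbatim}
## hypothetical replicated data
set.seed(1)
age  <- rep(1:6, times = c(2, 3, 4, 3, 2, 2))
cost <- 10 + 5 * age + rnorm(length(age), sd = 3)
fit.full <- lm(cost ~ age)              # OLS on raw data
grp <- aggregate(cost, by = list(age = age), FUN = mean)
ni  <- as.vector(table(age))            # replicate set sizes n_i
fit.wls <- lm(x ~ age, data = grp, weights = ni)
coef(fit.full); coef(fit.wls)           # identical coefficients
\end{verbatim}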
\subsection{Replication}
Define replicates here as repeated measurements that are mutually independent (cf.\ replicates that are \emph{not} independent, as in linear mixed model theory).
Since all observations within a replicate set share the same $x_i^T$ and hence come from the same distribution, their sample variance is an estimate of $\sigma^2$.
Let $y_{ij}$ be the $j$th observation in the $i$th replicate set, where $j=1,\dots,n_i$ and $n_i$ is the size of the $i$th replicate set ($i=1,\dots,k$). When $n_i = 1$ we have no replication, and for higher values we have replication. \textit{Within} each replicate, we can produce an estimator:
\begin{equation}
\hat{\sigma}^2 = (n_i - 1)^{-1} \sum_{j=1}^{n_{i}} (y_{ij} - \bar{y}_i)^2 = (n_i - 1)^{-1}S_i
\end{equation}
$\bar{y}_i=n_i^{-1} \sum_{j=1}^{n_i} y_{ij}$ is the mean of the replicate set.
$S_i \sim \sigma^2 \chi_{n_i - 1}^2$.
In a one-factor model, $S_r = \sum_{i=1}^k S_i$ and dfs are $n-k = \sum_{i=1}^k (n_i -1)$.
So, in the general case, $df_R=n-k$ and $S_R = \sum_{i=1}^k S_i\sim \sigma^2\chi_{df_R}^2$. The ratio $S_R/df_R$ is an unbiased estimator of $\sigma^2$.
The distributional fact being used here is that a sum of independent chi-squared random variables is chi-squared, with degrees of freedom equal to the sum of the degrees of freedom of the summands.
\textbf{The replication estimator}: $\hat{\sigma}_R^2=\frac{S_R}{df_R}$ is the replication estimator of $\sigma^2$.
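A minimal R sketch computing $\hat{\sigma}_R^2$ on hypothetical replicated data (same simulated setup as in the previous sketch):
\begin{verbatim}
set.seed(1)
age  <- rep(1:6, times = c(2, 3, 4, 3, 2, 2))
cost <- 10 + 5 * age + rnorm(length(age), sd = 3)
S.R  <- sum(tapply(cost, age, function(v) sum((v - mean(v))^2)))
df.R <- length(cost) - length(unique(age))   # n - k
S.R / df.R                                   # replication estimator
\end{verbatim}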
``The $\hat{\sigma}_R^2$ is independent of the particular form that we have used for the model. We obtain the same replication sum of squares with the same degrees of freedom whether we postulate a linear, quadratic, or some other relationship between Maintenance cost and Age.'' (to-do: don't get this). This is the strength of the replication sum of squares.
Note that $S_R$ is in general not equal to $S_r$.
\subsection{Partitioning replication sum of squares}
to-do
\bibliographystyle{plain}
\bibliography{/Users/shravanvasishth/Dropbox/Bibliography/bibcleaned}
| {
"alphanum_fraction": 0.6599457663,
"avg_line_length": 34.800608828,
"ext": "tex",
"hexsha": "894d4ae963d7d5151556dc65b7b51236f3c4a89c",
"lang": "TeX",
"max_forks_count": 25,
"max_forks_repo_forks_event_max_datetime": "2022-01-21T08:30:19.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-02-14T12:07:34.000Z",
"max_forks_repo_head_hexsha": "33acde1f79d3ec803f6491b7fee0897e126ddbe6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ma0511/MScStatisticsNotes",
"max_forks_repo_path": "LinearModels.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "33acde1f79d3ec803f6491b7fee0897e126ddbe6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ma0511/MScStatisticsNotes",
"max_issues_repo_path": "LinearModels.tex",
"max_line_length": 504,
"max_stars_count": 37,
"max_stars_repo_head_hexsha": "bd4f4076fa785785b419c13580fe214120e4087e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "vasishth/MScStatisticsNotes",
"max_stars_repo_path": "LinearModels.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-07T14:38:57.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-02-14T09:10:46.000Z",
"num_tokens": 16465,
"size": 45728
} |
\chapter{QVT-C}
Taken from the QVT Specification.
"alphanum_fraction": 0.7959183673,
"avg_line_length": 16.3333333333,
"ext": "tex",
"hexsha": "86f7eadaef4c280c86180db0476fbaf6f4614765",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2017-10-24T14:38:41.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-04-06T12:41:50.000Z",
"max_forks_repo_head_hexsha": "c39fb085723f4b3828050a7a20e32a278b4d13ab",
"max_forks_repo_licenses": [
"LPPL-1.3c"
],
"max_forks_repo_name": "arcanefoam/mde_listings",
"max_forks_repo_path": "demo/QVTc.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "c39fb085723f4b3828050a7a20e32a278b4d13ab",
"max_issues_repo_issues_event_max_datetime": "2017-10-24T14:43:14.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-10-24T14:43:14.000Z",
"max_issues_repo_licenses": [
"LPPL-1.3c"
],
"max_issues_repo_name": "arcanefoam/mde_listings",
"max_issues_repo_path": "demo/QVTc.tex",
"max_line_length": 32,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "c39fb085723f4b3828050a7a20e32a278b4d13ab",
"max_stars_repo_licenses": [
"LPPL-1.3c"
],
"max_stars_repo_name": "arcanefoam/mde-listings",
"max_stars_repo_path": "demo/QVTc.tex",
"max_stars_repo_stars_event_max_datetime": "2020-01-27T13:41:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-04-13T11:34:36.000Z",
"num_tokens": 13,
"size": 49
} |
We include some examples to demonstrate some of the uses of this package.
\section{Distinguishing groups}~
\begin{example}[Payne\_Grps]
We can use these functions to build groups from bilinear maps and distinguish
seemingly indistinguishable groups. In 2004, S. E. Payne asked whether two elation
groups were isomorphic, suspecting they were not \cite{Payne:elation-grps}.
The first group, $G_f$, is the elation group of the generalized quadrangle
$H(3,q^2)$, the Hermitian geometry. This group is defined as a Heisenberg group
whose bilinear map is the usual dot product.
\begin{code}
> p := 3;
> e := 4;
> q := p^e; // q = 3^e >= 27
> F := [KSpace(GF(q),2), KSpace(GF(q),2), KSpace(GF(q),1)];
>
> DotProd := function(x)
function> return KSpace(GF(q),1)!(x[1]*Matrix(2,1,x[2]));
function> end function;
>
> DoubleForm := function(T)
function> F := SystemOfForms(T)[1];
function> K := BaseRing(F);
function> n := Nrows(F);
function> m := Ncols(F);
function> MS := KMatrixSpace(K,n,m);
function> Z := MS!0;
function> M1 := HorizontalJoin(Z,-Transpose(F));
function> M2 := HorizontalJoin(F,Z);
function> D := VerticalJoin( M1, M2 );
function> return Tensor( D, 2, 1 );
function> end function;
>
> f := DoubleForm( Tensor( F, DotProd ) );
> f;
Tensor of valence 2, U2 x U1 >-> U0
U2 : Full Vector space of degree 4 over GF(3^4)
U1 : Full Vector space of degree 4 over GF(3^4)
U0 : Full Vector space of degree 1 over GF(3^4)
>
> IsAlternating(f);
true
> Gf := HeisenbergGroup(f);
\end{code}
Now we define Payne's second group, $G_{\bar{f}}$, which is the elation group of
the Roman quadrangle with parameters $(q^2,q)$. In this example, $\bar{f}$ is a
biadditive map, but is bilinear over the prime field $\mathbb{F}_3$. Therefore,
we construct a vector space isomorphism from $\mathbb{F}_3^e$ to
$\mathbb{F}_{3^e}$ and the bilinear commutator map, induced by $\bar{f}$. Hence,
$G_{\bar{f}}$ is the Heisenberg group of this bilinear commutator map.
\begin{code}
> n := PrimitiveElement(GF(q)); // non-square
> MS := KMatrixSpace(GF(q),2,2);
> A := MS![-1,0,0,n];
> B := MS![0,1,1,0];
> C := MS![0,0,0,n^-1];
> F1 := Frame(f);
> F2 := [KSpace(GF(p),4*e), KSpace(GF(p),4*e),\
> KSpace(GF(p),e)];
>
> // take 1/3^r root
> Root := function(v,r)
function> k := Eltseq(v)[1];
function> K := Parent(k);
function> if k eq K!0 then return k; end if;
function> R<x> := PolynomialRing(K);
function> f := Factorization(x^(3^r)-k)[1][1];
function> return K!(x-f);
function> end function;
>
> // biadditive map defining elation grp
> RomanGQ := function(x)
function> u := Matrix(1,2,x[1]);
function> v := Matrix(2,1,x[2]);
function> M := [A,B,C];
function> f := &+[Root(u*M[i]*v,i-1) : i in [1..3]];
function> return KSpace(GF(q),1)![f];
function> end function;
>
> // vector space isomorphisms
> phi := map< F2[1] -> F1[1] | \
> x :-> F1[1]![ GF(q)![ s : s in Eltseq(x)[i+1..e+i] ] : \
> i in [0,e,2*e,3*e] ] >;
> gamma := map< F1[3] -> F2[3] | \
> x :-> F2[3]!&cat[ Eltseq(s) : s in Eltseq(x) ] >;
>
> // bilinear commutator from RomanGQ
> RomanGQComm := function(x)
function> x1 := Eltseq(x[1]@phi)[1..2];
function> x2 := Eltseq(x[1]@phi)[3..4];
function> y1 := Eltseq(x[2]@phi)[1..2];
function> y2 := Eltseq(x[2]@phi)[3..4];
function> comm := RomanGQ( <x2,y1> ) - RomanGQ( <y2,x1> );
function> return comm @ gamma;
function> end function;
>
> f_bar := Tensor( F2, RomanGQComm );
> f_bar;
Tensor of valence 2, U2 x U1 >-> U0
U2 : Full Vector space of degree 16 over GF(3)
U1 : Full Vector space of degree 16 over GF(3)
U0 : Full Vector space of degree 4 over GF(3)
>
> IsAlternating(f_bar);
true
> Gfb := HeisenbergGroup(f_bar);
\end{code}
The groups $G_f$ and $G_{\bar{f}}$ have order $3^{20}$ and are class 2, exponent
3, and minimally generated by 16 elements. In other words, the groups $G_f$ and
$G_{\bar{f}}$ are central extensions of $\mathbb{Z}_3^{16}$ by
$\mathbb{Z}_3^{4}$ and have exponent $3$. Using standard heuristics, these
groups are indistinguishable. However, the invariants associated to their
exponent-$p$ central tensor are vastly different, and thus, they determine that
these groups are non-isomorphic. We show that the centroids of the tensors are
not isomorphic.
\begin{code}
> Tf := pCentralTensor(Gf,1,1);
> Tf;
Tensor of valence 2, U2 x U1 >-> U0
U2 : Full Vector space of degree 16 over GF(3)
U1 : Full Vector space of degree 16 over GF(3)
U0 : Full Vector space of degree 4 over GF(3)
>
> Tfb := pCentralTensor(Gfb,1,1);
> Tfb;
Tensor of valence 2, U2 x U1 >-> U0
U2 : Full Vector space of degree 16 over GF(3)
U1 : Full Vector space of degree 16 over GF(3)
U0 : Full Vector space of degree 4 over GF(3)
>
> Cf := Centroid(Tf);
> Cfb := Centroid(Tfb);
> Dimension(Cf) eq Dimension(Cfb);
false
\end{code}
\end{example}
\section{Simplifying automorphism group computations}~
\begin{example}[Ext\_Over\_Adj] We demonstrate how to simplify the
automorphism group computation as discussed in \cite{BW:grps-tensor}. We
construct a class 2, exponent $p$, $p$-group $G$ which is a quotient of a
maximal unipotent subgroup of $\text{GL}(3,317^4)$.
\begin{code}
> p := 317;
> e := 4;
> H := ClassicalSylow( GL(3,p^e), p );
> U := UnipotentMatrixGroup(H);
> P := PCPresentation(U);
> Z := Center(P);
>
> N := sub< P | >;
> while #N lt p^2 do
while> N := sub< P | Random(Z), N >;
while> end while;
>
> G := P/N;
> G;
GrpPC : G of order 10246902931634286779441449 = 317^10
PC-Relations:
G.5^G.1 = G.5 * G.9^62 * G.10^133,
G.5^G.2 = G.5 * G.9^312 * G.10^295,
G.5^G.3 = G.5 * G.9^316,
G.5^G.4 = G.5 * G.10^316,
G.6^G.1 = G.6 * G.9^312 * G.10^295,
G.6^G.2 = G.6 * G.9^316,
G.6^G.3 = G.6 * G.10^316,
G.6^G.4 = G.6 * G.9^138 * G.10^163,
G.7^G.1 = G.7 * G.9^316,
G.7^G.2 = G.7 * G.10^316,
G.7^G.3 = G.7 * G.9^138 * G.10^163,
G.7^G.4 = G.7 * G.9^188 * G.10^50,
G.8^G.1 = G.8 * G.10^316,
G.8^G.2 = G.8 * G.9^138 * G.10^163,
G.8^G.3 = G.8 * G.9^188 * G.10^50,
G.8^G.4 = G.8 * G.9^125 * G.10^151
\end{code}
We construct the exponent-$p$ central tensor of $G$ and compute its adjoint $*$-algebra $A$.
\begin{code}
> T := pCentralTensor(G,1,1);
> T;
Tensor of valence 2, U2 x U1 >-> U0
U2 : Full Vector space of degree 8 over GF(317)
U1 : Full Vector space of degree 8 over GF(317)
U0 : Full Vector space of degree 2 over GF(317)
>
> A := AdjointAlgebra(T);
> Dimension(A);
16
> star := Star(A);
\end{code}
If $V=G/\Phi(G)$ is the Frattini quotient of $G$, then our goal is to get the
cotensor space $V\wedge_A V$. Note that $\dim V\wedge V=28$, so standard methods
will compute a stabilizer of $\text{GL}(8,317)$ inside $V\wedge V$. We will
decrease the size of the ambient space resulting in an easier stabilizer
computation.
\begin{code}
> V := Domain(T)[1];
> E := ExteriorCotensorSpace(V,2);
> E;
Cotensor space of dimension 28 over GF(317) with valence 1
U2 : Full Vector space of degree 8 over GF(317)
U1 : Full Vector space of degree 8 over GF(317)
\end{code}
Now we create a sub cotensor space $S$ generated by all $(e_iX)\wedge e_j - e_i\wedge (e_jX)$ for $X\in A$,
and then quotient $V\wedge V$ by $S$. The result is a 4-dimensional space.
\begin{code}
> L := [];
> for E_gen in Generators(E) do
for> F := SystemOfForms(E_gen)[1];
for> for X in Basis(A) do
for|for> L cat:= [E!Eltseq(X*F - F*Transpose(X@star))];
for|for> end for;
for> end for;
>
> S := SubTensorSpace(E,L);
> S;
Cotensor space of dimension 24 over GF(317) with valence 1
U2 : Full Vector space of degree 8 over GF(317)
U1 : Full Vector space of degree 8 over GF(317)
>
> Q := E/S;
> Q;
Cotensor space of dimension 4 over GF(317) with valence 1
U2 : Full Vector space of degree 8 over GF(317)
U1 : Full Vector space of degree 8 over GF(317)
\end{code}
\end{example}
| {
"alphanum_fraction": 0.640357599,
"avg_line_length": 31.9591836735,
"ext": "tex",
"hexsha": "13518792b13099b070679735e02cfc0de4e5e919",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "34c7a454c21f067d71914c0aee43f7e52ed6d884",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "algeboy/eMAGma",
"max_forks_repo_path": "doc/longer-exs.tex",
"max_issues_count": 11,
"max_issues_repo_head_hexsha": "34c7a454c21f067d71914c0aee43f7e52ed6d884",
"max_issues_repo_issues_event_max_datetime": "2017-08-08T22:56:11.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-06-16T20:19:43.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "algeboy/eMAGma",
"max_issues_repo_path": "doc/longer-exs.tex",
"max_line_length": 108,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "34c7a454c21f067d71914c0aee43f7e52ed6d884",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "algeboy/TensorSpace",
"max_stars_repo_path": "doc/longer-exs.tex",
"max_stars_repo_stars_event_max_datetime": "2017-11-04T01:51:36.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-06-14T03:24:16.000Z",
"num_tokens": 2904,
"size": 7830
} |
\section{Constant Generators and Specs}
Sketching extends a simple procedural language with the ability to leave \emph{holes} in place of code fragments that are to be derived by the synthesizer. Each hole is marked by a generator which defines the set of code fragments that can be used to fill a hole. \Sk{} offers a rich set of constructs to define generators, but all of these constructs can be described as syntactic sugar over a simple core language that contains only one kind of generator: an unknown integer constant denoted by the token \C{??}.
From the point of view of the programmer, the integer generator is a placeholder that the synthesizer must replace with a suitable integer constant. The synthesizer ensures that the resulting code will avoid any assertion failures under any input in the input space under consideration. For example, the following code snippet can be regarded as the ``Hello World'' of sketching.
\begin{lstlisting}
harness void main(int x){
int y = x * ??;
assert y == x + x;
}
\end{lstlisting}
This program illustrates the basic structure of a sketch. It contains three elements you are likely to find in every sketch: (i) a \C{harness} procedure, (ii) holes marked by generators, and (iii) assertions.
The harness procedure is the entry point of the sketch, and together with the assertion it serves as an operational specification for the desired program. The goal of the synthesizer is to derive an integer constant $C$ such that when \C{??} is replaced by $C$, the resulting program will satisfy the assertion for all inputs under consideration by the verifier. For the sketch above, the synthesized code will look like this.
\begin{lstlisting}
void main(int x){
int y = x * 2;
assert y == x + x;
}
\end{lstlisting}
\subsection{Harnesses and function equivalence requirement}
A program in sketch can have multiple harness functions each encoding different requirements of the problem. Sketch will guarantee that all harnesses will run to completion without triggering assertion failures for all inputs within bounds. Harness functions are not allowed to take heap allocated objects as inputs and all global variables are reset to their initial values before the evaluation of each harness. Because of this, the order of evaluation of the harnesses does not matter.
Sketch also allows the programmer to express that a function---we call it the implementation---must be functionally equivalent to another function---the specification---by writing \C{implements $fname$} at the end of the signature of the implementation function. When the specification and implementation do not access global variables, the equivalence constraint created by implements can be seen as syntactic sugar for a harness that calls both the implementation and the specification and compares their results. However, using \C{implements} is preferable to writing such a harness because it lets the compiler know that it can replace one function with the other when reasoning about the program. The \C{implements} directive imposes stronger constraints than \C{harness}; the same function cannot have both a \C{harness} and an \C{implements} directive.
If the implementation or the specification access global variables, then their equivalence must hold for all possible values of those global variables---not just for their initial values---and they must leave the global variables in the same state after they terminate. This is to ensure that replacing the specification with its implementation will always be semantics preserving. For this same reason, the spec and the sketch cannot access global variables that point to heap allocated values.
\begin{Example}
As a simple example of the use of harnesses and \C{implements}, consider the following sketch.
\begin{lstlisting}
int count=0;
int twox(int x, int y){
++count;
return x + x;
}
int expr1(int x, int y){
return x*?? + y*??;
}
int expr2(int x, int y) implements twox{
count = count+??;
return x*?? + y*??;
}
harness void foo(){
assert expr1(5,2)*expr2(2,4)== 24;
}
\end{lstlisting}
The harness imposes a constraint on the product of the two functions. The \C{implements} in the declaration of \C{expr2} imposes the additional constraint that \C{expr2} must be equivalent to \C{twox}. Because \C{twox} modifies the global variable \C{count}, \C{expr2} must modify the variable in the same way.
\end{Example}
\paragraph{Implementation note} As of the current \Sk{} release, there is a requirement that the specification should not have any unknowns. It is likely that this requirement will be relaxed in future releases. Also, when two functions are related by an \C{implements} directive, the compiler assumes that the specification is simpler than the implementation. The compiler exploits this by replacing every call to the implementation with a call to the specification during the analysis phase. This generally leads to big performance improvements for the solver, but it means that if the implementation calls itself recursively the system will not catch bugs that lead to infinite recursion.
\subsection{Assumptions}
Starting in \Sk{} 1.6.7, the compiler supports standard \C{assume} statements. The semantics of \C{assume $cond$;} dictate that when the condition $cond$ is false, execution stops and any subsequent assertions are ignored. When a function is called by another function, any assumptions in the callee become assertions from the point of view of the caller; \ie{} the caller must guarantee that the inputs and environment it passes to the callee satisfy the assumptions. \emph{Note that this is not true of generator functions (\secref{generators})}; as we will describe later, generators get inlined into their calling context, so any \C{assume} statement in the generator is an assume statement in the function that invokes the generator.
Also, when a function implements a specification, its assumptions must be weaker than those of the specification. In other words, any input that is legal for the specification must be legal for the implementation.
\begin{Example}{Assume and implements}
\begin{lstlisting}
harness int foo(int x){
assume x > 10;
int t= x-10;
assert t > 0;
return t;
}
int moo(int x) implements foo{
assume x > 3;
int t = x-??;
assert t > 0;
return t;
}
harness void main(int x){
assume x > 5;
int t = ??;
moo(x+t);
minimize(t);
}
\end{lstlisting}
In the example above, the harness \C{foo} assumes \C{x>10}, which allows it to satisfy the assertion \C{t>0}. In the case of \C{moo}, the unknown will resolve to \C{10} because of the constraint of equivalence with \C{foo}. Note that the assumption in \C{moo} is not strong enough to prove the assertion, but because \C{moo} implements \C{foo}, it inherits its preconditions.
\end{Example}
\subsection{Types for Constant Generators}
The constant hole \C{??} can actually stand for any of the following different types of constants:
\begin{itemize}
\item Integers (\C{int})
\item Booleans (\C{bit})
\item Constant sized arrays and nested constant sized arrays
\end{itemize}
The system will use a simple form of type inference to determine the exact type of a given hole.
\subsection{Ranges for holes}
When searching for the value of a constant hole, the synthesizer will only search values greater than or equal to zero and less than $2^N$, where $N$ is a parameter given by the flag \C{--bnd-ctrlbits}. If you want to be explicit about the number of bits for a given hole, you can state it as \C{??(N)}, where \C{N} is an integer constant.
\flagdoc{bnd-ctrlbits}{
The flag \C{bnd-ctrlbits} tells the synthesizer what range of values to consider for all integer holes. If one wants a given integer hole to span a different range of values, one can use the extended notation \C{??(N)}, where \C{N} is the number of bits to use for that hole.
}
\subsection{Minimizing Hole Values}
In many cases, it is useful to ask the synthesizer for the smallest constant that will satisfy a specification. For such cases, the synthesizer supports a function \C{minimize($e$)}, which asks the synthesizer to make $e$ as small as possible. More specifically, the synthesizer will find a minimal $bnd$ such that $e < bnd$ for all inputs. If the program includes multiple \C{minimize} statements, the synthesizer will find a locally minimal set of bounds such that there is no other set of bounds that is strictly better than the one found.
\flagdoc{bnd-mbits}{
The flag \C{bnd-mbits} tells the synthesizer how many bits to use to represent all bounds introduced by \C{minimize(e)}(default 5). Note that the largest value of \C{(e)} will be less than the bound, so if \C{e} can have value $n$, the bound needs enough bits to be able to reach $n+1$.
}
\begin{Example}
For example, consider the following simple program:
\begin{lstlisting}
harness void main(int i, int j){
int i_orig=i, j_orig=j;
if(i > ??){ // we'll call this unknown u1
i = ??; // u2
}
if(j > ??){ // u3
j = ??; // u4
}
if(i_orig > 3 && j_orig > 3)
assert 2*i + j > 6;
minimize(i);
minimize(j);
}
\end{lstlisting}
In the program above, the synthesizer will try to minimize the upper bound on \C{i} (we'll call it $b_i$) and the upper bound on \C{j} ($b_j$). One possible solution is to have $u1=3$, $u2=4$, $u3=0$, $u4=0$. This will allow the upper bounds $(b_i, b_j)$ to be $(5,1)$. A different possible solution is to have $u1=0$, $u2=0$, $u3=0$, $u4=7$, which will allow the upper bounds to be $(1, 8)$. While the first set of bounds appears better than the second, they are actually incomparable. By contrast, a solution that had $(b_i, b_j)=(6,1)$ would not be allowed because the solution with bounds $(5,1)$ is strictly better.
\end{Example}
The synthesizer has some syntactic sugar on top of \C{minimize} to support the synthesis of a minimal number of statements, a common idiom in \Sk{}. For example, the code below will produce the minimal number of assignments required to swap \C{x} and \C{y}.
\begin{lstlisting}
void swap(ref bit[W] x, ref bit[W] y){
minrepeat{
if(??){ x = x ^ y;}else{ y = x ^ y; }
}
}
harness void main(bit[W] x, bit[W] y){
bit[W] tx = x; bit[W] ty = y;
swap(x, y);
assert x==ty && y == tx;
}
\end{lstlisting}
\section{Generator functions}
\seclabel{generators}
A generator describes a space of possible code fragments that can be used to fill a hole. The constant generator we have seen so far corresponds to the simplest such space of code fragments: the space of integers in a particular range. More complex generators can be created by composing simple generators into \emph{generator functions}.
As a simple example, consider the problem of specifying the set of linear functions of two parameters \C{x} and \C{y}. That space of functions can be described with the following simple generator function:
\begin{lstlisting}
generator int legen(int i, int j){
return ??*i + ??*j+??;
}
\end{lstlisting}
The generator function can be used anywhere in the code in the same way a function would, but the semantics of generators are different from functions. In particular, every call to the generator will be replaced by a concrete piece of code in the space of code fragments defined by the generator. Different calls to the generator function can produce different code fragments. For example, consider the following use of the generator.
\begin{lstlisting}
harness void main(int x, int y){
assert legen(x, y) == 2*x + 3;
assert legen(x,y) == 3*x + 2*y;
}
\end{lstlisting}
Calling the solver on the above code produces the following output
\begin{lstlisting}
void _main (int x, int y){
assert ((((2 * x) + (0 * y)) + 3) == ((2 * x) + 3));
assert (((3 * x) + (2 * y)) == ((3 * x) + (2 * y)));
}
\end{lstlisting}
Note that each invocation of the generator function was replaced by a concrete code fragment in the space of code fragments defined by the generator.
The behavior of generator functions is very different from standard functions. If a standard function has generators inside it, those generators are resolved to produce code that will behave correctly in all the calling contexts of the function as illustrated by the example below.
\begin{lstlisting}
int linexp(int x, int y){
return ??*x + ??*y + ??;
}
harness void main(int x, int y){
assert linexp(x,y) >= 2*x + y;
assert linexp(x,y) <= 2*x + y+2;
}
\end{lstlisting}
For the routines above, there are many different solutions for the holes in \C{linexp} that will satisfy the first assertion, and there are many that will satisfy the second assertion, but the synthesizer will choose one of the candidates that satisfies both and produce the code shown below. Note that the compiler always replaces return values with reference parameters, but other than that, the code below is what you would expect.
\begin{lstlisting}
void linexp (int x, int y, ref int _out){
_out = 0;
_out = (2 * x) + (1 * y);
return;
}
void _main (int x, int y){
int _out = 0;
linexp(x, y, _out);
assert (_out >= ((2 * x) + y));
int _out_0 = 0;
linexp(x, y, _out_0);
assert (_out_0 <= (((2 * x) + y) + 2));
}
\end{lstlisting}
\subsection{Recursive Generator Functions}
Generators derive much of their expressive power from their ability to recursively define a space of expressions. For example, the code below shows how to use a recursive generator to define a context free grammar of possible expressions.
\begin{Example}{Recursive Generator}
\begin{lstlisting}
generator int rec(int x, int y, int z){
int t = ??;
if(t == 0){return x;}
if(t == 1){return y;}
if(t == 2){return z;}
int a = rec(x,y,z);
int b = rec(x,y,z);
if(t == 3){return a * b;}
if(t == 4){return a + b;}
if(t == 5){return a - b;}
}
harness void sketch( int x, int y, int z ){
assert rec(x,y, z) == (x + x) * (y - z);
}
\end{lstlisting}
\end{Example}
One must be careful when defining recursive generators, however, because the way the generator is defined can have a dramatic impact on the solution time of the resulting code. In particular, there are two aspects that the writer must keep in mind when writing a generator: recursion and symmetries.
\paragraph{Recursion control in generators}
The compiler handles recursive generators by inlining them a number of times, as guided by the \C{bnd-inline-amnt} flag. This simple approach can cause problems if recursive generators are not written carefully. For example, an alternative way of writing the generator above is shown below.
\begin{Example}{Inefficient recursive generator}
\begin{lstlisting}
generator int rec(int x, int y, int z){
int t = ??;
if(t == 0){return x;}
if(t == 1){return y;}
if(t == 2){return z;}
if(t == 3){return rec(x,y,z) * rec(x,y,z);}
if(t == 4){return rec(x,y,z) + rec(x,y,z);}
if(t == 5){return rec(x,y,z) - rec(x,y,z);}
}
harness void sketch( int x, int y, int z ){
assert rec(x,y, z) == (x + x) * (y - z);
}
\end{lstlisting}
\end{Example}
Both generators describe the same grammar, and therefore in principle the same space of possible expressions, but the second generator will cause problems because each recursive call to \C{rec} will be inlined independently, so each level of inlining will increase the size of the program by a factor of six, instead of only a factor of two.
Another potential issue with recursive generators is that the amount of inlining is controlled by the same flag used to control inlining of functions during analysis. This can be problematic because recursive functions in the program will often require much more inlining than generators. To address this problem, the user can take additional control over inlining by explicitly adding a bound parameter into the generator as shown below.
\begin{Example}{Generator with manual inlining control}
\begin{lstlisting}
generator int rec(int x, int y, int z, int bnd){
assert bnd >= 0;
int t = ??;
if(t == 0){return x;}
if(t == 1){return y;}
if(t == 2){return z;}
int a = rec(x,y,z, bnd-1);
int b = rec(x,y,z, bnd-1);
...
}
\end{lstlisting}
\end{Example}
The synthesizer performs partial evaluation in tandem with inlining, so if we call rec with a constant value for the \C{bnd} parameter, the synthesizer will stop inlining when it determines that this parameter will be less than zero.
\paragraph{Avoiding symmetries}
Another aspect to be careful with when defining recursive generators is symmetries. These arise when different assignments to the unknown values result in the exact same expression. An important source of symmetries is commutative and associative operations. For example, consider the two generators shown below.
\begin{Example}{Effect of symmetries on generators}
\begin{lstlisting}
generator int sum(int x, int y, int z, int bnd){
assert bnd > 0;
generator int factor(){
return {| x | y | z|} * {| x | y | z | ?? |};
}
if(??){ return factor(); }
else{return sum(x,y,z, bnd-1) + sum(x,y,z, bnd-1);}
}
generator int sumB(int x, int y, int z, int bnd){
assert bnd > 0;
generator int factor(){
return {| x | y | z|} * {| x | y | z | ?? |};
}
if(??){ return factor(); }
else{ return factor() + sumB(x,y,z, bnd-1);}
}
\end{lstlisting}
\end{Example}
Both represent the same space of expressions, but the generator \C{sumB} forces a right-associativity on the expression, whereas the generator \C{sum} can produce all possible associations, making the generator \C{sumB} more efficient than \C{sum}. Additionally, in \C{sumB} the \C{bnd} parameter has a clear meaning: it is the number of terms in the sum, whereas in generator \C{sum}, the parameter \C{bnd} is the depth of the AST, which is not as straightforward to map to something meaningful to the programmer.
\subsection{Regular Expression Generators}
Sketch provides some shorthand to make it easy to express simple sets of expressions. This shorthand is based on regular expressions. Regular expression generators give the synthesizer a set of choices to consider when searching for a correct solution to the sketch. The basic syntax is
\begin{lstlisting}
{| regexp |}
\end{lstlisting}
Where the regexp can use the operator | to describe choices, and the operator ? to define optional subexpressions.
For example, the sketch from the previous subsections can be made more succinct by using the regular expression shorthand.
\begin{lstlisting}
generator int rec(int x, int y, int z){
if(??){
return {| x | y | z |};
}else{
return {| rec(x,y,z) (+ | - | *) rec(x,y,z) |};
}
}
harness void sketch( int x, int y, int z ){
assert rec(x,y, z) == (x + x) * (y - z);
}
\end{lstlisting}
Regular expression holes can also be used with pointer expressions. For example, suppose you want to create a method to push a value into a stack, represented as a linked list. You could sketch the method with the following code:
\begin{lstlisting}
push(Stack s, int val){
Node n = new Node();
n.val = val;
{| (s.head | n)(.next)? |} = {| (s.head | n)(.next)? |};
{| (s.head | n)(.next)? |} = {| (s.head | n)(.next)? |};
}
\end{lstlisting}
\subsection{Local Variables Construct}\seclabel{localvariablesconstruct}
Sketch supports the use of the \C{$\$$(type)} construct to instruct the synthesizer to consider all variables of the specified \C{type} within scope when searching for a solution.
\begin{lstlisting}
harness void main(int x) {
int a = 2;
double b = 2.3;
assert x * $\$$(int) == x + x; $//$ $\$$(int) === {| 0 | a | x |}
}
\end{lstlisting}
The value of \textit{type} can be any of the primitive types (see \secref{primitives}) or any user defined type. The default value of any primitive type will also be considered as one of the choices. Local variables inside a function and its formal parameters are considered within scope of the construct. If the construct is used inside a local function, the local variables and formal parameters of the functions where it is defined are also within scope of the construct.
\subsection{High order generators}\seclabel{high-ordergenerators}
Generators can take other generators as parameters, and they can be passed as parameters to either generators or functions. This can be useful in defining flexible classes of generators. For example, the generator \C{rec} above assumes that you want expressions involving three integer variables, but in some cases you may only want two variables, or you may want five. The following code describes a more flexible generator:
\begin{lstlisting}
generator int rec(fun choices){
if(??){
return choices();
}else{
return {| rec(choices) (+ | - | *) rec(choices) |};
}
}
\end{lstlisting}
We can use this generator in the context of the previous example as follows:
\begin{lstlisting}
harness void sketch( int x, int y, int z ){
generator int F(){
return {| x | y | z |};
}
assert rec(F) == (x + x) * (y - z);
}
\end{lstlisting}
In a different context, we may want an expression involving some very specific sub-expressions, but the same generator can be reused in the new context.
\begin{lstlisting}
harness void sketch( int N, int[N] A, int x, int y ){
generator int F(){
return {| A[x] | x | y |};
}
if(x<N){
assert rec(F) == (A[x]+y)*x;
}
}
\end{lstlisting}
High order generators can also be used to describe patterns in the expected structure of the desired code. For example, if we believe the resulting code will have a repeating structure, we can express this with the following high-order generator:
\begin{lstlisting}
generator void rep(int n, fun f){
if(n>0){
f();
rep(n-1, f);
}
}
\end{lstlisting}
\subsection{\C{repeat} construct}
The pattern illustrated by the \C{rep} generator above---where the user wants to generate multiple statements, all of which can be generated from the same code with holes---is quite common. Sketch includes a repeat construct to do exactly this:
\begin{lstlisting}
repeat($N$){
$stmt$
}
\end{lstlisting}
This construct is just syntactic sugar for the recursive \C{rep} generator shown earlier. A big difference between using \C{repeat} and writing your own \C{rep} generator is that with repeat, the maximum number of repetitions will be dictated by the loop unrolling bound \C{bnd-unroll-amnt}, whereas if you write your own recursive generator, the maximum number of repetitions will be dictated by the inlining bound \C{bnd-inline-amnt}. Also note that if $N$ is not a constant or a constant hole, the result of using this construct will be a set of nested if statements.
There is also a convenient variant of this construct:
\begin{lstlisting}
repeat($i$ : $N$){
$stmt$
}
\end{lstlisting}
Where $i$ is a fresh variable name, which will automatically be declared as an integer and can be used within $stmt$ to keep track of which copy you are in. The variable will have the value $n$ in the $n^{th}$ copy of $stmt$. As an example of how this may be used, consider the following code:
\begin{lstlisting}
generator int add([int n, int k], int[n] A, int idx, int[k] offst){
int res = 0;
repeat(i: k){
res += A[idx + offst[i]];
}
return res;
}
int[n] combine([int n], int[n] A){
int[n] B;
for(int i=1; i<n-1; ++i){
      int[3] offsts = {??-1, ??-1, ??-1}; // The -1 is necessary because ?? can only take non-negative values.
B[i] = add(A, i, offsts);
}
return B;
}
harness void main(){
assert combine({2, 4, 5}) == {0, 11, 0};
}
\end{lstlisting}
Note that in the sketch above, the \C{-1} in the definition of offsts is necessary because constant holes \C{??} can only take non-negative values.
Given the harness, the \C{combine} function above will be synthesized into the code below.
\begin{lstlisting}
void combine (int n, int[n] A, ref int[n] _out)
{
_out = ((int[n])0);
for(int i = 1; i < (n - 1); i = i + 1)
{
int _out_s3 = A[i];
_out_s3 = _out_s3 + (A[i + -1]);
_out_s3 = _out_s3 + (A[i + 1]);
_out[i] = _out_s3;
}
return;
}
\end{lstlisting}
Note how the repeat is expanded into three separate statements that read from \C{A} and add the result to the output variable. The compiler is able to tell that the first one is adding to zero, so it simplifies it and folds it into the declaration of the variable.
\graphicspath{{Pics/combi/algo/}}
\newpage\section{Algorithmic}
\faka
\begin{myitemize}{}
\item \href{https://people.bath.ac.uk/masgcs/algorithms.pdf}{Handout by Cody Johnson}
\end{myitemize}
\faka
\subsection{Data Structures}
\dstruct{http://opendatastructures.org/versions/edition-0.1d/ods-java/node52.html}{Binary
Heap\label{problem:binary_heap}}{Stored like a segment tree: node \texttt{n} has
	children \texttt{2n+1, 2n+2}.
%\fig{.8}{BinaryHeap}{Binary Heap}
}
\begin{enumerate}
\iref{problem:binary_heap_1}{timus 1862,}{Sum of operations}{}
\end{enumerate}
\dstruct{http://www.shafaetsplanet.com/?p=763}{Disjoint Set}{This data
	structure keeps track of connectivity by assigning a representative to each
	connected subset: any two nodes \texttt{u, v} are connected iff
	they have the same representative.}
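A minimal C++ sketch of this structure with path compression (the names and interface are illustrative, not from any particular source):
\begin{verbatim}
#include <vector>

struct DisjointSet {
    std::vector<int> parent;
    DisjointSet(int n) : parent(n) {
        for (int i = 0; i < n; ++i) parent[i] = i; // every node starts as
    }                                              // its own representative
    int find(int u) {                              // representative of u
        return parent[u] == u ? u : parent[u] = find(parent[u]);
    }
    void unite(int u, int v) { parent[find(u)] = find(v); }
    bool connected(int u, int v) { return find(u) == find(v); }
};
\end{verbatim}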
\subsection{Minimal Spanning Tree}
\den{Minimum Spanning Tree}{A minimum spanning tree or minimum weight spanning
	tree is a subset of the edges of a connected, edge-weighted undirected
	graph that connects all the vertices together, without any cycles and
	with the minimum possible total edge weight.
	\fig{.4}{minimum_spanning_tree}{Minimal Spanning Tree}
}
\lem{MST Cut}{For any \hrf{definition:cut_graph_theory}{cut} of the graph, the
	lightest edge in that cut-set (if it is unique) is in every MST of the graph.}
\algorithm{https://en.wikipedia.org/wiki/Kruskal's_algorithm}{Kruskal's
	Algorithm}{Kruskal's algorithm is a `minimum-spanning-tree algorithm' which
	finds an edge of the least possible weight that connects any two trees in the
	forest. It is a greedy algorithm in graph theory, as it builds a minimum
	spanning tree for a connected weighted graph by adding increasing-cost edges
	at each step.}
\solu{To optimize this algorithm, the Disjoint Set data structure is used.}
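A minimal C++ sketch of Kruskal's algorithm, assuming the \texttt{DisjointSet} sketch above and a connected graph (the edge format is illustrative):
\begin{verbatim}
#include <algorithm>
#include <tuple>
#include <vector>

// edges are (weight, u, v); returns the total weight of an MST
long long kruskal(int n, std::vector<std::tuple<int,int,int>> edges) {
    std::sort(edges.begin(), edges.end());   // increasing-cost edges
    DisjointSet dsu(n);
    long long total = 0;
    for (auto [w, u, v] : edges)
        if (dsu.find(u) != dsu.find(v)) {    // edge joins two trees
            dsu.unite(u, v);
            total += w;
        }
    return total;
}
\end{verbatim}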
\algorithm{https://en.wikipedia.org/wiki/Prim's_algorithm}{Prim's
	Algorithm}{Greedily build the tree by adding edges one by one. At each step we
	add the minimum-cost edge that connects the tree to a vertex that is
	not yet in the tree.}
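A minimal C++ sketch of Prim's algorithm with a priority queue (the adjacency-list format, pairs of (weight, vertex), is an assumption):
\begin{verbatim}
#include <functional>
#include <queue>
#include <vector>

long long prim(const std::vector<std::vector<std::pair<int,int>>>& adj) {
    int n = adj.size();
    std::vector<bool> inTree(n, false);
    std::priority_queue<std::pair<int,int>,
        std::vector<std::pair<int,int>>, std::greater<>> pq;
    pq.push({0, 0});                      // (edge weight, vertex); start at 0
    long long total = 0;
    while (!pq.empty()) {
        auto [w, u] = pq.top(); pq.pop();
        if (inTree[u]) continue;          // already connected to the tree
        inTree[u] = true;
        total += w;
        for (auto [wv, v] : adj[u])
            if (!inTree[v]) pq.push({wv, v});
    }
    return total;
}
\end{verbatim}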
\subsection{Shortest Path Problem}
\den{Shortest Path Problem}{Finding the shortest path between two nodes in a
weighted or unweighted graph.}
\algorithm{https://en.wikipedia.org/wiki/Breadth-first_search}{Breadth-First
	Search}{This algorithm runs from a source node and ``levelizes'' the other
	nodes, computing each node's distance (level) from the source.}
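A minimal C++ sketch of this levelization (the adjacency-list representation is an assumption):
\begin{verbatim}
#include <queue>
#include <vector>

// level[v] = distance from src in an unweighted graph, -1 if unreachable
std::vector<int> bfsLevels(const std::vector<std::vector<int>>& adj, int src) {
    std::vector<int> level(adj.size(), -1);
    std::queue<int> q;
    level[src] = 0;
    q.push(src);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u])
            if (level[v] == -1) {         // not reached yet
                level[v] = level[u] + 1;  // one level deeper than u
                q.push(v);
            }
    }
    return level;
}
\end{verbatim}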
\algorithm{https://en.wikipedia.org/wiki/Dijkstra's_algorithm}{Dijkstra's
Algorithm}{It picks the unvisited vertex with the lowest distance, calculates
the distance through it to each unvisited neighbor, and updates the neighbor's
distance if smaller.}
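A minimal C++ sketch (non-negative edge weights assumed; stale priority-queue entries are simply skipped):
\begin{verbatim}
#include <functional>
#include <limits>
#include <queue>
#include <vector>

std::vector<long long> dijkstra(
        const std::vector<std::vector<std::pair<int,int>>>& adj, int src) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    std::priority_queue<std::pair<long long,int>,
        std::vector<std::pair<long long,int>>, std::greater<>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;          // stale entry, already improved
        for (auto [w, v] : adj[u])
            if (dist[u] + w < dist[v]) {     // relax the edge (u, v)
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
\end{verbatim}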
\subsection{Other CP Tricks}
\theo{}{Swap Sort}{In any swap sorting algorithm, the number of swaps needed
has the same parity.}\label{theorem:swap_sort_steps_parity}
\algorithm{https://wcipeg.com/wiki/Convex_hull_trick}{Convex Hull Trick}{Given
	many lines in the plane, and many queries each asking for the smallest
	value of $ y $ among the lines at a given $ x $, the optimal strategy is to
	sort the lines according to their slopes and add them to a stack, keeping only
	the ones that are relevant to the `minimal' convex hull (lower envelope) of
	those lines.}
\fig{.5}{Convex_hull_trick1}{Convex Hull Trick}
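A minimal C++ sketch of the fully monotone variant for minimum queries (assumes lines are added in decreasing order of slope and queries arrive with non-decreasing $x$; overflow and empty-structure edge cases are ignored):
\begin{verbatim}
#include <vector>

struct CHTMin {
    std::vector<long long> M, B;   // slopes and intercepts on the hull
    std::size_t ptr = 0;
    // line b never attains the minimum between lines a and c
    bool bad(std::size_t a, std::size_t b, std::size_t c) {
        return (B[c]-B[a])*(M[a]-M[b]) <= (B[b]-B[a])*(M[a]-M[c]);
    }
    void addLine(long long m, long long b) {   // m decreasing over calls
        M.push_back(m); B.push_back(b);
        while (M.size() >= 3 && bad(M.size()-3, M.size()-2, M.size()-1)) {
            M.erase(M.end()-2);                // drop the irrelevant line
            B.erase(B.end()-2);
        }
    }
    long long query(long long x) {             // x non-decreasing over calls
        if (ptr >= M.size()) ptr = M.size()-1;
        while (ptr+1 < M.size() && M[ptr+1]*x + B[ptr+1] <= M[ptr]*x + B[ptr])
            ++ptr;                             // walk along the envelope
        return M[ptr]*x + B[ptr];
    }
};
\end{verbatim}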
\subsection{Fast Fourier Transform}
Let $ A(x) = a_0 + a_1x + \dots + a_{n-1}x^{n-1} $ be a polynomial and let $
\w $ be an $ n $th root of unity, $ \w^n = 1 $. We use the \textbf{FFT} to multiply two
polynomials in $ O(n\log n) $ time.
\den{DFT Matrix}{Let $\w$ be an $n$th root of unity. The \textbf{DFT Matrix} is
	an $ n\times n $ matrix given by:
\[W = \begin{pmatrix}
1 & 1 & 1 & \dots & 1\\
1 & \w & \w^2 & \dots & \w^{n-1}\\
1 & \w^2 & \w^4 & \dots & \w^{2(n-1)}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
1 & \w^{n-1} & \w^{2(n-1)} & \dots & \w^{(n-1)^2}\\
\end{pmatrix}\]
And its inverse is given by:
\[W^{-1} = \frac{1}{n}
\begin{pmatrix}
1 & 1 & 1 & \dots & 1\\
1 & \w^{-1} & \w^{-2} & \dots & \w^{-(n-1)}\\
1 & \w^{-2} & \w^{-4} & \dots & \w^{-2(n-1)}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
1 & \w^{-(n-1)} & \w^{-2(n-1)} & \dots & \w^{-(n-1)^2}\\
\end{pmatrix}\]
}
\den{Discrete Fourier Transform}{The \textbf{Discrete Fourier transform} (DFT)
	of the polynomial $A(x)$ (or equivalently of the vector of coefficients
	$\left(a_{0}, a_{1}, \ldots, a_{n-1}\right)$) is defined as the values of
	the polynomial at the points $x=\w_n^k$, i.e. it is the vector:
\begin{align*}
\operatorname{DFT}\left(a_{0}, a_{1}, \ldots, a_{n-1}\right)
&=\left(y_{0}, y_{1}, \ldots,
y_{n-1}\right)=\left(A\left(\w_{n}^{0}\right),
A\left(\w_{n}^{1}\right), \ldots, A\left(\w_{n}^{n-1}\right)\right)
\end{align*}
In other words, we can write $ \operatorname{DFT}(A) $ as:
\[\operatorname{DFT}(A)=
\begin{pmatrix}
y_0\\
y_1\\
y_2\\
\vdots\\
		y_{n-1}
\end{pmatrix} =
\begin{pmatrix}
1 & 1 & 1 & \dots & 1\\
1 & \w & \w^2 & \dots & \w^{n-1}\\
1 & \w^2 & \w^4 & \dots & \w^{2(n-1)}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
1 & \w^{n-1} & \w^{2(n-1)} & \dots & \w^{(n-1)^2}\\
\end{pmatrix}
\begin{pmatrix}
a_0\\
a_1\\
a_2\\
\vdots\\
		a_{n-1}
\end{pmatrix}\]
}
\den{Inverse Discrete Fourier Transform}{The \textbf{Inverse Discrete Fourier Transform}
	maps the values of the polynomial $\left(y_{0}, y_{1}, \ldots,
	y_{n-1}\right)$ back to the coefficients of the polynomial $\left(a_{0}, a_{1},
	\ldots, a_{n-1}\right)$:
\[\text { InverseDFT }\left(y_{0}, y_{1}, \ldots, y_{n-1}\right)=\left(a_{0},
a_{1}, \ldots, a_{n-1}\right)\]
}
Thus, if a direct DFT computes the values of the polynomial at the points at
the $n$th roots of unity, the inverse DFT can restore the coefficients of the
polynomial using those values.
\begin{algo}[Multiplication of two polynomials]
Say we have $A, B$ two polynomials, and we want to compute $(A\cdot
B)(x)$. Then we first component-wise multiply $ \operatorname{DFT}(A) $
and $ \operatorname{DFT}(B) $, and then retrieve the coefficients of $
A\cdot B $ by applying the Inverse DFT.
\end{algo}
\begin{algo}[Fast Fourier Transform]
	We want to compute $ \operatorname{DFT}(A) $ in $ O(n\log n) $ time. Suppose $ n $ is a
power of $ 2 $, $ \w^n = 1 $, and $ A(x) = a_{0} x^{0}+a_{1}
x^{1}+\cdots+a_{n-1} x^{n-1} $. We use divide and conquer by considering:
\begin{align*}
A_{0}(x)&=a_{0} x^{0}+a_{2} x^{1}+\cdots+a_{n-2} x^{\frac{n}{2}-1} \\
A_{1}(x)&=a_{1} x^{0}+a_{3} x^{1}+\cdots+a_{n-1} x^{\frac{n}{2}-1}\\[.5em]
\and A(x) &= A_0(x^2) + xA_1(x^2)
\end{align*}
	After we have computed $ \text{DFT}(A_0) = (y^0_i)^{\frac{n}{2} - 1}_{i=0}
	$ and $ \text{DFT}(A_1) = (y^1_i)^{\frac{n}{2} - 1}_{i=0} $, we can
compute $ (y_i)^{n-1}_{i=0} $ by:
\[y_k =
\left\{\begin{array}{ll}
y^0_k + \w^ky^1_k, &\text{ for } k < \dfrac{n}{2}\\[1em]
y^0_k - \w^ky^1_k, &\text{ for } k \ge \dfrac{n}{2}
\end{array}\right.\]
\end{algo}
\begin{algo}[Inverse FFT]
	Since by definition we know $ \operatorname{DFT}(A) = W\times A $, we
	have $ A = W^{-1}\times\operatorname{DFT}(A) $. Comparing the matrix $ W^{-1} $
	above with $ W $, the inverse DFT is therefore computed by the same FFT
	procedure with $ \w^{-1} $ in place of $ \w $, followed by dividing every
	entry of the result by $ n $.
\end{algo}
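A minimal recursive C++ sketch of the two algorithms above (this follows the textbook recursion rather than an optimized in-place version; halving at every level accumulates the overall $1/n$ factor of $W^{-1}$):
\begin{verbatim}
#include <cmath>
#include <complex>
#include <vector>

void fft(std::vector<std::complex<double>>& a, bool invert) {
    std::size_t n = a.size();            // n is assumed to be a power of 2
    if (n == 1) return;
    std::vector<std::complex<double>> a0(n/2), a1(n/2);
    for (std::size_t i = 0; i < n/2; ++i) {
        a0[i] = a[2*i];                  // even-index coefficients: A_0
        a1[i] = a[2*i+1];                // odd-index coefficients:  A_1
    }
    fft(a0, invert);                     // DFT(A_0)
    fft(a1, invert);                     // DFT(A_1)
    double ang = 2 * std::acos(-1.0) / n * (invert ? -1 : 1);
    std::complex<double> w(1), wn(std::cos(ang), std::sin(ang));
    for (std::size_t k = 0; k < n/2; ++k) {
        a[k]       = a0[k] + w * a1[k];  // y_k       = y0_k + w^k y1_k
        a[k + n/2] = a0[k] - w * a1[k];  // y_{k+n/2} = y0_k - w^k y1_k
        if (invert) { a[k] /= 2; a[k + n/2] /= 2; }  // builds the 1/n factor
        w *= wn;
    }
}
\end{verbatim}
To multiply two polynomials, pad both coefficient vectors with zeros to a common power-of-two length at least the degree of the product plus one, run \texttt{fft(..., false)} on each, multiply component-wise, and finish with \texttt{fft(..., true)}.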
\newpage
\subsection{Problems}
\prob{https://artofproblemsolving.com/community/c6h1623516p10168885}{Iran TST
2018 P1}{E}{Let $A_1, A_2, ... , A_k$ be the subsets of
$\left\{1,2,3,...,n\right\}$ such that for all $1\leq i,j\leq k$:$A_i\cap
A_j \neq \varnothing$. Prove that there are $n$ distinct positive integers
	$x_1,x_2,...,x_n$ such that for each $1\leq j\leq k$: \[\operatorname{lcm}_{i \in
	A_j}\left\{x_i\right\}>\operatorname{lcm}_{i \notin A_j}\left\{x_i\right\}\]}
\solu{Apply induction on either $ k $ or $ n $.}
\prob{https://artofproblemsolving.com/community/c6h1480687p8639246}{ISL 2016
C1}{TE}{The leader of an IMO team chooses positive integers $ n $ and $ k $
with $ n > k $ , and announces them to the deputy leader and a contestant. The
leader then secretly tells the deputy leader an $ n $ -digit binary string,
and the deputy leader writes down all $ n $ -digit binary strings which differ
from the leader’s in exactly $ k $ positions. (For example, if $ n = 3 $ and $
k = 1 $ , and if the leader chooses $ 101 $ , the deputy leader would write
down $ 001, 111 $ and $ 100 $.) The contestant is allowed to look at the
strings written by the deputy leader and guess the leader’s string. What is
the minimum number of guesses (in terms of $ n $ and $ k $ ) needed to
guarantee the correct answer?}\label{problem:constructive_algo_10}
\solu{Small cases check.}
\prob{https://artofproblemsolving.com/community/c6h44479p281572}{ISL 2005
C2}{E}{Let $a_1,a_2,\ldots$ be a sequence of integers with infinitely many
positive and negative terms. Suppose that for every positive integer $n$ the
numbers $a_1,a_2,\ldots,a_n$ leave $n$ different remainders upon division by
$n$. Prove that every integer occurs exactly once in the sequence
$a_1,a_2,\ldots$.}\label{problem:constructive_algo_22}
\solu{Construct starting from the beginning of the sequence.}
\prob{http://ioi2017.org/tasks/practice/coins.pdf}{IOI Practice 2017}{M}{ $ C
$ plays a game with $ A $ and $ B $. There's a room with a table. First $
C $ goes in the room and puts $ 64 $ coins on the table in a row. Each
coin is facing either heads or tails. Coins are identical to one another,
but one of them is cursed. $ C $ decides to put that coin in position $ c
$. Then he calls in $ A $ and shows him the position of the cursed coin.
Now he allows $ A $ to flip some coins if he wants (he can't move any coin
	to other positions). After that $ A $ and $ C $ leave the room and send
in $ B $. If $ B $ can identify the cursed coin then $ C $ loses,
otherwise $ C $ wins.
The rules of the game are explained to $ A $ and $ B $ beforehand, so they
can discuss their strategy before entering the room. Find the minimum
number $ k $ of coin flips required by $ A $ so that no matter what
configuration of $ 64 $ coins C gives them and where he puts the cursed
coin, $ A $ and $ B $ can win with $ A $ flipping at most $ k $ coins.
Find constructions for $ k=32, 8, 6, 3, 2, 1 $ }\label{problem:binary_1}
\solu{XOR XOR XOR \hrf{binary}{binary representation}}
\prob{http://codeforces.com/contest/987/problem/E}{Codeforces 987E}{E}{Petr
	likes to come up with problems about randomly generated data. This time the
	problem is about a random permutation. He decided to generate a random
	permutation this way: he takes the identity permutation of the numbers from $ 1 $
	to $ n $ and then $ 3n $ times takes a random pair of different elements
	and swaps them. Alex envies Petr and tries to imitate him in all kinds of
	things. Alex has also come up with a problem about a random permutation. He
	generates a random permutation just like Petr but swaps elements $ 7n+1 $
	times instead of $ 3n $ times. Because it is more random, OK?!\\
You somehow get a test from one of these problems and now you want to know
from which one.}
\solu{\hrf{theorem:swap_sort_steps_parity}{This theorem} kills this problem
instantly.}\label{problem:invariant_rules_of_thumb_10}
\prob{https://artofproblemsolving.com/community/c6h53673p336210}{USAMO 2013
	P6}{H}{At the vertices of a regular hexagon are written six nonnegative
integers whose sum is $ 2003^{2003} $. Bert is allowed to make moves of the
following form: he may pick a vertex and replace the number written there by
the absolute value of the difference between the numbers written at the two
neighboring vertices. Prove that Bert can make a sequence of moves, after
which the number $ 0 $ appears at all six
vertices.}\label{problem:monotonicity_with_constraints_1}
\solu{The first thing that comes to mind is to decrease the maximum value, but
	since this is a P6, this naive algorithm must fail somewhere. Surely, we
	can't follow it in the case $ (k, k, 0, k, k, 0) $. But in that case, the sum
	becomes even. So we have to slowly minimize the maximum,
	\hrf{monotonicity_with_constraints}{keeping the sum odd}. And since the case
	with only odd numbers on the board is the easiest to handle, we solve that
	case first, and the other cases can be easily handled with an additional
	algorithm.}
\prob{https://artofproblemsolving.com/community/c6h546168p3160559}{ISL 2012
C1}{E}{Several positive integers are written in a row. Iteratively, Alice
chooses two adjacent numbers $ x $ and $ y $ such that $ x>y $ and $ x $ is to
the left of $ y $ , and replaces the pair $ (x,y) $ by either $ (y+1,x) $ or $
(x-1,x) $. Prove that she can perform only finitely many such
iterations.}\label{problem:invariant_rules_of_thumb_2}
\solu{Easy invariant.}
\prob{https://artofproblemsolving.com/community/q1h588238p3482449}{AoPS}{E}{There
	is a number from the set $ \lbrace 1,-1\rbrace $ written at each of the
	vertices of a regular dodecagon ($ 12 $-gon). In a single turn we select
	$ 3 $ numbers in a row and change their signs. In the beginning all
	numbers except one are equal to $ 1 $. Can we move the only $ -1 $ to an
	adjacent vertex after a finite number of
	turns?}\label{problem:invariant_rules_of_thumb_6}
\solu{Algo+Proof $ \Longrightarrow $ Invariant.}
\prob{https://artofproblemsolving.com/community/c6h57329p352820}{ISL 1994
C3}{EH}{Peter has three accounts in a bank, each with an integral number of
dollars. He is only allowed to transfer money from one account to another so
that the amount of money in the latter is doubled. Prove that Peter can always
transfer all his money into two accounts. Can Peter always transfer all his
money into one account?}\label{problem:invariant_rules_of_thumb_9}
\solu{Since we want to decrease the minimum, one of the simplest ways is to
	mimic the Euclidean algorithm. So we sort the accounts, $ A < B < C $, write
	$ B = qA+r $, and experiment a little to turn $ B $ into $ r $.}
\prob{https://artofproblemsolving.com/community/c6h225457p1251792}{MEMO 2008,
Team, P6}{M}{On a blackboard there are $ n \geq 2, n \in \mathbb{Z}^{+}$
numbers. In each step we select two numbers from the blackboard and replace
both of them by their sum. Determine all numbers $ n$ for which it is possible
	to yield $ n$ identical numbers after a finite number of
steps.}\label{problem:monotonicity_with_constraints_2}
\solu{The pair thing rules out the case of odds. For evens, we make two
identical sets, and focus on only one of the sets, with an additional move $ x
\rightarrow 2x $ available to use. Since we can now change the powers of $ 2 $
	at our will at any time, we only focus on the greatest odd divisors. Our aim
is to slowly decrease the largest odd divisor.}
\prob{https://artofproblemsolving.com/community/c6h1176476p5679356}{USA Dec
TST 2016, P1}{E}{Let $S = \{1, \dots, n\}$. Given a bijection $f : S \to
S$ an orbit of $f$ is a set of the form $\{x, f(x), f(f(x)), \dots \}$ for
some $x \in S$. We denote by $c(f)$ the number of distinct orbits of $f$.
For example, if $n=3$ and $f(1)=2$, $f(2)=1$, $f(3)=3$, the two orbits are
$\{1,2\}$ and $\{3\}$, hence $c(f)=2$.\\
Given $k$ bijections $f_1$, $\ldots$, $f_k$ from $S$ to itself, prove that \[
c(f_1) + \dots + c(f_k) \le n(k-1) + c(f) \]where $f : S \to S$ is the
composed function $f_1 \circ \dots \circ
f_k$.}\label{problem:induction_type1_32}
\solu{Induction reduces the problem to the case of $ k=2 $. Then another
	induction on $ c(f_1) $ solves the problem. The latter induction works on the
	basis of the fact that a ``swap'' in the bijection changes the number of
	cycles by exactly $ 1 $ (either $ +1 $ or $ -1 $).}
\prob{}{Cody Johnson}{E}{Consider a set of $ 6 $ integers $ S = \{a_1\dots a_6
	\} $. At one step, you can add $ +1 $ or $ -1 $ to all of the $ 6 $ integers.
Prove that you can make a finite number of moves so that after the moves, you
have $ a_1a_5a_6 = a_2a_4a_6 = a_3a_4a_5 $}
\prob{https://artofproblemsolving.com/community/c6h596930p3542095}{ISL 2014
A1}{E}{Let $a_0 < a_1 < a_2 \ldots$ be an infinite sequence of positive
integers. Prove that there exists a unique integer $n\geq 1$ such that \[a_n <
\frac{a_0+a_1+a_2+\cdots+a_n}{n} \leq
a_{n+1}.\]}\label{problem:constructive_algo_21}
\solu{My idea was to construct the sequence under the assumption that the
	condition is false. It leads to either all of the right inequalities being
	false, or the condition being true.}
\solu{The magical solution: defining $b_n=(a_n-a_{n-1})+\dots+(a_n-a_1)$ which
	eases the inequality.}
\solu{\hrf{http://artofproblemsolving.com/community/c6h596930p3542145}{The
	beautiful solution}: defining $ \Delta_i $ by $ a_n = a_0 + \Delta_1 +
	\Delta_2 + ... + \Delta_n $ for all $ n $.}
\solu{Another idea is to first prove the existence and then to prove the
uniqueness.}
\prob{https://artofproblemsolving.com/community/c6h597093p3543144}{ISL 2014
N3}{EM}{For each positive integer $n$, the Bank of Cape Town issues coins of
denomination $\frac1n$. Given a finite collection of such coins (of not
necessarily different denominations) with total value at most most
$99+\frac12$, prove that it is possible to split this collection into $100$ or
fewer groups, such that each group has total value at most
$1$.}\label{problem:greedy_algorithm_1}\label{problem:extremal_case_whole_11}
\solu{Notice that the sum of the geometric series $ S = \dfrac 1 2 + \dfrac{ 1
	}{2^2} + \dfrac{ 1 }{2^3} + \dots $ is $ 1 $. And in another problem we
	partitioned the set of integers into subsets, with each subset starting with an
	odd number $ k $ and every other element of the subset being $ 2^i\cdot k $. We do
	similarly in this problem, and partition the set of the coins in a similar
	way. Then we take the first $ 100 $ sets, each with sum less than $ 1 $, and
	insert the remaining coins into these sets, using the condition that the sum of
	all of the coins is at most $ 99 + \dfrac 1 2 $.
\hrf{http://artofproblemsolving.com/community/c6h597093p3543181}{Solu}}
\solu{Replacing $ 100 $ by $ n $, we show that the condition holds for all $ n $.
	Assume otherwise. Take the minimal $ n $ for which the condition does
	not work. Ta-Da! We can show that if $ n $ does not work, then neither does $ n-1 $.
	\hrf{http://artofproblemsolving.com/community/c6h597093p3543194}{Solu}}
\solu{Or just be an EChen and prove the result for at most $k -
\dfrac{k}{2k+1}$ with $k$ groups.}
\solu{Very similar to \hrf{problem:greedy_algorithm_2}{this} problem}
\prob{https://artofproblemsolving.com/community/c6h97506p550634}{China TST
2006}{E}{Given positive integer $n$, find the biggest real number $C$ which
satisfy the condition that if the sum of the reciprocals ($ \frac 1 n $ is the
reciprocal of $ n $) of a set of integers (They can be the same.) that are
greater than $1$ is less than $C$, then we can divide the set of numbers into
no more than $n$ groups so that the sum of reciprocals of every group is less
than $1$.}\label{problem:greedy_algorithm_2}
\prob{}{}{E}{In an $ n\times n $ grid, every cell is either black or white. A
`command' is a pair of integers, $ i, j \le n $, after which all of the
cells in the $ i^{th} $ row and the $ j^{th} $ column (meaning a total of
	$ 2n-1 $ cells) will switch their state. Our goal is to make every cell
	have the same state.
\begin{enumerate} \item Prove that if it can be done, it can be done in
less than $ \frac{n^2}{2} $ commands. \item Prove that it can always be done
if $ n $ is even. \item Prove or disprove for odd $ n $.
\end{enumerate}}\label{problem:constructive_algo_23}
\solu{(a) is really easy: just take into account that performing all $ n^2 $
	commands switches every cell (each cell is affected by $ 2n-1 $ of them, an
	odd number), so a solution set of commands can be replaced by its complement.
	And the question did not ask for an algorithm.}
\solu{(b) is also easy: notice that we can pair the columns and then make them
	look the same with a compound command. A better algorithm is to modify the
	original move: take one cell, then perform the original move on all cells in
	the row and column of this cell.}
\solu{(c) uses Linear Algebra, which I don't know yet, or... use double
	counting to build the criteria for the function $ f:\text{states} \rightarrow
	\text{subset of moves} $ being bijective.}
\prob{}{OIM 1994, PSMiC}{E}{In every square of an $ n\times n $ board there is
a lamp. Initially all the lamps are turned off. Touching a lamp changes the
state of all the lamps in its row and its column (including the lamp that was
touched). Prove that we can always turn on all the lamps and find the minimum
number of lamps we have to touch to do this.}
\prob{https://atcoder.jp/contests/agc043/tasks/agc043_b}{AtCoder GC043
B}{M}{Given is a sequence of $ N $ digits $ a_1, a_2\ldots a_N $, where
	each element is $ 1, 2 $, or $ 3 $. Let $ x_{i,j} $ be defined as follows:
\begin{itemize} \item $ x_{1,j} := a_j \quad (1 \leq j \leq N) $ \item $
x_{i,j} := | x_{i-1,j} - x_{i-1,j+1}| \quad (2 \leq i \leq N \text{ and }
1 \leq j \leq N+1-i) $ \end{itemize}
Find $ x_{N,1} $.}
\solu{Since $ |x-y| \equiv x+y \pmod 2 $, we can determine the parity of $
	x_{N,1} $ using binomial coefficients, which in turn we can get in $ O(n) $
	with bitwise operators. Now we have to distinguish between $ 0 $ and $ 2 $. For $ 2 $,
	all of the rows starting with the second one should have only $ 2 $ and $ 0 $,
	in which case we can apply the same algorithm as before and find whether the final
	digit is $ 2 $ or $ 0 $.}
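The bitwise trick alluded to above is Lucas' theorem modulo $2$: $\binom{n}{k}$ is odd exactly when every binary digit of $k$ is at most the corresponding digit of $n$. A one-line C++ sketch:
\begin{verbatim}
// C(n, k) is odd iff k's bits form a subset of n's bits
bool binomIsOdd(long long n, long long k) { return (n & k) == k; }
\end{verbatim}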
\prob{acm.timus.ru/problem.aspx?space=1&num=1578}{Timus 1578}{E}{The very last
mammoth runs away from a group of primeval hunters. The hunters are
fierce, hungry and are armed with bludgeons and stone axes. In order to
escape from his pursuers, the mammoth tries to foul the trail. Its path is
a polyline (not necessarily simple). Besides, all the pairs of adjacent
segments of the polyline form acute angles (an angle of 0 degrees is also
considered acute).\\
After the mammoth vanished, it turned out that it had made exactly N turns
while running away. The points where the mammoth turned, as well as the points
where the pursuit started and where the pursuit ended, are known. You are to
determine one of the possible paths of the
mammoth.}\label{problem:greedy_algorithm_3}
\prob{http://codeforces.com/contest/744/problem/B}{CodeForces 744B}{E}{Given a
	hidden matrix of $ n\times n,\ n\leq1000 $ where for every $ i $, $ M_{(i, i)}
	= 0 $, Luffy's task is to find the minimum value in each of the $ n $ rows; formally
	speaking, he has to find the values $ \min_{j=1\dots n,\ j\not=i} M_{(i, j)} $. To
	do this he can ask the computer questions of the following type: in one question,
	Luffy picks a set $ \{a_1, a_2\dots a_k\} $ with $ a_i, k \leq n $ and
	gives the computer this set. The computer will respond with $ n $ integers;
	the $ i $-th integer will contain the minimum value $ \min_{j=1\dots k}
	M_{(i, a_j)}$. On top of this, he can only ask $ 20 $ questions. Luffy,
	being the stupid he is, doesn't even have a clue how to do this; you have to
	help him solve this
	problem.}\label{problem:divide_and_conquer_2}\label{problem:binary_query_1}
\solu{If we draw the diagonal in the matrix, we see that we can fit boxes of $
	2^i \times 2^i $ in there depending on the value of $ i $. Now after we have
	decomposed the matrix into such boxes, we can choose several of them to ask
	a question. The trick is that for every row, there must be questions asked
	from each of the boxes this row covers, and no question from here may contain
	the $ (i, i) $ cell.}
%\fig{.3}{CF744B}{}
\solu{The \hrf{problem:binary_query_1}{magical solution} goes as follows:
	for $ i\leq 10 $, for every $ k=1\dots n $, include $ k $ in the question if
	the $ i $th bit of $ k $'s binary form is $ 0 $. And then for the second
	round include $ k $ in the question if the $ i $th bit of $ k $'s binary
	form is $ 1 $.}
\prob{}{}{E}{Alice wants to add an edge $ (u, v) $ in a graph. You want to
	know what this edge is. So, you can ask some questions to Alice. For each
	question, you will give $ 2 $ non-empty disjoint sets $ S $ and $ T $
	to Alice, and Alice will answer ``true'' iff $ u $ and $ v $ belong to different
	sets. You can ask at most $ 3\ceil{\log_2|V|} $ questions to Alice. Describe
	a strategy to find the edge $ (u, v)
	$.}\label{problem:binary_query_2}\label{problem:divide_and_conquer_3}
\solu{First find one true answer in $ \ceil{\log_2|V|} $ questions, and then
	get the result out of these two sets in $ 2\ceil{\log_2|V|} $ questions.}
\solu{The \hrf{problem:binary_query_2}{magical solution} goes as follows: in
	the $ i^{th} $ question, $ S = \{x : i^{th} \text{ bit of } x \text{ is } 0\},\
	T = \{x : i^{th} \text{ bit of } x \text{ is } 1\} $.}
\prob{https://artofproblemsolving.com/community/c5h1083477p4774079}{USAMO 2015
P4}{E}{Steve is piling $m\geq 1$ indistinguishable stones on the squares
of an $n\times n$ grid. Each square can have an arbitrarily high pile of
stones. After he finished piling his stones in some manner, he can then
perform stone moves, defined as follows. Consider any four grid squares,
which are corners of a rectangle, i.e. in positions $(i, k), (i, l), (j,
k), (j, l)$ for some $1\leq i, j, k, l\leq n$, such that $i<j$ and $k<l$.
A stone move consists of either removing one stone from each of $(i, k)$
and $(j, l)$ and moving them to $(i, l)$ and $(j, k)$ respectively,j or
removing one stone from each of $(i, l)$ and $(j, k)$ and moving them to
$(i, k)$ and $(j, l)$ respectively.\\
Two ways of piling the stones are equivalent if they can be obtained from
one another by a sequence of stone moves.\\
How many different non-equivalent ways can Steve pile the stones on the
grid?}\label{problem:constructive_algo_24}\label{problem:invariant_rules_of_thumb_12}
\solu{Building an invariant, we see that the column sums alone are not
	sufficient. So to get more control, we take the row sums into account as
	well.}
\prob{https://artofproblemsolving.com/community/c6h5758p18993}{ISL 2003
C4}{E}{Let $x_1,\ldots, x_n$ and $y_1,\ldots, y_n$ be real numbers. Let $A =
(a_{ij})_{1\leq i,j\leq n}$ be the matrix with entries \[a_{ij} =
\begin{cases}1,&\text{if }x_i + y_j\geq 0;\\0,&\text{if }x_i + y_j <
0.\end{cases}\]Suppose that $B$ is an $n\times n$ matrix with entries $0$, $1$
such that the sum of the elements in each row and each column of $B$ is equal
to the corresponding sum for the matrix $A$. Prove that
$A=B$.}\label{problem:constructive_algo_25}
\solu{If done after \hrf{problem:constructive_algo_24}{this} problem, this
	problem seems straightforward.}
\prob{https://artofproblemsolving.com/community/c6h1557188p9502889}{India
TST 2017 D1 P3}{E}{Let $n \ge 1$ be a positive integer. An $n \times
n$ matrix is called \textit{good} if each entry is a non-negative
integer, the sum of entries in each row and each column is equal. A
\textit{permutation} matrix is an $n \times n$ matrix consisting of
$n$ ones and $n(n-1)$ zeroes such that each row and each column has
exactly one non-zero entry.\\
Prove that any \textit{good} matrix is a sum of finitely many
\textit{permutation} matrices.}
\solu{Same algo as \hrf{problem:constructive_algo_24}{above}. Either
	distributing uniformly or gathering all in a diagonal.}
\prob{https://artofproblemsolving.com/community/c6h1388469p7730613}{Tournament
	of Towns 2015F S7}{MH}{$N$ children, no two of the same height, stand in a
line. The following two-step procedure is applied: first, the line is split
into the least possible number of groups so that in each group all children
are arranged from the left to the right in ascending order of their heights (a
group may consist of a single child). Second, the order of children in each
group is reversed, so now in each group the children stand in descending order
	of their heights. Prove that as a result of applying this procedure $N - 1$
times the children in the line would stand from the left to the right in
descending order of their heights.}
\solu{It's obvious that we need to find some invariant or mono-variant.
	Now, an idea: we need to show that for any $ i $, getting to its
	rightful place doesn't take more than $ N-1 $ moves. How do we show that?
	Another idea: think about the bad bois on either of its sides. Now, an
	observation: `junctions' decrease with each move. Find the `junctions'.}
\prob{https://main.edu.pl/en/archive/oi/2/sze}{Polish OI}{E}{Given $ n $
jobs, indexed from $ 1, 2\dots n $. Given two sequences of reals, $
	\{a_i\}^n_{i=1}, \{b_i\}^n_{i=1} $ where $ 0 \leq a_i, b_i \leq 1 $. If job $
i $ starts at time $ t $ , then the job takes $ h_i(t) = a_it+b_i $ time to
finish. Order the jobs in a way such that the total time taken by all of the
jobs is the
minimum.}\label{problem:forget_and_focus_2}\label{problem:swapping_1}
\solu{Example of a problem which is solved by investigating two adjacent
objects in the optimal arrangement.}
\prob{http://codeforces.com/problemset/problem/960/C}{CodeForces
960/C}{E}{Pikachu had an array with him. He wrote down all the
non-empty subsequences of the array on paper. Note that an array of
size $ n $ has $ 2^n - 1 $ non-empty subsequences in it.\\
Pikachu being mischievous as he always is, removed all the
subsequences in which \[ \text{Maximum element of the subsequence} -
\text{Minimum element of subsequence} \geq d \]
Pikachu was finally left with $ X $ subsequences. \\
However, he lost the initial array he had, and now is in serious
trouble. He still remembers the numbers $ X $ and $ d $. He now wants
you to construct any such array which will satisfy the above
conditions. All the numbers in the final array should be positive
integers less than $ 10^{18} $.\\
Note the number of elements in the output array should not be more than
$10^4$. If no answer is possible, print $ -1
$.}\label{problem:constructive_algo_3}
\prob{https://artofproblemsolving.com/community/c6h35318p220230}{ARO 2005
P10.3, P11.2}{M}{Given $ 2005 $ distinct numbers $
a_1,\,a_2,\dots,a_{2005} $. By one question, we may take three different
indices $ 1\le i<j<k\le 2005 $ and find out the set of numbers $
\{a_i,\,a_j,\,a_k\} $ (unordered, of course). Find the minimal number of
questions, which are necessary to find out all numbers $ a_i
$.}\label{problem:constructive_algo_4}
\solu{The key idea is to ask questions that are linked to multiple other
	questions, so that each question uniquely determines several elements
	together: one by itself immediately after the question has been asked, and
	others after the related questions have been asked. As each question reveals
	three elements' values, say first, second, third, let us find the first from
	the previous question, the second from the current question, and the third
	from the next question.}
\prob{https://wcipeg.com/problem/ioi0713}{IOI 2007 P3}{M}{You are given
two sets of integers $ A=\{a_1, a_2 \dots a_n\} $ and $ B=\{b_1, b_2 \dots
b_n\} $ such that $ a_i \geq b_i $. At move $ i $ you have to pick $ b_i $
distinct integers from the set $ A_i = \{1, 2, \dots a_i\} $. In total, $ (b_1
+ b_2 +\dots + b_n) $ integers are selected, but not all of these are
distinct. Suppose $ k $ distinct integers have been selected, with
multiplicities $ c_1, c_2, c_3 \dots c_k $. Your score is defined as
\[\sum^k_{i=1}c_i(c_i-1)\] Give an efficient algorithm to select numbers in
order to ``minimize'' your
score.}\label{problem:constructive_algo_5}\label{problem:swapping_2}
\solu{Some investigation shows that if $ c_i > c_j + 1 $ and $ i > j $,
	then we can always decrease the score. And if $ i < j $, then we can
	decrease the score only when $ i, j \in A_k $ but $ i $ has been taken at move
	$ k $ and $ j $ hasn't. So in the minimal state, either both $ i, j $ have
	been taken at move $ k $, or $ a_k < j $. So the idea is to take elements
	from $ A_i $ as large as possible, and then take smaller values afterwards
	if the $ c_i $ value of a big element gets more than that of a small element.
	In this algorithm, we see that we greedily manipulate $ c_i $. So it is a good
	idea to greedily choose the $ c_i $'s from the very beginning.}
\solu{\textbf{Solution Algo:} at step $ i $, take the set $ \{c_1, c_2
	\dots c_{a_i}\} $, take the $ b_i $ smallest entries of this set, and add $ 1
	$ to each of them (in other words, take their index numbers as the numbers to
	take).}
\prob{}{}{E}{Given $ n $ numbers $ \{a_1, a_2, ..., a_n\} $ in arbitrary
order, you have to select $ k $ of them such that no two consecutive
numbers are selected and their sum is
maximized.}\label{problem:constructive_algo_6}\label{problem:swapping_3}
\solu{Notice that if $ a_i $ is the maximum value, and if $ a_i $ is not
counted in the optimal solution, then both of $ a_{i-1}, a_{i+1} $ must be
in the optimal solution, and $ a_{i-1} + a_{i+1} > a_i $. And if $ a_i $ is
counted in the optimal solution, then none of $ a_{i-1}, a_{i+1} $ can be
counted in the optimal solution. So either way, we can remove these three and
replace them by a single element to use induction. So remove $ a_{i-1},
a_{i+1} $ and replace $ a_i $ by $ a_{i-1} + a_{i+1} - a_i $.}
\prob{https://artofproblemsolving.com/community/c5h347285p1860777}{USAMO
2010 P2}{EM}{There are $n$ students standing in a circle, one behind the
other. The students have heights $h_1<h_2<\dots <h_n$. If a student with
height $h_k$ is standing directly behind a student with height $h_{k-2}$ or
less, the two students are permitted to switch places. Prove that it is not
possible to make more than $\binom{n}{3}$ such switches before reaching a
position in which no further switches are possible.}
\prob{https://artofproblemsolving.com/community/c6h1070674p4655727}{Serbia
	TST 2015 P3}{H}{We have $2015$ prisoners. The king gives everyone a hat
	colored in one of $5$ colors. Everyone sees all hats except his own. Now the
	King orders them in a line (a prisoner can see all the people behind and in
	front of him). The king asks the prisoners one by one whether they know the
	color of their hat. If a prisoner answers NO, then he is killed. If he
	answers YES, then he says which color his hat is; if his answer is true, he
	goes free, if not, he is killed. All the prisoners can hear whether he
	answered YES or NO, but if he answered YES, they don't know what he answered
	(he is killed in public). They can think of a strategy before the King comes,
	but after that they can't communicate. What is the largest number of
	prisoners we can guarantee to survive?}
\prob{https://artofproblemsolving.com/community/c6h1113653p5087465}{Taiwan
	TST 2015 R3D1P1}{E}{A plane has several seats on it, each with its own
	price, as shown below. $2n-2$ passengers wish to take this plane, but
	none of them wants to sit with any other passenger in the same column
	or row. The captain realizes that, no matter how he arranges the
	passengers, the total money he can collect is the same. Prove this
	fact, and compute how much money the captain can collect.
	\fig{.5}{TaiwanTST2015R3D1P1}{} }
\prob{https://artofproblemsolving.com/community/c6h1113195p5083566}{ISL
2014 N1}{E}{Let $n \ge 2$ be an integer, and let $A_n$ be the set \[A_n =
\{2^n - 2^k\mid k \in \mathbb{Z},\, 0 \le k < n\}.\] Determine the largest
positive integer that cannot be written as the sum of one or more (not
necessarily distinct) elements of $A_n$ .}
\solu{Inductive approach}
\prob{https://codeforces.com/contest/330/problem/D}{Codeforces
330D}{E}{Biridian Forest}
\solu{Generalize the condition for a meet-up.}
\prob{https://codeforces.com/contest/1270/problem/F}{Codeforces
1270F}{E}{Let's call a binary string $ s $ awesome, if it has at least
$ 1 $ symbol \texttt{1} and length of the string is divisible by the
number of \texttt{1} in it. In particular, \texttt{1}, \texttt{1010},
\texttt{111} are awesome, but \texttt{0}, \texttt{110}, \texttt{01010}
aren't.\\
You are given a binary string $ s $ of size $ \le 2\times10^5 $. Count the
number of its awesome substrings.}
\solu{The constraint tells us the algorithm should be of complexity $
	O(n\sqrt{n}) $. Playing with the function $ f(i) $ and the divisibility
	condition gives us some ground to work with. Since we need $ \le
	c\times\sqrt{n} $ queries around the full string, we remember the prime sieve
	trick.}
\prob{https://artofproblemsolving.com/community/c6h60773p366562}{IMO 1986
P3}{M}{To each vertex of a regular pentagon an integer is assigned, so
that the sum of all five numbers is positive. If three consecutive vertices
are assigned the numbers $x,y,z$ respectively, and $y<0$, then the following
operation is allowed: $x,y,z$ are replaced by $x+y,-y,z+y$ respectively. Such
an operation is performed repeatedly as long as at least one of the five
numbers is negative. Determine whether this procedure necessarily comes to an
end after a finite number of steps.}
\solu{Notice how starting from one negative and moving to the consecutive
	number on one side moves the number by one vertex nicely.}
\newpage
\subsection{Algorithm Analysis}
\prob{https://artofproblemsolving.com/community/c6h2112349}{GQMO 2020
	P4}{MH}{Prove that, for all sufficiently large integers $n$, there exist
	$n$ numbers $a_1, a_2, \dots, a_n$ satisfying the following three
	conditions:
	each number $a_i$ is equal to either $-1, 0$ or $1$; at least
	$\dfrac{2n}{5}$ of the numbers $a_1, a_2, \dots, a_n$ are non-zero; and the
	sum $\dfrac{a_1}{1} + \dfrac{a_2}{2} + \dots + \dfrac{a_n}{n}$ is $0$.}
\newpage
\subsection{Covering Area with Squares}
\begin{myitemize} \item
\href{https://artofproblemsolving.com/community/c301601h1551769_cover_story_squares}{A
nice blog post by ankogonit} \end{myitemize}
\prob{https://artofproblemsolving.com/community/c301601h1551769_cover_story_squares}{Brazilian
MO 2002, ARO 1979}{E}{Given a finite collection of squares with total area at
least $4$, prove that you can cover a unit square completely with these
squares (with overlapping allowed, of course).}
\solu{Maybe motivated by the number $ 4 $ and how nice it would be if all the
	squares had side lengths that are powers of $ 2 $, the idea is to shrink
	every square so that its side length is a power of $ 2 $.}
\prob{}{ARO 1979's Sharper Version}{E}{Given a finite collection of squares
with total area at least $3$, prove that you can cover a unit square
completely with these squares (with overlapping allowed).}
\solu{The idea is to greedily cover the unit square by covering the lowest
	uncovered row, and then to use bounds to prove that this is possible.}
\prob{}{}{E}{Prove that a finite collection of squares of total area
$\frac{1}{2} $ can be placed inside a unit square without overlap.}
\solu{The same idea as before.}
\prob{}{Tournament of Towns Spring 2012 S7}{EM}{We attempt to cover the plane
with an infinite sequence of rectangles, overlapping allowed.
\begin{enumerate} \item Is the task always possible if the area of the
$n-$th rectangle is $n^2$ for each $n$? \item Is the task always
possible if each rectangle is a square, and for any number $N$, there
exist squares with total area greater than $N$? \end{enumerate}}
\solu{Identical algo and proving technique as above.}
\solu{Using the first problem in this subsection to find a better algo.}
\prob{https://artofproblemsolving.com/community/c6h155700p875004}{ISL 2006
C6}{E}{A holey triangle is an upward equilateral triangle of side length
$n$ with $n$ upward unit triangular holes cut out. A diamond is a
$60^\circ-120^\circ$ unit rhombus.\\
Prove that a holey triangle $T$ can be tiled with diamonds if and only if the
following condition holds: Every upward equilateral triangle of side length
$k$ in $T$ contains at most $k$ holes, for $1\leq k\leq n$.}
\solu{Think of induction and how you can deal with that.}
\prob{https://artofproblemsolving.com/community/c7h442824p2494271}{Putnam 2002
A3}{E}{Let $n$ be an integer greater than $1$ and let $T_n$ be the number of
non-empty subsets $S$ of $\{1,2,\dots,n\}$ with the property that the average
of the elements of $S$ is an integer. Prove that $T_n - n$ is always even.}
\solu{Try to show a bijection between the sets with an integer average which have $k$
elements (here $k$ is an even integer) and the sets with the same average but with
$k+1$ elements. This implies that the number of such sets is even.}
\prob{https://artofproblemsolving.com/community/c6h55343p343870}{USAMO
1998}{E}{Prove that for each $n\geq 2$, there is a set $S$ of $n$ integers
such that $(a-b)^2$ divides $ab$ for every distinct $a,b \in
S$.}\label{problem:sets_scp_1}
\solu{Induction comes to the rescue. Trying to find a way to get from $n$ to
$n+1$, we see that we can \emph{shift} the integers by any integer $k$. So
after shifting, what stays the same, and what changes?}
%!TEX root = ../dissertation_vkslm.tex
\chapter{Handwritten Signature Verification} \label{ch:sig}
In this chapter, we give a brief reference to some essential concepts related to Handwritten Signature Verification, including definitions of notation and terminology used in the following
chapters. First, we give an introduction and a general overview of the handwritten signature
biometry; afterwards, we discuss how an Automatic Handwritten Signature Verification system
works; and finally, we give a brief overview of the state of the art in offline signature synthesis
based on online data.
\section{Handwritten Signature: a behavioral multimodal biometry}
The term ``Biometrics'' is derived from the Greek ``bio-metriks'', in which ``bio'' means ``life'' and ``metrics'' means ``to measure''. Biometrics refers to the measurement and analysis of biological or behavioral characteristics peculiar to an individual. Biometric systems are a constantly growing technology \cite{jain2004biometrics} and have been introduced as forms of identification and access control. Biometric identifiers are measurable characteristics used to distinguish and describe individuals \cite{jain2000biometric}.
Biometric systems are often categorized as physiological or behavioral \cite{ross2008introduction}. The physiological category is characterized by measurements of the body, such as fingerprint, face, DNA, iris/retina pattern, and body scent. On the other hand, behavioral biometrics are individually acquired traits and are related to the pattern of behavior of a person. They include typing rhythm, temperament, voice, and handwritten signatures \cite{jain2016}.
Most biometric identifiers require a special type of device for security and control of human identity. However, handwritten-signature-based biometric systems can operate with no sensor other than a pen and a piece of paper. According to \cite{pal2014signature}, handwritten signatures can be considered among the most legally and socially accepted attributes for person identification. The challenge that comes with signature-based authentication, however, is the need for highly accurate results to avoid false authorization or rejection.
Handwritten signature authentication is based on systems for signature verification.
Whether a given signature belongs to a claimed person or not is decided through a signature verification system, which ultimately strives to learn the manner in which
an individual makes use of their muscular memory (hands,
fingers, and wrist) to reproduce a signature \cite{gupta1997review}.
A generic handwritten signature-based biometric system is shown in Figure \ref{fig_ahsv-overview}. Once the user {\boldm $Y$} deposits the signature, a sensor digitalizes the sample. Later, a feature matrix {\boldm $X$} is built with the information extracted from the input sample. Next, the systems typically have two stages: enrollment {\boldm $X_{E}$} and recognition {\boldm $X_{R}$}. The former builds a system database {\boldm $D$} where the users store their reference signatures as a set of templates, whereas the latter is
used to recognize, identify or verify the identity of a user, who typically claims to be one of the registered users. Then, a score {\boldm $S$} is obtained according to the similarity of the
questioned sample to the claimed template. Finally, the system accepts or rejects the questioned sample.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{biometry-overview}
\caption{Overview of a typical handwritten signature based system. Figure adapted from \cite{jain2016}.}
\label{fig_ahsv-overview}
\end{figure}
As Figure \ref{fig_ahsv-overview} shows, the signature acquisition sensor can be either an optical scanner or an acquisition device such as a digitizing tablet. These two different acquisition tools characterize the two classes of signatures, namely: static and dynamic.
In the static modality, also referred to as offline, an optical scanner is used to obtain the signature directly from the pen on the paper, and only the digital image of the signature is available, see Figure \ref{fig:acquisition} (a). In the dynamic mode, also called online, signatures are acquired through a graphic tablet or a pen-sensitive computer display, see Figure \ref{fig:acquisition} (b). In this mode, data is stored during the writing process and consists of a temporal sequence of the two-dimensional coordinates $(x, y)$ of consecutive points.
\begin{figure}[!htpb]
\centering
\subfloat[]{\includegraphics[width=3.2in]{signature.PNG}}
\hspace*{0.5in} % separation between the subfigures
\subfloat[] {\includegraphics[width=2.7in]{stu500.jpg}}
\caption{Different signature acquisition methods. (a) a signature scanned from paper and (b) digitizing tablet Wacom STU-500 \cite{wacom2016}. } \label{fig:acquisition}
\end{figure}
Specifically, the online modality does not convey information about the overall shape of the signature, the width of the strokes and the texture of the ink on the paper \cite{diaz2014generation}. The offline representation, however, has lost all dynamic information about the manner in which the signature is signed during the acquisition process. As a result, features such as pen trajectory, which can be easily computed in the online domain, can only be inferred from a static image \cite{nel2005estimating}. In Figure \ref{fig:offon} we can see a grayscale offline signatures example and the respective online signature plot.
\begin{figure}[!htb]
\centering
\includegraphics[width=3.8in]{offon}
\caption{An offline and a matching online signature sample. Figure extracted from \cite{sigcomp2009}.}
\label{fig:offon}
\end{figure}
Once the signature sample is acquired, during the enrollment phase the system tries to create the subject identity based on behavioral features in the signature. Because of the way we sign, however, this is a subtle task. The rapid movement behind the signature creation is determined by a motor program stored in the brain of each signer, applied through tools such as the pen and the paper \cite{pirlo2014advances}. According to \cite{plamondon1989automatic}, there is a wide variety of human and social aspects that might affect the way we produce our handwritten signature: it might be influenced by country, age, time, habits, and psychological or emotional state. The variability created in the signing process must be taken into account in the signature authentication process.
In fact, the unpredictable intra-personal variability, i.e., the variation between signatures executed by the same writer, is a crucial challenge of signature-based biometric systems. This variability can be attributed to the several sources of noise ($\eta$) that distort the measured trait. According to \cite{jain2016} and shown in Figure \ref{fig_ahsv-overview}, the intra-personal variability affecting the measured sample {\boldm $M$} can be characterized by several variables. These variables include sensor limitations such as resolution or sample rate; biological aging effects or cognitive-motor impairments; user interaction with the sensor; environment changes like background noise; and other factors as a consequence of the individuals’ mood, hurry or unwillingness to cooperate. The intra-personal variability effect is illustrated in Figure \ref{fig:intraclass}.
\begin{figure}[!h]
\centering
\includegraphics[width=5.2in]{superimposed}
\caption{Superimposed genuine signatures of the same writer. A high intra-personal variability can be noticed. Extracted from \cite{hafemann2015offline}. }
\label{fig:intraclass}
\end{figure}
Another challenge faced by signature-based biometric systems is the unpredictable inter-personal variability, i.e., the similarity between signatures executed by different writers. In a signature-based system, inter-personal variability is mainly attributed to frauds related to malicious people faking the identity of signers. Figure \ref{fig_forgeries} illustrates a visual comparison between genuine signatures and forgeries.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{forgeries}
\caption[The first column of signatures are genuine references, the following three samples are questioned signatures. How many forgeries would you be able to detect? Signature images extracted from \cite{mcyt-100}.]{The first column of signatures are genuine references; the following three samples are questioned signatures. How many forgeries would you be able to detect? \protect\footnotemark Signature images extracted from \cite{mcyt-100}.}
\label{fig_forgeries}
\end{figure}
In the field of signature verification, forgeries are usually classified into two types \cite{impedovo2008state}.
\begin{itemize}
	\item The first type is the random forgery, which arises when an impostor who has no information about the person or the shape of the original signature tries to pass the verification using his own signature instead of the genuine one. The forger does not attempt to simulate or trace a genuine signature.
\item The second type is the skilled forgery, in which the forger tries and practices imitating as closely as possible the static and dynamic information of the genuine signature model. The forger has access to both the user’s name and signature and tries to reproduce it with a similar intra-class variability.
\end{itemize}
\section{Automatic Handwritten Signature Verification}
An Automatic Handwritten Signature Verification System (AHSVS) is conceptually a pattern recognition application. Pattern recognition is one of the most important and active fields of research. During the past few decades, there has been a considerable growth of interest in problems of pattern recognition, and in the last few years, many methods have been developed in this area, in particular on handwriting recognition and signature verification \cite{book}.
Like any pattern recognition system, an AHSVS has three phases:
\begin{inlinelist}
\item data acquisition and pre-processing
\item feature extraction
\item classification
\end{inlinelist} \cite{impedovo2008state}.
In the first step, the signatures are acquired and preprocessed; the main goal here is to convert them into a format suitable for the modeling process, correcting geometric distortions and removing noise related to the signature acquisition sensor. Afterwards, features are extracted and stored in a knowledge database. In the classification step, the extracted features are used to distinguish between genuine and forged signatures. Therefore the signature verification task is, in essence, a binary classification problem, in which the system's prediction for the input signature sample is either genuine or fraudulent.
Verification errors occurring in AHSVS are usually categorized as two types \cite{fairhurst1997signature}. On the one hand, a genuine signer may be rejected by the system as a potential impostor (e.g., it could happen when the signer carelessly executes his/her signature), resulting in what is denoted a Type-1 error or False
Rejection. On the other hand, a skilled forger might be able to produce a sample which would be accepted as genuine, resulting in what is called a Type-2 error or False Acceptance.
The performance of signature verification systems can be improved by increasing the number of samples in the training dataset. The amount of data available for each user is often insufficient in real applications. Even if there is a significant number of users enrolled in the system, a classifier needs to perform well for a new user, for whom only a small set of samples are available.
\section{Off-line Signature Synthesis Using On-line Samples}
According to \cite{guest2013assessment}, although online signature samples can be effectively stored as a time-series for use in any form of an AHSVS, there may be situations where the signature needs to be represented as an image.
One possible scenario is human visualization. For instance, in the context of a legal document, it may be required to have an image-based representation of a signature that was captured in a biometric system and stored as a time-series. There may also be a case where the modality of the signature used for training the user model differs from the signature domain that is being questioned. Although the test sample is a genuine signature, it might not be possible to prove this with either an offline or an online signature verification system alone. Hence, converting either the training or the questioned data into an image reproducing the original static signature would be useful for the development of an integrated version of offline and online signature verification systems, overcoming the dynamic vs. static dichotomy, and unifying the signature biometry \cite{chapter}.
The works proposed in the literature to generate signature images from online signatures generally apply different transforms to the dynamic information, taking into account the kinematics and the timing order in which the traces were registered; the samples of the new specimen are then interpolated in order to create new images.
The accuracy of a series of interpolation methods for recreating a signature image from the online dynamic data was investigated in \cite{guest2013assessment}. They experimented with what they refer to as linear interpolation, nearest-neighbor interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolation. They tried several interpolated line thickness widths, varying according to the pen pressure/force. In the work, the authors measured the performance of the synthetic samples using three statistics: the mean Euclidean distance between the synthetic sample and the real one, the maximum distance in pixels between the two images, and the percentage of pixels in the recreated image. They found that the linear technique with variable width produced the best accuracy, with 3.047, 67.147 and 27.416 for mean pixel distance, percent direct match, and max distance, respectively.
In \cite{ferrer2013realistic} another method to generate synthetic offline signature databases starting from synthetic dynamic data is described. They convert the dynamic information to offline data using what they refer to as an ink deposition model. In short, they first create an 8-connected sequence from the online signature, then model the pen ballpoint with a 2D Gaussian function. In \cite{diaz2014generation} they used this approach to generate synthetic samples based on real online data. They proposed an experimental protocol to assess whether the synthetic signatures are close to the real ones and to measure whether it would be feasible to use these synthetic signatures to increase the training data of offline signature verification systems. They observed that the synthetic samples have a performance close to real signatures, but when used to synthetically increase the training dataset their synthetic samples achieved 18.70\% EER, while the real samples achieved 17.68\%, in the skilled forgeries scenario. Moreover, in \cite{galbally2015line} they also experimented with their synthesis approach to improve the performance of dynamic signature verifiers.
\startcomponent ma-cb-en-tablesofcontent
\enablemode[**en-us]
\project ma-cb
\startchapter[title=Table of contents (lists)]
\index{table of contents}
\index{list}
\Command{\tex{completecontent}}
\Command{\tex{placecontent}}
\Command{\tex{definelist}}
\Command{\tex{setuplist}}
\Command{\tex{writetolist}}
\Command{\tex{writebetweenlist}}
\Command{\tex{definecombinedlist}}
\Command{\tex{setupcombinedlist}}
A table of contents contains chapter numbers, chapter titles and page numbers and
can be extended with sections, subsections, etc. A table of contents is
generated automatically by typing:
\starttyping
\placecontent
\stoptyping
Which table of contents is produced depends on the location of this command in
your document. At the start of the document it will generate a list of chapters,
sections etc. But at the top of a chapter:
\startbuffer
\chapter{Hasselt in Summer}
\placecontent
\section{Hasselt in July}
\section{Hasselt in August}
\stopbuffer
\typebuffer
it will only produce a list of (sub) section titles with the corresponding
section numbers and page numbers.
The predefined command \type{\placecontent} is available because it was defined
with:
\shortsetup{definecombinedlist}
This command and \type{\definelist} allow you to define your own lists
for accessing your documents.
The use of these commands is illustrated below for the default
table of contents.
\startbuffer
\definelist[chapter]
\setuplist
[chapter]
[before=\blank,
after=\blank,
style=bold]
\definelist[section]
\setuplist
[section]
[alternative=d]
\stopbuffer
\typebuffer
Now there are two lists, one of chapters and one of sections, and these will be
combined into a table of contents with the command \type{\definecombinedlist}.
\startbuffer
\definecombinedlist
[content]
[chapter,section]
[level=subsection]
\stopbuffer
\typebuffer
Now two commands are available: \type{\placecontent} and \type{\completecontent}.
With the second command the title of the table of contents will also be
typeset above it.
The layout of lists can be varied with the parameter \type{alternative}.
\placetable
[here,force]
[tab:alternatives]
{Alternatives for displaying lists.}
{\starttable[|c|l|]
\HL
\NC \bf Alternative \NC \bf Display \NC\SR
\HL
\NC \type{a} \NC number -- title -- page number \NC\FR
\NC \type{b} \NC number -- title -- spaces -- page number \NC\MR
\NC \type{c} \NC number -- title -- dots -- page number \NC\MR
\NC \type{d} \NC number -- title -- page number (continuing) \NC\MR
\NC \type{e} \NC reserved for interactive purposes \NC\MR
\NC \type{f} \NC reserved for interactive purposes \NC\MR
\NC \type{g} \NC reserved for interactive purposes \NC\LR
\HL
\stoptable}
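For example, to display the list of sections with dots between the titles and
the page numbers (alternative \type{c} in the table above), you can type:
\starttyping
\setuplist[section][alternative=c]
\stoptyping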
Lists are set up with:
\shortsetup{setuplist}
\shortsetup{setupcombinedlist}
If you want to change the layout of the generated table of contents you'll have
to remember that it is a (combined) list and that the partial lists can be set up
separately.
\startbuffer
\setuplist
[section]
[textstyle=bold,
pagestyle=bold,
numberstyle=bold]
\stopbuffer
\typebuffer
This will result in a bold page number, section title and section number.
Lists are generated and placed with:
\shortsetup{placelist}
So if you want a list of sections at the beginning of a new chapter, you type:
\starttyping
\placelist[section]
\stoptyping
and only the sections will be displayed.
A long list or a long table of contents will use up more than one page. To be
able to force page breaking you can type:
\starttyping
\placecontent[extras={8.2=page}]
\stoptyping
A page break will then occur after section 8.2.
In some cases you want to be able to write your own text in an automatically
generated list. This is done with:
\shortsetup{writetolist}
\shortsetup{writebetweenlist}
For example if you want to make a remark in your table of contents after a
section titled {\em Hotels in Hasselt} you can type:
\startbuffer
\section{Hotels in Hasselt}
\writebetweenlist[section]{\blank}
\writetolist[section][location=here]{}{Section under construction}
\writebetweenlist[section]{\blank}
\stopbuffer
\typebuffer
\stopchapter
\stopcomponent
| {
"alphanum_fraction": 0.7497656982,
"avg_line_length": 24.3885714286,
"ext": "tex",
"hexsha": "0bc81014e2cf5226680fdbacbfaf87453f2b507c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "74ea55abde343f4e6e07fa6cd94694816e6e3cc4",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "kensh/pandoc_resume",
"max_forks_repo_path": "tex/texmf-context/doc/context/sources/general/manuals/start/en/ma-cb-en-tablesofcontent.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "74ea55abde343f4e6e07fa6cd94694816e6e3cc4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "kensh/pandoc_resume",
"max_issues_repo_path": "tex/texmf-context/doc/context/sources/general/manuals/start/en/ma-cb-en-tablesofcontent.tex",
"max_line_length": 89,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "74ea55abde343f4e6e07fa6cd94694816e6e3cc4",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "kensh/pandoc_resume",
"max_stars_repo_path": "tex/texmf-context/doc/context/sources/general/manuals/start/en/ma-cb-en-tablesofcontent.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1122,
"size": 4268
} |
\section{OUTPUTS}
\label{OUTPUTS}
\begin{ipifield}{}%
{This class defines how properties, trajectories and checkpoints should be output during the simulation. May contain zero, one or many instances of properties, trajectory or checkpoint tags, each giving instructions on how one output file should be created and managed.}%
{}%
{\ipiitem{prefix}%
{A string that will be prepended to each output file name. The file name is given by 'prefix'.'filename' + format\_specifier. The format specifier may also include a number if multiple similar files are output.}%
{default: `i-pi'; data type: string; }%
}
\begin{ipifield}{\hyperref[CHECKPOINT]{checkpoint}}%
{Each of the checkpoint tags specify how to create a checkpoint file, which can be used to restart a simulation. }%
{data type: integer; }%
{\ipiitem{stride}%
{The number of steps between successive writes.}%
{default: 1 ; data type: integer; }%
\ipiitem{overwrite}%
{This specifies whether or not each consecutive checkpoint file will overwrite the old one.}%
{default: True ; data type: boolean; }%
\ipiitem{filename}%
{A string to specify the name of the file that is output. The file name is given by 'prefix'.'filename' + format\_specifier. The format specifier may also include a number if multiple similar files are output.}%
{default: `restart'; data type: string; }%
}
\end{ipifield}
\begin{ipifield}{\hyperref[TRAJECTORY]{trajectory}}%
{Each of the trajectory tags specify how to create a trajectory file, containing a list of per-atom coordinate properties. }%
{data type: string; }%
{\ipiitem{format}%
{The output file format.}%
{default: `xyz'; data type: string; options: `xyz', `pdb'; }%
\ipiitem{filename}%
{A string to specify the name of the file that is output. The file name is given by 'prefix'.'filename' + format\_specifier. The format specifier may also include a number if multiple similar files are output.}%
{default: `traj'; data type: string; }%
\ipiitem{stride}%
{The number of steps between successive writes.}%
{default: 1 ; data type: integer; }%
\ipiitem{flush}%
{How often should streams be flushed. 1 means each time, zero means never.}%
{default: 1 ; data type: integer; }%
\ipiitem{bead}%
{Print out only the specified bead. A negative value means print only one every -(bead) beads, e.g. -2 means print just the even beads, -4 one every four and so on.}%
{default: -1 ; data type: integer; }%
\ipiitem{cell\_units}%
{The units for the cell dimensions.}%
{default: `'; data type: string; }%
}
\end{ipifield}
\begin{ipifield}{\hyperref[PROPERTIES]{properties}}%
{Each of the properties tags specify how to create a file in which one or more properties are written, one line per frame. }%
{data type: string; }%
{\ipiitem{filename}%
{A string to specify the name of the file that is output. The file name is given by 'prefix'.'filename' + format\_specifier. The format specifier may also include a number if multiple similar files are output.}%
{default: `out'; data type: string; }%
\ipiitem{stride}%
{The number of steps between successive writes.}%
{default: 1 ; data type: integer; }%
\ipiitem{shape}%
{The shape of the array.}%
{default: (0,) ; data type: tuple; }%
\ipiitem{mode}%
{If 'mode' is 'manual', then 'cell' takes a nine-element vector containing the cell matrix (row-major). If 'mode' is 'abcABC', then 'cell' takes an array of 6 floats, the first three being the lengths of the sides of the system parallelepiped, and the last three being the angles (in degrees) between those sides. Angle A corresponds to the angle between sides b and c, and so on for B and C. If mode is 'abc', then this is the same as for 'abcABC', but the cell is assumed to be orthorhombic. 'pdb' and 'chk' read the cell from a PDB or a checkpoint file, respectively.}%
{default: `manual'; data type: string; options: `manual', `file'; }%
\ipiitem{flush}%
{How often should streams be flushed. 1 means each time, zero means never.}%
{default: 1 ; data type: integer; }%
}
\end{ipifield}
\end{ipifield}
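For illustration only (this example is not part of the generated reference), a
minimal output block combining the fields above could look as follows; the
property list and the \texttt{positions} trajectory name are assumptions based
on typical i-PI input files:
\begin{verbatim}
<output prefix='simulation'>
  <properties filename='out' stride='10'> [ step, time, conserved ] </properties>
  <trajectory filename='pos' stride='100' format='xyz'> positions </trajectory>
  <checkpoint filename='restart' stride='1000' overwrite='True'/>
</output>
\end{verbatim}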
| {
"alphanum_fraction": 0.7338629593,
"avg_line_length": 59.2352941176,
"ext": "tex",
"hexsha": "886ead1f74f218611cb7505fcd126fc835f967c5",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-09-26T16:40:20.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-09-26T16:40:20.000Z",
"max_forks_repo_head_hexsha": "38727854014b4537c500db27282ebe3543bde140",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "i-pi/i-pi.github.io",
"max_forks_repo_path": "doc/input_docs/output.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "38727854014b4537c500db27282ebe3543bde140",
"max_issues_repo_issues_event_max_datetime": "2021-09-07T20:30:21.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-02-16T13:18:43.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "i-pi/web",
"max_issues_repo_path": "doc/input_docs/output.tex",
"max_line_length": 607,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "38727854014b4537c500db27282ebe3543bde140",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "i-pi/web",
"max_stars_repo_path": "doc/input_docs/output.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1103,
"size": 4028
} |
\myChapter{Appendix: OSGi}\label{chap:appendixosgi}
\minitoc\mtcskip
\vfill
\section{OSGi Architecture}
To understand how OSGi \cite{Moussa2010ServiceComposition} works and which capabilities it could offer to EA practitioners, it is necessary to understand how OSGi is built. OSGi has a layered model that is depicted in Figure \ref{fig:osgi-original}. The terms present in this figure are:
\begin{itemize}
\item Bundles: OSGi components made by developers. They are normal {\em jar} files including Java classes, interfaces and extra files (such as the {\em Service Descriptions}). They also have additions to the MANIFEST.MF file (a metadata file with information about the {\em jar}).
\item Services: This layer connects bundles in a dynamic way by offering a publish-find-bind model.
\item Life-Cycle: The API to install, start, stop, update, and uninstall bundles.
\item Modules: This layer defines how a bundle can import and export code (using the MANIFEST.MF file).
\item Security: Security aspects are handled in this layer.
\item Execution Environment: Defines what methods and classes are available in a specific platform. For example, mobile devices have less Java classes due to performance constraints.
\end{itemize}
\begin{SCfigure}[20][tb]
\centering
\includegraphics[width=26pc]{gfx/soa/osgi-oficial.jpg}
\caption{OSGi layered architecture. Every layer is built from the one just below.}
\label{fig:osgi-original}
\end{SCfigure}
\section{OSGi configuration files}
With respect to the OSGi layers introduced above, this section details how to use all OSGi capabilities.
OSGi implements a dynamic component model, unlike normal Java
environments. Applications or components (also called
\emph{bundles}) can be remotely installed, started, stopped, updated
or uninstalled on the fly; moreover, the classes and
packaging management is specified in detail. The OSGi framework provides
APIs for the management of services that are exposed or used by the
bundles.
A {\em bundle} is a file containing compiled and packaged classes and a configuration file. This file indicates which classes are imported or exported by the bundle. Being SOA, the most important concept in OSGi is the {\em service}. Services allow {\em bundles} to be dynamically connected, offering a publish-find-bind model. That is, a {\em bundle} exposes a service through a Java interface (service interface in Figure \ref{fig:soadiagram}), and another bundle (or itself) implements that interface. A third {\em bundle} can access this service using the exposed interface, without any knowledge of how it is implemented, through the {\em service registry} (equivalent to the service broker of Figure \ref{fig:soadiagram}). Figure \ref{OSGIDIAGRAM} shows an example of the OSGi architecture.
\begin{SCfigure}[20][tb]
\centering
\includegraphics[width=26pc]{gfx/soa/bundles.jpg}
\caption{In OSGi, a service can be implemented by several bundles. Other bundles may choose among these implementations using the service registry. In this figure, Bundles C and D implement a service, and A uses the service registry to use one of them.}
\label{OSGIDIAGRAM}
\end{SCfigure}
Java programmers are familiar with the {\em jar} concept. The first difference between a {\em bundle} and a {\em jar} is that the former has a MANIFEST.MF file adapted to be used in OSGi. This file indicates which classes the {\em bundle} imports or exports. An example can be seen in Figure \ref{fig:manifest}. This file shows the name of the bundle and its version (useful to select specific services), and the execution environment (that is, the Java Virtual Machine required). Also, this file specifies the XML files of the declarative services (in the section {\em Service-Component}). However, this {\em bundle} can still be used as a normal {\em jar} outside OSGi.
\begin{SCfigure}[20][tb]
\noindent
\ttfamily \footnotesize
\hlstd{}{\bf Manifest-Version:} 1.0\\
\hlstd{}{\bf Bundle-ManifestVersion:} 2\\
\hlstd{}{\bf Bundle-Name:} VRP\\
\hlstd{}{\bf Bundle-SymbolicName:} VRP\\
\hlstd{}{\bf Bundle-Version:} 1.0.0\\
\hlstd{}{\bf Bundle-RequiredExecutionEnvironment:} JAVA-1.6\\
\hlstd{}{\bf Import-Package:} es.ugr.osgiliath,\\
\hlstd{} es.ugr.osgiliath.algorithms,\\
\hlstd{} es.ugr.osgiliath.events,\\
\hlstd{}es.ugr.osgiliath.evolutionary,\\
\hlstd{}es.ugr.osgiliath.evolutionary.basiccomponents.genomes,\\
\hlstd{}es.ugr.osgiliath.evolutionary.basiccomponents.individuals,\\
\hlstd{}es.ugr.osgiliath.evolutionary.elements,\\
\hlstd{}es.ugr.osgiliath.evolutionary.individual,\\
\hlstd{}es.ugr.osgiliath.evolutionary.migrator,\\
\hlstd{}es.ugr.osgiliath.geneticalgorithm.distributed,\\
\hlstd{}es.ugr.osgiliath.problem\\
\hlstd{}{\bf Export-Package:} es.ugr.osgiliath.vrp,\\
\hlstd{}es.ugr.osgiliath.vrp.individual\\
\hlstd{}\hlkwa{Service-Component:} OSGI-INF/vrpinitializer.xml,\\
OSGI-INF/vrpfitnesscalculator.xml,\\
OSGI-INF/vrpcrossover.xml,\\
OSGI-INF/vrpmutation.xml\\
\mbox{}
\normalfont
\caption{Example of MANIFEST.MF. This example defines which packages are necessary to activate the bundle and which packages are exported.}
\label{fig:manifest}
\end {SCfigure}
In normal environments, the creation of a specific implementation of an interface (e.g. {\em FitnessCalculator}) is done as shown in Figure \ref{fig:normalImp}.
\newsavebox{\mintedboxOSGIinst}
\begin{lrbox}{\mintedboxOSGIinst}
\begin{minipage}{10cm}
\begin{minted}[mathescape,
linenos,
frame=lines,
framesep=2mm]{java}
class EvolutionaryAlgorithm implements Algorithm{
  //A concrete implementation is bound to the
  //reference directly in the source code
  FitnessCalculator fc = new ExampleFunction();
}
\end{minted}
\end{minipage}
\end{lrbox}
\begin{SCfigure}[20][tb]
\usebox{\mintedboxOSGIinst}
\caption{Normal way to implement an interface in Java.}
\label{fig:normalImp}
\end{SCfigure}
With Declarative Services, the \texttt{new ExampleFunction()} part is not used, so if a new implementation is desired no code recompilation is necessary. Figure \ref{fig:ds} shows a declarative service description file, which establishes at execution time which implementation is bound to the interfaces. This example indicates that the implementation of the service \texttt{FitnessCalculator} is \texttt{VRP\-Fit\-ness\-Cal\-cu\-la\-tor}, but this service is not activated until all its references (other services, like \texttt{TransportData}) are also activated. The attribute \texttt{cardinality} has two parts: the first digit states whether the reference is optional (\texttt{0}) or mandatory (\texttt{1}), and the second states how many different implementations can be managed: one (\texttt{1}) or many (\texttt{*}). It is also necessary to create XML files for the rest of the services to be exposed (i.e. \texttt{TransportData}). The file where these capabilities are defined is declared in the section \texttt{Service-Component} of the MANIFEST.MF file, as can be seen in Figure \ref{fig:manifest}.
\newsavebox{\mintedboxOSGIDS}
\begin{lrbox}{\mintedboxOSGIDS}
\begin{minipage}{10cm}
\begin{minted}[linenos,
fontsize=\scriptsize,
frame=lines,
framesep=2mm]{xml}
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
name="VRPFitnessCalculator" >
<implementation class="es.ugr.osgiliath.vrp.VRPFitnessCalculator"/>
<service>
<provide
interface="es.ugr.osgiliath.evolutionary.elements.FitnessCalculator"/>
</service>
<reference bind="setTransportData"
unbind="unsetTransportData"
cardinality="1..1"
interface="es.ugr.osgiliath.vrp.TransportData"
name="TransportData"
policy="static"
/>
<property name="name" type="String"
value="vrpfitnesscalculator"/>
</scr:component>
\end{minted}
\end{minipage}
\end{lrbox}
\begin{SCfigure}[20][tb]
\usebox{\mintedboxOSGIDS}
\caption{Service Description. This document indicates that the implementation of the service \texttt{Fit\-ness\-Cal\-cu\-la\-tor} is \texttt{VRP\-Fit\-ness\-Cal\-cu\-la\-tor}, but it cannot be activated until its references (other services) are activated.}
\label{fig:ds}
\end{SCfigure}
Figure \ref{fig:OSGIspecific} shows the code of this implementation.
\newsavebox{\mintedboxOSGIspecific}
\begin{lrbox}{\mintedboxOSGIspecific}
\begin{minipage}{10cm}
\begin{minted}[mathescape,
linenos,
fontsize=\footnotesize,
frame=lines,
framesep=2mm]{java}
class VRPFitnessCalculator implements FitnessCalculator{
//Other service references
TransportData tdata;
//Methods to bind/unbind each reference
public void
setTransportData(TransportData tdata){
this.tdata = tdata;
}
public void
unsetTransportData(TransportData tdata){
this.tdata = null;
}
//Implementation of the interface method
public List<Fitness> calculateFitness(List<Individual> inds){
...
}
}
\end{minted}
\end{minipage}
\end{lrbox}
\begin{SCfigure}[20][tb]
\usebox{\mintedboxOSGIspecific}
\caption{Code of the implementation.}
\label{fig:OSGIspecific}
\end{SCfigure}
%We have to say that in future work these kind of files will be automatically created, being this task transparent to future users of the OSGiLiatH framework.
\section{Event Administration}
The Event Administration in OSGi enables a blackboard communication architecture where bundles can broadcast or receive events without knowing which bundles are sending or receiving these events.
%Acquire a reference to the EventAdmin OSGi service, it implements the org.osgi.service.event.EventAdmin.
%Pick a topic name for the event and make sure that it follows Topic Naming Conventions mentioned above.
%Fill Event Properties in a dictionary that will be passed as a parameter to the publish method.
%Having the Topic Name and Properties, ready invoke one of the following methods of the Event Admin service: postEvent or sendEvent - while postEvent initiates synchronous delivery of the event, sendEvent initiates asynchronous delivery of the event. So by default, your option should be postEvent method unless you have strict requirements to not continue execution until all handlers of the event handle it.
The steps needed to send events to other bundles are:
\begin{itemize}
\item Acquire a reference to the EventAdmin OSGi service (via Declarative Services, for example).
\item Pick a topic name for the event (for example \texttt{``es\-/ugr\-/os\-gi\-liath\-/al\-go\-rithms\-/end\-ge\-ne\-ra\-tion''})
\item Send the event using the \texttt{postEvent} method of EventAdmin, with the topic plus other desired properties %(mention the synchronous/asynchronous distinction?)
\end{itemize}
Code to send an event to other bundles is shown in Figure \ref{fig:OSGIpostevent}. The programmer specifies the topic string and optional properties to send to the other bundles that are listening. The \texttt{eventAdmin} variable is a reference to the \texttt{org.osgi.service.event.EventAdmin} service, obtained via Declarative Services or by hand (not shown).
\newsavebox{\mintedboxOSGIpostevent}
\begin{lrbox}{\mintedboxOSGIpostevent}
\begin{minipage}{10cm}
\begin{minted}[mathescape,
linenos,
frame=lines,
framesep=2mm]{java}
Properties props = new Properties(); //Optional
String topic =
"es/ugr/osgiliath/algorithms/endgeneration";
Event evt = new Event(topic,props);
eventAdmin.postEvent(evt);
\end{minted}
\end{minipage}
\end{lrbox}
\begin{SCfigure}[20][tb]
\usebox{\mintedboxOSGIpostevent}
\caption{Code to send an event.}
\label{fig:OSGIpostevent}
\end{SCfigure}
On the other hand, the steps to handle events are:
\begin{itemize}
\item Register a service that implements the OSGi EventHandler interface (via Declarative Services or manually).
\item Specify in this service the topics to subscribe to. For example, the string \texttt{``es\-/ugr\-/os\-gi\-li\-ath/al\-go\-ri\-thms/*''} (the * is a wildcard) inside the \texttt{$<$property$>$} tag of the Service Description, as sketched after this list.
\item Overwrite the handleEvent method of this interface with the desired code.
\end{itemize}
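A sketch of such a Service Description follows; the component and class names reuse the earlier examples, and the standard OSGi property \texttt{event.topics} declares the subscription (this file is illustrative, not taken from the framework sources):
\begin{minted}[fontsize=\scriptsize,
frame=lines,
framesep=2mm]{xml}
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
    name="ExampleHandler">
  <implementation class="es.ugr.osgiliath.ExampleImpl"/>
  <service>
    <provide interface="org.osgi.service.event.EventHandler"/>
  </service>
  <property name="event.topics" type="String"
    value="es/ugr/osgiliath/algorithms/*"/>
</scr:component>
\end{minted}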
The code in Figure \ref{fig:OSGIreadevent} shows how to handle events. In this case the service \texttt{ExampleService} has been published with the implementation \texttt{ExampleImpl}, which is listening under the topic \texttt{``es/ugr/osgiliath/algorithms/*''}.
\newsavebox{\mintedboxOSGIreadevent}
\begin{lrbox}{\mintedboxOSGIreadevent}
\begin{minipage}{10cm}
\begin{minted}[mathescape,
linenos,
fontsize=\footnotesize,
frame=lines,
framesep=2mm]{java}
class ExampleImpl implements ExampleService,EventHandler{
 public void handleEvent(Event evt){
  if(evt.getTopic().endsWith("endgeneration")){
     // An event with topic
     // "es/ugr/osgiliath/algorithms
     // /endgeneration"
     System.out.println("Generation over");
  }else{
     // Any other event whose topic starts with
     // "es/ugr/osgiliath/algorithms/"
     System.out.println("Other event received");
  }
 }
}
\end{minted}
\end{minipage}
\end{lrbox}
\begin{SCfigure}[20][tb]
\usebox{\mintedboxOSGIreadevent}
\caption{Code to read an event.}
\label{fig:OSGIreadevent}
\end{SCfigure} | {
"alphanum_fraction": 0.7586180829,
"avg_line_length": 47.4121863799,
"ext": "tex",
"hexsha": "7872dea90d77c8255401542e546021bfc1c906d9",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-04-08T00:59:38.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-11-21T13:14:20.000Z",
"max_forks_repo_head_hexsha": "ba821fd5e72e907ed12e8a8e2458e158ff4505c1",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "fergunet/tesis",
"max_forks_repo_path": "Chapters/appendixosgi.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "ba821fd5e72e907ed12e8a8e2458e158ff4505c1",
"max_issues_repo_issues_event_max_datetime": "2018-05-15T16:58:04.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-03-04T08:11:16.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "fergunet/tesis",
"max_issues_repo_path": "Chapters/appendixosgi.tex",
"max_line_length": 1129,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "ba821fd5e72e907ed12e8a8e2458e158ff4505c1",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "fergunet/tesis",
"max_stars_repo_path": "Chapters/appendixosgi.tex",
"max_stars_repo_stars_event_max_datetime": "2018-10-08T08:45:30.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-02-07T13:29:15.000Z",
"num_tokens": 3519,
"size": 13228
} |
\documentclass[utf8]{beamer}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[english]{babel}
% serif font for math only
\usefonttheme[onlymath]{serif}
% Big numbers separator
\usepackage{siunitx}
% Make siunitx use serif font for math
\AtBeginDocument{\sisetup{math-rm=\mathrm, text-rm=\rmfamily}}
\usetheme{default}
\title[When PageRank fails]{Ranking nodes in growing networks: \\ When PageRank fails}
\author[De Nicolao]{Pietro De Nicolao \\ \texttt{[email protected]}}
\date{April 14, 2016}
\institute{Politecnico di Milano}
\AtBeginSection[]
{
\begin{frame}<beamer>{Outline}
\tableofcontents[currentsection]
\end{frame}
}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\section*{Outline}
\begin{frame}{Outline}
\tableofcontents
\end{frame}
\input{sections/1_intro.tex}
\input{sections/2_rm.tex}
\input{sections/3_real.tex}
\input{sections/4_conclusions.tex}
\input{bibliography.tex}
\appendix
\input{sections/appendix.tex}
\end{document}
| {
"alphanum_fraction": 0.7529761905,
"avg_line_length": 20.5714285714,
"ext": "tex",
"hexsha": "75444dad1e1d937b1191502a80ba0bbf5c429c40",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5c037e0e085b697c9c79950c9c4fe5cbcc494c6d",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "pietrodn/csr_pagerank",
"max_forks_repo_path": "main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5c037e0e085b697c9c79950c9c4fe5cbcc494c6d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "pietrodn/csr_pagerank",
"max_issues_repo_path": "main.tex",
"max_line_length": 86,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "5c037e0e085b697c9c79950c9c4fe5cbcc494c6d",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "pietrodn/csr_pagerank",
"max_stars_repo_path": "main.tex",
"max_stars_repo_stars_event_max_datetime": "2017-11-09T14:36:59.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-11-09T14:36:59.000Z",
"num_tokens": 318,
"size": 1008
} |
\clearpage
\chapter*{\textbf{Anexo}}\label{appendix}
\addcontentsline{toc}{chapter}{\textbf{Anexo}}
\markboth{\textbf{Anexo}}{\textbf{Anexo}}
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\textwidth]{pic/anyfigure.png}
\caption[Cool picture]{Some cool photo taken in the lab.}
\label{fig:anyfigure}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[trim = 15mm 70mm 20mm 20mm, clip, rotate=90, width=0.5\textwidth]{pic/anypdf.pdf}
\caption[Adding PDF as figure]{It is possible to add PDF files as figures while keeping the header and footer of your document. Mostly useful in the appendix.}
\label{fig:pdf}
\end{figure}
| {
"alphanum_fraction": 0.7409638554,
"avg_line_length": 26.56,
"ext": "tex",
"hexsha": "e88333d5d2f73d324ee4c5f6e39c02d2531562f2",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2021-07-12T06:21:48.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-09-30T16:42:40.000Z",
"max_forks_repo_head_hexsha": "2919f08204de15c6b99470b8daad5de4e4fce03c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "gualdrapa/ISEL_LaTex_OneSided_Report_PT",
"max_forks_repo_path": "doc/401appendix.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2919f08204de15c6b99470b8daad5de4e4fce03c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "gualdrapa/ISEL_LaTex_OneSided_Report_PT",
"max_issues_repo_path": "doc/401appendix.tex",
"max_line_length": 163,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2919f08204de15c6b99470b8daad5de4e4fce03c",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "gualdrapa/ISEL_LaTex_OneSided_Report_PT",
"max_stars_repo_path": "doc/401appendix.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 221,
"size": 664
} |
\documentclass[a4paper,12pt]{article}
\usepackage[spanish]{babel}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[tmargin=2cm]{geometry}
\newcommand{\titlehomework}[4]{\begin{center}\section*{#4}{\large #2}\\#1\\#3\\[2ex]\end{center}}
\renewcommand{\labelitemi}{$\bullet$}
\thispagestyle{empty}
\begin{document}
% \titlehomework{author}{course}{group}{assignmentName}
\end{document}
| {
"alphanum_fraction": 0.7468671679,
"avg_line_length": 33.25,
"ext": "tex",
"hexsha": "5b72881f80172c291f4cb5d1f777d891056e0cd0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "11af186f0b6f6be09dd62820af9ba7b78cd7b5b1",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "c0rrigan/docs-repo",
"max_forks_repo_path": "hwtemplate.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "11af186f0b6f6be09dd62820af9ba7b78cd7b5b1",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "c0rrigan/docs-repo",
"max_issues_repo_path": "hwtemplate.tex",
"max_line_length": 97,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "11af186f0b6f6be09dd62820af9ba7b78cd7b5b1",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "c0rrigan/docs-repo",
"max_stars_repo_path": "hwtemplate.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 139,
"size": 399
} |
% !TEX root = as_grf_sopt.tex
% \begin{appendices}
\section{Active Search Regret Bound}
\label{appendix:proof_as_regret}
We start by stating the following result.% by \cite{srinivas2012information}.
\begin{theorem}[Theorem 6, {\cite{srinivas2012information}}]
\label{thm:dev}
Let $\delta \in (0,1)$. Assume the observation noises are uniformly bounded by $\sigma_n$ and $f$ has RKHS norm $B$ with kernel $C_0$, which is equivalent to $\bff^\top\widetilde{\cLap}_0\bff\leq B^2$. Define
% \begin{equation*}
$
\alpha_t = \sqrt{2 B^2 + 300 \gamma_t \log(t/\delta)^3},
$
% \end{equation*}
% where $\Vert \cdot \Vert_K$ denotes the RKHS norm associated with the kernel $K$. Then
then
\begin{equation*}
\mbox{Pr}\left(\forall t, \forall v \in V, \;\; |\mu_t(v) - f(v)| \leq \alpha_{t+1} \sigma_t(v) \right) \geq 1 - \delta.
\end{equation*}
\end{theorem}
We use this result to bound our instantaneous regrets.
\begin{lemma}
Conditioned on the high-probability event in Theorem \ref{thm:dev}, the following bound holds:
\begin{equation*}
\forall t, \;\; r_t := f(v^*_t) - f(v_t) \leq 2 \alpha_t k \sigma_{t-1}(v_t),
\end{equation*}
where $v^*_t$ is the node with the $t$-th globally largest function value and $v_t$ is the node selected at round $t$.
\end{lemma}
%\begin{proof}
\emph{Proof}.
At round $t$ there are two possible situations. If
$v^*_t$ was picked at some earlier round, the definition of $v^*_t$ implies that there exists some $t' < t$ such that $v^*_{t'}$
has not been picked yet. According to our selection rule, the fact that $s_t(v) \geq \sigma_t(v)$,
and Theorem \ref{thm:dev}, the following holds:
\begin{align*}
&\mu_{t-1}(v_t) + \alpha_t s_{t-1}(v_t) \geq \mu_{t-1}(v^*_{t'}) + \alpha_t s_{t-1}(v^*_{t'}) \\
&\qquad \geq \mu_{t-1}(v^*_{t'}) + \alpha_t \sigma_{t-1}(v^*_{t'})
\geq f(v^*_{t'}) \geq f(v^*_t).
\end{align*}
If $v^*_t$ has not been picked yet, a similar argument gives
\begin{equation*}
\mu_{t-1}(v_t) + \alpha_t s_{t-1}(v_t) \geq \mu_{t-1}(v^*_{t}) + \alpha_t s_{t-1}(v^*_{t})
%&\geq& \mu_{t-1}(v^*_{t}) + \alpha_t \sigma_{t-1}(v^*_{t})\\
\geq f(v^*_t).
\end{equation*}
Thus we always have
\begin{align*}
f(v^*_t) &\leq \mu_{t-1}(v_t) + \alpha_t s_{t-1}(v_t) \\
&\leq f(v_t) + \alpha_t \sigma_{t-1}(v_t) +\alpha_t s_{t-1}(v_t)\\
%&\leq& f(v_t) + 2 \alpha_t s_{t-1}(v_t) \\
&\leq f(v_t) + 2 \alpha_t k \sigma_{t-1}(v_t),
\end{align*}
where the second step uses Theorem \ref{thm:dev} and the last step uses
$\sigma_{t-1}(v_t) \leq s_{t-1}(v_t) \leq k\,\sigma_{t-1}(v_t)$.
%\end{proof}
%\begin{lemma}[Lemma 5.3, {\cite{srinivas2012information}}]
% The information gain achieved by the selected nodes can be expressed in terms of the predictive variances.
% Let $\bv_t = (v_1, v_2, \ldots, v_t)$ denote the sequence of selected nodes. Then
% \begin{equation*}
% \mathcal{I}(\by_{\bv_t};f_{\bv_t}) = \frac{1}{2} \sum_{i=1}^t \log(1 + \sigma^{-2}\sigma_{i-1}(v_i)^2).
% \end{equation*}
%\end{lemma}
%Next we bound the sum of squared instantaneous regrets in terms of the maximum information gain.
\begin{lemma}[Lemma 5.4, {\cite{srinivas2012information}}]
Let $\alpha_t$ be defined as in Theorem \ref{thm:dev} and $c_1$ be defined as in Theorem \ref{thm:as_regret}.
Conditioned on the high probability event of Theorem \ref{thm:dev}, the following holds:
\begin{equation*}
\forall T\geq 1, \;\;\sum_{t=1}^T r_t^2 \leq \alpha_T k^2 c_1 \mathcal{I}(\by_{\bv_T};f_{\bv_T}) \leq \alpha_T k^2 c_1 \gamma_T.
\end{equation*}
\end{lemma}
Finally, the Cauchy-Schwarz inequality gives
$R_T \leq \sqrt{T \sum_{t=1}^T r_t^2} \leq k \sqrt{T c_1 \alpha_T \gamma_T}.$
% \end{appendices}
| {
"alphanum_fraction": 0.6618150685,
"avg_line_length": 48.6666666667,
"ext": "tex",
"hexsha": "aba4fa46de3785e6c1572b3b9e977179ab47237c",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2019-11-14T15:33:27.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-12-22T23:55:18.000Z",
"max_forks_repo_head_hexsha": "45d75dc0fe33d3d68784c30ba7f6ecd7b1718c31",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "AutonlabCMU/active-search-gp-sopt",
"max_forks_repo_path": "texts/appendix_proof_as_regret.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "45d75dc0fe33d3d68784c30ba7f6ecd7b1718c31",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "AutonlabCMU/active-search-gp-sopt",
"max_issues_repo_path": "texts/appendix_proof_as_regret.tex",
"max_line_length": 210,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "45d75dc0fe33d3d68784c30ba7f6ecd7b1718c31",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "AutonlabCMU/active-search-gp-sopt",
"max_stars_repo_path": "texts/appendix_proof_as_regret.tex",
"max_stars_repo_stars_event_max_datetime": "2018-10-23T03:53:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-10-23T03:53:13.000Z",
"num_tokens": 1400,
"size": 3504
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %
% HIGZ User Guide -- LaTeX Source %
% %
% Chapter: The input routines %
% %
% Editor: Olivier Couet / CN-AS %
% Last Mod.: 9 July oc %
% %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Filename{H1Theinputroutines}
\chapter{The input routines}
\index{input routines}
\Filename{H2Cursorinput}
\section{Cursor input}
\index{cursor input}
\subsection{The Generic Routine}
\Shubr[GKS]{IRQLC}{(KWKID,LCDNR,ISTAT*,NT*,PX*,PY*)}
\Action
This routine returns the \Lit{(x,y)} position of the cursor in \WC, and the
index of the \NT. Its calling sequence is compatible with the equivalent \GKS{}
routine.
\Pdesc
\begin{DLtt}{1234567}
\item[KWKID]Workstation identifier.
\item[LCDNR]Locator device.
\begin{DLtt}{123}
\item[1] Keyboard.
\item[2] Graphic tablet.
\end{DLtt}
With the \X11{} driver \Lit{LCDNR} can have the following values:
\begin{DLtt}{123}
\item[10] tracking cross
\item[20] cross-hair
\item[30] rubber circle
\item[40] rubber band
\item[50] rubber rectangle
\item[99] the screen coordinates are taken in \Lit{XLOC} and \Lit{YLOC}.
\item[>0] request mode
\item[<0] sample mode
\end{DLtt}
\item[ISTAT] Return status.
\begin{DLtt}{123}
\item[0] Graphic input has been canceled.
\item[1] A point was located and its coordinates are recorded in
\Lit{PX} and \Lit{PY}.
\end{DLtt}
\item[NT] Index of the \NT.
\item[PX] X coordinate of position of locator
\item[PY] Y coordinate of position of locator
\end{DLtt}
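For example, a minimal request of one point on workstation 1 with the keyboard
locator could read:
\begin{XMPt}{Requesting a locator position (illustrative)}
CALL IRQLC(1,1,ISTAT,NT,PX,PY)
IF(ISTAT.EQ.1)WRITE(6,*)'Point (',PX,',',PY,') in NT',NT
\end{XMPt}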
\subsection{The Two Points Routine}
\Shubr{IGLOC2}{(KWKID,*NT*,X1*,Y1*,X2*,Y2*,ISTAT*,CHOPT)}
\Action
This routine returns the graphic cursor position in \WC{} space of two points
and the corresponding \NT{} number. Rubberbanding is used to visualize the area
(box) delimited by the two points.
\index{Graphics Input and transformations}
\Pdesc
\begin{DLtt}{1234567}
\item[KWKID] Workstation identifier
\item[NT] Index of the \NT{} (see \Lit{CHOPT}).
\item[X1] X coordinate of the cursor position in \WC~space of the first point.
\item[Y1] Y coordinate of the cursor position in \WC~space of the first point.
\item[X2] X coordinate of the cursor position in \WC~space of the second point.
\item[Y2] Y coordinate of the cursor position in \WC~space of the second point.
\item[ISTAT] Return status:
\begin{DLtt}{123}
\item[0] Graphic input has been canceled.
\item[1] Two points were located and their coordinates are recorded in
\Lit{X1, Y1, X2, Y2}.
\end{DLtt}
\item[CHOPT] \Lit{CHARACTER} variable specifying the option desired:
\begin{DLtt}{12345}
\item[' '] \Lit{NT} is an output parameter.
\item['P'] \Lit{NT} is an input and output parameter. In this case,
\Lit{NT} contains on input the \NT~index with the highest priority.
\end{DLtt}
\end{DLtt}
\subsection{How to get the position both in \NDC~and \WC~space}
\Shubr{IGLOC}{(ICURS,NT*,IBN*,XNDC*,YNDC*,XWC*,YWC*)}
\Action
It is sometimes useful to get a point position both in \NDC~and \WC~space
at the same time. This routine allows one to do this for workstation 1.
\begin{DLtt}{1234567}
\item[ICURS] Cursor type.
\item[NT] \NT~number.
\item[IBN] Button number:
\begin{DLtt}{123}
\item[0] Right button of the mouse.
\item[1] Left button of the mouse.
\item[3] Middle button of the mouse only for the X11 interface.
\end{DLtt}
\item[XNDC] X coordinate of the cursor position in \NDC{} space.
\item[YNDC] Y coordinate of the cursor position in \NDC{} space.
\item[XWC] X coordinate of the cursor position in \WC{} space.
\item[YWC] Y coordinate of the cursor position in \WC{} space.
\end{DLtt}
\Filename{H2Keyboardinput}
\section{Keyboard input}
\index{keyboard input}
\Shubr[GKS]{IRQST}{(KWKID,ISTDNR,ISTAT*,L*,STR*)}
\Action
This routine returns a character string typed on the keyboard.
\Pdesc
\begin{DLtt}{1234567}
\item[KWKID] Workstation identifier. If \Lit{KWKID} is negative, the
parameters \Lit{RQUEST(81)}, \Lit{RQUEST(82)}, \Lit{RQUEST(91)},
and \Lit{RQUEST(92)} given via the \Lit{QUEST COMMON} specify
a box in \NDC{} in which the request string will be done.
If \HIGZ{} is installed with \GKS{} an ``initialise string''
is performed.
\item[ISTDNR] Device number
\item[ISTAT] Return status. \Lit{0}: Break and \Lit{1}: OK
\item[L] Number of characters returned
\item[STR] Character string returned
\end{DLtt}
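For example, to read one line of text from the keyboard of workstation 1:
\begin{XMPt}{Requesting a string (illustrative)}
CHARACTER*80 STR
CALL IRQST(1,1,ISTAT,L,STR)
IF(ISTAT.EQ.1)WRITE(6,*)'Typed: ',STR(1:L)
\end{XMPt}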
\par
Note that in the routines \Rind{IRQLC} and \Rind{IRQST} the parameter
\Lit{ISTAT} may be used to identify the button number of the mouse.
\newpage
\Filename{H2MenusInput}
\section{Menus Input}
\index{Menus Input}
\Shubr{IGMENU}{(MN,CHTIT,*X1*,*X2*,*Y1*,*Y2*,NBU,CHUSER\-,N,\-CHITEM,\-
\-CHDEF,\-CHVAL*,ICHOIC*,CHOPT)}
\Action
This routine displays a menu and returns the user's choice in the variable
\Lit{ICHOIC} according to the option chosen. This routine works on only one
menu: the menu management must be performed by the application program, but this
routine provides some facilities to manage several menus simultaneously.
\Pdesc
\begin{DLtt}{1234567}
\item[MN] Menu number. To use segment capabilities of the workstation.
If \Lit{MN=0} the segments are not used.
\item[CHTIT] Menu title.
\item[X1] X coordinate of lower left hand corner of menu box
\item[Y1] Y coordinate of lower left hand corner of menu box
\item[X2] X coordinate of upper right hand corner of menu box
\item[Y2] Y coordinate of upper right hand corner of menu box
\item[NBU] Number of User squares.
\item[CHUSER] \Lit{CHARACTER} array of length \Lit{NBU}
containing the text in the users' squares.
The last line of the menu is split into \Lit{NBU} boxes.
\item[N] Number of items.
\item[CHITEM] \Lit{CHARACTER} array of length \Lit{N}
containing the text for the items.
\item[CHDEF] \Lit{CHARACTER} array of length \Lit{N}
containing the text for the parameters.
If \Lit{CHOPT='P'} the menu is split into two columns.
The left column contains the items and
the right column the default value of the corresponding
item.
\Lit{CHDEF(I) (1<I\(\leq\)N)} is a character string which contains
the possible values of the item number \Lit{I}:
\Lit{CHDEF(I)='value1, value2, value3,..., valueN'}.
If \Lit{CHDEF(I)=' '} there are no default values.
\item[CHVAL*] \Lit{CHARACTER} array of length \Lit{N}
into which parameter values are written.
If \Lit{CHOPT='P'} then \Lit{CHVAL(I)} contains the
parameter value for item \Lit{I}.
\item[ICHOIC] Choice number. The description of the possible values returned
in \Lit{ICHOIC} is given in the following table:
\begin{center}
\begin{tabular}{||c|l||}
\hline
0 & Outside of the menu \\
\hline
-100 & Title bar \\
\hline
-1,NBU & User keys \\
\hline
-1000 & Right button of the mouse clicked \\
\hline
$> 0$ & Item number \\
\hline
\end{tabular}
\end{center}
\item[CHOPT] \Lit{CHARACTER} variable specifying the option(s) selected.
\end{DLtt}
The square at the left of the title bar moves and resizes the menu.
The square at the right of the title bar moves the menu.
\newpage
\begin{Tabhere}
\begin{center}
\begin{tabular}{||c|p{12cm}||}
\hline
'H'& The picked item is highlighted. The last choice number must be given
in ICHOIC.\\
\hline
'D'& Display the menu.\\
\hline
'C'& Permit a choice in the displayed menu.\\
\hline
'E'& Erase the menu.\\
\hline
'P'& The menu is a menu with parameters.\\
\hline
'R'& Return the current position of the menu in \Lit{X1,X2,Y1,Y2}.\\
\hline
'S'& Software characters are used to draw the text in the menu.\\
\hline
'U'& Update the user text in the user squares with the value in \Lit{CHUSER}.
     The user square number is given in ICHOIC. The options 'U'
     and 'H' are incompatible because they both use
     ICHOIC as an input parameter.\\
\hline
'M'& Menu drawn on a Metafile.\\
\hline
'Z'& Menu stored in the ZEBRA picture.\\
\hline
'N'& The last input position is used to find the menu item.
With this option choices can be made in several menus
at the same time using a \Lit{DO} loop as shown below.
\Lit{NBMENU} is the number of menus on the screen.\\
\hline
'B'& A rubberbanding box is used for the locator.\\
\hline
'T'& The title bar is not drawn, then the menu can not be moved interactively.\\
\hline
'W'& The menu is drawn with Width. \\
\hline
'A'& The menu is drawn with shAdow. \\
\hline
'V'& Draw only the vertical part of width or shadow.\\
\hline
'O'& Like option 'V' but the width or shadow is aligned on the menu frame.\\
\hline
'I'& Input menu. A parameter menu is displayed and \Rind{IGMENU} is
entered directly in request string. This is useful to
perform a request string without a very complicated
initialization part.\\
\hline
'K'& Key menu. The user keys are drawn as key.\\
\hline
\end{tabular}
\end{center}
\caption{Options for \protect\Rind{IGMENU}}
\label{tab-IGMENU}
\end{Tabhere}
\subsection{Example}
This example program shows how \Rind{IGMENU} can manage
several menus at the same time.
\begin{XMPt}{How to manage several menus}
PROGRAM MENU
*
COMMON /PAWC/H(50000)
PARAMETER (NBMENU=3)
CHARACTER*10 CHU, CHI, CHD, CHV, CHTIT, CHOPT
CHARACTER*80 TEXT
CHARACTER*16 CHLOC(3)
DIMENSION CHU(3),NBU(NBMENU),NBI(NBMENU)
DIMENSION CHI(3),CHD(3),CHV(3),CHTIT(NBMENU)
DIMENSION X1(NBMENU),X2(NBMENU),Y1(NBMENU),Y2(NBMENU)
* Last choice in the menu NB i (useful for Highlight)
DIMENSION ICCH(NBMENU)
DATA CHU /'Quit','Exit','GED'/
DATA CHI /'Choice 1', '|Choice 2', 'Choice 3'/
*.______________________________________
*
*
* Initialize \HIGZ
*
CALL MZEBRA(-3)
CALL MZPAW(50000,' ')
CALL IGINIT(0)
CALL IGWKTY(KWKTYP)
CALL IGSSE(6,KWKTYP)
CALL ISELNT(0)
CALL MESSAGE('Example of the IGMENU usage in multiple input')
*
* Initialize and display menu number 1
*
1 ICCH(1)=0
X1(1)=0.14
X2(1)=0.35
Y1(1)=0.1
Y2(1)=0.25
NBU(1)=2
NBI(1)=3
CHTIT(1)='MENU 1'
CALL IGMENU (0,CHTIT(1),X1(1),X2(1),Y1(1),Y2(1),NBU(1),CHU,
+ NBI(1),CHI,CHD,CHV,ICH,'S D')
*
* Initialize and display menu number 2
*
ICCH(2)=0
X1(2)=0.3
X2(2)=0.56
Y1(2)=0.3
Y2(2)=0.45
NBU(2)=2
NBI(2)=3
CHTIT(2)='MENU 2'
CALL IGMENU (0,CHTIT(2),X1(2),X2(2),Y1(2),Y2(2),NBU(2),CHU,
+ NBI(2),CHI,CHD,CHV,ICH,'S D')
*
* Initialize and display menu number 3
*
ICCH(3)=0
X1(3)=0.05
X2(3)=0.95
NBU(3)=3
NBI(3)=0
CHTIT(3)='MENU 3'
Y1(3)=0.9
Y2(3)=0.935
CALL IGMENU (0,CHTIT(3),X1(3),X2(3),Y1(3),Y2(3),NBU(3),CHU,
+ NBI(3),CHI,CHD,CHV,ICH,'ST D')
*
* Initialize the current menu number
*
IMENU=3
*
* Request in the current menu
*
10 CONTINUE
IF(IMENU.LT.3)THEN
CHOPT='S CR'
ELSE
CHOPT='ST C'
ENDIF
ICH=ICCH(IMENU)
CALL IGMENU (0,CHTIT(IMENU),X1(IMENU),X2(IMENU),
+ Y1(IMENU),Y2(IMENU),NBU(IMENU),CHU,
+ NBI(IMENU),CHI,CHD,CHV,ICH,CHOPT)
*
* If the choice is outside the menu (ICH=0), we search here
* if the input is in an other menu (CHOPT='N')
*
IF(ICH.EQ.0)THEN
DO 20 I=1,NBMENU
IF(I.LT.3)THEN
CHOPT='S CRN'
ELSE
CHOPT='SCTNKU'
ENDIF
ICH=ICCH(I)
CALL IGMENU (0,CHTIT(I),X1(I),X2(I),Y1(I),Y2(I),
+ NBU(I),CHU,
+ NBI(I),CHI,CHD,CHV,ICH,CHOPT)
IF(ICH.NE.0)THEN
IMENU=I
GOTO 30
ENDIF
20 CONTINUE
*
* After the DO loop the input is outside all menus
*
CALL MESSAGE('Outside the menus')
GOTO 10
ENDIF
ICCH(IMENU)=ICH
*
* Analyses the result
*
30 CONTINUE
IF(ICH.GT.0)THEN
WRITE(TEXT,'(''Menu : '',I1,'', choice : '',I1)')IMENU,ICH
CALL MESSAGE(TEXT)
GOTO 10
ENDIF
IF(ICH.EQ.-100)THEN
WRITE(TEXT,'(''Menu : '',I1,'', title bar'')')IMENU
CALL MESSAGE(TEXT)
GOTO 10
ENDIF
IF(ICH.EQ.-1000)THEN
CALL MESSAGE('Right button of the mouse')
GOTO 10
ENDIF
IF(ICH.EQ.-1)THEN
WRITE(TEXT,'(''QUIT from menu : '',I1)')IMENU
CALL MESSAGE(TEXT)
CALL IGEND
GOTO 999
ENDIF
IF(ICH.EQ.-2)THEN
WRITE(TEXT,'(''EXIT from menu : '',I1)')IMENU
CALL MESSAGE(TEXT)
CALL IGEND
GOTO 999
ENDIF
IF(ICH.EQ.-3)THEN
CALL MESSAGE('Invoke the Graphics Editor')
CALL IZPICT('*','S')
CALL IZPICT('P1','M')
CALL IGRNG(20.,20.)
CALL IZGED('P1','S')
GOTO 1
ENDIF
IF(ICH.LT.0)THEN
WRITE(TEXT,'(''Menu : '',I1,'', choice : '',I2)')IMENU,ICH
CALL MESSAGE(TEXT)
GOTO 10
ENDIF
*
999 END
SUBROUTINE MESSAGE(TEXT)
CHARACTER*(*) TEXT
CALL IGZSET('G')
CALL ISELNT(0)
CALL IGSET('FACI',0.)
CALL IGSET('FAIS',1.)
CALL IGSET('BORD',1.)
CALL IGBOX(0.,1.,0.,0.04)
CALL IGSET('TXAL',23.)
CALL IGSET('CHHE',0.02)
CALL IGSET('TXFP',-100.)
CALL ITX(0.5,0.02,TEXT)
call iuwk(0,0)
END
\end{XMPt}
| {
"alphanum_fraction": 0.6146380042,
"avg_line_length": 32.4976635514,
"ext": "tex",
"hexsha": "981d38ee8bdc879d521c2ab57152e695fbbea26d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "berghaus/cernlib-docs",
"max_forks_repo_path": "higz/higzinp.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "berghaus/cernlib-docs",
"max_issues_repo_path": "higz/higzinp.tex",
"max_line_length": 80,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "berghaus/cernlib-docs",
"max_stars_repo_path": "higz/higzinp.tex",
"max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z",
"num_tokens": 4408,
"size": 13909
} |
\chapter{Mining Transdiagnostics Symptoms in Social Media Data}
\section{Abstract}
Mining social media data to predict mental health conditions and psychological traits has increasingly attracted attention in the clinical psychology domain. Instead of predicting a specific diagnostic criterion, we adopt a transdiagnostic approach by investigating common symptoms that predispose an individual to a variety of mental disorders. Treatments that target these factors are called transdiagnostic treatments, and they have been widely employed to tackle anxiety and depressive disorders. We leverage Facebook data from 77 users who participated in the myPersonality project back in 2011. We label negative emotion and two transdiagnostic components, reasoning bias (cognitive distortion) and negative thinking, in more than 4000 Facebook posts. Our study examines how transdiagnostic symptoms manifest in users with different characteristics. We find that marital status and the user's parental relationship are protective factors. Finally, we also identify language features that predict transdiagnostic symptoms.
\section{Introduction}
In recent years, there has been a surge of studies attempting to use social media data to predict psychopathology diagnoses. Various attempts at predicting depression have achieved good performance \cite{munmun13, Aldarwish17, Hu17, Coppersmith15}. What makes these prediction tasks challenging at the moment is that comorbidity is very common in psychopathology. According to the literature, about 60\% - 70\% of individuals diagnosed with an anxiety disorder also meet some of the criteria for depressive and affective disorders \cite{Timothy95}.
The traditional conceptual approach to understanding psychological disorders is to provide a diagnosis of a specific disorder. However, there is increasing recognition that criterion-based diagnoses are of limited value because many disorders frequently co-occur, which is called comorbidity, and share a number of vulnerability factors \cite{Kessler98,Hirschfeld99}. In light of this challenge, psychologists have been shifting towards a transdiagnostic approach in recent years. Instead of giving multiple diagnoses to a patient with comorbidity, the transdiagnostic approach focuses on common psychological processes underlying the syndromes, which provides a better explanation for the high rate of comorbidity observed in clinical practice \cite{Harvey04}. Treatments and preventative interventions targeting transdiagnostic symptoms have been found to be effective for anxiety and depressive disorders \cite{Collins09,Norton04}, eating disorders \cite{Fairburn03} and social phobia \cite{Freda00}, among others. In support of transdiagnostic theory, the National Institute of Mental Health (NIMH) created the Research Domain Criteria (RDoC), which conceptualize five systems that underlie psychopathology: negative valence systems, positive valence systems, cognitive systems, systems for social processes, and arousal/regulatory systems (Insel et al. 2010).
The processes of producing language and the way language is used are an important aspect of identifying psychopathology (Pennebaker). The cognitive system captures language as a construct (Maria's paper); however, at present, analyzing language within the transdiagnostic framework has not been incorporated into the analysis of social media data. In this paper, we see how the transdiagnostic symptoms identified from social media text contribute to depression, influence satisfaction with life and relate to personality. Meanwhile, we also look at user characteristics that might contribute to some of the transdiagnostic symptoms.
% Head 1
\subsection{Investigate Transdiagnostic Symptoms on Social Media}
We explore some of the transdiagnostic symptoms in social media posts. Social media users' threads and updates often reveal their opinions, emotions and daily life activities. Data capturing this information provide researchers a platform to study users' behaviors and psychological traits \cite{Kosinski13, Lushi}. Motivated by the fact that most studies of whether social media behaviors reflect mental health symptoms focus on a specific diagnostic criterion \cite{munmun13, Aldarwish17, Hu17, Coppersmith15}, and thus do not consider the high comorbidity rate of many disorders, we instead look at whether social media posts capture some of the transdiagnostic symptoms.
The components of transdiagnostic treatment and research include attention, memory, reasoning, thought and behaviour. Reasoning refers to thinking that involves deducing conclusions, generating judgements and testing hypotheses logically. Biased reasoning, which parallels cognitive distortion in cognitive behavior therapy, often draws conclusions different from reality \cite{Harvey04}. Assessing cognitive distortion provides an index of improvements in behaviours and emotional resiliency \cite{Freda00,Neil96}. Later we will explain the details of identifying cognitive distortion.
Another transdiagnostic component captured by social media text is repetitive negative thinking, which presents across affective disorders, anxiety disorders, insomnia and psychosis \cite{Harvey04}. The two processes in repetitive negative thinking are worry and rumination. They share three commonalities: a) repetitive, b) uncontrollable, c) focused on negative content \cite{Harvey04}. Worry presents mainly in generalized anxiety disorder (GAD) and is defined as a chain of uncontrollable thoughts or images which represent an attempt at problem-solving an issue that might contain a negative outcome \cite{Thomas83}. Rumination, in contrast, is content-specific to the type of disorder. Rumination in post-traumatic stress disorder involves repetitive negative thinking about the trauma and its consequences \cite{Michael07}; rumination in social phobia contains self-appraisals and evaluations of the partner in a social event \cite{Kashdan07}.
\section{BACKGROUND AND RELATED WORK}
% Head 1
\subsection{The Cognitive Behavior Therapy (CBT)}
Cognitive models of psychopathology propose that pathological behaviors and emotions are often the consequences of cognitive biases or distortions, which are inadequate interpretations of situations. Beck's cognitive model of psychopathology emphasizes the role of cognitive distortion in the maintenance of anxiety, depression and other mental disorders \cite{Beck67,Beck11}. Therefore, one of the goals of cognitive behavioral therapy for anxiety and depression is to help an individual adjust these biases, which is called ``cognitive restructuring''. Cognitive restructuring modifies the client's problematic ways of thinking about themselves, their world and their future \cite{Harvey04}. To identify these biases, therapists observe thoughts containing cognitive distortions and investigate the underlying schemas that generate these thoughts \cite{Beck11,Dobson09}.
Cognitive distortions can be classified according to their content: for example, mind reading, personalization, labeling and all-or-nothing thinking \cite{Oliveira14}. These thoughts can be true or dismissive of reality. For example, ``My boyfriend doesn't like me anymore.'' This statement may be true to the facts or based on mind reading, a type of distortion in which individuals assume that they know what people think without having sufficient evidence of their thoughts. The categories of cognitive distortions often indicate the type of disorder. For instance, individuals with social anxiety disorder engage in mind reading (``No one wants me to be around with them'') or catastrophizing (``It will be a disaster if I said something wrong in the group''). Depressed individuals engage in a wide range of cognitive distortions, such as labeling (``I am the black sheep of the family'') and fortune telling (``I can never be happy without you'') \cite{Newman15}.
% Head 2
\subsection{Assessment of Cognitive Distortion}
The most widely applied method for assessing cognitive distortion is the cognitive distortion checklist, which has been validated in experimental and clinical work \cite{Beck11,Dobson09}. xx and colleagues developed the Cognitive Distortion Questionnaire (CD-Quest) \cite{Simona17}, a 15-item questionnaire based on the distortion checklist that assesses the frequency and intensity of cognitive distortion. It is administered before the therapy session to help clients keep track of their thinking errors, enabling them to be aware of change over time as the therapy goes on. In this study, we adopted the CD-Quest in our annotation guideline as a criterion for annotators to identify cognitive distortion based on the context information provided in the Facebook posts. Below is an example from the CD-Quest:
% quote
\begin{quote}
\textit{Dichotomous thinking (also called all-or-nothing, black-and-white, or
polarized thinking): I view a situation, a person, or an event in "either-or"
terms, fitting them into only two extreme categories instead of on a continuum.
EXAMPLES: "I made a mistake; therefore my performance was a failure." "I ate
more than I planned, so I blew my diet completely."}
\end{quote}
% Head 3
\subsection{Negative Emotion and Psychopathology}
The relationship between negative affect, depression and anxiety has long been considered clinically important (Akiskal, 1985; Clark, 1989; Clark and Watson, 1990; Dobson, 1985). How people react to events reveals their coping mechanisms, and at the heart of reacting to and coping with events is people's emotional response (Pennebaker). We also investigate users' negative emotion in social media text and its correlation with user characteristics, behaviors and psychopathology. Instead of using LIWC, we manually label whether a post reflects negative emotion of the author.
\section{Method and Materials}
% Head 1
\subsection{Data}
This corpus consists of 5000 Facebook posts from individuals who participated in the myPersonality project from January 2009 to December 2011. Our methods were carried out in accordance with the approved guidelines from myPersonality. myPersonality was a Facebook-based application collecting psychometric tests from users. Participants opted to allow myPersonality to collect their account information and public Facebook posts. Data collection by myPersonality complied with Facebook's terms of service. All data are anonymized and were gathered with opt-in consent for research purposes. The sample used in our study contains 301 participants who completed the CES-D scale, the Satisfaction with Life Scale, the Big-5 Personality Scale and the Schwartz Value Survey.
% Head 2
\subsection{Sampling Approach}
To ensure we have enough posts to conduct a longitudinal study, we only include regular posters in our sample. We define regular posters as individuals who posted twice per week or more, estimated using the average post count per day during the sampling frame. For example, an individual with a post count of 0.3 per day made around 109.5 posts in 365 days, roughly equivalent to an average of 2.1 posts per week. In our sample, 122 out of 301 participants were regular posters. To make sure our sampling approach was conducted under a standard sampling framework, we included the 91 regular posters whose last post obtained by myPersonality was less than a week before they completed the CES-D scale. We then obtained a sample of 4696 posts produced in the two months before the CES-D score was obtained. We further eliminated 14 posters who produced fewer than 20 posts during the two months, along with posts not written in English. This yields a final sample of 4145 posts from 77 users.
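A minimal Python sketch of this filter follows; the data layout (one row per post, with \texttt{user\_id} and \texttt{timestamp} columns) is an assumption for illustration, not part of the original pipeline.
\begin{verbatim}
# Sketch: select "regular posters" averaging at least 2 posts per week.
import pandas as pd

def regular_posters(posts: pd.DataFrame, per_day: float = 2 / 7) -> pd.Index:
    # Average post count per day over the sampling frame, per user;
    # 2 posts per week corresponds to 2/7 ~ 0.286 posts per day.
    span = (posts["timestamp"].max() - posts["timestamp"].min()).days + 1
    rate = posts.groupby("user_id").size() / span
    return rate[rate >= per_day].index
\end{verbatim}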
% Head 2
\subsection{Annotation Process}
The annotation guideline was developed using 4362 Facebook posts to illustrate negative emotion and cognitive distortions. The extracted posts were first annotated by a trained psychologist according to the annotation guideline. The annotation process includes three steps. First, we identify whether the post reflects the author's negative emotion; posts that contain a mix of emotions are labeled as `mixed'. We group the `mixed' posts together with the negative emotion posts in the later analysis. Here negative emotions include, but are not limited to, sadness, anger, anxiety, boredom and physical complaints. Sometimes users repost content that contains negative emotion but might not reflect negative emotion of the author. For example,
% quote
\begin{quote}{\itshape}
\textit{you have a sister who has made you laugh, punched you, stuck up for you, drove you crazy, hugged you, watched you succeed , saw you fail, picked you back up, cheered you on, made you strong, and is someone you cant live without someone you can always count on....REPOST THIS IF YOU HAVE A SISTER THAT YOU LOVE.}
\end{quote}
We label these posts as neutral because the author's emotion when reposting this information is uncertain. Second, we label posts that contain cognitive distortion. In scoring cognitive distortion, annotators are given specific cues - the CD-Quest as a reference measurement of cognitive distortion - but are also instructed to rely on their linguistic intuition. In addition, posts from quotes, lyrics and reposts are labeled as non-original posts. Annotators are trained by following the instructions and sets of practice examples in the annotation guideline. The difficulty of this task is that many statuses contain an emotion or thought but do not describe the event that caused it. However, we can still tell that some posts contain cognitive distortion even when the post does not indicate a situation or context (see Table~\ref{tab:zero}).
% Table
\begin{table}%
\caption{Posts with cognitive distortion}
\label{tab:zero}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{|p{5cm}|p{9.5cm}|}
\toprule
post & cognitive distortion \\
\hline\hline
"I feel like my life is waste. I have no story, no influence, no particular skills that are useful. I just suck."
& Labeling, magnification/minimization: the author assigns global negative traits to him/herself, such as `my life is a waste' and `I just suck'. The author generates the global negative pattern based on some incidents (the number of incidents is unclear) but fails to focus on life events that run counter to this statement. \\[5pt]
\hline
I hate the past. It deserves to be erased from memory forever. I don't care if the memories were good & Discounting positives and dichotomous thinking: the author hates everything in the past, which is all-or-nothing thinking, and he/she diminishes the positive events or achievements of the past \\[5pt]
\hline
Nothing feels right today. It's weird & The author gives greater weight to perceived failure or weakness but fails to be aware of positive events or opportunities \\[5pt]
\bottomrule
\end{tabular}
\end{center}
\end{minipage}
\end{table}%
For the last step, we label posts that contain repetitive negative thinking - worry and rumination. Worry is often indicated by particular words, such as ``anxious'' and ``nervous''. Rumination includes ruminating on a specific event, state or emotion. Here we set the rumination time window as one week: if an individual ruminates on a specific event or a specific emotion/state within a week, we label it as rumination. However, rumination on a specific emotion can be difficult to identify if the post does not contain information about the event or situation that causes the emotion, because we cannot tell whether the negative emotion is pointed at the same event (see Example EMOTION).
% quote
\begin{quote}{\itshape}
EVENT: \\
\textit{"Cry my betts fish died and the other ones dying:("\\
"booh im crying my Betta fish died"}\\
STATE:\\
\textit{Ugh! What a boring morning!\\
I am so bored!\\
Why........SO..BOOOOOOOOOOOOOOOOOOOOREEEED!\\}
EMOTION: \\
\textit{Day1: "I'm so angry that mom threw away my things today." \\
Day 2: ":( " \\
Day 3: ":(" } \\
\end{quote}
% Head 3
\subsection{Self-reported measurement scale}
We now present a number of user characteristics and three self-reported scales that are used to measure depression symptoms, personality and satisfaction with life. Later we investigate the relationships between transdiagnostic symptoms and the self-reported psychological traits.
\subsubsection{Center for Epidemiologic Studies Depression Scale (CES-D Scale)}
CES-D is a self-reported scale designed to measure depression symptoms in the general population \cite{Radloff77}. The scale consists of 20 items associated with depression symptoms. It has been tested in psychiatric settings across various cultures over the years and found to have high internal consistency and test-retest reliability \cite{Radloff77,Herz86,Roberts80}. Its validity was assessed via correlation with clinical diagnoses of depression and other self-reported trait measurements \cite{Herz86}.
\subsubsection{Five Factor Model of Personality (Big-5)}
The five factor personality model was established in an attempt to systematize the description of traits. The dimensions composing the 5-factor model are extraversion, agreeableness, conscientiousness, neuroticism and openness to experience. The five factor structure has proved robust in both self and peer ratings \cite{McCrae92}, in children and adults \cite{Ivan95} and across different cultures \cite{McCrae02}. Early literature found that the big-5 is relatively stable over time \cite{McCrae92}; however, more recent literature found the opposite \cite{Ardelt00}. Neuroticism has a strong correlation with a range of psychological disorders, such as anxiety and depression \cite{Ormel04}. Individuals who score high on neuroticism tend to frequently experience negative moods and physical symptoms. Recent studies found that social media data can predict the 5-factor model of personality \cite{Kosinski13}.
\subsubsection{Satisfaction with Life Scale (SWLS)}
The 5-item Satisfaction with Life Scale was developed to measure global life satisfaction. The SWLS has been tested across different cultures and age groups \cite{Diener93} and found to have high internal consistency and temporal reliability \cite{Diener85}. Its validity was assessed by correlation with other measures of subjective well-being and specific personality dimensions.
\section{Results}
% Head 1
\subsection{Transdiagnostic labels}
Among the 4145 posts, 804 reflect negative emotion of the author and 36 contain a mix of positive and negative emotion. Among these 840 posts that contain negative emotion, only 41 contain cognitive distortion and 111 contain negative thinking (85 worry, 26 rumination); 3 posts show both cognitive distortion and negative thinking. Cognitive distortion is rare: it occurs in only 1\% of the posts in this sample. We aggregate a negative emotion score by summing the number of negative posts from each user, and use the same approach to generate a distortion score and a negative thinking score for each user. Table~\ref{tab:one} shows the statistics of the three scores and their correlations with depression symptoms (CES-D). Table~\ref{tab:one2} shows the correlations between the per-post transdiagnostic symptom scores and depression symptoms. The cumulative transdiagnostic scores appear more strongly correlated with psychopathology, so we use the cumulative scores in the later analysis. Although posts containing cognitive distortion account for only 1\% of all posts, they are moderately correlated with self-reported depression symptoms. Negative emotion and negative thinking, although more frequently observed in the data, are not significantly correlated with CES-D.
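The aggregation and the user-level correlations can be sketched as follows; the file and column names are illustrative assumptions, not the project's actual data files.
\begin{verbatim}
# Sketch: cumulative per-user scores and their Pearson correlation with CES-D.
import pandas as pd
from scipy.stats import pearsonr

posts = pd.read_csv("annotated_posts.csv")   # hypothetical annotation export
users = pd.read_csv("user_scales.csv")       # hypothetical self-report scores

labels = ["negative_emotion", "cognitive_distortion", "negative_thinking"]
scores = posts.groupby("user_id")[labels].sum()          # cumulative scores
merged = scores.join(users.set_index("user_id")["cesd"])

for label in labels:
    r, p = pearsonr(merged[label], merged["cesd"])
    print(f"{label}: r = {r:.3f}, p = {p:.3f}")
\end{verbatim}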
% Table
\begin{table}%
\caption{Transdiagnositc components (cumulative score) and depression symptoms}
\label{tab:one}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{lllll}
\toprule
& mean & SD & CES-D & SWL \\
\hline\hline
Negative emotion & 10.91 & 11.835 &0.192 & -0.16\\
Cognitive distortion & 0.532 & 0.981 &0.300** & -0.250* \\
Negative thinking & 1.441 & 2.962 &0.117& 0.023\\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} * p<0.05, **p<0.01, ***p<0.001
\end{minipage}
\end{table}%
% Table
\begin{table}%
\caption{Transdiagnositc components (per post) and depression symptoms}
\label{tab:one2}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{lllll}
\toprule
& mean & SD & CES-D & SWL\\
\hline\hline
Negative emotion & 0.182 & 0.197 &0.123 & -0.108\\
Cognitive distortion & 0.009 & 0.016 &0.261*& -0.208\\
Negative thinking & 0.018 & 0.032 &0.044 & -0.056\\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} * p<0.05, **p<0.01, ***p<0.001, CES-D: correlation with depression symptoms. SWL: correlation with Satisfaction with Life
\end{minipage}
\end{table}%
Figure~\ref{fig:one} shows the number of negative emotion and transdiagnostic symptom posts from each user in the two-month time window. All three distributions are positively skewed: most users do not show much negative emotion, and a majority of users do not show any cognitive distortion or negative thinking. Those who do show cognitive distortion in their posts typically have only 1-2 such posts.
% Figure
\begin{figure}
\includegraphics[width=80mm,scale=0.8]{fig1}
\caption{Distribution of negative emotion and transdiagnostic symptoms}
\label{fig:one}
\end{figure}
% Head 2
\subsection{Dataset Statistics}
Since cognitive distortion appears to be the component most correlated with psychopathology (Table~\ref{tab:one}), we now subset a sample of individuals whose cognitive distortion score is higher than the group mean, which yields a sample of 26 individuals. We also subset another sample in which individuals have a lower than average cognitive distortion score (n = 51). We compare depression symptoms, satisfaction with life and personality between the two groups.
Figure~\ref{fig:two} shows the age distribution of the sample population and of the high and low cognitive distortion groups. Individuals 15-20 years old account for the majority of the sample population (skewness = 1.685, kurtosis = 5.532), and the same pattern occurs in the low cognitive distortion group (skewness = 1.332, kurtosis = 3.911). In contrast, the majority of people in the high cognitive distortion group are aged 20-22 (skewness = 0.817, kurtosis = 3.964).
% Figure
\begin{figure}
\includegraphics[width=80mm,scale=0.8]{fig2}
\caption{Age distributions of the sample population and of the high and low cognitive distortion groups}
\label{fig:two}
\end{figure}
% Figure
\begin{figure}
\includegraphics[scale=0.9]{CESD_distortion}
\caption{qqplot of selected variables}
\label{fig:three}
\end{figure}
% Table
\begin{table}%
\caption{t-test Between Users with High or Low Transdiagnostic Symptoms}
\label{tab:two}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{llllllllll}
\toprule
& \multicolumn{2}{c}{all(n=77)} & \multicolumn{2}{c}{High Trans(n=26)} & \multicolumn{2}{c}{Low Trans(n=29)} & & \\
& mean & SD & mean & SD & mean & SD & p & Cohen's d \\
\hline\hline
SWL & 4.221 & & 3.831 & &4.483 & & & -0.42 \\
CES-D & 23.860 & & 28.42 & &21.62 & & * & 0.60 \\
ope & 4.166 & & 4.052 & &4.148 & & &-0.37 \\
con & 3.183 & & 3.085 & &3.094 & & & -0.19\\
ext & 3.101 & & 2.838 & &3.094 & & & -0.48 \\
agr & 3.539 & & 3.457 & &3.522 & & &-0.18 \\
neu & 3.022 & & 3.152 & &2.96 & & & 0.22 \\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} * p<0.05, **p<0.01, ***p<0.001 after Bonferroni correction. Effect size: 0.8 = large (L); 0.5 = moderate (M); 0.2 = small (S).
num. of posts: number of posts in two months; SWL: Satisfaction with Life score;
CES-D: Center for Epidemiological Studies Depression scale; ope: openness; con: conscientiousness; ext: extraversion; agr: agreeableness; neu: neuroticism.
\end{minipage}
\end{table}%
We present the transdiagnostic symptom scores of the two groups together with their self-reported big-5 personality scores, satisfaction with life scores and depression symptom scores (Table~\ref{tab:two}). Two users did not report their age on their profiles; we assign them the mean age. We conduct independent sample t-tests on the selected variables between the two groups. Figure~\ref{fig:three} shows the qqplot of the selected variables.
Our observations indicate that users' personality characteristics do not distinguish their transdiagnostic symptoms. However, users with more transdiagnostic symptoms tend to post more (nearly twice as many posts as the low symptom group). They also report significantly more depression symptoms (28\% higher than low symptom users).
We further divide users according to their demographic characteristics (gender, marital status, relationship status and relationship with parents) and observe their differences in transdiagnostic symptoms (Table~\ref{tab:three}). Users missing some of the characteristics information are assigned to the category `other'; users in this category are not included in this analysis. Since Figure~\ref{fig:one} shows that the transdiagnostic symptoms and negative emotion are not normally distributed, we conduct a Wilcoxon signed-rank test (a non-parametric test used when the sample is not normally distributed) to compare these components between male and female users. Results show no gender difference in transdiagnostic symptoms.
We then examine whether relationship status contributes to the amount of transdiagnostic symptoms. We used the Kruskal-Wallis test to compare the medians between users with different relationship statuses: single, in a relationship, and married. The Kruskal-Wallis test is a non-parametric equivalent of one-way analysis of variance (ANOVA); ANOVA requires the residuals to be normally distributed, which is not the case in our sample, whereas Kruskal-Wallis can compare the medians between groups when this assumption is not satisfied. Results show no statistically significant differences among the three groups in negative emotion (H = 4.516, p > 0.05), cognitive distortion (H = 1.573, p > 0.05) or negative thinking (H = 1.628, p > 0.05). Although the medians of the three groups do not differ, the density plots show that most married individuals have noticeably lower transdiagnostic symptoms and negative emotion (Figure~\ref{fig:four}). Having a partner who provides mental support seems to be a protective factor, whereas no difference is found among people in a relationship. The result can also be interpreted the other way around: people who have a partner and fewer transdiagnostic symptoms may be more likely to get married, or to report being married on social media. Moreover, individuals without divorced parents tend to have lower negative emotion, cognitive distortion and negative thinking compared with those who have divorced parents.
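The group comparison can be sketched as follows, assuming a hypothetical per-user table with aggregated scores and a \texttt{relationship\_status} column.
\begin{verbatim}
# Sketch: rank-based comparison of negative emotion across the three
# relationship-status groups (non-parametric analogue of one-way ANOVA).
import pandas as pd
from scipy.stats import kruskal

user_scores = pd.read_csv("user_scores.csv")   # hypothetical per-user table
groups = [g["negative_emotion"].to_numpy()
          for _, g in user_scores.groupby("relationship_status")]
h, p = kruskal(*groups)
print(f"Kruskal-Wallis: H = {h:.3f}, p = {p:.3f}")
\end{verbatim}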
% Figure
\begin{figure}
\includegraphics[scale=0.9]{neg_emo_rela}
\caption{Negative emotion in different relationship statuses}
\label{fig:four}
\end{figure}
% Table
\begin{table}%
\caption{Comparing Transdiagnostic Symptoms Among Different Demographic Groups}
\label{tab:three}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{lllllll}
\toprule
 & \multicolumn{2}{c}{parents together} & \multicolumn{2}{c}{parents NOT together} & & \\
 & mean & SD & mean & SD & p & Hodges-Lehmann estimator \\
\hline
\hline
Negative emotion &6.315 & 4.607 & 13.292 &4.607 &* & -0.067 \\
Cognitive distortion &0.158 & 0.374 &0.625 &1.095 & &0.000 \\
Negative thinking &0.263 & 0.653 & 1.75 &2.937 & &0.000\\
CES-D &19.63 & 10.24 & 25.12 &12.74 & & 6.000\\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} * p<0.05, **p<0.01, ***p<0.001. The Hodges-Lehmann estimator gives the pseudo-median difference between the groups in the non-parametric test.
\end{minipage}
\end{table}%
% Head 3
\subsection{Late night posts}
Sleep disturbance is one of the major symptoms of depression. We investigate the relationship between sleep disturbance and cognitive distortion. We count the number of posts written from midnight (12:00am) until 6:00am, then compute the proportion of late night posts out of the total number of posts of that user. We then investigate the relationship between the proportion of late night posts and transdiagnostic symptoms. For the transdiagnostic symptoms, we divide the cumulative symptom score by the number of posts of the user.
Negative emotion (r = 0.293, p < 0.01), cognitive distortion (r = 0.300, p < 0.01) and negative thinking (r = 0.285, p < 0.05) are all moderately correlated with the proportion of late night posts. It appears that negative thinking is more likely to occur in late night posts. Our result is also consistent with the cognitive model of insomnia: individuals with insomnia suffer from unpleasant thoughts and excessive, uncontrollable worry during the pre-sleep period (Borkovec 1979, 1982; Morin, 1993).
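The late-night feature can be computed as in the sketch below, again assuming a per-post table with a parseable \texttt{timestamp} column.
\begin{verbatim}
# Sketch: per-user proportion of posts written between 00:00 and 06:00.
import pandas as pd

posts = pd.read_csv("annotated_posts.csv", parse_dates=["timestamp"])
is_late = posts["timestamp"].dt.hour < 6           # midnight up to 6:00am
late_share = is_late.groupby(posts["user_id"]).mean()
\end{verbatim}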
% Head 4
\subsection{Linguistic Styles}
We also measure the correlation between transdiagnostic components and linguistic style. Linguistic styles capture how an individual uses different components of the language in various psychological or social environments. We used LIWC to define the linguistic style of each Facebook post, then aggregated the linguistic style scores at the user level. Table~\ref{tab:four} shows the correlations between user linguistic style scores and transdiagnostic symptoms.
Clout refers to the social status, confidence or leadership that people display through their writing. Studies have found that people with higher status tend to use fewer first-person singular pronouns and more first-person plural and second-person singular pronouns \cite{Kacewicz13}. In our results, clout is strongly negatively correlated with negative emotion: people with more negative emotion tend to focus on the self, and thus use fewer third-person or second-person pronouns. Our finding corresponds to the findings of Pennebaker's depression and language studies \cite{Pennebaker10}. The difference in self-focus might be a response to emotional pain or a thinking pattern that predisposes individuals to depression \cite{Wolf07}.
It is not surprising that emotional tone, which refers to a positive tone, is negatively correlated with the negative emotion score; the LIWC negative emotion category is also moderately correlated with our manually labeled negative emotion. Social referents, i.e. words indicating social roles (father, mother, sister and so on), are slightly to moderately negatively linked to cognitive distortion and negative emotion. Our result indicates that people who show more negative emotion and cognitive distortion on social media are more likely to be socially detached from family and friends. We also find that these people are more present and future oriented and use fewer exclamation marks. However, this might be particular to social media text: users seldom describe in detail negative events that happened in the past in Facebook posts; instead, they vent their feelings about the events, for example `I am bored.', `feeling sick again.', `I hate today.'. Exclamation marks are often used to indicate excitement or surprise in a positive context.
It appears that the content of negative thinking is often related to health and home. However, our result is limited to the context of social media: people are likely less open to talking about their financial situation and work issues on social media because doing so could affect their social image. On the other hand, posts that contain cognitive distortion are not content-specific; they tend to contain longer words and more words per sentence, and these words are more likely to be in the LIWC dictionary. This is mainly because there is a lot of reasoning and thinking in cognitive distortion posts. In addition, the language of cognitive distortion is also less reward-focused and more risk- or prevention-focused.
% Table
\begin{table}%
\caption{Transdiagnositc components and depression symptoms}
\label{tab:four}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{llll}
\toprule
& Negative emotion & Cognitive distortion & Negative thinking \\
\hline\hline
SUMMARY VARIABLE \\
analytic & & -0.262* & \\
clout & -0.518** & &-0.310** \\
authentic & & 0.325** & \\
emotional tone & -0.341*** & & \\
LANGUAGE METRICS & & & \\
words > 6 letters & & -0.335** & \\
words per sentence & & 0.258* & \\
dictionary words & 0.234* & 0.412*** & \\
GRAMMAR & & & \\
functional words & & 0.344** & \\
total pronouns & & 0.251** & \\
personal pronouns & & 0.251** & \\
1st per pronoun & 0.369** & 0.325** & 0.239* \\
3rd per singular &-0.326*& -0.249* & \\
2nd person & -0.235* & & \\
prepositions & & 0.273* & \\
conjunctions &0.322** & 0.309* & \\
adjective & & 0.270* & \\
comparatives & & 0.320** & \\
verb & 0.244* & & \\
AFFECT WORDS & & & \\
negative emotion &0.322** & & \\
anger &0.413*** & & \\
anxiety & & & \\
sadness & & & \\
swear & 0.385*** & & \\
SOCIAL & & & \\
social words & & & -0.263* \\
female referents &-0.338** & -0.237* & \\
male referents & & -0.261* & \\
COGNITIVE PROCESS & & & \\
differentiation &0.323** & & \\
PERCEPTUAL & & & \\
perceptual process & &0.255* & \\
feeling & & 0.301* & \\
BIOLOGICAL & & & \\
health/illness & & & 0.335* \\
CORE DRIVE & & & \\
reward focus & & -0.249* & \\
risk/prevention focus & & 0.312** & \\
TIME & & & \\
present focus & 0.247* & 0.232* & \\
future focus &0.312* & & \\
PERSONAL CONCERN & & & \\
home & 0.300** & & 0.246*\\
work & & & \\
money & & & \\
PUNCTUATION & & & \\
exclamation marks &-0.257* &-0.317* & \\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} * p<0.05, **p<0.01, ***p<0.001
\end{minipage}
\end{table}%
% Head 4
\subsection{Cognitive Distortion Regression Model}
We explore the performance of a linear regression model in predicting cognitive distortion. We found high intercorrelations among the LIWC features, and PCA or SVD-based feature selection methods do not take into account the potential multivariate nature of the data structure. We therefore select the features that are most correlated with cognitive distortion according to Table~\ref{tab:four}. ``Dictionary words'' has the highest correlation with cognitive distortion but is also highly correlated with more than a third of the language features, so we removed ``dictionary words'' to avoid multicollinearity. We then further removed features that are correlated more than 0.3 with the top features. Our model explains 52\% of the variance in the data; swear words and risk focus are very strong predictors.
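A sketch of the model fit is shown below; the feature names are illustrative stand-ins for the selected LIWC columns, and the per-user feature file is assumed.
\begin{verbatim}
# Sketch: OLS regression of the per-user cognitive distortion score on the
# selected LIWC features plus the late-night proportion.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("user_features.csv")   # hypothetical per-user feature table
features = ["total_pronouns", "third_person_pronoun", "prepositions",
            "swear", "feeling", "reward_focus", "risk_focus",
            "late_night_share"]

X = sm.add_constant(df[features])       # adds the intercept term
model = sm.OLS(df["cognitive_distortion"], X).fit()
print(model.summary())                  # betas, SEs, t-stats, R-squared
\end{verbatim}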
% Table
\begin{table}%
\caption{Cognitive Distortion Linear Regression Model}
\label{tab:five}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{lllll}
\toprule
measures & beta & SE & t-Stat \\
\hline
\hline
Intercept & 0.199 &0.160 & 1.240 \\
total pronouns& 0.010* & 0.004 & 2.221 \\
3rd person pronoun & -0.019* & 0.008 & -2.291 \\
preposition & 0.021** & 0.007 & 2.837 \\
swear & 0.021*** & 0.005 & 4.026\\
feeling & 0.029** & 0.010 & 2.929 \\
reward focus & -0.020* & 0.009 & -2.132 \\
risk focus & 0.030*** & 0.008 & 3.494 \\
proportion of late night post & 1.031* & 0.426 & 2.417 \\
\hline
Residual standard error & 0.714 \\
Multiple-R2 & 0.521 \\
Error degrees of freedom & 68 \\
\hline
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} . < 0.1 * p<0.05, **p<0.01, ***p<0.001
\end{minipage}
\end{table}%
\section{CONCLUSION}
This research is designed to complement the current transdiagnostic diagnostic approach with a novel way to access people's behavior. We examine the feasibility of identifying transdiagnostic symptoms using Facebook data and of finding language features that predict cognitive distortion, a core component in CBT that is highly associated with anxiety and depressive disorders. First, we label negative emotion, cognitive distortion and negative thinking in more than 4000 Facebook posts. Then we investigate the relationship between these components and depression symptoms, satisfaction with life and big-5 personality. Thereafter, we characterize the differences in transdiagnostic symptoms and negative emotions among different demographic groups. Finally, we identify features that are able to predict cognitive distortion.
We found that cognitive distortion is moderately correlated with depression symptoms and satisfaction with life. Marriage and having parents who stayed together appear to be protective factors against developing transdiagnostic symptoms. We found that a subset of the language features predicts cognitive distortion well, explaining 52\% of the variance in the data (Table~\ref{tab:five}). The proportion of posts written late at night, which is a sign of insomnia, also enhances the prediction.
The major limitation of our work is that our data come from a social media platform. Facebook data may not represent the thinking process of an individual precisely, because users differ in their degree of selective self-presentation and self-disclosure. Moreover, this work focuses on a limited sample of 77 users. It would be useful to replicate the study on a larger population to validate the patterns found here.
| {
"alphanum_fraction": 0.7700635444,
"avg_line_length": 88.5208333333,
"ext": "tex",
"hexsha": "8f0889d5fa30b4632312bc4fa23642cf985f22ce",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3fcc32356318c1fc8ff69b6794df829311c861e2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "luciasalar/The-Psychology-of-Social-Media-Technology",
"max_forks_repo_path": "Chapter4/chapter4.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3fcc32356318c1fc8ff69b6794df829311c861e2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "luciasalar/The-Psychology-of-Social-Media-Technology",
"max_issues_repo_path": "Chapter4/chapter4.tex",
"max_line_length": 1477,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3fcc32356318c1fc8ff69b6794df829311c861e2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "luciasalar/The-Psychology-of-Social-Media-Technology",
"max_stars_repo_path": "Chapter4/chapter4.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 9239,
"size": 38241
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{graphicx}
\graphicspath{ {images/} }
\newcommand{\Mod}[1]{\ (\mathrm{mod}\ #1)}
\title{Zerocaf: Short ring signatures with Bulletproofs}
\author{
Carlos Perez\\
Dusk Foundation\footnote{https://dusk.network/}\\
\texttt{[email protected]}
\and
Luke Pearson\\
Dusk Foundation\\
\texttt{[email protected]}
}
\date{April 2019}
\begin{document}
\maketitle
\thispagestyle{empty}
\pagestyle{empty}
\begin{abstract}
Zerocaf is one of the cryptographic protocols built by, and implemented within, the Dusk Network; it uses zero-knowledge proofs to show the existence of a private key within one of many public keys. For this, an elliptic curve used for key generation is defined over the Ristretto scalar field, which enables the use of Ristretto in Bulletproofs while abstracting away the computationally intensive conversion, within a Rank-1 Constraint System, from a co-factor 8 scalar field into a co-factor 1 Ristretto field. This paper provides an explanation of the current curve development, as well as a contextual understanding of how this curve implementation acts as one aspect of the Zerocaf protocol.
\end{abstract}
\newpage
\tableofcontents
\newpage
\section{Introduction}
The construction and use of elliptic curves is paramount to many cryptographic protocols. Elliptic curves are among the fastest performing primitives for which the discrete logarithm problem is hard, which is why they are regarded as dominant in the field of cryptography. As the field advances, elliptic curves have proved unparalleled as a cryptographic system in which speed and security are two of the most outstanding features. Zerocaf uses elliptic curves for both private and public key generation; these keys are used in conjunction with zero-knowledge proofs to show how they relate to one another. By creating proofs which show that a private key exists in one of many public keys, it is possible to perform cryptographic functions on just the private key, which requires significantly less computational effort but is applicable to all of the keys. To understand the theory and some practical applications of elliptic curves and their operations, whether standalone or combined with other cryptographic tools, it is important to familiarise oneself with the differing utility of various elliptic curves. From a greater understanding of these curves, many of the goals of this Dusk project will become apparent. The curve implementation, and the choice of certain novel methods, can be seen holistically with all the aspects discussed in this paper and as part of a wider pragmatic solution to one aspect of the Dusk network: performing elliptic curve cryptography inside of a circuit. \\
All of the associated work, both current and future, either is or will be written in Rust, as this is the language of the library being built.
\section{Set Inclusion}
Set inclusion will be used to show that a private key is associated with one of many public keys. The private and public keys are two values generated from a one-way function and used in a cryptographic system. In any such system, the public key is used for encryption, whereas only the private key is used for decryption. A public key is classed as a set element, where the set is all of the curve points generated by a base point. Set inclusion is used to prove that the private key is associated with one of the many public keys. A set qualifies as a proper subset of another set if and only if every element of the former is also an element of the latter and the two sets are not equal. In order to produce a set inclusion proof, the Prover $\mathcal{P}$ has to convince the Verifier $\mathcal{V}$ that a given set is a subset of another set.
\subsection{Example}
A simplistic example of the logic outlined above is demonstrated hereafter. If: $$ A = \{1, 3, 5\} $$
$$ B = \{1, 5\} $$ then \textbf{\textit{B}} is a subset, or \textbf{\textit{`proper subset', of A.}}
It is also important to note that if: $$ B = \{1, 3, 5\} $$ then \textbf{\textit{B} would be a subset, but not a proper subset, of \textit{A}, as \textit{B = A}} in this case. Also, if $$ B = \{1, 4\} $$ then \textbf{\textit{B} would not be a subset of \textit{A}}, as every element of $B$ must simultaneously be part of $A$ for the subset to exist.
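These relations can be checked directly; the snippet below is a toy Python illustration, not part of the Zerocaf library itself.
\begin{verbatim}
# Toy check of the subset relations above.
A = {1, 3, 5}

print({1, 5} < A)       # True:  {1, 5} is a proper subset of A
print({1, 3, 5} <= A)   # True:  equal sets are subsets ...
print({1, 3, 5} < A)    # False: ... but not proper subsets
print({1, 4} <= A)      # False: 4 is not an element of A
\end{verbatim}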
\subsection{Advantages}
\begin{itemize}
\item The advantage of using subsets is that they have varying mathematical properties, the one which is most pertinent to us is the proof that a subset exists inside of a set.
\item From this, operations can be performed to that particular subset which can be used to show properties and create proofs of the larger subset without the extra expense as the whole set is not being used.
\end{itemize}
A full comprehension of this subset rule is very helpful, as well as largely applicable to the defined curve. \\
For the current set inclusion use case, since the set elements are public keys and the input is a private key, a \texttt{ScalarBaseMult} operation ($P = x \cdot G$) is needed.
\section{Bulletproofs and Rank 1 Constraint Circuits}
Bulletproofs $[1]$ are short non-interactive zero-knowledge proofs $[2]$. For example, Bulletproofs can be used to prove that an encrypted number is in a given range without leaking any information about the number. Compared to SNARKs $[2]$, Bulletproofs require no trusted setup, which further reduces the risk of a malicious setup. However, Bulletproof verification is computationally more intensive than SNARK proof verification. In terms of computational cost, Bulletproofs scale linearly in the size of the arithmetic circuit.\\
Bulletproofs are designed to enable efficient confidential transactions in Bitcoin and other cryptocurrencies. Every confidential transaction contains a cryptographic proof which proves the validity of the spending transaction. Bulletproofs shrink the size of these cryptographic proofs from over 10kB to less than 1kB. To prevent overflows, every confidential transaction must carry a proof that all amounts are positive and smaller than a threshold. Such range proofs are much smaller with Bulletproofs, which also allow the range proofs of $m$ transactions to be combined into a single proof. \\
Bulletproofs have many other applications in cryptographic protocols, such as shortening proofs of solvency, short verifiable shuffles, confidential smart contracts, and as a general drop-in replacement for Sigma-protocols. \\
Bulletproofs are an optimization of the \emph{Efficient Zero-Knowledge Arguments for
Arithmetic Circuits in the Discrete Log Setting} paper. The aforementioned paper introduced an inner-product argument, illustrated by the following diagram.
\begin{center}
\includegraphics[width=8.5cm]{images/circ.png}
\end{center}
The constraint system has the following format (a toy numeric instance is sketched after the list):
\begin{itemize}
\item A vector of $n$ multiplications that gives $3 \cdot n$ low-level variables: left, right and output.
\item A vector of $q$ linear constraints between these variables.
\item An additional $m$ \emph{high-level variables} that represent external facts.
\end{itemize}
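The following toy instance (an assumed encoding for illustration, not the Bulletproofs API) shows the shape of such a system with $n = 1$ multiplication and $q = 1$ linear constraint.
\begin{verbatim}
# Toy constraint system: one multiplication gate and one linear constraint.
left, right, output = 3, 4, 12

# The multiplication gate contributes three low-level variables and must
# satisfy left * right = output.
assert left * right == output

# A linear constraint is a weighted sum over the variables that must equal
# a constant; here it encodes left + right = 7.
weights = [1, 1, 0]
variables = [left, right, output]
assert sum(w * v for w, v in zip(weights, variables)) == 7
\end{verbatim}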
Although the Bulletproofs implementation provides a solid means of creating fast proofs, the prior choice of curve is important to ensure that binary decomposition is not needed within the circuit for reduction. This reduction is avoided because the curve is defined over the Ristretto scalar field.
\section{The Ristretto Scalar Field}
Ristretto $[4]$ is a technique that constructs prime-order elliptic curve groups from non-prime-order elliptic curves. Ristretto builds upon the Decaf paper $[5]$, where prime-order groups are created from curves with co-factor 4; Ristretto, in turn, is applicable to Edwards curve groups which have a co-factor of 4 or 8. Edwards curves have a point of order 4, which means that points on the curve are not of prime order and instead have a small co-factor. By using the Ristretto technique, the abstraction problem is solved for all potential co-factor related issues with a single protocol. For the use of the Ristretto scalar field in this implementation, any chosen curve needs to be defined over the Ristretto scalar field for the prime-order group ristretto255. This Ristretto scalar field provides a prime-order group of size approximately $2^{252}$ $[4]$ by encoding group elements. The ristretto255 group will be implemented using points from the curve defined in the next section. This protocol compresses the co-factor of a curve, with the rationale of avoiding the drawbacks that come with a co-factor while capitalizing on the robustness of an otherwise solid curve.\\ If a curve is given in standard elliptic curve form, defined as: \\\\
\begin{itemize}
\item $y^2 = x^3 + Ax + B$\\\\
then\\
\item Let $G$ be a subgroup of the curve of prime order, with this order denoted as $q$
\item A co-factor, denoted by $h$, exists such that the order of the curve is $h \cdot q$ for the large prime $q$
\end{itemize}
\hfill \break
There are various advantages and disadvantages to having a co-factor larger than one, so a thorough analysis must be performed to determine whether co-factor manipulation is needed. For all such curves, except Hessian curves, the co-factor is divisible by 4. To be useful to a broad spectrum of cryptography, Ristretto is apt for a large number of curves which have a co-factor of 8 or 4. When the co-factor is greater than 1, multiple operations can be hindered. In the case of set inclusion, having a co-factor larger than one hinders the curve operations, specifically the scalar base operations. For the needed subset proofs, the goal is obstructed when the co-factor is not compressed, which leads to non-injective behaviour between the groups. Non-injective functions in set mappings - where a mapping describes whether an element exists in another set or not - affect the operations used in proving that subsets exist within sets. \\\\
For elliptic curves, scalar multiplication is a 1-to-1 mapping only if the group order is prime. Only in a prime-order group is a random scalar for the operation valid, and it must be in the range 1 to $q - 1$. In a non-prime-order group, adding a small-order element can lead to a small subgroup confinement attack $[6]$, which makes it possible to obtain the same result from different inputs. When implemented, Ristretto acts as a thin layer which provides a protocol to construct a prime-order group. \\\\
To embed a curve into this prime field, we use the definition that an embedded curve $L$ is a curve whose base field is defined by the scalar field of another curve $M$. In this case, the Doppio curve, which will be introduced shortly, has a base field equal to the scalar field defined by ristretto255. To visualise how this protocol is performed: when the curve is embedded into the Ristretto scalar field, two arbitrary Edwards points, $P$ and $Q$, may be represented by equivalent Ristretto points in the Ristretto scalar field. This happens because the Edwards curve is defined over said field. This method of creating equivalent points is not dissimilar to how $X$, $Y$ and $Z$ projective coordinates can represent the same $P$ and $Q$ Edwards points for a given Edwards curve. The elements of the created prime-order group, ristretto255, are not curve points; they are simply represented by curve points. It must be noted that this prime-order group is not a subgroup of the curve and that there is an unequivocal distinction between curve points and group elements.
\section{Equations}
\subsection{Twisted Edwards and Montgomery Forms}
In order for a selected elliptic curve to align with the goals defined in this paper, it needs to be both twist secure and Ristretto-ready. The Doppio curve has been chosen for the reasons highlighted above. \\\\
It is defined as follows: \\\\
\noindent\fbox{%
\parbox{\textwidth}{%
\begin{itemize}
\item Curve equation $$ -x^2+y^2=1-\frac{86649}{86650}x^2y^2 $$ Which is Twisted Edwards and used to implement Ristretto255.
\item $a= -1$
\item $d= \frac{86649}{86650}$
\item $Basepoint: Y = \frac{8}{9}$\\
\item Montgomery form equivalent: $$ y^2=x^3+346598x^2+x $$
\item $A = 346598 $
\item $Basepoint: X = 17$\\
\item The number of points on the curve, G, is $$ 2^{252}-121160309657751286123858757838224683208 $$
\item The prime order of the subgroup, q, is $$ 2^{249}-15145038707218910765482344729778085401 $$ \item The prime order of the Ristretto scalar field, l, is $$ 2^{252} + 27742317777372353535851937790883648493 $$
\item $Cofactor: h =\frac{G}{q} = 8$
\end{itemize}
}%
}
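As a cross-check of the stated parameters, the following sketch uses Python's arbitrary-precision integers (rather than the library's Rust field arithmetic). It reads $d = -86649/86650$ off the curve equation above (the parameter list gives only the magnitude) and uses the standard birational map $A = 2(a+d)/(a-d)$ between the twisted Edwards and Montgomery forms.
\begin{verbatim}
# Sanity checks on the stated Doppio parameters (illustrative sketch,
# requires Python 3.8+ for modular inverses via pow(x, -1, m)).
l = 2**252 + 27742317777372353535851937790883648493   # Ristretto scalar field
G = 2**252 - 121160309657751286123858757838224683208  # number of curve points
q = 2**249 - 15145038707218910765482344729778085401   # prime subgroup order

assert G == 8 * q          # co-factor h = G / q = 8

a = l - 1                                  # a = -1 in GF(l)
d = (-86649 * pow(86650, -1, l)) % l       # d = -86649/86650 in GF(l)
A = 2 * (a + d) * pow(a - d, -1, l) % l    # twisted Edwards -> Montgomery
assert A == 346598
\end{verbatim}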
\subsection{Weierstrass Form}
\noindent\fbox{%
\parbox{\textwidth}{%
\begin{itemize}
\item Weierstrass form equivalent: \\ $$y^2=x^3+ax+b $$
\item $a$ = 2412335192444087404657728854347664746952372119793302535333\\983646055108025796
\item $b$ = 1340186218024493002587627141304258192751317844329612519629\\993998710484804961\\
\end{itemize}
}%
}\\\\
The computation of the Weierstrass form is provided to demonstrate point addition in the simplest possible form, as this underlies all of the current elliptic curve operations. These initial operations on the field elements are inlined to ensure the most efficient computation possible.\\\\
To better contextualise this curve to a use case within the Dusk Network, the bidding process can be used, as it connects several of the sections in this paper. The bidding process uses the arithmetic of the curve to perform operations, and applies the set inclusion principles to the properties of the bid. It is first necessary to show that a bid lies in the list of valid bids, i.e. is a member of the set of all valid bids. This is done by set membership, either by checking whether the element is part of the total set or by showing that the check is linear in N, where N is the size of the group. Then the necessary requirements for the bid are proven, which means making sure it hashes to the correct values. Following this, the bid is added to a vector of valid bids. A binary vector, which is a vector that compactly stores bits, is then created; this vector must be the same length as the vector of valid bids plus the created bid. In this binary format, a one indicates the position of your bid, and zeros indicate the other bids.
\section{Field Elements}
For curve arithmetic to be performed, it is imperative to have a solid implementation. This provides a basis on which the most primary operations can be carried out; the crucial nature of these operations stems from the ability to perform multiple cryptographic functions from only a few fundamental operations. \\\\
It is standard when implementing curves from their field elements that point addition is the first function to be defined, as it is the foundation on which the rest of the operations stand. Point addition is simply adding points to one another along the elliptic curve.\\
The points, which can be given by $x$ and $y$ in Cartesian form, lie upon the elliptic curve and are all multiples of the generator point. Setting aside for a moment the prime field over which the curve is defined allows clearer mental imagery of how point addition works. The image below depicts point addition on a standard elliptic curve. The generator point, denoted $G$, is the point from which the addition is begun until the next point is reached. This is done by taking the tangent at the generator point, finding where it intersects the curve again, and reflecting that intersection across the x-axis, because of the curve's mirror symmetry properties $[7]$; the reflected point is the next point. The image below provides the reader with a visual understanding of how point addition can be performed:
\begin{center}
\includegraphics[width=8.5 cm]{images/ygncy.png}
\end{center}
Point addition varies from curve to curve, and optimizations are continually performed while the field elements are created. The main rationale behind the need for optimization is to keep the operations constant time. The field elements are represented in bit terms, commonly converted to \texttt{u64} arrays. Unfortunately, the aforementioned formatting can lead to problems with the arithmetic in programming. These issues often centre around over-spill, which occurs when making computations that carry bits. Such issues arise when using 32-byte arrays in addition, which impacts overall performance as the operation leaves remainders due to the bit-carrying.\\\\
In order to avoid the issues mentioned above, radix representations of the field elements are utilized in order to avoid this bit-carrying as well as to eliminate any potential overflows created during addition, which makes the implementation more efficient. Every \textit{field element} has to be represented as an array of five \texttt{u64}'s (in a concrete radix representation), which enables the computation of the product in the form \texttt{u64} $\cdot$ \texttt{u64} = \texttt{u128}\footnote{Please note that the Zerocaf implementation is taking advantage of the Rust Programming Language support for 128-unsigned integer operations.}.
\\\\
To achieve this, the chosen radix is $2^{52}$, which is optimal for dealing with over-spill. Another issue arising from the use of bit terms is the computational speed of the field arithmetic operations. \\\\
In this context, the most expensive CPU operation is integer division. In order to avoid this operation, the implementation of all the curve arithmetic is combined with bit-shifting techniques $[8]$. Bit-shifting simply moves a series of bits to the left or right to achieve greater efficiency in a mathematical operation. When dealing with radices, there is always a need to add an integer so that another modulus can be achieved; this integer is what is used for bit-shifting. The selection of this integer is a simple arithmetic operation on the defined prime of the field.
If we let $x$ be the remainder of the prime field, as shown below:
$$ l = 2^{252}+x $$
The value of the integer $x$ can be derived:
$$ l \equiv 0 \Mod{l} $$
$$ l = 2^{252}+x $$
$$ 2^{252}+x \equiv 0 \Mod{l} $$
$$ 2^{252} \equiv -x \Mod{l} $$
The integer $x$ is then used in the calculations for radix $2^{52}$, so that a different modulus can be achieved. \\\\
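As an illustration of the radix-$2^{52}$ idea (a sketch in Python for readability, not the library's Rust code), a field element can be split into five 52-bit limbs so that limb sums and carries are handled with shifts and masks only:
\begin{verbatim}
# Sketch: radix 2^52 limbs with shift/mask carries only (no division).
MASK = (1 << 52) - 1

def to_limbs(n):
    return [(n >> (52 * i)) & MASK for i in range(5)]   # 5 x 52 = 260 bits

def from_limbs(limbs):
    return sum(l << (52 * i) for i, l in enumerate(limbs))

def add(x, y):
    out, carry = [], 0
    for a, b in zip(x, y):
        s = a + b + carry
        out.append(s & MASK)   # low 52 bits stay in the limb
        carry = s >> 52        # the over-spill moves to the next limb
    return out                 # a real library would now reduce mod l

u, v = 2**200 + 12345, 2**199 + 67890
assert from_limbs(add(to_limbs(u), to_limbs(v))) == u + v
\end{verbatim}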
From this point addition, many of the further operations become elementary, as they all work with the manipulation of points in some mathematical relation.
\newpage
\begin{thebibliography}{99}
\bibitem{c1} Benedikt Bünz, Jonathan Bootle, Dan Boneh, Andrew Poelstra, Pieter Wuille and Greg Maxwell (Stanford University, University College London and Blockstream). Bulletproofs: Short Proofs for Confidential Transactions and More.\\ https://eprint.iacr.org/2017/1066.pdf
\bibitem{c2} Shafi Goldwasser, Silvio Micali, and Charles Rackoff. The knowledge complexity of interactive
proof-systems (extended abstract). In 17th Annual ACM Symposium on Theory of Computing
(STOC'85), pages 291--304, 1985.
\bibitem{c3} Pedersen T.P. (1992) Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing. In: Feigenbaum J. (eds) Advances in Cryptology — CRYPTO ’91. CRYPTO 1991. Lecture Notes in Computer Science, vol 576. Springer, Berlin, Heidelberg
\bibitem{c4} Isis Lovecruft and Henry de Valence. Ristretto. https://Ristretto.group/Ristretto.html
\bibitem{c5} Mike Hamburg: Decaf. November 2015. https://eprint.iacr.org/2015/673.pdf
\bibitem{c6} Feng Hao, Thales E-Security, Cambridge, UK https://eprint.iacr.org/2010/149.pdf
\bibitem{c7} Robert Dijkgraaf: Mirror Symmetry and Elliptic Curves, University of Amsterdam, November 15, 2002
\bibitem{c8} Visvesvaraya Technological University, Jnana Sangama. https://www.academia.edu/8777556/
\end{thebibliography}
\end{document}
| {
"alphanum_fraction": 0.7819608622,
"avg_line_length": 120.5628742515,
"ext": "tex",
"hexsha": "4eac76ab8320e0dec7b6ad746f6ad6f3f7e02acf",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cb56cba2d2e5811e171bf7eaa061919b13d24465",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "CPerezz/Corretto",
"max_forks_repo_path": "docs/main.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "cb56cba2d2e5811e171bf7eaa061919b13d24465",
"max_issues_repo_issues_event_max_datetime": "2019-07-10T21:11:53.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-07-10T21:11:53.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "CPerezz/Corretto",
"max_issues_repo_path": "docs/main.tex",
"max_line_length": 1570,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "cb56cba2d2e5811e171bf7eaa061919b13d24465",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "CPerezz/Corretto",
"max_stars_repo_path": "docs/main.tex",
"max_stars_repo_stars_event_max_datetime": "2020-02-15T14:50:28.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-02-15T13:28:03.000Z",
"num_tokens": 4793,
"size": 20134
} |
\documentclass{article}
\usepackage[intlimits]{amsmath}
\usepackage{amssymb}
\usepackage{amsfonts,amstext,amsthm}
\usepackage{paralist} % For {inparaenum} environment.
\usepackage{mathtools} % For dcases.
\usepackage[normalem]{ulem} % For strikethrough.
\usepackage[usenames,dvipsnames]{xcolor} % For named colors.
\usepackage{hyperref} % For clickable refs.
\usepackage{subcaption} % Allows to include subfigures, floats, etc.
%\usepackage{enumerate}
\usepackage{MnSymbol}
\usepackage[authoryear,sort]{natbib}
\hypersetup{
unicode=false, % non-Latin characters in Acrobat’s bookmarks
pdftoolbar=true, % show Acrobat’s toolbar?
pdfmenubar=true, % show Acrobat’s menu?
pdffitwindow=false, % window fit to page when opened
pdfstartview={FitH}, % fits the width of the page to the window
pdfnewwindow=true, % links in new window
colorlinks=true, % false: boxed links; true: colored links
linkcolor=red, % color of internal links (change box color with linkbordercolor)
citecolor=BrickRed, % color of links to bibliography
filecolor=BurntOrange, % color of file links
urlcolor=blue % color of external links
} % \hypersetup
\renewcommand{\geq}{\geqslant}
\renewcommand{\leq}{\leqslant}
\usepackage[margin=2.5cm]{geometry}
\usepackage{graphicx}
\graphicspath{{./gfx/}}
\DeclareGraphicsExtensions{.eps}
\usepackage{epstopdf}
\usepackage{tikz}
\usetikzlibrary{arrows.meta}
\usetikzlibrary{calc}
\usetikzlibrary{patterns}
\renewcommand{\Pr}{\mathsf{P}}
%\usepackage[nott]{inconsolata} % for C++ code
% C++ styles.
%\newcommand{\cppfont}{\fontfamily{fi4}\selectfont}
%\newcommand{\cppkey}[1]{{\color{blue}#1}}
%\newcommand{\cppclass}[1]{{\color{mint}#1}}
%\newcommand{\cppparam}[1]{{\color{gray}#1}}
% Math
\newcommand{\slfrac}[2]{\left. #1 \middle/ #2 \right.}
% ~~ Theorems and such ~~
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{corollary}{Corollary}
\newtheorem{proposition}{Proposition}
\theoremstyle{definition} %% or \theoremstyle{remark} will produce roman text
\newtheorem{definition}{Definition}
\newtheorem{remark}{Remark}
\newtheorem{assumption}{Assumption}
\newtheorem*{notation}{Notation}
\newtheorem{AUXalgorithm}{Algorithm}
\newenvironment{algorithm}[1]
{\renewcommand\theAUXalgorithm{#1}\AUXalgorithm\begin{enumerate}\item[]}
{\end{enumerate}\endAUXalgorithm}
% ~~ Math operators ~~
\DeclareMathOperator{\EV}{\mathbf{E}} % expected value
\DeclareMathOperator{\Var}{\mathbf{Var}} % variance
\DeclareMathOperator{\Cov}{\mathbf{Cov}} % covariance
\DeclareMathOperator{\Corr}{\mathbf{Corr}} % correlation
\DeclareMathOperator{\SD}{\mathbf{s.d.}} % standard deviation
% ~~ Distributions ~~
\DeclareMathOperator{\DBernoulli}{\mathrm{Bernoulli}} % Bernoulli distribution.
\DeclareMathOperator{\DNorm}{\mathcal{N}} % normal distribution
\DeclareMathOperator{\DLogNorm}{\ln \DNorm } % normal distribution
\DeclareMathOperator{\DExp}{\mathrm{Exp}} % exponential distribution
\DeclareMathOperator{\DPareto}{\mathrm{Pareto}} % exponential distribution
\DeclareMathOperator{\DBeta}{\mathrm{Be}} % exponential distribution
\DeclareMathOperator{\DUnif}{\mathrm{Uniform}} % exponential distribution
\newcommand{\scbeta}[1]{\DBeta_{\left(0, #1\right)}} % scaled beta distribution
% ~~ Misc ~~
\newcommand{\ignore}[1]{}
\newcommand{\nolabel}[1]{}
\newcommand{\cache}[1]{\fbox{$#1$}}
\newcommand{\ceil}[1]{\lceil{#1}\rceil}
\newcommand{\floor}[1]{\lfloor{#1}\rfloor}
\newcommand{\round}[1]{\lsem{#1}\rsem}
\newcommand{\Mode}{\theta}
\newcommand{\Reals}{\mathbb{R}} % Real numbers.
\newcommand{\Integers}{\mathbb{Z}} % Integer numbers.
% ~~ Set notation ~~
\newcommand{\set}[1]{\left\{ {#1} \right\}} % Set with automatically scaled parentheses: {...}.
\newcommand{\xset}[2][]{#1\{ {#2} #1\}} % Set with custom scaled parentheses: {...}.
\newcommand{\cset}[3][\middle |] % Set with condition: {... | ...}.
{\left\{ {#2} \, #1 \, {#3} \right\}}
\begin{document}
\begin{notation}
In this paper we will write $\floor{\cdot }$ for the floor function, and $\round{x} = \floor{x + 0.5}$ for the rounding function.
The ``boxed'' \fbox{quantities} in the algorithms can be pre-computed to speed up calculations.
\end{notation}
\begin{assumption}
In all of the algorithms we assume that all integers live in $\Integers / (M + 1)$, where $M$ is some positive integer.
\end{assumption}
\section{Bernoulli and Discrete Uniform distributions}
Before we proceed to more complicated algorithms, a couple of words on generating events with a prescribed probability $p$, $0 \leq p \leq 1$. Suppose $X \sim \DUnif \set{0, 1, \cdots , M}$. Note that each value of $X$ occurs with probability $1 / (M + 1)$. Rounding $p$ to the nearest multiple of $1 / (M + 1)$ would be ideal, with the incurred error being upper-bounded (sharply) by $0.5 / (M + 1)$:
\begin{center}
\begin{tikzpicture}[scale=1.5,
axis/.style={->, >=Latex},
tria/.style={pattern=north east lines, pattern color=gray,dotted},
axpt/.style={color=black, fill=black},
bxpt/.style={color=black},
cxpt/.style={color=black, fill=white}]
% ~~ Vertical alignment ~~
\pgfmathsetmacro{\yTop}{1.5}
\pgfmathsetmacro{\yBot}{0.5}
% ~~ Horizontal alignment ~~
\pgfmathsetmacro{\dx}{1.0}
\pgfmathsetmacro{\xFirst}{1.5}
\pgfmathsetmacro{\xLast}{6.3}
% ~~ Axes ~~
\draw[axis] (\xFirst - 0.5*\dx, \yTop) -- (\xLast + \dx, \yTop) node[right] {$\ensuremath{p}$};
\draw[axis] (\xFirst - 0.5*\dx, \yBot) -- (\xLast + \dx, \yBot) node[right] {Approximated $\ensuremath{p}$};
    % ~~ Triangles (in-between values) ~~
\draw[tria] (\xFirst, \yTop) -- (\xFirst + 0.5*\dx, \yTop) -- (\xFirst, \yBot) -- cycle;
\draw[tria] (\xFirst + 0.5*\dx, \yTop) -- (\xFirst + 1.5*\dx, \yTop) -- (\xFirst + \dx, \yBot) -- cycle;
\draw[tria] (\xLast - 0.5*\dx, \yTop) -- (\xLast - 1.5*\dx, \yTop) -- (\xLast - \dx, \yBot) -- cycle;
\draw[tria] (\xLast, \yTop) -- (\xLast - 0.5*\dx, \yTop) -- (\xLast, \yBot) -- cycle;
% ~~ Top labels ~~
\draw[axpt] (\xFirst, \yTop) circle (1.0pt) node[above, yshift=1.0ex] {$\ensuremath{0}$};
\draw[cxpt] (\xFirst + 0.5*\dx, \yTop) circle (1.0pt) node[above, yshift=0.5ex] {$\ensuremath{\frac{0.5}{M + 1}}$};
%\draw[axpt] (\xFirst + \dx, \yTop) circle (1.0pt) node[above, yshift=0.5ex] {$\ensuremath{\frac{1}{M + 1}}$};
\draw (0.5*\xFirst + 0.5*\xLast, \yTop) node[above, yshift=1.0ex] {$\ensuremath{\cdots }$};
%\draw[axpt] (\xLast - \dx, \yTop) circle (1.0pt) node[above, yshift=0.5ex] {$\ensuremath{\frac{M}{M + 1}}$};
\draw[cxpt] (\xLast - 0.5*\dx, \yTop) circle (1.0pt) node[above, yshift=0.5ex] {$\ensuremath{\frac{M + 0.5}{M + 1}}$};
\draw[axpt] (\xLast, \yTop) circle (1.0pt) node[above, yshift=1.0ex] {$\ensuremath{1}$};
% ~~ Bottom values ~~
\draw[axpt] (\xFirst, \yBot) circle (1.0pt) node[below, yshift=-1.0ex] {$\ensuremath{0}$};
\draw[axpt] (\xFirst + \dx, \yBot) circle (1.0pt) node[below, yshift=-0.5ex] {$\ensuremath{\frac{1}{M + 1}}$};
\draw[axpt] (0.5*\xFirst + 0.5*\xLast, \yBot) node[below, yshift=-1.5ex] {$\ensuremath{\cdots }$};
\draw[axpt] (\xLast - \dx, \yBot) circle (1.0pt) node[below, yshift=-0.5ex] {$\ensuremath{\frac{M}{M + 1}}$};
\draw[axpt] (\xLast, \yBot) circle (1.0pt) node[below, yshift=-1.0ex] {$\ensuremath{1}$};
\end{tikzpicture}
\end{center}
%
Unfortunately, this mapping yields $M + 2$ possible values, leading to technical complications, since we are restricted to $\Integers / (M + 1)$.
%
To address the issue, we round $p$ to the nearest multiple of $1 / M$ instead.
%
\begin{notation}
For any $p$, $0 \leq p \leq 1$, we will write
\[
p ^\star = \begin{dcases}
\, \round{M \, p} \quad & \text{if $0 \leq p \leq 0.5$}, \\
\, M - \round{M (1 - p)} \quad & \text{if $0.5 < p \leq 1$}.
\end{dcases}
\]
\end{notation}
%
Events with probability $p$ may be approximated with either $\set{X < p^\star }$ or $\set{X \leq p^\star }$.
%
\begin{center}
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|c|c||c|c|}
\hline
Target probability & $p^\star $ & $\Pr (X < p^\star )$ & $\Pr (X \leq p^\star )$ \\ \hline\hline
\hspace{2em} $0 \leq p < 0.5 / M$ & $0$ & $0$ & $1 / (M + 1)$ \\ \hline
$0.5 / M \leq p < 1.5 / M$ & $1$ & $1 / (M + 1)$ & $2 / (M + 1)$ \\ \hline
\multicolumn{4}{c}{$\cdots $} \\ \hline
$(M - 1.5) / M < p \leq (M - 0.5) / M$ & $M - 1$ & $(M - 1) / (M + 1)$ & $M / (M + 1)$ \\ \hline
$(M - 0.5) / M < p \leq 1$ \hspace{5em} & $M$ & $M / (M + 1)$ & $1$ \\ \hline
\end{tabular}
\end{center}
%
It is not hard to see that this will lead to errors not exceeding $1.5 / M$.
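For instance, with $M = 7$ and $p = 0.3$ we get $p^\star = \round{2.1} = 2$, so $\Pr (X < p^\star ) = 2/8 = 0.25$, incurring an error of $0.05 < 1.5 / M \approx 0.214$.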
\begin{algorithm}{(Bernoulli)}
    \item Let $p$, $0 \leq p \leq 1$, be the parameter of the Bernoulli distribution. If $p = 1$, set $\cache{S} = 0$ and $\cache{H} = 1$.
\item If $p < 1$, set $\cache{S} = 1$ and $\cache{H} = p^\star $.
\item Generate $\hat{U} \sim \DUnif \set{0, 1, \cdots , M}$. If $\cache{S} \cdot \hat{U} < \cache{H}$ return \emph{true}, otherwise return \emph{false}.
\end{algorithm}
%
The algorithm above therefore reproduces the target probability up to an error of order $1 / M$.
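For concreteness, a direct transcription in Python may look as follows (the function names and the use of Python's \texttt{random} module are illustrative only):
\begin{verbatim}
import math, random

def p_star(p, M):
    # p^*: round p to the nearest multiple of 1/M; the split at 0.5
    # mirrors the two-branch definition in the Notation above
    if p <= 0.5:
        return int(math.floor(M * p + 0.5))
    return M - int(math.floor(M * (1.0 - p) + 0.5))

def bernoulli(p, M, rng):
    # the "boxed" quantities S and H are computed once per distribution
    S, H = (0, 1) if p == 1.0 else (1, p_star(p, M))
    u = rng.randrange(M + 1)        # U^ ~ Uniform{0, 1, ..., M}
    return S * u < H

# e.g. bernoulli(0.3, 2**31 - 1, random.Random(42))
\end{verbatim}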
%
Now that we have a $\DBernoulli (p)$ generator, we will proceed to generation of $\DUnif \set{a, a + 1, \cdots , b}$, where $a \leq b$ are integers. Unlike the algorithm above, this generation can be done precisely with a rejection method. To this end, we will partition the set $\set{0, 1, \cdots , M}$ into blocks of size $(b - a + 1)$ with possibly one block of a smaller size. If the smaller block is selected---which happens with probability $((M + 1) \mod (b - a + 1)) / (M + 1)$---we start over.
\begin{algorithm}{(Discrete Uniform)}
    \item Let $a \leq b$ be the parameters of the uniform distribution, and set $\cache{N} = b - a$.
\item If $\cache{N} = 0$, set $\cache{B} = M$.
\item If $\cache{N} \neq 0$, set $\cache{B} = 1 + \floor{(M - \cache{N}) / (\cache{N} + 1)}$.
    \item \label{item:uniform:gen} Generate $\hat{U} \sim \DUnif \set{0, 1, \cdots , M}$, and set $K = \floor{\hat{U} / \,\cache{B}}$ to be the block index where $\hat{U}$ landed. Note that $K \mid \set{K \leq \cache{N}}$ is conditionally $\DUnif \set{0, 1, \cdots , \cache{N}}$.
\item If $K \leq \cache{N}$, accept $a + K$.
\item If $K > \cache{N}$, return to step \ref{item:uniform:gen}.
\end{algorithm}
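A corresponding Python sketch (reusing the same illustrative conventions as above):
\begin{verbatim}
def discrete_uniform(a, b, M, rng):
    # the "boxed" quantities: N and the block size B
    N = b - a
    B = M if N == 0 else 1 + (M - N) // (N + 1)
    while True:
        u = rng.randrange(M + 1)    # U^ ~ Uniform{0, 1, ..., M}
        K = u // B                  # index of the block where U^ landed
        if K <= N:
            return a + K            # the undersized block is rejected
\end{verbatim}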
\section{Discrete distributions with finite support}
\subsection{Overview}
In this section we will consider the so-called ``alias'' algorithm for generating random variates from discrete distributions with finite support, based on the paper by \citet{Vose:91}.
\subsection{Formal Setup}
Suppose we are dealing with a random variable $X$ with a discrete distribution with finite support over $n \geq 1$ values:
\[
\Pr (X = a_i) = p_i, \qquad 1 \leq i \leq n,
\]
with $p_1 + \cdots + p_n = 1$ and $p_i > 0$, $1 \leq i \leq n$. The algorithm is two-stage: at the first stage we select one of the elements with probability $(1/n)$; at the second stage we either take the chosen item, or its alias.
\begin{center}
\begin{tikzpicture}
\pgfmathsetmacro{\xSep}{3.0}
\pgfmathsetmacro{\xxSep}{1.3}
\pgfmathsetmacro{\ySep}{1.8}
\pgfmathsetmacro{\yySep}{2.5}
\coordinate (root) at (0, 0);
% ~~ Top tier ~~
\node[draw] (kk) at (-\xSep, -\ySep) {$\ensuremath{a_1}$};
\draw (0, -\ySep) node {$\ensuremath{\cdots }$};
\node[draw] (ll) at (\xSep, -\ySep) {$\ensuremath{a_n}$};
% ~~ Bottom tier ~~
\node[draw] (mm_kk) at (-\xSep-\xxSep, -\ySep-\yySep) {$\ensuremath{a_1}$};
\node[draw] (nn_kk) at (-\xSep+\xxSep, -\ySep-\yySep) {$\ensuremath{\text{alias}_1}$};
\draw (0, -\ySep-\yySep) node {$\ensuremath{\cdots }$};
\node[draw] (mm_ll) at (\xSep-\xxSep, -\ySep-\yySep) {$\ensuremath{a_n}$};
\node[draw] (nn_ll) at (\xSep+\xxSep, -\ySep-\yySep) {$\ensuremath{\text{alias}_n}$};
% ~~ Edges ~~
\draw (root) -- (kk) node[midway,rotate=36,yshift=1.5ex] {$\ensuremath{1/n}$};
\draw (root) -- (ll) node[midway,rotate=-30,yshift=1.5ex] {$\ensuremath{1/n}$};
\draw (kk) -- (mm_kk) node[midway,rotate=63,yshift=1.5ex] {$\ensuremath{\pi _1}$};
\draw (kk) -- (nn_kk) node[midway,rotate=-59,yshift=1.5ex] {$\ensuremath{1 - \pi _1}$};
\draw (ll) -- (mm_ll) node[midway,rotate=63,yshift=1.5ex] {$\ensuremath{\pi _n}$};
\draw (ll) -- (nn_ll) node[midway,rotate=-59,yshift=1.5ex] {$\ensuremath{1 - \pi _n}$};
\end{tikzpicture}
\end{center}
The idea behind setting up the probabilities correctly is an iterative one, where at each step we eliminate one branch. More specifically, select the $j$-th branch such that $p _j \leq 1/n$. Then setting $\pi _j = n \, p_j$ and ensuring that the $j$-th element will not appear as an alias at future steps will take care of the left sub-branch. Choosing the $j$-th alias to be any element, say the $k$-th, with $p _k > 1/n$, and updating $p_k = p_k - (1 - \pi _j) / n$, will take care of the right sub-branch.
\subsection{Alias Algorithm}
\begin{algorithm}{(Continuous generators)}
\item Create an array of conditional probabilities, $\cache{\pi _i }$, $1 \leq i \leq n$, and ``aliases'', $\cache{b_i} = \cache{a_i}$, $1 \leq i \leq n$.
\item Categorize each probability as either ``small'' or ``large'':
\[
S = \cset{i}{p_i \leq 1/n}, \quad
L = \cset{i}{p_i > 1/n}.
\]
\item While $S \neq \varnothing $ and $L \neq \varnothing $:
\begin{enumerate}
\item Take $j \in S$ (index of a small element) and $k \in L$ (index of a large element).
\item Set $\cache{b_j} = \cache{a_k}$ (pair up the small element with the large element).
\item Set $\cache{\pi _j} = n p_j \leq 1$ (record the cutoff value for the small element).
\item Since this large element is now an ``alias'' for the small element, we will adjust the probability of it being selected in the future. Let $\delta _j = 1 / n - p_j \geq 0$ be the probability of element $k$ being selected in the $j$-th branch.
\item Overwrite $p_k = p_k - \delta _j$, making sure that once we reach $k$-th branch, its chances of being selected will be smaller.
\item If $p_k \leq 1 / n$, remove $k$ from $L$ and place it in $S$ (what used to be large is now small).
\item Remove $j$ from $S$.
\end{enumerate}
\item For each $j \in S$ set $\cache{\pi _j} = 1$; for each $k \in L$ set $\cache{\pi _k} = 1$. This will mitigate some rounding errors.
\item The steps above are the initialization stage and will have to be completed only once. The generation itself proceeds as follows:
\begin{enumerate}
\item Generate $K \sim \DUnif \set{1, \cdots , n}$.
\item Generate $U \sim \DUnif [0, 1)$.
\item If $U < \cache{\pi _K}$, accept $\cache{a_K}$.
\item If $U \geq \cache{\pi _K}$, accept $\cache{b_K}$.
\end{enumerate}
\end{algorithm}
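An illustrative Python transcription of both stages (zero-based indices; all names are ours):
\begin{verbatim}
def build_alias(values, probs):
    # initialization stage: cutoffs pi_i and aliases b_i
    n = len(probs)
    pi = [1.0] * n                    # leftover entries keep pi = 1
    alias = list(values)
    scaled = [n * p for p in probs]   # work with n * p_i throughout
    small = [i for i, s in enumerate(scaled) if s <= 1.0]
    large = [i for i, s in enumerate(scaled) if s > 1.0]
    while small and large:
        j, k = small.pop(), large[-1]
        alias[j] = values[k]
        pi[j] = scaled[j]             # pi_j = n p_j
        scaled[k] -= 1.0 - scaled[j]  # subtract n * delta_j
        if scaled[k] <= 1.0:
            small.append(large.pop())
    return pi, alias

def alias_sample(values, pi, alias, rng):
    K = rng.randrange(len(values))    # K ~ Uniform{0, ..., n - 1}
    U = rng.random()                  # U ~ Uniform[0, 1)
    return values[K] if U < pi[K] else alias[K]
\end{verbatim}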
\begin{algorithm}{(Discrete uniform integer generators)}
\item Create an array of cutoff values, $\cache{\hat{\pi }_i }$, $1 \leq i \leq n$, and ``aliases'', $\cache{b_i} = \cache{a_i}$, $1 \leq i \leq n$.
\item Categorize each probability as either ``small'' or ``large'':
\[
S = \cset{i}{n p_i \leq 1}, \quad
L = \cset{i}{n p_i > 1}.
\]
\item While $S \neq \varnothing $ and $L \neq \varnothing $:
\begin{enumerate}
\item Take $j \in S$ (index of a small element) and $k \in L$ (index of a large element).
\item Set $\cache{b_j} = \cache{a_k}$.
\item Set $\cache{\hat{\pi }_j} = (n p_j)^\star \leq M$. If $n p_j = 1$, then set $\cache{b_j} = \cache{a_j}$.
\item Set $\delta _j = 1 - n p_j \geq 0$.
\item Overwrite $n p_k = n p_k - \delta _j$.
\item If $n p_k \leq 1$, remove $k$ from $L$ and place it in $S$.
\item Remove $j$ from $S$.
\end{enumerate}
\item For each $j \in S$ set $\cache{b_j} = \cache{a_j}$; for each $k \in L$ set $\cache{b_k} = \cache{a_k}$.
\item The steps above are the initialization stage and will have to be completed only once. The generation itself proceeds as follows:
\begin{enumerate}
\item Generate $K \sim \DUnif \set{1, \cdots , n}$.
\item Generate $\hat{U} \sim \DUnif \set{0, \cdots , M}$.
\item If $\hat{U} < \cache{\hat{\pi }_K}$, accept $\cache{a_K}$.
\item If $\hat{U} \geq \cache{\hat{\pi }_K}$, accept $\cache{b_K}$.
\end{enumerate}
\end{algorithm}
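The integer variant admits a similar sketch; here \texttt{np\_star} implements $(n \, p_j)^\star $ with the rounding rule of the previous section (again, all names are illustrative):
\begin{verbatim}
import math

def np_star(q, M):
    # q = n * p_j lies in [0, 1]; round(x) = floor(x + 0.5)
    if q <= 0.5:
        return int(math.floor(M * q + 0.5))
    return M - int(math.floor(M * (1.0 - q) + 0.5))

def build_alias_int(values, probs, M):
    n = len(probs)
    alias = list(values)
    pi_hat = [M + 1] * n              # leftovers: alias == value, cutoff moot
    scaled = [n * p for p in probs]
    small = [i for i, s in enumerate(scaled) if s <= 1.0]
    large = [i for i, s in enumerate(scaled) if s > 1.0]
    while small and large:
        j, k = small.pop(), large[-1]
        alias[j] = values[j] if scaled[j] == 1.0 else values[k]
        pi_hat[j] = np_star(scaled[j], M)
        scaled[k] -= 1.0 - scaled[j]
        if scaled[k] <= 1.0:
            small.append(large.pop())
    return pi_hat, alias

def alias_sample_int(values, pi_hat, alias, M, rng):
    K = rng.randrange(len(values))
    u_hat = rng.randrange(M + 1)      # U^ ~ Uniform{0, ..., M}
    return values[K] if u_hat < pi_hat[K] else alias[K]
\end{verbatim}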
\section{Unimodal absolutely continuous distributions}
\subsection{Overview}
The aim of this section is two-fold: first, to give an overview of the Ziggurat algorithm of \citet{Marsaglia+Tsang} for unimodal absolutely continuous distributions; second, to provide one implementation of the algorithm. One of the beauties of the Ziggurat algorithm lies in the fact that while almost all the time it is a very efficient version of the classical rejection method, it can also handle densities with infinite support---assuming it is known how to sample from the tail (in the normal case see, e.g., \citet{Marsaglia:64}).
\subsection{Formal Setup}
In what follows, let $f$ denote the density function; $\Mode $ denote the mode of the distribution (argument where $f$ achieves its maximum); and $T$ denote its survival function:
\[
T(x) = \int _x ^{\infty} f(t) \,dt.
\]
We will start with the monotone decreasing density case; the monotone increasing case can be handled similarly.
The idea behind the Ziggurat algorithm is to cover the density function with $n \geq 2$ horizontal layers of the same area, where all the layers except for the bottom one are rectangular.
%
More formally, we start with a partition of the vertical interval
\[
0 = f_0 < f_1 < \cdots < f_{n - 1} < f_{n} = f(\Mode ).
\]
For $1 \leq k \leq n - 1$ let
\begin{align*}
a_k = \inf \big\{ x \,:\, f(x) > f_k \big\}, \quad
b_k = \sup \big\{ x \,:\, f(x) > f_k \big\};
\end{align*}
put $a_n = b_n = \Mode $, and define the layers by
\begin{align*}
\begin{dcases}
L_0 = \big\{ (x, y) \,:\, 0 \leq y \leq \min \{f_1, f(x)\} \big\}, \\
L_k = [a_k, b_k] \times [f_k, f_{k + 1}], \quad 1 \leq k \leq n - 1.
\end{dcases}
\end{align*}
This layering is illustrated in Figure~\ref{fig:ziggurat:3_layers}.
\begin{figure}[!ht]
\centering
\newcommand{\xfactor}{4.5}
\newcommand{\yfactor}{8.5}
\newcommand{\plotfun}[1]{\yfactor * #1 * exp(-#1 * #1)}
\newcommand{\plotcoord}[2]{(\xfactor*#1,{\plotfun{#2}})}
\newcommand{\plotMode}{0.7071}
\newcommand{\plotZero}{0}
\newcommand{\plotAone}{0.1536} % unscaled f_1 = 0.15
\newcommand{\plotAtwo}{0.2687} % unscaled f_2 = 0.25
\newcommand{\plotBtwo}{1.2770} % unscaled f_2 = 0.25
\newcommand{\plotBone}{1.5223} % unscaled f_1 = 0.15
\newcommand{\plotXfrom}{0.0}
\newcommand{\plotXto}{1.8}
\begin{tikzpicture}[scale=1.0, >=Latex,
every node/.style={anchor=base},
help lines/.style={dashed, thick},
axis/.style={<->},
important line/.style={thick},
connection/.style={thick, dotted}]
% ~~ Rectangular layers ~~
\draw[pattern=north west lines, pattern color=gray] \plotcoord{\plotAtwo}{\plotMode} rectangle \plotcoord{\plotBtwo}{\plotBtwo}
node[above right, yshift=4ex] {$\ensuremath{L_2}$};
\draw[pattern=north east lines, pattern color=gray] \plotcoord{\plotAone}{\plotBtwo} rectangle \plotcoord{\plotBone}{\plotBone}
node[above right, yshift=2ex] {$\ensuremath{L_1}$};
% ~~ Bottom layer ~~
\fill[pattern=dots, pattern color=gray] plot[domain=\plotXfrom:\plotXto] (\xfactor*\x,{min(\plotfun{\plotBone},\plotfun{\x})})--(\xfactor*\plotXto,0)--(\xfactor*\plotXfrom,0)--cycle;
\draw \plotcoord{\plotXto}{\plotXto} node[above right, yshift=-2ex] {$\ensuremath{L_0}$};
% ~~ Density ~~
\draw[important line] plot[smooth,domain=\plotXfrom:\plotXto] (\xfactor*\x,{\plotfun{\x}}) node[right] {};
\draw[connection]
\plotcoord{\plotAone}{\plotAone}--(\xfactor*\plotAone,0.0)
\plotcoord{\plotAtwo}{\plotAtwo}--(\xfactor*\plotAtwo,0.0)
\plotcoord{\plotMode}{\plotMode}--(\xfactor*\plotMode,0.0)
\plotcoord{\plotBone}{\plotBone}--(\xfactor*\plotBone,0.0)
\plotcoord{\plotBtwo}{\plotBtwo}--(\xfactor*\plotBtwo,0.0);
\draw (\xfactor*\plotAone,-0.4) node {$\ensuremath{a_1}$};
\draw (\xfactor*\plotAtwo,-0.4) node {$\ensuremath{a_2}$};
\draw (\xfactor*\plotMode,-0.4) node {$\ensuremath{a_3 = \Mode = b_3}$};
\draw (\xfactor*\plotBtwo,-0.4) node {$\ensuremath{b_2}$};
\draw (\xfactor*\plotBone,-0.4) node {$\ensuremath{b_1}$};
\draw[connection] \plotcoord{\plotAtwo}{\plotMode}--\plotcoord{\plotZero}{\plotMode} node[left] {$\ensuremath{f_3 = f(\Mode )}$};
\draw[connection] \plotcoord{\plotAone}{\plotBtwo}--\plotcoord{\plotZero}{\plotBtwo} node[left] {$\ensuremath{f_2}$};
\draw[connection] \plotcoord{\plotAone}{\plotBone}--\plotcoord{\plotZero}{\plotBone} node[left] {$\ensuremath{f_1}$};
\draw[connection] (\xfactor*\plotZero,0.0) node[left] {$\ensuremath{f_0 = 0}$};
% ~~ Axes ~~
\draw[->] (\xfactor*\plotZero,-0.2)--(\xfactor*\plotZero,4.3); % Vertical axis.
\draw[->] (-0.1+\xfactor*\plotXfrom,0.0)--(0.4+\xfactor*\plotXto,0.0); % Horizontal axis.
\end{tikzpicture}
\caption{Unimodal Ziggurat algorithm with $3$ layers.}
\label{fig:ziggurat:3_layers}
\end{figure}
%
We want to choose $f_1, \cdots , f_{n - 1}$ in such a way as to make sure that the area of each layer is the same:
\[
|L_0| = |L_1| = \cdots = |L_{n - 1}| = V.
\]
Thus, one arrives at a (non-linear) system of $n$ equations with $n$ unknowns:
\begin{align} \label{eq:zigg_system}
\begin{dcases}
V = (1 - T(a_1)) + (b_1 - a_1) \, f_1 + T(b_1) \\
V = (b_k - a_k) \, \big( f_{k + 1} - f_k \big), \quad 1 \leq k \leq n - 1.
\end{dcases}
\end{align}
\begin{proposition}
System \eqref{eq:zigg_system} has a unique solution.
\end{proposition}
Having set up the basics, let us review the principles of the Ziggurat algorithm.
\begin{itemize}
\item The first step is to select one layer (uniformly) at random.
\item The second step is to generate a (uniform) random point inside the selected layer. If the point happens to be under the graph of the density function, we accept it. Otherwise, we start over from the first step.
\end{itemize}
To facilitate these steps, note that every layer has a rectangular subset that lies entirely under the density curve. Specifically, all
\[
B_k = [a_{k + 1} , b_{k + 1}] \times [f_k, f_{k + 1}] \subseteq L_k, \quad 0 \leq k \leq n - 1,
\]
lie under the graph of $f$ (for the top layer the box $B_{n - 1}$ is trivial).
In light of this, the second step of the algorithm becomes substantially simpler if the point sampled from layer $L_k$ falls inside $B_k$. The probability of this happening, which we will refer to as the \emph{simple coverage probability}, is
\[
p_k = \frac{|B_k|}{|L_k|}.
\]
It is convenient to write these probabilities in terms of the widths of the layers (the term ``width'' is accurate for all layers but the bottom one, in which case we interpret it as the width of a rectangle of the same height and area). Let
\[
a_0 = a_1 - (1 - T(a_1)) / f_1
\quad \text{and} \quad
b_0 = b_1 + T(b_1) / f_1,
\]
and set
\begin{align} \label{eq:layer_width}
\begin{dcases}
%w_0 = V / f_1, \\
w_k = b_k - a_k, \quad 0 \leq k \leq n - 1, \\
w_n = 0.
\end{dcases}
\end{align}
Then the simple coverage probabilities become
\begin{align} \label{eq:simple_coverage_probability}
p_k &= \frac{w_{k + 1}}{w_{k}} \quad \text{for $0 \leq k \leq n - 1$}.
\end{align}
%
Furthermore, let $\alpha _k$ and $\beta _k$, $0 \leq k \leq n - 1$, denote the probabilities of landing to the left and right of $B_k$, respectively. Then
\begin{align} \label{eq:fallout_probability}
\begin{dcases}
\alpha _k = (a_{k + 1} - a_{k}) / w_k, \\
\beta _k = (b_{k} - b_{k + 1}) / w_k,
\end{dcases}
\quad \text{for $0 \leq k \leq n - 1$}.
\end{align}
Note that in the case of monotone densities, either all $\alpha _k = 0$ or all $\beta _k = 0$, $0 \leq k \leq n - 1$.
%
It is not difficult to see that
\begin{align*}
\Pr \big(\text{simple coverage}\big) &= \frac{1}{n} \sum _{k = 0} ^{n - 1} \Pr \big(\text{simple coverage} \mid \text{layer $k$}\big)
= \frac{p_0 + \cdots + p_{n - 1}}{n}, \\
\Pr \big(\text{rejection}\big) &= \frac{1}{n} \sum _{k = 0} ^{n - 1} \Pr \big(\text{rejection} \mid \text{layer $k$}\big)
= 1 - \frac{1}{n V}.
\end{align*}
\subsection{Ziggurat Algorithm}
The algorithm of \citet{Marsaglia+Tsang} and \citet{Doornik:05} for $n \geq 2$ layers is summarized below.
\begin{algorithm}{(Continuous generators)}
\item \label{item:alg_cont:choose_box} Generate $K \sim \DUnif \{ 0, 1, \cdots , n - 1 \}$, the zero-based layer index.
\item \nolabel{item:alg_cont:horizontal} Generate $U \sim \DUnif [0, 1)$, the relative ``horizontal position'' inside $L_K$.
\item \nolabel{item:alg_cont:bottom_layer} If $K = 0$:
\begin{enumerate}
\item If $U < \cache{\alpha _0}$, accept a variate from the left tail.
\item If $U \geq \cache{1 - \beta _0}$, accept a variate from the right tail.
\item If $\cache{\alpha _0} \leq U < \cache{1 - \beta _0}$, accept $\cache{a_0} + U \cdot \cache{w_0}$, since the random point landed inside $B_0$.
\end{enumerate}
\item \nolabel{item:alg_cont:other_layers} If $K \neq 0$:
\begin{enumerate}
\item Let $X = \cache{a_K} + U \cdot \cache{w_K}$ be the abscissa of the random point inside $L_K$.
\item If $\cache{\alpha _K} \leq U < \cache{1 - \beta _K}$, accept $X$, since the point landed inside $B_K$.
\item Generate $W \sim \DUnif [0, 1)$, the relative vertical position inside $L_K$.
\item If $\cache{f_K} + W \cdot \cache{(f_{K + 1} - f_{K})} < f(X)$, accept $X$, since the point landed under the graph of $f$.
\item Return to step \ref{item:alg_cont:choose_box}.
\end{enumerate}
\end{algorithm}
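A Python sketch of the continuous variant, assuming the tables $a_k$, $w_k$, $f_k$, $\alpha _k$, $\beta _k$ have been precomputed from system \eqref{eq:zigg_system} and that samplers for both tails are available (all names are illustrative):
\begin{verbatim}
def ziggurat(rng, n, a, w, f, alpha, beta, density,
             left_tail, right_tail):
    # a[k], w[k]: left endpoint and width of layer L_k, k = 0..n-1
    # f[0..n]: the vertical partition; density(x) evaluates f(x)
    while True:
        K = rng.randrange(n)          # layer index
        U = rng.random()              # horizontal position inside L_K
        if K == 0:
            if U < alpha[0]:
                return left_tail(rng)
            if U >= 1.0 - beta[0]:
                return right_tail(rng)
            return a[0] + U * w[0]    # landed inside B_0
        X = a[K] + U * w[K]
        if alpha[K] <= U < 1.0 - beta[K]:
            return X                  # landed inside B_K
        W = f[K] + rng.random() * (f[K + 1] - f[K])
        if W < density(X):
            return X                  # landed under the graph of f
\end{verbatim}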
In practice one usually has to deal with discrete uniform integer generators. For simplicity of presentation, we will assume that the possible values are $\{ 0, 1, \cdots , M \}$; typically, $M$ would be a Mersenne number.
The previous algorithm can be adjusted to take advantage of integer arithmetic in the following way:
\begin{algorithm}{(Discrete uniform integer generators)}
\item \label{item:alg_disc:choose_box} Generate $K \sim \DUnif \{ 0, 1, \cdots , n - 1 \}$, the zero-based layer index.
\item \nolabel{item:alg_disc:horizontal} Generate $\hat{U} \sim \DUnif \{ 0, 1, \cdots , M \}$, the up-scaled relative ``horizontal position'' inside $L_K$.
\item \nolabel{item:alg_disc:bottom_layer} If $K = 0$:
\begin{enumerate}
\item If $\hat{U} < \cache{\round{(M^\star + 1) \alpha _0}}$, accept a variate from the left tail.
\item If $\hat{U} > \cache{\round{(M^\star + 1) (1 - \beta _0)}}$, accept a variate from the right tail.
\item If $\cache{\round{(M^\star + 1) \alpha _0}} \leq \hat{U} \leq \cache{\round{(M^\star + 1) (1 - \beta _0)}}$, accept $\cache{a_0} + \hat{U} \cdot \cache{w_0 / (M + 1)}$, since the random point landed inside $B_0$.
\end{enumerate}
\item \nolabel{item:alg_disc:other_layers} If $K \neq 0$:
\begin{enumerate}
\item Let $X = \cache{a_K} + \hat{U} \cdot \cache{w_K / (M + 1)}$ be the abscissa of the random point inside $L_K$.
\item If $\cache{\round{(M^\star + 1) \alpha _K}} < \hat{U} < \cache{\round{(M^\star + 1) (1 - \beta _K)}}$, accept $X$, since the point landed inside $B_K$.
\item Generate $\hat{W} \sim \DUnif \{0, 1, \cdots , M \}$, the up-scaled relative vertical position inside $L_K$.
\item If $\cache{f_K} + \hat{W} \cdot \cache{(f_{K + 1} - f_{K}) / (M + 1)} < f(X)$, accept $X$, since the point landed under the graph of $f$.
\item Return to step \ref{item:alg_disc:choose_box}.
\end{enumerate}
\end{algorithm}
%
Note that if we know in advance that $\alpha _0 = \cdots = \alpha _{n - 1} = 0$ or $\beta _0 = \cdots = \beta _{n - 1} = 0$, certain steps in the algorithm can be simplified (or skipped entirely).
\begin{remark}
In most cases, we think it is reasonable to make the following assumptions.
\begin{itemize}
\item If $\alpha _j = 0$ for some $j$, then all $\alpha _k = 0$, $0 \leq k \leq n - 1$.
\item If $\beta _j = 0$ for some $j$, then all $\beta _k = 0$, $0 \leq k \leq n - 1$.
\end{itemize}
\end{remark}
\begin{remark}
    Also note that only $\ceil{\log _2 (n)}$ bits are necessary to generate $K$; the remaining bits may be used, e.g., to implement vectorized Ziggurat versions, or to store the random index for repeated iterations.
\end{remark}
\bibliographystyle{plainnat}
\bibliography{samplers}
\end{document}
| {
"alphanum_fraction": 0.6181282832,
"avg_line_length": 54.9393382353,
"ext": "tex",
"hexsha": "cae10a7c052f0bfb95f6bed10fc0ea66750cb3b6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ab9364c41672a188879d84ebebb5a23f2d6641ec",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ropufu/aftermath",
"max_forks_repo_path": "documentation/samplers.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ab9364c41672a188879d84ebebb5a23f2d6641ec",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ropufu/aftermath",
"max_issues_repo_path": "documentation/samplers.tex",
"max_line_length": 536,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ab9364c41672a188879d84ebebb5a23f2d6641ec",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ropufu/aftermath",
"max_stars_repo_path": "documentation/samplers.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 10254,
"size": 29887
} |
\section{Microfoundations for the Aggregate Demand curve}
| {
"alphanum_fraction": 0.8166666667,
"avg_line_length": 15,
"ext": "tex",
"hexsha": "e20c964afbdd16d0707cc7427853398a1e5811b7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/economics/newClassical/03-00-AD.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/economics/newClassical/03-00-AD.tex",
"max_line_length": 57,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/economics/newClassical/03-00-AD.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 13,
"size": 60
} |
\chapter{Proposed Implementation}
\label{ch:proposed-system}
Our solution is based on a code translator. In the end, a translator is still a type of compiler (\cref{fig:compiler-stages}),
with an analysis phase and a synthesis phase. The analysis phase focuses on verifying that the input is correct and on
making the necessary transformations, while the synthesis phase starts from a high-quality structure and performs the
appropriate transformations to reach the target representation. In the case of our solution there is one difference:
we do not have a single target but multiple ones (\cref{fig:translator}).
In addition, in \cref{ch:proposed-implementation} we have already developed a system capable of analysing,
validating and transforming source code into an intermediate language made up of a
high-quality graph. Therefore, in our translator we reuse the lexical, syntactic and semantic
analyser from \cref{ch:proposed-implementation}; it makes up the front end of our translator.
What we will truly develop in this section is, then, the back-end of the translator.
\begin{figure}
\includegraphics{images/translator.pdf}
\centering
\caption[Translator generic structure]{Translator generic structure.}
\label{fig:translator}
\end{figure}
\section{Translator Back-end}
In our solution, the back-end of the translator, also called the synthesis phase,
fulfils two main functions. On the one hand, it analyses the intermediate language
in search of incompatibilities specific to the target language. On the
other hand, it generates the code specific to the target language.
To perform the analysis of the intermediate language, a graph traversal based on the Visitor pattern
is implemented; it validates that no node or set of nodes
violates the restrictions obtained in \cref{eq:restriction}. The visitor that is implemented is
reused in its entirety from the one developed for the syntax and semantic analysis in \cref{ch:proposed-implementation}.
For code generation, we propose one implementation of the Visitor pattern for each of the specific code generators;
each of these implementations performs the transformation function described in \cref{eq:transformation},
and each specific code generator may have further visitor implementations associated with it.
\cref{fig:class-diagram-cg} illustrates this situation: the Java code generation visitor actually contains four more visitors,
one for each language-specific level of the plain objects. A minimal sketch of this structure is given after the figure.
\begin{figure}
\includegraphics[width=\textwidth]{images/class-diagra-cg.pdf}
\centering
\caption[Class diagram of the code generation visitors]{Class diagram of the code generation visitors.}
\label{fig:class-diagram-cg}
\end{figure}
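To make the structure concrete, the following minimal Python sketch (purely illustrative; it does not reflect the actual \texttt{ShEx-Lite} classes or API) shows how a code generation visitor walks the intermediate graph and projects its types onto a target language:
\begin{lstlisting}[language=Python, basicstyle=\ttfamily\scriptsize]
class ShapeNode:
    # a node of the intermediate graph: a shape with (field, type) pairs
    def __init__(self, name, fields):
        self.name, self.fields = name, fields

    def accept(self, visitor):
        return visitor.visit_shape(self)

class JavaCodeGenVisitor:
    # projects intermediate types onto Java types
    TYPE_MAP = {"xsd:string": "String", "xsd:integer": "int",
                "xsd:float": "float", "xsd:boolean": "boolean"}

    def visit_shape(self, shape):
        lines = ["public class %s {" % shape.name]
        for fname, ftype in shape.fields:
            lines.append("    private %s %s;"
                         % (self.TYPE_MAP.get(ftype, ftype), fname))
        lines.append("}")
        return "\n".join(lines)

# e.g. ShapeNode("Person", [("name", "xsd:string")]).accept(JavaCodeGenVisitor())
\end{lstlisting}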
As a proof of concept of the structure proposed in the previous section, we implemented a system that starts from the one
developed in \cref{sec:anal-implementation} and is capable of generating code from schemas, checking that they are valid.
This solution follows the structure developed in this chapter and, more precisely, \cref{fig:class-diagram-cg}. Moreover, the
developed system offers a CLI tool (\cref{fig:menu-tool}). In this tool users can set multiple options, such as \texttt{--java-pkg=STRING}, which, if
present, triggers the Java code generation and generates the target object in the specified package.
For example, for the input \texttt{java -jar shexlc.jar --java-pkg=demo person.shexl}, where the \texttt{person.shexl}
file corresponds to the schema defined in \cref{fig:shex-translate-small}, \texttt{ShEx-Lite} would generate a single Java
class with the code that appears in the \texttt{Person.java} file, also in \cref{fig:shex-translate-small}.
This implemented system will be used to evaluate the proposed solution.
\begin{figure}
\includegraphics[width=\textwidth]{images/shexlc-menu.PNG}
\centering
\caption[CLI menu of \texttt{ShEx-Lite} CLI tool]{CLI menu of \texttt{ShEx-Lite} CLI tool.}
\label{fig:menu-tool}
\end{figure}
\section{Tests}
In order to validate the translator, it is no longer possible to rely on the tests previously
defined by the ShEx specification. Therefore, synthetic tests have been designed for this purpose.
These synthetic tests verify that each edge case of the code generation system works. In addition,
they verify that, for each target language of the code generation, the type projection is correct.
All these tests are run continuously on GitHub on Windows, Linux and Mac OS platforms;
see \cref{ch:github} for the CI configuration.
\section{Generated Objects}
In this section we will give real examples of use cases in which the proposed system has been used to generate objects,
in addition we will study the objects in order to estimate their quality.
\subsection{Real (Hercules ASIO European Project)}
In the framework of the European project Hercules ASIO, financed with 5,462,600
euros, the described system is used to link two parts of the architecture: the ontological infrastructure and the semantic
architecture. The Hercules project aims to provide a solution based on linked data to manage the research framework
of Spanish universities. Some examples of the objects generated in this project are shown in \cref{fig:example-2} and \cref{fig:example-3}.
\begin{center}
\noindent\begin{minipage}[t]{.4\textwidth}
\begin{lstlisting}[frame=topline,numbers=left,firstnumber=0,title=\scriptsize\texttt{University.shexl}, basicstyle=\ttfamily\scriptsize]{a}
# Prefixes...
asio:University {
rdfs:name xsd:string ;
rdfs:county xsd:string ;
 rdfs:location xsd:string ;
asio:hasRector
@asio:UniversityStaff ;
...
}
\end{lstlisting}
\end{minipage}\hfill
\begin{minipage}[t]{.5\textwidth}
\begin{lstlisting}[language=Java, frame=t,numbers=left,firstnumber=0,title=\scriptsize\texttt{University.java}, basicstyle=\ttfamily\scriptsize]{b}
// Imports...
public class University {
private String name ;
private String country ;
private String location ;
private asio.UniversityStaff hasRector ;
...
// Constructor...
// Getters and Setters...
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Schema modelling a \texttt{University} in \texttt{shexl} syntax to the left. And the \texttt{ShEx-Lite} generated code in \texttt{Java} to the right.}
\label{fig:example-2}
\end{center}
\begin{center}
\noindent\begin{minipage}[t]{.4\textwidth}
\begin{lstlisting}[frame=topline,numbers=left,firstnumber=0,title=\scriptsize\texttt{Researcher.shexl}, basicstyle=\ttfamily\scriptsize]{a}
# Prefixes...
asio:Researcher {
rdfs:name xsd:string ;
rdfs:surname xsd:string ;
rdfs:id xsd:integer ;
rdfs:orcid xsd:string ;
rdfs:publications
@asio:AcademicPublication * ;
...
}
\end{lstlisting}
\end{minipage}\hfill
\begin{minipage}[t]{.5\textwidth}
\begin{lstlisting}[language=Java, frame=t,firstnumber=0,numbers=left,title=\scriptsize\texttt{Researcher.java}, basicstyle=\ttfamily\scriptsize]{b}
// Imports...
public class Researcher {
private String name ;
private String surname ;
private int id ;
private String orcid ;
private List<asio.AcademicPublication>
publications ;
...
// Constructor...
// Getters and Setters...
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Schema modelling a \texttt{Researcher} in \texttt{shexl} syntax to the left. And the \texttt{ShEx-Lite} generated code in \texttt{Java} to the right.}
\label{fig:example-3}
\end{center}
\subsection{Synthetic (Generated)}
In addition to the real use case mentioned above, several generations of synthetic objects have been carried out to validate that the generation is correct.
\cref{fig:example-4} illustrates a synthetic schema that contains all the possible types that our solution can represent in any object-oriented language.
\begin{center}
\noindent\begin{minipage}[t]{.4\textwidth}
\begin{lstlisting}[frame=topline,numbers=left,firstnumber=0,title=\scriptsize\texttt{Synthetic.shexl}, basicstyle=\ttfamily\scriptsize]{a}
# Prefixes...
a:b {
:c xsd:string ;
:d xsd:integer ;
  :e xsd:float ;
  :f xsd:boolean ;
  :g @a:b ;
}
\end{lstlisting}
\end{minipage}\hfill
\begin{minipage}[t]{.5\textwidth}
\begin{lstlisting}[language=Java, frame=t,numbers=left,firstnumber=0,title=\scriptsize\texttt{Synthetic.java}, basicstyle=\ttfamily\scriptsize]{b}
package a;
// Imports...
public class B {
private String c;
private int d;
  private float e;
  private boolean f;
  private a.B g;
// Constructor...
// Getters and Setters...
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Synthetic schema in \texttt{shexl} syntax to the left. And the \texttt{ShEx-Lite} generated code in \texttt{Java} to the right.}
\label{fig:example-4}
\end{center} | {
"alphanum_fraction": 0.7757377788,
"avg_line_length": 48.5136612022,
"ext": "tex",
"hexsha": "f5ba8f54255d005a0f4b814671a22025b7ac175f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ce22ebcf6cd1ce5bafd79398ee87e50dcb18c41d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "thewilly/shex-lite-book",
"max_forks_repo_path": "chapters/part_2/translator-implementation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ce22ebcf6cd1ce5bafd79398ee87e50dcb18c41d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "thewilly/shex-lite-book",
"max_issues_repo_path": "chapters/part_2/translator-implementation.tex",
"max_line_length": 170,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ce22ebcf6cd1ce5bafd79398ee87e50dcb18c41d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "thewilly/shex-lite-book",
"max_stars_repo_path": "chapters/part_2/translator-implementation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2279,
"size": 8878
} |
%
% documentclass
%
\documentclass[11pt,aspectratio=169,xcolor={table,dvipsnames}]{beamer}
\usetheme[
progressbar=frametitle,
numbering=fraction
]{metropolis}
%
% general packages
%
\usepackage[english,main=ngerman]{babel}
%\usepackage[ngerman,main=english]{babel}
\usepackage{microtype}
\usepackage{hyperref}
\usepackage{hyphenat}
%\hyphenation{Mathe-matik wieder-gewinnen}
%
% font packages
%
% xelatex fonts
\usepackage{fontspec}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Scale=1}
\setmonofont{JetBrains Mono}
%\setmainfont{Palatino Linotype}
%\setmainfont{Calibri}
% legacy fonts
%\usepackage[utf8]{inputenc}
%\usepackage[T1]{fontenc}
%\usepackage[sc,osf]{mathpazo}
%
% misc packages
%
% BibLateX
%\usepackage{csquotes}
%\usepackage[
% backend=biber,
% style=ieee,
% url=false,
% eprint=false,
% isbn=false,
% hyperref=true,
% bibencoding=inputenc
%]{biblatex}
%\addbibresource{quellen.bib}
%\usepackage{listings}
%\definecolor{codegray}{rgb}{0.5,0.5,0.5}
%\definecolor{codeback}{rgb}{0.93,0.93,0.90}
%\lstset{
% basicstyle=\footnotesize\ttfamily,
% breaklines=true,
% keepspaces=true,
% numbers=left,
% numbersep=8pt,
% tabsize=4,
% frame=single,
% numberstyle=\scriptsize\color{gray},
% backgroundcolor=\color{codeback},
% stringstyle=\color{Purple},
% commentstyle=\color{ForestGreen},
% keywordstyle=\color{BrickRed},
%}
%\usepackage[linesnumbered,ruled,vlined]{algorithm2e}
%\usepackage{tikz}
%\usepackage{subfig}
%\usepackage{caption}
%\usepackage{overpic}
\usepackage{graphicx}
\graphicspath{{imgs/}}
%\usepackage{tabularx}
%\usepackage{array}
%\usepackage{booktabs}
%\usepackage{diagbox}
%\usepackage[obeyDraft,colorinlistoftodos]{todonotes}
%\newcommand{\missing}[1]{\todo[inline]{#1}}
%\newcommand{\rtodo}[1]{\todo[backgroundcolor=red]{#1}}
%\newcommand{\ytodo}[1]{\todo[backgroundcolor=yellow]{#1}}
%\newcommand{\gtodo}[1]{\todo[backgroundcolor=green]{#1}}
%
% math stuff
%
\usepackage{amsmath,amssymb,amsthm,mathtools}
%\numberwithin{equation}{subsection}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\ZZ}{\mathbb{Z}}
\newcommand{\QQ}{\mathbb{Q}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\CC}{\mathbb{C}}
\renewcommand{\d}[1][x]{\,\mathrm{d}#1}
\newcommand{\set}[1]{\left\lbrace#1\right\rbrace}
\newcommand{\Set}[2]{\left\lbrace#1\mid#2\right\rbrace}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newcommand{\sprod}[2]{\left\langle#1,#2\right\rangle}
\newcommand{\pab}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\grad}[2]{\text{grad}_{#1} #2}
\DeclareMathOperator{\var}{var}
\DeclareMathOperator*{\argmin}{argmin}
% See https://www.overleaf.com/learn/latex/theorems_and_proofs
%\theoremstyle{plain}
%\newtheorem{definition}{Definition}[subsection]
%\newtheorem{theorem}{Satz}[subsection]
%\newtheorem{corollary}{Korollar}[theorem]
%\theoremstyle{remark}
%\newtheorem*{remark}{Hinweis}
%\newtheorem*{example}{Beispiel}
%
% misc commands
%
\newcommand{\emptyline}{\vspace{\baselineskip}}
%
% document settings
%
\newcommand{\mytitle}{Document}
\newcommand{\mysubtitle}{Created with XeLaTeX}
\newcommand{\myname}{Daniel}
\newcommand{\mydate}{\today}
\hypersetup{
pdftitle={\mytitle{}},
pdfsubject={\mysubtitle{}},
pdfauthor={\myname{}},
pdfproducer={\myname{}},
pdfcreator={Accomplished with: XeLaTeX, biber, and hyperref-package.},
colorlinks=true,
urlcolor=blue!40!black,
linkcolor=red!40!black,
citecolor=green!40!black,
}
\title{\mytitle{}}
\subtitle{\mysubtitle{}}
\author{\myname{}}
\date{\mydate{}}
% =====================================================================================================
\begin{document}
\frame{\titlepage}
%\begin{frame}
% \frametitle{Inhaltsverzeichnis}
% \tableofcontents
%\end{frame}
\section{Algorithm}
\begin{frame}
\frametitle{Algorithm}
bla
\end{frame}
\begin{frame}
top
\end{frame}
\begin{frame}[t]
top
\end{frame}
%\begin{frame}
% \frametitle{\hphantom{1em}}
% \frametitle{}
% \printbibliography
%\end{frame}
\end{document}
| {
"alphanum_fraction": 0.699777613,
"avg_line_length": 21.1884816754,
"ext": "tex",
"hexsha": "50a06e8fd504f954e5c6e6d4dc6a4517320c5cb8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ebabfaf9d40fb4b73711a86c033c4cfd7a194c66",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "danielgolf/dotfiles-linux",
"max_forks_repo_path": "tex/slide_template.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ebabfaf9d40fb4b73711a86c033c4cfd7a194c66",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "danielgolf/dotfiles-linux",
"max_issues_repo_path": "tex/slide_template.tex",
"max_line_length": 103,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "ebabfaf9d40fb4b73711a86c033c4cfd7a194c66",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "danielgolf/dotfiles-linux",
"max_stars_repo_path": "tex/slide_template.tex",
"max_stars_repo_stars_event_max_datetime": "2020-01-07T10:25:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-12-29T05:58:19.000Z",
"num_tokens": 1342,
"size": 4047
} |
% jam 2004-09-10
\section{Mesh Operations}
\label{sec:mesh-operations}
\subsection{Split}
Splitting a simplex produces a refined complex by adding a new vertex
and subdivided some of the simplices.
Suppose $\Ssimplex$ is a $d$-simplex
in the pure $n$-dimensional simplicial complex $\Kcomplex_0$.
Let $\Vvertex$ be a vertex not in $\Kcomplex_0$.
{\it Splitting $\Ssimplex$ around $\Vvertex$}
produces a refined pure $n$-dimensional simplicial complex, $\Kcomplex_1$:
$\Kcomplex_1$ contains all the $n$-simplexes of $\Kcomplex_0$
that do not contain $\Ssimplex$.
The simplexes that do contain $\Ssimplex$ are split:
Let $\Tsimplex$ be an $n$-simplex that contains $\Ssimplex$.
Let $\Rsimplex$ be the $(n-d-1)$-simplex opposite $\Ssimplex$
in $\Tsimplex$.
(If $\Tsimplex = \Ssimplex$, then $\Rsimplex$ is the empty set.)
Let $\{ \Ffacet_0 \ldots \Ffacet_d \}$ be the
$d + 1$ $(d-1)$-simplex facets of $\Ssimplex$.
$\Kcomplex_1$ has
$d + 1$ $n$-simplices formed from
$\Vvertex$, $\Rsimplex$, and
$\Ffacet_i$, where $i=0 \ldots d$.
This is also known as "placing a vertex"
(See Lee~\cite[sec.~17.2]{lee-hdcg-17-2004}).
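For example, in a pure $2$-dimensional complex, splitting an edge $\Ssimplex$
around a new vertex $\Vvertex$ replaces each triangle containing that edge
with two triangles: each is formed from $\Vvertex$, the vertex opposite the
edge, and one of the two endpoints of the edge.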
\subsection{Collapse}
Collapsing a simplex produces a reduced complex by merging
the vertices of the simplex into a single new one.
Suppose $\Ssimplex$ is a $d$-simplex
in the pure $n$-dimensional simplicial complex $\Kcomplex_0$.
Let $\Vvertex$ be a vertex not in $\Kcomplex_0$.
{\it Collapsing $\Ssimplex$ to $\Vvertex$}
produces a reduced pure $n$-dimensional simplicial complex, $\Kcomplex_1$:
$\Kcomplex_1$ contains all the
$n$-simplices of $\Kcomplex_0$ that share no vertices with $\Ssimplex$.
Any $n$-simplex in $\Kcomplex_0$ sharing more than one vertex
with $\Ssimplex$ is ignored.
For every $n$-simplex in $\Kcomplex_0$ sharing one vertex with $\Ssimplex$,
$\Kcomplex_1$ gets an $n$-simplex with $\Vvertex$ replacing the
shared vertex.
| {
"alphanum_fraction": 0.727613941,
"avg_line_length": 33.9090909091,
"ext": "tex",
"hexsha": "ebed05b7e132a3aadd57c3b82fb4448b5ae84a96",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "palisades-lakes/les-elemens",
"max_forks_repo_path": "doc/old/fosm/mesh-operations.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "palisades-lakes/les-elemens",
"max_issues_repo_path": "doc/old/fosm/mesh-operations.tex",
"max_line_length": 75,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "palisades-lakes/les-elemens",
"max_stars_repo_path": "doc/old/fosm/mesh-operations.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 618,
"size": 1865
} |
\section{Notation}
Throughout this paper we assume that we are working in a three-dimensional
Euclidean space. We shall use $P_1, P_2, \ldots$ to denote polytopes with
points in $\mathbb{R}^3$, and $\point{p_1}, \point{p_2}, \ldots$ to denote
points in $\mathbb{R}^3$, where in particular $\point{0} = (0, 0, 0)$.
For a $\point{p_i}$ we indicate its three components with $(p_{i_x}, p_{i_y}, p_{i_z})$.
| {
"alphanum_fraction": 0.6990049751,
"avg_line_length": 57.4285714286,
"ext": "tex",
"hexsha": "9dbef738bec9d8367d5a97482aa6e585dafb7d59",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "7879d1ceb252f731fa4bbc9d93341b832e62115d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "formalmethods/polytopepacking",
"max_forks_repo_path": "report/notation.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "7879d1ceb252f731fa4bbc9d93341b832e62115d",
"max_issues_repo_issues_event_max_datetime": "2020-03-21T20:44:27.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-03-18T08:05:41.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "bobosoft/polytopepacking",
"max_issues_repo_path": "report/notation.tex",
"max_line_length": 88,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "7879d1ceb252f731fa4bbc9d93341b832e62115d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "formalmethods/polytopepacking",
"max_stars_repo_path": "report/notation.tex",
"max_stars_repo_stars_event_max_datetime": "2016-04-07T13:54:39.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-04-07T13:54:39.000Z",
"num_tokens": 146,
"size": 402
} |
\section{Group operations, revisited}
In this section we revise \emph{Riordan group} concepts described in
\autoref{subsection:back:to:the:basics:riordan:group}, looking at them under
the light of $h$-characterization. In particular, we show how a Riordan arrays
written using the proposed characterization, allows us to build new arrays
respect to group operation $\cdot$ and to find their inverses.
\subsection{Inverting an array}
Let $\mathcal{R}\left(d(t),h(t)\right)$ be a Riordan array and let
$\mathcal{R}_{h(t)}\left(g(h(t)),h(t)\right)$ its $h$-characterization, for some
function $g$. %in the variable $h(t)$.
Using rules for inversion in the Riordan group,
\marginpar{using $\mathcal{R}_{h(t)}$
polymorphically as an array, remind \autoref{par:h:characterization:is:an:array:polymorphism}}
proceed as follow:
\begin{displaymath}
\begin{split}
\mathcal{R}_{h(t)}^{-1}\left(\frac{1}{g(h(\hat{h}(t)))},\hat{h}(t)\right)&=
\left[\mathcal{R}^{-1}\left(\left.\frac{1}{g(h(y))},y\right) \right| y = \hat{h}(t) \right]\\
&= \mathcal{R}_{\hat{h}(t)}^{-1}\left(k(\hat{h}(t)),\hat{h}(t)\right)
\end{split}
\end{displaymath}
where function $k$ is defined as $k(y)=\frac{1}{g(h(y))}$;
therefore we obtain a new array $\mathcal{T}_{\hat{h}(t)} = \mathcal{R}_{\hat{h}(t)}^{-1}$, which is the $\hat{h}$-characterization
of $\mathcal{R}_{h(t)}^{-1}$ and, at the same time, of $\mathcal{R}^{-1}$.
\marginpar{again, $\hat{h}(t)$ plays the role of a \emph{variable},
so don't be tempted to say $h(\hat{h}(t))=t$ as in the normal course of things \ldots}
\\\\
For the sake of clarity we apply the previous derivation to build the inverse of $\mathcal{F}$,
the Fibonacci array. To set the stage we need function $h$, take it from the definition:
\begin{displaymath}
\begin{split}
h(t)&=\frac{1-\sqrt{1-4t}}{2}\\
\end{split}
\end{displaymath}
we also need function $g$; take it from the $h$-characterization $\mathcal{F}_{h(t)}$:
\begin{displaymath}
\left[g(y)=\left.\frac{1}{1-y+2y^3-y^4} \right| y=h(t)\right]
\end{displaymath}
we're ready to apply:
\begin{displaymath}
\left[\mathcal{F}^{-1}\left.\left(1-y-y^2,y\right) \right| y = \hat{h}(t) \right]=
\mathcal{F}_{\hat{h}(t)}^{-1}\left(1-\hat{h}(t)-\hat{h}(t)^2,\hat{h}(t)\right)
\end{displaymath}
where function $\hat{h}$ is the compositional inverse of function $h$. Observe
that $1-y-y^2$ in the left array, under the variable constraint $y=\hat{h}(t)$, is
obtained by evaluating $\frac{1}{g(h(y))}$, considering $y$ as a variable:
under this abstraction, function $\hat{h}(t)$ cannot be used
\emph{functionally}, otherwise $g(h(\hat{h}(k)))=g(k)$, where $k=h(t)$
according to the constraint under which $g$ is defined. If that were the case, it
would yield a factorization in $h(t)$, not in $\hat{h}(t)$, as we would like to
have.
If the explicit definition for $\mathcal{F}^{-1}$ is desired, just plug $\hat{h}(t)=t-t^2$:
\begin{displaymath}
\left(\mathcal{F}_{\hat{h}(t)}^{-1}\right)^{\stackrel{\hat{h}(t)}{\rightarrow}} =
\mathcal{F}^{-1}\left(1-t+2t^3-t^4,t-t^2\right)
\end{displaymath}
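As a quick check, expanding the first component confirms the factorization:
\begin{displaymath}
1-(t-t^2)-(t-t^2)^2 = 1 - t + t^2 - (t^2 - 2t^3 + t^4) = 1-t+2t^3-t^4.
\end{displaymath}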
\subsection{Multiplying two arrays}
Finally, let us tackle the product of two Riordan arrays.
Multiplication, the action of the Riordan group, between two elements
$\mathcal{A}(d_{\mathcal{A}}(t), h_{\mathcal{A}}(t))$ and
$\mathcal{B}(d_{\mathcal{B}}(t), h_{\mathcal{B}}(t))$ belonging to the group, is defined as:
\begin{displaymath}
\mathcal{A}\mathcal{B} = \left(d_{\mathcal{A}}(t)d_{\mathcal{B}}(h_{\mathcal{A}}(t)),
h_{\mathcal{B}}(h_{\mathcal{A}}(t))\right)
\end{displaymath}
Consider the $h_{\mathcal{A}}$-characterization of $\mathcal{A}$:
\begin{displaymath}
\mathcal{A}_{h_\mathcal{A}(t)} \left(\gamma(h_{\mathcal{A}}(t)), h_{\mathcal{A}}(t) \right)
\end{displaymath}
for some function $\gamma$, and the $h_{\mathcal{B}}$-characterization of $\mathcal{B}$:
\begin{displaymath}
\mathcal{B}_{h_\mathcal{B}(t)} \left(\eta(h_{\mathcal{B}}(t)), h_{\mathcal{B}}(t) \right)
\end{displaymath}
for some function $\eta$, respectively. Apply now the multiplication rule:
\begin{displaymath}
\begin{split}
& \left(\gamma(h_{\mathcal{A}}(t)), h_{\mathcal{A}}(t) \right)
\left(\eta(h_{\mathcal{B}}(t)), h_{\mathcal{B}}(t) \right) \\
&=\left[\left.\left(\gamma(y), y \right)
\left(\eta(h_{\mathcal{B}}(t)), h_{\mathcal{B}}(t) \right) \right| y=h_{\mathcal{A}}(t) \right]\\
&=\left[\left.\left(\gamma(y)\eta(h_{\mathcal{B}}(y)), h_{\mathcal{B}}(y) \right) \right|
y=h_{\mathcal{A}}(t) \right]\\
\end{split}
\end{displaymath}
We stop here; this is enough to build the $h_{\mathcal{A}}$-characterization
of the product $\mathcal{A}\mathcal{B}$:
\begin{displaymath}
\left[\left.\left(\Omega(y), h_{\mathcal{B}}(y) \right) \right| y=h_{\mathcal{A}}(t) \right]
=\big(\mathcal{A}\mathcal{B}\big)_{h_{\mathcal{A}}(t)}\left(
\Omega(h_{\mathcal{A}}(t)), h_{\mathcal{B}}(h_{\mathcal{A}}(t)) \right)
\end{displaymath}
where $\left[\left.\Omega(y)= \gamma(y)\eta(h_{\mathcal{B}}(y))\right| y=h_{\mathcal{A}}(t) \right]$.
\\\\
Nonetheless,
\marginpar{The following can be skipped on a first reading}
from where we stopped it is possible to factor $\mathcal{A}\mathcal{B}$ with respect to $h_{\mathcal{B}}(t)$ in
a similar way, this time abstracting over $h_{\mathcal{B}}(y)$ and remembering to track $y=h_{\mathcal{A}}(t)$:
\begin{displaymath}
\begin{split}
&\left[\left.\left(\gamma(y)\eta(h_{\mathcal{B}}(y)), h_{\mathcal{B}}(y) \right) \right|
y=h_{\mathcal{A}}(t) \right]\\
&=\left.\left[\left.\left(\gamma(\hat{h}_{\mathcal{B}}(k))\eta(k), k \right) \right|
k=h_{\mathcal{B}}(y) \right]\right|_{y=h_{\mathcal{A}}(t)}\\
&=\left.\left[\left.\left(\Theta(k), k \right) \right| k=h_{\mathcal{B}}(y) \right]\right|_{y=h_{\mathcal{A}}(t)}\\
&=\left.\big(\mathcal{A}\mathcal{B}\big)_{h_{\mathcal{B}}(y)}\left(
\Theta(h_{\mathcal{B}}(y)), h_{\mathcal{B}}(y) \right)\right|_{y=h_{\mathcal{A}}(t)}
\end{split}
\end{displaymath}
\marginpar{in order to stop tracking $y=h_{\mathcal{A}}(t)$, we should write $(\Theta(k), k)$,
where $k$ solves
$h_{\mathcal{B}}(\hat{h}_{\mathcal{A}}(\hat{h}_{\mathcal{B}}(k)))=h_{\mathcal{B}}(t)$,
but it doesn't supply an explicit variable substitution}
where $\left.\left[\left.\Theta(k)=\gamma(\hat{h}_{\mathcal{B}}(k))\eta(k) \right|
k=h_{\mathcal{B}}(y) \right]\right|_{y=h_{\mathcal{A}}(t)}$.
Observe that for the latter factorization, it is necessary to compute function $\hat{h}_{\mathcal{B}}$,
the compositional inverse of function $h_{\mathcal{B}}$.
\\\\
For the sake of clarity we apply the previous derivation to the product $\mathcal{P}\mathcal{F}$, namely
we'll multiply Pascal and Fibonacci arrays, \emph{in the given order}, providing
both $\big(\mathcal{P}\mathcal{F}\big)_{h_{\mathcal{P}}(t)}$
and $\left.\big(\mathcal{P}\mathcal{F}\big)_{h_{\mathcal{F}}(y)}\right|_{y=h_{\mathcal{P}}(t)}$.
Recall $\mathcal{P}$ and $\mathcal{F}$ $h$-characterizations, they are respectively:
\begin{displaymath}
\begin{split}
&\mathcal{P}_{h_{\mathcal{P}}(t)}\left( 1+h_{\mathcal{P}}(t), h_{\mathcal{P}}(t) \right)
\text{ where } h_{\mathcal{P}}(t) = \frac{t}{1-t}\\
&\mathcal{F}_{h_{\mathcal{F}}(t)}\left( \frac{1}{1-h_{\mathcal{F}}(t)+
2h_{\mathcal{F}}(t)^3-h_{\mathcal{F}}(t)^4}, h_{\mathcal{F}}(t) \right)
\text{ where } h_{\mathcal{F}}(t) = \frac{1-\sqrt{1-4t}}{2}\\
\end{split}
\end{displaymath}
and recognize the following functions under variable constraints (in the following, $y$
is a ``local'' variable, \emph{not shared} between the two definitions):
\begin{displaymath}
\begin{split}
&\left.\left[\gamma(y) = 1+y \right| y=h_{\mathcal{P}}(t) \right]\\
&\left.\left[\eta(y) = \frac{1}{1-y+2y^3-y^4} \right| y=h_{\mathcal{F}}(t) \right]\\
\end{split}
\end{displaymath}
Toward $\big(\mathcal{P}\mathcal{F}\big)_{h_{\mathcal{P}}(t)}$, compute $\Omega$ function:
\begin{displaymath}
\left.\left[\Omega(y)= \gamma(y)\eta(h_{\mathcal{F}}(y))\right| y=h_{\mathcal{P}}(t) \right]
= \left.\left[\Omega(y)= \frac{1+y}{1-y-y^2}\right| y=h_{\mathcal{P}}(t) \right]
\end{displaymath}
\marginpar{$\big(\mathcal{P}\mathcal{F}\big)_{h_{\mathcal{P}}(t)}$}
therefore answer the first question:
\begin{displaymath}
\big(\mathcal{P}\mathcal{F}\big)_{h_{\mathcal{P}}(t)} \left(\frac{1+
h_{\mathcal{P}}(t)}{1-h_{\mathcal{P}}(t)-h_{\mathcal{P}}(t)^2}, \frac{1-\sqrt{1-4h_{\mathcal{P}}(t)}}{2} \right)
\end{displaymath}
\marginpar{$\left.\big(\mathcal{P}\mathcal{F}\big)_{h_{\mathcal{F}}(y)}\right|_{y=h_{\mathcal{P}}(t)}$}
Toward $\left.\big(\mathcal{P}\mathcal{F}\big)_{h_{\mathcal{F}}(y)}\right|_{y=h_{\mathcal{P}}(t)}$, compute $\Theta$ function:
\begin{displaymath}
\begin{split}
&\left.\left[\left.\Theta(k)=\gamma(\hat{h}_{\mathcal{F}}(k))\eta(k) \right| k=h_{\mathcal{F}}(y) \right]\right|_{y=h_{\mathcal{P}}(t)}\\
&= \left.\left[\left.\Theta(k)=\frac{1+k-k^2}{1-k+ 2k^3 - k^4} \right| k=h_{\mathcal{F}}(y) \right]\right|_{y=h_{\mathcal{P}}(t)}\\
\end{split}
\end{displaymath}
where $\hat{h}_{\mathcal{F}}(y)=y-y^2$, therefore answer the second question:
\begin{displaymath}
\left.\big(\mathcal{P}\mathcal{F}\big)_{h_{\mathcal{F}}(y)} \left(
\frac{1+h_{\mathcal{F}}(y)-h_{\mathcal{F}}(y)^2}{1-h_{\mathcal{F}}(y)+ 2h_{\mathcal{F}}(y)^3 - h_{\mathcal{F}}(y)^4} ,
h_{\mathcal{F}}(y) \right)\right|_{y=h_{\mathcal{P}}(t)}
\end{displaymath}
\marginpar{check substituting functions $h_{\mathcal{P}}$ and $h_{\mathcal{F}}$}
Little check plugging in function $h_{\mathcal{P}}(t)$:
\begin{displaymath}
\left(\big(\mathcal{P}\mathcal{F}\big)_{h_{\mathcal{P}}(t)}\right)^{\stackrel{h_{\mathcal{P}}(t)}{\rightarrow}}
= \big(\mathcal{P}\mathcal{F}\big)\left(\frac{1-t}{1-3t+t^2}, \frac{1}{2}-\frac{1}{2}\sqrt{\frac{1-5t}{1-t}} \right)
\end{displaymath}
on the other hand, plugging in function $h_{\mathcal{F}}(y)$, where $y=h_{\mathcal{P}}(t)$:
\begin{displaymath}
\left.\left(\big(\mathcal{P}\mathcal{F}\big)_{h_{\mathcal{F}}(y)}\right)^{\stackrel{h_{\mathcal{F}}(y)}{\rightarrow}}\right|_{y=h_{\mathcal{P}}(t)}
= \left.\big(\mathcal{P}\mathcal{F}\big)\left(\frac{1+y}{1-y-y^2}, \frac{1-\sqrt{1-4y}}{2} \right)\right|_{y=h_{\mathcal{P}}(t)}
\end{displaymath}
| {
"alphanum_fraction": 0.6270731473,
"avg_line_length": 54.612565445,
"ext": "tex",
"hexsha": "db57b61e0d7f861a2206faccbdaa01493d857929",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "massimo-nocentini/master-thesis",
"max_forks_repo_path": "classicthesis/Chapters/h-characterization/group-operations.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "massimo-nocentini/master-thesis",
"max_issues_repo_path": "classicthesis/Chapters/h-characterization/group-operations.tex",
"max_line_length": 151,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "massimo-nocentini/master-thesis",
"max_stars_repo_path": "classicthesis/Chapters/h-characterization/group-operations.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4031,
"size": 10431
} |
\newpage
\section{Assumptions that can be satisfied with current technology}
\label{sec:satisfiable_assumptions}
\subsection{Key length}
We assume that the key lengths are sufficient to prevent brute force attacks.
\subsection{Keys are randomly generated}
We assume that the keys are randomly generated by using high quality randomness that is suitable for cryptographic operations. Therefore we assume that the keys can not be predicted or guessed.
\subsection{ID-card keys are generated in the card}
We assume that the private keys of the ID-card never leave the card and are generated inside the chip.
\subsection{ID-card private keys do not leak}
We assume that the private keys of the ID-card are protected by special hardware and cannot be extracted or leaked.
\subsection{ID-card keys can not be deleted}
We assume that an attacker is not able to delete or regenerate the key pairs of an ID-card.
\subsection{Only strong cryptosystems with sufficient key lengths are used}
We assume that only strong and standardized cryptosystems are used, which cannot be broken with classical computers. E.g., we assume that it is not possible to deduce the private key from the public key in case the key pair was randomly generated.
Which cryptosystems can be considered strong in the near term (10 years)? In order to answer this question, we rely on the ECRYPT CSA 2018 report~\cite{ECRYPT-CSA}. An excerpt from Table 4.6 of the report is presented in Table~\ref{tab:keylengths}.
\begin{table}[ht]
\begin{center}
\caption{ECRYPT CSA recommended key lengths}
\label{tab:keylengths}
\begin{tabular}{|c|c|}
\hline
Cryptosystem & Minimal output/key length (bits)\\
\hline
\hline
Symmetric (AES) & 256 \\
RSA & 3072\\
ECC & 256\\
Hash function & 256\\
\hline
\end{tabular}
\end{center}
\end{table}
Concerning padding, the ECRYPT CSA report~\cite{ECRYPT-CSA} recommends using RSA-PSS instead of PKCS\#1 v1.5 for new deployments of the RSA signature scheme. The PSS scheme is also standardized for use in JSON Web Algorithms RFC7518~\cite{RFC7518}. Note that the standard only supports the schemes utilizing SHA2-256, SHA2-384 and SHA2-512 hash functions.
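As an illustration only (not part of any component analyzed here), a minimal sketch of RSA-PSS signing with the recommended parameters, using the Python \texttt{cryptography} library:
\begin{verbatim}
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# 3072-bit key, per the ECRYPT CSA recommendation above
key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
message = b"data to be signed"
signature = key.sign(message, pss, hashes.SHA256())
# verify() raises InvalidSignature on failure
key.public_key().verify(signature, message, pss, hashes.SHA256())
\end{verbatim}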
In the case of elliptic curve signature schemes, there is no need for a special padding. Care must be taken that the bit lengths of the hash function output and the ECC signature input match. RFC7518~\cite{RFC7518} specifies three pairs of ECDSA signature scheme curves and hash functions (a minimal code sketch of the first pair follows the list below):
\begin{itemize}
\item ECDSA using P-256 and SHA2-256,
\item ECDSA using P-384 and SHA2-384, and
\item ECDSA using P-521 and SHA2-512.
\end{itemize}
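A minimal sketch of the first pair (ECDSA over P-256 with SHA2-256), again using the Python \texttt{cryptography} library purely for illustration:
\begin{verbatim}
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

key = ec.generate_private_key(ec.SECP256R1())   # NIST P-256
message = b"authentication challenge"
# SHA-256 output (256 bits) matches the P-256 signature input
signature = key.sign(message, ec.ECDSA(hashes.SHA256()))
# verify() raises InvalidSignature on failure
key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
\end{verbatim}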
\subsection{Quantum computers are not available}
We assume that large quantum computers, which could break modern asymmetric cryptosystems, are not available to the attacker. In 2019, the Quantum Threat Timeline Report\footnote{\url{https://globalriskinstitute.org/publications/quantum-threat-timeline/}} was published, containing estimates by 22 experts in the field. Half of them estimated that a quantum computer capable of breaking RSA-2048 will be built within 15 years\footnote{\url{https://quantumcomputingreport.com/our-take/how-many-years-until-a-quantum-computer-can-break-rsa-2048/}}.
\subsection{Attacker with superuser access has complete access}
In case an attacker has infected the client's device and has superuser access, we assume that the attacker has complete access. Thus, the attacker can choose what to draw on the screen, decide which values to send to the card reader, and read the keystrokes of the user.
\subsection{Collision resistance property cannot be broken}
We assume that the attacker is not able to break the collision resistance property of the cryptographic hash function that is used for signing.
\subsection{Second preimage resistance property cannot be broken}
We assume that the attacker is not able to break the second preimage resistance property of the cryptographic hash function that is used for signing.
\subsection{The authentication challenge can not be guessed or predicted}
We assume that the authentication challenge is generated randomly and has sufficient length / entropy.
\subsection{Session cookie is not predictable}
We assume that the session cookie has sufficient entropy and is not predictable.
\subsection{Communication channel is protected by TLS}
The communication channel is secured using TLS 1.2 or a newer TLS version. We do not consider attacks against such channels to be in the scope of this analysis as currently these attacks are not feasible without either compromising the client or the server.
\subsection{Secondary channel to inform the user about card use}
At first sight this seems easy -- just send the user an email, SMS, push notification or the like. On the other hand, there may be some scenarios where this may be discouraged (most notably I-voting, where strong evidence of signature device use can be used for vote selling). In case informing the user were mandatory, the service would have to be centralized, as not all service providers would be able to set up the secondary channel. However, with a centralized service, the privacy of the user becomes an issue. Therefore, the architecture of the functionality has to be well thought through before a decision is made to implement it.
\newpage
\section{Assumptions that cannot be satisfied with current technology}
\label{sec:unsatisfiable_assumptions}
\subsection{Card readers with PIN pad and trusted preview}
Even the officially recommended smart card reader with a PIN-firewalled PIN pad (Gemalto IDBridge CT710\footnote{\url{https://gemcard.ro/wp-content/uploads/2016/11/Gemalto_IDBridgeCT700_CT710_brochure.pdf}}) seems to be unavailable in Estonia. Requesting a trusted preview panel seems even less feasible. Thus, the first step towards not having to trust the computing device would be to make PIN-firewalled PIN pads available to the end-users. Using PIN-firewalled smart card readers would increase the complexity of an attack. As a next step, it should be researched whether smart card readers with trusted preview are available on the market and how they could be integrated into the current software ecosystem.
\subsection{Token Binding}
Token Binding (or an equivalent technology) would mitigate a lot of attacks. However, it does not seem to be universally available across all the browsers and platforms.
\subsection{Browser extension can access details of TLS connection}
Some of the proposed countermeasures for the MITM attack rely on the possibility of accessing the public key of the service provider. Currently, only Firefox allows an extension to query information about the TLS connection (e.g., the public key of the service provider for the current session).
\subsection{Using separate key pairs for authentication, encryption and authorization}
\label{subsec:separatekeypairs}
Currently, the Estonian ID-card contains two key pairs, out of which the second key pair is only used for issuing legally binding signatures. However, the first key pair is used for multiple functionalities like authentication, encryption/decryption, logging into services, opening VPN connections, etc. This is not the best design, as mixed usage scenarios may cause unexpected vulnerabilities.
When the ID-card owner uses his/her card for a longer-period service, say to log onto a terminal, the card has to be in the reader while the user is logged in, and the user is logged out only when the card is removed from the reader. The current Open eID architecture relies on mutually authenticated TLS. Therefore, the Open eID design allows the user to enter PIN1 only once during the authentication phase, after which the card's authentication security environment is left open. Such a design allows using mutually authenticated TLS without constantly asking the user to enter PIN1. The downside of this design is that other applications running on the same device could also use the first key pair while the authentication security environment on the card is open (in the above example, during the whole period of being logged onto the terminal).
However, the Web eID architecture does not rely on mutually authenticated TLS channels. Thus, it is no longer necessary to keep the card's authentication security environment open after the authentication has been completed. Closing the access to the authentication security environment would prevent other applications from using the private key from the first key pair. Still, access to the card's authentication security environment cannot simply be reset, since this can interfere with other legitimate applications that are using that security environment at the same time.
It is an open question how many users would be negatively affected if access to the card's authentication security environment were reset by Web eID. In the long run, it is advised to issue separate key pairs for separate functionalities like authentication, encryption and authorization. With separate key pairs it would be easier to avoid problems arising from the different security requirements of different usage scenarios.
| {
"alphanum_fraction": 0.7907922912,
"avg_line_length": 84.9090909091,
"ext": "tex",
"hexsha": "7619c7ca3eccbf3c729525411428e21697891b33",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d97385ea510a70612183102c529570fc3e2d4766",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "web-eid/web-eid-cybernetica-analysis",
"max_forks_repo_path": "sections/threats-assumptions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d97385ea510a70612183102c529570fc3e2d4766",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "web-eid/web-eid-cybernetica-analysis",
"max_issues_repo_path": "sections/threats-assumptions.tex",
"max_line_length": 851,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d97385ea510a70612183102c529570fc3e2d4766",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "web-eid/web-eid-cybernetica-analysis",
"max_stars_repo_path": "sections/threats-assumptions.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1959,
"size": 9340
} |
% Presentation
\documentclass{beamer}
% Styling
\usetheme{Frankfurt}
% Packages
\usepackage{hyperref}
\usepackage[utf8]{inputenc} % UTF-8 input encoding
\usepackage[english]{babel} % English hyphenation patterns
\usepackage[T1]{fontenc} % this is needed for correct
                         % font encoding in the output PDF
\usepackage{mathtools} % maths
\setbeamerfont{myTOC}{series=\bfseries,size=\normalsize}
\AtBeginSection[]{\frame{\frametitle{Outline}%
\usebeamerfont{myTOC}\tableofcontents[current]}}
% Document
\begin{document}
% Metadata & cover
\title{Development of a payment channel over the Bitcoin network}
\subtitle{Final degree project}
\author{David Lozano Jarque <[email protected]>}
\date{5\textsuperscript{th} July 2017}
\subject{Computer Science}
\titlegraphic{\includegraphics[width=\textwidth,height=.5\textheight,keepaspectratio]{img/bitcoin_logo.png}}
% Cover slide
\frame{\titlepage}
% Introduction
\section{Introduction}
% % What is Bitcoin: First apperance
\subsection{What is Bitcoin}
\begin{frame}{Bitcoin's appearance}
\begin{block}{The creator}
Satoshi Nakamoto @ Cryptography (\url{metzdowd.com})\\
November 1\textsuperscript{st}, 2008
\end{block}
\begin{columns}
\begin{column}{0.5\textwidth}
\begin{center}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/metzdowd_post.png}
\end{center}
\end{column}
\begin{column}{0.5\textwidth}
\begin{center}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/bitcoin_paper.png}
\end{center}
\end{column}
\end{columns}
\end{frame}
% % What is Bitcoin: main specs
\begin{frame}{Bitcoin's definition}
\begin{block}{Definition of Bitcoin}
P2P network that allows payments between users without a trusted third
party
\end{block}
\pause
\begin{block}{Features}
\begin{itemize}[<+->]
\item Public ledger of transactions
\item Public ledger using \textit{blockchain} technology
\item Consensus via \textit{proof-of-work} algorithm
\item Cryptography-enforced (digital ECDSA signatures \& hash functions)
\item No trusted 3rd party (Pure P2P)
\end{itemize}
\end{block}
\end{frame}
% % How does bitcoin work
\subsection{How does Bitcoin work?}
\begin{frame}
\begin{center}
\textbf{How do we move currency?}
\end{center}
\end{frame}
% % % Transactions
\begin{frame}{Transactions}
\begin{block}{What is a Bitcoin transaction?}
Message specifying the transfer of currency units (called \textit{bitcoins})
\end{block}
\pause
\begin{block}{Transaction fields}
A transaction moves currency units from given inputs to new outputs
\begin{itemize}[<+->]
\item \texttt{version}
\item \textbf{\texttt{inputs}}
\item \textbf{\texttt{outputs}}
\item \texttt{locktime}
\end{itemize}
\end{block}
\pause
\begin{exampleblock}{Basic Bitcoin transaction}
\begin{center}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/basic_tx.png}
\end{center}
\end{exampleblock}
\end{frame}
% % % Blocks
\begin{frame}
\begin{center}
\textbf{Where do we store transactions?}
\end{center}
\end{frame}
\begin{frame}{Blocks}
\begin{block}{What is a Bitcoin block?}
Collection of transactions
\end{block}
\pause
\begin{exampleblock}{Basic Bitcoin block}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/basic_block.png}
\end{exampleblock}
\end{frame}
% % % Blockchain
\begin{frame}
\begin{center}
\textbf{Where do we store blocks?}
\end{center}
\end{frame}
\begin{frame}{\textit{Blockchain}}
\uncover<1-2>{
\begin{block}{Bitcoin's \textit{blockchain}}
Distributed and replicated database containing a collection of blocks, each one linked to the previous one using \textbf{their hashes} forming \textbf{a chain}
\end{block}}
\only<2>{
\begin{exampleblock}{Basic Bitcoin's \textit{blockchain}}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/basic_blockchain.png}
\end{exampleblock}}
\only<3>{
\begin{block}{Rewards}
Appending a new block to the chain is rewarded with \textbf{newly generated currency units}, via a \textit{no-input} transaction called
a \textbf{generation transaction}
\end{block}}
\end{frame}
% % % Consensus
\begin{frame}
\begin{center}
\textbf{Who decides who can create next block?}
\end{center}
\end{frame}
\begin{frame}{Consensus}
\begin{block}{\textit{Proof-of-work}}
Piece of data that is difficult to generate but easy to verify against
certain requirements
\end{block}
\pause
\begin{block}{Bitcoin's \textit{proof-of-work}}
A field in the block's header must contain a hash of the block itself whose
value \textbf{is less than a dynamically adjusted value}
\end{block}
\end{frame}
\begin{frame}{\textit{Proof-of-work}}
\begin{exampleblock}{Basic Bitcoin's \textit{blockchain} + \textit{proof-of-work}}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/basic_blockchain_consensus.png}
\end{exampleblock}
\end{frame}
% % % The client
\begin{frame}
\begin{center}
\textbf{How to handle everything?}
\end{center}
\end{frame}
\begin{frame}{The Bitcoin client}
\begin{block}{A Bitcoin client}
Software that allows one to operate on the Bitcoin network, handling all data structures and network messages
\end{block}
\pause
\begin{block}{Features}
\begin{enumerate}[<+->]
\item Receives and broadcasts messages (transactions, blocks, ...)
\item Stores and shares the \textit{blockchain}
\item Handles keys and creates payment transactions
\end{enumerate}
\pause
*Feature (2) only in \textbf{full nodes}
\end{block}
\pause
\begin{exampleblock}{Most used client}
Bitcoin Core (\url{bitcoin.org}) is the most used Bitcoin client (\textbf{85\%} of nodes in the network)
\end{exampleblock}
\end{frame}
% % % The problem: scalability
\begin{frame}
\begin{center}
\textbf{What is the limit of the technology?}
\end{center}
\end{frame}
\subsection{The scalability problem}
\begin{frame}{Blockchain size}
\uncover<1-2>{
\begin{center}
Blockchain size over time (KBytes/years)\\
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/chart_blockchain_size.png}
\end{center}
}
\only<2>{
\begin{center}
Current blockchain size is approximately \textbf{120GB}
\end{center}
}
\only<3>{
\begin{alertblock}{Increasing transaction demand}
As Bitcoin becomes more popular, more users arrive and therefore more transactions need to be processed
\end{alertblock}}
\end{frame}
\begin{frame}{Transaction throughput}
\begin{block}{Throughput limits}
Because of the protocol, blocks must
\pause
\begin{enumerate}[<+->]
\item \textbf{Appear every 10 minutes} (approximately) due to \textit{proof-of-work} difficulty adjustment
\item \textbf{1MB maximum block size} to control the \textit{blockchain} growth rate
\end{enumerate}
\end{block}
\end{frame}
\begin{frame}{Transaction throughput}
\begin{center}
Transactions per block over time (tx amount/years)\\
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/chart_block_transactions.png}
\end{center}
\pause
\begin{center}
Approximately \textbf{2,000} transactions per block
\end{center}
\end{frame}
\begin{frame}{Transaction throughput}
\begin{block}{Bitcoin's transaction throughput}
Using previous information:
\pause
\begin{displaymath}
\frac{2{,}000\ tx}{1\ block} \times \pause
\frac{1\ block}{10\ minutes} \times \pause
\frac{1\ minute}{60\ sec.} \approx \pause
\end{displaymath}
\begin{center}
\textbf{3 transactions per second}
\end{center}
\end{block}
\pause
\begin{block}{VISA's transaction throughput}
According to an IBM study performed in August 2010:\pause
\begin{center}
\textbf{24,000 transactions per second}
\end{center}
\end{block}
\end{frame}
\begin{frame}
\begin{center}
\textbf{What can we do?}
\end{center}
\end{frame}
\begin{frame}{Scalability solutions}
\begin{center}
Several solutions have been proposed:\\
\pause
\begin{enumerate}[<+->]
\item \textbf{Increase block size}: Bitcoin Unlimited (1 to 8 MB)
\item \textbf{Reduce transaction size}: \url{SegWit.co} (moves transaction signatures out of the base transaction data, which also fixes malleability issues)
\item \textbf{Decrease the demand for on-chain transactions}: Payment channels
\end{enumerate}
\end{center}
\end{frame}
\section{Bitcoin \& Smart Contracts}
\subsection{Transactions at low-level detail}
\begin{frame}{Transactions}
\uncover<1-5>{
\begin{block}{Transaction fields}
Fields of a transaction are:
\begin{itemize}[<+->]
\item \texttt{version}
\item \textbf{\texttt{inputs}}
\item \textbf{\texttt{outputs}}
\item \texttt{locktime}
\end{itemize}
\end{block}}
\only<5>{
\begin{exampleblock}{Basic Bitcoin transaction}
\begin{center}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/basic_tx.png}
\end{center}
\end{exampleblock}}
\only<6>{
\begin{block}{Extra "fields"}
All transactions have an id (also called \textit{txId}), which is the
double SHA-256 hash of the serialized transaction bytes
\end{block}}
\end{frame}
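\begin{frame}[fragile]{Computing a \textit{txId} (illustrative sketch)}
\begin{block}{A minimal sketch}
Assuming the serialized transaction bytes are available, the \textit{txId} is their double SHA-256 hash, conventionally displayed in reversed byte order:
\end{block}
\begin{verbatim}
import hashlib

def txid(raw_tx: bytes) -> str:
    # double SHA-256 of the serialized transaction
    digest = hashlib.sha256(
        hashlib.sha256(raw_tx).digest()).digest()
    # reverse byte order for the conventional display form
    return digest[::-1].hex()
\end{verbatim}
\end{frame}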
\begin{frame}
\begin{center}
\textbf{How are inputs and outputs specified?}
\end{center}
\end{frame}
\begin{frame}{Inputs specification}
\begin{block}{Input fields}
An input consists of the following fields:
\pause
\begin{enumerate}[<+->]
\item \textbf{previousOutput*}: An output to be spent (combination of a \textit{txId} and output number)
\item \textbf{scriptSig}: Script necessary to authorize the output spend
\item \textbf{sequence}: Sequence number of the input, used to enable transaction replacement
\end{enumerate}
\pause
* the output must not be spent by any other transaction (i.e., it is an unspent transaction output, UTXO)
\end{block}
\end{frame}
\begin{frame}{Inputs specification}
\begin{exampleblock}{Basic transaction's input's fields}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/tx_input.png}
\end{exampleblock}
\end{frame}
\begin{frame}{Outputs specification}
\begin{block}{Output fields}
An output consists of the following fields:
\pause
\begin{enumerate}[<+->]
\item \textbf{value}: number of currency units to be sent to the output
\item \textbf{scriptPubKey}: Script specifying the conditions under which the output can be spent
\end{enumerate}
\end{block}
\pause
\begin{exampleblock}{Basic transaction's output's fields}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/tx_output.png}
\end{exampleblock}
\end{frame}
\begin{frame}
\begin{center}
\textbf{How do the scripts work?}
\end{center}
\end{frame}
\subsection{Bitcoin's scripting language}
\begin{frame}{Bitcoin's scripting}
\begin{block}{Bitcoin scripting language}
Specific scripting language for the Bitcoin protocol (used in transactions)
\begin{itemize}[<+->]
\item Simple
\item Stack-based (processed from left to right)
\item Purposefully not Turing-complete (with no loops)
\end{itemize}
\end{block}
\pause
\begin{block}{Technically}
Sequentially read 1-byte opcodes that can perform arithmetic operations, push data onto the stack, run cryptographic operations, and execute some logic and flow-control operations
\end{block}
\end{frame}
\begin{frame}{Transactions and scripts}
\begin{block}{Transactions validity}
In order to be valid, a transaction must have:
\begin{enumerate}[<+->]
\item \textbf{Valid inputs}: Inputs must refer to existing and non-spent outputs (UTXO)
\item \textbf{Valid amounts}: Outputs' amounts must be less than or equal to the inputs' amounts
\item \textbf{Valid scripts}: The input script followed by the output script referred to by the input must execute successfully and leave a non-empty stack
\end{enumerate}
\end{block}
\end{frame}
\begin{frame}{Standard scripts: P2PKH}
\begin{block}{P2PKH: \textit{pay-to-public-key-hash}}
The output script (\textit{scriptPubKey}) requires the input script (\textit{scriptSig}) to specify a public key whose hash matches the specified one, together with a signature of the spending transaction made with that public key
\end{block}
\pause
\begin{exampleblock}{P2PKH sample}
\begin{itemize}
\visible<3->{\item \textbf{scriptSig}: \texttt{<signature> <pubKey>}}
\visible<2->{\item \textbf{scriptPubKey}: \texttt{OP\_DUP OP\_HASH160 <pubKeyHash> OP\_EQUALVERIFY OP\_CHECKSIG}}
\end{itemize}
\end{exampleblock}
\end{frame}
\begin{frame}{Standard scripts: P2SH}
\begin{block}{P2SH: \textit{pay-to-script-hash}}
The output script (\textit{scriptPubKey}) requires the input script (\textit{scriptSig}) to specify a \textbf{redeem script} that successfully executes and whose hash matches the specified one
\end{block}
\pause
\begin{exampleblock}{P2SH sample}
\begin{itemize}
\visible<3->{\item \textbf{scriptSig}: \texttt{[<data>] <redeemScript>}}
\visible<2->{\item \textbf{scriptPubKey}: \texttt{OP\_HASH160 <redeemScript\_hash> OP\_EQUAL}}
\end{itemize}
\end{exampleblock}
\end{frame}
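\begin{frame}[fragile]{Checking the script hash (illustrative sketch)}
\begin{block}{A minimal sketch}
Both P2PKH and P2SH rely on \texttt{OP\_HASH160}, i.e.\ RIPEMD-160 over SHA-256. A Python sketch of the hash check (assumes the OpenSSL build exposes RIPEMD-160):
\end{block}
\begin{verbatim}
import hashlib

def hash160(data: bytes) -> bytes:
    # OP_HASH160 semantics: RIPEMD160(SHA256(data))
    return hashlib.new("ripemd160",
                       hashlib.sha256(data).digest()).digest()

def p2sh_matches(redeem_script: bytes,
                 script_hash: bytes) -> bool:
    # a node accepts the redeemScript only on a hash match
    return hash160(redeem_script) == script_hash
\end{verbatim}
\end{frame}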
\begin{frame}{Smart Contracts}
\begin{block}{Smart Contracts}
Computer protocols intended to facilitate, verify or enforce the negotiation or performance of a contract
\end{block}
\pause
\begin{block}{Smart Contracts in Bitcoin}
Creating \textit{redeemScripts} that are redeemed using P2SH script sets in transactions.
\pause
\begin{center}
\textbf{\textit{redeemScripts} are Bitcoin's smart contracts}
\end{center}
\end{block}
\end{frame}
\begin{frame}
\begin{center}
\textbf{What can we do with Smart Contracts?}\\
\pause
\huge\textbf{Payment channels}
\end{center}
\end{frame}
\subsection{What is a payment channel?}
\begin{frame}{What is a Payment channel?}
\begin{block}{Payment channel}
Set of techniques designed to allow users to make multiple Bitcoin transactions without committing all of them to the Bitcoin block chain
\end{block}
\pause
\begin{block}{\textit{Off-chain} transactions}
Bitcoin transactions that are not committed to the Bitcoin blockchain but would be valid if they were committed
\end{block}
\end{frame}
\begin{frame}{Payment Channel basic scheme}
\begin{block}{Scheme}
All payment channels follow a basic scheme:
\pause
\begin{enumerate}[<+->]
\item \textbf{Funding:} Some funds are locked so they can be moved with payments during the channel operation
\item \textbf{Payment:} Locked funds are moved to pay to a party of the channel
\item \textbf{Closure:} Funds are unlocked and returned to the channel parties with the final balance after all payments
\end{enumerate}
\end{block}
\pause
\begin{block}{Which transactions are \textit{off-chain}?}
\begin{center}
All \textbf{payment transactions are \textit{off-chain}}
\end{center}
\end{block}
\end{frame}
\section{Unidirectional payment channels}
\begin{frame}
\begin{center}
\textbf{What does a unidirectional payment channel allow us to do?}\\
\end{center}
\end{frame}
\begin{frame}{Unidirectional payment channel}
\begin{block}{What does it allow us to do?}
Incrementally pay amounts of funds from one party to another
\end{block}
\pause
\begin{exampleblock}{For instance...}
We will create a channel to allow \textbf{Alice} to pay \textbf{Bob} incremental amounts of funds
\end{exampleblock}
\end{frame}
\subsection{Scheme}
\begin{frame}{Locking funds}
\begin{block}{What do we need to do?}
Lock funds into the channel so that:
\pause
\begin{enumerate}[<+->]
\item \textbf{Both} must authorize a payment:
\begin{itemize}[<+->]
\item Alice must want to pay some amount to Bob (Bob cannot pay himself)
\item Bob must authorize payments in order to check that funds are sent to him (and not to Alice)
\end{itemize}
\item \textbf{Refunds} must be possible if a party does not cooperate
\end{enumerate}
\end{block}
\pause
\begin{block}{How to refund}
Lock the funds for an amount of time, so after that time (called the \textit{channel expiry time}) the funds are given back to the funder
\end{block}
\end{frame}
\begin{frame}{Locking funds}
\begin{block}{Ways to lock funds}
To lock the funds while satisfying both properties, we can either:
\pause
\begin{enumerate}[<+->]
\item Create a \textbf{funding transaction} and a time-locked \textbf{refund transaction}
\item Create a \textbf{\textit{smart} funding transaction} with
the time-lock integrated in the \textit{smart contract}
\end{enumerate}
\end{block}
\end{frame}
\begin{frame}{Paying funds}
\begin{block}{What do we need to do?}
Since both users must authorize payments, to create a payment transaction:
\pause
\begin{enumerate}[<+->]
\item \textbf{Alice} creates and signs a transaction paying some of the locked funds to \textbf{Bob} (and the rest to Alice as return)
\item \textbf{Bob} stores the partially signed transaction that pays some amount of money to him
\item If \textbf{Alice} wants to pay more, she repeats the first step with more funds (spending the same funding transaction)
\end{enumerate}
\end{block}
\pause
\begin{block}{Replace by economic incentive}
Bob will \textbf{keep the latest payment transaction} and discard previous ones, as \textbf{the latest will be the one that pays him the most}
\end{block}
\end{frame}
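\begin{frame}[fragile]{Payment updates as code (illustrative sketch)}
\begin{block}{A minimal sketch}
Hypothetical bookkeeping only, no real transactions: it mirrors the rule that Bob only keeps states paying him more, up to the locked funds:
\end{block}
\begin{verbatim}
class UnidirectionalChannel:
    def __init__(self, funded):
        self.funded = funded       # locked at funding time
        self.paid_to_bob = 0       # latest off-chain state

    def pay(self, new_total_for_bob):
        # Bob only accepts states that pay him more
        if not (self.paid_to_bob < new_total_for_bob
                <= self.funded):
            raise ValueError("invalid payment update")
        self.paid_to_bob = new_total_for_bob

    def close(self):
        # on-chain settlement of the final balance
        return {"bob": self.paid_to_bob,
                "alice": self.funded - self.paid_to_bob}
\end{verbatim}
\end{frame}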
\begin{frame}{Closure}
\begin{block}{What do we need to do?}
Two situations can appear when closing the channel:
\begin{enumerate}[<+->]
\item \textbf{Graceful closure}: the channel has been operated and the expiry time is close, so the \textbf{latest payment transaction is broadcast}, spending the funding transaction and closing the channel.
\item \textbf{No cooperation}: if Bob disappears, Alice will \textbf{broadcast a refund transaction} to recover the locked funds
\end{enumerate}
\end{block}
\end{frame}
\subsection{Implementation}
\begin{frame}{Locking the funds}
\begin{block}{Ways to lock funds}
To lock the funds while satisfying both properties, we can either:
\begin{enumerate}
\item Create a \textbf{funding transaction} and a time-locked \textbf{refund transaction}
\item Create a \textbf{\textit{smart} funding transaction} with
the time-lock integrated in the \textit{smart contract}
\end{enumerate}
\end{block}
\pause
\begin{block}{The implementation}
With \textbf{BIP-65}, an opcode (\texttt{OP\_CHECKLOCKTIMEVERIFY}) appeared to create time-locked smart contracts, so we can create a \textbf{\textit{smart} funding transaction} with the time lock integrated
\end{block}
\end{frame}
\begin{frame}{Funding transaction}
\begin{exampleblock}{Funding transaction}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/unidir_tx_funding.png}
\end{exampleblock}
\end{frame}
\begin{frame}{Funding transaction}
\begin{exampleblock}{Funding smart contract}
As we said, we need to design a \textit{redeemScript} in order to create a Bitcoin smart contract:
\pause
\begin{center}
\texttt{OP\_IF <time>}\\
\texttt{OP\_CHECKLOCKTIMEVERIFY OP\_DROP}\\
\texttt{<PubKeyAlice\_1> OP\_CHECKSIG}\\
\texttt{OP\_ELSE}\\
\texttt{OP\_2 <PubKeyAlice\_2> <PubKeyBob> OP\_2 OP\_CHECKMULTISIG OP\_ENDIF}
\end{center}
\end{exampleblock}
\pause
\begin{exampleblock}{Technically...}
As we are creating a P2SH, the output script must be:
\begin{center}
\texttt{OP\_HASH160 <redeemScript\_hash> OP\_EQUAL}
\end{center}
\end{exampleblock}
\end{frame}
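\begin{frame}[fragile]{Assembling the \textit{redeemScript} (illustrative sketch)}
\begin{block}{A minimal sketch}
Hypothetical byte-level assembly of the contract above, using the standard opcode values; data shorter than 76 bytes is pushed with a single length byte:
\end{block}
\begin{verbatim}
OP_IF, OP_ELSE, OP_ENDIF = 0x63, 0x67, 0x68
OP_CLTV, OP_DROP = 0xb1, 0x75
OP_2, OP_CHECKSIG, OP_CHECKMULTISIG = 0x52, 0xac, 0xae

def push(data: bytes) -> bytes:
    assert len(data) < 76      # short pushes only
    return bytes([len(data)]) + data

def funding_redeem_script(locktime, pk_a1, pk_a2, pk_bob):
    return (bytes([OP_IF]) + push(locktime)
            + bytes([OP_CLTV, OP_DROP]) + push(pk_a1)
            + bytes([OP_CHECKSIG, OP_ELSE, OP_2])
            + push(pk_a2) + push(pk_bob)
            + bytes([OP_2, OP_CHECKMULTISIG, OP_ENDIF]))
\end{verbatim}
\end{frame}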
\begin{frame}{Paying funds}
\begin{block}{What do we need to do?}
Since both users must authorize payments, to create a payment transaction:
\begin{enumerate}
\item \textbf{Alice} creates and signs a transaction paying some of the locked funds to \textbf{Bob} (and the rest to Alice as return)
\item \textbf{Bob} stores the partially signed transaction that pays some amount of money to him
\item If \textbf{Alice} wants to pay more, she repeats the first step with more funds (spending the same funding transaction)
\end{enumerate}
\end{block}
\pause
\begin{block}{The implementation}
Alice creates a transaction that spends the funding transaction, with two outputs: one with some amount for Bob and the rest for herself
\end{block}
\end{frame}
\begin{frame}{Payment transaction}
\begin{exampleblock}{Payment transaction}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/unidir_tx_payment.png}
\end{exampleblock}
\end{frame}
\begin{frame}{Payment transaction}
\begin{exampleblock}{Spending funding smart contract}
We now need to spend the \textit{redeemScript}
\pause
\begin{center}
\texttt{OP\_0 <sig\_Alice> <sig\_Bob> OP\_0}
\end{center}
\end{exampleblock}
\begin{exampleblock}{Technically...}
As we are spending a P2SH, the input script must be:
\begin{center}
\texttt{OP\_0 <sig\_Alice> <sig\_Bob> OP\_0 <redeemScript>}
\end{center}
\end{exampleblock}
\end{frame}
\begin{frame}{Closure transaction}
\begin{block}{What do we need to do?}
Two situations can appear when closing the channel:
\begin{enumerate}[<+->]
\item \textbf{Graceful closure}: the channel has been operated and the expiry time is close, so the \textbf{latest payment transaction is broadcast}, spending the funding transaction and closing the channel.
\item \textbf{No cooperation}: if Bob disappears, Alice will \textbf{broadcast a refund transaction} to recover the locked funds
\end{enumerate}
\end{block}
\pause
\begin{exampleblock}{Graceful closure}
Bob simply broadcasts the latest payment transaction once signed and before the channel expiry time
\end{exampleblock}
\end{frame}
\begin{frame}{Closure transaction}
\begin{exampleblock}{Closure transaction (refund)}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/unidir_tx_refund.png}
\end{exampleblock}
\end{frame}
\begin{frame}{Closure transaction}
\begin{exampleblock}{Spending funding smart contract (refund)}
We now need to spend the \textit{redeemScript} after the lock time
\pause
\begin{center}
\texttt{<sig\_Alice> OP\_1}
\end{center}
\end{exampleblock}
\pause
\begin{exampleblock}{Technically...}
As we are spending a P2SH, the input script must be:
\begin{center}
\texttt{<sig\_Alice> OP\_1 <redeemScript>}
\end{center}
\end{exampleblock}
\end{frame}
\begin{frame}
\begin{center}
\textbf{What if we want Bob to pay Alice too?}
\end{center}
\end{frame}
\section{Bidirectional payment channels}
\begin{frame}{Bidirectional payment channel}
\begin{block}{What does it allow us to do?}
Incrementally pay amounts of funds from one party to another \textbf{and vice versa}
\end{block}
\pause
\begin{exampleblock}{For instance...}
We will create a channel to allow \textbf{Alice} to pay \textbf{Bob} incremental amounts of funds \textbf{and vice versa}
\end{exampleblock}
\end{frame}
\subsection{Scheme}
\begin{frame}{Bidirectional payment channels' scheme}
\begin{block}{Source}
Obtained from
\begin{quote}
A Fast and Scalable Payment Network with Bitcoin Duplex Micropayment Channels - \textit{Christian Decker \& Roger Wattenhofer}
\end{quote}
\end{block}
\pause
\begin{block}{Idea}
Use \textbf{two unidirectional channels}, one in each direction, with an \textbf{invalidation tree} to perform resets
\end{block}
\end{frame}
\subsection{Implementation}
\begin{frame}{Locking the funds}
\begin{block}{Ways to lock funds}
To lock the funds while satisfying both properties, we can either:
\begin{enumerate}
\item Create a \textbf{funding transaction} and a time-locked \textbf{refund transaction}
\item Create a \textbf{\textit{smart} funding transaction} with
the time-lock integrated in the \textit{smart contract}
\end{enumerate}
\end{block}
\pause
\begin{block}{The implementation}
We can still use \textbf{BIP-65} to create a time-locking smart contract
\end{block}
\end{frame}
\begin{frame}{Funding transaction}
\begin{exampleblock}{Funding transaction}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/bidir_tx_funding.png}
\end{exampleblock}
\end{frame}
\begin{frame}{Funding transaction}
\begin{exampleblock}{Funding smart contract}
Same as in the unidirectional channel, but with two outputs
\pause
\begin{center}
\begin{enumerate}[<+->]
\item \textbf{Alice to Bob output}\\
\small{\texttt{OP\_IF <time> OP\_CHECKLOCKTIMEVERIFY OP\_DROP <PubKeyAlice\_1> OP\_CHECKSIG OP\_ELSE OP\_2 <PubKeyAlice\_2> <PubKeyBob\_1> OP\_2 OP\_CHECKMULTISIG OP\_ENDIF}}
\item \textbf{Bob to Alice output}\\
\small{\texttt{OP\_IF <time> OP\_CHECKLOCKTIMEVERIFY OP\_DROP <PubKeyBob\_2> OP\_CHECKSIG OP\_ELSE OP\_2 <PubKeyAlice\_3> <PubKeyBob\_3> OP\_2 OP\_CHECKMULTISIG OP\_ENDIF}}
\end{enumerate}
\end{center}
\end{exampleblock}
\pause
\begin{exampleblock}{Technically...}
As we are creating a P2SH, the outputs' scripts must be:
\begin{center}
\texttt{OP\_HASH160 <redeemScript\_hash> OP\_EQUAL}
\end{center}
\end{exampleblock}
\end{frame}
\begin{frame}{Paying funds}
\begin{block}{What do we need to do?}
Since both users must authorize payments, to create a payment transaction:
\begin{enumerate}
\item \textbf{Alice} creates and signs a transaction paying some of the locked funds to \textbf{Bob} (and the rest to Alice as return)
\item \textbf{Bob} stores the partially signed transaction that pays some amount of money to him
\item If \textbf{Alice} wants to pay more, she repeats the first step with more funds (spending the same funding transaction)
\end{enumerate}
\end{block}
\pause
\begin{block}{The implementation}
Same as in a unidirectional payment channel, but Bob can also pay Alice using his channel
\end{block}
\end{frame}
\begin{frame}{Payment transaction}
\begin{exampleblock}{Payment transaction}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/bidir_tx_payment.png}
\end{exampleblock}
\end{frame}
\begin{frame}{Payment transaction}
\begin{exampleblock}{Spending funding smart contract}
We now need to spend the \textit{redeemScript}
\pause
\begin{center}
\begin{enumerate}[<+->]
\item \textbf{Alice to Bob output}\\
\small{\texttt{OP\_0 <sig\_Alice> <sig\_Bob> OP\_0}}
\item \textbf{Bob to Alice output}\\
\small{\texttt{OP\_0 <sig\_Alice> <sig\_Bob> OP\_0}}
\end{enumerate}
\end{center}
\end{exampleblock}
\begin{exampleblock}{Technically...}
As we are spending a P2SH, the input script must be:
\begin{center}
\texttt{OP\_0 <sig\_Alice> <sig\_Bob> OP\_0 <redeemScript>}
\end{center}
\end{exampleblock}
\end{frame}
\begin{frame}{Closure transaction}
\begin{block}{What do we need to do?}
Two situations can appear when closing the channel:
\begin{enumerate}
\item \textbf{Graceful closure}: the channel has been operated and the expiry time is close, so the \textbf{latest payment transaction \underline{of each output} is broadcast}, spending the funding transaction and closing the channel.
\item \textbf{No cooperation}: if any of the parties does not cooperate, the others can \textbf{broadcast a refund transaction} to recover their locked funds
\end{enumerate}
\end{block}
\pause
\begin{exampleblock}{Graceful closure}
Alice and Bob simply broadcast the latest payment transactions once signed and before the channel expiry time
\end{exampleblock}
\end{frame}
\begin{frame}{Closure transaction}
\begin{exampleblock}{Closure transaction (refund)}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/bidir_tx_refund.png}
\end{exampleblock}
\end{frame}
\begin{frame}{Closure transaction}
\begin{exampleblock}{Spending funding smart contract (refund)}
We now need to spend the \textit{redeemScript} after the lock time
\pause
\begin{center}
\begin{enumerate}[<+->]
\item \textbf{Alice to Bob output refund}\\
\texttt{<sig\_Alice> OP\_1}
\item \textbf{Bob to Alice output refund}\\
\texttt{<sig\_Bob> OP\_1}
\end{enumerate}
\end{center}
\end{exampleblock}
\pause
\begin{exampleblock}{Technically...}
As we are spending a P2SH, the input script must be:
\begin{center}
\texttt{<sig\_Alice|Bob> OP\_1 <redeemScript>}
\end{center}
\end{exampleblock}
\end{frame}
\subsection{Problem: channel resetting}
\begin{frame}
\begin{center}
\textbf{What if one of the payment channels gets exhausted?}\\
\pause
\huge\textbf{Channel resetting}
\end{center}
\end{frame}
\begin{frame}{Channel resetting}
\begin{exampleblock}{A simple reset example}
\begin{center}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/bidir_reset.png}
\end{center}
\end{exampleblock}
\end{frame}
\begin{frame}{Channel resetting}
\begin{alertblock}{Channels are exhausted}
Both parties own the same amount of funds as at the beginning of the channel but their respective payment channels have been exhausted. No more incremental payments can be performed
\end{alertblock}
\end{frame}
\begin{frame}{Resetting by invalidation trees}
\begin{block}{Invalidation tree}
Tree of transactions that uses the timelock field to invalidate old branches of the tree, making it possible to create new branches with an updated status of the balances
\end{block}
\pause
\begin{block}{Replace by timelock}
Create timelocked transactions such that transactions with timelocks nearer to the present invalidate those with later timelocks, since they can be broadcast earlier
\end{block}
\end{frame}
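\begin{frame}[fragile]{Replace by timelock (illustrative sketch)}
\begin{block}{A minimal sketch}
Hypothetical numbers: each reset gets a locktime $\Delta$ closer to the present, so the newest branch can be broadcast first and effectively invalidates the older ones:
\end{block}
\begin{verbatim}
def reset_locktimes(channel_expiry, delta, resets):
    # the k-th reset gets a locktime delta*k closer to now
    return [channel_expiry - delta * k
            for k in range(resets + 1)]

# e.g. expiry at block 500000, delta of 144 blocks:
# [500000, 499856, 499712, ...]  (newest branch wins)
\end{verbatim}
\end{frame}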
\begin{frame}{An invalidation tree reset example}
\begin{exampleblock}{Reset by adding a new leaf}
\begin{center}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/bidir_tree.png}
\end{center}
\end{exampleblock}
\end{frame}
\begin{frame}{An invalidation tree reset example}
\begin{exampleblock}{Different funding}
\begin{center}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/bidir_tree_fund.png}
\end{center}
\end{exampleblock}
\end{frame}
\begin{frame}{An invalidation tree reset example}
\begin{exampleblock}{Reset by branching}
\begin{center}
\includegraphics[width=\textwidth, height=0.8\textheight, keepaspectratio]{img/bidir_tree_expanded.png}
\end{center}
\end{exampleblock}
\end{frame}
\begin{frame}{Differences}
\begin{block}{Basic duplex channel vs Resettable duplex channel}
\begin{itemize}[<+->]
\item \textbf{More complex use of BIP-65*}: as the tree requires linking single P2SH outputs, using BIP-65 to create a timelock contract is more complex to implement\\
\pause
\small{*this would require generating two outputs and inputs in each first tree node, with all the required data}
\pause
\item \textbf{More transactions needed:} in order to create the tree (care must be taken with the signing order of all parties to prevent attacks)
\item \textbf{Reduced expiry time:} each tree branch reduces the channel's effective expiry time
\end{itemize}
\end{block}
\end{frame}
\begin{frame}{Pros and cons}
\begin{exampleblock}{Pros}
\begin{itemize}[<+->]
\item \textbf{Simple to create}: no complex transactions needed, unlike the \textit{Lightning Network} smart contracts
\item \textbf{No extra data exchange}: unlike the \textit{Lightning Network}, the protocol does not require exchanging secrets or additional data
\end{itemize}
\end{exampleblock}
\pause
\begin{alertblock}{Cons}
\begin{itemize}[<+->]
\item \textbf{Reducing expiry time}: the more resets needed, the more the effective expiry time is reduced (more invalidating branches and leaves)
\item \textbf{Need to store more transactions}: in other solutions for duplex payment channel, like the \textit{Lightning Network}, just the latest payment transaction must be saved, and not an entire tree.
\end{itemize}
\end{alertblock}
\end{frame}
\section{The Bitcoin framework}
\begin{frame}{Developing problems}
\begin{alertblock}{Problems when implementing the channel}
\begin{itemize}[<+->]
\item \textbf{Lack of documentation}: Bitcoin lacks good-quality, low-level protocol implementation documentation. The most accurate information is spread across Q\&A sites, the \textit{Bitcoin Wiki} and \textit{Bitcoin Core's} C++ code
\item \textbf{Lack of low-level, documented libraries}: there are very few libraries that handle the Bitcoin protocol's complexities (we found no library to create raw transaction signatures for a customized transaction)
\end{itemize}
\end{alertblock}
\end{frame}
\begin{frame}{Our Bitcoin framework}
\begin{exampleblock}{Solution: our own Bitcoin framework}
Everything we* learned was implemented in our own Bitcoin framework, which has:
\begin{itemize}[<+->]
\item \textbf{Designed for ease of use:} Design \& software design patterns
\item \textbf{OOP and puzzle-friendliness principles:} Modular and serializable/deserializable patterns
\item \textbf{Extensive documentation:} Every method is well documented
\item \textbf{Extensively tested:} All code has been tested against other libraries \& the \textit{Bitcoin Core} client
\end{itemize}
\end{exampleblock}
\uncover<2->{\begin{center}
*developed alongside Carlos González Cebrecos
\end{center}}
\end{frame}
\begin{frame}{Channel implementation}
\begin{block}{Fork of the Bitcoin framework}
The channel was implemented in a script after forking the framework and can be operated from the CLI passing the required parameters (funds amount, pub/priv keys, previous inputs, ...)
\end{block}
\pause
\begin{alertblock}{Channel lacks ease of use}
Because we focused on the \textbf{channel protocol's design to enhance security}, there was no time left to automate the operability of the channel:
\begin{itemize}[<+->]
\item \textbf{Bitcoin Core RPC:} to automate transaction broadcasting, UTXO detection, balance detection, fee calculation, ...
\item \textbf{Channel state storage:} automatically store in the user's computer the state of the channel
\item \textbf{Graphical UI:} let every Bitcoin user enjoy the payment channels' potential
\end{itemize}
\end{alertblock}
\end{frame}
\section{Conclusions}
\begin{frame}
Throughout this project, I have learned:
\begin{itemize}[<+->]
\item \textbf{Low-level understanding of the Bitcoin protocol}: by learning how to implement Smart Contracts on Bitcoin and creating the framework to ease the creation of those smart contracts
\item \textbf{Bitcoin lacks extensive low-level documentation}: developing on Bitcoin has no formal, detailed low-level guide, and most advanced features can only be learned by inspecting the \textit{Bitcoin Core} C++ code
\item \textbf{Payment Channels are the future of Bitcoin}: maybe the \textit{Lightning Network} has a better structure and protocol implementation, but what is crystal clear is that multi-hop duplex payment channels are Bitcoin's future once \textit{SegWit.co} activates on the \textit{mainnet}, allowing a secure implementation of them.
\end{itemize}
\end{frame}
\begin{frame}
\begin{center}
\textbf{\huge{Thanks for your time and attention}}\\~\\
\pause
\huge{Q\&A round}
\end{center}
\end{frame}
\begin{frame}{For more information}
\begin{block}{The project work compilation}
\begin{center}
Documentation:\\
\url{https://davidlj95.com/smart-payment-channel}\\
Code:\\
\url{https://github.com/davidlj95/smart-payment-channel}
\end{center}
\end{block}
\pause
\begin{block}{The Bitcoin framework}
\begin{center}
\url{https://github.com/uab-projects/btc-payment-channels}\\
\pause
Test it!:\\
\texttt{pip install bitcoin-framework}
\end{center}
\end{block}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.74247454,
"avg_line_length": 39.4079822616,
"ext": "tex",
"hexsha": "70dcf10c24c142a16a6d2c03e81026e506c7cd54",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-03-23T09:37:40.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-03-23T09:37:40.000Z",
"max_forks_repo_head_hexsha": "51acef689eca781cce18d72c167bb0dcfe8cf679",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "davidlj95/smart-payment-channel",
"max_forks_repo_path": "presentation/presentation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "51acef689eca781cce18d72c167bb0dcfe8cf679",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "davidlj95/smart-payment-channel",
"max_issues_repo_path": "presentation/presentation.tex",
"max_line_length": 339,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "51acef689eca781cce18d72c167bb0dcfe8cf679",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "davidlj95/smart-payment-channel",
"max_stars_repo_path": "presentation/presentation.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-23T09:37:30.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-12-24T07:43:51.000Z",
"num_tokens": 9951,
"size": 35546
} |
\clearpage
\section{Applying HDM on Testnet}
\nblink{brats/22a\_testnet\_hdm\_circles\_fixed.ipynb}
This chapter shows the Hausdorff Distance Masks method applied to three examples from the testnet.
\subsection{Results}
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.28\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/06_hdm/testnet/0.png}
\caption{Ground truth segment}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.34\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/06_hdm/testnet/2.png}
\caption{Regions where the applied masks reduce the accuracy of the segmentation}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.36\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/06_hdm/testnet/3.png}
\caption{Regions where the applied masks increase the accuracy of the segmentation}
\end{subfigure}
\caption{Visualization (b) and visualization (c) show that both the segmentation region itself and the circle are important for the correct segmentation.}
\label{hdm_testnet_1}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.28\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/06_hdm/testnet/4.png}
\caption{Ground truth segment}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.34\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/06_hdm/testnet/6.png}
\caption{Regions where the applied masks reduce the accuracy of the segmentation}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.34\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/06_hdm/testnet/7.png}
\caption{Regions where the applied masks increase the accuracy of the segmentation}
\end{subfigure}
\caption{Similar to above, visualizations (b) and (c) show that both the segmentation region itself and the square are important for the correct segmentation.}
\label{hdm_testnet_2}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.28\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/06_hdm/testnet/8.png}
\caption{Ground truth segment}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.34\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/06_hdm/testnet/10.png}
\caption{Regions where the applied masks reduce the accuracy of the segmentation}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.34\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/06_hdm/testnet/11.png}
\caption{Regions where the applied masks increase the accuracy of the segmentation}
\end{subfigure}
\caption{In this sample, the output looks similar to the two figures above, with slightly different positions for the masks.}
\label{hdm_testnet_3}
\end{figure}
\subsection{Discussion}
All three figures (Figure \ref{hdm_testnet_1}, Figure \ref{hdm_testnet_2} and Figure \ref{hdm_testnet_3}) show the importance not only of the segmented region itself (cross/triangle symbol), but also of the symbol on the right, which decides which symbol on the left should be segmented. With this dataset, masking parts of the image does sometimes increase the accuracy (the green visualizations in the figures), but the corresponding decrease of the Hausdorff distance to the ground truth is quite small in comparison to the increase in distance when the accuracy goes down (red visualizations).
As expected for this dataset, the edges and corners of the right symbols are very important for generating the correct output segment.
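For reference, a minimal sketch of the symmetric Hausdorff distance between two binary segmentation masks, assuming SciPy is available (the implementation used in the notebooks may differ):
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    # binary masks -> sets of pixel coordinates
    a, b = np.argwhere(mask_a), np.argwhere(mask_b)
    # symmetric Hausdorff distance: max over both directions
    return max(directed_hausdorff(a, b)[0],
               directed_hausdorff(b, a)[0])
\end{verbatim}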
\subsection{Conclusion}
The new Hausdorff Distance Masks method shows much better results than the modified RISE method. The importance of the right symbol is clearly visible; even the importance of the edges and corners of the symbols is visible.
Having validated the basic functionality of this method, we apply it to the BraTS dataset to see if it also provides useful insight into the neural network model.
| {
"alphanum_fraction": 0.7379110251,
"avg_line_length": 51.0617283951,
"ext": "tex",
"hexsha": "0c041668334807dc301a15568d6cdf40bf0b8a86",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a94ecd7cff9f00ecd23ecee319076b78bef79a8e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "andef4/thesis-doc",
"max_forks_repo_path": "chapters/06_hdm/05_testnet.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a94ecd7cff9f00ecd23ecee319076b78bef79a8e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "andef4/thesis-doc",
"max_issues_repo_path": "chapters/06_hdm/05_testnet.tex",
"max_line_length": 565,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a94ecd7cff9f00ecd23ecee319076b78bef79a8e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "andef4/thesis-doc",
"max_stars_repo_path": "chapters/06_hdm/05_testnet.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1048,
"size": 4136
} |
\documentclass[onecolumn]{IEEEtran}
% *** CITATION PACKAGES ***
%
\usepackage{cite}
\usepackage[square, comma, sort&compress, numbers]{natbib}
\usepackage{subfigure}
% *** GRAPHICS RELATED PACKAGES ***
%
\ifCLASSINFOpdf
\usepackage[pdftex]{graphicx}
% declare the path(s) where your graphic files are
% \graphicspath{{../pdf/}{../jpeg/}}
% and their extensions so you won't have to specify these with
% every instance of \includegraphics
\DeclareGraphicsExtensions{.pdf,.jpeg,.png, .tif, .jpg}
\else
% or other class option (dvipsone, dvipdf, if not using dvips). graphicx
% will default to the driver specified in the system graphics.cfg if no
% driver is specified.
% \usepackage[dvips]{graphicx}
% declare the path(s) where your graphic files are
% \graphicspath{{../eps/}}
% and their extensions so you won't have to specify these with
% every instance of \includegraphics
% \DeclareGraphicsExtensions{.eps}
\fi
% *** MATH PACKAGES ***
%
\usepackage[cmex10]{amsmath}
% *** SPECIALIZED LIST PACKAGES ***
%
% \usepackage{algorithmicx}
\usepackage{algorithm}
%\usepackage[noend]{algpseudocode}
\usepackage{algpseudocode}
\usepackage{multicol}
\usepackage{multirow}
\usepackage{booktabs}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\usepackage{array}
%\usepackage{siunitx}
\usepackage{float}
\usepackage{color}
\usepackage{geometry}
\geometry{left=1.5cm,right=1.5cm,top=1.0cm,bottom=1.0cm}
% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}
\begin{document}
\section{\emph{nodules/ms-nodules} ON LIDC-IDRI}
\subsection{Overall Statistics}
The confusion matrix for each type is shown in Table~\ref{tbl:conf_cmp_lidc_nodules}. The overall classification rate is 83.1\% / 84.1\%. Results for all cases are shown. Scores are presented at the far right of each image. It should be noted that incorrectly classified cases are marked with red rectangles.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[H]
\caption{Confusion Matrix for \emph{nodules/ms-nodules} on LIDC-IDRI}
\centering
\tabcolsep=1pt
\begin{tabular}{p{30pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}}
\toprule
\tabincell{c}{ } &\textbf{G} & \textbf{W} & \textbf{N}& \textbf{P} & \textbf{V} & \textbf{J} \\
\midrule
%%%%%%%%%%%% 111111111111111111111111111
\rule{0pt}{8pt}
\textbf{G} & \textbf{0.80 / 0.47} & 0.00 / 0.09 & 0.20 / 0.35 & 0.00 / 0.01 & 0.01 / 0.08 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{W} & 0.01 / 0.00 & \textbf{0.83 / 1.00 }& 0.05 / 0.00 & 0.00 / 0.00 & 0.13 / 0.00 & 0.01 / 0.00 \\
\rule{0pt}{8pt}
\textbf{N} & 0.05 / 0.02 & 0.07 / 0.05 & \textbf{0.92 / 0.87} & 0.06 / 0.02 & 0.04 / 0.02 & 0.07 / 0.02 \\
\rule{0pt}{8pt}
\textbf{P} & 0.00 / 0.01 & 0.00 / 0.01 & 0.12 / 0.08 & \textbf{0.91 / 0.89} & 0.05 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{V} & 0.07 / 0.01 & 0.04 / 0.03 & 0.02 / 0.05 & 0.00 / 0.00 & \textbf{0.88 / 0.89} & 0.01 / 0.01 \\
\rule{0pt}{8pt}
\textbf{J} & 0.01 / 0.01 & 0.00 / 0.00 & 0.19 / 0.10 & 0.00 / 0.00 & 0.00 / 0.00 & \textbf{0.86 / 0.89} \\
\bottomrule
\end{tabular}
\tabcolsep=6pt
\begin{tabular}{lll}
\rule{0pt}{10pt}
\tabincell{l}{G = \underline{G}round-glass opacity} & \tabincell{l}{W = \underline{W}ell-circumscribed} & \tabincell{c}{N = \underline{N}on-nodule} \\
\rule{0pt}{10pt}
\tabincell{l}{P = \underline{P}leural-tail} & \tabincell{l}{V = \underline{V}ascularized} & \tabincell{l}{J = \underline{J}uxta-pleural} \\
\end{tabular}
\label{tbl:conf_cmp_lidc_nodules}
\end{table}
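The per-class rates above are row-normalized. For reference, a minimal sketch (assuming the raw count matrix is available) of how the overall classification rate and the normalized rates can be computed:
\begin{verbatim}
import numpy as np

def overall_rate(counts: np.ndarray) -> float:
    # counts[i, j] = samples of true class i predicted as class j
    return np.trace(counts) / counts.sum()

def row_normalize(counts: np.ndarray) -> np.ndarray:
    # per-class rates, as reported in the tables
    return counts / counts.sum(axis=1, keepdims=True)
\end{verbatim}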
\newpage
\section{\emph{colornodules/ms-colornodules} ON LIDC-IDRI}
\subsection{Overall Statistics}
The confusion matrix for each type is shown in Table~\ref{tbl:conf_cmp_lidc_colornodules}. The overall classification rate is 81.1\% / 85.9\%. Results for all cases are shown. Scores are presented at the far right of each image. It should be noted that incorrectly classified cases are marked with red rectangles.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[H]
\caption{Confusion Matrix for \emph{colornodules/ms-colornodules} on LIDC-IDRI}
\centering
\tabcolsep=1pt
\begin{tabular}{p{10pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}}
\toprule
\tabincell{c}{ } &\textbf{G} & \textbf{W} & \textbf{N}& \textbf{P} & \textbf{V} & \textbf{J} \\
\midrule
%%%%%%%%%%%%% 2222222222222222222222222222
\rule{0pt}{8pt}
\textbf{G} & \textbf{0.62 / 0.65} & 0.00 / 0.00 & 0.38 / 0.35 & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{W} & 0.00 / 0.02 & \textbf{0.86 / 0.89 }& 0.03 / 0.04 & 0.00 / 0.01 & 0.11 / 0.04 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{N} & 0.00 / 0.01 & 0.02 / 0.03 & \textbf{0.94 / 0.95} & 0.02 / 0.00 & 0.00 / 0.01 & 0.03 / 0.00 \\
\rule{0pt}{8pt}
\textbf{P} & 0.02 / 0.01 & 0.12 / 0.04 & 0.07 / 0.03 & \textbf{0.79 / 0.93} & 0.00 / 0.01 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{V} & 0.02 / 0.03 & 0.07 / 0.06 & 0.05 / 0.02 & 0.03 / 0.00 &\textbf{ 0.83 / 0.89} & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{J} & 0.00 / 0.00 & 0.00 / 0.00 & 0.23 / 0.20 & 0.00 / 0.02 & 0.00 / 0.00 & \textbf{0.76 / 0.78} \\
\bottomrule
\end{tabular}
\tabcolsep=6pt
\begin{tabular}{lll}
\rule{-3pt}{10pt}
\tabincell{l}{G = \underline{G}round-glass opacity} & \tabincell{l}{W = \underline{W}ell-circumscribed} & \tabincell{c}{N = \underline{N}on-nodule} \\
\tabincell{l}{P = \underline{P}leural-tail} & \tabincell{l}{V = \underline{V}ascularized} & \tabincell{l}{J = \underline{J}uxta-pleural} \\
\end{tabular}
\label{tbl:conf_cmp_lidc_colornodules}
\end{table}
\newpage
\section{\emph{nodulecircles/ms-nodulecircles} ON LIDC-IDRI}
\subsection{Overall Statistics}
The confusion matrix for each type is shown in Table~\ref{tbl:conf_cmp_lidc_nodulecircles}. The overall classification rate is 88.2\% / 92.1\%. Results for all cases are shown. Scores are presented at the far right of each image. It should be noted that incorrectly classified cases are marked with red rectangles.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[H]
\caption{Confusion Matrix for \emph{nodulecircles/ms-nodulecircles} on LIDC-IDRI}
\centering
\tabcolsep=1pt
\begin{tabular}{p{10pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}}
\toprule
\tabincell{c}{ } &\textbf{G} & \textbf{W} & \textbf{N}& \textbf{P} & \textbf{V} & \textbf{J} \\
\midrule
%%%%%%%%%%%%% 3333333333333333333333333333333
\rule{0pt}{8pt}
\textbf{G} & \textbf{0.80 / 0.83 }& 0.01 / 0.02 & 0.20 / 0.13 & 0.00 / 0.02 & 0.00 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{W} & 0.01 / 0.01 & \textbf{0.97 / 0.97} & 0.03 / 0.01 & 0.00 / 0.00 & 0.00 / 0.02 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{N} & 0.00 / 0.00 & 0.01 / 0.01 & \textbf{0.98 / 0.99} & 0.01 / 0.00 & 0.00 / 0.00 & 0.01 / 0.00 \\
\rule{0pt}{8pt}
\textbf{P} & 0.01 / 0.00 & 0.00 / 0.00 & 0.14 / 0.10 & \textbf{0.86 / 0.90} & 0.00 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{V} & 0.01 / 0.01 & 0.01 / 0.06 & 0.05 / 0.01 & 0.00 / 0.00 & \textbf{0.94 / 0.92} & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{J} & 0.00 / 0.00 & 0.00 / 0.00 & 0.21 / 0.09 & 0.00 / 0.01 & 0.01 / 0.00 & \textbf{0.79 / 0.90} \\
\bottomrule
\end{tabular}
\tabcolsep=6pt
\begin{tabular}{lll}
\rule{-3pt}{10pt}
\tabincell{l}{G = \underline{G}round glass opacity} & \tabincell{l}{W = \underline{W}ell-circumscribed} & \tabincell{l}{N = \underline{N}on-nodule} \\
\tabincell{l}{P = \underline{P}leural-tail} & \tabincell{l}{V = \underline{V}ascularized} & \tabincell{l}{J = \underline{J}uxta-pleural} \\
\end{tabular}
\label{tbl:conf_cmp_lidc_nodulecircles}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{\emph{nodules/ms-nodules} on ELCAP}
\subsection{Overall Statistics}
The confusion matrix for each type is shown in Table~\ref{tbl:conf_cmp_elcap_nodules}. The overall classification rate is 79.6\% / 86.5\%. Results for all cases are shown, with scores presented at the far right of each image; incorrectly classified cases are marked with red rectangles.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[H]
\caption{Confusion Matrix for \emph{nodules/ms-nodules} on ELCAP}
\centering
\tabcolsep=1pt
\begin{tabular}{p{10pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}}
\toprule
\tabincell{c}{ } & \textbf{G} & \textbf{W} & \textbf{N} & \textbf{P} & \textbf{V} & \textbf{J} \\
\midrule
%%%%%%%%%%%%% 444444444444444444444444444444
\rule{0pt}{8pt}
\textbf{G} & \textbf{0.74 / 0.81} & 0.21 / 0.02 & 0.05 / 0.17 & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{W} & 0.00 / 0.02 & \textbf{0.61 / 0.86} & 0.37 / 0.03 & 0.00 / 0.01 & 0.01 / 0.08 & 0.01 / 0.01 \\
\rule{0pt}{8pt}
\textbf{N} & 0.00 / 0.00 & 0.00 / 0.00 & \textbf{1.00 / 1.00} & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{P} & 0.00 / 0.08 & 0.00 / 0.00 & 0.23 / 0.17 & \textbf{0.77 / 0.75} & 0.00 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{V} & 0.03 / 0.01 & 0.05 / 0.01 & 0.14 / 0.09 & 0.00 / 0.00 & \textbf{0.78 / 0.82} & 0.00 / 0.06 \\
\rule{0pt}{8pt}
\textbf{J} & 0.01 / 0.02 & 0.00 / 0.00 & 0.40 / 0.16 & 0.01 / 0.00 & 0.00 / 0.00 & \textbf{0.58 / 0.81} \\
\bottomrule
\end{tabular}
\tabcolsep=6pt
\begin{tabular}{lll}
\rule{-3pt}{10pt}
\tabincell{l}{G = \underline{G}round glass opacity} & \tabincell{l}{W = \underline{W}ell-circumscribed} & \tabincell{l}{N = \underline{N}on-nodule} \\
\tabincell{l}{P = \underline{P}leural-tail} & \tabincell{l}{V = \underline{V}ascularized} & \tabincell{l}{J = \underline{J}uxta-pleural} \\
\end{tabular}
\label{tbl:conf_cmp_elcap_nodules}
\end{table}
\newpage
\section{\emph{colornodules/ms-colornodules} on ELCAP}
\subsection{Overall Statistics}
The confusion matrix for each type is shown in Table~\ref{tbl:conf_cmp_elcap_colornodules}. The overall classification rate is 84.1\% / 84.3\%. Results for all cases are shown, with scores presented at the far right of each image; incorrectly classified cases are marked with red rectangles.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[H]
\caption{Confusion Matrix for \emph{colornodules/ms-colornodules} on ELCAP}
\centering
\tabcolsep=1pt
\begin{tabular}{p{10pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}}
\toprule
\tabincell{c}{ } & \textbf{G} & \textbf{W} & \textbf{N} & \textbf{P} & \textbf{V} & \textbf{J} \\
\midrule
%%%%%%%%%%%%% 555555555555555555555555555555
\rule{0pt}{8pt}
\textbf{G} & \textbf{0.57 / 0.79} & 0.00 / 0.00 & 0.43 / 0.21 & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{W} & 0.03 / 0.01 & \textbf{0.84 / 0.85} & 0.06 / 0.09 & 0.00 / 0.00 & 0.07 / 0.04 & 0.00 / 0.01 \\
\rule{0pt}{8pt}
\textbf{N} & 0.01 / 0.00 & 0.01 / 0.00 & \textbf{0.97 / 1.00} & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{P} & 0.00 / 0.09 & 0.02 / 0.00 & 0.18 / 0.11 & \textbf{0.79 / 0.79} & 0.02 / 0.02 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{V} & 0.04 / 0.02 & 0.07 / 0.06 & 0.09 / 0.06 & 0.00 / 0.00 & \textbf{0.79 / 0.83} & 0.01 / 0.03 \\
\rule{0pt}{8pt}
\textbf{J} & 0.00 / 0.01 & 0.00 / 0.00 & 0.28 / 0.36 & 0.00 / 0.00 & 0.02 / 0.01 & \textbf{0.70 / 0.62} \\
\bottomrule
\end{tabular}
\tabcolsep=6pt
\begin{tabular}{lll}
\rule{-3pt}{10pt}
\tabincell{l}{G = \underline{G}round glass opacity} & \tabincell{l}{W = \underline{W}ell-circumscribed} & \tabincell{l}{N = \underline{N}on-nodule} \\
\tabincell{l}{P = \underline{P}leural-tail} & \tabincell{l}{V = \underline{V}ascularized} & \tabincell{l}{J = \underline{J}uxta-pleural} \\
\end{tabular}
\label{tbl:conf_cmp_elcap_colornodules}
\end{table}
\newpage
\section{\emph{nodulecircles/ms-nodulecircles} on ELCAP}
\subsection{Overall Statistics}
The confusion matrix for each type is shown in Table~\ref{tbl:conf_cmp_elcap_nodulecircles}. The overall classification rate is 84.9\% / 90.9\%. Results for all cases are shown, with scores presented at the far right of each image; incorrectly classified cases are marked with red rectangles.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[H]
\caption{Confusion Matrix for \emph{nodulecircles/ms-nodulecircles} on ELCAP}
\centering
\tabcolsep=1pt
\begin{tabular}{p{10pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}|p{60pt}}
\toprule
\tabincell{c}{ } & \textbf{G} & \textbf{W} & \textbf{N} & \textbf{P} & \textbf{V} & \textbf{J} \\
\midrule
%%%%%%%%%%%%% 666666666666666666666666666666
\rule{0pt}{8pt}
\textbf{G} & \textbf{1.00 / 1.00} & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{W} & 0.04 / 0.00 & \textbf{0.92 / 0.95} & 0.03 / 0.05 & 0.00 / 0.00 & 0.01 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{N} & 0.10 / 0.04 & 0.00 / 0.04 & \textbf{0.90 / 0.92} & 0.00 / 0.01 & 0.00 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{P} & 0.04 / 0.02 & 0.01 / 0.00 & 0.11 / 0.06 & \textbf{0.83 / 0.92} & 0.01 / 0.00 & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{V} & 0.10 / 0.05 & 0.00 / 0.00 & 0.09 / 0.02 & 0.00 / 0.00 & \textbf{0.82 / 0.93} & 0.00 / 0.00 \\
\rule{0pt}{8pt}
\textbf{J} & 0.02 / 0.04 & 0.01 / 0.00 & 0.32 / 0.20 & 0.02 / 0.00 & 0.00 / 0.00 & \textbf{0.63 / 0.76} \\
\bottomrule
\end{tabular}
\tabcolsep=6pt
\begin{tabular}{lll}
\rule{-3pt}{10pt}
\tabincell{l}{G = \underline{G}round glass opacity} & \tabincell{l}{W = \underline{W}ell-circumscribed} & \tabincell{l}{N = \underline{N}on-nodule} \\
\tabincell{l}{P = \underline{P}leural-tail} & \tabincell{l}{V = \underline{V}ascularized} & \tabincell{l}{J = \underline{J}uxta-pleural} \\
\end{tabular}
\label{tbl:conf_cmp_elcap_nodulecircles}
\end{table}
\end{document}
| {
"alphanum_fraction": 0.630651228,
"avg_line_length": 46.7551724138,
"ext": "tex",
"hexsha": "39763f880afa8ac959c42180f3f6fa41fd3be033",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8c871f0c20d051b1789832c408d0b59dc0e8a338",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "liu3xing3long/cnn_nodule_material",
"max_forks_repo_path": "original.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8c871f0c20d051b1789832c408d0b59dc0e8a338",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "liu3xing3long/cnn_nodule_material",
"max_issues_repo_path": "original.tex",
"max_line_length": 323,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8c871f0c20d051b1789832c408d0b59dc0e8a338",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "liu3xing3long/cnn_nodule_material",
"max_stars_repo_path": "original.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6040,
"size": 13559
} |
\documentclass[12pt]{report}
\usepackage{caption}
\usepackage{float}
\author{Stuart}
\date{\today}
\title{Notes from the first chapter of Database Systems: The Complete Book}
\begin{document}
\maketitle
\section{The Evolution of Database Systems}
\begin{itemize}
\item Database
\begin{itemize}
\item collection of information that exists over a long period of time
\item collection of data managed by a DBMS
\end{itemize}
\item DBMS = Database Management System
\item DBMS is expected to:
\begin{itemize}
\item allow database creation with given schemas
\item allow data querying
\item support very large amounts of data
\item enable durability and recovery of the database upon failure
\item support concurrent access by users with varying access rights
\end{itemize}
\end{itemize}
\subsection{Early Database Management Systems}
\end{document}
| {
"alphanum_fraction": 0.7136706136,
"avg_line_length": 27.3235294118,
"ext": "tex",
"hexsha": "bbf523dc6c6be8c053170b06af4ffd6e76d6fc7d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5e7ebfabbd62ed5697f12971f03c9a30f4daaa87",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "themadprofessor/UniNotes",
"max_forks_repo_path": "Comp/txtbknts/chap1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5e7ebfabbd62ed5697f12971f03c9a30f4daaa87",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "themadprofessor/UniNotes",
"max_issues_repo_path": "Comp/txtbknts/chap1.tex",
"max_line_length": 74,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5e7ebfabbd62ed5697f12971f03c9a30f4daaa87",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "themadprofessor/UniNotes",
"max_stars_repo_path": "Comp/txtbknts/chap1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 232,
"size": 929
} |
\documentclass[11pt]{article}
%Gummi|065|=)
\title{\textbf{Features}}
\author{}
\date{}
\begin{document}
\maketitle
\section{Sign-Up/Log-In}
A feature which allows the user to sign up using a name, email, and password. If the user is already registered on the application, then he/she can sign in using email and password. In case the user has forgotten the password, there is an option for resetting it via email.
\section{Entry/Image Form}
An entry form accessed via the 'Hearth' button on the menu-bar. It provides a title, an entry field, and an image upload option. The title field has a not\_empty validator to reject blank input.
\section{Entries Display - List and Individual}
List of entries with images accessed via the 'Memory' button on the menu-bar. It has options to search for specific names, and against each entry name, there are options for viewing, editing and deleting the specific entry. Clicking on the View Button enables a display screen for each entry.
\section{Event Form}
Another input form for the events, accessed via the 'Event' button on the menu-bar, with date and time input fields that pop up when selected for ease of input. Also contains the name and description of the event.
\section{Events Display - List and Individual}
Selected via the 'Timeline' button on the menu-bar, this displays a list of all stored events in the application's database. The list has options to search for specific names, and against each entry name, there are options for viewing, editing and deleting the specific entry. Clicking on the View Button enables a display screen for each entry.
\section{Privacy}
Each user can access only his or her events and entries via this application when they are logged in, hereby maintaining the privacy of the individual's data.
\section{Upload Image; Search, View, Edit, Delete Entries }
As mentioned earlier, the application allows users to upload their pictures onto the app, and also offers options for searching, viewing, editing, and deleting the app's entries and events.
\end{document} | {
"alphanum_fraction": 0.7842761266,
"avg_line_length": 61.3529411765,
"ext": "tex",
"hexsha": "0d031805f893ce2cdbd60b459cf7f9e9c2918f23",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "042cf26d11fecb42ae07a1d40884e6eed85adc16",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "darthbhyrava/Hestia",
"max_forks_repo_path": "files/fe.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "042cf26d11fecb42ae07a1d40884e6eed85adc16",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "darthbhyrava/Hestia",
"max_issues_repo_path": "files/fe.tex",
"max_line_length": 346,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "042cf26d11fecb42ae07a1d40884e6eed85adc16",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "darthbhyrava/Hestia",
"max_stars_repo_path": "files/fe.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 458,
"size": 2086
} |
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[osf]{libertine}
\usepackage[scaled=0.8]{beramono}
\usepackage[margin=1.5in]{geometry}
\usepackage{url}
\usepackage{booktabs}
\usepackage{microtype}
\usepackage{sectsty}
\sectionfont{\large}
\subsectionfont{\normalsize}
\usepackage{titlesec}
\titlespacing{\section}{0pt}{10pt plus 2pt minus 2pt}{0pt plus 2pt minus 0pt}
\titlespacing{\subsection}{0pt}{5pt plus 2pt minus 2pt}{0pt plus 2pt minus 0pt}
\setlength{\parindent}{0pt}
\setlength{\parskip}{1ex}
\newcommand{\acro}[1]{\textsc{\MakeLowercase{#1}}}
\begin{document}
{\large \textbf{CSE 515T: Bayesian Methods in Machine Learning (Fall 2019)}} \\[1ex]
\begin{tabular}{rl}
Instructor & Professor Roman Garnett \\
\acro{TA} & Matt Gleeson (\texttt{gleesonm}) \\
\acro{TA} & Adam Kern (\texttt{adam.kern}) \\
Time/Location & Monday/Wednesday 4--5:20pm, Hillman 60 \\
Office Hours (Garnett) & Wednesday 5:30--6:30pm, Hillman 60 \\
Office Hours (\acro{TA}) & \acro{TBA} \\
\acro{URL} & \url{http://cse.wustl.edu/~garnett/cse515t/} \\
GitHub & \url{https://github.com/rmgarnett/cse515t/fall_2019} \\
Piazza message board & \url{https://piazza.com/wustl/fall2019/cse515t}
\end{tabular}
\section*{Course Description}
This course will cover modern machine learning techniques from a Bayesian
probabilistic perspective. Bayesian probability allows us to model and reason
about all types of uncertainty. The result is a powerful, consistent framework
for approaching many problems that arise in machine learning, including
parameter estimation, model comparison, and decision making. We will begin with
a high-level introduction to Bayesian inference, then proceed to cover
more-advanced topics.
This course is meant to lay the groundwork for research in these
areas. If you are looking for a practical introduction with a focus on
implementation, etc. this may not be the best course for you.
\section*{Prerequisites}
We will make heavy use of mathematics in this course. You should have a good
grasp of multivariable calculus (integration, partial derivation, maximization,
etc.), probability (conditional probability, expectations, etc.), and linear
algebra (solving linear systems, eigendecompositions, etc.).
Please note that this is not an introduction to machine learning; the \acro{CSE
417T/517A} courses fill that role. I will assume prior familiarity with the
main concepts of machine learning: supervised and unsupervised learning,
classification, regression, clustering, etc.
\section*{Book}
There is no required book. For each lecture, I will provide a list of related
materials, including book chapters, videos, papers, code, etc.\ on the course
webpage. These are to give you different viewpoints on the subject. Hopefully
you can find one that suits you.
Although no book will be required, the following books are highly aligned with
this course:
\begin{itemize}
\item \emph{Pattern Recognition and Machine Learning} by Christopher M.\ Bishop.
Covers many machine-learning topics thoroughly. Very Bayesian. Can also be
very mathematical and take some effort to read.
\item \emph{Bayesian Reasoning and Machine Learning} by David Barber. Geared
(as much as a machine-learning book could be) towards computer scientists.
Lots of material on graphical models. Freely available
online.\footnote{\url{http://www.cs.ucl.ac.uk/staff/d.barber/brml/}, link also
on course webpage.}
\item \emph{Gaussian Processes for Machine Learning} by Carl Rasmussen and
Christopher Williams. Excellent reference for Gaussian processes. Freely
available online.\footnote{\url{http://www.gaussianprocess.org/gpml/}, link
also on course webpage.}
\end{itemize}
The following books are good resources for Bayesian statistics:
\begin{itemize}
\item \emph{Statistical Decision Theory and Bayesian Analysis} by James Berger.
An old book (1980, advertises ``with 23 illustrations'' on the title page),
but nonetheless an excellent introduction to Bayesian methods. Very clear.
Provides convincing philosophical arguments for the Bayesian viewpoint.
\item \emph{The Bayesian Choice: From Decision-Theoretic Foundations to
Computational Implementation} by Christian Robert. Another fairly technical
resource with passionate arguments for the Bayesian perspective.
\end{itemize}
\section*{Assignments}
There will be a small number of assignments throughout the semester, with two
weeks available to complete each one.
The assignments will form 30\% of your grade, and each will have two types of
questions: traditional ``pencil-and-paper'' questions, and programming exercises
meant to give more insight into applying the techniques we will discuss on
actual data. The former \emph{will not be corrected.} If you make a reasonable
attempt to answer a question, I will give you full credit. After each
assignment, I will provide solutions online.
The programming exercises will require you to implement some of the theoretical
ideas we discuss in class. The point of these exercises is both to lead to a
better understanding by forcing a different viewpoint (that of the designer),
and also to enable interaction. I encourage you to play with the data,
parameters, etc. associated with these exercises to see how the results change.
The point of the exercises is \emph{not} for me to judge your programming
skills, so \emph{please do not hand in your code.} Rather, you should convey
your answers via plots, tables, and/or discussion, as appropriate. As I don't
need to read your code, feel free to use any language you'd like, but note that
if I provide you with my own code, I will do so in \acro{MATLAB.}
\subsection*{Late policy}
Assignments will be due during class on the dates specified on the course
homepage. I will allow you to turn in your assignment up to one class late with
no penalty.
\subsection*{Collaboration policy}
Please feel free to collaborate on the paper-and-pencil questions! This is a
good way to gain a deeper understanding of the material. Of course, you will be
expected to write up your answers separately. Also feel free to collaborate on
a high level on the programming exercises, but please write your own code and
produce your own results.
\section*{Midterm}
There will be a take-home midterm on a date to be determined later (probably
just before or just after Spring Break). This will count for 30\% of your grade.
\section*{Project}
In the second half of the semester, you will complete a project, which will
comprise 30\% of your final grade. The goal of the project will be to apply
Bayesian techniques to a real dataset in a nontrivial way. I will compile a
list of datasets on the course webpage, but you should of course feel free to
find your own that is aligned with your interests. The project should reach
beyond the scope of the homework problems. I will judge the success of a
project based on the methodological approach rather than the quantitative
details of the final outcome. This is an exercise in applying theoretical ideas
in practice, and even the most carefully constructed models or techniques can
fail on a particular problem. Note that I would expect you to think about
\emph{why} your method might have failed (or succeeded!).
You can complete this project in groups of one, two, or three people. Of
course, I will expect more out of larger groups.
There will be four components to this project:
\begin{itemize}
\item A project proposal, due \acro{TBD}. This should be an approximately one
page document describing your idea. I will read this and give
feedback/suggestions.
\item A status report, due \acro{TBD}. I expect this to be one or two pages,
updating me on the progress of your project, including data processing,
implementation, experimental design decisions, etc.
\item A 15-minute presentation describing the project. These will be held in
class during the last class sessions, beginning on \acro{TBD}. The
presentation should briefly explain the idea, the data, and the results of
your investigation.
\item A final report, due \acro{TBD}. This should be an approximately four-page
document explaining the idea, experimental setup, results, and your
interpretation of them.
\end{itemize}
\section*{Grading}
Your final grade will consist of the following weighted components:
\begin{center}
\begin{tabular}{lc}
\toprule
component & \% \\
\midrule
assignments & 30\% \\
midterm & 30\% \\
project proposal & 10\% \\
project status report & 10\% \\
project presentation & 10\% \\
project final report & 10\% \\
\midrule
final project total & 40\% \\
\bottomrule
\end{tabular}
\end{center}
\section*{Topics}
An outline of the topics I expect to cover is below; this is subject to change,
more likely by deletion than addition. If there is a particular topic you would
like me to spend more time on (or don't care about at all!), please let me know.
I will keep the course webpage updated with lecture-specific information and
resources.
\begin{itemize}
\item \textbf{Introduction to the Bayesian method:} review of probability,
Bayes' theorem, Bayesian inference, Bayesian parameter estimation, Bayesian
decision theory, Bayesian model selection.
\item \textbf{Approximate inference:} the Laplace approximation, variational
Bayes, expectation propagation.
\item \textbf{Sampling methods:} rejection sampling, importance sampling, Markov
chain Monte Carlo.
\item \textbf{Parametric models:} Bayesian linear regression, logistic
regression, general linear models, basis expansions, mixture models, latent
Dirichlet allocation.
\item \textbf{Nonparametric models:} Gaussian processes for regression and
classification.
\item \textbf{Bayesian numerical analysis:} Bayesian optimization, Bayesian
quadrature.
\end{itemize}
\end{document}
| {
"alphanum_fraction": 0.7455909578,
"avg_line_length": 45.8169642857,
"ext": "tex",
"hexsha": "91641c7530fbf957f40511143841b4c89adf3613",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2a7c9657ede4664e080e2914be402de85a8e3c6d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Aahana1/cse515t",
"max_forks_repo_path": "fall_2019/syllabus/syllabus.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2a7c9657ede4664e080e2914be402de85a8e3c6d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Aahana1/cse515t",
"max_issues_repo_path": "fall_2019/syllabus/syllabus.tex",
"max_line_length": 86,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2a7c9657ede4664e080e2914be402de85a8e3c6d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Aahana1/cse515t",
"max_stars_repo_path": "fall_2019/syllabus/syllabus.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2437,
"size": 10263
} |
\documentclass{beamer}
\usepackage{graphicx}
\usepackage{textpos}
\usepackage{listings}
\usepackage{lstautogobble}
\usetheme{Madrid}
\useoutertheme{miniframes} % Alternatively: miniframes, infolines, split
% Setup the university's color palette
\definecolor{UIUCorange}{RGB}{19, 41, 75}
\definecolor{UIUCblue}{RGB}{232, 74, 39}
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\lstdefinestyle{python}{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
basicstyle=\ttfamily\footnotesize,
breakatwhitespace=false,
belowskip=-0.5em,
breaklines=true,
captionpos=b,
keepspaces=true,
%numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2
}
\lstset{style=python}
\AtBeginSection[]{
\begin{frame}
\vfill
\centering
\begin{beamercolorbox}[sep=8pt,center,shadow=true,rounded=true]{title}
\usebeamerfont{title}\insertsectionhead\par%
\end{beamercolorbox}
\vfill
\end{frame}
}
\setbeamercolor{palette primary}{bg=UIUCorange,fg=white}
\setbeamercolor{palette secondary}{bg=UIUCblue,fg=white}
\setbeamercolor{palette tertiary}{bg=UIUCblue,fg=white}
\setbeamercolor{palette quaternary}{bg=UIUCblue,fg=white}
\setbeamercolor{structure}{fg=UIUCorange} % itemize, enumerate, etc
\setbeamercolor{section in toc}{fg=UIUCblue} % TOC sections
\setbeamercolor{subsection in head/foot}{bg=UIUCorange,fg=UIUCblue}
\setbeamercolor{subsection in head/foot}{bg=UIUCorange,fg=UIUCblue}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
%Information to be included in the title page:
\title{\textbf{Adv. Dictionaries}}
\author{\textbf{David H Smith IV}}
\institute[\textbf{UIUC}]{\textbf{University of Illinois Urbana-Champaign}}
\date{\textbf{Tues, Nov 02 2021}}
\setbeamertemplate{title page}[default][colsep=-4bp,rounded=true]
\addtobeamertemplate{title page}{\vspace{3\baselineskip}}{}
\addtobeamertemplate{title page}{
\begin{textblock*}{\paperwidth}(-1.0em, -1.2em)
\includegraphics[width=\paperwidth, height=\paperheight]{imgs/uiuc.png}
\end{textblock*}
}{}
\begin{document}
\frame{\titlepage}
\section{Reminders}
%
% Slide 1
%
\begin{frame}
\frametitle{Reminders}
\begin{itemize}
\item Checkpoint 1 was due last night. Please be sure to push your changes.
\item Quiz 3 is Thursday
\item Homework is to attempt the practice quiz (50 pts)
\item Today: Dictionaries and Checkpoint 2 of the game of life.
\end{itemize}
\end{frame}
\section{Dictionaries}
%
% Slide 2
%
\begin{frame}[fragile]
\frametitle{In Review}
\vfill
\begin{minipage}{0.60\textwidth}
\begin{itemize}
\item Consists of \lstinline|key:value| pairs.
\item Why do we care? Tracking relationships between things.
\end{itemize}
\end{minipage}
\begin{minipage}{0.39\textwidth}
\begin{lstlisting}[language=Python, autogobble]
name_map = {
"dhsmith2" : {
"first" : "David",
"second" : "Smith"
},
"mflwr" : {
"first" : "Max",
"second" : "Fowler"
}
}\end{lstlisting}
\end{minipage}
\vfill
\begin{lstlisting}[language=Python, autogobble]
def informal_email(names_dict, netid):
email = "Dear {0}, I wanted you to know ..."
return email.format(names_dict[netid]['first'])
email_text = informal_email(name_map, "dhsmith2")
\end{lstlisting}
\end{frame}
%
% Slide 2
%
\section{Computing a Histogram}
\begin{frame}[fragile]
\frametitle{Poll Question: Dictionaries}
What is the value of \lstinline|x| after the following function is called?
\begin{lstlisting}[language=Python, autogobble]
def get_item_counts(some_list):
counts = {}
for item in some_list:
counts[item] += 1
return counts
x = get_item_counts(["This", "This", "This", "Is", "A"])
\end{lstlisting}
\vfill
\begin{enumerate}
\item \lstinline|{"This": 3, "Is": 1, "A": 1}|
\item \lstinline|{This: 3, Is: 1, A: 1}|
\item \lstinline|{"This": "3", "Is": "1", "A": "1"}|
\item KeyError
\end{enumerate}
\pause
Why, and how do we fix this?
\end{frame}
%
% Slide 2
%
\begin{frame}[fragile]
\frametitle{Dictionaries: Computing a Histogram}
Creating a count map of items in a collection is a common dictionary pattern:
\begin{lstlisting}[language=Python, autogobble]
def get_item_counts(some_list):
counts = {}
for item in some_list:
if item not in counts:
counts[item] = 1
else:
counts[item] += 1
return counts
\end{lstlisting}
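\vfill
An equivalent sketch using \lstinline|dict.get|, which supplies a default value when a key is missing:
\begin{lstlisting}[language=Python, autogobble]
    def get_item_counts(some_list):
        counts = {}
        for item in some_list:
            # get returns 0 when the key is missing
            counts[item] = counts.get(item, 0) + 1
        return counts
\end{lstlisting}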
\end{frame}
%
% Slide 2
%
\section{Key, Item, Value Functions}
\begin{frame}[fragile]
\frametitle{Poll Question: Dictionary Functions}
What will be printed to the screen after the following has run?
\begin{lstlisting}[language=Python, autogobble]
dict_1 = {"foo": 5, "bar": 10, "baz": 12}
for i in keys(dict_1):
print(i, end=" ")
\end{lstlisting}
\vfill
\begin{enumerate}[A]
\item foo bar baz
\item 5 10 12
\item NameError
\item SyntaxError
\end{enumerate}
\end{frame}
%
% Slide 2
%
\begin{frame}[fragile]
\frametitle{Poll Question: Dictionary Functions}
What will be printed to the screen after the following has run?
\begin{lstlisting}[language=Python, autogobble]
dict_1 = {"foo": 5, "bar": 10, "baz": 12}
for i in dict_1.keys():
print(i, end=" ")
\end{lstlisting}
\vfill
\begin{enumerate}[A]
\item foo bar baz
\item 5 10 12
\item NameError
\item SyntaxError
\end{enumerate}
\end{frame}
%
% Slide 2
%
\begin{frame}[fragile]
\frametitle{Poll Question: Dictionary Functions}
What will be printed to the screen after the following has run?
\begin{lstlisting}[language=Python, autogobble]
dict_1 = {"foo": 5, "bar": 3, "baz": 10}
for i in dict_1.items():
print(i, end=" ")
\end{lstlisting}
\vfill
\begin{enumerate}[A]
\item ('foo', 5) ('bar', 3) ('baz', 10)
\item (5, 'foo') (3, 'bar') (10, 'baz')
\item NameError
\item Something else...?
\end{enumerate}
\end{frame}
%
% Slide 2
%
\begin{frame}[fragile]
\frametitle{Poll Question: Dictionary Functions}
What will be printed to the screen after the following has run?
\begin{lstlisting}[language=Python, autogobble]
dict_1 = {"foo": 5, "bar": 10, "baz": 12}
all_keys = []
total_val = 0
for foo, bar in dict_1.items():
all_keys.append(foo)
total_val += bar
print(all_keys, total_val)
\end{lstlisting}
\vfill
\begin{enumerate}[A]
\item \lstinline|[5, 10, 12] "foobarbaz"|
\item \lstinline|["foo", "bar", "baz"] 27|
\item TypeError
\item Something else...?
\end{enumerate}
\end{frame}
%
% Slide 2
%
\begin{frame}[fragile]
\frametitle{Poll Question: Dictionary Functions}
What will be printed to the screen after the following has run?
\begin{lstlisting}[language=Python, autogobble]
dict_1 = {"foo": 5, "bar": 10, "baz": 12}
for i in dict_1.values():
print(i, end=" ")
\end{lstlisting}
\vfill
\begin{enumerate}[A]
\item foo bar baz
\item 5 5 5
\item 5 10 12
\item Error
\end{enumerate}
\end{frame}
%
% Slide 2
%
\begin{frame}[fragile]
\frametitle{Dictionary Functions}
Functions for iteration:
\begin{enumerate}
\item \lstinline|dict.items()| \textrightarrow \ Generates tuples of all of the key value pairs in the dictionary.
\item \lstinline|dict.keys()| \textrightarrow \ Generates all of the keys in the dictionary.
\item \lstinline|dict.values()| \textrightarrow \ Generates all of the values in the dictionary.
\end{enumerate}
\vfill
Functions for modification:
\begin{enumerate}
\item \lstinline|dict.clear()| \textrightarrow \ Clears all the key value pairs from the dictionary.
\item \lstinline|dict.get(key, default)| \textrightarrow \ Tries to lookup the value associated with a key and gives default if key not found.
\item \lstinline|dict1.update(dict2)| \textrightarrow \ Merges the key:value pairs from dict1 into dict2.
\item \lstinline|dict.pop(key, default)| \textrightarrow \ Removes the key:value pair, returns the value, default if key not found.
\end{enumerate}
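\vfill
A short example (with hypothetical values) showing the modification functions in action:
\begin{lstlisting}[language=Python, autogobble]
    d = {"foo": 5, "bar": 10}
    d.update({"baz": 12})  # d is now {"foo": 5, "bar": 10, "baz": 12}
    x = d.get("qux", 0)    # x is 0, since "qux" is not a key of d
    y = d.pop("foo", 0)    # y is 5, and "foo" is removed from d
\end{lstlisting}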
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.6743653993,
"avg_line_length": 27.4820359281,
"ext": "tex",
"hexsha": "d1d7ec853088a06ac07242083a5427f949aeb3a0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5705e37a09fe5748964d3b7e4a24f9e99439758d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "CoffeePoweredComputers/uni-high-fall-21",
"max_forks_repo_path": "assets/slides/lecture-11/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5705e37a09fe5748964d3b7e4a24f9e99439758d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "CoffeePoweredComputers/uni-high-fall-21",
"max_issues_repo_path": "assets/slides/lecture-11/main.tex",
"max_line_length": 146,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5705e37a09fe5748964d3b7e4a24f9e99439758d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "CoffeePoweredComputers/uni-high-fall-21",
"max_stars_repo_path": "assets/slides/lecture-11/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2930,
"size": 9179
} |
\subsection{Additive Expression}
\begin{grammar}
\nonterminal{additive-expression}
\produces
\nonterminal{multiplicative-expression} \\
\produces
\nonterminal{additive-expression}
\lexkeyword{+}
\nonterminal{multiplicative-expression} \\
\produces
\nonterminal{additive-expression}
\lexkeyword{-}
\nonterminal{multiplicative-expression} \\
\end{grammar}
\subsubsection{Eliminating left recursion}
Eliminating left recursion leads to
\begin{grammar}
\underbrace{\nonterminal{additive-expression}}_{=: A}
\produces
\underbrace{\nonterminal{multiplicative-expression}}_{=: M}
\underbrace{\nonterminal{additive-expression'}}_{=:A'}\\
\nonterminal{additive-expression'}
\produces
\lexkeyword{+}
\nonterminal{multiplicative-expression}
\nonterminal{additive-expression'}
\\
\produces
\lexkeyword{-}
\nonterminal{multiplicative-expression}
\nonterminal{additive-expression'}
\\
\produces
\end{grammar}\\[-0.5cm]
\noindent
The left associativity of the operators \lexkeyword{+} and \lexkeyword{-} needs
to be handled manually by the parser.
\paragraph{Item automaton for $A$}
\[
\begin{tikzpicture}[
every text node part/.style={align=center},
initial text =
]
\node[state,initial]
(S)[rounded rectangle, draw]
{{$[A\to .MA']$}};
\node[state]
(M)[rounded rectangle, draw,right=of S]
{{$[A\to M.A']$}};
\node[state,accepting]
(A_)[rounded rectangle, draw,right=of M]
{{$[A\to MA'.]$}};
\path[->] (S) edge node [above] {M} (M);
\path[->] (M) edge node [above] {A'} (A_);
\end{tikzpicture}
\]
\paragraph{Item automaton for $A'$}
\[
\begin{tikzpicture}[
every text node part/.style={align=center},
initial text =
]
\node[state,initial,accepting]
(S)[rounded rectangle, draw]
{
{$[A'\to .+MA']$}\\
{$[A'\to .-MA']$}\\
{$[A'\to .]$}
};
\node[state]
(Add)[rounded rectangle, draw,right=of S]
{{$[A'\to +.MA']$}};
\node[state]
(Sub)[rounded rectangle, draw,below=of Add]
{{$[A'\to -.MA']$}};
\node[state]
(M)[rounded rectangle, draw,right=of Add]
{
{$[A'\to +M.A']$} \\
{$[A'\to -M.A']$}
};
\node[state,accepting]
(E)[rounded rectangle, draw,right=of M]
{
{$[A'\to +MA'.]$} \\
{$[A'\to -MA'.]$}
};
\path[->] (S) edge node [above] {\lexkeyword{+}} (Add);
\path[->] (S) edge node [above] {\lexkeyword{-}} (Sub);
\path[->] (Add) edge node [above] {M} (M);
\path[->] (Sub) edge node [above] {M} (M);
\path[->] (M) edge node [above] {A'} (E);
\end{tikzpicture}
\]
\subsubsection{Eliminating left recursion (in EBNF Form)}
Expressing the BNF notation in EBNF reveals how the item automata for $A$ and
$A'$ can be combined in an implementation using a while loop
checking if the
current token is in $\mathrm{First}(A') = \{ \lexkeyword{+}, \lexkeyword{-}\}$:
\begin{grammar}
\underbrace{\nonterminal{additive-expression}}_{= A}
\produces
\underbrace{\nonterminal{multiplicative-expression}}_{= M}
\underbrace{
\{\;
\nonterminal{additive-op}
\nonterminal{multiplicative-expression}
\}
}_{=A'}
\\
\nonterminal{additive-op}
\produces
\lexkeyword{+} \\
\produces
\lexkeyword{-} \\
\end{grammar}\\[-0.5cm]
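\noindent
As an illustration of this loop (a sketch only, in Python, assuming a
hypothetical token stream exposing \texttt{peek()} and \texttt{next()} and a
\texttt{parse\_multiplicative()} helper for $M$):
\begin{verbatim}
def parse_additive(tokens):
    # A -> M { ('+' | '-') M }
    left = parse_multiplicative(tokens)
    while tokens.peek() in ('+', '-'):        # First(A') = {+, -}
        op = tokens.next()
        right = parse_multiplicative(tokens)
        # fold immediately so that + and - remain left associative
        left = (op, left, right)
    return left
\end{verbatim}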
| {
"alphanum_fraction": 0.6384375,
"avg_line_length": 25.1968503937,
"ext": "tex",
"hexsha": "1d0ff8817c3dbd8adae2b8baee4081a46d95318e",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2021-12-04T10:45:36.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-12-01T10:41:56.000Z",
"max_forks_repo_head_hexsha": "5d32b4a2faf88f58c4a6b0b72490cc76281070ba",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "michael-lehn/uulm_cb_solutions_practical-1",
"max_forks_repo_path": "solutions/c/michael-lehn/doc/add.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "5d32b4a2faf88f58c4a6b0b72490cc76281070ba",
"max_issues_repo_issues_event_max_datetime": "2021-12-04T13:11:18.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-12-01T13:05:48.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "michael-lehn/uulm_cb_solutions_practical-1",
"max_issues_repo_path": "solutions/c/michael-lehn/doc/add.tex",
"max_line_length": 79,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "5d32b4a2faf88f58c4a6b0b72490cc76281070ba",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "michael-lehn/uulm_cb_solutions_practical-1",
"max_stars_repo_path": "solutions/c/michael-lehn/doc/add.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-04T17:04:56.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-12-01T10:45:11.000Z",
"num_tokens": 1041,
"size": 3200
} |
%% FINAL PROJECT FOR ANALYTICAL MODELS
%% last modifed 05/08/06
\documentclass[times,12pt,fullpage]{article}
\usepackage{hyperref}
\usepackage{times}
\usepackage{amsmath,amssymb,amsfonts} % Typical maths resource packages
\topmargin 0.0in \oddsidemargin -0.2in \evensidemargin 0.8in
\textwidth 6.80in \textheight 8.6in
\parskip 7.6pt
\parindent 0.25in
\makeindex
\usepackage{fancyhdr}
\thispagestyle{fancy} \pagestyle{fancy} \fancyhf{}
\lhead{Multiprocessor Modeling Guide} \chead{} \rhead{Bennett
\thepage} \lfoot{} \cfoot{} \rfoot{}
\title{Modeling Guide for Multiprocessing Systems}
\author{Matthew Bennett \\
{\small School of Computing, University of Southern Mississippi} \\
{\small {\em [email protected]} \ Typeset in \LaTeX on \
\today{} } }
\date{ }
\begin{document}
\maketitle
\begin{abstract}\noindent A tremendous variety of techniques exist for the modeling and
subsequent simulation of multiprocessor systems. This paper presents
several of the approaches of Marsan et al. \cite{marsan}, in a
condensed format suitable to practical systems engineer or computer
scientist. Most of these techniques are based upon Generalized
Stochastic Petri Nets. The guide covers Bucci's Preemptive Time Nets
\cite{bucci} to fill in gaps where continuous-time models fail.
\end{abstract}
\section*{Preliminaries}
A working knowledge of multiprocessor architecture, statistical
distributions, and the modeling technique of General Stochastic
Petri Nets is required. For the latter, Murata \cite{murata}
provides a succinct and quick introduction. Markov chains are
mentioned, but the results presented do not require thorough
understanding of the formalism, since the work has already been
completed and proven by others.
Many of the graphs and numerical results in Marsan \cite{marsan} and
Murata \cite{murata} are similar enough to be asymptotically bounded by
one another up to a constant. The purpose of this guide is to provide a straightforward
and efficient means of guiding the systems architect through the
modeling process and on into either simulation or analysis. The
reader may check those sources cited for more in-depth coverage of
very detailed models, since only the simplest models are presented
within.
A few taxonomic niceties will save the designer some time. The
modeling phase has been split into five categories (as in
\cite{marsan}). After the model is developed, the developer may move
on to modeling fault tolerance (Section~\ref{fault tolerance}) or
verification (Section~\ref{analysis}).
\section{Bus-Contention Free Architectures}
The most trivial multiprocessor system to model is one that
experiences no contention whatsoever. The canonical example in
academia (as in \cite{marsan}) is a crossbar-connected switch
between $n$ processor elements and $m$ memory elements. An analog of
crossbar switch is the Plain Old Telephone System (POTS) exchange.
These are rarely seen in practice, since the number of switches and
interconnections increases by a factor of $\Theta(mn)$, making the
interconnection network extremely expensive for even small numbers
of processors and memories.
Marsan \cite{marsan} mentions that most time spent in any
multiprocessing system is idle time due to contention for a common
resource such as a bus. It is therefore no surprise that modeling
contention-free systems is straightforward in terms of waiting. The
analysis instead concentrates on statistical modeling of random
activity within the system. This is easily accomplished using
queueing networks, where processors are nodes without queues, and
memory modules are nodes with finite queues \cite{marsan}. For a
crossbar architecture, there is an edge from every processor element
to every memory element, since any processor may read any memory
element regardless of other processors. Some simplification occurs
when two or more processors try to access the same memory node in a
given time step: only one is served. Simulation can be performed
using the Monte Carlo method, with memory accesses being produced
using a random variable (exponential, in the Markovian case) on
all producers (processors) and consumers (memories) in the net.
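As a concrete illustration (a minimal Python sketch, not from
\cite{marsan}; it assumes uniform accesses and requests lost on
contention), the expected number of busy memory modules per cycle can be
estimated as follows:
\begin{verbatim}
import random

def busy_memories(p, m, trials=10000):
    # Monte Carlo estimate of the mean number of busy memory
    # modules when p processors each request one of m modules
    # uniformly at random and clashing requests are dropped.
    total = 0
    for _ in range(trials):
        requests = [random.randrange(m) for _ in range(p)]
        total += len(set(requests))  # one request per module is served
    return total / trials
\end{verbatim}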
As more becomes known about the system under analysis, more can be
included in the modeling queueing network. Marsan \cite{marsan}
provides a useful ontology by asking if the devices are synchronous,
or asynchronous, and also whether memory accesses are uniform or
non-uniform between processors.
Marsan gives a number of case studies. The simplest one is
attributed to Bhandarkar \cite{bhan}, who assumes that memory
accesses are uniform across processors, with lost requests in the
case of memory contention. Bhandarkar creates a vector of states that
the queueing network can take on, and then further simplifies the
amount of information at hand by creating super-states, or {\em
equivalence classes} of mutually indistinguishable states (since
processors and memories are assumed identical). He derives a formula
(Marsan fig 6.2) which can be used to algorithmically calculate
super-state transition probabilities in a Markov process, and Marsan
(p. 124, 136) gives further approximation formulas from others' work
which can be calculated in better time on a computer. These
approximations are not as important as they once were, because
computers are substantially more capable today than they were in
1986, at the time of publication. They are all valid, as they are
equivalent to the exact result obtained by Bhandarkar \cite{bhan}. A
good approximation closed-form for the number of busy memory modules
$\beta$ is given by Rau \cite{rau}, using binomial approximations
for all events (equation \ref{raus}). In this formula, $m$ counts
memories, and $p$ counts processors.
\begin{equation}\label{raus}
\frac{\sum_{i=0}^{\min(m,p)-1} 2^i \binom{m-1}{i} \binom{p-1}{i}}
     {\sum_{i=0}^{\min(m,p)-1} \frac{2^i}{i+1} \binom{m-1}{i} \binom{p-1}{i}}
\end{equation}
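Equation~\ref{raus} is straightforward to evaluate directly; a small
Python sketch (using \texttt{math.comb} for the binomial coefficients):
\begin{verbatim}
from math import comb

def rau_busy_memories(m, p):
    # Direct evaluation of Rau's closed-form approximation
    # for the expected number of busy memory modules.
    k = min(m, p)
    num = sum(2**i * comb(m - 1, i) * comb(p - 1, i) for i in range(k))
    den = sum(2**i / (i + 1) * comb(m - 1, i) * comb(p - 1, i)
              for i in range(k))
    return num / den
\end{verbatim}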
Delay models, in which memory contention requests are queued instead
of dropped, were also considered by multiple authors. Many of the
results were similar, with memory utilization expectedly being
better with the addition of queues to memory \cite{marsan} (136).
Rau's formula given above is an excellent minimum bound for back of
the envelope calculations for networks without bus contention.
\section{Shared Memory Systems}
Most real systems are not free of bus contention, because of the
nonlinear overhead cost of building complete connection networks.
When bus contention is introduced, the location of the bus with
respect to the memory elements and processors plays a vital role in
the performance and contention of the system as a whole. In this
section, we investigate the subset of systems for which a number of
processors each have a private local memory, connected by a local
bus, and must contend for access to a shared memory, which requires
control of one of the global buses. This multiprocessor
architecture is common to many multiprocessor, multi-core
workstations as well as supercomputers such as the classic Cray
series. Classical multi-threading problems, including deadlock
prevention and avoidance, starvation, and mutual exclusion must all
be taken into account for any shared memory systems. Marsan
\cite{marsan} lays out a number of simplifying assumptions for
modeling shared memory systems. Namely, they are: no delay when
capturing the bus (other than normal bus contention), CSMA/CA bus
discipline, and immediate release of the bus upon completion (as
with capture). Marsan uses the term ``active state'' to mean that a
processor is not currently accessing memory, ``waiting state'' to
mean that a processor is waiting for the bus or memory, and ``access
state'' to mean that the processor is currently in memory.
\subsection{Single-Bus Shared Memory Systems}
The architecture is defined by Marsan to be a single global bus
connecting a number of local buses to the shared memory resource.
Each local bus serves a single processor and its private memory.
He chooses three values to take into consideration: access times,
active times, and waiting times.
\subsubsection{Exponential Distribution}
The Exponential Distribution represents the time between independent
events occurring at a constant average rate. In a generalized
stochastic Petri net, a single exponentially distributed random
event can be represented by a place connected to a timed transition,
where the firing rate of that transition takes on the parameter
$\lambda$ of the exponential distribution, and determines the
likelihood of that event firing as a function of a continuous time
variable \cite{marsan}. An example of an exponentially distributed
random variable is the time between successive packets arriving at a
web server.
\subsubsection{Erlang-k Distribution}
An Erlang-K distribution models k Exponentially distributed random
events which depend upon one another. They must occur in {\em
serial}. In the Petri Net representation, Erlang-K events should be
represented by a k-chain of exponential General Stochastic
Continuous-Timed Petri events \cite{marsan}. An example of an
Erlang-distributed random variable is the waiting time until the
$k$-th packet arrives at a web server.
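In simulation this chaining is direct; a minimal Python sketch:
\begin{verbatim}
import random

def erlang_k_sample(k, lam):
    # An Erlang-k sample is the sum of k independent
    # exponential samples, each with rate lam.
    return sum(random.expovariate(lam) for _ in range(k))
\end{verbatim}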
\subsubsection{Hyper-Exponential Distribution}
A hyper-exponential distribution is a probabilistic mixture of
exponential distributions; it models event times that are more variable
than a single exponential allows, such as the access time seen by a
processor whose requests sometimes hit fast local memory and sometimes
slower shared memory.
\subsubsection{Queueing disciplines}
Marsan \cite{marsan} defines three queueing disciplines for dealing
with memory access contention, and mutual exclusion assurance. Fixed
priority means that the processors have some preordained priority
for which can preempt one another for memory access. Process sharing
is like round-robin in that processors take their turn accessing
memory in the order they attempt to get in, but they are guaranteed
access because of some global device, like in time sharing. First
come first serve simply operates like a FIFO queue, but provides no
guarantee that a process will give up the bus to allow others in.
All of the combinations of these ideas can be analyzed, and have
been in several different papers. Again, see Marsan p. 157 for a
full bibliography of works. A simplified queuing network model or
Markov chain was successfully employed to deal with simple cases in
which access times and/or active times were equally exponentially
distributed, but Petri nets were used by Marsan to get results for
the more complex models where distribution was hyper-exponential,
Erlang, or more complicated.
In all cases, the numerical results of Marsan seem to indicate that
modeling with a generalized stochastic Petri net can produce
approximate simulation results fairly close to the analytical
results for simpler single-bus shared memory systems. The numerical
results show that the queueing discipline is not an important
consideration when modeling small systems. Since Petri nets are more
graphical and more intuitive than analytical methods, and because
the mathematics become complicated for very complex systems, the
recommended method for modeling anything complex is Generalized
Stochastic Petri nets as described in the 2nd half of Marsan
\cite{marsan} (Chapter 7 and the beginning of Chapter 8).
\subsection{Multi-Bus Shared Memory Systems}
Multi-bus Shared Memory systems can be modeled with Petri nets using
the same techniques as single-bus shared memory systems. Generally,
the number of buses are represented as tokens in a resource pool,
which is a place. Depending upon the architecture of the system, and
because of Marsan's simplifying assumptions for shared memory
systems, immediate transitions usually follow the global bus
resource pool (capturing and releasing being instantaneous, while
active and access times are exponentially distributed).
Marsan gives a full analytical treatment to many complicated
multi-bus systems, but that is unnecessary since most systems can be
easily intuitively modeled with Petri nets. Marsan et al. first
obtains upper and lower bounds limiting the gain or loss of
processing power, so any numerical results given by Petri net
modeling should fall within the bounds given in \cite{marsan} (p.
186-189). This is generally a good methodology, since most
bottlenecks in this configuration are assumed to stem from global
bus contention.
\section{Distributed Memory Systems}
Shared memory systems are not the only type of distributed system. A
true distributed system uses a paradigm more like message passing.
This architecture consists of one or more global buses connecting
local buses. Each local bus contains a processor and at least one
memory element. Unlike previous shared memory systems, there is no
memory connected to the global bus(es). Instead, the memory at each
local bus is used by any processor in the system, and both local and
global buses must be secured at each access step. An example of this
type of system is a computer network utilizing file sharing. Because
of the high dynamism of such networks, analysis of any but the
simplest distributed memory systems is impossible with queueing
nets. Marsan \cite{marsan} proposes tens of architectures that are
both single global bus and multiple global bus and also incorporate
distributed memory, but does not give any indication of extending
this methodology to generalized distributed systems.
To model any distributed memory systems, a Petri net should be
employed, and numerical results obtained. The reachability tree of
the Petri net can then be used to discover some analytical artifacts
of the system, such as whether it is live, as described in
\cite{murata}.
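For bounded nets, the set of reachable markings can be enumerated
directly; a minimal Python sketch (assuming markings are represented as
tuples of token counts per place; note it terminates only for bounded
nets):
\begin{verbatim}
from collections import deque

def reachable_markings(m0, transitions):
    # m0: tuple of token counts per place
    # transitions: list of (consume, produce) tuples of the same length
    seen = {m0}
    frontier = deque([m0])
    while frontier:
        m = frontier.popleft()
        for consume, produce in transitions:
            if all(t >= c for t, c in zip(m, consume)):   # enabled?
                nm = tuple(t - c + p
                           for t, c, p in zip(m, consume, produce))
                if nm not in seen:
                    seen.add(nm)
                    frontier.append(nm)
    return seen
\end{verbatim}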
\section{Extended Petri Nets}
Bucci \cite{bucci} describes an extended version of Petri nets where
the continuous time model is replaced with discrete time events. A
global clock of some sort . A ``firing event'' occurs whenever the
random variable distribution coincides {\em as well as} the time has
come for that transition to fire. The idea of this extension is that
a Petri net can operate within a discrete time domain, following a
global clock, while keeping the ability to induce transitions from
state-to-state concurrently (ie using threads or a similar
mechanism). This dramatically increases the simulation uses of GSPN
models, but does not really add much to their descriptive power.
Bucci's method should be taken into account any time that a
simulation must occur, especially in a simulation with lots of
simultaneity.
\section{Fault Tolerance Modeling}\label{fault tolerance}
If systems include repairable components, a GSPN event (place /
timed transition) can represent the failure and repair of any
component in the system, just as such events represent memory accesses
and other stochastic activity elsewhere in the model.
\section{Analysis and Verification}\label{analysis}
Generally, any sort of Petri Net model is good for modeling a
specific system, but can only provide numerical results. In order to
assure that those results are statistically significant, many runs
(at least 30) should be performed in simulation, and every variable
must be checked for consistency with a real system. One technique
which is liberally applied by many \cite{murata} \cite{marsan} is to
use a verifiable Markovian process or queueing network to provide
upper and lower analytical bounds. At this point, the system can
simply be built and run. Analysis can be extremely difficult, so
most of the time the Petri net approach will be more useful for the
budding system developer, without rigorous mathematical background.
\begin{thebibliography}{5}
\bibitem{marsan} Marsan, N. Balbo, G. Conte, G. "Stochastic Petri Nets." Performance Models of Multiprocessor Systems. Cambridge: MIT Press. 1986. pp. 72 - 98.
\bibitem{bucci} Bucci, G. Correctness Verification and Performance Analysis Using Stochastic Preemptive Time Petri Nets. IEEE Transactions on Software Engineering. Vol 23, No 11. November 2005. pp. 913 - 937.
\bibitem{murata} Murata, T. Petri Nets: Properties, Analysis, and Applications. Invited Paper. Proceedings of the IEEE. Vol 77. No 4. April 1989. pp. 541 - 579.
\bibitem{bhan} Bhandarkar, D. Analysis of Memory Interference in
Multiprocessors. IEEE Transactions on Computers Vol 24. No 9.
September 1975. 897-908.
\bibitem{rau} Rau, B. Interleaved Memory Bandwidth in a Model of a Multiprocessor Computer System. IEEE Transactions on Computers Vol 28. No 9.
September 1979. 678 - 681.
\end{thebibliography}
\end{document}
| {
"alphanum_fraction": 0.7918489661,
"avg_line_length": 52.8006329114,
"ext": "tex",
"hexsha": "99d994b062553bb88492e839f37b0442f023822c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c011561fc0bf3101766ff86380a33b36d19dedfb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "twinbee/analModels",
"max_forks_repo_path": "analysis and modeling final.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c011561fc0bf3101766ff86380a33b36d19dedfb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "twinbee/analModels",
"max_issues_repo_path": "analysis and modeling final.tex",
"max_line_length": 210,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c011561fc0bf3101766ff86380a33b36d19dedfb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "twinbee/analModels",
"max_stars_repo_path": "analysis and modeling final.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3767,
"size": 16685
} |
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{article}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={Annual Report 2021},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage[margin=1in]{geometry}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
\usepackage{amsmath}
\usepackage{booktabs}
\usepackage{caption}
\usepackage{longtable}
\title{Annual Report 2021}
\author{}
\date{\vspace{-2.5em}}
\begin{document}
\maketitle
\hypertarget{purpose}{%
\section{Purpose}\label{purpose}}
Oral reading fluency (ORF), generally defined as reading quickly,
accurately, and with prosody, is an essential part of reading
proficiency. Prosody, reading with appropriate expression and phrasing,
is one way to demonstrate that a reader understands the meaning of the
text.
The purpose of this study is to collect prosody ratings of audio
recordings of students' ORF. These human-rated prosody scores will serve
as the basis for training an algorithm that can be used to automatically
generate prosody scores from students' oral reading.
\hypertarget{audio-recordings}{%
\section{Audio Recordings}\label{audio-recordings}}
Audio recordings of students in Grades 2 through 4 reading brief ORF
passages were collected as part of an
\href{https://ies.ed.gov/funding/grantsearch/details.asp?ID=1492}{IES
funded project} called Computerized Oral Reading Evaluation, or
\href{https://jnese.github.io/core-blog/}{CORE}. CORE combines automatic
speech recognition (ASR) to score ORF accuracy and rate, with a latent
variable psychometric model to scale, equate, and link scores across
Grades 2 through 4. The primary goal of CORE is to develop an ORF
assessment system with the potential to reduce: (a) human ORF
administration errors, by standardizing administration setting,
delivery, and scoring; (b) the time cost of ORF administration, by
allowing small-group or whole-classroom testing; (c) the resource cost
to train staff to administer and score the ORF assessment; and (d) the
standard error of ORF measurement.
The work conducted in the
\href{https://ies.ed.gov/funding/grantsearch/details.asp?ID=3427}{current
project} extends this line of research by incorporating prosody into the
measurement model.
The
\href{https://jnese.github.io/core-blog/posts/2019-04-12-consequential-validity-study-procedures/}{Consequential
Validity Study} from the original CORE project conducted in 2017-18 and
2018-19 resulted in the accumulation of 90,720 audio files. Of these,
8,713 were excluded from the current study because they were recordings
of students reading the criterion easyCBM ORF passages from the
original study; the remaining 82,007 (90.4\%) were recordings of
students reading brief (approximately 50-85 word) passages developed
specifically for the CORE project. From the 82,007 eligible audio
recordings, only those at least ten seconds long were selected (to
screen for empty or incomplete files), yielding a final corpus of
78,712 audio files.
\hypertarget{core-orf-passages}{%
\subsection{CORE ORF Passages}\label{core-orf-passages}}
CORE passages were written by a former teacher, who also co-wrote the
original easyCBM ORF and reading comprehension passages. Each CORE
passage is an original work of fiction, and within 5 words of a targeted
length: \emph{long} = 85 words or \emph{medium} = 50 words. Each passage
has a beginning, middle, and end, follows either a
``problem/resolution'' or ``sequence of events'' format, and contains
minimal use of dialogue and symbols. Passages excluded religious
themes; trademarked names, places, and products; cultural/ethnic
depictions; and age-inappropriate themes (e.g., violence, guns,
tobacco, drugs). All final CORE passages were reviewed
by two experts in assessment for screening and progress monitoring for
errors (e.g., format and grammatical), and bias (e.g., gender, cultural,
religious, geographical). Final passages included 150 total passages, 50
at each of Grades 2-4, with 20 long passages (80-90 words), and 30
medium passages (45-55 words) for each grade.
\hypertarget{audio-file-selection}{%
\subsection{Audio File Selection}\label{audio-file-selection}}
For the current study, a two-step process was used to select 200 audio
files for 10 CORE ORF passages at each of Grades 2 through 4.
First, for each grade and passage length the 5 CORE passages with the
greatest number of audio file records were selected to create as large
an item bank as possible. This process resulted in the selection of 10
CORE passages (5 long and 5 medium) for each of Grades 2 -- 4, 30
passages in all.
Second, stratified random sampling was applied to select 200 audio
recordings of each CORE passage, oversampling for English learners (ELs)
and students with disabilities (SWDs), two student groups for which the
ASR may be less accurate. Of the 1,388 students in the full sample,
approximately 2\% were dually classified as EL and SWD, 13\% were
classified as EL only, 16\% were classified as SWD only, and 70\% were
classified as neither EL nor SWD (see the demographic table below).
The stratified random sampling plan targeted the following composition
for each passage's 200 sampled audio files: 5 recordings (2.5\%) of
students dually classified as EL and SWD, 65 (32.5\%) of students
classified as EL only, 65 (32.5\%) of students classified as SWD only,
and 65 (32.5\%) of students classified as neither EL nor SWD. A
cascading logic was implemented, such that when fewer than 5 recordings
included students dually classified as EL and SWD, the remainder was
sampled from students classified as EL only. If there were insufficient
audio recordings from EL only students, the remainder was sampled from
students classified as SWD only. Any remaining shortfall was sampled
from students classified as neither EL nor SWD, of which there were
ample recordings; this logic is sketched below.
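A minimal sketch of this cascading logic follows (in Python; the
stratum labels, data layout, and \texttt{sample\_passage} helper are
illustrative assumptions, not the study's actual code):
\begin{verbatim}
# Illustrative sketch of the cascading sampling described above;
# stratum labels and data layout are assumptions, not the study code.
import random

def sample_passage(files_by_group, rng=None):
    # files_by_group maps each stratum label to its list of files.
    rng = rng or random.Random(0)
    targets = [("el_swd", 5), ("el", 65), ("swd", 65), ("neither", 65)]
    picked, shortfall = [], 0
    for group, target in targets:
        want = target + shortfall          # carry any deficit forward
        pool = files_by_group.get(group, [])
        take = min(len(pool), want)
        picked.extend(rng.sample(pool, take))
        shortfall = want - take            # cascades to the next stratum
    return picked                          # 200 files when pools suffice
\end{verbatim}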
The design of the project stipulated that each of the 200 audio files
per CORE passage was to be rated for prosody by two different raters,
for a total of 12,000 prosody ratings (10 passages * 3 grade levels *
200 recordings * 2 ratings = 12,000 total prosody ratings).
The 6,000 audio files were grouped into 120 sets of 50 for distribution
to human raters. The 200 audio files per CORE passage were split into
four sets, such that each set of 50 contained audio files of students
reading the same passage. This structure was used to allow raters to get
familiar with a passage and thus provide more reliable ratings. The PI
(J. F. T. Nese) manually distributed the sets as required, descending by
grade and passage such that all four sets of the first Grade 4 passage
were sent to the first eight raters (as each set was rated twice), and
continuing through the last Grade 2 passage.
Of the 6,000 selected audio files, 836 (14\%) had to be replaced
because they had no audio available to score: either there was no audio
(e.g., the student was muted or advanced without reading), or the audio
did not allow the rater to confidently give a prosody score (e.g., poor
audio quality, too much background noise, a very quiet reader). All
audio files were replaced with a reading from the same CORE passage.
For \emph{n} audio files that needed to be replaced for a CORE passage,
\emph{n} \(\times\) 1.175 recordings (17.5\% more than \emph{n}) were
sampled, to allow for replacement recordings that themselves had no
usable audio. An effort was made to replace audio files read by a student with
previously described was applied, such that when the number of
recordings for students dually classified as EL and SWD was less than
required in our sampling plan, the remainder was sampled from students
classified as EL only. If there were insufficient audio recordings from
EL only students, the remainder was sampled from students classified as
SWD only. Insufficient recordings led to the remainder of audio
recordings being sampled from students classified as neither EL nor SWD,
of which there were ample recordings. An additional 998 audio files were
distributed to the human raters as replacements.
After the 998 audio file replacements were scored, there remained five
CORE passages with fewer than 200 audio files that had two different
prosody ratings: three CORE passages had 199 audio files, and two had
197. For the \emph{n} (1 or 3) audio files that still needed to be
replaced for a CORE passage, \emph{n} \(\times\) 7 recordings were
sampled, to allow for replacement recordings with no usable audio.
These audio files were randomly sampled (without
stratifying for ELs and SWDs) from those remaining for the respective
CORE passages.
After all selected and usable audio files were rated twice, the final
sample included 14,122 audio files from 1,388 students (4,900 in Grade
2, 4,650 in Grade 3, 4,572 in Grade 4). The number of audio files per
student in the final sample ranged from 2 to 44.
The results of the stratification yielded a sample of 14,122 audio
files that was 3\% (\emph{n} = 358) EL and SWD, 24\% (\emph{n} = 3,414)
EL only, 30\% (\emph{n} = 4,208) SWD only, and 43\% (\emph{n} = 6,142)
neither EL nor SWD.
\captionsetup[table]{labelformat=empty,skip=1pt}
\begin{longtable}{lcc}
\caption*{
\large Sample Demographic Characteristics\\
} \\
\toprule
& \textbf{Students} & \textbf{Audio Files} \\
\textbf{Characteristic} & \textbf{N = 1,388}\textsuperscript{1} & \textbf{N = 14,122}\textsuperscript{1} \\
\midrule
Grade & & \\
2 & 485 (35\%) & 4,900 (35\%) \\
3 & 450 (32\%) & 4,650 (33\%) \\
4 & 453 (33\%) & 4,572 (32\%) \\
Gender & & \\
Female & 609 (49\%) & 5,646 (45\%) \\
Male & 643 (51\%) & 6,930 (55\%) \\
(Missing) & 136 & 1,546 \\
Ethnicity & & \\
Hispanic/Latino & 327 (26\%) & 4,188 (33\%) \\
Not Hispanic/Latino & 925 (74\%) & 8,388 (67\%) \\
(Missing) & 136 & 1,546 \\
Race & & \\
American Indian/Native Alaskan & 52 (4.2\%) & 658 (5.2\%) \\
Asian & 9 (0.7\%) & 130 (1.0\%) \\
Black/African American & 5 (0.4\%) & 56 (0.4\%) \\
Hispanic & 45 (3.6\%) & 354 (2.8\%) \\
Multi-Racial & 108 (8.6\%) & 1,030 (8.2\%) \\
Native Hawaiian/Other Pacific Islander & 4 (0.3\%) & 48 (0.4\%) \\
White & 1,029 (82\%) & 10,300 (82\%) \\
(Missing) & 136 & 1,546 \\
Students with Disabilities (SWD) & 242 (17\%) & 4,566 (32\%) \\
English Language Learners (EL) & 197 (14\%) & 3,772 (27\%) \\
Stratification Groups & & \\
EL \& SWD & 23 (1.7\%) & 358 (2.5\%) \\
EL only & 174 (13\%) & 3,414 (24\%) \\
Not EL or SWD & 972 (70\%) & 6,142 (43\%) \\
SWD only & 219 (16\%) & 4,208 (30\%) \\
\bottomrule
\end{longtable}
\vspace{-5mm}
\begin{minipage}{\linewidth}
\textsuperscript{1}n (\%) \\
\end{minipage}
\hypertarget{research-team}{%
\section{Research Team}\label{research-team}}
The
\href{https://jnese.github.io/coreprosody/research_team.html}{research
team} comprised four faculty with expertise in the assessment of
students' reading fluency (specializations included: two doctorates in
School Psychology, one doctorate in Educational Leadership with a
specialization in Learning Assessment/Systems Performance, and one
doctorate in Educational Psychology), and one graduate research
assistant with experience in literacy. The research team met weekly from
August through November 2020, to refine a prosody scoring rubric, score
audio files to be used as training and demonstration exemplars, and
develop two online sessions to train prosody raters. These sessions were
delivered live as well as recorded for asynchronous delivery for raters
who were unable to attend in person.
\hypertarget{prosody-rubric-development}{%
\section{Prosody Rubric Development}\label{prosody-rubric-development}}
The research team began with the prosody scoring rubric developed by the
National Assessment of Educational Progress (NAEP; \footnote{\href{https://nces.ed.gov/nationsreportcard/pdf/studies/2006469.pdf}{Daane
et al., 2005}}), a four-point scale (below) that focuses on phrasing,
adherence to the author's syntax, and expressiveness to assess prosody
at Grade 4.
\begin{figure}
\includegraphics[width=17.58in]{C:/Users/jnese/Desktop/BRT/GRANT-CORE_II/project/coreprosody/docs/images/naep_rubric} \caption{From \href{https://nces.ed.gov/nationsreportcard/pdf/studies/2006469.pdf}{Daane, Campbell, Grigg, Goodman, \& Oranje (2005)}}\label{fig:unnamed-chunk-1}
\end{figure}
Although NAEP only applied the scoring rubric to Grade 4, our research
team made the decision to use the rubric across Grades 2 through 4,
independent of grade and based on the absolute prosody criteria
specified for each of the four prosody levels.
To help draw clear differences between the four prosody levels across
grades, parts of the Multi-Dimensional Fluency Scoring Guide (MFSG;
\footnote{\href{https://www.tandfonline.com/doi/pdf/10.1080/19388070802468715}{Rasinski
et al., 2009}}) were incorporated into the original NAEP rubric.
\begin{figure}
\includegraphics[width=17.58in]{C:/Users/jnese/Desktop/BRT/GRANT-CORE_II/project/coreprosody/docs/images/naep_rubric} \caption{From \href{https://www.tandfonline.com/doi/pdf/10.1080/19388070802468715}{Rasinski, Rikli, \& Johnston (2009)}}\label{fig:unnamed-chunk-2}
\end{figure}
The MFSG focuses on assessing aspects of expression, phrasing,
smoothness, pacing, and accuracy. The research team expanded and refined
the NAEP prosody rubric with select parts of the MFSG to add more
specific language and examples.
A systematic process for adapting the NAEP rubric was conducted in
August and September, 2020. First, 30 audio recordings were dispersed
among the research team and scored individually by the four faculty.
These scores and commentary were documented, analyzed, and discussed
during the following week's meetings. A summary of the team's individual
scores was presented, highlighting areas of agreement and disagreement:
9 audio files (30\%) received the same score across all four raters; 13
(43\%) received the same score across three raters with the fourth
rating different by one prosody level; 4 (13\%) were split down the
middle, with two sets of identical scores that differed by one prosody
level; and 4 (13\%) received three different prosody scores, two of
which were scored the same and two of which differed by two prosody
levels. Based on inconsistent variation within the team, it was decided
that more in-depth explanation was needed for each of the score levels.
To achieve this goal, the team listened to recordings together during
online meetings and iteratively specified deeper distinctions between
adjacent scores using the MFSG factors of pace, phrasing, and expression
and volume. The 30 audio recordings were again scored individually by
the four faculty: 12 (40\%) audio files (30\%) received the same score
across all four raters; 12 (40\%) received the same score across three
raters, with the fourth rating different by one prosody level; and 6
(20\%) were split down the middle, with two sets of identical scores
that differed by one prosody level.
The team further refined the adapted rubric to clarify rating criteria
and arrive at more unequivocal prosody scores. For example, the first
version of the adapted rubric did not address whether the overall
storyline was ``represented'' by the reader. After working through
various examples, the research team added the following distinctions for
each proficiency level (italic text represents additions from the MFSG,
and regular text represents additions made by the research team).
\begin{itemize}
\tightlist
\item
\textbf{Level 1}: \emph{Reads slowly and laboriously.} Story line is
incoherent.
\item
\textbf{Level 2}: \emph{Reads moderately slowly.} Overall meaning of
the text is preserved.
\item
\textbf{Level 3}: \emph{Reads with a mixture of run-ons, mid-sentence
pauses for breath, and some choppiness. There is reasonable stress and
intonation.}
\item
\textbf{Level 4}: \emph{Reads smoothly with some breaks, but
self-corrects with difficult words and/or sentence structures. Reads
with varied volume and expression (like talking to a friend with voice
matching the interpretation of the passage).}
\end{itemize}
\hypertarget{the-core-prosody-rubric}{%
\subsection{The CORE + Prosody Rubric}\label{the-core-prosody-rubric}}
\includegraphics[width=19.96in]{C:/Users/jnese/Desktop/BRT/GRANT-CORE_II/project/coreprosody/docs/images/core-ii_rubric}
\hypertarget{exemplar-audio-files}{%
\subsection{Exemplar Audio Files}\label{exemplar-audio-files}}
After finalizing the refined rubric, the research team came to unanimous
agreement on the 30 audio files. Then, additional audio files were
sought with the goal of having 15 exemplar audio files for each of the
four prosody levels. ORF data from the CORE project were used to find
(de-identified) students whose fall easyCBM ORF scores clustered around
a specified percentile; for example, students who scored at or below the
20th percentile as potential candidates for prosody scores of Levels 1
or 2, and students who scored above the 90th percentile as potential
candidates for a prosody score of Level 4. Using this process, the team
identified an additional 31 audio files, each of which was
independently scored by two of the five research team members. Of
these, 21 (68\%) received the same score across the two raters. The
remaining 10 audio files were scored by a third member of the research
team, and discussed by the full research team until unanimous score
agreement was achieved. Additional exemplar audio files were still
needed for Levels 2 and 4, so 16 additional files were identified and
underwent the same process just described.
In total, 81 passages were identified and scored by the research team as
exemplars for trainings and demonstrations: 20 at Level 4, 23 at Level
3, 15 at Level 2, and 23 at Level 1. Of these, 24 were used for
Training, 25 were used for Certification (both described below), and the
remaining 32 were retained in case of future need.
Human prosody raters were recruited and required to complete two
\href{https://jnese.github.io/coreprosody/human_prosody_scoring.html\#training-development-implementation}{Training
Sessions}, and meet
\href{https://jnese.github.io/coreprosody/human_prosody_scoring.html\#prosody-certification}{Prosody
Certification} criterion.
\hypertarget{prosody-rater-recruitment}{%
\section{Prosody Rater Recruitment}\label{prosody-rater-recruitment}}
Educators (teachers and specialized professionals) were targeted as
potential prosody raters. Potential prosody raters were recruited in
October -- November 2020 from two sources: teacher participants from the
original CORE project, and through an announcement placed on the
easyCBM Lite and Deluxe sites for three weeks (10/19/2020 --
11/6/2020). These
two easyCBM sites have over 79,000 registered users.
\begin{quote}
Paid Opportunity!
\begin{itemize}
\tightlist
\item
  We're looking for Grade 2-5 teachers interested in earning a little
  extra money scoring oral reading for prosody (expressiveness).
\item
  All work can be done remotely on your own time.
\item
  No prior experience scoring prosody needed (we'll provide the
  training).
\item
  The more readings you score, the more money you will earn!
\item
  This project is part of an IES-funded research study to develop
  computer-scored oral reading fluency measures.
\item
  For more information, please email: [email protected]
\end{itemize}
\end{quote}
Approximately 300 people responded to the announcement posted on the
easyCBM sites. These respondents were then sent an email introducing
them to the project, the task required of them, what they could expect
(payment terms, remote work, and work commitment), training
requirements, the prosody certification process, and next steps (a
Qualtrics Registration form requiring demographic information, teaching
experience, and a W9). The PI (J. F. T. Nese) corresponded with all
potential prosody raters throughout the process.
\begin{quote}
Thank you for expressing interest in the CORE II Prosody Study! My name
is Joe Nese, I am an education researcher at the University of Oregon,
and I am leading the project.
\end{quote}
\begin{quote}
\textbf{Task:} For this project, we have thousands of audio files from
students in Grades 2-4, reading very short passages (less than 90
seconds each). We need educators (you!) to listen to these audio files
and rate them for prosody -- the expressiveness with which the student
read the passage. These scores will serve as the basis for our
subsequent study to automatically generate prosody scores, which will
provide teachers with important information about their students'
reading. All of the audio files were collected in a previous research
study.
\end{quote}
\begin{quote}
\textbf{What to expect:} You will be paid \$1.25 for every passage you
rate. Each passage is no more than 90 seconds in length, and we expect
you will listen to a passage 2-3 times before you submit your rating.
You can make up to \$25 per hour if you are working carefully and
efficiently. Payment will not be processed until all work is complete
(i.e., all audio files are scored), but no later than April 30, 2021.
\end{quote}
\begin{quote}
You will work remotely, from your computer! We have an online system so
that you can log in and listen to and score the audio files assigned to
you. The work will start in November and continue until all audio files
are scored. This could take several weeks, or several months, depending
on how quickly everyone works. Regardless, you can work on your own
time. But we ask that if you sign up, you commit to scoring at least 100
audio files (approximately 5-7 hours of work). You must score at least
50 passages before any payment will be remitted.
\end{quote}
\begin{quote}
\textbf{Before you begin:} Before you start, you will need to attend two
trainings (about 3-4 hours, total), where we will teach you everything
you need for this project. You will be paid \$15/hour for the training
sessions. You will also need to demonstrate proficiency in rating
prosody by scoring 80\% or higher on two different sets of audio files
which have previously been scored by the research team. If you do not
meet the criteria, you will not be eligible to participate in the
project (but you will be paid for attending the trainings, of course).
\end{quote}
\begin{quote}
We have scheduled two dates for each of the two required training
sessions. You may choose either of the two dates listed for each
training, but you must attend both Training Session \#1 and Training
Session \#2.
\begin{itemize}
\tightlist
\item
  Training Session \#1: Nov 13 or Nov 16, 3:30-5:30 Pacific
\item
  Training Session \#2: Nov 20 or Nov 23, 3:30-5:00 Pacific
\end{itemize}
\end{quote}
\begin{quote}
If you are interested in participating in this project, please click the
link below to enroll! You will answer some questions about yourself,
sign up for training dates, and importantly, submit a W9 so that you can
be paid for your work. You cannot join the project without completing
and submitting the W9. The blank W9 is embedded in the survey, and also
attached here. It will likely be easier to complete the attached W9
before you start the survey so that it is ready to upload.
\end{quote}
\begin{quote}
Click the link below to sign-up!
\end{quote}
\begin{quote}
Thank you so much for your interest in partnering with us on this
important work. We are grateful for all you do on behalf of children.
\end{quote}
Of the 300 respondents, 119 completed the Registration form, and 78
completed the required trainings. (No information is available as to why
some chose not to complete the Registration or the trainings.)
\hypertarget{prosody-certification}{%
\section{Prosody Certification}\label{prosody-certification}}
Prior to starting work, each prosody rater was required to complete the
Training and demonstrate scoring proficiency by obtaining 80\% or higher
agreement with the research team's pre-determined rating on two
different sets of five audio files. Raters had five opportunities to
achieve at least 80\% (4/5) on two of the prosody assessments. Raters
unable to achieve two passing scores received payment for their
participation in training (\$45) but were not eligible to continue
their participation in this project.
All Prosody Certification Assessments were delivered with Google Forms'
Quiz feature. All participants took the first Prosody Certification
after Training \#1 and before Training \#2. The remaining Prosody
Certification assessments were taken at each participant's pace. Once a
participant met the prosody certification criteria by scoring at least
80\% on two assessments, they began scoring audio files (and took no
more assessments).
Of the 78 people who completed the
\href{https://jnese.github.io/coreprosody/human_prosody_scoring.html\#training-development-implementation}{Trainings},
63 (81\%) met prosody certification, 2 (3\%) failed to meet
certification, and 13 (17\%) did not complete the certification process.
\captionsetup[table]{labelformat=empty,skip=1pt}
\begin{longtable}{lcclcl}
\caption*{
\large Prosody Certification Assessment Passing Rates\\
} \\
\toprule
& n & Fail & Fail (\%) & Pass & Pass (\%) \\
\midrule
Certification \#1 & 78 & 37 & 47\% & 41 & 53\% \\
Certification \#2 & 73 & 20 & 27\% & 53 & 73\% \\
Certification \#3 & 41 & 6 & 15\% & 35 & 85\% \\
Certification \#4 & 10 & 5 & 50\% & 5 & 50\% \\
Certification \#5 & 4 & 1 & 25\% & 3 & 75\% \\
\bottomrule
\end{longtable}
Of the 78 people who took Certification \#1, 53\% passed by scoring 4 or
5; 73\% of the 73 people who took Certification \#2 passed; 85\% of the
41 people who took Certification \#3 passed; 50\% of the 10 people who
took Certification \#4 passed; and 75\% of the 4 people who took
Certification \#5 passed.
\includegraphics{annual_report_2021_files/figure-latex/traincert_plot-1.pdf}
\hypertarget{training-development-implementation}{%
\section{Training Development \&
Implementation}\label{training-development-implementation}}
A two-session training for prosody raters was developed for in-person,
online delivery across two meetings in November 2020. Each Training
Session was delivered twice: on Friday and the subsequent Monday
afternoon (after the school day had concluded), and participants could
attend either the Friday or the Monday training. There was one week
between Training Session \#1 and Session \#2. For Training Session \#1,
Day 1 was held on 11/13/2020 and Day 2 was held on 11/16/2020. For
Training Session \#2, Day 1 was held on 11/20/2020 and Day 2 was held on
11/23/2020.
Three members of the team were present to deliver content and answer
questions using Powerpoint slides presented on the Zoom platform for
web-based, live interaction. All trainings were recorded via Zoom for
asynchronous training for participants who could not attend one or both
of the live trainings. In total, 78 people completed the trainings.
\captionsetup[table]{labelformat=empty,skip=1pt}
\begin{longtable}{lccc}
\caption*{
\large Training Sessions Attendance\\
} \\
\toprule
Training Session & Day 1 & Day 2 & Asynchronous \\
\midrule
Session \#1 & 35 & 35 & 8 \\
Session \#2 & 34 & 35 & 9 \\
\bottomrule
\end{longtable}
The research team created a
\href{https://jnese.github.io/CORE-II_trainingwebsite/index.html}{website}
for participants to access training resources. The website included: the
\href{https://jnese.github.io/CORE-II_trainingwebsite/\#the-task}{seven-step
process} for scoring audio files; the
\href{https://jnese.github.io/CORE-II_trainingwebsite/prosody_rubric.html}{prosody
rubric} as a resource to print or keep open when rating;
\href{https://jnese.github.io/CORE-II_trainingwebsite/training_materials.html}{training
materials}, including the presentation slides, and a recording of each
Training Session; the 24
\href{https://jnese.github.io/CORE-II_trainingwebsite/exemplar_audiofiles.html}{exemplar
audio files} from Training Session \#1; the link to the scoring site;
and an
\href{https://ies.ed.gov/funding/grantsearch/details.asp?ID=34270}{About}
page with information about the study.
\hypertarget{training-session-1}{%
\subsection{Training Session \#1}\label{training-session-1}}
During Training Session \#1 (2 hours), study logistics and key concepts
were explained to potential raters.
\href{https://jnese.github.io/CORE-II_trainingwebsite/CORE_training_session1day2_FINAL.pdf}{Training}
included: information about the project context; a comprehensive review
of prosody; the task of rating audio recordings for prosody; an
explanation of the rubric and how to rate recordings; how to earn
certification as prosody rater; the expectations and payment structure;
a set of 12 exemplar audio files (about one recording per prosody level
for each of Grades 2 -- 4); and a practice exercise, consisting of 12
exemplar audio files (three at each of the four prosody levels presented
randomly) followed by a discussion of each and the qualities that made
it a specific prosody level.
Participants were also introduced to prosody scoring in partial
increments of 0.5 to facilitate prosody ratings in cases of nuanced
uncertainty. For example, if a rater's prosody rating was undecided
between Level 2 and 3, they could score it as a 2.5. For the purposes of
the study, all half scores (i.e., 1.5, 2.5, and 3.5) were rounded down
because they did not meet the threshold for a higher score.
The Training Session \#1 practice exercise involved three rounds of
listening to each audio file. The purpose of the first listen was to pay
attention to the passage's general meaning so that raters would have a
general sense of what the passage was about and the degree to which the
student's reading conveyed the meaning. The purpose of the second listen
was to train raters to pay attention to reading style (i.e.,
word-by-word, awkward word groups, conversational) and to notice whether
the author's syntax was preserved. The third listen was used to train
raters to pay attention to expressiveness. This step-by-step process was
designed to train raters to attend to all aspects of the rubric, and not
to focus exclusively on any single aspect. The team developed a
seven-step guide for listening to and scoring audio recordings.
\hypertarget{the-task}{%
\paragraph{The Task}\label{the-task}}
\begin{figure}
\includegraphics[width=14.53in]{C:/Users/jnese/Desktop/BRT/GRANT-CORE_II/project/coreprosody/docs/images/task_7steps} \end{figure}
After listening to example recordings to clarify the scale, participants
were able to practice on their own. Recordings were played, and
participants were asked to first think about how they would rate the
recording without sharing their scores, and then they were prompted to
type their prosody score for the recorded reading into the Zoom
platform's chat box feature. Participants' reasoning for scores was
discussed as a large group with research team members facilitating the
discussion and using the rubric to emphasize points made.
After Training Session \#1, participants were given the first Prosody
Certification assessment, which consisted of five audio files to be
scored individually, on their own time, before Training Session \#2.
Of the 78 people who took the Prosody Certification Assessment \#1, 41
(53\%) passed and 37 (47\%) did not. Note that Certification \#1 was
taken after Training Session \#1, before the entire Training process was
complete.
\includegraphics{annual_report_2021_files/figure-latex/unnamed-chunk-4-1.pdf}
\hypertarget{training-session-2}{%
\subsection{Training Session \#2}\label{training-session-2}}
One week later, participants again met with the research team for
Training Session \#2 (1.5 hours). Training Session \#2 consisted
primarily of a review of the five audio files from Prosody Certification
\#1. Participants were asked to listen to an audio file, with the
prosody score provided by the training facilitator, and to identify key
features that justified the score. After listening to an audio file,
they were asked to share in the Zoom platform's chat box prosody rubric
features (shown on the screen) of the reading that corresponded to the
score. The training facilitator read aloud and discussed the relevant
and important prosody score features, using the rubric to confirm scores
with participants. Participants were encouraged to ask questions if they
did not understand or disagreed with the prosody score. Each audio file
was played multiple times (three to six) to solidify the score and
rationale for the attendees. This process was repeated for each of the
five audio files.
Participants were then given an
\href{https://jnese.github.io/CORE-II_trainingwebsite/CORE_training_session2day2_FINAL.pdf}{introduction}
to and demonstration of the software that they would use to score the
audio files for prosody if they met certification. They were also
introduced to the
\href{https://jnese.github.io/CORE-II_trainingwebsite/index.html}{training
website}.
\hypertarget{prosody-rating-procedures}{%
\section{Prosody Rating Procedures}\label{prosody-rating-procedures}}
\hypertarget{prosody-raters-sample}{%
\subsection{Prosody Raters: Sample}\label{prosody-raters-sample}}
The final prosody sample included 57 prosody raters: 7 each from FL and
IL; 4 from OR; 3 each from IN, KS, NV, and OH; 2 each from GA, ID, KY,
MI, NC, UT, and VA; and 1 each from AL, AZ, CA, CO, LA, MT, NM, NY, PA,
SC, TN, TX, and WA. Nearly all raters (55) were female, 1 was
non-binary, and 1 chose not to respond.
\hypertarget{highest-degree-earned}{%
\paragraph{Highest Degree Earned}\label{highest-degree-earned}}
Among the prosody raters, 32 (56\%) earned a Master's degree in
education, 19 (33\%) earned a Bachelor's degree, 2 (4\%) earned a
Master's degree in another field, 2 (4\%) earned an Associate's degree,
and 2 (4\%) earned a Doctorate.
\includegraphics{annual_report_2021_files/figure-latex/degree_plot-1.pdf}
\hypertarget{professional-roles}{%
\paragraph{Professional Roles}\label{professional-roles}}
The professional roles of the raters were as follows:
\begin{itemize}
\item
17 (30\%) were special education teachers
\item
13 (23\%) were general education teachers
\item
10 (18\%) were reading/literacy specialists
\item
9 (16\%) reported as ``other''
\begin{itemize}
\tightlist
\item
(i.e., dyslexia program specialist; literacy tutor/data
analyst/testing coordinator; project consultant supporting literacy,
behavior and MTSS; RTI coordinator/interventionist; RTI specialist;
MTSS lead; EC compliance case manager; Master's student; and
undergraduate student studying elementary education)
\end{itemize}
\item
3 (5\%) reported as ``school psychologist \textbar{} social worker
\textbar{} counselor \textbar{} behavior specialist \textbar{} etc.''
\item
2 (4\%) reported as ``administrator \textbar{} principal \textbar{}
district support''
\item
2 (4\%) were retired, and reported their role as special educators
before retirement
\item
  1 (2\%) was an ``other content area specialist'' (i.e., ESOL)
\end{itemize}
\hypertarget{experience}{%
\paragraph{Experience}\label{experience}}
Nearly all of the prosody raters (\emph{n} = 48, 84\%) worked at the
elementary school level, 4 (7\%) worked at the middle school level, 2
(4\%) worked at the high school level, 1 (2\%) worked at the elementary
and middle levels, 1 (2\%) worked at all three levels, and 1 (2\%)
worked with adults.
The average experience as an educator was 14 years (\emph{SD} = 9.9).
\includegraphics{annual_report_2021_files/figure-latex/experience_plot-1.pdf}
\hypertarget{prosody-raters-certification}{%
\subsection{Prosody Raters:
Certification}\label{prosody-raters-certification}}
All 57 prosody raters met the prosody certification criteria by scoring
at least 80\% on two Prosody Certification Assessments. Note that
Certification \#1 was taken after Training Session \#1, before the
Training was complete.
\captionsetup[table]{labelformat=empty,skip=1pt}
\begin{longtable}{lcclcl}
\caption*{
\large Prosody Certification Assessment Passing Rates\\
} \\
\toprule
& n & Fail & Fail (\%) & Pass & Pass (\%) \\
\midrule
Certification \#1 & 57 & 26 & 46\% & 31 & 54\% \\
Certification \#2 & 55 & 12 & 22\% & 43 & 78\% \\
Certification \#3 & 31 & 2 & 6\% & 29 & 94\% \\
Certification \#4 & 6 & 2 & 33\% & 4 & 67\% \\
Certification \#5 & 2 & 0 & 0\% & 2 & 100\% \\
\bottomrule
\end{longtable}
Of the 57 people who took Certification \#1, 54\% passed by scoring 4 or
5; 78\% of the 55 people who took Certification \#2 passed; 94\% of the
31 people who took Certification \#3 passed; 67\% of the 6 people who
took Certification \#4 passed; and 100\% of the 2 people who took
Certification \#5 passed.
\includegraphics{annual_report_2021_files/figure-latex/raterscert_fig-1.pdf}
The plot below shows the pairs of Prosody Certification Assessments on
which raters met prosody certification. Twenty-five raters (44\%) passed
Certification \#1 and Certification \#2, 17 (30\%) passed Certification
\#2 and Certification \#3, 6 (11\%) passed Certification \#1 and
Certification \#3, 4 (7\%) passed Certification \#3 and Certification
\#4, 1 (2\%) passed Certification \#2 and Certification \#5, and 1 (2\%)
passed Certification \#3 and Certification \#5.
\includegraphics{annual_report_2021_files/figure-latex/upset_fig-1.pdf}
\hypertarget{prosody-ratings}{%
\subsection{Prosody Ratings}\label{prosody-ratings}}
The 57 certified prosody raters created a profile on a project-designed
Moodle site. Raters each created their own log-in information for the
system. The system allowed raters to skip audio files, go back and
change scores, and complete the rating in multiple sessions, stopping
and re-starting as needed. In addition to the seven-point prosody scale
(1, 1.5, 2, 2.5, 3, 3.5, 4), raters were also given an option to note
``No audio available to score'' in case there was no audio (e.g., the
student was muted or advanced without reading) or the audio did not
allow the rater to confidently give a score (e.g., poor audio quality,
too much background noise, a very quiet reader).
The prosody raters were instructed to first complete a Prosody Review
containing four exemplar files, one at each prosody level (i.e., 1, 2, 3,
4), prior to rating their first set of audio files. The expectation was
set in training that raters must complete a minimum of 50 audio files;
there was no maximum. Upon completion of the first set of recordings,
raters emailed PI Nese to receive another set of recordings to rate.
In this manner, all 14,122 audio recordings were rated by two different
prosody raters from November 28, 2020 through February 8, 2021.
The median number of audio files scored by prosody raters was 182, with
a range from 50 to 865 (\emph{Mean} = 248, \emph{SD} = 203).
\includegraphics{annual_report_2021_files/figure-latex/ratefiles_fig-1.pdf}
\end{document}
\chapter{Arithmetic}
\label{numberchapter}
\index{number}
This chapter describes Scheme's libraries for more specialized
numerical operations: fixnum and flonum arithmetic, as well as bitwise
operations on exact integer objects.
\section{Bitwise operations}
A number of procedures operate on the binary two's-complement
representations of exact integer objects: Bit positions within an
exact integer object are counted from the right, i.e.\ bit 0 is the
least significant bit. Some procedures allow extracting \defining{bit
fields}, i.e., number objects representing subsequences of the
binary representation of an exact integer object. Bit fields are
always positive, and always defined using a finite number of bits.
\section{Fixnums}
\label{fixnumssection}
Every implementation must define its fixnum range as a closed
interval
%
\begin{displaymath}
[-2^{w-1}, 2^{w-1} - 1]
\end{displaymath}
%
such that $w$ is a (mathematical) integer $w \geq 24$. Every
mathematical integer within an implementation's fixnum range must
correspond to an exact integer object that is representable within the
implementation.
A fixnum is an exact integer object whose value lies within this
fixnum range.
This section describes the \defrsixlibrary{arithmetic fixnums} library,
which defines various operations on fixnums.
Fixnum operations perform integer arithmetic on their fixnum
arguments, but raise an exception with condition type
{\cf\&implementation-restriction} if the result is not a fixnum.
This section uses \var{fx}, \vari{fx}, \varii{fx}, etc., as parameter
names for arguments that must be fixnums.
\begin{entry}{%
\rproto{fixnum?}{ obj}{procedure}}
Returns \schtrue{} if \var{obj} is an exact
integer object within the fixnum range, \schfalse{} otherwise.
\end{entry}
\begin{entry}{%
\rproto{fixnum-width}{}{procedure}
\rproto{least-fixnum}{}{procedure}
\rproto{greatest-fixnum}{}{procedure}}
These procedures return $w$,
$-2^{w-1}$, and $2^{w-1} - 1$: the
width, minimum, and maximum value of the fixnum range, respectively.
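For example, a hypothetical implementation with $w = 30$ would give:
\begin{scheme}
(fixnum-width) \ev 30
(least-fixnum) \ev -536870912
(greatest-fixnum) \ev 536870911%
\end{scheme}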
\end{entry}
\begin{entry}{%
\proto{fx=?}{ \vari{fx} \varii{fx} \variii{fx} \dotsfoo}{procedure}
\proto{fx>?}{ \vari{fx} \varii{fx} \variii{fx} \dotsfoo}{procedure}
\proto{fx<?}{ \vari{fx} \varii{fx} \variii{fx} \dotsfoo}{procedure}
\proto{fx>=?}{ \vari{fx} \varii{fx} \variii{fx} \dotsfoo}{procedure}
\proto{fx<=?}{ \vari{fx} \varii{fx} \variii{fx} \dotsfoo}{procedure}}
These procedures return \schtrue{} if their arguments are (respectively):
equal, monotonically decreasing, monotonically increasing,
monotonically nonincreasing, or monotonically nondecreasing,
\schfalse{} otherwise.
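For example:
\begin{scheme}
(fx=? 1 1 1) \ev \schtrue{}
(fx<? 1 2 3) \ev \schtrue{}
(fx<=? 1 1 2) \ev \schtrue{}
(fx>? 3 2 2) \ev \schfalse{}%
\end{scheme}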
\end{entry}
\begin{entry}{%
\proto{fxzero?}{ fx}{procedure}
\proto{fxpositive?}{ fx}{procedure}
\proto{fxnegative?}{ fx}{procedure}
\proto{fxodd?}{ fx}{procedure}
\proto{fxeven?}{ fx}{procedure}}
These numerical predicates test a fixnum for a particular property,
returning \schtrue{} or \schfalse{}. The five properties tested by
these procedures are: whether the number object is zero, greater than zero,
less than zero, odd, or even.
\end{entry}
\begin{entry}{%
\proto{fxmax}{ \vari{fx} \varii{fx} \dotsfoo}{procedure}
\proto{fxmin}{ \vari{fx} \varii{fx} \dotsfoo}{procedure}}
These procedures return the maximum or minimum of their arguments.
\end{entry}
\begin{entry}{%
\proto{fx+}{ \vari{fx} \varii{fx}}{procedure}
\proto{fx*}{ \vari{fx} \varii{fx}}{procedure}}
These procedures return the sum or product of their arguments,
provided that sum or product is a fixnum. An exception with condition
type {\cf\&implementation-restriction} is raised if
that sum or product is not a fixnum.
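For example:
\begin{scheme}
(fx+ 3 4) \ev 7
(fx* (greatest-fixnum) 2) \lev \exception{\&implementation-restriction}%
\end{scheme}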
\end{entry}
\begin{entry}{%
\proto{fx-}{ \vari{fx} \varii{fx}}{procedure}
\rproto{fx-}{ fx}{procedure}}
With two arguments, this procedure returns the difference
$\vari{fx}-\varii{fx}$, provided that difference is a fixnum.
With one argument, this procedure returns the additive
inverse of its argument, provided that integer object is a
fixnum.
An exception with condition type {\cf\&implementation-restriction} is raised if the
mathematically correct result of this procedure is not a fixnum.
\begin{scheme}
(fx- (least-fixnum)) \lev \exception{\&implementation-restriction}%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{fxdiv-and-mod}{ \vari{fx} \varii{fx}}{procedure}
\proto{fxdiv}{ \vari{fx} \varii{fx}}{procedure}
\proto{fxmod}{ \vari{fx} \varii{fx}}{procedure}
\proto{fxdiv0-and-mod0}{ \vari{fx} \varii{fx}}{procedure}
\proto{fxdiv0}{ \vari{fx} \varii{fx}}{procedure}
\proto{fxmod0}{ \vari{fx} \varii{fx}}{procedure}}
\domain{\varii{Fx} must be nonzero.}
These procedures implement number-theoretic integer division and
return the results of the corresponding mathematical operations
specified in report section~\extref{report:integerdivision}{Integer division}.
\begin{scheme}
(fxdiv \vari{fx} \varii{fx}) \ev \(\vari{fx}~\mathrm{div}~\varii{fx}\)
(fxmod \vari{fx} \varii{fx}) \ev \(\vari{fx}~\mathrm{mod}~\varii{fx}\)
(fxdiv-and-mod \vari{fx} \varii{fx}) \lev \(\vari{fx}~\mathrm{div}~\varii{fx}, \vari{fx}~\mathrm{mod}~\varii{fx}\)\\\>\>; \textrm{two return values}
(fxdiv0 \vari{fx} \varii{fx}) \ev \(\vari{fx}~\mathrm{div}\sb{0}~\varii{fx}\)
(fxmod0 \vari{fx} \varii{fx}) \ev \(\vari{fx}~\mathrm{mod}\sb{0}~\varii{fx}\)
(fxdiv0-and-mod0 \vari{fx} \varii{fx}) \lev \(\vari{fx}~\mathrm{div}\sb{0}~\varii{fx}, \vari{fx}~\mathrm{mod}\sb{0}~\varii{fx}\)\\\>\>; \textrm{two return values}%
\end{scheme}
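For instance:
\begin{scheme}
(fxdiv 123 10) \ev 12
(fxmod 123 10) \ev 3
(fxdiv -123 10) \ev -13
(fxmod -123 10) \ev 7
(fxdiv0 -123 10) \ev -12
(fxmod0 -123 10) \ev -3%
\end{scheme}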
\end{entry}
\begin{entry}{%
\proto{fx+/carry}{ \vari{fx} \varii{fx} \variii{fx}}{procedure}}
Returns the two fixnum results of the following computation:
%
\begin{scheme}
(let* ((s (+ \vari{fx} \varii{fx} \variii{fx}))
(s0 (mod0 s (expt 2 (fixnum-width))))
(s1 (div0 s (expt 2 (fixnum-width)))))
(values s0 s1))%
\end{scheme}
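For example, the definition above implies that adding 1 to the largest
fixnum wraps around:
\begin{scheme}
(fx+/carry (greatest-fixnum) 1 0) \lev (least-fixnum) 1\\\>\>; \textrm{two return values}%
\end{scheme}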
\end{entry}
\begin{entry}{%
\proto{fx-/carry}{ \vari{fx} \varii{fx} \variii{fx}}{procedure}}
Returns the two fixnum results of the following computation:
%
\begin{scheme}
(let* ((d (- \vari{fx} \varii{fx} \variii{fx}))
(d0 (mod0 d (expt 2 (fixnum-width))))
(d1 (div0 d (expt 2 (fixnum-width)))))
(values d0 d1))%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{fx*/carry}{ \vari{fx} \varii{fx} \variii{fx}}{procedure}}
Returns the two fixnum results of the following computation:
\begin{scheme}
(let* ((s (+ (* \vari{fx} \varii{fx}) \variii{fx}))
(s0 (mod0 s (expt 2 (fixnum-width))))
(s1 (div0 s (expt 2 (fixnum-width)))))
(values s0 s1))%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{fxnot}{ \var{fx}}{procedure}}
Returns the unique fixnum that is congruent
mod $2^w$ to the one's-complement of \var{fx}.
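For example:
\begin{scheme}
(fxnot 0) \ev -1
(fxnot 5) \ev -6%
\end{scheme}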
\end{entry}
\begin{entry}{%
\proto{fxand}{ \vari{fx} \dotsfoo}{procedure}
\proto{fxior}{ \vari{fx} \dotsfoo}{procedure}
\proto{fxxor}{ \vari{fx} \dotsfoo}{procedure}}
These procedures return the fixnum that is the bit-wise ``and'',
``inclusive or'', or ``exclusive or'' of the two's complement
representations of their arguments. If they are passed only one
argument, they return that argument. If they are passed no arguments,
they return the fixnum (either $-1$ or $0$) that acts as identity for the
operation.
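For example:
\begin{scheme}
(fxand \sharpsign{}b1100 \sharpsign{}b1010) \ev 8
(fxior \sharpsign{}b1100 \sharpsign{}b1010) \ev 14
(fxxor \sharpsign{}b1100 \sharpsign{}b1010) \ev 6
(fxand) \ev -1
(fxior) \ev 0%
\end{scheme}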
\end{entry}
\begin{entry}{%
\proto{fxif}{ \vari{fx} \varii{fx} \variii{fx}}{procedure}}
Returns the fixnum that is the bit-wise ``if'' of the two's complement
representations of its arguments, i.e.\ for each bit, if it is 1 in
\vari{fx}, the corresponding bit in \varii{fx} becomes the value of
the corresponding bit in the result, and if it is 0, the corresponding
bit in \variii{fx} becomes the corresponding bit in the value of the
result. This is the fixnum result of the following computation:
\begin{scheme}
(fxior (fxand \vari{fx} \varii{fx})
(fxand (fxnot \vari{fx}) \variii{fx}))%
\end{scheme}
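For example:
\begin{scheme}
(fxif \sharpsign{}b1100 \sharpsign{}b1010 \sharpsign{}b0110) \lev 10 ; \sharpsign{}b1010%
\end{scheme}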
\end{entry}
\begin{entry}{%
\proto{fxbit-count}{ \var{fx}}{procedure}}
If \var{fx} is non-negative, this procedure returns the
number of 1 bits in the two's complement representation of \var{fx}.
Otherwise it returns the result of the following computation:
%
\begin{scheme}
(fxnot (fxbit-count (fxnot \var{fx})))%
\end{scheme}
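For example:
\begin{scheme}
(fxbit-count 0) \ev 0
(fxbit-count 5) \ev 2
(fxbit-count -1) \ev -1%
\end{scheme}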
\end{entry}
\begin{entry}{%
\proto{fxlength}{ \var{fx}}{procedure}}
Returns the number of bits needed to represent \var{fx} if it is
positive, and the number of bits needed to represent {\cf (fxnot
\var{fx})} if it is negative, which is the fixnum result of the
following computation:
\begin{scheme}
(do ((result 0 (+ result 1))
(bits (if (fxnegative? \var{fx})
(fxnot \var{fx})
\var{fx})
(fxarithmetic-shift-right bits 1)))
((fxzero? bits)
result))%
\end{scheme}
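For example:
\begin{scheme}
(fxlength 0) \ev 0
(fxlength 5) \ev 3
(fxlength -5) \ev 3%
\end{scheme}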
\end{entry}
\begin{entry}{%
\proto{fxfirst-bit-set}{ \var{fx}}{procedure}}
Returns the index of the least significant $1$ bit in
the two's complement representation of \var{fx}. If
\var{fx} is $0$, then $-1$ is returned.
%
\begin{scheme}
(fxfirst-bit-set 0) \ev -1
(fxfirst-bit-set 1) \ev 0
(fxfirst-bit-set -4) \ev 2%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{fxbit-set?}{ \vari{fx} \varii{fx}}{procedure}}
\domain{\varii{Fx} must be non-negative.} The {\cf fxbit-set?} procedure returns
\schtrue{} if the \varii{fx}th bit is 1 in the two's complement
representation of \vari{fx}, and \schfalse{} otherwise. This is the
result of the following computation:
%
\begin{scheme}
(if (fx>=? \varii{fx} (fx- (fixnum-width) 1))
(fxnegative? \vari{fx})
(not
(fxzero?
(fxand \vari{fx}
(fxarithmetic-shift-left 1 \varii{fx})))))%
\end{scheme}
%
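For example:
\begin{scheme}
(fxbit-set? 5 0) \ev \schtrue{}
(fxbit-set? 5 1) \ev \schfalse{}%
\end{scheme}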
\end{entry}
\begin{entry}{%
\proto{fxcopy-bit}{ \vari{fx} \varii{fx} \variii{fx}}{procedure}}
\domain{\varii{Fx} must be non-negative and less than {\cf
$w-1$}. \variii{Fx} must be 0 or
1.} The {\cf fxcopy-bit} procedure returns the result of replacing
the \varii{fx}th bit of \vari{fx} by \variii{fx}, which is
the result of the following computation:
\begin{scheme}
(let* ((mask (fxarithmetic-shift-left 1 \varii{fx})))
(fxif mask
(fxarithmetic-shift-left \variii{fx} \varii{fx})
\vari{fx}))%
\end{scheme}
%
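For example:
\begin{scheme}
(fxcopy-bit 0 2 1) \ev 4
(fxcopy-bit 7 1 0) \ev 5%
\end{scheme}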
\end{entry}
\begin{entry}{%
\proto{fxbit-field}{ \vari{fx} \varii{fx} \variii{fx}}{procedure}}
\domain{\varii{Fx} and \variii{fx} must be non-negative and less than
$w$. Moreover, \varii{fx} must be less than or
equal to \variii{fx}.} The {\cf fxbit-field} procedure returns the
number represented by the bits at the positions from \varii{fx} (inclusive) to
$\variii{fx}$ (exclusive), which is
the fixnum result of the following computation:
%
\begin{scheme}
(let* ((mask (fxnot
(fxarithmetic-shift-left -1 \variii{fx}))))
(fxarithmetic-shift-right (fxand \vari{fx} mask)
\varii{fx}))%
\end{scheme}
%
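For example:
\begin{scheme}
(fxbit-field \sharpsign{}b1101 1 3) \ev 2
(fxbit-field \sharpsign{}b1101 0 4) \ev 13%
\end{scheme}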
\end{entry}
\begin{entry}{%
\proto{fxcopy-bit-field}{ \vari{fx} \varii{fx} \variii{fx} \variv{fx}}{procedure}}
\domain{\varii{Fx} and \variii{fx} must be non-negative and less than
$w$. Moreover, \varii{fx} must be less than or
equal to \variii{fx}.} The {\cf fxcopy-bit-field} procedure returns
the result of replacing in \vari{fx} the bits at positions from
\varii{fx} (inclusive) to $\variii{fx}$ (exclusive) by the bits in
\variv{fx} from position 0 (inclusive) to position
$\variii{fx}-\varii{fx}$ (exclusive), which
is the fixnum result of the following computation:
\begin{scheme}
(let* ((to \vari{fx})
(start \varii{fx})
(end \variii{fx})
(from \variv{fx})
(mask1 (fxarithmetic-shift-left -1 start))
(mask2 (fxnot
(fxarithmetic-shift-left -1 end)))
(mask (fxand mask1 mask2))
(mask3 (fxnot (fxarithmetic-shift-left
-1 (- end start)))))
(fxif mask
(fxarithmetic-shift-left (fxand from mask3)
start)
to))%
\end{scheme}
\begin{scheme}
(fxcopy-bit-field \sharpsign{}b0000001 2 5 \sharpsign{}b1111000) \lev 1
(fxcopy-bit-field \sharpsign{}b0000001 2 5 \sharpsign{}b0001111) \lev 29
(fxcopy-bit-field \sharpsign{}b0001111 2 5 \sharpsign{}b0001111) \lev 31%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{fxarithmetic-shift}{ \vari{fx} \varii{fx}}{procedure}}
\domain{The absolute value of \varii{fx} must be less than
$w$.} If
%
\begin{scheme}
(floor (* \vari{fx} (expt 2 \varii{fx})))%
\end{scheme}
%
is a fixnum, then that fixnum is returned. Otherwise an exception
with condition type {\cf\&implementation-\hp{}restriction} is
raised.
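For example:
\begin{scheme}
(fxarithmetic-shift 1 3) \ev 8
(fxarithmetic-shift 8 -2) \ev 2
(fxarithmetic-shift -8 -2) \ev -2%
\end{scheme}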
\end{entry}
\begin{entry}{%
\proto{fxarithmetic-shift-left}{ \vari{fx} \varii{fx}}{procedure}
\proto{fxarithmetic-shift-right}{ \vari{fx} \varii{fx}}{procedure}}
\domain{\varii{Fx} must be non-negative, and less than $w$.}
The {\cf fxarithmetic-shift-left} procedure behaves the same as {\cf
fxarithmetic-shift}, and {\cf (fxarithmetic-shift-right \vari{fx}
\varii{fx})} behaves the same as {\cf (fxarithmetic-shift \vari{fx}
(fx- \varii{fx}))}.
\end{entry}
\begin{entry}{%
\proto{fxrotate-bit-field}{ \vari{fx} \varii{fx} \variii{fx} \variv{fx}}{procedure}}
\domain{\varii{Fx}, \variii{fx}, and \variv{fx} must be non-negative
and less than $w$. \varii{Fx} must be less than or
equal to \variii{fx}. \variv{Fx} must be less than or equal to the difference
between \variii{fx} and \varii{fx}.} The {\cf fxrotate-bit-field}
procedure returns the result of cyclically permuting in \vari{fx} the
bits at positions from \varii{fx} (inclusive) to \variii{fx}
(exclusive) by \variv{fx} bits
towards the more significant bits, which is the result of the
following computation:
\begin{scheme}
(let* ((n \vari{fx})
(start \varii{fx})
(end \variii{fx})
(count \variv{fx})
(width (fx- end start)))
(fxcopy-bit-field n start end
(fxior
(fxarithmetic-shift-left
(fxbit-field n start (fx- end count))
count)
(fxarithmetic-shift-right
(fxbit-field n start end)
(fx- width count)))))%
\end{scheme}
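For example:
\begin{scheme}
(fxrotate-bit-field \sharpsign{}b0010 0 4 1) \lev 4 ; \sharpsign{}b0100%
\end{scheme}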
\end{entry}
\begin{entry}{%
\proto{fxreverse-bit-field}{ \vari{fx} \varii{fx} \variii{fx}}{procedure}}
\domain{\varii{Fx} and \variii{fx} must be non-negative and less than
$w$. Moreover, \varii{fx} must be less than or
equal to \variii{fx}.} The {\cf fxreverse-bit-field} procedure
returns
the fixnum obtained from \vari{fx} by reversing the
order of the bits at positions from \varii{fx} (inclusive) to
\variii{fx} (exclusive).
\begin{scheme}
(fxreverse-bit-field \sharpsign{}b1010010 1 4) \lev 88 ; \sharpsign{}b1011000%
\end{scheme}
\end{entry}
\section{Flonums}
\label{flonumssection}
This section describes the \defrsixlibrary{arithmetic flonums} library.
This section uses \var{fl}, \vari{fl}, \varii{fl}, etc., as
parameter names for arguments that must be flonums, and \var{ifl}
as a name for arguments that
must be integer-valued flonums, i.e., flonums for which the
{\cf integer-valued?} predicate returns true.
\begin{entry}{%
\proto{flonum?}{ obj}{procedure}}
Returns \schtrue{} if \var{obj} is a flonum, \schfalse{} otherwise.
\end{entry}
\begin{entry}{%
\proto{real->flonum}{ x}{procedure}}
Returns the best flonum representation of
\var{x}.
The value returned is a flonum that is numerically closest to the
argument.
\begin{note}
If flonums are represented in binary floating point, then
implementations should break ties by preferring
the floating-point representation whose least significant bit is
zero.
\end{note}
\end{entry}
\begin{entry}{%
\proto{fl=?}{ \vari{fl} \varii{fl} \variii{fl} \dotsfoo}{procedure}
\proto{fl<?}{ \vari{fl} \varii{fl} \variii{fl} \dotsfoo}{procedure}
\proto{fl<=?}{ \vari{fl} \varii{fl} \variii{fl} \dotsfoo}{procedure}
\proto{fl>?}{ \vari{fl} \varii{fl} \variii{fl} \dotsfoo}{procedure}
\proto{fl>=?}{ \vari{fl} \varii{fl} \variii{fl} \dotsfoo}{procedure}}
These procedures return \schtrue{} if their arguments are (respectively):
equal, monotonically increasing, monotonically nondecreasing,
monotonically decreasing, or monotonically nonincreasing,
\schfalse{} otherwise. These
predicates must be transitive.
\begin{scheme}
(fl=? +inf.0 +inf.0) \ev \schtrue{}
(fl=? -inf.0 +inf.0) \ev \schfalse{}
(fl=? -inf.0 -inf.0) \ev \schtrue{}
(fl=? 0.0 -0.0) \ev \schtrue{}
(fl<? 0.0 -0.0) \ev \schfalse{}
(fl=? +nan.0 \var{fl}) \ev \schfalse{}
(fl<? +nan.0 \var{fl}) \ev \schfalse{}%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{flinteger?}{ fl}{procedure}
\proto{flzero?}{ fl}{procedure}
\proto{flpositive?}{ fl}{procedure}
\proto{flnegative?}{ fl}{procedure}
\proto{flodd?}{ ifl}{procedure}
\proto{fleven?}{ ifl}{procedure}
\proto{flfinite?}{ fl}{procedure}
\proto{flinfinite?}{ fl}{procedure}
\proto{flnan?}{ fl}{procedure}}
These numerical predicates test a flonum for a particular property,
returning \schtrue{} or \schfalse{}.
The {\cf flinteger?} procedure tests whether the number object is an integer,
{\cf flzero?} tests whether
it is {\cf fl=?} to zero, {\cf flpositive?} tests whether it is greater
than zero, {\cf flnegative?} tests whether it is less
than zero, {\cf flodd?} tests whether it is odd,
{\cf fleven?} tests whether it is even,
{\cf flfinite?} tests whether it is not an infinity and not a NaN,
{\cf flinfinite?} tests whether it is an infinity, and
{\cf flnan?} tests whether it is a NaN.
\begin{scheme}
(flnegative? -0.0) \ev \schfalse{}
(flfinite? +inf.0) \ev \schfalse{}
(flfinite? 5.0) \ev \schtrue{}
(flinfinite? 5.0) \ev \schfalse{}
(flinfinite? +inf.0) \ev \schtrue{}%
\end{scheme}
\begin{note}
{\cf (flnegative? -0.0)} must return \schfalse{},
else it would lose the correspondence with
{\cf (fl<? -0.0 0.0)}, which is \schfalse{}
according to IEEE 754~\cite{IEEE}.
\end{note}
\end{entry}
\begin{entry}{%
\proto{flmax}{ \vari{fl} \varii{fl} \dotsfoo}{procedure}
\proto{flmin}{ \vari{fl} \varii{fl} \dotsfoo}{procedure}}
These procedures return the maximum or minimum of their arguments.
They always return a NaN when one or more of the arguments is a NaN.
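For example:
\begin{scheme}
(flmax 2.0 5.0) \ev 5.0
(flmin -inf.0 3.0) \ev -inf.0
(flmax +nan.0 3.0) \ev +nan.0%
\end{scheme}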
\end{entry}
\begin{entry}{%
\proto{fl+}{ \vari{fl} \dotsfoo}{procedure}
\proto{fl*}{ \vari{fl} \dotsfoo}{procedure}}
These procedures return the flonum sum or product of their flonum
arguments. In general, they should return the flonum that best
approximates the mathematical sum or product. (For implementations
that represent flonums using IEEE binary floating point, the
meaning of ``best'' is defined by the IEEE standards.)
\begin{scheme}
(fl+ +inf.0 -inf.0) \ev +nan.0
(fl+ +nan.0 \var{fl}) \ev +nan.0
(fl* +nan.0 \var{fl}) \ev +nan.0%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{fl-}{ \vari{fl} \varii{fl} \dotsfoo}{procedure}
\rproto{fl-}{ fl}{procedure}
\proto{fl/}{ \vari{fl} \varii{fl} \dotsfoo}{procedure}
\rproto{fl/}{ fl}{procedure}}
With two or more arguments, these procedures return the flonum
difference or quotient of their flonum arguments, associating to the
left. With one argument, however, they return the additive or
multiplicative flonum inverse of their argument. In general, they
should return the flonum that best approximates the mathematical
difference or quotient. (For implementations that represent flonums
using IEEE binary floating point, the meaning of ``best'' is
reasonably well-defined by the IEEE standards.)
\begin{scheme}
(fl- +inf.0 +inf.0) \ev +nan.0%
\end{scheme}
For undefined quotients, {\cf fl/} behaves as specified by the
IEEE standards:
\begin{scheme}
(fl/ 1.0 0.0) \ev +inf.0
(fl/ -1.0 0.0) \ev -inf.0
(fl/ 0.0 0.0) \ev +nan.0%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{flabs}{ fl}{procedure}}
Returns the absolute value of \var{fl}.
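For example (assuming IEEE binary floating point, where the absolute
value of $-0.0$ is $0.0$):
\begin{scheme}
(flabs -7.25) \ev 7.25
(flabs -0.0) \ev 0.0%
\end{scheme}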
\end{entry}
\begin{entry}{%
\proto{fldiv-and-mod}{ \vari{fl} \varii{fl}}{procedure}
\proto{fldiv}{ \vari{fl} \varii{fl}}{procedure}
\proto{flmod}{ \vari{fl} \varii{fl}}{procedure}
\proto{fldiv0-and-mod0}{ \vari{fl} \varii{fl}}{procedure}
\proto{fldiv0}{ \vari{fl} \varii{fl}}{procedure}
\proto{flmod0}{ \vari{fl} \varii{fl}}{procedure}}
These procedures implement number-theoretic integer division and
return the results of the corresponding mathematical operations
specified in report section~\extref{report:integerdivision}{Integer division}.
In the cases where the
mathematical requirements in section~\extref{report:integerdivision}{Integer division} cannot be
satisfied by any number object, either an exception is raised with
condition type {\cf\&implementation-restriction}, or unspecified
flonums (one for {\cf fldiv}, {\cf flmod}, {\cf fldiv0}, and {\cf
flmod0}, two for {\cf fldiv-and-mod} and {\cf fldiv0-and-mod0}) are
returned.
\begin{scheme}
(fldiv \vari{fl} \varii{fl}) \ev \(\vari{fl}~\mathrm{div}~\varii{fl}\)
(flmod \vari{fl} \varii{fl}) \ev \(\vari{fl}~\mathrm{mod}~\varii{fl}\)
(fldiv-and-mod \vari{fl} \varii{fl}) \lev \(\vari{fl}~\mathrm{div}~\varii{fl}, \vari{fl}~\mathrm{mod}~\varii{fl}\)\\\>\>; \textrm{two return values}
(fldiv0 \vari{fl} \varii{fl}) \ev \(\vari{fl}~\mathrm{div}_0~\varii{fl}\)
(flmod0 \vari{fl} \varii{fl}) \ev \(\vari{fl}~\mathrm{mod}_0~\varii{fl}\)
(fldiv0-and-mod0 \vari{fl} \varii{fl}) \lev \(\vari{fl}~\mathrm{div}_0~\varii{fl}, \vari{fl}~\mathrm{mod}_0~\varii{fl}\)\\\>\>; \textrm{two return values}%
\end{scheme}
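For example, with IEEE double-precision flonums one would expect
\begin{scheme}
(fldiv 7.5 2.0) \ev 3.0
(flmod 7.5 2.0) \ev 1.5
(fldiv -7.5 2.0) \ev -4.0
(flmod -7.5 2.0) \ev 0.5%
\end{scheme}
since the number-theoretic {\cf mod} is non-negative:
$-7.5 = (-4.0)\cdot 2.0 + 0.5$.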
\end{entry}
\begin{entry}{%
\proto{flnumerator}{ fl}{procedure}
\proto{fldenominator}{ fl}{procedure}}
These procedures return the numerator or denominator of \var{fl}
as a flonum; the result is computed as if \var{fl} were represented as
a fraction in lowest terms. The denominator is always positive. The
denominator of 0.0 is defined to be 1.0.
%
\begin{scheme}
(flnumerator +inf.0) \ev +inf.0
(flnumerator -inf.0) \ev -inf.0
(fldenominator +inf.0) \ev 1.0
(fldenominator -inf.0) \ev 1.0
(flnumerator 0.75) \ev 3.0 ; \textrm{probably}
(fldenominator 0.75) \ev 4.0 ; \textrm{probably}%
\end{scheme}
Implementations should implement the following behavior:
\begin{scheme}
(flnumerator -0.0) \ev -0.0%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{flfloor}{ fl}{procedure}
\proto{flceiling}{ fl}{procedure}
\proto{fltruncate}{ fl}{procedure}
\proto{flround}{ fl}{procedure}}
These procedures return integral flonums for flonum arguments that are
not infinities or NaNs. For such arguments, {\cf flfloor} returns the
largest integral flonum not larger than \var{fl}. The {\cf flceiling}
procedure
returns the smallest integral flonum not smaller than \var{fl}.
The {\cf fltruncate} procedure returns the integral flonum closest to \var{fl} whose
absolute value is not larger than the absolute value of \var{fl}.
The {\cf flround} procedure returns the closest integral flonum to \var{fl},
rounding to even when \var{fl} represents a number halfway between two integers.
Although infinities and NaNs are not integer objects, these procedures return
an infinity when given an infinity as an argument, and a NaN when
given a NaN:
\begin{scheme}
(flfloor +inf.0) \ev +inf.0
(flceiling -inf.0) \ev -inf.0
(fltruncate +nan.0) \ev +nan.0%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{flexp}{ fl}{procedure}
\proto{fllog}{ fl}{procedure}
\rproto{fllog}{ \vari{fl} \varii{fl}}{procedure}
\proto{flsin}{ fl}{procedure}
\proto{flcos}{ fl}{procedure}
\proto{fltan}{ fl}{procedure}
\proto{flasin}{ fl}{procedure}
\proto{flacos}{ fl}{procedure}
\proto{flatan}{ fl}{procedure}
\rproto{flatan}{ \vari{fl} \varii{fl}}{procedure}}
These procedures compute the usual transcendental functions.
The {\cf flexp} procedure computes the base-$e$ exponential of \var{fl}.
The {\cf fllog} procedure with a single argument computes the natural logarithm of
\var{fl} (not the base ten logarithm); {\cf (fllog \vari{fl}
\varii{fl})} computes the base-\varii{fl} logarithm of \vari{fl}.
The {\cf flasin}, {\cf flacos}, and {\cf flatan} procedures compute arcsine,
arccosine, and arctangent, respectively. {\cf (flatan \vari{fl}
\varii{fl})} computes the arc tangent of \vari{fl}/\varii{fl}.
See report
section~\extref{report:transcendentalfunctions}{Transcendental functions} for the underlying
mathematical operations. In the event that these operations do not
yield a real result for the given arguments, the result may be a NaN,
or may be some unspecified flonum.
Implementations that use IEEE binary floating-point arithmetic
should follow the relevant standards for these procedures.
\begin{scheme}
(flexp +inf.0) \ev +inf.0
(flexp -inf.0) \ev 0.0
(fllog +inf.0) \ev +inf.0
(fllog 0.0) \ev -inf.0
(fllog -0.0) \ev \unspecified\\\>; \textrm{if -0.0 is distinguished}
(fllog -inf.0) \ev +nan.0
(flatan -inf.0) \lev -1.5707963267948965\\\>; \textrm{approximately}
(flatan +inf.0) \lev 1.5707963267948965\\\>; \textrm{approximately}%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{flsqrt}{ fl}{procedure}}
Returns the principal square root of \var{fl}. For $-0.0$,
{\cf flsqrt} should return $-0.0$; for other negative arguments,
the result may be a NaN or some unspecified flonum.
\begin{scheme}
(flsqrt +inf.0) \ev +inf.0
(flsqrt -0.0) \ev -0.0%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{flexpt}{ \vari{fl} \varii{fl}}{procedure}}
\domain{Either \vari{fl} should be non-negative, or, if \vari{fl} is
negative, \varii{fl} should be an integer object.}
The {\cf flexpt} procedure returns \vari{fl} raised to the power \varii{fl}. If \vari{fl} is
negative and \varii{fl} is not an integer object, the result may be a
NaN, or may be some unspecified flonum.
If \vari{fl} and \varii{fl} are both zero, the result is
1.0. If \vari{fl} is zero and \varii{fl} is positive, the result is zero.
If \vari{fl} is zero and \varii{fl} is negative, the result may be a NaN, or may be
some unspecified flonum.
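For example, the rules above give (assuming IEEE flonums for the
exactly representable powers):
\begin{scheme}
(flexpt 2.0 10.0) \ev 1024.0
(flexpt 0.0 0.0) \ev 1.0
(flexpt 0.0 5.0) \ev 0.0%
\end{scheme}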
\end{entry}
\begin{entry}{%
\ctproto{no-infinities}
\proto{make-no-infinities-violation}{}{procedure}
\proto{no-infinities-violation?}{ obj}{procedure}
\ctproto{no-nans}
\proto{make-no-nans-violation}{}{procedure}
\proto{no-nans-violation?}{ obj}{procedure}}
These condition types could be defined by the following code:
\begin{scheme}
(define-condition-type \&no-infinities
\&implementation-restriction
make-no-infinities-violation
no-infinities-violation?)
(define-condition-type \&no-nans
\&implementation-restriction
make-no-nans-violation no-nans-violation?)%
\end{scheme}
These types describe that a program has executed an arithmetic
operation that is specified to return an infinity or a NaN,
respectively, on a Scheme implementation that is not able to represent
the infinity or NaN. (See report section~\extref{report:infinitiesnanssection}{Representability of infinities and NaNs}.)
\end{entry}
\begin{entry}{%
\proto{fixnum->flonum}{ fx}{procedure}}
Returns a flonum that is numerically closest to \var{fx}.
\begin{note}
The result of this procedure may not be
numerically equal to \var{fx}, because the fixnum precision
may be greater than the flonum precision.
\end{note}
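For example:
\begin{scheme}
(fixnum->flonum 2) \ev 2.0%
\end{scheme}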
\end{entry}
\section{Exact bitwise arithmetic}
\label{exactsection}
This section describes the \defrsixlibrary{arithmetic bitwise}
library. The exact bitwise arithmetic provides generic operations on
exact integer objects. This section uses \var{ei}, \vari{ei}, \varii{ei}, etc.,
as parameter names that must be exact integer objects.
\begin{entry}{%
\proto{bitwise-not}{ ei}{procedure}}
Returns the exact integer object whose two's complement representation is the
one's complement of the two's complement representation of \var{ei}.
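In other words, {\cf (bitwise-not \var{ei})} is $-\var{ei}-1$.
For example:
\begin{scheme}
(bitwise-not 0) \ev -1
(bitwise-not 5) \ev -6%
\end{scheme}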
\end{entry}
\begin{entry}{%
\proto{bitwise-and}{ \vari{ei} \dotsfoo}{procedure}
\proto{bitwise-ior}{ \vari{ei} \dotsfoo}{procedure}
\proto{bitwise-xor}{ \vari{ei} \dotsfoo}{procedure}}
These procedures return the exact integer object that is the bit-wise
``and'', ``inclusive or'', or ``exclusive or'' of the two's complement
representations of their arguments. If they are passed only one
argument, they return that argument. If they are passed no arguments,
they return the integer object (either $-1$ or $0$) that acts as identity for
the operation.
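For example:
\begin{scheme}
(bitwise-and \sharpsign{}b1100 \sharpsign{}b1010) \ev 8
(bitwise-ior \sharpsign{}b1100 \sharpsign{}b1010) \ev 14
(bitwise-xor \sharpsign{}b1100 \sharpsign{}b1010) \ev 6
(bitwise-and) \ev -1
(bitwise-ior) \ev 0%
\end{scheme}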
\end{entry}
\begin{entry}{%
\proto{bitwise-if}{ \vari{ei} \varii{ei} \variii{ei}}{procedure}}
Returns the exact integer object that is the bit-wise ``if'' of the two's complement
representations of its arguments, i.e.\ for each bit, if it is 1 in
\vari{ei}, the corresponding bit in \varii{ei} becomes the value of
the corresponding bit in the result, and if it is 0, the corresponding
bit in \variii{ei} becomes the corresponding bit in the value of the
result.
This is the result of the following computation:
\begin{scheme}
(bitwise-ior (bitwise-and \vari{ei} \varii{ei})
(bitwise-and (bitwise-not \vari{ei}) \variii{ei}))%
\end{scheme}
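For example, with mask \sharpsign{}b1100 the two high bits of the
result come from the second argument and the two low bits from the
third:
\begin{scheme}
(bitwise-if \sharpsign{}b1100 \sharpsign{}b1010 \sharpsign{}b0101) \lev 9 ; \sharpsign{}b1001%
\end{scheme}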
\end{entry}
\begin{entry}{%
\proto{bitwise-bit-count}{ ei}{procedure}}
If \var{ei} is non-negative, this procedure returns the number of
1 bits in the two's complement representation of \var{ei}.
Otherwise it returns the result of the following computation:
%
\begin{scheme}
(bitwise-not (bitwise-bit-count (bitwise-not \var{ei})))%
\end{scheme}
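For example:
\begin{scheme}
(bitwise-bit-count \sharpsign{}b1011) \ev 3
(bitwise-bit-count -1) \ev -1%
\end{scheme}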
\end{entry}
\begin{entry}{%
\proto{bitwise-length}{ ei}{procedure}}
Returns the number of bits needed to represent \var{ei} if it is
positive, and the number of bits needed to represent {\cf (bitwise-not
\var{ei})} if it is negative, which is the exact integer object that
is the result of the following computation:
\begin{scheme}
(do ((result 0 (+ result 1))
(bits (if (negative? \var{ei})
(bitwise-not \var{ei})
\var{ei})
(bitwise-arithmetic-shift bits -1)))
((zero? bits)
result))%
\end{scheme}
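For example:
\begin{scheme}
(bitwise-length 0) \ev 0
(bitwise-length 7) \ev 3
(bitwise-length 8) \ev 4
(bitwise-length -8) \ev 3%
\end{scheme}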
\end{entry}
\begin{entry}{%
\proto{bitwise-first-bit-set}{ ei}{procedure}}
Returns the index of the least significant $1$
bit in the two's complement representation of \var{ei}.
If \var{ei} is $0$, then $-1$ is returned.
\begin{scheme}
(bitwise-first-bit-set 0) \ev -1
(bitwise-first-bit-set 1) \ev 0
(bitwise-first-bit-set -4) \ev 2%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{bitwise-bit-set?}{ \vari{ei} \varii{ei}}{procedure}}
\domain{\varii{Ei} must be non-negative.}
The {\cf bitwise-bit-set?} procedure returns
\schtrue{} if the \varii{ei}th bit is 1 in the two's complement
representation of \vari{ei}, and \schfalse{}
otherwise. This is the result of the following computation:
\begin{scheme}
(not (zero?
(bitwise-and
(bitwise-arithmetic-shift-left 1 \varii{ei})
\vari{ei})))%
\end{scheme}
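For example (a negative first argument behaves as if it had
infinitely many 1 bits above its most significant bit):
\begin{scheme}
(bitwise-bit-set? \sharpsign{}b101 0) \ev \schtrue{}
(bitwise-bit-set? \sharpsign{}b101 1) \ev \schfalse{}
(bitwise-bit-set? -1 100) \ev \schtrue{}%
\end{scheme}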
\end{entry}
\begin{entry}{%
\proto{bitwise-copy-bit}{ \vari{ei} \varii{ei} \variii{ei}}{procedure}}
\domain{\varii{Ei} must be non-negative, and \variii{ei}
must be either $0$ or $1$.}
The {\cf bitwise-copy-bit} procedure returns the result of replacing
the \varii{ei}th bit of \vari{ei} by \variii{ei}, which is
the result of the following computation:
\begin{scheme}
(let* ((mask (bitwise-arithmetic-shift-left 1 \varii{ei})))
(bitwise-if mask
(bitwise-arithmetic-shift-left \variii{ei} \varii{ei})
\vari{ei}))%
\end{scheme}
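For example:
\begin{scheme}
(bitwise-copy-bit 0 2 1) \ev 4
(bitwise-copy-bit \sharpsign{}b111 1 0) \ev 5%
\end{scheme}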
\end{entry}
\begin{entry}{%
\proto{bitwise-bit-field}{ \vari{ei} \varii{ei} \variii{ei}}{procedure}}
\domain{\varii{Ei} and \variii{ei} must be non-negative, and
\varii{ei} must be less than or equal to \variii{ei}.}
The {\cf bitwise-bit-field} procedure returns the
number represented by the bits at the positions from \varii{ei}
(inclusive) to $\variii{ei}$ (exclusive), which is
the result of the following computation:
%
\begin{scheme}
(let ((mask
(bitwise-not
(bitwise-arithmetic-shift-left -1 \variii{ei}))))
(bitwise-arithmetic-shift-right
(bitwise-and \vari{ei} mask)
\varii{ei}))%
\end{scheme}
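For example:
\begin{scheme}
(bitwise-bit-field \sharpsign{}b1101101 2 5) \lev 3 ; \sharpsign{}b011%
\end{scheme}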
\end{entry}
\begin{entry}{%
\proto{bitwise-copy-bit-field}{ \vari{ei} \varii{ei} \variii{ei} \variv{ei}}{procedure}}
\domain{\varii{Ei} and \variii{ei} must be non-negative,
and \varii{ei} must be less than or equal to \variii{ei}.}
The {\cf bitwise-copy-bit-field} procedure returns
the result of replacing in \vari{ei} the bits at positions from
\varii{ei} (inclusive) to $\variii{ei}$ (exclusive) by the bits in
\variv{ei} from position 0 (inclusive) to position
$\variii{ei}-\varii{ei}$ (exclusive), which
is the result of the following computation:
%
\begin{scheme}
(let* ((to \vari{ei})
(start \varii{ei})
(end \variii{ei})
(from \variv{ei})
(mask1
(bitwise-arithmetic-shift-left -1 start))
(mask2
(bitwise-not
(bitwise-arithmetic-shift-left -1 end)))
(mask (bitwise-and mask1 mask2)))
(bitwise-if mask
(bitwise-arithmetic-shift-left from
start)
to))%
\end{scheme}
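By analogy with the {\cf fxcopy-bit-field} examples above, the
definition gives, for instance:
\begin{scheme}
(bitwise-copy-bit-field \sharpsign{}b0000001 2 5 \sharpsign{}b0001111) \lev 29
(bitwise-copy-bit-field \sharpsign{}b0001111 2 5 \sharpsign{}b0001111) \lev 31%
\end{scheme}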
\end{entry}
\begin{entry} {%
\proto{bitwise-arithmetic-shift}{ \vari{ei} \varii{ei}}{procedure}}
Returns the result of the following computation:
%
\begin{scheme}
(floor (* \vari{ei} (expt 2 \varii{ei})))%
\end{scheme}
Examples:
%
\begin{scheme}
(bitwise-arithmetic-shift -6 -1) \lev -3
(bitwise-arithmetic-shift -5 -1) \lev -3
(bitwise-arithmetic-shift -4 -1) \lev -2
(bitwise-arithmetic-shift -3 -1) \lev -2
(bitwise-arithmetic-shift -2 -1) \lev -1
(bitwise-arithmetic-shift -1 -1) \lev -1%
\end{scheme}
\end{entry}
\begin{entry}{%
\proto{bitwise-arithmetic-shift-left}{ \vari{ei} \varii{ei}}{procedure}
\proto{bitwise-arithmetic-shift-right}{ \vari{ei} \varii{ei}}{procedure}}
\domain{\varii{Ei} must be non-negative.} The {\cf
bitwise-\hp{}arithmetic-\hp{}shift-\hp{}left} procedure returns the same result as {\cf
bitwise-arithmetic-shift}, and
\begin{scheme}
(bitwise-arithmetic-shift-right \vari{ei} \varii{ei})%
\end{scheme}
returns the same result as
\begin{scheme}
(bitwise-arithmetic-shift \vari{ei} (- \varii{ei}))\textrm{.}%
\end{scheme}
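For example:
\begin{scheme}
(bitwise-arithmetic-shift-left 1 8) \ev 256
(bitwise-arithmetic-shift-right -8 1) \ev -4%
\end{scheme}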
\end{entry}
\begin{entry}{%
\proto{bitwise-rotate-bit-field}{ \vari{ei} \varii{ei} \variii{ei} \variv{ei}}{procedure}}
\domain{\varii{Ei}, \variii{ei}, and \variv{ei} must be non-negative, and
\varii{ei} must be less than or equal to \variii{ei}.}
The {\cf bitwise-rotate-bit-field} procedure returns the result of cyclically permuting in \vari{ei} the
bits at positions from \varii{ei} (inclusive) to \variii{ei} (exclusive) by \variv{ei} bits
towards the more significant bits, which is the result of the
following computation:
%
\begin{scheme}
(let* ((n \vari{ei})
(start \varii{ei})
(end \variii{ei})
(count \variv{ei})
(width (- end start)))
(if (positive? width)
(let* ((count (mod count width))
(field0
(bitwise-bit-field n start end))
(field1 (bitwise-arithmetic-shift-left
field0 count))
(field2 (bitwise-arithmetic-shift-right
field0
(- width count)))
(field (bitwise-ior field1 field2)))
(bitwise-copy-bit-field n start end field))
n))%
\end{scheme}
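For example, rotating the low four bits by one position towards the
more significant bits:
\begin{scheme}
(bitwise-rotate-bit-field \sharpsign{}b0010 0 4 1) \lev 4 ; \sharpsign{}b0100
(bitwise-rotate-bit-field \sharpsign{}b1000 0 4 1) \lev 1 ; \sharpsign{}b0001%
\end{scheme}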
\end{entry}
\begin{entry}{%
\proto{bitwise-reverse-bit-field}{ \vari{ei} \varii{ei} \variii{ei}}{procedure}}
\domain{\varii{Ei} and \variii{ei} must be non-negative, and
\varii{ei} must be less than or equal to \variii{ei}.} The {\cf bitwise-reverse-bit-field} procedure returns
the result obtained from \vari{ei} by reversing the
order of the bits at positions from \varii{ei} (inclusive) to
\variii{ei} (exclusive).
\begin{scheme}
(bitwise-reverse-bit-field \sharpsign{}b1010010 1 4) \lev 88 ; \sharpsign{}b1011000%
\end{scheme}
\end{entry}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "r6rs-lib"
%%% End:
| {
"alphanum_fraction": 0.6868497595,
"avg_line_length": 34.7821011673,
"ext": "tex",
"hexsha": "b8d1f58600e93ec0ba99ae190c5ae9c3ff2f2668",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2f998d354177dc41a8d3147fd15c056a14ffabda",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "schemedoc/scheme-rnrs-metadata",
"max_forks_repo_path": "r6rs/arith.tex",
"max_issues_count": 3,
"max_issues_repo_head_hexsha": "2f998d354177dc41a8d3147fd15c056a14ffabda",
"max_issues_repo_issues_event_max_datetime": "2019-09-26T17:56:02.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-03-27T22:24:05.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "schemedoc/scheme-rnrs-metadata",
"max_issues_repo_path": "r6rs/arith.tex",
"max_line_length": 175,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "2f998d354177dc41a8d3147fd15c056a14ffabda",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "schemedoc/rnrs-metadata",
"max_stars_repo_path": "r6rs/arith.tex",
"max_stars_repo_stars_event_max_datetime": "2020-09-04T17:38:19.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-09-04T17:38:19.000Z",
"num_tokens": 11252,
"size": 35756
} |
\documentclass[12pt]{article}
%\usepackage[top=0.5in,left=0.5in,right=0.5in, bottom=0.5in]{geometry}
\usepackage[margin=1.00in]{geometry}
\usepackage{amssymb}
\usepackage{algorithm,algorithmic}
\newcommand{\EXCISE}[1]{}
\title{\Large \bf User Guide for Parallel Sampling-Based Motion Planning}
\author{Sam Ade Jacobs}
\begin{document}
\maketitle
This user guide outlines the steps required to compile and run the parallel code.
It is a living document and is expected to be modified as we make progress with parallel motion planning
research. The primary objective is to streamline the compilation and execution process for users.
A separate file (to be called the developer's guide) will outline what developers need to do to write a
STAPL-based parallel SBMP code.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% COMPILATION
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{COMPILATION}
The parallel code depends heavily on the STAPL framework for many of its components, and STAPL in turn depends on
Boost and the C++ STL. For this reason users need to have both Boost and the STL set up. Since STAPL is usually
ahead of PMPL in terms of compiler, STL and Boost versions, it is important that the user have recent versions of
these libraries and a C++ compiler (with an OpenMPI/MPI wrapper). Here is an outline of the steps involved:
\begin{enumerate}
\item Run on an appropriate machine. If you are developing for initial testing, then use any of the Parasol Lab quad-core 64-bit machines (e.g. columbo, agate, newdelhi, etc.)
\item Load the appropriate compilers and libraries by using the module system as necessary. Run {\tt module avail} to check available modules.
\begin{itemize}
\item {\tt module load gcc-4.6.2}
\item {\tt module load boost-1.48}
\item {\tt module load openmpi-x86\_64}
\end{itemize}
\item Compile STAPL first, with the appropriate platform and STL version specified
\begin{itemize}
\item {\tt cd utils/stapl\_release}
\item {\tt gmake platform=LINUX\_gcc stl=4.6.2}
\item {\tt cd ../../}
\end{itemize}
\item Compile PMPL with parallel=1 and the appropriate STL LIB. This can either be set in the Makefile or done on the command line:
\begin{itemize}
\item {\tt cd src}
\item {\tt gmake platform=LINUX\_64\_gcc parallel=1 STL\_LIB=4.6.2 pmpl}
\end{itemize}
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% EXECUTION
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{EXECUTION}
Having successfully compiled the code, the next step is to run it:
\begin{enumerate}
\item Change to the directory of the environment you wish to run: {\tt cd TestRigid}
\item Adjust parameters in the xml input file (e.g., {\tt TestRigid/ParallelPMPLExample.xml}) (Note: this file will be merged with PMPLExamples.xml soon)
\item Run via mpirun, specifying the number of processors p: {\tt mpirun -np p ../pmpl -f ParallelPMPLExample.xml}
\end{enumerate}
NOTE: Provided you compile in debug mode, TotalView is a great debugger for parallel code; details on how
to use it will be provided in the developer's guide.
\end{document}
| {
"alphanum_fraction": 0.6677842121,
"avg_line_length": 48.9701492537,
"ext": "tex",
"hexsha": "d3474e4c851d2869f26cf47e6dd143f7a5aa0103",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "04de04b23e0368e576d2136fee8729de44a3eda5",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "parasol-ppl/PPL",
"max_forks_repo_path": "docs/ParallelSBMP/user_guide.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "04de04b23e0368e576d2136fee8729de44a3eda5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "parasol-ppl/PPL",
"max_issues_repo_path": "docs/ParallelSBMP/user_guide.tex",
"max_line_length": 172,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "04de04b23e0368e576d2136fee8729de44a3eda5",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "parasol-ppl/PPL",
"max_stars_repo_path": "docs/ParallelSBMP/user_guide.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 778,
"size": 3281
} |
\documentclass{article}
\pagestyle{empty}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{graphicx}
\usepackage{multicol}
\setlength{\oddsidemargin}{0in} \setlength{\evensidemargin}{0in}
\setlength{\topmargin}{0in} \setlength{\textheight}{8.5in}
\setlength{\textwidth}{6.5in}
\begin{document}
\begin{flushleft}
\bfseries{MATH 260, Linear Algebra, Spring `14}\\
\bfseries{Activity 1: Linear equations and systems, geometrically}\\
\bfseries{Honor Code:} \hspace{3.5in}\bfseries{Names:}\\
\end{flushleft}
\begin{flushleft}
\vspace{.25in}
Directions: Everyone should work on the assignment and should fill out their paper. You are expected to make corrections based on what is presented on the board. You will NOT turn in the assignment today. It may be collected in the future.
\vspace{0.2in}
In previous courses you learned about lines and plotting. Today we'll be reviewing a lot of that as a precursor to a more holistic view of systems and LINEar algebra.
\section*{Problem 1:Plotting 2 variables}
\vspace{0.1in}
a) The slope-intercept form of a line is $y=mx+b$. This is generally easily graphed; most of our equations, though, will take the `standard' form: $a_1 x + a_2 y = b$. For practice plot the line: $3y-6x=6$.
\vspace{0.1in}
\begin{minipage}{3in}
\includegraphics[scale=0.75]{grid_12_by_12.eps}
\end{minipage}
\vspace{0.2in}
\newpage
b) Now sketch the following pairs of equations:\\
(i) $y=2-4x$ and $y=-4x-3$\\
(ii) $3x+7=y$ and $y=\frac{-2}{3}x-1$\\
(iii) $y=2x+1$ and $3y-3=6x$
\vspace{0.25in}
\begin{minipage}{3in}
\includegraphics[scale=0.75]{grid_12_by_12.eps}
\end{minipage}
\begin{minipage}{3in}
\includegraphics[scale=0.75]{grid_12_by_12.eps}
\end{minipage}
\vspace{0.1in}\\
\begin{minipage}{3in}
\includegraphics[scale=0.75]{grid_12_by_12.eps}
\end{minipage}
\begin{minipage}{3in}
\includegraphics[scale=0.75]{grid_12_by_12.eps}
\end{minipage}
c) When we plot lines, the intersection gives a pair of $(x,y)$ values which will satisfy both equations. How many such pairs does each of the above sets of lines have?\\
\vspace{1in}
Can you draw a pair of lines which has exactly TWO solutions on the fourth grid? Can you come up with any other counts for solutions besides those depicted above?
\newpage
d) Now plot (b-ii) on both of the grids below.\\
(i) On the first also plot $y+1=\frac{-1}{4}x$ \\
(ii) On the second also plot $y=\frac{5}{11}$\\
\vspace{0.2in}
\begin{minipage}{3in}
\includegraphics[scale=.75]{grid_12_by_12.eps}
\end{minipage}
\begin{minipage}{3in}
\includegraphics[scale=.75]{grid_12_by_12.eps}
\end{minipage}
\vspace{0.3in}\\
Similar to having two equations, 3 equations can produce the same types of solution sets: none, one, or infinitely many, only two of which you've plotted. Notice that we really only needed two equations to get the intersection for (ii); this is called `overdetermined'. Case (i) would be called `inconsistent' or `indeterminate'.
\vspace{0.3in}
\newpage
\section*{Problem 2: Plotting 3 variables}
\vspace{0.1in}
First, actually plotting an equation with 3 variables can be very challenging. We've shown the result below which creates a plane.\\
\begin{minipage}{3in}
\includegraphics[scale=0.5]{planepic.eps}
\end{minipage}\\
\vspace{0.1in}
Let's look at another system, this time with 3 variables:\\
$x=3z-2y+1$\\
$3z-2y+x=2$\\
Solve this for $x$, $y$, $z$. \textit{Hint: Solve for $z$ first by substituting the first equation into the second}\\
\vspace{1.5in}
Did you get numbers for each variable? What do you think is going on here? (Feel free to discuss at your table)\\
\vspace{1in}
\Large Notes about this:
\normalsize
\vspace{2in}
\section*{Problem 3}
Time permitting, solve the following systems by elimination:\\
\end{flushleft}
a)
\begin{align*}
x+y&=4\\
x-y&=0
\end{align*}
\vspace{0.1in}
b)
\begin{equation*}
\begin{array}{ccccccr}
x& + &2y&+&z&= &2\\
x& - & y& & &= &4\\
2x&- & y&+&2z&=&0\\
& &3y&+& z&=&-2
\end{array}
\end{equation*}
\end{document} | {
"alphanum_fraction": 0.7264676113,
"avg_line_length": 32.393442623,
"ext": "tex",
"hexsha": "7ae67213ad2b58bf0777f07e2ecdd820ad5aa8f0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c92c2f3f9e3fc87a1a89041eb7bfaa1a87c9276d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Pelonza/PB-LinAlg",
"max_forks_repo_path": "Spring 2014 - Schmitt/Activity Latex/Activity_1_260.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c92c2f3f9e3fc87a1a89041eb7bfaa1a87c9276d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Pelonza/PB-LinAlg",
"max_issues_repo_path": "Spring 2014 - Schmitt/Activity Latex/Activity_1_260.tex",
"max_line_length": 322,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c92c2f3f9e3fc87a1a89041eb7bfaa1a87c9276d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Pelonza/PB-LinAlg",
"max_stars_repo_path": "Spring 2014 - Schmitt/Activity Latex/Activity_1_260.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1355,
"size": 3952
} |
\documentclass[11pt]{amsbook}
\usepackage{../HBSuerDemir}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{mdwlist}
\usepackage{amssymb}
\newtheorem{theorem}{Theorem}
\newtheorem*{remark}{Remark}
\newtheorem{example}{Example}
\newcommand\tab[1][2cm]{\hspace*{#1}}
\newcommand\tad[1][1cm]{\hspace*{#1}}
\newcommand\tal[1][0.5cm]{\hspace*{#1}}
\usepackage{enumitem}
\usepackage[parfill]{parskip}
\usepackage{fancyhdr}
\fancypagestyle{plain}{
\fancyhf{}
\renewcommand{\headrulewidth}{0pt}
\fancyhead[C]{\thepage}
}
\pagestyle{fancy}
\begin{document}
%%%%%%%%%%%%%%
\hPage{b2p1/002}
%%%%%%%%%%%%%55
\begin{align*}
{(n!)_{0}} &:
\begin{aligned}[t]
1, 1, 2, ... , n!, ...
\end{aligned}\\
((-1)^{n-1})_{-2} &:
\begin{aligned}[t]
-1, 1, -1, ... , (-1)^{n-1}, ...
\end{aligned}
\end{align*}
\indent The notation $(a_n)^q_p$ is used to denote a \underline{finite sequence} admitting
also a last term:
\begin{equation*}
(\sqrt{n})^9_4 : \ 2,\ \sqrt{5},\ \sqrt{6},\ \sqrt{7},\ 2\sqrt{2},\ 3
\end{equation*}
\indent In this Section, we discuss briefly infinite sequences only.\\
\indent A sequence is uniquely determined when the first and general term are given. Thus
\ $a_3\ =\ 4$,\ $a_n\ =\ 2^{n-1}$ define the sequence
\begin{equation*}
(2^{n-1})_3\ :\ 4,\ 8,\ ...,\ 2^{n-1},\ ...
\end{equation*}
\noindent while some numbers written in succession followed by three dots, such as
\begin{equation*}
5,\ 7,\ 9,\ ...
\end{equation*}
\noindent do not define uniquely a sequence, since the general term is not given, and as the
$4^{th}$ term any number can be assigned arbitrarily other than 11 (that one would expect).
Indeed, the sequence $(a_n)_1$ with general term:
\begin{equation*}
a_n\ =\ (n-1)(n-2)(n-3)\ +\ 2n\ + 3
\end{equation*}
\noindent gives 5, 7, 9 as the first three terms and 17 as the $4^{th}$ term.
%+++++++++++++++++++
\hPage{b2p1/003}
%++++++++++
\par
\underline{\textbf{Determination of Sequences By Recurrence Relations:}}\\
\par
A sequence $(a_{n})_{p}$ can be defined more generally by a recurrence relation\\
\begin{align*}
f(a_{n},\, \dotso , a_{n+k}) = 0\\
\end{align*}
\noindent and k consecutive terms $a_{p} , \dotso , a_{p+k-1}$.\\
\par
The following are two examples for $k = 1$ and $k = 2$\\
\begin{exmp}
Given the sequence defined by\\
\begin{align*}
a_{1} = 3, \quad and\quad a_{n} = a_{n-1} + 2\\
\end{align*}
\begin{hEnumerateAlpha}
\item obtain the first four terms,\\
\item find the general term.\\
\end{hEnumerateAlpha}
\begin{hSolution}
\begin{hEnumerateAlpha}
\item $a_1 = 3$, $a_2 = a_1+2 = 5$, $a_3 = a_2 + 2 =7$, $a_4 = 7 + 2 = 9$ \\
\item Writing the relation for n = 2, 3, \dotso up to n; and adding these member to member, the intermediate terms $a_2$, \dotso , $a_{n-1}$ are canceled, and $a_n$ is obtained:\\
\end{hEnumerateAlpha}
\begin{align*}
\bcancel{ a_{2}} &= a_1 + 2\\
\bcancel{ a_{3}} &= \bcancel{ a_{2}} + 2\\
\vdots &\qquad \quad \vdots \\
a_{n} &= \bcancel{ a_{n-1}} + 2\\
\cline{1-2}
a_n &= a_1 + (n-1)\cdot 2\\
&= 3 + 2n - 2 = 2n+1
\end{align*}
% ++++++++++++++++++++++++++++++++++++++
\hPage{b2p1/004}
% ++++++++++++++++++++++++++++++++++++++
Another definition of a sequence is obtained by giving the first two terms and a relation between $a_n$ and $a_{n-2}$ whose indices differ by 2:
\end{hSolution}
\end{exmp}
\begin{exmp}
Given the sequence defined by
\begin{center}
$a_1 = 3, \ a_2 = 2, \ a_n = \frac{n-1}{n+1}a_{n-2},$
\end{center}
a) obtain the first four terms,
\noindent b) find the general term.
\end{exmp}
\begin{hSolution}
a) $a_1 = 3, \ a_2 = 2, \ a_3 = \frac{3-1}{3+1}a_1 = \frac{3}{2}, \ a_4 = \frac{3}{5}a_2 = \frac{6}{5}$
\noindent b) Since indices differ by 2, one evaluates $a_{2n}$ and $a_{2n+1}$ separately.
Replacing $n$ by $2n$ in the given relation, one gets %Book says seperately but dictionary says separately.
\begin{center}
$a_{2n} = \frac{2n-1}{2n+1}a_{2n-2}$
\end{center}
which, when written for $n = 2, 3, \dotsc$ up to $n$, gives
\begin{center}
$a_4 = \frac{3}{5}a_2$
$a_6 = \frac{5}{7}a_4$
$\vdots$ \qquad $\vdots$
$a_{2n} = \frac{2n-1}{2n+1}a_{2n-2}$
\end{center}
which in turn, when multiplied member to member yield
\end{hSolution} %Solution continues in next page.
%%%%%%%%%%%%%%%%%%%%5
\hPage{b2p1/025}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
b) If $\lambda = 0$, the convergence of $\Sigma b_n$ implies that of $\Sigma a_n$ \\(or divergence of $\Sigma a_n$ implies that of $\Sigma b_n$)
c) If $\lambda = \infty$, the convergence of $\Sigma a_n$ implies that of $\Sigma b_n$ \\(or divergence of $\Sigma b_n$ implies that of $\Sigma a_n$)
\begin{proof}
a) If $\lambda \neq 0$, given $\epsilon>0$ and less than $\lambda$, there is $N>0$ such that
$$\lambda - \epsilon < \frac{a_n}{b_n} < \lambda + \epsilon$$ for all $n>N$.
Then by the above Corollary the two series have the same nature.
b) Let $\lambda = 0$. Then for $n>N$ for some $N$,
$$ \frac{a_n}{b_n}<\epsilon \quad \text{or} \quad a_n < \epsilon b_n$$
holds. By Theorem 1 the assertion is true.
c) Let $\lambda = \infty$. Then there is $a > 0$ such that
$$ \frac{a_n}{b_n} > a \quad \text{or}\quad a_n > a b_n$$
for all $n>N$. Again by the Corollary the assertion is true.
\end{proof}
\underline{Example}. Test the convergence by limit ratio test:
\begin{align*}
a) \sum_1 n e^{-n} && b) \sum_1^\infty \tan{\frac{1}{n}}
\end{align*}
\underline{Solution}.
a) Comparing $a_n = \frac{n}{e^n}$ with $b_n = \frac{1}{n^2}$ we have
$$\lambda = \lim{\frac{a_n}{b_n}} = \lim{\frac{n^3}{e^n}} = 0 \implies \text{conv. of}\ \sum_1 n e^{-n}$$
b) Comparing $a_n = \tan{\frac{1}{n}}$ with $b_n = \frac{1}{n}$ we have
$$\lambda = \lim{\frac{a_n}{b_n}} = \lim{\frac{\tan{\frac{1}{n}}}{\frac{1}{n}}} = 1 \implies \text{div. of} \ \sum \tan{\frac{1}{n}}$$
%%%%%%%%%%%%%%%%%%%%%%%
\hPage{b2p1/27}
%%%%%%%%%%%%
\begin{theorem} [Lim root, lim ratio tests of CAUCHY]
A series $\sum {a_n} $ of positive series is convergent if
\begin{equation}
a)\hspace{3mm} lim \sqrt[n]{a_n} < 1 \hspace{3mm} or \hspace{3mm} b)\hspace{3mm} lim \frac{a_{n+1}}{a_n} < 1
\end{equation}
and divergent if
\begin{equation}
a')\hspace{3mm} lim \sqrt[n]{a_n} > 1 \hspace{3mm} or \hspace{3mm} b')\hspace{3mm} lim \frac{a_{n+1}}{a_n} > 1
\end{equation}
Test fails if limits are equal to 1.
\end{theorem}
\begin{proof}
a) Let $lim \sqrt[n]{a_n} = r$. \\
If $r < 1$ there is k such that $r < k < 1$. Since r is the limit, $\sqrt[n]{a_n} \leq k$ holds for all $n > N$ for some N. Then by the root test, $\sum {a_n} $ is conv. \\
a') If $r > 1$, then $\sqrt[n]{a_n} > 1$ holds for all $n > N$ and $a_n \not\to 0$. (div.) \\
The proofs of b, b' are similar.
\end{proof}
\begin{remark}
If one of the lim root and lim ratio tests fails, so does the other. \\
In the failure case, one may apply the following:
\end{remark}
\textbf{\underline{RAABE - DUHAMEL's Test}}: \\
A series $\sum {a_n} $ of positive terms is convergent or divergent according as \\
\begin{equation}
lim \hspace{4mm} n\left(\frac{a_n}{a_{n+1}}-1\right)
\end{equation}
is greater or less than 1. Test fails if limit is equal to 1.
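For instance, for the series $\sum 1/n^{2}$, where the lim ratio test fails, RAABE-DUHAMEL's test gives
\[
lim \hspace{2mm} n\left(\frac{a_n}{a_{n+1}}-1\right) = lim \hspace{2mm} n\left(\frac{(n+1)^2}{n^2}-1\right) = lim \hspace{2mm} \frac{2n+1}{n} = 2 > 1,
\]
hence the series is convergent.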
\begin{example}
Test the following series of positive terms for convergence:
%%%%%%%%%%%%%%%%%%%%%%%%5
\hPage{b2p1/028}
%%%%%%%%%%%%%%%%%%%%%%%%5
\[
a) \sum_{1}^{}\Big(\frac{n+1}{n}\Big)^{n} \qquad b) \sum_{0}^{}\frac{2^n}{n!} \qquad c) \sum_{2}^{}\frac{n!}{n^n} \qquad d) \sum_{-1}^{}\frac{n}{n+2}
\]
\end{example}
\begin{hSolution}
a) Since $a_n \rightarrow e \neq 0$, the series diverges\footnote{\label{1}It should have been "diverges" or "is divergent" rather than "is diverges", which is grammatically wrong. I've corrected that mistake.}. \\
b) $\frac{a_{n+1}}{a_n} = \frac{2^{n+1}}{(n+1)!} . \frac{n!}{2^n} = \frac{2}{n+1} \rightarrow 0 < 1$ (it converges) \\
c)$\sqrt[n]{a_n} = \frac{\sqrt[n]{n!}}{n} \rightarrow \frac{1}{e} < 1$ (it converges) \\
d) $a_n \rightarrow 1 \neq$ 0 (it diverges)
\end{hSolution}
\subsection{ALTERNATING SERIES}
A series
\[
\sum_{0}^{\infty} (-1)^n a_n = a_0 - a_1 + a_2 - ... + (-1)^n a_n + ... (a_n > 0)
\]
in which the terms alternate in sign is called an \underline{alternating series}.
\par The series
\[
1 - \frac{1}{2} + \frac{1}{3} - ... + (-1)^{n+1} \frac{1}{n} + ...
\]
is an alternating one, known as the \underline{alternating harmonic series}.
\par Since an alternating series is not a series of positive terms, the previous tests cannot be applied. However there is a test special to alternating series which is the following:
\begin{thm}{(LEIBNIZ)}
The alternating series
\[
a_0 - a_1 + a_2 - ... + (-1)^n a_n + ... (a_n > 0)
\]
is convergent if \\
a) $a_0 \geq a_1 \geq a_2 \geq ... \geq a_n \geq ...$ \\
b) $a_n \rightarrow 0$.
\end{thm}
\hPage{b2p1/029}
\tad \textbf{\underline{Proof}.} It will suffice to prove that $s_{2n}$ and $s_{2n+1}$ have the same limit
\[
s_{2n} = (a_{0}-a_{1})+ \dots +(a_{2n-2}-a_{2n-1})+a_{2n}\geq 0 \tag{from 1}\label{myeq}
\]
\[
s_{2n} = a_{0}-(a_{1}-a_{2})-\dots-(a_{2n-1}-a_{2n}) \leq a_{0} \tag{from 1}\label{mye1q}
\]
\tab \tal $\Longrightarrow 0 \leq s_{2n} \leq a_{0}$
Hence ($s_{2n}$) is bounded, and being monotone it converges to a limit s. Now,
\[
s_{2n+1} = s_{2n}-a_{2n+1} \to s-0 = s.\blacksquare
\]
\tad \textbf{\underline{Corollary}. In} a convergent alternating series
\[
s = a_{0}-a_{1}+a_{2}-\dots+(-1)^{n} a_{n}+R_{n+1}
\]
with the given hypothesis, the inequality
\[
\hAbs{R_{n+1}} < a_{n+1}
\]
holds, that is, the \underline{error} made in taking $s_{n}$ for $s$ is less than $a_{n+1}$.
\tad \textbf{\underline{Proof}.}
\[
R_{n+1} = (-1)^{n+1}(a_{n+1}-a_{n+2}+...)
\]
\tab $\Longrightarrow \hAbs{R_{n+1}} = \hAbs{a_{n+1}-a_{n+2}+ \dots} $
\tab \tab $ = a_{n+1}-(a_{n+2}-a_{n+3})-\dots < a_{n+1}$
\tad \textbf{\underline{Example}. } Given the alternating harmonic series
\[
1-\frac{1}{2} + \frac{1}{3} - \dots + (-1)^{n+1}\frac{1}{n} + \dots
\]
\tad a) show its convergence
% ++++++++++++++++++++++++++++++++++++++
\hPage{b2p1/031}
% ++++++++++++++++++++++++++++++++++++++
A series $\sum a_{n}$ such that $\sum|a_{n}|$ is convergent is called an
\underline{absolutely convergent series}, and the above theorem states that an absolutely convergent series is convergent.

As the alternating harmonic series shows, a series may be convergent without being absolutely convergent. Such series are called \underline{simply convergent}\footnote{In many textbooks \underline{conditional convergent} or \underline{semi-convergent} terminologies are used instead of simply convergent.} series:
\begin{align*}
\sum|a_{n}| \text{ (conv.)}
&\Longrightarrow \sum a_{n} \text{ (conv)} \dots \text{ abs. conv. of } \sum a_{n}\\
\sum|a_{n}| \text{ (div.)}
&\Longrightarrow \left\{
\begin{array}{ll}
\sum a_{n} \text{ (conv)} \dots \text{ simply. conv. of } \sum a_{n}\\
\text{or}\\
\sum a_{n} \text{ (div)}
\end{array}
\right.
\end{align*}
There is an essential difference between the absolutely convergent series and simply convergent ones. The absolutely convergent series have the following two properties among others:
\begin{enumerate}
\item The terms can be rearranged in any order (rearrangement does not alter the sum).
\item Finitely or infinitely many terms may be replaced by their sum.
\end{enumerate}
These properties may not be shared by simply convergent series, that is, a rearrangement of terms in a simply convergent series may give a different sum, as illustrated by the following example:
Consider the simply alternating harmonic series
\[
S\:=\: 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dots + (-1)^{n+1} \frac{1}{n} + \dots
\]
Let us rearrange the terms to have the series
\[
S'\:=\:(1-\frac{1}{2} - \frac{1}{4})+(\frac{1}{3} - \frac{1}{6} - \frac{1}{8}) +(\frac{1}{5} - \frac{1}{10} - \frac{1}{12})
\]
\[
+ \:\dots\:+\:(\frac{1}{2n+1} - \frac{1}{4n+2} -\frac{1}{4n+4} ) \:+\:\dots
\]
% =======================================================
% ++++++++++++++++++++++++++++++++++++++
\hPage{b2p1/032}
% ++++++++++++++++++++++++++++++++++++++
Observe that every term in one series is contained in the other exactly once:
\begin{align*}
S' &= \sum\limits_{0} \left( \frac{1}{2n+1}-\frac{1}{4n+2}-\frac{1}{4n+4}\right)
\\ &= \sum\limits_{0}\left(\frac{1}{4n+2}-\frac{1}{4n+4}\right)=\frac{1}{2}\sum\limits_{0} \left( \frac{1}{2n+1}-\frac{1}{2n+2}\right)
\\ &= \frac{1}{2}\left(1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\dotsc+\left( -1 \right)^{n+1} \frac{1}{n}+\dotsc \right)=\frac{1}{2} s.
\end{align*}
\subsection{EVALUATION OF SERIES} \
Each series can be evaluated by neglecting the remainder $R_{n+1}$ for certain n with some approximation.
Exact evaluation is impossible in general, except for convergent geometric series and some series whose general term $a_n$ is rational function of $n$. There are other possibilities by the use of power series $\left( \S 1.3 \right)$.
\begin{exmp}
Evaluate the geometric series.\\
a)$\sum\limits_{0}\quad \left( \frac{1}{2}\right)^n $ \qquad \qquad \qquad \qquad
b)$\sum\limits_{0}\quad \left(-1\right)^n \frac{2^n}{3^n}$
\end{exmp}
\begin{hSolution} Recalling
$$a \left(1+r+\dotsc+r^n+\dotsc \right) =\frac{a}{1-r} \qquad \left( \hAbs{r} < 1 \right) $$ \\
we have
a) $ r= \frac{1}{2} \left ( \hAbs{\frac{1}{2}} < 1 \right )\quad \Longrightarrow \quad s= \frac{1}{1-\frac{1}{2}} = 2 , $
b)$ r= - \frac{2}{3} \left ( \hAbs{-\frac{2}{3}} < 1 \right )\quad \Longrightarrow \quad s= \frac{1}{1+\frac{2}{3}} = 3/5 . $
\end{hSolution}
\begin{exmp}
Given $ t = 2.1\overline{37} $,
a) write it as a geometric series ,
% ++++++++++++++++++++++++++++++++++++++
\hPage{b2p1/033}
% ++++++++++++++++++++++++++++++++++++++
\begin{enumerate}[label=(\alph*)]
\setcounter{enumi}{1}
\item discuss the convergence and find t as a ratio of two\\
integers.
\end{enumerate}
\underline{Solution}.
\begin{enumerate}[label=(\alph*)]
\item $t = 2+\frac{1}{10}+\frac{37}{1000}+\frac{37}{100000}+\ ...\ +\frac{37}{1000.100^{n-1}}+\ ...$\\
=$\frac{21}{10}+\frac{37}{1000}(1+\frac{1}{100}+\ ...\ +\frac{1}{100^{n-1}}+\ ...)$
\item The series within the parentheses is a geometric series\\
with r = 1/100, whose absolute value is less than 1. Then\\
it is convergent:\\
$t=\frac{21}{10}+\frac{37}{1000}\frac{1}{1-\frac{1}{100}}=\frac{21}{10}+\frac{37}{990}=\frac{2116}{990} \in \mathbb{Q}$
\end{enumerate}
\end{exmp}
\begin{exmp} Find the sums:\\
\begin{enumerate}[label=(\alph*)]
\item $\frac{1}{1.2}+\frac{1}{2.3}+\ ...\ +\frac{1}{n(n+1)}+\ ...$
\item $\frac{1}{2^{2}-1}+\frac{1}{4^{2}-1}+\ ...\ +\frac{1}{(2n)^{2}-1}+\ ...$
\end{enumerate}
\end{exmp}
\underline{Solution}.
\begin{enumerate}[label=(\alph*)]
\item $a_{n}=\frac{1}{n(n+1)}=\frac{A}{n}+\frac{B}{n+1}\Longrightarrow\ A\ =\ 1,\ B\ =\ -1$\par
$\Longrightarrow a_{n}=\frac{1}{n}-\frac{1}{n+1}$\par
$\Longrightarrow s_{n}=(1-\frac{1}{2})+(\frac{1}{2}-\frac{1}{3})+\ ...\ +(\frac{1}{n}-\frac{1}{n+1})$\par
$=1-\frac{1}{n+1}\Longrightarrow S\ =\ 1$
\item $a_{n}=\frac{1}{(2n)^{2}-1}=\frac{A}{2n-1}+\frac{B}{2n+1}\Longrightarrow A=\frac{1}{2},\ B = -\frac{1}{2}$\par
$\Longrightarrow a_{n}=\frac{1}{2}(\frac{1}{2n-1}-\frac{1}{2n+1})$
\end{enumerate}
% ++++++++++++++++++++++++++++++++++++++
\hPage{b2p1/74}
% ++++++++++++++++++++++++++++++++++++++
Accordingly [$\lambda_{i}\delta_{ij}$], [$\lambda\delta_{ij}$] are diagonal and scalar matrices respectively.
For a square matrix A = [$a_{ij}$], the symbols $\hAbs{A}$, det A, det[$a_{ij}$] are used to denote the determinant of A:
\begin{center}
$\hAbs{A}$ = $\hAbs{a_{ij}}$ = det A = det[$a_{ij}$]
\end{center}
We note that if A is a non square matrix, $\hAbs{A}$ is not defined.
\section{Operations with real matrices}
\subsection{Equality} The matrices [$a_{ij}$], [$b_{ij}$] are \underline{equal} if they are of the same size and corresponding elements are equal:
\begin{center}
$[a_{ij}]_{mxn}$ = $[b_{ij}]_{mxn}$ $\iff$ $a_{ij}$ = $b_{ij}$ for all i, j.
\end{center}
\subsection{Addition} The \underline{sum} of two matrices $[a_{ij}]$, $[b_{ij}]$ of the same size is a matrix of the same size whose elements are the sums of their corresponding elements:
\begin{center}
$[a_{ij}]_{mxn}$ + $[b_{ij}]_{mxn}$ = [$a_{ij}$ + $b_{ij}$]$_{mxn}$
\end{center}
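For example,
\begin{center}
$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 3 \\ 4 & 4 \end{bmatrix}$
\end{center}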
\subsection{Multiplication by a scalar} The \underline{product} of a matrix with a scalar is a matrix of the same size obtained by multiplying every element of the matrix by that scalar:
\begin{center}
$c[a_{ij}]$ = [c $a_{ij}]$ = [$a_{ij}$]c
\end{center}
\subsection{Subtraction} The difference A-B of two matrices A and B of the same size is the matrix A+ (- B):
\begin{center}
$[a_{ij}]_{mxn}$ - $[b_{ij}]_{mxn}$ = [$a_{ij}$ - $b_{ij}$]$_{mxn}$
\end{center}
\subsection{Multiplication} The product AB is defined only when the number of columns in A is equal to the number of rows in B.
%++++++++++++++++++++++++++++++
\hPage{b2p1-078}
%+++++++++++++++++++++++++++
\begin{align*}
&& 2A & =
\begin{bmatrix}
0 && 2 && 2\\
2 && 0 && 2\\
2 && 2 && 0\\
\end{bmatrix}
&&\\
\\
&& -3I_3 & =
\begin{bmatrix}
-3 & 0 & 0\\
0 & -3 & 0\\
0 & 0 & -3\\
\end{bmatrix}
&&
\\
\\
&\implies & A^2-2A-3I_3 & = \begin{bmatrix}
-1 & 3 & 3\\
3 & -1 & 3\\
3 & 3 & -1\\
\end{bmatrix}
\end{align*}
\setlength{\parindent}{7ex}
\underline{Remark}. Note that multiplication of a matrix by a scalar
and that of a determinant by a scalar are defined differently: a
matrix is multiplied by a scalar by multiplying every element by that
scalar, while a determinant is multiplied by a scalar by multiplying only one row (column) by that scalar.\par
Thus
\begin{align*}
\qquad \qquad \qquad c \quad
\begin{bmatrix}
8 && 1 && 6\\
3 && 5 && 7\\
4 && 9 && 2\\
\end{bmatrix}
&=
\begin{bmatrix}
8c && c && 6c\\
3c && 5c && 7c\\
4c && 9c && 2c\\
\end{bmatrix} ,
\\
\qquad \qquad \qquad c \quad
\begin{bmatrix}
8 && 1 && 6\\
3 && 5 && 7\\
4 && 9 && 2\\
\end{bmatrix}
&=
\begin{bmatrix}
8c && c && 6c\\
3 && 5 && 7\\
4 && 9 && 2\\
\end{bmatrix}
=
\begin{bmatrix}
8 && c && 6\\
3 && 5c && 7\\
4 && 9c && 2\\
\end{bmatrix}
\end{align*}\\
As a result we have for a matrix A of order n,
\begin{align*}
det \ c[a_{ij}] = det \ [ca_{ij}] = c^n \ det[a_{ij}]
\end{align*}
% ++++++++++++++++++++++++++++++++++++++
\hPage{b2p1/87}
% ++++++++++++++++++++++++++++++++++++++
\begin{center}
Adj A = $\begin{bmatrix}
-3 &\ -1 &\ 1 \\
-3 &\ 3 &\ 3 \\
3 &\ -1 &\ -5
\end{bmatrix}$ = [$A_{ji}$]
\end{center}
b) Adj B = $\begin{bmatrix}
9 &\ -3 \\
-6 &\ 2 \\
\end{bmatrix}$
\begin{thm} $A^{-1} $= $\frac{1}{\hAbs{A}}$ Adj A = $\frac{[A_{ji}]}{\hAbs{A}}$ if $\hAbs{A}$ $\neq$ 0, i. e., if $[A_{ij}]$ is invertible.
\end{thm}
\begin{proof} We need to show that
\begin{center}
A $\frac{Adj A}{\hAbs{A}}$ = I \quad or \quad A Adj A = $\hAbs{A}$ I
\end{center}
Indeed,
\begin{center}
A Adj A = $\begin{bmatrix}
\cdots &\cdots &\cdots \\
a_{il} &\cdots &\ a_{in} \\
\cdots &\cdots &\cdots
\end{bmatrix}$
$\begin{bmatrix}
\vdots & A_{lj} &\vdots \\
\vdots &\vdots &\vdots \\
\vdots & A_{nj} &\vdots
\end{bmatrix}$ /$\hAbs{A}$
\end{center}
\begin{center}
\[=[\sum_{\substack{k}} a_{ik} A_{kj}] /\hAbs{A} = [\delta_{ij}\hAbs{A}]/\hAbs{A}= [\delta_{ij}] = I \]
\end{center}
by Theorem 6 on determinant. (Book I)
\end{proof}
\begin{exmp} Find the inverses of the matrices A and B in Example 1, if any.
\end{exmp}
\begin{hSolution}
a) The classical adjoint of A was obtained as the matrix
\begin{center}
$\begin{bmatrix}
-3 &\ -3 &\ 3 \\
-1 &\ 3 &\ -1 \\
1 &\ 3 &\ -5
\end{bmatrix}$
\end{center}
and the inverse is obtained by dividing this matrix by $\hAbs{A}$= -6
%%%%%%%%%%%%555
\hPage{b2p1-088}
%%%%%%%%%%%%%%
\begin{align*}
\qquad &&&&& |A| \ = \
\begin{vmatrix}
2 && 1 && 1\\
1 && -2 && -1\\
1 && 1 && 2
\end{vmatrix} \ = \
\begin{vmatrix}
2 && 0 && 1\\
1 && -1 && -1\\
1 && -1 && 2\\
\end{vmatrix}=
2(-2-1) + 1\cdot 0 = -6
\end{align*}
\begin{flushleft}
Hence,
\end{flushleft}
\begin{center}
$ A^{-1} =
\begin{bmatrix}
1/2 & 1/6 & -1/6\\
1/2 & -1/2 & -1/2\\
-1/2 & 1/6 & 5/6\\
\end{bmatrix}
$
\end{center}
\setlength{\parindent}{3ex}
b) Since det B = 0, B is not invertible.
\vspace{0.1cm}
\end{hSolution}
\underline{Example 3}. Find the inverse of \\
\begin{align*}
&&&&&&& A=
\begin{bmatrix}
a && b \\
\\
c && d \\
\end{bmatrix}
\quad \text{if} \ |A| = ad-bc \neq 0
\end{align*}
\underline{Solution}. \ Since
\begin{center}
$ [A_{ij}] =
$$
\begin{bmatrix}
d && -c \\
\\
-b && a \\
\end{bmatrix}
$$
$
\end{center}
we have
\[
A^{-1} =
\begin{bmatrix}
d & -b \\
-c & a \\
\end{bmatrix}
/ \hAbs{A} = \frac{
\begin{bmatrix}
d & -b \\
-c & a
\end{bmatrix}}
{ ad-bc }
\]
\tal 3. \underline{Elementary row operations}\\
\setlength{\parindent}{7ex}
\tal Let A be a rectangular matrix of shape $m \times n$. Let the row matrices be $R_1, \ldots, R_m$.
The following operations on rows are called the \underline{elementary row operations}: \\
\tal $ R_i \Leftrightarrow R_j $ : Interchanging of ith and jth rows,\\
\tal $ R_i + R_j $ \ : Adding the jth row to the ith row,\\
\tal $ c \ R_i $ \qquad : Multiplying a row by a non zero scalar.\\
%+++++++++++++++++++++++
\hPage{b2p1/094}
%+++++++++++++++++
\hNewLine
\null Multiplying both sides of this equation by $A^{-1}$ (if it exists), we have
\hNewLine
\begin{center}
$c_nA^{n-1}+c_{n-1}A^{n-2}+...+c_1I_n+c_0A^{-1}=0,$
\end{center}
\hNewLine
since $A^kA^{-1}=(A^{k-1}A)A^{-1}=A^{k-1}(AA^{-1})=A^{k-1}.$
\hNewLine
\par This latter equality is solvable for $A^{-1}$ when $c_0=\left|A\right|\neq0$ which is the same condition in Method 2.
\hNewLine
\par If $\left|A\right|=c_0=0,$ then $A$ is not invertible and such matrix is called a \underline{singular square matrix}. An invertible matrix is non singular.
\hNewLine
\begin{exmp} Find the inverse of
\begin{center}
a) $A=\begin{bmatrix}2 & 1 & 1\\1 & -2 & -1\\1 & 1 & 2\end{bmatrix}$ \quad\quad b) $B=\begin{bmatrix}2 & 3\\6 & 9\end{bmatrix}$
%\begin{itemize}
%\item[a)] $A=\begin{bmatrix}2 & 1 & 1\\1 & -2 & -1\\1 & 1 & 2\end{bmatrix}$
%\item[b)] $B=\begin{bmatrix}2 & 3\\6 & 9\end{bmatrix}$
%\end{itemize} Itemizing would keep me from putting the items of the set in the same line therefore I am just going to type a) & b)
\end{center}
\end{exmp}
\hNewLine
\begin{hSolution}
\begin{center}
\begin{itemize}
\item[a)] The characteristic equation is
\hNewLine
$P(\lambda)=\left| \begin{array}{ccc} 2-\lambda & 1 & 1\\1 & -2-\lambda & -1\\1 & 1 & 2-\lambda \end{array}\right|=0$
\end{itemize}
\end{center}
\end{hSolution}
\hNewLine
Expansion gives
\hNewLine
\begin{center}
$P(\lambda)=-\lambda^3+2\lambda^2+5\lambda-6=0$
\end{center}
\hNewLine
and by CAYLEY-HAMILTON Theorem we have
\hNewLine
\begin{align*}
&-A^3+2A^2+5A-6I_3=0\\
\Rightarrow &-A^2+2A+5I-6A^{-1}=0\\
\Rightarrow &-6A^{-1}=A^2-2A-5I\\
\end{align*}
\hPage{b2p1/106}
After discarding zero rows, if the remaining system is consistent, the given system is consistent. In the consistency case, starting from the bottom and going upward and considering the equation corresponding to each row, one can find the unknowns successively $(x_{n}, x_{n-1}, ... )$, some of which are taken as parameters when possible.\par
When the echelon form of the system is, for instance
\[
\begin{bmatrix}
2 & 0 & -3 & 4 & \vdots & 1 \\
0 & 0 & 0 & 0 & \vdots & 6 \\
0 & 0 & 0 & 0 & \vdots & 0 \\
\end{bmatrix}
\]
the system is inconsistent (no solution). \par
If the echelon form is, for instance,
\[
\begin{bmatrix}
2 & 1 & -3 & 4 & \vdots & 1 \\
0 & 0 & 0 & 2 & \vdots & 6 \\
\end{bmatrix}
\]
we have consistency. Then
\begin{align*}
&2x_{4} = 6 \Rightarrow x_{4} = 3 \\
&2x_{1} + x_{2} - 3x_{3} + 4.3 = 1 \\
&\Rightarrow \, 2x_{1} + x_{2} - 3x_{3} = -11 \\
&\qquad x_{1} = s, x_{3} = t \Rightarrow x_{2} = -11 -2s + 3t \\
&\qquad S = [s, -11 - 2s + 3t, t, 3]
\end{align*}
When the echelon form is
\[
\begin{bmatrix}
1 & -3 & \vdots & 4 \\
0 & 2 & \vdots & 3 \\
0 & 0 & \vdots & -1 \\
0 & 0 & \vdots & 0 \\
0 & 0 & \vdots & 0 \\
\end{bmatrix}
\]
%+++++++++++++++++++++++++++++
\hPage{b2p1/110}
%+++++++++++++++++++++++++++
\noindent36. Discuss the solution of the system:
\[
y + z = 1, 2x -2y -z = 0, 4x + 3y +5z = 7
\]
37. Discuss the solution by the use of augmented matrix:
\[
\begin{bmatrix}
2 & -3 & 1 \\
3 & 0 & 2 \\
1 & 3 & 1 \\
\end{bmatrix}
\begin{bmatrix}
x \\
y \\
z \\
\end{bmatrix}
=
\begin{bmatrix}
2 \\
5 \\
4 \\
\end{bmatrix}
\]
38. Find right inverses, if any, of the following matrices:
\[
a)
\begin{bmatrix}
2 & 0 \\
-1 & 3 \\
1 & 1 \\
\end{bmatrix}
\qquad b)
\begin{bmatrix}
3 & 7 \\
\end{bmatrix}
\]
39. Find left inverses, if any, of the matrices given in Exercise 38. \newline
40. Find the inverse of $
\begin{bmatrix}
a & b & b \\
0 & d & e \\
0 & 0 & f \\
\end{bmatrix}
\qquad (adf \neq 0)
$ \newline
41. Solve:
\[
\begin{bmatrix}
1 & 1 & 1 \\
a & b & c \\
a^2-1 & b^2-1 & c^2-1 \\
\end{bmatrix}
\begin{bmatrix}
x \\
y \\
z \\
\end{bmatrix}
=
\begin{bmatrix}
1 \\
p \\
p^2-1 \\
\end{bmatrix}
\]
and evaluate
\[
(a^2-a)x+(b^2-b)y+(c^2-c)z-(p^2-p)
\]
42. Find the inverse of:
\[
a)
\begin{bmatrix}
2 & 1 & -1 \\
0 & 2 & 1 \\
5 & 2 & -3 \\
\end{bmatrix}
\qquad b)
\begin{bmatrix}
1 & 0 & 2 \\
2 & -1 & 3 \\
0 & 1 & 8 \\
\end{bmatrix}
\]
43. Find $x, y\in \mathbb{R}$, if any:
\[
\begin{bmatrix}
2 & 3 & 1 \\
0 & -2 & 4 \\
\end{bmatrix}
\begin{bmatrix}
x & y \\
2x & -y \\
-x & 3y \\
\end{bmatrix}
=
\begin{bmatrix}
14 & 2 \\
-16 & 14 \\
\end{bmatrix}
\]
% ++++++++++++++++++++++++++++++++++++++
\hPage{b2p1/111}
% ++++++++++++++++++++++++++++++++++++++
\begin{enumerate*}
\item[44.]
If $ A_{2*2}
\begin{bmatrix}
x \\
y
\end{bmatrix}$
=
$
\begin{bmatrix}
x' \\
y'
\end{bmatrix}$,
we say that the matrix A \underline{maps} the
point $(x,y)$ to the point $(x',y')$. Find A which maps $(2,3)$
to $(1,0)$, and $(-1,1)$ to $(2,-5)$.
\medskip
\item[45.] Find $x, y\in \mathbb{R}$, if any:
$$
\begin{bmatrix}
3 & -1 \\
0 & 2 \\
-2 & 1 \\
\end{bmatrix}
\begin{bmatrix}
x & -y & y-x \\
2y & 3x & x-y
\end{bmatrix}
=
\begin{bmatrix}
-6 & -9 & 12 \\
12 & 0 & -6 \\
6 & 6 & -9
\end{bmatrix}
$$
\end{enumerate*}
\bigskip
\begin{center}
\textbf{ANSWERS TO EVEN NUMBERED EXERCISES}
\end{center}
\begin{enumerate}
\item[32.]
a) $[\pm2,0,\pm1]$ \hspace{5pt} (four solutions),\hspace{10pt} b) $[2,1,-3]$
\item[34.]
$[-2, 2, 1]$
\item[36.]
$[1-k, 1-2k, 2k]$
\bigskip
\item[38.]
a)
$
\begin{bmatrix}
t & \frac{1}{2}t - \frac{1}{4} & -\frac{3}{2}t + \frac{3}{4} \\
s & \frac{1}{2}s + \frac{1}{4} & -\frac{3}{2}s + \frac{1}{4}
\end{bmatrix}
,
$
\hspace{9pt}
b)
$
\frac{1}{7}
\begin{bmatrix} 7s \\ 1-3s \end{bmatrix}
$
\bigskip
\item[40.]
$
\begin{bmatrix}
\frac{1}{a} & -\frac{b}{ad} & \frac{be-cd}{adf} \\
0 & \frac{1}{d} & -\frac{e}{df}\\
0 & 0 & \frac{1}{f}
\end{bmatrix}
$
\bigskip
\item[42.]
a)
$
\begin{bmatrix}
8 & -1 & -3 \\
-5 & 1 & 2 \\
10 & -1 & -4
\end{bmatrix}
,
$
\hspace{9pt}
b)
$
\frac{-1}{7}
\begin{bmatrix}
-11 & 2 & 2 \\
-16 & 8 & 1 \\
2 & -1 & -1
\end{bmatrix}
$
\bigskip
\item[44.]
$
\begin{bmatrix}
-1 & 1 \\
3 & -2
\end{bmatrix}
$
\end{enumerate}
% ++++++++++++++++++++++++++++++++++++++
\hPage{b2p1/114}
% ++++++++++++++++++++++++++++++++++++++
\begin{enumerate}
\item[53.]
If $A_1, ... , A_n$ are invertible matrices of the same order,
\begin{hEnumerateAlpha}
\item
prove $(A_1 ... A_n)^{-1} = A_n^{-1} ... A_1^{-1}$
\item
prove $(A^n)^{-1}=(A^{-1})^n$
\end{hEnumerateAlpha}
\item[54.]
Find the inverses of:
\begin{tabular}{ll}
a)
$
\begin{bmatrix}
{\begin{array}{ccc}
2 & 1 & 0\\
1 & 1 & 0\\
0 & 0 & 1 \end{array} }
\end{bmatrix}
$
&b)
$
\begin{bmatrix}
{\begin{array}{ccc}
1 & 1 & 1\\
0 & 1 & 1\\
0 & 0 & 1 \end{array} }
\end{bmatrix}
$
\end{tabular}
\item[55.]
Show that the product of two upper (lower) square triangular matrices is an upper (lower) triangular matrix.
\item[56.]
Prove that the inverse of a non singular diagonal matrix is a non singular diagonal matrix.
\item[57.]
Evaluate
\begin{center}
$
\begin{bmatrix}
{\begin{array}{cc}
\frac{a \pm \sqrt{ad-bc}}{\sqrt{D}}& \frac{b}{\sqrt{D}} \\
\frac{c}{\sqrt{D}} & \frac{d \pm \sqrt{ad-bc}}{\sqrt{D}} \end{array}}
\end{bmatrix}^2
$
\end{center}
where $D = a + d + 2\sqrt{ad-bc}>0$
\item[58.]
If $(A^{-1})^n$ is denoted by $A^{-n}$, then evaluate $U^{-2}$ , $V^{-2}$ where
$
U=
\begin{bmatrix}
{\begin{array}{rrr}
2 & 1 & 1\\
1 & -2 & -1\\
1 & 1& 2 \end{array} }
\end{bmatrix},
$
$
V=
\begin{bmatrix}
{\begin{array}{rrrr}
1 & 1 & 0 & 0\\
1 & 2 & 0 & 0\\
5 & 2 & 3 & -1\\
-1 & 1 & -5 & 2 \end{array} }
\end{bmatrix}
$
\item[59.]
Prove $[\Delta(\Theta)]^n = \Delta(n\Theta)$, where
$
\Delta(\Theta)=
\begin{bmatrix}
{\begin{array}{ccc}
\cos^2\Theta & -\sin 2\Theta & \sin^2\Theta\\
\cos\Theta \sin\Theta & \cos 2\Theta & -\sin\Theta \cos\Theta\\
\sin^2\Theta & \sin 2\Theta & \cos^2\Theta \end{array} }
\end{bmatrix}
$
\end{enumerate}
\hPage{b2p1/217}
\begin{flushleft}
$\quad\quad (9x^2+18x)-4y^2+(z^2-4z)+13=0$
$\Rightarrow \quad 9(x^2+2x)-4y^2+(z-2)^2+9=0$
$\Rightarrow \quad 9(x+1)^2-9-4y^2+(z-2)^2+9=0$
$\Rightarrow \quad 9(x+1)^2-4y^2+(z-2)^2=0$ (cone, vertex at $(-1,0,2)$)
\end{flushleft}
\hNewLine
\quad Since
$T=
\left|
\begin{array}{cccc}
18 & 0 & 0 & 18 \\
0 & -8 & 0 & 0 \\
0 & 0 & 2 & -4 \\
18 & 0 & -4 & 26
\end{array}
\right|=0$
\hNewLine
\hNewLine
the cone is degenerate.
\begin{itemize}
\item[b)] Since there is only one cross term, namely $-2xz$, the standard equation is obtained by rotating $Oxz$ about $O$ by a proper angle $\theta$:
\end{itemize}
\hNewLine
\begin{center}
$(x^2-2xz+z^2)-y^2+2x-4y-2z-3=0 \quad\quad\quad\quad(1)$
\hNewLine
\hNewLine
$\tan 2\theta=\frac{-2}{1-1}=\infty \quad \Rightarrow \quad \theta=\pi/4 \quad \Rightarrow \quad \cos\theta=\sin\theta=\frac{\sqrt{2}}{2}$
\hNewLine
\hNewLine
$x=\frac{\sqrt{2}}{2}(x'-z'), \quad z=\frac{\sqrt{2}}{2}(x'+z'), \quad y=y'.$
\end{center}
\hNewLine
Setting these values in $(1)$, we have
\hNewLine
\begin{center}
$\frac{1}{2}(x'-z')^2-2\cdot\frac{2}{4}(x'^2-z'^2)+\frac{1}{2}(x'+z')^2-y'^2$
\hNewLine
\hNewLine
$+\sqrt{2}(x'-z')-4y'-\sqrt{2}(x'+z')-3=0$
\hNewLine
\hNewLine
$4z'^2-2y'^2-8y'-4\sqrt{2}z'-6=0$
\hNewLine
\hNewLine
$2(z'-\frac{\sqrt{2}}{2})^2-(y'+2)^2=0$ (two intersecting planes)
\end{center}
\end{document} | {
"alphanum_fraction": 0.5535191073,
"avg_line_length": 27.7766233766,
"ext": "tex",
"hexsha": "cc17efcb8f8a3147f545605d6420f94436ed1047",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4e71a0ed20d76b93c144c2f9c0fbbd52c04b5ae3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "yildirimyigit/cmpe220_2016_3",
"max_forks_repo_path": "books/pages/b2p1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4e71a0ed20d76b93c144c2f9c0fbbd52c04b5ae3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "yildirimyigit/cmpe220_2016_3",
"max_issues_repo_path": "books/pages/b2p1.tex",
"max_line_length": 330,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "4e71a0ed20d76b93c144c2f9c0fbbd52c04b5ae3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "yildirimyigit/cmpe220_2016_3",
"max_stars_repo_path": "books/pages/b2p1.tex",
"max_stars_repo_stars_event_max_datetime": "2019-05-15T22:03:34.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-05-15T22:03:34.000Z",
"num_tokens": 12995,
"size": 32082
} |
\documentclass[]{book}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage{hyperref}
\hypersetup{unicode=true,
pdftitle={A minimal quantitative RNAseq pipeline},
pdfauthor={Dan MacLean},
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{natbib}
\bibliographystyle{apalike}
\usepackage{color}
\usepackage{fancyvrb}
\newcommand{\VerbBar}{|}
\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\usepackage{framed}
\definecolor{shadecolor}{RGB}{248,248,248}
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\BuiltInTok}[1]{#1}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}}
\newcommand{\ExtensionTok}[1]{#1}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ImportTok}[1]{#1}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\NormalTok}[1]{#1}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\RegionMarkerTok}[1]{#1}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\usepackage{longtable,booktabs}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
%%% Change title format to be more compact
\usepackage{titling}
% Create subtitle command for use in maketitle
\providecommand{\subtitle}[1]{
\posttitle{
\begin{center}\large#1\end{center}
}
}
\setlength{\droptitle}{-2em}
\title{A minimal quantitative RNAseq pipeline}
\pretitle{\vspace{\droptitle}\centering\huge}
\posttitle{\par}
\author{Dan MacLean}
\preauthor{\centering\large\emph}
\postauthor{\par}
\predate{\centering\large\emph}
\postdate{\par}
\date{2020-01-30}
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
\thm@preskip=8pt plus 2pt minus 4pt
\thm@postskip=\thm@preskip
}
\makeatother
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\hypertarget{about-this-course}{%
\chapter{About this course}\label{about-this-course}}
In this short course we'll look at a method for getting quantitative estimates of gene expression from RNAseq data. The course assumes that you will already have performed a read alignment so is \emph{not} a `read to results' course. The course is very brief and will show you how to perform a common pipeline centered around \texttt{DESeq} in \texttt{R} and \texttt{RStudio}.
I acknowledge that there are lots of other programs and methods - this course is \emph{not} meant to be comprehensive; it is meant to get you productive. Seek out further advice if you need to run other programs or systems. Do be encouraged though, lots of what you learn here will be applicable to other pipelines for the same job (they all run in a similar manner with similar objects) so this is a good place to start.
The course is intended to run on your `local' machine, that is to say, your laptop or desktop computer. In general these machines will be powerful enough for most datasets though the pipeline we will learn can be easily adapted for a high performance computing environment if you need greater computational power.
\hypertarget{prerequisites}{%
\section{Prerequisites}\label{prerequisites}}
This course assumes that you are a little familiar with the basics of running R and R commands from the R console. You'll need to know the basics of typing in commands and getting output, not much more.
\hypertarget{r-and-rstudio}{%
\subsection{R and RStudio}\label{r-and-rstudio}}
\hypertarget{installing-r}{%
\subsubsection{Installing R}\label{installing-r}}
Follow this link and install the right version for your operating system \url{https://www.stats.bris.ac.uk/R/}
\hypertarget{installing-rstudio}{%
\subsubsection{Installing RStudio}\label{installing-rstudio}}
Follow this link and install the right version for your operating system \url{https://www.rstudio.com/products/rstudio/download/}
\hypertarget{installing-r-packages-in-rstudio.}{%
\subsubsection{Installing R packages in RStudio.}\label{installing-r-packages-in-rstudio.}}
You'll need the following R packages:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
devtools
\item
atacR
\item
DESeq2
\end{enumerate}
For simplicity, install them in that order.
To install \texttt{devtools}:
Start RStudio and use the \texttt{Packages} tab in the lower right panel. Click the install button (top left of the panel) and enter the package name \texttt{devtools}, then click install as in this picture:
\begin{figure}
\centering
\includegraphics{fig/package_install.png}
\caption{Installing Packages}
\end{figure}
To install \texttt{atacR}:
Type the following into the RStudio console, \texttt{devtools::install\_github("TeamMacLean/atacr")}
To install \texttt{DESeq}:
Type the following into the RStudio console, \texttt{BiocManager::install("DESeq2")}
Now you are done! Everything is installed ready for you to work with. Next we need to get the sample data.
\hypertarget{sample-bam-file-and-counts-files}{%
\subsection{Sample BAM file and counts files}\label{sample-bam-file-and-counts-files}}
You'll need this zip file of data: \href{https://github.com/TeamMacLean/minimal_quantitative_rnaseq/blob/master/sample_data/sample_data.zip}{sample\_data.zip} which contains a lot of files, including BAM files and counts. Download it, extract the files and put them into a folder on your machine. I suggest something like \texttt{Desktop/align\_tut}. This will be the directory we'll work from in the rest of the course.
That's all you need to do the lesson. If you have any problems getting this going, then ask someone in the Bioinformatics Team and we'll help.
\hypertarget{intro}{%
\chapter{Counting Aligned Reads in Genomic Regions}\label{intro}}
\hypertarget{about-this-chapter}{%
\section{About this chapter}\label{about-this-chapter}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Questions
\end{enumerate}
\begin{itemize}
\tightlist
\item
How do I calculate counts of reads at genes from my alignments?
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Objectives
\end{enumerate}
\begin{itemize}
\tightlist
\item
Understand the basis for the gene region and read counting technique
\item
Understand what the count matrix represents
\item
Use the \texttt{make\_counts()} function to make a count matrix
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Keypoints
\end{enumerate}
\begin{itemize}
\tightlist
\item
Gene regions are designated by coordinates in GFF files
\item
A count matrix is a table-like object of reads that are found in a given genomic region
\item
The count matrix is the main object in a DESeq analysis
\end{itemize}
In this chapter we'll look at the fundamentals of read counting from a BAM file of aligned reads.
\hypertarget{counting-the-number-of-reads-that-have-aligned-to-gene-regions}{%
\section{Counting the number of reads that have aligned to gene regions}\label{counting-the-number-of-reads-that-have-aligned-to-gene-regions}}
The basis of quantitative RNAseq is working out how many of our sequence reads have aligned to each gene. In broad terms this is done by taking the genomic coordinates of all the aligned reads (the start and end positions of the read's alignment on the reference genome) and cross-referencing them with the positions of the genes from a gene file. The resulting table is called a count matrix. See the figure below for a representation.
\begin{figure}
\includegraphics[width=12.78in]{fig/align} \caption{A) Graphic of read alignment and gene position showing reads within genes. B) The equivalent count matrix that comes from this alignment}\label{fig:unnamed-chunk-1}
\end{figure}
It is our aim in this section to create a count matrix from BAM files.
\hypertarget{atacr}{%
\subsection{atacR}\label{atacr}}
\texttt{atacR} was initially designed to help with the analysis of ATAC-Cap-seq data, a quite different sort of data to RNAseq, but as with many bioinformatics pipelines, the first steps are quite common, so we can make use of the neat way \texttt{atacR} handles the count matrix creation in the helpful function \texttt{make\_counts()}.
\hypertarget{preparing-the-input}{%
\section{Preparing the input}\label{preparing-the-input}}
We need three things: the BAM files, a GFF file and a file of sample information.
\hypertarget{the-gff-file}{%
\subsection{The GFF file}\label{the-gff-file}}
GFF files are one way among many of describing the positions of genes on a genome. Here's a quick look at one.
\begin{verbatim}
chr123 . gene 1300 1500 . + . ID=gene1
chr123 . gene 1050 1500 . + . ID=gene2
\end{verbatim}
As you can see, it's a simple file with a gene represented on each line, by its chromosome (\texttt{chr123}), its start and end, and its strand. The best thing about GFF files is that usually we can just download them from the relevant genome website. They tend to be freely available.
\hypertarget{the-sample-information-file}{%
\subsection{The Sample Information file}\label{the-sample-information-file}}
This file is a really simple file that references the BAM file of the alignment with the sample and replicate information. It has three columns: \texttt{sample\_name}, \texttt{bam\_file\_path} and \texttt{treatment}.
Here is an example.
\begin{verbatim}
## Parsed with column specification:
## cols(
## treatment = col_character(),
## sample_name = col_character(),
## bam_file_path = col_character()
## )
\end{verbatim}
\begin{tabular}{l|l|l}
\hline
treatment & sample\_name & bam\_file\_path\\
\hline
control & control\_rep1 & sample\_data/control1/alignedSorted.bam\\
\hline
control & control\_rep2 & sample\_data/control2/alignedSorted.bam\\
\hline
control & control\_rep3 & sample\_data/control3/alignedSorted.bam\\
\hline
treatment & treatment\_rep1 & sample\_data/treatment1/alignedSorted.bam\\
\hline
treatment & treatment\_rep2 & sample\_data/treatment2/alignedSorted.bam\\
\hline
treatment & treatment\_rep3 & sample\_data/treatment3/alignedSorted.bam\\
\hline
\end{tabular}
The \texttt{sample\_name} column describes the treatment and replicate performed, the \texttt{bam\_file\_path} describes the place in which the BAM file for that sample is saved and \texttt{treatment} is the general name for the treatment that was used; this column is usually not unique when you have replicates.
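If you want to sanity-check your own sample information file before counting, a minimal sketch (assuming the paths used in this course) is to read it back in and confirm that every BAM file it points at actually exists:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# read the sample sheet and check every BAM path exists}
\NormalTok{sample_info <-}\StringTok{ }\NormalTok{readr}\OperatorTok{::}\KeywordTok{read_csv}\NormalTok{(}\StringTok{"sample_data/sample_information.csv"}\NormalTok{)}
\KeywordTok{all}\NormalTok{(}\KeywordTok{file.exists}\NormalTok{(sample_info}\OperatorTok{$}\NormalTok{bam_file_path))}
\end{Highlighting}
\end{Shaded}

If this returns \texttt{FALSE}, check the \texttt{bam\_file\_path} column for typos before going any further.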
\hypertarget{the-bam-files}{%
\subsection{The BAM files}\label{the-bam-files}}
The BAM files all come from a previously done alignment. The sample information file describes the place where they are kept and the sample they represent.
\hypertarget{sample-files-for-this-chapter}{%
\subsection{Sample files for this chapter}\label{sample-files-for-this-chapter}}
All the files are provided for you in the sample data you downloaded as \texttt{50\_genes.gff} and \texttt{sample\_information.csv} and in the folders containing BAM files. Feel free to examine them and look at how they relate to each other.
Once we have these files prepared, we can go on to use the \texttt{atacR} package to make the count matrix.
\hypertarget{running-make_counts}{%
\section{\texorpdfstring{Running \texttt{make\_counts()}}{Running make\_counts()}}\label{running-make_counts}}
First we must load in \texttt{atacR}. Type the following into the R console.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(atacr)}
\end{Highlighting}
\end{Shaded}
Now we can do the counting with \texttt{make\_counts()}. Here's how to do it. Remember to properly describe the path to the files. The paths given here are correct if the files are in a folder called \texttt{sample\_data} in the current working directory.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{count_information <-}\StringTok{ }\KeywordTok{make_counts}\NormalTok{(}\StringTok{"sample_data/50_genes.gff"}\NormalTok{,}
\StringTok{"sample_data/sample_information.csv"}\NormalTok{,}
\DataTypeTok{is_rnaseq =} \OtherTok{TRUE}
\NormalTok{ )}
\end{Highlighting}
\end{Shaded}
The function should run and give no output. Note that it is important to set \texttt{is\_rnaseq} to \texttt{TRUE} to tell the function to count appropriately. The results are saved in the \texttt{count\_information} object.
\hypertarget{summaries-and-diagnostic-plots}{%
\section{Summaries and Diagnostic plots}\label{summaries-and-diagnostic-plots}}
With the counts computed we can do some diagnosis on the quality of the experiment.
We can see summary information with the \texttt{summary()} function
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{summary}\NormalTok{(count_information)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## ATAC-seq experiment of 2 treatments in 6 samples
## Treatments: control,treatment
## Samples: control_rep1,control_rep2,control_rep3,treatment_rep1,treatment_rep2,treatment_rep3
## Bait regions used: 50
## Total Windows: 99
##
## On/Off target read counts:
## sample off_target on_target percent_on_target
## 1 control_rep1 0 57733 100
## 2 control_rep2 0 66155 100
## 3 control_rep3 0 66122 100
## 4 treatment_rep1 0 100547 100
## 5 treatment_rep2 0 120325 100
## 6 treatment_rep3 0 107611 100
## Quantiles:
## $bait_windows
## control_rep1 control_rep2 control_rep3 treatment_rep1 treatment_rep2
## 1% 149.48 294.60 241.12 228.70 102.98
## 5% 386.35 437.75 340.50 328.30 193.90
## 95% 2335.20 2438.20 2927.10 4445.90 6940.20
## 99% 3054.18 2752.19 3291.34 5234.33 9423.95
## treatment_rep3
## 1% 116.50
## 5% 324.00
## 95% 4438.75
## 99% 6948.15
##
## $non_bait_windows
## control_rep1 control_rep2 control_rep3 treatment_rep1 treatment_rep2
## 1% 0 0 0 0 0
## 5% 0 0 0 0 0
## 95% 0 0 0 0 0
## 99% 0 0 0 0 0
## treatment_rep3
## 1% 0
## 5% 0
## 95% 0
## 99% 0
##
## Read depths:
## sample off_target on_target
## 1 control_rep1 0 1154.66
## 2 control_rep2 0 1323.10
## 3 control_rep3 0 1322.44
## 4 treatment_rep1 0 2010.94
## 5 treatment_rep2 0 2406.50
## 6 treatment_rep3 0 2152.22
\end{verbatim}
It is long, but actually quite helpful. The first thing to note is that the words relate to ATAC-Cap-Seq, but in our context `bait regions' just mean gene regions and non-bait just means intergenic regions. The `on\_targets' are read hits to genes, the `off\_targets' are read hits to intergenic regions.
We can see that all the reads have hit in gene regions, and that the read depth distribution of genes from the quantiles section gives depths in the 1000 - 2000 range. This sort of summary is helpful when you're trying to work out whether the RNAseq is useful; lots of reads `off target' is bad, as is low depth.
\hypertarget{gene-count-plots}{%
\subsection{Gene Count Plots}\label{gene-count-plots}}
We can see the distribution of depths over genes as a plot using the \texttt{plot\_counts()} function
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{plot_counts}\NormalTok{(count_information, }\DataTypeTok{log10 =} \OtherTok{FALSE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Picking joint bandwidth of 488
\end{verbatim}
\includegraphics{01-counting_genes_files/figure-latex/unnamed-chunk-6-1.pdf}
We can see that the mean count per gene (windows in \texttt{atacR}) is about 1000. The distributions in the treatment are a bit more skewed than those in the controls.
\hypertarget{comparing-samples-with-pca}{%
\subsection{Comparing Samples with PCA}\label{comparing-samples-with-pca}}
It is common to examine the similarity of the samples to each other before moving on with analysis; ideally, similar samples will cluster together.
With \texttt{atacR} it is easy to perform a quick PCA analysis.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{sample_pca_plot}\NormalTok{(count_information)}
\end{Highlighting}
\end{Shaded}
\includegraphics{01-counting_genes_files/figure-latex/unnamed-chunk-7-1.pdf}
Here we can see that the control samples all cluster together, but the treatment samples are a bit more variable. We might want to normalise these counts later as a consequence.
\hypertarget{extracting-and-saving-the-count-matrix}{%
\section{Extracting and saving the count matrix}\label{extracting-and-saving-the-count-matrix}}
We now want to extract the actual counts hiding inside the \texttt{count\_information} object; we can do this with the \texttt{assay()} extractor function from the \texttt{SummarizedExperiment} package.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(SummarizedExperiment)}
\NormalTok{raw_counts <-}\StringTok{ }\KeywordTok{assay}\NormalTok{(count_information}\OperatorTok{$}\NormalTok{bait_windows)}
\KeywordTok{head}\NormalTok{(raw_counts)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## control_rep1 control_rep2 control_rep3
## Chr1:245989-249141 670 784 548
## Chr2:2195797-2200134 1104 1266 976
## Chr3:2454387-2458244 703 922 198
## Chr4:6650421-6657260 1865 1654 3207
## Chr5:11798344-11805414 1482 1266 1646
## Chr1:12893748-12901885 1186 1416 1458
## treatment_rep1 treatment_rep2 treatment_rep3
## Chr1:245989-249141 1784 2558 368
## Chr2:2195797-2200134 358 1186 4436
## Chr3:2454387-2458244 1373 1167 1726
## Chr4:6650421-6657260 3533 703 2427
## Chr5:11798344-11805414 1258 1690 1864
## Chr1:12893748-12901885 834 594 2684
\end{verbatim}
We can see the counts for each gene in each sample. Because \texttt{atacR} works on windows, the gene coordinates are given. We can replace the coordinates with gene names if we wish as follows
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{gene_names <-}\StringTok{ }\NormalTok{readr}\OperatorTok{::}\KeywordTok{read_csv}\NormalTok{(}\StringTok{"sample_data/gene_names.txt"}\NormalTok{, }\DataTypeTok{col_names =} \OtherTok{FALSE}\NormalTok{ )}\OperatorTok{$}\NormalTok{X1}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Parsed with column specification:
## cols(
## X1 = col_character()
## )
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rownames}\NormalTok{(raw_counts) <-}\StringTok{ }\NormalTok{gene_names}
\KeywordTok{head}\NormalTok{(raw_counts)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## control_rep1 control_rep2 control_rep3 treatment_rep1
## AT1G01680 670 784 548 1784
## AT1G07160 1104 1266 976 358
## AT1G07920 703 922 198 1373
## AT1G19250 1865 1654 3207 3533
## AT1G32640 1482 1266 1646 1258
## AT1G35210 1186 1416 1458 834
## treatment_rep2 treatment_rep3
## AT1G01680 2558 368
## AT1G07160 1186 4436
## AT1G07920 1167 1726
## AT1G19250 703 2427
## AT1G32640 1690 1864
## AT1G35210 594 2684
\end{verbatim}
In this code chunk we load in the gene names from a file \texttt{gene\_names.txt} using the \texttt{readr} package. Then we use the \texttt{rownames()} function to set the row names of \texttt{raw\_counts}. This \emph{is} a little cumbersome. Often you'll come across fiddly little things like this in bioinformatics analysis. If you ever get stuck feel free to come and chat to us in the bioinformatics team.
Now we can save the matrix to a file for re-use and for importing into other programs. We'll do it in two ways: 1) to a native R binary file that we can load straight back in, and 2) to a CSV file we can examine in programs including Excel.
\hypertarget{saving-to-an-r-rds-file}{%
\subsection{Saving to an R RDS file}\label{saving-to-an-r-rds-file}}
To save as a native R object, use \texttt{saveRDS()}, passing the filename you wish to save to.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{saveRDS}\NormalTok{(raw_counts, }\StringTok{"sample_data/raw_counts.RDS"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
To save as a csv file use \texttt{write.csv()}, again passing the filename you wish to save to.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{write.csv}\NormalTok{( raw_counts, }\StringTok{"sample_data/raw_counts.csv"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
Now we can move on to using \texttt{DESeq}.
\hypertarget{running-deseq2}{%
\chapter{\texorpdfstring{Running \texttt{DESeq2}}{Running DESeq2}}\label{running-deseq2}}
\hypertarget{about-this-chapter-1}{%
\section{About this chapter}\label{about-this-chapter-1}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Questions
\end{enumerate}
\begin{itemize}
\tightlist
\item
How do I work out which genes are differentially regulated?
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Objectives
\end{enumerate}
\begin{itemize}
\tightlist
\item
Build a \texttt{DESeqDataSet} and \texttt{group} factor
\item
Run \texttt{DESeq}
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Keypoints
\end{enumerate}
\begin{itemize}
\tightlist
\item
\texttt{DESeq2} is a package for estimating differential expression
\item
\texttt{DESeq2} needs you to describe the experiment in order to work
\end{itemize}
In this chapter we'll look at how to take our count matrix through \texttt{DESeq2} to estimate differential expression of genes.
\hypertarget{getting-the-count-matrix-and-describing-the-experiment-for-deseq2}{%
\section{Getting the count matrix and describing the experiment for DESeq2}\label{getting-the-count-matrix-and-describing-the-experiment-for-deseq2}}
\hypertarget{the-count-matrix}{%
\subsection{The count matrix}\label{the-count-matrix}}
The object we created in the previous chapter, \texttt{raw\_counts}, is already in the format we need. If you carried straight into this chapter from the last one, then you already have what you need. If not, you can load in the saved version (there's a copy in the sample data) as follows
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{raw_counts <-}\StringTok{ }\KeywordTok{readRDS}\NormalTok{(}\StringTok{"sample_data/raw_counts.RDS"}\NormalTok{)}
\KeywordTok{head}\NormalTok{(raw_counts)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## control_rep1 control_rep2 control_rep3 treatment_rep1
## AT1G01680 670 784 548 1784
## AT1G07160 1104 1266 976 358
## AT1G07920 703 922 198 1373
## AT1G19250 1865 1654 3207 3533
## AT1G32640 1482 1266 1646 1258
## AT1G35210 1186 1416 1458 834
## treatment_rep2 treatment_rep3
## AT1G01680 2558 368
## AT1G07160 1186 4436
## AT1G07920 1167 1726
## AT1G19250 703 2427
## AT1G32640 1690 1864
## AT1G35210 594 2684
\end{verbatim}
\hypertarget{the-grouping-object}{%
\section{The `grouping' object}\label{the-grouping-object}}
As R is a very powerful statistical programming language, it can support analysis of some very complicated experimental designs. \texttt{DESeq} supports this behaviour and as a result we have to describe our experiment in the appropriate manner.
We need to create a \texttt{data.frame} object that states which group each column is in. A \texttt{data.frame} is basically an R analogue of an Excel sheet. We just need to work out the right order of sample types in the matrix column.
Our experiment names are in the column names of the count matrix, we can see that with the \texttt{colnames()} function.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{colnames}\NormalTok{(raw_counts)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "control_rep1" "control_rep2" "control_rep3" "treatment_rep1"
## [5] "treatment_rep2" "treatment_rep3"
\end{verbatim}
The controls are all in columns 1 to 3 and the treatments are in columns 4 to 6. To make the groupings we can just type in the sample types in the appropriate order and put them in a column of a \texttt{data.frame}. That looks like this
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{grouping <-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{sample_type =} \KeywordTok{c}\NormalTok{(}\StringTok{"control"}\NormalTok{, }\StringTok{"control"}\NormalTok{, }\StringTok{"control"}\NormalTok{, }\StringTok{"treatment"}\NormalTok{, }\StringTok{"treatment"}\NormalTok{, }\StringTok{"treatment"}\NormalTok{))}
\NormalTok{grouping}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## sample_type
## 1 control
## 2 control
## 3 control
## 4 treatment
## 5 treatment
## 6 treatment
\end{verbatim}
\hypertarget{running-deseq2-1}{%
\section{Running DESeq2}\label{running-deseq2-1}}
Now we have everything we need to run \texttt{DESeq2}. First, we must load in the library.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(DESeq2)}
\end{Highlighting}
\end{Shaded}
Next, we can prepare the \texttt{DESeqDataSet} object that combines all the information \texttt{DESeq2} needs to work. We run \texttt{DESeqDataSetFromMatrix()} to do this.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{dds <-}\StringTok{ }\KeywordTok{DESeqDataSetFromMatrix}\NormalTok{(}
\DataTypeTok{countData =}\NormalTok{ raw_counts, }
\DataTypeTok{colData =}\NormalTok{ grouping, }
\DataTypeTok{design =} \OperatorTok{~}\StringTok{ }\NormalTok{sample_type)}
\end{Highlighting}
\end{Shaded}
Here we set the arguments
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\texttt{countData} which is the actual data, so gets our \texttt{raw\_counts}
\item
\texttt{colData} which tells the group each data column is in so gets \texttt{grouping}
\item
  \texttt{design} is an R-ish way of describing the experiment design; for a standard experiment like this you use the \texttt{\textasciitilde{}} and the name of the \texttt{grouping} column
\end{enumerate}
Don't worry too much about whether the \texttt{design} argument makes sense at this stage; it's a bit out of scope to discuss the way R expects experimental designs for now. Follow the pattern you see here until you have a really complex design and have motivation to come back to it.
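As a purely hypothetical illustration (the sample data has no \texttt{batch} column, so this is a sketch only, not something to run here), a two-factor design would follow the same pattern:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# sketch only: adjust for a (hypothetical) batch effect}
\CommentTok{# dds <- DESeqDataSetFromMatrix(countData = raw_counts,}
\CommentTok{#                               colData  = grouping,}
\CommentTok{#                               design   = ~ batch + sample_type)}
\end{Highlighting}
\end{Shaded}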
Finally, we can do the \texttt{DESeq} analysis. We have a single function for this and all it needs is our prepared data.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{de_seq_analysed <-}\StringTok{ }\KeywordTok{DESeq}\NormalTok{(dds)}
\end{Highlighting}
\end{Shaded}
And now we can extract the results with the helpful \texttt{results()} function. This needs the \texttt{contrast} to be described: basically the column name and the types to compare.
The types are ordered so that the first mentioned is the measurement of interest (i.e.\ the \texttt{treatment}) and the second is the baseline to which it is compared (here \texttt{control}). If you get the two the wrong way round, your up-regulated genes will look down-regulated and vice versa, so take time to check.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{results_data <-}\StringTok{ }\KeywordTok{results}\NormalTok{(de_seq_analysed, }\DataTypeTok{contrast =} \KeywordTok{c}\NormalTok{(}\StringTok{'sample_type'}\NormalTok{, }\StringTok{'treatment'}\NormalTok{, }\StringTok{'control'}\NormalTok{))}
\KeywordTok{head}\NormalTok{(results_data)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## log2 fold change (MLE): sample_type treatment vs control
## Wald test p-value: sample type treatment vs control
## DataFrame with 6 rows and 6 columns
## baseMean log2FoldChange lfcSE
## <numeric> <numeric> <numeric>
## AT1G01680 1022.39953137642 0.638556501008842 0.752757958605879
## AT1G07160 1522.95786602724 0.354926012092298 0.785708629982376
## AT1G07920 946.524794250111 0.671894996505363 0.742814441576441
## AT1G19250 2230.10488838027 -0.591880977166901 0.603665159459885
## AT1G32640 1539.40321179978 -0.420258753438091 0.537635950705001
## AT1G35210 1384.96737134404 -0.485617346461465 0.690715741846472
## stat pvalue padj
## <numeric> <numeric> <numeric>
## AT1G01680 0.848289272412955 0.396276890458589 0.747206163566895
## AT1G07160 0.45172726701533 0.651465472435186 0.796576241413894
## AT1G07920 0.904526028168531 0.365716539229272 0.747206163566895
## AT1G19250 -0.980478942492676 0.326849759032812 0.747206163566895
## AT1G32640 -0.781679039296026 0.434403223096372 0.755158226458792
## AT1G35210 -0.703063962554663 0.482015889229017 0.755158226458792
\end{verbatim}
We get a lot of information back in this table. We can see in amongst it all the important log fold change estimates and the adjusted p-value. Effectively, our analysis is done; we have our differential expression estimates, though we do need to do more to answer questions of interest. That's what we'll do in the next chapter.
\hypertarget{saving-the-results}{%
\section{Saving the results}\label{saving-the-results}}
As a final step, we can save the results to a CSV file. As in the earlier chapter we can do this with \texttt{write.csv()}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{write.csv}\NormalTok{(results_data, }\StringTok{"sample_data/results.csv"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\hypertarget{next-steps}{%
\chapter{Next Steps}\label{next-steps}}
\hypertarget{about-this-chapter-2}{%
\section{About this chapter}\label{about-this-chapter-2}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Questions
\end{enumerate}
\begin{itemize}
\tightlist
\item
How can I filter out the `significant' genes?
\item
How can I find the functions of these genes?
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Objectives
\end{enumerate}
\begin{itemize}
\tightlist
\item
Filter the genes by \emph{p} value
\item
Find the gene annotations on an external service
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Keypoints
\end{enumerate}
\begin{itemize}
\tightlist
\item
The \emph{p} value we need must be corrected for the large number of genes
\end{itemize}
In this chapter we'll look at how to take our results table and get the significantly differentially expressed genes out of it.
\hypertarget{the-results-data-frame}{%
\section{The results data frame}\label{the-results-data-frame}}
The object we created in the previous chapter, \texttt{results\_data}, is already in the general format we need. I'm going to load a version with different gene names (from Magnaporthe, not Arabidopsis) so we can work on a more familiar genome.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{results_data <-}\StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\StringTok{"sample_data/results_mo.csv"}\NormalTok{)}
\KeywordTok{head}\NormalTok{(results_data)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## gene baseMean log2FoldChange lfcSE stat pvalue
## 1 MGG_00865 1022.3995 0.6385565 0.7527580 0.8482893 0.3962769
## 2 MGG_08134 1522.9579 0.3549260 0.7857086 0.4517273 0.6514655
## 3 MGG_01588 946.5248 0.6718950 0.7428144 0.9045260 0.3657165
## 4 MGG_13806 2230.1049 -0.5918810 0.6036652 -0.9804789 0.3268498
## 5 MGG_06121 1539.4032 -0.4202588 0.5376360 -0.7816790 0.4344032
## 6 MGG_06504 1384.9674 -0.4856173 0.6907157 -0.7030640 0.4820159
## padj
## 1 0.7472062
## 2 0.7965762
## 3 0.7472062
## 4 0.7472062
## 5 0.7551582
## 6 0.7551582
\end{verbatim}
\hypertarget{which-p-value}{%
\subsection{\texorpdfstring{Which \emph{p} value?}{Which p value?}}\label{which-p-value}}
Note that two columns in this data.frame have \emph{p} value information in them - \texttt{pvalue} and \texttt{padj}. Which is the correct one? We need the adjusted \emph{p} value in \texttt{padj}.
The reason for this is that a separate \emph{p} value was calculated in a separate test for each gene. As \emph{p} values have a built-in error expectation (i.e.\ \emph{p} = 0.05 means the test will be wrong 5 percent of the time on average), repeating the test once per gene means that relying on the per-gene \emph{p} values would give us lots of false positives. So we need a \emph{p} value that holds across the whole gene set rather than a per-gene value. Hence \texttt{DESeq} adjusts the \emph{p} value for the whole set of genes.
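To make the idea concrete, here is a tiny illustration using base R's \texttt{p.adjust()} with the Benjamini-Hochberg method. This is not part of the pipeline (\texttt{DESeq} applies an equivalent correction internally) and the numbers are made up:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# toy example: five made-up raw p values}
\NormalTok{raw_p <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\FloatTok{0.001}\NormalTok{, }\FloatTok{0.01}\NormalTok{, }\FloatTok{0.02}\NormalTok{, }\FloatTok{0.04}\NormalTok{, }\FloatTok{0.9}\NormalTok{)}
\KeywordTok{p.adjust}\NormalTok{(raw_p, }\DataTypeTok{method =} \StringTok{"BH"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

Each adjusted value is at least as large as the raw one, which is why filtering on \texttt{padj} is the conservative choice.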
\hypertarget{filtering-rows-with-significant-p-values}{%
\section{\texorpdfstring{Filtering rows with significant \emph{p} values}{Filtering rows with significant p values}}\label{filtering-rows-with-significant-p-values}}
To filter the rows in the data frame we can use \texttt{tidyverse} tools like \texttt{dplyr} (which we study in a separate training course). Let's keep rows from the \texttt{results\_data} data frame with a \texttt{padj} lower than \texttt{0.05}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(dplyr)}
\NormalTok{significant_genes <-}\StringTok{ }\KeywordTok{filter}\NormalTok{(results_data, padj }\OperatorTok{<}\StringTok{ }\FloatTok{0.05}\NormalTok{)}
\NormalTok{significant_genes}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## gene baseMean log2FoldChange lfcSE stat pvalue padj
## 1 MGG_12738 552.4170 -1.2700540 1.1960414 1.061881 0.005 0.030
## 2 MGG_01482 1912.4502 1.1010845 0.8063151 1.365576 0.005 0.004
## 3 MGG_17878 988.8704 0.9095213 0.9142498 0.994828 0.002 0.010
\end{verbatim}
The \texttt{filter()} function works by taking the data frame followed by the column-based condition to filter with.
\hypertarget{filtering-up-and-down-genes}{%
\subsection{Filtering UP and DOWN genes}\label{filtering-up-and-down-genes}}
An elaboration of this is to find the up- or down-regulated genes. To do this we need to build a filter on the \texttt{log2FoldChange} column. As the fold changes are encoded on a log scale, up-regulated genes will have a positive value and down-regulated genes will have a negative value.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{up_genes <-}\StringTok{ }\KeywordTok{filter}\NormalTok{(results_data, padj }\OperatorTok{<}\StringTok{ }\FloatTok{0.05}\NormalTok{, log2FoldChange }\OperatorTok{>}\StringTok{ }\DecValTok{0}\NormalTok{)}
\NormalTok{up_genes}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## gene baseMean log2FoldChange lfcSE stat pvalue padj
## 1 MGG_01482 1912.4502 1.1010845 0.8063151 1.365576 0.005 0.004
## 2 MGG_17878 988.8704 0.9095213 0.9142498 0.994828 0.002 0.010
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{down_genes <-}\StringTok{ }\KeywordTok{filter}\NormalTok{(results_data, padj }\OperatorTok{<}\StringTok{ }\FloatTok{0.05}\NormalTok{, log2FoldChange }\OperatorTok{<}\StringTok{ }\DecValTok{0}\NormalTok{)}
\NormalTok{down_genes}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## gene baseMean log2FoldChange lfcSE stat pvalue padj
## 1 MGG_12738 552.417 -1.270054 1.196041 1.061881 0.005 0.03
\end{verbatim}
Note that if you want to find genes that are two-fold up or down regulated, then you'll need to change the \texttt{log2FoldChange} thresholds to \texttt{1} and \texttt{-1} (as \texttt{log2(2)\ =\ 1} and \texttt{log2(0.5)\ =\ -1}).
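For example, a sketch of that two-fold filter, following the same \texttt{filter()} pattern as above (using \texttt{abs()} to catch both directions at once):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# genes at least two-fold up or down, significant at 0.05}
\NormalTok{two_fold_genes <-}\StringTok{ }\KeywordTok{filter}\NormalTok{(results_data, padj }\OperatorTok{<}\StringTok{ }\FloatTok{0.05}\NormalTok{, }\KeywordTok{abs}\NormalTok{(log2FoldChange) }\OperatorTok{>}\StringTok{ }\DecValTok{1}\NormalTok{)}
\end{Highlighting}
\end{Shaded}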
You can export each of these tables to files with \texttt{write.csv()} as previously.
\hypertarget{finding-gene-annotations}{%
\section{Finding gene annotations}\label{finding-gene-annotations}}
A common question is `which pathways and functional categories do my genes belong to?'. Answering this requires quite an involved process, and doing it entirely in R is out of scope for this `minimal' RNAseq tutorial. Instead of avoiding the question completely, we'll look at how to achieve a basic annotation using web tools, specifically the Ensembl BioMart service.
\hypertarget{biomart}{%
\subsection{BioMart}\label{biomart}}
BioMart is a data warehouse for genomic information that can be queried through a web interface. Not all genome projects provide such a service, but the ones on Ensembl generally do. We'll work with the Magnaporthe one here, available at \url{https://fungi.ensembl.org/Magnaporthe_oryzae/Info/Index}
To access it we need to click \texttt{BioMart} from the top menu, and be patient; it can take a little while to load.
Then we need to follow this procedure to get a gene list annotated
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
From the \texttt{Choose\ Database} drop-down select \texttt{Ensembl\ Fungi\ Genes}
\item
From the \texttt{Choose\ Dataset} drop down select \texttt{Magnaporthe\ oryzae\ genes\ (MG8)}
\item
Select the \texttt{Attributes} page and on the \texttt{External} tab tick \texttt{GO\ Term\ Name}, \texttt{GO\ Term\ Definition} and \texttt{KEGG\ Pathway\ and\ Enzyme\ ID}. These attributes are the things you will retrieve from the BioMart.
\item
Select the \texttt{Filters} page and click on the \texttt{External} tab, then paste the gene IDs of interest into the box or upload a file.
\item
Click \texttt{Results} button at the top
\end{enumerate}
After a wait, the screen should fill with the annotations that you asked for. You can save this to a file using the \texttt{Export} options at the top.
This is all you need to make an annotated gene list.
\hypertarget{further-questions}{%
\section{Further Questions}\label{further-questions}}
Of course, this isn't all you might want to do with your RNAseq data and gene lists. We've achieved our overall goal of getting a minimal RNAseq analysis done. What happens next will be quite different for every experiment. For example, you might want to look at seeing whether a GO Term or enzymatic pathway is enriched. Pretty much everything will be a separate analysis in itself and will require some design and planning. Please feel free to talk to the bioinformatics team when you find yourself at this stage, we'll be extremely happy to work with you!
\bibliography{book.bib,packages.bib}
\end{document}
| {
"alphanum_fraction": 0.7221511777,
"avg_line_length": 43.9764826176,
"ext": "tex",
"hexsha": "1219a923f5ebc2f3ba2f812326ea665de43daa8b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "97c4502086215a1199b561394bc4857a2eac903e",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "TeamMacLean/minimal_quantitative_rnaseq",
"max_forks_repo_path": "docs/Minimal-Quantitative-RNASeq-Pipeline.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "97c4502086215a1199b561394bc4857a2eac903e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "TeamMacLean/minimal_quantitative_rnaseq",
"max_issues_repo_path": "docs/Minimal-Quantitative-RNASeq-Pipeline.tex",
"max_line_length": 558,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "97c4502086215a1199b561394bc4857a2eac903e",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "TeamMacLean/minimal_quantitative_rnaseq",
"max_stars_repo_path": "docs/Minimal-Quantitative-RNASeq-Pipeline.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 12860,
"size": 43009
} |
% \section{201604-1}
% \input{problem/7/201604-1-p.tex} | {
"alphanum_fraction": 0.6785714286,
"avg_line_length": 18.6666666667,
"ext": "tex",
"hexsha": "ef831ae25cbbd757894a42fe03c54a42d3317f42",
"lang": "TeX",
"max_forks_count": 5,
"max_forks_repo_forks_event_max_datetime": "2022-01-28T15:33:04.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-01T06:04:16.000Z",
"max_forks_repo_head_hexsha": "9d432ec2255b170f2bb1e0879e42c93f80a1b21c",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "lxlonlyn/CSP-Project",
"max_forks_repo_path": "problem/7/201604-1.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "9d432ec2255b170f2bb1e0879e42c93f80a1b21c",
"max_issues_repo_issues_event_max_datetime": "2022-02-03T15:32:34.000Z",
"max_issues_repo_issues_event_min_datetime": "2022-01-22T15:33:17.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "lxlonlyn/CSP-Project",
"max_issues_repo_path": "problem/7/201604-1.tex",
"max_line_length": 34,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "9d432ec2255b170f2bb1e0879e42c93f80a1b21c",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "lxlonlyn/CSP-Project",
"max_stars_repo_path": "problem/7/201604-1.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-27T03:58:42.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-01-22T15:34:01.000Z",
"num_tokens": 24,
"size": 56
} |
\documentclass[11pt,reqno]{beamer}
\usepackage[utf8x]{inputenc}
\usetheme{Dresden}
\usecolortheme{beaver}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{xcolor}
\usepackage{hyperref}
\setbeamertemplate{navigation symbols}{}
\title{BGT software tutorial 0}
\subtitle{Installing the Jupyter stack}
\author{Peter Cudmore}
\institute{Systems Biology Lab, The University of Melbourne}
\newcommand{\D}[2]{\frac{\mathrm{d} #1}{\mathrm{d} #2}}
\newcommand{\e}{\mathrm{e}}
\newcommand{\I}{\mathrm{i}}
\renewcommand{\mod}[1]{\left|#1\right|}
\newcommand{\DD}[2]{\frac{\mathrm{d}^2 #1}{\mathrm{d} #2^2}}
\newcommand{\bigO}[1]{\text{O}\left(#1\right)}
\renewcommand{\P}[2]{\frac{\partial #1}{\partial #2}}
\renewcommand{\Re}{\operatorname{Re}}
\renewcommand{\Im}{\operatorname{Im}}
\newcommand{\EX}{\mathbb{E}}
\newcommand{\df}[1]{\mspace{2mu} \mathrm{d}#1}
\newcommand{\reals}{\mathbb{R}}
\newcommand{\complex}{\mathbb{C}}
\newcommand{\conj}[1]{\overline{#1}}
\begin{document}
\hypersetup{urlcolor=blue, linkcolor=blue}
\begin{frame}
\titlepage
\addtocounter{framenumber}{-1}
\end{frame}
\begin{frame}
\tableofcontents[hideallsubsections]
\end{frame}
\section{Goals}
\begin{frame}
\begin{itemize}
\item Install Python 3.6 or 3.7
\item Install Jupyter notebook
\item Install Julia 0.6
\item Make sure Julia talks to Python and Jupyter
\end{itemize}
\end{frame}
\begin{frame}{Download and Install Python}
\emph{We will not be using venv or virtual env}
\begin{itemize}
\item All platforms: Install anaconda
\item On windows: \url{https://www.python.org/downloads/windows/} install 3.6.5 or 3.7
\item On Mac: Install homebrew \url{https://brew.sh/} then Python: \texttt{>brew install python}
\item On Linux: use your package manager to install 3.6 or 3.7 ALONGSIDE the distribution version
\end{itemize}
\end{frame}
\begin{frame}{Make sure Python is in your PATH}
Here \texttt{(PYTHON\_DIR)} is the directory where you installed python (for example
\texttt{C:\slash Program Files\slash Python3.7} or \texttt{/opt/python}).
\vfill
For windows:\\
Following \href{https://helpdeskgeek.com/windows-10/add-windows-path-environment-variable/}{this guide}, add your python folder to the PATH variables.
\vfill
Otherwise add the following line \texttt{\textasciitilde\slash.bashrc} (linux) or \texttt{\textasciitilde\slash.bash\_profile} (osx):\\
\vspace{10pt}
\texttt{export PATH=(PYTHON\_DIR)\slash bin:\$PATH}\\
\vfill
\end{frame}
\begin{frame}{Install Jupyter}
Windows: Using the SAME python interpreter run \texttt{>python -m pip install jupyter}
\vfill
On osx: Using homebrew\\
\texttt{>brew install jupyter}\\
\vfill
(On Ubuntu) Use apt to install\\
\texttt{>sudo apt install jupyter-notebook}\\
\vfill
Test this by running \texttt{>jupyter-notebook}
\end{frame}
\begin{frame}{Install Julia}
Install Julia 0.6.4 for your operating system from:\\
\url{https://julialang.org/downloads/oldreleases.html}\\
\vfill
Make sure to add julia to your PATH variable as you did with python.
\vfill
For Windows 7 users: make sure to install the additional fixes listed.
\end{frame}
\begin{frame}{Install Julia Binding}
Start \texttt{julia}, then install IJulia with
\texttt{julia> Pkg.add("IJulia")}\\
then exit julia with \texttt{julia> exit()}
\vfill
Test by opening
\texttt{jupyter-notebook}\\
and creating a new julia notebook.
\end{frame}
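\begin{frame}[fragile]{Check the stack end-to-end}
A quick sanity check, assuming the steps above all succeeded:
\begin{verbatim}
> julia
julia> using IJulia
julia> notebook()
\end{verbatim}
\vfill
This should open the Jupyter notebook interface in your browser,
with Julia 0.6 available as a kernel.
\end{frame}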
\end{document}
| {
"alphanum_fraction": 0.7397669555,
"avg_line_length": 30.4272727273,
"ext": "tex",
"hexsha": "3e09e691ed6c1683a5e5b5c9997640ef9b6b5804",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a17dad1e4e398e76a7b63410f6f7e9e09232608c",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "peter-cudmore/Bond-Graph-Clinic",
"max_forks_repo_path": "tex/clinic_notes_4.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "a17dad1e4e398e76a7b63410f6f7e9e09232608c",
"max_issues_repo_issues_event_max_datetime": "2018-08-29T02:19:02.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-02-20T04:29:37.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "peter-cudmore/Bond-Graph-Clinic",
"max_issues_repo_path": "tex/clinic_notes_4.tex",
"max_line_length": 150,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "a17dad1e4e398e76a7b63410f6f7e9e09232608c",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "peter-cudmore/Bond-Graph-Clinic",
"max_stars_repo_path": "tex/clinic_notes_4.tex",
"max_stars_repo_stars_event_max_datetime": "2020-07-16T14:46:39.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-07-16T14:46:39.000Z",
"num_tokens": 1090,
"size": 3347
} |
\documentclass[12pt, letterpaper]{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage{times}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{gensymb}
\usepackage{caption}
\renewcommand{\abstractname}{\vspace{-\baselineskip}}
% Document
\begin{document}
\begin{center}
\Large X-ray, Optical, and Radio Solar Event Identification and Prediction using Neural Networks \\
\vspace{.6em}
\large Ted Grosson, Cody Meng, Preston Tracy, Jackson White, Yiwen Zhu
\vspace{.3em}
\\ Advisor: Chris Tunnell | Rice University
\vspace{.5em}
\normalsize
\\ Submitted February 22, 2021
\\
\vspace{1em}
\textbf{Project Pitch} \end{center} \vspace{-2.3em}
\begin{abstract} \normalsize
Solar events, such as flares and prominences, can lead to detrimental effects on Earth systems by disrupting electronics and communication systems, particularly in satellites with limited protection from Earth’s magnetic field. Due to the potential harm of these events, it would be remarkably useful to identify and predict these events when or before they occur to allow rapid preparatory actions to mitigate potential damages as much as possible. We propose applying machine learning techniques to historical solar data from NASA’s Solar Dynamics Observatory in order to train a set of neural networks to identify and predict solar events from live multi-waveband imaging of the Sun. The Solar Dynamics Observatory (SDO) satellite was launched in 2010 and continuously observes the Sun on roughly 10-minute intervals with multiple imaging instruments. In particular, the Atmospheric Imaging Assembly (AIA) aboard SDO images the sun in wavelengths ranging from the UV to optical, providing a constant stream of live, high-resolution solar information \cite{Pesnell2012}. Using the images obtained from AIA, we should be able to easily identify x-ray and optical events and prominences emanating from the Sun. By analyzing historical AIA data prior to and during documented solar events, we may be able to construct a tool that predicts solar events before and as they actually occur.
\end{abstract}
\section*{Core Scientific Question}
In this project, we plan to identify and predict specific solar events from multi-waveband observations of the Sun obtained from the Solar Dynamics Observatory Atmospheric Imaging Assembly. The causes of solar flares and prominences are most often attributed to magnetic fields, but we lack a concrete mechanism by which such events take place \cite{BOB}. Determining whether visible precursors to solar events exist, how early they take place, and how prominently they appear in each waveband, may provide valuable constraints on the theoretical mechanisms by which these solar events are created. Even if we fail to produce a neural network capable of predicting solar events before they occur, this failure may indicate that solar events lack temporal precursors large or coherent enough to appear on the Sun’s surface in x-ray, ultraviolet, and optical wavelength images. Because we are applying similar methods to both identification and prediction of solar events, we can perform at least a cursory analysis comparing the efficacy of our tools applied to both problems, which could provide a point of comparison between the visibility of solar event precursors and the events themselves.
We also hope to determine whether we can precisely identify solar radio events from optical or ultraviolet images of the Sun, as it is unclear to what degree of precision we can identify solar events outside of their principal wavelengths. Should some solar events be recognizable in shorter wavelength images, we may be able to use this information to constrain or classify radio events. For example, if one type of radio event produces an identifiable signature in the optical while another does not, we may be able to use our tools to aid in classifying these events, providing additional clues as to the mechanisms by which either type of event can take place. In addition, if we are able to predict radio events in the optical or ultraviolet, this would provide similar benefits to those described in the previous paragraph: namely, determining how early, prominently, and in what wavelengths precursors to these events can occur.
\section*{Project Objectives}
Our project objectives fall under three overarching steps, which we will discuss in detail within this section.
First, we will attempt to train a neural network to identify solar events from solar images, using historical SDO AIA solar images in 5-6 wavebands between 90 and 300 Angstroms alongside solar event reports obtained from NOAA's Space Weather Prediction Center. The images and solar event reports will be aligned in time, so that the neural network will learn to associate features within the images with specific types of solar events. We will initially focus on two types of events, x-ray events and optical flares.
Next, we will apply this same methodology to radio bursts. Radio emission lies well beyond the low-frequency limit of the bandpasses accessible to the AIA, so we will use the longest wavelengths available to us. We will still train a neural network on similar sets of AIA images alongside simultaneous radio burst reports: if radio bursts carry signature features present in higher-frequency wavebands, our neural network may be able to identify them.
Finally, should the previous steps be successful, we will transition to prediction of solar events rather than identification. Our overall methodology is the same, but we will train our neural networks on solar event data alongside images taken at various time intervals before the solar events occurred. We will experiment with various time lags, training several neural networks on different lags simultaneously, and evaluating what time lags are optimal or effective for certain kinds of events. With an understanding of predictive accuracy as a function of time lag, we can determine whether and when notable surface signatures arise as precursors to x-ray, optical, and radio solar events. The previous two steps will both serve as points of comparison between the efficacy of our identification and prediction networks as well as provide a proof of concept demonstration of how our predictive network may function under ideal conditions, with a time lag of zero.
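As a sketch of how the lagged training windows could be assembled (the \texttt{events} list and its \texttt{start}/\texttt{end} attributes are hypothetical stand-ins for parsed NOAA report entries):
\begin{verbatim}
# Build the time windows whose images get the "event upcoming" label
# when training with a given lag. `events` is a hypothetical list of
# parsed NOAA report entries with datetime start/end attributes.
from datetime import timedelta

def lagged_windows(events, lag_minutes):
    lag = timedelta(minutes=lag_minutes)
    return [(ev.start - lag, ev.end - lag) for ev in events]

# Train one network per lag, e.g. 0, 10, 30, and 60 minutes.
\end{verbatim}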
\section*{Data Description}
\subsection*{AIA Observations}
In this project we will be using two primary datasets: Atmospheric Imaging Assembly (AIA) instrument observations of the sun, from the Solar Dynamics Observatory (SDO), and space weather reports from the National Oceanic and Atmospheric Administration (NOAA).
The AIA takes observations at ten different wavelengths, which are summarized in Table \ref{AIA_wavelengths}, each of which probes different layers of the Sun. The AIA has been observing the Sun since 2010; it observes at 4500 Å once every hour, at 1600 and 1700 Å once every 24 seconds, and at the remaining wavelengths once every 12 seconds. The images are stored as 4096x4096 pixel .fits files, where each pixel holds the flux captured by the corresponding pixel on the AIA CCD. The headers of the .fits files also include relevant information such as the exposure time and the time of the observation. These files can be queried and downloaded from within Python using the sunpy package. Using sunpy we can select images based on their observation time and filter, which will allow us to easily select and download observations that correspond to events in the space weather reports.
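For example, a minimal sunpy query for one waveband over a single event window might look like the following; the time range and cadence here are illustrative only.
\begin{verbatim}
# Minimal AIA query via sunpy's Fido interface. The event window is
# hypothetical; real windows come from the NOAA event reports.
import astropy.units as u
from sunpy.net import Fido, attrs as a

result = Fido.search(
    a.Time("2020-08-19 05:00", "2020-08-19 05:20"),
    a.Instrument("AIA"),
    a.Wavelength(171 * u.angstrom),
    a.Sample(120 * u.second),  # thin the 12 s cadence to limit downloads
)
files = Fido.fetch(result)  # downloads .fits files, returns local paths
\end{verbatim}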
The different wavelengths observed by AIA were chosen to examine specific portions of the Sun’s surface or atmosphere. Each of these wavelengths is centered on a different emission line to examine the features visible at the temperature of that line. The EUV wavelengths observe extremely hot material and are centered on different iron ions, ranging from Fe IX (171 Å) to Fe XXIII (131 Å), observing temperatures from 60,000 K (304 Å) to 20,000,000 K (193 Å) \cite{Lemen2012}. The 1700 Å and 4500 Å wavelengths show continuum images of the Sun in the UV and optical, respectively. The 1600 Å wavelength examines the transition region between the chromosphere and the corona. The 304 Å wavelength also examines the transition region, as well as the chromosphere. The 171 Å wavelength shows the “quiet” (low magnetic activity) corona and coronal loops, and the 193 Å wavelength examines a hotter region of the corona, as well as the hotter material in solar flares. The 211 Å and 335 Å wavelengths both examine the hot, magnetically active regions of the corona. Finally, the 94 Å and 131 Å wavelengths both examine flaring regions, with bandpasses centered at different temperatures \cite{Zell2015}. The temperature coverage provided by these wavelengths allows for a more complete reconstruction of thermal structure than previous missions \cite{AIA_ConceptReport}.
\begin{table}
\centering
\caption*{AIA Filter Summary}
\begin{tabular}{||c| c | c | c ||}
\hline
Wavelength (\AA) & Primary Emission Source & Region of Atmosphere & log(T)\\ [0.5ex]
\hline\hline
4500 & Continuum & Photosphere & 3.7 \\
\hline
1700 & Continuum & Photosphere & 3.7 \\
\hline
1600 & C IV + Continuum & Upper Photosphere / Transition Region & 4.7 \\
\hline
335 & Fe XVI & Flaring Regions & 6.8 \\
\hline
304 & He II & Chromosphere / Transition Region & 5.8 \\
\hline
211 & Fe XIV & Active-Region Corona & 6.3 \\
\hline
193 & Fe XII, XXIV & Corona and Hot Flare Plasma & 6.1, 7.3 \\
\hline
171 & Fe IX & Quiet Corona / Upper Transition Region & 5.0 \\
\hline
131 & Fe VIII, XX, XXIII & Flaring Regions & 5.6,7.0,7.2 \\
\hline
94 & Fe XVIII & Flaring Regions & 6.8 \\
\hline
\end{tabular}
\vspace{0.5em}
\caption{The wavelengths observed by the AIA, along with the primary source of emission at each wavelength, and the approximate location and temperature of the portion of the Sun that each wavelength probes. \cite{AIA_ConceptReport}}
\label{AIA_wavelengths}
\end{table}
\subsection*{NOAA Space Weather Reports}
The second dataset, the space weather reports, is created once per day by the NOAA and can contain 13 different types of space weather events, which are listed in Table \ref{space_weather_events}. These reports extend back to 1997, so we have corresponding reports for all AIA data. A sample report from 2015 is depicted in Figure \ref{swr_sample}. The entire collection of reports amounts to only 18 MB in storage size, so we are able to store all the reports on Google Drive. 79.6\% of the reports since 2010 have at least one observed space weather event, and there are on average 15.0 events per day. There are significant frequency discrepancies between the different types of space weather events. As shown in Figure \ref{swe_freq}, the most common events are X-ray events (XRA), optical flares in H-Alpha (FLA), sweep-frequency radio bursts (RSP), and fixed-frequency radio bursts (RBR). Each of these event types comprises almost a quarter of the dataset.
\begin{figure}
\includegraphics[width=0.8\textwidth]{figures/Space_Weather_Plot.png}
\centering
\caption{Total number of space weather events observed since 2010, grouped by event type (event types are listed in Table \ref{space_weather_events}).}
\label{swe_freq}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{||c | c ||}
\hline
Acronym & Full Name \\ [0.5ex]
\hline\hline
BSL & Bright Surge on Limb \\
\hline
DSF & Filament Disappearance \\
\hline
EPL & Eruptive Prominence on Limb\\
\hline
FIL & Filament\\
\hline
FLA & Optical Flare in H-Alpha\\
\hline
FOR & Forbush Decrease (CR Decrease) \\
\hline
GLE & Ground-Level Event (CR Increase) \\
\hline
LPS & Loop Prominence System\\
\hline
PCA & Polar Cap Absorption\\
\hline
RBR & Fixed-Frequency Radio Burst\\
\hline
RNS & Radio Noise Storm\\
\hline
RSP & Sweep-Frequency Radio Burst\\
\hline
SPY & Spray\\
\hline
XRA & X-ray Event \\
\hline
\end{tabular}
\caption{List of Event Types tracked in space weather reports}
\label{space_weather_events}
\end{center}
\end{table}
\subsection*{Data Storage Concerns}
Every event has a listed start time and end time, among other parameters. We will use these start and end times to identify when different events should be present, and use those time frames to select the images that will make up our neural net training and testing data sets. Each .fits file is about 10 MB in size, so we will need to be selective about how many images we store. The mean durations of the four most common types of events (XRA, FLA, RBR, and RSP) since 2010 are 20, 18.5, 3.75, and 48 minutes respectively. Considering the average duration of each event, the total number of events since the AIA began observations, and the frequency of AIA observations, we would need almost a petabyte of storage to save every image in every wavelength for which a space weather event is occurring. Focusing instead on only 5-6 filters, and using one image per event, we should drop our storage needs down to about one terabyte per event type, which should be a more reasonable target.
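The following back-of-envelope sketch reproduces this estimate from the figures quoted above; all inputs are approximate, and the event count simply assumes the 15 events/day average over roughly a decade of AIA observations.
\begin{verbatim}
# Rough storage estimate from the figures quoted in the text.
EVENTS_PER_DAY = 15.0
DAYS = 11 * 365            # ~2010 onward
MEAN_DURATION_S = 20 * 60  # ~20 min, typical of the common event types
CADENCE_S = 12             # AIA EUV cadence
N_WAVELENGTHS = 10
FILE_MB = 10

images_per_event = MEAN_DURATION_S / CADENCE_S  # ~100 images
total_images = EVENTS_PER_DAY * DAYS * images_per_event * N_WAVELENGTHS
print(f"{total_images * FILE_MB / 1e9:.2f} PB")  # ~0.6 PB
\end{verbatim}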
\section*{Background Information}
Solar flares are highly energetic events of localized increased brightness on the Sun over a time period ranging from milliseconds to over an hour. This brightness can be seen across many wavelengths, including x-ray, optical, and radio, as shown in Figure \ref{flare}. Flares tend to be associated with groups of sunspots (localized regions of cooler material and strong magnetic fields) and are often accompanied by the ejection of charged particles. These particles, as well as high-energy electromagnetic radiation, can affect electrical systems and the Earth’s ionosphere, and have the potential to cause major disruptions \cite{BOB}. The main goal of the Solar Dynamics Observatory is to understand the mechanisms of these events, which have the potential of affecting life on Earth \cite{Pesnell2012}. The precise cause of solar flares remains uncertain, but a leading theory is that energy is suddenly released from the strong magnetic field in sunspots through a process called reconnection. This occurs when a magnetic field loop ``breaks" and ``reconnects" in a more stable path, releasing most of the energy stored in the loop \cite{BOB}. An additional process which appears in our images is granulation, which occurs as a result of convection in the solar atmosphere. Granules tend to span around 700~km and have lifetimes of five to ten minutes \cite{BOB}. Since we are analyzing how the Sun changes over time, the granulation pattern could affect our identifications, as seen in Figure \ref{flare_diff}.
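A difference image of the kind shown in Figure \ref{flare_diff} can be produced with a few lines of sunpy; the file names below are hypothetical, and a real pipeline would first normalize each frame by the exposure time recorded in its header.
\begin{verbatim}
# Minimal difference-imaging sketch (hypothetical file names).
import sunpy.map

before = sunpy.map.Map("aia_0171_045800.fits")
after = sunpy.map.Map("aia_0171_050000.fits")

diff = after.data.astype(float) - before.data.astype(float)
diff_map = sunpy.map.Map(diff, after.meta)  # wrap back into a Map
diff_map.peek()  # quick-look plot of the change between frames
\end{verbatim}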
There have been successful prior efforts to predict solar flares based on changes in the Sun’s magnetic field \cite{Raboonik2016}; however, because solar flare events are associated with changes in the Sun’s magnetic field, it may not be possible to directly predict them through purely visual means. Should our prediction method prove satisfactory, it would provide an unprecedented insight into the nature of solar flares: namely, that they are correlated with the dynamics of the Sun’s surface strongly enough that surface emission alone can be used to detect and predict them.
\section*{Potential Risks}
Although the data for this project is easy to access, the biggest risk is the failure to identify events from the images we have. If the neural networks we produce are not capable of detecting events, then there is little hope that they will be able to predict such events beforehand. There are curated datasets available which could improve the quality of the images in some ways; however, these are only available a week after the images are taken at the earliest, offering little use in trying to predict events in real time \cite{Galvez2019}. In the event that events cannot be detected from the lower-quality images, the curated datasets could still be useful in determining whether flare events are associated with visual changes in the Sun, which is valuable information regardless of whether it can be used to predict flares in real time. Additionally, it is possible that we will need to train our neural networks on more images than expected in order to successfully detect different events. This could pose a significant challenge if we encounter difficulties securing additional storage space or computational power.
\bibliographystyle{unsrt}
\bibliography{bibfile}
\pagebreak
\section*{Appendix}
\begin{figure}[ht]
\includegraphics[width=0.9\textwidth]{figures/swr_sample.png}
\centering
\caption{Sample Space Weather Report text file From June 6, 2015. Along with event types and times, the reports also contain information about the event strengths and locations.}
\label{swr_sample}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=0.9\textwidth]{figures/0819_flare_labeled.png}
\centering
\caption{A small x-ray flare which occurred August 19, 2020 in each AIA wavelength apart from 4500~\AA{}. The flare, circled in red on each image, shows up in all wavelengths to varying degrees.}
\label{flare}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=0.9\textwidth]{figures/0819_flare_diff.png}
\centering
\caption{The same flare as above, after subtracting a previous image from the current one. This process is sometimes called difference imaging, and is useful in seeing how an object changes over time, such as the appearance of a flare. These difference images were taken over a span of two minutes, but varying this time could optimize our detection process. The changing granulation pattern is especially apparent in the 1700~\AA{} image, where the flare is completely drowned out.}
\label{flare_diff}
\end{figure}
\end{document}
| {
"alphanum_fraction": 0.7863852359,
"avg_line_length": 97.3010752688,
"ext": "tex",
"hexsha": "dd64a35ffcf41c7d7c32964d31da646e4f937711",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c326d1efa11885adfe6c9bbf53e8aeaf42dbe2bc",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "PHYS477677/abwoc",
"max_forks_repo_path": "docs/Astro Team 477 Proposal Draft/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c326d1efa11885adfe6c9bbf53e8aeaf42dbe2bc",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "PHYS477677/abwoc",
"max_issues_repo_path": "docs/Astro Team 477 Proposal Draft/main.tex",
"max_line_length": 1528,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "c326d1efa11885adfe6c9bbf53e8aeaf42dbe2bc",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "PHYS477677/Astro-team",
"max_stars_repo_path": "docs/Astro Team 477 Proposal Draft/main.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-27T03:41:23.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-04-27T03:41:23.000Z",
"num_tokens": 4181,
"size": 18098
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Biology
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Earth, biosphere
%% Water, chemical elements
%% Inorganic vs Organic Material
%% Evolution
%% DNA, Mutations, Replication
%%
%% Simple organisms, more complex organisms
%% Animals
%% Human Beings
%%
\section{Biology Concepts}
Human beings may have 100 trillion cells, and there are several hundred distinct
human cell types.
The most basic cells may have a cell wall, chromosomes, a plasma membrane,
fibrils, and ribosomes.
DNA is the blueprint for the life of a cell. It is mostly static, though
mutations may be introduced. DNA contains the instructions for a cell's
structure and function: how the cell runs, reproduces, builds and repairs
itself, and every other function necessary for cell life. Metabolism is the
set of chemical reactions that keep the cell alive.
A protein is a generic term for anything that is made of amino acids. Proteins
are considered the ``cellular machinery''; they are constantly being
synthesized, and play many essential structural and enzymatic roles within the
cell. How does a cell die? One way is for protein synthesis to be interrupted;
protein synthesis uses about 75 percent of a cell's energy. Proteins are
macromolecules, the large molecules that make up cell material.
Bacterial cells can change their patterns of enzymes in order to adapt to
their specific environment.
For a dominant trait to show up, only one trait unit from one of the parents
is needed, while a recessive trait needs two, one from each parent, in order
to prevail; that is the reason why the ratio between occurrences of dominant
traits and recessive traits is 3:1. The same explanation applies to the shape
traits.
keywords: earth, biosphere, inorganic vs organic material,
water, DNA, mutations, replication, simple organisms, more complex organisms,
animals, human beings, life, death, mutations, evolution, blind
watchmaker, mitochondria, flagella, pili, cell walls, cytoplasmic membranes,
ribosomes, cytoplasm.
| {
"alphanum_fraction": 0.7433758587,
"avg_line_length": 40.76,
"ext": "tex",
"hexsha": "b3edd410d01b99723cc0e38265d14e3629372dce",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fa6917a3f28bbfded272953437f7fffcd9b64a69",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "berlinbrown/berlinbrown.github.com",
"max_forks_repo_path": "applets/scala/GameOfLifeCellular/docs/articles/introlife_acm/basic_biology.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fa6917a3f28bbfded272953437f7fffcd9b64a69",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "berlinbrown/berlinbrown.github.com",
"max_issues_repo_path": "applets/scala/GameOfLifeCellular/docs/articles/introlife_acm/basic_biology.tex",
"max_line_length": 145,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "fa6917a3f28bbfded272953437f7fffcd9b64a69",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "berlinbrown/berlinbrown.github.com",
"max_stars_repo_path": "applets/scala/GameOfLifeCellular/docs/articles/introlife_acm/basic_biology.tex",
"max_stars_repo_stars_event_max_datetime": "2015-11-06T00:24:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-11-06T00:24:40.000Z",
"num_tokens": 452,
"size": 2038
} |
\chapter{\padic power series}
\section{Elementary functions}
In many of the following arguments we'll use the next proposition, which allows us to manipulate formal power series knowing only their behaviour in some neighbourhood of $0$.
\begin{prop}
\label{prop:formal-series}
Let $f(X_1, \dots, X_n) \in \C\llbracket X_1, \dots, X_n \rrbracket$ be a power series and let $\epsilon > 0$ such that $f$ is absolutely convergent on $[-\epsilon, \epsilon]^n$ and $f(x_1, \dots, x_n) = 0$ for every $x_i \in [-\epsilon, \epsilon]$. Then $f \equiv 0$, i.e. all terms of $f$ vanishes.
\end{prop}
\begin{proof}
We prove the proposition by induction on $n$.
\begin{itemize}
\item $n=1$: let
\[
f(X) = \sum_{i=0}^{+\infty} a_iX^i.
\]
Obviously $f(0) = a_0 = 0$ so we can write $f(X) = X \cdot(a_1 + a_2X + \dots) =: X \cdot f_1(X)$. Now $f_1(X) \in \C\llbracket X \rrbracket$ vanishes for every $x \in [-\epsilon, \epsilon] \setminus \{0\}$. It is well known from complex analysis that a formal power series in $\C$ defines a holomorphic function where it converges (so, in particular, it's continuous). We then obtain that $f_1$ is continuous so $f_1(0) = 0$, i.e. $a_1 = 0$. Then we can write $f(X) = X^2 \cdot (a_2 + a_3X + \dots) =: X^2 \cdot f_2(X)$, where $f_2(x) = 0$ for every $x \in [-\epsilon, \epsilon] \setminus \{0\}$. Iterating this process we obtain $a_n = 0$ for each $n \in \N$ so $f \equiv 0$.
\item $n>1$: let's assume that the thesis holds for every $i < n$ and let's prove it for $n$. For brevity, let $Y = (X_1, \dots, X_{n-1})$. Since $f$ is absolutely convergent we can write
\[
f(Y, X_n) = \sum_{i=0}^{+\infty} g_i(Y)X_n^i, \qquad g_i(Y) \in \C\llbracket Y \rrbracket.
\]
For $x_n = 0$ we have $f(y, 0) = g_0(y) = 0$ for $y \in [-\epsilon, \epsilon]^{n-1}$. Then, by induction, $g_0 \equiv 0$ so
\[
f(Y, X_n) = X_n \cdot (g_1(Y) + g_2(Y)X_n + \dots) =: X_n \cdot f_1(Y, X_n)
\]
and by hypothesis $f_1(y, x_n) = 0$ for every $y \in [-\epsilon, \epsilon]^{n-1}, x_n \in [-\epsilon, \epsilon] \setminus \{0\}$. Clearly, fixed $y \in [-\epsilon, \epsilon]^{n-1}$, the function $X \mapsto f_1(y, X)$ is a continuous function so $f_1(y, 0) = \lim_{x \to 0} f_1(y, x) = 0$. We have obtained $0 \equiv f_1(Y, 0) = g_1(Y)$ so, by inductive hypothesis, $g_1(Y) \equiv 0$ and we can write
\[
f(Y, X_n) = X_n^2 \cdot (g_2(Y) + g_3(Y)X_n + \dots) =: X_n^2 \cdot f_2(Y, X_n)
\]
where $f_2(y, x_n) = 0$ if $y \in [-\epsilon, \epsilon]^{n-1}, x_n \in [-\epsilon, \epsilon] \setminus \{0\}$. Iterating this process we obtain $g_n(Y) \equiv 0$ for every $n \in \N$, i.e. $f \equiv 0$.
\end{itemize}
\end{proof}
Another important lemma about \padic power series is Dwork's lemma. It expresses an important phenomenon in \padic analysis: if we know $F(X^p)/(F(X)^p)$ then we also know something about $F$. This ratio measures how far $F$ is from commuting with the $p$-power map, which is a very important map in other contexts as well (e.g. the Frobenius morphism for fields of characteristic $p$).
\begin{lemma}[Dwork's lemma]
\label{lemma:dwork}
Let $F(X) \in 1 + X\Qp\ser{X}$. Then $F(X) \in 1 + X\Zp\ser{X}$ if and only if $\tfrac{F(X^p)}{F(X)^p} \in 1 + pX\Zp\ser{X}$.
\end{lemma}
\begin{proof}
If $F(X) \in 1 + X\Zp\ser{X}$ then, since $(a + b)^p \equiv a^p + b^p \mod p$ and $a^p \equiv a \mod p$ if $a \in \Zp$, we have
\[
F(X)^p = F(X^p) + pG(X) \qquad \exists G(X) \in X\Zp\ser{X}.
\]
Then
\[
\frac{F(X^p)}{F(X)^p} = 1 - p\cdot\frac{G(X)}{F(X)^p} \in 1 + pX\Zp\ser{X},
\]
because $F(X)^p \in 1 + X\Zp\ser{X}$ and hence can be inverted.\newline
For the other implication let
$F(X) = \sum a_iX^i$; by hypothesis we know that $\exists G(X) = \sum b_iX^i$ such that $G(X) \in 1 + pX\Zp\ser{X}$ and
\[
F(X^p) = F(X)^p\cdot G(X)
\]
We'll prove by induction that $a_i \in \Zp$. By assumption $F(X) \in 1 + X\Qp\ser{X}$ so $a_0 = 1$. Let's now suppose that $a_i \in \Zp$ for every $i < n$. Looking at the coefficients of $X^n$ on both sides of the above equation we obtain
\begin{gather*}
\text{coefficient of $X^n$ in } \left(\sum_{i=0}^n a_iX^i\right)^p\cdot\left(1 + \sum_{i=1}^n b_iX^i\right) =
\begin{cases}
a_{n/p}, & \text{if $p$ divides $n$;}\\
0, & \text{otherwise;}
\end{cases}.
\end{gather*}
Expanding the expression for the coefficient of $X^n$ on the left and subtracting $a_{n/p}$ (recall that $a_{n/p}^p \equiv a_{n/p} \mod p$) we notice that the resulting expression consists of $pa_n$ added to some terms in $p\Zp$ so we can conclude that $pa_n \in p\Zp$, i.e. $a_n \in \Zp$. (To see why this is true it can be convenient to recall the formula $(x_1 + \dots + x_n)^m = \sum_{i_1 + \dots + i_n = m} \binom{m}{i_1, \dots, i_n} x_1^{i_1}\dots x_n^{i_n}$).
\end{proof}
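As a quick sanity check of the lemma, take $F(X) = \frac{1}{1-X} = 1 + X + X^2 + \dots \in 1 + X\Zp\ser{X}$. Then
\[
\frac{F(X^p)}{F(X)^p} = \frac{(1-X)^p}{1-X^p}
\]
and, since $(1-X)^p \equiv 1 - X^p \mod p$, the numerator equals $1 - X^p + pG(X)$ for some $G(X) \in X\Zp[X]$; hence the quotient is $1 + p\,\tfrac{G(X)}{1-X^p} \in 1 + pX\Zp\ser{X}$, as the lemma predicts.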
We'll also prove here a technical lemma, which we will use to study the \padic logarithm.
\begin{lemma}
\label{exercise:7-p.74}
Let $a$ be a primitive $m$-th root of $1$ in $\Qpa$. Then
\begin{enumerate}[label=(\roman*)]
\item if $m = p^n$ for some $n \in \N$ then $\pabs{a-1} = p^{-1/\phi(p^n)}$;
\item otherwise, $\pabs{a-1} = 1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\Phi_n(X) \in \Zp[X]$ be the $n$-th cyclotomic polynomial. \newline
\textit{(i)} We prove this case by induction on $n$. The case $n=0$ is trivial, since $a = 1$. If $n=1$ then $a$ is a primitive $p$-th root of $1$, i.e. $a^p = 1, a \neq 1$, so $\Phi_p(a) = 0$. By the Eisenstein criterion (\cref{prop:eisenstein}) it is easy to prove that $\Phi_p(X)$ is irreducible over $\Qp$. We then consider
\[
f(X) := \Phi_p(X + 1) = \frac{(X + 1)^p - 1}{X} = X^{p-1} + \binom{p}{p-1} X^{p-2} + \dots + \binom{p}{2}X + p.
\]
Clearly $f(X) \in \Qp[X]$ is irreducible and $f(a - 1) = 0$ so, recalling how we extended $\pabs{\ }$ to $\Qpa$, we have
\[
\pabs{a - 1} = \pabs{p}^{1/(p-1)} = p^{-1/(p-1)}
\]
which concludes the proof of the case $n=1$. Now, suppose we know the thesis holds for $m=p^i, i < n$ and let's prove it also holds for $m=p^n$. If we consider the extension $K = \Qp(a)$ with the usual notation ($A$ is the maximal subring and $M$ is its maximal ideal), it is easy to note that $\pabs{a} = 1$ and
\[
a^{p^n} = 1 \implies a^{p^n} \equiv 1 \mod M
\]
and since $A/M$ is a finite field of characteristic $p$ we obtain
\[
a \equiv 1 \mod M
\]
which means exactly $a = 1 + b$ for some $b \in K, \pabs{b} < 1$. We recall the easy facts
\[
\deg \Phi_{p^n}(X) = \phi(p^n) = p^n - p^{n-1}, \qquad \Phi_{p^n}(X) = \Phi_p \left(X^{p^{n-1}}\right)
\]
which imply $\Phi_{p^n}(1) = \Phi_p(1) = p$. Since $a$ is a primitive $p^n$-th root of $1$, every other primitive $p^n$-th root of $1$ is $a^j$, with $p \nmid j$, so
\[
\Phi_{p^n}(X) = \prod_{1 \leq j < p^n, p \nmid j} (X - a^j).
\]
Evaluating at $X = 1$ we obtain
\[
\pabs{p} = \prod_{1 \leq j < p^n, p \nmid j} \pabs{1 - a^j}.
\]
Using the fact $a \equiv 1 \mod M$ we can see that
\[
\frac{1 - a^j}{1 - a} = 1 + a + \dots + a^{j-1} \equiv j \mod M
\]
and if $p \nmid j$ we obtain
\[
\pabs{\frac{1 - a^j}{1 - a}} = 1
\]
so $\pabs{1 - a^j} = \pabs{1 - a}$, which implies $\pabs{1 - a} = \pabs{p}^{1/\phi(p^n)}$.\newline
\textit{(ii)} First of all let's consider the basic case $p \nmid m$. Then $a - 1$ is a root of the polynomial
\[
f(X) = \frac{(X+1)^m - 1}{X} = X^{m-1} + \dots + m = g_1(X) \dotsm g_r(X)
\]
where $g_i(X) \in \Qp[X]$ is an irreducible factor of $f$, and we can assume that every $g_i(X)$ is monic and has coefficients in $\Zp$. By hypothesis $\pabs{m} = 1$ and if $b_i$ is the constant term of $g_i(X)$ we have
\[
\pabs{b_1b_2\dotsm b_r} = \pabs{m} = 1, b_i \in \Zp \implies \pabs{b_i} = 1 \quad \forall \, i=1,\dots,r.
\]
Since $f(a-1)=0$ there is at least one $g_i(X)$ such that $g_i(a - 1)=0$ so $\lambda_{\Qp}(a - 1) = g_i(X)$. Then
\[
\pabs{a-1}^{\deg g_i(X)} = \pabs{b_i} = 1 \implies \pabs{a-1} = 1.
\]
Now let $m = p^nq$ with $p \nmid q$ (clearly $q \in \N_{>1}$) and suppose the thesis holds for every $m \in \N$ such that $m$ is not a power of $p$ and $p^n \nmid m$. Then, if $a$ is a primitive $m$-th root of $1$, $a^p$ is a primitive $(p^{n-1}q)$-th root of $1$ and, by inductive hypothesis, we know
\[
\pabs{a - 1}\cdot\pabs{a^{p-1} + a^{p-2} + \dots + a + 1} = \pabs{a^p - 1} = 1.
\]
Since $\pabs{a} = 1$ we have
\begin{gather*}
\pabs{a-1} \leq 1, \qquad \pabs{a^{p-1} + \dots + 1} \leq 1 \\
\implies \pabs{a-1} = 1
\end{gather*}
which proves the statement.
\end{proof}
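For instance, for $p = 5$ a primitive $25$-th root of unity $a$ satisfies $\pabs{a - 1} = 5^{-1/\phi(25)} = 5^{-1/20}$, while a primitive cube root of unity (note $5 \nmid 3$) lies at distance exactly $1$ from $1$.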
We recall that in an ultrametric space, like $(\Cp, \pabs{\ })$, a sequence is Cauchy if and only if the difference between adjacent terms tends to $0$, and if the space is also complete, an infinite series converges if and only if its general term tends to $0$ (see \cref{lemma:cauchy-sequence-ultrametric} and \cref{prop:summable_families}). Now we are ready to define analytic functions on $\Cp$ and prove some of their basic properties.
\begin{defn}
A function $f$ is an \emph{analytic function} if
\[
f(X) = \sum_{n=0}^{+\infty} a_nX^n, \qquad a_n \in \Cp.
\]
We can define $f(x)$ for every $x \in \Cp$ such that the series converges, i.e. $\pabs{a_nx^n} \to 0$ as $n \to +\infty$.
\end{defn}
Like in complex analysis, given an analytic function $f$, we can define its \emph{radius of convergence}. Surprisingly, we have the exact same formula as in classical analysis.
\begin{prop}
Let $f(X) = \sum_{n=0}^{+\infty} a_nX^n$ be an analytic function. We can define its radius of convergence as
\[
r := \frac{1}{\limsup \pabs{a_n}^{1/n}},
\]
with the usual meaning: $f$ converges if $\pabs{x} < r$ and diverges if $\pabs{x} > r$.
\end{prop}
\begin{proof}
We recall the definition of $\limsup$: $1/r$ is the least real number such that for any $C > 1/r$ there are only finitely many $\pabs{a_n}^{1/n} > C$. \newline
Let's first consider the case $\pabs{x} < r$: we can write $\pabs{x} = (1 - \epsilon)r$ for some $\epsilon > 0$. We have
\[
\pabs{a_nx^n} = \left(r\pabs{a_n}^{1/n}\right)^n\cdot (1 - \epsilon)^n
\]
and, if $n$ is big enough, by definition of $r$ we have
\[
\pabs{a_n}^{1/n} \leq \frac{1}{r - \frac{1}{2}\epsilon r}.
\]
Then
\[
\lim_{n \to +\infty} \pabs{a_nx^n} \leq \lim_{n \to +\infty} \left( \frac{(1 - \epsilon)r}{(1 - \frac{1}{2}\epsilon)r} \right)^n = \lim_{n \to +\infty} \left( \frac{1 - \epsilon}{1 - \frac{1}{2}\epsilon} \right)^n = 0,
\]
which gives us the desired convergence. \newline
Let's now prove that if $\pabs{x} > r$ (and $r < +\infty$) the series diverges. Let's choose an element $x$ with $\pabs{x} > r$ and an $\epsilon > 0$ such that $\pabs{x} \geq (1 + \epsilon)r$. By definition of $\limsup$ we can find a subsequence $(a_{n_k})_k$ such that $\pabs{a_{n_k}}^{1/n_k} \geq 1/(r + \tfrac{1}{2}\epsilon r)$. Then
\[
\lim_{k \to +\infty} \pabs{a_{n_k}x^{n_k}} \geq \lim_{k \to +\infty} \left( \frac{1 + \epsilon}{1 +\frac{1}{2} \epsilon} \right) ^ {n_k} = +\infty,
\]
which implies that $f$ cannot converge. \newline
Finally, if $r = +\infty$, i.e. $\lim_{n \to +\infty} \pabs{a_n}^{1/n} = 0$, then given an element $x \in \Cp^\times$ ($x = 0$ is trivial) we have that, if $n$ is big enough, $\pabs{a_n} \leq 1/(2^n\pabs{x}^n)$ so
\[
\lim_{n \to +\infty} \pabs{a_nx^n} \leq \lim_{n \to +\infty} 2^{-n} = 0
\]
and $f$ converges everywhere.
\end{proof}
This proposition tells us nothing about the case $\pabs{x} = r$. In classical analysis there isn't a simple answer: for example the well known function
\[
\log(1 + X) = \sum_{n=1}^{+\infty} (-1)^{n+1} \frac{X^n}{n}
\]
has radius of convergence $r = 1$ on $\C$. When $\abs{x} = 1$ this series can diverge (for example if $x = -1$ we obtain the divergent series $- \sum 1/n$) or converge (if $x = 1$ we obtain $\sum (-1)^{n+1}/n$, which converges by Leibniz criterion). This happens because, over $\R$, there are conditionally convergent series that aren't absolutely convergent. In \padic analysis this cannot happen because convergence only depends on $\pabs{x}$: a given analytic function behaves exactly in the same way for every $\pabs{x} = r$. We will study more deeply this formal series in $\Cp$ when we'll talk about \padic logarithm.\newline
Let's prove two basic facts about analytic functions. For brevity we'll adopt the notation $D_a(r) := B_{\leq r}(a)$ and $D_a(r^-) := B_{<r}(a)$, where we consider the balls in $\Cp$. We'll also omit the subscript $a$ if $a=0$, for example $D(r) = D_0(r) = B_{\leq r}(0)$.
\begin{prop}
\label{prop:convergence-power-series-Zp}
Every $f(X) \in \Zp \llbracket X \rrbracket$ converges in $D(1^-)$.
\end{prop}
\begin{proof}
Let $f(X) = \sum_{n=0}^{+\infty} a_nX^n \in \Zp \llbracket X \rrbracket$ and let $x \in D(1^-)$. Then
\begin{gather*}
\pabs{x} < 1, \pabs{a_n} \leq 1 \,\forall\, n\in\N \\
\implies \lim_{n \to +\infty} \pabs{a_nx^n} \leq \lim_{n \to +\infty} \pabs{x}^n = 0. \qedhere
\end{gather*}
\end{proof}
\begin{prop}
\label{prop:continuity-analitic-function}
Every $f(X) = \sum_{n=0}^{+\infty} a_nX^n \in \Cp \llbracket X \rrbracket$ which converges in a disc $D = D(r)$, or $D(r^-)$, is continuous on $D$.
\end{prop}
\begin{proof}
Let's first prove continuity at $0$. Let $x \in D$ be such that $\pabs{x} < \delta < r$ ($\delta > 0$ will be chosen later); then, by continuity of the absolute value, we have
\[
\pabs{f(x) - f(0)} = \pabs{\sum_{n=1}^{+\infty} a_nx^n} \leq \max_{n \in \N^{\times}} \,\pabs{a_nx^n} \leq \max_{ n \in \N^{\times}} \, \left( \pabs{a_n}\cdot\delta^n\right).
\]
Clearly, since $f$ converges on $D$, we must have $1/r' > \limsup \pabs{a_n}^{1/n}$ where $\delta < r' < r$ so, for a large enough $N$, $\pabs{a_n} < r'^{-n}$ if $n > N$. Let's introduce
\[
C(\delta) := \max_{1 \leq n \leq N}\,\left(\pabs{a_n} \cdot \delta^n \right);
\]
it's obvious that $C(\delta) \to 0^+$ as $\delta \to 0^+$. Instead, if $n > N$, we have
\[
\pabs{a_n}\cdot\delta^n \leq \left( \frac{\delta}{r'}\right)^n \leq \left( \frac{\delta}{r'}\right)^N,
\]
since $\delta/r' < 1$.
Then
\[
\pabs{f(x) - f(0)} \leq \max\left\{C(\delta), \left(\frac{\delta}{r'}\right)^N\right\}
\]
and we can make the right-hand side as small as we want by choosing a smaller $\delta$. This proves continuity at $0$. \newline
Let's now prove continuity at $x \in D$, $x \neq 0$: consider $y \in D$ such that $\pabs{x - y} < \delta$, where $\delta < \pabs{x}$ will be chosen later, as before. Then, by the isosceles triangle principle, $\pabs{x} = \pabs{y}$. We have
\begin{gather*}
\pabs{f(x) - f(y)} = \pabs{\sum_{n=1}^{+\infty} (a_nx^n - a_ny^n)} \leq \max_{n \in \N^{\times}}\,\left(\pabs{a_n}\cdot\pabs{x^n - y^n}\right) \leq \\
\leq \max_{n \in \N^{\times}}\,\left(\pabs{a_n}\cdot\pabs{(x - y)(x^{n-1} + x^{n-2}y + \dots + xy^{n-2} + y^{n-1})} \right)
\end{gather*}
but $\pabs{x^{n-1} + x^{n-2}y + \dots + xy^{n-2} + y^{n-1}} \leq \max_{1 \leq i \leq n}\, \pabs{x^{n-i}y^{i-1}} = \pabs{x}^{n-1}$ hence
\[
\pabs{f(x) - f(y)} \leq \max_{n \in \N^{\times}}\,\left(\pabs{x - y}\cdot\pabs{a_n}\pabs{x}^{n-1} \right) < \frac{\delta}{\pabs{x}} \cdot \max_{n \in \N^{\times}}\,\left(\pabs{a_n}\cdot\pabs{x}^n \right).
\]
We know that $\lim_{n \to +\infty} \pabs{a_n}\pabs{x}^n = 0$ so as $\delta \to 0^+$ we have $\pabs{f(x) - f(y)} \to 0$, which proves the statement.
\end{proof}
\begin{defn}
The (partial) function $\log_p(1 + X)\colon \Cp \to \Cp$ defined by
\[
\log_p(1 + x) := \sum_{n=1}^{+\infty} (-1)^{n+1} \frac{x^n}{n}
\]
is the \emph{\padic logarithm}.
\end{defn}
\begin{prop}
The function $\log_p(1 + X)$ converges on $D(1^-)$ and diverges elsewhere.
\end{prop}
\begin{proof}
It's immediate to verify that the series converges if $\pabs{x} < 1$ and diverges if $\pabs{x} \geq 1$. In fact $\pabs{a_n} = p^{\ord n}$ so $\lim_{n \to +\infty} \pabs{a_n}^{1/n} = \lim_{n \to +\infty} p^{(\ord n)/n} = 1$ and we obtain the desired radius of convergence. Lastly, if $\pabs{x} = 1$, we have $\pabs{a_nx^n} = p^{\ord n} \geq 1$ so the series diverges.
\end{proof}
From now on, unless otherwise specified, we'll use $\log_p$ meaning the \padic logarithm we have just defined. Let's now prove the basic property of logarithms, which also holds in \padic environment.
\begin{prop}
\label{prop:padic-logarithm-product-sum}
The logarithm of a product is the sum of the logarithms. More precisely, if $x, y \in D(1^-)$ then $\log_p \left[ (1 + x)(1+y)\right] = \log_p(1 + x) + \log_p(1 + y)$.
\end{prop}
\begin{proof}
First of all let's observe that $x, y \in D(1^-) \implies x + y + xy \in D(1^-)$, so we can compute the logarithms. By definition
\[
\log_p\left[(1 + x)(1 + y)\right] = \sum_{n=1}^{+\infty} (-1)^{n+1} \frac{(x + y + xy)^n}{n}.
\]
If we work in $\R$ with the usual metric we already know that $\log\left[ (1+x)(1+y)\right] = \log(1 + x) + \log(1+y)$ and, using the Taylor expansion of $\log$, we have
\[
\sum_{n = 1}^{+\infty} (-1)^{n+1} \frac{x^n}{n} + \sum_{n = 1}^{+\infty} (-1)^{n+1} \frac{y^n}{n} = \sum_{n = 1}^{+\infty} (-1)^{n+1} \frac{(x + y + xy)^n}{n}
\]
for every $x, y \in \left[-\tfrac{1}{2}, \tfrac{1}{2}\right]$. Thanks to \cref{prop:formal-series} we infer that this relation also holds in the ring of formal power series in two variables $\Q\llbracket X,Y \rrbracket$. Then, using the fact that if a series converges in $\Cp$ its terms can be rearranged in any order without changing the sum, we can write
\begin{gather*}
\log_p\left[(1 + x)(1 + y)\right] = \sum_{n = 1}^{+\infty} (-1)^{n+1} \frac{(x + y + xy)^n}{n} =\\
= \sum_{n = 1}^{+\infty} (-1)^{n+1} \frac{x^n}{n} + \sum_{n = 1}^{+\infty} (-1)^{n+1} \frac{y^n}{n} = \log_p(1 + x) + \log_p(1 + y)
\end{gather*}
which concludes the proof.
\end{proof}
\begin{corollary}
\label{corollary:log-root-of-1}
If $1 + x \in \Cp$ is a root of $1$ and $\pabs{x} < 1$, then $\log_p(1 + x) = 0$. In particular if $1+x$ is a $p^m$-th root of $1$ then $\log_p(1 + x) = 0$.
\end{corollary}
\begin{proof}
Let's first observe that we can actually compute the logarithm of $1 + x$ since by hypothesis $\pabs{x} < 1$ (if $1+x$ is a $p^m$-th root of $1$ then automatically $\pabs{x} < 1$ by \cref{exercise:7-p.74}). Now we have
\[
k\cdot\log_p(1 + x) = \log_p\left[(1 + x)^k\right] = \log_p(1) = 0,
\]
which concludes the proof.
\end{proof}
We have obtained a function, defined on a particular disc of $\Cp$, using the Taylor expansion of the classical $\log(1 + X)$. Now we would like to define the exponential function, beginning from the classical $\exp(x) = \sum_{n=0}^{+\infty} x^n/n!$, and study its relation with the logarithm.
\begin{defn}
The (partial) function $\exp_p(X)\colon \Cp \to \Cp$ defined by
\[
\exp_p(x) := \sum_{n=0}^{+\infty} \frac{x^n}{n!}
\]
is the \emph{\padic exponential}.
\end{defn}
Looking at this series we immediately see that, unlike in the classical case where the $n!$ in the denominator makes sure the series converges for every $x \in \C$, there can be some problems. In fact, if $n!$ is divisible by a high power of $p$, its reciprocal will have a large absolute value. More precisely, we can compute exactly $\pabs{1/n!} = p^{\ord(n!)}$.
\begin{lemma}
\label{exercise:14-p.7}
Given $n \in \N$,
\[
\ord(n!) = \frac{n-S_n}{p-1}
\]
where $S_n$ is the sum of the digits of $n$ in base $p$.
\end{lemma}
\begin{proof}
Let's write $n$ in base $p$:
\[
n = a_0 + a_1p + \dots + a_rp^r, \qquad a_i \in \{0, \dots, p-1\},\, a_r \neq 0.
\]
Then $S_n = a_0 + a_1 + \dots + a_n$. By definition, $\ord(n!)$ is the maximum $t$ such that $p^t \mid (n!)$. We can use this little formula to compute it:
\[
\ord(n!) = \sum_{k=1}^{+\infty} \left[\frac{n}{p^k} \right] = \sum_{k=1}^r \left[\frac{n}{p^k} \right]
\]
where $[x]$ is the integer part of $x \in \R$, i.e. the only integer such that $[x] \leq x < [x]+1$. Using the representation of $n$ in base $p$ we have that $\left[n / p^k \right] = 0$ if $k > r$ and, otherwise,
\[
\left[\frac{n}{p^k} \right] = \frac{n - a_0 - \dots - a_{k-1}p^{k-1}}{p^k}
\]
so if we add them together we obtain
\[
\sum_{k=1}^r \left[\frac{n}{p^k} \right] = \sum_{k=1}^r \frac{n - \sum_{j=0}^{k-1} a_jp^j}{p^k}.
\]
With a little bit of computation, recalling that $1 + p + \dots + p^{k-1} = \frac{p^k - 1}{p-1}$, we obtain the desired formula.
\end{proof}
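As a quick numerical check, take $p = 3$ and $n = 10$: in base $3$ we have $10 = (101)_3$, so $S_{10} = 2$ and
\[
\ord(10!) = \frac{10 - 2}{3 - 1} = 4,
\]
in agreement with the direct count $\left[\tfrac{10}{3}\right] + \left[\tfrac{10}{9}\right] = 3 + 1 = 4$.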
\begin{prop}
The function $\exp_p(X)$ converges on $D(r_p^-)$ and diverges elsewhere, where $r_p := p^{-1/(p-1)}$.
\end{prop}
\begin{proof}
Using \cref{exercise:14-p.7} we obtain
\[
\pabs{1/n!} = p^{\frac{n - S_n}{p - 1} }
\]
and, recalling the formula for the radius of convergence $r = 1/(\limsup \pabs{a_n}^{1/n})$, we can write
\[
\ord r = -\ord \left( \limsup p^{-(\ord a_n)/n} \right) = -\ord \left( p^{-\liminf (\ord a_n)/n} \right) = \liminf \left( \frac{\ord a_n}{n} \right)
\]
so, in our case where $a_n = 1/n!$, we obtain
\[
\ord r = \liminf \left(-\frac{n - S_n}{n(p-1)} \right).
\]
We can use the upper bound $S_n \leq (p-1)(1 + \log_p n)$ (the base-$p$ expansion of $n$ has at most $1 + \log_p n$ digits, each at most $p-1$), which gives $S_n/n \to 0$ and hence
\[
\lim_{n \to +\infty} \frac{S_n - n}{n(p-1)} = -\frac{1}{p-1},
\]
so the exponential series $\sum_{n=0}^{+\infty} x^n/n!$ converges if $\pabs{x} < p^{-1/(p-1)} = r_p$ and diverges if $\pabs{x} > p^{-1/(p-1)} = r_p$. If $\pabs{x} = r_p$, i.e. $\ord x = 1/(p-1)$, we have
\[
\ord (a_nx^n) = -\frac{n-S_n}{p-1} + \frac{n}{p-1} = \frac{S_n}{p-1}
\]
and, choosing $n = p^m$ so $S_n = 1$, we have $\pabs{a_{p^m}x^{p^m}} = p^{-1/(p-1)} > 0$; we have found a subsequence of $(\pabs{a_nx^n})_n$ which does not converge to zero so we conclude that if $\pabs{x} =r_p$ the exponential series diverges.
\end{proof}
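Concretely, $\exp_p(x)$ converges exactly when $\ord(x) > \tfrac{1}{p-1}$; since elements of $\Qp$ have integer valuation, for odd $p$ this means $x \in p\Zp$, while for $p = 2$ it forces $\ord(x) \geq 2$, i.e. $x \in 4\Z_2$.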
We immediately note that $D(r_p^-) \subsetneq D(1^-)$, i.e. $\exp_p$ converges in a smaller disc than $\log_p$. We now prove that, like in the classical case, the \padic exponential transforms sums into products.
\begin{prop}
If $x, y \in D(r_p^-)$ then $\exp_p(x + y) = \exp_p(x) \cdot \exp_p(y)$.
\end{prop}
\begin{proof}
Let's first observe that $x, y \in D(r_p^-) \implies x + y \in D(r_p^-)$ so we can compute $\exp_p(x + y)$. The rest of the proof is completely analogue to the proof of \cref{prop:padic-logarithm-product-sum}, using the fact that $\exp(x + y) = \exp(x) \cdot \exp(y)$ if $x, y \in \R$ (which then can be translated to a relation between power series by \cref{prop:formal-series}).
\end{proof}
Finally we have all the tools we need to prove the relation between \padic exponential and logarithm.
\begin{prop}
\label{prop:exp-and-log-inverse}
The functions $\log_p$, defined by $x \mapsto \log_p(1 + (x-1))$, and $\exp_p$ give mutually inverse isomorphisms between the multiplicative group $(D_1(r_p^-), \cdot)$ and the additive group $(D(r_p^-), +)$.
\end{prop}
\begin{proof}
First of all let's observe that $\exp_p\colon D(r_p^-) \to D_1(r_p^-)$ and $\log_p\colon D_1(r_p^-) \to D(r_p^-)$ so that the proposition actually makes sense. To prove that $\exp_p(x) \in 1 + D(r_p^-) \subset 1 + D(1^-)$ let's note that
\[
x \in D(r_p^-) \implies \ord\left(\frac{x^n}{n!}\right) = n\cdot\ord(x) - \ord(n!) > \frac{n}{p-1} - \frac{n - S_n}{p-1} = \frac{S_n}{p-1} \geq \frac{1}{p-1}
\]
so we have
\begin{gather*}
\ord(\exp_p(x) - 1) = \ord\left( \sum_{n=1}^{+\infty} \frac{x^n}{n!} \right) \geq \min_{n \geq 1} \left\{\ord\left(\frac{x^n}{n!}\right) \right\} > \frac{1}{p-1} \\
\implies \exp_p(x) \in 1 + D(r_p^-).
\end{gather*}
Instead, to prove that $\log_p(1 + x) \in D(r_p^-)$ if $x \in D(r_p^-)$ let's observe that
\[
\ord\left(\frac{x^n}{n}\right) - \frac{1}{p-1} > \frac{n}{p-1} - \ord(n) - \frac{1}{p-1} = \frac{n-1}{p-1} - \ord(n) =: f(n).
\]
We claim that $f$ has its minima at $n=1$ and $n=p$, where it's zero. To see why this is true let's first observe that we can just consider the case where $n = p^k$ for $k \in \N$ since if $n' = p^km$ with $p \nmid m$ we have $f(n') \geq f(n)$. It is then an easy calculation to verify that $f(p^{k+1}) \geq f(p^k)$ for $k \in \N$. Thus we have
\[
\ord(\log_p(1 + x)) = \ord\left(\sum_{n=1}^{+\infty} (-1)^{n+1}\frac{x^n}{n}\right) \geq \min_{n > 0} \left\{ \ord\left(\frac{x^n}{n}\right) \right\} > \frac{1}{p-1}
\]
which means precisely $\log_p(1 + x) \in D(r_p^-)$. \newline
We have already proved in some previous propositions that $\exp_p\colon (D(r_p^-),+) \to (D_1(r_p^-), \cdot)$ and $\log_p\colon (D_1(r_p^-), \cdot) \to (D(r_p^-), +)$ are group morphisms so now we have only to prove that they are mutually inverse.\newline
To see that $\log_p \circ \exp_p\colon D(r_p^-) \to D(r_p^-)$ is the identity function we compute
\[
\log_p(\exp_p(x)) = \sum_{n=1}^{+\infty} (-1)^{n+1}\frac{(\exp_p(x) - 1)^n}{n} = \sum_{n=1}^{+\infty} (-1)^{n+1}\frac{\left(\sum_{m=1}^{+\infty} \frac{x^m}{m!}\right)^n}{n}.
\]
Since if $x \in \R$ we have $\log(\exp(x)) = x$ we infer, by \cref{prop:formal-series}, that the following formal identity holds in $\Q\llbracket X\rrbracket$:
\[
\sum_{n=1}^{+\infty} (-1)^{n+1}\frac{\left(\sum_{m=1}^{+\infty} \frac{X^m}{m!}\right)^n}{n} = X
\]
which implies $\log_p(\exp_p(x)) = x$ for $x \in D(r_p^-)$. The same exact reasoning can be also used to prove $\exp_p(\log_p(1 + x)) = 1 + x$ for $x \in D(r_p^-)$.
\end{proof}
This proposition implies, in particular, that $\log_p$ is injective on $D_1(r_p^-)$. It is easy to see that this is the biggest disc where this is true: in fact if $\zeta \in \Cp$ is a primitive $p$-th root of $1$ then, by \cref{exercise:7-p.74}, $\pabs{\zeta - 1} = p^{-1/(p-1)} = r_p$ and $\log_p(\zeta) = 0 = \log_p(1)$.
\begin{defn}
The (partial) functions $\sin_p\colon \Cp \to \Cp$ and $\cos_p\colon \Cp \to \Cp$ defined by
\begin{gather*}
\sin_p(x) := \sum_{n=0}^{+\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!}\\
\cos_p(x) := \sum_{n=0}^{+\infty} (-1)^n \frac{x^{2n}}{(2n)!}
\end{gather*}
are the \emph{\padic sine} and the \emph{\padic cosine}.
\end{defn}
It's easy to prove that $\sin_p$ and $\cos_p$ are defined on $D(r_p^-)$: their terms are, up to sign, subsequences of the terms of the exponential series, so the same convergence estimate applies.
Another important function in classical analysis is the binomial expansion
\[
B_a(x) = \sum_{n=0}^{+\infty} \binom{a}{n} x^n
\]
where $x, a \in \C$ and we used the generalized binomial coefficient defined by:
\begin{gather*}
\binom{a}{k} :=
\begin{cases*}
1, & \text{if $k = 0$} \\
\frac{a(a-1)\dots (a - k + 1)}{k!}, & \text{otherwise} \\
\end{cases*}.
\end{gather*}
This is exactly the Maclaurin series of $f(x) = (1 + x)^a$. Using the ratio test it can be proved that for any $a \in \C$ this series converges if $\abs{x} < 1$ and, unless $a \in \N$, diverges if $\abs{x} > 1$. Its behaviour when $\abs{x} = 1$ is a little more complicated and depends on the value of $a$. We'll now define an analogous function in the \padic setting.
\begin{defn}
Fixed $a \in \Cp$, the (partial) function $B_{a, p}(X)\colon \Cp \to \Cp$ defined by
\[
B_{a,p}(X) := \sum_{n=0}^{+\infty} \binom{a}{n}X^n = 1 + \sum_{n=1}^{+\infty} \frac{a(a-1)\dots (a - n + 1)}{n!}X^n
\]
is the \emph{\padic binomial expansion}.
\end{defn}
We'll now study where it converges (this will be more complicated than for the previous functions, since this is actually the first series with coefficients in $\Cp$, and not in $\Q$).
\begin{prop}
\label{prop:convergence-binomial}
If $\pabs{a} > 1$ then the region of convergence of $B_{a, p}(X)$ is $D((r_p/\pabs{a})^-)$. Instead, if $\pabs{a} \leq 1$ the binomial expansion surely converges on $D(r_p^-)$ (although the region of convergence can be bigger). Finally, if $a \in \Zp$ then $B_{a, p}(X) \in \Zp\llbracket X \rrbracket$ so it surely converges on $D(1^-)$.
\end{prop}
\begin{proof}
Let's suppose $\pabs{a} > 1$. Then, by the isosceles triangle principle, if $i \in \Z$ then $\pabs{a - i} = \pabs{a}$ and we obtain that the $n$-th term of the series has norm $\pabs{ax}^n/\pabs{n!}$. Thus, with a little computation, we obtain that the radius of convergence is $r = p^{-1/(p-1)}/\pabs{a} = r_p/\pabs{a}$. Similarly to the exponential case it's easy to prove that the region of convergence is $D((r_p/\pabs{a})^-)$.\newline
If $\pabs{a} \leq 1$ it is more difficult to find the exact region of convergence; anyway if $i \in \Z$ we have $\pabs{a - i} \leq \max\{\pabs{a}, \pabs{i}\} \leq 1$ so $\pabs{\binom{a}{n}x^n} \leq \pabs{x^n/n!}$. Then $B_{a,p}(X)$ surely converges on $D(r_p^-)$.\newline
To prove that if $a \in \Zp$ then $B_{a,p}(X) \in \Zp\llbracket X \rrbracket$ we just need to show that $\binom{a}{n} \in \Zp$ for every $n \in \N$ (we already know $\binom{a}{n} \in \Qp$). Let's fix $n$ and choose $a_0 \in \Z$ such that $a_0 > n$ and $\ord(a - a_0) > N$, where $N$ will be chosen later (to choose $a_0$ we can just truncate the \padic expansion of $a \in \Zp$ at some index greater than $N$). Now $\binom{a_0}{n} \in \Z \subset \Zp$ and it suffices to show that $\pabs{\binom{a_0}{n} - \binom{a}{n}} \leq 1$ for a suitable $N$ (then we can conclude using the ultrametric inequality). This easily follows from the continuity of the polynomial $X(X-1)\dots(X - n + 1)$ (special case of \cref{prop:continuity-analitic-function}). Then $B_{a, p}(X) \in \Zp\llbracket X \rrbracket$ if $a \in \Zp$ so, by \cref{prop:convergence-power-series-Zp}, it converges in $D(1^-)$.
\end{proof}
We can now prove the main property of the binomial expansion, and justify the shorthand $B_{a,p}(X) = (1 + X)^a$, at least for $a \in \Q$.
\begin{prop}
If $a \in \Q^\times$ and $x \in \Cp$ is in the region of convergence of $B_{a,p}(X)$, then $\left[B_{a, p}(x)\right]^{1/a} = 1 + x$.
\end{prop}
\begin{proof}
Let's first consider $a = 1/m$ with $m \in \Z^\times$. The idea behind the proof is the usual one: if $x \in \R$ and $\abs{x} < 1$ we have $B_{1/m}(x) = (1 + x)^{1/m}$ so $B_{1/m}(x)^m = 1 + x$ which, by \cref{prop:formal-series}, gives us the formal identity between the two power series in $\Q\llbracket X \rrbracket$ (observe that $m < 0$ doesn't create problems since $B_{1/m}(X)$ is an invertible element of $\Q\llbracket X \rrbracket$) that we then translate into an equality between \padic analytic functions (observe the trivial fact $a \in \Q \implies B_{a, p}(X) \in \Q\llbracket X \rrbracket$). We must pay attention only to the last step, i.e. we can substitute only $x$ in the region of convergence of $B_{1/m, p}(X)$ so, for example, if $p \mid m$ we can use $x \in D((r_p\pabs{m})^-)$ and if $p \nmid m$ we can choose $x \in D(r_p^-)$. We have proved that $B_{1/m, p}(x)^m = 1 + x$ for every $x$ where $B_{1/m, p}(X)$ converges. \newline
Now let $a = n/m$ with $n, m \in \Z^\times$. It is easy to prove, using the same technique as before, that $B_{n/m, p}(X) = B_{1/m, p}(X) ^ n$. Then we can write
\[
B_{n/m, p}(X)^{m/n} = B_{1/m, p}(X)^m = 1 + X,
\]
which proves the thesis.
\end{proof}
We can use the \padic binomial expansion to study an interesting example of how the same convergent series in $(\Q, \abs{\ })$ and in $(\Qp, \pabs{\ })$ can have different sums.
\begin{example}
Let's consider the following power series:
\begin{gather*}
B_{1/2}\left(\frac{7}{9}\right) = \sum_{n=0}^{+\infty} \binom{1/2}{n} \left(\frac{7}{9}\right)^n \qquad \in \Q\llbracket X \rrbracket, \\
B_{1/2, 7}\left(\frac{7}{9}\right) = \sum_{n=0}^{+\infty} \binom{1/2}{n} \left(\frac{7}{9}\right)^n \qquad \in \Q_7 \llbracket X \rrbracket.
\end{gather*}
They are exactly the same power series but they converge to different numbers, both of which are of course square roots of $\tfrac{16}{9}$ (clearly its square roots are the same both in $\Q$ and in $\Q_7$). In the first case, working in $(\Q, \abs{\ })$, we have
\[
B_{1/2}\left(\frac{7}{9}\right) = \left(1 + \frac{7}{9}\right)^{1/2} = \frac{4}{3} > 0.
\]
Instead, in the second case, we have
\[
B_{1/2, 7}\left(\frac{7}{9}\right) = \left(1 + \frac{7}{9}\right)^{1/2} = -\frac{4}{3} < 0.
\]
In fact, $\textrm{ord}_7\left(\tfrac{7}{9}\right) = 1$ so for $n \geq 1$ we have
\[
\abs{\frac{1/2(1/2 - 1)\dots(1/2 - n + 1)}{n!}\cdot \left(\frac{7}{9}\right)^n}_7 \leq \frac{7^{-n}}{\abs{n!}_7} = 7^{\frac{-5n - S_n}{6}} < 1
\]
so it must be $B_{1/2, 7}\left(\tfrac{7}{9}\right) \equiv 1 \mod 7$. Now it's easy to see that $-\tfrac{4}{3} \equiv 1 \mod 7$ and $\tfrac{4}{3} \equiv -1 \mod 7$. We conclude that necessarily $B_{1/2, 7}\left(\tfrac{7}{9}\right) = -\tfrac{4}{3}$.
\end{example}
This example also warns us about the danger of using the notation $B_{a, p}(X) = (1 + X)^a$, which comes certainly handy sometimes but we have to remember that it can yield different results than the ones we would expect on $\R$.
\section{The Iwasawa logarithm and Artin-Hasse exponential}
\begin{defn}
Let $X \subseteq \Cp$ be a set with no isolated points. A function $f\colon X \to \Cp$ is \emph{differentiable} at $a \in X$ if
\[
\exists \lim_{X \ni x \to a} \frac{f(x) - f(a)}{x - a} =: f'(a) \in \Cp.
\]
Equivalently, $f$ is differentiable at $a \in X$ if
\[
f(x) = f(a) + (x - a)f'(a) + (x - a)\phi(x), \qquad \lim_{X \ni x \to a} \phi(x) = 0.
\]
\end{defn}
We also introduce a stronger notion of differentiability for \padic functions, which will give us some analogue theorems to the classical case.
\begin{defn}
Let $X \subseteq \Cp$ be a set with no isolated points. A function $f\colon X \to \Cp$ is \emph{strictly differentiable} at $a \in X$ (and we write $f \in S^1(a)$) if the difference quotients
\[
\Phi f(x, y) := \frac{f(x) - f(y)}{x - y}
\]
tend to a limit $\ell \in \Cp$ (necessarily $\ell = f'(a)$) as $X \times X \setminus \Delta_X \ni (x, y) \to (a, a)$.
Here we used the notation $\Delta_X = \{(x, x) : x \in X \} \subset X \times X$.
We say $f \in S^1(X)$ if $f \in S^1(a)$ for every $a \in X$.
\end{defn}
In the classical case this definition is not very useful: in fact if $I \subset \R$ is an open interval and $f \in \mathcal{C}^1(I, \R)$ then $f$ is strictly differentiable at every point of $I$. In the next example we'll see this is not the case in \padic analysis.
\begin{example}
Let's consider the sequence of disjoint open balls $(B_n)_{n \geq 1}$ defined by
\[
B_n := \{x \in \Zp: \pabs{x - p^n} < \pabs{p^{2n}} \} \subseteq \{x \in \Zp: \pabs{x} = \pabs{p^n}\}
\]
and let $f\colon \Zp \to \Cp$ defined by
\begin{gather*}
f(x) :=
\begin{cases}
p^{2n}, & \text{if $x \in B_n$;} \\
0, & \text{otherwise};\\
\end{cases}.
\end{gather*}
The function $f$ is constant on each open ball $B_n$, hence $f$ is locally constant outside the origin. Then $f$ is differentiable at every $\Zp \ni x \neq 0$ and $f'(x) = 0$. At the origin
\[
\lim_{\Zp \ni x \to 0} \frac{f(x) - f(0)}{x} = \lim_{\Zp \ni x \to 0} \frac{f(x)}{x} = 0
\]
so $f'(0) = 0$ (to see this, let $x = up^n$ with $u \in \Zp^\times$; then $f(x)$ is either $0$ or $p^{2n}$, so $\pabs{f(x)/x} \leq \pabs{u^{-1}p^n} \to 0$ as $n \to +\infty$). Then $f'\colon \Zp \to \Cp$ is identically $0$ so it is obviously continuous (i.e. $f \in \mathcal{C}^1$) but $f$ is not strictly differentiable at $0$. In fact, let's consider $\Phi f(x, y)$ where $x = x_n = p^n$ and $y = y_n = p^n - p^{2n}$:
\[
\Phi f(x_n, y_n) = \frac{f(x_n) - f(y_n)}{x_n - y_n} = \frac{p^{2n} - 0}{p^{2n}} = 1
\]
so, if we consider this particular path $(x_n, y_n) \to (0,0)$ as $n \to +\infty$ we obtain
\[
0 = f'(0) \neq \lim_{n \to +\infty} \Phi f(x_n, y_n) = 1,
\]
which implies $f$ is not strictly differentiable at $0$ (we have used that $\pabs{y_n} = \pabs{p^n}$ and $y_n \notin B_n$).
\end{example}
We'll now prove a proposition which we're very familiar with in classical analysis.
\begin{prop}
If $f\colon X \to \Cp$ is strictly differentiable at $a \in X$ and $f'(a) \neq 0$, then there is a neighbourhood $V$ of $a \in X$ in which $f$ is injective.
\end{prop}
\begin{proof}
Since $f \in S^1(a)$ and $\pabs{f'(a)} > 0$ we can find a neighbourhood $V$ of $a$ such that
\[
\pabs{\Phi f(x, y) - f'(a)} < \pabs{f'(a)} \qquad \text{for $(x, y) \in V \times V \setminus \Delta_V$}.
\]
Then, by the isosceles triangle principle, we must have $\pabs{\Phi f(x, y)} = \pabs{f'(a)}$, which means exactly
\[
\pabs{f(x) - f(y)} = \pabs{f'(a)}\pabs{x - y} \qquad \text{for $(x, y) \in V \times V$};
\]
in particular $f(x) \neq f(y)$ whenever $x \neq y$, so $f$ is injective on $V$.
\end{proof}
Let's now focus on analytic functions; they are, like in the classical case, everywhere strictly differentiable any number of times (i.e. they're in $\bigcap_{k > 0} S^k$). We'll only prove it for $k=1$.
\begin{thm}
\label{thm:derivative-power-series}
Let $f(X) = \sum_{n \geq 0} a_nX^n$ be an analytic function which converges on $D = D(r^-)$. Then $f \in S^1(D)$ and $f'$ is given by
\[
f'(X) = \sum_{n=1}^{+\infty} na_nX^{n-1}.
\]
\end{thm}
\begin{proof}
It's immediate to note that the radius of convergence of $f'$ is greater than or equal to $r$, the radius of convergence of $f$, since $\pabs{n} \leq 1$ for every $n \in \N$. First of all let's fix $x \in D$ and prove that
\[
\lim_{h \to 0}\, \pabs{\frac{f(x+h) - f(x)}{h} - f'(x)} = 0.
\]
We can re-write this limit as
\[
\lim_{h \to 0}\,\pabs{\sum_{n = 2}^{+\infty} a_n \cdot \left( \frac{(x+h)^n - x^n}{h} - nx^{n-1} \right) } = 0;
\]
and using the binomial theorem on $(x + h)^n$ we can then write
\[
\lim_{h \to 0}\,\pabs{\sum_{n = 2}^{+\infty} a_n \cdot \left( \sum_{i=0}^{n-2} \binom{n}{i} x^ih^{n-1-i} \right) } = 0.
\]
Let's now distinguish two cases: $x = 0$ and $x \neq 0$. \newline
If $x = 0$ then we must prove $\lim_{h \to 0}\,\pabs{\sum_{n = 2}^{+\infty} a_n \cdot h^{n-1} } = 0$, which easily follows from
\[
\lim_{h \to 0}\,\pabs{\sum_{n = 2}^{+\infty} a_n \cdot h^{n-1} } \leq \lim_{h \to 0}\,\left( \pabs{h} \cdot \max_{n \geq 2} \left\{\pabs{a_nh^{n-2}} \right\} \right)= 0,
\]
where we considered $0 < \pabs{h} < r$ and exploited the fact that $\lim_{n \to +\infty} \pabs{a_nh^{n-2}} = 0$ so the maximum in the limit above is bounded.\newline
Now, assuming $x \neq 0$ and $0 < \pabs{h} < \pabs{x}$, it's easy to see that
\[
\pabs{\sum_{i=0}^{n-2} \binom{n}{i} x^ih^{n-1-i}} \leq \pabs{h}^{n-1} \cdot \max_{0 \leq i \leq n-2}\left\{\pabs{x^ih^{-i}}\right\} \leq \pabs{h}^{n-1}\cdot \left(\frac{\pabs{x}}{\pabs{h}}\right)^{n-2} = \pabs{h}\cdot \pabs{x}^{n-2}.
\]
Then we have
\[
\pabs{\sum_{n = 2}^{+\infty} a_n \cdot \left( \sum_{i=0}^{n-2} \binom{n}{i} x^ih^{n-1-i} \right) } \leq \pabs{h}\cdot\max_{n \geq 2}\left\{\pabs{a_nx^{n-2}}\right\},
\]
and since $\lim_{n\to +\infty} \pabs{a_nx^{n-2}} = 0$ the maximum above is bounded so
\[
\lim_{h \to 0}\,\pabs{\sum_{n = 2}^{+\infty} a_n \cdot \left( \sum_{i=0}^{n-2} \binom{n}{i} x^ih^{n-1-i} \right) } \leq \lim_{h \to 0} \left( \pabs{h}\cdot\max_{n \geq 2}\left\{\pabs{a_nx^{n-2}}\right\}\right) = 0.
\]
We have proved that $f$ is differentiable everywhere and $f'$ is its derivative. \newline
We won't prove here that $f$ is actually strictly differentiable: a proof of this statement for a particular case (where $r \geq 1$, i.e. $\lim_{n\to+\infty}\pabs{a_n}=0$) can be found in \cite[239]{robert:padic-analysis}.
\end{proof}
\begin{example}
We can now prove a well-known result of classical analysis: the derivative of $e^x$ is $e^x$. More precisely, if $x \in D(r_p^-)$ then $\frac{\mathrm{d}}{\mathrm{d}x}\exp_p(x) = \exp_p(x)$. This follows easily by applying \cref{thm:derivative-power-series}:
\[
\frac{\mathrm{d}}{\mathrm{d}x}\exp_p(x) = \frac{\mathrm{d}}{\mathrm{d}x}\left( \sum_{n=0}^{+\infty} \frac{x^n}{n!} \right) = \sum_{n=1}^{+\infty} \frac{x^{n-1}}{(n-1)!} = \exp_p(x).
\]
\end{example}
\begin{defn}
Let $f\colon \Cp \to \Cp$ be a (partial) function. If for every $x$ in its domain there exists a neighbourhood where $f$ is a power series, we say that $f$ is \emph{locally analytic}.
\end{defn}
We now present two \padic locally analytic functions: the \emph{Iwasawa logarithm} and the \emph{Artin-Hasse exponential}.
\begin{prop}
There exists a unique function $\Log_p\colon \Cp^\times \to \Cp$ such that:
\begin{enumerate}[label=(\arabic*)]
\item $\Log_p$ agrees with $\log_p$ in $D_1(1^-)$, i.e.,
\[
\Log_p(x) = \sum_{n=1}^{+\infty} (-1)^{n+1}\frac{(x-1)^n}{n} \qquad \text{for $\pabs{x-1} < 1$};
\]
\item $\Log_p(xy) = \Log_p(x) + \Log_p(y)$ for all $x, y \in \Cp^\times$;
\item $\Log_p(p) = 0$.
\end{enumerate}
\end{prop}
\begin{proof}
We recall from \cref{prop:structure-Cp} that any non-zero $x \in \Cp$ can be written as $x = p^r\omega(x_1)\langle x_1\rangle$, where $p^r$ is a root of the equation $X^b - p^a = 0$ with $r=\tfrac{a}{b}=\ord(x)$, $\omega(x_1)$ is a root of $1$ and $\pabs{\langle x_1 \rangle - 1} < 1$. If such an extension of the logarithm exists, then, by \textit{(2)} and \textit{(3)}, it must be
\[
\Log_p(x) = \Log_p(p^r) + \Log_p(\omega(x_1)) + \Log_p(\langle x_1 \rangle) = 0 + 0 + \Log_p(\langle x_1 \rangle) = \log_p(\langle x_1 \rangle),
\]
since $\langle x_1 \rangle \in D_1(1^-)$. Then there is at most one extension of the logarithm and it is the one defined by
\[
\Log_p(x) := \log_p(\langle x_1 \rangle).
\]
First of all we have to show that this is well defined: in fact we could have chosen another root of $X^b - p^a = 0$ and we would have obtained a different factorization of the same element. Let's suppose that
\[
x = p^r\cdot\omega(x_1)\cdot\langle x_1\rangle = \frac{p^r}{\zeta}\cdot\omega\left(x_1\overline{\zeta}\right)\cdot\left\langle x_1\overline{\zeta}\right\rangle,
\]
where $\zeta \in \Cp$ is a $b$-th root of unity and $\overline{\zeta} = \zeta + M \in A/M$ (we recall that $A = D(1), M = D(1^-)$ in $\Cp$). We then have to prove that $\log_p(\langle x_1\rangle) = \log_p(\left\langle x_1\overline{\zeta}\right\rangle)$. Let's first recall how the Teichm{\"u}ller representatives are defined: if $\overline{\Fp}$ is the algebraic closure of $\Fp$ then $\omega\colon \overline{\Fp} \to \Zpu$ is a section of the projection $\pi\colon \Zpu \twoheadrightarrow \Zpu/p\Zpu = \overline{\Fp}$ such that $\omega(x)^{p^f - 1} = 1$ if $x \in \F_{p^f}^\times$ and $\omega(0)=0$ (it is immediate that since $\omega$ can be defined on every finite field of characteristic $p$, see the proof of \cref{prop:structure-finite-extension}, it can be extended to $\overline{\Fp}$). It's easy to see that $\omega\colon \overline{\Fp}^\times \to \left(\Zpu\right)^\times$ is a group morphism, i.e. $\omega(xy) = \omega(x) \cdot \omega(y)$: in fact if $x, y \in \F_{p^f}$ then $\omega(xy)$ is defined as the unique element of $\Zpu$ such that $\omega(xy)^{p^f} = \omega(xy)$ and $\pi(\omega(xy))=xy$, and it's clear that $\omega(x)\cdot\omega(y)$ satisfies both these conditions.
We also recall from \cref{prop:structure-Cp} that we can find a big enough $f \in \N$ such that
\begin{gather*}
\F_{p^f} \ni x_1 = \frac{x}{p^r} + M,\\
\F_{p^f} \ni x_1\overline{\zeta} = \frac{x\zeta}{p^r} + M = \left(\frac{x}{p^r} + M\right) \cdot (\zeta + M).
\end{gather*}
Having fixed all the notation, we can finally write
\[
\omega\left(x_1\overline{\zeta}\right) = \omega(x_1)\cdot\omega\left(\overline{\zeta}\right) \implies \left\langle x_1\overline{\zeta}\right\rangle = \frac{x\zeta}{p^r\cdot\omega\left(x_1\overline{\zeta}\right)} = \langle x_1\rangle \cdot \frac{\zeta}{\omega\left(\overline{\zeta}\right)}.
\]
Now, since $\zeta^b = 1$, we have $\overline{\zeta}^b = 1$ so $\omega\left(\overline{\zeta}\right)^b = 1$, since $\omega$ is a group morphism and $\omega(1) = 1$. Finally,
\[
\left(\frac{\zeta}{\omega\left(\overline{\zeta}\right)}\right)^b = \frac{\zeta^b}{\omega\left(\overline{\zeta}\right)^b} = 1
\]
so $\xi := \tfrac{\zeta}{\omega\left( \overline{\zeta}\right)}$ is a root of $1$. Let's prove that $\pabs{\xi - 1} < 1$: suppose by contradiction that $\xi = 1 + \Delta$ with $\pabs{\Delta} \geq 1$ and write $\langle x_1 \rangle = 1 + \delta$ with $\pabs{\delta} < 1$. We know by hypothesis that $\langle x_1\overline{\zeta} \rangle \in D_1(1^-)$ so we must have
\[
\pabs{(1 + \delta) \cdot (1 + \Delta) - 1} = \pabs{\delta + \Delta \cdot(1 + \delta)} < 1
\]
but since $\pabs{\delta} < 1$ and $\pabs{1 + \delta} = 1$, we have $\pabs{\Delta \cdot (1 + \delta)} = \pabs{\Delta} \geq 1$ so
\[
\pabs{\delta + \Delta\cdot(1 + \delta)} = \max\left\{\pabs{\delta}, \pabs{\Delta \cdot (1 + \delta)}\right\} = \pabs{\Delta} \geq 1,
\]
which is absurd (here we have used the isosceles triangle principle several times). We have proved that $\xi$ is a root of $1$ with $\pabs{\xi - 1} < 1$ so we can compute $\log_p(\xi) = 0$. Then
\[
\log_p\left(\left\langle x_1\overline{\zeta}\right\rangle\right) = \log_p(\langle x_1 \rangle) + \log_p(\xi) = \log_p(\langle x_1 \rangle)
\]
and the function $\Log_p$ is well defined. \newline
Properties \textit{(1)} and \textit{(3)} are now obvious from the definition: if $x \in D_1(1^-)$ then we can choose $x = \langle x \rangle$ so $\Log_p(x) = \log_p(x)$ and $p = p^1 \cdot 1 \cdot 1$ so $\Log_p(p) = \log_p(1) = 0$. To prove \textit{(2)} let $x = p^r\omega(x_1)\langle x_1 \rangle$, $y = p^s \omega(y_1) \langle y_1 \rangle$ and $z = xy = p^{r+s}\omega(z_1)\langle z_1 \rangle$. Now $p^{r+s}$ isn't necessarily the same fractional power as $p^rp^s$ (it can differ by a root of unity), but we can choose to use exactly $p^rp^s$, since the value of $\Log_p$ doesn't depend on the choice of the fractional power. In this case we'll have $z_1 = \tfrac{z}{p^rp^s} + M = x_1y_1$ so $\omega(z_1) = \omega(x_1)\cdot\omega(y_1)$ and $\langle z_1 \rangle = \langle x_1 \rangle \cdot \langle y_1 \rangle$. Then $\Log_p(xy) = \Log_p(x) + \Log_p(y)$.
\end{proof}
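\begin{example}
As an immediate consequence of the proposition, $\Log_p$ vanishes on every root of unity: if $\zeta^m = 1$ for some $m \geq 1$ then, by \textit{(2)}, $m\Log_p(\zeta) = \Log_p(\zeta^m) = \Log_p(1) = 0$, so $\Log_p(\zeta) = 0$. Similarly $\Log_p(p^{a/b}) = 0$ for every fractional power of $p$, since $b\Log_p(p^{a/b}) = \Log_p(p^a) = a\Log_p(p) = 0$.
\end{example}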
\begin{prop}
$\Log_p$ is locally analytic on $\Cp^\times$ with derivative $\Cp^\times \ni x \mapsto \tfrac{1}{x}$.
\end{prop}
\begin{proof}
Let's fix a point $x_0 \in \Cp^\times$ and let $r := \pabs{x_0}$. For every $x \in D_{x_0}(r^-)$ (the largest disc about $x_0$ which doesn't contain $0$) we have $\pabs{\tfrac{x}{x_0} - 1} < 1$ and so
\[
\Log_p(x) = \Log_p\left(x_0 \cdot \left(1 + \frac{x}{x_0} - 1\right)\right) = \Log_p(x_0) + \sum_{n=1}^{+\infty} (-1)^{n+1}\cdot\frac{(x - x_0)^n}{n\cdot x_0^n}.
\]
We have just proved that, in a neighbourhood of $x_0$, $\Log_p$ can be represented by a convergent power series in $x - x_0$. Since this reasoning can be done for any $x_0 \in \Cp^\times$ we can conclude that $\Log_p$ is locally analytic.\newline
Let's consider $x \in D_{x_0}(r^-)$ as above: using the local analyticity of $\Log_p$ and \cref{thm:derivative-power-series} we obtain:
\begin{gather*}
\frac{\mathrm{d}}{\mathrm{d}x}\Log_p(x) = \sum_{n=1}^{+\infty} (-1)^{n+1}\cdot\frac{(x-x_0)^{n-1}}{x_0^n} = x_0^{-1} \cdot \sum_{n=0}^{+\infty} \left(1 - \frac{x}{x_0}\right)^n = \frac{x_0^{-1}}{(x/x_0)} = \frac{1}{x}. \qedhere
\end{gather*}
\end{proof}
We have found a locally analytic function defined on $\Cp^\times$ which extends $\log_p$ and has the same basic properties.
It is now natural to try to build a homomorphism $f\colon \Cp \to \Cp^\times$ extending the exponential, which is only defined in $D(r_p^-)$. If such an extension exists then, for a fixed $x \in \Cp^\times$ and $n \in \N$ such that $p^nx \in D(r_p^-)$, we have
\[
f(x)^{p^n} = f(p^nx) = \exp_p(p^nx)
\]
so $f(x)$ must be a $p^n$-th root of $\exp_p(p^nx)$. As stated in the next proposition, this extension can actually be done in a coherent way.
\begin{prop}
There exists a continuous homomorphism $\mathrm{Exp}\colon \Cp \to D_1(1^-)$ extending $\exp_p$.
\end{prop}
\begin{proof}
The idea behind the proof exploits the fact that, since $(D_1(1^-), \cdot)$ is a divisible group, there is an extension property for homomorphisms defined over subgroups. For the whole proof see \cite[259]{robert:padic-analysis}.
\end{proof}
Unlike the Iwasawa logarithm, the extensions $\mathrm{Exp}$ of the exponential are not defined in a canonical way, so they're not very useful. Nevertheless it is easy to prove that, given such an extension $\mathrm{Exp}$, $\log_p \circ\, \mathrm{Exp} = \mathrm{id}_{\Cp}$. In fact:
\[
p^n\cdot \left(\log_p\circ\,\mathrm{Exp}(x)\right) = \log_p\left(\mathrm{Exp}(x)^{p^n}\right) = \log_p\left(\mathrm{Exp}\left(p^nx\right)\right) = \log_p\left(\exp_p\left(p^nx\right)\right) = p^nx.
\]
Dividing both sides by $p^n$ yields $\log_p\left(\mathrm{Exp}(x)\right) = x$.
We'll now describe a slightly different exponential function which converges in $D(1^-)$: the Artin-Hasse exponential. Before defining it we'll need to study some basic properties of the well-known M{\"o}bius function.
\begin{defn}
Let $\mu\colon \N^\times \to \Z$ be defined by
\[
\mu(n) :=
\begin{cases}
0, & \text{if $n$ is divisible by a perfect square greater than 1;}\\
(-1)^k, & \text{if $n$ is a product of $k$ distinct prime factors}
\end{cases}.
\]
This is the \emph{M{\"o}bius function}.
\end{defn}
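\begin{example}
For instance $\mu(1) = 1$ ($1$ is the empty product of primes), $\mu(2) = \mu(3) = -1$, $\mu(6) = \mu(10) = 1$ and $\mu(4) = \mu(12) = 0$, since $4$ and $12$ are divisible by the perfect square $4$.
\end{example}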
\begin{prop}
\label{prop:mobius-function}
Let $n \in \N^\times$, then
\[
\sum_{d \mid n} \mu(d) =
\begin{cases}
1, & \text{if $n=1$;} \\
0, & \text{otherwise}
\end{cases}.
\]
In particular, if $p$ is a prime,
\[
\sum_{\mathclap{d \mid n,\,p \nmid d}} \mu(d) =
\begin{cases}
1, & \text{if $n$ is a power of $p$;} \\
0, & \text{otherwise}
\end{cases}.
\]
\end{prop}
\begin{proof}
The case $n=1$ is trivial ($\mu(1) = 1$). Let $n = p_1^{a_1}\dots \,p_s^{a_s}$ with $s \geq 1$ and $p_1, \dots, p_s$ distinct primes. Then, by an easy combinatorial argument, we have
\[
\sum_{d \mid n} \mu(d) = \sum_{\epsilon_i = 0 \lor 1} \mu(p_1^{\epsilon_1}\dots\,p_s^{\epsilon_s}) = \sum_{\epsilon_i = 0 \lor 1} (-1)^{\sum \epsilon_i} = (1 - 1)^s = 0.
\]
The second statement is just a particular case of the first one applied to $n \cdot p^{-\ord(n)}$ (the prime-to-$p$ part of $n$) in place of $n$: the divisors of $n$ not divisible by $p$ are exactly the divisors of $n \cdot p^{-\ord(n)}$, and the latter equals $1$ precisely when $n$ is a power of $p$.
\end{proof}
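\begin{example}
Let's check both statements for $n = 12$ and $p = 2$. The divisors of $12$ are $1, 2, 3, 4, 6, 12$, so
\[
\sum_{d \mid 12} \mu(d) = 1 - 1 - 1 + 0 + 1 + 0 = 0,
\]
as expected since $12 \neq 1$. Restricting to the divisors not divisible by $2$, i.e. $1$ and $3$, we get $\mu(1) + \mu(3) = 0$, as expected since $12$ is not a power of $2$; for $n = 8$, instead, the only such divisor is $1$ and the sum is $\mu(1) = 1$.
\end{example}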
\begin{prop}
\label{prop:formal-identity-exp}
In $\Q\llbracket X \rrbracket$ the following holds:
\[
\exp(X) = \prod_{n=1}^{+\infty} B_{-\mu(n)/n}(-X^n).
\]
\end{prop}
\begin{proof}
First of all let's observe that the infinite product of series actually makes sense: in fact $B_{-\mu(n)/n,\,p}(-X^n) = 1 + \tfrac{\mu(n)}{n}X^n + o(X^n)$, so the $n$-th factor differs from $1$ only in degrees $\geq n$ and only finitely many factors are involved in determining the coefficient of any given power of $X$.
To prove that the identity holds we'll use \cref{prop:formal-series}; let $x \in \R$ with $\abs{x} < 1$, then we know that
\[
B_{-\mu(n)/n}(-x^n) = (1 - x^n)^{-\frac{\mu(n)}{n}}.
\]
Taking the (classical) $\log$ of the right side we obtain
\[
\log\left(\prod_{n=1}^{+\infty} (1 - x^n)^{-\frac{\mu(n)}{n}}\right) = -\sum_{n=1}^{+\infty}\frac{\mu(n)}{n} \cdot \log(1 - x^n) = \sum_{n=1}^{+\infty} \frac{\mu(n)}{n}\cdot\sum_{m=1}^{+\infty} \frac{x^{nm}}{m} = \sum_{j=1}^{+\infty}\left(\frac{x^j}{j}\cdot \sum_{n \mid j} \mu(n)\right)
\]
where in the last step we set $j = nm$ and we rearranged the terms of the series since it is absolutely convergent. To see why this is true, let's consider
\[
\sum_{n=1}^{+\infty} \abs{\frac{\mu(n)}{n}}\cdot\abs{\log(1 - x^n)} \leq \sum_{n=1}^{+\infty} \frac{\abs{\log(1 - x^n)}}{n}.
\]
Since $\abs{\log(1 - x^n)} \sim \abs{x}^n$ as $n \to +\infty$, we can just study the convergence of the series
\[
\sum_{n=1}^{+\infty} \frac{\abs{x}^n}{n},
\]
which converges since it is dominated by the convergent geometric series $\sum_{n=1}^{+\infty} \abs{x}^n$ (we're using $\abs{x} < 1$). Now that we have justified why we can rearrange terms, using \cref{prop:mobius-function}, we obtain
\begin{gather*}
\log\left(\prod_{n=1}^{+\infty} (1 - x^n)^{-\frac{\mu(n)}{n}}\right) = \sum_{j=1}^{+\infty}\left(\frac{x^j}{j}\cdot \sum_{n \mid j} \mu(n)\right) = x = \log(\exp(x)) \\
\implies \exp(x) = \prod_{n=1}^{+\infty} (1 - x^n)^{-\frac{\mu(n)}{n}}
\end{gather*}
which, translated back to formal power series, concludes the proof.
\end{proof}
We have just proved that
\[
\exp_p(X) = \prod_{n=1}^{+\infty} B_{-\mu(n)/n,\,p}(-X^n)
\]
(recall that $B_{-\mu(n)/n,\,p}(X) = B_{-\mu(n)/n}(X)$ and $\exp_p(X) = \exp(X)$, as elements of $\Q \llbracket X \rrbracket$).
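\begin{example}
As a quick sanity check, let's verify the identity up to degree $2$. The factors with $n \leq 2$ are $B_{-1}(-X) = (1-X)^{-1} = 1 + X + X^2 + \dots$ and $B_{1/2}(-X^2) = (1-X^2)^{1/2} = 1 - \tfrac{1}{2}X^2 + \dots$, while every factor with $n \geq 3$ is $\equiv 1 \pmod{X^3}$. Hence
\[
\prod_{n=1}^{+\infty} B_{-\mu(n)/n}(-X^n) \equiv (1 + X + X^2)\left(1 - \frac{X^2}{2}\right) \equiv 1 + X + \frac{X^2}{2} \pmod{X^3},
\]
in agreement with $\exp(X) = 1 + X + \tfrac{X^2}{2} + \dots$
\end{example}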
With this new expression of $\exp_p(X)$ we can understand where convergence ``problems'' arise. In fact if $p \mid n$ and $n$ is square-free (so $\mu(n) \neq 0$) then $\pabs{\mu(n)/n} = \pabs{n}^{-1} \geq p$, so $B_{-\mu(n)/n,\,p}(-X^n)$ converges at $x$ only if $\pabs{x}^n < r_p\pabs{n}$, i.e. $x^n \in D((r_p\pabs{n})^-)$ (see \cref{prop:convergence-binomial}).
\[
\pabs{x} < \left(p^{-1/(p-1)}\cdot p^{-1}\right)^{\frac{1}{p}} = p^{-1/(p-1)} = r_p.
\]
Instead, if $p \nmid n$, we have no problems, since $-\tfrac{\mu(n)}{n} \in \Zp$ and, by \cref{prop:convergence-binomial}, we have $B_{-\mu(n)/n,\,p}(-X^n) \in \Zp\llbracket X \rrbracket$ so $x \in D(1^-)$ guarantees convergence. This motivates the following definition.
\begin{defn}
\label{defn:artin-hasse}
The (partial) function $\E_p\colon \Cp \to \Cp$ defined by
\[
\E_p(X) := \prod_{\substack{n=1 \\ p\nmid n}}^{+\infty} B_{-\mu(n)/n,\,p}(-X^n) = \prod_{\substack{n=1 \\ p\nmid n}}^{+\infty} (1 - X^n)^{-\frac{\mu(n)}{n}}
\]
is called the \emph{Artin-Hasse exponential}.
\end{defn}
We observe again that $B_{-\mu(n)/n,\,p}(-X^n) \in 1 + X^n\Q\llbracket X \rrbracket$ so the infinite product makes sense.
\begin{prop}
\label{prop:artin-hasse-formula}
In $\Q\llbracket X \rrbracket$ the following holds:
\[
\E_p(X) = \exp_p\left(\sum_{i=0}^{+\infty} \frac{X^{p^i}}{p^i}\right).
\]
\end{prop}
\begin{proof}
As usual let's consider $x \in \R$ with $\abs{x} < 1$: then
\[
\E_p(x) := \prod_{\substack{n=1 \\ p\nmid n}}^{+\infty} (1 - x^n)^{-\frac{\mu(n)}{n}}
\]
and taking logarithm we obtain
\[
\log\left(\E_p(x)\right) = -\sum_{\substack{n=1 \\ p \nmid n}}^{+\infty} \frac{\mu(n)}{n} \left( \sum_{m=1}^{+\infty} \frac{x^{mn}}{m}\right) = \sum_{j=1}^{+\infty} \left(\frac{x^j}{j} \cdot \sum_{\mathclap{n \mid j,\,p \nmid n}} \mu(n) \right),
\]
where in the last step we set $j=nm$ and we rearranged the terms of the series (we may do so; the justification is analogous to the one given in \cref{prop:formal-identity-exp}). Using \cref{prop:mobius-function} we obtain the following relation (in $\R$):
\[
\log\left(\E_p(x)\right) = \sum_{m=0}^{+\infty} \frac{x^{p^m}}{p^m} \implies \E_p(x) = \exp\left(\sum_{m=0}^{+\infty} \frac{x^{p^m}}{p^m}\right).
\]
We can conclude immediately by applying \cref{prop:formal-series} (recall that $\exp_p$ and $\exp$ are exactly the same formal series in $\Q\llbracket X\rrbracket$).
\end{proof}
At this point it is very easy to prove that $\E_p(X)$ converges on $D(1^-)$, a much larger disc than the disc of convergence $D(r_p^-)$ of $\exp_p(X)$.
\begin{prop}
\label{prop:artin-hasse-convergence}
The Artin-Hasse exponential $\E_p(X)$ converges on $D(1^-)$.
\end{prop}
\begin{proof}
We recall the definition of $\E_p$:
\[
\E_p(X) := \prod_{\substack{n=1 \\ p \nmid n}}^{+\infty} B_{-\mu(n)/n,\,p}(-X^n).
\]
Now if $p \nmid n$ we have already proved that $B_{-\mu(n)/n,\,p}(-X^n) \in 1 + X^n(\Zp\cap\Q)\llbracket X\rrbracket$, so the whole product $\E_p(X)$ has coefficients in $\Zp\cap\Q$. Then we conclude using \cref{prop:convergence-power-series-Zp}.
\end{proof}
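\begin{example}
As a concrete illustration, for $p = 2$ \cref{prop:artin-hasse-formula} gives
\[
\E_2(X) = \exp_2\left(X + \frac{X^2}{2} + \frac{X^4}{4} + \dots\right) = 1 + X + X^2 + \frac{2}{3}X^3 + \dots,
\]
and indeed $\tfrac{2}{3}$ is a $2$-adic integer, since $3$ is a $2$-adic unit, while the corresponding coefficient $\tfrac{1}{6}$ of $\exp_2(X)$ is not (its $2$-adic valuation is $-1$).
\end{example}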
We could have proved directly that $\exp_p\left(\sum_{n=0}^{+\infty} \frac{X^{p^n}}{p^n}\right) \in \Zp\llbracket X\rrbracket$ using Dwork's lemma. In fact
we already know that $\E_p(X) \in 1 + X\Qp\ser{X}$ and we can compute
\begin{gather*}
\E_p(X^p) = \exp_p\left(\sum_{n=0}^{+\infty} \frac{X^{p^{n+1}}}{p^n}\right), \\
\E_p(X)^p = \exp_p\left(\sum_{n=0}^{+\infty} \frac{X^{p^n}}{p^{n-1}}\right) = \exp_p\left(pX + \sum_{n=0}^{+\infty} \frac{X^{p^{n+1}}}{p^n}\right),
\end{gather*}
where we used $\exp_p(Y)^p = \exp_p(pY)$ (this formal identity can be easily verified using \cref{prop:formal-series}). Since $\tfrac{\exp_p(X)}{\exp_p(Y)} = \exp_p(X - Y)$ (also easy to prove) we have
\[
\frac{\E_p(X^p)}{\E_p(X)^p} = \frac{\exp_p\left(\sum_{n=0}^{+\infty} \frac{X^{p^{n+1}}}{p^n}\right)}{\exp_p\left(pX + \sum_{n=0}^{+\infty} \frac{X^{p^{n+1}}}{p^n}\right)} = \exp_p(-pX) \in 1 + pX\Zp\ser{X}
\]
and we can conclude that $\E_p(X) \in 1 + X\Zp\ser{X}$ thanks to \cref{lemma:dwork}. | {
"alphanum_fraction": 0.5988499357,
"avg_line_length": 73.4209183673,
"ext": "tex",
"hexsha": "956d8e2d0310fe908069e823089e90242119d5ff",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "carlo300/BachelorThesis",
"max_forks_repo_path": "Mainmatter/chapter4.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "carlo300/BachelorThesis",
"max_issues_repo_path": "Mainmatter/chapter4.tex",
"max_line_length": 1190,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "carlo300/BachelorThesis",
"max_stars_repo_path": "Mainmatter/chapter4.tex",
"max_stars_repo_stars_event_max_datetime": "2021-03-29T10:11:24.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-12-21T10:59:24.000Z",
"num_tokens": 23293,
"size": 57562
} |
\chapter{Data formats}
\label{sec:dataformat}
%\section{Level1B data}
%\section{Climatological \textit{apriori} data}
%\section{ZPT data}
%\section{Sensor characteristic data}
%\section{Spectroscopic data}
%\section{Configuration}
%\subsection{Definition of settings}
%\includepdf[pages={1,2,3,4}]{q_fields_test.pdf}
\includepdf[pages={1,2,3,4}]{q_fields.pdf}
%\includepdf[pages={{},-}]{q_fields.pdf}
%\subsection{The actual setting?}
%The Matlab function displayed below describes common
%and frequency mode specific configuration of
%the Level2 processor\dots\todo{? this is not complete, but should this description be included
%at all.}\
%\input{q_std}
%\section{Level2 data}
%\section{Level2 Auxiliary data}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "L1_ATBD"
%%% End:
| {
"alphanum_fraction": 0.7452711223,
"avg_line_length": 26.4333333333,
"ext": "tex",
"hexsha": "869d89ec92a84aeb8a582e0089a8f247ea67c12d",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-05-18T15:26:54.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-05-18T15:26:54.000Z",
"max_forks_repo_head_hexsha": "19a7fea949a8839897f511bc8ddc6abbb52e9cb0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Odin-SMR/docs",
"max_forks_repo_path": "IODD/appendices.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "19a7fea949a8839897f511bc8ddc6abbb52e9cb0",
"max_issues_repo_issues_event_max_datetime": "2020-10-12T13:45:33.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-09-28T09:28:13.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Odin-SMR/docs",
"max_issues_repo_path": "IODD/appendices.tex",
"max_line_length": 95,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "19a7fea949a8839897f511bc8ddc6abbb52e9cb0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Odin-SMR/docs",
"max_stars_repo_path": "IODD/appendices.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 228,
"size": 793
} |
%\documentclass[08pt,a4paper]{article}
%%% NOT RENDERING BECAUSE NEW COLOUR PACKAGE IS INCOMPATIBLE -- >.FIX LATER %%%
%\usepackage{fontspec}
%\usepackage{yfonts}
%\usepackage{kpfonts}
%\usepackage[T1]{fontenc}
%\usepackage{setspace}
%%\usepackage{color}
%\usepackage[usenames, dvipsnames]{xcolor}
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage[dvipsnames]{xcolor}
%\bibliographystyle{babplain}
%\bibliographystyle{unsrt}
%\usepackage{etaremune}
%\usepackage{kpfonts}
\usepackage[T1]{fontenc}
%\usepackage{setspace}
%%\usepackage{color}
%\usepackage[usenames, dvipsnames]{xcolor}
% USE THIS TO UNDO NUMBERING
%\makeatletter
%\renewenvironment{thebibliography}[1]
% {\section*{\refname}%
% \@mkboth{\MakeUppercase\refname}{\MakeUppercase\refname}%
% \list{}%
% {\setlength{\labelwidth}{0pt}%
% \setlength{\labelsep}{0pt}%
% \setlength{\leftmargin}{\parindent}%
% \setlength{\itemindent}{-\parindent}%
% \@openbib@code
% \usecounter{enumiv}}%
% \sloppy
% \clubpenalty4000
% \@clubpenalty \clubpenalty
% \widowpenalty4000%
% \sfcode`\.\@m}
% {\def\@noitemerr
% {\@latex@warning{Empty `thebibliography' environment}}%
% \endlist}
%\makeatother
%
\renewcommand*{\refname}{} % no title for bibliography
\usepackage[official]{eurosym}
%\renewcommand*\sfdefault{uop}
%\renewcommand*\familydefault{\sfdefault} %% Only if the base font of the document is to be sans serif
%\usepackage[T1]{fontenc}
% DOCUMENT LAYOUT
\usepackage{geometry}
\geometry{a4paper, textwidth=5.5in, textheight=8.5in, marginparsep=7pt, marginparwidth=.6in}
\setlength\parindent{0in}
% FONTS
%\usepackage{xunicode}
%\usepackage{xltxtra}
%\defaultfontfeatures{Mapping=tex-text} % converts LaTeX specials (``quotes'' --- dashes etc.) to unicode
%\setromanfont [Ligatures={Common}, Numbers={OldStyle}]{EurekaMonoOT}
%\setromanfont[Scale=1]{Latin Modern Roman}
%\setromanfont [Ligatures={Common}, Numbers={OldStyle}]{Avenir Next LT Pro}
%\setmonofont[Scale=.9]{Nitti}
%\setsansfont[Scale=1]{Univers Next Pro}
%\newfontfamily{\secfont}{Latin Modern Roman}%{Akzidenz-Grotesk BQ Medium}
%\newfontfamily{\subsecfont}{Latin Modern Roman}%{Akzidenz-Grotesk BQ Medium}
% ---- CUSTOM AMPERSAND
%\newcommand{\&}{{\fontspec[Scale=.9]{Avenir Next LT Pro}\selectfont\itshape\&}}%{Hoefler Text}
% ---- MARGIN YEARS
\usepackage{marginnote}
\newcommand{\years}[1]{\marginnote{\scriptsize #1}}
\renewcommand*{\raggedleftmarginnote}{}
\setlength{\marginparsep}{7pt}
\reversemarginpar
% HEADINGS
\usepackage{sectsty}
%\usepackage[normalem]{ulem}
%\sectionfont{nitti typewriter} %\mdseries\large\underline}
%\subsectionfont{nitti typewriter cameo} %\rmfamily\mdseries\scshape\normalsize}
%\subsubsectionfont{nitti typewriter cameo} %\bfseries\upshape\normalsize}
\usepackage{graphicx}
% PDF SETUP
% ---- FILL IN HERE THE DOC TITLE AND AUTHOR
%\usepackage[xetex, bookmarks, colorlinks, breaklinks, pdftitle={Joseph Bulbulia - vita},pdfauthor={Joseph Bulbulia}]{hyperref}
%\hypersetup{linkcolor=blue,citecolor=blue,filecolor=black,urlcolor=blue}
\usepackage{hyperref}
\hypersetup{
colorlinks,%
citecolor=green,%
filecolor=magenta,%
linkcolor=red,%
urlcolor=cyan
}
%-------------------------------------
%\setlength{\textwidth}{17cm} \setlength{\oddsidemargin}{-0.5cm}
%-------------------------------------
\usepackage{mdwlist}
\usepackage{geometry}
\usepackage{sectsty}
%\usepackage[normalem]{ulem}
%\sectionfont{\subsecfont}
%\subsectionfont{\subsecfont}
%\subsubsectionfont{\subsecfont}
%\usepackage{titlesec}
%\definecolor{MSBlue}{rgb}{.204,.353,.541}
%\definecolor{MSLightBlue}{rgb}{.31,.506,.741}
%\newfontfamily\subsubsectionfont{Constantia Bold}
%\titleformat*{\section}{\large\subsubsectionfont}
%\titleformat*{\subsection}{\subsubsectionfont}
%\titleformat*{\subsubsection}{\subsubsectionfont}
%\fontsize{10pt}{12pt}
% Customize page headers
%\pagestyle{myheadings}
%\markright{\name}
\usepackage{microtype}
\usepackage{enumerate}
%\usepackage{lmodern}
\usepackage{setspace}
%%\onehalfspacing
\pagestyle{empty}
%\usepackage{sectsty}
%\sectionfont{\rmfamily\mdseries\Large}
%\subsectionfont{\rmfamily\mdseries\itshape\large}
\begin{document}
\singlespacing
%\doublespacing
%\textbf{\Large Joseph Bulbulia}\\~\\
\subsubsection*{Joseph A. Bulbulia}
\small
Professor, School of Psychology\\
Faculty of Science\\
Victoria University of Wellington, New Zealand \\
Research Associate,
Max Planck Institute for the Science of Human History,\\
Kahlaische Strasse 10 D-07745 Jena, \\
Germany
%\href{http://www.shh.mpg.de/2375/en}{http://www.shh.mpg.de/2375/en}\\
%\href{http://www.shh.mpg.de/252138/dlcepeople}{http://www.shh.mpg.de/252138/dlcepeople}\\~\\
\subsubsection*{Contact}
eml:~\href{mailto: [email protected]}{[email protected]}\\
\textsc{url}:~\href{https://josephbulbulia.netlify.app}{https://josephbulbulia.netlify.app}\\
\textsc{orcid}:~\href{https://orcid.org/0000-0002-5861-2056}{https://orcid.org/0000-0002-5861-2056}
\subsubsection*{Work}
\years{2020$\to$} Professor of Psychology, Victoria University, New Zealand \\
\years{2018$\to$2020} Maclaurin Goodfellow Chair, School of Humanities, The University of Auckland \\
\years{2000$\to$2017} Lecturer$\to$Senior Lecturer$\to$Reader$\to$Professor of Religion, Victoria University
%%\hrule
\subsubsection*{Specialisation}
Religion, evolution, longitudinal well-being, methods.
\subsubsection*{Education}
\noindent
\years{2001}Princeton University, Ph.D. Religion (Program in Religion and Philosophy)\\
\years{1997}Princeton University, M.A. Religion (Program in Religion and Philosophy)\\
\years{1993}Harvard University, M.T.S. Religion\\
\years{1990}Holy Cross, A.B. Philosophy\\
\years{1988$\to$1989} Oxford University: Mansfield College (Matriculated, Visiting Student) (Philosophy, Politics, Economics)
\subsubsection*{Honours}
\noindent
\years{2019}Appointment: Research Associate, \href{https://www.shh.mpg.de/252138/dlcepeople}{Max Planck Institute for the Science of Human History}\\
\years{2018}Appointment: Maclaurin Goodfellow Chair, University of Auckland\\
\years{2016}Victoria University Research Excellence Award\\
\years{2014$\to$}Co-editor: \href{http://www.tandfonline.com/toc/rrbb20/current#.U_V-VoCSzmU}{Religion Brain \& Behavior}\\
\years{2014$\to$2016}President: \href{http://www.iacsr.com/iacsr/Home.html}{International Association for the Cognitive Science of Religion}\\
\years{2012$\to$2014} President Elect: \href{http://www.iacsr.com/iacsr/Home.html}{International Association for the Cognitive Science of Religion}\\
\years{2013$\to$}Editorial Advisory Board: \href{http://www.zygonjournal.org}{Zygon: Journal of Religion and Science }\\
\years{2011$\to$}Editorial Advisory Board: \href{http://www.equinoxpub.com/index.php/JCSR}{Journal of The Cognitive Science of Religion}\\
\years{2010$\to$2012}Secretary General (elected): \href{http://www.iacsr.com/iacsr/Home.html}{International Association for the Cognitive Science of Religion}\\
\years{2010$\to$2014}Editorial Advisory Board: \href{http://www.ibcsr.org/index.php?option=com_content&view=article&id=159&Itemid=89}{\em Religion, Brain \& Behavior}\\
\years{2010$\to$2012}Advisor: \href{http://evolution-of-religion.com/}{The Adaptive Logic of Religious Belief and Behaviour Group}\\
\years{2006$\to$}Distinguished Fellow: \href{http://teo.au.dk/forskning/aktuelt/religion/deltagere/principal/}{Religion Cognition and Culture Group, Aarhus University}\\
\years{2006$\to$2010}Executive Committee (elected): \href{http://www.iacsr.com/Home.html}{International Association for the Cognitive Science of Religion}\\
\years{2006}Guest Professor (G\ae steprofessor): \href{http://teo.au.dk/en/research/current/cognition}{Religion, Cognition, Culture Group: Aarhus University}\\
\years{2000}Faculty Fellowship, Stevenson Hall: Princeton University\\
\years{1996$\to$1999}Assistant Master, Stevenson Hall: Princeton University\\
\years{1996}Melon Fellowship: Princeton University\\
\years{1996}Bowen Merit Award: Princeton University\\
\years{1990}Flatly Award (top philosophy graduate): Holy Cross\\
\years{1990}Phi Beta Kappa: Holy Cross
% \section*{Previous Employment}
% \noindent
% %\setlength{\parskip}{-.5pt}
% % \setlength{\parsep}{-.5pt}
% \years{2017$\rightarrow$}Professor, Religious Studies: Victoria University\\
% \years{2014$\rightarrow$2016}Associate Professor, Religious Studies: Victoria University\\
% \years{2006$\rightarrow$2013}Senior Lecturer, Religious Studies: Victoria University\\
% \years{2006}Guest Professor: Aarhus University, Denmark\\
% \years{2000$\to$2005}Lecturer, Religious Studies: Victoria University\\
% \years{2000}Assistant Professor, Philosophy: New Jersey City University\\
% \years{1997$\to$1999}Assistant Master, Stevenson Hall: Princeton University\\
% \years{1994$\to$1996}Preceptor, Department of Religion: Princeton University\\%} \par%}
%-------
\subsubsection*{Publications}
Selected science publications in \textcolor{Orange}{orange};\\
selected theory publications in \textcolor{Green}{green};\\
selected overviews of the scientific research in \textcolor{Purple}{purple}.
\begin{thebibliography}{99}
\subsubsection*{2021}
\bibitem{benheim2021}
Beheim, B., Atkinson, Q. D., Bulbulia, J., Gervais, W., Gray, R. D., Henrich, J., Lang, M., Monroe, M. W., Muthukrishna, M., Norenzayan, A., Purzycki, B. G., Shariff, A., Slingerland, E., Spicer, R., \& Willard, A. K. (2021).
\newblock Treatment of missing data determined conclusions regarding moralizing gods.
\newblock {\em Nature}, 595(7866), E29--E34.
\href{https://doi.org/10.1038/s41586-021-03655-4}{DOI: 10.1038/s41586-021-03655-4}
\href{https://www.dropbox.com/s/98zks9waw09kmgt/s41586-021-03655-4.pdf?dl=0}{PDF}
\bibitem{johnson2021can}
Bulbulia, J., \& Johnson, D. (2021).
\newblock Can evolution make sense of fear? Lessons from Bonhoeffer and Darwin.
\newblock {\em The Bonhoeffer Legacy: An International Journal}, 7(1--2), 59--79.
\bibitem{cristofori2021brain}
I.~Cristofori, W.~Zhong, S.~Cohen-Zimerman, J.~Bulbulia, B.~Gordon, F.~Krueger,
and J.~Grafman.
\newblock Brain networks involved in the influence of religion on empathy in
male Vietnam War veterans.
\newblock {\em Scientific Reports}, 11(1):1--13, 2021.
\bibitem{deak2021individuals}
C.~K. Deak, M.~D. Hammond, C.~G. Sibley, and J.~Bulbulia.
\newblock Individuals' number of children is associated with benevolent sexism.
\newblock {\em Plos one}, 16(5):e0252194, 2021.
\bibitem{stronge2021religion}
S.~Stronge, J.~Bulbulia, D.~E. Davis, and C.~G. Sibley.
\newblock Religion and the development of character: Personality changes before
and after religious conversion and deconversion.
\newblock {\em Social Psychological and Personality Science}, 12(5):801--811,
2021.
\bibitem{ejova2021awe}
A.~Ejova, J.~Kr{\'a}tk{\`y}, E.~Kundtov{\'a}~Klocov{\'a}, R.~Kundt,
J.~Cig{\'a}n, S.~Kotherov{\'a}, J.~Bulbulia, and R.~D. Gray.
\newblock The awe-prosociality relationship: evidence for the role of context.
\newblock {\em Religion, Brain \& Behavior}, 11(3):294--311, 2021.
\newblock doi:10.1080/2153599X.2021.1940254
\bibitem{ejova2021church}
A.~Ejova, P.~Milojev, E.~L. Worthington~Jr, C.~G. Sibley, and J.~Bulbulia.
\newblock Church attendance buffers against longer-term mental distress.
\newblock {\em Religion, Brain \& Behavior}, 11(2):123--138, 2021.
\bibitem{highland2021national}
\textcolor{Orange}{B.~Highland, E.~L. Worthington, D.~E. Davis, C.~G. Sibley, and J.~A. Bulbulia.
\newblock National longitudinal evidence for growth in subjective well-being
from spiritual beliefs.
\newblock {\em Journal of Health Psychology}, \href{https://www.dropbox.com/s/zkzw3x82kqirwyc/13591053211009280.pdf?dl=0}{LINK}
\newblock \href{https://doi.org/10.1177/13591053211009280}{https://doi.org/10.1177/13591053211009280} }
\bibitem{doi:10.1080/2153599X.2021.1876333}
J.~Bulbulia, W.~J. Wildman, R.~Sosis, and U.~Schjoedt.
\newblock Announcing a new type of manuscript submission: the ``retake''.
\newblock {\em Religion, Brain \& Behavior}, 11(1):1--4, 2021.\href{https://www.tandfonline.com/doi/full/10.1080/2153599X.2021.1876333}{LINK}
\bibitem{shanaah2021hate}
S.~Shanaah, K.~Yogeeswaran, L.~Greaves, J.~A. Bulbulia, D.~Osborne, M.~U.
Afzali, and C.~G. Sibley.
\newblock Hate begets warmth? the impact of an anti-muslim terrorist attack on
public attitudes toward Muslims.
\newblock {\em Terrorism and Political Violence}, pages 1--19, 2021.
\bibitem{van_tongeren_religious_2021}
\textcolor{Orange}{D.~R. Van~Tongeren, C.~N. DeWall, Z.~Chen, C.~G. Sibley, and J.~Bulbulia.
\newblock Religious residue: {Cross}-cultural evidence that religious
psychology and behavior persist following de-identification.
\newblock {\em Journal of Personality and Social Psychology}, 120(2):484--503,
2021.} \href{https://www.dropbox.com/s/dyxqtxui4fqvy5u/VanTongerenEtAL2021JPSP.pdf?dl=0}{LINK}
\bibitem{schjoedt2021celebrating}
U.~Schjoedt, W.~J. Wildman, R.~Sosis, and J.~Bulbulia.
\newblock Celebrating the uninvited.
\newblock {\em Religion Brain \& Behaviour}, 11(2):121--122, 2021.
\bibitem{diwali2020} Piven, S. D., Fischer, R., Shaver, J. H., Mogan, R., Karl, J., Kesberg, R., Richardson, A., Singh, P., Tewari, S., \& {\bf Bulbulia, J.} (in press).
\newblock Kiwi Diwali: A Longitudinal Investigation of Perceived Social Connection Following a Civic Religious Ritual.
\newblock {\em Religion, Brain \& Behavior}.
\subsubsection*{2020}
\bibitem{Sibley1-2020} Sibley, C. G., Afzali, M. U., Satherley, N., Ejova, A., Stronge, S., Yogeeswaran, K., Grimshaw, M., Hawi, D., Mirnajafi, Z., Barlow, F., Milojev, P., Greaves, L. M., Kapeli, S., Zubielevitch, E., Hamley, L., Basabas, M. C., Wu, M. H., Howard, C., Lee, C. H. J., Huang, Y., Lockhart, C., Bahamondes, J., Manuela, S., Milfont, T. L., Perry, R., Sengupta, N. K., Overall, N. C., Shaver, J. H., Troughton, G., Osborne, D., \& {\bf Bulbulia, J.} (in press).
\newblock Prejudice toward Muslims in New Zealand: Insights from the New Zealand Attitudes and Values Study.
\newblock {\em New Zealand Journal of Psychology}.
\bibitem{Slingerland-2020} Slingerland, E., Atkinson, Q. D., Ember, C. R., Sheehan, O., Muthukrishna, M., {\bf Bulbulia, J.}, \& Gray, R. D. (in press).
\newblock Coding Culture: Challenges and Recommendations for Comparative Cultural Databases.
\newblock {\em Evolutionary Human Sciences}.
\bibitem{greaves2020comparative}
\textcolor{Orange}{L.~M. Greaves, A.~Rasheed, S.~D'Souza, N.~Shackleton, L.~D. Oldfield, C.~G.
Sibley, B.~Milne, and J.~Bulbulia.
\newblock Comparative study of attitudes to religious groups in New Zealand
reveals muslim-specific prejudice.
\newblock {\em K{\=o}tuitui: New Zealand Journal of Social Sciences Online},
15(2):260--279, 2020.}
\bibitem{Fraser:2020aa}
G.~Fraser, Y.~Huang, K.~Robinson, M.~S. Wilson, J.~Bulbulia, and C.~G. Sibley.
\newblock New zealand pet owners' demographic characteristics, personality, and
health and wellbeing: More than just a fluff piece.
\newblock {\em Anthrozo{\"o}s}, 33(4):561--578, 2020.
\bibitem{Fraser1-2020} Fraser, G., {\bf Bulbulia, J.}, Greaves, L., Wilson, M. S., \& Sibley, C. G. (in press).
\newblock Coding responses to an open-ended gender measure in a New Zealand national sample.
\newblock {\em Journal of Sex Research}.
\bibitem{Ejova2020} Ejova, A., Milojev, P., Worthington Jr, E. L., {\bf Bulbulia, J.}, \& Sibley, C. G. (in press).
\newblock The Big Six personality traits and mental distress: Dynamic modelling in a population panel study reveals bi-directional relationships involving Neuroticism, Extraversion and Conscientiousness.
\newblock {\em Personality and Social Psychology Bulletin}.
\bibitem{ShaverJ2020} \textcolor{Orange}{Shaver, J. H., Power, E.A., Purzycki, B. G, Watts, J., Sear, R., Shenk, M. K., Sosis, R., \& {\bf Bulbulia J.} (2020).
\newblock Church attendance and alloparenting: an analysis of fertility, social support and child development among English mothers.
\newblock {\em Phil. Trans. R. Soc. B}, 375:20190428.
\href{http://dx.doi.org/10.1098/rstb.2019.0428}{DOI: 10.1098/rstb.2019.0428}}
\bibitem{PSingh2020} Singh, P., Tewari, S., Kesberg, R., Karl, J. A., {\bf Bulbulia, J.}, \& Fischer, R. (2020).
\newblock Time investments in rituals are associated with social bonding, affect and subjective health: a longitudinal study of Diwali in two Indian communities.
\newblock {\em Phil. Trans. R. Soc. B}, 375:20190430.
\href{http://dx.doi.org/10.1098/rstb.2019.0430}{DOI: 10.1098/rstb.2019.0430}
\bibitem{Sibley2} Sibley, C. G., Greaves, L. M., Satherley, N., Wilson, M. S., Overall, N. C., Lee, C. H. J., Milojev, M., {\bf Bulbulia, J.}, Osborne, D., Milfont, T. L., Houkamau, C. A., Duck, I. M., Vickers-Jones, R., \& Barlow , F. K. (2020).
\newblock Effects of the COVID-19 pandemic and nationwide lockdown on trust, attitudes toward government, and well-being.
\newblock {\em American Psychologist}.
\href{http://dx.doi.org/10.1037/amp0000662}{DOI: 10.1037/amp0000662}
\bibitem{Cohen-Zimerman} Cohen-Zimerman, S., Cristofori, I., Zhong, W., {\bf Bulbulia, J.}, Krueger, F., Gordon, B., \& Grafman, J. (2020).
\newblock Neural underpinning of a personal relationship with God and sense of control: A lesion-mapping study.
\newblock {\em Cognitive, Affective, \& Behavioral Neuroscience}.
\href{https://doi.org/10.3758/s13415-020-00787-4}{DOI: 10.3758/s13415-020-00787-4}
\bibitem{Grafman} \textcolor{purple}{ Grafman, J., Cristofori, I., Zhong, W., \& {\bf Bulbulia, J}. (2020).
\newblock The Neural Basis of Religious Cognition.
\newblock {\em Current Directions in Psychological Science}, 1--8.
\href{https://doi.org/10.1177/0963721419898183}{DOI: 10.1177/0963721419898183}}
\subsubsection*{2019}
\bibitem{this} {\bf Bulbulia, J.}, Troughton, G., Highland, B., \& Sibley, C. G. (2019).
\newblock A National-Scale Typology of Orientations to Religion Poses New Challenges for The Cultural Evolutionary Study of Religious Groups.
\newblock {\em Religion, Brain \& Behavior}, 1--13.
\href{https://doi.org/10.1080/2153599X.2019.1678516}{DOI: 10.1080/2153599X.2019.1678516}
\bibitem{this} Stronge, S., Mok, T., Ejova, A., Lee, C., Zubielevitch, E., Yogeeswaran, K., Hawi, D., Osborne,~D., {\bf Bulbulia, J.}, \& Sibley, C. G. (2019).
\newblock Social Media Use is (Weakly) Related to Psychological Distress.
\newblock {\em Cyberpsychology, Behavior, and Social Networking}, 22(9), 604--609.
\href{https://doi.org/10.1089/cyber.2019.0176}{DOI: 10.1089/cyber.2019.0176}
\bibitem{this}
{\bf Bulbulia, J.}, Wildman, W. J., Schjoedt, U., \& Sosis, R. (2019).
\newblock Religion, In praise of descriptive research.
\newblock {\em Religion, Brain \& Behavior}, 9(3), 219--220.
\href{https://doi.org/10.1080/2153599X.2019.1631630}{DOI: 10.1080/2153599X.2019.1631630}
\bibitem{Hawi}
Hawi, D., Osborne, D., {\bf Bulbulia, J.}, \& Sibley, C. G. (2019).
\newblock Terrorism anxiety and attitudes toward Muslims.
\newblock {\em New Zealand Journal of Psychology}, 48(1), 80--89.
\bibitem{Ben:2019aa}
Highland, B. R., Troughton, G., Shaver, J. H., Barrett, J. L., Sibley, C. G., \&
{\bf Bulbulia,~J}. (2019).
\newblock Attitudes to Religion Predict Warmth for Muslims in New Zealand.
\newblock {\em New Zealand Journal of Psychology}, 48(1), 122--132.
\bibitem{Mogan:2019aa}
Mogan, R., {\bf Bulbulia, J.}, \& Fischer, R. (2019).
\newblock Joint action enhances cohesion and positive affect, but suppresses aspects of creativity when combined with shared goals.
\newblock {\em Frontiers in Psychology}, 9.
\bibitem{doi:10.1080/2153599X.2019.1558595}
Schjoedt,~U., Wildman, W.~J., Sosis, R., \& {\bf Bulbulia,~J}. (2019)
\newblock Vikings, virtual reality, and supernatural agents in predictive
minds.
\newblock {\em Religion, Brain \& Behavior}, 9(1), 1--1.
\bibitem{Shaver:2019aa}
Shaver, J.~H., Sibley, C.~G., Sosis,~R., Galbraith,~D., \& {\bf Bulbulia,~J}. (2019).
\newblock Alloparenting and religious fertility: A test of the religious
alloparenting hypothesis.
\newblock {\em Evolution and Human Behavior}, 40(3), 315--324.
\bibitem{Beheim:2019}
Beheim, B., Atkinson, Q., {\bf Bulbulia, J}., Gervais, W. M., Gray, R., Henrich, J., Lang, M., Monroe, M., Muthukrishna, M., Norenzayan, A., Purzycki, B., Shariff, A., Slingerland, E., Spicer, R., \& Willard, A. K. (2019).
\newblock Corrected analyses show that moralizing gods precede complex societies but serious data concerns remain.
\href{https://doi.org/10.31234/osf.io/jwa2n}{DOI: 10.31234/osf.io/jwa2n}
\subsubsection*{2018}
\bibitem{roitto2018oxford}
{\bf Bulbulia., ~J.} (2018).
\newblock Ritual and cooperation.
\newblock In R.~Uro, J.~J. Day, and R.~Roitto (Eds.), {\em The Oxford
Handbook of Early Christian Ritual}, (pp. 95-114). Oxford University Press.
\bibitem{gervais2018analytic}
\textcolor{Orange}{Gervais, W.~M., van Elk, M., Xygalatas, D., McKay, R.~T., Aveyard, M., Buchtel,
E.~E., Dar-Nimrod, I., Klocov{\'a}, E.~K., Ramsay, J.~E., Riekki, T., \& {\bf Bulbulia, J.} (2018).
\newblock Analytic atheism: A cross-culturally weak and fickle phenomenon?
\newblock {\em Judgment and Decision Making}, 13(3), 268--274.}
\href{https://www.dropbox.com/s/0sltfq9mtgzqlxm/Gervaisetal.2018AnalyticAtheism.pdf?dl=0}{PDF}%{https://www.dropbox.com/s/0sltfq9mtgzqlxm/Gervaisetal.2018AnalyticAtheism.pdf?dl=0}
\bibitem{Sosis:2018aa}
Sosis,~R., {\bf Bulbulia, ~J.}, Wildman, W.~J., \& Schjoedt,~U. (2018).
\newblock The fish that got away? human behavioral ecology and the study of religion.
\newblock {\em Religion, Brain \& Behavior}, 8(4), 351--353.
\href{https://doi.org/10.1080/2153599X.2018.1523608}{DOI: 10.1080/2153599X.2018.1523608}
%\href{DOI: 10.1080/2153599X.2018.1523608}{https://doi.org/10.1080/2153599X.2018.1523608}
\bibitem{Shaver:2018asd}
Shaver, J. H., Sibley, C. G., \& {\bf Bulbulia, J.} (2018).
\newblock Are contemporary {C}hristian {N}ew {Z}ealanders committed to peace?
\newblock In Fountain, P. and Troughton, G., (Eds.), {\em Pursuing Peace in
Godzone: Christianity and the Peace Tradition in New Zealand}, (pp. 194--205). Wellington: Victoria University Press.
\href{https://www.dropbox.com/s/2gklu6owh2yypgs/PPIGShaverBulbulia.pdf?dl=0}{PDF}
\bibitem{watts2018christianity}
\textcolor{Orange}{Watts, J., Sheehan, O., {\bf Bulbulia, J.}, Gray, R.~D., \& Atkinson, Q.~D. (2018).
\newblock Christianity spread faster in small, politically structured societies.
\newblock {\em Nature Human Behaviour}, 2(8), 559--564.}
\href{https://doi.org/10.1038/s41562-018-0379-3}{DOI: 10.1038/s41562-018-0379-3}
\bibitem{watts2018}
Watts, J., Gray, R., \& {\bf Bulbulia, J.} (2018).
\newblock The New Collaborative Scientific Study of Religious History.
\newblock In A. K. Petersen, I. S. Gilhus, L. H. Martin, J. S. Jensen, and J. Sorensen (Eds.), {\em Evolution, Cognition, and the History of Religion: A New Synthesis}, (pp. 62--80). BRILL.
\href{https://www.dropbox.com/s/wr5nq9ygiybspfp/WattsGrayAndBulbulia.pdf?dl=0}{PDF}
\bibitem{wildman2018god}
Wildman, W.~J., Schjoedt, U., Sosis, R., \& {\bf Bulbulia, J.} (2018).
\newblock ``God is watching you''{\ldots} and might be influencing your brain, too.
\newblock {\em Religion, Brain \& Behavior}, 8(3), 263--264.
\href{https://doi.org/10.1080/2153599X.2018.1486619}{DOI:~10.1080/2153599X.2018.1486619}
\bibitem{doi:0}
Zhong, W., Krueger, F., Wilson, M., {\bf Bulbulia, J.}, \& Grafman, J. (2018).
\newblock Prefrontal brain lesions reveal magical ideation arises from enhanced religious experiences.
\newblock {\em Peace and Conflict: Journal of Peace Psychology}, 24(2), 245--249.
\href{http://dx.doi.org/10.1037/pac0000336}{DOI: 10.1037/pac0000336}
\subsubsection*{2017}
\bibitem{10.1080/2153599X.2017.1314409}
Sosis, R., Schjoedt, U., {\bf Bulbulia, J}., \& Wildman, W. J. (2017).
\newblock Wilson’s 15-year-old cathedral.
\newblock {\em Religion, Brain \& Behavior}, 7(2), 95--97.
\href{https://doi.org/10.1080/2153599X.2017.1314409}{DOI: 10.1080/2153599X.2017.1314409}
\bibitem{doi:1}
Gervais, W. M, Xygalatas, D., McKay, R.T., van Elk, M., Buchtel, E., Aveyard, M., Schiavone S. R., Dar-Nimrod, I., Svedholm-H{\"a}kkinen, A. M., Riekki, T., Kundtov{\'a}, E. K., Ramsay, J. E., \& {\bf Bulbulia, J.} (2017).
\newblock Global evidence of extreme intuitive moral prejudice against atheists.
\newblock {\em Nature Human Behaviour}, 1, Article number: 0151.
\href{https://doi.org/10.1038/s41562-017-0151}{DOI: 10.1038/s41562-017-0151}
\bibitem{doi:2}
Shaver, J., Fraser, G.,\& {\bf Bulbulia. J.} (2017).
\newblock Charismatic Signaling: How religion stabilizes cooperation and entrenches inequality.
\newblock In J. Liddle and T. Shackelford, (Eds.), {\em Oxford Handbook of Evolutionary Perspectives on Religion}, Oxford: Oxford University Press.
\newblock \href{https://doi.org/10.1093/oxfordhb/9780199397747.013.17}{DOI:~10.1093/oxfordhb/9780199397747.013.17}
\href{https://www.dropbox.com/s/bnxz7r89jdt93uj/oxfordhb-9780199397747-e-17.pdf?dl=0}{PDF}
% Link: http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199397747.001.0001/oxfordhb-9780199397747-e-17?rskey=uxDpbJ&result=1
\bibitem{bulbulia2017honest}{\bf Bulbulia, J.}, Fraser, G., Watts, J., \& Shaver, J. H. (2017).
\newblock Can honest signaling theory clarify religion's role in the evolution of social inequality?
\newblock {\em Religion, Brain \& Behavior}, 7(4), 285--288.
\bibitem{bulbulia2017anthropology}{\bf Bulbulia, J.} (2017).
\newblock Anthropology: Tradition's hidden economy.
\newblock {\em Nature Human Behaviour}, 1, Article number: 0070.
\href{https://doi.org/10.1038/s41562-017-0070}{DOI: 10.1038/s41562-017-0070}
\bibitem{mogan2017synchrony}\textcolor{Purple}{
Mogan, R., Fischer, R., \& {\bf Bulbulia, J.} (2017).
\newblock To be in synchrony or not? A meta-analysis of synchrony's effects on behavior, perception, cognition and affect.
\newblock {\em Journal of Experimental Social Psychology}, 72, 13--20.}
\bibitem{shaver2017news}
Shaver J. H, Sibley C. G, Osborne D., \& {\bf Bulbulia J.} (2017).
\newblock News exposure predicts anti-Muslim prejudice.
\newblock {\em PLoS ONE} 12(3), pe0174606.
\href{https://doi.org/10.1371/journal.pone.0174606}{DOI:~10.1371/journal.pone.0174606}
\bibitem{Sibley:2016}
Sibley C.~G.,~Robertson, A., Osborne, D., Huang, Y., Milojev, P., Greaves, L., Houkamau, C.~A., {\bf Bulbulia, J.}, \& Barlow,~F. K. (2017).
\newblock Bias and tracking accuracy in voting projections using the {N}ew
{Z}ealand {A}ttitudes and {V}alues Study.
\newblock {\em Political Science}, 69(1), 16--34.
\href{https://doi.org/10.1080/00323187.2017.1321589}{DOI:~10.1080/00323187.2017.1321589}
\bibitem{doi:10.1080/2153599X.2017.1385202}
Sosis, R., Wildman, W.~J., {\bf Bulbulia, J.}, \& Schjoedt, U. (2017).
\newblock Hilbert problems in the scientific study of religion.
\newblock {\em Religion, Brain \& Behavior}, 7(4), 277--278.
%\href{https://www.dropbox.com/s/70e2988hk66wjr7/Hilbert%20Problems%20in%20the%20scientific%20study%20of%20religion.pdf?dl=0}{DOI:~10.1080/2153599X.2017.1385202}
\bibitem{doi:10.1080/2153599X.2017.1267953}
Sosis, R., {\bf Bulbulia, J.}, Wildman, W.~J., \& Schjoedt, U. (2017).
\newblock Religion, Brain \& Behavior's seventh year.
\newblock {\em Religion, Brain \& Behavior}, 7(1), 1--2.
\href{https://doi.org/10.1080/2153599X.2017.1267953}{DOI: 10.1080/2153599X.2017.1267953}
%\href{https://www.dropbox.com/s/yyprgw72lzs1p04/Religion%20Brain%20Behavior%20s%20seventh%20year.pdf?dl=0}{PDF}
\href{https://www.dropbox.com/s/ywxsyerusw7s59w/tandf_rrbb207_1.bib?dl=0}{BIB}
\bibitem{doi:10.1080/2153599X.2017.1368208}
Wildman, W. J., {\bf Bulbulia J.}, Sosis R., \& Schjoedt, U. (2017).
\newblock Models, simulations, abstractions, and insights.
{\em Religion, Brain \& Behavior}, 7(3), 175--177.
%\href{https://www.dropbox.com/s/c5hhuzr5tkhkc49/Models%20simulations%20abstractions%20and%20insights.pdf?dl=0}{PDF}
\bibitem{doi.org/10.1016/j.neuropsychologia.2017.04.009}
Zhong, W., Cristofori, I., {\bf Bulbulia, J.}, Krueger, F., \& Grafman, J. (2017).
\newblock Biological and cognitive underpinnings of religious fundamentalism.
\newblock {\em Neuropsychologia}, 100, 18--25.
\href{https://doi.org/10.1016/j.neuropsychologia.2017.04.009}{DOI:~10.1016/j.neuropsychologia.2017.04.009}
\bibitem{doi.org/10.1007/s10508-017-1047-9}
Greaves, L. M., Barlow, F. K., Lee, C., Matika, C. M., Wang, W., Lindsay, C., Case, C., Sengupta, N. K., Huang, Y., Cowie, L. J., Stronge, S., Storey, M., Souza, L., Manuela, S., Hammond, M. D., Milojev, P., Townrow, C. S., Muriwai, E., Satherly, N., Fraser, G., West-Newman, T., Houkamau, C., {\bf Bulbulia, J.}, Osborne, D., Wilson, M. S., \& Sibley, C. G. (2017).
\newblock Corrigendum to: The Diversity and Prevalence of Sexual Orientation Self-Labels in a New Zealand National Sample (Greaves et al., 2017).
\newblock {\em Archives of Sexual Behavior}, 46, 2209–2210.
\href{https://doi.org/10.1007/s10508-017-1063-9}{DOI: 10.1007/s10508-017-1063-9}
\subsubsection*{2016}
%%% LARA PUB
%%% SHAVER CHAPTER
%%% OTHER CHAPTER
%%%
\bibitem{Wildman:2016}
Wildman, W.J., Sosis, R., {\bf Bulbulia, J}., \& Spezio, M. L. (2016).
\newblock Critical Self-Correction
\newblock {\em Religion, Brain \& Behavior}, 6(2), 93--94.
\href{https://doi.org/10.1080/2153599X.2016.1171062}{DOI: 10.1080/2153599X.2016.1171062}
\bibitem{Bulbulia:2016aa}
{\bf Bulbulia, J.}, Spezio, M., Sosis, R., \& Wildman, W. J. (2016).
\newblock Standards for Publishing in Religion, Brain \& Behavior.
\newblock {\em Religion, Brain \& Behavior}, 6(4), 275--277.
\href{http://dx.doi.org/10.1080/2153599X.2016.1227123}{DOI:~10.1080/2153599X.2016.1227123 }
\bibitem{greaves2016diversity}
Greaves, L. M., Barlow, F. K., Lee, C., Matika, C. M., Wang, W., Lindsay, C., Case, C., Sengupta, N. K., Huang, Y., Cowie, L. J., Stronge, S., Storey, M., Souza, L., Manuela, S., Hammond, M. D., Milojev, P., Townrow, C. S., Muriwai, E., Satherly, N., Fraser, G., West-Newman, T., Houkamau, C., {\bf Bulbulia, J.}, Osborne, D., Wilson, M. S., \& Sibley, C. G. (2017).
\newblock The diversity and prevalence of sexual orientation self-labels in a New Zealand national sample.
\newblock {\em Archives of Sexual Behavior}, 46(5), 1325-1336.
%\href{}{PDF}
\href{https://doi.org/10.1007/s10508-016-0857-5}{DOI: 10.1007/s10508-016-0857-5}
\href{https://www.dropbox.com/s/gj05gm54am7tos7/2016.sxorient.bib?dl=0}{BIB}
\bibitem{Shaver:2016b}
Shaver, J.~H., \& {\bf Bulbulia, J.} (2016).
\newblock Signaling theory and religion.
\newblock In N. K Clements (Ed.), {\em Religion: Mental Religion}, (pp. 101--117). Macmillan
Interdisciplinary Handbooks.
\href{https://www.dropbox.com/s/3csna1dy4vbib92/Signaling_Theory_and_Religion.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/p4b2hzshwuk2q9u/2016.Sh.bul.STR.bib?dl=0}{BIB}
\bibitem{Shaver:2016a}
\textcolor{Orange}{Shaver, J.~H., Troughton, G., Sibley, C.~G., \& {\bf Bulbulia, J.} (2016).
\newblock Religion and the unmaking of prejudice toward muslims: Evidence from
a large national sample.
\newblock {\em PLoS ONE}, 11(3),1--25.}
\href{https://www.dropbox.com/s/jw4r962hchojfvr/journal.pone.0150209.PDF?dl=0}{PDF} %\href{https://www.dropbox.com/s/p05yrm3s8u33ngr/10.1371%252Fjournal.pone.0150209.bib?dl=0}{BIB}
\bibitem{doi:10.1080/2153599X.2015.1109787}
Sosis, R., Spezio, M.~L., {\bf Bulbulia, J.}, \& Wildman, W.~J. (2016). Editorial:
\newblock The peer reviewer dilemma: how to appreciate the underappreciated.
\newblock {\em Religion, Brain \& Behavior}, 6(1), 1--3.
%\href{https://www.dropbox.com/s/h2h90rcmycc5dco/2153599x%252E2015%252E1109787.pdf?dl=0}{PDF}
%\href{}{BIB}
\bibitem{spezio2016religion}
Spezio, M.~L., Wildman, W.~J., Sosis, R., \& {\bf Bulbulia, J.} (2016). Editorial:
\newblock Religion and emotion.
\newblock {\em Religion, Brain \& Behavior}, 5(3), 185--187.
%\href{https://www.dropbox.com/s/8zkvkrknib497tg/Religion%20and%20Emotion.pdf?dl=0}{PDF} %\href{https://www.dropbox.com/s/1oam4k5mw8gq8lr/Spez.ReligionEmotion.bib?dl=0}{BIB}
\bibitem{Watts:2016}
Watts, J., {\bf Bulbulia, J.}, Gray, R., \& Atkinson, Q.~D. (2016).
\newblock Clarity and causality needed in claims about big gods.
\newblock {\em Behavior and Brain Sciences}, 39, 41--42.
\href{https://www.dropbox.com/s/xag576vua0i80gq/2016WattsBBSResponseBigGods.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/71veqhxsq84ufao/wattsEtAl.2016.BigGodsResponse.bib?dl=0}{BIB}
\bibitem{Watts:2016aa}
\textcolor{Orange}{Watts, J., Sheehan, O., Atkinson, Q.~D., {\bf Bulbulia, J.}, \& Gray, R. (2016).
\newblock Ritual human sacrifice promoted and sustained the evolution of
stratified societies.
\newblock {\em Nature}, 532, 228--231.}
\href{https://www.dropbox.com/s/e5avan4ex1nkgi6/nature17159.pdf?dl=0}{PDF}
\subsubsection*{2015}
\bibitem{Satherley:2015}
Satherley, N., Milojev, P., Greaves, L. M., Huang, Y., Osborne, D., {\bf Bulbulia, J}., \& Sibley, C. G. (2015).
\newblock Scales for Sense of Belonging and Support [Database record].
\newblock APA PsycTests.
\href{https://doi.org/10.1037/t41438-000}{DOI: 10.1037/t41438-000}
\bibitem{Bulbulia:2014a}
\textcolor{Orange}{{\bf Bulbulia, J.}, Troughton, G., Greaves, L., Milfont, T.~L., Botera, C., Gray, R., \& Sibley,~C.~G. (2015).
\newblock To burn or to save? the opposing functions of reading scripture on
environmental intentions.
\newblock {\em Religion, Brain \& Behavior}, 6(4), 278--289. }
%\href{https://www.dropbox.com/s/vl9bz550cc46s9g/2153599x%252e2015%252e1026926.pdf?dl=0}{PDF}
%\href{https://www.dropbox.com/s/rolzpckwfegaceb/205.bulbulia.burnsave.bib?dl=0}{BIB}
\bibitem{Bulbulia:2015aa}
{\bf Bulbulia, J.}, Shaver, J., Greaves, L., Sosis, R., \& Sibley, C.~G. (2015).
\newblock Religion and parental cooperation: an empirical test of {S}lone's
sexual signaling model.
\newblock In Slone, D. and Van~Slyke, J., (Eds.), {\em The Attraction of
Religion: A Sexual Selectionist Account}, (pp. 29--62). Bloomsbury Press.
%\href{https://www.dropbox.com/s/6txkkawrhpirwcg/2014.OCT.30.Chapter2%28Bulbulia%20et%20al%29_w_comments%20copy.pdf?dl=0}{PDF} %\href{https://www.dropbox.com/s/7k3pm0nmtriso3b/2015.Bul.Parenting.bib?dl=0}{BIB}
\bibitem{doi:10.1080/2153599X.2015.1084470}
{\bf Bulbulia, J.}, Wildman, W.~J., Sosis, R., \& Spezio, M.~L. (2015). Editorial:
\newblock What are "the Hilbert Problems'' in the study of religion?
\newblock {\em Religion, Brain \& Behavior}, 5(4), 263--265.
\href{https://doi.org/ 10.1080/2153599X.2015.1084470}{DOI:~10.1080/2153599X.2015.1084470}
%\href{https://www.dropbox.com/s/h5t7fexoibjaghm/2153599x%252E2015%252E1084470.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/5dp90zz5c6y6lif/tandf_rrbb205_263.bib?dl=0}{BIB}
\bibitem{Cristofori:2016aa}
\textcolor{Orange}{Cristofori, I., {\bf Bulbulia, J.}, Shaver, J.~H., Wilson, M., Krueger, F., \& Grafman, J. (2016).
\newblock Neural correlates of mystical experience.
\newblock {\em Neuropsychologia}, 80, 212--220. }
%\href{https://www.dropbox.com/s/uycwz4jt4d4n8tk/Cristofori%20et%20al_Neuropsychologia2016.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/xyrhei2v3fay7wi/neuralcorrelates.bib?dl=0}{BIB}
\bibitem{Greaves:2015ab}
Greaves, L.~M., Milojev, P., Huang, Y., Stronge, S., Osborne, D., {\bf Bulbulia, J.}, Grimshaw, M., \& Sibley, C.~G. (2015).
\newblock Regional differences in the psychological recovery of Christchurch residents following the 2010/2011 earthquakes: A longitudinal study.
\newblock {\em PLoS ONE}, 10(5), e0124278.
\href{https://www.dropbox.com/s/5rb1nqsw4eiks05/journal.pone.0124278.pdf?dl=0}{PDF} %\href{https://www.dropbox.com/s/31mqt3g816p55yi/info%253Adoi%252F10.1371%252Fjournal.pone.0124278.bib?dl=0}{BIB}
\bibitem{Graeves:2015aa}
Greaves, L.~M., Cowie, L., Fraser, G., Muriwai, E., Zdrenka, M., Huang, Y., Milojev, P., Osborne, D., {\bf Bulbulia, J.}, Wilson, M.~S., Liu, J., Clouston, A., \& Sibley, C.~G. (2015).
\newblock Regional differences and similarities in the personality of {N}ew {Z}ealanders.
\newblock {\em New Zealand Journal of Psychology}, 44(1), 4--16.
%\href{https://www.dropbox.com/s/pwv3wtr3w0le0wx/NZJP-Volume-44-No%20-1-20151.pdf?dl=0}{PDF}
\bibitem{Hoverd:2015aa}
Hoverd, W.~J., {\bf Bulbulia, J.}, Partow, N., \& Sibley, C.~G. (2015).
\newblock Forecasting religious change: a Bayesian model predicting
proportional Christian change in New Zealand.
\newblock {\em Religion, Brain \& Behavior}, 5(1), 15--23.
%\href{https://www.dropbox.com/s/1q7aqj7fs1vw4po/2153599x%252E2013%252E824497.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/av6jrhlhgdgcz86/tandf_rrbb205_15.bib?dl=0}{BIB}
\bibitem{Satherley:2015a}
Satherley, N., Milojev, P., Greaves, L.~M., Huang, Y., Osborne, D., {\bf Bulbulia, J.}, \& Sibley, C.~G. (2015).
\newblock Demographic and psychological predictors of panel attrition: Evidence
from the {N}ew {Z}ealand {A}ttitudes and {V}alues {S}tudy.
\newblock {\em PLoS ONE}, 10(3), e0121950.
\href{https://www.dropbox.com/s/t5j2oyalr97f5aj/journal.pone.0121950.pdf?dl=0}{PDF} %\href{https://www.dropbox.com/s/ubzbz0xp6cwvwe0/info%253Adoi%252F10.1371%252Fjournal.pone.0121950.bib?dl=0}{BIB}
\bibitem{Spezion:2015}
Spezio, M.~L., {\bf Bulbulia, J.}, Wildman, W.~J., \& Sosis, R. (2015). Editorial:
\newblock Religion, {SCAN}, and developing standards of inquiry.
\newblock {\em Religion, Brain \& Behavior}, 5(3),179--181.
%\href{https://www.dropbox.com/s/ikn5lqnyht5tz2s/2153599x%252E2015%252E1053690.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/xdtdqvw7m93rfzj/2015.speziobul.bib?dl=0}{BIB}
\bibitem{Sibley:2015aa}
Sibley, C.~G., \& {\bf Bulbulia, J.} (2015).
\newblock Charity explains differences in life satisfaction between religious
and secular New Zealanders.
\newblock {\em Religion, Brain \& Behavior}, 5(2), 91--100.
%\href{https://www.dropbox.com/s/1vhnxqmw9hdlf5i/use.2153599x%252E2014%252E899509.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/1vq0u6b94sn50ut/use.tandf_rrbb205_91-2.bib?dl=0}{BIB}
\bibitem{Watts:2015aa}
\textcolor{Orange}{Watts, J., Greenhill, S.~J., Atkinson, Q.~D., Currie, T.~E., {\bf Bulbulia, J.}, \&
Gray, R.~D. (2015).
\newblock Broad supernatural punishment but not moralizing high gods precede
the evolution of political complexity in Austronesia.
\newblock {\em Proceedings of the Royal Society of London B: Biological
Sciences}, 282(1804), 20142556.
\href{https://doi.org/10.1098/rspb.2014.2556}{DOI:10.1098/rspb.2014.2556}}
\href{https://www.dropbox.com/s/kx3br0orme0dztr/20142556.full.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/xlj936kpv3obvhc/rspb20142556supp1.pdf?dl=0}{SUPPLEMENT}
\href{https://www.dropbox.com/s/jpwqqqtqiomvnqe/broad-supernatural-punishment-but-not-moralizing-high-gods-precede-the-evolution-of-political-complexity-in-austronesia.bib?dl=0}{BIB}
\bibitem{10.1371/journal.pone.0136783}
\textcolor{Orange}{Watts, J., Sheehan, O., Greenhill, S.~J., Gomes-Ng, S., Atkinson, Q.~D.,
{\bf Bulbulia, J.}, \& Gray, R.~D. (2015).
\newblock Pulotu: Database of Austronesian supernatural beliefs and practices.
\newblock {\em PLoS ONE}, 10(9):e0136783.}
\href{https://www.dropbox.com/s/x7nw0co4clo1lkb/journal.pone.0136783.pdf?dl=0}{PDF}
% \href{https://www.dropbox.com/s/upsqwbn0bv3pgqd/info%253Adoi%252F10.1371%252Fjournal.pone.0136783.bib?dl=0}{BIB}
\bibitem{Wildman:2015aa}
Wildman, W.~J., Sosis, R., Spezio, M.~L., \& {\bf Bulbulia, J.} (2015). Editorial:
\newblock The emerging psychology of religion.
\newblock {\em Religion, Brain \& Behavior}, 5(2), 89--90.
\href{https://www.dropbox.com/s/wv6tlmlec5r5lhp/2015.WildmanEtAl.EmergingPysch.Rbb.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/or9ty6xkr8zsy0r/tandf_rrbb205_89.bib?dl=0}{BIB}
\subsubsection*{2014}
\bibitem{Botero:2014aa}
\textcolor{Orange}{Botero, C.~A., Gardner, B., Kirby, K.~R., {\bf Bulbulia, J.}, Gavin, M.~C., \& Gray,
R.~D. (2014).
\newblock The ecology of religious beliefs.
\newblock {\em Proceedings of the National Academy of Sciences},
111(47), 16784--16789. }
\href{http://www.pnas.org/content/early/2014/11/05/1408701111.abstract?sid=852fbd5d-6be1-4888-8126-1e4ef7c787bd}{PDF} \href{https://www.dropbox.com/s/4kmuja16tvei26q/2014.boteroEtAl.bib?dl=0}{BIB}
\bibitem{Bulbulia:2014aa}
{\bf Bulbulia, J.}, Wilson, M.~S., \& Sibley, C.~G. (2014).
\newblock Thin and thinner: Hypothesis-driven research and the study of humans.
\newblock {\em Numen}, 61(2-3), 166--181.
\href{https://www.dropbox.com/s/7gj657l635ap143/NU_061_02-03_166-181.pdf?dl=0}{PDF}
\href{https://www.dropbox.com/s/p8t8m02g13fbubu/15685276-12341314.bib?dl=0}{BIB}
\bibitem{Bulbulia:2014oq}
\textcolor{Purple}{
{\bf Bulbulia, J.} (2014).
\newblock The arts transform the cognitive science of religion.
\newblock {\em Journal for the Cognitive Science of Religion}, 1(2), 141--160.}
\newblock \href{https://www.dropbox.com/s/gbz16yryorq7egu/pdfgt3Nzf.pdf}{PDF}
\href{https://www.dropbox.com/s/3l1bebxg4cln7q0/2014.BulbuliaArts.bib?dl=0}{BIB}
% 52
\bibitem{Bulbulia:2014ab}
{\bf Bulbulia, J.} (2014).
\newblock Review: Changing minds: Religion and cognition through the ages.
\newblock {\em Journal for the Cognitive Science of Religion}, 2(1), 75--78.
\href{https://www.dropbox.com/s/8sttr3tn8j4zwo5/2014.bulbulia.article.review.Czachesz.Biro.pdf?dl=0}{PDF}
\bibitem{Fischer:2014aa}
Fischer, R., Xygalatas, D., Mitkidis, P., Reddish, P., Tok, P., Konvalinka, I.,
\& {\bf Bulbulia, J.} (2014).
\newblock The fire-walker's high: Affect and physiological responses in an
extreme collective ritual.
\newblock {\em PLoS ONE}, 9(2), e88355.
%\href{http://www.plosone.org/article/citationList.action?articleURI=info%3Adoi%2F10.1371%2Fjournal.pone.0088355}{PDF} \href{https://www.dropbox.com/s/35bwxeutxyxwnbj/info%253Adoi%252F10.1371%252Fjournal.pone.0088355.bib}{BIB}
\bibitem{Milojev:2014aa}
Milojev, P., Osborne, D., Greaves, L., {\bf Bulbulia, J.}, Wilson, M.~S., Davies, C., Liu, J., \& Sibley, C.~G. (2014).
\newblock Right-wing authoritarianism and social dominance orientation predict
different moral signatures.
\newblock {\em Social Justice Research}, 27(2), 149--174.
%\href{https://www.dropbox.com/s/6yzbwpft6txa3e1/Milojev%20et%20al%20%28in%20press%29%20SJR%20Moral%20Signatures.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/u3pqsmu7rpk5h04/10.1007%252Fs11211-014-0213-7-2.bib?dl=0}{BIB}
\bibitem{Reddish:2014aa}
Reddish, P., {\bf Bulbulia, J.}, \& Fischer, R. (2014).
\newblock Does synchrony promote generalized prosociality?
\newblock {\em Religion, Brain \& Behavior}, 4(1), 3--19.
%\href{https://www.dropbox.com/s/7811smmlz0dwh3a/ReddishSynchrony_PUBLISHED_2153599x%252E2013%252E764545.pdf}{PDF} \href{https://www.dropbox.com/s/83wv51glxm7md9r/2014.SynchGeneral.bib?dl=0}{BIB}
\bibitem{Sibley:2014aa}
\textcolor{Orange}{Sibley, C.~G., \& {\bf Bulbulia, J.} (2014).
\newblock How do religious identities and basic value orientations affect each
other over time?
\newblock {\em International Journal for the Psychology of Religion},
24(1), 64--76.
\href{https://doi.org/10.1080/10508619.2013.771600}{DOI: 10.1080/10508619.2013.771600}}
\href{https://www.dropbox.com/s/h7gqktxwjc5z11p/10508619.2013.pdf}{PDF}
\href{https://www.dropbox.com/s/9blm1acufkd2jrv/tandf_hjpr2024_64.bib}{BIB}
\bibitem{Troughton:2014}
Troughton, G., {\bf Bulbulia, J.}, \& Sibley, C.~G. (2014).
\newblock Strength of religion and the future of the churches.
\newblock {\em Stimulus: The New Zealand Journal of Christian Thought and Practice}, 21(2), 26.
%\href{https://www.dropbox.com/s/vtkfr9toczn31s1/Stimulus%20Vol%2021%20Is%202_Troughton_Bulbulia_Sibley.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/kizniqa6y4n3te0/2014.Trought.Strength.bib?dl=0}{BIB}
\bibitem{Wilson:2014aa}
Wilson, M.~S., {\bf Bulbulia, J.}, \& Sibley, C.~G. (2014).
\newblock Differences and similarities in religious and paranormal beliefs: a
typology of distinct faith signatures.
\newblock {\em Religion, Brain \& Behavior}, 4(2), 104--126.
%\href{https://www.dropbox.com/s/1s4tlub7z7s485u/2153599x%252E2013%252E779934.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/bjkbwycwsqj0tdi/tandf_rrbb204_104.bib?dl=0}{BIB}
\subsubsection*{2013}
\bibitem{Bulbulia:2013}
{\bf Bulbulia, J.}, Osborne, D., \& Sibley, C.~G. (2013).
\newblock Moral foundations predict religious orientations in {N}ew {Z}ealand.
\newblock {\em PLoS ONE}, 8(12), e80224.
%\href{http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0080224}{Open Access} \href{https://www.dropbox.com/s/07ccqrmy5j84g96/10.1371%252Fjournal.pone.0080224-2.bib}{BIB}
\bibitem{Bulbulia:2013ac}
\textcolor{Orange}{{\bf Bulbulia, J.}, Xygalatas, D., Schjoedt, U., Fondevila, S., Sibley, C.~G., \&
Konvalinka, I. (2013).
\newblock Images from a jointly-arousing collective ritual reveal affective
polarization.
\newblock {\em Frontiers in Psychology}, 4(960). }
\href{http://www.frontiersin.org/evolutionary_psychology_and_neuroscience/10.3389/fpsyg.2013.00960/abstract}{Open Access}
%\href{https://www.dropbox.com/s/9c83qghal5bobti/67850_Bulbulia_Images%20From%20a%20Jointly-Arousing%20Collective%20Ritual%20Reveal%20Affective%20Polarization.bib}{BIB}
\bibitem{Bulbulia:2013ad}
\textcolor{Purple}{{\bf Bulbulia, J.}, Geertz, A., Atkinson, Q.~D., Cohen, E., Evans, N., Francois, P.,
Gintis, H., Gray, R., Henrich, J., Jordan, F., Norenzayan, A., Richerson,
P.~J., Slingerland, E., Turchin, P., Whitehouse, H., Widlok, T., \& Wilson,
D. (2013).
\newblock The cultural evolution of religion.
\newblock In P.~J. Richerson and M. Christiansen (Eds.), {\em Cultural
Evolution} (pp. 381--404). MIT Press, Cambridge, MA. \newblock [distribution for private use by permission of the \href{http://www.esforum.de}{Ernst Str\"ungmann Forum} and \href{https://mitpress.mit.edu/books/cultural-evolution}{MIT Press}].}\newblock %\href{https://www.dropbox.com/s/iycggz15nr91r9n/SFR12_20%20Bulbulia%20et%20al_Author%20Copy.pdf}{PDF} \href{https://www.dropbox.com/s/g3jw8iz2x3qo5e0/jbetal_CEVOREL.bib}{BIB}
\bibitem{Bulbulia:2010fk}
{\bf Bulbulia, J.} (2013).
\newblock Toward an evolutionary cognitive science of mental cultures: lessons
from {F}reud.
\newblock In D. Xygalatas and W. McCorkle (Eds.), {\em Mental Culture:
Towards a Cognitive Science of Religion} (pp. 110--127). Equinox,
London.
\newblock %\href{https://www.dropbox.com/s/j4yowmtnqlkh8qt/ACU-Mental%20Culture-PROOF1-08-Bulbulia%20copy.pdf?dl=0}{PDF}
\bibitem{Bulbulia:2013a}
{\bf Bulbulia, J.}, Atkinson, Q.~D., Gray, R., \& Greenhill, S. (2013).
\newblock Why do religious cultures evolve slowly? The cultural evolution of cooperative calling and the historical study of religions. In I. Czachesz and R. Uro (Eds.), {\em Mind, Morality and Magic: Cognitive Science Approaches in Biblical Studies} (pp. 197--212).
\newblock Acumen Publishing, Durham, UK.
\href{https://www.dropbox.com/s/jcf4k7e7mex62ax/2013.BulEtAl.why_rel_evo_slow.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/vva6osofhrjsd5r/2014.BulEtAl_WhyRelCultSlow.bib?dl=0}{BIB}
\bibitem{Bulbulia:2011kx}
{\bf Bulbulia, J.} (2013).
\newblock Why costly-signalling models of religion require cognitive
psychology.
\newblock In A.~Geertz (Ed.), {\em Origins of Religion, Cognition, and
Culture} (pp. 71--81). Equinox, London. %\href{https://www.dropbox.com/s/0nanocrstu2cdvd/Geertz%20first%20proof%2014-08-13%20chapter%202.pdf}{PDF} \href{https://www.dropbox.com/s/8nc3e0cvpxurm3l/2013.Bulbulia.whyslowly.bib?dl=0}{BIB}
\bibitem{Bulbulia:2013b}
{\bf Bulbulia, J.}, Atkinson, Q.~D., Greenhill, S., \& Gray, R. (2013).
\newblock First shots fired for the phylogenetic revolution in religious
studies.
\newblock {\em Cliodynamics: The Journal of Theoretical and Mathematical
History}, 4(1):128--133.
\href{http://escholarship.org/uc/item/05n4z9w8#page-27}{PDF} \href{https://www.dropbox.com/s/hyj3r479lot4u54/2013.Bulbulia.First.Shots..bib?dl=0}{BIB}
\bibitem{Bulbulia:2013qy}
\textcolor{Green}{
{\bf Bulbulia, J.} (2013).
\newblock Bayes and the evolution of religious belief.
\newblock In J.~P. Moreland, K.~A. Sweis, and C.~V. Meister (Eds.), {\em
Debating Christian Theism} (pp. 223--241). Oxford University
Press. } \href{https://www.dropbox.com/s/4r51rkx6hs45xv5/Bayes_Bulbulia.pdf}{PDF} \href{https://www.dropbox.com/s/bvynsi68no2bywu/2013.bulbulia.bayes.bib?dl=0}{BIB}
%\href{http://global.oup.com/academic/product/debating-christian-theism-9780199755431?cc=nz&lang=en}{to book}
\bibitem{Fischer:2013fk}
Fischer, R., Callander, R., Reddish, P., \& {\bf Bulbulia, J.} (2013).
\newblock How do rituals affect cooperation?
\newblock {\em Human Nature}, 24(2):115--125.
\href{https://www.dropbox.com/s/isz3hm6j46x15jv/10.1007_s12110-013-9167-y.pdf}{PDF} %\href{https://www.dropbox.com/s/rs1chyl87faffz9/10.1007%252Fs12110-013-9167-y.bib?dl=0}{BIB}
\bibitem{hoverd2013does}
Hoverd, W.~J., {\bf Bulbulia, J.}, \& Sibley, C.~G. (2013).
\newblock Does poverty predict religion?
\newblock {\em Religion, Brain \& Behavior}, 3(3):185--200. \href{https://www.dropbox.com/s/vfc656k50e30u1c/pov_published.pdf}{PDF} \href{https://www.dropbox.com/s/sqjldzyphdvxi6q/2013.pov.bib?dl=0}{BIB}
\bibitem{Reddish:2013aa}
\textcolor{Orange}{Reddish, P., Fischer, R., \& {\bf Bulbulia, J.} (2013).
\newblock Let's dance together: Synchrony, shared intentionality and
cooperation.
\newblock {\em PLoS ONE}, 8(8):e71182.}
\href{https://www.dropbox.com/s/qhcenknup2xtetx/journal.pone.0071182.pdf}{PDF} %\href{http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0071182}{BIB} %href{http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0071182}{RIS}
\bibitem{Sibley:2013aa}
Sibley, C.~G., \& {\bf Bulbulia, J.} (2013).
\newblock The proportion of religious residents predicts the values of
nonreligious neighbors: evidence from a national sample.
\newblock {\em Religion, Brain \& Behavior}, 3(3):219--232.
\href{https://doi.org/10.1080/2153599X.2012.739740}{DOI: 10.1080/2153599X.2012.739740}
%\href{https://www.dropbox.com/s/0exhqk26e7esbhq/2153599x%252E2012%252E739740.pdf}{PDF}
%\href{https://www.dropbox.com/s/0exhqk26e7esbhq/2153599x%252E2012%252E739740.pdf}{BIB}
\bibitem{Schjoedt:2013aa}
Schjoedt, U., S{\o}rensen, J., Nielbo, K.~L., Xygalatas, D., Mitkidis, P., \&
{\bf Bulbulia, J.} (2013).
\newblock Cognitive resource depletion in religious interactions.
\newblock {\em Religion, Brain \& Behavior}, 3(1):39--55.
%\href{https://www.dropbox.com/s/00bifhnxk68wj4i/2153599x%252E2012%252E736714.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/4qa0khevjckloor/2013.Schjoedt.Cog.Deplet.bib?dl=0}{BIB}
\bibitem{Schjoedt:2013ab}
Schjoedt, U., S{\o}rensen, J., Nielbo, K.~L., Xygalatas, D., Mitkidis, P., \&
{\bf Bulbulia, J.} (2013).
\newblock The resource model and the principle of predictive coding: a
framework for analyzing proximate effects of ritual.
\newblock {\em Religion, Brain \& Behavior}, 3(1):79--86.
\href{https://www.dropbox.com/s/sqmgtkag4zbu3nc/schjoedtetall2013Framework.pdf?dl=0}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:DB2umFzhr70J:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVTM86rinZ5ScIM9di3Db8hA7UdmttQQN&scisf=4&hl=en}{BIB}
\bibitem{Xygalatas:2013aa}
Xygalatas, D., Schjoedt, U., {\bf Bulbulia, J.}, Konvalinka, I., Jegind{\o}, E.-M.,
Reddish, P., Geertz, A., \& Roepstoff, A. (2013).
\newblock Autobiographical memory in a fire-walking ritual.
\newblock {\em Journal of Cognition and Culture}, 13(1-2):1--16.
\href{https://doi.org/10.1163/15685373-12342081}{DOI: 10.1163/15685373-12342081}
\href{https://www.dropbox.com/s/3srdqka56fe7av3/JOCC_Xygalatas_et_al_2013.pdf}{PDF} \href{https://www.dropbox.com/s/yc0vqndj8w8hn05/2013.xygalat.memory.bib?dl=0}{BIB}
\bibitem{xygalatas2013extreme}
\textcolor{Orange}{Xygalatas, D., Mitkidis, P., Fischer, R., Reddish, P., Skewes, J., Geertz,
A., Roepstorff, A., \& {\bf Bulbulia, J.} (2013).
\newblock Extreme rituals promote prosociality.
\newblock {\em Psychological Science}, 24(8):1602--1605.} %\href{https://www.dropbox.com/s/0d5lglhhx5gqzop/Psychological%20Science-2013-Xygalatas-0956797612472910.pdf}{PDF} \href{https://www.dropbox.com/s/axtkifeonok70yb/2013.Extreme.Xygal.bib?dl=0}{BIB}
\subsubsection*{2012}
\bibitem{Bulbulia:2012ab}
\textcolor{Green}{{\bf Bulbulia, J.} (2012).
\newblock Spreading order: religion, cooperative niche construction, and risky
coordination problems.
\newblock {\em Biology \& Philosophy}, 27(1):1--27.}
\href{https://doi.org/10.1007/s10539-011-9295-x}{DOI: 10.1007/s10539-011-9295-x}
\href{http://www.springerlink.com/content/e4208k071t6164k7/}{Open Access} %\href{https://www.dropbox.com/s/k7pmdrr80tqwp6k/BULBULIA_10.1007_s10539-011-9295-x%20.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/io81gsw2gt1w4mk/2012.Bulbulia.SpreadOrder.bib?dl=0}{BIB}
\bibitem{Bulbulia:2012fj}
\textcolor{Green}{{\bf Bulbulia, J.} (2012).
\newblock Ennuitheism.
\newblock In P. McNamara and W.~J. Wildman (Eds.), {\em Science and the
World's Religions: Vol 3. Religions and Controversies} (pp. 165--194). Praeger, Santa Barbara, CA.} \href{https://www.dropbox.com/s/wgm2tsi4safdmc3/2012_Ennuitheism-bulbulia_published.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/aalocq092iqrw01/2012.Bulbulia.Ennuitheism.bib?dl=0}{BIB}
\bibitem{Bulbulia:2012aa}
{\bf Bulbulia, J.}, Frean, M., \& Reddish, P. (2012).
\newblock Ecological signalling.
\newblock In G.~W. Dawes and J. Maclaurin (Eds.), {\em A New Science of
Religions} (pp. 100--110). Routledge. %\href{https://www.dropbox.com/s/6kvlr459l2dfzd5/BUL_Chap4-Ecol%20Sign-PROOF.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/146j28fyr5p5b6q/2012.ecological.signal.bulbuliaEtAl.bib?dl=0}{BIB}
\bibitem{Bulbulia:2012uq}
{\bf Bulbulia, J.}, \& Reddish, P. (2012).
\newblock Explaining effervescence.
\newblock In G.~W. Dawes and J. Maclaurin (Eds.), {\em A New Science of Religions}
(pp. 43--64). Routledge, New York. \href{https://www.dropbox.com/s/t67m0qyptuec2ix/BUL_Chap6-ExplEffe-PROOF.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/23rouucf6vs7igi/2012.explainEfferv.bul.bib?dl=0}{BIB}
\bibitem{Bulbulia:2009lb}\textcolor{Purple}{
{\bf Bulbulia, J.}, \& Schjoedt, U. (2012).
\newblock The neural basis of religious belief.
\newblock In F. Krueger and J. Grafman (Eds.), {\em The Neural Basis of
Human Belief Systems} (pp. 169--190). Psychology Press. }\href{https://www.dropbox.com/s/lsatbexn9jasbfh/NB09.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/m4jmcqzi9i3dmj8/2013.Bulbulia.Schoedt.Neural.Belief.bib?dl=0}{BIB}
\bibitem{Bulbulia:2012kx}
{\bf Bulbulia, J.}, \& Schjoedt, U. (2012).
\newblock Toward an evolutionary social neuroscience of religion.
\newblock {\em Religion, Brain \& Behavior}, 1(3):220--222. \href{https://www.dropbox.com/s/v59etaychbt2z25/2012.bulbulia.schoedt.towardsevoneuroscince.pdf?dl=0}{PDF}
\bibitem{Bulbulia:2012ac}\textcolor{Purple}{
{\bf Bulbulia, J.}, \& Slingerland, E. (2012).
\newblock Religious studies as a life science.
\newblock {\em Numen}, 59(5-6):564--613.} \href{https://www.dropbox.com/s/eom9r3jbt1qvg4a/NU_059_05-06_564-613.pdf}{PDF} \href{https://www.dropbox.com/s/y8o9w2bvnwjimxk/2012.Bulbulia.Slingerland.Ritual.bib?dl=0}{BIB}
\bibitem{Sibley:2012fk}
Sibley, C.~G., \& {\bf Bulbulia, J.} (2012).
\newblock Healing those who need healing: How religious practice interacts with
personality to affect social belonging.
\newblock {\em Journal for the Cognitive Science of Religion}, 1(1):29--45. \href{https://www.equinoxpub.com/journals/index.php/JCSR/article/viewArticle/Healing-Those-Who-Need-Healing}{PDF} \href{https://www.dropbox.com/s/nbd18kv59i1s98a/healing.those.bib?dl=0}{BIB}
\bibitem{Schjoedt:2012kx}
Schjoedt, U., \& {\bf Bulbulia, J.} (2012).
\newblock The need to believe in conflicting propositions.
\newblock {\em Religion, Brain \& Behavior}, 1(3):236--239.
\href{http://www.tandfonline.com/doi/abs/10.1080/2153599X.2011.647857}{PDF} \href{https://www.dropbox.com/s/trju867snuho33k/needtobelieve.bib?dl=0}{BIB}
\bibitem{sibley2012faith}
\textcolor{Orange}{Sibley, C.~G., \& {\bf Bulbulia, J.} (2012).
\newblock Faith after an earthquake: A longitudinal study of religion and
perceived health before and after the 2011 {C}hristchurch {N}ew {Z}ealand
earthquake.
\newblock {\em PLoS ONE}, 7(12):e49648.\newblock
\href{https://doi.org/10.1371/journal.pone.0049648}{DOI: 10.1371/journal.pone.0049648}}
%\newblock \href{http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0049648}{Open Access PDF + Slides} \href{https://www.dropbox.com/s/p02sexw53slcpn6/faithquake.bib?dl=0}{BIB}
\subsubsection*{2011}
\bibitem{Bulbulia:2011aa}
{\bf Bulbulia, J.}, \& Frean, M. (2011).
\newblock Affording cooperative populations.
\newblock {\em Religion, Brain \& Behavior}, 1(1):66--70. \href{https://www.dropbox.com/s/2zksrbyav9agt14/BULBULIA_AFFORDING_2011.pdf?dl=0}{PDF} \href{https://www.dropbox.com/s/nkk7yepibk3f696/tandf_rrbb201_66.bib?dl=0}{BIB}
\bibitem{Bulbulia:2011ab}\textcolor{Purple}{
{\bf Bulbulia, J.}, \& Sosis, R. (2011).
\newblock Signalling theory and the evolution of religious cooperation.
\newblock {\em Religion}, 41(3):363--388.
\href{https://doi.org/10.1080/0048721X.2011.604508}{DOI: 10.1080/0048721X.2011.604508}}
\newblock \href{https://www.dropbox.com/s/j0zb0a74ut3j93r/Bulbulia_Sosis_Signalling_2011.pdf}{PDF} \href{https://www.dropbox.com/s/yxcfyzqa8d342em/sig.theory.bul.sos.bib?dl=0}{BIB}
\bibitem{Bulbulia:2011ac}
{\bf Bulbulia, J.} (2011).
\newblock The hypnotic stag hunt.
\newblock {\em Journal of Cognition and Culture}, 11(3-4):353--365.
\href{https://doi.org/10.1163/156853711X591297}{DOI: 10.1163/156853711X591297}
\href{https://www.dropbox.com/s/ngdi2ldwnny7sd8/JOCC_011_03-04_06-Bulbulia.pdf}{PDF} \href{https://www.dropbox.com/s/o5lkyk8uvsuytn1/Hypnotic.bib?dl=0}{BIB}
\bibitem{Bulbulia:2011ad}
Frean, M., \& {\bf Bulbulia, J.} (2011).
\newblock Neutral evolution as a route to large-scale cooperation in the stag
hunt game.
\newblock In {\em Proceedings of the International Conference on Complex
Systems (ICCS) 2011}, hosted by NECSI (New England Complex Systems
Institute), 1--6 July 2011.
%\href{http://homepages.ecs.vuw.ac.nz/foswiki/pub/Users/Marcus/MarcusFreanPublications/ICCS-259-FreanBulbulia.pdf}{LINK}.
\href{https://www.dropbox.com/s/bn4p7ry1cwz2v76/ICCS-259-FreanBulbulia.pdf?dl=0}{PDF}
\bibitem{Konvalinka:2010zr}
\textcolor{Orange}{Konvalinka, I., Xygalatas, D., {\bf Bulbulia, J.}, Schjoedt, U., Jegind{\o}, E.-M.,
Wallot, S., Van~Orden, G., \& Roepstorff, A. (2011).
\newblock Synchronized arousal between performers and related spectators in a
fire-walking ritual.
\newblock {\em Proceedings of the National Academy of Sciences},
108(20):8514--8519.} \href{https://www.dropbox.com/s/e9kw1bc58arqcum/1016955108-1.full.pdf}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:Z4bz7lwp7CAJ:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQLQj1q3zL48FKlyJACORNS7cgksQjIL&scisf=4&hl=en}{BIB}
\bibitem{Slingerland:2011aa}
Slingerland, E., \& {\bf Bulbulia, J.} (2011).
\newblock Introductory essay: Evolutionary science and the study of religion.
\newblock {\em Religion}, 41(3):307--328.
\href{https://doi.org/10.1080/0048721X.2011.604513}{DOI: 10.1080/0048721X.2011.604513}
\newblock \href{https://www.dropbox.com/s/w7om74yu1byjces/Slingerland_Bulbulia_2011.pdf?dl=0}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:EuqNIKJICaAJ:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQLQF9j3ZcOMiq4s0i0IlD2LvlZZY93D&scisf=4&hl=en}{BIB}
\bibitem{Sosis:2011uq}
Sosis, R., \& {\bf Bulbulia, J.} (2011).
\newblock The behavioral ecology of religion: the benefits and costs of one
evolutionary approach.
\newblock {\em Religion}, 41(3):341--362.
\href{https://doi.org/10.1080/0048721X.2011.604514}{DOI: 10.1080/0048721X.2011.604514} \newblock \href{https://www.dropbox.com/s/x6qyj01trcuhfzd/SosisBulbuliaBehEcoRel.pdf?dl=0}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:6KFNe9NrUB0J:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQLQzUevx-JM_Jt5ZX4k_9fxb6dV3u0s&scisf=4&hl=en}{BIB}
\bibitem{Xygalatas:2011ij}
Xygalatas, D., Konvalinka, I., Roepstorff, A., \& {\bf Bulbulia, J.} (2011).
\newblock Quantifying collective effervescence: heart-rate dynamics at a
fire-walking ritual.
\newblock {\em Communicative and Integrative Biology}, 4(6):735--738.\newblock
\href{https://doi.org/10.4161/cib.4.6.17609}{DOI: 10.4161/cib.4.6.17609}
\href{https://www.dropbox.com/s/k7sd19xlfiq8huo/XygalatasCIB4-6.pdf}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:ibIMcXjOt4sJ:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQLRpR7fjyHaIZem3vp4nLctXQZrVMcE&scisf=4&hl=en}{BIB}
\subsubsection*{2010}
\bibitem{Bulbulia:2010fv}
{\bf Bulbulia, J.}, \& Frean, M. (2010).
\newblock The evolution of charismatic cultures.
\newblock {\em Method and Theory in the Study of Religion}, 22(4):254--271.
\href{https://doi.org/10.1163/157006810X531049}{DOI: 10.1163/157006810X531049} \href{http://db.tt/3EhwMDv}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:CMgUm8a7Y_QJ:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQPQTOqB9pbHY7FLyLOV1miSoIIZ5shl&scisf=4&hl=en}{BIB}
\bibitem{Bulbulia:vn}
{\bf Bulbulia, J.}, \& Schjoedt, U. (2010).
\newblock Religious Culture and Cooperative Prediction under Risk: Perspectives from Social Neuroscience.
\newblock In I. Pyysi{\"a}inen (Ed.), {\em Religion, Economy, and Cooperation},
(pp. 35--59). de Gruyter, New York. \href{http://db.tt/MOZlvjr}{PDF}
\subsubsection*{2009}
\bibitem{Bulbulia:2009iw}\textcolor{Green}{
{\bf Bulbulia, J.} (2009).
\newblock Charismatic signalling.
\newblock {\em Journal for the Study of Religion, Nature, Culture},
3(4):518--551.} \newblock
\href{https://doi.org/10.1558/jsrnc.v3i4.518}{DOI: 10.1558/jsrnc.v3i4.518} \newblock \href{http://db.tt/8IiltRr}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:eUGNRtbVR88J:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQPSkHpPrfY5B1nrjrJcYvQiHbdtIIuQ&scisf=4&hl=en}{BIB}
\bibitem{Bulbulia:2009qg}
{\bf Bulbulia, J.}, \& Krueger, F. (2009).
\newblock Comment on Attachment and Cooperation in Religious Groups.
\newblock {\em Current Anthropology}, 50(6):772--773.
\href{https://doi.org/10.1086/605767}{DOI: 10.1086/605767} \newblock \href{http://db.tt/MMaLhwQ}{PDF}
\bibitem{Bulbulia:2009fk}
{\bf Bulbulia, J.}, \& Sosis, R. (2009).
\newblock Belief as ideology.
\newblock {\em Behavioral and Brain Sciences}, 32(6):515--516.
\href{https://doi.org/10.1017/S0140525X09991403}{DOI: 10.1017/S0140525X09991403} \newblock \href{http://db.tt/I9z2SCY}{PDF}
\bibitem{Bulbulia:2009aa}
{\bf Bulbulia, J.} (2009).
\newblock Religion as evolutionary cascade.
\newblock In M. Stausberg (Ed.), {\em Contemporary Theories of Religion: A
Critical Companion} (pp. 156--172). Routledge, New York. \href{http://db.tt/OLFGqFk}{PDF}
\bibitem{Bulbulia:2009fv}
{\bf Bulbulia, J.}, \& Frean, M. (2009).
\newblock Religion as superorganism.
\newblock In M. Stausberg (Ed.), {\em Contemporary Theories of Religion: A
Critical Companion} (pp. 173--194). Routledge, New York. \href{http://db.tt/LMShwLD}{PDF}
\bibitem{Bulbulia:2009vv}
\textcolor{Green}{{\bf Bulbulia, J.} (2009).
\newblock Religiosity as mental time travel: cognitive adaptations for
religious behavior.
\newblock In J. Schloss and M. Murray (Eds.), {\em The Believing Primate:
Scientific, Philosophical and Theological Perspectives on the Evolution of
Religion} (pp. 44--75). Oxford University Press, New York. } \href{http://db.tt/gGUysok}{PDF} \href{https://www.dropbox.com/s/ri6hs7gbut6ku1f/2009.mentalTime.bib?dl=0}{BIB}
\subsubsection*{2008}
\bibitem{Bulbulia:2008bu}
\textcolor{Green}{{\bf Bulbulia, J.} (2008).
\newblock Meme infection or religious niche construction? An adaptationist
alternative to the cultural maladaptationist hypothesis.
\newblock {\em Method and Theory in the Study of Religion}, 20(1):67--107.} \href{http://db.tt/6IjKsQV}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:7ZPnl0WG0EkJ:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQSUw7EcGrYvW0TZ0foofELIVjbg3S2W&scisf=4&hl=en}{BIB}
\bibitem{Bulbulia:2008fp}
{\bf Bulbulia, J.} (2008).
\newblock Ritual studies and ritual theories: A guide for the perplexed.
\newblock {\em Numen}, 55(4):461--473.
\href{https://doi.org/10.1163/156852708X310545}{DOI: 10.1163/156852708X310545} \href{http://booksandjournals.brillonline.com/content/journals/10.1163/156852708x310545}{PDF}
\bibitem{Bulbulia:2008ly}
{\bf Bulbulia, J.} (2008).
\newblock Telling nature.
\newblock {\em Landfall}, 215:180--185. \href{http://db.tt/MCbgvBt}{PDF}
\bibitem{Bulbulia:2008lf}\textcolor{Green}{
{\bf Bulbulia, J.} (2008).
\newblock Free love: Religious solidarity on the cheap.
\newblock In J. Bulbulia, R. Sosis, R. Genet, E. Harris, K. Wyman and
C. Genet (Eds.), {\em The Evolution of Religion: Studies, Theories, and
Critiques}, (pp. 153--160). Collins Foundation Press, Santa
Margarita, CA.} \href{http://db.tt/8vbZ8wQ}{PDF} \href{https://www.dropbox.com/s/57zbcpj8llfrlgy/freeLove.bib?dl=0}{BIB}
\bibitem{Bulbulia:2008sk}\textcolor{Orange}{
{\bf Bulbulia, J.}, \& Mahoney, A. (2008).
\newblock Religious solidarity: The hand grenade experiment.
\newblock {\em Journal of Cognition and Culture}, 8(3):295--320.
\href{https://doi.org/10.1163/156853708X358191}{DOI: 10.1163/156853708X358191} \newblock} \href{http://db.tt/0969FhI}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:ez_RmJh3ov8J:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQSZK_McRldG-i1APaLH3OkcaGDXrgxr&scisf=4&hl=en}{BIB}
\bibitem{Sosis:2007yd}
Sosis, R., \& {\bf Bulbulia, J.} (2008).
\newblock Religion in Eden.
\newblock In J. Bulbulia, R. Sosis, R. Genet, E. Harris, K. Wyman and
C. Genet (Eds.), {\em The Evolution of Religion: Studies, Theories, and
Critiques}, Chapter Introduction (pp. 15--19). Collins Foundation Press,
Santa Margarita, CA. \href{http://db.tt/nSs5qqb}{PDF}
\subsubsection*{2007}
\bibitem{Bulbulia:2007vo}\textcolor{Purple}{
{\bf Bulbulia, J.} (2007).
\newblock The evolution of religion.
\newblock In R. Dunbar and L. Barrett (Eds.), {\em Oxford Handbook of
Evolutionary Psychology} (pp. 621--636). Oxford University
Press, New York.} \href{http://db.tt/nSs5qqb}{PDF} \href{https://www.dropbox.com/s/owcmsyw09umhc1x/theEvoReligion.bib?dl=0}{BIB}
\subsubsection*{2006}
\bibitem{Bulbulia:2006iu}\textcolor{Green}{
{\bf Bulbulia, J.} (2006).
\newblock Nature's medicine: religiosity as an adaptation for health and
cooperation.
\newblock In P. McNamara (Ed.), {\em Where Man and God Meet: the new
sciences of religion and brain}, (pp.~87--121). Greenwood
Publishers, Westport, CT.} \href{http://db.tt/HFNrN38}{PDF}
\subsubsection*{pre-2005}
\bibitem{Bulbulia:2005jl}\textcolor{Green}{
{\bf Bulbulia, J.} (2005).
\newblock Are there any religions?
\newblock {\em Method and Theory in the Study of Religion}, 17(2):71--100.
\href{https://doi.org/10.1163/1570068054305619}{DOI: 10.1163/1570068054305619}} \href{http://db.tt/IUco0fj}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:VkW9oOZSmf8J:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQSb1HVG338RqWQkIIV6dbi-5mWgu0GN&scisf=4&hl=en}{BIB}
\bibitem{Bulbulia:2004yo} \textcolor{Green}{
{\bf Bulbulia, J.} (2004).
\newblock Religious costs as adaptations that signal altruistic intention.
\newblock {\em Evolution and Cognition}, 10(1):19--38.} \href{http://db.tt/BCahAsL}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:S_Eaz9FFNg4J:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQScQ3OHzAiiRCA0uahLpFkCTVIuvpok&scisf=4&hl=en}{BIB}
\bibitem{Bulbulia:2004be}
\textcolor{Purple}{{\bf Bulbulia, J.} (2004).
\newblock The cognitive and evolutionary psychology of religion.
\newblock {\em Biology and Philosophy}, 18(5):655--686.} \href{http://db.tt/QyYditX}{PDF} \href{https://scholar.google.co.nz/scholar.bib?q=info:wfjh1TV9XQsJ:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAVQSiEyp4iOATTHZI5hVnJLq3P73FdO1P&scisf=4&hl=en}{BIB}
\bibitem{Bulbulia:2003ol}
{\bf Bulbulia, J.} (2003).
\newblock Book review: Nicholas Agar, Life's Intrinsic Value: Science, Ethics and Nature.
\newblock {\em Sophia}, 42(1):85--89. \href{http://db.tt/o8XLYrL}{PDF}
\bibitem{Bulbulia:2003aa}
{\bf Bulbulia, J.} (2003).
\newblock Review of {J}ames {M}c{C}lenon: {W}ondrous Healing: Shamanism, Human
Evolution and the Origin of Religion.
\newblock {\em Method \& Theory in the Study of Religion}, 15(1):100--103.
\href{https://doi.org/10.1163/15700680360549439}{DOI: 10.1163/15700680360549439} \href{http://db.tt/AO5Ut4S}{PDF}
\bibitem{Bulbulia:2002ve}
{\bf Bulbulia, J.} (2002).
\newblock Unweaving the religious mind: A review of Pascal Boyer, Religion
Explained: The evolutionary origins of religious thought.
\newblock {\em Eras}, 4. \href{http://www.arts.monash.edu.au/publications/eras/edition-4/bulbulia.php}{URL}
\bibitem{Bulbulia:1997aa}
{\bf Bulbulia, J.} (1996).
\newblock Book review: {H}ilary {P}utnam, {W}ords and {L}ife.
\newblock {\em Koinonia}, 8. \href{http://db.tt/vygRUWn}{(DOC)} \href{http://www.ptsem.edu/koinonia/}{Journal}
\subsubsection*{Edited Books}
\bibitem{Bulbulia:2008jb}
{\bf Bulbulia, J.}, Sosis, R., Genet, R., Harris, E., Wyman, K., \& Genet, C.
(2008).\newblock {\em The Evolution of Religion: Studies, Theories, and Critiques}.
\newblock Collins Foundation Press, Santa Margarita, CA. %\href{https://www.dropbox.com/s/hgbsysc1ioluay5/Evolution%20of%20Religion%2C%20Bulbulia%20et%20al.pdf?dl=0}{PDF}
\bibitem{Bulbulia:2004ld}
{\bf Bulbulia, J.}, \& Morris, P. (2004).
\newblock {\em What is Religion For?}
\newblock Milne, Wellington.
\subsubsection*{Dissertation}
\bibitem{Bulbulia:2001cr}
{\bf Bulbulia, J.} (2001).
\newblock {\em Before Eden: Religion and The Evolved Mind}.
\newblock PhD thesis, Princeton University, Princeton, N.J. %\href{https://www.dropbox.com/s/czvxvm24o8r5n55/Bulbulia%20diss.pdf?dl=0}{PDF}
\end{thebibliography}
\subsubsection*{Supervisions}
\subsubsection*{Completed}
\noindent
\years{2015$\to$2020} K. Deak. The religious paradox of gender inequality and well-being:
an evolutionary account of parental cooperation.\\
\years{2015$\to$2018} S. A. Teo. {\em Ph.D. Religious Studies}. Religion as a marker of communal boundaries: Revivalism and Hindu temples in Malaysia. Secondary supervisor.\\
%
\years{2015$\to$2018}R.~Mogan. {\em Ph.D. Psychology}. Aligning Bodies And Minds: New Insights And A Synthesis Of Synchrony's Effects On Creative Thinking, Cohesion And Positive Affect. Secondary supervisor.\\
%
\years{2014$\to$2017} B.~A.~Blaschke. {\em Ph.D. Religious Studies}. Contemplative Responses to the Problem of Self: An Ethnography of the Spirit. Secondary supervisor.\\
%
\years{2016} D.~Robinson. {\em M.A. Religious Studies}. Religion and Social Capital in New Zealand. Primary supervisor.\\
%
\years{2015}B.~Davis. {\em Ph.D. Religious Studies}. Psychology, Cosmology and the Christian Concept of God: Reconciling Theology and Science. Primary supervisor.\\
%
\years{2015}R.~Somerfield. {\em Ph.D. Theatre Studies}. Moving Minds: Undoing cognition in performance environments. Secondary supervisor.\\
%
\years{2015}D.~Murphy. {\em Ph.D. Religious Studies (scholarship)}. Maori Myth and Extended Cognition: a cognitive approach to memory and oral tradition in the Pacific. Primary supervisor.\\
%
\years{2013}P.~Reddish. {\em Ph.D. Psychology (scholarship)}. The Cooperative Effects of Synchronous Interactions. Secondary supervisor.\\%defended 24 OCT 2012
%
\years{2011}C.~Dalzell. {\em M.A. Religious Studies}. Selecting for Celibacy - Cultural evolution and the puzzling case of Buddhist celibacy. Awarded with distinction. Primary supervisor.\\
%
\years{2011}T.~McVicar. {\em M.A. Religious Studies}. Implicit Cultures: Towards a Psychosocial Theory of Intuitive Religious Beliefs. Awarded with distinction. Primary supervisor.\\
%
\years{2010}A.~Mahoney. {\em Ph.D. Religious Studies (Bright Futures)}. The Evolutionary Psychology of Theology. Primary supervisor.\\
%
\years{2009}S.~Hill. {\em Ph.D. Sociology}. Torn Between Boundaries: Bodies-in-pain, Christianity and Feature Film. Secondary supervisor.\\
%
\years{2009}D.~Murphy. {\em M.A. Religious Studies}. Sacred Technologies: The evolution of the religious cognitive Niche. Awarded with distinction. Primary supervisor.\\
%
\years{2007}A.~Robertson. {\em Ph.D. Psychology}. In Search of a Theoretical Explanation for The Relationship Between Religion and Prejudice. Secondary supervisor.\\ %with Marc Wilson, Psychology.\\
%
\years{2006}Y.~Jeon. {\em Ph.D. Media Studies}. Communication Models of Adolescent Religious Education on The Internet in A Global Age. Secondary supervisor.% with Rowen Cullen, Media Studies.
\subsubsection*{Ongoing}
\noindent
\years{2021$\to$} Johnmark Kempthorne. The Role the Behavioral Immune System Plays in New Zealanders' Psychology During the COVID-19 Pandemic. Primary Supervisor.\\
\years{2019$\to$}R. Bonifant. Transmission of Anglican Belief and Practice in Aotearoa New Zealand. Primary Supervisor. \\
%\years{2011$\to$}K.~Sterhle.{ \em Ph.D. cand. Religious Studies}. Secondary supervisor, with Rick Weiss, Religious Studies.\\
%\bibitem{CS:SS}
%C.~Ferrario.{ \em Ph.D. cand. Philosophy}
%\newblock Enroled 2010. Secondary supervisor.
% \years{2008$\to$}E.~Czarnecki.{ \em Ph.D. cand. Religious Studies}. {\em Conceptual Analysis of Costly Signaling.} Primary supervisor, with Marcus Frean, Computer Science.\\
%\years{2008$\to$}A.~Searfross.{ \em Ph.D. cand. Religious Studies}. {\em Transmission and Storage of Memory.} Secondary supervisor, with Paul %Morris, Religious Studies.
%\bibitem{Teitelbaum:2006fv}
%M.~Teitelbaum.{ \em Ph.D. cand. Religious Studies}. Enroled 2007. Primary supervisor.
\subsubsection*{Presentations}
\subsubsection*{Keynotes \& Invited}
\years{2019$^k$} Attitudes to religion and trade in New Zealand (2009-2018). Religion and Business: Reaping the Diversity Dividend. Auckland University of Technology, Auckland, New Zealand. 12 August 2019. \href{https://www.dropbox.com/s/t0vsqu95nbs32t6/2019.AUG.12.bulbulia.Biz.Rel.pptx?dl=0}{LINK}\\
\years{2019$^t$} The interplay of religious affiliation and volunteering, evidence from The New Zealand Attitudes and Values Study 2009-2019. Templeton Annual Meeting. Nassau, Bahamas. 24 June 2019.\\
\years{2019$^t$} What are the sources of Muslim Acceptance in New Zealand? Maclaurin Chapel, Auckland, New Zealand. 28 May 2019. \\
\years{2019$^k$} Changing Attitudes to Muslims in New Zealand: years 2013-2017. Amnesty International Annual Hui 2019: IMPACT for Human Rights. Wellington, New Zealand. 10 May 2019.\\
\years{2018$^t$} How Spiritual Are We Really? Engage Conference. Auckland, New Zealand. 21 September 2018.\\
\years{2018$^k$} Teams $>$ STEM $+$ HASS. Plenary Session. International Deans of Arts, Social Sciences and Humanities Conference (Australasia). Australian National University, Canberra, Australia. 12 September 2018.\\
\years{2018$^k$} Is Virtue Worth The Effort? International Humanist Conference. Auckland, New Zealand. 4 August 2018.\\
\years{2018$^k$} Religion and the evolution of second nature? Erice, Sicily. 12 May 2018.\\
\years{2017$^k$}Computational Phylogenetic Methods Test Functional Hypotheses About Culture. University of Helsinki. 25 May 2017. \\
\years{2016$^t$} {\bf J.~Bulbulia}, Presidential Address. International Association for the Cognitive Science of Religion. Vancouver, Canada. 24 August 2016.\\
\years{2012$^t$}{\bf J.~Bulbulia}, \newblock\href{http://conferences.au.dk/biologicalandculturalevolution/conference-programme/}{~Religious Ritual, Identity, and Coordination: Surprising Results From the Field}\newblock Keynote: Aarhus University, \href{http://conferences.au.dk/biologicalandculturalevolution/}{Biological and Cultural Evolution and Their Interactions}. Denmark.~26~June~2012. \\
\years{2011$^t$}{\bf J.~Bulbulia}, \newblock \href{http://db.tt/8CgWl7MC}{Experimental Anthropology: How Quantitative Field Studies are Revealing New Subtleties in Religious Rituals}
\newblock (invited talk). Aarhus University, Denmark. 18 October 2011. \href{http://aal.au.dk/antro/conference-2011-researching-religion/programme/}{{\bf Website}}\\
\years{2009$^k$}I.~Konvalinka, \textbf{J.~Bulbulia}, D. Xygalatas, and U.~Schjoedt,
Synchronous arousal memory in a Spanish firewalk.
\newblock Keynote Address, International Association for the Cognitive Study of Religion: Cognitive Science Society Annual Meeting. Amsterdam, Netherlands. July 2009.\\
\years{2008$^k$}{\bf J.~Bulbulia}, Bayes theorem and the economics of religious reciprocity.\newblock Keynote Address, European Collaborative Research Foundation. Consciousness in Natural Context. Aarhus University, Denmark. June 2008.\\
\years{2008$^k$}{\bf J.~Bulbulia}, \href{http://video.google.com/videoplay?docid=4493321247767166938#}{The human nest: Epistemic niche construction and religiosity}.\newblock Keynote Address, Religious Ritual, Cognition and Culture. Aarhus University, Denmark. May 2008.\\
\years{2006$^k$}{\bf J.~Bulbulia}, Why religious commitment is also moral commitment.\newblock Faculty Keynote. Aarhus University, Denmark. 7 December 2006.
\subsubsection*{Talks}
\years{2021$t$} "Transformation and transferral of acceptance following the 2019 Christchurch New Zealand Mosque Attacks", Otago University Centre for Peace and Conflict and Religion Department, Dunedin, 11 June, 2021.\href{https://www.dropbox.com/s/xlnd4ow5kefajoi/slides.html?dl=0}{link}\\
\years{2021$t$} Longitudinal Anatomy of Covid-19 Distress: For high school students, Victoria University School of Psychology, 17 June, 2021 \href{https://go-bayes.github.io/reports/posts/covid/student_slides.html#1}{link}\\
\years{2021$t$} Longitudinal Anatomy of Covid-19 Distress, Victoria University School of Psychology, 23 April, 2021 \href{https://go-bayes.github.io/reports/posts/covid/slides.html#1}{link}\\
\years{2021$t$}"A national longitudinal study of religion and climate beliefs/behaviours": International Society for Religious Psychologists (IRPS), Biola CA, 9 January 2021. \href{https://www.dropbox.com/s/t087jsc5inuck95/Bulbulia_Biola_Talk_EcologyReligion_Jan2021.html?dl=0}{link}\\
\years{2020$t$} A national scale longitudinal analysis of the Social Consequences of Religion in New Zealand: 2009-2017 (Three Studies). Templeton Religion Trust. Nassau, Bahamas. 14 January 2020. \\
\years{2019$^t$} A national scale longitudinal analysis of mis/perceptions about Religious Faith and Practice in New Zealand: 2009-2017 (Three Studies). University of the 3rd Age (U3A). New Plymouth, New Zealand. 13 November 2019. \href{https://www.dropbox.com/s/xypex3ektzf51cb/Bulbulia.NewPLymoth.NOV.13.2019.pptx?dl=0}{LINK}\\
\years{2019$^t$} Extremism in New Zealand reveals challenges and opportunities for international education partnerships. University of Auckland U21. University of Auckland, Auckland, New Zealand. 25 October 2019. \href{https://www.dropbox.com/s/33q0ths4o9pml9l/v4.bulbulia_Auckland_u21_OCT.25.2019.low.res.pptx?dl=0}{LINK} \\
\years{2019$^t$} The interplay of religious affiliation and volunteering, evidence from the New Zealand Attitudes and Values Study 2009-2019. Templeton Religion Trust advisory meeting on the SCORE initiative. Oxford, UK. October 2019. \href{https://www.dropbox.com/s/zsggoh701jvqqnv/v4.Bulbulia.TRT_Presentation.JUNE_23.2019.pptx?dl=0}{LINK}\\
\years{2017$^t$} Two paradoxes of religious change in New Zealand. \newblock NCLANZ Meeting. Wellington, New Zealand. 19 September 2017.\\
\years{2017$^t$} Punctuated Evolution of Religion. Cultural Evolution Society Conference. Jena, Germany. 14 September 2017.\\
\years{2017$^t$} Religion and the good life in New Zealand. 8th Christian Leaders Congress. Hope Centre, Wellington, New Zealand. 23 March 2017.\\
\years{2017$^t$} Darwin meets Luther: How Evolutionary Religious Studies Explains the Protestant Reformation. \newblock Empirical Philosophy Workshop. Wellington, New Zealand. 1 June 2017. \\
\years{2016$^t$} Secularization and Simpson's Paradox. Otago University, Dunedin, New Zealand. October 2016.\\
\years{2016$^t$} {\bf J. Bulbulia}, Sex, Signaling, \& Religion Today. International Association for the Cognitive Science of Religion. Vancouver, Canada. 24 August 2016.\\
\years{2016$^t$} {\bf J. Bulbulia}, Evolutionary Theory meets Big Longitudinal Data: Modeling Religion, Tolerance, and Prejudice in a Western Secular Society. International Association for the Cognitive Science of Religion. Vancouver, Canada. 23 August 2016.\\
\years{2016$^t$} {\bf J. Bulbulia}, Religion Evolves Social Inequality. Victoria University Religious Studies Seminar. Wellington, New Zealand. 24 March 2016.\\
\years{2016$^t$} {\bf J. Bulbulia}, Religion and the Evolution of Social Inequality. University of Otago, Psychology Department. University of Otago, Dunedin, New Zealand. 22 February 2016. \\
\years{2015$^t$} {\bf J. Bulbulia}, Longitudinal Study of Religion: recent findings from the New Zealand Attitudes \& Values Study. Queenstown. 8 December 2015. \\
\years{2015$^t$} {\bf J. Bulbulia}, Religion and the Evolution of Social Inequality. Victoria University, Wellington, New Zealand. 6 October 2015.\\
\years{2015$^t$} {\bf J. Bulbulia}, Computational Phylogenetic Methods Test Functional Hypotheses About Culture. California Institute of Technology, Pasadena, USA. 2 September 2015.\\
\years{2015$^t$} {\bf J. Bulbulia}, J. Watts, S. Greenhill, and R. Gray, The Punctuated Evolution of Religion. International Association for the History of Religions. Erfurt, Germany. 28 August 2015.\\
\years{2015$^t$} {\bf J. Bulbulia}, Recording History to Understand it: Longitudinal Study of Religion in New Zealand. International Association for the History of Religions. Erfurt, Germany. 25 August 2015.\\
\years{2015$^t$} {\bf J. Bulbulia}, Religion and Charity in New Zealand. International Association for the History of Religions. Erfurt, Germany. 25 August 2015.\\
\years{2015$^t$} J. Watts, R. Gray, and {\bf J. Bulbulia}, Broad supernatural punishment but not moralising high gods precede the evolution of political complexity in Austronesia. Victoria University Empirical Philosophy Workshop. Victoria University, Wellington, New Zealand. 15 March 2015.\\
\years{2015$^t$} {\bf J. Bulbulia} and C. G. Sibley, What is the Dollar Value of Religion? Society of Australasian Social Psychologists Conference. Newcastle, Australia. 10 April 2015.\\
\years{2014$^t$} G. Troughton, {\bf J. Bulbulia}, D. Osborne, L. Greaves, Y. Huang, P. Milojev, and C. G. Sibley, Religion in Contemporary New Zealand: Insights from a National Probability Study. Combined Australian and New Zealand Religious History Conference. Massey University, Albany Campus, Auckland, New Zealand. 26 November 2014. \\
\years{2014$^t$} J. H. Shaver, C. G. Sibley, and {\bf J. Bulbulia}, Fast or Slow? Religion and Life History Strategies in New Zealand. IACSR panel, American Academy of Religion. San Diego, USA. 21 November 2014.\\
\years{2014$^t$} {\bf J. Bulbulia}, J. H. Shaver, and C. G. Sibley, Are Religious People Really More Charitable? IACSR panel, American Academy of Religion. San Diego, USA. 21 November 2014.\\
\years{2014$^t$} {\bf J. Bulbulia} and C. G. Sibley, The New Zealand Attitudes and Values Study: An overview. Presentation to Government: Association of Social Science Research. 5 November 2014. \href{http://assr.org.nz/2014/11/22/the-new-zealand-attitudes-and-values-study-or-how-to-survive-starting-your-own-longitudinal-s-ample/}{link} \\
\years{2014$^t$}{\bf J. Bulbulia}, G. Troughton, and C. G. Sibley, Charity and Religion in New Zealand: The New Zealand Attitudes and Values Study. Laidlaw Theological Seminary. Auckland, New Zealand. 11 September 2014. \\
\years{2014$^t$}{\bf J. Bulbulia} and C. G. Sibley, The New Zealand Attitudes and Values Study: How to Survive a Longitudinal National Probability Sample. LEVYNA, Masaryk University. Brno, Czech Republic. 24 June 2014.\\
\years{2014$^t$}D. Xygalatas and {\bf J. Bulbulia}, Integration in the Cognitive Science of Religion: Methods, Theories, and People. IACSR Conference. Brno, Czech Republic. 21 June 2014. \\
\years{2014$^t$} {\bf J. Bulbulia}, Why Painful Rituals? Philosophy Workshop. Victoria University, Wellington, New Zealand. \\
\years{2013$^t$}{\bf J.~Bulbulia}, Quantifying Ritual Emotion. Victoria University Psychology Seminar. Victoria University, Wellington, New Zealand. 19 July 2013. \href{https://www.dropbox.com/s/nikzacfqpbypcd5/Bulbulia_19_July_VICPSYCH_poster.pdf}{poster}\\ LaTeX file: \href{https://www.dropbox.com/s/xeejz4jxfprsme0/Bulbulia_19_July_VICPSYCH_poster.tex}{here}\\
\years{2013$^t$}{\bf J.~Bulbulia}, Bayesian Analysis of Rated Images From a
Sacred Ritual Shows Role-Dependent Responses. Invited Poster. University of British Columbia CERC meeting. Vancouver, Canada. 5 May 2013. \href{https://www.dropbox.com/s/clkbjqcdqwhj213/Bulbulia_etal_poster_portrait_UBC_4_MAY_13.pdf}{poster}\\
\years{2013$^t$}{\bf J.~Bulbulia}, Faith After the Quake. Religious Studies Seminar. Wellington, New Zealand. 15 March 2013.\\
\years{2012$^t$}{\bf J.~Bulbulia}, Rituals, Pain, and Cooperation. The Elowitz Lab, California Institute of Technology, Biology Division. 22 November 2012.\\
\years{2012$^t$}{\bf J.~Bulbulia} and C. G. Sibley, How Religion Interacts with Personality to Affect Social Cognition. American Academy of Religion Conference. Chicago, USA. 18 November 2012. \href{https://www.dropbox.com/s/da5n8y0afhst554/18NOV_BULBUBULIA_AAR_2012.pdf}{pdf}\\
\years{2012$^t$}{\bf J.~Bulbulia}, Five Predictions For The Next Fifteen Years In The Experimental Study of Religion, Homo Experimentalis, LEVYNA. Masaryk University, Brno, Czech Republic. 24 October 2012.\\
\years{2012$^t$}{\bf J.~Bulbulia}, J. Watts, Q. D. Atkinson, S. Greenhill, and R. Gray. Three Big Questions Cultural Phylogenetics Has Helped Historians to Answer. Society of Biblical Literature. Chicago, USA. 17 November 2012.\\
\years{2012$^t$}{\bf J.~Bulbulia}, Why Rituals? Results from experimental field studies. Language and Cultural Evolution Group. Auckland University. 12 October 2012.\\
\years{2012$^t$}{\bf J.~Bulbulia}, Plenary session report: The Cultural Evolution of Religion. Ernst Str\"ungmann Forum: \href{http://www.esforum.de/forums/esf12_cultural_evolution.html}{Cultural Evolution}. Frankfurt, Germany. 1 June 2012.\\
\years{2012$^t$}{\bf J.~Bulbulia}, Subtractive and Additive Conceptions of Experimental Ritual Studies. Laboratory for the Experimental Study of Religion, Department for the Study of Religions. Masaryk University, Brno, Czech Republic. 9 January 2012.(Webinar)\\
\years{2011$^t$}{\bf J.~Bulbulia}, P. Reddish, R. Callander, and R. Fischer, \newblock \href{http://db.tt/H0POMYBN}{Differential Effects on Cooperation from Rituals Varying in Body Synchrony and Sacred Values}\newblock AAR Cognitive Science of Religion Group. San Francisco, USA. 20 November 2011.\\
\years{2011$^t$}{\bf J.~Bulbulia}, P. Reddish, R. Callander, and R. Fischer, \newblock \href{http://www.psych.auckland.ac.nz/uoa/home/about/news-and-events/events/events/template/event_item.jsp?cid=362664}{Explaining collective rituals.}\newblock (invited talk): Auckland Psychology Department. 21 September 2011.\\
\years{2011$^t$}M.~Frean and \textbf{J.~Bulbulia}, Neutral evolution as a route to large-scale cooperation in the stag hunt game. International Conference on Complex Systems (ICCS), hosted by NECSI. 1--6 July 2011. \newblock \href{http://necsi.edu/events/iccs2011/accepted.html}{Website} \newblock \href{https://www.dropbox.com/s/iqukoqbxzc7rzci/ICCS-259-FreanBulbulia.pdf}{PDF}\\
\years{2011$^t$}P. Reddish, R. Fischer and \textbf{J. Bulbulia}, Assessing the Cooperative Effects of Ritual Interactions: Synchrony or Choreography? Conference: The Cognitive Science Society/IACSR Workshop: New developments in the cognitive science of religion. Boston MA, USA. 20 July 2011.\\
\years{2010$^t$}{\bf J.~Bulbulia},
\newblock Evolutionary cognitive science of religion.
\newblock Conference: IAHR. Toronto, Canada. 19 August 2010.\\
\years{2010$^t$}\textbf{J.~Bulbulia}, P.~Reddish, R. Callander, and R.~Fischer, Explaining effervescence. Conference: IAHR. Toronto, Canada. 19 August 2010.\\
\years{2010$^t$}\textbf{J.~Bulbulia} and M.~Frean, Religious cooperation in large social worlds. Conference: IAHR. Toronto, Canada. 17 August 2010.\\
\years{2010$^t$}{\bf J.~Bulbulia}, Animal magnetism and social prediction. Conference: Toward a Unified Science of Religion. Otago University, Dunedin, New Zealand. 12 February 2010.\\
\years{2010$^t$}{\bf J.~Bulbulia}, Coordination at scale: the cooperative affordances of religious rituals. Adaptive Logic Group. Santa Barbara, California, USA. 9 January 2010.\\
\years{2009$^t$}{\bf J.~Bulbulia}, Cognitive study of religion: perspectives from social cognition. University of Vermont: Burlington Vermont, USA. 11 November 2009.\\
\years{2009$^t$}{\bf J.~Bulbulia}, The neuroeconomics of religion. AAR Cognitive Science of Religion Group. Montreal Quebec, Canada. 7 November 2009.\\
\years{2009$^t$}{\bf J.~Bulbulia}, The evolutionary study of religion: a primer. AAR Panel: evolution of Religion. Montreal Quebec, Canada. 8 November 2009.\\
\years{2009$^t$}{\bf J.~Bulbulia}, Explanation in the study of religion: the role of evidence~\&~models. Aarhus University, Religion Cognition and Culture. Aarhus, Denmark. 20 August 2009.\\
\years{2009$^t$}\textbf{J.~Bulbulia} and D.~Xygalatas,
\href{http://www.interacting-minds.net/Sted/IM-SEMINAR2009.html}{Evidence of Emotional but not Perceptual Episodic Memory in a Spanish Firewalk}. Aarhus University, Center for Integrated Neuroscience. Aarhus, Denmark. 17 August 2009.\\
\years{2009$^t$}{\bf J.~Bulbulia}, Evolving religious coordination through cues. Association for the Study of Religion and Economics. Washington D.C., USA. 3 April 2009.\\
\years{2009$^t$}\textbf{J.~Bulbulia} and D.~Xygalatas, Firewalking and the evolutionary theory of games. Society for Scientific Anthropology. Asilomar, California, United States. 28 March 2009.\\
\years{2009$^t$}{\bf J.~Bulbulia}, Collective effervescence. Center for Advanced Study in the Behavioral Sciences, Stanford University. Palo Alto, California, United States. 30 March 2009.
\\
\years{2008$^t$}{\bf J.~Bulbulia},
\newblock Anthropomorphism and solidarity.
\newblock Biblical Literature and Biblical Religions. Auckland, New Zealand. August 2008.\\
\years{2008$^t$}{\bf J.~Bulbulia}, \newblock Religious signalling in a firewalk.\newblock Australian National University, Philosophy Programme RSSS: Biological Signaling workshop. \newblock Canberra, Australia. 25 July 2008.\\
\years{2007$^t$}{\bf J.~Bulbulia}, \newblock Religion as coordination: Variable religious frames and the stag hunt.\newblock Association for the Study of Religion and Economics. Tampa, Florida, USA. 3 November 2007.\\
\years{2007$^t$}{\bf J.~Bulbulia}, \newblock Religious schmelief: Religiosity as skilful social engagement.
\newblock Society for the Scientific Study of Religion.
\newblock Tampa, Florida, USA. 4 November 2007.\\
\years{2007$^t$}{\bf J.~Bulbulia},
\newblock Religious altruism: results of an economic game.
\newblock Aarhus University, Aarhus, Denmark. 31 May 2007.\\
\years{2007$^t$}{\bf J.~Bulbulia},
\newblock The commitments of commitment signalling theory.
\newblock Hawaii Conference on Evolution and Religion. Makaha Resort, Hawaii, USA. 6 January 2007.\\
\years{2006$^t$}{\bf J.~Bulbulia},
\newblock Why evolutionary niche construction matters to evolutionary understandings of religion.
\newblock University of Copenhagen, Copenhagen, Denmark. 14 December 2006.\\
\years{2006$^t$}{\bf J.~Bulbulia},
\newblock Constructing the schizophrenic niche: Cognitive adaptations for religious culture (featured speaker).
\newblock Evolution and Religion Workshop, Institute for Cognition and Culture. Queen's University Belfast, Northern Ireland. 30 October 2006.\\
\years{2006$^t$}{\bf J.~Bulbulia},
\newblock Religious cognition, social cognition, and moral niche construction.
\newblock Danish Society for Philosophy of Psychology. Copenhagen, Denmark. September 2006.\\
\years{2006$^t$}{\bf J.~Bulbulia},
\newblock The cognitive origins of religion: what cultural inheritance does not explain.
\newblock International Association for the Cognitive Science of Religion. Aarhus, Denmark. January 2006.
% EXTERNAL FUNDING TO SELF SINCE 2012 AS PRIMARY 1,413,816
% TOTAL
\subsubsection*{Grants}
\noindent
\years{2021} J. Bulbulia (PI): {\bf Extension grant, Templeton Religion Trust (TRT0196): Quantifying the dynamic interplay between virtue and human flourishing: a national-scale longitudinal study, NZD 771,964}.\\
\years{2020} J. Bulbulia (AI) with J. Shaver (PI): {\bf The Longitudinal Study of Cohesion and Conflict: Testing Hypotheses of Social and Religious Change in Fiji, RSNZ Marsden 19-UOO-090: NZD 300,000}.\\
\years{2018}J. Bulbulia (PI): {\bf Quantifying the dynamic interplay between virtue and human flourishing: a national-scale longitudinal study, Templeton Religion Trust (TRT0196): NZD 4,596,114}.\\
\years{2017} J. Bulbulia (consultant): {\bf High Fertility and Child Flourishing: The Success of Religions, Sir John Templeton Foundation: USD 82,404.}\\
\years{2016} J. Bulbulia (consultant): {\bf The Religious Replication Project: Using Pre-registered Replications and Bayesian Statistics to Improve the Experimental Study of Religion, Sir John Templeton Foundation: USD 215,000}.\\
\years{2016} J. Bulbulia (AI), Justin McBrayer Fort Lewis College USA (PI), {\bf Explaining Religion: A Planning Grant, Sir John Templeton Foundation: NZD 208,343.15.}\\
\years{2013} J. Bulbulia (PI), {\bf The Social Consequences of Religion, Royal Society of New Zealand Marsden: NZD 769,000}.\\
\years{2013} J. Bulbulia (PI), C. G. Sibley (PI), and G. Troughton (AI) {\bf The Personal Consequences of Religion, Templeton World Charity Foundation (TWCF0077): NZD 614,816.} \\
\years{2013} J. Bulbulia (PI), R. Fischer, P. Singh, S. Tewari, and R. Weiss.
\newblock {\bf The interplay between religion, community making, and well-being, New Zealand India Research Institute: NZD 30,000.}\\
\years{2012} J. Bulbulia. \newblock Center for Theological Inquiry, Princeton, NJ: {\bf The Reverend Bayes Meets Charles Darwin visiting fellowship, USD 31,000.} [Offered: not accepted owing to family reasons].\\
\years{2012$\to$2014}R. Gray (PI), S. Greenhill (PI), J. Bulbulia (AI), and Q. D. Atkinson (AI). Sir John Templeton Foundation: {\bf The functional role of religion in the evolution of human society: NZD 393,330.60.}\\
\years{2012$\to$2014} R. Gray (PI), S. Greenhill (PI), and J. Bulbulia (AI). RSNZ Marsden:
{\bf The Cultural Evolution of Religion: NZD 775,000.}\\
\years{2012} J.~Bulbulia~(PI) and R. Fischer (AI). Victoria University URF:
{\bf Experimental Studies of Ritual Solidarity: NZD 40,000.}\\
\years{2011$\to$2013} W. McCorkle (Director), D. Xygalatas (Director), J.~Bulbulia~(PI).
Czech Ministry of Education, Youth and Sports. \href{http://www.levyna.cz/team/}{LEVYNA: Laboratory for the Experimental Study of Religion, Department for the Study of Religions at Masaryk University, Brno, Czech Republic}{\bf: \euro~1,100,000}.\\
%%\years{2009$\to$} F. Krueger (PI), J.~Bulbulia~(AI).
%%\newblock NINDS: NIH {\bf Warfighter Head Injury Study Cognitive Neuroscience Section}.
\years{2009} J.~Bulbulia (AI).
\newblock Aarhus University, MindLAB, Denmark: {\bf Housing and accommodation grant, NZD 5,000.}\\
\years{2009} J.~Bulbulia~(PI) and M.~Frean (AI). Victoria University URF: {\bf The evolutionary dynamics of religious culture: NZD 17,000.}\\
\years{2008} J.~Bulbulia~(AI). Aarhus University, MindLAB, Denmark: {\bf Interacting Minds, Travel Grant: NZD 4,000}.\\
\years{2008} J.~Bulbulia~(AI).
\newblock European Science Foundation: {\bf ESF Travel and Accommodation Grant: NZD 6,000}.\\
\years{2007} R.~Sosis~(PI), J.~Bulbulia~(AI), G.~P. R, and C.~Genet.
\newblock Sir John F. Templeton Foundation: {\bf JFT Grant: Hawaii Evolution and Religion Conference: USD 30,000}.\\
\years{2006} J.~Bulbulia.
\newblock Aarhus University, Religion, Cognition, Culture. Denmark.
{\bf Guest Professor, Religion Cognition and Culture: NZD 80,000}.
\subsection*{Service}
\subsubsection*{Professional}
\years{2012}Ernst Str\"ungmann Forum: Cultural Evolution (Religion), Rapporteur. 26 May--1 June, 2012.\\
%\years{2012$\to$}Editorial Advisory Board: \href{http://www.zygonjournal.org/board_ed_adv.html}{Zygon: Journal of Science and Religion}\\
%\years{2011$\to$}Editorial Advisory Board: \href{http://www.equinoxpub.com/index.php/JCSR}{Journal of The Cognitive Science of Religion}\\
%\years{2010$\rightarrow$}Editorial board, Religion Brain and Behavior.\\
\years{2008$\rightarrow$}Steering committee, \href{http://www.aarweb.org/Meetings/Annual_Meeting/Program_Units/PUinformation.asp?PUNum=AARPU173}{Cognitive Study of Religion Group, American Academy of Religion}.\\
\years{2008$\rightarrow$2010}Co-chair, Explanation, International Association for the History of Religions Conference. Toronto, Canada. 2010.\\
\years{2006$\rightarrow$} International Fellow, Religion Cognition and Culture. Aarhus University.\\
\years{2006$\rightarrow$2010} Member, Faculty of Humanities and Social Sciences Research Committee.\\
\subsubsection*{Victoria}
\years{2015$\rightarrow$} Religious Studies Seminar Series.\\
\years{2011$\rightarrow$} Convener and Chair, Animal Ethics Committee, Victoria University.\\
\years{2008$\rightarrow$2015} Honours Coordinator, Religious Studies, Victoria University.\\
\years{2006$\rightarrow$2010} Faculty of Humanities and Social Sciences Research Committee. \\
\years{2003$\rightarrow$2008} School Marketing Committee.\\
\years{2001$\rightarrow$2005} Programme Library Liaison.
\subsubsection*{Referee}
\begin{description}
\item Asian Journal of Social Psychology
\item Behavioral and Brain Sciences
\item Biology and Philosophy
\item Bloomsbury Press
\item Cambridge University Press
\item Cognition
\item Current Anthropology
\item Evolution and Human Behavior
\item Evolutionary Psychology
\item Human Nature
\item International Journal for the Psychology of Religion
\item International Journal of Intercultural Relations
\item Israel Journal of Ecology and Evolution (IJEE)
\item Journal for the Cognitive Science of Religion
\item Journal of Cognition and Culture
\item Journal of Cross-Cultural Psychology
\item Journal of Economic Psychology
\item Journal of the American Academy of Religion
\item Oxford University Press
\item PLoS One
\item Proceedings of the National Academy of Sciences
\item Religion
\item Religion, Brain \& Behavior
\item The Quarterly Review of Biology
\end{description}
\end{document}
| {
"alphanum_fraction": 0.7391960709,
"avg_line_length": 59.4270334928,
"ext": "tex",
"hexsha": "10da4bc23951372b794772550d6e584a3efaa09b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6a334760bb1de494682ab26f83fe421745647f0f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jacopastorius/JosephBulbulia-github.io",
"max_forks_repo_path": "static/files/bulbuliaCV.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6a334760bb1de494682ab26f83fe421745647f0f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jacopastorius/JosephBulbulia-github.io",
"max_issues_repo_path": "static/files/bulbuliaCV.tex",
"max_line_length": 477,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6a334760bb1de494682ab26f83fe421745647f0f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jacopastorius/JosephBulbulia-github.io",
"max_stars_repo_path": "static/files/bulbuliaCV.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 34794,
"size": 99362
} |
\subsection{lcdout}
\begin{lstlisting}[language=C]
/****************************************************************************/
/* */
/* LCDOUT.H */
/* LCD Output Functions */
/* Include File */
/* Digital Oscilloscope Project */
/* EE/CS 52 */
/* */
/****************************************************************************/
/*
This file contains the constants and function prototypes for the LCD output
functions used in the Digital Oscilloscope project and defined in lcdout.c.
Revision History:
3/8/94 Glen George Initial revision.
3/13/94 Glen George Updated comments.
3/17/97 Glen George Added enumerated type char_style and updated
function prototypes.
*/
#ifndef __LCDOUT_H__
#define __LCDOUT_H__
/* library include files */
/* none */
/* local include files */
/* none */
/* constants */
/* character output styles */
/* size of a character (includes 1 pixel space to the left and below character) */
#define VERT_SIZE 8 /* vertical size (in pixels -> 7+1) */
#define HORIZ_SIZE 6 /* horizontal size (in pixels -> 5+1) */
/* structures, unions, and typedefs */
/* character output styles */
enum char_style { NORMAL, /* "normal video" */
REVERSE /* "reverse video" */
};
/* function declarations */
void clear_region(int, int, int, int); /* clear part of the display */
void plot_hline(int, int, int); /* draw a horizontal line */
void plot_vline(int, int, int); /* draw a vertical line */
void plot_char(int, int, char, enum char_style); /* output a character */
void plot_string(int, int, const char *, enum char_style); /* output a string */
#endif
\end{lstlisting}
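The header above is the module's entire public interface. As a quick orientation, the sketch below shows one way a caller might combine these routines to draw a labelled frame. It is illustrative only: the display driver's \texttt{plot\_pixel()} routine and the real screen dimensions are defined in other project files (\texttt{interfac.h}, \texttt{scopedef.h}) that are not reproduced here, so the 128 by 64 pixel region used below is an assumption.
\begin{lstlisting}[language=C]
/* frame_demo.c - illustrative use of the lcdout interface (sketch    */
/* only; assumes plot_pixel() is supplied by the display driver and   */
/* that the screen is at least 128 x 64 pixels)                       */

#include "lcdout.h"

void draw_labelled_frame(void)
{
    /* start from a blank 128 x 64 region */
    clear_region(0, 0, 128, 64);

    /* border: two horizontal and two vertical lines */
    plot_hline(0,   0, 128);       /* top edge    */
    plot_hline(0,  63, 128);       /* bottom edge */
    plot_vline(0,   0,  64);       /* left edge   */
    plot_vline(127, 0,  64);       /* right edge  */

    /* plot_string() takes CHARACTER cell coordinates, so (1, 1) is   */
    /* one 6 x 8 pixel cell in from the upper left corner             */
    plot_string(1, 1, "TRIGGER", NORMAL);
    plot_string(1, 2, "ARMED",   REVERSE);
}
\end{lstlisting}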
\begin{lstlisting}[language=C]
/****************************************************************************/
/* */
/* LCDOUT */
/* LCD Output Functions */
/* Digital Oscilloscope Project */
/* EE/CS 52 */
/* */
/****************************************************************************/
/*
This file contains the functions for doing output to the LCD screen for the
Digital Oscilloscope project. The functions included are:
clear_region - clear a region of the display
plot_char - output a character
plot_hline - draw a horizontal line
plot_string - output a string
plot_vline - draw a vertical line
The local functions included are:
none
The locally global variable definitions included are:
none
Revision History
3/8/94 Glen George Initial revision.
3/13/94 Glen George Updated comments.
3/13/94 Glen George Simplified code in plot_string function.
3/17/97 Glen George Updated comments.
3/17/97 Glen George Change plot_char() and plot_string() to use
enum char_style instead of an int value.
5/27/08 Glen George Change plot_char() to explicitly declare the
size of the external array to avoid linker
errors.
*/
/* library include files */
/* none */
/* local include files */
#include "interfac.h"
#include "scopedef.h"
#include "lcdout.h"
/*
clear_region
Description: This function clears the passed region of the display.
The region is described by its upper left corner pixel
coordinate and the size (in pixels) in each dimension.
Arguments: x_ul (int) - x coordinate of upper left corner of the
region to be cleared.
y_ul (int) - y coordinate of upper left corner of the
region to be cleared.
x_size (int) - horizontal size of the region.
y_size (int) - vertical size of the region.
Return Value: None.
Input: None.
Output: A portion of the screen is cleared (set to PIXEL_BGND).
Error Handling: No error checking is done on the coordinates.
Algorithms: None.
Data Structures: None.
Global Variables: None.
Author: Glen George
Last Modified: Mar. 8, 1994
*/
void clear_region(int x_ul, int y_ul, int x_size, int y_size)
{
/* variables */
int x; /* x coordinate to clear */
int y; /* y coordinate to clear */
/* loop, clearing the display */
for (x = x_ul; x < (x_ul + x_size); x++) {
for (y = y_ul; y < (y_ul + y_size); y++) {
/* clear this pixel */
plot_pixel(x, y, PIXEL_BGND);
}
}
/* done clearing the display region - return */
return;
}
/*
plot_hline
Description: This function draws a horizontal line from the passed
position for the passed length. The line is always drawn
with the color PIXEL_GREEN. The position (0,0) is the
upper left corner of the screen.
Arguments: start_x (int) - starting x coordinate of the line.
start_y (int) - starting y coordinate of the line.
length (int) - length of the line (positive for a line
to the "right" and negative for a line to
the "left").
Return Value: None.
Input: None.
Output: A horizontal line is drawn at the specified position.
Error Handling: No error checking is done on the coordinates.
Algorithms: None.
Data Structures: None.
Global Variables: None.
Author: Glen George
Last Modified: Mar. 7, 1994
*/
void plot_hline(int start_x, int start_y, int length)
{
/* variables */
int x; /* x position while plotting */
int init_x; /* starting x position to plot */
int end_x; /* ending x position to plot */
/* check if a line to the "right" or "left" */
if (length > 0) {
/* line to the "right" - start at start_x, end at start_x + length */
init_x = start_x;
end_x = start_x + length;
}
else {
/* line to the "left" - start at start_x + length, end at start_x */
init_x = start_x + length;
end_x = start_x;
}
/* loop, outputting points for the line (always draw to the "right") */
for (x = init_x; x < end_x; x++)
/* plot a point of the line */
plot_pixel(x, start_y, PIXEL_GREEN);
/* done plotting the line - return */
return;
}
/*
plot_vline
Description: This function draws a vertical line from the passed
position for the passed length. The line is always drawn
with the color PIXEL_GREEN. The position (0,0) is the
upper left corner of the screen.
Arguments: start_x (int) - starting x coordinate of the line.
start_y (int) - starting y coordinate of the line.
length (int) - length of the line (positive for a line
going "down" and negative for a line
going "up").
Return Value: None.
Input: None.
Output: A vertical line is drawn at the specified position.
Error Handling: No error checking is done on the coordinates.
Algorithms: None.
Data Structures: None.
Global Variables: None.
Author: Glen George
Last Modified: Mar. 7, 1994
*/
void plot_vline(int start_x, int start_y, int length)
{
/* variables */
int y; /* y position while plotting */
int init_y; /* starting y position to plot */
int end_y; /* ending y position to plot */
/* check if an "up" or "down" line */
if (length > 0) {
/* line going "down" - start at start_y, end at start_y + length */
init_y = start_y;
end_y = start_y + length;
}
else {
/* line going "up" - start at start_y + length, end at start_y */
init_y = start_y + length;
end_y = start_y;
}
/* loop, outputting points for the line (always draw "down") */
for (y = init_y; y < end_y; y++)
/* plot a point of the line */
plot_pixel(start_x, y, PIXEL_GREEN);
/* done plotting the line - return */
return;
}
/*
plot_char
Description: This function outputs the passed character to the LCD
screen at passed location. The passed location is given
as a character position with (0,0) being the upper left
corner of the screen. The character can be drawn in
"normal video" (black on white) or "reverse video" (white
on black).
Arguments: pos_x (int) - x coordinate (in character
cells) of the character.
pos_y (int) - y coordinate (in character
cells) of the character.
c (char) - the character to plot.
style (enum char_style) - style with which to plot the
character (NORMAL or REVERSE).
Return Value: None.
Input: None.
Output: A character is output to the LCD screen.
Error Handling: No error checking is done on the coordinates or the
character (to ensure there is a bit pattern for it).
Algorithms: None.
Data Structures: The character bit patterns are stored in an external
array.
Global Variables: None.
Author: Glen George
Last Modified: May 27, 2008
*/
void plot_char(int pos_x, int pos_y, char c, enum char_style style)
{
/* variables */
/* pointer to array of character bit patterns */
extern const unsigned char char_patterns[(VERT_SIZE - 1) * 128];
int bits; /* a character bit pattern */
int col; /* column loop index */
int row; /* character row loop index */
int x; /* x pixel position for the character */
int y; /* y pixel position for the character */
/* setup the pixel positions for the character */
x = pos_x * HORIZ_SIZE;
y = pos_y * VERT_SIZE;
/* loop outputting the bits to the screen */
for (row = 0; row < VERT_SIZE; row++) {
/* get the character bits for this row from the character table */
if (row == (VERT_SIZE - 1))
/* last row - blank it */
bits = 0;
else
/* in middle of character, get the row from the bit patterns */
bits = char_patterns[(c * (VERT_SIZE - 1)) + row];
/* take care of "normal/reverse video" */
if (style == REVERSE)
/* invert the bits for "reverse video" */
bits = ~bits;
/* get the bits "in position" (high bit is output first) */
bits <<= (8 - HORIZ_SIZE);
/* now output the row of the character, pixel by pixel */
for (col = 0; col < HORIZ_SIZE; col++) {
/* output this pixel in the appropriate color */
if ((bits & 0x80) == 0)
/* blank pixel - output in PIXEL_BGND */
plot_pixel(x + col, y, PIXEL_BGND);
else
/* "on" pixel - output in PIXEL_GREEN */
plot_pixel(x + col, y, PIXEL_GREEN);
/* shift the next bit into position */
bits <<= 1;
}
/* next row - update the y position */
y++;
}
/* all done, return */
return;
}
/*
plot_string
Description: This function outputs the passed string to the LCD screen
at passed location. The passed location is given as a
character position with (0,0) being the upper left corner
of the screen. There is no line wrapping, so the entire
string must fit on the passed line (pos_y). The string
can be drawn in "normal video" (black on white) or
"reverse video" (white on black).
Arguments: pos_x (int) - x coordinate (in character
cells) of the start of the
string.
pos_y (int) - y coordinate (in character
cells) of the start of the
string.
s (const char *) - the string to output.
style (enum char_style) - style with which to plot
characters of the string.
Return Value: None.
Input: None.
Output: A string is output to the LCD screen.
Error Handling: No checking is done to insure the string is fully on the
screen (the x and y coordinates and length of the string
are not checked).
Algorithms: None.
Data Structures: None.
Global Variables: None.
Author: Glen George
Last Modified: Mar. 17, 1997
*/
void plot_string(int pos_x, int pos_y, const char *s, enum char_style style)
{
/* variables */
/* none */
/* loop, outputting characters from string s */
while (*s != '\0')
/* output this character and move to the next character and screen position */
plot_char(pos_x++, pos_y, *s++, style);
/* all done, return */
return;
}
\end{lstlisting}
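Since every screen access in the module funnels through \texttt{plot\_pixel()}, \texttt{lcdout.c} can be smoke-tested off-target by linking it against a stub that records pixels in an in-memory framebuffer. The harness below is a sketch under that assumption: the real \texttt{plot\_pixel()} signature and the \texttt{PIXEL\_*} values are set by \texttt{interfac.h} and \texttt{scopedef.h} (not shown), so the stand-in choices here must be kept consistent with whatever those headers define. Compiled together with \texttt{lcdout.c} and \texttt{char57.c}, it should print the test word twice, the second copy in reverse video.
\begin{lstlisting}[language=C]
/* host_test.c - off-target smoke test for lcdout.c (sketch only;     */
/* assumes plot_pixel(int, int, int) and integer PIXEL_* constants    */
/* matching those in interfac.h/scopedef.h, which are not shown)      */

#include <stdio.h>
#include "lcdout.h"

#define FB_WIDTH   120
#define FB_HEIGHT   32

static unsigned char fb[FB_HEIGHT][FB_WIDTH];   /* fake framebuffer */

/* stub standing in for the real display driver routine */
void plot_pixel(int x, int y, int color)
{
    /* clip to the fake framebuffer and record the color */
    if ((x >= 0) && (x < FB_WIDTH) && (y >= 0) && (y < FB_HEIGHT))
        fb[y][x] = (unsigned char) color;
}

int main(void)
{
    /* variables */
    int x;    /* framebuffer column index */
    int y;    /* framebuffer row index    */

    /* exercise the module: clear, then draw text both ways */
    clear_region(0, 0, FB_WIDTH, FB_HEIGHT);
    plot_string(0, 0, "SCOPE", NORMAL);
    plot_string(0, 1, "SCOPE", REVERSE);

    /* dump the framebuffer as ASCII art (assumes PIXEL_BGND is 0) */
    for (y = 0; y < FB_HEIGHT; y++) {
        for (x = 0; x < FB_WIDTH; x++)
            putchar(fb[y][x] ? '#' : '.');
        putchar('\n');
    }

    return 0;
}
\end{lstlisting}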
\subsection{char57}
\begin{lstlisting}[language=C]
/****************************************************************************/
/* */
/* CHAR57 */
/* 5x7 Dot Matrix Codes */
/* Digital Oscilloscope Project */
/* EE/CS 52 */
/* */
/****************************************************************************/
/*
This file contains a table of dot matrix patterns for vertically scanned
5x7 characters. The table entries are in ASCII order with 7 bytes per
character. The table starts with 32 special characters (mostly blank
characters) then space, the start of the printable ASCII character set.
The table is called char_patterns. In each byte (horizontal row) the
leftmost pixel is given by bit 4 and the rightmost by bit 0.
Revision History
5/27/08 Glen George Initial revision (from 3/10/95 version of
char57.asm).
*/
/* library include files */
/* none */
/* local include files */
/* none */
/* the character pattern table */
const unsigned char char_patterns[] = {
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x00) */
0x04, 0x0E, 0x15, 0x04, 0x04, 0x04, 0x04, /* up arrow (0x01) */
0x04, 0x04, 0x04, 0x04, 0x15, 0x0E, 0x04, /* down arrow (0x02) */
0x00, 0x04, 0x08, 0x1F, 0x08, 0x04, 0x00, /* left arrow (0x03) */
0x00, 0x11, 0x11, 0x11, 0x1B, 0x14, 0x10, /* greek u (mu) (0x04) */
0x00, 0x04, 0x02, 0x1F, 0x02, 0x04, 0x00, /* right arrow (0x05) */
0x00, 0x11, 0x0A, 0x04, 0x0A, 0x11, 0x00, /* multiply symbol (0x06) */
0x00, 0x04, 0x00, 0x1F, 0x00, 0x04, 0x00, /* divide symbol (0x07) */
0x04, 0x04, 0x1F, 0x04, 0x04, 0x00, 0x1F, /* plus/minus symbol (0x08) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x09) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x0A) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x0B) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x0C) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x0D) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x0E) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x0F) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x10) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x11) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x12) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x13) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x14) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x15) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x16) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x17) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x18) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x19) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x1A) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x1B) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x1C) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x1D) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x1E) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UNUSED (0x1F) */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* space (0x20) */
0x04, 0x04, 0x04, 0x04, 0x04, 0x00, 0x04, /* ! */
0x0A, 0x0A, 0x0A, 0x00, 0x00, 0x00, 0x00, /* " */
0x0A, 0x0A, 0x1F, 0x0A, 0x1F, 0x0A, 0x0A, /* # */
0x04, 0x0F, 0x14, 0x0E, 0x05, 0x1E, 0x04, /* $ */
0x18, 0x19, 0x02, 0x04, 0x08, 0x13, 0x03, /* % */
0x08, 0x14, 0x14, 0x08, 0x15, 0x12, 0x0D, /* & */
0x0C, 0x0C, 0x08, 0x10, 0x00, 0x00, 0x00, /* ' */
0x02, 0x04, 0x08, 0x08, 0x08, 0x04, 0x02, /* ( */
0x08, 0x04, 0x02, 0x02, 0x02, 0x04, 0x08, /* ) */
0x04, 0x15, 0x0E, 0x1F, 0x0E, 0x15, 0x04, /* * */
0x00, 0x04, 0x04, 0x1F, 0x04, 0x04, 0x00, /* + */
0x00, 0x00, 0x00, 0x0C, 0x0C, 0x08, 0x10, /* , */
0x00, 0x00, 0x00, 0x1F, 0x00, 0x00, 0x00, /* - */
0x00, 0x00, 0x00, 0x00, 0x00, 0x0C, 0x0C, /* . */
0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x00, /* / */
0x0E, 0x11, 0x13, 0x15, 0x19, 0x11, 0x0E, /* 0 */
0x04, 0x0C, 0x04, 0x04, 0x04, 0x04, 0x0E, /* 1 */
0x0E, 0x11, 0x01, 0x0E, 0x10, 0x10, 0x1F, /* 2 */
0x0E, 0x11, 0x01, 0x06, 0x01, 0x11, 0x0E, /* 3 */
0x02, 0x06, 0x0A, 0x12, 0x1F, 0x02, 0x02, /* 4 */
0x1F, 0x10, 0x1E, 0x01, 0x01, 0x11, 0x0E, /* 5 */
0x06, 0x08, 0x10, 0x1E, 0x11, 0x11, 0x0E, /* 6 */
0x1F, 0x01, 0x02, 0x04, 0x08, 0x10, 0x10, /* 7 */
0x0E, 0x11, 0x11, 0x0E, 0x11, 0x11, 0x0E, /* 8 */
0x0E, 0x11, 0x11, 0x0F, 0x01, 0x02, 0x0C, /* 9 */
0x00, 0x0C, 0x0C, 0x00, 0x0C, 0x0C, 0x00, /* : */
0x0C, 0x0C, 0x00, 0x0C, 0x0C, 0x08, 0x10, /* ; */
0x02, 0x04, 0x08, 0x10, 0x08, 0x04, 0x02, /* < */
0x00, 0x00, 0x1F, 0x00, 0x1F, 0x00, 0x00, /* = */
0x08, 0x04, 0x02, 0x01, 0x02, 0x04, 0x08, /* > */
0x0E, 0x11, 0x01, 0x02, 0x04, 0x00, 0x04, /* ? */
0x0E, 0x11, 0x01, 0x0D, 0x15, 0x15, 0x0E, /* @ */
0x04, 0x0A, 0x11, 0x11, 0x1F, 0x11, 0x11, /* A */
0x1E, 0x09, 0x09, 0x0E, 0x09, 0x09, 0x1E, /* B */
0x0E, 0x11, 0x10, 0x10, 0x10, 0x11, 0x0E, /* C */
0x1E, 0x09, 0x09, 0x09, 0x09, 0x09, 0x1E, /* D */
0x1F, 0x10, 0x10, 0x1C, 0x10, 0x10, 0x1F, /* E */
0x1F, 0x10, 0x10, 0x1C, 0x10, 0x10, 0x10, /* F */
0x0F, 0x10, 0x10, 0x13, 0x11, 0x11, 0x0F, /* G */
0x11, 0x11, 0x11, 0x1F, 0x11, 0x11, 0x11, /* H */
0x0E, 0x04, 0x04, 0x04, 0x04, 0x04, 0x0E, /* I */
0x01, 0x01, 0x01, 0x01, 0x01, 0x11, 0x0E, /* J */
0x11, 0x12, 0x14, 0x18, 0x14, 0x12, 0x11, /* K */
0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x1F, /* L */
0x11, 0x1B, 0x15, 0x15, 0x11, 0x11, 0x11, /* M */
0x11, 0x19, 0x15, 0x13, 0x11, 0x11, 0x11, /* N */
0x0E, 0x11, 0x11, 0x11, 0x11, 0x11, 0x0E, /* O */
0x1E, 0x11, 0x11, 0x1E, 0x10, 0x10, 0x10, /* P */
0x0E, 0x11, 0x11, 0x11, 0x15, 0x12, 0x0D, /* Q */
0x1E, 0x11, 0x11, 0x1E, 0x14, 0x12, 0x11, /* R */
0x0E, 0x11, 0x10, 0x0E, 0x01, 0x11, 0x0E, /* S */
0x1F, 0x04, 0x04, 0x04, 0x04, 0x04, 0x04, /* T */
0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x0E, /* U */
0x11, 0x11, 0x11, 0x0A, 0x0A, 0x04, 0x04, /* V */
0x11, 0x11, 0x11, 0x11, 0x15, 0x1B, 0x11, /* W */
0x11, 0x11, 0x0A, 0x04, 0x0A, 0x11, 0x11, /* X */
0x11, 0x11, 0x0A, 0x04, 0x04, 0x04, 0x04, /* Y */
0x1F, 0x01, 0x02, 0x04, 0x08, 0x10, 0x1F, /* Z */
0x0E, 0x08, 0x08, 0x08, 0x08, 0x08, 0x0E, /* [ */
0x00, 0x10, 0x08, 0x04, 0x02, 0x01, 0x00, /* \ */
0x0E, 0x02, 0x02, 0x02, 0x02, 0x02, 0x0E, /* ] */
0x04, 0x0A, 0x11, 0x00, 0x00, 0x00, 0x00, /* ^ */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1F, /* _ */
0x06, 0x06, 0x04, 0x02, 0x00, 0x00, 0x00, /* ` */
0x00, 0x00, 0x0E, 0x01, 0x0F, 0x11, 0x0F, /* a */
0x10, 0x10, 0x16, 0x19, 0x11, 0x19, 0x16, /* b */
0x00, 0x00, 0x0E, 0x11, 0x10, 0x11, 0x0E, /* c */
0x01, 0x01, 0x0D, 0x13, 0x11, 0x13, 0x0D, /* d */
0x00, 0x00, 0x0E, 0x11, 0x1F, 0x10, 0x0E, /* e */
0x02, 0x05, 0x04, 0x0E, 0x04, 0x04, 0x04, /* f */
0x0D, 0x13, 0x13, 0x0D, 0x01, 0x11, 0x0E, /* g */
0x10, 0x10, 0x16, 0x19, 0x11, 0x11, 0x11, /* h */
0x04, 0x00, 0x0C, 0x04, 0x04, 0x04, 0x0E, /* i */
0x01, 0x00, 0x01, 0x01, 0x01, 0x11, 0x0E, /* j */
0x10, 0x10, 0x12, 0x14, 0x18, 0x14, 0x12, /* k */
0x0C, 0x04, 0x04, 0x04, 0x04, 0x04, 0x0E, /* l */
0x00, 0x00, 0x1A, 0x15, 0x15, 0x15, 0x15, /* m */
0x00, 0x00, 0x16, 0x19, 0x11, 0x11, 0x11, /* n */
0x00, 0x00, 0x0E, 0x11, 0x11, 0x11, 0x0E, /* o */
0x16, 0x19, 0x11, 0x19, 0x16, 0x10, 0x10, /* p */
0x0D, 0x13, 0x11, 0x13, 0x0D, 0x01, 0x01, /* q */
0x00, 0x00, 0x16, 0x19, 0x10, 0x10, 0x10, /* r */
0x00, 0x00, 0x0F, 0x10, 0x0E, 0x01, 0x1E, /* s */
0x04, 0x04, 0x1F, 0x04, 0x04, 0x05, 0x02, /* t */
0x00, 0x00, 0x11, 0x11, 0x11, 0x13, 0x0D, /* u */
0x00, 0x00, 0x11, 0x11, 0x11, 0x0A, 0x04, /* v */
0x00, 0x00, 0x11, 0x11, 0x15, 0x15, 0x0A, /* w */
0x00, 0x00, 0x11, 0x0A, 0x04, 0x0A, 0x11, /* x */
0x11, 0x11, 0x11, 0x0F, 0x01, 0x11, 0x0E, /* y */
0x00, 0x00, 0x1F, 0x02, 0x04, 0x08, 0x1F, /* z */
0x02, 0x04, 0x04, 0x08, 0x04, 0x04, 0x02, /* { */
0x04, 0x04, 0x04, 0x00, 0x04, 0x04, 0x04, /* | */
0x08, 0x04, 0x04, 0x02, 0x04, 0x04, 0x08, /* } */
0x08, 0x15, 0x02, 0x00, 0x00, 0x00, 0x00, /* ~ */
0x0A, 0x15, 0x0A, 0x15, 0x0A, 0x15, 0x0A /* DEL (0x7F) */
};
\end{lstlisting}
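\texttt{plot\_char()} in \texttt{lcdout.c} indexes this table as \texttt{char\_patterns[c * (VERT\_SIZE - 1) + row]}, i.e.\ seven bytes per ASCII code starting from 0x00, with bit 4 of each byte as the leftmost of the five pixel columns. The short host-side sketch below, compiled together with \texttt{char57.c}, decodes one glyph that way to make the layout concrete.
\begin{lstlisting}[language=C]
/* glyph_demo.c - decode one char_patterns entry (illustrative sketch) */

#include <stdio.h>

extern const unsigned char char_patterns[];

int main(void)
{
    /* variables */
    int           row;     /* glyph row index           */
    int           col;     /* pixel column bit position */
    unsigned char bits;    /* one row of the glyph      */

    /* seven bytes per character in ASCII order, so the glyph for 'A' */
    /* starts at offset 'A' * 7, the same indexing plot_char() uses   */
    for (row = 0; row < 7; row++) {
        bits = char_patterns['A' * 7 + row];
        /* bit 4 is the leftmost of the five pixel columns */
        for (col = 4; col >= 0; col--)
            putchar(((bits >> col) & 1) ? '#' : '.');
        putchar('\n');
    }

    /* expected output:
         ..#..
         .#.#.
         #...#
         #...#
         #####
         #...#
         #...#                                                         */
    return 0;
}
\end{lstlisting}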
| {
"alphanum_fraction": 0.4781194207,
"avg_line_length": 38.3637747336,
"ext": "tex",
"hexsha": "4149369ac721679fd2cae548ef8888bc6a583857",
"lang": "TeX",
"max_forks_count": 29,
"max_forks_repo_forks_event_max_datetime": "2022-03-11T13:42:38.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-04-07T17:35:24.000Z",
"max_forks_repo_head_hexsha": "35b932f87fee8d7c1d1176fb03251e897a8c22c3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "agural/FPGA-Oscilloscope",
"max_forks_repo_path": "Documentation/Full/code_disp.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "35b932f87fee8d7c1d1176fb03251e897a8c22c3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "agural/FPGA-Oscilloscope",
"max_issues_repo_path": "Documentation/Full/code_disp.tex",
"max_line_length": 86,
"max_stars_count": 64,
"max_stars_repo_head_hexsha": "35b932f87fee8d7c1d1176fb03251e897a8c22c3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "agural/FPGA-Oscilloscope",
"max_stars_repo_path": "Documentation/Full/code_disp.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-18T22:37:46.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-03-13T12:46:36.000Z",
"num_tokens": 8953,
"size": 25205
} |
\RequirePackage{fix-cm}
\documentclass[a4paper, 12pt]{book}
\usepackage[utf8]{inputenc}
\usepackage{authblk}
\usepackage{setspace}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{amssymb}
\usepackage{geometry}
\usepackage{fontspec}
\geometry{
a4paper,
total={170mm,257mm},
left=20mm,
top=20mm,
}
\title{Those Who Were Once People}
\author{Liam Gardner}
\date{\today}
\doublespacing
\newfontface{\cuneiform}[Scale=MatchUppercase]{SantakkuM}
\DeclareTextFontCommand{\textcuneiform}{\cuneiform}
\newcommand\tab[1][1cm]{\hspace*{#1}}
\DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it}
\DeclareFontFamily{OT1}{pzc}{}
\DeclareFontShape{OT1}{pzc}{m}{it}{<-> s * [0.900] pzcmi7t}{}
\begin{document}
\newcommand{\AmaGi}{$\stackrel{\hbox{\fontsize{15}{60}\selectfont ama-gi}}{\hbox{\fontsize{30}{60}\selectfont {\symbol{"120BC}\symbol{"12104}}}}$}
\maketitle
\section*{Part 1}
\tab
``That’s all everyone’s been talking about for the last four months!'' I say with more annoyance in my voice than initially intended. ``Even if that kid said he performed the ritual, that doesn’t mean it’s true. He’s probably lying about it.''
\newline
\tab
``That kid was missing for an entire month. He was found with an arm and eye gone. You really think he’d have a reason to lie?''
\newline
\tab
``The ritual itself is full of inconsistencies. Why would you inscribe a Sumerian word onto a stone, then pray to a Roman god?'' I reply. These conversations are getting increasingly annoying. ``What’s the point of it being an old building anyway? Or why does it have to be at midnight? Shouldn’t a time god be able to move freely throughout time?''
\newline
\tab
He answers with a smug annoyance in his voice. ``Janus is the god of doorways and passages, idiot. He’s not a time god.''
\newline
\tab
I sigh, realizing why plot-holes like this manage to continue to exist. ``Janus'', I begin reading from the Wikipedia page of my phone, ``is the god of beginnings, gates, transitions, time, duality, doorways, passages, and endings.''
\newline
\tab
``If you’re so confident that the ritual is fake, why don’t you do it? The school used to be an old church, right?''
\newline
\tab
I wonder how much trouble I’d be in if my parents figured out that I’m spending the night at school performing a weird ritual. ``Yeah, yeah, fine. I’ll prove you wrong. You owe me \$50 if it doesn’t work.''
\newline
\tab
He looks at me in shock. ``For what\textinterrobang'' he almost shouts.
\newline
\tab
I turn to him before entering my next class. ``For wasting my time.''
\newline
\tab
Annoyingly, he follows me in. ``How am I supposed to get that money?'' He asks in a begging voice that only serves to further infuriate me.
\newline
\tab
``If you’re so confident it works, you don’t have to worry about getting the money,'' I retort, grinning. The teacher walks into the classroom, causing him to leave so as not to attract unwanted attention.
\newline
\tab
Classes continue on as usual. People spend their conversations purely on the topic of that urban legend popularized on the internet. The day goes on and I do everything in my power to avoid these types of conversations, which typically leads to avoiding most people entirely. Unfortunately, that particular conversation between classes led to a decent chunk of my classmates thinking I was bragging about me going to perform the ritual. I guess I don’t really have much of a choice about backing out now.
\newline
\tab
The day moves onward as more and more people find out about how I’m going to perform the ritual. I hear some people talk about someone else also going to do it as well, but I don’t really care enough to listen in. The day ends and I go home to pack my things. On the bus, a younger student hands me a stone with letters carved into it. ``For the ritual,'' he says quietly, embarrassed. ``Good luck.'' I nod, staring at the cuneiform inscription, \textcuneiform{\AmaGi}, the Sumerian word for freedom. I almost begin to laugh. The more I think about it, the more it seems like something an edgy teenager would try in order to run away from their life and responsibilities.
\newline
\tab
I get home, enter the kitchen, and grab a pack of candles and a lighter. Heading up to my room, I grab a few blankets to sleep on, choosing to ignore taking a pillow for the sake of remaining relatively light. Chances are, it’ll be easier to stay at school for the night rather than to bus all the way back home. It’ll probably also be safer as well. Heading towards the bus stop, the thought slowly creeps up in the back of my mind that the ritual might actually work. I take a seat on an almost vacant bus and pull up the blog post on my phone.
\newline
\tab
\textit{
Place the candles in a triangle, with the stone in the middle and say the words ``O god of passage, strip us of our sins and bring us to the free world!''
}
\newline
\tab
Reading over it, it also says there’s no way back. A wave of relief washes over me. There’s no way to know if it’s impossible to come back or not. You either don’t know, because you haven’t tried the ritual yourself, or you know you can come back, because you tried it and returned to make the blog post. The more inconsistencies I find in this, the more reassured I feel about my success.
\newline
\tab
Walking into the main hall of the school, I find one other student who probably shares my intention. ``Andraste!'' I call out to her, walking over to see what she’s doing.
\newline
\tab
``Oh,'' she replies, ``I didn’t think you’d show up.''
\newline
\tab
``I didn’t really have much of a choice. This is probably the most effective way to get rid of the legend, right? Or at least in our school.''
\newline
\tab
``Yeah, probably. It’s only 21:00, what do you want to do for three hours?'' she asks. If we share a common goal, we may as well work together, not that it’d help if I stated that. My stomach slowly starts to silently tell me that I haven’t eaten in a while.
\newline
\tab
``Have you had dinner yet?'' I ask, getting increasingly hungry as we speak.
\newline
\tab
``Imagine that, the last thing that happens to me in this world is being asked out on a date!'' she replies almost laughing. ``No, I haven’t eaten yet.''
\newline
\tab
I choose to ignore that comment, not wanting to start an argument. ``Let’s get burgers then,'' I say, ``I haven’t eaten since lunch.'' She nods and we head out to get dinner. Aside from a few staff, the burger joint is completely empty. It feels kind of eerie. There should be at least a few more people here at this time, right? We order our food and sit down. ``What do you think of this?'' I ask, attempting to distract myself from the unnerving atmosphere.
\newline
\tab
``Zius, you don’t really believe in all this, do you?'' she asks, seeming almost concerned.
\newline
\tab
``My name is Ziustra, not Zius,'' I reply with an almost instinctual hostility. After a couple seconds, I continue in a calmer manner. ``No, I don’t believe in any of this. I just hate that it hasn’t died off yet.''
\newline
\tab
She smiles and nods at my response. ``Good, we’re in the same boat then.'' I could’ve probably told her that before we got food...
\newline
\tab
We walk back to the school, talking about all the inconsistencies we found in the ritual. Mixing of religions, not being able to get back, eventually, we found out that the post was copied from another, less-popular forum. Slowly but surely, time passes. Both of us get more and more anxious as we get closer to midnight. Neither of us bring up the possibility of the ritual being true, especially with the thought of being unable to return. The clock shifts to 23:55. ``I guess we should get started,'' I say with a mild tinge of regret in my voice.
\newline
\tab
I pull out three candles from my bag and place them in a triangle about 30 centimetres apart from each other. I reach further into my bag and pull out the inscribed stone given to me by that kid on the bus. Placing the stone in the middle of the triangle, inscription-side down, I look back at the clock to see how much time we have. 23:57. I take out the lighter -- which in hindsight shouldn’t have been placed around blankets and candles -- and light the candles. We wait for the clock to reach midnight before beginning our prayer. 00:00. ``O god of passage, strip us of our sins and bring us into the free world.'' One second has passed. Two seconds have passed. Three, five, ten, thirty; nothing happens. We look over at each other, holding our breath. I practically expect some sort of jump-scare, but nothing comes. We breathe a sigh of relief. We end up spending the night at school in makeshift sleeping bags made from the blankets I brought. I leave the candles as they are just to provide some light since the school’s lights are off.
\newline
\tab
When I wake up, Andraste’s sitting around the now-extinguished candles. I get up, slowly, and she snaps her head in my direction, looking at me like I’m a ghost. I begin to think that maybe I did or said something in my sleep.
\newline
\tab
There’s no sound. No hums from the air conditioner or lights. No insects or wind, nothing. I can hear the sounds of my breathing and heartbeat so clearly it feels like I have control of their pacing. I feel like I have to consciously think to keep my heart moving. ``Do you... hear anything?'' I ask slowly. Even the sound of my own voice feels off.
\newline
\tab
It takes her a moment to reply, ``Nothing.'' I nod slowly.
\newline
\tab
We pack up the stuff into our bags and begin to walk towards the exit. ``There’s something... off,'' I begin speaking without really thinking, ``not the silence but --''
\newline
\tab
``The atmosphere,'' she states in a grave, serious tone before turning back to me. Her next words are the only thing I don’t want to hear. ``It worked.''
\newline
\tab
I laugh to myself. ``Looks like I’m not getting that \$50.'' She gives me a disgusted look but says nothing.
\newline
\tab
We performed the ritual in the corner of the school, on the second floor. It’s typically completely empty during the day, so just in case anyone else shared our thoughts, we wouldn’t see them. I’m sure there are even a couple students that didn’t know this place existed. In one of the main stairways lies confirmation that our ritual worked. A fully-clothed metal statue of a student looks towards the top-left of the stairway -- away from us -- with outstretched arms. ``I don’t think... that statue’s... manmade.''
\newline
\tab
Andraste’s words only serve to terrify me further. She’s probably right, but I don’t want to believe it. I say nothing. Looking at the statue, the metal itself seems so polished and clean that it could be a mirror if it weren’t for the shape.
\newline
\tab
We move closer towards the statue, half-expecting it to move. It doesn’t. ``Look!'' I say, pointing at its feet. ``A notebook.'' We haphazardly rush down the stairs. Looking back at the statue in fear, I get a clear look at its -- her -- face. The statue’s face looks like that of a crying girl. Her teardrops solidified onto her face, her eyes are slightly slanted downwards, and her face looks... worn? No, that’s not right. I can’t really describe it, but it only serves to make things even more terrifying. Andraste’s right, that thing doesn’t look manmade. Nobody could sculpt a face that... powerful.
\newline
\tab
She picks up the notebook and begins flipping through it. I move to stand behind her just as she begins reading the last entry of what we’re now figuring out is a diary.
\newline
\tab
\textit{
It started a month ago. Everyone thought it was just a flu, another strain of the influenza virus. Our first hint should’ve been the vaccines -- they didn’t work. A lot more people died, but that’s nothing in comparison to what’s happening now. To be honest, I’d rather be dead. Most people recovered on their own, it was really just the very young children and elderly who were at fatal risk. A week after people recovered, they would start turning into these metal statues. Nobody’s sure why, or even how, but we’ve discovered a few things:
}
\begin{enumerate}
\item \textit{The statues somehow block all technological communication. No radio, no wi-fi, nothing. Everything becomes distorted.}
\item \textit{The statues are virtually indestructible.}
\item \textit{The statues are not contagious and cannot spread the virus. I still wouldn’t go around touching them.}
\end{enumerate}
\tab
\textit{This is day 7 for me. On the off-hand chance that someone finds this, good luck.}
\newline
\tab
\begin{center}
\textit{April 11th, 2018.}
\end{center}
\tab
She drops the journal. ``That was two years ago,'' she says, looking at me.
\newline
\tab
I nod, picking up the journal and putting it in my bag. ``Let’s look around the school more.'' I say, trying to distract both her and me. She only nods, turning away from what once was a person and walking into the main hall of the school. There are more statues of students and teachers in the hallways and classrooms. Every statue either looks like it’s crying or about to cry. Part of me hopes that the journal was a prank or joke, but I know nobody has the ability to create something this... real. We walk into the main office; the receptionist is a crying statue. I walk over to the front desk and pick up a calendar. ``It looks like this all happened two years ago. At least that rules out time travel.'' I sigh. This means that time is synced with our world.
\newline
\tab
Andraste looks up at me. ``If we didn’t time travel, that means the virus has had time to die off, at least in this area. We aren’t immune, but chances are we’re relatively safe from it.'' I nod in agreement. Her stomach growls, the silence only serving to make the noise seem louder. She blushes in embarrassment.
\newline
\tab
``We need a food supply. If it’s replenishable, that’s probably better. We don’t know how...'' I trail off, not wanting to finish that thought.
\newline
\tab
``You’re right,'' she says, ``let’s check the school garden.''
\newline
\tab
``They grow a few edible things, but we’d still need a way to get meat, right?'' The garden wasn’t ever meant to be used as a food source in the first place. It was probably just meant as a way to judge the students’ level of maturity or responsibility. Either way, small amounts of food are better than nothing, so I keep my mouth shut.
\newline
\tab
``It’s still worth checking out.'' I nod, and we head over to the garden. The moment we walk into the courtyard, I pause. \textit{There’s something wrong.} She walks over to the plants and lifts up a leaf. ``These seem fresh. We should be able to eat these,'' she says, turning to me with what almost seems like excitement in her eyes.
\newline
\tab
``Hey,'' I say slowly. She looks at me quizzically. ``Who’s been tending to the garden?'' She lets go of the leaf, slowly walking towards me. ``Let’s get out of here,'' she says. She has that look in her eyes that makes me think she’s noticed something. I follow her inside, closing the door behind us. ``Did you see something?'' I ask, wondering why she’s so freaked out.
\newline
\tab
``Don’t panic,'' she starts. I can feel my face slowly becoming pale. ``You’re right. If it’s been two years since everyone’s turned to metal, those plants would’ve been overgrown. But they weren’t. The grass was cut, the plants were trimmed, everything was clean. Just like --''
\newline
\tab
``The school,'' I say, catching on to her train of thought. ``There’s someone else here.'' She nods. We don’t move for a few minutes, individually trying to come up with our own ideas of what to do next. ``We should stay in the school. We’ll have an easier time monitoring if anyone comes in or out. Also --''
\newline
\tab
She continues my thoughts. ``We’re a lot less likely to see someone before they see us if we’re outside. We’d be like sitting ducks.''
\newline
\tab
``For now though, we still need food. Let’s go back out into the courtyard. Try and pick out the ones near the back, and only get one or two from each plant.'' With that, we go back outside, trying to make as little noise as possible, while picking out as many vegetables as we can within the span of a few minutes before rushing back inside and placing down our goods.
\newline
\tab
Andraste sighs, ``We’re going to have to cook most of these,'' she says. Looking at our new food source, we’ve collected a few carrots, onions, potatoes, radishes, and leeks. I sigh as well.
\newline
\tab
``We should try the cafeteria,'' I say. She nods in agreement. ``We uprooted most of these. If whoever’s here goes to the garden, they’ll notice that we’re here. We should avoid the garden for the time being.''
\newline
\tab
``If the cafeteria’s anything like our school, the fridge will be empty. At least we’ll be able to cook these,'' she says. We walk towards the cafeteria, carefully taking note of the crying metal statues as well as looking out for any movement.
\newline
\tab
We get to the cafeteria, open the door to the other side of the counter and move towards the deep fryer. She flips the power switch, causing the machine to start making a loud air-conditioner-like hum. ``Shit! Shit! Shit! Shit! Shit!'' she shouts, hurrying to turn off the power. ``What the hell was that\textinterrobang''
\newline
\tab
``We’re so used to the background noise that being without it makes everything else seem louder. You noticed it when we first woke up too right? The sound of your heart beating so clearly it seems like you have to manually control it.''
\newline
\tab
``So what do we do now? Eat everything raw?'' Andraste shouts at me.
\newline
\tab
``We have no idea who the person taking care of this place is or what their intentions are. They might actually want to help rather than kill us.'' She seems doubtful of my words, but the ideas at least calmed her down. ``Turn it on, I’ll wait by the entrance while you cook. If anyone starts coming towards us, I’ll let you know.''
\newline
\tab
``And then?'' She replies annoyed. ``What’ll we do if someone comes for us? What if they’re \textit{not} friendly? How can we even \textit{think} of fighting someone if we don’t know what weapons they have? Someone comes, following the sounds, we’re preparing to stab them with kitchen knives while they have a fucking machine gun!''
\newline
\tab
``Where would they even get a machine gun from?'' I reply in a calm voice, which only serves to anger her more. ``This is the safest option we have aside from eating an onion like an apple. Let’s go prepare the food first, then we’ll turn on the deep fryer when we’re ready. I’ll wait by the door and tell you if I see anything. If I see any movement, we’ll leave the deep fryer on and hide in a closet. If they have guns, at least we’ll stand more of a chance that way.'' She doesn’t fight this argument, though I’m guessing more out of not wanting to fight than out of agreement, and we begin preparing our food.
\newline
\tab
``We could boil these and make a soup,'' she says after a few minutes of silent chopping and peeling. ``It’d probably be healthier too.''
\newline
\tab
I smile in agreement, relieved that I don’t have to watch out for a potential murderer just yet. ``Finish up here, I’ll get the stove and water ready.'' She turns back to chopping the last carrot as I fill a pot with two litres of water and place it on the stove. She comes over, pouring the chopped vegetables into the water as I turn on the element. ``Did we ever check the fridge?'' I ask after half-covering the pot with a lid.
\newline
\tab
``I don’t think so,'' Andraste replies, ``Why? There’s nothing there normally. If this is a mirror of our world, there’d be nothing in it now, right?''
\newline
\tab
``If there’s someone here, and everything has power, why wouldn’t they use the fridge to store food?'' Her eyes widen with realization, but not from what I told her.
\newline
\tab
She looks at me in that same way she did back at the garden. ``If there’s someone here, and they use the cafeteria as food storage, this is probably the place we’re most likely to find them, right?'' The water starts whistling, signifying the end of our wait. Andraste moves to turn off the stove. We carefully ladle our soup into two large bowls, grab spoons, and move behind a stairway at the other end of the hall. ``We may as well leave everything there, if they look at the garden, they’ll notice that we’re here anyway.''
\newline
\tab
``Everything we do leads to us getting closer and closer to being noticed, right? If we’re going to stay here, we should stay in a classroom. That way we can at least lock the door.''
\newline
\tab
She ponders for a moment as to what our next move should be. ``Let’s sleep in the science wing, it’s on the opposite side of the school, away from the garden and cafeteria.''
\newline
\tab
``If someone finds out we’re here, they’d search the entire school.'' I reply, unconvinced that we should avoid our previous locations. ``We should stay on the first floor, somewhere with a window we can escape out of and into the city if we’re found out.''
\newline
\tab
``The science wing should at least have supplies to make some sort of weaponry. We’d be more prepared for defence than we are now. Let’s at least stop by.'' I nod, mostly out of not wanting another argument. As we finish the soup, I begin to wonder what materials she’d want from the science wing. We make our way up the stairs and towards the other side of the school, passing by at least ten other metal statues. Entering the chemistry lab and walking into the room labelled \textit{No Students Allowed}, I turn to her and ask, ``what is it that you’re looking for?''
\newline
\tab
She opens a drawer filled with opaque containers. ``Iron oxide, aluminium powder, and magnesium.''
\newline
\tab
``For what exactly?'' I ask slowly.
\newline
\tab
``Didn’t you ever pay attention in science class? I’m making thermite!''
\newline
\tab
I back out of the \textit{No Students Allowed} room slowly. ``What are you going to use thermite for? I can’t see that being easily weaponized.''
\newline
\tab
``You know that thing some people do where they’ll draw a line of salt to prevent demons from walking in? Here’s the anti-human version,'' she replies, smirking.
\newline
\tab
``Did you forget the tiny fact that we’re also human? That’ll kill us too!'' I almost shout.
\newline
\tab
She turns to me, walking out and holding three containers. ``Hear me out,'' she starts. ``You were right. Sleeping up here makes us cornered. Having a window as an escape option is nice, but if we’re followed it becomes a test of stamina. If they have a gun, we automatically lose. Let’s assume they find us. The noise from trying to break down the door will wake us up and give us enough time to run. Then what? How do we make sure we aren’t followed? Between the window and door, we’ll draw a line of thermite that we can ignite if we hear someone. It’ll be bright enough to stop them from getting a good look at us as we run away.'' It seems a bit overkill, but I don’t argue against the idea.
\newline
\tab
``Sprinkler systems activate from heat and smoke detection, so thermite should be able to trigger that as well,'' I reply. ``I never took chemistry, though, so don’t expect me to help.''
\newline
\tab
She grins, as if hoping that I wouldn’t be able to help. ``Don’t worry, just get the room ready. I’m assuming you know which one we’ll be sleeping in,'' she says, focusing on sorting out how much of each material she’ll need.
\newline
\tab
``I’d rather not be alone with all the metallic statues,'' I admit, slightly embarrassed. ``There’s just something about the fact that they were all people. It’s kind of unnerving.''
\newline
\tab
She sighs, turning towards me and smiling. ``It’s alright. Sit over there.'' She says, pointing to the other end of the room. I smile back and make my way over as she begins to work her magic -- or I guess it’s more science. It takes her roughly 15 minutes to finish making a bucket-full of thermite.
\newline
\tab
I’m staring out the window as the sun begins to set. ``We should probably stay in 107,'' I say. ``It has a decently big window on the opposite corner as the door. The room itself is also at a size where the distance between the door and window is fairly large, but it isn’t big enough for us to use all of that thermite.''
\newline
\tab
She nods and we head down the stairway to room 107. We open the door and find two statues at the front of the room staring at a wall with semi-outstretched arms. Andraste turns to me. ``Are you going to be okay here?''
\newline
\tab
Her question catches me off guard. After a second, I respond. ``I’ll be fine. At least they’re not facing us.''
\newline
\tab
``Get the blankets ready. I’ll make the barrier.'' I chuckle at the word barrier being used to describe a thin line of thermite. I set up two blankets near the centre of the window. She turns back to me. ``You still have your lighter, right?''
\newline
\tab
I smile. ``Shouldn’t you have checked before making the thermite?'' I ask, only to get a glare telling me to answer the question. ``Yeah, I still have it,'' I reply, pulling it out of my bag to show her.
\newline
\tab
We lock the door to the room and get in our poorly made sleeping bags, laying there silent for 30 minutes. She’s not so close that I can hear her breathing or heartbeat, but she’s close enough that I can tell she’s still awake without having to turn my head. ``Are you okay?'' I ask softly.
\newline
\tab
``How could anyone be okay in this situation?'' She replies. In a softer, almost inaudible voice, she continues. ``I’m terrified.''
\newline
\tab
I almost reach out my hand but hold back the urge. ``So am I.'' I reply. ``I never thought that the ritual would actually work.''
\newline
\tab
``Yeah. I just wanted everyone to stop talking about it.'' She pauses. ``I wonder what they’re all doing right now?'' Her tone terrifies me. It makes her sound like she’s on the verge of crying. I can’t stop myself. I reach out and hold her hand -- more for my sake than hers. We stay like this in silence for as long as I retain consciousness.
\section*{Part 2}
\tab
I wake up before her, still holding her hand. I don’t want to wake her just yet, so I continue staring up at the ceiling. She gets up slowly, turning to me and smiling; still holding onto my hand. I smile back. ``How did you sleep?'' I ask. Her expression morphs from a smile to a frown, I begin to question if what I said was misleading. After a moment, I realize that the look on her face wasn’t of anger or disappointment, it was of fear. ``What’s wrong?'' I ask slowly. In the back of my mind I begin to wonder if someone came in and was just watching us sleep. We would’ve heard them break in, right? Unless... they have a key. They could just walk in without a care in the world.
\newline
\tab
``Turn around,'' she whispers. I don’t want to, but I don’t think I have much of a choice.
\newline
\tab
Gradually, I turn my head. A wave of relief washes over me, as I don’t see anyone else in sight. This wave evaporates as my gaze focuses on the statues. \textit{They’ve moved}. Rather than facing the walls, they’ve turned towards us with outstretched hands, crying. ``Did they...?'' the thought won’t finish. I don’t want it to finish. I don’t ever want to imagine the idea of it being true.
\newline
\tab
``Let’s go,'' Andraste almost whispers, probably recognizing my fear, or sharing it. I nod and we get up slowly, our eyes glued on the statues. They don’t move, continuing to reach towards the place where we slept. ``I’d rather not sleep here again tonight.''
\newline
\tab
We walk out of the room, closing the door on the way out and immediately looking back to see if the statues moved again. They didn’t. ``I really don’t want to be here right now,'' I say slowly. I’m not sure if going outside is safe or not, but currently, I want to be as far away from these statues as possible.
\newline
\tab
``Do you want to go outside?'' she asks hesitantly. There is no good answer to that question. We still have no idea where the other person is. We have no idea if they even came to the school yesterday, or if they’re here right now.
\newline
\tab
I nod. ``Let’s... let’s go outside,'' I say after a while. ``It’s probably risky, but with the statues everything has become risky.''
\newline
\tab
``It also gives us an opportunity to find other food sources as well,'' she says. ``The garden will need some time to grow back. I’d also rather not go to the cafeteria if possible -- at least for now. If someone is here, that’s the most likely place we’d find them, right?'' I nod, mostly to hurry up and leave the school.
\newline
\tab
We move towards the main exit. ``All the statues seemed to have turned around.'' I say as we pass by the main office.
\newline
\tab
``Wait,'' she says, suddenly coming to a stop. ``Let’s go back to where we got the journal. I think...'' she trails off.
\newline
\tab
``What, why?'' I respond. ``Let’s just leave the school for now, we can come back later today.''
\newline
\tab
``You don’t see it?'' she asks me, almost annoyed, as if I’ve missed something completely obvious. ``Where are all the statues facing?''
\newline
\tab
I turn around slowly, trying to recall where each statue was facing before we fell asleep. I genuinely don’t remember. Thinking back to the statue where we found the journal... ``They turn towards us when we sleep?''
\newline
\tab
``Probably.'' She seems way too calm for the situation. It doesn’t really help my mental state, thinking that I’m the only one of us freaking out.
\newline
\tab
``It was a virus that did this, right? If this happened after the symptoms went away, what caused them to all be crying? It’s more reasonable that their faces would turn after, right?''
\newline
\tab
``Well, maybe not reasonable, but considering they moved...'' Standing here posing questions about the statues isn’t getting anything done, but the fact that none of them have moved again so far is mildly reassuring. I’m starting to actually believe they can only move when we’re asleep.
\newline
\tab
We continue onward, leaving the school from the main hall. There are statues everywhere, all of them either crying or about to cry while facing towards the school -- towards us -- with outstretched arms. Something clicks in my head as they all stare towards us. I turn to Andraste, whose eyes tell me she’s thinking the same thing I am. ``Their arms are all reaching towards us, so what happens if they touch us?''
\newline
\tab
I pull out the journal and flip to the last entry. \textit{I still wouldn’t go around touching them}. The text stares back at me with an arrogance gained from either omniscience or disaffiliation. ``Did... did they know what happens if someone touches the statues? Did they know the statues move when --''
\newline
\tab
``No, they probably didn’t,'' Andraste cuts me off. ``This is just an idea, but what if they can only move when \textit{everyone’s} asleep. Outside of us, there’s probably only a few other people who are here, since the school seems to be taken care of. That would mean that the statues only have to wait for, let’s say, five people to be asleep, compared to seven billion. If I’m right, the people probably never would’ve realized that the statues could move, since there would always be a few people awake every few kilometres.'' She begins developing this idea, but something about it feels off.
\newline
\tab
``That doesn’t explain why they all turn towards us though.'' I reply. Talking about this is only making me more and more uncomfortable. ``Let’s look through the city.''
\newline
\tab
She looks at me, feeling as uncomfortable as I do, and nods in agreement. We leave what was once school property and begin to explore the city. The city is clean; perfectly. There’s no garbage in sight. Garbage cans and recycling bins are all empty, having no stains from garbage that might’ve once inhabited it. We continue walking, passing by an alleyway everyone used to avoid. It’s clean, perfectly. The sun’s light passes through it in ways I had never assumed possible. The places we were taught to avoid as children due to frequent assaults, along with occasional rapes and murders, now look like the rest of the city, purged of the stench of despair and regret. ``The city... it’s...'' she trails off.
\newline
\tab
``Clean,'' I complete. ``Everything’s clean.'' There’s something off about this place. I begin thinking out loud. ``There’s no way one or two people could keep an entire city clean like this. Things should be, at the very least, overgrown somewhere, right? For the city to be kept this clean... there would have to be a group of people actively keeping everything clean, but --''
\newline
\tab
``There isn’t.'' Andraste says, cutting off my thoughts.
\newline
\tab
``How do you know?'' I ask, wanting to confirm my assumption.
\newline
\tab
``Think about it. If there were people here, we would’ve seen them or heard at least one person, right? And if the statues can only move when everyone’s asleep, they wouldn’t have turned towards us, since other people would’ve been awake.''
\newline
\tab
``That’s assuming they can only move when everyone’s asleep though. We still don’t know if that’s true yet,'' I reply, unconvinced. No, that’s not right. Andraste’s probably right, but that only leaves one other possibility as to who’s been cleaning everything. I don’t want that to end up being true. What was that saying? \textit{Better the devil you know than the devil you don’t}.
\newline
\tab
``If there were others...'' she takes a deep breath before continuing, ``why do the statues all turn to us?'' I stop talking. There’s still a chance that there are other people here, but if there are, there’d be something different about them that makes the statues ignore them.
\newline
\tab
``Let’s go into that grocery store. There’s probably a few things we could salvage.'' I say, pointing.
\newline
\tab
``It’s been two years since anyone’s worked at the grocery store -- or worked anywhere -- chances are all the food’s expired,'' she says. ``Besides,'' her voice gets softer. ``There’s gonna be statues there as well. At least here we’re out in the open.''
\newline
\tab
I put a hand on her shoulder. ``Don’t worry,'' I say, ``we’ll be together.'' Her eyes tell me that did nothing to lessen her fear, but there really wasn’t anything else to say. It wouldn’t be fair to tell her that everything would be okay. We walk into the store. There are statues at every register, and some scattered throughout the rest of the store, but they all seem to be wearing employee uniforms. The only statues here are staff. I choose not to say anything. I’m sure she’s noticed by now, but she’s already unnerved. I walk over to the produce aisle with Andraste following quickly behind me. ``Nothing here looks --'' I pick up an apple and bring it closer to my face -- ``or smells expired.''
\newline
\tab
I take a bite out of the apple. Andraste screams at me. ``What the fuck do you think you’re doing\textinterrobang\, What if that was --'' I cut her off, not wanting her to worry about anything for the time being.
\newline
\tab
``It’s fine! Here, take a bite.'' I attempt to hand her the apple, but she moves around me and grabs her own.
\newline
\tab
``If there’s something wrong with this, I’ll kill you,'' she says playfully. I begin laughing. We sit down on the clean floor, laughing, eating, and enjoying ourselves in this aberrant world. I know it won’t last. When compared to survival, even without the thought of someone or something trying to kill us, having fun holds so little importance that there’s no point in even thinking about it.
\newline
\tab
I look at the sticker on the apple. \textit{Best Before Jan $1^{st}$, 2021}. I stop eating, causing Andraste to look at me quizzically. ``Is something wrong?'' she asks. Her tone almost lacks any recognition of the abnormality.
\newline
\tab
``Look at the expiry date,'' I say slowly.
\newline
\tab
Her head drops towards the apple. ``20...21?'' she asks -- more to herself than me. ``That can’t be real, right?'' She turns to me with a face of confusion and fear.
\newline
\tab
``It explains why they’re edible... kind of. Let’s go look around the store some more.'' I say, trying to take both my mind and hers off yet another anomaly, though I doubt this store will get any less anomalous.
\newline
\tab
We walk around the store some more, noting that every expiry date for every item is January 1st, 2021. Every item is arranged perfectly symmetrically. Nothing looks squished or haphazardly placed. All items seem like they were each individually placed with care. A couple of them even look like they should fall right off but remain as inanimate as the statues -- at least for now. We walk towards the meat section. Everything is perfect, it feels like -- ``A utopia...'' Andraste’s words cut me off. ``It feels like we’re alienated from a utopia.''
\newline
\tab
``Let’s grab some meat to bring back with us,'' I suggest.
\newline
\tab
She gives me a mildly suspicious look. ``Can you cook meat?'' she asks skeptically.
\newline
\tab
``I... uh... probably. How hard can it be?'' I reply with what I’m starting to see as withering hubris more than anything else.
\newline
\tab
``You don’t sound confident whatsoever,'' she spouts while giggling. It takes her a few seconds to regain her composure. ``What kind of meat do you want to get?''
\newline
\tab
``I don’t really know. Get something that’ll be easy to cut into strips, I guess,'' I reply. ``Do you want to try using the deep fryer?''
\newline
\tab
``I guess the risk isn’t nearly as big as we initially made it out to be,'' she says, more to herself than to me. ``It’s probably fine, right?'' she responds after a moment. Turning back towards the meat, she pulls a package of steak from the back. ``This works, right?'' I nod. We continue to walk around the rest of the store, now looking more for things to buy -- or I guess just take -- rather than scouting for new anomalies.
\newline
\tab
``Let’s head back now; that’ll give us some time to prepare the meat. We can head past the produce section on the way out to get some more vegetables to cook with.''
\newline
\tab
She turns to me with a look of dramatic shock on her face. ``Are you perhaps planning to actually cook something?''
\newline
\tab
There’s a fear behind her eyes that she’s trying desperately to hide. It’s almost as if she’s thinking \textit{if we pretend things are fine, everything’ll go back to normal}. I can’t really ignore that, especially now that I’ve seen it -- at least, I think. ``I can cook perfectly fine, thank you very much,'' I respond with equal theatricality. In her eyes, I can see it didn’t help. She saw my intent.
\newline
\tab
``Thanks,'' she whispers softly. I pretend not to hear it.
\newline
\tab
We walk back towards the entrance of the store, almost pausing at the cash register. ``I feel like we’re just blatantly stealing,'' I say as a surge of guilt washes over me.
\newline
\tab
Andraste turns to me. ``The statues block all forms of data transfer, right? Even if we had a credit card, I doubt it’d actually work. Even the barcode scanner probably doesn’t work.'' She grins. ``It’s not stealing if nobody finds out, right?''
\newline
\tab
``That’s not... Have you ever stolen something before?'' I ask as we walk past the cash register.
\newline
\tab
``You have.''
\newline
\tab
``No, I haven’t. When?''
\newline
\tab
``You stole that dead girl’s diary,'' she says teasingly. ``Reading someone else’s diary without their permission is like seven different kinds of red flags, you know.''
\newline
\tab
I grin. ``You should’ve told me that before I read your diary.''
\newline
\tab
She doesn’t respond past that. Part of me thinks that I might’ve gone too far. The rest of me wishes that were the reason she fell silent. Both of us realize that the teasing doesn’t help the situation. All we’re doing is distracting ourselves from something we can’t afford to be distracted from. We don’t have the luxury to truly feel normal right now. ``Let’s go back to the school.''
\newline
\tab
We make our way directly to the cafeteria and begin to prepare the food. Since there’s really nowhere to store the food, everything we took from the grocery store has to be eaten as soon as possible or it’ll go bad. ``Start chopping the meat into thin slices. I’ll start preparing the vegetables.''
\newline
\tab
``What are we making?'' she asks half sarcastically.
\newline
\tab
``I’m thinking we fry almost everything and just eat it like a wrap.''
\newline
\tab
``A wrap? We didn’t get bread. How are we meant to eat it like a wrap?''
\newline
\tab
I turn the faucet on, letting the water rush out and onto my fingers. ``That’s what the lettuce is for.'' All of my confidence is met with a glare of pure skepticism and doubt.
\newline
\tab
The meat is cut into thin strips that look almost like bacon. We dunk them in the deep fryer along with small chunks of vegetables -- peppers, onions, carrots, and a few other things. After waiting a minute while everything cooks, we fish them out and put them on a leaf of lettuce. It’s not as good as I expected. Oil pours out of the food, burning the inside of my mouth as I try not to yelp in pain. Andraste seems to have no problem eating it. ``This actually wasn’t as bad as I expected,'' she says as I struggle to eat without incinerating my mouth. I choose not to respond.
\newline
\tab
We finish eating and leave the cafeteria. I stare back at the clock on the wall. 13:56. I turn back to Andraste. ``What do you want to do now?''
\newline
\tab
She thinks for a moment as we slowly walk back to the main hall. ``Let’s look into a drug store. We’ll probably need vitamins, and it wouldn’t hurt to get some medicine or painkillers either.'' We walk about twenty minutes to the closest drug store and find the exact same thing. No customers, only employees -- or I guess they’re just statues. Every item has a best-before date of January 1st, 2021. Everything seems placed with extreme care, arranged so that all items are entirely symmetrical. We walk through the aisles and find a few containers of various vitamin pills, which Andraste picks up and puts in her bag. ``You really seem used to stealing.'' It takes me a few minutes to realize that I’d said that out loud.
\newline
\tab
She turns to me, echoing the joke she made at the grocery store. ``It’s only stealing if you get caught.'' I really hope that’s a joke, but I guess it doesn’t matter now anyway.
\newline
\tab
``That’s not suspicious whatsoever,'' I reply dramatically. She laughs it off and we begin to walk out of the store. Something catches my eye and I pause by the checkout counter. ``Hey,'' I say slowly, tapping Andraste’s shoulder.
\newline
\tab
She turns to me in a much more casual way than I expected. ``Yeah?''
\newline
\tab
``Look at the magazines,'' I say. They’re all completely blank, except for a title. White pages; the only writing on them is a black magazine title in a Times New Roman font. Even weirder, the title isn’t centered, but is aligned to the right of the page.
\newline
\tab
She picks one up and flips through the pages. ``There are only titles,'' she says, extremely unnerved. She drops the magazine and we head back to the school.
\newline
\tab
``There was... no smell,'' I say, thinking back to the drug store. ``They typically have that smell, right?'' She looks at me with pure confusion. ``They typically smell like worn-out hospitals,'' I say, unable to come up with a better way to describe it.
\newline
\tab
``Oh!'' she replies, as if having a minor epiphany. The fact that my simile made sense surprises me. ``The smell of different medicines all mixing together. Yeah, that is odd. I didn’t notice it either,'' she pauses for a moment. ``You called it a worn-out hospital?'' she asks, laughing.
\newline
\tab
``Am I wrong?''
\newline
\tab
``Kind of. Do you want to go to a hospital and find out?''
\newline
\tab
``Not really.''
\newline
\tab
``So we’ll go with yes; you’re wrong.'' With her mundane victory complete, we head back to the school, which feels like it’s on the other side of the city. Before actually getting back to the school, Andraste turns to me. ``What do you want to do for dinner? We’ve really only had one meal today.''
\newline
\tab
``I hadn’t actually considered that,'' I respond, not really feeling that hungry to begin with. ``Do you want to head by the grocery store again?''
\newline
\tab
``I really don’t want to prepare anything myself though...'' she replies, trailing off into thought.
\newline
\tab
``We could stop by the grocery store and see if they have ramen cups or packaged sushi. Those don’t really take any preparation.''
\newline
\tab
``What are we, college students?''
\newline
\tab
I know it was meant as a joke, but I can’t help but think that I’ll never be able to go to college or university. ``We’ll never be college students,'' I mumble. She looks at me sympathetically. ``Sorry,'' I say, smiling at her. In her eyes I see a distant hint of alienation. It’s almost as if she doesn’t realize that we’re in the same situation.
\newline
\tab
We walk back into the grocery store. Near the back, we find a selection of gas-station sushi packages. We pick up two packages, along with a few apples, which I put in my bag, and leave the store for the day. ``Are we staying in the same room as last night?'' she asks.
\newline
\tab
``I don’t see why not. The thermite is still there, so we won’t have to set anything up aside from the blankets. Besides, it’s already fairly late. It’ll be more convenient.'' I don’t really think the thermite’s going to be that useful, but the idea that something can protect us is nice to have.
\newline
\tab
We get back into the school and Andraste turns to me. ``The statues will be facing where we sleep. Are you going to be okay?''
\newline
\tab
``They didn’t actually move, right? They just turned around. It’ll be fine. There’s not really any threat from them.'' I do my best to reassure her that I’ll be fine. I’d rather not have the statues watching us as we sleep, but there’s no reason to make her needlessly worry. We enter the classroom and set up our makeshift beds. I look up at the clock as I open my sushi. ``01:56? Doesn’t that seem...''
\newline
\tab
``That doesn’t seem right,'' she says, opening her container of sushi as well. She looks out the window. ``The clock itself doesn’t seem inaccurate. Look outside.'' I turn my gaze to a pitch-black window. ``Something does feel off though.''
\newline
\tab
We finish our sushi and head to sleep, leaving the apples for tomorrow. I do my best not to look at the statues and to keep them out of my mind. As I stare up at the ceiling, I think back to the journal. There’s still so much we don’t know about the statues. We don’t know how they turned to us. We don’t know if they only move when we’re both asleep. We don’t know if they can move freely or not. We don’t know why everything’s so clean. \textit{We don’t know anything}. I’m not sure how much time I actually spend thinking about this, but I’d guess about two hours before I fall asleep.
\newline
\tab
I wake up, not knowing what to do. I don’t want her to wake up to this, but we don’t really have a choice. Fear takes full control of my body. I don’t know what to do. I don’t know what to do. I don’t know what to do. I don’t know what to do. I don’t know what to do. $\mathpzc{Someone\, Help\, Me}$. I stare up. A new metallic face stares down at me. Grinning. Its smile looks almost impossible, as if it was sculpted by someone who didn’t know the full range of a smile. My eyes widen in fear. It’s \textit{my} face.
\newline
\tab
Adrenaline floods through my body and I turn and shove Andraste, quickly waking her up. It takes her a couple of seconds to fully understand the situation. She screams. We sprint out of the room with only our bags. ``What th... what the fuck was that\textinterrobang'' she yells. Andraste falls to her knees and bursts into tears.
\newline
\tab
``I don’t know... but those aren’t the same statues that were in the room before us.'' I look back through the classroom door’s window. The crying statues are still by the wall. They don’t seem like they’ve moved whatsoever. I reach into my bag to get one of the apples we took from the store. ``What?'' I pull out a brown spheroid of congealed... gunk. I drop it onto the floor, and it splatters everywhere. ``What happened to the apples?''
\newline
\tab
``These have no smell,'' she says softly. ``Whatever it is, it doesn’t have a smell.'' Things are just getting weirder and weirder. I begin walking towards the door. ``What are you doing\textinterrobang'' Andraste shouts at me.
\newline
\tab
``Don’t worry, I think these... counterparts... work the same way as the normal statues.'' It feels mildly bizarre to call them normal, but there’s no point in thinking about that right now. ``They haven’t moved at all since we’ve woken up. I’ll be fine.'' Even though I’ve said that, half my attention is still on the statues. I open the door. There are no signs of damage on either side of it. I walk in. The windows are still locked and there’s no break in the line of thermite. I can’t see a way they could’ve gotten in. Did we forget to lock the door? I’m pretty sure it was locked before we went to sleep. I walk out of the room, not having crossed the thermite border. ``I have no idea how they got in,'' I confess. ``The door was locked. There were no marks on the door. The line of thermite wasn’t damaged. The windows were still locked...''
\newline
\tab
``For now, we still need food, among other things. Let’s head back to the grocery store and --'' I cut her off.
\newline
\tab
``Let’s go to our houses,'' I say, unable to take my mind off the smiling statues. ``Those were this world’s version of us, right? We might be able to learn something about them if we go to our --'' I correct myself. ``Their houses.''
\newline
\tab
``I’d rather not see my parents and siblings as statues,'' she says, staring at the floor.
\newline
\tab
``You might not have to. If everyone were home, the school would be empty, right?''
\newline
\tab
She nods slowly, getting up on her feet. ``Let’s at least get some food first.''
\newline
\tab
We head to the grocery store and get some food. Everything seems to have reverted to the way we first found it. It’s as if everything we took was replaced or put back -- perfectly. We head for the sushi again. ``This doesn’t taste like something they’d sell at a gas station or grocery store,'' Andraste says, turning towards me. ``I’m not sure how I didn’t notice yesterday, but this tastes like it was made in a high-class restaurant.''
\newline
\tab
``Really, you can tell?'' I ask, mildly doubtful. ``I can’t really taste the difference, though it’s been a while since I’ve gotten sushi from a restaurant.''
\newline
\tab
We finish eating our packaged, restaurant-grade sushi and begin walking towards my house. Just in case her siblings or parents are home, I’d rather not freak her out any further. ``We’re here,'' I announce. \textit{This isn’t it. This isn’t my house. I have no memory of ever being here.} $\mathpzc{This\, isn’t\, my\, home}$. I turn away from Andraste and walk up the steps towards the door. Andraste remains at the driveway. I don’t ask why. I’m too focused on suppressing the feelings of nostalgia and hominess. I open the door, almost falling onto my knees. Every surface, every wall, even the floors and ceilings have the words ``welcome home'' written in a dark red. I can feel the blood drain from my face. I slowly take a step in. None of the family photos we took and framed have people in them anymore; they’re just framed landscapes. Even the people that were in the background are no longer there. I back out of my -- no, the -- house and shut the door. I try to walk down the steps, but my knees give in and I fall.
\newline
\tab
Andraste rushes up and catches me. ``Are you okay? What happened? What did you see?''
\newline
\tab
I don’t have the energy to respond properly. ``Let’s head back,'' I mutter.
\newline
\tab
We walk back to the school, with her half-carrying me. When we get back to the room, the counterpart statues are gone. She lays me down on one of the makeshift beds and places a hand on my forehead. ``Rest here. I’ll stay up and keep watch.'' I smile weakly at her and close my eyes. I do my best not to think of that place, but I can’t get it out of my head. What happened? Who did that? Was it the counterparts? It seems like everything we find only raises more questions. I doubt I’ll actually be able to sleep like this. The counterparts’ disappearance fully clicks in my head. The counterpart statues \textit{can} move when we’re awake. What’s stopping them from coming back here and... I do my best not to finish that thought. Andraste looks down at me. ``Are you feeling better?''
\newline
\tab
I smile. ``A little. How long has it been?''
\newline
\tab
She turns her head up towards the clock. ``It’s been six hours,'' she states. Six hours\textinterrobang\, I feel like I’ve only been lying down here for ten minutes. At most, it feels like twenty.
\newline
\tab
I mutter to myself as I get up. ``What the hell is going on here?''
\end{document} | {
"alphanum_fraction": 0.7427666354,
"avg_line_length": 84.5747663551,
"ext": "tex",
"hexsha": "8eeb7969304dde5814074966fc234ae8babaf795",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ef665f87aa8000277ded27542d8d39df2185e5e6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "GardnerLiam/Short-Stories-",
"max_forks_repo_path": "Those Who Were Once People/XeLaTex Compilation Files/Those Who Were Once People.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ef665f87aa8000277ded27542d8d39df2185e5e6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "GardnerLiam/Short-Stories-",
"max_issues_repo_path": "Those Who Were Once People/XeLaTex Compilation Files/Those Who Were Once People.tex",
"max_line_length": 1047,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ef665f87aa8000277ded27542d8d39df2185e5e6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "GardnerLiam/Short-Stories-",
"max_stars_repo_path": "Those Who Were Once People/XeLaTex Compilation Files/Those Who Were Once People.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 13409,
"size": 54297
} |
\chapter{Design in reaction to CDCS}
From its very inception, Skedge's functionality and visual design were driven by the shortcomings of CDCS. Skedge is built \emph{bottom-up}, not \emph{top-down}---every aspect of the application was made either as a reaction to a particular grievance with CDCS or as the natural evolution of an existing Skedge feature. Skedge is thus rooted in \emph{usability} derived from real need, not mere conjecture around the question ``what could students want?''. Its success with students, shown in Chapter 4, demonstrates that this usability extends beyond my own standards and preferences and can fulfill the various discovered use-cases of students in general.
In this chapter, I invite the reader along on a tour of these grievances and their remedies.
%%%
\input{design/modernity}
\clearpage
%%%
\input{design/usability}
\clearpage
%%%
\input{design/search}
\clearpage
%%%
\input{design/social} | {
"alphanum_fraction": 0.7793176972,
"avg_line_length": 37.52,
"ext": "tex",
"hexsha": "8c74ffe5a4c5e0948c626295701af7d46057e5b5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b5c89fe83f59adc0dd389e9712930ce26aa92824",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "dingbat/skedge-thesis",
"max_forks_repo_path": "design.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b5c89fe83f59adc0dd389e9712930ce26aa92824",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "dingbat/skedge-thesis",
"max_issues_repo_path": "design.tex",
"max_line_length": 654,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "b5c89fe83f59adc0dd389e9712930ce26aa92824",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "dingbat/skedge-thesis",
"max_stars_repo_path": "design.tex",
"max_stars_repo_stars_event_max_datetime": "2016-11-05T23:19:24.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-10-28T02:39:06.000Z",
"num_tokens": 216,
"size": 938
} |