%% %% Automatically generated file from DocOnce source %% (https://github.com/hplgit/doconce/) %% %% % #ifdef PTEX2TEX_EXPLANATION %% %% The file follows the ptex2tex extended LaTeX format, see %% ptex2tex: http://code.google.com/p/ptex2tex/ %% %% Run %% ptex2tex myfile %% or %% doconce ptex2tex myfile %% %% to turn myfile.p.tex into an ordinary LaTeX file myfile.tex. %% (The ptex2tex program: http://code.google.com/p/ptex2tex) %% Many preprocess options can be added to ptex2tex or doconce ptex2tex %% %% ptex2tex -DMINTED myfile %% doconce ptex2tex myfile envir=minted %% %% ptex2tex will typeset code environments according to a global or local %% .ptex2tex.cfg configure file. doconce ptex2tex will typeset code %% according to options on the command line (just type doconce ptex2tex to %% see examples). If doconce ptex2tex has envir=minted, it enables the %% minted style without needing -DMINTED. % #endif % #define PREAMBLE % #ifdef PREAMBLE %-------------------- begin preamble ---------------------- \documentclass[% oneside, % oneside: electronic viewing, twoside: printing final, % draft: marks overfull hboxes, figures with paths 10pt]{article} \listfiles % print all files needed to compile this document \usepackage{relsize,makeidx,color,setspace,amsmath,amsfonts,amssymb} \usepackage[table]{xcolor} \usepackage{bm,ltablex,microtype} \usepackage[pdftex]{graphicx} \usepackage[T1]{fontenc} %\usepackage[latin1]{inputenc} \usepackage{ucs} \usepackage[utf8x]{inputenc} \usepackage{lmodern} % Latin Modern fonts derived from Computer Modern % Hyperlinks in PDF: \definecolor{linkcolor}{rgb}{0,0,0.4} \usepackage{hyperref} \hypersetup{ breaklinks=true, colorlinks=true, linkcolor=linkcolor, urlcolor=linkcolor, citecolor=black, filecolor=black, %filecolor=blue, pdfmenubar=true, pdftoolbar=true, bookmarksdepth=3 % Uncomment (and tweak) for PDF bookmarks with more levels than the TOC } %\hyperbaseurl{} % hyperlinks are relative to this root \setcounter{tocdepth}{2} % levels in table of contents % prevent orhpans and widows \clubpenalty = 10000 \widowpenalty = 10000 % --- end of standard preamble for documents --- % insert custom LaTeX commands... \raggedbottom \makeindex \usepackage[totoc]{idxlayout} % for index in the toc \usepackage[nottoc]{tocbibind} % for references/bibliography in the toc %-------------------- end preamble ---------------------- \begin{document} % matching end for #ifdef PREAMBLE % #endif \newcommand{\exercisesection}[1]{\subsection*{#1}} % ------------------- main content ---------------------- % ----------------- title ------------------------- \thispagestyle{empty} \begin{center} {\LARGE\bf \begin{spacing}{1.25} PHY321: Classical Mechanics 1 \end{spacing} } \end{center} % ----------------- author(s) ------------------------- \begin{center} {\bf Homework 9, due Monday April 6${}^{}$} \\ [0mm] \end{center} \begin{center} % List of all institutions: \end{center} % ----------------- end author(s) ------------------------- % --- begin date --- \begin{center} Mar 30, 2020 \end{center} % --- end date --- \vspace{1cm} \paragraph{Practicalities about homeworks and projects.} \begin{enumerate} \item You can work in groups (optimal groups are often 2-3 people) or by yourself. If you work as a group you can hand in one answer only if you wish. \textbf{Remember to write your name(s)}! \item Homeworks are available the week before the deadline. \item How do I(we) hand in? 
Due to the coronavirus and many of you not being on campus, we recommend that you scan your handwritten notes and upload them to D2L. If you are comfortable typing mathematical formulae using, say, LaTeX, you can hand in everything as a single Jupyter notebook at D2L. The numerical exercise(s) should always be handed in as a Jupyter notebook by the deadline at D2L. \end{enumerate}

\noindent
\paragraph{Introduction to homework 9.}
This week's exercises are purely numerical and focus on solving gravitational problems. They build on what you did in homework 4 as well as in the previous two sets. Starting from the code for the Sun-Earth system in homework 4, we will add Jupiter and study a three-body problem. We will play around with different masses for Jupiter. The bonus exercise in this homework is a study of the perihelion of Mercury by adding Einstein's relativistic correction (we will not derive it here). This correction to the equations of motion was the very first confirmation of Einstein's general theory of relativity. The effect on the perihelion of Mercury can be measured. In your numerical simulations you will find this to be a very small effect compared to calculations without this relativistic correction.

In homework 4 we limited ourselves (in order to test the algorithm) to a hypothetical solar system with the Earth only orbiting around the Sun. We assumed that the only force in the problem is gravity. Newton's law of gravitation is given by a force $F_G$ (we assume this is the force acting on Earth from the Sun)
\[
F_G=-\frac{GM_{\odot}M_{\mathrm{Earth}}}{r^2},
\]
where $M_{\odot}$ is the mass of the Sun and $M_{\mathrm{Earth}}$ is the mass of the Earth. The gravitational constant is $G$ and $r$ is the distance between the Earth and the Sun. We assumed that the Sun has a mass which is much larger than that of the Earth. We could therefore safely neglect the motion of the Sun.

In homework 4 we assumed that the orbit of the Earth around the Sun was co-planar, and we took this to be the $xy$-plane. Using Newton's second law of motion we got the following equations
\[
\frac{d^2x}{dt^2}=-\frac{F_{G,x}}{M_{\mathrm{Earth}}},
\]
and
\[
\frac{d^2y}{dt^2}=-\frac{F_{G,y}}{M_{\mathrm{Earth}}},
\]
where $F_{G,x}$ and $F_{G,y}$ are the $x$ and $y$ components of the gravitational force.

We will again use so-called astronomical units when rewriting our equations. Using astronomical units (abbreviated AU) means that one astronomical unit of length, 1 AU, is the average distance between the Sun and the Earth, that is $1$ AU = $1.5\times 10^{11}$ m. It can also be convenient to use years instead of seconds, since years better match the time evolution of the solar system. The mass of the Sun is $M_{\mathrm{sun}}=M_{\odot}=2\times 10^{30}$ kg. The masses of all relevant planets and their distances from the Sun are listed in the table below in kg and AU.
\begin{quote}
\begin{tabular}{ccc}
\hline
\multicolumn{1}{c}{ Planet } & \multicolumn{1}{c}{ Mass in kg } & \multicolumn{1}{c}{ Distance to sun in AU } \\
\hline
Earth & $M_{\mathrm{Earth}}=6\times 10^{24}$ kg & 1 AU \\
Jupiter & $M_{\mathrm{Jupiter}}=1.9\times 10^{27}$ kg & 5.20 AU \\
Mars & $M_{\mathrm{Mars}}=6.6\times 10^{23}$ kg & 1.52 AU \\
Venus & $M_{\mathrm{Venus}}=4.9\times 10^{24}$ kg & 0.72 AU \\
Saturn & $M_{\mathrm{Saturn}}=5.5\times 10^{26}$ kg & 9.54 AU \\
Mercury & $M_{\mathrm{Mercury}}=3.3\times 10^{23}$ kg & 0.39 AU \\
Uranus & $M_{\mathrm{Uranus}}=8.8\times 10^{25}$ kg & 19.19 AU \\
Neptune & $M_{\mathrm{Neptune}}=1.03\times 10^{26}$ kg & 30.06 AU \\
Pluto & $M_{\mathrm{Pluto}}=1.31\times 10^{22}$ kg & 39.53 AU \\
\hline
\end{tabular}
\end{quote}

\noindent
Pluto is no longer considered a planet, but we add it here for historical reasons. It is optional in this project to include Pluto and any moons. In setting up the equations we can limit ourselves to co-planar motion and use only the $x$ and $y$ coordinates. But you should feel free to extend your equations to three dimensions; it is not very difficult, and the data from NASA are all in three dimensions.

\href{{http://www.nasa.gov/index.html}}{NASA} has an excellent site at \href{{http://ssd.jpl.nasa.gov/horizons.cgi#top}}{\nolinkurl{http://ssd.jpl.nasa.gov/horizons.cgi\#top}}. From there you can extract initial conditions in order to start your differential equation solver. At the above website you need to change from \textbf{OBSERVER} to \textbf{VECTOR} and then write in the planet you are interested in. The generated data contain the $x$, $y$ and $z$ values as well as their corresponding velocities. The velocities are in units of AU per day. Alternatively they can be obtained in terms of km and km/s.

Using our code from homework 4, we will now add Jupiter and play around with different masses for this planet and study numerically a three-body problem. This is a well-studied problem in classical mechanics, \href{{https://en.wikipedia.org/wiki/Three-body_problem}}{with many interesting results, from stable orbits to chaotic motion}.

\paragraph{Exercise 1 The three-body problem (100pt).}
We will now study the three-body problem, still with the Sun kept fixed as the center of mass of the system but including Jupiter (the most massive planet in the solar system, with a mass approximately one thousandth of the Sun's) together with the Earth. This leads to a three-body problem. Without Jupiter, the Earth's motion is stable and unchanging with time. The aim here is to find out how much Jupiter alters the Earth's motion.

The program you have developed in homework 4 can easily be modified by simply adding the magnitude of the force between the Earth and Jupiter. This force is given again by
\[
F_{\mathrm{Earth-Jupiter}}=-\frac{GM_{\mathrm{Jupiter}}M_{\mathrm{Earth}}}{r_{\mathrm{Earth-Jupiter}}^2},
\]
where $M_{\mathrm{Jupiter}}$ is the mass of Jupiter and $M_{\mathrm{Earth}}$ is the mass of the Earth. The gravitational constant is $G$ and $r_{\mathrm{Earth-Jupiter}}$ is the distance between Earth and Jupiter. We assume again that the orbits of the two planets are co-planar, and we take this to be the $xy$-plane (you can easily extend the equations to three dimensions).
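Before turning to the tasks below, it may help to see the structure of such a modification. The following Python sketch (not part of the original assignment text; the step size, initial conditions and variable names are illustrative assumptions, with both planets started on circular orbits) shows one possible velocity Verlet update for the Earth and Jupiter with the Sun held fixed at the origin. It uses astronomical units and years, in which $GM_{\odot}=4\pi^2\,\mathrm{AU^3/yr^2}$ (obtained from $v^2 r = GM_{\odot}$ for the Earth's circular orbit).

\begin{verbatim}
import numpy as np

GM_sun = 4*np.pi**2            # G*M_sun in AU^3/yr^2
GM_jup = GM_sun*1.9e27/2e30    # G*M_jupiter (scale by 10, 100, 1000 in task 1d)
GM_earth = GM_sun*6e24/2e30    # G*M_earth

def acc(r, r_other, gm_other):
    """Acceleration from the fixed Sun plus the other planet."""
    a = -GM_sun*r/np.linalg.norm(r)**3
    d = r - r_other
    return a - gm_other*d/np.linalg.norm(d)**3

dt, n = 0.001, 20000                                   # 20 years in steps of 0.001 yr
re, ve = np.array([1.0, 0.0]), np.array([0.0, 2*np.pi])
rj, vj = np.array([5.2, 0.0]), np.array([0.0, 2*np.pi/np.sqrt(5.2)])
for i in range(n):
    ae, aj = acc(re, rj, GM_jup), acc(rj, re, GM_earth)
    re_new = re + dt*ve + 0.5*dt**2*ae                 # velocity Verlet: positions
    rj_new = rj + dt*vj + 0.5*dt**2*aj
    ae_new, aj_new = acc(re_new, rj_new, GM_jup), acc(rj_new, re_new, GM_earth)
    ve += 0.5*dt*(ae + ae_new)                         # velocity Verlet: velocities
    vj += 0.5*dt*(aj + aj_new)
    re, rj = re_new, rj_new
\end{verbatim}

Storing the positions at every step and plotting them (for instance with \texttt{matplotlib}) lets you compare the Earth's orbit with and without Jupiter, and monitoring the total energy gives a direct check on the stability of the Velocity Verlet solver.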
\begin{itemize}
\item 1a (20pt) Modify your coupled first-order differential equations from homework 4 in order to accommodate both the motion of the Earth and Jupiter by taking into account the distance in $x$ and $y$ between the Earth and Jupiter. Write out the differential equations for Earth and Jupiter, keeping the Sun at rest (mass center of the system).
\item 1b (10pt) Scale these equations in terms of Astronomical Units.
\item 1c (30pt) Set up the algorithm and plot the positions of the Earth and Jupiter using the Velocity Verlet algorithm. Discuss the stability of the solutions using your Velocity Verlet solver.
\item 1d (40pt) Repeat the calculations by increasing the mass of Jupiter by a factor of 10, 100 and 1000 and plot the position of the Earth. Discuss your results and study again the stability of the Velocity Verlet solver. Is energy conserved?
\end{itemize}

\noindent
\paragraph{Exercise 2, the bonus part (50pt). The perihelion precession of Mercury.}
This is the bonus exercise and gives an additional score of 50 points. It is fully optional. \textbf{I would grade this as a more difficult exercise compared to previous ones}. \href{{https://en.wikipedia.org/wiki/Tests_of_general_relativity}}{It also requires that you read some background literature about the perihelion of Mercury}. You don't need to derive the relativistic correction here. This is something you will meet in a graduate course on General Relativity. The bonus here is that it allows you to explore physics you could not have done without a numerical code.

An important test of the general theory of relativity was to compare its prediction for the perihelion precession of Mercury to the observed value. The observed value of the perihelion precession, when all classical effects (such as the perturbation of the orbit due to gravitational attraction from the other planets) are subtracted, is $43''$ ($43$ arc seconds) per century.

Closed elliptical orbits are a special feature of the Newtonian $1/r^2$ force. In general, any correction to the pure $1/r^2$ behaviour will lead to an orbit which is not closed, i.e.~after one complete orbit around the Sun, the planet will not be at exactly the same position as it started. If the correction is small, then each orbit around the Sun will be almost the same as the classical ellipse, and the orbit can be thought of as an ellipse whose orientation in space slowly rotates. In other words, the perihelion of the ellipse slowly precesses around the Sun.

You will now study the orbit of Mercury around the Sun, adding a general relativistic correction to the Newtonian gravitational force, so that the force becomes
\[
F = -\frac{GM_\mathrm{Sun}M_\mathrm{Mercury}}{r^2}\left[1 + \frac{3l^2}{r^2c^2}\right]
\]
where $M_\mathrm{Mercury}$ is the mass of Mercury, $r$ is the distance between Mercury and the Sun, $l=|\vec{r}\times\vec{v}|$ is the magnitude of Mercury's orbital angular momentum per unit mass, and $c$ is the speed of light in vacuum. Run a simulation over one century of Mercury's orbit around the Sun with no other planets present, starting with Mercury at the perihelion on the $x$ axis. Check then the value of the perihelion angle $\theta_\mathrm{p}$, using
\[
\tan \theta_\mathrm{p} = \frac{y_\mathrm{p}}{x_\mathrm{p}}
\]
where $x_\mathrm{p}$ ($y_\mathrm{p}$) is the $x$ ($y$) position of Mercury at perihelion, i.e.~at the point where Mercury is at its closest to the Sun.
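One possible way to extract the perihelion angle from your simulation is sketched below in Python (an illustration only, not part of the assignment text; the function name and the assumption that you store Mercury's positions in arrays are mine). The idea is to find, within roughly one orbit, the time step at which Mercury is closest to the Sun and read off the angle there:

\begin{verbatim}
import numpy as np

def perihelion_angle(x, y):
    """Given arrays of Mercury's x and y positions over (roughly) one
    orbit, return the perihelion angle in arc seconds from the x axis."""
    r = np.sqrt(x**2 + y**2)
    i = np.argmin(r)              # step at which Mercury is closest to the Sun
    return np.degrees(np.arctan2(y[i], x[i]))*3600.0
\end{verbatim}

Calling this on the last simulated orbit, with and without the relativistic correction, gives the precession accumulated over the century.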
You may use that the speed of Mercury at perihelion is $12.44\,\mathrm{AU}/\mathrm{yr}$, and that the distance to the Sun at perihelion is $0.3075\,\mathrm{AU}$. You need to make sure that the time resolution used in your simulation is sufficient, for example by checking that the perihelion precession you get with a pure Newtonian force is at least a few orders of magnitude smaller than the observed perihelion precession of Mercury. Can the observed perihelion precession of Mercury be explained by the general theory of relativity? % ------------------- end of main content --------------- % #ifdef PREAMBLE \end{document} % #endif
{ "alphanum_fraction": 0.7232187168, "avg_line_length": 40.3, "ext": "tex", "hexsha": "88662318835d495ca31647411134595b8704be06", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b507f0411c2c92a669c85b8c47c502b9e7fa0c8f", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "schwartznicholas/Physics321", "max_forks_repo_path": "doc/Homeworks/hw9/pdf/hw9.p.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b507f0411c2c92a669c85b8c47c502b9e7fa0c8f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "schwartznicholas/Physics321", "max_issues_repo_path": "doc/Homeworks/hw9/pdf/hw9.p.tex", "max_line_length": 393, "max_stars_count": null, "max_stars_repo_head_hexsha": "b507f0411c2c92a669c85b8c47c502b9e7fa0c8f", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "schwartznicholas/Physics321", "max_stars_repo_path": "doc/Homeworks/hw9/pdf/hw9.p.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3862, "size": 14105 }
\chapter{Lexical Structure} \label{ch:lexical structure} This section describes the lexical structure of the ABS language. ABS programs are written in Unicode.\footnote{\url{http://www.unicode.org}} \section{Line Terminators and White Spaces} Line terminators and white spaces are defined as in Java. %\begin{absgrammar} %LineTerminator ::= \n | \r | \r\n %WhiteSpace ::= LineTerminator | ' ' | \t | \f %\end{absgrammar} % \begin{abssyntax} \NT{LineTerminator} \concrDefn{} \verb_\_\TR{n} ~|~ \verb_\_\TR{r} ~|~ \TR{rn} \\ \NT{WhiteSpace} \concrDefn{} \NT{LineTerminator} ~|~ \verb*_ _ ~|~ \verb_\_\TR{t} ~|~ \verb_\_\TR{f} \end{abssyntax} \section{Comments} Comments are code fragments that are completely ignored and have no semantics in the ABS language. ABS supports two styles of comments: \emph{end-of-line comments} and \emph{traditional comments}. \subsection{End-Of-Line Comments} An end-of-line comment is a code fragment that starts with two slashes, e.g., \verb_// text_. All text that follows \verb_//_ until the end of the line is treated as a comment. \begin{absexample} // this is a comment module A; // this is also a comment \end{absexample} \subsection{Traditional Comments} A traditional comment is a code fragment that is enclosed in \absinline{/* */}, e.g., \absinline{/* this is a comment */}. Nested traditional comments are not possible. \begin{absexample} /* this is a multiline comment */ \end{absexample} \section{Identifiers} ABS distinguishes \emph{identifier} and \emph{type identifier}. They differ in the first character, which must be a lower-case character for identifiers and an upper-case character for type identifiers. % %\begin{absgrammar} %Identifier ::= [:lowercase:] {[:letter:] | [:digit:] | '_'} %TypeId ::= [:uppercase:] {[:letter:] | [:digit:] | '_'} %\end{absgrammar} % \begin{abssyntax} \NT{Identifier} \defn \text{[:lower:]} \MANYG{\text{[:alpha:] ~|~ [:digit:] ~|~ \TR{\_}}} \\ \NT{TypeId} \defn \text{[:upper:]} \MANYG{\text{[:alpha:] ~|~ [:digit:] ~|~ \TR{\_}}} \end{abssyntax} \section{Keywords} The following words are keywords in the ABS language and are \emph{not} regarded as identifiers. \noindent \begin{center} \begin{tabular}{llllll} \TR{adds} & \TR{after} & \TR{assert} & \TR{await} & \TR{builtin} & \TR{case} \\ \TR{core} & \TR{class} & \TR{data} & \TR{def} & \TR{delta} & \TR{else} \\ \TR{export} & \TR{features} & \TR{from} & \TR{get} & \TR{hasField} & \TR{hasInterface} \\ \TR{hasMethod} & \TR{if} & \TR{implements} & \TR{import} & \TR{in} & \TR{interface} \\ \TR{let} & \TR{local} & \TR{modifies} & \TR{module} & \TR{new} & \TR{null} \\ \TR{product} & \TR{productline} & \TR{removes} & \TR{return} & \TR{skip} & \TR{suspend} \\ \TR{this} & \TR{type} & \TR{when} & \TR{while} & &\\ \end{tabular} \end{center} \section{Literals} A \emph{literal} is a textual representation of a value. ABS supports four kinds of literals, \emph{integer literals}, \emph{string literals}, the \emph{this literal}, and the \emph{null literal}. 
\begin{abssyntax} \NT{Literal} \defn \NT{IntLiteral} ~|~ \NT{StringLiteral} ~|~ \NT{ThisLiteral} ~|~ \NT{NullLiteral} \\ \NT{IntLiteral} \defn 0 ~|~ [1-9]\MANY{[0-9]}\\ \NT{StringLiteral} \defn \TR{"} \MANY{\NT{StringCharacter}} \TR{"}\\ \NT{ThisLiteral} \defn \TR{this}\\ \NT{NullLiteral} \defn \TR{null} \end{abssyntax} \noindent Where a \NT{StringCharacter} is defined as in the Java language \cite[p.~28]{gosling96} \section{Separators} The following characters are \emph{separators}: \begin{verbatim} ( ) { } [ ] , ; : \end{verbatim} \section{Operators} The following tokens are \emph{operators}: \begin{verbatim} || && == != < > <= >= + - * / % ! & \end{verbatim} % . ! = _
{ "alphanum_fraction": 0.6452448821, "avg_line_length": 34.7657657658, "ext": "tex", "hexsha": "ece3d3847d024ce5141d417eb1a15111d6163f6d", "lang": "TeX", "max_forks_count": 33, "max_forks_repo_forks_event_max_datetime": "2022-01-26T08:11:55.000Z", "max_forks_repo_forks_event_min_datetime": "2015-04-23T09:08:09.000Z", "max_forks_repo_head_hexsha": "6f245ec8d684efb0977049d075e853a4b4d7d8dc", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "oab/abstools", "max_forks_repo_path": "abs-docs/ReferenceManual/lexical.tex", "max_issues_count": 271, "max_issues_repo_head_hexsha": "6f245ec8d684efb0977049d075e853a4b4d7d8dc", "max_issues_repo_issues_event_max_datetime": "2022-03-28T09:05:50.000Z", "max_issues_repo_issues_event_min_datetime": "2015-07-30T19:04:52.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "oab/abstools", "max_issues_repo_path": "abs-docs/ReferenceManual/lexical.tex", "max_line_length": 176, "max_stars_count": 38, "max_stars_repo_head_hexsha": "6f245ec8d684efb0977049d075e853a4b4d7d8dc", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "oab/abstools", "max_stars_repo_path": "abs-docs/ReferenceManual/lexical.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-18T19:26:34.000Z", "max_stars_repo_stars_event_min_datetime": "2015-04-23T09:08:06.000Z", "num_tokens": 1278, "size": 3859 }
\subsection{States, messages, representatives}

The full-node protocol states are {\em sets} of messages, each message being a triple $(c, v, j)$, where:
\begin{itemize}
\item $c$ is a (proposed) consensus value;
\item $v$ identifies the message sender;
\item $j$, the justification, is the {\em protocol state} seen by the sender at the time of message sending.
\end{itemize}

There are two technical difficulties with the above definition:
\begin{itemize}
\item States and messages are mutually recursive: states are sets of messages, each containing a state;
\item Message ordering should not matter.
\end{itemize}

To solve the first issue, we choose to postpone the definition of messages for now and first define states inductively as follows:
\begin{coq}
Inductive state : Type :=
  | Empty : state
  | Next : C -> V -> state -> state -> state.
\end{coq}
This definition says that a state can be built by extending an existing state given a consensus value \verb|C|, a validator \verb|V|, and another state representing the justification. To clarify that the three are the components of a message, we introduce the following notation for \verb"Next":
\begin{coq}
Notation "'add' ( c , v , j ) 'to' sigma" :=
  (Next c v j sigma)
  (at level 20).
\end{coq}
This definition addresses the first difficulty mentioned above by capturing the recursive nature of states.

The second issue is critical to the notion of state equality. Defining an equivalence between states which disregards message ordering is possible but non-trivial: the definition would itself need to be recursive, as it requires an equivalence on messages, which in turn is defined in terms of the same state equivalence. To circumvent the hindrance of working with mutually recursive state equality, we instead took the approach of using canonical representatives for states, which we call LocallySorted states, on which we express state equality as syntactic equality in Coq. Although this definition is still recursive, it is easier to express and to work with than the former equivalence because it is defined in terms of a single state.

To define sorted states, we need to be able to compare messages, which amounts to defining a total order relation on states. This can be defined as a lexicographic ordering on the state seen as a list of messages, tweaked slightly to perform recursion in order to compare justifications:
\begin{coq}
Fixpoint state_compare (sigma1 sigma2 : state) : comparison :=
  match sigma1, sigma2 with
  | Empty, Empty => Eq
  | Empty, _ => Lt
  | _, Empty => Gt
  | add (c1, v1, j1) to sigma1, add (c2, v2, j2) to sigma2 =>
    match compare c1 c2 with
    | Eq =>
      match compare v1 v2 with
      | Eq =>
        match state_compare j1 j2 with
        | Eq => state_compare sigma1 sigma2
        | cmp_j => cmp_j
        end
      | cmp_v => cmp_v
      end
    | cmp_c => cmp_c
    end
  end.
\end{coq}
Note that defining this ordering requires that there exist orderings on consensus values and validators, but these can be any orderings, and a total ordering is guaranteed to exist for any set in set theory by the axiom of choice \cite{Gonzalez}. The existence of these orderings is reflected in our \verb|StrictlyComparable| types for consensus values and validators.

This ordering naturally induces an ordering on messages, and thus allows us to define the notion of a \verb|LocallySorted| state, i.e., one in which each message is smaller than the next and all justifications are themselves \verb|LocallySorted|.
Choosing \verb|LocallySorted| states as representatives for states thus reduces equality testing between states to syntactic equality checking.

Protocol states are defined by means of an inductive predicate on states as follows:
\begin{coq}
Inductive protocol_state : state -> Prop :=
  | protocol_state_empty : protocol_state Empty
  | protocol_state_next : forall s j,
      protocol_state s ->
      protocol_state j ->
      incl_messages j s ->
      forall c v,
        valid_estimate c j ->
        not_heavy (add_in_sorted_fn (c,v,j) s) ->
        protocol_state (add_in_sorted_fn (c,v,j) s).
\end{coq}
The above definition reads as:
\begin{itemize}
\item a protocol state is either empty; or
\item it can be obtained from an existing protocol state $s$ by extending it with a message $(c, v, j)$ such that:
\begin{itemize}
\item $j$ is a protocol state included in $s$;
\item $c$ is a consensus value which the estimator agrees on for $j$;
\item adding $(c,v,j)$ to $s$ does not exceed the fault tolerance threshold.
\end{itemize}
\end{itemize}
{ "alphanum_fraction": 0.7253655435, "avg_line_length": 44.1028037383, "ext": "tex", "hexsha": "9deec561f37e58f3a6984c6e9014f9a1bab1b7f8", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-11-22T08:51:55.000Z", "max_forks_repo_forks_event_min_datetime": "2019-12-17T07:48:58.000Z", "max_forks_repo_head_hexsha": "8c4985f0921fea0a38c05e72a47364471164ab72", "max_forks_repo_licenses": [ "NCSA" ], "max_forks_repo_name": "runtimeverification/casper-cbc-proofs", "max_forks_repo_path": "report/full.tex", "max_issues_count": 105, "max_issues_repo_head_hexsha": "8c4985f0921fea0a38c05e72a47364471164ab72", "max_issues_repo_issues_event_max_datetime": "2021-12-02T10:00:11.000Z", "max_issues_repo_issues_event_min_datetime": "2019-11-26T09:22:58.000Z", "max_issues_repo_licenses": [ "NCSA" ], "max_issues_repo_name": "runtimeverification/casper-cbc-proofs", "max_issues_repo_path": "report/full.tex", "max_line_length": 270, "max_stars_count": 10, "max_stars_repo_head_hexsha": "8c4985f0921fea0a38c05e72a47364471164ab72", "max_stars_repo_licenses": [ "NCSA" ], "max_stars_repo_name": "runtimeverification/casper-cbc-proofs", "max_stars_repo_path": "report/full.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-23T11:21:07.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-16T15:57:46.000Z", "num_tokens": 1140, "size": 4719 }
% % CMPT 354: Database Systems I - A Course Overview % Section: Functional Dependencies % % Author: Jeffrey Leung % \section{Functional Dependencies} \label{sec:functional-dependencies} \begin{easylist} & \emph{Functional dependency:} For any given value of an attribute(s), there can only be a single value for a set of attributes && Similar to the notion of a \hyperref[subsec:design:identification-of-tables]{superkey} for a set of attributes && $x$ functionally determines $y$ and $y$ is functionally determined by $x$ if there can only be one $y$ value for a given $x$ value && Mathematical definition: If $R$ is a relation and $\alpha \subseteq R$ and $\beta \subseteq R$, then $\alpha \rightarrow \beta$ && E.g. All of the same car models have the same capacity; therefore, capacity is functionally determined by the model. \\ model $\rightarrow$ capacity && Used to: &&& Specify additional constraints &&& Decompose tables to reduce repeated data && Statement about all possible legal instances of a relation &&& Relation $R$ satisfies the set of functional dependencies $F$ if $R$ is legal under $F$ & %TODO Functional dependencies notes in NORMALIZATION & Primary key is a special case of a functional dependency && If X $\rightarrow$ Y on R... & Set of functional dependencies can be used to identify additional functional dependencies && F implies the additional functional dependencies \end{easylist} \subsection{Armstrong's Axioms} \label{subsec:functional-dependencies:armstrongs-axioms} \begin{easylist} & Named after William Armstrong & Axioms: && Where X and Y are sets of attributes && \emph{Reflexivity (A-Axioms):} If X $\supseteq$ Y, then X $\rightarrow$ Y &&& E.g. sin, firstName $\rightarrow$ sin &&& E.g. sin $\rightarrow$ sin &&& \emph{Trivial functional dependency:} X $\rightarrow$ Y where X $\supseteq$ Y && \emph{Augmentation (A-Axioms):} If X $\rightarrow$ Y, then XZ $\rightarrow$ Y, Z for any Z &&& E.g. If sin $\rightarrow$ firstName, then sin, lastName $\rightarrow$ firstName, lastName && \emph{Transitivity (A-Axioms):} If X $\rightarrow$ Y and Y $\rightarrow$ Z, then X $\rightarrow$ Z %TODO &&& E.g. If sin $\rightarrow$ age and age $\rightarrow$ seniority, then sin $\rightarrow$ seniority & Derivable axioms: && \emph{Union:} %TODO && \emph{Decomposition (A-Axioms):} If X $\rightarrow$ YZ, then X $\rightarrow$ Y and X $\rightarrow$ Z &&& Proof: %TODO && \emph{Pseudo-transitivity (A-Axioms):} If X $\rightarrow$ Y and WY $\rightarrow$ Z, then XW $\rightarrow$ Z &&& Proof: See... %TODO ref? don't use tabular? \begin{tabular}{ l } Assume X $\rightarrow$ Y. \\ By the axiom of augmentation, XW $\rightarrow$ WY. \\ Assume WY $\rightarrow$ Z. \\ By the axiom of transitivity, XW $\rightarrow$ Z. 
\end{tabular}
& \emph{F:} Set of identified functional dependencies
& \emph{F\textsuperscript{+}:} Closure of \emph{F}; all functional dependencies implied by \emph{F}
& \emph{X\textsuperscript{+}:} Attribute closure of the set of attributes in X; the set of all attributes that are functionally dependent on X
& \emph{Sound:} Axioms which do not generate incorrect functional dependencies
& \emph{Complete:} Axioms which allow F\textsuperscript{+} to be generated from a given F
\end{easylist}

\subsection{Removing Redundant Functional Dependencies}
\label{subsec:functional-dependencies:removing-redundant-functional-dependencies}

\begin{easylist}
& \emph{Extraneous attribute:} Attribute that can be removed from a functional dependency without changing the closure of the set of functional dependencies
&& Attribute $a$ is extraneous in $X$ if $a \in X$ %TODO notes
& Testing if attribute $a \in X$ is extraneous in $X$, compute $(\{X\} - a)^{+}$ with... %TODO
& \emph{Canonical cover (of F):} Minimal set of functional dependencies equivalent to \emph{F} without redundant or partial dependencies
&& F logically implies all dependencies in F\textsubscript{c}; \\
F\textsubscript{c} logically implies all dependencies in F; \\
no functional dependency in F\textsubscript{c} contains an extraneous attribute; \\
each left side of a dependency in F\textsubscript{c} is unique
&& Denoted by \emph{F\textsubscript{c}} %TODO more notes
\end{easylist}

\clearpage
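The attribute-closure computation mentioned above is easy to mechanise. The following Python sketch (an illustration only, not part of the course notes; the encoding of functional dependencies as pairs of frozensets is an assumption) computes $X^{+}$ under a set of functional dependencies, which is the basic step for testing whether an attribute is extraneous:

\begin{verbatim}
def closure(X, fds):
    """Compute the attribute closure X+ under a set of FDs.

    fds is a list of (lhs, rhs) pairs, each side a frozenset of attributes.
    """
    result = set(X)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the whole left side is already in X+, pull in the right side.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Example: F = {A -> B, B -> C}; the closure of {A} is {A, B, C}, so A -> C follows.
fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
print(closure({"A"}, fds))
\end{verbatim}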
{ "alphanum_fraction": 0.7129411765, "avg_line_length": 46.7032967033, "ext": "tex", "hexsha": "3cf995ce9e0e6e0e7bf2aa9ae41395ab69dbeb01", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-12-27T21:44:56.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-18T09:17:46.000Z", "max_forks_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AmirNaghibi/notes", "max_forks_repo_path": "cmpt-354-database-systems-i_partial/tex/functional-dependencies.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AmirNaghibi/notes", "max_issues_repo_path": "cmpt-354-database-systems-i_partial/tex/functional-dependencies.tex", "max_line_length": 162, "max_stars_count": 25, "max_stars_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AmirNaghibi/notes", "max_stars_repo_path": "cmpt-354-database-systems-i_partial/tex/functional-dependencies.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-09T02:37:39.000Z", "max_stars_repo_stars_event_min_datetime": "2019-08-11T08:45:10.000Z", "num_tokens": 1240, "size": 4250 }
\documentclass[11pt,a4paper]{report}
\usepackage{asymptote}
\usepackage{wrapfig}
\addtolength{\oddsidemargin}{-.875in}
\addtolength{\evensidemargin}{-.875in}
\addtolength{\textwidth}{1.75in}
\addtolength{\topmargin}{-.875in}
\addtolength{\textheight}{1.75in}
\begin{document}
\begin{center}
\Large{CHAPTER 3 SUMMARY. \textbf{Vectors and Two-Dimensional Motion}}

\large{Justin Yang}

October 26, 2012
\end{center}

\section*{General Properties of Vectors}
A \textbf{vector quantity} has both a magnitude and a direction.\\A \textbf{scalar quantity} has magnitude, but no direction.

\hspace{1mm}

\noindent \textit{Equality} $\vec{A}\ =\ \vec{B}$ if $A = B$ and their directions are the same.

\hspace{1mm}

\noindent \textit{Addition} Vectors can be added both geometrically and algebraically.

\underline{Geometrically} \textsl{Head-to-tail} and \textsl{parallelogram} methods can be used. Thus, the \textbf{resultant vector} $\vec{R}\ =\ \vec{A} + \vec{B}$ is the sum of two or more vectors.

\hspace{1mm}

\noindent \textit{Negative} The negative of a vector has the same magnitude but opposite direction.

\hspace{1mm}

\noindent \textit{Subtraction} $\vec{A} - \vec{B}\ =\ \vec{A} + \left(-\vec{B}\right)$.

\hspace{1mm}

\noindent \textit{Multiplication by a Scalar} $s \vec{A}$ has magnitude $|s|A$ and has the same direction as $\vec{A}$ if $s$ is positive and the opposite direction if $s$ is negative.

\begin{wrapfigure}{r}{0.3\textwidth}
\begin{center}
\vspace{-20pt}
\begin{asy}
import graph;
xaxis("$x$", xmin=-0.5);
yaxis("$y$");
import olympiad;
defaultpen(1.0);
size(5cm);
draw((0,0)--(2,1),EndArrow);
markscalefactor=0.05;
draw(anglemark((2,0),(0,0),(2,1)));
draw((2,0)--(2,1),dashed);
draw((0,1)--(2,1),dashed);
label("$A_x = A \cos{\theta}$",(0,0)--(2,0),S);
label("$A_y = A \sin{\theta}$",(0,0)--(0,1),W);
dot((2.2,1.2),invisible);
\end{asy}
\vspace{-20pt}
\end{center}
\end{wrapfigure}

\hspace{1mm}

\noindent \textit{Components} The component of $\vec{A}$ in the direction of a directed line $S$ is $A_S = A\cos{\theta}$, where $\theta$ is the angle between $S$ and $\vec{A}$.\\A vector can be specified by its rectangular components along the $x$- and $y$-axes $$A_x = A\cos{\theta},$$ $$A_y = A\sin{\theta}.$$\\It can also be specified by its magnitude and direction through the Pythagorean theorem and the definition of tangent $$A=\sqrt{A_x^2 + A_y^2},$$ $$\tan{\theta} = \frac{A_y}{A_x}.$$\\We can hence define \underline{algebraic} addition of vectors such that $$\vec{C}\ =\ \vec{A} + \vec{B}\ \Longleftrightarrow C_x = A_x + B_x \textrm{ and } C_y = A_y + B_y.$$

\section*{Position, Velocity, and Acceleration}
The \textbf{position vector} of a particle at a point $\left(x, y\right)$ is a vector from the origin to the point $$\vec{r} = x\hat{i} + y\hat{j}.$$

\noindent The \textbf{displacement} is the change in position $$\Delta{\vec{r}} = \vec{r}_2 - \vec{r}_1.$$

\noindent The \textbf{average velocity} in the time interval $\Delta{t} = t_2 - t_1$ $$\vec{v}_{av} = \frac{\Delta{\vec{r}}}{\Delta{t}}.$$

\noindent The \textbf{instantaneous velocity} $$\vec{v} = \lim_{\Delta{t} \to 0} \frac{\Delta{\vec{r}}}{\Delta{t}}.$$

\noindent The \textbf{average acceleration} in the time interval $\Delta{t} = t_2 - t_1$ $$\vec{a}_{av} = \frac{\Delta{\vec{v}}}{\Delta{t}}.$$

\noindent The \textbf{instantaneous acceleration} $$\vec{a} = \lim_{\Delta{t} \to 0} \frac{\Delta{\vec{v}}}{\Delta{t}}.$$

\section*{Relative Velocity}
Measurements of velocity depend on the reference frame of the observer. A reference frame is just a coordinate system.
\noindent Consider a particle with velocity $\vec{v}_{pA}$ relative to reference frame A, which in turn has velocity $\vec{v}_{AB}$ relative to frame B. The velocity of the particle relative to frame B is then $$\vec{v}_{pB} = \vec{v}_{pA} + \vec{v}_{AB}.$$\\However, this relative-velocity relation is only true when the velocities are small compared to the speed of light.

\section*{Projectile Motion}
Consider an object projected with an initial velocity $\vec{v}_0$ at an angle $\theta_0$ with the horizontal surface. The components of the velocity are $$v_{0x} = v_0 \cos{\theta_0},$$ $$v_{0y} = v_0 \sin{\theta_0}.$$

\noindent In the absence of air resistance, the motion of the projectile is the superposition of a constant-velocity motion in the $x$-direction and a constant-acceleration motion in the $y$-direction. $$a_x = 0,$$ $$a_y = -g.$$

\noindent Thus, the kinematics of one-dimensional motion can be applied $$\Delta{x} = v_{0x} t,$$ $$v_y = v_{0y} - gt,$$ $$\Delta{y} = v_{0y} t - \frac{1}{2} gt^2,$$ $$v_y^2 - v_{0y}^2 = -2g \Delta{y}.$$ We can also show that the path of the projectile is a parabola $$y = x\tan{\theta_0} - \frac{g x^2}{2 v_0^2 \cos^2{\theta_0}}.$$

\end{document}
{ "alphanum_fraction": 0.6695936288, "avg_line_length": 42.2155172414, "ext": "tex", "hexsha": "9ae2344d3d016798ca219a21d3c6afd9e620ceee", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "716f5e7489d3b5fd5dede24eb2bba4673128af0b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "justinyangusa/physics", "max_forks_repo_path": "notes/Ch3Summary.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "716f5e7489d3b5fd5dede24eb2bba4673128af0b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "justinyangusa/physics", "max_issues_repo_path": "notes/Ch3Summary.tex", "max_line_length": 662, "max_stars_count": 1, "max_stars_repo_head_hexsha": "716f5e7489d3b5fd5dede24eb2bba4673128af0b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "justinyangusa/physics", "max_stars_repo_path": "notes/Ch3Summary.tex", "max_stars_repo_stars_event_max_datetime": "2016-09-11T07:10:09.000Z", "max_stars_repo_stars_event_min_datetime": "2016-09-11T07:10:09.000Z", "num_tokens": 1656, "size": 4897 }
\subsection{Wires}
{ "alphanum_fraction": 0.7142857143, "avg_line_length": 5.25, "ext": "tex", "hexsha": "9f83b5f4625d357d54d64cbbd79ce42ed159a55d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/engineering/engineeringElectrical/01-02-Wires.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/engineering/engineeringElectrical/01-02-Wires.tex", "max_line_length": 18, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/engineering/engineeringElectrical/01-02-Wires.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 21 }
\documentclass{article} \usepackage{graphicx} \begin{document} \title{Introduction to \LaTeX{}} \author{Author's Name} \maketitle \begin{abstract} The abstract text goes here. \end{abstract} \section{Introduction} Here is the text of your introduction. \begin{equation} \label{simple_equation} \alpha = \sqrt{ \beta } \end{equation} \subsection{Subsection Heading Here} Write your subsection text here. \begin{figure} \centering %\includegraphics[width=3.0in]{myfigure} \caption{Simulation Results} \label{simulationfigure} \end{figure} \section{Conclusion} Write your conclusion here. \end{document}
{ "alphanum_fraction": 0.7401574803, "avg_line_length": 17.1621621622, "ext": "tex", "hexsha": "4709eb63434d1a52ceca4ed4b868d52752ce5254", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f736e230d969f4b60e72ebe7a0a5c10277e0978e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Dogcatfee/Assorted_Templates", "max_forks_repo_path": "Latex Document.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f736e230d969f4b60e72ebe7a0a5c10277e0978e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Dogcatfee/Assorted_Templates", "max_issues_repo_path": "Latex Document.tex", "max_line_length": 44, "max_stars_count": null, "max_stars_repo_head_hexsha": "f736e230d969f4b60e72ebe7a0a5c10277e0978e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Dogcatfee/Assorted_Templates", "max_stars_repo_path": "Latex Document.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 173, "size": 635 }
\chapter{Sensitivity to Errors in Measurement and Preliminary Error Analysis}
\chaptermark{Sensitivity to Errors in Measurement}

\section{Objectives}
\begin{enumerate}
\item To recognise different kinds of errors, in particular systematic errors, errors due to the least count of an instrument, random errors, and errors in time measurement.
\item To learn to use Vernier calipers and the screw gauge.
\item To begin to understand propagation of errors.
\item To use preemptive error analysis to plan an experiment.
%\item To study the propagation of errors from measured to derived quantities.
\end{enumerate}

\section{Introduction}
When observing the time period of the simple pendulum, you already became aware of the extent to which you are certain of your time measurement. In fact, all the measurements you made -- of mass, of length, of time -- were uncertain. In experimental physics it is of enormous importance to understand how different kinds of errors arise, because without an understanding of errors it is difficult to make sense of data. This is a vast and subtle subject, and so you will learn the elements of it slowly.

It is important to appreciate that in physics \textit{an error is not a mistake}. An error in physics is a measure of how certain (and therefore how uncertain!) we are of a measurement. All measurements are to a greater or lesser extent uncertain; there is no such thing as a perfect measurement: therefore, there is no such thing as an error-free measurement (unless it is something like the \textbf{number of oscillations or rotations}).

To broaden the scope of your understanding of errors, you will not only use the data you got while observing the oscillations of a pendulum, but will also learn to use two very useful instruments, the Vernier calipers and the screw gauge, which highlight beautifully certain kinds of errors.

\begin{tip}
Example: if you use a metre scale with a least count (smallest division) of $0.1$ cm upside down to measure a pencil and misread the scale as $94.6$ cm instead of $5.4$ cm, this is a \textbf{mistake}. However, if you note down the length as being $5.4$ cm, but rightly note that it is not \textbf{exactly} $5.4$ cm, but simply that your measuring device does not allow for any more precision, this is a measure of \textbf{uncertainty}. We will expect you to have verified that you haven't done the former, and will take for granted from here on that no mistakes were made in the collection of data.
\end{tip}

The complete statement of a measured value \textit{must} include an estimate of the level of confidence associated with the value. This allows people to judge the quality of the experiment and allows for comparisons with other similar estimates of the same quantity, or with theoretical predictions.

\begin{imp}
A proper experiment must report both a ``best'' value and an uncertainty for each measured quantity. You will be expected to do this in all your experiments.
\end{imp}

Without an uncertainty estimate, it is impossible to answer the basic scientific question: Does my result agree with a theoretical prediction or results from other experiments? This question is fundamental for deciding if a scientific hypothesis is corroborated or refuted.

\section{Systematic Errors}
Suppose that you are weighing yourself and the pointer is at 1 kg even when no-one is standing on the scales. Then, obviously, all measurements will differ from the correct weights by 1 kg.
Such an error is called \textit{systematic}, since it appears systematically in all readings. The only way to eliminate a systematic error is to identify its cause and eliminate it. Some devices allow such checking to be done relatively easily, e.g. a metre scale or a weighing scale, but many others do not, e.g. electrical devices with internal errors. Estimating possible errors due to such systematic effects will depend on your understanding of your apparatus and the skill you have developed for thinking about possible problems. However, if you get a value for some quantity that seems rather far off what you expect, you should think about the possibility of systematic error more carefully. You could end up trusting a device that you do not know is faulty. This happens very often, to the best of us.

\begin{question}
\paragraph{Question:} You use a ruler whose end is worn out, so that it effectively begins at 2 mm. How can you avoid the systematic error that would arise if you used the ruler naively?
%\vspace{0.5 cm}
\paragraph{Question:} A systematic error can be either positive or negative. What is the difference and under what circumstances would they arise? (If you don't understand this point, you may end up doubling a systematic error in trying to eliminate it.)
\end{question}

\subsection{Least-count-related Errors}
Consider making a length measurement with a ruler whose smallest division is 1 mm. If you imagine aligning an object of a certain length against a ruler, you can ensure that one end of the object is flush with a mark on the ruler but you cannot ensure that the other end is also flush with another mark -- in general it will be somewhere between two successive marks, and you cannot tell exactly where it is; normally the best that you can do is say, ``It's more than halfway between'', or ``It's less than halfway between''. So the error associated with the least count of the instrument is normally between half and one full least count.

Because the least count of an instrument is a limitation, one can try to subdivide the region between two marks. It is obvious, however, that this is not going to work beyond a point, since we will simply not be able to tell where the end of the object lies. A couple of clever designs \textit{effectively} sub-divide the region. The most beautiful of these designs was first included in an instrument called Vernier calipers; another design is found in the screw gauge. You will learn how to use both of these instruments. (The essential design of the Vernier calipers is incorporated in many other instruments, and we then speak of it as having a Vernier scale.)

\section{Vernier Calipers and the Screw Gauge}
\subsection{Vernier Calipers}
You will regularly come across Vernier scales in the lab when using calipers, angular Vernier scales (in spectrometers), and travelling microscopes. Vernier scales allow you to read off a value more precisely than when using an ordinary scale. In this section, we will explain how this works.

First consider a ``main scale'' with a least count of 1 unit (you could imagine this is 1 mm, if you wish). This implies that any distance between, say, the 2 and 3 unit marks cannot be determined accurately. In other words, the best you could say is that an object is ``2 and a bit'' units. The Vernier scale allows you to \textbf{quantify} this ``bit'' to some extent.
The method is ingenious: instead of measuring this ``bit'' directly, its magnitude is translated into a degree of coincidence between two scales, the main scale and a secondary one called the Vernier scale. The Vernier scale has divisions that are \textbf{slightly} smaller than those of the main scale, such that $n$ divisions on the Vernier scale have the same length as $n-1$ divisions of the main scale. For example, 9 divisions on the main scale may coincide with 10 divisions on the Vernier scale (see Figure (\ref{fig:Vernier_1})).

\begin{figure}[!htb]
\centering
\includegraphics[scale=0.75]{figs/Vernier1.png}
\caption{10 Vernier scale divisions are set to coincide with 9 main scale divisions.}
\label{fig:Vernier_1}
\end{figure}

\begin{question}
\paragraph{Question:} What is the spacing between two Vernier scale divisions? Is it:
\begin{enumerate}
\item 1 \texttt{MSD} unit?
\item 0.1 \texttt{MSD} units?
\item 0.9 \texttt{MSD} units?
\end{enumerate}
\end{question}

Let us now try to measure the smallest possible distance between 0 and 1 unit. Take a set of Vernier calipers and close the jaws completely. If there is no zero-error, the only two readings on the Vernier scale which match the main scale are $0$ and $10$, as shown in Figure (\ref{fig:Vernier_1}).

\begin{figure}[!htb]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=0.95\linewidth]{figs/Vernier2.png}
\caption{Just away from 0, when the coinciding division is 1, the distance is $1\times\texttt{MSD}-1\times\texttt{VSD}=0.1\, \texttt{units}$.}
\label{fig:Vernier_2}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=0.95\linewidth]{figs/Vernier3.png}
\caption{Before crossing 1, the last coinciding division is 9, and the distance is $9\times\texttt{MSD}-9\times\texttt{VSD}=0.9\, \texttt{units}$.}
\label{fig:Vernier_3}
\end{subfigure}%
\caption{Measurement with Vernier Calipers}
\label{fig:Verniermeasurements}
\end{figure}

\begin{enumerate}
\item Move the Vernier scale \textit{slightly}, until the $1$ on the Vernier scale coincides with the nearest main scale division (this is obviously the $1$ on the main scale, see Figure (\ref{fig:Vernier_2})).
\item You will notice that the jaws are slightly apart. The distance between them is of course the distance the $0$ on the Vernier scale has moved. This is the least distance you can measure with these Vernier calipers.
\item But this distance is simply the difference between 1 Main Scale Division (\texttt{MSD}) and 1 Vernier Scale Division (\texttt{VSD}), since both the $1$ marks coincide!
\item Thus, the spacing between the jaws of the calipers is now $1$ $\texttt{MSD} - 0.9$ $\texttt{MSD} = 0.1$ $\texttt{MSD}$ units!
\item We have thus been able to measure a distance of $0.1$ units using two scales, one of least count $1$ unit, and the other of least count $0.9$ $\texttt{MSD}$ units!
\end{enumerate}

Similarly, if you went one division further, and had the 2 of the Vernier scale coincide with a main scale division, then the distance between the jaws would be $2\times\texttt{MSD}-2\times\texttt{VSD}=0.2$ \texttt{MSD} units. Thus, if the $n$th Vernier scale division coincides with a main scale division, the distance between the jaws is $n \times (\texttt{MSD}-\texttt{VSD})=n \times \texttt{LC}$, where \texttt{LC} is the Least Count of your Vernier calipers.\footnote{In our case, the Least Count is 0.1 \texttt{MSD} units.} Of course, up until now we have been measuring distances between 0 and 1 unit. What about an arbitrary distance?
Consider the example given in Figure (\ref{fig:Vernier_4}).

\begin{figure}[!htb]
\centering
\includegraphics[scale=0.75]{figs/Vernier4.png}
\caption{The main scale reading is more than 2 units. The coinciding Vernier scale division is 4 (the fact the main-scale reading with which it coincides is 6 is not important).}
\label{fig:Vernier_4}
\end{figure}

Here is the general procedure to take a reading using Vernier calipers:
\begin{enumerate}
\item Look for the main scale reading to the left of the $0$ of the Vernier scale; this is the Main Scale Reading or \texttt{MSR}. The total reading must be this much plus the extra bit between it and the 0 of the Vernier scale. In the example in Figure (\ref{fig:Vernier_4}), the \texttt{MSR} is 2.
\item Now look to find the mark on the Vernier scale which most closely meets any mark on the main scale. This is the Vernier Scale Division or \texttt{VSD}, giving you the most precise digit. In this example, the \texttt{VSD} is 4 because the mark at the end of the 4th division on the Vernier scale coincides with a mark on the main scale (it does not matter which mark on the main scale coincides). If you are still confused, make a movie in your head in which you start with the 0 of the Vernier scale coinciding with a main scale division. Now, slowly, move the 0 of the Vernier scale: First, when the spacing between it and the last main scale division is 1 LC, the first \texttt{VSD} will coincide with a main scale division, then, when that spacing is 2 LC, the \textit{second} \texttt{VSD} will coincide with \textit{another} main scale division, and so on.
\item The total distance is given by
\begin{equation*}
\texttt{distance} = \texttt{MSR} + (\texttt{VSD}\times\texttt{LC}) = 2 + (4 \times 0.1) = 2.4
\end{equation*}
In our case, $\texttt{distance} = 2.4$ \texttt{MSD} units.
\item Thus, using two scales -- one of least count $1$ unit and another of least count $0.9$ units -- we have calipers capable of measuring up to (or down to!) $0.1$ units! You have an \textit{effective least count} that is much smaller than the least count of the main scale.
\end{enumerate}

\begin{question}
\paragraph{Question:} Show that the least count of a set of Vernier calipers is given by:
\begin{enumerate}
\item $$\texttt{LC} = \texttt{Least count of main scale} - \texttt{Least count of Vernier scale}$$
\item $$\texttt{LC} = \frac{\texttt{Least count of main scale}}{\texttt{Number of Vernier scale divisions}}$$
\end{enumerate}
\end{question}

\subsection{The Micrometer Screw Gauge}
The design of the micrometer screw gauge is different, and uses the fact that a linear movement that is imperceptible can be turned into an angular movement that is easily perceptible; the best way to understand this is simply to examine the screw gauge (which, unlike Vernier calipers, is easy to understand). The screw gauge uses a screw with an accurate and constant \textbf{pitch} (the amount by which the thimble moves forward or backward for one complete revolution) as an auxiliary scale marked on a rotatable thimble. The micrometers in our laboratory have a pitch of $0.5$ $\mathrm{mm}$. The rotating thimble is subdivided into 50 equal divisions. The thimble passes through a frame that carries a millimetre scale graduated to $0.5$ $\mathrm{mm}$. The jaws can be adjusted by rotating the thimble using the small ratchet knob. This includes a friction clutch which prevents too much tension from being applied.
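To make the arithmetic of the reading procedure concrete, here is a small Python sketch (purely illustrative and not part of the lab manual; the function name and the screw-gauge numbers are assumptions). It combines a main scale reading with the coinciding division for either instrument:

\begin{verbatim}
def reading(msr, coinciding_division, least_count):
    """Total reading = main scale reading + coinciding division * least count."""
    return msr + coinciding_division * least_count

# Vernier calipers from the example above: MSR = 2, VSD = 4, LC = 0.1 MSD units
print(reading(2, 4, 0.1))        # 2.4 MSD units

# Screw gauge: pitch 0.5 mm over 50 divisions gives LC = 0.01 mm; a hypothetical
# reading of 4.5 mm on the main scale with division 28 coinciding gives 4.78 mm
print(reading(4.5, 28, 0.5/50))
\end{verbatim}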
\begin{imp}
Only tighten the screw gauge by rotating the ratchet, otherwise you may damage the instrument. Stop rotating after you hear \textbf{three} clicks. \textbf{Do not} tighten the screw any further.
\end{imp}

\begin{question}
\paragraph{Question:} Show that the least count of a screw gauge is given by $$\texttt{LC} = \frac{\texttt{Pitch of the screw}}{\texttt{Number of circular scale divisions}}$$
\end{question}

\section{Random Errors}
Let us go back to the simple measurement of the length of an object with a ruler. We place the object against the ruler with one end flush against a certain division of the ruler, say 0 (it does not have to be 0, but usually is). Then we observe where the other end is. In general, it will be, as mentioned earlier, at some point between two successive marks on the ruler. Let us imagine doing this repeatedly. Every time we place one end against 0, it will be at a \textit{slightly} different point, and so the other end will also move slightly, in a manner that we cannot control. If this movement is much smaller than the distance between two successive marks on the ruler, i.e. its least count, then this random movement will not be discernible. On the other hand, if the movement is greater than the least count of the ruler, then the length we measure will change from one measurement to the next.

In the first case the error is determined by the least count of the ruler, and the length measured will not change from one measurement to the next. In the second case, however, it is clear that the length measured \textit{will} change from one measurement to the next. We call this kind of error a \textit{random error}.

Another example that may make the idea clearer is the following. Imagine that a sound of a fixed duration is played over and over again, and you are required to time the duration with a stopwatch. You will have to start and stop your stopwatch over and over again. It is easy to see that even though the duration of the sound remains the same, the duration that you measure will vary from one measurement to the next. Once again we have a random error.

It should be intuitively clear that the magnitude of the error depends not on the duration of the sound but on the \textit{spread} of your measurements. The number we assign to the random error must therefore be obtained from a measure of this spread. To do this, we must remove what does not matter, namely the actual duration of the sound. Generally we don't know the actual duration of the sound. So we do the best we can: we subtract the mean of our measurements from all of them, since the mean is our best guess of the actual duration. That leaves us a distribution of the variations of the measurements. We are not really concerned with whether the variation is positive or negative. So we use the sum of the squares of the variation, i.e. $\sum_{i = 1}^{n}(x_i - \overline{x})^2$, where $x_i$ is any reading, $\overline{x}$ is the mean, and $n$ is the total number of readings. Because the variations are both positive and negative, the sum of the variations themselves would grow not in proportion to $n$ but in proportion to $\sqrt{n}$. (This is a subtle point; you will understand it later.) So the standard measure of the spread of the readings is the standard deviation
\begin{equation*}
\sigma = \sqrt{\frac{\sum_{i = 1}^{n}(x_i - \overline{x})^2}{n - 1}}.
\end{equation*}
(Why $n-1$ instead of $n$ in the denominator is another subtle point. Again, let it be for the moment.
Anyway, the difference between $n$ and $n-1$ is negligible for large $n$, which is the only case in which the idea of random errors is meaningful.)
\section{Errors in Time Measurements}
The time measurements you will encounter in the lab will be made by human beings (you) using clocks. If you compare the process of measuring a duration to that of measuring a length, you will immediately see that you no longer have the luxury of aligning one end of your ``object'' (here a duration) with a physical scale: you must start your clock at a moment that you best judge to be the beginning of the duration and end it at what you best judge to be its end. Thus you automatically make two errors, one at the beginning of your time measurement and one at the end. The magnitude of the error has nothing to do with how long or short the duration is. Is this error due to the least count of your clock? It is -- partly; but that's not the whole story: there is another factor to take into account, your reaction time. As a human being you have to judge when the duration begins and press the start button on your clock; both your judgement and the pressing of the button entail errors that are due to the finite time it takes for your brain and your body to react. Thus the errors at the beginning and the end of a measurement of duration are a combination of the error due to the least count of your clock and your reaction time. If one is much larger than the other, it dominates and the other may be neglected. Determining the least-count error is easy: it is just the least count of your clock. Determining the error due to finite reaction time is not easy. One way is to do many measurements of a duration that is known to be constant (e.g. the period of a pendulum) and look not at the average but at the spread in the readings, a measure of which is the standard deviation. Most human beings have a reaction time of about 0.1 $s$, much larger than the least count of standard lab stopwatches, which is usually 0.01 $s$. Thus the error in most of your time measurements will be dominated by your reaction time.
\subsection{Time Measurement for Periodic Phenomena}
You will often be interested in measuring the duration of a periodic phenomenon, e.g. the oscillation of a pendulum. If you assume that the period of the pendulum has a true value that does not change from one oscillation to the next, then there is a way in which you can dramatically reduce the error on the time period. Since the error in the measurement of a duration occurs only at the ends, you can simply measure the duration for many oscillations, e.g. 10 or 100, and divide by that number. So, if the total error due to your reaction time on a duration (i.e. at both ends of a duration together) is 0.1 $s$ and you measure the duration for 10 oscillations, the error associated with a single time period will be $0.1/10 = 0.01$ $s$. Given that this is true, you may ask why one doesn't simply measure the duration for 1000 oscillations (or some arbitrarily large number). One answer is that the oscillations will damp out. The other has to do with human fatigue: if you are asked to count 1000 oscillations, you may make an error in counting the number of oscillations, and that would lead to a much larger loss in precision than what you gained by increasing the number of oscillations. (Machines don't have this problem and it makes sense, when a mechanical counter is being used, to increase the number of oscillations as much as possible.)
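Before moving on, here is a small Python fragment that makes the two ideas above concrete: the spread of repeated readings (the standard deviation, computed with the $n-1$ denominator) and the reduction of the error on the period when many oscillations are timed in a single run. It is an illustration added to these notes, not part of the lab procedure, and all the numbers in it are invented.
\begin{verbatim}
# Illustrative sketch: spread of repeated timings, and the error on the period.
import statistics

readings = [2.31, 2.12, 2.25, 2.40, 2.18]   # repeated timings of one duration, in s
mean = statistics.mean(readings)
sigma = statistics.stdev(readings)           # sample standard deviation (n - 1)
print(f"mean = {mean:.3f} s, spread (standard deviation) = {sigma:.3f} s")

reaction_error = 0.1      # total reaction-time error on one timed duration, in s
n_oscillations = 10       # oscillations timed in a single run
duration = 20.25          # invented duration of the 10 oscillations, in s
period = duration / n_oscillations
period_error = reaction_error / n_oscillations   # 0.1 / 10 = 0.01 s
print(f"T = {period:.3f} +/- {period_error:.3f} s")
\end{verbatim}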
A question that might arise is what happens if we measure the duration for 10 oscillations several times. Is this equivalent to measuring the duration for 100 oscillations? No. Now we have a pair of end points for each duration of 10 oscillations, and each point is associated with starting or stopping a clock. If we measure the duration for 100 oscillations (let us call this case 1), we have 2 end points, and if we measure 10 sets of 10 oscillations each (case 2), we have 20 end points, i.e. there are 10 times as many points at which errors can occur. From this we may at first conclude that the error in case 2 is 10 times the error in case 1, but that is not correct: the errors we make at the end points can be either overestimates or underestimates, i.e. positive or negative, and so as we stack up the end points there will be some cancellation. It can be shown -- and at some point you will show this -- that as the number of end points $n$ increases, the error goes up as $\sqrt{n}$. Thus, if the error on a time measurement with a single pair of end points is $\Delta t$, the error on combining $n$ such time measurements is $\sqrt{n} \, \Delta t$. Putting all this together: the error on the time period $T$ using a single run of 100 oscillations is $\frac{\Delta t}{100}$, while the error on $T$ using 10 measurements of 10 oscillations each is $\frac{\sqrt{10} \, \Delta t}{100} = \frac{\Delta t}{10\sqrt{10}}$, which is $\sqrt{10}$ times larger. (Don't worry if this paragraph went over your head; you will understand it slowly, over the semester.) It should be obvious from the discussion above that while time measurement is much more subtle than length measurement, it allows an almost arbitrary improvement in precision (reduction in error) on the time period itself (to be distinguished from the duration of a measurement) if the phenomenon is periodic. This can be used to great effect in certain devices, e.g. Kater's pendulum. To ensure that this great advantage is not vitiated by incorrect counting of oscillations, it is very important to know exactly how to count the number of oscillations. A very common error is to count $n-1$ oscillations when one wants to count $n$. You will learn the correct procedure in the lab.
%\section{Theory: Error Analysis}
%\subsection{Types of Errors}
%``Error'' analysis is a rather glaring misnomer. All measurements have some degree of uncertainty that may come from a variety of sources, from the lack of precision of the measuring instrument to random fluctuations in the environment. The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis. However, the common definition of an ``error'' being a ``mistake'' is very misleading. However, the terminology is so rampant that we will be forced to use it at times, referring to ``error'' analysis when we really mean ``uncertainty'' analysis.
\begin{tip}
\paragraph{A cautionary tale:} Neutrinos are weakly interacting, nearly massless particles that are theorised to be moving close to -- if not at -- the speed of light. In September 2011, scientists of the OPERA collaboration reported evidence that neutrinos they produced at CERN in Geneva and recorded at the OPERA detector at Gran Sasso, Italy, had travelled \textbf{faster} than light. If this were true, it would have violated one of the pillars of modern physics, the Theory of Relativity.
The neutrinos were calculated to have arrived approximately 60.7 nanoseconds sooner than light would have if it had traversed the same distance in a vacuum. Assuming the error were entirely due to random effects, scientists calculated that there was a 0.2-in-a-million chance that this may be a false positive. In March 2012 it was confirmed that a fibre cable was not fully screwed in during data gathering, and this effectively decreased the reported flight time of the neutrinos by 73ns, making them seem faster than light. The corrected difference between the measured and expected arrival time of neutrinos (compared to the speed of light) was approximately 6.5 $\pm$ 15 ns. This is consistent with \textbf{no difference at all}, thus the speed of neutrinos is consistent with the speed of light within the margin of error. \end{tip} %\subsubsection{Least count errors} %As you have already seen, the least count of a measuring instrument is the smallest value the instrument can resolve. For example, if we use a metre scale to measure an object and find that it is $23.5$cm, the uncertainty here \textbf{due to the measuring instrument} is $0.1$cm. Thus, we can say that the length of the object is given by $l = (23.5 \pm 0.1) cm$ where $\pm \Delta l = \pm 0.1$cm is the uncertainty in measurement due to the least count of the metre scale. This is the value up to which the ``best'' value of $23.5$ may be specified with any confidence. %In general, such errors are the \textbf{lower bounds} of experimental errors. No error that is less than the least count error need be considered. That being said, if you find other sources of error to be greater than the least count error, then those sources should be used. For example, the stopwatches in the lab have a least count of $0.01$s. However, your reaction time of turning on and off a stopwatch would be at best $0.1$s. Thus, while the least count error is significantly smaller, the \textbf{actual} error in a time measurement using the stopwatch would be of the order of $0.1$s. %\subsubsection{Systematic errors} %Imagine now that the metre scale that you are using to measure the length of the object is made of metal, and as a result expands slightly in summer. If you performed a set of measurements, you would obviously not find the ``true'' value of the length. However, such an error may be discovered by comparing your measurements from the metal scale with a wooden scale, for example. When using instruments like digital multimeters you obviously rely on it having been calibrated properly. However, there might have been an accident that a student did not report, and as a result there could well be a systematic error for someone unlucky enough to be the one using it next. %\subsubsection{Random errors} %Another reason for two measurements not agreeing totally is due to small, uncorrelated variations in the environment or measurement process. For example, while measuring the time period of a pendulum, you might get different readings due to small differences in your reaction time. Similarly, when measuring the height of a cylinder with a Vernier calliper, you may see small variations depending on \textbf{where} you measure the height, as it is possible the cylinder was not made to be smooth up to $0.1$mm everywhere. %Such variations, being \textbf{random}, are expected to fall just as often \textit{above} a certain value as \textit{below} it. 
A truly random source would give you a \textbf{Gaussian} distribution of errors; the mean of this distribution $$\overline{m}_N = \frac{\sum_{i=1}^{N}{m_i}}{N}$$ would approach the ``best'' value as the number $N$ of well-sampled values increases. The uncertainty of these measurements would then be quantified by the \textbf{standard deviation}\footnote{There is a reason for the $N-1$: it is related to the number of ``degrees of freedom'' in statistics. Understanding it is not essential for this course.} $$\Delta m_N = \sqrt{\frac{\sum_{i=1}^{N}(m_i - \overline{m}_N)^2}{N-1}}$$
%more conventionally denoted by $\sigma$.
\subsubsection{Precision, Accuracy, and Statistics}
\begin{enumerate}
\item \textbf{Accuracy:} is how closely a measurement comes to some ``true'' value. It determines how well we have eliminated \textbf{\textit{systematic errors}} and mistakes in our measurements.
\item \textbf{Precision:} is how closely a set of measurements agree \textit{with each other}.
%It determines how well we have eliminated \textbf{\textit{random errors}} in our measurements.
\end{enumerate}
To use a sports simile, if you're playing football and are accurate, your aim will always take the ball close to or into the goal. If, however, you harbour a personal grudge against the coach and succeed in beaning him on the head every time you get the ball, then you would be \textit{precise}, but not accurate.\footnote{It is widely accepted that the point of football is to get the ball into the goal and not to traumatise your coach, which is why this example works.} \textbf{A caveat:} In any physics experiment worth doing, you usually don't have the answer beforehand. This means you don't have a ``true'' value with which to compare your answer. This is why it is so important to be both precise (reduce all random errors present in your measurements) and accurate (reduce all systematic errors present in your measurement). You must also, however, come up with ways to check if your answer is accurate \textit{without} trying to resort to a ``textbook'' solution.
\begin{question}
\paragraph{Question:} In the example of the ``superluminal'' neutrinos given earlier, was the error of the OPERA team one of inaccuracy, or of imprecision?
\end{question}
\section{Propagation of Errors}
In general we are interested not just in our observations themselves, but in quantities derived from them. It is obvious that a derived quantity will inherit some of the errors in the observations from which it is derived. But to what extent will it do so? We will use a few examples to understand this.
\begin{enumerate}
\item We want the radius of a circular object, but we usually measure its diameter. Here the observed quantity is the diameter $d$ and the derived quantity is the radius $r$. We have $r = d/2$. Thus if there is an error of $\pm \delta$ in measuring $d$ (for example due to the least count of the measuring instrument), then clearly the error in $r$ is $\pm \delta/2$.
\item We want the perimeter $p$ of a rectangle having measured its length $l$ and its width $w$: $p = 2l + 2w$. If the errors in measuring $l$ and $w$ are $\pm \delta$ for each, the error in $p$ is clearly $\pm 2 \delta \pm 2 \delta$, which ranges between $- 4 \delta$ and $+ 4\delta$, depending on the signs of the errors on $l$ and $w$. So the worst-case error on $p$ is $\pm 4 \delta$.
\item We want the area $A$ of a rectangle having measured its length $l$ and its width $w$: $A_\text{true} = l \, w$.
Suppose that the error in the measurement of $l$ is $\pm \delta l$ and the error in measuring $w$ is $\pm \delta w$; then what we derive is $A_\text{derived} = (l \pm \delta l) (w \pm \delta w)$. If the errors are much smaller than the measurements, we can neglect the term of order $\delta l \, \delta w$. Then the value of $A_\text{derived}$ ranges between $A_\text{true} - w \delta l - l \delta w$ and $A_\text{true} + w \delta l + l \delta w$. Notice that the expression for the error is rather complicated; we can simplify it if we express it in terms of the relative error. Since the error in $A$ is $\delta A = A_\text{derived} - A_\text{true}$, you can easily show that
\begin{equation*}
\frac{\delta A}{A} = \pm \frac{\delta l}{l} \pm \frac{\delta w}{w}.
\end{equation*}
The quantity $\delta x / x$ is called the \textit{relative error} in $x$.
\item We want the volume $V$ of a sphere given a measurement of its diameter $d$. You can easily show, using the method given above, that
\begin{equation*}
\frac{\delta V}{V} = \pm 3 \, \frac{\delta d}{d}.
\end{equation*}
Notice how the sensitivity of the relative error in $V$ to the relative error in $d$ depends on the \textit{power} in the dependence of $V$ on $d$. (Since $V = (4/3) \pi r^3$, $V \propto d^3$.)
\item We want the mass per unit length $\lambda$ of a wire, having measured its mass $m$ and its length $l$; here $\lambda_\text{true} = m/l$, while what we derive is $\lambda_\text{derived} = (m \pm \delta m) / (l \pm \delta l)$. If you go through the same process as in the last case considered, you will find that
\begin{equation*}
\frac{\delta \lambda}{\lambda} = \pm \frac{\delta m}{m} \pm \left( -\frac{\delta l}{l} \right).
\end{equation*}
\end{enumerate}
\section{Using Error Analysis to Plan Your Experiment}
Suppose that you want to measure the volume of a tile (whose length $l$ and width $w$ are much greater than its thickness $t$). Since $V = l \times w \times t$, you have
\begin{equation*}
\frac{\delta V}{V} = \pm \frac{\delta l}{l} \pm \frac{\delta w}{w} \pm \frac{\delta t}{t}.
\end{equation*}
This formula shows you right away that there is not much point in improving one measurement indefinitely if the dominant cause of error is another measurement. What we want is for all the measurements to contribute more or less equally. This immediately tells us that we must choose our instruments and number of measurements so that this goal is achieved.
\begin{question}
\paragraph{Question:} What instruments would you choose to measure the length, width, and thickness of the tile so that the relative errors in them are comparable?
%\vspace{0.5 cm}
\paragraph{Question:} Suppose $P = x y^3$, where $x$ and $y$ are the measured quantities. To which of them is the relative error in $P$ more sensitive?
\end{question}
\newpage
\section{The Experiment}
\subsection{Apparatus}
\begin{enumerate}
\item A pair of Vernier calipers
\item A screw gauge
\item A set of ball bearings
\item A set of aluminium cylinders
\item A small measuring flask
\item A container with a spout
\end{enumerate}
\subsection{Description}
\begin{enumerate}
\item In \textbf{Part A} you will measure the radius, height, and volume of different cylinders. You will then plot an appropriate graph from which you will extract the value of $\pi$.
\item In \textbf{Part B} you will measure the radius and volume of different ball bearings. You will then plot an appropriate graph from which you will extract the value of $\pi$.
\end{enumerate} \subsection{Suggested Procedure} \subsubsection{Part A} \begin{enumerate} \item Decide on the best instrument and method to measure the following properties of the cylinders: \begin{enumerate} \item Radius \item Height \item Volume \end{enumerate} \begin{question} \paragraph{Question:} How and where would you measure the radius of the cylinders? How many trials would you take? \end{question} \item Be sure to take a sufficient number of \textbf{well-sampled} readings. \item Decide on an appropriate graph between the measured quantities from which you can extract the numerical value of $\pi$. \end{enumerate} \subsubsection{Part B} \begin{enumerate} \item Decide on the best instrument and method to measure the radii and the volumes of the bearings. \item Be sure to take a sufficient number of \textbf{well-sampled} readings. \item Decide on an appropriate graph between the measured quantities to extract the numerical value of $\pi$. \end{enumerate} \begin{question} \paragraph{Question:} How does your estimate compare with the known value of $\pi \approx 3.14159$? \end{question} \newpage
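\paragraph{A quick numerical check:} The propagation rules from the Propagation of Errors section are easy to check numerically. The Python fragment below is an illustration added to this manual (it is not part of the experiment, and all the numbers in it are invented): it propagates worst-case relative errors for the tile volume $V = l \times w \times t$ and for a sphere, where $V \propto d^3$.
\begin{verbatim}
# Illustrative only: worst-case propagation of relative errors (invented numbers).
l, w, t = 10.0, 8.0, 0.50        # tile dimensions in cm
dl, dw, dt = 0.1, 0.1, 0.001     # absolute errors in cm

V = l * w * t
rel_error = dl / l + dw / w + dt / t     # dV/V = dl/l + dw/w + dt/t (worst case)
print(f"V = {V:.1f} cm^3, dV/V = {rel_error:.3f}")

# Sphere: V is proportional to d**3, so dV/V = 3 * dd/d.
d, dd = 2.00, 0.01
print(f"dV/V = {3 * dd / d:.3f} for dd/d = {dd / d:.3f}")
\end{verbatim}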
{ "alphanum_fraction": 0.7566330446, "avg_line_length": 103.0621468927, "ext": "tex", "hexsha": "00a25686e85547f810392693dcb717c02645088e", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-01-20T07:01:11.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-20T07:01:11.000Z", "max_forks_repo_head_hexsha": "c0ad1e8216311fcad835880c938eafdd75bcceff", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "dpcherian/PHY102-Lab-Manual", "max_forks_repo_path": "03_Errors.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c0ad1e8216311fcad835880c938eafdd75bcceff", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "dpcherian/PHY102-Lab-Manual", "max_issues_repo_path": "03_Errors.tex", "max_line_length": 1552, "max_stars_count": null, "max_stars_repo_head_hexsha": "c0ad1e8216311fcad835880c938eafdd75bcceff", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "dpcherian/PHY102-Lab-Manual", "max_stars_repo_path": "03_Errors.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8928, "size": 36484 }
\documentclass[12pt, right open]{memoir} \usepackage{graphicx} \usepackage{tikz} \usetikzlibrary{matrix,chains,positioning,decorations.pathreplacing,arrows,automata} \usetikzlibrary{shapes.geometric, calc, intersections} \usepackage{mathtools} \usepackage{amsmath} \usepackage{float} \floatstyle{boxed} \restylefloat{figure} \graphicspath{data/images} \usepackage{ifthen} \setcounter{secnumdepth}{5} \begin{document} \chapter{Introduction} The whole universe is seen in the form of a person. The body of the universe consists of the 12 Rashis, which can be roughly translated as “astrological signs.” Each Rashi represents a part of the body. The nine Grahas or planets govern this universal body. The 12 Rashis are represented in the Maharishi Jyotish program as 12 equal segments of a circle, each covering 30 degrees.The sequence of the 12 Rashis begins with Mesha as the first Rashi. \begin{figure} \centering \includegraphics[scale=1]{data/images/jyotish_nakshatras.jpg} \caption{The Circle of 27 Nakshatras with Reference to the Earth} \end{figure} This illustration shows the sequence of the 27 equal segments of the Nakshatras, each consisting of 13 degrees 20 minutes, with their Sanskrit names and numbers. As seen from the earth, the moon passes through this circle of the 27 Nakshatras in about 27 days. Thus it takes the moon about one day to pass through one Nakshatra. The moon is shown here moving through Pushya Nakshatra in Karka Rashi. This example also shows the sun moving through Ashvini Nakshatra in Mesha Rashi. Each of the other planets are also moving through the 27 Nakshatras. Nakshatra Pada: Each Nakshatra is divided into 4 equal parts of 3 degrees 20 minutes each part. They are called Nakshatra Pada 1, 2, 3, and 4. Each Nakshatra Pada has its own significance for interpretation. %\begin{tabular}{|l | l|} %\hline %\textbf{Vedic Name} & \textbf{English Names} \\ \hline % Mesha & Aries \\ \hline % Vrishabha & Taurus\\ \hline % Mithuna & Gemini\\ \hline % Karka & Cancer\\ \hline % Simha & Leo\\ \hline % Kanya & Virgo\\ \hline % Tula & Libra\\ \hline % Vrishchika & Scorpio\\ \hline % Dhanu & Sagittarius\\ \hline % Makara & Capricorn\\ \hline % Kumbha & Aquarius\\ \hline % Meena & Pisces\\ \hline %\end{tabular} \section{Ashwini Star or Nakshatra} You are born when the Moon was in Ashwini star or nakshatra. You are Scrupulous, prosperous, obedient, truthful and obtain all comforts. You are endowed with good family and children and wealth. You are daring, handsome and monied. You are a capable administrator, cruel, of big body and respected. You sacrifice money, have a good conduct and an enjoyer. You are quick and candid, a knower of scriputures, rational and succeed in quarrels. You have long hands and wide eyes. You are respected by Kings and governments and speak sweetly. You are daring, arrogant, thief and a fraud. You are unkind, practice of forbidden paths, intelligent and contain some sympal on your girdle. You are fickle and undertake travelling. You may court other women and are farsighted. You are not independent, but strong. \section{Bharani Star or Nakshatra} If you are born in Bharani Star, incur the displeasure of others, enjoyer, obedient and learned, fear water and get the wealth of bad people. You know the scriptures, rationale, intelligent and win the quarrels. You bear scars on your body due to injuries. You are good speaker and suffer from heart diseases. You are stable, knowledgeable and truthful. You are longlived. 
You are also determined, and proud, you do not tolerate other’s prosperity. \section{Krithika Star or Nakshatra} If you are born with Krithika star, benefit of friends and family, endowed with son and enjoy life and are very prosperous. You suffer from hunger and are without strength and money, you undertake purposeless travelling but recollect the help rendered by others. You are very harsh and do jobs forbidden. You are cruel and do good jobs and they also have defective face. You are bright, instant angry and run after other women. You are heavy eater and fiery. You are stingy, intelligent, famous, successful, loved by partner, with property earned by yourself, white haired, afraid of wind, and carry the symbols of til and fish. \section{Rohini Star or Nakshatra} If you are born with Rohini Star, you are agriculturist, expert, well behaved, handsome, good speaker and poet. You are stable minded, respected, enjoyer and interested in love making. You are of sweet speech, intelligent, capable and bright. You are longlived and perform accepted jobs, religious, truthful and help those who have helped you. You are respected by government. You respect Gods and Brahmins and know the science of metres and metaphors. You are able servants of your lord and determined. You are endowed with good looking hands and wide forehead, handsome, independent, loved by your children, experts, wealthy and respect to corns and money, have desire to wear new clothes, suffer from eye diseases, litter feared and play with women. \section{Mrigashira Star or Nakshatra} If you are born with Mrigashira or Mrigaseesha Star, you are truthful, handsome, enjoywealth and prosperity, pure in heart, bright, sage like and fickle minded. You are interested in learning, obedient, always loving teachers, friendly with government authorities and respected by them. You are enthusiastic, feared, monied, lovable, knowledgeable in architecture, kingly and good administration. You are pleasant spoken, intelligent and affectionate to your mother. You are quick and arrogant and hate others. \section{Aridhra Star or Nakshatra} If you are born with Aridhra or Thiruvadhirai Star, you are soft, stable minded, strong, earning by sacrifice, afflicted by sickness, fear and angry. You suffer due to hunger, hard bodied, lovable, forget to help rendered by others, expert in trade and commerce, cruel, having may relatives, ill advisers and hate all. You are with pride, of lower levels and do jobs which are forebidden. You are benefit of money and corns. Such people are poets, little learned, longlived and little interested in things. \section{Punarvasu or Punarpoosam Star} You are endowed with children, good qualities, wealth and you indulge in bringing difference of opinion among friends. You speak very clearly and are secretive in your dealings. You are learned in the sciences and interested in personal decoration of gems and gold ornaments. You are givers and famous. You are religious, attractive, carry out works of other women. You are tolerant, satisfied with small things and fast moves. You collect a good circle of friends, daily eat good food, sacrificer and worship lord Vishnu. You are peaceful, happy, enjoying a endowed with good progeny. You are noted to be longlived, loved by wife, handsome, interested in pomp, of good heart, high in conduct, with low voice, like cereals, intelligent, thinker and possess fish sign on stomach. 
\section{Pushya or Poosam Star} You are wealthy, happy, respected, stable minded, endowed with handsome body, respecting parent, following their religion and most obedient. You command money and vehicles of novelty. You are important person in the world, knowledgeable in the science of spells and respected by government.You retain your stand, and are happy. You are characterized by the presence of fish symbol on the hands and feet,. You are endowed with all good qualities, short tempered and interested in the discussion of scriptures. \section{Ashlesha or Oilyam Star} You are cruel, daring and angry, you probably co-habit with people of same sex and animals like cow. You travel aimlessly, do undesirable jobs, harmful to your own and other people, you are proud, and suffer on a account of sex starvation. You are angry and are strong like lion. You know Brahman and dependent on others for life. You are cruel to all beings, pay fines by doing sinful acts, unstable minded and fearful. You are always sad, serve others, short tempered and vary bad persons. You are childish, slow in the work of your lord, wide eyed, endowed with fish sign in eyes, develop enmity with your own people, highly stingy and stubborn. \section{Magham or Magha Star or Nakshatra} You are reserved, highly sexy, endowed with comforts and have wife who are also reserved, knowledgeable, pure and sacrifice things. You are cruel, respect your father, sharp in reacting with others, non-sinners and destroy your enemies. You are harsh spoken, hefty bodies, angry, transact with teachers and government, interested in the advise of well wishers or sages and are bright. You tread the religious path and endowed with pleasant qualities, enjoyer and truthful. You are served by many servants, conduct big operations and are leader of a big party or army. You are learned, long lived, earn good money, obedient and help your relatives. You have blood shot eye and possess fish sign on the chest. \section{Poorvaphalguni or Pooram Star or Nakshatra} You are expert in love making, strong and handsome, but extremely fearful. You help others, and carryout works which cannot be done by others. You are angry, fraud, cruel and candid. You are bright, wealthy, giver, expert in music and dance. You are wise and are in the government service. You are longlived and and be get few children. You are learned, reserved, handsome, love your brothers, endowed with soft hands and feet, born of Rajamsha, strict transactor, endowed with a attractive eyes, idle and possess government signs. \section{Uttaraphalguni or Utharam Star} You are wealthy, knowing the science of weapons, impotent, respectful and endowed with attractive eyes. You are fixed to the scienes and stingy in spending money. You are giver, kind hearted, happy and endowed with good qualities, famous, kingly, daring and extremely soft personalities. You win over your enemies, loved by women, experts in arts, truthful and learned. You are obedient, religious, loved by people, stable and bright. You are fearful and good warmers. \section{Hastham or Hastha Star or Nakshatra} You are stable bodied, untruthful, bear excellent character and warrior. You are giver, independent, famous, interested in worshipping of gods and pious people, and likely to get all the properties of your father. You are learned, handsome, wealthy, daring, helpful to others and all knowledgeable. You are cruel and grab other’s property. 
You are bad people, strong, interested in music, happy with your relatives and friends, respected by government authorities, respect the gods and pious, destroy your enemies, famous and possess the signs of fish. \section{Chitra Star or Nakshatra} Those born in their star Chitra, are extremely fearful of low mind suffer from hunder, and troubled in mind. They are weak and perverse sexed. They are troubled by their enemies, obedient, capable and wear peculiar dresses. They advance peculiar arguments. They are experts in love making, bright, wealthy, enjoyers and learned. They are probably distorted (personalities), strong, brave, endowed with wife and children and respect the gods and pious. \section{Swati Star or Nakshatra} Those born in Swati star, are wise, witty and learned. They are religious and pleasant minded. They are endowed with the body of kingly symbols, bright and handsome and loved by their wives. They perform religious works, truthful, expert in transactions, sexy, givers and learned. They are obedient, lovers of pious people, experts in architecture and sculpture, stingly and respect gods and Brahmins. They help others, daring, long lived, large eyed, kind, famous, lover of relatives and friends, independent, eat optimum food, and followers of their own religion. \section{Vishaka Star or Nakshatra} Those born in Vishaka star are physically impure, strong, go after other’ women, benefit of relatives, and cause deference among friends. They are always thinking about sex, experts in the jobs of fire, brewery, lores and metallurgical operations and do not develop friendship with any body. They are jealous, subile, kind, forbidden love, of controlled senses, wealthy and stingy. They are intelligent, stable minded, good speakers, love relatives, children and friends. They are great people. They are candid quarrel some and addicted to prostitutes. They are witty, obedient, long lived, help many, speakers of truth, short tempered, lovers of women, possess blood shot eyes and also fish sign on the secret parts. \section{Anuradha or Anusha Star or Nakshatra} Those born in the star Anuradha are interested in women with commandable complexion of the boday, be wildered, idle and weak minded. They are famous, experts in arts, and destroyers of their enemies. They are servers of Government, brave, stationed in countries other than their own, handsome, and destroyers of their sins. They are truthful, respect kings, respect their mothers and are musicians. They enjoy life and go after other women. They are fit for friendship, kind hearted, of helping nature, intelligent, enjoyers and travelers. \section{Jyeshtha or Kettai Star or Nakshatra} Those born in this Jyeshtha star, are fond of daily physical exercise, warriours, always respecting elders, famous and endowed with fine qualities. They gain fame, are bright, accompany Government, affected in mind, deeply idle, well known and speak final. They are addicted to bad jobs, capable of underging difficulties, unbearable, cruel, untruthful and wealthy, they are tall and endowed with few children. \section{Moola Star or Nakshatra} Those born in moola star, are fast workers, soft, fickle, endowed with good qualities, harmful and untrustworthy. They route out the different parties unconcerned, otherwise they enhance the prosperity of their clean and comfort their mothers. They are sagelike, wealthy, enjoyers, helping others, comforted by women and suffer from phlegm diseases. 
They are also proud, serve the government, processes knowledge of finer aspects, daily enjoyers, handsome and hated by all. They are happy and have vehicles, harmful, do permanent jobs, and famous by defeating their enemies. They are religious, thieves, respectable, show affection to relatives, red complexioned, sexy, obedient, independent, blood shot-eyed and loved by women. \section{Purvashadha or Pooradam Star or Nakshatra} Those born in this purvashadha Star, are thieves, proud, angry, wealthy, endowed with good qualities, always speak untruth and capable. They are drunkards, fickle, good speakers, having attractive face, obedient, lovers of cows, suffer ungrudgingly, youthful and live with respect. They help at the sight of difficulty, loved by people and experts in almost all fields. They are slightly taller, possess sublet fingures, famous sophisticated. \section{Uttarashadha or Uthiradam Star or Nakshatra} Those born in this uttarashadha star are born with Brahamsa, cause difference of opinion between friends, do physical exercises, loved by people, enjoyers and travel a lot. They are givers, successful, obtain enjoyment from wife and happy. They recollect the help rendered by others, religious, obedient, handsome and wealthy. Loved by their relatives they are great givers. They are all knowledgeable, pure, endowed with good qualities, entertain guests, witty, kind hearted, furnished of grand qualities, brave, endowed with long nose and fish signs and private parts. \section{Shervana or Thiruvonam Star or Nakshatra} Those born in this Shravana Star are experts in music, dance and drama, truthful and stable minded, they enjoy hearing scriptures. They are endowed with many children and friends, respect pious people, win over their enemies, large-hearted, knowledgeable, brave, learned, wealthy, expert in arts, religious, famous, givers, recollect the help rendered by others, accompanied by relatives, and pure in heart. They are interested in scents and flowers, hefty bodied, go after other women, handsome, large hearted, support many relatives and loved by all. They are good, speakers, enjoy life, wise, possess signs of fish on private parts. \section{Dhanishta or Avittam Star or Nakshatra} Those born in Dhanishta star are difficult to win, benefit of sadness, good eaters and very famous. They serve the elders and always protect others. They are religious, endowed with many good qualities, wealthy, kind hearted and having good establishment. They are sad, cruel, suffer from disease like tuberclosis, and are stubborn. They are after other women, daring and loved by women. They worship the god and elders, good givers and wealthy. They are interested in music, respected by relatives, decorated by jewellery and rule many people. They are bright, jobfull, cruel, hate Brahmins, hot constitutional, stingy and possess long feet. They speak much and show pride in undertaking many works. \section{Shatabhisha or Sathayam Star or Nakshatra} Those born in this Shatabhisha star are endowed with many children, angry, daring, extreme and unbearable; they are almost selfish and gamblers. They are of questionable character and experts in magic. They are stubborn, stable minded, specially knowledgeable, brave, cruel, eat little, stingy, wealthy, servers and go after other women. They live in foreign lands and most sexy, they are respected by the world, plenty minded, their enemies, long lived and are witty. 
They are good in various transactions, astrologers, suffer from various diseases and are endowed with some knowledge. \section{Purvabhadra or Poorattathy Star or Nakshatra} Those born in this Purvabhadra Star, and worship of gods and teachers, of righteous conduct and of broad minded. They are respectable and above controversies. They are having control over their senses, experts in all kinds of jobs, endowed with good chest, they are at home with the learned, determined, won over by women, stubborn, despire others, irreligious, brave, givers, enjoyers, soft spoken, handsome and living their relatives. They are speakers, endowed with children, help excess, possess long tongue, suffer from the disease of ears, obedient, helping nature. \section{Uttarabhadra or Uthirattathy Star or Nakshatra} Those born in this Uttorabhadra Star suffer from fluctuation of funds, wise and surrounded by many people. They are affected by the imbalance of bile and always ride. They are prominent among their clan, wear jewels, always do constructive jobs, obtain wealth, and cors and they are givers. They are endowed with children, religious, winners of their enemies, happy, determined, sexy, learned, sacrificers, respected in all circles, suffer ferom various diseases, brave, wise, idle, fickle, possess the signs of fish an til on their private parts and loved by their relatives. They are just and they earn by righteous means. \section{Revathi Star or Nakshatra} Those born in this Revathi Star practice intelligent things, candid, handsome, talk angrily and quick. They are independent in pursuing any jobs. They are endowed with good qualities, have control over their senses, always thinking about their home, wise, handsome, wealthy, enjoy life, learned, fickle, earn good money, sexy, brave, givers, lose women, pure in mind, endowed with well proportioned body, capable, hefty bodied, bright, help others and move foreign lands. \end{document}
{ "alphanum_fraction": 0.7963249716, "avg_line_length": 105.868852459, "ext": "tex", "hexsha": "c0991d1d1ac1af0128b240d8f2f58b0cde5c2ffc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "68a383f604b1071e7010d63bf4c6c0f0116c4afb", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Mageswaran1989/ost", "max_forks_repo_path": "nakshtra.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "68a383f604b1071e7010d63bf4c6c0f0116c4afb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Mageswaran1989/ost", "max_issues_repo_path": "nakshtra.tex", "max_line_length": 803, "max_stars_count": null, "max_stars_repo_head_hexsha": "68a383f604b1071e7010d63bf4c6c0f0116c4afb", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Mageswaran1989/ost", "max_stars_repo_path": "nakshtra.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4427, "size": 19374 }
\chapter{{\tt\bfseries print}: Output Analysis (deprecated)}\label{print.chapter} \noindent \code{print} is deprecated, but is still available until CmdStan v3.0. See the next chapter for usage (replace \code{stansummary} with \code{print}). \chapter{{\tt\bfseries stansummary}: Output Analysis}\label{stansummary.chapter} \noindent \CmdStan is distributed with a posterior analysis utility that is able to read in the output of one or more Markov chains and summarize the posterior fits. This operation mimics the \code{print(fit)} command in RStan, which itself was modeled on the print functions from R2WinBUGS and R2jags. \section{Building the stansummary Command} \CmdStan's \code{stansummary} command is built along with \code{stanc} into the \code{bin} directory. It can be compiled directly using the makefile as follows. % \begin{quote} \begin{Verbatim}[fontshape=sl] > cd <cmdstan-home> > make bin/stansummary \end{Verbatim} \end{quote} % \section{Running the stansummary Command} The \code{stansummary} command is executed on one or more \code{output.csv} files. These files may be provided as command-line arguments separated by spaces. That means that wildcards may be used, as they will be replaced by space-separated file names by the operating system's command-line interpreter. Suppose there are three samples files in a directory generated by fitting a negative binomial model to a small data set. % \begin{quote} \begin{Verbatim}[fontshape=sl] > ls output*.csv \end{Verbatim} % \begin{Verbatim} output1.csv output2.csv output3.csv \end{Verbatim} % \begin{Verbatim}[fontshape=sl] > bin/stansummary output*.csv \end{Verbatim} \end{quote} % The result of \code{bin/stansummary} is displayed in \reffigure{bin-stansummary-eg}.% % \footnote{RStan's and PyStan's output analysis stansummary may be different than that in the command-line version of Stan.} % \begin{figure} \begin{Verbatim}[fontsize=\footnotesize] Inference for Stan model: negative_binomial_model 1 chains: each with iter=(1000); warmup=(0); thin=(1); 1000 iterations saved. Warmup took (0.054) seconds, 0.054 seconds total Sampling took (0.059) seconds, 0.059 seconds total Mean MCSE StdDev 5% 50% 95% N_Eff N_Eff/s R_hat lp__ -14 7.0e-02 1.1e+00 -17 -14 -13 226 3022 1.0e+00 accept_stat__ 0.94 3.1e-03 9.7e-02 0.75 0.98 1.0 1000 13388 1.0e+00 stepsize__ 0.16 5.1e-16 3.6e-16 0.16 0.16 0.16 0.50 6.7 1.0e+00 treedepth__ 2.9 4.1e-02 1.2e+00 1.0 3.0 5.0 829 11104 1.0e+00 n_leapfrog__ 8.0 2.1e-01 6.3e+00 1.0 7.0 19 870 11648 1.0e+00 divergent__ 0.00 0.0e+00 0.0e+00 0.00 0.00 0.00 1000 13388 nan energy__ 15 8.7e-02 1.5e+00 14 15 18 282 3775 1.0e+00 alpha 16 1.9e+00 2.0e+01 1.9 9.7 50 114 1524 1.0e+00 beta 9.9 1.1e+00 1.2e+01 1.1 6.1 31 124 1664 1.0e+00 Samples were drawn using hmc with nuts. For each parameter, N_Eff is a crude measure of effective sample size, and R_hat is the potential scale reduction factor on split chains (at convergence, R_hat=1). \end{Verbatim} \vspace*{-6pt} \caption{\small\it Example output from \code{bin/stansummary}. The model parameters are \code{alpha} and \code{beta}. The values for each quantity are the posterior means, standard deviations, and quantiles, along with Monte-Carlo standard error, effective sample size estimates (per second), and convergence diagnostic statistic. These values are all estimated from samples. 
In addition to the parameters, \code{bin/stansummary} also outputs \code{lp\_\_}, the total log probability density (up to an additive constant) at each sample, as well as NUTS-specific values that can be helpful in diagnostics. The quantity \code{accept\_stat\_\_} is the average Metropolis acceptance probability over each simulated Hamiltonian trajectory and \code{stepsize\_\_} is the integrator step size used in each simulation. \code{treedepth\_\_} is the depth of the tree used by NUTS while \code{n\_leapfrog\_\_} is the number of leapfrog steps taken during the Hamiltonian simulation; \code{treedepth\_\_} should be approximately the binary log of \code{n\_leapfrog\_\_}. \code{divergent\_\_} indicates whether or not the simulated Hamiltonian trajectory became unstable and diverged. Finally, \code{energy\_\_} is the value of the Hamiltonian (up to an additive constant) at each sample, also known as the energy. }
\label{bin-stansummary-eg.figure}
\end{figure}
%\end{quote}
%
The posterior is skewed to the high side, resulting in posterior means ($\alpha=16$ and $\beta=9.9$) that are a long way away from the posterior medians ($\alpha=9.7$ and $\beta=6.1$); the posterior median is the value listed under \code{50\%}, which is the 50th percentile of the posterior values. For Windows, forward slashes in paths need to be converted to backslashes.
\subsection{Output of stansummary Command}
\subsubsection{\code{divergent}}
\CmdStan uses a symplectic integrator to approximate the exact solution of the Hamiltonian dynamics, and when the step size is too large relative to the curvature of the log posterior this approximation becomes unstable and the trajectories can diverge and threaten the validity of the sampler; \code{divergent} indicates whether or not a given trajectory diverged. If there are any divergences then the samples may be biased -- common solutions are decreasing the step size (often by increasing the target average acceptance probability) or reparameterizing the model.
\subsubsection{\code{energy}}
The energy, \code{energy}, is used to diagnose the accuracy of any Hamiltonian Monte Carlo sampler. If the standard deviation of \code{energy} is much larger than $\sqrt{D / 2}$, where $D$ is the number of \emph{unconstrained} parameters, then the sampler is unlikely to be able to explore the posterior adequately. This is usually due to heavy-tailed posteriors and can sometimes be remedied by reparameterizing the model.
\section{Command-line Options}
In addition to the filenames, \code{stansummary} accepts the following flags to customize the output.
\begin{description}
\longcmd{help}
{stansummary usage information}
{No help output by default}
%
\cmdarg{sig\_figs}{int}
{Sets the number of significant figures displayed in the output}
{Valid values: 0 \textless sig\_figs}
{default = \code{2}}
%
\cmdarg{autocorr}{int}
{Calculates and then displays the autocorrelation of the specified chain}
{Valid values: Any integer matching a chain index}
{No autocorrelation output by default}
%
\cmdarg{csv\_file}{string}
{Writes output as a csv file with comments written as \#}
{Valid values: Any valid filename}
{Appends output to the file if it exists}
\end{description}
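A single invocation can combine several of these options. The example below is purely illustrative (the option names are the ones documented above; run \code{bin/stansummary --help} to confirm the exact spelling in your version of \CmdStan): it requests three significant figures, prints the autocorrelations for chain 1, and writes the summary to a CSV file.
\begin{quote}
\begin{Verbatim}[fontshape=sl]
> bin/stansummary --sig_figs=3 --autocorr=1 --csv_file=summary.csv output*.csv
\end{Verbatim}
\end{quote}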
{ "alphanum_fraction": 0.7325732429, "avg_line_length": 41.9939393939, "ext": "tex", "hexsha": "9bebced38c7b30b25c0e05206f22ff49cda8acb1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7f96708bfbe6c3b8f716aacd2e7053099158a7aa", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "danluu/cmdstan", "max_forks_repo_path": "src/docs/cmdstan-guide/stansummary.tex", "max_issues_count": 8, "max_issues_repo_head_hexsha": "7f96708bfbe6c3b8f716aacd2e7053099158a7aa", "max_issues_repo_issues_event_max_datetime": "2019-01-17T18:51:39.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-17T18:51:16.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "danluu/cmdstan", "max_issues_repo_path": "src/docs/cmdstan-guide/stansummary.tex", "max_line_length": 86, "max_stars_count": 1, "max_stars_repo_head_hexsha": "7f96708bfbe6c3b8f716aacd2e7053099158a7aa", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "danluu/cmdstan", "max_stars_repo_path": "src/docs/cmdstan-guide/stansummary.tex", "max_stars_repo_stars_event_max_datetime": "2019-09-06T15:53:17.000Z", "max_stars_repo_stars_event_min_datetime": "2019-09-06T15:53:17.000Z", "num_tokens": 2129, "size": 6929 }
\newpage
\section{Notes on Section 4}
\subsubsection{Explaining Lemma 4.1}
A proof of Lemma 4.1 can also be found in the appendix of the original paper. However, the relation between the norms is missing in the original paper; that is why we state it here again. From 3.5 in the paper, we had:
\begin{align}
\frac{1}{(L+1)^2} \pqnorm{\theta}{fr}{2} = \E{v^\TT X X^\TT v} = \E{\norm{f_\theta}^2}.
\end{align}
Remember that the output of the network, $f_\theta(x)$, is a vector in $\setreal^k$ of probabilities for each of the $k$ classes. For the Frobenius norm of a matrix $A$ and a vector $x$, the following relations hold:
\begin{align}
\frobnorm{A} &\geq \spectralnorm{A} \\
\frobnorm{x} &= \lnorm{x} = \spectralnorm{x} \\
\spectralnorm{Ax} &\leq \spectralnorm{A} \cdot \lnorm{x}
\end{align}
It follows that
\begin{align}
\E{\spectralnorm{f_\theta}^2} &= \E{ \spectralnorm{\structuredNN}^{2} } \\
&\leq \E{ \spectralnorm{x}^2 \prod \spectralnorm{D^i(x)}^2 \prod \spectralnorm{W^i}^2 }.
\end{align}
Since the $W^i$ are independent of the data $x$, they do not have to be inside the expectation:
\begin{align}
\frac{1}{(L+1)^2} \frnorm{\theta}^2 &= \E{\spectralnorm{f_\theta}^2} \\
&\leq \E{ \spectralnorm{x}^2 \prod \spectralnorm{D^i(x)}^2 \prod \spectralnorm{W^i}^2 } \\
&= \E{ \lnorm{x}^2 \prod_{t=1}^{L+1} \spectralnorm{D^t(x)}^2 } \prod_{t=0}^{L} \spectralnorm{W^t}^2
\end{align}
Now taking the square root on both sides reveals Lemma 4.1 and concludes the explanation.
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{whiteboard_notes/18.jpg}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{whiteboard_notes/19.jpg}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{whiteboard_notes/20.jpg}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{whiteboard_notes/21.jpg}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{whiteboard_notes/22.jpg}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{whiteboard_notes/23.jpg}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{whiteboard_notes/24.jpg}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{whiteboard_notes/25.jpg}
\end{figure}
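As a quick numerical sanity check of the two matrix-norm facts used in the derivation above -- that the Frobenius norm dominates the spectral norm, and that $\|Ax\|_2 \leq \|A\|_2 \, \|x\|_2$ -- the following Python fragment (an addition to these notes, not part of the original paper) verifies both for a random matrix.
\begin{verbatim}
# Numerical sanity check of ||A||_F >= ||A||_2 and ||A x||_2 <= ||A||_2 ||x||_2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
x = rng.standard_normal(3)

fro = np.linalg.norm(A, "fro")   # Frobenius norm
spec = np.linalg.norm(A, 2)      # spectral norm (largest singular value)

assert fro >= spec
assert np.linalg.norm(A @ x) <= spec * np.linalg.norm(x) + 1e-12
print(f"||A||_F = {fro:.3f} >= ||A||_2 = {spec:.3f}")
\end{verbatim}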
{ "alphanum_fraction": 0.6990779547, "avg_line_length": 28.4047619048, "ext": "tex", "hexsha": "4325433540500eac0028f34a2df1a68ab68a0b76", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-07-21T19:33:48.000Z", "max_forks_repo_forks_event_min_datetime": "2018-07-21T19:33:48.000Z", "max_forks_repo_head_hexsha": "dbbdf93098af3a201bf67449a29c15cd633e2430", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ML-KA/PDG-Theory", "max_forks_repo_path": "2_fisher_rao_norm/section4.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "dbbdf93098af3a201bf67449a29c15cd633e2430", "max_issues_repo_issues_event_max_datetime": "2018-10-04T13:36:53.000Z", "max_issues_repo_issues_event_min_datetime": "2018-10-04T13:36:53.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ML-KA/PDG-Theory", "max_issues_repo_path": "2_fisher_rao_norm/section4.tex", "max_line_length": 131, "max_stars_count": 2, "max_stars_repo_head_hexsha": "dbbdf93098af3a201bf67449a29c15cd633e2430", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ML-KA/PDG-Theory", "max_stars_repo_path": "2_fisher_rao_norm/section4.tex", "max_stars_repo_stars_event_max_datetime": "2018-12-18T13:09:46.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-19T20:29:51.000Z", "num_tokens": 874, "size": 2386 }
% 9.5.07 % This is a sample documentation for Compass in the tex format. % We restrict the use of tex to the following subset of commands: % % \section, \subsection, \subsubsection, \paragraph % \begin{enumerate} (no-nesting), \begin{quote}, \item % {\tt ... }, {\bf ...}, {\it ... } % \htmladdnormallink{}{} % \begin{verbatim}...\end{verbatim} is reserved for code segments % ...'' % \section{Function Call Allocates Multiple Resources} \label{FunctionCallAllocatesMultipleResources::overview} ``CERT Secure Coding RES30-C'' states \begin{quote} Allocating more than one resource in a single statement could result in a memory leak, and this could lead to a denial-of-service attack. \end{quote} \subsection{Parameter Requirements} This checker takes no parameters and inputs source file. \subsection{Implementation} This pattern is checked using a simple AST traversal on each SgFunctionCallExp node. For each node get the expression list of its arguments and check if any such argument expressions are the {\tt new} keyword. If the number of {\tt new} expressions exceeds one then flag an error. \subsection{Non-Compliant Code Example} \begin{verbatim} class A { }; class B { }; int foo( A *a, B *b ) { return 0; } int main() { A *a = new A; B *b = new B; int i = foo( a, b ); //ok... return foo( new A, new B ); //bad } \end{verbatim} \subsection{Compliant Solution} See the call to {\tt foo} above. \subsection{Mitigation Strategies} \subsubsection{Static Analysis} Compliance with this rule can be checked using structural static analysis checkers using the following algorithm: \begin{enumerate} \item Traverse all SgFunctionCallExp nodes \item For each node get the list of argument expressions \item Count the number of {\tt new} keyword argument expressions \item If the number of {\tt new} keyword argument expressions exceeds one then flag an error. \item Report all violations. \end{enumerate} \subsection{References} \htmladdnormallink{RES30-C. Never allocate more than one resource in a single statement}{https://www.securecoding.cert.org/confluence/display/cplusplus/RES30-C.+Never+allocate+more+than+one+resource+in+a+single+statement} % ex. \htmladdnormallink{ISO/IEC 9899-1999:TC2}{https://www.securecoding.cert.org/confluence/display/seccode/AA.+C+References} Forward, Section 6.9.1, Function definitions''
{ "alphanum_fraction": 0.7507443641, "avg_line_length": 33.5857142857, "ext": "tex", "hexsha": "f10923e4d68b8b6cf31cd12914686d30da1208c2", "lang": "TeX", "max_forks_count": 146, "max_forks_repo_forks_event_max_datetime": "2022-03-04T07:32:53.000Z", "max_forks_repo_forks_event_min_datetime": "2015-04-27T02:48:34.000Z", "max_forks_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "sujankh/rose-matlab", "max_forks_repo_path": "projects/compass/extensions/checkers/functionCallAllocatesMultipleResources/functionCallAllocatesMultipleResourcesDocs.tex", "max_issues_count": 174, "max_issues_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e", "max_issues_repo_issues_event_max_datetime": "2022-03-31T16:51:05.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-28T18:41:32.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "sujankh/rose-matlab", "max_issues_repo_path": "projects/compass/extensions/checkers/functionCallAllocatesMultipleResources/functionCallAllocatesMultipleResourcesDocs.tex", "max_line_length": 280, "max_stars_count": 488, "max_stars_repo_head_hexsha": "7597292cf14da292bdb9a4ef573001b6c5b9b6c0", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "maurizioabba/rose", "max_stars_repo_path": "projects/compass/extensions/checkers/functionCallAllocatesMultipleResources/functionCallAllocatesMultipleResourcesDocs.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T07:15:46.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-09T08:54:48.000Z", "num_tokens": 603, "size": 2351 }
\documentclass{tufte-handout} \title{Deep learning - Practical Methodology - Notes} \author[Cosmin G. Alexandru]{Cosmin G. Alexandru} %\date{28 March 2010} % without \date command, current date is supplied %\geometry{showframe} % display margins for debugging page layout \usepackage{graphicx} % allow embedded images \setkeys{Gin}{width=\linewidth,totalheight=\textheight,keepaspectratio} \graphicspath{{graphics/}} % set of paths to search for images \usepackage{amsmath} % extended mathematics \usepackage{booktabs} % book-quality tables \usepackage{units} % non-stacked fractions and better unit spacing \usepackage{multicol} % multiple column layout facilities \usepackage{lipsum} % filler text \usepackage{fancyvrb} % extended verbatim environments \fvset{fontsize=\normalsize}% default font size for fancy-verbatim environments % Standardize command font styles and environments \newcommand{\doccmd}[1]{\texttt{\textbackslash#1}}% command name -- adds backslash automatically \newcommand{\docopt}[1]{\ensuremath{\langle}\textrm{\textit{#1}}\ensuremath{\rangle}}% optional command argument \newcommand{\docarg}[1]{\textrm{\textit{#1}}}% (required) command argument \newcommand{\docenv}[1]{\textsf{#1}}% environment name \newcommand{\docpkg}[1]{\texttt{#1}}% package name \newcommand{\doccls}[1]{\texttt{#1}}% document class name \newcommand{\docclsopt}[1]{\texttt{#1}}% document class option name \newenvironment{docspec}{\begin{quote}\noindent}{\end{quote}}% command specification environment \begin{document} \maketitle% this prints the handout title, author, and date %\printclassoptions This are notes on the 11th Chapter, Pracitcal Methodology, from the Deep Learning book\cite{goodfellow-et-al-2016}. The book\cite{goodfellow-et-al-2016} recommends the following procces for designing a machine learning system to solve a particular problem: \begin{enumerate} \item Choose your performance(error) metric. \item Establish a working end-to-end pipeline as soon as possible. \item Instrument the system to determine bottelnecks in performance and debugging. \item Make incremental changes. \end{enumerate} \section{Performance Metric}\label{sec:perf-metrics} First thing to do when developing a machine learning system is to determine which error(performance) metric to use and improve. For practical cases the error metric is lower bounded by the Bayes error. This is because the input features might have incomplete information, the processes modeled is stochastic, finite amount of data, etc. Multiple performance metrics can be chosen. When multiple performance metrics are used, one can usually trade between them to achieve the desired global performance. The performance metrics should be specific to the problem that is solved. Here are a few performance metrics to consider when designing a machine learning system: \begin{itemize} \item accuracy \item precision and recall \item coverage \item precision-recall curve \item $F-score = {2pr}/({p+r})$ \end{itemize} \section{Baseline Models}\label{sec:baseline-model} The second step is to choose a model for your problem and establish a working end-to-end system. If the problem you are working on has been solved or partialy solved before, it is a good idea to start from there. Here are a few design choices that you can make but keep in mind that the field of machine learning is advancing quickly and better solution might be available (the book was published in 2016): \begin{itemize} \item Simple problem - simple model, e.g. logistic regression. 
\item For supervised learning on fixed-size input vectors, use a feed-forward network with fully connected layers.
\item For input with topological structure, use a convolutional neural network.
\item If the input or output is a sequence, use a gated recurrent network (LSTM or GRU).
\end{itemize}

The most widely used training (optimization) methods are Stochastic Gradient Descent with momentum (SGD) and Adam\cite{DBLP:journals/corr/KingmaB14}. Popular learning rate decay schemes for SGD:
\begin{itemize}
\item Linear decay until a fixed minimum.
\item Exponential decay.
\item Decreasing the learning rate by a factor of 2-10 each time the validation error plateaus.
\end{itemize}

It is good practice to use some form of regularization in the optimization process, especially for small numbers of examples. Some of the most successful and widely used regularization methods are:
\begin{itemize}
\item Batch normalization.
\item Early stopping (should be used almost universally\cite{goodfellow-et-al-2016}).
\item Dropout, which is an excellent regularizer.
\end{itemize}

\section{More Data}\label{sec:more-data}

Often it is much better to gather more data than to improve the learning algorithm. The first thing to check when deciding whether or not to gather more data is the performance on the training set. If the performance on the training set is bad, it is very probable that the model is not using the training data that is already available. In this case, instead of gathering more data it is recommended to:
\begin{itemize}
\item Increase the model size (e.g., the number of neurons per layer or the number of layers).
\item Tune the learning rate.
\item Check the data quality.
\end{itemize}

When the performance on the training set is acceptable, check the performance on a test set. If the performance on the test set is good, there is nothing left to do; otherwise it is almost always a good idea to gather more data, keeping in mind the cost of doing so. To decide how much more data to gather, check how the performance of the system scales with the size of the training set, for instance by plotting curves of the performance as a function of the training data size. Logarithmic steps in data size are recommended.

Sometimes the cost of gathering more data is too high. In these cases, the alternatives to gathering data are to reduce the size of the model or to improve regularization (e.g., add dropout, adjust hyperparameters such as weight decay coefficients).

\section{Hyperparameters}\label{sec:hyperparameters}

Almost all models, if not all, come with hyperparameters that have to be tuned in order to achieve the best performance for a specific problem. The hyperparameters can affect everything from the time and memory cost of running the algorithm to the quality of the model. Hyperparameter selection can be done manually or automatically.

The primary goal of hyperparameter search is to adjust the effective capacity of the model. The effective capacity of the model depends on three factors:
\begin{itemize}
\item The representational capacity of the model (more hidden layers / more units per hidden layer = greater representational capacity).
\item The ability of the optimization algorithm to successfully minimize the cost function.
\item The degree to which the cost function and training procedure regularize the model.
\end{itemize}

The generalization error, plotted as a function of one of the hyperparameters, follows a U-shaped curve.
At one end of the U-shaped curve lies the underfitting regime, where the generalization error is high because the training error is high and the hyperparameter corresponds to low capacity. At the other end the hyperparameter corresponds to high capacity, and the generalization error is high because the gap between the training error and the test error is large. Somewhere in the middle lies the optimal model capacity, which achieves the lowest possible generalization error by adding a medium generalization gap to a medium amount of training error.

\subsection{Manual hyperparameter tuning}

When choosing to tune the hyperparameters manually, one must have a good understanding of the relation between the hyperparameters, the training error, the generalization error and the computational resources. One of the most important hyperparameters to tune, especially when manual tuning is chosen, is the learning rate.

\subsection{Automatic hyperparameter tuning}

When tuning hyperparameters automatically, one has to choose how to explore the U-shaped curve by selecting appropriate hyperparameter ranges. The direct approach is to choose the hyperparameters on a grid. This method is usually slow and the results are limited. A better approach is to choose hyperparameter values randomly, which achieves better coverage of the search space.

\subsection{Model-based hyperparameter optimization}

The search for good hyperparameters can be cast as an optimization problem. Current approaches include Spearmint\cite{snoek-et-al-2016}, TPE\cite{bergstar-et-al-2011} and SMAC\cite{hutter-et-al-2011}.

\section{Debugging}\label{sec:debugging}

Debugging machine learning systems can be difficult because a coding error can be hidden by a learning algorithm that manages to optimize despite the bug. Methods to debug such problems:
\begin{itemize}
\item Visualize the model in action.
\item Visualize the worst mistakes.
\item Reason about the software using the train and test errors.
\item Fit a small dataset.
\item Compare back-propagation derivatives to numerical derivatives.
\item Monitor histograms of activations and gradients.
\end{itemize}

\bibliography{references}
\bibliographystyle{plainnat}

\end{document}
{ "alphanum_fraction": 0.785388612, "avg_line_length": 45.1244019139, "ext": "tex", "hexsha": "06c3de7ecb1033987b7e8c0545ced68e31183943", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "be333f3413be900aa9be20606e2367f97be9ca7d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "bucharest-cv/meetings", "max_forks_repo_path": "ch11_practical_methodology_deep_learning_book/source/notes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "be333f3413be900aa9be20606e2367f97be9ca7d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "bucharest-cv/meetings", "max_issues_repo_path": "ch11_practical_methodology_deep_learning_book/source/notes.tex", "max_line_length": 112, "max_stars_count": null, "max_stars_repo_head_hexsha": "be333f3413be900aa9be20606e2367f97be9ca7d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "bucharest-cv/meetings", "max_stars_repo_path": "ch11_practical_methodology_deep_learning_book/source/notes.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2191, "size": 9431 }
\section{Scene and Camera complexity}

\pgfplotsset{
	every axis/.append style={colorbar = false},
}

This section illustrates why we cannot really compare different scenes with each other with respect to BVH depth. Figures \ref{badExample1} and \ref{badExample2} show the relative memory results together with the average BVH depth of the scene. For the nodes the relation seems very random, and for the leaves the differences are so small that the value might be constant.

\begin{figure}[!htb]
\begin{minipage}[t]{0.5\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = average Bvh depth,
	ylabel = relative memory NODE,
	cycle list name=exotic,
	legend style={at={(0.05,0.95)}, anchor = north west},
	%xtick = {0, 2, ..., 20},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, only marks]table[x = averageBvhDepth, y = memoryRelative, col sep=comma]{Data/LaptopResultsNoSub/NodeMemorySsePerf_N4L4.txt};
\end{axis}
\end{tikzpicture}
\end{minipage}
\label{badExample1}
\begin{minipage}[t]{0.5\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = average Bvh depth,
	ylabel = relative memory LEAF,
	cycle list name=exotic,
	legend style={at={(0.05,0.95)}, anchor = north west},
	%xtick = {0, 2, ..., 20},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, only marks]table[x = averageBvhDepth, y = memoryRelative, col sep=comma]{Data/LaptopResultsNoSub/LeafMemorySsePerf_N4L4.txt};
\end{axis}
\end{tikzpicture}
\end{minipage}
\label{badExample2}
\end{figure}

\newpage
\section{Subdivision test results}

The idea is to use subdivision to measure the effect of different triangle counts and BVH depths without changing the scene. The scene is Sponza (262,267 triangles) with different subdivisions, ranging from the original mesh to 41 times the triangle count (so from 262,267 to 10,752,947 triangles in steps of 262,267).
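All timing plots in this section show normalized times. Written out (the symbols below are my own shorthand, not names taken from the measurement code), the plotted quantity is
\[
t^{\mathrm{norm}}_{c,s} \;=\; \frac{t_{c,s}}{t_{\mathrm{N4L4},\,\mathrm{orig}}},
\]
i.e.\ the total ray time of a node/leaf configuration $c$ at subdivision level $s$, divided by the time of the N4L4 configuration on the unsubdivided scene. Values below $1$ therefore mean the run is faster than that reference.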
The times below are normalized by 1.46683 seconds (the time for N4L4, no subdivision).

\begin{figure}[!htb]
\begin{minipage}[t]{0.9\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = subdivision,
	ylabel = total time normalized,
	cycle list name=linestyles*,
	legend style={at={(0.95,0.95)}, anchor = north east},
	xtick = {1, 4, ..., 41},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, discard if not={branch}{4}, discard if not={branchMemory}{4}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.46683, col sep=comma]{Data/LaptopResults/totalNodeSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, discard if not={branch}{8}, discard if not={branchMemory}{8}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.46683, col sep=comma]{Data/LaptopResults/totalNodeSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, discard if not={branch}{12}, discard if not={branchMemory}{12}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.46683, col sep=comma]{Data/LaptopResults/totalNodeSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, discard if not={branch}{16}, discard if not={branchMemory}{16}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.46683, col sep=comma]{Data/LaptopResults/totalNodeSsePerfTable.txt};
\legend{N4L4,N8L4,N12L4,N16L4}
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{Sponza. Time values are divided by the N4L4 time (no subdivision). A subdivision of 16 is faster than the original scene.}
\end{figure}

It is a bit unexpected that a scene with 16 times the number of triangles performs better than the original scene. The reason subdivisions 4 and 16 perform better than the rest might be the way the subdivision is carried out (the exact explanation is in the log). I also compared this to the node intersection counts, and both results show a similar trend (no graph for now, since I still have to rework the old data manager to support subdivisions).
\begin{figure}[!htb]
\begin{minipage}[t]{0.9\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = subdivision,
	ylabel = total time normalized,
	cycle list name=linestyles*,
	legend style={at={(0.95,0.95)}, anchor = north east},
	xtick = {1, 4, ..., 41},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, discard if not={leaf}{4}, discard if not={leafMemory}{4}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.47138, col sep=comma]{Data/LaptopResults/totalLeafSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, discard if not={leaf}{8}, discard if not={leafMemory}{8}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.47138, col sep=comma]{Data/LaptopResults/totalLeafSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, discard if not={leaf}{12}, discard if not={leafMemory}{12}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.47138, col sep=comma]{Data/LaptopResults/totalLeafSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, discard if not={leaf}{16}, discard if not={leafMemory}{16}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.47138, col sep=comma]{Data/LaptopResults/totalLeafSsePerfTable.txt};
\legend{N4L4,N4L8,N4L12,N4L16}
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{Sponza. Time values are divided by the N4L4 time (no subdivision). Different leaf sizes have less effect on the total time than the node sizes.}
\end{figure}

\vspace{50mm}
%5mm vertical space

For Amazon Lumberyard Interior (Figure \ref{AmazonLumberyardInterior}, 1,020,903 triangles) we can observe very similar trends to Sponza with regard to increasing subdivision. A subdivision of 4 is faster than no subdivision, and the difference is larger than for Sponza. The time we normalize by is 1.81506 seconds.
\begin{figure}[!htb]
\begin{minipage}[t]{0.7\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = subdivision,
	ylabel = total time normalized,
	cycle list name=linestyles*,
	legend style={at={(1.35,0.95)}, anchor = north east},
	xtick = {1, 3, ..., 21},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, discard if not={branch}{4}, discard if not={branchMemory}{4}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.79101, col sep=comma]{Data/LaptopResults/totalNodeSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, discard if not={branch}{8}, discard if not={branchMemory}{8}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.79101, col sep=comma]{Data/LaptopResults/totalNodeSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, discard if not={branch}{12}, discard if not={branchMemory}{12}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.79101, col sep=comma]{Data/LaptopResults/totalNodeSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, discard if not={branch}{16}, discard if not={branchMemory}{16}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.79101, col sep=comma]{Data/LaptopResults/totalNodeSsePerfTable.txt};
\legend{N4L4,N8L4,N12L4,N16L4}
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}[t]{0.7\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = subdivision,
	ylabel = total time normalized,
	cycle list name=linestyles*,
	legend style={at={(1.35,0.95)}, anchor = north east},
	xtick = {1, 3, ..., 21},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, discard if not={leaf}{4}, discard if not={leafMemory}{4}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.78588, col sep=comma]{Data/LaptopResults/totalLeafSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, discard if not={leaf}{8}, discard if not={leafMemory}{8}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.78588, col sep=comma]{Data/LaptopResults/totalLeafSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, discard if not={leaf}{12}, discard if not={leafMemory}{12}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.78588, col sep=comma]{Data/LaptopResults/totalLeafSsePerfTable.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, discard if not={leaf}{16}, discard if not={leafMemory}{16}, x expr = \thisrow{subdivision} +1, y expr = \thisrow{rayTimeSum} / 1.78588, col sep=comma]{Data/LaptopResults/totalLeafSsePerfTable.txt};
\legend{N4L4,N4L8,N4L12,N4L16}
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{Amazon Lumberyard Interior. For both plots: time values are divided by the N4L4 time (no subdivision).}
\label{AmazonLumberyardInterior}
\end{figure}

\newpage
\section{Relative memory for different subdivisions}

Now how does the relative memory behave with different subdivisions?
(Reminder: relative memory = time to load memory / compute time.)

\begin{figure}[!htb]
\begin{minipage}[t]{0.8\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = subdivision,
	ylabel = relative memory NODE,
	cycle list name=linestyles*,
	legend style={at={(0.95,0.95)}, anchor = north east},
	xtick = {0, 4, ..., 41},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/NodeMemorySsePerf_N4L4.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/NodeMemorySsePerf_N8L4.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/NodeMemorySsePerf_N12L4.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/NodeMemorySsePerf_N16L4.txt};
\legend{N4L4,N8L4,N12L4,N16L4}
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}[t]{0.8\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = subdivision,
	ylabel = relative memory LEAF,
	cycle list name=linestyles*,
	legend style={at={(1.25,0.95)}, anchor = north east},
	xtick = {0, 4, ..., 41},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/LeafMemorySsePerf_N4L4.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/LeafMemorySsePerf_N4L8.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/LeafMemorySsePerf_N4L12.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/LeafMemorySsePerf_N4L16.txt};
\legend{N4L4,N4L8,N4L12,N4L16}
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{These two graphs show how different subdivisions affect the relative memory time for leaves and nodes.
It is interesting that for the nodes the N12L4 line is below the N8L4 line.}
\label{LeafNodeMemoryRelativeSponza}
\end{figure}

\newpage

\begin{figure}[!htb]
\begin{minipage}[t]{0.8\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = subdivision,
	ylabel = relative memory NODE,
	cycle list name=linestyles*,
	legend style={at={(1.25,0.95)}, anchor = north east},
	xtick = {0, 2, ..., 20},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/NodeMemorySsePerf_N4L4.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/NodeMemorySsePerf_N8L4.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/NodeMemorySsePerf_N12L4.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/NodeMemorySsePerf_N16L4.txt};
\legend{N4L4,N8L4,N12L4,N16L4}
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}[t]{0.8\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = subdivision,
	ylabel = relative memory LEAF,
	cycle list name=linestyles*,
	legend style={at={(1.25,0.95)}, anchor = north east},
	xtick = {0, 2, ..., 20},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/LeafMemorySsePerf_N4L4.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/LeafMemorySsePerf_N4L8.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/LeafMemorySsePerf_N4L12.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, x expr = \thisrow{subdivision} +1, y = memoryRelative, col sep=comma]{Data/LaptopResults/LeafMemorySsePerf_N4L16.txt};
\legend{N4L4,N4L8,N4L12,N4L16}
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{The same graphs as Figure \ref{LeafNodeMemoryRelativeSponza}, but for Amazon Lumberyard Interior.
For the node part the N12L4 line is also below the N8L4 line.}
\label{LeafNodeMemoryRelativeAmazon}
\end{figure}

\newpage

\begin{figure}[!htb]
\begin{minipage}[t]{0.8\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = average Bvh depth,
	ylabel = relative memory NODE,
	cycle list name=linestyles*,
	legend style={at={(0.05,0.95)}, anchor = north west},
	%xtick = {0, 2, ..., 20},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, x = averageBvhDepth, y = memoryRelative, col sep=comma]{Data/LaptopResults/NodeMemorySsePerf_N4L4.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, x = averageBvhDepth, y = memoryRelative, col sep=comma]{Data/LaptopResults/NodeMemorySsePerf_N4L4.txt};
\legend{sponza N4L4, amazon N4L4}
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}[t]{0.8\textwidth}
\begin{tikzpicture}
\begin{axis}
[
	%view={90}{0} for x, view={0}{0} for y restriction
	view={0}{0},
	xlabel = average Bvh depth,
	ylabel = relative memory LEAF,
	cycle list name=linestyles*,
	legend style={at={(0.05,0.95)}, anchor = north west},
	%xtick = {0, 2, ..., 20},
	%xticklabels={$8$,$16$,$24$,$32$,$40$,$48$,$56$,$64$,$72$,$80$,$88$,$96$,$104$,$112$,$120$,$128$},
]
\addplot+[thick, mark=none]table[discard if not={nameId}{4}, x = averageBvhDepth, y = memoryRelative, col sep=comma]{Data/LaptopResults/LeafMemorySsePerf_N4L4.txt};
\addplot+[thick, mark=none]table[discard if not={nameId}{9}, x = averageBvhDepth, y = memoryRelative, col sep=comma]{Data/LaptopResults/LeafMemorySsePerf_N4L4.txt};
\legend{sponza N4L4, amazon N4L4}
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{Now for the interesting configuration (N4L4): the relative memory time of both scenes. The x-axis is the average BVH depth.}
\end{figure}
{ "alphanum_fraction": 0.6931752458, "avg_line_length": 53.6956521739, "ext": "tex", "hexsha": "4d6034e44c741ee3b373ba34ffdf005daa342e75", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d911f06e1af88859a9c94926669f334a64c516eb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Hengoo/BVHRaytracer", "max_forks_repo_path": "LatexPlots/FirstSubdivisionAnalysis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d911f06e1af88859a9c94926669f334a64c516eb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Hengoo/BVHRaytracer", "max_issues_repo_path": "LatexPlots/FirstSubdivisionAnalysis.tex", "max_line_length": 453, "max_stars_count": null, "max_stars_repo_head_hexsha": "d911f06e1af88859a9c94926669f334a64c516eb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Hengoo/BVHRaytracer", "max_stars_repo_path": "LatexPlots/FirstSubdivisionAnalysis.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6367, "size": 17290 }
\subsection{Benchmarking instances}

This section explains the two sets of benchmarks used. The medium size instance set consists of 38 small or medium size instances, and the large size instance set consists of 200 problem instances. The medium size set is used to compare the ILP model with the metaheuristic models. The large size set is used to test parameter values in order to find the best performing setup for each BRKGA and GRASP model, and finally to compare them when solving one large problem instance.

\subsubsection{The instance generation process}

The algorithm used to generate problem instances is based on generating random but valid schedules of nurses and then adding up the number of nurses working at each hour to compute the demand. This approach ensures that the problem instance always has a solution. Besides that, the algorithm places the schedules close to a few selected hours of the planning horizon (adding some variability). The input values of the algorithm are all the variables of the problem instance, the selected hours of the schedule to place nurse schedules around, and a proportion of extra nurses to add to the problem without using them to create the demand.

\subsubsection{The medium and large instance sets}

To determine the size of a problem, we use as a reference the time it takes for the ILP model to solve the problem instance with the Cplex solver. We generate and solve a series of instances with the ILP until we have around 20 instances that take 60 minutes or less to solve by ILP. Then, as we cannot perform the same procedure to generate the large instance set, we execute a series of experiments to determine the influence of the problem variables on the number of steps the ILP model needs to decrease the gap between the best integer and the best bound. The results are shown in figure~\ref{fig_ilp_size}, figure~\ref{fig_ilp_size2} and figure~\ref{fig_ilp_size3} on page~\pageref{fig_ilp_size}. Table~\ref{tab:ilp_size} on page~\pageref{tab:ilp_size} summarizes the findings and shows how we applied them to create the large benchmark instance set.

\begin{figure}[h!]
\begin{subfigure}[b]{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{./img/instances_nurses_ilp_evol.png}
\caption{Evolution of the gap of the ILP model with different numbers of nurses}\label{fig1a}
\end{subfigure}%
\begin{subfigure}[b]{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{./img/instances_hours_ilp_evol.png}
\caption{Evolution of the gap of the ILP model with different numbers of hours}\label{fig1b}
\end{subfigure}\vfill
\caption{Evolution of the gap of the ILP model with different numbers of nurses (\subref{fig1a}) and hours (\subref{fig1b}).}
\label{fig_ilp_size}
\end{figure}

\begin{figure}[h!]
\begin{subfigure}[b]{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{./img/instances_maxpresence_ilp_evol.png}
\caption{Evolution of the gap of the ILP model with different values of the maxPresence parameter}\label{fig1c}
\end{subfigure}
\begin{subfigure}[b]{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{./img/instances_maxconsec_ilp_evol.png}
\caption{Evolution of the gap of the ILP model with different values of the maxConsec parameter}\label{fig1d}
\end{subfigure}
\caption{Evolution of the gap of the ILP model with different values of maxPresence (\subref{fig1c}) and maxConsec (\subref{fig1d}).}
\label{fig_ilp_size2}
\end{figure}

\begin{figure}[h!]
\begin{subfigure}[b]{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{./img/instances_maxhours_ilp_evol.png}
\caption{Evolution of the gap of the ILP model with different values of the maxHours parameter}\label{fig1e}
\end{subfigure}
\begin{subfigure}[b]{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{./img/instances_minhours_ilp_evol.png}
\caption{Evolution of the gap of the ILP model with different values of the minHours parameter}\label{fig1f}
\end{subfigure}
\caption{Evolution of the gap of the ILP model with different values of maxHours (\subref{fig1e}) and minHours (\subref{fig1f}).}
\label{fig_ilp_size3}
\end{figure}

\begin{table}[ht]
\centering
\begin{tabularx}{0.75\textwidth}{|l|c|c|c|}
\hline
\textbf{Problem} & \textbf{Effect on problem size} & \textbf{Medium size set} & \textbf{Large size set} \\
\textbf{Variable} & \textbf{when increasing} & 38 instances & 199 instances \\
\hline
\textbf{$nNurses$} & increases & 64 & 64 to 4096 \\
\textbf{$hour$} & increases & 24 & 24, 48, 72 \\
\textbf{$maxPresence$} & decreases & 16 & 8 to 27 \\
\textbf{$maxConsec$} & decreases & 5 & 4 to 13 \\
\textbf{$maxHours$} & decreases & 4 to 10 & 2 to 12 \\
\textbf{$minHours$} & increases & 1 to 5 & 1 to 9 \\
\hline
\end{tabularx}
\caption{Benchmark data sets and the variables' influence on problem size}
\label{tab:ilp_size}
\end{table}
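To make the generation idea from the beginning of this section concrete, the following minimal sketch (not the project's actual generator code; all names and the schedule representation are illustrative) shows how the demand is derived from a set of randomly generated, valid schedules: the demand at each hour is simply the number of nurses working at that hour, so the generated schedules themselves guarantee that the instance is solvable.

\begin{verbatim}
#include <cstddef>
#include <vector>

// works[h] == true iff the nurse works at hour h of the planning horizon.
using Schedule = std::vector<bool>;

// demand[h] is the number of nurses working at hour h. Because the
// schedules were generated to respect all constraints, they cover this
// demand by construction, so the instance always has a solution.
std::vector<int> demandFromSchedules(const std::vector<Schedule>& schedules,
                                     std::size_t nHours) {
    std::vector<int> demand(nHours, 0);
    for (const Schedule& s : schedules)
        for (std::size_t h = 0; h < nHours; ++h)
            if (s[h]) ++demand[h];
    return demand;
}
\end{verbatim}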
{ "alphanum_fraction": 0.7723895997, "avg_line_length": 53.8444444444, "ext": "tex", "hexsha": "ab5c6146de13da859a129f7be6cd83a52e9978e5", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-02-10T02:38:13.000Z", "max_forks_repo_forks_event_min_datetime": "2021-02-10T02:38:13.000Z", "max_forks_repo_head_hexsha": "0b4796d082908f6644bd28ad4bfad9552879ea75", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "presmerats/Nurse-Scheduling-LP-and-Heuristics", "max_forks_repo_path": "Documentation/tex/Instances.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0b4796d082908f6644bd28ad4bfad9552879ea75", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "presmerats/Nurse-Scheduling-LP-and-Heuristics", "max_issues_repo_path": "Documentation/tex/Instances.tex", "max_line_length": 633, "max_stars_count": 1, "max_stars_repo_head_hexsha": "0b4796d082908f6644bd28ad4bfad9552879ea75", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "presmerats/Nurse-Scheduling-LP-and-Heuristics", "max_stars_repo_path": "Documentation/tex/Instances.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-10T02:38:12.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-10T02:38:12.000Z", "num_tokens": 1344, "size": 4846 }
\section{Spawning Actors} Actors are created using the function \lstinline^spawn^. %The arguments passed to \lstinline^spawn^ depend on the actor's implementation. %\subsection{Using \lstinline^spawn^ to Create Actors from Functors or Classes} The easiest way to implement actors is to use functors, e.g., a free function or lambda expression. The arguments to the functor are passed to \lstinline^spawn^ as additional arguments. The function \lstinline^spawn^ also takes optional flags as template parameter. %The optional template parameter of \lstinline^spawn^ decides whether an actor should run in its own thread or takes place in the cooperative scheduling. The flag \lstinline^detached^ causes \lstinline^spawn^ to create a thread-mapped actor (opt-out of the cooperative scheduling), the flag \lstinline^linked^ links the newly created actor to its parent -- not available on top-level spawn -- and the flag \lstinline^monitored^ automatically adds a monitor to the new actor. Actors that make use of the blocking API (see Section \ref{Sec::BlockingAPI}) must be spawned using the flag \lstinline^blocking_api^. Flags are concatenated using the operator \lstinline^+^, as shown in the examples below. \begin{lstlisting} #include "cppa/cppa.hpp" using namespace cppa; void my_actor1(); void my_actor2(event_based_actor*, int arg1, const std::string& arg2); void ugly_duckling(); class my_actor3 : public event_based_actor { /* ... */ }; class my_actor4 : public sb_actor<my_actor4> { public: my_actor4(int some_value) { /* ... */ } /* ... */ }; // whenever we want to link to or monitor a spawned actor, // we have to spawn it using the self pointer, otherwise // we can use the free function 'spawn' (top-level spawn) void server(event_based_actor* self) { // spawn function-based actors auto a0 = spawn(my_actor1); auto a1 = self->spawn<linked>(my_actor2, 42, "hello actor"); auto a2 = self->spawn<monitored>([] { /* ... */ }); auto a3 = spawn([](int) { /* ... */ }, 42); // spawn thread-mapped actors auto a4 = spawn<detached>(my_actor1); auto a5 = self->spawn<detached + linked>([] { /* ... */ }); auto a6 = spawn<detached>(my_actor2, 0, "zero"); // spawn class-based actors auto a7 = spawn<my_actor3>(); auto a8 = self->spawn<my_actor4, monitored>(42); // spawn thread-mapped actors using a class auto a9 = spawn<my_actor4, detached>(42); // spawn actors that need access to the blocking API auto aa = self->spawn<blocking_api>(ugly_duckling); // compiler error: my_actor2 captures the implicit // self pointer as event_based_actor* and thus cannot // be spawned using blocking_api flag /*-auto ab = self->spawn<blocking_api>(my_actor2);-*/ } \end{lstlisting} %TODO: check how std::bind(..., self) behaves atm %\textbf{Note}: \lstinline^spawn(fun, arg0, ...)^ is \textbf{not} equal to \lstinline^spawn(std::bind(fun, arg0, ...))^! %For example, a call to \lstinline^spawn(fun, self, ...)^ will pass a pointer to the calling actor to the newly created actor, as expected, whereas \lstinline^spawn(std::bind(fun, self, ...))^ wraps the type of \lstinline^self^ into the function wrapper and evaluates \lstinline^self^ on function invocation. %Thus, the actor will end up having a pointer \emph{to itself} rather than a pointer to its parent.
{ "alphanum_fraction": 0.727520436, "avg_line_length": 55.05, "ext": "tex", "hexsha": "57211fc1e58820064ff98980c5cc9de00124b967", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "58f35499bac8871b8f5b0b024246a467b63c6fb0", "max_forks_repo_licenses": [ "BSL-1.0" ], "max_forks_repo_name": "syoummer/boost.actor", "max_forks_repo_path": "manual/SpawningActors.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "58f35499bac8871b8f5b0b024246a467b63c6fb0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSL-1.0" ], "max_issues_repo_name": "syoummer/boost.actor", "max_issues_repo_path": "manual/SpawningActors.tex", "max_line_length": 320, "max_stars_count": 2, "max_stars_repo_head_hexsha": "58f35499bac8871b8f5b0b024246a467b63c6fb0", "max_stars_repo_licenses": [ "BSL-1.0" ], "max_stars_repo_name": "syoummer/boost.actor", "max_stars_repo_path": "manual/SpawningActors.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-20T08:05:41.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-20T21:11:16.000Z", "num_tokens": 867, "size": 3303 }
\section{Hardware Interface (hardware.h)}\label{sec:hardwareinterface} In this section we describe the design of the hardware (\gls{lowerlayer}) interface, used by an \gls{upperlayer} protocol to simulate radio communication between nodes in a \gls{manet}. The section will serve as the documentation, as well as a programmers guide, for the \mintinline{cpp}{hardware.h} interface. %The hardware interface is implemented in modern C++, using templates, which will allow a protocol implementation to transmit instances of arbitrary structures or classes between nodes, provided that the structure or class is a trivially copyable type~\cite{website:cpptriviallycopyable}. \begin{description}[style=nextline,leftmargin=0cm] \item[\mintinline{cpp}{void hardware::init(const mpilib::geo::Location &loc)}] Initialises the hardware functionality by initialising the \gls{mpi} functionality, as well as registering the node with the \gls{mpi} controller. The location is stored on the controller, and can later be updated by using the \mintinline{cpp}{set_location()} function. The location of a node is used to compute neighbourhood information, as well as the \gls{pathloss} experienced when transmitting data between nodes. This function has to be called exactly once, before calling any other hardware functions. \item[\mintinline{cpp}{void hardware::deinit()}] De-initialises the hardware functionality by un-registering the node from the \gls{mpi} controller, as well as de-initialising the \gls{mpi} functionality. This function has to be called exactly once, before terminating the protocol. \item[\mintinline{cpp}{std::chrono::microseconds}\\\mintinline{cpp}{hardware::broadcast(std::vector<unsigned char> &packet)}] Transmit a vector of bytes. \autoref{algo:hwfuncstransmit} contains a pseudo code description of this function. Returns the duration of the transmission, in microseconds. \item[\mintinline{cpp}{std::vector<std::vector<unsigned char>>}\\\mintinline{cpp}{hardware::listen(std::chrono::microseconds duration)}] Listen for data packets for a given duration of microseconds. Returns a vector of vectors of bytes. \autoref{algo:hwfuncslisten} contains a pseudo code description of this function. \item[\mintinline{cpp}{void hardware::sleep(std::chrono::microseconds duration)}] Sleep for a given duration of microseconds. \autoref{algo:hwfuncssleep} contains a pseudo code description of this function. \item[\mintinline{cpp}{void report_localtime()}] Report the current local time to the controller. \item[\mintinline{cpp}{unsigned long hardware::get_id()}] Gets the unique identifier of the node, assigned by the \gls{mpi} library. This function will return 0, if the \mintinline{cpp}{init_hardware()} function has not yet been called. \item[\mintinline{cpp}{unsigned long hardware::get_world_size()}] Gets the total amount of nodes registered to the \gls{mpi} controller. This function will return 0, if the \mintinline{cpp}{init_hardware()} function has not yet been called. \item[\mintinline{cpp}{bool hardware::set_location(const mpilib::geo::Location &loc)}] Updates the location registered on the \gls{mpi} controller. Returns \mintinline{cpp}{true} if the location was successfully updated, and \mintinline{cpp}{false} if the location update failed on the controller, or if the \mintinline{cpp}{init_hardware()} function has not yet been called. 
\end{description}

A sample C++ implementation of the Slotted ALOHA protocol using the hardware interface described above can be found in \autoref{minted:cpp:slottedaloha} in \autoref{app:slottedaloha}.
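In addition to the full protocol example referenced above, the following minimal sketch illustrates the intended call sequence of the interface. It uses only the functions documented in this section; the \mintinline{cpp}{Location} constructor arguments, the payload bytes, the node-id check and the 100-millisecond slot length are illustrative assumptions and not part of the interface.

\begin{minted}{cpp}
#include <chrono>
#include <vector>

#include "hardware.h"

int main() {
    using namespace std::chrono_literals;

    // Register this node with the controller (coordinates are placeholders;
    // construct mpilib::geo::Location however the library expects).
    mpilib::geo::Location loc{57.0, 10.0};
    hardware::init(loc);

    // An arbitrary upper-layer packet (any sequence of bytes).
    std::vector<unsigned char> packet{0x01, 0x02, 0x03};

    if (hardware::get_id() % 2 == 0) {
        // Even-numbered nodes transmit once, then sleep for the rest of the slot.
        auto tx = hardware::broadcast(packet);
        if (tx < 100ms) {
            hardware::sleep(100ms - tx);
        }
    } else {
        // Odd-numbered nodes listen for the whole slot and hand every
        // received packet to the upper-layer protocol.
        auto received = hardware::listen(100ms);
        for (auto &p : received) {
            (void) p; // process the packet here
        }
    }

    hardware::deinit();
    return 0;
}
\end{minted}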
{ "alphanum_fraction": 0.7635679742, "avg_line_length": 103.3888888889, "ext": "tex", "hexsha": "70636b3a52fbdd42c6f5b07e0e27d4dbc099cb85", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Joklost/masters", "max_forks_repo_path": "reports/p9/sections/03-mpi/04-hardwareinterface.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Joklost/masters", "max_issues_repo_path": "reports/p9/sections/03-mpi/04-hardwareinterface.tex", "max_line_length": 516, "max_stars_count": null, "max_stars_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Joklost/masters", "max_stars_repo_path": "reports/p9/sections/03-mpi/04-hardwareinterface.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 866, "size": 3722 }
\chapter[Areal uniformity and diversity]{Areal uniformity and diversity in northern Eurasia}\label{areality} %%% In the previous chapter, the prototypical and the known minor noun phrase types occurring in the languages of northern Eurasia were characterized and illustrated with examples. This survey thus provides an overall picture of the degree of typological uniformity or divergence with regard to adjective attribution marking within both the whole area and each genealogical unit. \section{Attested attribution marking devices} %%% Altogether 13 (simple and combined) types of adjective attribution marking devices are attested in the languages of northern Eurasia: %%% \begin{enumerate} \item Anti\hyp{}construct state\\as in \ili{Kildin Saami}\is{anti\hyp{}construct state} \item Anti\hyp{}construct state + \isi{head\hyp{}driven agreement} (“double agreement”)\\as in \ili{Swedish}\is{anti\hyp{}construct state} \item Anti\hyp{}construct state + \isi{construct state} (“double\hyp{}construct state”)\\as in \ili{Northern Saami}\is{anti\hyp{}construct state} \item Anti\hyp{}construct state agreement\\as in \ili{Russian}\is{anti\hyp{}construct state agreement} \item Appositional head\hyp{}driven agreement\\as in \ili{Georgian}\is{appositional head\hyp{}driven agreement} \item Attributive article\\as in \ili{Yiddish}\is{attributive article} \item Attributive article + \isi{head\hyp{}driven agreement} (“double agreement”)\\as in \ili{Albanian}\is{attributive article} \item Attributive nominalization\\as in \ili{Udmurt}\is{attributive nominalization} \item Construct state\\as in \ili{Northern Kurdish}\is{attributive nominalization}\is{construct state} \item Incorporation\\as in \ili{Chukchi}\is{incorporation} \item Juxtaposition\\as in \ili{Komi-Zyrian}\is{juxtaposition} \item Head\hyp{}driven agreement\\as in \ili{Finnish}\is{head\hyp{}driven agreement} \item Modifier\hyp{}headed possessor agreement\\as in \ili{Oroch}\is{modifier\hyp{}headed possessor agreement} \end{enumerate} %%% Only one type attested in the world-wide sample (see the Appendix) does not occur in the northern Eurasian area: the floating construct state marker (\textit{linker}) found, for instance, in \ili{Tagalog} (Austronesian). \is{linker} The Indo-European\il{Indo-European languages} family has the largest absolute number of attested adjective attribution marking devices (nine). It is followed by Nakh\hyp{}Daghestanian\il{Nakh-Daghestanian languages} and Uralic\il{Uralic languages} (five each) and Kartvelian\il{Kartvelian languages} and Tungusic\il{Tungusic languages} (four each). The Mongolic\il{Mongolic languages} family has the lowest possible number with only one attested device, just as with Kamchatkan\il{Kamchatkan languages} and the isolates Ainu,\il{Ainu} Basque,\il{Basque} Korean\il{Korean} and Nivkh.\il{Nivkh} The most rare types are: (1) \isi{modifier\hyp{}headed possessor agreement}, which is attested only as a secondary device in a few \ili{Tungusic languages}, and (2) the combined construct device (i.e., “double\hyp{}construct state”), which is attested only marginally in one single language, \ili{Northern Saami} (Uralic). Attributive nominalization\is{attributive nominalization} combined with \isi{head\hyp{}driven agreement} is also very rare. This type occurs as the primary device only in the \ili{Albanian languages} (Indo-European), but it is also attested as a secondary or tertiary device in a few other languages. 
Head-marking construct state is also relatively uncommon in the northern Eurasian area as it is attested only in \ili{Iranian languages} (Indo-European). The most common type is \isi{juxtaposition}, followed by \isi{head\hyp{}driven agreement}. \section{Prototypes of attribution marking devices} %%% Several language families of northern Eurasia exhibit clear prototypes of adjective attribution marking devices: all Mongolic\il{Mongolic languages} and Turkic\il{Turkic languages} languages have \isi{juxtaposition} as the default device, as is the case for the languages of most branches of Uralic\il{Uralic languages} as well. Head\hyp{}driven agreement occurs as another prototype in many branches of the Indo-European\il{Indo-European languages} family. Even though the attested deviation from the prototype is much higher in Indo-European\il{Indo-European languages} than in Mongolic,\il{Mongolic languages} Turkic\il{Turkic languages} and Uralic,\il{Uralic languages} \isi{head\hyp{}driven agreement} marking can be shown to occur prototypically in most Indo-European\il{Indo-European languages} taxa. For the Abkhaz-Adyghe,\il{Abkhaz-Adyghe languages} Chukotko-Kamchatkan,\il{Chukotko-Kamchatkan languages} Kartvelian,\il{Kartvelian languages} Nakh\hyp{}Daghestanian\il{Nakh-Daghestanian languages} and Tungusic\il{Tungusic languages} families, synchronic prototypes are not very easy to find because a predominant type does not occur inside these families. The other language families of northern Eurasia are either isolates (Nivkh,\il{Nivkh} Ainu,\il{Ainu} Japanese, Korean,\il{Korean} Basque)\il{Basque} or they exhibit rather shallow genealogical diversity (Yukaghir,\il{Yukaghir languages} Ye\-ni\-sei\-an\il{Yeniseian languages}). Together with a few other families, predominantly spoken outside the investigated area (Eskimo-Aleut,\il{Eskimo-Aleut languages} Sino-Tibetan,\il{Sino-Tibetan languages} Semitic),\il{Semitic languages} these families are excluded from generalizations about prototypes. Larger language families representing a strikingly high diversity in regard to the attested absolute number of adjective attribution marking devices are Indo-European,\il{Indo-European languages} Nakh\hyp{}Daghestanian,\il{Nakh-Daghestanian languages} Uralic\il{Uralic languages} and Tungusic.\il{Tungusic languages} A strikingly high degree of uniformity is found in Mongolic\il{Mongolic languages} and Turkic.\il{Turkic languages} \section[Diachronic implications]{Diachronic implications of uniformity and diversity inside and across taxa} %%% %Measuring Looking at the degree of diversity (or uniformity) from a synchronic point of view may help identify diachronic processes. A very high degree of diversity inside a given taxon as compared to its proto-stage is likely to manifest pervasive linguistic changes and the innovation of new types. Similarly, the synchronic attestation of a high degree of uniformity inside a given taxon indicates the inheritance of original types without significant innovations. A taxon is defined as a group of related languages which go back to a common reconstructed (or documented) language, i.e., a subbranch of a language family or, ultimately, the proto-form of a whole language family. 
The East Saamic\il{West Saamic languages} languages, for instance, form a group of sister languages which derived from Proto\hyp{}East-Saamic.\il{Proto\hyp{}East Saamic} Proto\hyp{}East-Saamic\il{Proto\hyp{}East Saamic} is derived together with its Saamic\il{Saamic languages} sister languages from a more distant proto-stage, i.e., Proto\hyp{}Saamic\il{Proto\hyp{}Saamic}, which again is derived together with its Uralic\il{Uralic languages} sister languages from Proto\hyp{}Uralic.\il{Proto\hyp{}Uralic} Since the proto-stages of languages are normally reconstructed as single languages, it can be assumed that most of them had only one single type of adjective attribution marking (similar to the prevailing number of languages spoken today, cf.~the sample in the Appendix). Daughter languages which descend from a proto-language will either inherit the original adjective attribution marking devices, innovate secondary (or tertiary, etc.) devices or replace the original devices with borrowed new ones. The Proto\hyp{}Saamic\il{Proto\hyp{}Saamic} daughter language of Proto\hyp{}Uralic,\il{Proto\hyp{}Uralic} for instance, has replaced the original Uralic\il{Uralic languages} \isi{juxtaposition} with anti\hyp{}construct state marking (see \S\ref{saamic diachr}). The \ili{Proto\hyp{}Baltic\slash{}Slavic} daughter languages of Proto\hyp{}Indo-European\il{Proto\hyp{}Indo-European} inherited the original Indo-European\il{Indo-European languages} \isi{head\hyp{}driven agreement} marking but innovated a secondary type, i.e., anti\hyp{}construct state agreement marking (see \S\ref{slavic diachr}). All modern Mongolic\il{Mongolic languages} languages, by contrast, exhibit juxtaposition uniformly and have obviously inherited this device from their proto-languages (Proto\hyp{}Dagur,\il{Proto\hyp{}Dagur} Proto\hyp{}Moghol,\il{Proto\hyp{}Moghol} \ili{Proto\hyp{}Mongolic}, etc.) which in turn must have inherited juxtaposition from Proto\hyp{}Mongolic.\il{Proto\hyp{}Mongolic} A comparison of synchronically attested diversity inside and across taxa might thus have diachronic implications. %The simple calculations in Table~\ref{diversity} illustrate the degree of diversity in the investigated families of northern Eurasia. Column 1 lists all families, branches and subbranches in alphabetical order. Isolates and taxa with only one member language are not included in the table, and neither are taxa which are not spoken predominantly in northern Eurasia, with only two exceptions: the Iranian\il{Iranian languages} and Indo-Aryan\il{Indo-Aryan languages} subbranches within the Indo-European\il{Indo-European languages} family. Since the highest possible diversity is of interest here, the number of all attested devices (including secondary and tertiary types restricted to special noun phrase types) is counted. % %The second column in Table~\ref{diversity} (“Languages”) gives the number of coded languages from each taxon. The third column (“Types”) gives the absolute number of attested types. The next two columns 4 and 5 present ratio figures. The first of them (“Ratio (tax.)”) results from dividing the number of attested types in the higher branch by the number of types attested for the respective taxon:\medskip % %$\text{Diversity Ratio}_{\text{taxon}} = \frac{\text{Types}_{\text{family}}}{\text{Types}_{\text{taxon}}}$.\medskip % %\noindent For instance, West Saamic\il{West Saamic languages} has a ratio of $1.00$ because it exhibits all four types attested in the whole Saamic\il{Saamic languages} branch. 
The Saamic\il{Saamic languages} branch as such has a ratio of $1.25$ because four types are found in Saamic\il{Saamic languages} compared to five types attested for the whole Uralic\il{Uralic languages} family. Similarly, South Slavic\il{South Slavic languages} also has a ratio of $1.00$ because it exhibits all three types attested in Slavic.\il{Slavic languages} But the Slavic\il{Slavic languages} branch as such has a higher ratio of $3.00$ (meaning a lesser degree of diversity) because only three types are attested in this branch out of nine types for the whole Indo-European\il{Indo-European languages} family. % %\newpage %The last ratio figures (“Ratio (lgs.)”) result from dividing the overall number of languages by the number of attested types in the given taxon:\medskip % %$\text{Diversity Ratio}_{\text{languages}} = \frac{\text{Languages}_{\text{taxon}}}{\text{Types}_{\text{taxon}}}$.\medskip % %\noindent For instance, five \ili{West Saamic languages} are coded for four different types, resulting in a ratio of $1.25$. For the whole Saamic\il{Saamic languages} branch altogether nine languages are coded for four types, resulting in a somewhat higher ratio figure of $2.25$. South Slavic\il{South Slavic languages} has the ratio of $1.33$ because the four \ili{South Slavic languages} are coded for three types; Slavic,\il{Slavic languages} however, has $4.33$ because 13 \ili{Slavic languages} are coded for only three different types. % %The absolute number of types shows directly which families or branches inside families exhibit more types than other comparable taxa. The first ratio in column 4 (against the number of types in the higher branch) indicates where the more diverse or the more uniform branches are located inside a primary taxon (i.e., inside a family or a higher branch of a family). These ratio figures can be used for a comparison of languages inside families or between comparable taxa across families. East Saamic\il{East Saamic languages} (Uralic) with a ratio of $1.33$, for instance, seems less diverse as South Slavic\il{South Slavic languages} (Indo-European) with $1.00$. This is true even though the proto-stages of both taxa have a comparable time depth (approximately 1000 AD), both taxa have four members and they both exhibit three attested types of adjective attribution marking devices. However, the number of three attested types in each of the two branches is checked against the overall number of types attested in the respective higher branches: four types are attested in the whole Saamic branch,\il{Saamic languages} only three types are attested in Slavic.\il{Slavic languages} In a similar way is the whole Saamic\il{Saamic languages} branch, with a ratio of $1.25$, more diverse compared to the whole Uralic family,\il{Uralic languages} (exhibiting almost all types attested for the whole family) than Slavic languages within Indo-European\il{Indo-European languages} with a ratio of $3.00$ (exhibiting only one third of the generally attested types in the whole family). % %The second ratio in column 5 (against the number of coded languages) relativizes the first two figures. It seems much more likely that a higher number of coded languages results in a higher number of detected devices. The second ratio can thus serve to test the degree of diversity (in column 3 and 4). 
% %The simple calculations presented in Table \ref{diversity} can perhaps illustrate the degree of diversity, at least in those cases where the two ratio figures (against the number of coded types and against the number of coded languages inside taxa) and the degree of diversity compared to the whole sample coincide to a certain degree. The significance values in column 6 (“Diversity”) are labeled impressionistically as \textit{very low, low, mid-low, mid-high, high}. A hyphen marks those cases where a significance value cannot be found because the taxon in question has too few members (less than four). %Note that a value \textit{very high} is not found. %BRIEFLY EXPLAIN WHY THIS IS SO? % However, this classification does not mark diversity in absolute terms. It marks only the deviation from the average value of the whole sample. The Turkic\il{Turkic languages} family, for instance, can be shown to have a very low diversity and several of its branches clearly have a low diversity level as well. For the Mongolic\il{Mongolic languages} family, a very low value has been calculated. Whereas a low value has even been calculated for the whole Nakh\hyp{}Daghestanian\il{Nakh-Daghestanian languages} family, the Daghestanian\il{Daghestanian languages} branch as well as two of its subbranches have a relatively high diversity value. The same is true for Uralic,\il{Uralic languages} which has a low diversity value as a family and in several of its branches. One Uralic branch, Saamic,\il{Saamic languages} has a high value. Tungusic\il{Tungusic languages} has a middle diversity value but two of its branches are clearly more highly diverse. For Indo-European,\il{Indo-European languages} a significance value has not been found, although inside Indo-European, high values are calculated for Indo-Iranian\il{Indo-Iranian languages} and Germanic.\il{Germanic languages} Thus, the general picture partly coincides with what is known about areal distribution and spread of other linguistic features \citep[cf., e.g.,][]{nichols1992}: less diversity (higher numbers) is found in the inner parts of \isi{North Asia} (Mongolic,\il{Mongolic languages} Turkic),\il{Turkic languages} whereas languages in the northern Eurasian periphery, especially in south\hyp{}easternmost Europe (\isi{Caucasus}) but also in north-easternmost Europe (Circum-Baltic)\is{Circum-Baltic area} and in north-easternmost Asia (Pacific Rim),\is{Pacific Rim area} exhibit a higher degree of diversity (lower numbers) with respect to the morpho-syntax of adjective attribution. %Even though the figures in Table~\ref{diversity} summarize exclusively synchronic findings and the applied calculations are rather impressionistic, it stands to reason that they reflect historical developments (i.e., language changes) in certain parts of the area. Note that the underlying sample is not balanced and thus perhaps not easily applicable for true statistical analyses. However, this is an exploratory study; detailed statistical investigations are left for future research. So far it can only be assumed that the massive innovations in several neighboring taxa or in larger geographic sub-areas attested synchronically may point to contact-induced changes in areal hotbeds of innovation. In Part~IV (Diachrony), some light will be shed on diachronic variation and on the evolution of highly diverse adjective attribution marking inside language families of northern Eurasia. 
%\newcommand{\divvaltabheader}{& \textit{Family} & Languages & Types & \parbox{1cm}{Ratio (tax.)} % & \parbox{1cm}{Ratio (lgs.)} % & \parbox{2.2cm}{\centering Diversity % \newline value}\\} % %\begin{table} %\small %\begin{tabularx}{\textwidth}{Xrrrrrc} %\lsptoprule %\divvaltabheader %\midrule %\multicolumn{2}{r}{\textit{Abkhaz-Adyghe}} &\textit{4}&\textit{2} &–&\textit{2.00} &low \il{Abkhaz-Adyghe languages}\\ %\midrule %\multicolumn{2}{l}{Abkhaz} &2 &1 &2.00 &2.00 &– \il{Abkhaz languages}\\ %\multicolumn{2}{l}{Circassian} &2 &1 &2.00 &2.00 &– \il{Circassian languages}\\ %\midrule %\multicolumn{2}{r}{\textit{Chukotko-Kamchatkan}}&\textit{4}&\textit{3} &–&\textit{1.33} &?? \il{Chukotko-Kamchatkan languages}\\ %\midrule %\multicolumn{2}{l}{Chukchi} &1 &2 &1.50 &0.50 &– \il{Chukchi languages}\\ %\multicolumn{2}{l}{Kamchatkan} &1 &1 &3.00 &1.00 &– \il{Kamchatkan languages}\\ %\multicolumn{2}{l}{Koryak-Alutor} &2 &2 &1.50 &1.00 &– \il{Koryak-Alutor languages}\\ %\midrule %\multicolumn{2}{r}{\textit{Indo-European}} &\textit{65}&\textit{9} &–&\textit{7.22} &low \il{Indo-European languages}\\ %\midrule %\multicolumn{2}{l}{Albanian} &2 &2 &4.50 &1.00 &– \il{Albanian languages}\\ %\multicolumn{2}{l}{Armenian} &1 &2 &4.50 &0.50 &– \il{Armenian languages}\\ %\multicolumn{2}{l}{Baltic} &2 &2 &4.50 &1.00 &– \il{Baltic languages}\\ %\multicolumn{2}{l}{Celtic} &6 &2 &4.50 &3.00 &low \il{Celtic languages}\\ %&Brittonic &3 &1 &2.00 &3.00 &– \il{Brittonic languages}\\ %&Gaelic &3 &2 &1.00 &1.50 &– \il{Gaelic languages}\\ %\multicolumn{2}{l}{Germanic} &14 &5 &1.80 &2.80 &high \il{Germanic languages}\\ %&N-Germanic &6 &4 &1.25 &1.50 &mid \il{North Germanic languages}\\ %&W-Germanic &8 &3 &1.66 &2.66 &mid \il{West Germanic languages}\\ %\multicolumn{2}{l}{Hellenic} &1 &2 &4.50 &0.50 &– \il{Hellenic languages}\\ %\multicolumn{2}{l}{Indo-Iranian} &14 &7 &1.28 &2.00 &high \il{Indo-Iranian languages}\\ %&Indo-Aryan &6 &3 &2.33 &2.00 &mid \il{Indo-Aryan languages}\\ %&Iranian &8 &6 &1.66 &1.33 &high \il{Iranian languages}\\ %\multicolumn{2}{l}{Romance} &10 &2 &4.50 &5.00 &low \il{Romance languages}\\ %&E-Romance &1 &2 &1.00 &0.50 &– \il{East Romance languages}\\ %&Italo-W-Romance &7 &1 &2.00 &7.00 &low \il{Italo-West Romance languages}\\ %&S-Romance &2 &1 &2.00 &2.00 &– \il{South Romance languages}\\ %\multicolumn{2}{l}{Slavic} &13 &3 &3.00 &4.33 &low \il{Slavic languages}\\ %&E-Slavic &3 &2 &1.50 &1.50 &– \il{East Slavic languages}\\ %&S-Slavic &4 &3 &1.00 &1.33 &mid \il{South Slavic languages}\\ %&W-Slavic &6 &1 &3.00 &6.00 &very low \il{West Slavic languages}\\ %\lspbottomrule %\end{tabularx} %\caption[Number and ratio of attested types per genealogical unit]{Number and ratio of attested types per genealogical unit: absolute number of types (column 3), ratio against the generally attested number of types in the respective higher branch or family (column 4, higher numbers mean less diversity), ratio against the number of coded languages (column 5, higher numbers mean less diversity) and a diversity value (column 6, only for taxa with more than three languages).} %\label{diversity} %\end{table} % % %\begin{table} %\small %\begin{tabularx}{\textwidth}{Xrrrrrc} %\lsptoprule %\divvaltabheader %\midrule %\multicolumn{2}{r}{\textit{Kartvelian}} &\textit{4} &\textit{3} &– &\textit{1.33} &mid \il{Kartvelian languages}\\ %\midrule %\multicolumn{2}{l}{Georgian} &2 &3 &1.00 &0.66 &– \il{Georgian languages}\\ %\multicolumn{2}{l}{Svan} &1 &2 &1.50 &0.50 &– \il{Svan languages}\\ %\multicolumn{2}{l}{Zan} &2 &2 &1.50 &1.00 &– 
\il{Zan languages}\\ %\midrule %\multicolumn{2}{r}{\textit{Mongolic}} &\textit{6}&\textit{1} &–&\textit{6.00} &very low \il{Mongolic languages}\\ %\midrule %\multicolumn{2}{l}{Dagur} &1 &1 &1.00 &1.00 &– \il{Dagur languages}\\ %\multicolumn{2}{l}{Moghol} &1 &1 &1.00 &1.00 &– \il{Moghol languages}\\ %\multicolumn{2}{l}{Mongolian} &5 &1 &1.00 &5.00 &very low \il{Mongolian languages}\\ %\midrule %\multicolumn{2}{r}{\textit{Nakh-Daghestanian}} &\textit{28}&\textit{5} &–&\textit{5.60} &low \il{Nakh-Daghestanian languages}\\ %\midrule %\multicolumn{2}{l}{Daghestanian} &25 &5 &1.00 &5.00 &mid \il{Daghestanian languages}\\ %&Avar-Andi-Tsezic &13 &4 &1.25 &3.25 &mid \il{Avar-Andi-Tsezic languages}\\ %&Dargwa &1 &2 &2.50 &0.50 &– \il{Dargwa languages}\\ %&Lak &1 &2 &2.50 &0.50 &– \il{Lak languages}\\ %&Lezgic &10 &4 &1.25 &2.25 &mid \il{Lezgic languages}\\ %\multicolumn{2}{l}{Nakh} &3 &3 &1.66 &1.00 &– \il{Nakh languages}\\ %&Bats &1 &2 &1.50 &0.50 &– \il{Bats languages}\\ %&Chechen-Ingush &2 &2 &1.50 &1.00 &– \il{Chechen-Ingush languages}\\ %\midrule %\multicolumn{2}{r}{\textit{Tungusic}} &\textit{10}&\textit{4} &–&\textit{2.25} &mid \il{Tungusic languages}\\ %\midrule %\multicolumn{2}{l}{Amur Tungusic} &5 &4 &1.00 &1.25 &high \il{Amur Tungusic languages}\\ %&Nanay-Ulcha-Orok &3 &3 &1.33 &1.00 &– \il{Nanay-Ulcha-Orok languages}\\ %&Oroch-Udege &2 &3 &1.33 &0.66 &– \il{Oroch-Udege languages}\\ %\multicolumn{2}{l}{Manchu} &1 &1 &4.00 &1.00 &– \il{Manchu languages}\\ %\multicolumn{2}{l}{N-Tungusic} &4 &4 &1.00 &1.00 &high \il{North Tungusic languages}\\ %\midrule %\multicolumn{2}{r}{\textit{Turkic}} &\itshape 22&\textit{2} &–&\textit{11.00} &very low \il{Turkic languages}\\ %\midrule %\multicolumn{2}{l}{Bulgar} &1 &2 &1.00 &0.50 &– \il{Bulgar Turkic languages}\\ %\multicolumn{2}{l}{Common Turkic} &21 &2 &1.00 &10.50 &low \il{Common Turkic languages}\\ %&Altay &2 &1 &2.00 &2.00 &– \il{Altay Turkic languages}\\ %&Karluk &2 &2 &1.00 &1.00 &– \il{Karluk languages}\\ %&Kipchak &8 &1 &2.00 &8.00 &very low \il{Kipchak languages}\\ %&Lena &2 &1 &2.00 &2.00 &– \il{Lena Turkic languages}\\ %&Oguz &4 &2 &1.00 &2.00 &low \il{Oguz languages}\\ %&Yenisey &2 &1 &2.00 &2.00 &– \il{Yenisey Turkic languages}\\ %\lspbottomrule %\end{tabularx} %\label{diversity2} %\end{table} % % %\begin{table}[t] %\small %\begin{tabularx}{\textwidth}{Xrrrrrc} %\lsptoprule %\divvaltabheader %\midrule %\multicolumn{2}{r}{\textit{Uralic}} &\textit{37}&\textit{5} &– &\textit{7.40} &?? \il{Uralic languages}\\%??low %\midrule %\multicolumn{2}{l}{Finnic} &7 &1 &5.00 &7.00 &low \il{Finnic languages}\\ %\multicolumn{2}{l}{Hungarian} &1 &1 &5.00 &1.00 &– \il{Hungarian}\\ %\multicolumn{2}{l}{Khanty} &2 &1 &5.00 &2.00 &– \il{Khanty languages}\\ %\multicolumn{2}{l}{Mansi} &2 &1 &5.00 &2.00 &– \il{Mansi languages}\\ %\multicolumn{2}{l}{Mari} &2 &2 &2.50 &1.00 &– \il{Mari languages}\\ %\multicolumn{2}{l}{Mordvin} &2 &1 &5.00 &2.00 &– \il{Mordvin languages}\\ %\multicolumn{2}{l}{Permic} &3 &3 &1.66 &1.00 &– \il{Permic languages}\\ %\multicolumn{2}{l}{Saamic} &9 &4 &1.25 &2.25 &high \il{Saamic languages}\\ %&E-Saamic &4 &3 &1.33 &1.33 &high \il{East Saamic languages}\\ %&W-Saamic &5 &4 &1.00 &1.25 &high \il{West Saamic languages}\\ %\multicolumn{2}{l}{Samoyedic} &7 &2 &2.50 &3.50 &?? 
\il{Samoyedic languages}\\%??low %&Enets &2 &1 &2.00 &2.00 &– \il{Enets languages}\\ %&Nenets &1 &2 &1.00 &0.50 &– \il{Nenets languages}\\ %&Nganasan &1 &1 &2.00 &1.00 &– \il{Nganasan}\\ %&Selkup &3 &1 &2.00 &3.00 &– \il{Selkup languages}\\ %\midrule %\multicolumn{2}{r}{\textit{Yukaghir}} &\textit{2}&\textit{2} &– &\textit{1.00} &– \il{Yukaghir languages}\\ %\lspbottomrule % %\end{tabularx} %\vspace{7.55cm} %\label{diversity3} %\end{table} %\clearpage
{ "alphanum_fraction": 0.7350749259, "avg_line_length": 115.5462962963, "ext": "tex", "hexsha": "3d020eb2a8a01394fac8db39be034705ed330b11", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "langsci/sidl", "max_forks_repo_path": "inprep/Riessler/chapters/chap-areal.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "langsci/sidl", "max_issues_repo_path": "inprep/Riessler/chapters/chap-areal.tex", "max_line_length": 2312, "max_stars_count": null, "max_stars_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "langsci/sidl", "max_stars_repo_path": "inprep/Riessler/chapters/chap-areal.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8168, "size": 24958 }
\subsection{Structure}

\begin{frame}{Content of the Lecture 1 / 2}
  \textbf{General:}
  \begin{itemize}
    \item Most of you had a lecture on basic programming $\ldots$
      \begin{center}\color{MainA}performance was not an issue\end{center}
      \vspace{1em}
    \item<2- | handout: 1> Here it is going to be:
      \begin{enumerate}
        \item<3- | handout: 1> How fast is our program?
        \item<4- | handout: 1> How can we make it faster?
        \item<5- | handout: 1> How can we prove that it will always be that fast?
      \end{enumerate}
      \vspace{1em}
    \item<6- |handout:1> \textbf{Important} issues:
      \begin{itemize}
        \item Most of the time: application {\color{MainA}runtime}
        \item Sometimes also: resource / {\color{MainA}space consumption}
      \end{itemize}
  \end{itemize}
\end{frame}
%-------------------------------------------------------------------------------
\begin{frame}{Content of the Lecture 2 / 2}
  \textbf{Algorithms:}
  \begin{columns}
    \begin{column}{0.40\textwidth}<2- |handout:1>
      \begin{itemize}
        \item Sorting
        \item Dynamic Arrays
        \item Associative Arrays
        \item Hashing
        \item Edit distance
      \end{itemize}
    \end{column}
    \begin{column}{0.60\textwidth}<2- |handout:1>
      \begin{itemize}
        \item Priority Queue
        \item Linked Lists
        \item Pathfinding / Dijkstra Algorithm
        \item Search Trees
      \end{itemize}
    \end{column}
  \end{columns}
  \vspace*{2.0em}
  \onslide<3- |handout:1>{\textbf{Mathematics:}}
  \begin{columns}<4- |handout:1>
    \begin{column}{0.40\textwidth}
      \begin{itemize}
        \item Runtime analysis
        \item $\mathcal{O}$-Notation
      \end{itemize}
    \end{column}
    \begin{column}{0.60\textwidth}
      \begin{itemize}
        \item Proof of correctness
          \vspace*{1.5em}
      \end{itemize}
    \end{column}
  \end{columns}
\end{frame}

%%% ===================================================================
%%% This should be at the END of the file !!!!!!
%%%
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../../Lecture.tex"
%%% End:
%%% ===================================================================
{ "alphanum_fraction": 0.4852941176, "avg_line_length": 26.7659574468, "ext": "tex", "hexsha": "a6b9be979a0c7a910bb3e3edf38004d2009da022", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2021-04-05T08:36:38.000Z", "max_forks_repo_forks_event_min_datetime": "2016-10-07T11:55:23.000Z", "max_forks_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "TobiOnline/AlgoDat", "max_forks_repo_path": "Lecture-1/Chapter/eng/10_Content.tex", "max_issues_count": 23, "max_issues_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_issues_repo_issues_event_max_datetime": "2019-10-20T15:40:10.000Z", "max_issues_repo_issues_event_min_datetime": "2016-10-08T09:27:41.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "TobiOnline/AlgoDat", "max_issues_repo_path": "Lecture-1/Chapter/eng/10_Content.tex", "max_line_length": 81, "max_stars_count": 17, "max_stars_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "TobiOnline/AlgoDat", "max_stars_repo_path": "Lecture-1/Chapter/eng/10_Content.tex", "max_stars_repo_stars_event_max_datetime": "2020-08-26T11:07:16.000Z", "max_stars_repo_stars_event_min_datetime": "2016-12-16T17:42:34.000Z", "num_tokens": 691, "size": 2516 }
\section{Stereo vision} Three-dimensional dynamic capture means in general the way of recording a sequence of movements of a real-life target or scene. This section introduces the methods for seeing three-dimensional structures from stereo setups, consisting of two or more cameras. The configuration of a proper setup with two is described; it can be extended to the multi-view domain. Most of the sophisticated algorithms able to recover a dense reconstruction base their work on the same principles, as follows. \subsection{Coordinate systems and transforms} \label{sec:coord} The previous section described how to record a view of a scene with a camera. From now on, the term camera refers to a particular camera configuration, which can be a single physical camera moved to different locations. The camera is a projective object located somewhere in the imaged scene. Its \textit{intrinsic parameters} model the properties of projection, but do not take into account the camera location in any global coordinate system. The \textit{extrinsic parameters} contain the camera location and rotation in another global coordinate system, structured as a matrix. This is especially advantageous when there are more than one cameras and their coordinates must be related. \cite{hartley03multiview,heyden2005multiple} This part quickly reviews basic transforms whose results are needed in the later reconstruction steps. %Calibration is often specified with a camera projection matrix, or several separate matrices. %It may be convenient to store intrinsics and extrinsics separately if the intrinsic matrix is constant for several pictures, for example. \simplefig{h}{% \includegraphics[width=0.7\textwidth]{gfx/pinhole3d} }{fig:pinhole3d} {Pinhole camera geometry. Camera coordinate system origin at O, axis X3 points towards the optical axis, Y1 and Y2 point to image plane axes and R is the principal point, at the image center. The point P projects to Q, as well as everything else on the line joining them. The image plane is f units away from camera origin; f is called the focal length.} In computer graphics and vision, points and directions are usually described in homogeneous coordinates. Translation, perspective projection, rotation, and other operations are conveniently described as matrices by using an additional dimension for points, of which usually the last element is 1: $(x, y, z, 1)$. All points $(xw, yw, zw, w)$ map to the same point $(x, y, z)$. \cite{dubrofsky2009homography,hartley03multiview} %Homography definition (mapping of points and lines in $P^2$) The imaging process essentially captures a projection to a flat two-dimensional plane of the camera's view, as described in section \ref{sec:imaging}. When relating points between different cameras that view the same scene, the cameras' relational positions and rotations must be known. One of the cameras is often conveniently chosen as the origin of a global coordinate frame, so that its extrinsic parameters become unity transforms (programming libraries often assume this, see e.g. \cite{opencv}). Each three-dimensional point in the world is transformed to the small sensor or film inside the camera, which is then digitized to a discrete two-dimensional grid of pixels. The size of this pixel array (i.e. image) is referred to as the camera's resolution. 
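To make the homogeneous-coordinate convention above concrete, consider (as a small illustrative example, not taken from the cited sources) a pure translation by $(t_1, t_2, t_3)$. With the extra coordinate set to $1$ it becomes a single matrix product,
\begin{equation*}
\begin{pmatrix}
1 & 0 & 0 & t_1\\
0 & 1 & 0 & t_2\\
0 & 0 & 1 & t_3\\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
x\\ y\\ z\\ 1
\end{pmatrix}
=
\begin{pmatrix}
x+t_1\\ y+t_2\\ z+t_3\\ 1
\end{pmatrix},
\end{equation*}
so translations, rotations and projections can all be chained by multiplying their matrices, which is exactly what the transformation chain encoded below relies on.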
%Figure \ref{fig:TODO} illustrates this transformation chain, which is encoded as the following equations, given a homogeneous point (4-dimensional vector) $X$ representing a 3D location described in physical (e.g. metric) coordinates: The transformation chain is encoded as follows, given a homogeneous point (4-dimensional vector) $X$ representing a 3D location described in physical (e.g. metric) coordinates: \begin{align} x &= P X\\ &= M_i X_s\\ % X_s on the sensor &= M T R X\\ &= M_i M_p T R X\\ % R, T camera pose, M_4 to camera sensor, M_3 to pixel coords \end{align} $x$ is a 2d pixel in a discrete image, $X_s$ exists on the sensor. $R$, $T$ encode the camera rotation and translation (extrinsics); $M_p$ projects the world coordinates to the camera sensor (film) - still in world coordinates (intrinsics!), and finally the affine $M_i$ transforms the points from the sensor to pixel coordinates on the digital discretized image. The whole projection $P = M_i M_p T R$ can be used as-is without decomposing it to separate matrices, unless the individual parameters are needed. As the chain consists of several matrices, some of them are defined only up to scale; the coordinate systems' units can be chosen freely. Software packages usually do not decompose the chain, because it is not needed and unique parameters cannot be found because of scaling. %The external camera parameters are called the extrinsics: camera coordinate system position and rotation (heading) in the global space. %Camera position sits at the projection center blah. The internal parameters, intrinsics, encode how the image is formed on the sensor: they consist of focal length, sensor size and principal point: (last column left out, as it's full of zeroes in 3x4) \begin{equation} M = \begin{pmatrix} m_x & \gamma & u_0\\ 0 & m_y & v_0\\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} f & 0 & 0\\ 0 & f & 0\\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} \alpha_x & \gamma & u_0\\ 0 & \alpha_y & v_0\\ 0 & 0 & 1 \end{pmatrix} \end{equation} For simplicity, it is often denoted $\alpha_x = m_x f$, $\alpha_y = m_y f$. $R = (u_0, v_0)$ is the image center (or principal point). For square pixels, $m_x = m_y$, and for a non-skewed sensor, $\gamma = 0$, which is often the case. \cite{hartley03multiview,szeliski10vision,heyden2005multiple} %Le image. Horizontal planar triangle, lines between camera origins etc. lecture11.pdf. \subsection{Camera calibration} %In order to accurately measure a scene with a camera, the camera's properties must be known. Reconstruction algorithms need to relate points between images; the camera properties are needed. Calibrating a camera means to measure its intrinsics and extrinsics in order to map its data to a known coordinate frame. Calibration has always to be done, but it does not necessary need to be a manual step before scanning; self-calibration attempts to target this convenience problem. \cite{pollefeys1999hand,hartley03multiview} %Projective calibration only is too general, as it leaves out some assumptions that can be done about a physical world, such as relative angles and sizes; metric calibration something something. \cite{zisserman1995metric}. Automatic calibration tools rely on an amount of feature pairs of which the best matches are found, or a known pattern, such as a planar checkerboard pattern [chuan; zhang] whose features are also distinguished with a similar algorithm but a priori knowledge of the object structure is used for precise calibration. 
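As a rough illustration of the checkerboard procedure just mentioned, the following sketch uses OpenCV's Python interface \cite{opencv}. It is only meant to show the overall flow of such a calibration: the board size, the file-name pattern and the omission of sub-pixel corner refinement are illustrative assumptions rather than a recipe from the cited sources.

\begin{verbatim}
import glob
import cv2
import numpy as np

# Hypothetical checkerboard with 9 x 6 inner corners; square size in
# arbitrary units (it only fixes the scale of the extrinsics).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):      # placeholder file names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the intrinsic matrix, distortion coefficients and
# per-image extrinsics (rotation and translation vectors).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix:\n", K)
\end{verbatim}

The returned intrinsic matrix plays the role of $M$ above, while the per-image rotation and translation vectors are the extrinsics of each view of the board.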
These usually need several pictures taken with the same camera from different poses. The checkerboard calibration step can also measure optical distortion at the same time. \cite{opencv,camcalmatlab} %TODO Figure: show extrinsic in matlab cam calibs, nice pics (both cam and world centered) %Single three-dimensional calibration object is also sufficient blbl One possible way is direct linear transform (DLT) \cite{hartley03multiview}: the whole matrix $P$ is solved from $x_i = PX_i$ by constructing a system of equations from the projections of some known points $i$, and minimizing an error metric, as the case is usually overconditioned. %Methods that dig the matrix out of a single image have certain restrictions, and won't work if e.g. seven points lie on the same plane [longuet-higgins etc.] %XXX see below. Intrinsic, extrinsic. Distortions. Projection matrices. Camera resectioning. %many single planar chessboard pics vs. a single image of an accurate 3d model. %The scale of values in the equations above affects the precision [hartley, in defense of .., h,ziss]. A similarity transform can be used to modify the values to a more consistent range; this is called normalization of the data. \subsection{Binocular disparity} %Essential, fundamental matrices. Correspondence problem. Rectification, undistortion. Epipolar geometry. \simplefig{h!}{ \begin{tikzpicture}[scale=0.3] % P, Z \draw[fill] (0, 20) circle [radius=0.3]; \node at (0, 21) {$P$}; \draw [<->] (0, 0.5) -- (0, 19.5); \node at (0.5, 10) {$Z$}; % origins, T % TODO: circles, node ends not exactly at those points \draw [<->] (-9.5,0) -- (9.5, 0); \draw[fill] (-10, 0) circle [radius=0.3]; \draw[fill] ( 10, 0) circle [radius=0.3]; \node at (-10, -1) {$O_l$}; \node at (10, -1) {$O_r$}; \node at (0, -1) {$T$}; % headings \draw [->] (-10, 0) -- (-10, 10); \draw [->] (10, 0) -- (10, 10); % image planes, at y=4 \draw[thick] (-13, 4) -- (-7, 4); \draw[thick] (13, 4) -- (7, 4); \draw [<->] (-6, 0.5) -- (-6, 3.5); \node at (-5.5, 2) {$f$}; % intersection points at principals and xs \draw[fill] (-10, 4) circle [radius=0.3]; \draw[fill] (10, 4) circle [radius=0.3]; \node at (-10.5, 3) {$c_l$}; \node at (10.5, 3) {$c_r$}; \node at (-9, 5) {$x_l$}; \node at (9, 5) {$x_r$}; % O-to-P \draw (-10, 0) -- (0, 20); \draw (10, 0) -- (0, 20); % p \draw[fill] (8, 4) circle [radius=0.3]; \node at (8, 3) {$p_r$}; \draw[fill] (-8, 4) circle [radius=0.3]; \node at (-8, 3) {$p_l$}; \end{tikzpicture} }{fig:simplestereo} {A very simple stereo setup, picture from above. The image planes (thick lines) are actually imaginary, as a real film in a camera would exist behind the principal point and project the image upside down, as described earlier in \ref{sec:imaging}. The coordinates exist in the world coordinate units. The symbols $O$ are the camera origins ($T$ units between each other); $c$ the principal points; $x$ the image plane coordinates of $p$ w.r.t. the principal points; and $f$ is the focal length. The unknown is $Z$, depth of point $P$.} %Next, the setup of binocular stereo vision is described. Common stereo vision rigs use the simplest possible case: two identical cameras with a fixed distance, both oriented to the same direction, parallel to the line connecting them, as in figure \ref{fig:simplestereo}. 
Assuming known calibration with identical cameras (same focal length and sensor) in the setup described above, visualized in figure \ref{fig:simplestereo}, points can be triangulated as follows.
From similar triangles with a common vertex at $P$, we get (note that $x_r < 0$ as it lies to the left of the corresponding plane's origin, towards the negative axis)

\begin{align}
\frac{Z}{T} &= \frac{Z-f}{T - x_l + x_r} \\
            &= \frac{Z-f}{T - d}\\
ZT - Zd &= ZT - fT\\
Z &= \frac{fT}{d} \label{eq:z}
\end{align}

The disparity $d$ is the difference of the point's coordinates in the two image planes, $d = x_l - x_r$.
If the image planes were instead placed where they physically belong, behind the camera origins, the focal length would have to be negated to keep the interpretation and sign correct, because the projected physical image is mirrored in both axes. Image processing between the sensor and a picture file usually inverts this.

As equation \ref{eq:z} shows, depth is inversely proportional to disparity in this simple case. To map the depth to correct units, only the focal length $f$ and the baseline $T$ are needed in addition; when $d$ is measured in pixel coordinates instead of physical units, the pixel size must also be taken into account. All of these are encoded in the camera parameters. Algorithms such as those in OpenCV \cite{opencv} can compute point clouds from disparity images.
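As a quick sanity check of equation \ref{eq:z}, reading the drawn coordinates off figure \ref{fig:simplestereo} gives $f = 4$, $T = 20$, $x_l = 2$ and $x_r = -2$, so $d = 4$ and $Z = fT/d = 4 \cdot 20 / 4 = 20$, which is exactly the drawn depth of $P$. To give an idea of how a dense disparity map and the resulting depth values are obtained in practice, the following sketch uses OpenCV's Python interface \cite{opencv}. It is only an illustration: the file names, the block-matching parameters and the calibration values ($f$ in pixels, $T$ in metres) are placeholder assumptions, and the input images are assumed to be rectified already (see section \ref{sec:rectification}).

\begin{verbatim}
import cv2
import numpy as np

# left.png / right.png are placeholder names for an already rectified pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching; numDisparities and blockSize are illustrative values.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Z = f*T/d for every pixel with a valid (positive) disparity,
# with an assumed focal length f in pixels and baseline T in metres.
f, T = 700.0, 0.12
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = f * T / disparity[valid]
\end{verbatim}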
\subsection{Epipolar geometry}

\simplefig{h!}{
\begin{tikzpicture}[scale=0.5]
% cameras
\draw[fill] (-11,-1) circle [radius=0.2];
\draw[fill] ( 11,-1) circle [radius=0.2];
\draw (-11,-1) -- (11, -1);
\node at (0, -2) { baseline };
\node at (-11,-2) {$C_l$};
\node at ( 11,-2) {$C_r$};
% planes
\draw (-10,1) -- (-10,7) -- (-5,3) -- (-5,-3) -- cycle;
\draw ( 10,1) -- ( 10,7) -- ( 5,3) -- ( 5,-3) -- cycle;
% 3d pts
\draw[fill] ( 0,3) circle [radius=0.2];
\draw[fill] ( 0,9) circle [radius=0.2];
\node at (0,2) {$A$};
\node at (0,8) {$B$};
% origins via pts
\draw (-11,-1) -- (0,3) -- (11,-1);
\draw (-11,-1) -- (0,9) -- (11,-1);
% epis
\draw[fill] (-6,-1) circle [radius=0.1];
\draw[fill] (6,-1) circle [radius=0.1];
\node at (-6,-1.5) { $e_l$ };
\node at (6,-1.5) { $e_r$ };
% projections
\draw[fill] (-7.5,0.2727) circle [radius=0.1];
\draw[fill] (-7.5,2.1818) circle [radius=0.1];
\node at (-8.5, 0.2727) {$A_l$};
\node at (-8.5, 2.1818) {$B_l$};
% lines from epis
\draw (-6,-1) -- +(-2*1.5,2*1.2727);%(-7.5,0.2727);
\draw (-6,-1) -- +(-2*1.5,2*3.1818);%(-7.5,2.1818);
\draw[fill] (7.5,0.2727) circle [radius=0.1];
\draw[fill] (7.5,2.1818) circle [radius=0.1];
\node at (8.5, 0.2727) {$A_r$};
\node at (8.5, 2.1818) {$B_r$};
\draw (6,-1) -- +(2*1.5,2*1.2727);%(7.5,0.2727);
\draw (6,-1) -- +(2*1.5,2*3.1818);%(7.5,2.1818);
\end{tikzpicture}
}{fig:epigeom}
{Two camera views of the same scene. The world points $A$ and $B$ project to the image planes of the views imaged from $C_l$ (on the left: $A_l$ and $B_l$) and from $C_r$ (on the right: $A_r$ and $B_r$). When $A_l$ is known, its corresponding point $A_r$ (not initially known in practice) is found on the epipolar line joining $e_r$ and $A_r$ in the right image. All epipolar lines in a view meet in the same point ($e_l$ and $e_r$).}

Triangulation or reconstruction of the scene structure given by image pair(s) is usually done on the basis of a known relationship between the cameras. Establishing such a relationship, known as calibrating the cameras, can be done automatically from corresponding points that can be distinguished and matched in each image \cite{trucco1998introductory,hartley03multiview}.

In stereo vision, the same scene of interest is seen by two or more cameras at the same time. The cameras are rarely aligned as perfectly as in the disparity setup described above, however. Epipolar geometry encodes the relations between arbitrarily positioned cameras in a standard way, so that the coordinates of a 3D point seen in several images can be calculated with the same triangulation.

A point seen by camera $C_l$ as the projection of the 3D point $A$ could lie anywhere on the line between $C_l$'s origin and $A$, because every point on a line through the projection centre projects to the same image point. This line is seen as the single point $A_l$. From the other viewpoint, camera $C_r$, the same line corresponds to a line in $C_r$'s image plane, and the real point must project onto that line. The converse applies for any point seen by $C_r$ and the corresponding line in $C_l$'s image. The lines in the image planes are called epipolar lines.

The essential matrix encodes how the two camera poses differ and thereby constrains where corresponding points seen by both cameras may lie. When $A_l$, $A_r$ encode the points in figure \ref{fig:epigeom} in the corresponding camera coordinates, and the baseline difference (the vector from $C_l$ to $C_r$) is marked as $t$, it holds that $(A_l-C_l) \cdot t \times (A_r-C_r) = 0$, as all the vectors are coplanar; the cross product yields a normal to the plane, which is perpendicular to all the vectors, thus the dot product equals 0. \cite{hartley03multiview}
The essential matrix is a matrix form of this relation; it includes the relative rotation and translation of the two cameras.

%\begin{align*} \label{eq:essential}
%	%A_r &= R (A_l - t) \\
%	%A_r^T R T A_l &= 0 \\
%	%A_r^T E A_l &= 0
%	A_l \cdot t \times A_r = 0\\
%	A_l \cdot t \times R A_r = 0\\
%	A_l^T T
%\end{align*}
%where $T$ is the cross-product form of $t$ encoded in a matrix form as below. The essential matrix is obtained as $E = R T$.
%
%Le image. lecture11.pdf. O->p dot (O->O' cross O'->p') = 0
%
%Cross product expressed in a skew-symmetric matrix form is
%\begin{equation}
%\vec a \times \vec b =
%\begin{pmatrix}
%	0 & -a_z & a_y\\
%	a_z & 0 & -a_x\\
%	-a_y & a_x & 0
%\end{pmatrix}
%\begin{pmatrix}
%	b_x\\b_y\\b_z
%\end{pmatrix}
%= \vec c
%\end{equation}

The fundamental matrix relates the corresponding points in stereo images; it has the same meaning as the essential matrix, but it works in the pixel coordinates of the cameras, which are obtained after the projective transform that takes the intrinsics into account. By inverting the sensor-to-pixel matrix $M_i$ (section \ref{sec:coord}) and applying it to pixel coordinates, the corresponding coordinates seen by the camera can be recovered.

%\[
%\hat pAl = M_p A_l\\
%\hat A_r = M_p A_r
%\]
%
%and using it on pixel coordinates, the world coords can be obtained, plugging in to the equation \ref{eq:essential}
%
%\[
%A_r^T E A_l = 0\\
%(M_p^-1 \hat A_r)^T E (M_p^-1 \hat A_l) = 0\\
%\hat A_r^T M_p^-T E M_p^-1 A_l = 0\\
%\hat A_r^T F \hat A_l = 0
%\]
%
%the fundamental matrix
%
%\[
%F = M_p^-T E M_p^-1 = M_p^-T R T M_p^-1
%\]

The fundamental matrix relates pixels and epipolar lines, and as such it is useful in image processing, where the images are described as pixels in a color array (image) and not as physical coordinates.

%Epipole can be interpreted as the location of another camera as seen by other camera.
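To make the use of these relations concrete, the following sketch builds a small synthetic two-view setup and estimates the fundamental and essential matrices from point correspondences with OpenCV's Python interface \cite{opencv}. All numeric values (intrinsics, rotation, baseline) are invented for the example; only the structure of the computation is of interest, and the correspondences would in practice come from the matching step described in the next section.

\begin{verbatim}
import cv2
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: random 3D points in front of both cameras (illustrative).
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(50, 3))
K = np.array([[700.0,   0.0, 320.0],      # assumed intrinsic matrix
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))   # small rotation
t_true = np.array([[0.2], [0.0], [0.0]])                # baseline along x

def project(points, K, R, t):
    x = (K @ (R @ points.T + t)).T
    return x[:, :2] / x[:, 2:]

pts_l = project(X, K, np.eye(3), np.zeros((3, 1)))
pts_r = project(X, K, R_true, t_true)

# Fundamental matrix from pixel correspondences, essential matrix via K,
# and the relative pose (the translation is recovered only up to scale).
F, _ = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_8POINT)
E = K.T @ F @ K
_, R_est, t_est, _ = cv2.recoverPose(E, pts_l, pts_r, K)
\end{verbatim}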
\subsection{Point matching}

Previously, the basics for reconstructing a three-dimensional location from a point pair were introduced, assuming known positions for the same point in the different images. To reconstruct a whole scene from a full image, all pairwise points must be matched, i.e.\ it must be found which pixel in one view represents the same object as a pixel in the other view. Matching is often also called correspondence searching: given a pixel in one image, what is the corresponding pixel in another image taken of the same scene? Pixels correspond to each other if they represent the same physical point.

To describe a pixel's characteristics, its surrounding environment is encoded as a \textit{feature}, an easily recognizable, unique property vector. When discussing features, not every pixel's neighbourhood is used; \textit{good} features are those that have strongly distinguishable properties, such as edges and corners. Edges and corners are essentially high-frequency information in the image, which can be interpreted as a 2D discrete function; thus, they can be detected by a discrete high-pass or band-pass filter, zeroing all but those pixels where a high difference is found \cite{marr1980theory}. The neighbourhood of a pixel where features are detected is often called a window or a patch.

Matching can be done in sparse or dense mode; sparse matching finds a set of features from each image and tries to match them. Dense matching runs through each pixel of one image and tries to find the same point in the other one with e.g.\ template matching \cite{duda1973pattern}; from the coordinate differences in image space between the two images, a disparity map is built. The disparities are then directly transformed to depth values, yielding a point cloud.

Scale-invariant feature transform (SIFT) \cite{lowe1999object} is a commonly used algorithm for local feature detection. A GPU implementation is also available \cite{changchang2007siftgpu}. Invariance to scaling, translation and rotation makes SIFT useful in describing features that can be matched between unaligned images. Other commonly used methods are Speeded Up Robust Features (SURF) \cite{bay2006surf} and the Harris corner detector \cite{harris1988combined}.
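As a rough illustration of sparse matching, the following sketch detects SIFT features in two views and keeps only the distinctive matches using a ratio test. It uses OpenCV's Python interface \cite{opencv}; the file names are placeholders, the ratio threshold of $0.75$ is just a commonly used value, and depending on the OpenCV build SIFT may be provided by the contrib modules instead of the main package.

\begin{verbatim}
import cv2

# img_l.png / img_r.png are placeholder names for two views of the scene.
img_l = cv2.imread("img_l.png", cv2.IMREAD_GRAYSCALE)
img_r = cv2.imread("img_r.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(img_l, None)
kp_r, des_r = sift.detectAndCompute(img_r, None)

# Brute-force matching of descriptors; the ratio test discards
# matches whose best and second-best candidates are too similar.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_l, des_r, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Pixel coordinates of the corresponding points in the two images.
pts_l = [kp_l[m.queryIdx].pt for m in good]
pts_r = [kp_r[m.trainIdx].pt for m in good]
\end{verbatim}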
\subsection{Correspondence and rectification}
\label{sec:rectification}

In order to triangulate a real point from two or more photographs, the location of the point in all images must be known. Rectification is a process that simplifies this search problem by restricting the search to a single dimension. By aligning the cameras such that their images are coplanar, the search only has to be performed on a line that is parallel to the line connecting the camera centers. After rectification, the corresponding lines are axis-aligned (horizontal or vertical) in both images. \cite{hartley03multiview}

% LOL TODO guido gerig image rectification (stereo) slides

\subsection{N-view stereo}

The case of more cameras than a single pair uses the same principles of epipolar geometry. It brings more constraints and dimensions; with three views, for example, the role of the fundamental matrix is taken by its three-view counterpart, the trifocal tensor. \cite{hartley03multiview}
It is also computationally heavier, as more data must be processed; if no information about the camera parameters is available, pairwise checks between the images may become expensive. \cite{wu2013towards}

Multiple baseline stereo is a simple special case for many cameras. When all the cameras lie on the same baseline, calibration is easier and points can be selected by using a minimized sum of errors \cite{okutomi1993multiple}.

The cameras that are used in capturing a scene can be fixed or positioned arbitrarily; in fact, the structure from motion technique \cite{snavely2006photo,fitzgibbon1998automatic} makes it possible to use just one camera that is moved around. Accurate and fast reconstructions are still traditionally done with stereo camera pairs, though. Another common way is to use pairwise cameras for individual reconstructions to build a point cloud for every pair, and then register the clouds together. \cite{bradley2010high}

%\subsection{Structure from motion}
%
%Structure from motion (SfM) refers usually to recovering the structure of a scene from the motion of a single camera.
%For each view, the pose of the camera is determined and the scene structure is extended with the new information in the image.
%(pollefeys)
%Bundle adjustment is used to refine the camera parameters.

%\subsection{Post-processing}

%Uv mapping. Manual work. 3d noise removal; ignore points that have no close pair in other clouds.
%Rendering: "as usual".
%Post: remodel the mesh (face), see what it would look like. Refine parameters to get a similar output as in the photos (normal map etc.), backproject. Use colors and highpass them; assume uniform lightning and locally uniform texture color (bradley). (Simply a rendering technique, that level of detail in 3D structure might not be needed). Still, structured light and/or shading assumptions [shape from single image cues/shading trucco,verri p.225] done too.

\subsection{Reprojection errors}

The quality of the reconstruction is measured by reprojecting the 3D points back to the cameras with the estimated parameters, and calculating the distance between the projected point and the originally measured one. \cite{hartley03multiview}
Bundle adjustment \cite{wu2011multicore} seeks to optimize all camera parameters at once with good performance.

A common way to handle erroneous feature matches is Random Sample Consensus (RANSAC). Random subsets of the sample space are drawn repeatedly, a model is constructed from each small subset, and samples that do not fit that model well are ignored as outliers. The iteration whose model agrees with the most samples is selected. \cite{hartley03multiview}
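The reprojection test can be written down in a few lines. The following self-contained sketch, again with made-up camera matrices and a single synthetic point, triangulates the point from its two projections with OpenCV \cite{opencv} and reports the reprojection error in the left view (close to zero here, since the data are noise-free); in a real pipeline the same measure is evaluated over all points and views.

\begin{verbatim}
import cv2
import numpy as np

# Two assumed projection matrices P = K [R | t]; all values are illustrative.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))
P_r = K @ np.hstack([R, np.array([[0.2], [0.0], [0.0]])])

# A synthetic 3D point (homogeneous) and its two projections; in practice
# these image points come from the point matching step.
X_true = np.array([[0.3], [-0.2], [5.0], [1.0]])
x_l = P_l @ X_true; x_l = x_l[:2] / x_l[2]
x_r = P_r @ X_true; x_r = x_r[:2] / x_r[2]

# Triangulate back to 3D (homogeneous result) and reproject into the left view.
X_h = cv2.triangulatePoints(P_l, P_r, x_l, x_r)
X = X_h / X_h[3]
x_back = P_l @ X
reproj_error = np.linalg.norm(x_back[:2] / x_back[2] - x_l)
print("reprojection error in the left view:", reproj_error)
\end{verbatim}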
{ "alphanum_fraction": 0.7466689889, "avg_line_length": 60.4368421053, "ext": "tex", "hexsha": "3f28a119d4f7fcc49372a4236ba9e5b4a140276b", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-04-27T03:55:10.000Z", "max_forks_repo_forks_event_min_datetime": "2019-04-27T03:55:10.000Z", "max_forks_repo_head_hexsha": "25838d80eeb93d7fc2028c6a69ee745652c9efa4", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "sooda/thesis", "max_forks_repo_path": "semma/tex/stereo-vision.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "25838d80eeb93d7fc2028c6a69ee745652c9efa4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "sooda/thesis", "max_issues_repo_path": "semma/tex/stereo-vision.tex", "max_line_length": 535, "max_stars_count": 6, "max_stars_repo_head_hexsha": "25838d80eeb93d7fc2028c6a69ee745652c9efa4", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "sooda/thesis", "max_stars_repo_path": "semma/tex/stereo-vision.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-15T01:13:52.000Z", "max_stars_repo_stars_event_min_datetime": "2015-06-06T23:57:02.000Z", "num_tokens": 6173, "size": 22966 }
% BEGIN LICENSE BLOCK % Version: CMPL 1.1 % % The contents of this file are subject to the Cisco-style Mozilla Public % License Version 1.1 (the "License"); you may not use this file except % in compliance with the License. You may obtain a copy of the License % at www.eclipse-clp.org/license. % % Software distributed under the License is distributed on an "AS IS" % basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See % the License for the specific language governing rights and limitations % under the License. % % The Original Code is The ECLiPSe Constraint Logic Programming System. % The Initial Developer of the Original Code is Cisco Systems, Inc. % Portions created by the Initial Developer are % Copyright (C) 2006 Cisco Systems, Inc. All Rights Reserved. % % Contributor(s): % % END LICENSE BLOCK %------------------------------------------------------------------------ \chapter{Dynamic Code} %HEVEA\cutdef[1]{section} %------------------------------------------------------------------------ \label{chapdynamic} Support for dynamic code is provided partly for compatibility with Prolog. It is worth noting that \eclipse provides much better primitives (see chapter~\ref{chaparrays}) to support the non-logical storage of information---a major use for dynamic predicates in Prolog. An {\eclipse} predicate can be made \emph{dynamic}. That is, it can have clauses added and removed from its definition at run time. This chapter discusses how to do this, and what the implications are. %------------------------------------------------------------------------ \section{Compiling Procedures as Dynamic or Static} \label{compdynamic} If it is intended that a procedure be altered through the use of \bipref{assert/1}{../bips/kernel/dynamic/assert-1.html} and \bipref{retract/1}{../bips/kernel/dynamic/retract-1.html}, the system should be informed that the procedure will be dynamic, since these predicates are designed to work on dynamic procedures. If \bipref{assert/1}{../bips/kernel/dynamic/assert-1.html} is applied on a non-existing procedure, an error is raised, however the default error handler for this error only declares the procedure as dynamic and then makes the assertion. A procedure is by default static unless it has been specifically declared as dynamic. Clauses of static procedures must always be consecutive, they may not be separated in one or more source files or by the user from the top level. If the static procedure clauses are not consecutive, each of the consecutive parts is taken as a separate procedure which redefines the previous occurrence of that procedure, and so only the last one will remain. However, whenever the compiler encounters nonconsecutive clauses of a static procedure in one file, it raises an exception whose default handler prints a warning but it continues to compile the rest of the file. If a procedure is to be dynamic the {\eclipse} system should be given a specific \defnotion{dynamic declaration}. 
A dynamic declaration takes the form \begin{quote} \notation{:- dynamic \pattern{SpecList}.}\indextt{dynamic/1} \end{quote} The predicate \bipref{is_dynamic/1}{../bips/kernel/dynamic/is_dynamic-1.html} may be used to check if a procedure is dynamic: \begin{quote} \notation{is_dynamic(\pattern{Name/Arity}).} \end{quote} When the goal \begin{quote} \notation{compile(\pattern{Somefile})} \end{quote} is executed, and \about{Somefile} contains clauses for procedures that have already been defined in the Prolog database, each of those procedures are treated in one of two ways. If such a procedure is dynamic, its clauses compiled from \about{Somefile} are added to the database (just as would happen if they were asserted), and the existing clauses are not affected. For example, if the following clauses have already been compiled: \begin{quote} \begin{verbatim} :- dynamic city/1. city(london). city(paris). \end{verbatim} \end{quote} and the file \about{Somefile} contains the following Prolog code: \begin{quote} \begin{verbatim} city(munich). city(tokyo). \end{verbatim} \end{quote} then compiling \about{Somefile} will cause adding the clauses for \predspec{city/1} to those already compiled, as \predspec{city/1} has been declared dynamic. Thus the query \notation{city(X}) will give: \begin{quote} \begin{verbatim} [eclipse 5]: city(X). X = london More? (;) X = paris More? (;) X = munich More? (;) X = tokyo yes. \end{verbatim} \end{quote} If, however, the compiled procedure is static, the new clauses in \about{Somefile} replace the old procedure. Thus, if the following clauses have been compiled: \begin{quote} \begin{verbatim} city(london). city(paris). \end{verbatim} \end{quote} and the file \about{Somefile} contains the following Prolog code: \begin{quote} \begin{verbatim} city(munich). city(tokyo). \end{verbatim} \end{quote} when \about{Somefile} is compiled, then the procedure \predspec{city/1} is redefined. Thus the query \notation{city(X}) will give: \begin{quote} \begin{verbatim} [eclipse 5]: city(X). X = munich More? (;) X = tokyo yes. \end{verbatim} \end{quote} When the \bipref{dynamic/1}{../bips/kernel/dynamic/dynamic-1.html} declaration is used on a procedure that is already dynamic, which may happen for instance by recompiling a file with this declaration inside, the system raises the error 64 (``procedure already dynamic''). The default handler for this error, however, will only erase all existing clauses for the specified procedure, so that when such a file is recompiled several times during its debugging, the system behaves as expected, the existing clauses are always replaced. The handler for this error can of course be changed if required. If it is set to \bipref{true/0}{../bips/kernel/control/true-0.html}, for instance, the \bipref{dynamic/1}{../bips/kernel/dynamic/dynamic-1.html} declaration will be just silently accepted without erasing any clauses and without printing an error message. %------------------------------------------------------------------------ \section{Altering programs at run time} The Prolog \Index{database} can be updated during the execution of a program. {\eclipse} allows the user to modify procedures dynamically by adding new clauses via \bipref{assert/1}{../bips/kernel/dynamic/assert-1.html} and by removing some clauses via \bipref{retract/1}{../bips/kernel/dynamic/retract-1.html}. 
These predicates operate on dynamic procedures; if it is required that the definition of a procedure be altered through assertion and retraction, the procedure should therefore first be declared dynamic (see the previous section). The effect of \bipref{assert/1}{../bips/kernel/dynamic/assert-1.html} and \bipref{retract/1}{../bips/kernel/dynamic/retract-1.html} on static procedures is explained below. The effect of the goal \begin{quote} \preddef{assert(\pattern{ProcClause})}\indextt{assert/1} \end{quote} where \about{ProcClause}\footnote{% It should be remembered that because of the definition of the syntax of a term, to assert a procedure of the form \notation{p :- q,r} it is necessary to enclose it in parentheses: \notation{assert((p:-q,r))}.} is a clause of the procedure \about{Proc}, is as follows. \begin{enumerate} \item If \about{Proc} has not been previously defined, the assertion raises an exception, however the default handler for this exception just declares the given procedure silently as dynamic and executes the assertion. \item If \about{Proc} is already defined as a dynamic procedure, the assertion adds \about{ProcClause} to the database after any clauses already existing for \about{Proc}. \item If \about{Proc} is already defined as a static procedure, then the assertion raises an exception. \end{enumerate} \noindent The goal \begin{quote} \preddef{retract(\pattern{Clause})}\indextt{retract/1} \end{quote} will unify \about{Clause} with a clause on the dynamic database and remove it. If \about{Clause} does not specify a dynamic procedure, an exception is raised. {\eclipse}'s dynamic database features the so-called \defnotion{logical update semantics}. This means that any change in the database that occurs as a result of executing one of the built-ins of the abolish, assert or retract family affects only those goals that start executing afterwards. For every call to a dynamic procedure, the procedure is virtually frozen at call time. \section{Differences between static and dynamic code} \begin{itemize} \item Only dynamic procedures can have clauses added or removed at run time. \item Matching clauses (section~\ref{matching}) are not supported by dynamic code. A runtime error (about calling an undefined procedure \notation{-?->/1}) will be raised when executing dynamic code that has a matching clause head. \item Clauses for a dynamic procedure need not be consecutive. \item Source tracing is not supported for dynamic procedures. \item \predspec{assert/1}, \predspec{retract/1} and \predspec{clause/1} do not perform clause transformation on the clause. If clause transformation is required, this can be done explicitly with \bipref{expand_clause/2}{../bips/kernel/compiler/expand_clause-2.html} before. \item Internally, dynamic procedures are represented differently from static procedures. The execution of dynamic procedures will generally be slower than for static procedures. \end{itemize} %HEVEA\cutend
{ "alphanum_fraction": 0.7513036075, "avg_line_length": 36.4224806202, "ext": "tex", "hexsha": "a5195ca5968627c37e01731bf4a411ce980493b1", "lang": "TeX", "max_forks_count": 55, "max_forks_repo_forks_event_max_datetime": "2022-03-31T05:00:03.000Z", "max_forks_repo_forks_event_min_datetime": "2015-02-03T05:28:12.000Z", "max_forks_repo_head_hexsha": "06a9f54721a8d96874a8939d8973178a562c342f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lambdaxymox/barrelfish", "max_forks_repo_path": "usr/eclipseclp/documents/userman/umsdynamic.tex", "max_issues_count": 12, "max_issues_repo_head_hexsha": "06a9f54721a8d96874a8939d8973178a562c342f", "max_issues_repo_issues_event_max_datetime": "2020-03-18T13:30:29.000Z", "max_issues_repo_issues_event_min_datetime": "2016-03-22T14:44:32.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lambdaxymox/barrelfish", "max_issues_repo_path": "usr/eclipseclp/documents/userman/umsdynamic.tex", "max_line_length": 80, "max_stars_count": 111, "max_stars_repo_head_hexsha": "06a9f54721a8d96874a8939d8973178a562c342f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lambdaxymox/barrelfish", "max_stars_repo_path": "usr/eclipseclp/documents/userman/umsdynamic.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-01T23:57:09.000Z", "max_stars_repo_stars_event_min_datetime": "2015-02-03T02:57:27.000Z", "num_tokens": 2284, "size": 9397 }
\documentclass[a4paper]{article} \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usepackage{amsmath} \usepackage{graphicx} \title{MATH 542 Homework 10} \author{Saket Choudhary\\[email protected]} \begin{document} \maketitle \subsection*{Problem 3c.1} \subsubsection*{Problem 3c.1.a} \begin{align*} var[S^2] &= Var[\frac{Y'(I_n-P)Y}{n-p}]\\ &= \frac{1}{(n-p)^2}Var[Y'(I_n-P)Y]\\ &= \frac{1}{(n-p)^2}\times (2\sigma^4(n-p))\\ &= \frac{2\sigma^4}{n-p} \end{align*} \subsubsection*{Problem 3c.1.b} \begin{align*} A_1 &= \frac{1}{n-p+2}[I_n-X(X'X)^{-1}X']\\ &= \frac{R}{n-p+2}\\ E[(Y'A_1Y-\sigma^2)^2] &= Var(Y'A_1Y-\sigma^2) + (E[Y'A_1Y-\sigma^2])^2\\ &= Var(Y'A_1Y) + (E[Y'A_1Y]-\sigma^2)^2\\ &= \frac{Var(Y'RY)}{(n-p+2)^2} + ( \frac{E[Y'RY]}{(n-p+2)}-\sigma^2)^2\\ &= \frac{2\sigma^4(n-p)}{(n-p+2)^2} + (\frac{\sigma^2(n-p)}{n-p+2}-\sigma^2)^2 \text{ using 3.12 from textbook}\\ &= \frac{2\sigma^4(n-p)}{(n-p+2)^2} + \frac{4\sigma^4}{(n-p+2)^2}\\ &= \frac{2\sigma^4}{n-p+2} \end{align*} \subsubsection*{Problem 3c.1.c} \begin{align*} E[Y'A_1Y] &= \frac{E[Y'RY]}{n-p+2}\\ &= \frac{\sigma^2(n-p)}{n-p+2}\text{ using 3.12 from textbook}\\ MSE[Y'A_1Y] &= E[(Y'A_1Y-\sigma^2)^2]\\ &= \frac{2\sigma^4}{n-p+2}\\ MSE[S^2] &= E[S^2-(E[S^2])^2]\\ &= Var(S^2)\\ &= \frac{2\sigma^4}{n-p}\\ &<\frac{2\sigma^4}{n-p+2}\\ &\leq MSE[Y'A_1Y] \end{align*} \section*{Problem 3d.1} \subsubsection*{Problem 3d.1.a} Given $Y_i \sim N(\theta, \sigma^2)$ or $Y_i = \theta + \epsilon_i$ where $\epsilon_i \sim N(0, \sigma^2)$ \\ $\mathbf{Y} = \mathbf{1_n}\theta + \mathbf{\epsilon}$ thus $\hat{\theta} = (\mathbf{1_n}'\mathbf{1_n})^{-1}\mathbf{1_n'}Y = \frac{1}{n}\mathbf{1_n'}Y = \bar{Y}$ Thus, using theorem $3.5(ii)$ $\bar{Y}$ and $S^2=\sum_i(Y_i-\bar{Y})^2$ are independent \subsubsection*{Problem 3d.1.b} Borrowing from part (a) we have: $RSS=Q=\sum_i(Y_i-\bar{Y})^2$ $\implies$ using theorem $3.5(iii)$: $RSS/\sigma^2 \sim \chi^2_{n-1}$ \subsection*{Problem 3d.2} \begin{align*} RSS &= Y'(I_n-P)Y\\ &= Y'(I_n-P)Y-\beta'X'(I-P)(Y-X\beta)+Y'(I-P)(-X\beta) \text{ both terms are zero using PX=P and P=P'} \\ &= (Y-X\beta)'(I_n-P)(Y-X\beta)\\ &= \epsilon'(I_n-P)\epsilon \end{align*} \begin{align*} (\hat{\beta}-\beta)'X'X(\hat{\beta}-\beta) &= Z'Z\\ Z &= X(\hat{\beta}-\beta)\\ &= X((X'X)^{-1}X'Y-(X'X)^{-1}X'X\beta)\\ &= P(Y-X\beta)\\ &= P\epsilon\\ (\hat{\beta}-\beta)'X'X(\hat{\beta}-\beta) &= \epsilon'P'P\epsilon \end{align*} \begin{align*} Cov[RSS, (\hat{\beta}-\beta)'X'X(\hat{\beta}-\beta)] &= Cov[\epsilon'(I_n-P)\epsilon, \epsilon'P'P\epsilon]\\ &= Cov[\epsilon'(I_n-P)\epsilon, \epsilon'PP\epsilon] \text{ using } P'=P\\ &= Cov[\epsilon'(I_n-P)\epsilon, \epsilon' P\epsilon] \text{ using } PP = P\\ &= \sigma^2(I-P)P \\ &= 0 \end{align*} Thus, $RSS$ and $(\hat{\beta}-\beta)'X'X(\hat{\beta}-\beta)$ are indepedent \subsection*{Problem 3.12} \begin{align*} Y &= X\beta + \epsilon\\ \bar{Y} &= \frac{1}{n}\mathbf{1_n}Y\\ \sum_i(Y_i-\hat{Y_i})^2 &= (Y-X\hat{\beta})'(Y-X\hat{\beta})\\ &= (Y-X(X'X)^{-1}X'Y)'(Y-X(X'X)^{-1}X'Y)\\ &= (Y-PY)'(Y-PY)\\ &= Y'(I-P)'(I-P)Y\\ &= Y'(I-P)Y\text{ using idempotency of } I-P\\ Cov[\frac{1}{n}\mathbf{1_n}Y, (I-P)Y]\\ &= \frac{1}{n}\mathbf{1_n}Cov[Y](I-P)'\\ &= \sigma^2(n-p)\frac{1}{n}\mathbf{1_n}(I-P)' \end{align*} Since the first column of the design matrix is all 1, $1_n$ belongs to the column space of $X$ and is orthogonal to $(I-P)'$ (P being the projection matrix) $\implies Cov[\frac{1}{n}\mathbf{1_n}Y, (I-P)Y] = 0$ \end{document}
{ "alphanum_fraction": 0.580525414, "avg_line_length": 33.3523809524, "ext": "tex", "hexsha": "ae055eb915ff5272eeed25ddd16faa0defb3d017", "lang": "TeX", "max_forks_count": 12, "max_forks_repo_forks_event_max_datetime": "2022-02-10T03:21:09.000Z", "max_forks_repo_forks_event_min_datetime": "2015-09-25T19:06:45.000Z", "max_forks_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "saketkc/hatex", "max_forks_repo_path": "2016_Spring/MATH-542/HW10/hw10.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_issues_repo_issues_event_max_datetime": "2015-09-23T21:21:52.000Z", "max_issues_repo_issues_event_min_datetime": "2015-09-16T23:11:00.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "NeveIsa/hatex", "max_issues_repo_path": "2016_Spring/MATH-542/HW10/hw10.tex", "max_line_length": 210, "max_stars_count": 19, "max_stars_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "NeveIsa/hatex", "max_stars_repo_path": "2016_Spring/MATH-542/HW10/hw10.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-10T03:20:47.000Z", "max_stars_repo_stars_event_min_datetime": "2015-09-10T02:45:33.000Z", "num_tokens": 1659, "size": 3502 }
\section{dup}
\index{dup}
\begin{shaded}
\begin{alltt}
/** dup

Duplicate the input object.  If n is an asterisk, the initial input is
output forever and is never consumed.  If n is zero or more, the object
is output along with n copies of it.  If n is -1, then dup acts like
hole and just consumes its input.

\end{alltt}
\end{shaded}
{ "alphanum_fraction": 0.7343283582, "avg_line_length": 23.9285714286, "ext": "tex", "hexsha": "3d60bd4062752955ffe32a7f1687549424d273c4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_forks_repo_licenses": [ "ICU" ], "max_forks_repo_name": "RexxLA/NetRexx", "max_forks_repo_path": "documentation/njpipes/dup.tex", "max_issues_count": 25, "max_issues_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_issues_repo_issues_event_max_datetime": "2022-02-01T16:14:50.000Z", "max_issues_repo_issues_event_min_datetime": "2022-01-24T12:13:53.000Z", "max_issues_repo_licenses": [ "ICU" ], "max_issues_repo_name": "RexxLA/NetRexx", "max_issues_repo_path": "documentation/njpipes/dup.tex", "max_line_length": 75, "max_stars_count": null, "max_stars_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_stars_repo_licenses": [ "ICU" ], "max_stars_repo_name": "RexxLA/NetRexx", "max_stars_repo_path": "documentation/njpipes/dup.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 102, "size": 335 }
\documentclass[a4paper,12pt]{extarticle} \usepackage[top=3cm]{geometry} \usepackage{titling} \setlength{\droptitle}{12em} \usepackage[scaled]{helvet} \renewcommand\familydefault{\sfdefault} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{hyperref} \usepackage[usenames, dvipsnames]{color} \definecolor{mygray}{rgb}{0.5, 0.5, 0.5} \definecolor{mygreen}{rgb}{0,0.6,0} \definecolor{myblue}{RGB}{57, 135, 189} \usepackage{listings} \lstset{ language=java, basicstyle=\ttfamily\footnotesize, keywordstyle=\color{myblue}, commentstyle=\color{mygray}, stringstyle=\color{mygreen}, frame=single, showstringspaces=false, } \title{\textbf{Software Engineering\\\vspace{5mm} CPS2002\\\vspace{5mm} Assignment Report}} \author{\LARGE Martin Bartolo - 0218300L\vspace{1mm}\\ \LARGE Mikhail Cassar - 0319599M\vspace{3mm}\\ \large BSc (Hons) Computing Science and Mathematics} \date{Assignment due 27\textsuperscript{th} May 2019} \setcounter{secnumdepth}{0} \begin{document} \setlength{\parindent}{10pt} \setlength{\footskip}{50pt} \pagenumbering{arabic} \maketitle \thispagestyle{empty} \newpage \tableofcontents \newpage \section{Introduction} The aim of this assignment was to collaboratively work on a software project, with the main focus being on rigorous software testing and the use of Git. Our first task was to set up our environments, namely our Git repository on Github and our Jenkins environment on the University Jenkins server. First, we initialised our Git repository and ensured that each team member could commit changes and push and pull them from Github. When this was ensured, we set up our Jenkins environment to work with Maven and scan for changes from Github every few minutes. Whenever changes are found, they are built and run with a detailed code coverage report and test results being displayed using the Emma plug-in. Our progress at this point can be seen by viewing the "Part1" tag on our Github repository. After completing our set-up, we were ready to start working on the two remaining tasks which will be discussed in detail throughout the remainder of this report. \newpage \section{Initial Game Design} \noindent The design of the game consists of sectioning the game into different classes, here we will talk about the contents of those classes while also explaining how they all work together.\\ \noindent The information provided will be regarding the basic version of the game i.e. before any enhancements to the game were made. The basic version of the game can be found under the tag "Part2". \subsection{Game Class} \noindent The \textbf{Game} class contains the most essential parts of the game, this is because it encompasses all the other parts of the game. Also, this class contains the \textit{Main} method and this is where the game executes. Below is a listing showing the main game loop. 
\begin{lstlisting} public static void main(String[] args) { //This variable is used to hold the //previous directions taken by a given player String directions; System.out.print("Welcome to the Treasure Map Game by Martin Bartolo and Mikhail Cassar"); Game game = new Game(); //Run startGame method to initialise players and map game.startGame(game); // will be set to true when the treasure //is found by one of the players boolean foundTreasure = false; //An array which holds all the players //who found the treasure on a given turn //This is just in case more than one //player finds it on the same turn boolean[] winners = new boolean[game.players.size()]; //Generating the initial html files here before //there are any moves //Generating an html file for each player in the game for(int i = 0; i < game.players.size(); i++){ if(game.generateHtmlFile(i, game.map.mapSize, " ") == 0){ System.err.println("Could not generate HTML files"); } } //Main game loop while (true) { //Increment amount of turns //which have been played game.turns++; System.out.println("------------------------------\n"); //Get each players desired direction //of movement for the current turn game.directionsLoop(); //Generating an html file for each //player's current state for(int i = 0; i < game.players.size(); i++){ //Obtaining the last 4 directions of each player directions = game.getPreviousDirections(i); if(game.generateHtmlFile(i, game.map.mapSize ,directions) == 0){ System.err.println("Could not generate HTML files"); } } //Go through each player in the //game and check if they found the treasure //Mark the players who have found the treasure int i = 0; for(Player player: game.players){ if(player.foundTreasure){ foundTreasure = true; winners[i] = true; } i++; } //If the treasure has been found by one of the players if (foundTreasure) { for(i = 0; i < winners.length; i++){ if (winners[i]){ System.out.println("Congratualtions player " + (i+1) + ", you have found the treasure in " + game.turns + " turns!"); } } break; } } System.out.println("---------------------------------\n"); //After a player wins the game the user is able to //prompted to exit the game game.exitGame(game); } //Method to initialise map along with players and // their starting positions private void startGame(Game game) { game.playerNum = getPlayerNum(); map.mapSize = getMapSize(); map.generate();// Generate map //In this loop all the Player objects are created //along with their starting position in the map for (int i = 0; i < game.playerNum; i++) { //A new player object is made Player player = new Player(); //The random position of the player is //set to a grass tile player.position = player.setStartingPosition( map.getGrassTiles()); //The created player is added to the ArrayList of //players players.add(player); } } \end{lstlisting} \vspace{4mm} \noindent The design of the \textbf{Game} class needed to include an efficient method on how to organise the implementation of the \textbf{Player} objects and the \textbf{Map} object. 
All of this processing is first done in the \textit{startGame} method, since this is where the initialisation and setting up of the game takes place.\\ \newpage \begin{lstlisting} //Method to initialise map along with players and // their starting positions private void startGame(Game game) { game.playerNum = getPlayerNum(); map.mapSize = getMapSize(); map.generate();// Generate map //In this loop all the Player objects are created //along with their starting position in the map for (int i = 0; i < game.playerNum; i++) { //A new player object is made Player player = new Player(); //The random position of the player is //set to a grass tile player.position = player.setStartingPosition(map.getGrassTiles()); //The created player is added to the ArrayList of //players players.add(player); } } \end{lstlisting} \vspace{4mm} \noindent This method is called in the beginning of the \textbf{Game} class. The number of players and the map size is obtained from the user initially and with that information the game can be set up. First the \textbf{Map} object created will have a tile map generated for it. This is taken care of in the \textit{generate} method in the \textbf{Map} class.\\ \begin{center} \includegraphics[scale=0.5]{BasicMap2.png}\\ Asking the player to input the number of players and the size of the map \end{center} \noindent After generating the tile map, the \textbf{Player} objects are created, initialised (by being given a starting position) and placed into an array list. By doing this the \textbf{Player} objects are easier to handle.\\ \noindent After setting up the game and thus exiting the \textit{startGame} method, an initial map file is generated for each player using the\textit{generateHtmlFile} method in the \textbf{Game} class. This method generates an html file for each player with all the tiles being gray except the tiles which the current player has discovered.\\ \noindent During the actual game, the same \textit{generateHtmlFile} method is used when generating the HTML files at the end of turn. However, here the full functionality of the method is used so the previous directions and the previous positions of the current player are displayed in the HTML file. Given that there are various tile types within the game, a conditional statement is used within the \textit{generateHtmlFile} method which displays the corresponding colour depending on the tile type. The code mentioned is displayed below. \begin{lstlisting} switch(tileType){ //Grass tile case 0: if(playerHere){ htmlText.append("<div class=\ "cellGreen\">" + "<img src=\"player.png\" alt=\"player\">" + "</div>\n"); } else{ htmlText.append("<div class=\"cellGreen\"></div>\n"); } break; //Water tile case 1: if(playerHere){ htmlText.append("<div class=\"cellBlue\">" + "<img src=\"player.png\" alt=\"player\">" + "</div>\n"); } else{ htmlText.append("<div class=\"cellBlue\"></div>\n"); } break; //Treasure Tile case 2: if(playerHere){ htmlText.append("<div class=\"cellYellow\">" + "<img src=\"player.png\" alt=\"player\">" + "</div>\n"); } else{ htmlText.append("<div class=\"cellYellow\"></div>\n"); } break; default: //No need to check for player here as a player can //never be on a gray tile htmlText.append("<div class=\"cellGray\"></div>\n"); break; } } \end{lstlisting} \vspace{4mm} \noindent After setting up the map files, the game starts. A loop is made which iterates until a player wins. 
After each round of play, each \textbf{Player} object will be checked to see if they have landed on the treasure tile and if so they will be declared as a winner.\\ \begin{center} \includegraphics[scale=1]{BasicMap3.png} \hspace{1mm} Given display when a player wins \end{center} \noindent During each turn of the game, each player is asked to input a direction depending on where they want to move. The \textit{directionsLoop} method manages this.\\ \begin{lstlisting} //Loop through each player in ArrayList for (Player player : players) { System.out.println("Player " + (players.indexOf(player) + 1) + ", please choose a direction (u, d, l or r)."); validMove = false; while (!validMove) { direction = 'x'; //Make sure that user input is //valid (i.e. one of u, d, l or r) while(direction == 'x' || direction == 'y') { scanner = new Scanner(System.in); direction = validateDirectionInput(scanner); } //Check if move is within map and execute if it is if (checkOutOfBounds(direction, player, map.mapSize) == 1) { validMove = true; //Change player's position variables to new position player.move(direction); //Triggers event for corresponding tile type map.evaluateCurrentPlayerTile(player); } } } \end{lstlisting} \vspace{4mm} \newpage \noindent As mentioned before, the HTML file for each player is updated at the end of each turn. Also, at the end of each turn the treasure flag of each player is checked to determine if a player has landed on the treasure tile. If at least one player has stepped on the treasure tile, the game stops at the end of the turn and the winners are displayed.\\ \noindent All user input within the game is taken care of by validation methods, so if the user gives an incorrect input the program will not crash but handle the input gracefully.\\ \newpage \subsection{Map Class} \noindent The \textbf{Map} class is used to generate the tile map structure which the game is played on. Apart from that, the tile events which occur are also taken care of by this class.\\ \noindent The map generation which occurs in the \textit{startGame} method constructs a 2-dimensional array called \textit{tiles}, which is the size of the map in both dimensions. The array holds the tile type of each tile position in the game. Each position in the \textit{tiles} array corresponds to a tile in the map. Below one can see how each tile is randomly generated. \begin{lstlisting} //Keep on looping until all tiles are randomly generated while(generatedTilesNum(generatedTiles) != mapSize*mapSize){ //Randomly generate a pair of tiles //Numbers generated will be from 0 to mapSize-1 randomPair[0] = random.nextInt(mapSize); randomPair[1] = random.nextInt(mapSize); //Now checking if the tile has already been obtained //in the generated tiles. //This is done by checking if the boolean value of //the tile is true if(generatedTiles[randomPair[0]][randomPair[1]][0]){ //Tile has already been generated //So the user must get another tile which has not //already been generated //Keeps on looping until a new tile is generated while(generatedTiles[randomPair[0]][randomPair[1]][0]){ randomPair[0] = random.nextInt(mapSize); randomPair[1] = random.nextInt(mapSize); } } } \end{lstlisting} \vspace{4mm} \noindent After a tile is randomly generated, a tile type has to be assigned. This is done using the below code within the same \textit{generate} method. The tile type is an integer from 0 to 2, since there are three types in the current implementation of the game. 
However, if a new tile type is to be included in the game, this can easily be done with this implementation. Below is the code showing how a tile type is assigned to a given tile position. \begin{center} \includegraphics[scale=1]{greyTile.png} \hspace{1mm} \includegraphics[scale=1]{greenTile.png} \hspace{1mm} \includegraphics[scale=1]{blueTile.png} \hspace{1mm} \includegraphics[scale=1]{yellowTile.png} \hspace{1mm} Each tile implemented within the game \end{center} \begin{lstlisting} //Keep on looping until the current tile is given a tile type do { //Set the tile type for the newly generated tile //A random number from 0 to 2 is obtained which //corresponds to a tile type tileType = random.nextInt(3); //A switch statement is used to go through //each of the possible tile types switch (tileType) { //Grass tile case 0: tiles[randomPair[0]][randomPair[1]][0] = 0; //The counter is updated since another //grass tile has been added to the map grassCount += 1; break; //Water tile case 1: if (waterCount == waterMaxTiles) { //When the maximum number of water tiles is reached, //no tile type is assigned. //This is so that the loop will continue until //a tile type which is not yet full is obtained. continue; } else { tiles[randomPair[0]][randomPair[1]][0] = 1; //The counter is updated since another water tile has //been added to the map waterCount += 1; } break; //Treasure tile case 2: if (treasureCount == treasureMaxTiles) { //When the treasure tile has already been placed //no tile type is assigned continue; } else { tiles[randomPair[0]][randomPair[1]][0] = 2; //The counter is updated since a treasure // tile has been added to the map treasureCount += 1; } break; default: //This case is accessed only when a random //number which is not 0,1 or 2 is obtained System.err.println("Invalid random number obtained"); break; \end{lstlisting} \vspace{4mm} \newpage \subsection{Player Class} \noindent The Player class manages the individual properties of a player, such as the tile positions the player has visited and the directions the player has traversed.\\ \noindent When a \textbf{Player} object is created, the positions the player has visited and the directions the player has moved in are each saved in an array list. \begin{lstlisting} //This array list is used to hold the previous //positions of the player ArrayList<Position> positions = new ArrayList<Position>(); //This array list is used to hold the previous directions //of the player ArrayList<String> directions = new ArrayList<String>(); \end{lstlisting} \vspace{4mm} \noindent In the \textit{startGame} method, a starting position is set for each new \textbf{Player} object which is created. This is done using the \textit{setStartingPosition} method, which takes two random values from a \textit{grassTiles} array holding the tile positions of all grass tiles within the tile map. These two numbers are added to the \textit{positions} array list of the player and the player is displayed starting at this position.
\begin{lstlisting} //This method sets the starting position of a player Position setStartingPosition(int[][] grassTiles){ Random random = new Random(); Position position = new Position(0, 0); //Obtaining the length of grassTiles so as to be able to //know from which range to obtain a random number int grassCount = grassTiles.length; //random number from 0 to length of grassTiles is obtained int grassTilesIndex = random.nextInt(grassCount); //The start position is set position.x = grassTiles[grassTilesIndex][0]; position.y = grassTiles[grassTilesIndex][1]; //The current position of the player is set //to the start position this.position = position; //The start positions is added to the created player addToPositions(position.x, position.y); return position; } \end{lstlisting} \vspace{4mm} \newpage \noindent This class also modifies the current position of the current player through the use of the \textit{move} method. Depending on which direction the player moves, the current position of the player is modified and the added to the \textit{positions} array list, the given direction is added to the \textit{directions} array list. \begin{lstlisting} //Method to move the player's position according to a //given direction void move(char direction){ //A switch statement is used to represent all //possible directions switch(direction){ case 'l': // change player's position this.position.x --; // add position to list of previous positions addToPositions(position.x, position.y); // add direction to list of previous directions directions.add("left"); break; case 'r': // change player's position this.position.x ++; // add position to list of previous positions addToPositions(position.x, position.y); // add direction to list of previous directions directions.add("right"); break; case 'u': // change player's position this.position.y --; // add position to list of previous positions addToPositions(position.x, position.y); // add direction to list of previous directions directions.add("up"); break; case 'd': // change player's position this.position.y ++; // add position to list of previous positions addToPositions(position.x, position.y); // add direction to list of previous directions directions.add("down"); break; default: break; } } \end{lstlisting} \vspace{4mm} \newpage \begin{center} \includegraphics[width=\textwidth]{BasicMap1.png}\\ The initial position of a player when the game starts \end{center} \newpage \subsection{Position Class} The \textbf{Position} class is used to simplify the way in which the program interprets the player's location within the tile map.\\ \noindent As seen in the class diagram, a \textbf{Position} class is used by \textbf{Player} objects, so any instances of \textbf{Position} objects are within the \textbf{Player} objects. \begin{lstlisting} public class Position { //Value of each coordinate int x; int y; //Method to display a position object @Override public String toString() { return "Position{" + "x=" + x + ", y=" + y + '}'; } //Constructor for the player object Position(){ } //Constructor for the position object when both //x and y values are given Position(int px, int py) { x = px; y = py; } \end{lstlisting} \vspace{4mm} \newpage \section{Enhancements} \subsection{Different Map Types} The design pattern chosen for this enhancement was the \textbf{factory design pattern}. Since this enhancement was centred around the map's creation then it was clear that a creational design pattern had to be chosen. 
The main requirements to keep in mind were that two initial map types had to be implemented, while more map types could easily be added in the future. Therefore, the Factory design pattern was the most suitable for the task.\\ The class diagram of the enhancement can be seen below.\\ \begin{center} \includegraphics[width=\textwidth]{Enhancement1CD.png}\\ \end{center} \newpage \noindent Our implementation is centred around a \textbf{Map} interface which contains each method which will be used by the different map types. Each type of map then has a class which implements this main \textbf{Map} interface, as can be seen in the class diagram above. This design allows for the easy implementation of additional map types in the future.\\ \noindent A factory method is present in the \textbf{MapCreator} class which is passed the map's type and its size. This class has a creator subclass for every map type, allowing for easy creation of additional map types in the future. An instance of the correct creator subclass will then be made and used to create the map. This can be seen in the code snippet below. \vspace{-1mm} \begin{lstlisting} //Factory Method Map createMap(String type, int mapSize){ MapCreator creator; if(type.equals("safe")){ creator = new SafeMapCreator(mapSize); } else if(type.equals("hazardous")){ creator = new HazardousMapCreator(mapSize); } else{ creator = null; System.err.println("Invalid map type"); } if (creator != null) { return creator.create(); } else{ return null; } } \end{lstlisting} \vspace{4mm} \noindent The map is then created in the appropriate creator subclass by setting the map size and calling the generate method in the map type's class (either the \textbf{SafeMap} or the \textbf{HazardousMap} class in our implementation, but more can easily be added). A code snippet of this create method for the "safe" map type is shown below. \vspace{-1mm} \begin{lstlisting} //Method to create and return a safe map @Override public Map create(){ //getInstance method used here to obtain a //static instance of the safe map SafeMap map = SafeMap.getInstance(); map.setMapSize(mapSize); map.generate(); return map; } \end{lstlisting} \noindent In the generate method for the safe map, the maximum number of water tiles is set to 10\% of the total number of tiles, rounded down to the nearest integer. This code snippet is shown below. \begin{lstlisting} //The maximum number of water tiles in a map is set to //10% of the total tiles int waterMaxTiles = (int) Math.floor((mapSize*mapSize) * 0.1); \end{lstlisting} \vspace{4mm} \noindent In the generate method for the hazardous map, the maximum number of water tiles is set to 25\% of the total number of tiles, rounded up to the nearest integer. This code snippet is shown below. \begin{lstlisting} //The maximum number of water tiles in a map is set to //25% of the total tiles int waterMaxTiles = (int) Math.ceil((mapSize*mapSize) * 0.25); \end{lstlisting} \vspace{4mm} \noindent Apart from these changes, the map is generated as it was in the generate method in the basic version of the program before the enhancements.\\ \noindent When running the program, the user is asked what map type they would like to play with. This can be seen below. \begin{center} \includegraphics[width=\textwidth]{Figure1.png} \end{center} \noindent After the user chooses the map type, the map is created as discussed on the previous page and then the game can be played.
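\noindent To make the extensibility mentioned above concrete, the sketch below shows how a hypothetical third map type (called "swamp" here purely for illustration, with its own water-tile limit) could be added. The \textbf{SwampMap} and \textbf{SwampMapCreator} classes are not part of our implementation; they would simply mirror the existing \textbf{SafeMap} and \textbf{SafeMapCreator} pair, and the only other change needed would be one extra branch in the factory method shown earlier.
\begin{lstlisting}
//Hypothetical creator subclass for a third map type
//It mirrors SafeMapCreator and HazardousMapCreator
public class SwampMapCreator extends MapCreator {

    private final int mapSize;

    SwampMapCreator(int mapSize){
        this.mapSize = mapSize;
    }

    //Method to create and return a swamp map
    @Override
    public Map create(){
        //getInstance method used here to obtain a
        //static instance of the swamp map
        SwampMap map = SwampMap.getInstance();
        map.setMapSize(mapSize);
        map.generate();
        return map;
    }
}

//Extra branch to add to the createMap factory method:
//  else if(type.equals("swamp")){
//      creator = new SwampMapCreator(mapSize);
//  }
\end{lstlisting}
\vspace{4mm}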
On the next page is a comparison between the ending screen of a game played by 4 players in 2 teams on a 6x6 safe map and a game played by the same players on a 6x6 hazardous map. Notice the different amounts of water tiles encountered between the 2 map types. \newpage \begin{center} \includegraphics[scale=0.5]{Figure2.png}\\ Ending screen of game using a safe map \end{center} \begin{center} \includegraphics[scale=0.5]{Figure3.png}\\ Ending screen of game using a hazardous map \end{center} \newpage \subsection{Single Map File} The design pattern chosen for this enhancement was the \textbf{singleton design pattern}. Since the problem involved the creation of only one map file it was clear that a creational design pattern was needed. The problem which needed to be solved here was that there was only one instance of a map file throughout the whole game. This could easily be taken care of using the singleton design pattern.\\ \noindent The class diagram of the enhancement can be seen below.\\ \begin{center} \includegraphics[width=\textwidth]{Enhancement2CD.png}\\ \end{center} \noindent Given the first enhancement led to the creation of the \textbf{safeMap} class and \textbf{hazardousMap} class, a singleton design pattern for each subclass has to be implemented since they produce different objects.\\ \noindent After the user chooses which map type they want, through the process previously defined in the first enhancement, the program goes to the corresponding \textbf{Map} subclass through the \textbf{MapCreator} class.\\ \noindent In the \textbf{SafeMap} and \textbf{HazardousMap} classes, a private static instance is initialised. By doing this, the \textbf{Map} object cannot be used in any other method and since it is static, the object will be directly affected by every modification which occurs throughout the lifetime of the program. This means that given only one map, any change which occurs will affect that single map file and so there is only one map file which is being affected. This can be seen in the screenshot below.\\ \begin{center} \includegraphics[width=\textwidth]{Singleton1.png}\\ Local repository containing a single map file \end{center} \noindent A \textit{getInstance} method is also created in the subclasses. This method declares the previously initialised static object created. However, the object is only declared if the object is null. This is so that it would lead to the creation of a new object and so this method takes care of that.\\ \begin{lstlisting} private static SafeMap map = null; //Constructor for SafeMap private SafeMap(){ } //This method is used to obtain the static //instance of the object static SafeMap getInstance(){ if(map == null){ map = new SafeMap(); } return map; } \end{lstlisting} \vspace{4mm} \newpage \begin{lstlisting} //A static instance is created to implement the //singleton design pattern private static HazardousMap map = null; //constructor for HazardousMap //Constructor is set to private to implement the //singleton design pattern private HazardousMap(){ } //This method is used to obtain the static //instance of the object static HazardousMap getInstance(){ if(map == null){ map = new HazardousMap(); } return map; } \end{lstlisting} \vspace{4mm} \noindent In the \textbf{MapCreator} class, the \textit{getInstance} method is used so as to create a \textbf{Map} object. The methods for each sub-class are very similar. 
\begin{lstlisting} //Method to create and return a safe map @Override public Map create(){ //getInstance method used here to obtain // a static instance of the safe map SafeMap map = SafeMap.getInstance(); map.setMapSize(mapSize); map.generate(); return map; } \end{lstlisting} \vspace{4mm} \begin{lstlisting} //Method to create and return a hazardous map @Override public Map create(){ //getInstance method used here to obtain //a static instance of the hazardous map HazardousMap map = HazardousMap.getInstance(); map.setMapSize(mapSize); map.generate(); return map; } \end{lstlisting} \vspace{4mm} \newpage \noindent In the initial version of the game, an HTML file was created for each player and the HTML file was updated at the end of each turn using the \textit{changeHtmlFile} method. \begin{lstlisting} //A file object is being created where the name is given //depending on the number of the player File file = new File("map.html"); //The actual file is created here try { //If file already exists set return value //to 2 to mark that it is being overwritten if(!file.createNewFile()){ returnValue = 2; } } catch (IOException e) { e.printStackTrace(); returnValue = 0;//Set return value to error } \end{lstlisting} \vspace{4mm} \noindent Now, after implementing the singleton design pattern, we instead have only one file, which is updated to show the corresponding player's map when it is that player's turn to choose their desired direction. \begin{lstlisting} //Loop through each player in ArrayList for (Player player : players) { //At the end of the current player's turn //the main html file is changed teams.get(getTeamIndex(player)).changeHtmlFile (players.indexOf(player), map, player); System.out.println("Player " + (players.indexOf(player) + 1) + ", please choose a direction (u, d, l or r)."); } \end{lstlisting} \noindent This implementation can be used to display the map file for hundreds of players playing together. \newpage \subsection{Teams} The design pattern chosen for this enhancement was the \textbf{composite design pattern}. This enhancement required us to rethink the concept of a player in the game. We needed to find a way to add players randomly to a team; this is a structural problem, since it deals with the composition of different classes.\\ \begin{center} \includegraphics[scale=0.5]{Enhancement2CD.png}\\ Class diagram of the teams enhancement \end{center} \noindent The composite design pattern was a good choice because it structures classes into two types: \begin{enumerate} \item Leaf \item Composite \end{enumerate} \noindent In our case the \textbf{Player} objects correspond to the Leaf while the \textbf{Team} objects correspond to the Composite. This works because a Leaf can be thought of as a single unit while a Composite contains various leaves, just like a team containing multiple players. Thus a team can be seen as a composite object made up of multiple individual objects.\\ \noindent The composite design pattern goes further with this idea, as it allows composite objects to contain other composite objects; however, in our case this would not make much sense, since a team containing a team would not work well in our game. So our design is a basic implementation of the composite design pattern.\\ \noindent A \textbf{User} interface was implemented as the component within the composite design pattern.
This interface is implemented by both the \textbf{Player} class and the \textbf{Team} class, as they are both components within the design pattern.\\ \begin{lstlisting} import java.util.ArrayList; public interface User { //The User interface is the component of the //composite design pattern ArrayList<Position> positions = new ArrayList<Position>(); //Method used to add a position to the positions //ArrayList using the x and y values void addToPositions(int posx, int posy); } \end{lstlisting} \vspace{4mm} \noindent Given that this is a basic implementation of the composite design pattern, the component element of the pattern is not as useful here, since the enhancement could have been implemented by just using the \textbf{Team} class. This is why the \textbf{User} interface does not declare many members common to the \textbf{Player} and \textbf{Team} subclasses.\\ \noindent When running the program, the user is asked how many teams they would like to play with. This can be seen below. \begin{center} \includegraphics[scale=0.5]{Team3.png}\\ \end{center} \noindent After the user chooses the number of teams within the game, the players are randomly distributed among the teams. All of this is done within the \textit{startGame} method, using the \textit{generateTeam} method and the \textit{distributeRemainder} method.\\ \vspace{4mm} \newpage \begin{lstlisting} //Holds the players which have been added to a team ArrayList<Player> addedPlayers = new ArrayList<Player>(); //Now to assign all the player objects to a random team for (int i = 0; i < game.teamNum; i++) { //A new team is created Team team; //Generate the team team = generateTeam(addedPlayers, playersInTeamNum); //Add the team to the list of teams in the game teams.add(team); } \end{lstlisting} \vspace{4mm} \noindent The \textbf{Player} objects which have already been added to a team are kept track of, so that a unique \textbf{Player} object is added to each \textbf{Team} object.
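\noindent Before looking at how players are distributed among the teams, the listing below gives an illustrative sketch of what a minimal composite \textbf{Team} class along these lines looks like: it implements the \textbf{User} component interface and holds its \textbf{Player} leaves in an array list. This is a simplified sketch rather than the full class from our repository, which also contains further members such as the \textit{changeHtmlFile} method used when updating the shared map file.
\begin{lstlisting}
import java.util.ArrayList;

//Sketch of the composite: a Team is a User made
//up of several Player leaves
public class Team implements User {

    //The leaves contained by this composite
    ArrayList<Player> players = new ArrayList<Player>();

    //The positions discovered by the team as a whole
    ArrayList<Position> teamPositions =
            new ArrayList<Position>();

    //Method to add a leaf (player) to the composite (team)
    void addPlayer(Player player){
        players.add(player);
    }

    //Required by the User interface: a position reached
    //by any member is recorded for the whole team
    @Override
    public void addToPositions(int posx, int posy){
        teamPositions.add(new Position(posx, posy));
    }
}
\end{lstlisting}
\vspace{4mm}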
The random distribution of players is taken care of in the \textit{generateTeam} method which can be seen below.\\ \begin{lstlisting} //Method to generate the teams Team generateTeam(ArrayList<Player> addedPlayers, int playersInTeamNum){ Random random = new Random(); //Check if the current player exists in a team boolean playerIsInATeam; //Holds a random index of a player int rand; //A new team is created Team team = new Team(); //Randomly add the specified amount of players to a team //Get playerInTeamNum random players from the players //array list for (int j = 0; j < playersInTeamNum; j++) { //At each iteration this check is always initialised //to true //If there is no matching value in the array list then //the while loop breaks playerIsInATeam = false; //Keep on looping until a new player index is obtained do { //A random index is obtained rand = random.nextInt(playerNum); //If no player has currently been added to a team if(addedPlayers.size() == 0){ //A random index is obtained rand = random.nextInt(playerNum); //An initial player has been added team.addPlayer(players.get(rand)); addedPlayers.add(players.get(rand)); //If the size of a team is only one player then //this team is full //If not then continue adding players until the //maximum is reached if(playersInTeamNum== 1){ playerIsInATeam = false; } } //If a player has been added to a team else { //Initialised to false since we are checking //if the current player is already in a team playerIsInATeam = false; //Loops through all the players which //are already in a team //Ends when a player which is not in a //team is obtained for (Player player : addedPlayers) { //If the current player has already been //generated if (players.get(rand) == player) { playerIsInATeam = true; } } if(!playerIsInATeam){ //Add the player to the team team.addPlayer(players.get(rand)); addedPlayers.add(players.get(rand)); //The check is false and the loop is broken break; } } //Keep on looping while the current player //being obtained is already in a team } while (playerIsInATeam); } //Returns a team with the player return team; } \end{lstlisting} \vspace{4mm} \noindent This method works by first creating a \textbf{Team} object, and then iterates for the total number of players to add to each team. A random index from the \textit{players} array list is obtained and compared to the indexes of the players within the \textit{addedPlayers} array list. If the random index does not match with any of those in \textit{addedPlayers}, then that player is not in any team and can be added to a team, the player is also added to \textit{addedPlayers}. This is repeated for each team.\\ \noindent At the beginning of each game in the \textit{startGame} method, after obtaining the number of players and teams, two more values are obtained. The \textit{playersInTeamNum} variable is the total number of players in each team which can all be equally divided into a team. The \textit{extraPlayersNum} is the total number of players which are removed so as to be able to divide the number of players equally at first. 
\begin{lstlisting} //get number of players from user game.playerNum = getPlayerNum(); //get number of teams from user game.teamNum = getTeamNum(); //The remainder of the total number of players //divided by the total number of teams is obtained int extraPlayersNum = playerNum % teamNum; //The total number of players per team is obtained //excluding the extra players int playersInTeamNum = (playerNum - extraPlayersNum)/teamNum; \end{lstlisting} \vspace{4mm} \newpage \noindent The extra players are then randomly assigned to teams using the \textit{distributeRemainder} method. \begin{lstlisting} // Method to distribute the remaining players if //the players are not evenly distributed among the teams void distributeRemainder(ArrayList<Player> addedPlayers, int extraPlayersNum){ Random random = new Random(); //Array list which holds the teams which //have already been generated ArrayList<Team> obtainedTeams = new ArrayList<Team>(); //Check if the current player exists in a team boolean playerIsInATeam; //check if an extra player is already added to the team boolean teamIsFull; //Holds a random index for the player //and the team respectively int playerIndex; int teamIndex; //Obtain a player which is not in a team //for extraPlayerNum times for(int i = 0; i < extraPlayersNum; i++){ //First obtain a random unique team //This is so only one extra player is added //to a team so there would not be much of //a disadvantage to the other players //Keep on looping until a new team index is obtained do { teamIsFull = false; //A random index is obtained teamIndex = random.nextInt(teamNum); //This is used so as to set up the //obtainedTeams array list if(obtainedTeams.size() == 0){ obtainedTeams.add(teams.get(teamIndex)); } else{ for (Team team : obtainedTeams) { //If the current player has //already been generated if (teams.get(teamIndex) == team) { teamIsFull = true; } } } }while(teamIsFull); //Now a new team is obtained with every iteration //Keep on looping until a new player index is obtained do { //At each iteration the checks are set to false playerIsInATeam = false; //A random index is obtained playerIndex = random.nextInt(playerNum); for (Player player : addedPlayers) { //If the current player has //already been generated if (players.get(playerIndex) == player) { playerIsInATeam = true; } } if(!playerIsInATeam){ //Add the player to the team teams.get(teamIndex).addPlayer( players.get(playerIndex)); addedPlayers.add(players.get(playerIndex)); //The check is set to false and //the loop is broken break; } }while(playerIsInATeam); } } \end{lstlisting} \vspace{4mm} \noindent The method works by looping through each extra player. During each iteration, a unique team is obtained and after that a unique player is obtained. Code similar to that of the \textit{generateTeam} method is then used.\\ \newpage \noindent Below, one can see a game played with teams compared to a game with only two players. The screenshots below show an 8x8 safe map. \begin{center} \includegraphics[scale=0.5]{Team1.png}\\ Late game screen in a game with two teams of three players each \end{center} \begin{center} \includegraphics[scale=0.5]{Team2.png}\\ Late game screen of a game between two players \end{center} \noindent One can see the larger patches of discovered tiles in the team game. This is because several players are playing on the same team, so every member of the team can see the tiles discovered by their team mates.
Alternatively, in a game not in cooperative mode, all the tiles are those which the single player has visited.\\ \newpage \section{Code Coverage} \subsection{Basic Version of the Game} After finishing our development and testing of the basic version of the game, our code coverage statistics taken from Jenkins were as follows. \begin{center} \includegraphics[scale = 0.75]{Figure4.png}\\ \end{center} \noindent Using the built in code coverage plug-in in IntelliJ, we were able to take note of which methods were not covered by our unit tests to ensure that each method which was not covered was left this way with good reason. The methods which were not tested are all in the \textbf{Game} class. An explanation of why these methods were not covered is given below. \begin{itemize} \item \textit{Main} method:\\ This method simply calls other methods to initialise the game and run the main game loop. Each of these methods are tested individually \item \textit{startGame} method:\\ Like the Main method, this method calls other methods to initialise the map and players. Each of these methods are tested individually \item \textit{getPlayerNum} method:\\ This method simply receives a user input and passes it to the \textit{validatePlayerNum} method which ensures that the input is an integer within the allowed range (2-8). All the testing for this is therefore done in the \textit{validatePlayerNum} method. \item \textit{getMapSize} method:\\ This method simply receives a user input and passes it to the \textit{validateMapSize} method which ensures that the input is an integer within the allowed range (5-50 depending on the number of players). All the testing for this is therefore done in the \textit{validateMapSize} method. \item \textit{directionsLoop} method:\\ This method is a loop which asks each player for the direction they would like to move in and then checks that this move is allowed with the \textit{checkOutOfBounds} and \textit{validateDirectionInput} methods. If it is then the \textit{move} method in the \textbf{Player} class is called upon to execute the desired move. The \textit{checkOutOfBounds} and \textit{validateDirection} input methods are both tested to make sure that only a valid direction is accepted and the \textit{move} method in the \textbf{Player} class is also tested to make sure every move case can be properly handled by the program. Therefore, there is no need to test the \textit{directionsLoop} method. \item \textit{endGame} method:\\ This method receives a character from the \textit{getExitChar} method. If this character is 'e' then the map html files are deleted using the \textit{deleteHtmlFiles} method and the program is exited. The \textit{getExitChar} method is tested individually using the \textit{validateExitChar} method to ensure that only 'e' causes the program to exit. The \textit{deleteHtmlFiles} method is also tested individually to ensure that the files are deleted correctly every time. There is therefore no need to test the \textit{endGame} method. \item \textit{getExitChar} method:\\ This method receives a character user input from the user which is validated in the \textit{validateExitChar} method. If this input is 'e' then it is returned to the \textit{endGame} method which will delete the map html files and exit the program. Since it is tested in the \textit{validateExitChar} method, \textit{getExitChar} does not need to be tested. 
\end{itemize} \newpage \subsection{The Game After Enhancements} After we finished adding and testing the required enhancements to the game, our code coverage statistics taken from Jenkins were as follows. \begin{center} \includegraphics[scale = 0.75]{Figure5.png}\\ \end{center} \noindent Again, using the built-in code coverage plug-in in IntelliJ, we took note of which methods were not covered by our unit tests to ensure that each method which was not covered was left this way with good reason. The methods which were not tested in the basic version of the game are still not tested in this version for the same reasons and will therefore not be explained again. An explanation of why some of the new methods were not covered is given below. \renewcommand\labelitemii{$\triangleright$} \begin{itemize} \item \textbf{Game} class:\\ \vspace{-5mm} \begin{itemize} \item \textit{getTeamNum} method:\\ This method simply receives a user input and passes it to the \textit{validateTeamNum} method which ensures that the input is an integer within the allowed range (between 2 and the number of players). All the testing for this is therefore done in the \textit{validateTeamNum} method. \item \textit{getMapType} method:\\ This method simply receives a user input and passes it to the \textit{validateMapType} method which ensures that the input is a string which is either "safe" or "hazardous". All the testing for this is therefore done in the \textit{validateMapType} method. \end{itemize} \newpage \item \textbf{MapCreator} class:\\ \vspace{-5mm} \begin{itemize} \item \textit{create} method:\\ This method is overridden in the \textbf{SafeMapCreator} and \textbf{HazardousMapCreator} classes and each of these overriding methods is tested individually. There is therefore no need to test this method. \end{itemize} \end{itemize} \newpage \section{Instructions to Run The Game} \begin{enumerate} \item Download and extract the project from\\ \url{https://github.com/martin-and-mikhail/CPS2002_MartinMikhail}\\ Please note that the "Part1", "Part2" and "Part3" tags can be used to view the project at each stage of development (set-up, basic game and enhancements respectively). \item Compile and run \textbf{Game.java} to run the game or run the \textbf{Game} class from an IDE. \item Configure the game by choosing the number of players, the number of teams, and the size and type of map. Each player will then be shown which team they are on. An example of this is shown below. \begin{center} \includegraphics[scale=0.65]{Figure6.png}\\ \end{center} \item Open up the map.html file in a browser. Each player will be at a random starting position at this point in the game. \item The game will now start. In each round, each player will be prompted to enter their desired direction of movement. This can be done by entering u, d, l or r and then pressing enter to confirm the direction. If the player lands on a grass tile, they and their team members will have this tile changed to green on their map. If the player lands on a water tile, they will be notified and will be moved back to their starting position. This tile will also be marked on the player and their team mates' maps. If the player lands on the treasure tile then they will be marked as one of the game's winners, along with anyone else who lands on the treasure in the round, at the end of the round. Please note that the map is updated before each player must input a direction. Make sure to refresh the browser after every move is made. \end{enumerate} \end{document}
{ "alphanum_fraction": 0.6751647149, "avg_line_length": 41.5507717303, "ext": "tex", "hexsha": "1666628f1c9add920bcdc0683fc01d361a76d57f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fdc9f37788218ba555bff7b995068267f82a8c43", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "martin-and-mikhail/CPS2002_MartinMikhail", "max_forks_repo_path": "WriteUp/AssignmentReport.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fdc9f37788218ba555bff7b995068267f82a8c43", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "martin-and-mikhail/CPS2002_MartinMikhail", "max_issues_repo_path": "WriteUp/AssignmentReport.tex", "max_line_length": 956, "max_stars_count": null, "max_stars_repo_head_hexsha": "fdc9f37788218ba555bff7b995068267f82a8c43", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "martin-and-mikhail/CPS2002_MartinMikhail", "max_stars_repo_path": "WriteUp/AssignmentReport.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 11699, "size": 51149 }
\section{Memory Considerations}\index{Memory!how to minimize} Of their nature, large systems use a large amount of memory. This will sometimes prevent a calculation from being run. There are, however, several ways to reduce the amount of memory needed by a specific calculation. These can be summarized as follows: \begin{itemize} \item Run using the MOZYME function. For large systems, the MOZYME function uses much less memory than the default MOPAC. The MOZYME function only works for closed shell RHF ground state systems. Also, it cannot be used when high precision is needed, such as in \comp{FORCE}, hyperpolarizability, or \comp{IRC} calculations. \item Run the job with \comp{SIZES}. Run the job that fails using \comp{SIZES}. This will print out the sizes of all arrays created dynamically. By inspection of these arrays, ways can usually be found to reduce the memory demand. For example, the default geometry optimizer using the \comp{MOZYME} function, \comp{EF}, uses a large amount of memory. Switching to the BFGS optimizer (\comp{BFGS}) will save a lot of memory. \item Run in \comp{UNSAFE} mode. Normally, by default, MOPAC runs in \comp{SAFE} mode. This means that when things start to go wrong in the SCF, special procedures can be used in an attempt to generate a SCF. These procedures use a large amount of memory, and are normally not used. Most likely, if memory is limited, the job can be run in \comp{UNSAFE} mode. This will save a lot of memory, but at the same time, if things start to go wrong, the job has an increased chance of failing. If the job fails, the results will usually indicate the failure in an obvious way. \item Reduce the cutoffs. In a \comp{MOZYME} calculation, useful results can often be obtained even when the values of \comp{CUTOF1} and \comp{CUTOF2} are reduced. When these are reduced, the amount of memory needed drops quite rapidly. If the objective of the calculation is to optimize the geometry, then reducing the cutoffs will both increase the speed and reduce the memory demand without significant loss of accuracy. \end{itemize}
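As a rough illustration only (the exact keyword syntax and defaults should be checked against the keyword descriptions elsewhere in this manual), a memory-lean geometry optimization of a large closed-shell system might combine several of these measures on a single keyword line, for example \comp{MOZYME BFGS UNSAFE}, after a preliminary run with \comp{SIZES} has been used to identify which arrays dominate the memory demand; if further savings are needed, reduced values of \comp{CUTOF1} and \comp{CUTOF2} can be added as well.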
{ "alphanum_fraction": 0.7592421442, "avg_line_length": 47.0434782609, "ext": "tex", "hexsha": "3f5d363be32a3fd537974f5176efe6be848ca6a6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "openmopac/MOPAC-archive", "max_forks_repo_path": "manuals/MOPAC2000_manual/d_memory.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "openmopac/MOPAC-archive", "max_issues_repo_path": "manuals/MOPAC2000_manual/d_memory.tex", "max_line_length": 80, "max_stars_count": 2, "max_stars_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "openmopac/MOPAC-archive", "max_stars_repo_path": "manuals/MOPAC2000_manual/d_memory.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-16T20:54:11.000Z", "max_stars_repo_stars_event_min_datetime": "2021-12-16T20:53:27.000Z", "num_tokens": 536, "size": 2164 }
\section{Defining sets}
{ "alphanum_fraction": 0.7307692308, "avg_line_length": 6.5, "ext": "tex", "hexsha": "b30e36601730d917ea5c2b815ff8223d2e1c4756", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/logic/setsSpecification/01-00-Defining_sets.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/logic/setsSpecification/01-00-Defining_sets.tex", "max_line_length": 23, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/logic/setsSpecification/01-00-Defining_sets.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8, "size": 26 }
\section{\sc Programming Languages} \textbf{Languages}{: \textit{Python, C/C++, Java SE, VHDL, ARM Assembly, AVR Assembly, Racket, ML, Scheme, HTML/CSS, JavaScript, LaTeX.}} \hfill \textbf{Frameworks and Environments}{: \textit{Keras, TensorFlow, NumPy, Matplotlib, Pandas, RapidMiner, Weka, OpenMP, CUDA, React Native, Node.js, Laravel.}} \hfill \textbf{Databases}{: \textit{MongoDB, MySQL, PostgreSQL.}} \hfill \endinput
{ "alphanum_fraction": 0.724537037, "avg_line_length": 39.2727272727, "ext": "tex", "hexsha": "6960c9a5c6974b58bf621af36b08b7d32d7bc17a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "73545f3e25225d9bf972170551bb78f1d54964b3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "aligholami/RTP", "max_forks_repo_path": "plang.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "73545f3e25225d9bf972170551bb78f1d54964b3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "aligholami/RTP", "max_issues_repo_path": "plang.tex", "max_line_length": 159, "max_stars_count": null, "max_stars_repo_head_hexsha": "73545f3e25225d9bf972170551bb78f1d54964b3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "aligholami/RTP", "max_stars_repo_path": "plang.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 143, "size": 432 }
\documentclass[pdf]{beamer} \usepackage{graphicx} \usepackage{tikzsymbols} \usepackage{amsmath,amsfonts,amssymb} \usepackage[linesnumbered]{algorithm2e} \usepackage{hyperref} \usetheme{Antibes} \usecolortheme{default} \title{Weekly meeting} \date{29th of January, 2020} \author{Joan Marcè i Igual} \begin{document} \begin{frame} \maketitle \end{frame} \section{Done} \begin{frame} \frametitle{What has been done?} \begin{itemize} \item Submit short paper to CAPITAL (accepted \Laughey) \item Prepare Related work for paper \item Write proofs for $t_{avail}$ and $t_h$ (needs to be reviewed) \item Decide on pseudocode for new scheduler \end{itemize} \end{frame} \begin{frame} \frametitle{Pseudocode} \begin{algorithm}[H] \scriptsize \newcommand{\cbusy}{\ensuremath{c_{busy}}} \newcommand{\creserved}{\ensuremath{c_{res}}} \SetKwFunction{Ready}{readySortedByPriority} \SetKwFunction{Schedule}{schedule} $m := $ total number of cores\; $\cbusy :=$ number of cores executing non-preemptive jobs\; $\creserved :=$ 0 \tcp*[l]{number of cores reserved by unscheduled jobs} \ForEach{$J_i \in $ \Ready{$t$}}{ \uIf(\tcp*[h]{if enough cores are available (not busy and not reserved)}){$s_i^{\min} \le (m - \cbusy) \land s_i^{\min} \le (m - \creserved)$}{ \Schedule{$J_i$}\; $\cbusy := \cbusy + s_i^{\min}$\; \If(\tcp*[h]{only if some cores were already reserved}){$\creserved > 0$}{ $\creserved := \creserved + s_i^{\min}$\; } } \uElseIf(\tcp*[h]{if too many cores are busy}){$s_i^{\min} > (m - \cbusy) \land s_i^{\min} \le (m - \creserved)$}{ $\creserved := \creserved + s_i^{\min}$\; } \Else(\tcp*[h]{if too many cores are reserved, i.e., $s_i^{\min} > (m - \creserved)$}){ \Return } } \end{algorithm} \end{frame} \section{In Progress} \begin{frame} \frametitle{What's next} \begin{itemize} \item Run schedulability analysis of Limited-Preemptive Malleable Gang Scheduling with JLFP scheduler and get some results \item Prepare 12 minutes presentation for CAPITAL \item Finish pseudocode for core distribution \item Test proposed scheduler (work started) \begin{itemize} \item ReTiS Lab has \href{https://github.com/ReTiS-Lab/METASIM}{METASIM} \end{itemize} \item Create a schedule for the project \item Create schedulability analysis mathematical formulation for Limited-Preemptive Malleable Gang Scheduling with new scheduler \end{itemize} \end{frame} \end{document}
{ "alphanum_fraction": 0.6121362669, "avg_line_length": 32.3908045977, "ext": "tex", "hexsha": "14c242eebc442b0efdef6f31a364d9f63f8ac9b9", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2016-10-14T12:10:40.000Z", "max_forks_repo_forks_event_min_datetime": "2016-10-10T08:40:56.000Z", "max_forks_repo_head_hexsha": "acea91e3d339165855742dd5c5d6961158d5c391", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jmigual/APATeoria", "max_forks_repo_path": "Notes/TFM_presentations/2020_01_29/main.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "acea91e3d339165855742dd5c5d6961158d5c391", "max_issues_repo_issues_event_max_datetime": "2020-08-05T10:35:08.000Z", "max_issues_repo_issues_event_min_datetime": "2020-08-05T10:35:07.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jmigual/APATeoria", "max_issues_repo_path": "Notes/TFM_presentations/2020_01_29/main.tex", "max_line_length": 155, "max_stars_count": null, "max_stars_repo_head_hexsha": "acea91e3d339165855742dd5c5d6961158d5c391", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jmigual/APATeoria", "max_stars_repo_path": "Notes/TFM_presentations/2020_01_29/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 821, "size": 2818 }
\chapter{Conclusions} The purpose of this chapter is to offer a model for transport policy. The need for any policy comes out of the need to control monopoly power, infrastructure overuse leading to congestion, and environmental problems caused by a lack of property rights. It also acknowledges that there can be a problem with over-competition. The traditional method of control has been one of franchise, with varying levels of state control over the franchisee. This has ranged from siting of service provision, control over tariffs charged and even limits on the quantity of services provided. Although these restrictions on suppliers were introduced as measures to protect the travelling public, they have also restricted the options of those who wish to consume transport services. The model presented here aims to keep as many market forces as possible acting on suppliers. It is based on a system of tradable permits to offer services. It can easily be applied to the supply of bus and train services. With some work, it might be possible to extend it to motorway traffic; however, it is unlikely to work on all roads owing to the complexity of implementing such a scheme. The scheme will now be outlined for each of the transport modes. The franchise system of control has been a common thread in the railway industry. In Britain, the right to build a new train line required the passing of an Act of Parliament. Parliament, once it found that it had created a legalised monopoly, then sought to protect the public interest through price regulation. A system of tradable permits removes the monopoly problem altogether. The transport infrastructure needs to be separated from the operating companies and also from train leasing companies (as has already been done in the UK). However, rather than offer one company the franchise right to run train services on any particular stretch of track, many companies should compete to offer transport services over the same lines. The services on a line would be divided up into a number of daily time slots. For example, if three trains ran per hour, then three time slots representing one service per hour would be available. A company may now bid for any of these time slots. The highest bidder for any of these services acquires the right to run those particular services for a year. At the end of the year they must hand back a proportion of those services. These services are then re-auctioned, although the current holder would not be stopped from bidding to regain the services if they so desire. They will however only bid for them if they are making a reasonable rent from the services they provide. For this to happen they must make customers happy, or the customers will move to another service provider on the same track, reducing the poor quality supplier's revenue. The handing back of a proportion of services is designed to allow competition to enter the market should a company's standards fall: a newcomer need only bid a higher price for the services returned, lease the trains from the independent train leasing companies and is then free to provide a more efficient service, drawing customers from the established service providers. Market entry is now easy. However, for a truly contestable market, \textit{hit and run} competition must be possible. For this to happen, exit costs as well as entry costs must be kept to a minimum.
This is the principal reason for allowing trading of the service provision permits\footnote{Basic checks on a company's financial and technical ability to run a train service can be conducted by a regulator before entry is allowed into the franchising process.}: once monopoly profits and poor service providers have been competed away, hit and run competitors may sell to those that have a competitive advantage and thus wish to stay in the industry. Transport service consumers might not only benefit from the competition to provide basic services efficiently. Competing service providers could also offer varying levels of comfort and secondary products (buffet car, bar, newspaper provision) at different tariffs. The extra cost of these services would only be accepted if they were thought useful by consumers. This will lead to an efficient allocation of resources that will vary automatically as customers' tastes vary. A second benefit that comes from having permits that are tradable is that it allows better specialisation than the franchise system that is currently operating in the UK, where services are sold pre-bundled. A hotelier who might have a competitive advantage when it comes to providing luxury sleeper services will find it far easier to purchase service provision permits on an open market, leaving regular train operators to service daylight transport where they excel. The travelling public is also likely to benefit from better continuity of service. As an example, if a train operating company suffered from industrial action then it would be easy for the other companies operating upon the same line to run the lost services, removing any need for an expensive operator of last resort. The same reasoning applies if, owing to any financial difficulties, a train operating company was forced into liquidation, or if it had its franchise removed for breach of contract following continual safety breaches or failure to run the correct number of services. This chapter will now turn to look at the provision of bus services. Unlike the railway system, where it is assumed that most routes have been built, bus companies can add geographically new services at will, and so this must be catered for in the model. All current routes should be mapped out and each time slot allotted a service provision permit. The number of buses offering services can thus be controlled to avoid the bus congestion problems that occurred in Darlington. The permits will then be auctioned in a manner similar to the railway network, allowing competition to occur between service providers. If a company feels that it has found a new route, then it must submit this for approval. This will be a simple check to make sure that this is not really a route already provided. Once approved, as a reward for finding the new route, the company that found the route will be offered a six month monopoly on the route (this can be viewed as a mirror of the patent system) as an incentive to encourage ingenuity. After this period, the permit system will be introduced. With the two modes so far discussed it is easy for cheats to be caught by the other permitted transport providers (who have a built-in incentive to do this). In our third transport mode, road traffic, this is not as easy. Following the advances in information technology, checking of quota abuse for motorway users is becoming more feasible. Hauliers and other regular users could buy permits that allowed repeated (daily) access to the infrastructure.
For more casual users, permits to travel on the motorway could be bought using a computer over a phone line. Here the system would have to differ: allowing only for immediate travel and no resale of permits. The main benefit of this system is the removal of congestion-caused bottlenecks. To avoid traffic simply being diverted onto minor roads, a system of fixed charges per journey would have to be implemented alongside the motorway quotas. This fixed charge will hopefully also shift consumers towards public transport as this becomes relatively cheaper. The author acknowledges that the required technology is not quite available yet, and that strong efforts are going to be required to make sure all cars are fitted with the technology to make this possible. Nonetheless, some effort to charge at the point of use for road transport is a prerequisite for providing sensible competition between transport modes, reduced congestion, and a rational consideration of the real cost of the journey about to be undertaken.

In this chapter, a model for regulating the transport industry through the use of tradable permits has been discussed. Much more work than is possible in a dissertation of this length is required before any such schemes could be put into operation. Research is required to look at how to choose the number of available time slots, both initially and also how to vary the number with the available traffic flows. Research is also needed into the costs of adding any necessary technology to road vehicles to allow for per-use charging, and some research into the elasticities of demand for car and public transport would provide a basis for setting a sensible fixed charge per road journey. The author hopes that the model that has been outlined in this chapter, and the need for systems based upon these lines as shown by the previous chapters, will provide a useful anchor point in the continuing debate about the future of the transport industry.
\section{Appendix}
\subsection{VWM Network model}

\begin{notation} Treat 1D tensors as column vectors and 2D tensors as matrices, where appropriate. We use lower case to represent both 1D and 2D tensors but occasionally use upper case for 2D tensors where matrix operations are involved.
\begin{enumerate}
\item Let $\Delta^d = \{ (x_0, x_1, \dots, x_d) : x_0 + x_1 + \dots + x_d = 1, x_i \ge 0, i = 0, 1, \dots, d\}$ denote the standard $d$-simplex.
\item Let $\circ$ denote concatenation of two tensors with identical shape except possibly for their last dimensions $d_1$ and $d_2$, respectively, resulting in a tensor with last dimension of $d_1+d_2$.
\item Let $\odot$ denote element-wise product of two tensors of same shape, i.e., Hadamard product for vectors/matrices.
\item Let $\otimes$ denote tensor product of two tensors, i.e., Kronecker product for vectors/matrices.
\end{enumerate}
\end{notation}

\subsection{Basic layers/modules}

\colorbx{Linear (Affine) Layer}
\begin{description}
\item[Inputs:] A tensor $x$ with last dimension $n$.
\item[Parameter:] An affine function $\cG: \Reals^n \to \Reals^m$ with weight and bias parameters.
\item[Output:] A tensor $y$ with last dimension $m$, and remaining dimensions same as that of $x$, obtained by applying $\cG$ to each 1D slice of $x$ along the last dimension.
\end{description}

\colorbx{Attention Module}
\begin{description}
\item[Inputs:]
\begin{enumerate}
\item[]
\item Query: $q \in \Reals^d$
\item Keys: $K \in \Reals^{N \times d}$
\item Values: $V \in \Reals^{N \times d}$. By default $V=K$, unless mentioned explicitly.
\end{enumerate}
\item[Parameter:] Weight $w \in \Reals^d$
\item[Outputs:]
\begin{enumerate}
\item[]
\item Content vector: $h = V^{\T} u \in \Reals^d$
\item Attention vector: $u = \softmax(K(w \odot q)) \in \Reals^N$
\end{enumerate}
\end{description}

\colorbx{Interaction Module}
\begin{description}
\item[Inputs:]
\begin{enumerate}
\item[]
\item Base object: $b \in \Reals^d$
\item Feature objects: $f \in \Reals^{M \times d}$
\end{enumerate}
\item[Parameters:]
\begin{enumerate}
\item[]
\item Base object projection linear layer: $\cG: \Reals^d \to \Reals^d$
\item Feature objects projection linear layer: $\cK: \Reals^{M \times d} \to \Reals^{M \times d}$
\item Modifier linear layer: $\cH: \Reals^{M \times 2d} \to \Reals^{M \times d}$
\end{enumerate}
\item[Output:] Modified feature objects $f' = \cH( \cK(f) \odot ( \vone \otimes \cG(b))) \in \Reals^{M \times d}$
\end{description}

\hrulefill
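The Attention and Interaction modules above are reused by most of the units that follow. Purely as an illustration (this is not the released implementation), the Attention Module equations could be written in a PyTorch-like style as follows; the function and variable names mirror the notation above and are otherwise hypothetical.

\begin{verbatim}
import torch
import torch.nn.functional as F

def attention_module(q, K, V, w):
    """Attention Module sketch: q and w have shape (d,); K and V have shape (N, d)."""
    # Attention vector u = softmax(K (w * q)) in R^N, with w * q the elementwise product
    u = F.softmax(K @ (w * q), dim=0)
    # Content vector h = V^T u in R^d
    h = V.t() @ u
    return h, u

# Toy usage with d = 4 and N = 3; by default the values equal the keys (V = K).
d, N = 4, 3
q, w = torch.randn(d), torch.randn(d)
K = torch.randn(N, d)
h, u = attention_module(q, K, K, w)
\end{verbatim}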
\subsection{SAM cell}

The SAM recurrent cell is executed for $T$ reasoning steps for every frame in the temporal order. Within a single frame, the cell state at the end of each reasoning step $t=1,2, \dots, T$ is denoted by $(c_t, M_t, w_t, so_t)$, where:
\begin{enumerate}
\item $c_t \in \Reals^d$ is the control state;
\item $M_t \in \Reals^{N \times d}$ is the visual working memory with $N$ slots;
\item $w_t \in \Reals^N$ is the write head; and
\item $so_t \in \Reals^d$ is the summary visual object.
\end{enumerate}
The initial state is such that both $c_0$ and $so_0$ are initialized to a fixed value at the start of each frame. However, $M_0$ is initialized only once at the start of the first frame and otherwise taken to be the value of $M_T$ at the end of the previous frame.
%The number of slots $N$ for the SAM $M_t$ is not fixed because the neural network
%parameters do not depend on it. It can be variable across the different datasets used
%for training, validation and test as well as within each dataset. This, for example,
%enables a form of transfer learning where we can train on an easy dataset for one value
%of $N$ and study its generalization to a hard dataset using a larger value of $N$.

\colorbx{Question-driven Controller}
%The Question-driven Controller plays an important role in the reasoning process.
%It drives the attention over the question and produces the new control states. Each new
%control state defines a new reasoning operation. The inputs of this unit are the past
%control state, the question encoding and the contextual words (see Question Encoding Unit).
%It uses the dot product attention between the contextual words and the combination of the
%past control state and the question encoding. This attention layer produces the new control state.
%
%This unit also outputs the temporal class weights that will be used in the Reasoning Unit.
%It gives access to temporal information for the current words (last, latest, now, no temporal context).
\begin{description}
\item[Inputs:]
\begin{enumerate}
\item[]
\item Reasoning step $t = 1,2, \dots, T$
\item Previous control state: $c_{t-1} \in \Reals^d$
\item Contextual words: $cw \in \Reals^{L \times d}$
\item Question encoding: $q \in \Reals^d$
\end{enumerate}
\item[Parameters:]
\begin{enumerate}
\item[]
\item Reasoning step-dependent linear layer: $\cG_t: \Reals^d \to \Reals^d$, depending on $t$
\item Concatenation linear layer: $\cH: \Reals^{2d} \to \Reals^d$
\item Attention module $\cA$
\item Temporal classifier: $\cK: \Reals^d \to \Delta^3$. A two-layer feedforward network with ELU activation in the hidden layer of $d$ units. The classes for the temporal context are labeled ``last'', ``latest'', ``now'', as well as a fourth class label ``none'' indicating no temporal context. If $\tau \in \Delta^3$ is the output of the classifier, we denote the components by $\tlast$, $\tlatest$, $\tnow$ and $\tnone$.
\end{enumerate}
\item[Outputs:]
\begin{enumerate}
\item[]
\item Control state $c_t \in \Reals^d$
\item Control attention $ca_t \in \Reals^L$
\item Temporal class weights $\tau_t \in \Delta^3$
\end{enumerate}
\item[Equations:]
\begin{enumerate}
\item[]
\item Modulation: $y = \cH\bigl([c_{t-1}, \cG_t(q)]\bigr)$
\item Control state and attention: $c_t, ca_t = \cA(y, cw)$
\item Temporal classification: $\tau_t = \cK(c_t)$
\end{enumerate}
\end{description}
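For readers who prefer code, a toy PyTorch-style rendering of one controller step is sketched below. It composes the attention computation from the earlier sketch with the layers $\cG_t$, $\cH$ and the temporal classifier $\cK$; the layer sizes and names are illustrative assumptions, not the released implementation.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

d, L = 4, 5                       # toy hidden size and question length
G_t   = nn.Linear(d, d)           # reasoning-step-dependent layer G_t
H     = nn.Linear(2 * d, d)       # concatenation layer H
K_cls = nn.Sequential(nn.Linear(d, d), nn.ELU(), nn.Linear(d, 4))  # temporal classifier K
w     = torch.randn(d)            # attention weight parameter

def controller_step(c_prev, q, cw):
    # Modulation: y = H([c_{t-1}, G_t(q)])
    y = H(torch.cat([c_prev, G_t(q)], dim=0))
    # Attention over the contextual words: c_t, ca_t = A(y, cw)
    ca = F.softmax(cw @ (w * y), dim=0)
    c = cw.t() @ ca
    # Temporal class weights tau_t = K(c_t), a distribution over 4 classes
    tau = F.softmax(K_cls(c), dim=0)
    return c, ca, tau

c1, ca1, tau1 = controller_step(torch.zeros(d), torch.randn(d), torch.randn(L, d))
\end{verbatim}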
\colorbx{Visual Retrieval Unit}
%The visual retrieval unit is responsible for extracting visual information from the current
%image, given a control state coming from the Question-driven Controller. It first projects
%the past summary object and the feature maps together using the interaction module.
%It then uses the attention module as follows: the query is the control state and the keys
%are the feature maps coming from the image encoder. The result of this attention is applied
%on the modified feature maps coming from the interaction module.
%This unit outputs the extracted object and the visual attention.
\begin{description}
\item[Inputs:]
\begin{enumerate}
\item[]
\item Control state: $c_t \in \Reals^d$
\item Previous summary object: $so_{t-1} \in \Reals^d$
\item Feature map of current frame: $F \in \Reals^{H \times W \times d}$
\end{enumerate}
\item[Parameters:]
\begin{enumerate}
\item[]
\item Interaction module $\cI$
\item Attention module $\cA$
\end{enumerate}
\item[Outputs:]
\begin{enumerate}
\item[]
\item Visual object: $vo_t \in \Reals^d$
\item Visual attention: $va_t \in \Reals^{H \times W}$
\end{enumerate}
\item[Equations:]
\begin{enumerate}
\item[]
\item Modified feature map: $\hat{F} = \cI(so_{t-1}, F)$
\item Visual object and attention: $vo_t, va_t = \cA(c_t, \hat{F}, F)$
\end{enumerate}
\end{description}
\begin{note} Appropriate flatten/unflatten operations are performed to match the signature of the modules. \end{note}

\colorbx{Memory Retrieval Unit}
%The role of the memory retrieval unit is to read and extract an object from memory.
%As in the Visual Retrieval Unit, it uses the combination of the two following submodules.
%The interaction module blends together the extracted object and the content of the memory.
%The attention module then extracts the corresponding object in memory, if present.
%This unit outputs the extracted object and its corresponding location, which we call the ``read head''.
\begin{description}
\item[Inputs:]
\begin{enumerate}
\item[]
\item Control state: $c_t \in \Reals^d$
\item Previous summary object: $so_{t-1} \in \Reals^d$
\item Previous SAM $M_{t-1} \in \Reals^{N \times d}$
\end{enumerate}
\item[Parameters:]
\begin{enumerate}
\item[]
\item Interaction module $\cI$
\item Attention module $\cA$
\end{enumerate}
\item[Outputs:]
\begin{enumerate}
\item[]
\item Memory object: $mo_t \in \Reals^d$
\item Read head: $\rhead_t \in \Reals^N$
\end{enumerate}
\item[Equations:]
\begin{enumerate}
\item[]
\item Modified SAM: $\hat{M}_t = \cI(so_{t-1}, M_{t-1})$
\item Memory object and attention: $mo_t, \rhead_t = \cA(c_t, \hat{M}_t, M_{t-1})$
\end{enumerate}
\end{description}

\colorbx{Reasoning Unit}
% control_state, visual_object, memory_object, temporal_class_weights
\begin{description}
\item[Inputs:]
\begin{enumerate}
\item[]
\item Control state: $c_t \in \Reals^d$
\item Visual object: $vo_t \in \Reals^d$
\item Memory object: $mo_t \in \Reals^d$
\item Temporal class weights $\tau \in \Delta^3$
\end{enumerate}
\item[Parameters:] Validator modules $\cG, \cK: \Reals^{2d} \to \Reals$. Both $\cG, \cK$ are two-layer networks of $2d$ hidden units, using ELU activation in the hidden layer, and sigmoid in the output layer.
\item[Output:] Predicate gates for the current reasoning step
\begin{enumerate}
\item Object match predicate gates (i) image: $\imatch_t \in [0,1]$ and (ii) memory: $\mmatch_t \in [0,1]$.
\item Memory update predicate gates (i) add: $\doadd_t \in [0,1]$ and (ii) replace: $\doreplace_t \in [0,1]$
\end{enumerate}
%\item[Equations:]
%\begin{enumerate}
%\item[]
%\item $\imatch_t \in [0,1]$: true if there is a valid visual object in the current temporal context.
%\item $\mmatch_t \in [0,1]$: true if there is a valid memory object in the current temporal context.
%\item $\doadd_t \in [0,1]$: true if the current visual object should be added as a new object to memory.
%\item $\doreplace_t \in [0,1]$: true if the current visual object should replace an existing object in memory.
%\end{enumerate}
\end{description}

\colorbx{Memory Update Unit}
%This unit is meant to update the content of the memory.
%
%Three actions can happen:
%\begin{itemize}
%\item There is no object to be added to memory; the memory remains unchanged.
%\item There is one object that needs to be added to memory, but a similar object is already in memory at a given location. The new object will replace the old object at this location.
%\item There is one object that needs to be added to memory, and it is a new object. It is added at the write head location.
%\end{itemize}
%
%This module also updates the position of the write head. If a new object has been added at the current write head position, the write head shifts right to a new empty slot. If the object has been replaced, the write head does not move.
% visual_object, visual_working_memory, read_head, write_head, do_replace, do_add_new
\begin{description}
\item[Inputs:]
\begin{enumerate}
\item[]
\item Visual object: $vo_t \in \Reals^d$
\item Memory object: $mo_t \in \Reals^d$
\item Memory update predicate gates: $\doadd_t, \doreplace_t \in [0,1]$
\item Read head: $\rhead_t \in \Reals^N$
\item Previous SAM $M_{t-1} \in \Reals^{N \times d}$
\item Previous write head: $\whead_{t-1} \in \Reals^N$
\end{enumerate}
\item[Outputs:]
\begin{enumerate}
\item[]
\item SAM $M_t \in \Reals^{N \times d}$
\item Read head: $\rhead_t \in \Reals^N$
\item Write head: $\whead_t \in \Reals^N$
\end{enumerate}
\end{description}

\colorbx{Summary Object Update Unit}
%The Summary Unit is the last unit of the SAMCell. It is responsible for outputting the new
%summary object. It first picks which object is relevant between the object extracted from
%memory and the visual object extracted from the image. Once the relevant object is picked,
%it is combined with the former summary object through a linear layer to become the new
%summary object. It is the final step of the SAMCell reasoning cycle.
%
%The image encoder, question encoder and output unit are described in the appendix.
\begin{description}
\item[Inputs:]
\begin{enumerate}
\item[]
\item Previous summary object: $so_{t-1} \in \Reals^d$
\item Visual object: $vo_t \in \Reals^d$
\item Memory object: $mo_t \in \Reals^d$
\item Object predicate gates: $\imatch_t, \mmatch_t \in [0,1]$
\end{enumerate}
\item[Parameters:] Concatenation linear layer $\cH$
\item[Output:] New summary object: $so_t = \cH\bigl([so_{t-1}, (\imatch_t * vo_t + \mmatch_t * mo_t)]\bigr) \in \Reals^d$
\end{description}

\noindent\makebox[\linewidth]{\rule{\paperwidth}{1pt}}

\colorbx{Image Encoder}
\begin{description}
\item[Inputs:] Images
\item[Output:] Feature maps: $F_k$
\end{description}
The Image Encoder unit is responsible for preprocessing the sequence of images one at a time. It is a 4-layer convolutional neural network. Every layer is composed of the following sequence of operations:
\begin{enumerate}
\item 2D convolution ($stride=1$)
\item 2D max pooling
\item 2D batch normalization
\item ReLU (only for the first 3 layers)
\end{enumerate}
The first layer has 3 input channels and 32 output channels. Layer 2 has 32 and 64. Layer 3 has 64 and 64. Finally, layer 4 has 64 input channels and 128 output channels. The resulting tensors (feature maps) are of dimensions [batch size, $H*W = 7*7$, $dim=128$].
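Purely as an illustrative reading of this description, one possible PyTorch layout of the encoder is sketched below; the kernel sizes, padding and pooling factor are assumptions (they are not specified in the text), so this should not be read as the exact released configuration.

\begin{verbatim}
import torch
import torch.nn as nn

def conv_block(c_in, c_out, relu=True):
    # conv (stride 1) -> max pool -> batch norm -> optional ReLU, as listed above
    layers = [nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
              nn.MaxPool2d(2),
              nn.BatchNorm2d(c_out)]
    if relu:
        layers.append(nn.ReLU())
    return nn.Sequential(*layers)

# Channel progression 3 -> 32 -> 64 -> 64 -> 128; no ReLU after the last layer.
image_encoder = nn.Sequential(
    conv_block(3, 32),
    conv_block(32, 64),
    conv_block(64, 64),
    conv_block(64, 128, relu=False),
)

x = torch.randn(1, 3, 112, 112)                # a 112x112 input is halved four times to 7x7
feature_map = image_encoder(x)                 # shape: (1, 128, 7, 7)
flat = feature_map.flatten(2).transpose(1, 2)  # shape: (1, 49, 128) = [batch, H*W, dim]
\end{verbatim}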
\colorbx{Question Encoder}
\begin{description}
\item[Inputs:]
\begin{enumerate}
\item question string $question$
\item question lengths $question_l$
\end{enumerate}
\item[Output:]
\begin{enumerate}
\item contextual word embeddings $cw$
\item question encoding $q$
\end{enumerate}
\end{description}
The question strings are first embedded (\texttt{torch.nn.Embedding}) and then passed through a $d$-dimensional biLSTM ($d=128$). The final hidden state of the biLSTM becomes the question encoding representation $q$, whereas the biLSTM outputs are projected via a linear layer to become the final word encodings, named contextual words $cw$.

\colorbx{Output Unit}
\begin{description}
\item[Inputs:]
\begin{enumerate}
\item question encoding $q$
\item summary object $so_t$
\end{enumerate}
\item[Output:] prediction
\end{description}
The output unit predicts the final answer to the question. It is a two-linear-layer classifier with ELU activation that produces a distribution over the possible candidates. It is based on two inputs, the final summary object $so_t$ and the question encoding $q$, which are concatenated together.

\subsection{Training and Implementation Details}
\subsubsection{Implementation Details}
SAMNet is implemented using MI-Prometheus~\cite{kornuta2018accelerating}, a framework based on PyTorch. We trained all our models using NVIDIA's GeForce GTX TITAN X GPUs. SAMNet was trained using 8 reasoning steps and a hidden state size of 128. The external memory has 128-bit slots for all experiments. We trained our model until convergence, but we also set a training time limit of 80 hours.

\subsubsection{Training and testing Methodology}
We compared our model to the original COG model~\cite{yang2018dataset} using their implementation (https://github.com/google/cog) and scores provided by the authors through personal communications. We used the same training parameters detailed in the original paper and reproduced their results. For the generalization experiments from canonical to hard, we used the verified model and obtained new results that were not reported in the reference paper. In Table 1, the COG section shows 4 columns divided into two parts, ``paper'' and ``ours'', which distinguish between the results reported in the paper and our own experiments. Our experiments focused on the 22 classification tasks provided by the COG dataset. More details about the dataset are given in Table 2.

First we evaluated SAMNet's performance on the canonical setting and compared it with the COG model. As shown in Table 1, we could achieve a small improvement in accuracy, from 97.6\% for the COG model to 98\% for SAMNet. Next we focused on the hard setting of the dataset, which increases the number of distractors from 1 to 10 and the number of frames from 4 to 8. The first approach was to train a model on the hard training set, and test it on the hard test set. This is the same approach used by the COG paper~\cite{yang2018dataset} to evaluate performance on the hard dataset. We achieve a test accuracy of 96.1\%, which represents a 16\% improvement over the COG model score (see Table 1).
%It shows that SAMNet's design choices make a difference when it comes to harder tasks with longer sequences and more distractor objects.
The second approach was to see if the models can generalize from the easy to the hard setting. For this experiment, we trained a model on the canonical dataset, and directly tested on the hard dataset. This experiment highlighted the most significant difference between SAMNet and the baseline COG model.
Finally we trained a model on the canonical data set, fine-tuned it on the hard data set using only 25k iterations, and tested on the hard dataset. Thanks to fine-tuning, we can observe a significant improvement from 91.6\% to 96.5\% test accuracy, which represents state-of-the-art accuracy for the hard setting (classification tasks). After a short fine-tuning process, the transferred model could generalize well to harder tasks and even surpass the accuracy obtained in the first approach. We note that the third approach is also twice as fast as the first one, and it is more effective in terms of accuracy. A more granular analysis of accuracy per task shows a major improvement for the two hardest tasks, AndCompareShape and AndCompareColor. Those two tasks represent a higher level of difficulty due to the number of objects to be remembered in order to answer the question correctly. As we can see in Table 1, we could achieve a 12\% improvement for the canonical data set and almost a 30\% improvement for the hard dataset. The large improvement in these memory-intensive tasks indicates that SAMNet's external memory plays a crucial role in our results. The training and implementation details are in the appendix.

\begin{table}[htb]
\caption{COG test set accuracies for SAMNet \& COG models. Below `paper' denotes results from~\cite{yang2018dataset} while `ours' denotes results of our experiments using their implementation~\cite{yang2018implement}}
\centering
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{lcccccccccc}
\toprule
Model & & SAMNet & && && COG&& \\
\cmidrule{2-5} \cmidrule{7-11}
&&&&& & paper & ours & ours & paper&\\
\cmidrule{7-9} \cmidrule{10-11}
Trained on & canonical & canonical & canonical & hard & & canonical & canonical & canonical & hard \\
Fine tuned on & - & - & hard & - & & - & - & hard & - \\
Tested on & canonical & hard & hard & hard & & canonical & hard & hard & hard \\
\midrule
Overall accuracy & 98.0 & 91.6 & 96.5 & 96.1 & & 97.6 & 65.9 & 78.1 & 80.1 \\
\midrule
AndCompareColor & 93.5 & 82.7 & 89.2 & 80.6 & & 81.9 & 57.1 & 60.7 & 51.4 \\
AndCompareShape & 93.2 & 83.7 & 89.7 & 80.1 & & 80.0 & 53.1 & 50.3 & 50.7 \\
AndSimpleCompareColor & 99.2 & 85.3 & 97.6 & 99.4 & & 99.7 & 53.4 & 77.1 & 78.2 \\
AndSimpleCompareShape & 99.2 & 85.8 & 97.6 & 99.2 & & 100.0 & 56.7 & 79.3 & 77.9 \\
CompareColor & 98.1 & 89.3 & 95.9 & 99.7 & & 99.2 & 56.1 & 67.9 & 50.1 \\
CompareShape & 98.0 & 89.7 & 95.9 & 99.2 & & 99.4 & 66.8 & 65.4 & 50.5 \\
Exist & 100.0 & 99.7 & 99.8 & 99.8 & & 100.0 & 63.5 & 96.1 & 99.3 \\
ExistColor & 100.0 & 99.6 & 99.9 & 99.9 & & 99.0 & 70.9 & 99.0 & 89.8 \\
ExistColorOf & 99.9 & 95.5 & 99.7 & 99.8 & & 99.7 & 51.5 & 76.1 & 73.1 \\
ExistColorSpace & 94.1 & 88.8 & 91.0 & 90.8 & & 98.9 & 72.8 & 77.3 & 89.2 \\
ExistLastColorSameShape & 99.5 & 99.4 & 99.4 & 98.0 & & 100.0 & 65.0 & 62.5 & 50.4 \\
ExistLastObjectSameObject & 97.3 & 97.5 & 97.7 & 97.5 & & 98.0 & 77.5 & 61.7 & 60.2 \\
ExistLastShapeSameColor & 98.2 & 98.5 & 98.8 & 97.5 & & 100.0 & 87.8 & 60.4 & 50.3 \\
ExistShape & 100.0 & 99.5 & 100.0 & 100.0 & & 100.0 & 77.1 & 98.2 & 92.5 \\
ExistShapeOf & 99.4 & 95.9 & 99.2 & 99.2 & & 100.0 & 52.7 & 74.7 & 72.7 \\
ExistShapeSpace & 93.4 & 87.5 & 91.1 & 90.5 & & 97.7 & 70.0 & 89.8 & 89.8 \\
ExistSpace & 95.3 & 89.7 & 93.2 & 93.3 & & 98.9 & 71.1 & 88.1 & 92.8 \\
GetColor & 100.0 & 95.8 & 99.9 & 100.0 & & 100.0 & 71.4 & 83.1 & 97.9 \\
GetColorSpace & 98.0 & 90.0 & 95.0 & 95.4 & & 98.2 & 71.8 & 73. & 92.3 \\
GetShape & 100.0 & 97.3 & 99.9 & 99.9 & & 100.0 & 83.5 & 89.2 & 97.1 \\
GetShapeSpace & 97.5 & 89.4 & 93.9 & 94.3 & & 98.1 & 78.7 & 77.3 & 90.3 \\
SimpleCompareShape & 99.9 & 91.4 & 99.7 & 99.9 & & 100.0 & 67.7 & 96.7 & 99.3 \\
SimpleCompareColor & 100.0 & 91.6 & 99.8 & 99.9 & & 100.0 & 64.2 & 90.4 & 99.3 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{results}
\end{table}

\begin{table}[htb]
\caption{COG Dataset parameters for the canonical setting and the hard setting}
\centering
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{cccccc}
\toprule
Dataset & number of frames & maximum memory duration & number of distractors & size of training set & size of validation/test set \\
\midrule
Canonical setting & 4 & 3 & 1 & 10000320 & 500016 \\
\midrule
Hard setting & 8 & 7 & 10 & 10000320 & 500016 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:parameters}
\end{table}
\chapter{Problem statement -- State of the art}
\label{ch:problem-statement}
\markboth{Problem statement -- State of the art}{}

\paragraph{}
The key idea of this thesis is an architecture that enables the implementation of economically attractive solutions that offer High-Availability to the IoT Edge Layer. It also proposes the optimal way for critical infrastructure to be fault-tolerant, both in terms of services/functionality and in terms of data quality (data integrity). However, preliminary discussions with experts of the domain (e.g.\ balena, EdgeX, the blockchain domain) led us to a more generic approach: that of resource sharing, in the spirit of the sharing economy, in the domain of IoT Edge resources.

\section{The Sharing Economy}
The Sharing Economy, or “gig” economy, is a relatively new term \cite{sharingeconomy}, popularised by famous companies like Airbnb or Uber. The idea is that services and goods can be distributed in a decentralized fashion, not by companies with employees but by private individuals who rent currently unused resources to other private individuals in a peer-to-peer fashion. In the above-mentioned examples, individuals can rent their house (or part of it) when they don’t use it (Airbnb), or rent the “unused” seats of their cars, working, in essence, as taxi drivers whenever they choose to (e.g.\ Uber, Lyft, etc.). Figure \ref{fig:sharing_economy} illustrates the concept of the “Sharing Economy” for Airbnb, which in essence provides market access to both parties of the value chain (demand, offer).

\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{images/sharingEconomy.jpg}
\caption{The Sharing Economy Model; use-case of Airbnb\cite{airbnb}}
\label{fig:sharing_economy}
\end{figure}

\section{Sharing Economy in the Internet of Things?}
We now attempt to apply the principles outlined in the above examples to the domain of IoT Edge resources. By tapping into that notion, IoT edge nodes could in fact share resources in order to provide services to one another, without necessarily belonging to the same stakeholder or even forming a federation. By incentivizing the IoT Edge Nodes to share resources, with a currency of sorts, we can foresee a \acrfull{m2m} marketplace, where IoT Edge Nodes will dynamically rent the resources that they need. In this scenario, an IoT Edge Node could provide Fault-Tolerance capabilities to nodes that are willing to pay for them. This could lead to a much more sustainable path of proliferating IoT Edge Computing, since it can considerably reduce the upkeep cost. There is no sense in owning resources that you don’t use, only to anticipate a spike or an exception in the normal activity, when you can instead rent that extra computational power ad hoc, if ever there is a need for it.

Indeed, the principle can be expanded to cover not only conventional computing resources, which we will define in Section \ref{st:edge-resources}, but even IoT-specific “resources”. We argue that the handling of constrained devices (provision, management, data aggregation, semantic notation, metadata addition) can be viewed as a resource to be rented. Dynamic handover of Constrained Devices is an example of IoT-specific “resources” that would provide value in a Fault-Tolerance scenario, as the Edge will not only need to take up any processing or decision making, but will also need to handle the data generators for a seamless transition from abnormal to normal operation.
In essence, we envision third-party applications and/or services running on the same physical Edge device. This physical Edge device will provide the needed resources to those applications so that they can perform their tasks. It is important here to underline that our solution is not about a multi-tenant platform, where different applications from different parties statically share the same hardware. Each edge device is owned and operated by a distinct entity (an IoT platform provider, for example), which is able to procure some resources to other parties so that they run a part (or even the entirety) of their application logic on the hardware. It is a dynamic relationship that can change rapidly due to the marketplace foundation and the open-ended offer and demand of resources (free market). In other words, we believe that a platform for IoT Edge Computing as a service is a much more interesting approach, while Fault-Tolerance is only a use case for it. It is interesting to note that there is not much literature concerning the high availability of edge computational resources achieved by sharing them in a p2p economy, other than conventional methods such as backup systems.

\section{MEC framework}
This idea has already been explored in the \acrfull{mec} framework\cite{7574435}, which is believed to be critical for the introduction of 5G networks. Interestingly, while 4G is focused on delivering video content, 5G will focus on the growing M2M connections that are a direct result of the increase in IoT traffic and services\cite{cisco-visualise}. MEC offers cloud-computing capabilities in the RAN environment (Radio Access Network) while also offering an information technology service environment at the edge. MEC deployments are characterized by very low latency, increased bandwidth and real-time access to radio networks. Moreover, and this is the part that is of interest to us, operators can open their RAN edge to authorized stakeholders so they can deploy their own applications and services toward their own subscribers (i.e.\ private, enterprise and vertical solutions). MEC was launched in 2014 by the European Telecommunications Standards Institute (ETSI) MEC Industry Specification Group (ISG), which at the time of writing has finished its work regarding the reference architecture for MEC.

\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{images/mecarch.png}
\caption{The High-Level Architecture of the MEC framework\cite{7574435}}
\label{fig:mec}
\end{figure}

Figure \ref{fig:mec} illustrates the outline of the MEC reference architecture. The top layer is the whole-system management layer, which has overall visibility of the system, while the bottom layer is the connectivity layer, which includes all the possible connectivity protocols, from local area networks (LAN) to cellular Internet. The layer in between, the host level, features the ME platform and applications module, as well as the virtualization environment on top of which the applications run. The ME platform is responsible for procuring for the ME applications all the necessary resources and services they need; in essence, MEC can be seen as a cloud server running at the edge of a mobile network. While MEC is extremely relevant to our solution and a cutting-edge piece of standardized software, it diverges significantly from what we are proposing, as it assumes trust between the MEC host and the MEC applications. Moreover, it is not designed for a dynamic supply and demand of Edge IoT Resources, but rather for more longstanding federation agreements.
Finally, it is important to note that MEC was designed with telecommunications providers in mind, tailoring the architecture to their unique characteristics. Although more diverse deployments are discussed in the MEC bibliography, it remains to be seen how effective they are with regard to the private small-scale provider. It is important to note that, as research on the solution continues, it is possible that the technology stack will shift to use MEC, as the standardisation will improve the quality of the end result.

\section{Edge Resources}
\label{st:edge-resources}
According to Mohammad Aazam et al.\cite{aazam2015dynamic}, who illustrate a dynamic resource provisioning mechanism for micro datacenters, computational resources are modeled by a set with the following elements: \textit{R = [CPU, storage, memory, bandwidth]}. In this research work, we will enrich the definition of an Edge Resource with a superset of \textit{R}, \textit{R’ = [R, connectivity interface, uptime, battery power]}. The addition of these resources was necessary to model services that demand cyber-physical I/O, as for example the handover of Bluetooth sensors. The service provider, apart from the apparent CPU, memory \& storage resources, must also “spend” a Bluetooth interface of the device. Depending on the use case, battery power or uptime could be an extremely valuable resource. In this context, uptime is defined as the total amount of time that the service will “run” (and for which the service provider must ensure the \acrshort{qos}). Note that the estimation, procurement, management and monetary evaluation of resources are beyond the scope of this thesis. Nevertheless, the reader will find the above definition of an Edge Resource enlightening, as we dive into the architecture and implementation, in order to better grasp the positioning of our solution.
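For illustration only, the enriched resource set \textit{R'} could be represented by a simple data structure such as the following Python sketch; the field names and units are assumptions made for this example and do not correspond to a defined specification of the platform.

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class EdgeResource:
    """Illustrative model of the enriched resource set R'."""
    cpu_cores: float          # R: CPU
    storage_gb: float         # R: storage
    memory_gb: float          # R: memory
    bandwidth_mbps: float     # R: bandwidth
    connectivity: list        # added: connectivity interfaces, e.g. ["bluetooth", "lora"]
    uptime_hours: float       # added: time the offered service is guaranteed to run
    battery_wh: float         # added: available battery power

# A hypothetical offer that an edge node could publish on the marketplace.
offer = EdgeResource(cpu_cores=2, storage_gb=16, memory_gb=1, bandwidth_mbps=50,
                     connectivity=["bluetooth"], uptime_hours=24, battery_wh=0)
\end{verbatim}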
Regarding resource management, there are works on efficient scheduling and adaptive offloading schemes that reduce complexity\cite{feng2017ave}, with application to autonomous vehicles. Most researchers focus on the optimization of load distribution in a homogeneous network of edge nodes that can work collaboratively and can share the computational load. An example is Rogers et al.\cite{rogers2012financial}, who present a methodology for resource allocation amongst Cloud computers (without resource prediction), while Deelman et al.\ present trade-offs and performance metrics for various resource provisioning plans. Zhen Xiao et al.\cite{xiao2012dynamic} present a resource allocation system that exploits virtualization technology in order to provision resources to services according to their needs, a vastly different model since it is apparent that it refers to the same physical Edge Node.

\section{Fault-Tolerance/High-Availability}
High availability or Fault-Tolerance is common cloud terminology and refers to the ability of a system to continue its functionality in the case of a critical failure, thanks to the replication of the services to multiple machines in a distributed manner. While system redundancy has been partially solved by architectures such as the multi-region architecture, these solutions apply only to the cloud layer, and usually the physical mainframes belong to the same stakeholder. Our research has indicated that fault-tolerance at the edge layer is indeed a new concept, since the edge rarely performed any critical functionality, apart from aggregating sensor data and forwarding them to the cloud. A research study by D. LeClair et al. suggested that Edge Computing availability should surpass that of a data center (99.9\%) and should support at least five nines (99.999\%) availability\cite{ismail2015evaluation}.

Sudip Misra et al. propose an algorithm for fault-tolerance of IoT devices, regarding the routing of the IoT devices, using Learning Automata as the core of the proposed algorithm \cite{misra2012adaptive}. Bukhary Ikhwan Ismail et al. propose a scheme utilizing the Docker container technology as an Edge Computing (EC) platform, meeting a list of requirements that they defined as necessary: 1) deployment and termination; 2) resource \& service management; 3) fault tolerance; and 4) caching \cite{ismail2015evaluation}. The container technology is in fact part of our solution, as mentioned in the Architecture Chapter \ref{ch:system-architecture}, but it is only part of the solution, since the authors use Docker Swarm and a local Docker image registry for each group of edge nodes. This creates multiple “single” points of failure, since a failure of the registry can result in the inability of the system to load new images. Details regarding the container technology and an in-depth explanation will be given in Section \ref{st:containers}. Such approaches are inherently different from our proposal, since we will be targeting a broader spectrum of Edge devices, where trust between the actors (devices) is not a prerequisite.

Our approach is closer to the architecture and principles of Helium \cite{helium}, a startup company which aims to create a decentralized network where nodes are compensated for providing long-range network bandwidth to any interested sensor, as shown in Figure \ref{fig:helium}. The idea is to replicate the model of The Things Network\cite{ttn}, where people contribute their LoRa gateways to the network for anyone to use, but with the addition of an economic incentive to do so. Moreover, they use another communication protocol, called LongFi\cite{gemmell-2019}, a bi-directional data transfer system that is closely related to LoRa in terms of specifications (long range, low bandwidth). On top of that, the system is decentralized, meaning that the network is able to reach a consensus on who is actually providing wireless network coverage without depending on a centralized authority. Our work was greatly influenced by Helium, as network coverage could indeed be one of the resources that our architecture will be able to procure. An example would be an Edge device that provides the activities of data aggregation and north-side forwarding to any constrained device that is willing to pay for them (i.e.\ as a service).

\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{images/helium.png}
\caption{Illustration of a Helium gateway providing coverage to IoT devices\cite{helium}}
\label{fig:helium}
\end{figure}

\section{Storage Layer}
Another project that not only influenced the architecture and approach of our solution, but is also included as part of the proposed architecture, is Filecoin. Filecoin wants to turn the world’s unused storage into a decentralized algorithmic market, where anyone can offer his unused storage and be compensated for it, while clients can use the network to store their data. It is built on top of a very successful distributed file system called \acrfull{ipfs}, which is usually referred to as the decentralized HTTP.
Although the project is still under heavy development, it is foreseen that the free-market economics of the project will render it a cheaper solution versus the already established storage solutions (such as Google Cloud, AWS, etc.). The Filecoin protocol will be described in depth in Section \ref{st:filecoin}. Decentralization offers the elimination of the single point of failure, a point which is more interesting than it seems, even with the high-availability services that exist today. Recently, a user on Twitter \cite{gigazine} mentioned that, because of an AWS datacenter power failure (which was coupled with a failure of the on-site backup power generators), the entire datastore of his company's data was erased. Even though the data was restored, thanks to the timely backup routine, in different scenarios where the data influx is much greater (such as a typical IoT scenario) this power failure could create a hole in the historical data of the platform. Finally, an interesting argument for decentralization is that censorship is impossible to conduct in a decentralized ecosystem. We must remember that although the Cloud providers have a strong incentive to act faithfully, they are in fact in control of the data we store, a point which, depending on the application vertical, can have more or less importance, especially in privacy-sensitive scenarios (e.g.\ healthcare technology).

Taking into account our need for a storage network, as a means to reduce storage costs, and for decentralization, there are a few projects that we could potentially use, namely: 1) StorJ\cite{storj}, 2) Sia\cite{sia} and 3) Filecoin\cite{filecoinlabs}. We outline some of their attributes and we conclude on why Filecoin was chosen.

\begin{enumerate}
\item \textbf{Sia:} Sia is an early project that started back in 2013 at HackMIT and officially launched in 2015. It aims to leverage underutilized hard drive capacity around the world to create a data storage marketplace which is more efficient and cheaper than current solutions. They have a working product and interesting design features. They have a preference for Proof-of-Work (PoW), have their own ASIC chips for Siacoin mining and utilise file contracts to set rules and requirements for storage (similar to a smart contract). Their Proof of Storage algorithm is utilised to further protect and validate proofs and file contracts on the network.

\item \textbf{StorJ:} Another decentralised storage project, built on the Ethereum network, that has quite a lot of community users and is dedicated to the open-source ethos. Storj is a platform, cryptocurrency, and suite of decentralized applications that allows you to store data in a secure and decentralized manner. Their technology revolves around file sharing, similar to how torrents work, and distributes parts of the files to users in the network. When a user wants the file, he requests it and Storj uses a distributed hash table to locate all the shards and piece them together. These files are also encrypted before sharing, and the person uploading them has their own private key to validate ownership. The other entity of the company, Storj Labs, is the for-profit side that rents out its network to thousands of users and charges for the network usage. This is a slightly more centralised model and competes with the likes of Dropbox and Google Drive. They also have partnerships with Microsoft Azure and Heroku to deploy some of their development tools, which is a great initiative for the open-source developer ecosystem.
\item \textbf{Filecoin + IPFS:} The InterPlanetary File System (IPFS)\cite{ipfslabs} is a protocol started by Protocol Labs to create a new way to serve information on the web. Currently, the Internet works off location-based addressing, where you go to a URL like medium.com, which has an IP of X.X.X.X, and then you get served your articles. These URLs point to certain servers around the world. Instead, what IPFS does is serve information based on what it is, as opposed to where it is (location). With their routing algorithms, a user can choose where he gets the content from and can set his privacy preferences on which peers/nodes he trusts to receive the files from. A high-level overview of the protocol is shown in Figure \ref{fig:ipfs} \cite{ipfslabs}.
\begin{itemize}
\item Hash addressing makes the content immutable. It does not disappear, unlike with the current HTTP protocol.
\item Saves bandwidth by collecting content from multiple nodes and pieces instead of from one server.
\item Access to content “offline” or in low-connectivity third-world or rural areas, in the same sense that git works offline.
\item Censorship resistant.
\item It is built around the decentralisation ethos.
\end{itemize}
To further utilise this technology, Filecoin was proposed as a way of creating a decentralised storage network by using the unused storage lying around and incentivizing users to be part of the sharing economy through Filecoins (FIL), using the IPFS protocol. The project is still under heavy development, so it will be interesting to see how they build with their proposed Proof of Space and Proof of Replication consensus algorithms, which will be discussed further in the Architecture Chapter \ref{ch:system-architecture}.
\end{enumerate}

\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{images/ipfs.png}
\caption{High-Level overview of the IPFS protocol \cite{ipfslabs}}
\label{fig:ipfs}
\end{figure}

\subsection{Why Filecoin was chosen}
Filecoin was chosen because it already utilizes a protocol that has been used for several years, establishing itself as the standard for the distributed storage of files in the blockchain community. Moreover, Protocol Labs is a proven development team, having already developed IPFS itself, illustrating serious technical capabilities as well as a thorough technical approach. Finally, the development of Filecoin is considerably slower than the other projects, as the team establishes the basic research on top of which the technology will be built. The research-based orientation and the technical aspects of the Filecoin whitepaper\cite{filecoin-whitepaper} show great promise and care for the engineering quality of the product, aspects that are highly valued in a domain as pioneering as blockchain. We wanted to integrate into our architecture a solution that can be viewed as long-term, supporting in the meantime its development by being early adopters and contributors.

\subsection{Data Integrity}
Data integrity is the overall completeness, accuracy and consistency of data, which means that the data must be collected and stored in a tamper-proof and undeletable fashion. This is extremely important for certain problem domains, such as critical infrastructure. Data integrity can be achieved by requiring the Edge systems to digitally sign the sensor data while maintaining end-to-end cryptography between the sensor and the Edge.
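As a minimal illustration of this idea, the Python sketch below hashes a batch of sensor readings and authenticates the digest with an HMAC; a real deployment would presumably rely on asymmetric digital signatures as described above, and the key handling shown here is purely hypothetical.

\begin{verbatim}
import hashlib
import hmac
import json

EDGE_KEY = b"hypothetical-shared-secret"   # stand-in for the edge node's signing key

def seal_batch(readings):
    """Hash a batch of sensor readings and produce a tamper-evident tag."""
    payload = json.dumps(readings, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    tag = hmac.new(EDGE_KEY, payload, hashlib.sha256).hexdigest()
    return {"readings": readings, "sha256": digest, "tag": tag}

def verify_batch(sealed):
    """Recompute the tag and compare; any modification of the readings is detected."""
    payload = json.dumps(sealed["readings"], sort_keys=True).encode()
    expected = hmac.new(EDGE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["tag"])

sealed = seal_batch([{"sensor": "strain-01", "value": 0.42, "t": 1585000000}])
assert verify_batch(sealed)
\end{verbatim}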
Moreover, data integrity dictates that data storage is inherently highly available, meaning that data storage is just as important as data collection. The problem of data integrity has been extensively studied in all traditional computing and communication systems, and some preliminary results exist for sensor networks\cite{acharya2008data}. Data integrity became extremely relevant after the trend of storing data on the cloud. While it is easy to check data integrity after completely downloading the data to be checked, downloading large amounts of data just for checking data integrity is a considerable waste of bandwidth. Moreover, remote data integrity validation was first introduced using RSA (Rivest–Shamir–Adleman) based methods and afterwards using a challenge-response model, by Shah et al.\cite{shah2007auditing}. In the second model, the user challenges the data storage system to respond with a succinct proof which shows that it is indeed storing the data. The simplest of such methods is the division of the data into blocks and hashing them, but there are other methods with increased efficiency\cite{kumar2011data}. Blockchain technology is a recent candidate for this issue, as it combines the characteristics of an immutable record with a distributed database. There are currently two main usages of blockchain to enforce data integrity on the data pipeline.
\begin{enumerate}
\item Store the data directly on the blockchain. This is extremely secure but raises a lot of issues regarding scalability and performance, and it is infeasible for large streams of data.
\item Package the data and store a hash on the blockchain. Thus, an immutable, succinct record of that data is created. A user needs both the data and the hash to verify the correctness of the data.
\end{enumerate}

\section{Use Cases}
As the thesis attempts to introduce an innovative concept in a new domain, it is pertinent to introduce two (2) use cases that will help the reader understand the usability of the proposed architecture. The first use case is the one that was chosen to be the objective of the proof-of-concept implementation, the fault-tolerance scenario. The second is more generic, illustrating the ad-hoc provision of services.

\subsection{Use Case 1: Fault-Tolerance}
The fault-tolerance use case was inspired by a scenario local to the University of Patras. Near the university, there is a bridge called the “Rio-Antirio bridge” or, as officially named, the “Charilaos Trikoupis Bridge”. It is one of the world's longest multi-span cable-stayed bridges and the longest of the fully suspended type, as shown in Figure \ref{fig:riobridge}. A true marvel of engineering, it has 4 pillars over an exceptionally difficult area with numerous challenges, namely the frequent earthquakes, the weak ground and the depth of the gulf. A bridge of this magnitude houses numerous sensors which aid the bridge operators in verifying the integrity and safety of the bridge, while also acting proactively on any challenge that may arise \cite{rion-antirion-bridge}. It is safe to presume that there is an Edge system that aggregates this data, maybe performs some processing and analytics, and then feeds the cloud system that offers the rest of the services to the operators.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{images/rio-bridge.jpg}
\caption{The Rio-Antirio Bridge in Western Greece \cite{riowiki}}
\label{fig:riobridge}
\end{figure}

It is also safe to presume that there are protocols in place for the case that the computer system (“Edge”) fails and the operators are left effectively blind. What would happen if the University of Patras, which is nearby, could in fact jump in and offer a micro-datacenter in order to take over the bridge management until the issue is fixed? Due to the extreme locality of the two computer systems, the University of Patras can still be considered “edge” for the Rio-Antirio bridge IoT system. In Figure \ref{fig:rio-map} we see a map overview of the scenario, where the university is clearly indicated by the blue marker at the bottom, while the Rio bridge side is at the top. The black line that connects the two landmarks is approximately 2.3 km long, illustrating the extreme locality of the two Edge computers.

\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{images/usecase1map.png}
\caption{A map illustrating the close proximity of the Bridge and the University}
\label{fig:rio-map}
\end{figure}

The handover of the sensors and all relevant functionality would happen automatically, as soon as a failure was detected by the platform at the University of Patras. The two edge systems would have a priori signed a digital contract, containing all the necessary information, such as needed resources, services, protocols, contract activation conditions and, of course, payment. The goal is for this dynamic federation to happen in an automated fashion, on a platform that will guarantee the Quality of Service needed for such an important piece of critical infrastructure.

\subsection{Use case 2: Drone passage over IoT Edge Nodes}
The second use case is more generic, in order to showcase the robustness of the architecture over a broad spectrum of issues. In this use case, we envision a drone that flies over a predetermined route in order to perform some specialized functionality. This drone activity is characterized by three important aspects:
\begin{enumerate}
\item It generates a large amount of raw data (such as a video feed).
\item It needs to perform pre-processing before sending the data to the cloud due to bandwidth limitations.
\item Limited battery capacity.
\end{enumerate}

It is apparent that the processing of the data cannot be done on the drone itself, as the battery could not realistically handle the consumption demands of the CPU on top of the inherently power-hungry nature of a drone (i.e.\ the motors). For this reason, the drone, knowing the route beforehand, has reached an agreement with a number of Edge nodes that are geographically near the route, in order to offload the pre-processing and cloud forwarding to them. The drone would then simply fly and stream the data to them, while the IoT Edges will know exactly what to do and what specifications their services need to meet. The streaming could use a short-range protocol such as WiFi or Bluetooth. Figure \ref{fig:drone_map} illustrates an autopilot program for drones \cite{ardupilot}, which enables a drone to fly over a predetermined route.

\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{images/droneroute.png}
\caption{The GUI of a Drone route mapping software\cite{ardupilot}}
\label{fig:drone_map}
\end{figure}
% ----------------------------------------------------------------------------- % HEADER % ----------------------------------------------------------------------------- \documentclass[a4paper, 10pt]{article} \usepackage{jheppub} \usepackage[T1]{fontenc} \usepackage{colortbl,xcolor,float} \definecolor{orange}{rgb}{1,0.5,0} % ----------------------------------------------------------------------------- % COVER PAGE % ----------------------------------------------------------------------------- \title{{\includegraphics[scale=.4]{logo.png}}\ The LaTeX report} \author{Generated by elijahsheridan on 17 September 2020, 11:19:27} \abstract{ This report has been generated automatically by {\sc MadAnalysis} 5.\\$~$\\ Please cite:\\ \begin{quote} \textbf{E.~Conte, B.~Fuks and G.~Serret},\\ \textit{MadAnalysis 5, A User-Friendly Framework for Collider Phenomenology},\\ Comput. Phys. Commun. {\bf 184} (2013) 222-256,\\ arXiv:1206.1599 [hep-ph].\\ \end{quote} To contact us:\\ \begin{quote} \textbf{http://madanalysis.irmp.ucl.ac.be}\\ \textbf{[email protected]}\\ \end{quote} } % ----------------------------------------------------------------------------- % BEGIN DOCUMENT % ----------------------------------------------------------------------------- \begin{document} \maketitle \flushbottom % ----------------------------------------------------------------------------- % SECTION Setup % ----------------------------------------------------------------------------- \newpage \section{ Setup} \subsection{ Command history} \texttt{ma5>set main.currentdir = /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization/\-ma\_scripts\\ } \texttt{ }\texttt{ }\texttt{ma5>\# set directory where running "./\-bin/\-ma5"\\ } \texttt{ }\texttt{ }\texttt{ma5>set main.currentdir = /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data \# need to change this directory path --> exit and type "pwd" to get the path\\ } \texttt{ }\texttt{ }\texttt{ma5>set main.lumi = 40\\ } \texttt{ }\texttt{ }\texttt{ma5>set main.fom.formula = 5\\ } \texttt{ }\texttt{ }\texttt{ma5>set main.fom.x = 0.0\\ } \texttt{ }\texttt{ }\texttt{ma5>\# import samples --> change the path to the LHE file\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-axion\_signal/\-axion\_signal\_gurrola\_cuts\_1MeV.lhe.gz as signal\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_0\_100\_merged.lhe.gz as bg\_vbf\_0\_100\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_100\_200\_merged.lhe.gz as bg\_vbf\_100\_200\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_200\_400\_merged.lhe.gz as bg\_vbf\_200\_400\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_400\_600\_merged.lhe.gz as bg\_vbf\_400\_600\\ } \texttt{ }\texttt{ }\texttt{ma5>import 
/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_600\_800\_merged.lhe.gz as bg\_vbf\_600\_800\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_800\_1200\_merged.lhe.gz as bg\_vbf\_800\_1200\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_1200\_1600\_merged.lhe.gz as bg\_vbf\_1200\_1600\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_1600\_inf\_merged.lhe.gz as bg\_vbf\_1600\_inf\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_0\_100\_merged.lhe.gz as bg\_dip\_0\_100\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_100\_200\_merged.lhe.gz as bg\_dip\_100\_200\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_200\_400\_merged.lhe.gz as bg\_dip\_200\_400\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_400\_600\_merged.lhe.gz as bg\_dip\_400\_600\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_600\_800\_merged.lhe.gz as bg\_dip\_600\_800\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_800\_1200\_merged.lhe.gz as bg\_dip\_800\_1200\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_1200\_1600\_merged.lhe.gz as bg\_dip\_1200\_1600\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_1600\_inf\_merged.lhe.gz as bg\_dip\_1600\_inf\\ } \texttt{ }\texttt{ }\texttt{ma5>\# define bg and signal samples\\ } \texttt{ }\texttt{ }\texttt{ma5>set signal.type = signal\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_0\_100.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_100\_200.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_200\_400.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_400\_600.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set 
bg\_vbf\_600\_800.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_800\_1200.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_1200\_1600.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_1600\_inf.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_0\_100.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_100\_200.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_200\_400.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_400\_600.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_600\_800.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_800\_1200.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_1200\_1600.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_1600\_inf.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>\# a jet can be from a light quark or b quark\\ } \texttt{ }\texttt{ }\texttt{ma5>define jets = j\\ } \texttt{ }\texttt{ }\texttt{ma5>define e = e+ e-\\ } \texttt{ }\texttt{ }\texttt{ma5>define mu = mu+ mu-\\ } \texttt{ }\texttt{ }\texttt{ma5>define ta = ta+ ta-\\ } \texttt{ }\texttt{ }\texttt{ma5>define lept = e mu ta\\ } \texttt{ }\texttt{ }\texttt{ma5>define ax = 9000005\\ } \texttt{ }\texttt{ }\texttt{ma5>\# cuts\\ } \texttt{ }\texttt{ }\texttt{ma5>select M(a[1] a[2]) > 500\\ } \texttt{ }\texttt{ }\texttt{ma5>select PT(a[1]) > 300\\ } \texttt{ }\texttt{ }\texttt{ma5>select M(jets[1] jets[2]) > 750\\ } \texttt{ }\texttt{ }\texttt{ma5>select sdETA(jets[1] jets[2]) > 3.6 or sdETA(jets[1] jets[2]) < -3.6\\ } \texttt{ }\texttt{ }\texttt{ma5>\# define which plots to make\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PT(jets[1])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot ETA(jets[1])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PHI(jets[1])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PT(jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot ETA(jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PHI(jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot DELTAR(jets[1], jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot M(jets[1] jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot sdETA(jets[1] jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot M(a[1] a[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PT(a[1])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PT(a[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot THT\\ } \texttt{ }\texttt{ }\texttt{ma5>plot MET\\ } \texttt{ }\texttt{ }\texttt{ma5>plot TET\\ } \texttt{ }\texttt{ }\texttt{ma5>plot DELTAR(a[1], a[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot sdETA(a[1] a[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set the plot/\-graph parameters\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].xmax = 2000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].nbins = 200\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].titleX = "p\_\{T\}[j\_\{1\}] (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].xmin = -8\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].xmax = 8\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].nbins = 160\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].titleX = "\#eta[j\_\{1\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].xmin = -3.2\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].xmax = 3.2\\ } \texttt{ }\texttt{ }\texttt{ma5>set 
selection[7].nbins = 64\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].titleX = "\#phi[j\_\{1\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].xmax = 1000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].nbins = 100\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].titleX = "p\_\{T\}[j\_\{2\}] (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].xmin = -8\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].xmax = 8\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].nbins = 160\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].titleX = "\#eta[j\_\{2\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].xmin = -3.2\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].xmax = 3.2\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].nbins = 64\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].titleX = "\#phi[j\_\{2\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[11].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[11].xmax = 15\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[11].nbins = 75\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[11].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[11].titleX = "\#DeltaR[j\_\{1\},j\_\{2\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[12].xmin = 120\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[12].xmax = 2000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[12].nbins = 160\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[12].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[12].titleX = "M[j\_\{1\},j\_\{2\}] (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[13].xmin = 2.4\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[13].xmax = 8\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[13].titleX = "\#Delta\#eta(j\_\{1\},j\_\{2\})"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[14].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[14].xmax = 1000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[14].nbins = 400\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[14].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[14].titleX = "M[a\_\{1\},a\_\{2\}] (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[15].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[15].xmax = 1000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[15].nbins = 80\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[15].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[15].titleX = "p\_\{T\}[a\_\{1\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[16].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[16].xmax = 2000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[16].nbins = 400\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[16].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[16].titleX = "p\_\{T\}[a\_\{2\}] (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[17].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[17].xmax = 4000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[17].nbins = 80\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[17].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[17].titleX = 
"THT"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[18].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[18].xmax = 1000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[18].nbins = 200\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[18].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[18].titleX = "MET"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[19].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[19].xmax = 8000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[19].nbins = 80\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[19].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[19].titleX = "TET"\\ } \texttt{ }\texttt{ }\texttt{ma5>submit four\_cuts\_eff\_flow\_chart\\ } \texttt{ }\texttt{ }\subsection{ Configuration} \begin{itemize} \item MadAnalysis version 1.6.33 (2017/\-11/\-20). \item Histograms given for an integrated luminosity of \textcolor{blue}{40.0}\textcolor{blue}{ fb}$^{\textcolor{blue}{-1}}$\textcolor{blue}{.} \textcolor{blue}{} \end{itemize} % ----------------------------------------------------------------------------- % SECTION Datasets % ----------------------------------------------------------------------------- \newpage \section{ Datasets} \subsection{ signal} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{signal} events. \item Generated events: \textcolor{blue}{1000000 } events. \item Normalization to the luminosity: \textcolor{blue}{4094}\textcolor{blue}{ +/\-- }\textcolor{blue}{2 } events. \item Ratio (event weight): \textcolor{blue}{0.0041 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-axion\_signal/\-axion\_signal\_gurrola\_cuts\_1MeV.lhe.gz}& {\cellcolor{white} 1000000}& {\cellcolor{white} 0.102 @ 0.028\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_0\_100} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1000000 } events. \item Normalization to the luminosity: \textcolor{blue}{12150}\textcolor{blue}{ +/\-- }\textcolor{blue}{24 } events. \item Ratio (event weight): \textcolor{blue}{0.012 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. 
of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_0\_100\_merged.lhe.gz}& {\cellcolor{white} 1000000}& {\cellcolor{white} 0.304 @ 0.19\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_100\_200} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{965662 } events. \item Normalization to the luminosity: \textcolor{blue}{9695}\textcolor{blue}{ +/\-- }\textcolor{blue}{17 } events. \item Ratio (event weight): \textcolor{blue}{0.01 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_100\_200\_merged.lhe.gz}& {\cellcolor{white} 965662}& {\cellcolor{white} 0.242 @ 0.17\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_200\_400} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{984165 } events. \item Normalization to the luminosity: \textcolor{blue}{5413}\textcolor{blue}{ +/\-- }\textcolor{blue}{11 } events. \item Ratio (event weight): \textcolor{blue}{0.0055 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_200\_400\_merged.lhe.gz}& {\cellcolor{white} 984165}& {\cellcolor{white} 0.135 @ 0.2\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_400\_600} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1000000 } events. \item Normalization to the luminosity: \textcolor{blue}{986}\textcolor{blue}{ +/\-- }\textcolor{blue}{2 } events. \item Ratio (event weight): \textcolor{blue}{0.00099 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. 
of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_400\_600\_merged.lhe.gz}& {\cellcolor{white} 1000000}& {\cellcolor{white} 0.0247 @ 0.14\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_600\_800} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1000000 } events. \item Normalization to the luminosity: \textcolor{blue}{252}\textcolor{blue}{ +/\-- }\textcolor{blue}{1 } events. \item Ratio (event weight): \textcolor{blue}{0.00025 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_600\_800\_merged.lhe.gz}& {\cellcolor{white} 1000000}& {\cellcolor{white} 0.0063 @ 0.13\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_800\_1200} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{400839 } events. \item Normalization to the luminosity: \textcolor{blue}{114}\textcolor{blue}{ +/\-- }\textcolor{blue}{1 } events. \item Ratio (event weight): \textcolor{blue}{0.00028 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_800\_1200\_merged.lhe.gz}& {\cellcolor{white} 400839}& {\cellcolor{white} 0.00287 @ 0.16\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_1200\_1600} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{953803 } events. \item Normalization to the luminosity: \textcolor{blue}{20}\textcolor{blue}{ +/\-- }\textcolor{blue}{1 } events. \item Ratio (event weight): \textcolor{blue}{2.1e-05 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. 
of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_1200\_1600\_merged.lhe.gz}& {\cellcolor{white} 953803}& {\cellcolor{white} 0.000515 @ 0.16\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_1600\_inf} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{270148 } events. \item Normalization to the luminosity: \textcolor{blue}{7}\textcolor{blue}{ +/\-- }\textcolor{blue}{1 } events. \item Ratio (event weight): \textcolor{blue}{2.6e-05 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_1600\_inf\_merged.lhe.gz}& {\cellcolor{white} 270148}& {\cellcolor{white} 0.000191 @ 0.11\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_0\_100} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{2710847}\textcolor{blue}{ +/\-- }\textcolor{blue}{4614 } events. \item\textcolor{red}{Ratio (event weight): }\textcolor{red}{2.6 }\textcolor{red}{ - warning: please generate more events (weight larger than 1)!} \textcolor{red}{} \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_0\_100\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 67.8 @ 0.17\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_100\_200} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{1095362}\textcolor{blue}{ +/\-- }\textcolor{blue}{1528 } events. 
\item\textcolor{red}{Ratio (event weight): }\textcolor{red}{1.1 }\textcolor{red}{ - warning: please generate more events (weight larger than 1)!} \textcolor{red}{} \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_100\_200\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 27.4 @ 0.14\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_200\_400} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{239548}\textcolor{blue}{ +/\-- }\textcolor{blue}{414 } events. \item Ratio (event weight): \textcolor{blue}{0.23 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_200\_400\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 5.99 @ 0.17\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_400\_600} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{28798}\textcolor{blue}{ +/\-- }\textcolor{blue}{53 } events. \item Ratio (event weight): \textcolor{blue}{0.028 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_400\_600\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 0.72 @ 0.18\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_600\_800} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{662009 } events. 
\item Normalization to the luminosity: \textcolor{blue}{6674}\textcolor{blue}{ +/\-- }\textcolor{blue}{28 } events. \item Ratio (event weight): \textcolor{blue}{0.01 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_600\_800\_merged.lhe.gz}& {\cellcolor{white} 662009}& {\cellcolor{white} 0.167 @ 0.41\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_800\_1200} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{2942}\textcolor{blue}{ +/\-- }\textcolor{blue}{6 } events. \item Ratio (event weight): \textcolor{blue}{0.0028 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_800\_1200\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 0.0736 @ 0.17\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_1200\_1600} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{337115 } events. \item Normalization to the luminosity: \textcolor{blue}{513}\textcolor{blue}{ +/\-- }\textcolor{blue}{3 } events. \item Ratio (event weight): \textcolor{blue}{0.0015 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_1200\_1600\_merged.lhe.gz}& {\cellcolor{white} 337115}& {\cellcolor{white} 0.0128 @ 0.51\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_1600\_inf} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-post\_optimization\_studies/\-mad\_analyses} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. 
\item Normalization to the luminosity: \textcolor{blue}{187}\textcolor{blue}{ +/\-- }\textcolor{blue}{1 } events. \item Ratio (event weight): \textcolor{blue}{0.00018 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_1600\_inf\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 0.00469 @ 0.15\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} % ----------------------------------------------------------------------------- % SECTION Histos and cuts % ----------------------------------------------------------------------------- \newpage \section{ Histos and cuts} \subsection{Cut 1} \textbf{* Cut: select M ( a[1] a[2] ) > 500.0}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{20.0mm}|m{27.0mm}|m{27.0mm}|m{33.0mm}|m{32.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Events kept: K}& {\cellcolor{yellow} Rejected events: R}& {\cellcolor{yellow} Efficiency: K /\- (K + R)}& {\cellcolor{yellow} Cumul. efficiency: K /\- Initial}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 2827.3 +/\-- 29.6}& {\cellcolor{white} 1266.8 +/\-- 29.6}& {\cellcolor{white} 0.69057 +/\-- 0.00722}& {\cellcolor{white} 0.69057 +/\-- 0.00722}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 5.94 +/\-- 2.44}& {\cellcolor{white} 12144.4 +/\-- 23.2}& {\cellcolor{white} 0.000489 +/\-- 0.000201}& {\cellcolor{white} 0.000489 +/\-- 0.000201}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 25.13 +/\-- 5.01}& {\cellcolor{white} 9670.2 +/\-- 17.3}& {\cellcolor{white} 0.002592 +/\-- 0.000516}& {\cellcolor{white} 0.002592 +/\-- 0.000516}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 44.33 +/\-- 6.63}& {\cellcolor{white} 5368.9 +/\-- 12.7}& {\cellcolor{white} 0.00819 +/\-- 0.00122}& {\cellcolor{white} 0.00819 +/\-- 0.00122}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 18.9 +/\-- 4.3}& {\cellcolor{white} 967.96 +/\-- 4.51}& {\cellcolor{white} 0.01914 +/\-- 0.00436}& {\cellcolor{white} 0.01914 +/\-- 0.00436}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 7.29 +/\-- 2.66}& {\cellcolor{white} 244.79 +/\-- 2.68}& {\cellcolor{white} 0.0289 +/\-- 0.0106}& {\cellcolor{white} 0.0289 +/\-- 0.0106}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 4.24 +/\-- 2.02}& {\cellcolor{white} 110.52 +/\-- 2.03}& {\cellcolor{white} 0.0369 +/\-- 0.0176}& {\cellcolor{white} 0.0369 +/\-- 0.0176}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.925 +/\-- 0.940}& {\cellcolor{white} 19.67 +/\-- 0.94}& {\cellcolor{white} 0.0449 +/\-- 0.0456}& {\cellcolor{white} 0.0449 +/\-- 0.0456}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.363 +/\-- 0.588}& {\cellcolor{white} 7.296 +/\-- 0.588}& {\cellcolor{white} 0.0473 +/\-- 0.0767}& {\cellcolor{white} 0.0473 +/\-- 0.0767}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 495.3 +/\-- 22.3}& {\cellcolor{white} 2710351 +/\-- 4612}& {\cellcolor{white} 1.83e-04 +/\-- 8.21e-06}& {\cellcolor{white} 1.83e-04 +/\-- 8.21e-06}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& 
{\cellcolor{white} 1178.6 +/\-- 34.4}& {\cellcolor{white} 1094184 +/\-- 1525}& {\cellcolor{white} 1.08e-03 +/\-- 3.13e-05}& {\cellcolor{white} 1.08e-03 +/\-- 3.13e-05}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 1106.8 +/\-- 33.2}& {\cellcolor{white} 238442 +/\-- 413}& {\cellcolor{white} 0.004620 +/\-- 0.000139}& {\cellcolor{white} 0.004620 +/\-- 0.000139}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 364.0 +/\-- 19.0}& {\cellcolor{white} 28434.7 +/\-- 54.9}& {\cellcolor{white} 0.012640 +/\-- 0.000658}& {\cellcolor{white} 0.012640 +/\-- 0.000658}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 131.5 +/\-- 11.4}& {\cellcolor{white} 6542.9 +/\-- 29.3}& {\cellcolor{white} 0.0197 +/\-- 0.0017}& {\cellcolor{white} 0.0197 +/\-- 0.0017}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 76.70 +/\-- 8.64}& {\cellcolor{white} 2865.64 +/\-- 9.95}& {\cellcolor{white} 0.02607 +/\-- 0.00294}& {\cellcolor{white} 0.02607 +/\-- 0.00294}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 16.44 +/\-- 3.99}& {\cellcolor{white} 497.07 +/\-- 4.73}& {\cellcolor{white} 0.03201 +/\-- 0.00777}& {\cellcolor{white} 0.03201 +/\-- 0.00777}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 7.09 +/\-- 2.61}& {\cellcolor{white} 180.69 +/\-- 2.63}& {\cellcolor{white} 0.0378 +/\-- 0.0139}& {\cellcolor{white} 0.0378 +/\-- 0.0139}\\ \hline \end{tabular} \end{center} \end{table} \newpage \subsection{Cut 2} \textbf{* Cut: select PT ( a[1] ) > 300.0}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{20.0mm}|m{27.0mm}|m{27.0mm}|m{33.0mm}|m{32.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Events kept: K}& {\cellcolor{yellow} Rejected events: R}& {\cellcolor{yellow} Efficiency: K /\- (K + R)}& {\cellcolor{yellow} Cumul. 
efficiency: K /\- Initial}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 2603.7 +/\-- 30.8}& {\cellcolor{white} 223.6 +/\-- 14.5}& {\cellcolor{white} 0.92093 +/\-- 0.00508}& {\cellcolor{white} 0.63597 +/\-- 0.00752}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.753 +/\-- 0.868}& {\cellcolor{white} 5.19 +/\-- 2.28}& {\cellcolor{white} 0.127 +/\-- 0.136}& {\cellcolor{white} 6.20e-05 +/\-- 7.14e-05}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 4.46 +/\-- 2.11}& {\cellcolor{white} 20.67 +/\-- 4.54}& {\cellcolor{white} 0.1774 +/\-- 0.0762}& {\cellcolor{white} 0.000460 +/\-- 0.000218}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 13.66 +/\-- 3.69}& {\cellcolor{white} 30.66 +/\-- 5.52}& {\cellcolor{white} 0.3082 +/\-- 0.0694}& {\cellcolor{white} 0.002524 +/\-- 0.000682}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 10.39 +/\-- 3.21}& {\cellcolor{white} 8.5 +/\-- 2.9}& {\cellcolor{white} 0.550 +/\-- 0.114}& {\cellcolor{white} 0.01053 +/\-- 0.00325}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 4.98 +/\-- 2.21}& {\cellcolor{white} 2.31 +/\-- 1.51}& {\cellcolor{white} 0.684 +/\-- 0.172}& {\cellcolor{white} 0.01977 +/\-- 0.00877}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 3.20 +/\-- 1.76}& {\cellcolor{white} 1.04 +/\-- 1.02}& {\cellcolor{white} 0.754 +/\-- 0.209}& {\cellcolor{white} 0.0278 +/\-- 0.0154}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.719 +/\-- 0.833}& {\cellcolor{white} 0.206 +/\-- 0.452}& {\cellcolor{white} 0.777 +/\-- 0.433}& {\cellcolor{white} 0.0349 +/\-- 0.0404}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.279 +/\-- 0.519}& {\cellcolor{white} 0.0833 +/\-- 0.2870}& {\cellcolor{white} 0.770 +/\-- 0.699}& {\cellcolor{white} 0.0365 +/\-- 0.0677}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 54.8 +/\-- 7.4}& {\cellcolor{white} 440.5 +/\-- 21.0}& {\cellcolor{white} 0.1106 +/\-- 0.0141}& {\cellcolor{white} 2.02e-05 +/\-- 2.73e-06}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 247.5 +/\-- 15.7}& {\cellcolor{white} 931.0 +/\-- 30.5}& {\cellcolor{white} 0.2100 +/\-- 0.0119}& {\cellcolor{white} 2.26e-04 +/\-- 1.44e-05}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 444.3 +/\-- 21.1}& {\cellcolor{white} 662.5 +/\-- 25.7}& {\cellcolor{white} 0.4015 +/\-- 0.0147}& {\cellcolor{white} 1.85e-03 +/\-- 8.79e-05}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 236.0 +/\-- 15.3}& {\cellcolor{white} 128.0 +/\-- 11.3}& {\cellcolor{white} 0.648 +/\-- 0.025}& {\cellcolor{white} 0.008194 +/\-- 0.000531}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 91.95 +/\-- 9.53}& {\cellcolor{white} 39.53 +/\-- 6.27}& {\cellcolor{white} 0.70 +/\-- 0.04}& {\cellcolor{white} 0.01378 +/\-- 0.00143}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 53.56 +/\-- 7.25}& {\cellcolor{white} 23.14 +/\-- 4.79}& {\cellcolor{white} 0.6983 +/\-- 0.0524}& {\cellcolor{white} 0.01820 +/\-- 0.00246}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 11.29 +/\-- 3.32}& {\cellcolor{white} 5.15 +/\-- 2.26}& {\cellcolor{white} 0.687 +/\-- 0.114}& {\cellcolor{white} 0.02199 +/\-- 0.00647}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 4.60 +/\-- 2.12}& {\cellcolor{white} 2.49 +/\-- 1.57}& {\cellcolor{white} 0.649 +/\-- 0.179}& {\cellcolor{white} 0.0245 +/\-- 0.0113}\\ \hline \end{tabular} \end{center} 
\end{table} \newpage \subsection{Cut 3} \textbf{* Cut: select M ( jets[1] jets[2] ) > 750.0}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{20.0mm}|m{27.0mm}|m{27.0mm}|m{33.0mm}|m{32.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Events kept: K}& {\cellcolor{yellow} Rejected events: R}& {\cellcolor{yellow} Efficiency: K /\- (K + R)}& {\cellcolor{yellow} Cumul. efficiency: K /\- Initial}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 2140.3 +/\-- 32.0}& {\cellcolor{white} 463.5 +/\-- 20.3}& {\cellcolor{white} 0.8220 +/\-- 0.0075}& {\cellcolor{white} 0.52277 +/\-- 0.00781}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486 +/\-- 0.2204}& {\cellcolor{white} 0.705 +/\-- 0.839}& {\cellcolor{white} 0.0645 +/\-- 0.2830}& {\cellcolor{white} 4.00e-06 +/\-- 1.81e-05}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16 +/\-- 1.08}& {\cellcolor{white} 3.29 +/\-- 1.81}& {\cellcolor{white} 0.261 +/\-- 0.208}& {\cellcolor{white} 0.000120 +/\-- 0.000111}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.68 +/\-- 2.58}& {\cellcolor{white} 6.98 +/\-- 2.64}& {\cellcolor{white} 0.489 +/\-- 0.135}& {\cellcolor{white} 0.001235 +/\-- 0.000477}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 7.13 +/\-- 2.66}& {\cellcolor{white} 3.3 +/\-- 1.8}& {\cellcolor{white} 0.686 +/\-- 0.144}& {\cellcolor{white} 0.0072 +/\-- 0.0027}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 4.01 +/\-- 1.99}& {\cellcolor{white} 0.979 +/\-- 0.987}& {\cellcolor{white} 0.804 +/\-- 0.178}& {\cellcolor{white} 0.01589 +/\-- 0.00788}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 2.93 +/\-- 1.69}& {\cellcolor{white} 0.27 +/\-- 0.52}& {\cellcolor{white} 0.916 +/\-- 0.156}& {\cellcolor{white} 0.0255 +/\-- 0.0147}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.697 +/\-- 0.820}& {\cellcolor{white} 0.022 +/\-- 0.148}& {\cellcolor{white} 0.969 +/\-- 0.203}& {\cellcolor{white} 0.0338 +/\-- 0.0398}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.276 +/\-- 0.516}& {\cellcolor{white} 0.00281 +/\-- 0.05298}& {\cellcolor{white} 0.990 +/\-- 0.189}& {\cellcolor{white} 0.0361 +/\-- 0.0674}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 54.8 +/\-- 7.4}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.0 +/\-- 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16 +/\-- 1.78}& {\cellcolor{white} 244.4 +/\-- 15.6}& {\cellcolor{white} 0.01277 +/\-- 0.00714}& {\cellcolor{white} 2.89e-06 +/\-- 1.62e-06}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 25.80 +/\-- 5.08}& {\cellcolor{white} 418.5 +/\-- 20.5}& {\cellcolor{white} 0.0581 +/\-- 0.0111}& {\cellcolor{white} 1.08e-04 +/\-- 2.12e-05}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 35.47 +/\-- 5.95}& {\cellcolor{white} 200.5 +/\-- 14.1}& {\cellcolor{white} 0.1503 +/\-- 0.0233}& {\cellcolor{white} 0.001232 +/\-- 0.000207}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 26.86 +/\-- 5.17}& {\cellcolor{white} 65.09 +/\-- 8.03}& {\cellcolor{white} 0.2921 +/\-- 0.0474}& {\cellcolor{white} 0.004024 +/\-- 0.000775}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 32.73 +/\-- 5.69}& {\cellcolor{white} 20.83 +/\-- 4.55}& {\cellcolor{white} 0.6112 +/\-- 0.0666}& {\cellcolor{white} 0.01112 +/\-- 0.00193}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 
9.49 +/\-- 3.05}& {\cellcolor{white} 1.80 +/\-- 1.34}& {\cellcolor{white} 0.841 +/\-- 0.109}& {\cellcolor{white} 0.01848 +/\-- 0.00594}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 4.41 +/\-- 2.07}& {\cellcolor{white} 0.191 +/\-- 0.437}& {\cellcolor{white} 0.9584 +/\-- 0.0931}& {\cellcolor{white} 0.023 +/\-- 0.011}\\ \hline \end{tabular} \end{center} \end{table} \newpage \subsection{Cut 4} \textbf{* Cut: select sdETA ( jets[1] jets[2] ) > 3.6 or sdETA ( jets[1] jets[2] ) < -3.6}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{20.0mm}|m{27.0mm}|m{27.0mm}|m{33.0mm}|m{32.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Events kept: K}& {\cellcolor{yellow} Rejected events: R}& {\cellcolor{yellow} Efficiency: K /\- (K + R)}& {\cellcolor{yellow} Cumul. efficiency: K /\- Initial}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 767.0 +/\-- 25.0}& {\cellcolor{white} 1373.3 +/\-- 30.2}& {\cellcolor{white} 0.3584 +/\-- 0.0104}& {\cellcolor{white} 0.1873 +/\-- 0.0061}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486 +/\-- 0.2204}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.00e-06 +/\-- 1.81e-05}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16 +/\-- 1.08}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000120 +/\-- 0.000111}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04 +/\-- 2.46}& {\cellcolor{white} 0.638 +/\-- 0.799}& {\cellcolor{white} 0.905 +/\-- 0.114}& {\cellcolor{white} 0.001117 +/\-- 0.000454}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.4 +/\-- 2.1}& {\cellcolor{white} 2.69 +/\-- 1.64}& {\cellcolor{white} 0.622 +/\-- 0.182}& {\cellcolor{white} 0.00450 +/\-- 0.00213}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64 +/\-- 1.28}& {\cellcolor{white} 2.37 +/\-- 1.53}& {\cellcolor{white} 0.409 +/\-- 0.246}& {\cellcolor{white} 0.00650 +/\-- 0.00506}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.622 +/\-- 0.787}& {\cellcolor{white} 2.3 +/\-- 1.5}& {\cellcolor{white} 0.213 +/\-- 0.239}& {\cellcolor{white} 0.00542 +/\-- 0.00686}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0548 +/\-- 0.2337}& {\cellcolor{white} 0.642 +/\-- 0.789}& {\cellcolor{white} 0.0786 +/\-- 0.3225}& {\cellcolor{white} 0.00266 +/\-- 0.01135}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.0057 +/\-- 0.0755}& {\cellcolor{white} 0.271 +/\-- 0.511}& {\cellcolor{white} 0.0206 +/\-- 0.2703}& {\cellcolor{white} 0.000744 +/\-- 0.009856}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.0 +/\-- 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16 +/\-- 1.78}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.89e-06 +/\-- 1.62e-06}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.12 +/\-- 4.37}& {\cellcolor{white} 6.68 +/\-- 2.58}& {\cellcolor{white} 0.7410 +/\-- 0.0863}& {\cellcolor{white} 7.98e-05 +/\-- 1.83e-05}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3 +/\-- 3.5}& {\cellcolor{white} 23.21 +/\-- 4.82}& {\cellcolor{white} 0.3458 +/\-- 0.0799}& {\cellcolor{white} 0.000426 +/\-- 0.000122}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6 +/\-- 1.9}& {\cellcolor{white} 23.26 +/\-- 4.82}& 
{\cellcolor{white} 0.1340 +/\-- 0.0657}& {\cellcolor{white} 0.000539 +/\-- 0.000284}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47 +/\-- 1.21}& {\cellcolor{white} 31.26 +/\-- 5.56}& {\cellcolor{white} 0.0449 +/\-- 0.0362}& {\cellcolor{white} 0.000500 +/\-- 0.000412}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105 +/\-- 0.324}& {\cellcolor{white} 9.38 +/\-- 3.04}& {\cellcolor{white} 0.0111 +/\-- 0.0340}& {\cellcolor{white} 0.000205 +/\-- 0.000631}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975 +/\-- 0.09875}& {\cellcolor{white} 4.40 +/\-- 2.07}& {\cellcolor{white} 0.00221 +/\-- 0.02238}& {\cellcolor{white} 5.19e-05 +/\-- 5.26e-04}\\ \hline \end{tabular} \end{center} \end{table} \newpage \subsection{ Histogram 1} \textbf{* Plot: PT ( jets[1] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 370.671}& {\cellcolor{white} 277.6}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.08114}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 42.7818}& {\cellcolor{white} 7.815}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 109.408}& {\cellcolor{white} 25.35}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 214.843}& {\cellcolor{white} 58.14}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 360.116}& {\cellcolor{white} 74.81}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 512.148}& {\cellcolor{white} 95.26}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 713.131}& {\cellcolor{white} 144.6}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1037.47}& {\cellcolor{white} 214.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1488.05}& {\cellcolor{white} 353.5}& {\cellcolor{orange} 0.0}& {\cellcolor{orange} 7.318}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 98.8303}& {\cellcolor{white} 32.52}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& 
{\cellcolor{white} 243.87}& {\cellcolor{white} 57.21}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 405.044}& {\cellcolor{white} 76.84}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 583.265}& {\cellcolor{white} 92.47}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 800.398}& {\cellcolor{white} 151.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1173.89}& {\cellcolor{white} 227.2}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1675.33}& {\cellcolor{white} 329.4}& {\cellcolor{orange} 0.0}& {\cellcolor{orange} 11.11}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_0.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 2} \textbf{* Plot: ETA ( jets[1] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00426672}& {\cellcolor{white} 2.037}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.873486}& {\cellcolor{white} 2.934}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.133336}& {\cellcolor{white} 2.773}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.006481}& {\cellcolor{white} 2.235}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0267496}& {\cellcolor{white} 1.928}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00605067}& {\cellcolor{white} 1.754}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0592612}& {\cellcolor{white} 1.598}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0746631}& {\cellcolor{white} 1.45}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0871655}& {\cellcolor{white} 1.227}& {\cellcolor{green} 0.0}& 
{\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1.65038}& {\cellcolor{white} 2.73}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.146548}& {\cellcolor{white} 1.676}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.1076}& {\cellcolor{white} 1.425}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0364249}& {\cellcolor{white} 1.3}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0506134}& {\cellcolor{white} 1.215}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0264047}& {\cellcolor{white} 1.124}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0490747}& {\cellcolor{white} 0.8389}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_1.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 3} \textbf{* Plot: PHI ( jets[1] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00274665}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.168141}& {\cellcolor{white} 1.997}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0767497}& {\cellcolor{white} 1.778}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0816388}& {\cellcolor{white} 1.818}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0212645}& {\cellcolor{white} 1.801}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00569695}& {\cellcolor{white} 1.808}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& 
{\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0641089}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0646329}& {\cellcolor{white} 1.787}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.217904}& {\cellcolor{white} 1.755}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.466209}& {\cellcolor{white} 0.3607}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0522949}& {\cellcolor{white} 1.826}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.125515}& {\cellcolor{white} 1.849}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.113698}& {\cellcolor{white} 1.83}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0576669}& {\cellcolor{white} 1.795}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.152016}& {\cellcolor{white} 1.941}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0323032}& {\cellcolor{white} 1.815}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_2.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 4} \textbf{* Plot: PT ( jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 102.403}& {\cellcolor{white} 76.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 31.916}& {\cellcolor{white} 8.29}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 53.936}& {\cellcolor{white} 17.53}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 90.5854}& 
{\cellcolor{white} 37.67}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 125.017}& {\cellcolor{white} 60.47}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 168.787}& {\cellcolor{white} 85.77}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 215.936}& {\cellcolor{white} 127.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 286.77}& {\cellcolor{white} 196.2}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 301.151}& {\cellcolor{white} 261.1}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 49.3379}& {\cellcolor{white} 16.83}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 68.617}& {\cellcolor{white} 36.13}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 78.6224}& {\cellcolor{white} 56.28}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 94.6258}& {\cellcolor{white} 83.07}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 124.553}& {\cellcolor{white} 124.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 176.706}& {\cellcolor{white} 198.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 138.5}& {\cellcolor{white} 204.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_3.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 5} \textbf{* Plot: ETA ( jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00824761}& {\cellcolor{white} 3.058}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} 
bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1.04772}& {\cellcolor{white} 3.507}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.324417}& {\cellcolor{white} 3.174}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0275519}& {\cellcolor{white} 2.946}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0664249}& {\cellcolor{white} 2.847}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00657983}& {\cellcolor{white} 2.765}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.123933}& {\cellcolor{white} 2.754}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.151477}& {\cellcolor{white} 2.787}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.203626}& {\cellcolor{white} 2.95}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.140052}& {\cellcolor{white} 2.918}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.291332}& {\cellcolor{white} 3.134}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0693841}& {\cellcolor{white} 3.116}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0792687}& {\cellcolor{white} 3.236}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0158785}& {\cellcolor{white} 3.221}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.142349}& {\cellcolor{white} 3.23}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.328753}& {\cellcolor{white} 3.419}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_4.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 6} \textbf{* Plot: PHI ( jets[2] ) }\\ \begin{table}[H] \begin{center} 
\begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00468976}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0859607}& {\cellcolor{white} 1.498}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.17327}& {\cellcolor{white} 1.879}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0817648}& {\cellcolor{white} 1.828}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0599984}& {\cellcolor{white} 1.828}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0117494}& {\cellcolor{white} 1.812}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0684445}& {\cellcolor{white} 1.824}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0586342}& {\cellcolor{white} 1.806}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.198205}& {\cellcolor{white} 1.858}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1.00733}& {\cellcolor{white} 1.634}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0248154}& {\cellcolor{white} 1.887}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0373808}& {\cellcolor{white} 1.79}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0797356}& {\cellcolor{white} 1.824}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.124598}& {\cellcolor{white} 1.781}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.323666}& 
{\cellcolor{white} 1.753}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.421228}& {\cellcolor{white} 1.806}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_5.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 7} \textbf{* Plot: DELTAR ( jets[1] , jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.1046}& {\cellcolor{white} 1.028}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 6.62263}& {\cellcolor{white} 0.3293}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.99091}& {\cellcolor{white} 0.8146}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.24335}& {\cellcolor{white} 0.8413}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.96829}& {\cellcolor{white} 0.7025}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.83273}& {\cellcolor{white} 0.5935}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.73982}& {\cellcolor{white} 0.5158}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.67427}& {\cellcolor{white} 0.4511}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.62399}& {\cellcolor{white} 0.461}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.99929}& {\cellcolor{white} 0.4811}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.87458}& {\cellcolor{white} 0.6286}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.6746}& {\cellcolor{white} 0.5749}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline 
{\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.64892}& {\cellcolor{white} 0.5595}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.61435}& {\cellcolor{white} 0.495}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.51401}& {\cellcolor{white} 0.442}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.44387}& {\cellcolor{white} 0.4353}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_6.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 8} \textbf{* Plot: M ( jets[1] jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1797.17}& {\cellcolor{white} 804.0}& {\cellcolor{red} 0.0}& {\cellcolor{red} 32.54}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 886.102}& {\cellcolor{white} 84.95}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1373.99}& {\cellcolor{white} 562.4}& {\cellcolor{orange} 0.0}& {\cellcolor{orange} 13.79}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1686.56}& {\cellcolor{white} 787.6}& {\cellcolor{red} 0.0}& {\cellcolor{red} 26.93}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2066.12}& {\cellcolor{white} 824.9}& {\cellcolor{red} 0.0}& {\cellcolor{red} 43.08}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2497.1}& {\cellcolor{white} 840.0}& {\cellcolor{red} 0.0}& {\cellcolor{red} 71.38}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2990.13}& {\cellcolor{white} 928.2}& {\cellcolor{red} 0.0}& {\cellcolor{red} 86.9}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3719.45}& {\cellcolor{white} 1119}& {\cellcolor{red} 0.0}& {\cellcolor{red} 93.07}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4126.54}& {\cellcolor{white} 1410}& {\cellcolor{red} 0.0}& {\cellcolor{red} 92.7}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 
1172.68}& {\cellcolor{white} 166.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1168.01}& {\cellcolor{white} 358.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 4.826}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1373.0}& {\cellcolor{white} 458.4}& {\cellcolor{orange} 0.0}& {\cellcolor{orange} 11.51}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1704.65}& {\cellcolor{white} 623.1}& {\cellcolor{red} 0.0}& {\cellcolor{red} 31.09}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2097.6}& {\cellcolor{white} 885.2}& {\cellcolor{red} 0.0}& {\cellcolor{red} 47.5}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2709.62}& {\cellcolor{white} 1239}& {\cellcolor{red} 0.0}& {\cellcolor{red} 60.91}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2721.5}& {\cellcolor{white} 1274}& {\cellcolor{red} 0.0}& {\cellcolor{red} 62.96}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_7.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 9} \textbf{* Plot: sdETA ( jets[1] jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0125143}& {\cellcolor{white} 4.892}& {\cellcolor{red} 50.11}& {\cellcolor{red} 0.6048}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.174236}& {\cellcolor{white} 6.403}& {\cellcolor{red} 50.0}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.457753}& {\cellcolor{white} 5.752}& {\cellcolor{red} 46.55}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0340329}& {\cellcolor{white} 5.008}& {\cellcolor{red} 50.23}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0931745}& {\cellcolor{white} 4.618}& {\cellcolor{red} 51.09}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0126305}& {\cellcolor{white} 4.376}& {\cellcolor{red} 49.93}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.183195}& {\cellcolor{white} 4.209}& {\cellcolor{red} 52.39}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.226141}& {\cellcolor{white} 4.078}& {\cellcolor{red} 52.75}& {\cellcolor{red} 0.0}\\ \hline 
{\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.290791}& {\cellcolor{white} 3.996}& {\cellcolor{red} 45.88}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1.51032}& {\cellcolor{white} 5.585}& {\cellcolor{red} 33.38}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.144784}& {\cellcolor{white} 4.518}& {\cellcolor{red} 51.81}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.176984}& {\cellcolor{white} 4.26}& {\cellcolor{red} 47.85}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.115694}& {\cellcolor{white} 4.219}& {\cellcolor{red} 51.54}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0664918}& {\cellcolor{white} 4.119}& {\cellcolor{red} 50.96}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.168753}& {\cellcolor{white} 4.051}& {\cellcolor{red} 47.8}& {\cellcolor{red} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.279678}& {\cellcolor{white} 4.007}& {\cellcolor{red} 46.29}& {\cellcolor{red} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_8.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 10} \textbf{* Plot: M ( a[1] a[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1364.91}& {\cellcolor{white} 757.5}& {\cellcolor{red} 0.0}& {\cellcolor{red} 60.26}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 999.408}& {\cellcolor{white} 375.3}& {\cellcolor{red} 0.0}& {\cellcolor{red} 24.96}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 847.835}& {\cellcolor{white} 279.9}& {\cellcolor{red} 0.0}& {\cellcolor{red} 25.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 806.306}& {\cellcolor{white} 333.7}& {\cellcolor{red} 0.0}& {\cellcolor{red} 18.74}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 757.771}& {\cellcolor{white} 293.6}& {\cellcolor{orange} 0.0}& {\cellcolor{orange} 14.41}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 774.989}& 
{\cellcolor{white} 292.6}& {\cellcolor{red} 0.0}& {\cellcolor{red} 16.22}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 795.097}& {\cellcolor{white} 304.5}& {\cellcolor{red} 0.0}& {\cellcolor{red} 18.66}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 827.522}& {\cellcolor{white} 348.5}& {\cellcolor{red} 0.0}& {\cellcolor{red} 22.42}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 902.86}& {\cellcolor{white} 410.2}& {\cellcolor{red} 0.0}& {\cellcolor{red} 27.83}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 674.287}& {\cellcolor{white} 36.08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 785.534}& {\cellcolor{white} 368.8}& {\cellcolor{red} 0.0}& {\cellcolor{red} 16.86}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 771.771}& {\cellcolor{white} 325.2}& {\cellcolor{red} 0.0}& {\cellcolor{red} 16.48}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 805.657}& {\cellcolor{white} 366.8}& {\cellcolor{red} 0.0}& {\cellcolor{red} 20.17}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 805.114}& {\cellcolor{white} 335.3}& {\cellcolor{red} 0.0}& {\cellcolor{red} 19.04}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 924.629}& {\cellcolor{white} 435.1}& {\cellcolor{red} 0.0}& {\cellcolor{red} 31.87}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 930.522}& {\cellcolor{white} 452.3}& {\cellcolor{red} 0.0}& {\cellcolor{red} 25.91}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_9.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 11} \textbf{* Plot: PT ( a[1] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 718.918}& {\cellcolor{white} 350.1}& {\cellcolor{red} 0.0}& {\cellcolor{red} 17.58}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 379.899}& {\cellcolor{white} 64.59}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 373.902}& {\cellcolor{white} 77.67}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& 
{\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 391.46}& {\cellcolor{white} 92.25}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.182}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 436.107}& {\cellcolor{white} 113.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.3112}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 516.8}& {\cellcolor{white} 150.3}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.5535}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 657.311}& {\cellcolor{white} 223.5}& {\cellcolor{orange} 0.0}& {\cellcolor{orange} 7.762}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 890.939}& {\cellcolor{white} 358.6}& {\cellcolor{red} 0.0}& {\cellcolor{red} 43.46}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1323.7}& {\cellcolor{white} 534.5}& {\cellcolor{red} 0.0}& {\cellcolor{red} 72.17}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 327.174}& {\cellcolor{white} 7.434}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 393.477}& {\cellcolor{white} 77.89}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 475.422}& {\cellcolor{white} 123.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.2257}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 603.382}& {\cellcolor{white} 164.2}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.5606}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 778.016}& {\cellcolor{white} 241.8}& {\cellcolor{red} 0.0}& {\cellcolor{red} 17.11}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1095.08}& {\cellcolor{white} 412.4}& {\cellcolor{red} 0.0}& {\cellcolor{red} 65.16}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1602.75}& {\cellcolor{white} 495.3}& {\cellcolor{red} 0.0}& {\cellcolor{red} 87.04}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_10.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 12} \textbf{* Plot: PT ( a[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 486.902}& {\cellcolor{white} 
328.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.3176}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 359.837}& {\cellcolor{white} 74.21}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 285.217}& {\cellcolor{white} 94.57}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 209.251}& {\cellcolor{white} 118.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 159.332}& {\cellcolor{white} 118.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 157.378}& {\cellcolor{white} 113.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 159.508}& {\cellcolor{white} 121.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 167.561}& {\cellcolor{white} 142.6}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 183.782}& {\cellcolor{white} 190.2}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 253.391}& {\cellcolor{white} 44.39}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 186.1}& {\cellcolor{white} 97.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 144.408}& {\cellcolor{white} 100.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 140.303}& {\cellcolor{white} 115.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 130.659}& {\cellcolor{white} 108.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 151.48}& {\cellcolor{white} 146.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 125.146}& {\cellcolor{white} 119.5}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_11.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 13} \textbf{* 
Plot: THT}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 473.074}& {\cellcolor{white} 297.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 74.6978}& {\cellcolor{white} 15.53}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 163.344}& {\cellcolor{white} 25.73}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 305.428}& {\cellcolor{white} 55.11}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 485.133}& {\cellcolor{white} 55.92}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 680.936}& {\cellcolor{white} 55.85}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 929.067}& {\cellcolor{white} 102.2}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1324.24}& {\cellcolor{white} 99.5}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1789.2}& {\cellcolor{white} 191.2}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 148.168}& {\cellcolor{white} 37.58}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 312.487}& {\cellcolor{white} 53.59}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 483.667}& {\cellcolor{white} 56.67}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 677.891}& {\cellcolor{white} 53.6}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 924.952}& {\cellcolor{white} 102.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1350.6}& 
{\cellcolor{white} 122.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1813.83}& {\cellcolor{white} 223.3}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_12.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 14} \textbf{* Plot: MET}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 9.23621e-09}& {\cellcolor{white} 1.193e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.92664e-09}& {\cellcolor{white} 2.061e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.59216e-09}& {\cellcolor{white} 2.634e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.28184e-09}& {\cellcolor{white} 3.059e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.56314e-09}& {\cellcolor{white} 3.759e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.7028e-09}& {\cellcolor{white} 3.425e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 6.73764e-09}& {\cellcolor{white} 5.92e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1.72505e-08}& {\cellcolor{white} 1.831e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.09154e-08}& {\cellcolor{white} 2.364e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.6048e-09}& {\cellcolor{white} 5.584e-10}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.15652e-09}& {\cellcolor{white} 2.886e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.26293e-09}& {\cellcolor{white} 2.848e-09}& 
{\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.42862e-09}& {\cellcolor{white} 2.905e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 7.13192e-09}& {\cellcolor{white} 8.331e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.55869e-08}& {\cellcolor{white} 2.401e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.47198e-08}& {\cellcolor{white} 2.306e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_13.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 15} \textbf{* Plot: TET}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1678.89}& {\cellcolor{white} 734.6}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 814.434}& {\cellcolor{white} 141.3}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 822.463}& {\cellcolor{white} 164.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 906.139}& {\cellcolor{white} 195.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1080.57}& {\cellcolor{white} 211.3}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1355.11}& {\cellcolor{white} 216.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1745.89}& {\cellcolor{white} 291.5}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2382.74}& {\cellcolor{white} 401.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3296.68}& {\cellcolor{white} 649.6}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} 
bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 728.733}& {\cellcolor{white} 80.65}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 892.065}& {\cellcolor{white} 159.2}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1103.5}& {\cellcolor{white} 208.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1421.58}& {\cellcolor{white} 241.6}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1833.63}& {\cellcolor{white} 311.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2597.16}& {\cellcolor{white} 496.1}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3541.73}& {\cellcolor{white} 662.3}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_14.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 16} \textbf{* Plot: DELTAR ( a[1] , a[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.94599}& {\cellcolor{white} 0.5977}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.01708}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.37874}& {\cellcolor{white} 0.2507}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.31191}& {\cellcolor{white} 0.3704}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.31275}& {\cellcolor{white} 0.5726}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.19307}& {\cellcolor{white} 0.6709}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.98826}& {\cellcolor{white} 0.6955}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.80957}& {\cellcolor{white} 0.7215}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 
2.63129}& {\cellcolor{white} 0.7516}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.4724}& {\cellcolor{white} 0.6814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.21229}& {\cellcolor{white} 0.191}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.36774}& {\cellcolor{white} 0.7038}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.29268}& {\cellcolor{white} 0.7919}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.12605}& {\cellcolor{white} 0.8764}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.99442}& {\cellcolor{white} 0.8567}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.85727}& {\cellcolor{white} 0.8239}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2.60421}& {\cellcolor{white} 0.657}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_15.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 17} \textbf{* Plot: sdETA ( a[1] a[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 766}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00338677}& {\cellcolor{white} 1.456}& {\cellcolor{green} 0.0005338}& {\cellcolor{green} 0.0005338}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 0.0486}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.407498}& {\cellcolor{white} 1.471}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 1.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.201543}& {\cellcolor{white} 1.52}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 6.04}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0121975}& {\cellcolor{white} 2.014}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 4.44}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00671925}& {\cellcolor{white} 2.304}& {\cellcolor{green} 0.0}& 
{\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 1.64}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0144826}& {\cellcolor{white} 2.227}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.623}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0268473}& {\cellcolor{white} 2.074}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 0.0549}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00305105}& {\cellcolor{white} 1.89}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.00581}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0378065}& {\cellcolor{white} 1.642}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.}& {\cellcolor{white} 0.0}& {\cellcolor{white} 0.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 3.16}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.537315}& {\cellcolor{white} 1.068}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 19.1}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0986204}& {\cellcolor{white} 2.077}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 12.3}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0922186}& {\cellcolor{white} 2.373}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 3.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.190539}& {\cellcolor{white} 2.36}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 1.47}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00670366}& {\cellcolor{white} 2.163}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.105}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0071442}& {\cellcolor{white} 2.124}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.00975}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.26003}& {\cellcolor{white} 1.746}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_16.png}\\ \caption{ } \end{center} \end{figure} % ----------------------------------------------------------------------------- % SECTION Summary % ----------------------------------------------------------------------------- \newpage \section{ Summary} \subsection{Cut-flow charts} \begin{itemize} \item How to compare signal (S) and background (B): \textcolor{blue}{S/\-sqrt(S+B+(xB)**2)} . \item Object definition selections are indicated in cyan. 
\item Reject and select are indicated by `REJ' and `SEL', respectively.
\end{itemize}
\begin{table}[H]
\begin{center}
\begin{tabular}{|m{36.0mm}|m{36.0mm}|m{36.0mm}|m{33.0mm}|}
\hline
{\cellcolor{yellow} Cuts}& {\cellcolor{yellow} Signal (S)}& {\cellcolor{yellow} Background (B)}& {\cellcolor{yellow} S vs B}\\
\hline
{\cellcolor{white} Initial (no cut)}& {\cellcolor{white} 4094.08 +/\-- 1.13}& {\cellcolor{white} 4113516 +/\-- 4877}& {\cellcolor{white} 2.01760 +/\-- 0.00132}\\
\hline
{\cellcolor{white} SEL: M ( a[1] a[2] ) > 500.0}& {\cellcolor{white} 2827.3 +/\-- 29.6}& {\cellcolor{white} 3483.6 +/\-- 58.9}& {\cellcolor{white} 35.590 +/\-- 0.333}\\
\hline
{\cellcolor{white} SEL: PT ( a[1] ) > 300.0}& {\cellcolor{white} 2603.7 +/\-- 30.8}& {\cellcolor{white} 1182.5 +/\-- 34.3}& {\cellcolor{white} 42.31 +/\-- 0.38}\\
\hline
{\cellcolor{white} SEL: M ( jets[1] jets[2] ) > 750.0}& {\cellcolor{white} 2140.3 +/\-- 32.0}& {\cellcolor{white} 160.9 +/\-- 12.6}& {\cellcolor{white} 44.617 +/\-- 0.377}\\
\hline
{\cellcolor{white} SEL: sdETA ( jets[1] jets[2] ) > 3.6 or sdETA ( je}& {\cellcolor{white} 767.0 +/\-- 25.0}& {\cellcolor{white} 53.75 +/\-- 7.33}& {\cellcolor{white} 26.772 +/\-- 0.479}\\
\hline
\end{tabular}
\end{center}
\end{table}
\end{document}
{ "alphanum_fraction": 0.5384640991, "avg_line_length": 79.3057022175, "ext": "tex", "hexsha": "7f016dd98de00bb138ae7d7439c789ee4458600b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7d3fc08f5ae5b17a3500eba19a2e43f87f076ce5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sheride/axion_pheno", "max_forks_repo_path": "post_optimization_studies/mad_analyses/four_cuts_eff_flow_chart/Output/PDF/MadAnalysis5job_0/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7d3fc08f5ae5b17a3500eba19a2e43f87f076ce5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sheride/axion_pheno", "max_issues_repo_path": "post_optimization_studies/mad_analyses/four_cuts_eff_flow_chart/Output/PDF/MadAnalysis5job_0/main.tex", "max_line_length": 356, "max_stars_count": null, "max_stars_repo_head_hexsha": "7d3fc08f5ae5b17a3500eba19a2e43f87f076ce5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sheride/axion_pheno", "max_stars_repo_path": "post_optimization_studies/mad_analyses/four_cuts_eff_flow_chart/Output/PDF/MadAnalysis5job_0/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 54534, "size": 150205 }
\section{Introduction}% ===> this file was generated automatically by noweave --- better not edit it The {\Tt{}xtnetdraw\nwendquote} module is an X toolkit implementation of the {\Tt{}netdraw\nwendquote} interface. \section{Interface} \nwfilename{netdraw-xt.nw}\nwbegincode{1}\sublabel{NW1O6D2a-3wIrEv-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-3wIrEv-1}}}\moddef{netdraw-xt.h~{\nwtagstyle{}\subpageref{NW1O6D2a-3wIrEv-1}}}\endmoddef\nwstartdeflinemarkup\nwenddeflinemarkup #ifndef NETDRAW_XT_H #define NETDRAW_XT_H #include <mesh.h> void xtnetdraw_init(int argc, char** argv); void xtnetdraw(mesh_t mesh, double* x); #endif /* NETDRAW_XT_H */ \nwnotused{netdraw-xt.h}\nwendcode{}\nwbegindocs{2}\nwdocspar \section{Implementation} I am \emph{not} an expert X programmer. Most of what I know about practical X programming is gleaned from manual pages, examples online, and experimenting with crossed fingers. Caveat user. \nwenddocs{}\nwbegincode{3}\sublabel{NW1O6D2a-1HWiET-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1HWiET-1}}}\moddef{netdraw-xt.c~{\nwtagstyle{}\subpageref{NW1O6D2a-1HWiET-1}}}\endmoddef\nwstartdeflinemarkup\nwenddeflinemarkup #include <sugar.h> #ifdef HAVE_X11 #include <stdlib.h> #include <assert.h> #include <math.h> #include <sys/types.h> #include <unistd.h> #include <X11/StringDefs.h> #include <X11/Intrinsic.h> #include <X11/Core.h> #include <X11/Xaw/Box.h> #include <X11/Xaw/Command.h> #include <netdraw_xt.h> #include <netdraw.h> #include <mesh.h> #include <affine.h> \LA{}types~{\nwtagstyle{}\subpageref{NW1O6D2a-4H4VOG-1}}\RA{} \LA{}macros~{\nwtagstyle{}\subpageref{NW1O6D2a-1VvxMr-1}}\RA{} \LA{}static data~{\nwtagstyle{}\subpageref{NW1O6D2a-IAwJm-1}}\RA{} \LA{}static functions~{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-1}}\RA{} \LA{}draw functions~{\nwtagstyle{}\subpageref{NW1O6D2a-2Q2ktV-1}}\RA{} \LA{}functions~{\nwtagstyle{}\subpageref{NW1O6D2a-nRuDO-1}}\RA{} #endif /* HAVE_X11 */ \nwnotused{netdraw-xt.c}\nwendcode{}\nwbegindocs{4}\nwdocspar \subsection{Draw commands} The {\Tt{}line\nwendquote} command is our one and only drawing function. Its only exposure to the outside world will be when it is passed into the {\Tt{}draw\nwendquote} model using the {\Tt{}netdraw\nwendquote} interface. \nwenddocs{}\nwbegincode{5}\sublabel{NW1O6D2a-IAwJm-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-IAwJm-1}}}\moddef{static data~{\nwtagstyle{}\subpageref{NW1O6D2a-IAwJm-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{\relax}{NW1O6D2a-IAwJm-2}\nwenddeflinemarkup static void line(void* pself, double* x1, double* x2); static double* get_x(void* pself); static netdraw_fun_t xt_draw_fun = \{line, NULL, NULL, get_x\}; \nwalsodefined{\\{NW1O6D2a-IAwJm-2}}\nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{6}\nwdocspar There are two coordinate frames that we care about when displaying a line: global (world) coordinates, and view coordinates. There's also the screen coordinates. 
%' \nwenddocs{}\nwbegincode{7}\sublabel{NW1O6D2a-2Q2ktV-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-2Q2ktV-1}}}\moddef{draw functions~{\nwtagstyle{}\subpageref{NW1O6D2a-2Q2ktV-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{\relax}{NW1O6D2a-2Q2ktV-2}\nwenddeflinemarkup static void line(void* pself, double* x1, double* x2) \{ xtnet_gc_t* self = (xtnet_gc_t*) pself; double pts[6]; memcpy(pts + 0, x1, 3*sizeof(double)); memcpy(pts + 3, x2, 3*sizeof(double)); affine_apply(self->toview, pts + 0); affine_apply(self->toscreen, pts + 0); affine_apply(self->toview, pts + 3); affine_apply(self->toscreen, pts + 3); XDrawLine(self->disp, self->win, self->gc, (int) pts[0], (int) pts[1], (int) pts[3], (int) pts[4]); \} \nwalsodefined{\\{NW1O6D2a-2Q2ktV-2}}\nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{8}\nwdocspar \nwenddocs{}\nwbegincode{9}\sublabel{NW1O6D2a-2Q2ktV-2}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-2Q2ktV-2}}}\moddef{draw functions~{\nwtagstyle{}\subpageref{NW1O6D2a-2Q2ktV-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{NW1O6D2a-2Q2ktV-1}{\relax}\nwenddeflinemarkup static double* get_x(void* pself) \{ xtnet_gc_t* self = (xtnet_gc_t*) pself; return self->x; \} \nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{10}\nwdocspar \subsection{Draw all} The {\Tt{}draw{\_}all\nwendquote} function is called by the main X interface routine on display events, and is responsible for drawing the device. Naturally, the routine passes the buck on to the model functions. \nwenddocs{}\nwbegincode{11}\sublabel{NW1O6D2a-1duChy-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-1}}}\moddef{static functions~{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{\relax}{NW1O6D2a-1duChy-2}\nwenddeflinemarkup static void draw_all(xtnet_gc_t* self) \{ int i, n; netdraw_gc_t netdraw; netdraw.methods = &xt_draw_fun; netdraw.data = self; n = mesh_num_elements(self->mesh); for (i = 1; i <= n; ++i) element_display( mesh_element(self->mesh, i), &netdraw ); \} \nwalsodefined{\\{NW1O6D2a-1duChy-2}\\{NW1O6D2a-1duChy-3}\\{NW1O6D2a-1duChy-4}\\{NW1O6D2a-1duChy-5}}\nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{12}\nwdocspar \subsection{X interface routines} \nwenddocs{}\nwbegincode{13}\sublabel{NW1O6D2a-IAwJm-2}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-IAwJm-2}}}\moddef{static data~{\nwtagstyle{}\subpageref{NW1O6D2a-IAwJm-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{NW1O6D2a-IAwJm-1}{\relax}\nwenddeflinemarkup static int argc; static char** argv; \nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{14}\nwdocspar \nwenddocs{}\nwbegincode{15}\sublabel{NW1O6D2a-nRuDO-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-nRuDO-1}}}\moddef{functions~{\nwtagstyle{}\subpageref{NW1O6D2a-nRuDO-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{\relax}{NW1O6D2a-nRuDO-2}\nwenddeflinemarkup void xtnetdraw_init(int my_argc, char** my_argv) \{ argc = my_argc; argv = my_argv; \} \nwalsodefined{\\{NW1O6D2a-nRuDO-2}}\nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{16}\nwdocspar The main function {\Tt{}xtnetdraw\nwendquote} forks a new process to serve as the X client, and then sets up the main X loop in the new process. 
\nwenddocs{}\nwbegincode{17}\sublabel{NW1O6D2a-nRuDO-2}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-nRuDO-2}}}\moddef{functions~{\nwtagstyle{}\subpageref{NW1O6D2a-nRuDO-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{NW1O6D2a-nRuDO-1}{\relax}\nwenddeflinemarkup void xtnetdraw(mesh_t mesh, double* x) \{ \LA{}widget declarations~{\nwtagstyle{}\subpageref{NW1O6D2a-1OEBUZ-1}}\RA{} xtnet_gc_t* self; int n; Arg wargs[10]; if (fork() != 0) return; \LA{}initialize \code{}self\edoc{}~{\nwtagstyle{}\subpageref{NW1O6D2a-3ei8GD-1}}\RA{} \LA{}initialize X client~{\nwtagstyle{}\subpageref{NW1O6D2a-lVBrR-1}}\RA{} \} \nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{18}\nwdocspar The {\Tt{}xtnet{\_}gc{\_}t\nwendquote} structure contains information about the graphics context needed by the drawing routines. The {\Tt{}mesh\nwendquote} and {\Tt{}x\nwendquote} fields store the mesh pointer and displacement vector; the {\Tt{}toview\nwendquote} and {\Tt{}toscreen\nwendquote} transformations describe how to map to view coordinates and to screen coordinates; and the {\Tt{}disp\nwendquote}, {\Tt{}win\nwendquote}, and {\Tt{}gc\nwendquote} fields store the X context on which to display. \nwenddocs{}\nwbegincode{19}\sublabel{NW1O6D2a-4H4VOG-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-4H4VOG-1}}}\moddef{types~{\nwtagstyle{}\subpageref{NW1O6D2a-4H4VOG-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwenddeflinemarkup typedef struct xtnet_gc_t \{ mesh_t mesh; double* x; double toview[12]; double toscreen[12]; Display* disp; Window win; GC gc; \} xtnet_gc_t; \nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{20}\nwdocspar The X context fields are set during the display event, when we actually have an X context to work with. For now, we just save the mesh and displacement vectors and initialize the transformations. \nwenddocs{}\nwbegincode{21}\sublabel{NW1O6D2a-3ei8GD-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-3ei8GD-1}}}\moddef{initialize \code{}self\edoc{}~{\nwtagstyle{}\subpageref{NW1O6D2a-3ei8GD-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-nRuDO-2}}\nwenddeflinemarkup self = (xtnet_gc_t*) malloc(sizeof(struct xtnet_gc_t)); self->mesh = mesh; self->x = x; set_transforms(mesh, self->toview, self->toscreen); \nwused{\\{NW1O6D2a-nRuDO-2}}\nwendcode{}\nwbegindocs{22}\nwdocspar \nwenddocs{}\nwbegincode{23}\sublabel{NW1O6D2a-1duChy-2}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-2}}}\moddef{static functions~{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{NW1O6D2a-1duChy-1}{NW1O6D2a-1duChy-3}\nwenddeflinemarkup static void set_transforms(mesh_t mesh, double* toview, double* toscreen) \{ int i, n; double xmin, xmax, ymin, ymax, scale; \LA{}get coordinate range information~{\nwtagstyle{}\subpageref{NW1O6D2a-2XykZa-1}}\RA{} \LA{}set transformation to view coordinates~{\nwtagstyle{}\subpageref{NW1O6D2a-2gFa0b-1}}\RA{} \LA{}set transformation to screen coordinates~{\nwtagstyle{}\subpageref{NW1O6D2a-406z1-1}}\RA{} \} \nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{24}\nwdocspar The first step in figuring out our transformations is to figure out what ranges of coordinate values we'll see. 
\nwenddocs{}\nwbegincode{25}\sublabel{NW1O6D2a-1VvxMr-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1VvxMr-1}}}\moddef{macros~{\nwtagstyle{}\subpageref{NW1O6D2a-1VvxMr-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{\relax}{NW1O6D2a-1VvxMr-2}\nwenddeflinemarkup #define min(a,b) (((a)<(b)) ? (a) : (b)) #define max(a,b) (((a)>(b)) ? (a) : (b)) \nwalsodefined{\\{NW1O6D2a-1VvxMr-2}}\nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{26}\nwdocspar \nwenddocs{}\nwbegincode{27}\sublabel{NW1O6D2a-2XykZa-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-2XykZa-1}}}\moddef{get coordinate range information~{\nwtagstyle{}\subpageref{NW1O6D2a-2XykZa-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1duChy-2}}\nwenddeflinemarkup xmin = ymin = 1; xmax = ymax = -1; n = mesh_num_nodes(mesh); for (i = 1; i <= n; ++i) \{ mesh_node_t* node = mesh_node(mesh, i); xmin = min(xmin, node->x[0]); xmax = max(xmax, node->x[0]); ymin = min(ymin, node->x[1]); ymax = max(ymax, node->x[1]); \} \nwused{\\{NW1O6D2a-1duChy-2}}\nwendcode{}\nwbegindocs{28}\nwdocspar Once we have the coordinate ranges, we use them to scale the device into the cube $[-0.5, 0.5]^3$. There should probably be a check here to ensure that the ranges aren't both zero. We also want the bounding box for the device to be reasonably well-centered. \nwenddocs{}\nwbegincode{29}\sublabel{NW1O6D2a-2gFa0b-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-2gFa0b-1}}}\moddef{set transformation to view coordinates~{\nwtagstyle{}\subpageref{NW1O6D2a-2gFa0b-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1duChy-2}}\nwenddeflinemarkup scale = 1/max(xmax-xmin, ymax-ymin); affine_identity(toview); affine_translate_xyz(toview, -(xmax+xmin)/2, -(ymax+ymin)/2, 0); affine_scale(toview, scale); \nwused{\\{NW1O6D2a-1duChy-2}}\nwendcode{}\nwbegindocs{30}\nwdocspar We need some size information in order to figure out the transformation to screen coordinates. From some experimentation, these look like good size values. \nwenddocs{}\nwbegincode{31}\sublabel{NW1O6D2a-1VvxMr-2}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1VvxMr-2}}}\moddef{macros~{\nwtagstyle{}\subpageref{NW1O6D2a-1VvxMr-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{NW1O6D2a-1VvxMr-1}{\relax}\nwenddeflinemarkup #define FRAME_IN_SPACE 100 #define FRAMEW 510 #define FRAMEH 510 #define FRAME_INW (FRAMEW - 2*FRAME_IN_SPACE) #define FRAME_INH (FRAMEH - 2*FRAME_IN_SPACE) \nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{32}\nwdocspar To get to the screen coordinates, we first reflect across the $y$ axis. We then translate the interval $[-0.5, 0.5]$ into $[0,1]$, scale to screen widths, and then translate again to get a little border. \nwenddocs{}\nwbegincode{33}\sublabel{NW1O6D2a-406z1-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-406z1-1}}}\moddef{set transformation to screen coordinates~{\nwtagstyle{}\subpageref{NW1O6D2a-406z1-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1duChy-2}}\nwenddeflinemarkup affine_identity(toscreen); toscreen[4] = -1; affine_translate_xyz(toscreen, 0.5, 0.5, 0); affine_scale_x(toscreen, FRAME_INW); affine_scale_y(toscreen, FRAME_INH); affine_translate_xyz(toscreen, FRAME_IN_SPACE, FRAME_IN_SPACE, 0); \nwused{\\{NW1O6D2a-1duChy-2}}\nwendcode{}\nwbegindocs{34}\nwdocspar The setup for the X client is pretty standard: we create a top level widget, add children and event handlers, and then display the widget and enter the Xt event loop. 
\nwenddocs{}\nwbegincode{35}\sublabel{NW1O6D2a-1OEBUZ-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1OEBUZ-1}}}\moddef{widget declarations~{\nwtagstyle{}\subpageref{NW1O6D2a-1OEBUZ-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-nRuDO-2}}\nwprevnextdefs{\relax}{NW1O6D2a-1OEBUZ-2}\nwenddeflinemarkup Widget toplevel; \nwalsodefined{\\{NW1O6D2a-1OEBUZ-2}}\nwused{\\{NW1O6D2a-nRuDO-2}}\nwendcode{}\nwbegindocs{36}\nwdocspar \nwenddocs{}\nwbegincode{37}\sublabel{NW1O6D2a-lVBrR-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-lVBrR-1}}}\moddef{initialize X client~{\nwtagstyle{}\subpageref{NW1O6D2a-lVBrR-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-nRuDO-2}}\nwenddeflinemarkup toplevel = XtInitialize(argv[0], "drawing", NULL, 0, &argc, argv); \LA{}add widgets~{\nwtagstyle{}\subpageref{NW1O6D2a-pzde1-1}}\RA{} \LA{}set frame size~{\nwtagstyle{}\subpageref{NW1O6D2a-1j16uN-1}}\RA{} \LA{}add event handlers~{\nwtagstyle{}\subpageref{NW1O6D2a-5ooJp-1}}\RA{} XtRealizeWidget(toplevel); XtMainLoop(); \nwused{\\{NW1O6D2a-nRuDO-2}}\nwendcode{}\nwbegindocs{38}\nwdocspar There are three widgets in the user interface: a parent box where everything lives, a drawing pad, and a ``quit'' button. \nwenddocs{}\nwbegincode{39}\sublabel{NW1O6D2a-1OEBUZ-2}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1OEBUZ-2}}}\moddef{widget declarations~{\nwtagstyle{}\subpageref{NW1O6D2a-1OEBUZ-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-nRuDO-2}}\nwprevnextdefs{NW1O6D2a-1OEBUZ-1}{\relax}\nwenddeflinemarkup Widget box; Widget drawing; Widget quit; \nwused{\\{NW1O6D2a-nRuDO-2}}\nwendcode{}\nwbegindocs{40}\nwdocspar \nwenddocs{}\nwbegincode{41}\sublabel{NW1O6D2a-pzde1-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-pzde1-1}}}\moddef{add widgets~{\nwtagstyle{}\subpageref{NW1O6D2a-pzde1-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-lVBrR-1}}\nwenddeflinemarkup box = XtCreateManagedWidget("box", boxWidgetClass, toplevel, NULL, 0); drawing = XtCreateManagedWidget("draw", coreWidgetClass, box, NULL, 0); quit = XtCreateManagedWidget("quit", commandWidgetClass, box, NULL, 0); \nwused{\\{NW1O6D2a-lVBrR-1}}\nwendcode{}\nwbegindocs{42}\nwdocspar \nwenddocs{}\nwbegincode{43}\sublabel{NW1O6D2a-1j16uN-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1j16uN-1}}}\moddef{set frame size~{\nwtagstyle{}\subpageref{NW1O6D2a-1j16uN-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-lVBrR-1}}\nwenddeflinemarkup n = 0; XtSetArg(wargs[n], XtNheight, FRAMEH); n++; XtSetArg(wargs[n], XtNwidth, FRAMEW); n++; XtSetValues(drawing, wargs, n); \nwused{\\{NW1O6D2a-lVBrR-1}}\nwendcode{}\nwbegindocs{44}\nwdocspar The setup for the ``quit'' button is darned simple -- we just get out of dodge. 
\nwenddocs{}\nwbegincode{45}\sublabel{NW1O6D2a-5ooJp-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-5ooJp-1}}}\moddef{add event handlers~{\nwtagstyle{}\subpageref{NW1O6D2a-5ooJp-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-lVBrR-1}}\nwprevnextdefs{\relax}{NW1O6D2a-5ooJp-2}\nwenddeflinemarkup XtAddCallback(quit, XtNcallback, quit_func, NULL); \nwalsodefined{\\{NW1O6D2a-5ooJp-2}\\{NW1O6D2a-5ooJp-3}}\nwused{\\{NW1O6D2a-lVBrR-1}}\nwendcode{}\nwbegindocs{46}\nwdocspar \nwenddocs{}\nwbegincode{47}\sublabel{NW1O6D2a-1duChy-3}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-3}}}\moddef{static functions~{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{NW1O6D2a-1duChy-2}{NW1O6D2a-1duChy-4}\nwenddeflinemarkup static void quit_func(Widget w, XtPointer client, XtPointer call) \{ exit(0); \} \nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{48}\nwdocspar The redraw event simply sets the appropriate fields in the {\Tt{}self\nwendquote} record, creates a new graphics context, and then calls the {\Tt{}draw{\_}all\nwendquote} routine to do most of the work. \nwenddocs{}\nwbegincode{49}\sublabel{NW1O6D2a-5ooJp-2}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-5ooJp-2}}}\moddef{add event handlers~{\nwtagstyle{}\subpageref{NW1O6D2a-5ooJp-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-lVBrR-1}}\nwprevnextdefs{NW1O6D2a-5ooJp-1}{NW1O6D2a-5ooJp-3}\nwenddeflinemarkup XtAddEventHandler(drawing, ExposureMask, FALSE, (XtEventHandler) redisplay_event, self); \nwused{\\{NW1O6D2a-lVBrR-1}}\nwendcode{}\nwbegindocs{50}\nwdocspar \nwenddocs{}\nwbegincode{51}\sublabel{NW1O6D2a-1duChy-4}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-4}}}\moddef{static functions~{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{NW1O6D2a-1duChy-3}{NW1O6D2a-1duChy-5}\nwenddeflinemarkup static void redisplay_event(Widget w, XtPointer client, XExposeEvent* ev, Boolean* continue_dispatch) \{ xtnet_gc_t* self = (xtnet_gc_t*) client; if (ev->count != 0) return; self->disp = XtDisplay(w); self->win = XtWindow(w); XClearWindow(self->disp, self->win); self->gc = XCreateGC(self->disp, self->win, 0, NULL); XSetForeground(self->disp, self->gc, 1); XSetBackground(self->disp, self->gc, 0); draw_all(self); XFreeGC(self->disp, self->gc); \} \nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{}\nwbegindocs{52}\nwdocspar \nwenddocs{}\nwbegincode{53}\sublabel{NW1O6D2a-2XnWFD-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-2XnWFD-1}}}\moddef{clear and redraw~{\nwtagstyle{}\subpageref{NW1O6D2a-2XnWFD-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1duChy-5}}\nwenddeflinemarkup self->disp = XtDisplay(w); self->win = XtWindow(w); XClearWindow(self->disp, self->win); self->gc = XCreateGC(self->disp, self->win, 0, NULL); XSetForeground(self->disp, self->gc, 1); XSetBackground(self->disp, self->gc, 0); draw_all(self); XFreeGC(self->disp, self->gc); \nwused{\\{NW1O6D2a-1duChy-5}}\nwendcode{}\nwbegindocs{54}\nwdocspar The key press event handler rotates the displayed device by composing the transformation to view coordinates with a rotation. Up and down arrows rotate about the $x$ axis; left and right arrows about the $y$ axis; and page up and page down about the $z$ axis. 
\nwenddocs{}\nwbegincode{55}\sublabel{NW1O6D2a-5ooJp-3}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-5ooJp-3}}}\moddef{add event handlers~{\nwtagstyle{}\subpageref{NW1O6D2a-5ooJp-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-lVBrR-1}}\nwprevnextdefs{NW1O6D2a-5ooJp-2}{\relax}\nwenddeflinemarkup XtAddEventHandler(drawing, KeyPressMask, FALSE, (XtEventHandler) keypress_event, self); \nwused{\\{NW1O6D2a-lVBrR-1}}\nwendcode{}\nwbegindocs{56}\nwdocspar \nwenddocs{}\nwbegincode{57}\sublabel{NW1O6D2a-1duChy-5}\nwmargintag{{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-5}}}\moddef{static functions~{\nwtagstyle{}\subpageref{NW1O6D2a-1duChy-1}}}\plusendmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW1O6D2a-1HWiET-1}}\nwprevnextdefs{NW1O6D2a-1duChy-4}{\relax}\nwenddeflinemarkup static void keypress_event(Widget w, XtPointer client, XKeyEvent* ev, Boolean* continue_dispatch) \{ xtnet_gc_t* self = (xtnet_gc_t*) client; double inc = (M_PI / 18); switch (ev->keycode) \{ case 98: /* Up */ affine_rotate_ox(self->toview, -inc); break; case 104: /* Down */ affine_rotate_ox(self->toview, inc); break; case 100: /* Left */ affine_rotate_oy(self->toview, inc); break; case 102: /* Right */ affine_rotate_oy(self->toview, -inc); break; case 99: /* PgUp */ affine_rotate_oz(self->toview, -inc); break; case 105: /* PgDn */ affine_rotate_oz(self->toview, inc); break; \} \LA{}clear and redraw~{\nwtagstyle{}\subpageref{NW1O6D2a-2XnWFD-1}}\RA{} \} \nwused{\\{NW1O6D2a-1HWiET-1}}\nwendcode{} \nwixlogsorted{c}{{add event handlers}{NW1O6D2a-5ooJp-1}{\nwixu{NW1O6D2a-lVBrR-1}\nwixd{NW1O6D2a-5ooJp-1}\nwixd{NW1O6D2a-5ooJp-2}\nwixd{NW1O6D2a-5ooJp-3}}}% \nwixlogsorted{c}{{add widgets}{NW1O6D2a-pzde1-1}{\nwixu{NW1O6D2a-lVBrR-1}\nwixd{NW1O6D2a-pzde1-1}}}% \nwixlogsorted{c}{{clear and redraw}{NW1O6D2a-2XnWFD-1}{\nwixd{NW1O6D2a-2XnWFD-1}\nwixu{NW1O6D2a-1duChy-5}}}% \nwixlogsorted{c}{{draw functions}{NW1O6D2a-2Q2ktV-1}{\nwixu{NW1O6D2a-1HWiET-1}\nwixd{NW1O6D2a-2Q2ktV-1}\nwixd{NW1O6D2a-2Q2ktV-2}}}% \nwixlogsorted{c}{{functions}{NW1O6D2a-nRuDO-1}{\nwixu{NW1O6D2a-1HWiET-1}\nwixd{NW1O6D2a-nRuDO-1}\nwixd{NW1O6D2a-nRuDO-2}}}% \nwixlogsorted{c}{{get coordinate range information}{NW1O6D2a-2XykZa-1}{\nwixu{NW1O6D2a-1duChy-2}\nwixd{NW1O6D2a-2XykZa-1}}}% \nwixlogsorted{c}{{initialize \code{}self\edoc{}}{NW1O6D2a-3ei8GD-1}{\nwixu{NW1O6D2a-nRuDO-2}\nwixd{NW1O6D2a-3ei8GD-1}}}% \nwixlogsorted{c}{{initialize X client}{NW1O6D2a-lVBrR-1}{\nwixu{NW1O6D2a-nRuDO-2}\nwixd{NW1O6D2a-lVBrR-1}}}% \nwixlogsorted{c}{{macros}{NW1O6D2a-1VvxMr-1}{\nwixu{NW1O6D2a-1HWiET-1}\nwixd{NW1O6D2a-1VvxMr-1}\nwixd{NW1O6D2a-1VvxMr-2}}}% \nwixlogsorted{c}{{netdraw-xt.c}{NW1O6D2a-1HWiET-1}{\nwixd{NW1O6D2a-1HWiET-1}}}% \nwixlogsorted{c}{{netdraw-xt.h}{NW1O6D2a-3wIrEv-1}{\nwixd{NW1O6D2a-3wIrEv-1}}}% \nwixlogsorted{c}{{set frame size}{NW1O6D2a-1j16uN-1}{\nwixu{NW1O6D2a-lVBrR-1}\nwixd{NW1O6D2a-1j16uN-1}}}% \nwixlogsorted{c}{{set transformation to screen coordinates}{NW1O6D2a-406z1-1}{\nwixu{NW1O6D2a-1duChy-2}\nwixd{NW1O6D2a-406z1-1}}}% \nwixlogsorted{c}{{set transformation to view coordinates}{NW1O6D2a-2gFa0b-1}{\nwixu{NW1O6D2a-1duChy-2}\nwixd{NW1O6D2a-2gFa0b-1}}}% \nwixlogsorted{c}{{static data}{NW1O6D2a-IAwJm-1}{\nwixu{NW1O6D2a-1HWiET-1}\nwixd{NW1O6D2a-IAwJm-1}\nwixd{NW1O6D2a-IAwJm-2}}}% \nwixlogsorted{c}{{static functions}{NW1O6D2a-1duChy-1}{\nwixu{NW1O6D2a-1HWiET-1}\nwixd{NW1O6D2a-1duChy-1}\nwixd{NW1O6D2a-1duChy-2}\nwixd{NW1O6D2a-1duChy-3}\nwixd{NW1O6D2a-1duChy-4}\nwixd{NW1O6D2a-1duChy-5}}}% 
\nwixlogsorted{c}{{types}{NW1O6D2a-4H4VOG-1}{\nwixu{NW1O6D2a-1HWiET-1}\nwixd{NW1O6D2a-4H4VOG-1}}}% \nwixlogsorted{c}{{widget declarations}{NW1O6D2a-1OEBUZ-1}{\nwixu{NW1O6D2a-nRuDO-2}\nwixd{NW1O6D2a-1OEBUZ-1}\nwixd{NW1O6D2a-1OEBUZ-2}}}% \nwbegindocs{58}\nwdocspar \nwenddocs{}
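To make the intended use of this module concrete, here is a minimal sketch of a hypothetical driver. It assumes that a \texttt{mesh\_t} has already been assembled elsewhere (by the mesh module) and that \texttt{x} holds the corresponding displacement vector; the wrapper name \texttt{show\_device} is purely illustrative and is not part of the generated sources.
\begin{verbatim}
#include <mesh.h>
#include <netdraw_xt.h>

/* Hypothetical driver: display a previously assembled mesh.      */
/* Building the mesh itself is outside the scope of this module.  */
void show_device(mesh_t mesh, double* x, int argc, char** argv)
{
    xtnetdraw_init(argc, argv);  /* stash argc/argv for XtInitialize     */
    xtnetdraw(mesh, x);          /* forks; child runs the Xt event loop  */
    /* the parent returns immediately and can continue its work */
}
\end{verbatim}
Because \texttt{xtnetdraw} forks, the calling process is never blocked by the Xt event loop; the child process lives until its ``quit'' button callback calls \texttt{exit}.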
{ "alphanum_fraction": 0.743263004, "avg_line_length": 52.6043956044, "ext": "tex", "hexsha": "340ffdbf0175925211da070f445810662a492ddc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "davidgarmire/sugar", "max_forks_repo_path": "sugar31/src/tex/netdraw-xt.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "davidgarmire/sugar", "max_issues_repo_path": "sugar31/src/tex/netdraw-xt.tex", "max_line_length": 332, "max_stars_count": null, "max_stars_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "davidgarmire/sugar", "max_stars_repo_path": "sugar31/src/tex/netdraw-xt.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10226, "size": 23935 }
\chapter{Experiments} \label{chap-5} This chapter presents the results of initial experiments to create an approximate Danilov distribution in the SNS. The computational studies in Chapter \ref{chap-2} and Chapter \ref{chap-3} were used to guide the experiments, and the diagnostics described in Chapter \ref{chap-4} were used to measure the painted distribution. We repeat the following from Chapter \ref{chap-1}: Elliptical painting requires the creation of elliptical modes in the ring. The SNS ring is uncoupled, but elliptical modes can be created by equating the horizontal and vertical tunes. Simulations predict that the addition of solenoids to the ring will stabilize the beam against nonlinearities which strongly influence the motion in this setup (see Chapter \ref{chap-3} and Appendix \ref{app-C}). Solenoid magnets were planned to be installed in the SNS ring in 2021, but their installation was delayed until late 2022, outside the time frame of this work. Therefore, in the following experiments, the quality of the painted beam was not expected to approach the ``best-case scenario" simulated in Chapter \ref{chap-3}. But it was hoped that the measured beam would be distinguishable from one produced by normal injection methods. The signatures we desire are a reduced 4D emittance and a uniform charge density. A brief outline of this chapter: First, the experimental setup and data collection procedure are described. In Experiment 1, a production beam is measured for comparison and elliptical painting is attempted at a beam energy of 1 GeV. In Experiment 2, the beam energy is lowered to 0.8 GeV to allow proper scaling of the injection coordinates. In Experiment 3, several parameters are varied to study their effect on the measured 4D emittance. Finally, the implications of these experiments are discussed. \section{Procedure} Accelerator physics experiments are performed in the SNS control room using the OpenXAL framework, which provides a high-level interface to perform tasks such as changing magnet strengths, triggering the beam, etc. It can also perform single-particle or envelope tracking using an online model of the accelerator. OpenXAL scripts are written in Java or Jython and are executed from the command line. Many graphical user interface (GUI) OpenXAL applications have been developed over the history of the SNS and are available for use in the control room. The following steps are taken during the experimental setup: % \begin{enumerate} \item To increase the maximum injection angle, the beam energy is lowered from 1.0 GeV to 0.8 GeV by turning off several RF cavities at the end of the linac, then scaling every subsequent magnet in the machine. Lowering the energy can cause other accelerator components to trip or malfunction due to the modified timing of the beam pulses, and these issues must be corrected one-by-one. The first attempt to lower the energy to 0.8 GeV was successful and took approximately six hours. The task can now be performed by machine operators in less than half that time.\footnote{A lower beam energy is possible but requires significantly more effort, especially when the number of accumulated turns is large. Reduction of the energy requires the reduction of a master reference oscillator frequency, and the phase-locked loops of the various accelerator components become unstable if this frequency becomes too small. Circumvention of this issue requires changes to firmware that affect many other systems in the machine. 
An initial attempt to lower the energy to 0.6 GeV was successful but took over thirty-six hours.} % \item The horizontal and vertical tunes are set to the same value using the Ring Optics Control (ROC) application. ROC varies several quadrupoles until the model tunes are equal to the desired tunes. The tunes are measured using turn-by-turn BPM readings from a single minipulse in the ring. Generally, the measured and model tunes are not quite equal; we therefore shift the ROC input tunes until the measured tunes converge to the desired tunes. % \item Optional: The injection region is modified to increase the maximum injection angle.\footnote{One option is to utilize orbit corrector dipoles to provide a closed bump in either plane, thus moving the ring orbit closer to the foil. Another option is to steer the injected beam; this is not ideal because it requires modification of the trajectory of the unstripped H$^-$ ions after the foil, which must be guided to the beam dump. Finally, the Chicane dipole magnets can be modified, but again, this is complicated by the beam dump trajectory. The optimization of this system is an ongoing problem, and no modifications to the injection region are made in this work.} % \item The eight injection kicker magnets are calibrated using the Ring Injection Control (RIC) application, as described in Chapter \ref{chap-1}. % \item The kicker voltages required to obtain the desired injection coordinates at the start and end of injection are determined (as described in Chapter \ref{chap-1}). % \item Square root waveforms connecting the initial/final voltages are applied to the kicker magnets. The duration of the waveforms is chosen to be consistent with the desired number of injected turns, i.e., beam intensity. % \item The number of injected turns before extraction is chosen. This allows measurement of the beam at different times during accumulation. It is also possible to store the beam in the ring after it reaches full intensity, but this is not attempted here. % \end{enumerate} % The next task is to prepare for the measurements. For the wire-scanner measurement, the first step is to modify the RTBT optics using the application developed in Chapter \ref{chap-4}. If the fixed-optics method is used, the optics are changed immediately. If the multi-optics method is used, the optics are pre-computed and stored for later use. The second possible measurement is the tomographic reconstruction from $x$-$y$ projections on the target. Since the optics calculation is time-consuming, it can be run in the background while wire-scans are collected. \section{Experiment 1} At the time of our first experiment, setup of the injection region had not yet been completed. Although simulations indicated that a sizeable beam could not be painted at 1 GeV beam energy, this had not been tested. Furthermore, the SNS energy had not yet been decreased — a time-consuming task. Therefore, the goal of Experiment 1 was to push the injection coordinates $x$ and $y'$ to their limits at 1 GeV. We decided to measure the distribution not only at its final state but also at intermediate states during accumulation. The number of injected turns was reduced from 1000 to 500, halving the beam intensity, and the beam was measured every 50 turns using the fixed-optics method. \subsection{Experiment 1a: correlated painting} We first performed correlated painting for later comparison. The measured wire-scanner profiles are shown in Fig.~\ref{fig:exp1a_wsmeas}. 
% \begin{figure}[!p] \centering \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1a/waterfall.png} \end{subfigure} \vfill \vspace*{1.25cm} \vfill \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1a/rms.png} \end{subfigure} \caption{Measured wire-scanner profiles from Experiment 1a. The top figure shows the measured profiles on each wire as a function of time. The bottom plots show the moments extracted from the profiles.} \label{fig:exp1a_wsmeas} \end{figure} % % \begin{figure}[!p] \centering \begin{subfigure}{0.6\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1a/corner.png} \end{subfigure} \hfill \begin{subfigure}[t]{0.39\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1a/emittances.png} \end{subfigure} \caption{Reconstructed emittances and covariance ellipses from Experiment 1a. In this and subsequent figures, the reconstruction is performed at BPM17 and the light/dark ellipses correspond to the start/end of injection.} \label{fig:exp1a_emittances} \end{figure} % Each subplot shows the evolution of the projection onto a single wire; each row corresponds to a different wire-scanner and each column to a different projection axis — $x$, $y$, or $u$. Recall that in correlated painting, the injection angles are zero and the positions are increased from an initial offset. The initial offset is evident from the two peaks in the measured $x$ and $y$ profiles. The hollow center of the distribution eventually fills in due to nonlinear effects. The reconstructed emittances and covariance ellipses at BPM17, just before QH18, are shown in Fig.~\ref{fig:exp1a_emittances}. The size and location of the error bars were computed by repeating the reconstruction multiple times with 3\% random noise added to the measured moments, then taking the mean and standard deviation over the trials. For our purposes, the most important feature of Fig.~\ref{fig:exp1a_emittances} is that the measured cross-plane correlation is small throughout injection, demonstrating that there is very little coupling from electromagnetic fields of the beam or ring in the standard SNS painting scheme. \subsection{Experiment 1b: attempted elliptical painting} We then attempted elliptical painting. First, the horizontal and vertical tunes were set to 6.18. The next step was to move the closed orbit to the foil, which was found to be possible in the vertical plane but impossible in the horizontal plane. The minimum horizontal distance from the foil was 10 mm, and the maximum vertical injection angle was 0.7 mrad; assuming $\alpha_y \approx 0$ and $\beta_y \approx 10$, the painted vertical emittance can be estimated as % \begin{equation} \begin{aligned} \varepsilon_y &\approx \frac{1}{4}\beta_y {y_{max}'}^2 = 1.22 \,\, \text{mm~mrad}, \end{aligned} \end{equation} % which is only four times larger than the emittance from the linac. Although this is not ideal, we continued using initial coordinates ($x$, $x'$, $y$, $y'$) $\approx$ (10 mm, 0 mrad, 0 mm, 0 mrad) and final coordinates ($x$, $x'$, $y$, $y'$) $\approx$ (21 mm, 0 mrad, 0 mm, 0.7 mrad). Let us pause to predict the beam evolution using these settings, assuming linear transport: initial particles would oscillate along a flat horizontal ellipse in the $x$-$y$ plane, and as time progressed, the horizontal and vertical size of the ellipse would grow at different rates depending on the maximum injected $x$ and $y'$ coordinates. 
The inclusion of space charge complicates the analysis due to the non-uniformity introduced by the initial horizontal offset. The measured wire-scanner profiles are shown in Fig.~\ref{fig:exp1b_wsmeas}. % \begin{figure}[!p] \centering \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1b/waterfall.png} \end{subfigure} \vfill \vspace*{1.25cm} \vfill \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1b/rms.png} \end{subfigure} \caption{Measured wire-scanner profiles from Experiment 1b.} \label{fig:exp1b_wsmeas} \end{figure} % The horizontal projection at 50 turns is hollow — evidence of the initial offset of the closed orbit — but quickly filaments. The most important feature of Fig.~\ref{fig:exp1b_wsmeas} is the vertical beam size, which starts at a small value and increases throughout injection. This is an indication that the injection kicker waveforms are correct. Notice that the beam is much smaller than in Experiment 1a. We now turn to the reconstructed emittances and covariance ellipses in Fig.~\ref{fig:exp1b_emittances}. % \begin{figure}[!p] \centering \begin{subfigure}{0.6\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1b/corner.png} \end{subfigure} \hfill \begin{subfigure}[t]{0.39\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1b/emittances.png} \end{subfigure} \caption{Reconstructed emittances and covariance ellipses from Experiment 1b.} \label{fig:exp1b_emittances} \end{figure} % There is a clear separation between the intrinsic and apparent emittances throughout accumulation. A possible explanation for the larger-than-expected increase in vertical emittance is that space charge coupled the motion during injection, causing emittance exchange as new particles were added to the bunch; another possible explanation is emittance blow-up due to many particles being injected into a small region of vertical phase space; the answer is left for future work. The most important feature of Fig.~\ref{fig:exp1b_emittances} is that the measured distribution was significantly different than in the previous experiment and was closer to the desired case; in other words, the ring modifications seem to have worked as intended. We close with a PyORBIT simulation of this experiment in Fig.~\ref{fig:exp1b_sim}. % \begin{figure}[!p] \centering \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1b/sim_snapshots.png} \end{subfigure} \vfill \vspace*{1.0cm} \vfill \begin{subfigure}{0.7\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1b/sim_emittances.png} \end{subfigure} \caption{Simulation of Experiment 1b.} \label{fig:exp1b_sim} \end{figure} % Keep in mind that the $\beta$ functions of the ring at the injection point are not the same as in the experiment, so the exact values of the emittances are not expected to agree. Qualitative agreement with the measured emittance growth appears to be present. \section{Experiment 2} In Experiment 2, the beam energy was lowered to 0.8 GeV. At this energy, the closed orbit was able to reach the foil with zero slope, i.e., $x = x' = y = y' = 0$, as required to paint a uniform density beam. Again, only $x$ and $y'$ need to change during injection. There is no limit on $x_{max}$ since increasing $x$ involves decreasing the horizontal kickers, but the vertical kickers could only reach $y'_{max} \approx 1.1$ mrad. In the linear approximation, this angle is expected to produce a vertical emittance of 3-4 mm~mrad. 
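As a rough consistency check, reusing the Experiment 1b estimate with $\alpha_y \approx 0$ and $\beta_y \approx 10$ m at the injection point (the optics at 0.8 GeV may differ somewhat),
\[
\varepsilon_y \approx \frac{1}{4}\beta_y {y'_{max}}^2 = \frac{1}{4} \left(10 \,\, \text{m}\right) \left(1.1 \,\, \text{mrad}\right)^2 \approx 3.0 \,\, \text{mm~mrad},
\]
which is consistent with this range.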
We estimate the ratio of painted emittances as % \begin{equation}\label{eq:painted_emittance_ratio} \frac{\varepsilon_y}{\varepsilon_x} \approx \beta_x \beta_y \left(\frac{{y_{max}}'}{x_{max}}\right)^2 . \end{equation} % To paint equal emittances would require $x_{max}$ $\approx$ 10 mm — a small beam. We decided to use $x_{max}$ = 21 mm, maintaining the same beam intensity as in Experiment 1. The measured wire-scanner profiles are shown in Fig.~\ref{fig:exp2_wsmeas}. % \begin{figure}[!p] \centering \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp2/waterfall.png} \end{subfigure} \vfill \vspace*{1.25cm} \vfill \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp2/rms.png} \end{subfigure} \caption{Measured wire-scanner profiles during injection from Experiment 2.} \label{fig:exp2_wsmeas} \end{figure} % One important feature of Fig.~\ref{fig:exp2_wsmeas} is that the beam must have some rotational symmetry in the $x$-$y$ plane since the growth in beam size is similar on all wires. A second important feature is that the beam sizes start at a small value and increase at approximately the square root of time (light grey curves have been added showing the ideal square root time dependence given the final beam size). A third important feature is that the profiles appear to be more consistent with a Gaussian distribution than a uniform density distribution. This will be discussed more at the end of the chapter. The reconstructed emittances and covariance ellipses are shown in Fig.~\ref{fig:exp2_emittances}. % \begin{figure}[!p] \centering \begin{subfigure}{0.6\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp2/corner.png} \end{subfigure} \hfill \begin{subfigure}[t]{0.39\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp2/emittances.png} \end{subfigure} \caption{Reconstructed emittances and covariance ellipses from Experiment 2.} \label{fig:exp2_emittances} \end{figure} % The apparent emittances grow linearly from a small value, as intended. The measured apparent emittances are equal throughout injection, which is not expected from Eq.~\ref{eq:painted_emittance_ratio}. A possible explanation is that the beam experienced space-charge-driven emittance exchange as it circulated, resulting in equal emittances as new particles were added to the distribution (see Chapter \ref{chap-2}). The reconstructed intrinsic emittances begin to diverge at the end of injection, but the measured cross-plane correlation is small. Additionally, the error bars are larger than in the previous experiment; this is most likely due to a larger mismatch of the beam Twiss parameters at the RTBT entrance, which is expected given the increase in beam perveance at 0.8 GeV energy and reduced beam size. Possible bias in the measurement, expected to be near the 10\% level, must also be kept in mind. Given the small measured cross-plane correlation and larger error bars, it cannot be claimed that the 4D emittance is significantly reduced. A simulation of this case is shown in Fig.~\ref{fig:exp2_sim}. 
%
\begin{figure}[!p] \centering \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp2/sim_snapshots.png} \end{subfigure} \vfill \vspace*{1.0cm} \vfill \begin{subfigure}{0.7\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp2/sim_emittances.png} \end{subfigure} \caption{Simulation of Experiment 2.} \label{fig:exp2_sim} \end{figure}
%
Notice that $\varepsilon_2$ begins to flatten after turn 100, but does not remain flat, and although the final $x$-$y'$ projection has a higher density along the painting path, the linear correlation is significantly blurred. Space charge has a strong effect on the evolution at this intensity, energy, and beam size, providing some explanation for the measurements in Fig.~\ref{fig:exp2_emittances}.
\section{Experiment 3}
The goal of this final experiment was to vary the free parameters of the machine and record the intrinsic emittances in each case, with the plan to examine the most promising case in more detail. To begin, the setup from Experiment 2 was repeated. One difference was that the bunch length was increased from roughly 30/64 of the ring length to 40/64 of the ring length to better approximate a coasting beam. This was done by modifying the chopper settings before the linac and should have increased the total charge of the bunch without changing its charge density. Beam current monitor (BCM) measurements of the longitudinal distribution in the ring are shown in Fig.~\ref{fig:bcm_waterfall}; it is clear that there are no strong peaks from RF bunching.
%
\begin{figure}[!p] \centering \includegraphics[width=\textwidth]{Images/chapter5/exp3/bcm_waterfall.png} \caption{Evolution of the longitudinal distribution in the ring as measured by a beam current monitor (BCM).} \label{fig:bcm_waterfall} \end{figure}
%
Since the vertical injection angle could not be increased, the remaining free parameters were the beam intensity and horizontal beam size. Measured emittances for three different intensities and two different painting settings are shown in Fig.~\ref{fig:exp3_search}.\footnote{The intensities are not exact; they are obtained by multiplying the nominal minipulse intensity by the number of injected turns.} Collective effects seem to affect the final distribution. It is somewhat surprising that the split in the intrinsic emittances increased with the beam intensity.
%
\begin{figure}[!p] \centering \vspace*{1.0cm} \includegraphics[width=\textwidth]{Images/chapter5/exp3/search.png} \caption{Measured emittances vs. beam intensity for two sets of injected coordinates. (Error bars not shown).} \label{fig:exp3_search} \vspace*{1.0cm} \end{figure}
%
The measurement process of the previous two experiments was repeated for the center cluster in the right subplot, which amounts to a 50\% increase in horizontal beam size and a 20\% reduction in beam intensity. The measured wire-scanner profiles are shown in Fig.~\ref{fig:exp3_wsmeas}.
%
\begin{figure}[!p] \centering \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp3/waterfall.png} \end{subfigure} \vfill \vspace*{1.25cm} \vfill \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp3/rms.png} \end{subfigure} \caption{Measured wire-scanner profiles from Experiment 3.} \label{fig:exp3_wsmeas} \end{figure}
%
The beam size curves do not fit the square root dependence as well as in Experiment 2.
The measured profiles look similar to those in Experiment 2, although the horizontal projection at WS21 has developed a sharp peak surrounded by a lower density cloud. Fig.~\ref{fig:exp3_emittances} shows the reconstructed emittances and covariance ellipses.
%
\begin{figure}[!p] \centering \begin{subfigure}{0.6\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp3/corner.png} \end{subfigure} \hfill \begin{subfigure}[t]{0.39\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp3/emittances.png} \end{subfigure} \caption{Reconstructed emittances and covariance ellipses from Experiment 3.} \label{fig:exp3_emittances} \end{figure}
%
From Eq.~\ref{eq:painted_emittance_ratio}, the ratio $\varepsilon_y / \varepsilon_x$ is expected to be quite small — around 1/8 — but again, the apparent emittances remain nearly equal. A future study could split the tunes and perform correlated painting with zero offset from the foil, varying $x_{max}$ and $y_{max}$ to examine whether this phenomenon is caused by coupled space charge forces. Although the error bars remain large, the reconstructed $\varepsilon_{1,2}$ significantly deviate from $\varepsilon_{x,y}$ after 300 turns, which manifests in the tilting of the reconstructed ellipses in the cross-plane projections. This was the largest correlation measured so far. Beam images on the target were also collected as the horizontal and vertical phase advances were scanned. The bunch length was inadvertently decreased by a factor of three beforehand, precluding direct comparison with the wire-scanner measurements. Two of the images are shown in Fig.~\ref{fig:exp3_target_scan}.
%
\begin{figure}[!p] \centering \vspace*{5cm} \includegraphics[width=\textwidth]{Images/chapter5/exp3/target_scan/target_scan.png} \caption{Scan of the phase advances at the target. Left: processed images on the last two steps in the scan. Top right: $x$-$y$ correlation coefficients computed from the images. Bottom right: Phase advances at the target.} \label{fig:exp3_target_scan} \vspace*{5cm} \end{figure}
%
The $x$-$y$ correlation coefficient, although small, clearly depends on the phase advances, demonstrating that there were cross-plane correlations in the beam.\footnote{The $x$-$y$ correlation coefficient is calculated directly from the image.} It is recommended that this setup be repeated in a future experiment. It should be examined whether additional slight changes to the RTBT optics can reduce the uncertainty in the fixed-optics measurement, and the multi-optics measurement should be performed on the final distribution for comparison. Additionally, images of the same beam should be collected as the phase advances are scanned at the target. We conclude with a simulation of this experiment in Fig.~\ref{fig:exp3_sim}.
%
\begin{figure}[!p] \centering \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp3/sim_snapshots.png} \end{subfigure} \vfill \vspace*{1.0cm} \vfill \begin{subfigure}{0.7\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp3/sim_emittances.png} \end{subfigure} \caption{Simulation of Experiment 3.} \label{fig:exp3_sim} \end{figure}
%
This looks closer to the best-case scenario from Chapter \ref{chap-3}, even though solenoids are not present in the ring and the vertical injection angle is limited. Again, the predicted ratio $\varepsilon_1 / \varepsilon_2$ is larger than what was measured. We repeated the simulation as the horizontal tune was varied in steps of 0.005 around its original value of 6.18.
At $\nu_x = 6.2$, all cross-plane correlation in the beam was eliminated. Fig.~\ref{fig:exp3_sim_nux6.195_nuy6.18} shows the case when $\nu_x = 6.195$. % \begin{figure}[!p] \centering \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp3/sim_snapshots_nux6.195_nuy6.18.png} \end{subfigure} \vfill \vspace*{1.0cm} \vfill \begin{subfigure}{0.7\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp3/sim_emittances_nux6.195_nuy6.18.png} \end{subfigure} \caption{Simulation of Experiment 3 with $\nu_x = 6.195$, $\nu_y = 6.18$.} \label{fig:exp3_sim_nux6.195_nuy6.18} \end{figure} % The time at which the intrinsic emittances diverge from the apparent emittances has been pushed towards the end of injection. Although it is difficult to make detailed comparisons between the measurements and the simulations due to possible differences in the ring Twiss parameters at the injection point (resulting in a different beam size for the same kicker settings), uncertainty in the total beam charge, some uncertainty in the measured injection position/angle, and relatively large error bars on the measured emittances, the similarities between the measured emittances in Fig.~\ref{fig:exp3_emittances} and the simulated emittances in Fig.~\ref{fig:exp3_sim_nux6.195_nuy6.18} are striking.\footnote{Although the tune split was measured to be $\approx$ 0.01 with small expected uncertainty $\cite{Pelaia2016}$, the uncertainty in the tune measurement should be re-examined and work should be done to synchronize the ring model in PyORBIT with the online model in OpenXAL. The measurement should also be repeated as the tune split is varied.} These simulations indicate that the elliptical painting method (without solenoids) is very sensitive to the tune split in the ring, which may place a practical lower limit on the 4D emittance at this time; however, the qualitative agreement between measurement and simulation obtained here leaves open the possibility that tuning of the injection region and the addition of solenoid magnetic fields to the ring will bring the beam closer to a self-consistent state. \section{Summary and additional comparison between experiments} Let us make two additional comparisons between the final distributions in the three experiments. First, we reconstruct the covariance matrix at different locations in the RTBT. This will not change the emittances but will change the correlations between the phase space coordinates: the smaller the 4D emittance, the larger the variation in these correlations. See Fig.~\ref{fig:exp3_compare_corr}. % \begin{figure}[!p] \centering \vspace*{3.0cm} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp3/compare_corr.png} \end{subfigure} \hfill \begin{subfigure}{0.32\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp2/compare_corr.png} \end{subfigure} \hfill \begin{subfigure}{0.32\textwidth} \includegraphics[width=\textwidth]{Images/chapter5/exp1a/compare_corr.png} \end{subfigure} \caption{Reconstructed cross-plane correlation coefficients for Experiments 3, 2, and 1a.} \label{fig:exp3_compare_corr} \vspace*{3.0cm} \end{figure} % The black lines represent the reconstructed values and the grey regions represent the standard deviation. This is simply an alternative way to visualize the measured reduction in 4D emittance in Experiment 3. 
Second, although the color histograms in Fig.~\ref{fig:exp1a_wsmeas}, Fig.~\ref{fig:exp1b_wsmeas}, Fig.~\ref{fig:exp2_wsmeas}, and Fig.~\ref{fig:exp3_wsmeas} are useful to show the measured beam evolution in one figure, the 1D profiles may be more difficult to interpret than a normal histogram plot. We therefore include Fig.~\ref{fig:exp1a_fits}, Fig.~\ref{fig:exp1b_fits}, Fig.~\ref{fig:exp2_fits}, and Fig.~\ref{fig:exp3_fits}, which show the final measured wire-scanner profiles from each experiment. Also plotted are the projections of a Gaussian distribution (red) and uniform density elliptical distribution (blue) with the same standard deviation as the RMS calculation from the measurement.
%
\begin{figure}[!p] \centering \includegraphics[width=0.9\textwidth]{Images/chapter5/exp1a/fits_9.png} \caption{Measured wire-scanner profiles for the final distribution in Experiment 1a. Also plotted are the projections of a Gaussian distribution (red) and uniform density elliptical distribution (blue) with the same standard deviation as the rms calculation from the measurement.} \label{fig:exp1a_fits} \end{figure}
%
\begin{figure}[!p] \centering \includegraphics[width=0.9\textwidth]{Images/chapter5/exp1b/fits_9.png} \caption{Measured wire-scanner profiles for the final distribution in Experiment 1b.} \label{fig:exp1b_fits} \end{figure}
%
\begin{figure}[!p] \centering \includegraphics[width=0.9\textwidth]{Images/chapter5/exp2/fits_9.png} \caption{Measured wire-scanner profiles for the final distribution in Experiment 2.} \label{fig:exp2_fits} \end{figure}
%
\begin{figure}[!p] \centering \includegraphics[width=0.9\textwidth]{Images/chapter5/exp3/fits_7.png} \caption{Measured wire-scanner profiles for the final distribution in Experiment 3.} \label{fig:exp3_fits} \end{figure}
%
We now review the main results of the experiments in this chapter, commenting on these wire-scanner profiles along the way. Recall that our goal was to carry out the elliptical painting method in the SNS, measure the painted distribution, and compare the measurements to an ideal Danilov distribution. This has been accomplished. In Experiment 1, the Ring Injection Control (RIC) application was tested at 1 GeV beam energy, and the beam emittance was efficiently measured throughout injection. First, correlated painting was used in Experiment 1a; the measured intrinsic emittances remained close to the apparent emittances, showing that there was very little cross-plane correlation in the beam. Second, in Experiment 1b, it was found that lowering the beam energy was necessary to inject particles onto the closed orbit, which is a necessary condition to perform the elliptical painting method. Nonetheless, setting equal tunes in the ring and varying the vertical injection angle resulted in a measured split in the intrinsic emittances. Furthermore, some of the measured wire-scanner profiles were more consistent with the projection of a uniform density elliptical distribution than with a Gaussian distribution. This gave us confidence that RIC was working as intended. In Experiment 2, the beam energy was lowered to 0.8 GeV. BPM measurements verified that the initial kicker settings could be achieved so that particles were injected onto the closed orbit. The final kicker settings were chosen so that ($x$, $x'$, $y$, $y'$) $\approx$ (21 mm, 0 mrad, 0 mm, 1.1 mrad).
Wire-scanner measurements showed that the beam size grew with approximate square root time-dependence, as intended, but only a small split in the intrinsic emittances was measured at the end of injection. Additionally, the wire-scanner profiles were more consistent with a Gaussian distribution. In Experiment 3, the beam size and intensity were varied. The most promising case — 20\% reduction in beam intensity and 50\% increase in horizontal beam size — was investigated in more detail. A larger split in the intrinsic emittances was measured during the last hundred turns of injection. Additionally, the tilt angle of the beam image on the target was shown to depend on the phase advances at the target (even though the bunch length was inadvertently reduced). Although most of the wire-scanner profiles still appeared to be consistent with a Gaussian distribution, one could argue that there were subtle differences from the previous experiment. For example, in Fig.~\ref{fig:exp3_fits}, the horizontal projection at WS21 exhibits a sharp peak, resembling the $y'$ projection in Fig.~\ref{fig:Holmes}. (This is also present at WS20 to a lesser extent.) These features only appeared after 300 turns, when the intrinsic emittances began to split. Finally, simulations were performed to reproduce the measured emittance growth during injection. By splitting the tunes by 0.015 in the simulation, the qualitative behavior of the emittance growth was reproduced. This showed that the distribution is quite sensitive to the tune split in the ring. These results are promising: extensive troubleshooting has occurred during machine setup, the ring orbit has been measured and controlled, and modifications to the machine have been shown to have a positive effect on the painted distribution. Furthermore, the simulation-measurement agreement in Experiment 3 offers hope that future optimization of the SNS will result in the production of a Danilov-like distribution in the ring. One immediate need is to modify the injection region to increase the maximum vertical injection angle so that a larger, rounder beam can be painted.
{ "alphanum_fraction": 0.7751343731, "avg_line_length": 83.4485294118, "ext": "tex", "hexsha": "0ba04670698d3b3c1f365721a8dcd3473f27f60d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "53845b2acfd6da962c19967a98987208988d841e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "austin-hoover/dissertation", "max_forks_repo_path": "MainText/chapter5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "53845b2acfd6da962c19967a98987208988d841e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "austin-hoover/dissertation", "max_issues_repo_path": "MainText/chapter5.tex", "max_line_length": 1515, "max_stars_count": null, "max_stars_repo_head_hexsha": "53845b2acfd6da962c19967a98987208988d841e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "austin-hoover/dissertation", "max_stars_repo_path": "MainText/chapter5.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8325, "size": 34047 }
\chapter{Picturing ancient Greece through the dialects}\label{chap:7}\largerpage When in 1579 Franciscus Junius the Elder (1545–1602) held his \textit{Discourse on the antiquity and excellence of the \ili{Hebrew} language} at the short-lived reformed academy of Neustadt (the \textit{Casimirianum}), he could not resist emphasizing the merits of this sacred tongue vis-à-vis the Greek language: \begin{quote} Indeed, as to individual words, fluency of expression is achieved by the fact that there are neither innumerous words nor so many dialects [in \ili{Hebrew}] as among the verbose and mendacious Greeks, since almost every single author among them seems to have forged himself his own language because of a certain malicious rivalry.\footnote{\citet[\textsc{b.3}\textsc{\textsuperscript{v}}]{Junius1579}: “In uocibus enim singulis pertinet ad facilitatem istud, quod non habentur innumerae uoces neque dialecti tam multae, ut apud uerbosos et mendaces Graecos, quorum singuli paene auctores suam sibi linguam cacozelo quodam uidentur fabricasse”. This discourse was reprinted in Junius’s \ili{Hebrew} grammar (\citealt{Junius1580}: ẽ.ii\textsc{\textsuperscript{v}}–ẽ.iii\textsc{\textsuperscript{r}}). The word \textit{cacozelus} (< Greek κακόζηλoς) can mean both ‘using a bad, affected style’ and – in the neuter (τὸ κακόζηλoν) – ‘unhappy imitation; rivalry’ (\citealt{LiddellScott1940}: \textit{s.v.}). Here, “cacozelo quodam” must be interpreted as an \isi{ablative} of the substantivized adjective expressing a cause.} \end{quote} Junius’s observation on the uniformity of \ili{Hebrew}, favorably compared to the endlessly affected variation of Greek, betrays his negative ideas about the countless differences existing among the Greek dialects. It moreover shows that he connected the Greek dialects to other aspects of Greekness, in this case the Greeks’ innate verbosity, mendacity, and malicious competitiveness. Junius was not the only scholar to do so. Numerous early modern thinkers related dialectal differences existing in Greek to language-external aspects of ancient Greece. How and why did they do so? And to what extent were they inspired by ancient and medieval sources? \section{Texts and tribes}\label{sec:7.1} As the Greek dialects were principally studied for philological reasons, scholars associated them closely with the literary texts composed in them (see Chapter~\ref{chap:3}). As a matter of fact, ever since antiquity, it had been customary to link a dialect primarily to an author or a group of authors. \ili{Aeolic} was written by authors such as Alcaeus and Sappho, \ili{Attic} by \iai{Plato} and Thucydides, \ili{Doric} by Alcman and \iai{Theocritus}, and \ili{Ionic} by \iai{Herodotus} and Hippocrates. Oddly enough, several Greek scholars mistook Pindar’s language for the \ili{Koine}, a misconception definitively corrected only in the Renaissance. The Italian Hellenist Angelo Canini (1521–1557) was already able to rightly identify the poet’s speech as principally \ili{Doric} (\citealt{Canini1555}: a.4\textsc{\textsuperscript{r}}). The dialects were moreover tied up with specific literary genres. \ili{Doric} was, for example, the usual dialect of \isi{bucolic poetry} and tragic choral odes. At the same time, the dialects were also associated with the homogeneously conceived tribes speaking them. \ili{Aeolic} was the dialect of the Aeolian Greeks, \ili{Doric} of the Dorians, \ili{Ionic} of the Ionians, and \ili{Attic} of the inhabitants of Attica. 
This coincidental close linking of the dialects with literature, on the one hand, and the people speaking them, on the other, made authors prone to transferring evaluative labels associated with literary genres and Greek tribes to the dialects themselves. In this and the following sections I will focus on such dialect attitudes. Research into language and dialect attitudes in general is a recent, though well-established field of investigation (see e.g. \citealt{Edwards2009}: 73–98; \citealt{Garrett2010}: 19–29). It studies what qualities and vices are ascribed to specific speech forms, and how and why this happens. In other words, it endeavors to map out the impressions languages and dialects convey to speakers. Such impressions are often construed or reinforced by cultural \isi{stereotypes} – i.e. assumptions about the alleged characteristics of specific regions and ethnic groups – so that the study of language and dialect attitudes may be considered a contribution to \isi{imagology} as well (on \isi{imagology}, see \citealt{BellerLeerssen2007}). Early modern attitudes to other languages and dialects have already received considerable attention. William J. \citet{Jones1999}, for instance, has studied the attitudes of early modern German scholars toward European languages. However, no systematic treatment of early modern attitudes toward the ancient Greek dialects exists, which is why I aim to offer a first exploration of the matter here, with a focus on attitudes toward the canonical four dialects: \ili{Aeolic}, \ili{Attic}, \ili{Doric}, and \ili{Ionic}.\footnote{For attitudes toward \ili{Attic} in early modern German works, see the brief account of \citet[251--252]{Roelcke2014}.} Here, too, it is impossible to understand early modern views separately from ancient Greek and Byzantine ideas. For this reason, I will very briefly delve into Greek views first. \section{Dialect attitudes from antiquity to early modernity}\label{sec:7.2} Ancient and Byzantine Greek authors expressed their assessments of individual dialects at various occasions in their works, almost as a rule in passing. This occurred in diverging genres, including works of grammar, \isi{philosophy}, history, \isi{geography}, \isi{rhetoric}, and even poetry. As most relevant comments are of a cursory nature, there was no canonical, generally accepted evaluation of the Greek dialects. Some ancient Roman authors also attributed labels to Greek dialects in the same sporadic fashion. \tabref{tab:7.1} offers a synoptic overview of the most important ancient and medieval attitudes toward the dialects. It suggests that negative labels were more numerous than positive ones. This does not indicate, however, that the canonical dialects were predominantly assessed in a negative way. Many of the unfavorable evaluations were only mentioned by one author, such as the label of “barbarian” in the case of (\ili{Lesbian}) \ili{Aeolic}, whereas some of the positive labels were widespread, in particular the eloquence and elegance of \ili{Attic}. 
\begin{longtable}{>{\raggedright\arraybackslash\hangindent=.5em}p{3cm}>{\raggedright\arraybackslash}p{\textwidth - 3\tabcolsep - 3cm}} \caption{Ancient and medieval attitudes toward the canonical Greek dialects}\label{tab:7.1}\\ \lsptoprule Label & Sources (\& early modern authors relying on them)\\\midrule\endfirsthead\midrule Label & Sources (\& early modern authors relying on them)\\\midrule\endhead\endfoot\lspbottomrule\endlastfoot \multicolumn{2}{c}{Aeolic}\\\midrule barbarian & \iai{Plato}, \textit{Protagoras} 341c, said specifically of \ili{Lesbian} \ili{Aeolic}.\\ obscure & \iai{Dionysius of Halicarnassus}, \textit{De imitatione} 31.2.8.\\ unusual, affected, insolent & \iai{Apuleius}, \textit{Apologia (Pro se de magia liber)} 9; \iai{Athenaeus}, \textit{Deipnosophistae} 14.19. (See e.g. \citealt{Munthe1748}: 3.)\\ old-fashioned, archaic & \ia{Scholia Vaticana@\textit{Scholia Vaticana}}\textit{Scholia Vaticana} (ed. \citealt{Hilgard1901}: 117).\\ \midrule\multicolumn{2}{c}{Attic}\\\midrule mixed & \iai{Pseudo-Xenophon}, \textit{Atheniensium respublica} 2.8; \iai{Athenaeus}, \textit{Deipnosophistae} 3.94; \ia{Plutarch!Pseudo-Plutarch}Pseudo-Plutarch, \textit{De Homero 2}. (See e.g. \citealt{Schwartz1721}: 223; \citealt{Maittaire1706}: iii; \citealt{Saumaise1643a}: 437–438, respectively.)\\ (too) elaborate & \iai{Heraclides Criticus}, \textit{Descriptio Graeciae} 1.4. (See e.g. \citealt{Estienne1573}: ¶.ii\textsc{\textsuperscript{v}}–¶.iii\textsc{\textsuperscript{r}}, referring to “Artemidori geographiae fragmentum”.)\\ concise, popular, fitting for pleasantries & \iai{Demetrius}, \textit{De elocutione} 177; \iai{Cicero}, \textit{Orator} 89. (See e.g. \citealt{Munthe1748}: 3.)\\ excellent, charming, eloquent & \iai{Quintilian}, \textit{Institutio oratoria} 6.3.107, 8.1.2 \& 10.1.100; \iai{Cicero}, \textit{Orator} 25 \& 28 and \textit{Brutus} 172; \iai{Velleius Paterculus}, \textit{Historiae Romanae} 1.18.1. (See e.g. \citealt{Duret1613}: 690; \citealt{Rollin1726}: 118–119.)\\ artificial & \ia{Scholia Vaticana@\textit{Scholia Vaticana}}\textit{Scholia Vaticana} (ed. \citealt{Hilgard1901}: 117).\\\midrule\multicolumn{2}{c}{Doric}\\\midrule \is{broadness}broad, flat & \iai{Theocritus}, \textit{Idyllia} 15.87–88 and \ia{Scholia in Theocritum@\textit{Scholia in Theocritum}}\textit{Scholia in Theocritum} (\textit{scholia uetera}) on this passage; \ia{Hermogenes of Tarsus}Hermogenes, Περὶ ἰδεῶν λόγoυ 1.6; \iai{Demetrius}, \textit{De elocutione} 177. (See e.g. \citealt{Caelius1542}: 465; \citealt{Estienne1573}: ¶.ii\textsc{\textsuperscript{r}}-¶.ii\textsc{\textsuperscript{v}}; \citealt{Saumaise1643a}: 77.)\\ annoying, affected & \iai{Suetonius}, \textit{De uita Caesarum}, \textit{Tiberius} 56.1.\\ obscure & \iai{Porphyry}, \textit{Vita Pythagorae} 53. (See e.g. \citealt{Bentley1699}: 317; \citealt{Mazzocchi1754}: 119 n.5.)\\ rustic & \iai{Pseudo-Probus}, \textit{Commentarius in Vergilii Bucolica et Georgica}, \textit{praefatio}. \iai{Marcus Manilius} (\textit{Astronomica} 767) associated Dorians with rusticity in general terms. (See \citealt{Rapin1659}: 121; cf. \textit{infra}.)\\\relax [\textit{said of old \ili{Doric}:}] rough, difficult\newline\hspace*{-.5em}\relax [\textit{said of new \ili{Doric}:}] gentler, easier & \textit{Scholia in Theocritum (scholia uetera)} F.a.–c. (For old \ili{Doric}, see e.g. \citealt{Mazzocchi1754}: 118–119; for new \ili{Doric}, see e.g. 
\citealt{Valckenaer1773}: 208.)\\\relax magnificent & \ia{Scholia Vaticana@\textit{Scholia Vaticana}}\textit{Scholia Vaticana} (ed. \citealt{Hilgard1901}: 117). (See e.g. \citealt{Estienne1581}: 15–16.)\\\midrule\multicolumn{2}{c}{Ionic}\\\midrule fluent, pleasant & \iai{Quintilian}, \textit{Institutio oratoria} 9.14.18. (See e.g. \citealt{Munthe1748}: 9.)\\ relaxed, frivolous & \ia{Scholia Vaticana@\textit{Scholia Vaticana}}\textit{Scholia Vaticana} (ed. \citealt{Hilgard1901}: 117).\\ % % % \begin{tabularx}{\textwidth}{lQQ} % % % \lsptoprule % % % \textsc{Dialect} & \textsc{Label} & \textsc{Sources} (\& early modern authors relying on them)\\ % % % \multicolumn{1}{c}{Aeolic} & barbarian & Plato, \textit{Protagoras} 341c, said specifically of \ili{Lesbian} \ili{Aeolic}.\\ % % % & obscure & Dionysius of Halicarnassus, \textit{De imitatione} 31.2.8.\\ % % % & unusual, affected, insolent & Apuleius, \textit{Apologia (Pro se de magia liber)} 9; Athenaeus, \textit{Deipnosophistae} 14.19. (See e.g. \citealt{Munthe1748}: 3.)\\ % % % & old-fashioned, archaic & \textit{Scholia Vaticana} (ed. \citealt{Hilgard1901}: 117).\\ % % % \multicolumn{1}{c}{Attic} & mixed & Pseudo-Xenophon, \textit{Atheniensium respublica} 2.8; Athenaeus, \textit{Deipnosophistae} 3.94; Pseudo-Plutarch, \textit{De Homero 2}. (See e.g. \citealt{Schwartz1721}: 223; \citealt{Maittaire1706}: iii; \citealt{Saumaise1643a}: 437–438, respectively.)\\ % % % & (too) elaborate & \iai{Heraclides Criticus}, \textit{Descriptio Graeciae} 1.4. (See e.g. \citealt{Estienne1573}: ¶.ii\textsc{\textsuperscript{v}}–¶.iii\textsc{\textsuperscript{r}}, referring to “Artemidori geographiae fragmentum”.)\\ % % % & concise, popular, fitting for pleasantries & Demetrius, \textit{De elocutione} 177; \iai{Cicero}, \textit{Orator} 89. (See e.g. \citealt{Munthe1748}: 3.)\\ % % % & excellent, charming, eloquent & Quintilian, \textit{Institutio oratoria} 6.3.107, 8.1.2 \& 10.1.100; \iai{Cicero}, \textit{Orator} 25 \& 28 and \textit{Brutus} 172; Velleius Paterculus, \textit{Historiae Romanae} 1.18.1. (See e.g. \citealt{Duret1613}: 690; \citealt{Rollin1726}: 118–119.)\\ % % % & artificial & \textit{Scholia Vaticana} (ed. \citealt{Hilgard1901}: 117).\\ % % % \multicolumn{1}{c}{Doric} & broad, flat & \iai{Theocritus}, \textit{Idyllia} 15.87–88 and \textit{Scholia in Theocritum} (\textit{scholia uetera}) on this passage; Hermogenes, Περὶ ἰδεῶν λόγoυ 1.6; Demetrius, \textit{De elocutione} 177. (See e.g. \citealt{Caelius1542}: 465; \citealt{Estienne1573}: ¶.ii\textsc{\textsuperscript{r-v}}; \citealt{Saumaise1643a}: 77.)\\ % % % & annoying, affected & Suetonius, \textit{De uita Caesarum}, \textit{Tiberius}, 56.1.\\ % % % & obscure & \iai{Porphyry}, \textit{Vita Pythagorae} 53. (See e.g. \citealt{Bentley1699}: 317; \citealt{Mazzocchi1754}: 119 n.5.)\\ % % % & rustic & Pseudo-Probus, \textit{Commentarius in Vergilii Bucolica et Georgica}, \textit{praefatio}. \iai{Marcus Manilius} (\textit{Astronomica} 767) associated Dorians with rusticity in general terms. (See \citealt{Rapin1659}: 121; cf. \textit{infra}.)\\ % % % & [\textit{said of old \ili{Doric}:}] rough, difficult & \textit{Scholia in Theocritum (scholia uetera)} F.a.–c. (For old \ili{Doric}, see e.g. \citealt{Mazzocchi1754}: 118–119; for new \ili{Doric}, see e.g. \citealt{Valckenaer1773}: 208.)\\ % % % & [\textit{said of new \ili{Doric}:}] gentler, easier & \\ % % % & magnificent & \textit{Scholia Vaticana} (ed. \citealt{Hilgard1901}: 117). (See e.g. 
\citealt{Estienne1581}: 15–16.)\\ % % % \multicolumn{1}{c}{Ionic} & fluent, pleasant & Quintilian, \textit{Institutio oratoria} 9.14.18. (See e.g. \citealt{Munthe1748}: 9.)\\ % % % & relaxed, frivolous & \textit{Scholia Vaticana} (ed. \citealt{Hilgard1901}: 117).\\ % % % \lspbottomrule % % % \end{tabularx} \end{longtable} \pagebreak It can be noted here that ancient scholars were prone to link the Greek tribes and their dialects to styles within certain arts as well. The Greek dialects were in other words not approached in isolation, but viewed as an undeniable characteristic of the Greek world, pervading numerous dimensions of it. Modes of \isi{music} were called \ili{Doric} and \ili{Aeolic} because they were reminiscent of certain features of these dialects, and a similar association occurred in scholarship on \isi{architecture}. It would lead me too far to treat this complex extrapolation of the traditional Greek tribal-dialectal scheme to \isi{music} and \isi{architecture} in detail here, all the more since its impact on early modern views was highly limited.\footnote{See \citet{Cassio1984}. \citet[118]{Mazzocchi1754} was exceptional in connecting the canonical dialects and architectural styles with the same evaluative properties. In the case of \ili{Doric}, this was coarseness and roughness. In doing so, he no doubt relied on \iai{Vitruvius}, \textit{De architectura} 4.1.6–8.} Yet it is important to keep in mind that the dialects were intertwined with other domains of knowledge, and that they were able to evoke strong sensual associations going beyond the level of language even in ancient and medieval times.\largerpage[-1] As can be expected, early modern scholars relied to a considerable degree on ancient and Byzantine sources when attributing evaluative labels to the canonical dialects; this can be gathered from \tabref{tab:7.1}, which offers a rudimentary chart of this dependence of early modern Hellenists on earlier sources. There are nonetheless three major differences between ancient and medieval texts, on the one hand, and early modern works, on the other. Firstly, scholars introduced numerous new assessments, as \tabref{tab:7.2} reveals. These were very often a direct consequence of the \isi{literary usage} of the dialect in question. For instance, the frequent characterization of \ili{Doric} as “boorish” or “rustic” seems to have largely been an early modern innovation. \iai{Pseudo-Probus} already called \ili{Doric} \textit{rusticus} in his commentary on Vergil’s \textit{Bucolics} and \textit{Georgics}, but this is an isolated instance, which barely influenced early modern authors. The early modern emphasis on \ili{Doric} rusticity is likely to have been due to a stronger association of \ili{Doric} with the \isi{bucolic poetry} of authors such as \iai{Theocritus}, a very popular poet among humanists and in their schools. This is in agreement with a broader tendency in language attitudes. Indeed, Brigitte \citet{Schlieben-lange1992} has shown that it is not uncommon for properties of texts to be transferred to the variety in which they are written. For example, in a letter dating to November 1511, a German student learning Greek in Paris characterized the \ili{Doric} dialect as “scabrous” or “filthy” (\textit{scaber}) and “somewhat rustic” (\textit{subrusticus}). 
He complained that his teacher, the polyglot humanist and later cardinal Girolamo Aleandro (1480–1542), kept focusing on the \ili{Doric} poetry of \iai{Theocritus} instead of reading texts in the \textit{lingua communis}, the Greek \ili{Koine}. The student did admit, however, that this dialect was very apt for rustic subject matter.\footnote{The letter, written by a certain Johannes Kierher, is cited in \citet[220 n.435; cf. also p. 103]{Botley2010}.} The idea of \ili{Doric} roughness was also fostered by its close association with the rugged Peloponnese and the rather unrefined mores of its inhabitants, not in the least those of warlike Sparta. The Dutch philologist Isaac Vossius (1618–1698) linked harshness and rusticity to \ili{Doric} in his 1673 treatise on ancient poetry, claiming that the Ionians laughed at the Dorians for this reason. The Dorians, in turn, allegedly mocked the Ionians for their effeminacy \citep[55]{Vossius1673}. Vossius was, in a sense, fictitiously reconstructing the mutual social behavior of two ancient Greek tribes by relying on widespread \isi{stereotypes} about them. \begin{longtable}{>{\raggedright\arraybackslash\hangindent=.5em}p{3cm}>{\raggedright\arraybackslash}p{\textwidth - 3\tabcolsep - 3cm}} \caption{Early modern attitudes toward the canonical ancient Greek dialects. The number of examples offered in the right column can be taken as an indication of the frequency of each label.\label{tab:7.2}}\\ \lsptoprule Label & Testimonies\\\midrule\endfirsthead\midrule Label & Testimonies\\\midrule\endhead\endfoot\lspbottomrule\endlastfoot \multicolumn{2}{c}{Aeolic}\\\midrule sweet, adequate for lyric poems & \citet[a.4\textsc{\textsuperscript{r}}]{Canini1555} called \ili{Aeolic} “melicis apta”. Cf. \citet[103]{Hoius1620}. \citet[106]{Giraudeau1739} regarded it as “pronuntiatu suauissima”.\\ heavy, weighty, serious & \citet[16]{Estienne1581} believed it to display a certain \textit{grauitas}, ‘seriousness’, which is central to his discussion of the qualities of \ili{French} vis-à-vis \ili{Italian} as well \citep[71]{Swiggers2009}.\\ rough, uncultivated, unpleasant & \citet[61]{Walper1589}; \citet[415]{Walper1590} labeled it together with the allegedly cognate \ili{Doric} dialect \textit{incultior}, \textit{ingratus auribus}, \textit{minus politus}, and \textit{insuauis}. See e.g. also \citet[515, \textit{asper}]{Fabricius1711}; \citet[6, \textit{rudis}]{Georgi1729}; \citet[e.g. 28, \textit{inamoenus}]{Munthe1748}.\\ \is{broadness}broad, rather thick & By \isi{analogy} with \ili{Doric}, to which \ili{Aeolic} was believed to be closely cognate, \citet[582]{Nibbe1725} called \ili{Aeolic} \textit{breit}. See e.g. also \citet[\textsc{a.2}\textsc{\textsuperscript{v}}]{Hauptmann1776}, where the verb \textit{platustomé\={o}} (πλατυστoμέω), ‘to speak with a broad mouth’, is applied to \ili{Aeolic}. \Citet[17]{Von1705} characterized \ili{Aeolic} \isi{pronunciation} as \textit{obtusior}.\\ \midrule\multicolumn{2}{c}{Attic}\\\midrule\relax (most) elegant, noble, polished, cultivated, tender, fine, pure, neat, honey-sweet, etc. & \citet[a.i\textsc{\textsuperscript{v}}]{Melanchthon1518} called \ili{Attic} “elegantissima”. See e.g. also \citet[209]{Vergara1537}; \citet[5\textsc{\textsuperscript{r}}]{Baile1588}; \citet[334]{Alsted1630}. \citet[226]{Ruland1556} attributed \textit{concinnitas} to \ili{Attic} and characterized it as beautiful and charming. See e.g. 
also \citet[\textsc{e}.iii\textsc{\textsuperscript{v}}]{Oreadini1525}; \citet[76, 112, 424]{Saumaise1643a}, who linked this label to a round-mouthed \isi{pronunciation}. \citet[96]{Hoius1620} called \ili{Attic} \textit{mellitus}.\\ copious & \citet[a.3\textsc{\textsuperscript{v}}]{Canini1555} dubbed \ili{Attic} \textit{copiosus}.\\ manly, weighty & \citet[6]{Georgi1729} applied the adjectives \textit{uirilis} and \textit{grauis} to \ili{Attic}. See e.g. also \citet[515]{Fabricius1711}.\\ \midrule\multicolumn{2}{c}{Doric}\\\midrule boorish, rustic & \citet[317]{Bentley1699} e.g. labeled \ili{Doric} \textit{rustic}. This property led the translator of \citet[117]{Rapin1659} to call \ili{Doric} “sometimes scarce true grammar” \citep[31]{Rapin1684}.\\ pleasant, adequate for smoother poets & \citet[a.4\textsc{\textsuperscript{r}}]{Canini1555} dubbed it “suauissima” and “poetis mollioribus accommodatissima”. See e.g. also \citet[139\textsc{\textsuperscript{r}}]{Vuidius1569}.\\ rough, uncultivated, unpleasant & See above on \ili{Aeolic} and \citet[46\textsc{\textsuperscript{r}}]{Gessner1555}, labelling \ili{Doric} \textit{crassissimus}. \citet[54]{Vossius1673} characterized \ili{Laconian}, a variety of \ili{Doric}, as rough, threatening, and “doglike”. This last property was linked to the frequency of the letter rho, the dog’s letter, at the end of many \ili{Laconian} words. Cf. \citet[24]{Munthe1748}.\\ short in speech & Attributed to \ili{Laconian} \ili{Doric} by \iai{Plato} (\textit{Leges} 641e), it was extrapolated to \ili{Doric} as a whole by \citet[393]{Saumaise1643a}. Cf. \citet[138\textsc{\textsuperscript{v}}]{Beroaldo1493}.\\ magnificent, warlike, manly & \citet[55]{Vossius1673} described the \ili{Doric} dialect as “magnifica et bellica, sed absque iracundia”. He also associated it with manliness.\\ distinguished, flourishing & \citet[161]{Gesner1774} called \ili{Doric} \textit{florentissimus}.\\\midrule ~\\ ~\\ ~\\ % dummy lines to decrease page length \multicolumn{2}{c}{Ionic}\\\midrule long in speech, slow, redundant & \citet[677]{Caelius1542} opposed \ili{Ionic} lengthiness in speech to \ili{Laconian} brevity (he called the Ionians “\textit{makrológoi} [μακρoλόγoι]”). \citet[75]{Saumaise1643a} spoke of \ili{Ionic} slowness and redundancy.\\ elegant, polished, neat, honey-sweet & \citet[\textsc{a.2}\textsc{\textsuperscript{r}}]{Hauptmann1776} ascribed \textit{mundities} to \ili{Ionic}. \citet[290]{Verwey1684} spoke of the \textit{mel Ionicum}.\\ faint, delicate, womanish & \citet[75]{Saumaise1643a} linked the \textit{genius} of \ili{Ionic} to the mores and the “long and fluid” clothing style of the Ionians, which he characterized as both faint and womanish. He pointed to the migration to Asia as the cause of their effeminacy. See e.g. also \citet[139\textsc{\textsuperscript{r}}]{Vuidius1569}.\\ \end{longtable} % % % \begin{table} % % % \caption{Early modern attitudes toward the canonical ancient Greek dialects. The number of examples offered in the right column can be taken as an indication of the frequency of each label.}\label{tab:7.2} % % % % % % % % % \begin{tabularx}{\textwidth}{XXX} % % % \lsptoprule % % % % % % \multicolumn{1}{c}{\textbf{\textsc{Dialect}}} & \textbf{\textsc{Label}} & \textbf{\textsc{Testimonies}}\\ % % % \multicolumn{1}{c}{Aeolic} & sweet, adequate for lyric poems & \citet[a.4\textsc{\textsuperscript{r}}]{Canini1555} called \ili{Aeolic} “melicis apta”. Cf. \citet[103]{Hoius1620}. 
\citet[106]{Giraudeau1739} regarded it as “pronuntiatu suauissima”.\\ % % % & heavy, weighty, serious & \citet[16]{Estienne1581} believed it to display a certain \textit{grauitas}, ‘seriousness’, which is central to his discussion of the qualities of \ili{French} vis-à-vis \ili{Italian} as well \citep[71]{Swiggers2009}.\\ % % % & rough, uncultivated, unpleasant & \citet[61]{Walper1589}; \citet[415]{Walper1590} labeled it together with the allegedly cognate \ili{Doric} dialect \textit{incultior}, \textit{ingratus auribus}, \textit{minus politus}, and \textit{insuauis}. See e.g. also \citet[515 (\textit{asper})]{Fabricius1711}; \citet[6 (\textit{rudis})]{Georgi1729}; \citet[e.g. 28 (\textit{inamoenus})]{Munthe1748}.\\ % % % & broad, rather thick & By \isi{analogy} with \ili{Doric}, to which \ili{Aeolic} was believed to be closely cognate, \citet[582]{Nibbe1725} called \ili{Aeolic} \textit{breit}. See e.g. also \citet[\textsc{a.2}\textsc{\textsuperscript{v}}]{Hauptmann1776}, where the verb \textit{platustoméo\={} } (πλατυστoμέω), ‘to speak with a broad mouth’, is applied to \ili{Aeolic}. Von der \citet[17]{Von1705} characterized \ili{Aeolic} \isi{pronunciation} as \textit{obtusior}.\\ % % % \multicolumn{1}{c}{Attic} & (most) elegant, noble, polished, cultivated, tender, fine, pure, neat, honey-sweet, etc. & \citet[a.i\textsc{\textsuperscript{v}}]{Melanchthon1518} called \ili{Attic} “elegantissima”. See e.g. also \citet[209]{Vergara1537}; \citet[5\textsc{\textsuperscript{r}}]{Baile1588}; \citet[334]{Alsted1630}. \citet[226]{Ruland1556} attributed \textit{concinnitas} to \ili{Attic} and characterized it as beautiful and charming. See e.g. also \citet[\textsc{e}.iii\textsc{\textsuperscript{v}}]{Oreadini1525}; \citet[76, 112, 424]{Saumaise1643a}, who linked this label to a round-mouthed \isi{pronunciation}. \citet[96]{Hoius1620} called \ili{Attic} \textit{mellitus}.\\ % % % & copious & \citet[a.3\textsc{\textsuperscript{v}}]{Canini1555} dubbed \ili{Attic} \textit{copiosus}.\\ % % % & manly, weighty & \citet[6]{Georgi1729} applied the adjectives \textit{uirilis} and \textit{grauis} to \ili{Attic}. See e.g. also \citet[515]{Fabricius1711}.\\ % % % \multicolumn{1}{c}{Doric} & boorish, rustic & \citet[317]{Bentley1699} e.g. labeled \ili{Doric} \textit{rustic}. This property led the translator of \citet[117]{Rapin1659} to call \ili{Doric} “sometimes scarce true grammar” \citep[31]{Rapin1684}. See also the main text below.\\ % % % & pleasant, adequate for smoother poets & \citet[a.4\textsc{\textsuperscript{r}}]{Canini1555} dubbed it “suauissima” and “poetis mollioribus accommodatissima”. See e.g. also \citet[139\textsc{\textsuperscript{r}}]{Vuidius1569}.\\ % % % & rough, uncultivated, unpleasant & See above on \ili{Aeolic} and \citet[46\textsc{\textsuperscript{r}}]{Gessner1555}, labelling \ili{Doric} \textit{crassissimus}. \citet[54]{Vossius1673} characterized \ili{Laconian}, a variety of \ili{Doric}, as rough, threatening, and “doglike”. This last property was linked to the frequency of the letter rho, the dog’s letter, at the end of many \ili{Laconian} words. Cf. \citet[24]{Munthe1748}.\\ % % % & short in speech & Attributed to \ili{Laconian} \ili{Doric} by Plato (\textit{Leges} 641e), it was extrapolated to \ili{Doric} as a whole by \citet[393]{Saumaise1643a}. Cf. \citet[138\textsc{\textsuperscript{v}}]{Beroaldo1493}.\\ % % % & magnificent, warlike, manly & \citet[55]{Vossius1673} described the \ili{Doric} dialect as “magnifica et bellica, sed absque iracundia”. 
He also associated it with manliness.\\ % % % & distinguished, flourishing & \citet[161]{Gesner1774} called \ili{Doric} \textit{florentissimus}.\\ % % % \multicolumn{1}{c}{Ionic} & long in speech, slow, redundant & Caelius \citet[677]{Caelius1542} opposed \ili{Ionic} lengthiness in speech to \ili{Laconian} brevity (he called the Ionians “\textit{makrológoi} [μακρoλόγoι]”). \citet[75]{Saumaise1643a} spoke of \ili{Ionic} slowness and redundancy.\\ % % % & elegant, polished, neat, honey-sweet & \citet[\textsc{a.2}\textsc{\textsuperscript{r}}]{Hauptmann1776} ascribed \textit{mundities} to \ili{Ionic}. \citet[290]{Verwey1684} spoke of the \textit{mel Ionicum}.\\ % % % & faint, delicate, womanish & \citet[75]{Saumaise1643a} linked the \textit{genius} of \ili{Ionic} to the mores and the “long and fluid” clothing style of the Ionians, which he characterized as both faint and womanish. He pointed to the migration to Asia as the cause of their effeminacy. See e.g. also \citet[139\textsc{\textsuperscript{r}}]{Vuidius1569}.\\ % % % % % % \lspbottomrule % % % \end{tabularx} % % % \end{table} A second major difference is that the sources and motivations of early modern scholars to propose dialect evaluations are more transparent than those of their ancient and Byzantine predecessors. Early modern attitudes toward the \ili{Ionic} dialect provide a good example of the various ways in which Hellenists supplemented the ancient and Byzantine sources. To start with, philologists introduced new properties by quoting ancient testimonies that did not so much concern the dialects as the tribes speaking them. These ancient text passages encouraged early modern scholars to construe a specific mental picture of these tribes, their customs, and their speech. Claude de Saumaise, for example, characterized \ili{Ionic} as \textit{mollis}, ‘effeminate, delicate’, by referring to a verse of the Roman poet \iai{Martial} (ca. \textsc{ad} 40–103): “nor let the \textit{delicate Ionians} be praised for their temple of Trivia”.\footnote{\citet[75]{Saumaise1643a}, citing \iai{Martial}, \textit{Spectaculorum liber} 1.3: “nec Triuiae templo \textit{molles} laudentur \textit{Iones}” (my emphasis).} Saumaise moreover linked \ili{Ionic} effeminacy to their clothing style and, more fundamentally, to their migration from Greece to Asia, thus presenting a classic case of an \is{Orientalism}Orientalist attitude (cf. \citealt{Said2003}). The Danish philologist and professor Caspar Frederik Munthe (1704–1763) and his colleague Ludvig Heiberg (1723–1760), in turn, relied on the Byzantine scholia on Thucydides for their opposition of \ili{Ionic} delicacy to \ili{Doric} manliness.\footnote{See \citet[15]{Munthe1748}, relying on \ia{Scholia in Thucydidem@\textit{Scholia in Thucydidem}}\textit{Scholia in Thucydidem} (\textit{Scholia uetera et recentiora}), commentary at 1.124.1. On the Doric–\ili{Ionic} opposition in antiquity, see \citet{Cassio1984}.} Here, the alleged properties of the people speaking a dialect were transferred to the dialect itself, a procedure very common throughout history. Indeed, John \citet[66--68]{Edwards2009} has pointed out that there exists a clear causative link between \isi{stereotypes} about certain social groups and the esthetic qualities attributed to the varieties they speak (see also \citealt{Silverstein2003}; \citealt{Preston2018}: 200). Some early modern philologists even argued that certain tribal characteristics manifested themselves in specific dialectal features. 
Isaac Vossius linked \ili{Ionic} delicacy and effeminacy to concrete features of the dialect: the frequency of the letter eta ⟨η⟩ in it, its lack of contractions, its many diminutives, and other linguistic “flatteries”, such as the alleged usage of feminine articles with male objects and animals, even with the most “monstrous” ones.\footnote{\citet[55]{Vossius1673}: “Nihil hac mollius et effeminatius, siue ubique occurrentem litteram ἦτα, siue frequentes uocalium hiatus, siue etiam crebra diminutiua aliaque spectes blandimenta. Adeo huic populo terrori fuit, quidquid esset uirile, ut quibusque fere rebus masculis et beluis etiam quantumuis immanibus, sequioris sexus articulos praeposuerint”.} Before the early modern period, the link between linguistic features and evaluative properties was practically non-existent with one sole exception: the idea of \ili{Doric} \isi{broadness} was sometimes connected to the frequency of the letter alpha ⟨α⟩ in this dialect.\footnote{See \ia{Scholia in Theocritum@\textit{Scholia in Theocritum}}\textit{Scholia in Theocritum} (\textit{scholia uetera}) at \textit{Idyllia} 15.87–88.} Finally, one scholar, Henri Estienne, created new authoritative documentation himself in order to establish the smooth character of \ili{Ionic}. In his commentary on \ili{Attic} of 1573, \citet[ii\textsc{\textsuperscript{r}}]{Estienne1573} quoted – somewhat pretentiously, one might say – a Greek epigram of his own invention to prove the historical primacy of \ili{Ionic} as well as its sweet and delicate character. He had prefixed this poem to his edition of the Ionian historian \iai{Herodotus}, published three years earlier: \begin{quote}\raggedright The \ili{Ionic} dialect is indeed sweet, far above all,\linebreak and utters delicate noises, but certainly,\linebreak as far as \ili{Ionic} surpasses all, so far\linebreak does \iai{Herodotus} surpass those speaking \ili{Ionic}.\footnote{\citet[8]{Estienne1570}: “Ἔστι μὲν ἔστιν Ἰὰς λιγυρὴ διάλεκτος ἁπασῶν / ἔξοχα, καὶ μαλακοὺς ἐξαφιεῖσα θρόους· / ἀλλὰ γὰρ ὅσσον Ἰὰς πασῶν προφερεστάτη ἐστί, / τόσσον ἰαζόντων Ἡρόδοτος προφέρει”.} \end{quote} A third difference is that early modern Hellenists tried to organize their evaluations in a much more systematic manner. In Greek scholarship, there had been only one isolated attempt at doing so. A Byzantine scholiast, commenting on the ancient grammar attributed to \iai{Dionysius Thrax}, was exceptional in trying to systematize the characteristic properties of the Greek dialects, linking them to the customs of the individual Greek tribes: \begin{quote} The Greeks indeed differ from the barbarians with respect to customs,\linebreak speech as well as ways of life. One has to know, however, that, among the Greeks, there are the Dorians, the Aeolians, the Ionians, and the Attics. And we are explaining qualities occurring among these, for even these [tribes] do differ from one another in their ways as well as their customs. In fact, the \ili{Doric} tribe seems to be manlier in its ways of life, and magnificent in the sounds of its names and in the tone of its voice, whereas the \ili{Ionic} is relaxed in all these aspects, since the Ionians are frivolous. 
The \ili{Attic} tribe seems to differ as regards way of life and artificiality of speech, whereas the \ili{Aeolic} is distinctive through the austerity of its way of life and the old fashion of its speech.\footnote{% \textit{Commentaria in Dionysii Thracis Artem Grammaticam}, \ia{Scholia Vaticana@\textit{Scholia Vaticana}}\textit{Scholia Vaticana (partim excerpta ex Georgio Choerobosco, Georgio quodam, Porphyrio, Melampode, Stephano, Diomede)} (ed. \citealt{Hilgard1901}: 117): “καὶ γὰρ ἤθεσι καὶ διαλέκτῳ καὶ ἀγωγαῖς διαφέρουσιν <οἱ> Ἕλληνες τῶν βαρβάρων. Γινώσκειν δὲ χρὴ ὅτι τῶν Ἑλλήνων οἱ μέν εἰσι Δωριεῖς, οἱ δὲ Αἰολεῖς, οἱ δὲ Ἴωνες, οἱ δὲ Ἀττικοί. συμβεβηκυίας δὲ διὰ τούτων δηλοῦμεν ποιότητας, καὶ γὰρ καὶ οὗτοι τρόποις καὶ ἤθεσι διαφέρουσιν ἀλλήλων· δοκεῖ γὰρ τὸ Δώριον ἀνδρωδέστερόν τε εἶναι τοῖς βίοις, καὶ μεγαλοπρεπὲς τοῖς φθόγγοις τῶν ὀνομάτων καὶ τῷ τῆς φωνῆς τόνῳ, τὸ δὲ Ἰωνικὸν ἐν πᾶσι τούτοις ἀνειμένον – χαῦνοι γὰρ οἱ Ἴωνες – τὸ δὲ Ἀττικὸν εἴς τε δίαιταν καὶ φωνῆς ἐπιτέχνησιν ἀεὶ διαφέρειν, τὸ δὲ Αἰολικὸν τῷ τ’ αὐστηρῷ τῆς διαίτης καὶ τῷ τῆς φωνῆς ἀρχαιοτρόπῳ”.} \end{quote} Such general accounts are as a rule absent from ancient and Byzantine treatises on the dialects. During the early modern period, however, dialect evaluations were frequently included in handbooks for the Greek dialects as a piece of standard information, especially from the seventeenth century onward. This is in keeping with a more general development in early modern discourse on \isi{stereotypes} of ethnic groups, as, Joep \citet[17]{Leerssen2007} argues, \begin{quote} the cultural criticism of early-modern Europe […] began, in the tradition of Julius Caesar Scaliger (1484–1558), to sort European cultural and societal patterns into national categories, thereby formalizing an older, informal tradition of attributing essential characteristics to certain national or ethnic groups. \end{quote} In early modern formalized discussions of the Greek dialects and their properties, many of the same qualities and vices recurred, thus encouraging the canonization of a number of properties. Numerous instances of this tendency could be cited, but let me limit myself here to listing three representative examples from different centuries, which all have a clear link with philology: \begin{quote} \ili{Attic} is the most elegant and copious of all and the cherisher of eloquence, which most of the noblest writers employed. Related to this is \ili{Ionic}, which the oldest authors used, Democritus, Hippocrates, \iai{Herodotus}; Homer also for a large part and \iai{Hesiod}. \ili{Doric} is the most pleasant and the most adequate for smoother poets, which the choruses of tragedians have also received so as to moderate the bitterness of the subject. This dialect was used by the Pythagoreans, Pindar, Epicharmus, Sophron, and \iai{Theocritus}. Similar to this is \ili{Aeolic}, adequate for lyric poems, which Alcaeus, Sappho, and many others expressed in their writings, of whom fortune has left nothing at all, except for those passages that are cited by others.\footnote{\citet[a.3\textsc{\textsuperscript{v}}–a.4\textsc{\textsuperscript{r}}]{Canini1555}: “Attica omnium elegantissima et copiosissima eloquentiaeque altrix, quam plurimi nobilissimi scriptores celebrarunt. Huic affinis Ionica, quam uetustissimi auctores usurparunt, Democritus, Hippocrates, \iai{Herodotus}; Homerus etiam magna ex parte atque Hesiodus. 
Dorica suauissima est et poetis mollioribus accommodatissima, quam etiam tragicorum chori ad temperandam argumenti acerbitatem receperunt. Ea usi sunt Pythagorici, Pindarus, Epicharmus, Sophron et \iai{Theocritus}. Huic similis Aeolica, melicis apta, quam scriptis expressere Alcaeus, Sappho aliique permulti, e quibus, praeter pauca quae ab aliis citantur, nihil omnino fortuna reliquum fecit”. For another sixteenth-century example, see \citet[138\textsc{\textsuperscript{v}}–139\textsc{\textsuperscript{r}}]{Vuidius1569}. Cf. also already \citet[12\textsc{\textsuperscript{v}}]{Lopad1536} and \citet[\textsc{a.6}\textsc{\textsuperscript{v}}\textsc{–a.7}\textsc{\textsuperscript{r}}]{Gessner1543}.} \end{quote} \begin{quote} The first is \ili{Attic}, which indeed must be preferred as the noblest above all others. It was mainly in this dialect that Thucydides, Demosthenes, Isocrates, and the majority of the historiographers wrote. The second is \ili{Ionic}, which has a wonderful grace and charm, which mainly \iai{Herodotus}, Hippocrates, and the poets, even \ili{Doric} ones, used. The third is \ili{Doric}, a little rougher and harder because of the \isi{pronunciation}, as the Dorians are said “to pronounce \is{broadness}broadly” (that is, to speak with a wide and open mouth). This dialect was employed by, among others, \iai{Theocritus} and Pindar. The fourth, finally, is \ili{Aeolic}, which no authors have followed avowedly, but the poets have interspersed it hither and thither in their writings, especially, however, Alcaeus, Sappho, what is more, \iai{Theocritus} himself and Pindar (as it has many things in common with \ili{Doric}), also Homer and therefore others.\footnote{\citet[2--3]{Merigon1621}: “Prima est Attica, quae quidem ut nobilior omnibus aliis praeponi debet; hac autem scripsere praecipue Thucydides, Demosthenes, Isocrates et maior pars historiographorum. Secunda, Ionica, quae mirificum habet leporem et uenustatem, qua usi sunt praecipue \iai{Herodotus}, Hippocrates et poetae, etiam Dorici. Tertia Dorica, quae paulo asperior et durior propter pronuntiationem, quippe πλατυάζειν (hoc est lato et diducto ore loqui) dicuntur Dores; hanc autem dialectum celebrauit inter alios \iai{Theocritus} et Pindarus. Quarta denique est Aeolica, quam nulli auctores ex professo sectati sunt, sed eam huc illuc in suis scriptis insperserunt poetae, praecipue uero Alcaeus, Sappho, immo \iai{Theocritus} ipse et Pindarus (ut pote cum Dorica multa communia habentem) tum Homerus aliique ideo”. See \citet{Hoius1620} and \citet{Rhenius1626} for other seventeenth-century examples.} \end{quote} \begin{quote} In this way, it happened that neither the \ili{Ionic} nor the \ili{Doric} nor any other dialect was similar to the \ili{Attic} dialect, but that \ili{Attic} surpassed all these dialects, as it is not too delicate, like the \ili{Ionic}, nor too hard, like the \ili{Doric}, nor too rude, like the \ili{Aeolic}, but moderate, manly, weighty, and most shining of all.\footnote{\citet[6]{Georgi1729}: “[…] quo contigit, ut Atticae dialecto neque Ionica neque Dorica neque alia quaedam, similis fuerit, sed eas omnes superaret, cum neque nimis mollis sit, ut Ionica, neque nimis dura, ut Dorica, neque nimis rudis, ut Aeolica, sed temperata, uirilis, grauis atque omnium nitidissima”. Cf. also \citet[197--199]{Ries1786}.} \end{quote} The above three accounts also exhibit differences. In the first case, the emphasis is on the link between a dialect and its \isi{literary usage} by different authors and in distinct genres. 
The primacy of \ili{Attic} is also suggested, but this stands out much more clearly in the second account, which seems to construe a kind of evaluative ranking of the dialects: \ili{Attic} first, \ili{Ionic} second, \ili{Doric} third, and \ili{Aeolic} fourth. In the third passage, the superiority of \ili{Attic} is likewise maintained, but it seems that the other three dialects were believed to be on more or less the same level. Evaluative attitudes toward the ancient Greek dialects were, for the greater part, the product of post factum projections of virtues and vices on these literary varieties. Indeed, in the early modern period and even in antiquity, attitudes were usually based on an esthetic sensation during the act of reading. The ancient Roman rhetorician \iai{Quintilian} experienced the fluency and pleasantness of \ili{Ionic} in this fashion, as he added to his judgment the following reservation: “at least as I perceive it”.\footnote{\iai{Quintilian}, \textit{Institutio oratoria} 9.14.18: “ut ego quidem sentio”.} To demonstrate the different impressions distinct Greek dialects conveyed, the French Hellenist Henri Estienne even transposed a \ili{Doric} verse of the Hellenistic poet \iai{Callimachus} (4th/3rd cent. \textsc{bc}) into \ili{Ionic} as follows: \begin{quote} Original \ili{Doric}: \textit{Tòn dè kholōsaménā per hómōs proséphēsen Athā́nā} [Tὸν δὲ χoλωσαμένα περ ὅμως πρoσέφασεν Ἀθάνα]. \end{quote} \begin{quote} Ionicized version: \textit{Tòn dè kholōsaménē per hómōs proséphēsen Athḗnē} [Tὸν δὲ χoλωσαμένη περ ὅμως πρoσέφησεν Ἀθήνη].\footnote{\citet[15--16]{Estienne1581}, with reference to \iai{Callimachus}, \textit{In lauacrum Palladis} 5.79.} \end{quote} \begin{quote} English translation according to the Loeb series: “And Athena was angered, yet said to him”. \end{quote} The \ili{Doric} verse allegedly became, when transposed to \ili{Ionic}, feeble and inadequate and lost its seriousness and majesty, and this solely through the replacement of the letter alpha by eta. An evaluative label could also result from a conscious critical review of the style in which a literary work was composed. The \ili{Doric} texts known to the Neoplatonist philosopher \iai{Porphyry} (ca. 234–305/310) seemed to be written in an obscure style, which is why in his biography of Pythagoras he labeled the dialect itself obscure (\textit{Vita Pythagorae} 53). In other words, it was not direct, oral contact with a dialect that triggered evaluative attitudes, but indirect confrontation through reading, either as an immediate sensation or as the result of a conscious assessment of the style of a text. This distinguishes premodern attitudes toward the ancient Greek dialects from those toward vernacular languages and dialects, which were usually at least partly informed by direct exposure to the variety in its spoken form. \is{language and customs|(}Apart from encounters with literary texts, it was the link that scholars frequently made between the customs of a tribe and its language – \textit{lingua et mores} in \ili{Latin} – which led them to conjure up evaluative labels for Greek dialects.\footnote{\citet{VanHal2013} offers a preliminary historical survey of the \textit{lingua et mores} link, while pointing out that it deserves further study.} Indeed, many attitudes were motivated by \isi{stereotypes} about the four canonical Greek tribes, as I have shown throughout this section.\footnote{See e.g. 
the \ia{Scholia Vaticana@\textit{Scholia Vaticana}}\textit{Scholia Vaticana} quotation above as well as the ideas of Saumaise and Vossius.} Early modern scholars took ancient and Byzantine attitudes as their starting point and complemented them in various ways. This materialized not only in the form of new evaluative statements and an increased emphasis on certain properties, especially \ili{Attic} elegance and \ili{Doric} rusticity, but – most notably – it also resulted in a tendency toward canonizing dialect attitudes. Even though there remained some variation in the early modern perception of the Greek dialects, it is nonetheless safe to state that the evaluation of the four traditional dialects became a canonized format. Indeed, it constituted an almost inherent part of the study of the \ili{Ancient Greek} language and its literary dialects and was for this reason integrated into many Greek language manuals. Since scholars usually felt the \ili{Koine} to be of a particular nature, they did not assign specific properties to it, either in antiquity and the Byzantine era or in the early modern period.\is{language and customs|)} \section{Evaluative discourse between Greek and the vernacular}\label{sec:7.3} The \isi{evaluative discourse} on the Greek dialects must have been widely known in learned circles, as it apparently influenced attitudes toward \is{vernacular|(}vernacular speech forms to some extent. The terminology used to label vernacular tongues and their dialects sometimes resembled that found in evaluations of the Greek dialects. This emerges most clearly from cases in which scholars assigned labels to both Greek and vernacular speech forms in their works. Let me look at two noteworthy examples from the seventeenth and the eighteenth century, respectively: Isaac Vossius and Friedrich Gedike. In his widely read treatise on ancient poetry and its original rhythm, published in Oxford in 1673, the Dutch philologist Isaac \citet[54--55]{Vossius1673} opposed the effeminate \ili{Ionic} dialect to virile \ili{Doric}, for which he may have relied on a Byzantine commentary on Thucydides.\footnote{Cf. \ia{Scholia in Thucydidem@\textit{Scholia in Thucydidem}}\textit{Scholia in Thucydidem} (\textit{Scholia uetera et recentiora}), commentary at 1.124.1.} Directly after that, he provided a brief outline of the qualities of a number of vernacular tongues of his time. Especially relevant to my purposes is his characterization of \ili{English}, with which he was very well acquainted, having moved to England in 1670. Vossius described the language as “delicate” (\textit{mollis}) and “effeminate” (\textit{muliebris}). To exemplify this linguistically, he referred to the \ili{English} preference for the letter \textit{êta} (“ἦτα”) and its avoidance of the letter ⟨a⟩. Vossius’s views on the \ili{Ionic} dialect, cited earlier in this chapter, irrefutably informed his assessment of \ili{English} (see \sectref{sec:7.2} above). \ili{Ionic} was also known for having the letter eta where the other dialects had a long alpha, and Vossius spoke of the Greek letter eta rather than the \ili{English} letter ⟨e⟩ in characterizing this supposed property of \ili{English}. He did add, however, that \ili{English} “delicacy” (\textit{mollities}) was somewhat tempered by the harshness of its syllables and the frequency of consonants in this language \citep[56]{Vossius1673}. 
After that, Vossius praised \ili{French} for its strength and its many war-related words, which is reminiscent of his description of the \ili{Doric} dialect. Friedrich \citet[\textsc{xx}]{Gedike1779}, a German scholar from the late \isi{Enlightenment}, drew a detailed comparison between the Greek and \ili{German} dialect contexts in his \textit{Thoughts on purism and language enrichment}. Gedike modeled his threefold classification of Greek on his perception of vernacular \ili{German} diversity, thus proceeding in a direction opposite to Vossius, who had moved from Greek to the \is{vernacular|)}vernacular. First, Gedike compared \ili{Ionic} with \ili{Low German} (\textit{Niederdeutsch} or \textit{Plattdeutsch}), both of which he described as being “smooth” (\textit{sanft}) and “delicate” (\textit{weich}). He associated this characteristic with the absence of aspirations and rough diphthongs, and emphasized the obviousness of the parallel he was pointing out. He proceeded by treating the similarity of \ili{Doric} and \ili{Upper German} (\textit{Oberdeutsch}), which was situated in the \isi{broadness} with which they were pronounced. They moreover contained, Gedike argued, many hissing sounds, aspirations, and diphthongs. This gave them a “solemn” (\textit{feierlich}) and “splendid” (\textit{prunkvoll}) air. Gedike thus assessed \ili{Doric} in distinctly positive terms. Finally, the “middle dialects” were discussed: \ili{Attic} and \ili{High German} (\textit{Hochdeutsch}). They were, however, not exactly in the middle, because both inclined toward the respective “solemn” varieties: \ili{Doric} and \ili{Upper German}. Gedike refrained from elaborating more extensively on the properties of \ili{Attic} and \ili{High German} in his 1779 work. However, three years later, in an article on the Greek dialects, he stated that \ili{Attic} was less rough than \ili{Doric} and less fluid, yet more consistent than \ili{Ionic}. Something similar held true for \ili{High German}, he suggested \citep[25]{Gedike1782}. \citet[\textsc{xx–xxi}]{Gedike1779} rounded off his comparison by stating that, just like the ancient Greek dialects, the three \ili{German} dialects also used to be “book languages” (\textit{Büchersprachen}), until the \ili{High German} speech of the Lutheran \is{Protestant}Reformation expelled the two others from writing. Gedike’s comparison of Greek and \ili{German} dialects was applauded by several of his contemporaries, including the famed grammarian of \ili{German} Johann Christoph Adelung (1732–1806; see \citealt{Adelung1781}: 56 and also \citealt{Moritz1781}: 20). \section{Beyond the early modern era}\label{sec:7.4} The \isi{evaluative discourse} on the Greek dialects did not end with the arrival of modernity. On the contrary, it persisted until very late. In the nineteenth century, the distinguished German philologist Heymann Steinthal (1823–1899) noted the following on the Greek dialects in general and \ili{Attic} in particular: \begin{quote} Each dialect counts as a phase in time and an interior moment of the spirit. In the \ili{Attic} dialect, the Greek spirit manifested itself last, but also most perfectly, and, to be sure, in such an encompassing manner that one may rightly say that the other dialects have been neutralized in it. This is also why all Greek dialects have perished in and with it.\footnote{\citet[9]{Steinthal1891}: “Jeder Dialekt gilt als ein Abschnitt in der Zeit und ein inneres Moment des Geistes. 
Im attischen Dialekt offenbarte sich der griechische Geist am spätesten, aber auch am vollkommensten, und zwar in so umfassender Weise, dass man wol sagen darf, in ihm seien die andren Dialekte aufgehoben gewesen. Darum sind auch in und mit ihm alle griechischen Dialekte zu Grunde gegangen”.} \end{quote} Steinthal’s underlying assumptions were, however, different from those of early modern \isi{evaluative discourse}. He presumed the existence of a Greek \textit{Volksgeist}, which has to be viewed against the background of his interest in the \isi{psychology} of tribes and nations (\textit{Völkerpsychologie}), and he supposed that some Greek tribes represented that \textit{Geist} better than others. Still, it is telling that, as with his early modern predecessors, evaluating the Greek dialects came naturally to him. Today, the idea of \ili{Attic} elegance and primacy is still latent in the sense that it is taught as the principal variety of \ili{Ancient Greek} in most high school and university curricula. This is largely a modern innovation, as early modern grammars tended to describe “the Greek language”, usually a form of the \ili{Koine} with typically \ili{Attic} and \ili{Ionic} elements interspersed, as Federica \citet[123]{Ciccolella2008} has rightly suggested. Be that as it may, literary \ili{Attic} was generally valued most highly even by early modern Hellenists (cf. \citealt{Roelcke2014}: 251). In other words, the shift from the early modern to the modern period coincided with a shift in the prototypical form of Greek: from a hybrid form of \ili{Koine} Greek to \ili{Attic} Greek.\footnote{Differences in the prototypicalization of Greek throughout history require further study \citep[105--106]{VanRooyFcb}.} Early modern scholars approached and evaluated the Greek dialects principally against the backdrop of reading and understanding Greek literature, even though \isi{stereotypes} about the traditional four Greek tribes likewise constituted an important trigger for dialect attitudes. The authors sometimes also assumed a connection between the dialects and certain other aspects of ancient Greece, albeit in a much looser way than with Greek literature and the Greek tribes. What are these other aspects? \section{Geography, politics, and natural disposition}\label{sec:7.5} First of all, in keeping with the idea, widespread in early modern times, that \isi{geography} was responsible for \isi{dialectal diversification}, the terrain of Greece was frequently appealed to in order to account for the existence of Greek dialects.\footnote{On the link between \isi{geography} and dialectal diversity, see \citet[]{VanRooyFcd}.} The \isi{Protestant} theologian and renowned Hellenist Philipp \citeauthor{Melanchthon1518} (\citeyear[a.1\textsc{\textsuperscript{v}}]{Melanchthon1518}; \citeyear[\textsc{a.}i\textsc{\textsuperscript{v}}]{Melanchthon1520}) described in his grammar of the language Greece as “spacious” (\textit{ampla}) and “wide” (\textit{lata}), while presenting dialectal diversity as a self-evident consequence of this aspect of Greek \isi{geography} (cf. also \citealt{Ruland1556}: 1). The dialects were linked to the many islands of ancient Greece in particular, most notably by the Anglo-Welsh writer James Howell (ca. 1594–1666). Inspired by the prominent philologist Josephus Justus Scaliger, \citet[89]{Howell1650b} emphasized that “the cause why from the beginning ther wer so many differing dialects in the \textit{Greek} tongue was because it was slic’d into so many islands” (cf. 
\citealt{Howell1642}: 138–139; \citealt{Scaliger1610}: 121). Howell’s treatment of Greek diversity was actually triggered by a comment on \ili{Italian} dialects, which he subsequently compared to their Greek counterparts. He claimed that, in the case of \ili{Italian}, dialectal variation was caused by “multiplicity” or “diversity of governments” rather than \isi{geography}. This brings me to a second major link made by early modern scholars, that between the dialects and the political diversity of ancient Greece, which, in turn, was often viewed as a consequence of the rugged \isi{geography} of the area. The humanist \ia{Valla, Lorenzo}Lorenzo Valla’s famous praise of the \ili{Latin} language cannot be left unmentioned in this context: \begin{quote} Just as the Roman law is one law for many peoples, so is the \ili{Latin} language one for many. The language of Greece, a single country, is shamefully not single, but as various as there are factions in the state.\footnote{Valla in \citet[122]{Regoliosi1993}: “multarum gentium, uelut una lex, una est lingua Romana: unius Graeciae, quod pudendum est, non una sed multae sunt, tamquam in republica factiones”. The translation is adopted from \citet[10]{Trapp1990}. On this passage, see e.g. Tavoni in \citet[90 n.55]{Benvoglienti1975} and \citet[212--213]{Trovato1984}.} \end{quote} The polyhistor Daniel Georg Morhof (1639–1691) made a similar point, emphasizing the inability of Athens to impose its dialect on neighboring city-states \citep[146]{Morhof1685}. The German classical scholar Johann Matthias Gesner (1691–1761) similarly suggested that Greek dialectal diversity was caused by the fact that “ancient Greece did not have a capital and dominant city, but several cities had the same and equal rights”.\footnote{\citet[160--161]{Gesner1774}: “Origo autem dialectorum uariarum haec est; quia Graecia antiqua non habuit caput et dominam urbem, sed plures urbes eadem habebant et paria iura”. Cf. \citet[395--396]{Rollin1731}; \citet[136--138]{Priestley1762}; \citet[204]{Ries1786}.} The poet Pierre de Ronsard (1524–1585), for his part, contrasted Greek diversity to his native \ili{French} context, connecting at the same time Greek linguistic abundance to the fragmented political landscape of ancient Greece (\citeyear{Ronsard1565}: 5\textsc{\textsuperscript{r}}). Ronsard interestingly added that if there still were political diversity in France, each ruler would desire, for reasons of honor, that their subjects wrote in the language of their native country.\footnote{Cf. \citet[lxviii]{Court1778} for a similar observation. See also Chapter 8, \sectref{sec:8.1.3}.} An odd characterization of ancient Greece was proposed by the Bohemian \isi{Protestant} scholar Christoph(orus) Crinesius (1584–1629). Operating within a biblical framework and deriving Greek from \ili{Hebrew}, \citet[77]{Crinesius1629} held that the Greek dialects were the varieties spoken in the different provinces of the kingdom of Javan, a grandson of Noah and traditionally associated with the Ionians. In other words, he incorrectly claimed that the linguistic variation of ancient Greece coincided with the regional-administrative division of a politically unitary empire. Apart from political diversity, the dialects were often also connected to the many colonies established by the Greeks (see e.g. \citealt{Simonis1752}: 207). 
It is worthwhile recalling here that certain early eighteenth-century scholars believed the \isi{geopolitical diversity} of early modern Greece to correlate with \isi{vernacular} Greek dialectal variation as well (see Chapter 2, \sectref{sec:2.10}). In other words, ancient and \isi{vernacular} dialects of Greek were thought to have emerged under similar circumstances. Certain scholars associated the dialects with the Greeks’ natural disposition and innate character. This connection was, however, much rarer. In this case, the Greek dialects were taken as a symptom of a negative characteristic of the Greek people as a whole: their inconstancy. This emerges most clearly from the words of Franciscus Junius the Elder, quoted at the outset of this chapter and labeling the Greeks as “verbose” and “mendacious” because of a certain “malicious rivalry” that led them to forge so many different dialects. This view was silently copied by the Dutch biblical scholar Johannes Leusden (1624–1699).\footnote{\citet[a.4\textsc{\textsuperscript{r}}\textsc{–}a.4\textsc{\textsuperscript{v}}, 167]{Leusden1656}. Schultens (in \citet[§\textsc{xlix.}δ]{Eskhult_albert_nodate}) quoted Leusden, without realizing that Leusden relied on Junius. For the rivalry among speakers of different dialects, cf. also \citet[5\textsc{\textsuperscript{r}}]{Baile1588}; \citet[\textsc{b.3}\textsc{\textsuperscript{r}}]{Schorling1678}.} It was moreover implicit in \ia{Valla, Lorenzo}Lorenzo Valla’s ridiculing of Greek multiplicity as opposed to Roman uniformity, quoted earlier in this section. \section{Reconstructing ancient Greece: Antiquarians on the dialects}\label{sec:7.6} As the previous section has shown, Renaissance Hellenists realized that the phenomenon of Greek linguistic diversity was not only relevant for the study of language and literature, but could also help a scholar shed light on other aspects of ancient Greece, especially the character of its tribes and its \isi{geopolitical constitution}. This realization motivated many authors to devote attention to the Greek dialects outside of philological contexts in the strict sense, especially in the not always clearly distinguished fields of \isi{historiography}, antiquarianism, and \isi{geography}. How did scholars active in these branches fit dialectal diversity into their descriptions and reconstructions of ancient Greece and its regions and colonies? Let me provide a brief and necessarily eclectic answer to this question, which deserves further study. In 1589, the obscure Taranto philologist and antiquarian Giovanni Giovane (Latinized: Johannes Juvenis) published his \textit{Eight books on the antiquity and changing fortune of the people of Taranto}. One of the first sections of this historiographical-antiquarian monograph comprised a short lexicon of the ancient Greek \il{Tarentine|(}dialect spoken in the city of Taranto or \textit{Táras} (Tάρας), its original Greek name, situated in modern-day southern Italy (\citealt{Giovane1589}: 9–18). Giovane was, however, aware that not all words he included were specific to Taranto. Yet he still presented Tarentine as a distinct Greek dialect and recognized it as a variety of \ili{Doric}. \citet[8--9]{Giovane1589} did so on the authority of Aristotle as well as by pointing out the \ili{Doric} character of the extant fragments attributed to the Pythagorean philosopher Archytas of Tarentum (5th/4th cent. \textsc{bc}). 
What is more, Giovane believed it to be common knowledge that grammarians have reckoned \il{Tarentine|)}Tarentine Greek among the countless dialects of the language (cf. Chapter 2, \sectref{sec:2.8}). The Dutch antiquarian Johannes Meursius (1579–1639) inserted information on specific Greek dialects in a fashion similar to Giovane in two posthumously published works: firstly, a book on ancient Laconia, in which the \ili{Doric} character and particularities of its \il{Laconian}dialect were outlined (\citealt{Meursius1661}: 216–233), and secondly, a treatise on ancient Crete and other Greek islands, in which the \ili{Doric} \il{Ancient Greek!Cretan}Cretan dialect was described and Cretan words were listed (\citealt{Meursius1675}: 254–258). Apart from such antiquarian writings, the Greek dialects attracted attention in more general works on the history of ancient Greece and neighboring areas, especially in the eighteenth century, for instance in Charles Rollin’s (1661–1741) popular multivolume account of ancient history and in Nicolas Fréret’s (1688–1749) dissertation on the first inhabitants of ancient Greece. \citet[395--396]{Rollin1731} linked the dialects to the enormous \isi{geopolitical diversity} of ancient Greece, whereas \citet[esp. 107--129]{Freret1809} framed Greek within a larger family of dialects anciently spoken over an area stretching from \ili{Celtic} lands to those of the Syrians and Medes; against this background, he described the development of Greek and its dialects out of a now lost \isi{protolanguage}. Clearly written out of \is{historiography}historiographical interest was the \textit{Brief dissertation on the settlements and colonies of the dialects of the Greek language} (\citeyear{Hoius1620}) of the Bruges humanist Andreas Hoius (1551–1635). A professor of Greek and history at the university of Douai, today in northern France, \citet[95]{Hoius1620} principally attempted to trace the history of the Greek tribes and their migrations, which he held responsible for the variation in the Greek tongue, as well as to map out the \isi{geography} of Greece. The dialects themselves hovered in the background of this dissertation, and Hoius mentioned only some of their linguistic particularities explicitly. One of his main theses was that all the Greek dialects were originally spoken in Greece in the strict sense, from which he excluded Asia Minor, part of modern-day Turkey.\textsuperscript{} What is more, there were initially only two tribes in Greece, which Hoius asserted on the authority of \iai{Herodotus}: the migratory “Pelasgians”, equated with the Aeolians, from whom the Romans derived, and the stationary “Hellenists” (\citealt{Hoius1620}: 102, referring to \iai{Herodotus} 1.57–58). The history of the Greek tribes and the historical status of the dialects also served as the principal focus of a dissertation defended by the Hellenist Georg Friedrich Thryllitsch in 1709 at the university of Wittenberg.\footnote{Cf. 
the dissertation presented (likewise at Wittenberg) by Georg Caspar Kirchmaier and Johannes Crusius (= \citealt{KirchmaierCrusius1684}), even though here the history of the Greek alphabet (chapters \textsc{i}{}--\textsc{iii}), the correct \isi{pronunciation} of Greek (chapter \textsc{iv}), and the particularities of the Greek dialects (most of chapter \textsc{v}) were the main focus of attention.} Its title neatly summed up Thryllitsch’s goal, which consisted in presenting “some historical-technical suggestions about the Greek dialects collected on the basis of a consideration of the origins and migrations of the Greek tribes”.\footnote{“Suspiciones quasdam historico-technicas de dialectis Graecis ex consideratione originum migrationumque Graecarum nationum collectas”.} One of the main aims of this dissertation consisted in reconciling the biblical account with that of Greek historiographers, for which the traditional association of Javan with Ion – making \ili{Ionic} the oldest Greek dialect – was invoked (\citealt{Thryllitsch1709}: \textsc{a.4}\textsc{\textsuperscript{r}}\textsc{–b.3}\textsc{\textsuperscript{r}}; cf. Chapter 5, \sectref{sec:5.4}). The French etymologist Gilles Ménage (1613–1692) apparently planned to compose seven books on the ancient Greek dialects, as \citet[252]{Leibniz1991} informs us; this might have been the culmination of the early modern \is{historiography}historiographical interest in the dialects, since Ménage’s work was not only intended to include – like Hoius’s and Thryllitsch’s accounts – information on Greek \isi{geography}, tribes, and colonies, but also an extensive description of the linguistic particularities of the dialects. It was, unfortunately, never realized. \is{epigraphy|(} \is{historiography}Historiographers and especially antiquarians sometimes put their philological knowledge of the Greek dialects into practice when analyzing the language of Greek inscriptions discovered in the Mediterranean area. This application was, however, relatively rare, probably because the discipline of epigraphy was only nascent – the first collections containing Greek inscriptions were published in the late sixteenth century – and the Greek dialects remained predominantly tied up with the study of literary texts.\footnote{On Greek inscriptions in the early modern period, see \citet{Stenhouse_greekness_nodate}. \citet{Stenhouse2005} discusses the occasional usage of Greek inscriptions by sixteenth-century Italian historiographers. \citet{Liddel2014} briefly elaborates on the usage of the so-called Parian Marble in early modern chronology. A more comprehensive study of the early interest in Greek epigraphy remains a desideratum.} Yet the antiquarian editors of Greek inscriptions did their best to identify the dialect of the pieces they were publishing, with varying success. Thomas Lydiat relied on his knowledge of the Greek dialects and of Greek history to identify the language of the Parian Chronicle as a mixed \ili{Koine}–\ili{Ionic} variety; scholars now agree, however, that it is composed in \ili{Attic}, even if on the island of Paros, where the chronicle was found, a variety of central \ili{Ionic} was originally spoken.\footnote{See Lydiat in \citet[\textsc{ii}.116–117]{Prideaux1676}. On the dialect of Paros, see e.g. \citet[531]{Alonso2018}.} Lydiat’s observation featured in the \is{epigraphy|(}epigraphic collection edited by Humphrey Prideaux (1648–1724) in 1676 and centered around the so-called Arundel marbles. 
These were named after the eager art and antiquities collector Thomas Howard (1586–1646), Earl of Arundel, who had acquired the marble sculptures and inscriptions through his contacts in the Ottoman empire, thus laying the foundation of the first major collection of Greek inscriptions in England, now principally preserved at the Ashmolean Museum in Oxford (on the eventful history of the marbles, see \citealt{Vickers2006}). \citet[\textsc{i.}a.1\textsc{\textsuperscript{v}}, 123]{Prideaux1676} himself drew attention to an inscription in the collection regarding a treaty between two \il{Ancient Greek!Cretan}Cretan cities because of its unusual dialect. Even though he cautiously pointed out some \ili{Doric} features in his notes to this inscription, he did not feel confident enough to identify its language as \ili{Doric}. In summary, Lydiat and Prideaux activated their philological knowledge of the Greek dialects for antiquarian-epigraphic purposes, but not always successfully so. In the seventeenth century, inscriptional evidence was occasionally also invoked by scholars tackling typical philological questions such as the variety and history of the Greek language and the \isi{literary usage} of the dialects. Claude de \citet[430]{Saumaise1643a} saw the \ili{Doric} character of \il{Ancient Greek!Cretan}Cretan Greek confirmed by epigraphic data, whereas Richard \citet[311]{Bentley1699} combined his knowledge of the Greek dialects and inscriptional evidence to correctly identify the dialect of Sicily as \ili{Doric}. Bentley did so in his well-known dismissal of the authenticity of a collection of letters written in \ili{Attic} and attributed to Phalaris, the tyrant of Akragas on Sicily (modern-day Agrigento) in the sixth century \textsc{bc}. How could a \ili{Sicilian} tyrant ever have written letters in \ili{Attic}, especially considering that this dialect had not yet eclipsed all the others in Phalaris’s lifetime? If the letters were indeed authored by Phalaris, Bentley convincingly pointed out, they would have been written in a variety of \ili{Doric}. The eighteenth century witnessed an increasing interest in Greek inscriptions, especially among antiquarians who had enjoyed a decent philological education. Hellenists finally started to consider inscriptions to be a valuable source of dialectal data (cf. \citealt{Walch1772}: 87). This growing fascination with epigraphical documents\is{epigraphy|)} also resulted in lengthier discussions of the dialectal identity of specific inscriptions or collections of inscriptions. Let me take a look here at two notable Italian examples. The priest and early archeologist Alessio Simmaco Mazzocchi (1684–1771) was the first to edit in their entirety the so-called Heraclean Tablets, two bronze plates discovered separately in 1732 and 1735 near the ancient city of Heraclea Lucania in the southernmost part of modern-day Italy and currently preserved in the archeological museum of Naples. One side of the tablets contains a \ili{Latin} legal inscription from the first century \textsc{bc}, which Michael Maittaire had already published in 1735; the other has two Greek inscriptions from the late fourth or early third century \textsc{bc}.\footnote{See \citet{Uguzzoni1968} for a modern edition and discussion of the Heraclean Tablets. 
See also \citet{Weiss2016}, who argues that the dating of the tablets should be reconsidered.} Mazzocchi included an extensive commentary on the tablets in his edition, which appeared in 1754 at the Naples printing press of Benedetto Gessari, and which also touched on linguistic aspects of the inscriptions. Thanks to his excellent philological education, he was able to correctly identify the dialect of the Greek inscriptions as \ili{Doric}, which he believed to be the oldest variety of Greek. However, misguided by the obscure ancient and medieval accounts on the Greek dialects as well as by the odd-looking alphabet of the inscriptions, \citet[118--120]{Mazzocchi1754} further specified the language as “Old \ili{Doric}” as opposed to the “New \ili{Doric}” dialect. This New \ili{Doric} was allegedly introduced by \ili{Sicilian} poets such as Epicharmus and Sophron in the fifth century \textsc{bc}. Mazzocchi contended, however, that New \ili{Doric} did not spread to all regions at the same time, and some regions, like Magna Graecia in Italy, preserved Old \ili{Doric} for a longer period. This complex argument allowed Mazzocchi to situate the two Greek inscriptions in approximately the correct time frame – i.e. around 250 \textsc{bc} – as well as to account for its unusual orthography. He even proposed a relative chronology for the two inscriptions, based on orthographic and linguistic data \citep[135]{Mazzocchi1754}. In conclusion, Mazzocchi’s philological schooling enabled him to formulate a detailed and well-founded assessment of the language of the Heraclean Tablets, even if his results were still firmly grounded in traditional ideas on the Greek dialects and his views have been surpassed by modern scholarship (see \citealt{Weiss2016} for a state of the art). My second example is the \ili{Sicilian} antiquarian and numismatist Gabriele Lancillotto Castelli (1727–1794), who relied on established dialectal features to prove that not only \ili{Doric}, but also \ili{Attic} and \ili{Ionic} were spoken on his native island, contrary to what was commonly believed (\citeyear{Castelli1769}: \textsc{xv}). The language of inscriptions of various types, including coins, constituted one of Castelli’s principal pieces of evidence for his hypothesis (\citealt{Castelli1769}: \textsc{xv–xvi,} \textsc{xxi}). At the same time, however, he also made ample use of ancient authorities to substantiate his views. For example, inspired by the historian Thucydides, he claimed that a kind of intermediate \ili{Doric}–\ili{Chalcidian} \ili{Ionic} variety was in use among the inhabitants of the \ili{Sicilian} city of Himera (see \citealt{Castelli1769}: \textsc{xxxiii} for a neat overview of his theses). He still wavered, in other words, between evidence and authority as he was exploring the new inscriptional data available to him. Moving beyond \is{historiography}historiographical and antiquarian works focusing on ancient Greece, I cannot leave unmentioned here that the Greek dialects of antiquity were often the only ones discussed at some length in \is{geography}geographical descriptions of Europe or the world, especially before the eighteenth century. The English churchman Peter Heylyn (1599–1662) referred to them in his long description of Greece, included in his \textit{Microcosmus, or A little description of the great world} of \citeyear{Heylyn1621}: “The language they spake was the \textit{Greeke}, of which were five dialects, \textit{1 Atticke. 2 Doricke. 3 Aeolicke. 4 Beoticke. 
5} The \textit{common} dialect or phrase of speech” (\citealt[205]{Heylyn1621}; see e.g. also \citealt[15, 60, 63]{Speed1676}). He claimed to be relying on Nicolaus Clenardus’s grammar of Greek, but Heylyn’s classification into \ili{Attic}, \ili{Doric}, \ili{Aeolic}, \ili{Boeotian}, and \ili{Koine} does not feature in Clenardus’s work and has no parallels in the early modern period. In the revised edition of 1625, \citet[375]{Heylyn1625} replaced \ili{Boeotian} by \ili{Ionic}, most likely because he had realized his idiosyncrasy. In summary, the dialects occupied an important place in a number of early modern \is{historiography}historiographical and antiquarian works concentrating on parts of ancient Greece, and they were often discussed in close conjunction with the history, \isi{geography}, and tribes of Greece. In the eighteenth century, antiquarians increasingly involved \is{epigraphy}epigraphic dialect evidence in their attempts at providing encompassing descriptions of ancient Greece and its many different settlements, especially those in regions of modern-day Italy. The Greek dialect inscriptions from these areas were, after all, better accessible to Western scholars than the ones hidden away in Ottoman Greece. The dialects, finally, also figured in comprehensive \is{geography}geographical works covering more than Greece alone, albeit more marginally so. These accounts tended to be rather unoriginal in their information regarding the dialects, as Heylyn’s case demonstrates. \section{Conclusion}\label{sec:7.7} Before the modern period, scholars eagerly applied evaluative labels to the canonical ancient Greek dialects. Most of these attitudes must be understood against the background of the study of Greek literature and resulted in particular from the perceptions readers had of texts and their form. This holds for ancient and medieval times as well as for the early modern period, even though early modern philologists also relied to a considerable extent on the attitudes of their predecessors. Scholars linked the Greek dialects with other aspects of ancient Greece and Greek culture as well, and increasingly so from the Renaissance onward. Assumptions about the customs of individual Greek tribes triggered specific attitudes toward their respective dialects, and, on a more general level, the fickleness of the Greek people in its entirety was believed to have caused the vast dialectal diversity of its language. Put another way, early modern \isi{stereotypes} about Greeks in general and the tribes of ancient Greece in particular played a pivotal role in evaluating Greek linguistic diversity. In addition, the authors perceived a close connection between the Greek dialects and the ethnic and \isi{geopolitical constitution} of Greece. To sum up, early modern scholars attempted to fit the dialects into the larger picture of ancient Greece. Even though they principally had a philologically colored view of the matter, they frequently related the dialects to other, non-textual aspects of Greek culture and Greekness.
{ "alphanum_fraction": 0.7839460345, "avg_line_length": 311.2509960159, "ext": "tex", "hexsha": "494d8e9f2dbfbb7bfb8f56836545c6e99c79e012", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d5ec08842ec0a62ed43629432bcf24d521f32aac", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "langsci/253", "max_forks_repo_path": "chapters/07.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d5ec08842ec0a62ed43629432bcf24d521f32aac", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "langsci/253", "max_issues_repo_path": "chapters/07.tex", "max_line_length": 4079, "max_stars_count": null, "max_stars_repo_head_hexsha": "d5ec08842ec0a62ed43629432bcf24d521f32aac", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "langsci/253", "max_stars_repo_path": "chapters/07.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 22838, "size": 78124 }
%\section{\texorpdfstring{Polynomial Hierarchy cont}{Polynomial Hierarchy cont}}
%\vspace{5mm}
%\large
%\subsection{Tutorial}
%\begin{exercise}
%	\begin{enumerate}[label=\alph*)]
%		\item $\exists \TP = \TNP$.
%		\item $\forall \TP = co-\TNP$.
%		\item $\forall k > 0: \exists \Sigma_k = \Sigma_k$
%		\item $\forall k > 0: \forall \Pi_k = \Pi_k$
%		\item $\exists \Pi_k = \Sigma_{k + 1}$
%		\item $\forall \Sigma_k = \Pi_{k + 1}$
%	\end{enumerate}
%\end{exercise}

\subsection{PH collapse}
\begin{theorem}[Polynomial hierarchy collapse at level k]\label{ph_collapse}
	If $\Sigma_k = \Pi_k$ for some $k > 0$, then
	\[ \forall j \geq 0: \Sigma_{k + j} = \Pi_{k + j} = \Sigma_k \]
\end{theorem}
\begin{proof}
	By induction on $j$. For $j = 0$ the claim holds by assumption.
	Induction step:
	\[ \Sigma_{k + j + 1} \xlongequal{\cref{polyn_q}\ c)} \exists \Pi_{k + j} \xlongequal{I.H.} \exists \Sigma_{k + j} \]
	By \cref{polyn_q} c)
	\[ \exists \Sigma_{k + j} = \Sigma_{k + j} \]
	and by the induction hypothesis again
	\[ \Sigma_{k + j} = \Sigma_k \]
	Similarly for $\Pi_{k + j + 1}$:
	\[ \Pi_{k + j + 1} \xlongequal{\cref{polyn_q}\ f)} \forall \Sigma_{k + j} \xlongequal{I.H.} \forall \Pi_{k + j} \xlongequal{I.H.} \Sigma_k \]
\end{proof}

\begin{theorem}[PH does not collapse]
	Either $\forall k: \Sigma_k \subset \Sigma_{k + 1}$, or PH collapses.
\end{theorem}
\begin{proof}
	Assume
	\[ \Sigma_k = \Sigma_{k + 1} \]
	We know that
	\[ \Sigma_{k + 1} = \TNP(\Sigma_{k}) = \TNP(\Pi_k) \supseteq \Pi_k \]
	Together with the assumption this implies
	\[ \Pi_k \subseteq \Sigma_k, \]
	and by taking complements also $\Sigma_k \subseteq \Pi_k$, hence $\Sigma_k = \Pi_k$.
	Then by \cref{ph_collapse} PH collapses at level $k$.
	In particular, for $k = 0$ we get
	\[ \TP = \TNP \Rightarrow PH = \TP \]
\end{proof}

\begin{consequence}
	\[ \text{If}\ \exists k \in \N: \TP = \Sigma_0 \subset \Sigma_k,\ \text{then}\ \TP \subset \TNP \]
\end{consequence}
\begin{proof}
	By contraposition of the previous theorem: if $\TP = \TNP$, then PH collapses to $\TP$ and no such $k$ exists.
\end{proof}

\begin{definition}[PSPACE-complete]
	$L$ is PSPACE-complete if: \\
	$L \in PSPACE$ and\\
	$\forall L_a \in PSPACE: L_a$ is polynomial-time reducible to $L$.

	Note that we use time reducibility for a space class.
\end{definition}

\begin{lemma}[$PH = \Sigma_k$]\label{ph_col_lemma}
	Let $L$ be $PS$-complete. If $L \in \Sigma_k$, then $PH = \Sigma_k$.
\end{lemma}
\begin{proof}
	Take an arbitrary $L_2 \in PS$. By $PS$-completeness of $L$ there is a polynomial-time reduction, i.e.
	\[ \exists\ DTM\ M_d: x \in L_2 \iff M_d(x) \in L \]
	where $M_d$ is a transducer. Since $L \in \Sigma_k$, there is also an acceptor
	\[ \exists\ NTM\ M_n, \exists D \in \Sigma_{k - 1}: L = L(M_n, D) \]
	We construct a new NTM with oracle $D$ by concatenating $M_d$ and $M_n$; it accepts $L_2$ in polynomial time, hence
	% todo join 2 proofs, predn 5 od 33:00
	\[ L_2 \in \Sigma_k \]
	Since $L_2$ was arbitrary,
	\[ PS \subseteq \Sigma_k \]
	We already know that
	\[ \Sigma_k \subseteq PH \subseteq PS \]
	Therefore
	\[ PH = \Sigma_k \]
\end{proof}

\begin{consequence}
	Assume that a $PS$-complete language exists. If $PH = PS$, then
	\[ \exists k \in \N: PH = \Sigma_k \]
	Equivalently, if PH is a strictly growing infinite hierarchy, then no $PS$-complete language lies in PH, so $PS \setminus PH$ contains all $PS$-complete languages.
\end{consequence}
\begin{proof}
	Take a $PS$-complete language $L$. By $PH = PS$,
	\[ \exists k: L \in \Sigma_k \]
	and then by \cref{ph_col_lemma}
	\[ PH = \Sigma_k \]
\end{proof}

\section{\texorpdfstring{PS-complete languages}{PS-complete languages}}
\vspace{5mm}
\large
\begin{definition}[QBF -- quantified boolean formula]
	%todo predn 5 od 50:00
	\begin{enumerate}
		\item if $x$ is a variable, then $x$ is a QBF and $x$ is a \emph{free} variable
		\item if $E_1, E_2$ are QBFs, then
		\[ \neg E_1, (E_1) \land (E_2), (E_1) \lor (E_2) \]
		are also QBFs, and the status of the variables (free/bounded) does not change.
		\item if $E$ is a QBF, then
		\[ \exists x (E), \forall x (E) \]
		are also QBFs.
		All occurrences of $x$ become bounded. The status of the other variables does not change.
	\end{enumerate}
\end{definition}

\begin{definition}[QBF problem]
	QBF problem (language).
	Input: a QBF $F$ with no free variables.
	Question: is $F = 1$?
\end{definition}

How do we evaluate a QBF with no free variables?
\begin{itemize}
	\item $ \exists x (E) \iff E_0 \lor E_1$
	\item $ \forall x (E) \iff E_0 \land E_1$
\end{itemize}
Here $E_0$ is the formula in which every occurrence of $x$ is replaced by $0$; similarly for $E_1$.

\begin{example}
	\[ \forall x (\forall x (\exists y (x \lor y)) \land \neg x) \]
	by the rules above evaluates as
	\[ (\forall x (\exists y (x \lor y)) \land \neg 0) \land (\forall x (\exists y (x \lor y)) \land \neg 1) \]
\end{example}

\begin{note}
	SAT is the language of satisfiable CNFs. We can think of it as
	\[ \exists x_1 \exists x_2 \ldots \exists x_n (F(x_1, x_2, \ldots, x_n)) \]
	Therefore SAT is a special case of QBF.
\end{note}

\begin{theorem}[QBF $\in$ PS]
	QBF $\in$ PS.
\end{theorem}
\begin{proof}
	We construct a DTM that evaluates a QBF without free variables as follows:
	\begin{itemize}
		\item $\neg(E) \to$ evaluate $E$ and negate the result
		\item $(E_0) \lor (E_1) \to $ evaluate $E_0$, $E_1$, then combine the results by disjunction
		\item $(E_0) \land (E_1) \to $ evaluate $E_0$, $E_1$, then combine the results by conjunction
		\item $\exists x (E) \to$ compute $E_0, E_1$, then compute $E_0 \lor E_1$
		\item $\forall x (E) \to$ compute $E_0, E_1$, then compute $E_0 \land E_1$
	\end{itemize}
	We have at most $n$ operators, so we get a binary tree that evaluates the formula, where every branch is bounded by the total length of the initial formula. Hence $\bigO(n^2)$ space is enough for the evaluation.
\end{proof}

\begin{example}
	\[ F = \bigwedge_{1 \leq i,j \leq n} (x_i \to y_j) = \bigwedge_{1 \leq i,j \leq n} (\neg x_i \lor y_j)\]
	$F$ can be viewed as a complete bipartite graph.
	Claim: by the completeness of resolution, the resulting formula is the shortest CNF representing $F$; its length is $\Theta(n^2)$.
	Now consider a second formula
	\[ H = (\exists z) [(\wedge_{1 \leq i \leq n}(x_i \to z)) \land (\wedge_{1 \leq j \leq n} (z \to y_j))] \]
	$H$ is an encoding of $F$ with an auxiliary variable. It can also be viewed as a bipartite graph, but with a single node in between the two parts.
	%todo image predn 5 od 1:19:00
	However, $H$ is shorter, since its length is $\Theta(n)$.
\end{example}
\begin{proof}
	As every $x$ implies every $y$, the models of $F$ are:
	\[ (0, 0, 0, \ldots, *, *, \ldots, *) \cup (*, *, \ldots, *, 1, 1, \ldots, 1)\]
	where $*$ represents an arbitrary value.
	If we rewrite $H$ and substitute $0$ or $1$ for $z$, we get:
	\[ [(\wedge_{1 \leq i \leq n}(x_i \to 0)) \land (\wedge_{1 \leq j \leq n}(0 \to y_j))] \lor [(\wedge_{1 \leq i \leq n}(x_i \to 1)) \land (\wedge_{1 \leq j \leq n}(1 \to y_j))]\]
	Therefore the models are the same.
\end{proof}

\begin{note}
	The trick with auxiliary variables is also used if we require only one $x_i$ to be 1, which can be represented by the formula:
	\[ \wedge_{1 \leq i,j \leq n}(x_i \lor \neg x_j) \]
	This is $\Theta(n^2)$, and an auxiliary variable makes it linear.
\end{note}

\begin{theorem}[QBF is PS-hard]
	QBF is PS-hard (sketch).
\end{theorem}
\begin{proof}
	Every $L \in PS$ can be reduced to QBF in polynomial time.
	By the definition of PS
	\[ \exists\ DTM\ M: L = L(M), \exists p(n):\ \text{$M$ accepts $L$ in space}\ p(n)\]
	We have $2^{c_m p(n)}$ configurations of $M$, and every configuration can be encoded by a string of length
	\[ c_m p(n) = m(n) := m \]
	We assume that there is only one accepting configuration.
	$x \in L \iff \exists$ a path of length at most $2^m$ in the \emph{configuration graph} from $C_0$ to $C_{acc}$.
	Use a similar algorithm as in Savitch's theorem \cref{savic}, but encode the computation in a QBF.
	Notation: $\varphi_m$ is the encoding of one allowed transition step, as in the Cook--Levin theorem. We construct QBFs $\psi_i$:
	\begin{itemize}
		\item $ \psi_0(C, C^{\prime}) = 1 \iff \varphi_m(C, C^{\prime})$ is satisfiable
		\item $ \psi_i(C, C^{\prime}) = 1 \iff $ there exists a path $C \to C^{\prime}$ of length at most $2^i$
		\item $ \psi_m(C_0, C_{acc}) = 1 \iff x \in L$
	\end{itemize}
	The obvious idea, which would not work:
	\[ \psi_i(C, C^{\prime}) = \exists C_{int} (\psi_{i - 1}(C, C_{int}) \land \psi_{i - 1}(C_{int}, C^{\prime})) \]
	Since every such step doubles the size of the formula, we would end up with
	\[ |\psi_m| \in \Omega(2^m p(n)) \]
	Main idea:
	\[ \psi_i(C, C^{\prime}) = \exists C_{int} \forall D_1, D_2 \left[ \left( (D_1 = C \land D_2 = C_{int}) \lor (D_1 = C_{int} \land D_2 = C^{\prime}) \right) \Rightarrow \psi_{i - 1}(D_1, D_2) \right] \]
	Formally, the implication can be replaced by $\neg x \lor y$. Now
	\[ |\psi_i| = |\psi_{i - 1}| + \bigO(m) \]
	Therefore
	\[ |\psi_m| = \bigO(m^2) \]
	Also, going from $\psi_{i - 1}$ to $\psi_{i}$ we need one existential and one universal quantifier block. In the end, $\psi_m$ has $m$ pairs of alternating existential and universal quantifiers. Therefore $QBF \in \Sigma_m$.
\end{proof}

\subsection{P-completeness}
\begin{note}
	If we use polynomial-time reducibility, almost all languages (except the trivial ones: the empty language and the language of all words) are $\TP$-complete. Therefore we use a different reducibility.
\end{note}

\begin{definition}[log-space reducibility]\label{log_red}
	$A$ is \emph{log-space} reducible to $B$ if there exists a DTM transducer $M$ that works in log space (excluding the input and output tapes) such that $x \in A \iff M(x) \in B$.
\end{definition}

\begin{definition}[$\TP$-complete]
	$L$ is $\TP$-complete $\iff L \in \TP$ and $\forall A \in \TP$: $A$ is log-space reducible to $L$.
\end{definition}

\begin{theorem}[P-complete vs LOG]\label{p_comp_log}
	Let $L$ be $\TP$-complete and $L \in LOG = DS(\log(n))$. Then $\TP = LOG$.
\end{theorem}
\begin{proof}
	A log-space machine has at most $c^{\log n} = (2^{\log n})^{\log c} = n^{\log c}$ configurations, so it runs in polynomial time, hence
	\[ LOG \subseteq \TP \]
	We want
	\[ \TP \subseteq LOG \]
	Let $B \in \TP$ be arbitrary; we need a log-space acceptor for $B$, which gives $B \in LOG$.
	Since $L$ is $\TP$-complete, $\exists$ a log-space DTM $M_L: x \in B \iff M_L(x) \in L$.
	Since $L \in LOG$, $\exists$ a log-space DTM acceptor $M_{log}: L = L(M_{log})$.
	We cannot simply concatenate the two machines, as the output tape of the first machine $M_L$ would become a work tape of the second, and the output tape is not guaranteed to be of logarithmic size. Let $Y$ be the output of $M_L$:
	\[ |Y| \leq 2^{c_M \log n} = n^{c_M} \]
	Idea: keep just the current symbol of the output of $M_L$ and its position. Then perform the next step of $M_{log}$, restart $M_L$, and discard the output with position $< i$. Repeat. This works because we do not care about time, only about space.
	We need two counters $i, j \in \{ 1, \ldots, |Y| \}$, which require space
	\[ \log(n^c) = c \log n \]
	Here $i$ keeps the position of the (virtual) input head of $M_{log}$ and is updated with every move of that head; $j$ is reset to $0$ and is incremented with every symbol $M_L$ outputs. Symbols are discarded until $i = j$.
	%One counter is incremented with write on output tape, other counter keeps position on the output tape.
	%while(work)
	%{
	%	for(j = 0; j < i; ++j)
	%	{
	%		if(
	%	}
	%}
	\includegraphics[scale=0.4]{p_nlog.eps}
\end{proof}

\begin{consequence}
	Let $L$ be $\TP$-complete and $L \in NLOG = NS(\log(n))$. Then $\TP = NL$.
	The proof is the same, but the acceptor is nondeterministic.
\end{consequence}

Q: what happens if we use log-space reducibility in the definition of $\TNP$-completeness? This is a stricter notion, since if we can reduce in log space, we can also reduce in polynomial time (by the comparison of time and space: a machine using space $s$ runs in time $2^{\bigO(s)}$).

\subsection{$NL = co-NL$ by Szelepcsenyi-Immerman}
% 6th lecture
\begin{theorem}[Szelepcsenyi-Immerman (1988)]
	\[NL = co-NL \]
\end{theorem}
\begin{proof}
	The proof is similar to the case of an $\TNP$-complete language $L$ with $L \in co-\TNP$, but the reducibility is again log-space \cref{log_red}.
	Assume $L$ is $NL$-complete and let $L_a \in NL$ be arbitrary.
	Let $M_{log}$ be a log-space transducer reducing $L_a$ to $L$, and let $M$ be an acceptor for $co-L$.
	Then $M_{log} + M$ is an acceptor for $co-L_a$.
	The output of the transducer can be quite large, and it would become a work tape of the acceptor; as in theorem \cref{p_comp_log}, the output of the transducer is therefore produced character by character.

	The desired $NL$-complete language is
	\[ L = PATH = \{ (G, s, t)|\ \text{$G$ is a directed graph in which}\ \exists Path(s, t) \} \]
	The graph can be encoded as follows:
	\begin{itemize}
		\item for $n$ vertices a $\log n$ counter is enough.
		\item for each vertex we store the list of its neighbors.
	\end{itemize}
	It is easy to see that PATH $\in \TP$, since BFS or DFS solves the problem in polynomial time. Whether PATH $\in LOG$ is an open question. However, for a nondeterministic machine log space is enough, since we can guess a path of length at most $n$ by nondeterministically picking the next neighbor. In each step it is sufficient to remember the code of the current vertex and a path-length counter on separate tapes; both are of size $\bigO(\log n)$.
	The algorithm:
	\begin{enumerate}
		\item vertex curr $= s; counter = 0$;
		\item while($counter < n$)\{
		\item pick a neighbor $U$ nondeterministically; if($U = t$) accept;
		\item $curr = U$;
		\item \}
		\item reject.
	\end{enumerate}
	To show that PATH is $NL$-hard, we treat the graph as the configuration graph, and $Path(s, t)$ will be a path from $C_0$ to $C_{accept}$. WLOG there is only one $C_{accept}$ configuration.
	Let $L_a \in NL$ be arbitrary; therefore $\exists$ a log-space acceptor $M: L_a = L(M)$.
	A transducer $M_{log}$ encodes the configuration graph of $M$ on the given input.
	The code of the acceptor is not a problem, as the size of $M$ does not depend on the input: the transition table of $M$ is embedded in the control unit.
	The space needed for one configuration of $M$ is $\bigO(\log n)$, as the work tape is bounded by assumption and $\bigO(\log n)$ is also enough for the position of the head.

	The last step is to prove that co-PATH $\in NL$. The algorithm is the following:
	\begin{itemize}
		\item count the vertices reachable from $s$ in $G$.
		\item count the vertices reachable from $s$ in $G \setminus \{t\}$.
		\item if the counts are equal, accept.
	\end{itemize}
	There is no need to generate an encoding of the graph $G \setminus \{t\}$, since $t$ is part of the input and we can ignore it by a single ``if''.
	How do we count the reachable vertices? The number is bounded by $n$, therefore $\bigO(\log n)$ space is enough for it.
	\begin{definition}[$R_i$]
		\[ R_i = \{ u |\ u\ \text{is reachable from}\ s\ \text{by a path of length}\ \leq i \} \]
		Also
		\[ R_i = R_{i - 1} \cup \{ a |\ \exists b \in R_{i - 1}: (b, a) \in E(G) \} \]
	\end{definition}
	We want to compute $|R_n|$. To do so, we non-deterministically compute $|R_0|, |R_1|, \ldots, |R_n|$. At each step, we remember only the last number.
	The algorithm to compute $|R_i|$ (given $|R_{i - 1}|$):
	% todo rewrite, extremely messy
	\begin{enumerate}
		\item $|R_0| = 1$;
		\item guess $0 \leq g \leq n$ and verify:
		\begin{enumerate}
			\item $g \leq |R_i|$\\
			Non-deterministically generate a subset $V \subseteq V(G)$ of size $g$.
			Vertices are generated in increasing order of their codes. Check that $\forall v \in V: \exists Path(s, v)$ with $|Path(s, v)| \leq i$.
			\item $g \geq |R_i|$.\\
			This is equivalent to checking that
			\[ |V(G) \setminus R_i| \geq n - g \]
			Algorithm:
			% todo rewrite as procedure
			\begin{enumerate}
				\item non-deterministically generate $V_t \subseteq V(G)$ with $|V_t| = n - g$ vertices.
				\item $\forall a \in V_t$ check that $a \in V(G) \setminus R_i$, as follows:
				\item non-deterministically generate $V_r \subseteq V(G)$ with $|V_r| = |R_{i - 1}|$: select $|R_{i - 1}|$ vertices and for each selected vertex check that there is a path from $s$ of length $\leq i - 1$. Since $R_{i-1}$ is unique, we get the same set every time.
				\item $\forall b \in V_r$ check that $a \ne b \land (b, a) \notin E(G)$.
			\end{enumerate}
		\end{enumerate}
	\end{enumerate}
	The required space is again $\bigO(\log n)$, as at each step the algorithm stores only a constant number of vertices and counters.
\end{proof}
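
To make the sets $R_i$ from the last proof more concrete, the following Python sketch computes the reachability counts $|R_0|, \ldots, |R_{n-1}|$ deterministically for a small directed graph given as an adjacency list (a hypothetical representation, not the encoding described above). It stores whole sets and therefore uses linear space, not $\bigO(\log n)$; it only illustrates \emph{what} the nondeterministic log-space algorithm has to verify, not \emph{how} it verifies it.

\begin{verbatim}
# Deterministic illustration of the sets R_i from the proof above.
# The graph is a dict mapping each vertex to a list of its out-neighbours.

def reachable_counts(graph, s):
    """Return [|R_0|, |R_1|, ..., |R_{n-1}|] for paths of length <= i from s."""
    n = len(graph)
    R = {s}                      # R_0 = {s}
    counts = [len(R)]
    for i in range(1, n):
        # R_i = R_{i-1} plus every vertex with an edge from some b in R_{i-1}
        R = R | {a for b in R for a in graph.get(b, [])}
        counts.append(len(R))
    return counts

if __name__ == "__main__":
    G = {0: [1], 1: [2], 2: [], 3: [0]}
    print(reachable_counts(G, 0))    # [1, 2, 3, 3]; vertex 3 is never reached
\end{verbatim}

The co-PATH test from the itemized algorithm above then amounts to comparing the final count for $G$ with the final count for $G$ without $t$.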
{ "alphanum_fraction": 0.6569053958, "avg_line_length": 35.7331786543, "ext": "tex", "hexsha": "a2472c52635b8cd23deabe10cd34033c92901fbf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cb69e2889ce8374c64aa6ac9faf64ad1ab01add7", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "karlov/NTIN063", "max_forks_repo_path": "src/prednasky/05_prednaska.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cb69e2889ce8374c64aa6ac9faf64ad1ab01add7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "karlov/NTIN063", "max_issues_repo_path": "src/prednasky/05_prednaska.tex", "max_line_length": 178, "max_stars_count": null, "max_stars_repo_head_hexsha": "cb69e2889ce8374c64aa6ac9faf64ad1ab01add7", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "karlov/NTIN063", "max_stars_repo_path": "src/prednasky/05_prednaska.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5414, "size": 15401 }
\input{../header_class}

%---------- start document ---------- %

\section{group -- algorithms for finite groups}\linkedzero{group}

\begin{itemize}
\item {\bf Classes}
  \begin{itemize}
  \item \linkingone{group}{Group}
  \item \linkingone{group}{GroupElement}
  \item \linkingone{group}{GenerateGroup}
  \item \linkingone{group}{AbelianGenerate}
  \end{itemize}
\end{itemize}

\C

\subsection{\negok Group -- group structure}\linkedone{group}{Group}
\initialize
\func{Group}{\hiki{value}{class},\ \hikiopt{operation}{int}{-1}}{Group}\\
\spacing
% document of basic document
\quad Create an object which wraps \param{value} (typically a ring or a field) only to expose its group structure. \\
\spacing
% added document
\quad The instance has the methods defined for an (abstract) group. For example, \linkingtwo{group}{Group}{identity} returns the identity element of the group obtained from the wrapped \param{value}.\\
\spacing
% input, output document
\quad \param{value} must be an instance of a class that expresses a group structure. \param{operation} must be 0 or 1; if \param{operation} is 0, \param{value} is regarded as an additive group. On the other hand, if \param{operation} is 1, \param{value} is considered as a multiplicative group. The default value of \param{operation} is 0.\\
\negok You can input an instance of \linkingone{group}{Group} itself as \param{value}. In this case, the default value of \param{operation} is the attribute \linkingtwo{group}{Group}{operation} of that instance.

\begin{at}
  \item[entity]\linkedtwo{group}{Group}{entity}:\\
  The wrapped object.
  \item[operation]\linkedtwo{group}{Group}{operation}:\\
  It expresses the mode of operation; $0$ means additive, while $1$ means multiplicative.
\end{at}
\begin{op}
  \verb+A==B+ & Return whether A and B are equal or not.\\
  \verb+A!=B+ & Check whether A and B are not equal.\\
  \verb+repr(A)+ & representation\\
  \verb+str(A)+ & simple representation\\
\end{op}
\begin{ex}
>>> G1=group.Group(finitefield.FinitePrimeField(37), 1)
>>> print G1
F_37
>>> G2=group.Group(intresidue.IntegerResidueClassRing(6), 0)
>>> print G2
Z/6Z
\end{ex}%Don't indent!

\method
\subsubsection{setOperation -- change operation}\linkedtwo{group}{Group}{setOperation}
\func{setOperation}{\param{self},\ \hiki{operation}{int}}{(None)}\\
\spacing
% document of basic document
\quad Change the group type to additive ($0$) or multiplicative ($1$).\\
\spacing
% added document
%\spacing
% input, output document
\quad \param{operation} must be 0 or 1.\\

\subsubsection{\negok createElement -- generate a GroupElement instance}\linkedtwo{group}{Group}{createElement}
\func{createElement}{\param{self},\ \param{*value}}{\out{GroupElement}}\\
\spacing
% document of basic document
\quad Return a \linkingone{group}{GroupElement} object whose group is \param{self}, initialized with \param{value}.\\
\spacing
% added document
\quad \negok This method calls \param{self}.\linkingtwo{group}{Group}{entity}.createElement.\\
\spacing
% input, output document
\quad \param{value} must fit the form of the argument for \param{self}.\linkingtwo{group}{Group}{entity}.createElement.\\

\subsubsection{\negok identity -- identity element}\linkedtwo{group}{Group}{identity}
\func{identity}{\param{self}}{\param{GroupElement}}\\
\spacing
% document of basic document
\quad Return the identity element (unit) of the group.\\
\spacing
% added document
\quad Return zero (additive) or one (multiplicative), corresponding to \linkingtwo{group}{Group}{operation}.
\negok This method calls \param{self}.\linkingtwo{group}{Group}{entity}.identity; if \linkingtwo{group}{Group}{entity} does not have that attribute, it returns one or zero instead.
\spacing
% input, output document

\subsubsection{grouporder -- order of the group}\linkedtwo{group}{Group}{grouporder}
\func{grouporder}{\param{self}}{\param{long}}\\
\spacing
% document of basic document
\quad Return the group order (cardinality) of \param{self}.\\
\spacing
% added document
\quad \negok This method calls \param{self}.\linkingtwo{group}{Group}{entity}.grouporder, card or \_\_len\_\_.\\
We assume that the group is finite, so the returned value is expected to be a long integer. If the group is infinite, the type of the output of this method is not defined.
\spacing
% input, output document
\begin{ex}
>>> G1=group.Group(finitefield.FinitePrimeField(37), 1)
>>> G1.grouporder()
36
>>> G1.setOperation(0)
>>> print G1.identity()
FinitePrimeField,0 in F_37
>>> G1.grouporder()
37
\end{ex}%Don't indent!

\C

\subsection{GroupElement -- elements of group structure}\linkedone{group}{GroupElement}
\initialize
\func{GroupElement}{\hiki{value}{class},\ \hikiopt{operation}{int}{-1}}{GroupElement}\\
\spacing
% document of basic document
\quad Create an object which wraps \param{value} (typically a ring element or a field element) to make it behave as an element of a group.\\
\spacing
% added document
\quad The instance has the methods defined for an (abstract) element of a group. For example, \linkingtwo{group}{GroupElement}{inverse} returns the inverse element of \param{value} as an element-of-group object.\\
\spacing
% input, output document
\quad \param{value} must be an instance of a class that expresses an element of a group structure. \param{operation} must be $0$ or $1$; if \param{operation} is $0$, \param{value} is regarded as an element of the additive group. On the other hand, if \param{operation} is $1$, \param{value} is considered as an element of the multiplicative group. The default value of \param{operation} is $0$.\\
\negok You can input an instance of \linkingone{group}{GroupElement} itself as \param{value}. In this case, the default value of \param{operation} is the attribute \linkingtwo{group}{GroupElement}{operation} of that instance.

\begin{at}
  \item[entity]\linkedtwo{group}{GroupElement}{entity}:\\
  The wrapped object.
  \item[set]\linkedtwo{group}{GroupElement}{set}:\\
  It is an instance of \linkingone{group}{Group}, which expresses the group to which \param{self} belongs.
  \item[operation]\linkedtwo{group}{GroupElement}{operation}:\\
  It expresses the mode of operation; $0$ means additive, while $1$ means multiplicative.
\end{at}
\begin{op}
  \verb+A==B+ & Return whether A and B are equal or not.\\
  \verb+A!=B+ & Check whether A and B are not equal.\\
  \verb+A.ope(B)+ & Basic operation (additive $+$, multiplicative $*$)\\
  \verb+A.ope2(n)+ & Extended operation (additive $*$, multiplicative $**$)\\
  \verb+A.inverse()+\linkedtwo{group}{GroupElement}{inverse} & Return the inverse element of \param{self}\\
  \verb+repr(A)+ & representation\\
  \verb+str(A)+ & simple representation\\
\end{op}
\begin{ex}
>>> G1=group.GroupElement(finitefield.FinitePrimeFieldElement(18, 37), 1)
>>> print G1
FinitePrimeField,18 in F_37
>>> G2=group.GroupElement(intresidue.IntegerResidueClass(3, 6), 0)
>>> print G2
IntegerResidueClass(3, 6)
\end{ex}%Don't indent!
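
\quad For illustration, here is a hedged sketch of how the basic operations from the table above might be used; the values follow from modular arithmetic ($18 \cdot 18 \equiv 28$ and $18^{-1} \equiv 35 \pmod{37}$), but the exact printed representation of the results is an assumption and may differ.
\begin{ex}
>>> G1=group.GroupElement(finitefield.FinitePrimeFieldElement(18, 37), 1)
>>> print G1.ope(G1)
FinitePrimeField,28 in F_37
>>> print G1.ope2(2)
FinitePrimeField,28 in F_37
>>> print G1.inverse()
FinitePrimeField,35 in F_37
\end{ex}%Don't indent!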
\method
\subsubsection{setOperation -- change operation}\linkedtwo{group}{GroupElement}{setOperation}
\func{setOperation}{\param{self},\ \hiki{operation}{int}}{(None)}\\
\spacing
% document of basic document
\quad Change the group type to additive ($0$) or multiplicative ($1$).\\
\spacing
% added document
%\spacing
% input, output document
\quad \param{operation} must be $0$ or $1$.\\

\subsubsection{\negok getGroup -- generate a Group instance}\linkedtwo{group}{GroupElement}{getGroup}
\func{getGroup}{\param{self}}{\out{Group}}\\
\spacing
% document of basic document
\quad Return the \linkingone{group}{Group} object to which \param{self} belongs.\\
\spacing
% added document
\quad \negok This method calls \param{self}.\linkingtwo{group}{GroupElement}{entity}.getRing or getGroup.\\
\negok During the initialization of a \linkingone{group}{GroupElement}, the attribute \linkingtwo{group}{GroupElement}{set} is set to the value returned from this method.\\
\spacing
% input, output document

\subsubsection{order -- order by factorization method}\linkedtwo{group}{GroupElement}{order}
\func{order}{\param{self}}{\param{long}}\\
\spacing
% document of basic document
\quad Return the order of \param{self}.\\
\spacing
% added document
\quad \negok This method uses the factorization of the order of the group.\\
\negok We assume that the group is finite, so the returned value is expected to be a long integer.
\negok If the group is infinite, the method would raise an error or return an invalid value.\\
\spacing
% input, output document

\subsubsection{t\_order -- order by baby-step giant-step}\linkedtwo{group}{GroupElement}{t\_order}
\func{t\_order}{\param{self},\ \hikiopt{v}{int}{2}}{\param{long}}\\
\spacing
% document of basic document
\quad Return the order of \param{self}.\\
\spacing
% added document
\quad \negok This method uses Terr's baby-step giant-step algorithm.\\
This method does not use the order of the group. You can set the number of baby steps with \param{v}.
\negok We assume that the group is finite, so the returned value is expected to be a long integer.
\negok If the group is infinite, the method would raise an error or return an invalid value.\\
\spacing
% input, output document
\quad \param{v} must be an integer.\\
\begin{ex}
>>> G1=group.GroupElement(finitefield.FinitePrimeFieldElement(18, 37), 1)
>>> G1.order()
36
>>> G1.t_order()
36
\end{ex}%Don't indent!

\C

\subsection{\negok GenerateGroup -- group structure with generator}\linkedone{group}{GenerateGroup}
\initialize
\func{GenerateGroup}{\hiki{value}{class},\ \hikiopt{operation}{int}{-1}}{GroupElement}\\
\spacing
% document of basic document
\quad Create an object for the group given by the elements in \param{value}. \\
\spacing
% added document
\quad At present, this initializes a group `including' the given group elements, not a group generated by them, so we do not recommend using this module now. The instance has the methods defined for an (abstract) element of a group. For example, \linkingtwo{group}{GroupElement}{inverse} returns the inverse element of \param{value} as an element-of-group object.\\
The class inherits the class \linkingone{group}{Group}.\\
\spacing
% input, output document
\quad \param{value} must be a list of generators. Each generator should be an instance of a class that expresses an element of a group structure. \param{operation} must be $0$ or $1$; if \param{operation} is $0$, \param{value} is regarded as a set of elements of the additive group. On the other hand, if \param{operation} is $1$, \param{value} is considered as a set of elements of the multiplicative group.
The default value of \param{operation} is $0$.\\
\begin{ex}
>>> G1=group.GenerateGroup([intresidue.IntegerResidueClass(2, 20),
...  intresidue.IntegerResidueClass(6, 20)])
>>> G1.identity()
intresidue.IntegerResidueClass(0, 20)
\end{ex}%Don't indent!

\C

\subsection{AbelianGenerate -- abelian group structure with generator}\linkedone{group}{AbelianGenerate}
\initialize
\quad The class inherits the class \linkingone{group}{GenerateGroup}.\\

\subsubsection{relationLattice -- relation between generators}\linkedtwo{group}{AbelianGenerate}{relationLattice}
\func{relationLattice}{\param{self}}{\out{\linkingone{matrix}{Matrix}}}\\
\spacing
% document of basic document
\quad Return a basis of the relation lattice as a square matrix, each of whose column vectors is a relation basis.\\
\spacing
% added document
\quad A relation basis $V$ satisfies $\prod_{i} \mbox{generator}_i^{V_i}=1$.
\spacing
% input, output document

\subsubsection{computeStructure -- abelian group structure}\linkedtwo{group}{AbelianGenerate}{computeStructure}
\func{computeStructure}{\param{self}}{\param{tuple}}\\
\spacing
% document of basic document
\quad Compute the finite abelian group structure.\\
\spacing
% added document
\quad If \param{self} $G \simeq \oplus_i <h_i>$, return [($h_1$, ord($h_1$)),..($h_n$, ord($h_n$))] and $^\# G$, where $<h_i>$ is the cyclic group with generator $h_i$.\\
\spacing
% input, output document
\quad The output is a tuple with two elements; the first element is a list whose elements are pairs of $h_i$ and its order, and the second element is the order of the group.
\begin{ex}
>>> G=AbelianGenerate([intresidue.IntegerResidueClass(2, 20),
...  intresidue.IntegerResidueClass(6, 20)])
>>> G.relationLattice()
10 7
0 1
>>> G.computeStructure()
([(IntegerResidueClassRing,IntegerResidueClass(2, 20), 10)], 10L)
\end{ex}%Don't indent!

\C

%---------- end document ---------- %

\input{../footer}
{ "alphanum_fraction": 0.6959650869, "avg_line_length": 46.1519434629, "ext": "tex", "hexsha": "1bbebc7d50244f2338323329371a3fe16e88827a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "turkeydonkey/nzmath3", "max_forks_repo_path": "manual/en/group.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "turkeydonkey/nzmath3", "max_issues_repo_path": "manual/en/group.tex", "max_line_length": 175, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "turkeydonkey/nzmath3", "max_stars_repo_path": "manual/en/group.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-26T19:22:17.000Z", "max_stars_repo_stars_event_min_datetime": "2021-05-26T19:22:17.000Z", "num_tokens": 3675, "size": 13061 }
\section{Progress}
So far, we have implemented an MFCC feature extractor and a GMM model for acoustic modeling. For the GMM we employ scikit-learn\cite{scikit-learn}.
We have tested a closed-set recognition task on 20 speakers and calculated the recognition accuracy.
A 30-second utterance is used for training, and 5-second utterances are used for testing. We randomly extract 100 continuous 5-second test utterances from each speaker's recording as the test set.
There is no overlap between the training and test utterances.
Over the last week, we fine-tuned the parameters of our model and scrutinized more papers preceding our work.
The test is repeated 20 times to obtain an accurate estimate of the accuracy.
The test result is as follows:
\begin{itemize}
	\item \textbf{Spontaneous} 0.940
	\item \textbf{Reading} 0.926
	\item \textbf{Whisper} 0.931
\end{itemize}
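
As a concrete illustration of the recognition stage, the sketch below shows GMM training and closed-set scoring with scikit-learn's current \texttt{GaussianMixture} API; the component count and covariance type are illustrative placeholders rather than our tuned values, and the MFCC features are assumed to be precomputed (one row per frame).

\begin{verbatim}
# Minimal sketch of the GMM stage (MFCC extraction not shown).
from sklearn.mixture import GaussianMixture

def train_speaker_models(train_features, n_components=32):
    """train_features: dict speaker_id -> (n_frames, n_mfcc) feature array."""
    models = {}
    for speaker, feats in train_features.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type='diag', max_iter=200)
        gmm.fit(feats)                     # fit one GMM per enrolled speaker
        models[speaker] = gmm
    return models

def identify(models, test_feats):
    """Closed-set decision: speaker with the highest average log-likelihood."""
    return max(models, key=lambda spk: models[spk].score(test_feats))
\end{verbatim}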
{ "alphanum_fraction": 0.776744186, "avg_line_length": 37.3913043478, "ext": "tex", "hexsha": "a7db18bb23300129f94a3d44b38f7dac4010fd14", "lang": "TeX", "max_forks_count": 315, "max_forks_repo_forks_event_max_datetime": "2022-03-29T08:13:36.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-21T00:06:00.000Z", "max_forks_repo_head_hexsha": "1f5d150ded23af4c152b8d20f1ab4ecec77b40e1", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "juliia5m/knu_voice", "max_forks_repo_path": "doc/Midterm-Report/progress.tex", "max_issues_count": 91, "max_issues_repo_head_hexsha": "1f5d150ded23af4c152b8d20f1ab4ecec77b40e1", "max_issues_repo_issues_event_max_datetime": "2021-05-19T08:51:26.000Z", "max_issues_repo_issues_event_min_datetime": "2015-03-19T09:25:23.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "juliia5m/knu_voice", "max_issues_repo_path": "doc/Midterm-Report/progress.tex", "max_line_length": 80, "max_stars_count": 717, "max_stars_repo_head_hexsha": "1f5d150ded23af4c152b8d20f1ab4ecec77b40e1", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "juliia5m/knu_voice", "max_stars_repo_path": "doc/Midterm-Report/progress.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T12:45:45.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-03T15:25:46.000Z", "num_tokens": 235, "size": 860 }
\documentclass[onecolumn, draftclsnofoot, 10pt, compsoc]{IEEEtran} \usepackage{graphicx} \usepackage{url} \usepackage{setspace} \usepackage{geometry} \usepackage{listings} \usepackage{tikz} \usetikzlibrary{arrows,automata} \usepackage{caption} \geometry{textheight=9.5in, textwidth=7in} % 1. Fill in these details \def \CapstoneTeamName{ Team 41} \def \CapstoneTeamNumber{ 41} \def \GroupName{ 30k CS Avionics} \def \GroupMemberOne{ Joshua Novak} \def \GroupMemberTwo{ Allison Sladek} \def \GroupMemberThree{ Levi Willmeth} \def \CapstoneProjectName{ 30K Rocket Spaceport America} \def \CapstoneSponsorCompany{ Oregon State University} \def \CapstoneSponsorPerson{ Dr. Nancy Squires} % 2. Uncomment the appropriate line below so that the document type works \def \DocType{ %Problem Statement %Requirements Document %Technology Review Design Document %Progress Report } \newcommand{\NameSigPair}[1]{ \par \makebox[2.75in][r]{#1} \hfill \makebox[3.25in]{\makebox[2.25in]{\hrulefill} \hfill \makebox[.75in]{\hrulefill}} \par\vspace{-12pt} \textit{ \tiny\noindent \makebox[2.75in]{} \hfill \makebox[3.25in]{\makebox[2.25in][r]{Signature} \hfill \makebox[.75in][r]{Date}} } } % 3. If the document is not to be signed, uncomment the RENEWcommand below %\renewcommand{\NameSigPair}[1]{#1} % \renewcommand{\thesubsubsection}{\thesection.\alph{subsubsection}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \begin{titlepage} \pagenumbering{gobble} \begin{singlespace} %\includegraphics[height=4cm]{coe_v_spot1} \hfill % 4. If you have a logo, use this includegraphics command to put it on the coversheet. %\includegraphics[height=4cm]{CompanyLogo} \par\vspace{.2in} \centering \scshape{ \huge CS Capstone \DocType \par {\large\today}\par \vspace{.5in} \textbf{\Huge\CapstoneProjectName}\par \vfill % {\large Prepared for}\par % \Huge \CapstoneSponsorCompany\par % \vspace{5pt} % {\Large\NameSigPair{\CapstoneSponsorPerson}\par} {\large Prepared by }\par % \GroupName\par % 5. comment out the line below this one if you do not wish to name your team % \CapstoneTeamName\par \vspace{5pt} {\Large \NameSigPair{\GroupMemberOne}\par \NameSigPair{\GroupMemberTwo}\par \NameSigPair{\GroupMemberThree}\par } \vspace{20pt} } \end{singlespace} \section*{Revision History} \begin{tabular*}{1\linewidth}{@{\extracolsep{\fill}}|c|c|c|c|} \hline Name & Date & Reason For Changes & Version\\ \hline Levi Willmeth, Joshua Novak, Allison Sladek&11/26/17&Initial document draft&0.1\\ \hline Levi Willmeth, Joshua Novak, Allison Sladek&4/29/18&Update due to hardware changes&0.2\\ \hline \end{tabular*} \\ \begin{abstract} This design document outlines the major technical challenges, design elements, and testing goals that team 41 will complete for Oregon State University's 30k Spaceport America Cup entry in 2018. The competition involves designing, building, and launching a student-made rocket to 30,000 feet, and is scored on several criteria including a software ground station which records and displays near real time telemetry from the rocket, and a separate scientific payload. \end{abstract} \end{titlepage} \newpage \pagenumbering{arabic} \tableofcontents % 7. uncomment this (if applicable). Consider adding a page break. %\listoffigures %\listoftables % ============================================================================== \section{Project Overview} \subsection{Introduction} This project will design, write, and test software that will fly on board the Oregon State University's entry to the Spaceport America Cup's 30k Challenge. 
The Spaceport America Cup is an international engineering competition to design, build, and fly a student-made rocket to 30,000 feet. The competition is scored on several criteria including software components like flight avionics, recording and displaying telemetry, and conducting a scientific experiment during flight.

\subsection{Stakeholders}
The project stakeholders are Oregon State University, our primary mentor Dr. Nancy Squires, our ESRA 2017-18 team members including the Mechanical Engineering and Electrical Engineering student subteams, and our Computer Science capstone team 41.

\subsection{Design Concerns}
This project can be broken into four major design components, each with unique design challenges and considerations: rocket avionics, payload avionics, receiving telemetry and stored data, and displaying telemetry and stored data. The individual component design concerns will be covered in each component section. The overall project design concerns are that the rocket is made by students, that we are safe at all times during building and launch, that the rocket achieves a high score during the competition, and that we act as positive role models for future students considering joining Oregon State University's rocket program.
% ==============================================================================
\section{Major Design Components}
\subsection{Overview of design elements}
This project requires that multiple pieces of software and hardware work together to accomplish our mission. The rocket will carry at least two flight computers which will record on-board flight data, as well as multiple telemetry modules that will record and transmit flight data during flight. During the flight we will capture and display live telemetry data, which will be supplemented after the flight by importing stored data from the flight computers and telemetry modules.

The on-board flight computers will also perform important tasks during the flight. The rocket avionics will use inertial measurement units and pressure sensors to detect apogee. The payload avionics will use accelerometers to measure gravity and engage a propeller to accelerate the payload downward, to create a microgravity environment for the scientific experiment.

On the ground, a set of affordable computers will be connected to radios which will receive the live telemetry signals. These computers will parse text out of the incoming audio stream and store that information in a database over a local network. Nearby team members and spectators will be encouraged to use their laptops, tablets, and cell phones to connect to a WiFi network to access a web page containing a flight summary, maps of the rocket's location and altitude, and other information. This information will be displayed within a few seconds of real time, using data received from BeeLineGPS telemetry modules on board the rocket and payload.

After the rocket and payload have landed and been recovered, we will import additional data from the on-board computers into the database. This will provide a large amount of additional data and allow us to display additional graphs and information about the flight.

\subsection{Avionics}
\subsubsection{Reading from Sensors}
Joshua Novak will manage avionics code that handles reading from sensors. There are a variety of sensors that will need to be read from for this project. These include accelerometers, altimeters, and GPS devices, among other sensors.
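As a rough illustration of this sensor-reading layer, the sketch below shows one way a common read interface could look; the rest of this subsection describes the constraints the real code must satisfy. This is only a sketch: the class names, simulated drivers, and returned fields are hypothetical, and the real drivers will depend on the specific sensor models chosen.

\begin{lstlisting}[frame=single]
# Sketch of a uniform sensor-reading layer (all names hypothetical).
from abc import ABC, abstractmethod
import random
import time

class Sensor(ABC):
    """Common interface so flight code can poll any sensor the same way."""

    @abstractmethod
    def read(self):
        """Return a timestamped reading in the sensor's natural units."""

class SimulatedAltimeter(Sensor):
    """Stand-in for a real altimeter driver; reports altitude in meters."""

    def read(self):
        return {"time": time.time(),
                "altitude_m": 1390.0 + random.uniform(-0.5, 0.5)}

class SimulatedAccelerometer(Sensor):
    """Stand-in for a real IMU driver; reports acceleration in m/s^2."""

    def read(self):
        return {"time": time.time(),
                "acc_x": random.gauss(0.0, 0.05),
                "acc_y": random.gauss(0.0, 0.05),
                "acc_z": 9.81 + random.gauss(0.0, 0.05)}

if __name__ == "__main__":
    for sensor in (SimulatedAltimeter(), SimulatedAccelerometer()):
        print(type(sensor).__name__, sensor.read())
\end{lstlisting}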
Avionics code will need to be written that reads from these sensors and can be called by the functional avionics code for the rocket and payload. The output of this code will need to represent the output of the sensor, interpreted according to the standards of the device.

The code is written in Python. This decision was made due to the ease of using Python, as well as tests by our team finding that the program was largely I/O gated, meaning that increasing processing speed or decreasing memory usage with C would not yield any notable gains. This has also made the creation of unit tests somewhat easier due to the many utilities available for the task. The code will be running on a Raspberry Pi and will need to be as efficient as possible. Libraries will be used when they are available, and code will be written for specific sensor models.

The CS team will also work alongside the ECE team to include any code needed for their capstone requirements, assisting them as necessary. Alternatively, the ECE team and CS team will write code separately, with the CS avionics being used on the rocket and the ECE avionics being written solely to satisfy their requirements.

Unit tests for the sensor avionics will be written alongside the code. Since Joshua is also responsible for testing avionics code, he will handle this as well, and ensure that any code written by the ECE team is thoroughly tested.

\subsubsection{Rocket Avionics}
Levi Willmeth will manage writing the rocket avionics for this project. The main rocket hardware components will include a Raspberry Pi Zero, at least one deployment module, a BeeLineGPS, and sensors including at least a thermometer, gyroscope, accelerometer, and altimeter. Redundant sensors can be averaged to achieve a much more accurate reading than could otherwise be achieved with fewer sensors. This will allow better analysis of the data, and more accurate apogee detection.

Separation of the main rocket at apogee to deploy both the parachute and payload is a mission-critical operation. If it is triggered at the wrong time, the rocket may not reach the goal altitude, or could be in the wrong position for a successful separation. As such, it will be controlled by the Stratologger, a commercial product that measures altitude, temperature, and battery voltage, and activates separation when apogee is detected.

The main purpose of the Raspberry Pi will then be to record data logs of the main rocket sensors for later analysis. These logs will allow the team to compare our sensors against the Stratologger. This will allow testing of the Stratologger's accuracy, and ensure that deployment will function as expected after pre-flight testing.

\subsubsection{Payload Avionics}
Levi Willmeth will manage writing the payload avionics for this project. The scientific payload will carry a brushless motor and ESC, several IMUs, a Raspberry Pi Zero flight computer, a video camera, and a BeeLineGPS telemetry module. The flight computer will be turned on several minutes before launch and will record up-close video of the scientific experiment throughout the flight. It will also read acceleration and attitude data from each of the IMUs to detect separation and measure the amount of gravity experienced by the experiment. After separation has been detected, the flight computer will begin increasing the speed of the motor to propel the payload downward, which will reduce the acceleration felt by the experiment.
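A minimal sketch of this detection logic is shown below, assuming the payload averages its redundant IMUs and watches for a large impulse on the vertical axis. The threshold and sample values are placeholders for illustration, not flight-qualified numbers.

\begin{lstlisting}[frame=single]
# Sketch: average redundant IMU readings and flag a separation impulse.
SEPARATION_THRESHOLD_G = 6.0  # hypothetical impulse level, in g

def average_axis(readings):
    """Average one axis across redundant IMUs to reduce single-sensor noise."""
    return sum(readings) / len(readings)

def separation_detected(z_axis_g):
    """True when the averaged vertical acceleration exceeds the threshold."""
    return average_axis(z_axis_g) > SEPARATION_THRESHOLD_G

if __name__ == "__main__":
    quiet = [0.2, 0.1, 0.15]   # three IMUs before separation (illustrative)
    impulse = [7.2, 6.8, 7.0]  # the same IMUs during the separation impulse
    print(separation_detected(quiet))    # False
    print(separation_detected(impulse))  # True
\end{lstlisting}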
The goal of the experiment is to achieve zero gravity for as long as possible, which we estimate will be 10--12 seconds.

Accurately detecting separation and ejection from the body of the rocket will be a critical task for the payload avionics. If the payload begins to spin the propeller before being ejected from the body of the rocket, it will immediately destroy the propeller and possibly damage other portions of the rocket. The motor is powerful enough that, if held still while being powered, it could draw enough current to easily start a fire.

It will be important to develop an accurate, if not necessarily precise, method to determine when the rocket has separated and the payload has been ejected from the body of the rocket. Because we are unable to add physical components like a manual switch or a wire connecting the payload to the body of the rocket, we will need to rely on our sensors. The payload will carry multiple six-degree-of-freedom IMUs which will give us acceleration and gyroscopic readings along the x, y, and z axes. We will be able to use these sensors to remain in an idle state during launch and glide, and finally to detect the rather large impulse forces created during separation and ejection.

We plan to model the avionics system as a state machine with pre-launch, launch, glide, separation, microgravity, and parachute states. This type of direct, forward progression allows us to design a state machine which prevents the payload avionics from inadvertently returning to a previous state, and gives us a solid idea of what behaviors to look for in order to transition to the next state.

\begin{figure}[!ht]
\centering
\begin{tikzpicture}[->,shorten >=1pt,auto,node distance=2.8cm,semithick]
  \node[initial,state] (P) {Pre-Launch};
  \node[state] (L) [right of=P] {Launch};
  \node[state] (G) [right of=L] {Glide};
  \node[state] (S) [right of=G] {Separation};
  \node[state] (M) [right of=S] {Microgravity};
  \node[state] (C) [right of=M] {Parachute};
  \path (P) edge [right] node{} (L);
  \path (L) edge [right] node{} (G);
  \path (G) edge [right] node{} (S);
  \path (S) edge [right] node{} (M);
  \path (M) edge [right] node{} (C);
\end{tikzpicture}
\caption{Conceptual state diagram for payload avionics.}
\end{figure}

For example, during the pre-launch state we can effectively ignore all sensor readings until we see a very high acceleration on the z-axis, which signals launch. We can begin logging all sensor values from that point on (or even retroactively record a set of rolling pre-launch values) until we see the z-axis acceleration end, which signals that we have entered the glide state. Obviously we will need to look at more than a single sensor value to transition between states, but the general idea is to understand what we can expect during each phase of launch, and encode those conditions as state transitions.

The major design concern for payload avionics is that accuracy is more important than precision. A false early start would risk damaging or completely destroying the rocket with the payload propeller, but a late start would only risk missing some or all of the experimental data.

\subsubsection{Avionics Testing}
The CS and ECE teams will be writing avionics for the rocket and payload which will be able to determine when separation should occur. Due to the high degree of precision that this requires and the significant negative consequences of failure for this code, ensuring the functionality of this code is of the utmost importance.
It is also standard within the aerospace industry to very thoroughly test any and all avionics code. Writing a thorough test suite will be good practice for working within the aerospace or related industries. There are three types of testing that the team has decided to carry out: hardware tests, unit tests, and simulations with randomization.

The CS team will cooperate with the ECE team to perform hardware tests. Hardware tests will involve making certain that each sensor is functional, powered, and giving reasonable readings. Joshua will take responsibility for ensuring that every electronic component on both the payload and the rocket has been thoroughly tested.

Unit tests will involve making sure that the basic functionality of the code is intact. This will mean ensuring that the programs and functions written to read from sensors give accurate output for a given input. These will mostly be used to test code that reads from sensors, or to verify that the code triggers certain events when given input that should do so. Some tests may be written to ensure a minimum degree of robustness, such as throwing out invalid inputs (for example, a string of characters where a number is expected). Joshua will oversee the writing of unit tests on avionics code. He will check that the goals for line coverage are met by the unit tests, and aim to have the unit tests ensure all code functions correctly for expected inputs.

The final aspect of testing the avionics code is to simulate a launch with robustness testing. This will involve encapsulating the functionality of the code for triggering flight events in some manner, and then testing that code against simulated inputs. These inputs will then be randomly altered in a variety of ways. If possible, this will be extended to include testing of the code for sensor interaction, with the code interacting with simulated sensors rather than actual sensors. The randomization will likely take one of the following forms, but may include other forms if the code calls for it.

\begin{itemize}
\item Throwing a value that is out of bounds (this includes the following)
  \begin{itemize}
  \item A value that is of the wrong type
  \item A value that is too large
  \item A value that is too small
  \end{itemize}
\item Having a sensor cease responding
\item Having a sensor continually send the same value
\item Offsetting a sensor's outputs by some set amount for the rest of the launch
\item Having a sensor feed a value that is not out of bounds, but is not correct
\item Having a sensor feed values for the rest of the launch that are incorrect and
  \begin{itemize}
  \item in bounds
  \item out of bounds
  \item either in or out of bounds
  \end{itemize}
\end{itemize}

The number of randomized failures will be set at the start of the test, but not the timing or the types of failures. Some failures may be weighted as being more significant, and therefore counted as a larger number of failures. The test will not be given a clear pass/fail result, but will instead write the outputs of the avionics code to a CSV file, which will be compared against the CSV output of the code with no randomization, as well as the inputs given by the simulation.

\subsection{Telemetry and Parsing Flight Data}
\subsubsection{Transmitting Telemetry}
The ECE subteam selected the BigRedBee BeeLineGPS telemetry module, which cannot be used with additional sensors and only transmits using APRS formatting.
This means that we cannot control which fields are being transmitted, or the packet formatting of the telemetry data. Each transmitter will send a packet once per second, containing the latitude, longitude, and altitude of the transmitter. These packets will be transmitted as audio data and can be decoded into a string in the APRS format. There will be two telemetry transmitters in use during the flight. One will be in the nosecone of the rocket, and the other will separate with the payload. They will be on slightly different radio frequencies because the timing is not guaranteed, so the signals may overlap. \subsubsection{Receiving Telemetry} \label{subsec:telemetry} Levi Willmeth will manage receiving telemetry and parsing flight data for this project. We will be using a software TNC called Direwolf to decode the audio tones into strings of text. Because the timing is not guaranteed and the signals may overlap, we will be using one primary receiver per radio signal, with an additional secondary receiver for redundancy. We expect to use four raspberry pi zeros to receive two incoming radio signals, to provide redundancy during signal processing. \begin{center} \includegraphics[width=0.65\textwidth]{images/parser_diagram.eps} \label{flow-of-data} \captionof{figure}{Flow of information into the parser.} \end{center} % Processing each signal will be done using affordable Raspberry Pi Zero computers, which will be connected and networked to a more powerful Raspberry Pi 3 B computer, using an OTG USB cable. This will allow each of the parsing computers to be powered and networked using a single wire. All of the ground computers will be housed in a portable, hard-shell carrying case that also contains a USB power bank. This will allow us to turn on the system, and begin parsing on multiple radio channels without any additional steps. When the parsing computers boot up, they will immediately begin running the Direwolf software that allows them to receive an audio source and parse it into a string of text.\cite{direwolf} They will also begin running a Python script to take the input string from Direwolf, validate it for content, and insert individual fields into a database on the local network. The database will be described in section \ref{subsec:database} of this document. These programs will be started as services so that they can be stopped, started, or automatically restart themselves as needed.\cite{script-as-service}\\ \begin{lstlisting}[frame=single] 2017-11-28 14:15:56UTC,ESRA-1,131812h3521.00N/07820.00W,496650,8642,446.875,PL \end{lstlisting} \label{example-aprs} \captionof{figure}{Example APRS packet.} Each incoming radio packet may be from our rocket, or it may be from an unrelated transmitter on the same frequency. This is because we are using uncontrolled analog radio frequencies that may contain information from other, unknown users. We will use Python to parse the string, validate that it contains the correct radio call sign for our payload or rocket, and perform a checksum to test for corruption during transmittal. If the packet does not pass checksum, contains an invalid call sign, or does not contain the correct data fields, a warning will be added to the parser log and the string will be inserted into a special database table for invalid inputs. This will allow us to inspect errors without discarding any data. 
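A minimal sketch of this validation step is shown below. The field layout follows the example packet above, and the expected callsigns are hypothetical placeholders; the exact format, and how the checksum is exposed, will depend on what Direwolf actually emits.

\begin{lstlisting}[frame=single]
# Sketch: validate one decoded APRS-style line before database insertion.
EXPECTED_CALLSIGNS = {"ESRA-1", "ESRA-2"}  # placeholder rocket/payload callsigns

def parse_packet(line):
    """Split a decoded packet; return its fields, or None if it looks invalid."""
    fields = line.strip().split(",")
    if len(fields) != 7:
        return None            # wrong field count: route to the invalid table
    timestamp, callsign, position = fields[0], fields[1], fields[2]
    if callsign not in EXPECTED_CALLSIGNS:
        return None            # not our transmitter: route to the invalid table
    return {"timestamp": timestamp, "callsign": callsign, "position": position}

if __name__ == "__main__":
    good = ("2017-11-28 14:15:56UTC,ESRA-1,"
            "131812h3521.00N/07820.00W,496650,8642,446.875,PL")
    print(parse_packet(good))
    print(parse_packet("garbage,from,another,station"))  # None
\end{lstlisting}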
\subsubsection{Parsing Data}
The parsing program described in \ref{subsec:telemetry} will be configured either to process incoming strings from Direwolf, or to accept a well-formatted file. This will allow the same parsing program that listens for radio inputs to parse the data recorded and stored on board the rocket and payload during flight.

After the flight, a human operator will be able to remove the SD card from the rocket and payload avionics flight computers, plug the SD card into a laptop, and point the parsing program at the SD card. Using the appropriate table name and steps similar to those used while parsing radio inputs, the import program will split the strings into individual fields and insert them into the database over the network.\\
\begin{lstlisting}[frame=single]
timestamp,acc_x1,acc_x2,acc_x3,acc_y1,acc_y2,acc_y3
2017-11-28 14:15:56UTC,27.2,56.7,32.2,234.2,235.7,463.4
2017-11-28 14:15:57UTC,27.1,56.7,32.2,234.2,235.7,463.4
2017-11-28 14:15:58UTC,27.2,56.8,32.2,234.2,235.7,463.4
\end{lstlisting}
\label{example-file}
\captionof{figure}{Example avionics output file.}

\subsection{Database}
\label{subsec:database}
Levi Willmeth will set up and manage a MariaDB database for this project. This project will collect several hundred thousand lines of sensor data from several different sources. All of this data will need to be sortable and searchable in order to interpret the results of our flight. Furthermore, we want to be able to record and analyze different flights independently. Using a database to store our flight data will allow us to accomplish all of these goals.

Because the rocket avionics records different types of sensor data than the payload avionics or the radio telemetry modules, we will use one table per type of data source. That means one table will hold all data received from a BeeLineGPS telemetry module, another table will hold all data imported from the payload avionics, and so on. These tables will also need to relate to each other in order to allow the display program to make appropriate connections between different sensors recorded on different hardware. This will be done using a Flights table with a primary key that will be used as a foreign key on each of the individual data tables.

\begin{center}
\includegraphics[width=\textwidth]{images/database_schema_v102.eps}
\label{database-schema}
\captionof{figure}{Early draft of the database schema.}
\end{center}

When the telemetry parsing Python script begins, it will check the Flights table of the database to see if there is a flight record with an Active status. This table and field will be used to synchronize the parsers so that records from one radio frequency can be related to records from another frequency at the same time. The Flights table has a primary key called 'flight\_id' that is stored as the foreign key 'f\_id' on the other related tables.

To eliminate duplicate rows caused by using redundant receivers, we will use a MySQL query that first checks if a row exists before inserting a new row. This can be done as a single query: \cite{unique-sql-insert}

\begin{lstlisting}[frame=single]
INSERT INTO BeelineGPS
  (f_id, time, lat, lon, alt, callsign)
SELECT * FROM
  (SELECT $cur_flight_id, $cur_time, $cur_lat,
          $cur_lon, $cur_alt, $cur_callsign) AS tmp
WHERE NOT EXISTS (
  SELECT id FROM BeelineGPS WHERE time = $cur_time
) LIMIT 1;
\end{lstlisting}

There may be situations where a given timestamp contains data across multiple tables. For example, the rocket avionics may record some types of data at 100 Hz, and other types of data at only 10 Hz.
Some of these records may share an identical timestamp, others may not. That is why we are using multiple tables and a foreign key instead of one larger table and assuming our timestamp is unique.

While graphing this data, we are likely to want to relate data from different tables. For example, if we recorded latitude and longitude in the BeelineGPS table, and acceleration in the Rocket\_Avionics table, we may want to combine that data to see our acceleration at different points across a map. This is where the database becomes exceptionally useful compared to flat files. We can use a join query to select records from both tables, based on the knowledge that they will have identical f\_id values and similar timestamps. The syntax would look something like this:

\begin{lstlisting}[frame=single]
SELECT BeelineGPS.lat, BeelineGPS.lon, Rocket_Avionics.acc_x
FROM BeelineGPS
INNER JOIN Rocket_Avionics
ON BeelineGPS.f_id=Rocket_Avionics.f_id
WHERE BeelineGPS.f_id=$cur_flight_id;
\end{lstlisting}

In this example the latitude may be sampled at only 1 Hz while the acceleration could be sampled at closer to 100 Hz. We could choose to average out many acceleration values to find the best fit at the moment we recorded the latitude, or we could let MySQL find a single value using the Rocket\_Avionics timestamp closest to the BeelineGPS timestamp.

\subsection{Display Components}
\subsubsection{Networking}
Allison will set up networking between the ground station Raspberry Pis. The ground station will consist of a Raspberry Pi 3, several radio receivers, several Raspberry Pi Zeros, and a battery. The Raspberry Pi Zeros will parse data received through the radios before sending it to the database and server hosted on the Raspberry Pi 3. The server will allow users to connect through an ad hoc network and display the graphical flight data on mobile devices.

USB OTG (On-The-Go) will be used to connect the four Pi Zeros to the Pi 3B, which can then assign them IP addresses and power the other Pis. Setting up this connection is a relatively straightforward process that involves editing configuration files on the Zeros to give them static IP addresses, and configuring the host Pi to forward network information and assign static IP addresses to each wired connection. \cite{OTG}

The connected Pis will also serve the graphed flight data to mobile devices around the ground station. This will be accomplished through an ad hoc network. The Pi 3B has an integrated WiFi module, and changing the interface configuration file at /etc/network/interfaces will allow the Pi to operate in ad hoc mode. \cite{config-adhoc}

\subsubsection{Web hosting}
Allison will manage the web-based user interface for graph viewing. The dynamic graphing display will be hosted by a Node.js server running on a networked Raspberry Pi 3B. Using express-handlebars, we will be able to display many different graphs while reusing code through partials. The flight data graphs generated by the dynamic graphing program will be served as HTML pages to connected mobile devices. Users will be able to select flights and graph types to view on their personal devices while connected to the network.

\subsubsection{Dynamic graphing}
Dynamic graphing will be managed by Joshua Novak. The dynamic graphs will be written in JavaScript, using the non-commercial version of CanvasJS. The graphs will be able to be displayed in an HTML page, and will update through queries to a database.
These updates will occur once a second, though new data may not be displayed if it is not available. At least the following charts will be displayed in the form of dynamic graphs:
\begin{itemize}
\item Location of the rocket on a map
\item Altitude of the rocket
\item Vertical velocity of the rocket
\end{itemize}
The following will be analyzed for launches:
\begin{itemize}
\item Temperature
\item Velocity
\item Acceleration
\item Barometric pressure
\end{itemize}

The first kind of graph that will be displayed is the map. The map will be generated with basic JavaScript. An image of a map around the launch site will be used, with JavaScript overlaying dots representing the recorded locations of the rocket. The dots will have an on-hover effect to display the GPS coordinates of that point, and the timestamp associated with it. Some additional effects may be added to make the graph more readable, such as gradually changing the hue of the dots over time, so that newer points appear different from older ones. This will only be used to show the location of the rocket on a map. The points at which particular flight events occur may be represented with a different image, such as a broken circle for separation, or a half circle once the parachute is deployed.

The other kind of graph that will be used is a basic value-versus-time graph. This graph will likely be generated using CanvasJS. It will display a maximum number of points relative to the size of the window, so that the graph scales well to different display sizes. These points will be the last several datapoints for the flight. The time will be determined by the timestamp. The value to be graphed against time will generally be taken directly from the database query where it is available, and calculated from other data when it is not available but can be calculated with a reasonable degree of accuracy. This will likely be used for all other fields.

Finally, we will use R for analyzing data recovered from the payload and the rocket. R is better suited for this task, as we want to be able to find and ignore outlier data as well as generate lines of best fit.
% ==============================================================================
\bibliography{references}
\bibliographystyle{IEEEtran}
\end{document}
{ "alphanum_fraction": 0.7793644082, "avg_line_length": 79.4392764858, "ext": "tex", "hexsha": "660e4275a1d91ff64812c53fcef9f1a72f8a21e1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6c534ad28667884c03485cdd01371290175842bb", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "OregonStateRocketry/30k2018-CS-Capstone", "max_forks_repo_path": "Documentation/Assignments/Design_Document/design_doc.tex", "max_issues_count": 34, "max_issues_repo_head_hexsha": "6c534ad28667884c03485cdd01371290175842bb", "max_issues_repo_issues_event_max_datetime": "2018-05-28T16:50:22.000Z", "max_issues_repo_issues_event_min_datetime": "2017-10-27T16:11:34.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "OregonStateRocketry/30k2018-CS-Capstone", "max_issues_repo_path": "Documentation/Assignments/Design_Document/design_doc.tex", "max_line_length": 797, "max_stars_count": 1, "max_stars_repo_head_hexsha": "6c534ad28667884c03485cdd01371290175842bb", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "OregonStateRocketry/30k2018-CS-Capstone", "max_stars_repo_path": "Documentation/Assignments/Design_Document/design_doc.tex", "max_stars_repo_stars_event_max_datetime": "2017-11-21T18:54:29.000Z", "max_stars_repo_stars_event_min_datetime": "2017-11-21T18:54:29.000Z", "num_tokens": 6880, "size": 30743 }
\documentclass[margin,line]{res} \usepackage{verbatim} \usepackage{url} \urlstyle{same} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \oddsidemargin -.5in \evensidemargin -.5in \textwidth=6.0in \itemsep=0in \parsep=0in \newenvironment{list1}{ \begin{list}{\ding{113}}{% \setlength{\itemsep}{0in} \setlength{\parsep}{0in} \setlength{\parskip}{0in} \setlength{\topsep}{0in} \setlength{\partopsep}{0in} \setlength{\leftmargin}{0.17in}}}{\end{list}} \newenvironment{list2}{ \begin{list}{$\bullet$}{% \setlength{\itemsep}{0in} \setlength{\parsep}{0in} \setlength{\parskip}{0in} \setlength{\topsep}{0in} \setlength{\partopsep}{0in} \setlength{\leftmargin}{0.2in}}}{\end{list}} \begin{document} \name{ \begin{tabular*}{7.4in} {@{\extracolsep{\fill}}lr} Toby Dylan Hocking & Curriculum Vitae \end{tabular*} } \begin{resume} \section{\sc Contact and General Info} \vspace{.05in} \begin{tabular*}{6.1in} {@{\extracolsep{\fill}}ll} NAU Building 90, Office 210 & Birth: 17 March 1984 in Newport Beach, California\\ 1295 S. Knoles Dr. & Citizenship: USA \\ Flagstaff, AZ 86011 & Language skills:\\ Web: \url{http://tdhock.github.io} & English (native)\\ E-mail: [email protected] & French (fluent since 2009)\\ \end{tabular*} \section{\sc Research Interests} Machine learning algorithms, statistical software, and data visualization techniques. Emphasis on efficient algorithms for large datasets, based on constrained optimization (regression, classification, ranking, clustering, segmentation, changepoint detection, survival analysis). Application domains include medicine, genomics, neuroscience, audio, internet, sensors, recommendation systems. \section{\sc Professional Experience} {\bf Northern Arizona University}, Flagstaff, Arizona, USA (2018-present).\\ \vspace*{-.1in} \begin{list2} \item[] Assistant Professor, School of Informatics, Computing, and Cyber Systems. \item[] ``Optimization algorithms for machine learning and interactive data analysis.'' \end{list2} {\bf McGill University}, Montreal, Canada (2014-2018).\\ \vspace*{-.1in} \begin{list2} \item[] Postdoc with Guillaume Bourque, Department of Human Genetics. \item[]``Changepoint detection and regression models for peak detection in genomic data.'' \end{list2} {\bf Tokyo Institute of Technology}, Tokyo, Japan (2013).\\ \vspace*{-.1in} \begin{list2} \item[] Postdoc with Masashi Sugiyama, Department of Computer Science. \item[] ``Support vector machines for ranking and comparing.'' \end{list2} {\bf Sangamo BioSciences}, Richmond, CA, USA (2006-2008).\\ \vspace*{-.1in} \begin{list2} \item[] Research Assistant with Jeff Miller in the Technology group. \item[] ``A web app for visualization and statistical analysis of experimental data.'' \end{list2} \section{\sc Education} {\bf \'{E}cole Normale Sup\'{e}rieure}, Cachan, France (2009-2012).\\ \vspace*{-.1in} \begin{list2} \item[] Ph.D. with Francis Bach, D\'{e}partement d'Informatique; Jean-Philippe Vert, Institut Curie. \item[] ``Learning algorithms and statistical software, with applications to bioinformatics." \end{list2} {\bf Universit\'e Paris 6}, Paris, France (2008-2009).\\ \vspace*{-.1in} \begin{list2} \item[] Master of Statistics, internship at INRA with Mathieu Gautier and Jean-Louis Foulley. \item[] ``A Bayesian Outlier Criterion to Detect SNPs under Selection in Large Data Sets." \end{list2} {\bf University of California, Berkeley}, CA, USA (2002-2006).\\ \vspace*{-.1in} \begin{list2} \item[] Double B.A. in Statistics, Molecular and Cell Biology; thesis in Statistics with Terry Speed. 
\item[] ``Chromosomal copy number analysis using SNP microarrays and a binomial test statistic.'' \end{list2} \section{\sc Honors and Awards (Selected)} Co-PI on National Science Foundation grant, \$3,000,000, Sept 2021 to Aug 2026. ``MIM: Discovering in reverse – using isotopic translation of omics to reveal ecological interactions in microbiomes.'' Air Force Research Laboratory, Summer Faculty Fellowship, May--July 2020, ``Machine learning algorithms for understanding physically unclonable functions based on resistive memory devices.'' PI on R Consortium Grant, \$34,000, Jan--Dec 2020, ``RcppDeepState: an easy way to fuzz test compiled code in R packages.'' ``Mobilit\'e entrant'' travel award to work with Guillem Rigaill in Universit\'e Evry, France, 2016. International useR conference, Best Student Poster Award, ``Adding direct labels to plots,'' 2011. INRIA/INRA (French computer science and agricultural research institutes), Ph.D. scholarship, 2009 (declined). UC Berkeley, Department of Statistics VIGRE research scholarship, 2001. UC Berkeley, Cal Band George Miller scholarship, 2000. \section{\sc Papers in progress and under review} Liehrmann A, {\bf Hocking TD}. Optimal multiple changepoint detection models for recognition of histone modification events in ChIP-Seq data. Barnwal A, Cho H, {\bf Hocking TD}. Survival regression with accelerated failure time model in XGBoost. {\bf Hocking TD}, Khare A. Learning penalty functions for changepoint detection using elastic-net regularized accelerated failure time models. Chaves AP, Egbert J, {\bf Hocking TD}, Doerry E, Gerosa MA. Chatbots language design: the influence of language use on user experience. Under review at ACM Transactions on Computer-Human Interaction. Abraham A, Prys-Jones T, De Cuyper A, Ridenour C, Hempson G, {\bf Hocking TD}, Clauss M, Doughty C. Improved estimation of gut passage time considerably affects trait-based dispersal models. Under review at {\it Functional Ecology}. {\bf Hocking TD}, Srivastava A. Labeled Optimal Partitioning. Preprint arXiv:2006.13967. Fotoohinasab A, {\bf Hocking TD}, Afghah F. A Greedy Graph Search Algorithm Based on Changepoint Analysis for Automatic QRS-Complex Detection. Under review at {\it Computers in Biology and Medicine}, arXiv:2004.13558. {\bf Hocking TD}, Vargovich J. Linear time dynamic programming for the exact path of optimal models selected from a finite set. Under review at {\it Journal of Computational and Graphical Statistics}, arXiv:2003.02808. Runge V, {\bf Hocking TD}, Romano G, Afghah F, Fearnhead P, Rigaill G. gfpop: an R Package for Univariate Graph-Constrained Change-point Detection. Under review at {\it Journal of Statistical Software}, arXiv:2002.03646. {\bf Hocking TD}. Regular expressions and reshaping using data tables and the nc package. Under review at R Journal. Venuto D, {\bf Hocking TD}, Spanurattana S, Sugiyama M. Support vector comparison machines. Under review at {\it Machine Learning}, arXiv:1401.8008. \section{\sc Peer-reviewed publications} In addition to peer-reviewed journals, I publish papers at computer science conferences such as {\it ICML} and {\it NeurIPS}, with double-blind peer reviews, and $\approx$20\% acceptance rates. Fotoohinasab A, {\bf Hocking TD}, Afghah F. A Graph-Constrained Changepoint Learning Approach for Automatic QRS-Complex Detection. Accepted for {\it Asilomar Conference on Signals, Systems, and Computers}, arXiv:2004.13558. {\bf Hocking TD}, Rigaill G, Fearnhead P, Bourque G. 
Generalized Functional Pruning Optimal Partitioning (GFPOP) for Constrained Changepoint Detection in Genomic Data. Accepted for publication in {\it Journal of Statistical Software}, arXiv:1810.00117. Fotoohinasab A, {\bf Hocking TD}, Afghah F. A Graph-constrained Changepoint Detection Approach for ECG Segmentation. In proceedings of {\it 42th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)}. {\bf Hocking TD}, Rigaill G, Fearnhead P, Bourque G. Constrained dynamic programming and supervised penalty learning algorithms for peak detection in genomic data. {\it Journal of Machine Learning Research} 21(87):1--40, 2020. {\bf Hocking TD}, Bourque G. Machine Learning Algorithms for Simultaneous Supervised Detection of Peaks in Multiple Samples and Cell Types. {\it Pacific Symposium on Biocomputing} 25:367-378. {\bf Hocking TD}. Comparing namedCapture with other R packages for regular expressions. {\it R Journal} (2019). doi:10.32614/RJ-2019-050 Jewell S, {\bf Hocking TD}, Fearnhead P, Witten D. Fast Nonconvex Deconvolution of Calcium Imaging Data. {\it Biostatistics} (2019), doi: 10.1093/biostatistics/kxy083. Depuydt P, Koster J, Boeva V, {\bf Hocking TD}, Speleman F, Schleiermacher G, De Preter K. Meta-mining of copy number profiles of high-risk neuroblastoma tumors. {\it Scientific Data} (2018). Alirezaie N, Kernohan KD, Hartley T, Majewski J, {\bf Hocking TD}. ClinPred: Prediction Tool to Identify Disease-Relevant Nonsynonymous Single-Nucleotide Variants. American Journal of Human Genetics (2018). doi:10.1016/j.ajhg.2018.08.005 Sievert C, Cai J, VanderPlas S, Khan F, Ferris K, {\bf Hocking TD}. Extending ggplot2 for linked and dynamic web graphics. {\it Journal of Computational and Graphical Statistics} (2018). Depuydt P, Boeva V, {\bf Hocking TD}, {\it et al}. Genomic Amplifications and Distal 6q Loss: Novel Markers for Poor Survival in High-risk Neuroblastoma Patients. Journal of the National Cancer Institute (2018). DOI:10.1093/jnci/djy022. Drouin A, {\bf Hocking TD}, Laviolette F. Max margin interval trees. {\it Neural Information Processing Systems (NeurIPS)}, 2017. {\bf Hocking TD}, Goerner-Potvin P, Morin A, Shao X, Pastinen T, Bourque G. Optimizing ChIP-seq peak detectors using visual labels and supervised machine learning. {\it Bioinformatics} (2017) 33 (4): 491-499. Shimada K, Shimada S, Sugimoto K, Nakatochi M, Suguro M, Hirakawa A, {\bf Hocking TD}, Takeuchi I, Tokunaga T, Takagi Y, Sakamoto A, Aoki T, Naoe T, Nakamura S, Hayakawa F, Seto M, Tomita A, Kiyoi H. Development and analysis of patient-derived xenograft mouse models in intravascular large B-cell lymphoma. {\it Leukemia} 2016. Chicard M, Boyault S, Colmet-Daage L, Richer W, Gentien D, Pierron G, Lapouble E, Bellini A, Clement N, Iacono I, Bréjon S, Carrere M, Reyes C, {\bf Hocking TD}, Bernard V, Peuchmaur M, Corradini N, Faure-Conter C, Coze C, Plantaz D, Defachelles A-S, Thebaud E, Gambart M, Millot F, Valteau-Couanet D, Michon J, Puisieux A, Delattre O, Combaret V, Schleiermacher G. Genomic copy number profiling using circulating free tumor DNA highlights heterogeneity in neuroblastoma. {\it Clinical Cancer Research} 2016. Maidstone R, {\bf Hocking TD}, Rigaill G, Fearnhead P. On optimal multiple changepoint algorithms for large data. {\it Statistics and Computing} (2016). doi:10.1007/s11222-016-9636-3 {\bf Hocking TD}, Rigaill G, Bourque G. PeakSeg: constrained optimal segmentation and supervised penalty learning for peak detection in count data. 
{\it International Conference on Machine Learning (ICML)}, 2015. Suguro M, Yoshida N, Umino A, Kato H, Tagawa H, Nakagawa M, Fukuhara N, Karnan S, Takeuchi I, {\bf Hocking TD}, Arita K, Karube K, Tsuzuki S, Nakamura S, Kinoshita T, Seto M. Clonal heterogeneity of lymphoid malignancies correlates with poor prognosis. {\it Cancer Sci.} 2014 Jul;105(7):897-904. {\bf Hocking TD}, Boeva V, Rigaill G, Schleiermacher G, Janoueix-Lerosey I, Delattre O, Richer W, Bourdeaut F, Suguro M, Seto M, Bach F, Vert J-P. SegAnnDB: interactive Web-based genomic segmentation. {\it Bioinformatics} (2014) 30 (11): 1539-1546. DOI:10.1093/bioinformatics/btu072 {\bf Hocking TD}, Wutzler T, Ponting K and Grosjean P. Sustainable, extensible documentation generation using inlinedocs. {\it Journal of Statistical Software} (2013), 54(6), 1-20. DOI:10.18637/jss.v054.i06 {\bf Hocking TD}, Schleiermacher G, Janoueix-Lerosey I, Boeva V, Cappo J, Delattre O, Bach F, Vert J-P. Learning smoothing models of copy number profiles using breakpoint annotations. {\it BMC Bioinfo.} 2013, 14:164. DOI:10.1186/1471-2105-14-164 {\bf Hocking TD}, Rigaill G, Bach F, Vert J-P. Learning sparse penalties for change-point detection using max-margin interval regression. {\it International Conference on Machine Learning (ICML)}, 2013. {\bf Hocking TD}, Joulin A, Bach F, Vert J-P. Clusterpath: an Algorithm for Clustering using Convex Fusion Penalties. {\it International Conference on Machine Learning (ICML)}, 2011. Gautier M, {\bf Hocking TD}, Foulley JL. A Bayesian outlier criterion to detect SNPs under selection in large data sets. {\it PloS ONE} 5 (8), e11913 (2010). Doyon Y, McCammon JM, Miller JC, Faraji F, Ngo C, Katibah GE, Amora R, {\bf Hocking TD}, Zhang L, Rebar EJ, Gregory PD, Urnov FD, Amacher SL. Heritable targeted gene disruption in zebrafish using designed zinc-finger nucleases. {\it Nature biotechnology} 26 (6), 702-70 (2008). \section{\sc Books, Chapters, Manuals} {\bf Hocking TD} and Killick R. {\it Changepoint detection algorithms and applications in R}. Textbook in preparation. {\bf Hocking TD}. Introduction to Machine Learning and Neural Networks. Chapter in textbook {\it Land Carbon Cycle Modeling: Matrix Approach, Data Assimilation, and Ecological Forecasting} edited by Yiqi Luo. (expected publication Jan 2022) {\bf Hocking TD}. Animated interactive data visualization using the grammar of graphics (The animint2 Manual), 17 web pages/chapters with interactive graphics and exercises. (2018) \section{\sc Conference Tutorials} {\bf Hocking TD}, Killick R. Introduction to optimal changepoint detection algorithms, {\it useR} 2017. {\bf Hocking TD}, Ekstr\o m CT. Understanding and creating interactive graphics, {\it useR} 2016. \section{\sc Invited talks (selected)} University of Waterloo, Université de Montréal, Sainte-Justine Children's Hospital, University of Québec à Montréal, Polytechnique Montréal (2017); Universit\'e Laval Centre for Big Data Research (2016); McGill Barbados epigenomics workshop (2015); Sapporo Japan Workshop on Machine Learning and Applications to Biology (2013); Google Research New York, Universit\'e Rennes, Universit\'e Angers, INRIA Lille (2012); Institut de Biologie de Lille (2011). \newpage \section{\sc Teaching} Fall 2020, Northern Arizona University, CS499/599, Unsupervised Learning. 
Summer 2020, 90 minute lecture ``Introduction to Machine Learning and Neural Networks'' for summer school on ``New Advances in Land Carbon Cycle Modeling.'' Spring 2020, Northern Arizona University, CS499, Deep Learning. Fall 2019, Northern Arizona University, CS/EE599, Reproducible Machine Learning Research. Spring 2019, Northern Arizona University, CS499, Optimization algorithms for machine learning. \section{\sc Professional Service} President of organizing committee for ``R in Montreal 2018'' conference. Co-administrator and mentor for R project in Google Summer of Code, since 2012. Reviewer since 2010: Technometrics, International Conference on Machine Learning (ICML), Advances in Neural Information Processing Systems (NeurIPS), Journal of Machine Learning Research (JMLR), Artificial Intelligence Review, Journal of Computational and Graphical Statistics (JCGS), R Journal, Bioinformatics, PLOS Computational Biology, BMC Bioinformatics, IEEE Transactions on Pattern Analysis and Machine Intelligence, Information and Inference. Editor for Journal of Statistical Software since 2018. \section{\sc Software Online (Selected)} Numerous free/open-source software contributions using R, C, C++, Python, and JavaScript. %\url{https://github.com/tdhock/} {\bf R}: contributions to base R regex functionalty, ggplot functionality in plotly package, data reshaping in data.table package. Maintainer of numerous R packages for machine learning (changepoint detection, regression, ranking, etc), directlabels for labeled figures, animint2 for animated interactive figures, inlinedocs for documentation generation. {\bf Python}: contributions to pandas module for data manipulation (str.extractall regex functionality), maintainer of GUI software for labeling and changepoint detection in genomic data (annotate\_regions, SegAnnDB). \begin{comment} \pagebreak \section{\sc References} \noindent {\bf Francis Bach}\\ Scientific Leader, SIERRA Project-Team\\ %Department of Computer Science\\ Institut National de Recherche en Informatique et en Automatique\\ %Phone: (604)822-9878\\ E-mail: [email protected]\\ Webpage: \url{http://www.di.ens.fr/~fbach} \noindent {\bf Michael Friedlander}\\ Associate Professor, Department of Computer Science\\ University of British Columbia\\ %Phone: (604)822-9878\\ E-mail: [email protected]\\ Webpage: http://www.cs.ubc.ca/$\sim$mpf \noindent {\bf Kevin Murphy}\\ Research Scientist, %Department of Computer Science, Department of Statistics\\ %University of British Columbia\\ Google\\ %Phone: (604)822-9878\\ E-mail: [email protected]\\ Webpage: http://www.cs.ubc.ca/$\sim$murphyk %\noindent {\bf R\'{o}mer Rosales}\\ %Research Scientist\\ %CAD and Knowledge Systems\\ %Siemens Medial Solutions USA, Inc.\\ %%Phone: (610)448 4793\\ %E-mail: [email protected]\\ %Webpage: http://people.csail.mit.edu/romer/\\ % %\noindent {\bf Glenn Fung}\\ %Research Scientist\\ %CAD and Knowledge Systems\\ %Siemens Medial Solutions USA, Inc.\\ %E-mail: [email protected]\\ %Webpage: http://pages.cs.wisc.edu/$\sim$gfung/\\ % \noindent {\bf Russell Greiner}\\ Professor, Department of Computing Science\\ University of Alberta\\ %Phone: (604)822-9878\\ E-mail: [email protected]\\ Webpage: http://webdocs.cs.ualberta.ca/$\sim$greiner/ \end{comment} \end{resume} \end{document}
{ "alphanum_fraction": 0.7613610282, "avg_line_length": 38.6430155211, "ext": "tex", "hexsha": "85c574190e22a64ba2e08a734b30dfaf2c7d741e", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-04-13T07:37:00.000Z", "max_forks_repo_forks_event_min_datetime": "2021-04-13T07:37:00.000Z", "max_forks_repo_head_hexsha": "0f5b7621e50ad01c8da74447f59f26b5cc7807c6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tdhock/tdhock.github.io", "max_forks_repo_path": "HOCKING-cv.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0f5b7621e50ad01c8da74447f59f26b5cc7807c6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tdhock/tdhock.github.io", "max_issues_repo_path": "HOCKING-cv.tex", "max_line_length": 120, "max_stars_count": 1, "max_stars_repo_head_hexsha": "0f5b7621e50ad01c8da74447f59f26b5cc7807c6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tdhock/tdhock.github.io", "max_stars_repo_path": "HOCKING-cv.tex", "max_stars_repo_stars_event_max_datetime": "2019-01-05T10:02:51.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-05T10:02:51.000Z", "num_tokens": 4966, "size": 17428 }
% Chapter 1 \chapter{Introduction} % Main chapter title \label{introduction} % For referencing the chapter elsewhere, use \ref{Chapter1} \lhead{Chapter 1. \emph{Introduction}} % This is for the header on each page - perhaps a shortened title \par \textit{Sentiment Analysis} is the technique of detecting sentiment/opinion behind a \textit{word, sentence, collection of sentences, documents,} and even a \textit{collection of documents} in some cases. Here, \textit{word, sentence, collection of sentences, documents,} and \textit{collection of documents} can be termed as chunks of text which determines granularity of the analysis. We can make use of the general term \emph{text} when the discussion applies to all these chunks and specify the exact granularity when required. \textit{Sentiment Analysis} might also involve classifying \emph{text} as either \emph{Objective} (factual information) or \emph{Subjective} (expressing some sentiment or opinion). This is called as \textit{Subjectivity Analysis}. It can also be considered as preprocessing for \textit{Sentiment Analysis} in some cases. But \textit{Subjectivity Analysis} is considered a task within \textit{Sentiment Analysis}. \textit{Sentiment analysis} is also known as \textit{Opinion Mining} and these two terms are used quite interchangeably. \par In most cases, \textit{Sentiment Analysis} is a binary classification task in which a \emph{text} is classified as either positive or negative. Examples of binary classification are \textit{movie reviews, product reviews, etc}. \textit{Ternary Classification}, wherein the \emph{text} is classified as positive, negative or objective also has many applications. \par This field is considerably new and is gaining of lot of attention. \textit{Movie reviews} of critics are classified as positive or negative by using this technique. Same is the case with product reviews. \textit{tweets, comments, etc.} are analyzed to detect the positive or negative sentiment behind them and sentiment about a particular entity. \textit{IR} also makes use of \textit{SA} these days to filter out subjective information and retrieve only the objective data. There is also a motivation for sentiment aware IR in which documents of relevant sentiment (either positive or negative) are fetched. \section{Motivation} \par The \textit{Motivation} behind this research is to study sentiment analysis in general and the how can some of the techniques used in information retrieval be applied to it in particular. The unrealistic assumption made before applying this technique in many cases is that subjective data is available. Subjective text has to be retrieved from the web. Also, most of the data available is domain dependent. This makes most of the supervised methods not feasible for domain independent sentiment analysis. Thus, there is a need for a semi-supervised or unsupervised method which is domain independent. The fact that most of the previous works take into account all the sentiment-bearing words and phrases is the motivation behind using deep semantics. \section{Problem Statement} \par The aim of this project is to use different approaches like topic modeling and deep semantics for sentiment analysis. To achieve the first goal different topic models need to be used for \textit{SA} and compared with each other on some standard data. To achieve the second goal, some sort of deep semantic processing has to be performed. This semantic information is then to be used to perform sentiment classification. 
\section{Contributions}
\par The following contributions were made during the course of this project.
\begin{enumerate}
\item Studied the \textit{LDA}, \textit{JST}, and \textit{Topical n-grams} models.
\item Evaluated \textit{LDA} for document clustering.
\item Used \textit{LDA} and the \textit{Topical n-gram} model for binary sentiment classification.
\item Evaluated \textit{LDA}, \textit{JST}, and the \textit{Topical n-gram} model for the binary sentiment classification task.
\item Proposed an approach for generation of positive and negative word lists using \textit{LDA}.
\item Developed a rule-based system for ternary sentiment classification using \textit{UNL}.
\end{enumerate}
%----------------------------------------------------------------------------------------
\section{Roadmap}\label{sec:roadmap}
% To be changed later
\par This report gives the problem statement and the contributions in \cref{introduction}. \cref{sa} starts with a psychological viewpoint of \textit{sentiment}. This is followed by a formal problem definition; then the types of \textit{SA}, challenges in \textit{SA}, and some applications of \textit{SA} are listed. \cref{sa} also explains basic machine learning techniques and their applications in \textit{SA}. \cref{ir} starts with the basics of \textit{IR}. It then moves on to discuss the various \textit{text modeling} approaches prevalent in \textit{IR}, with a focus on \textit{LDA}. It also shows how \textit{SA} can be used for sentiment-aware \textit{IR}. \cref{topicmodeledsa} starts with the use of \textit{LDA} for sentiment analysis. It then discusses two models which combine sentiment and topics, along with their pros and cons. The \textit{Topical n-grams} model is also explained, together with a technique for using it in the sentiment classification task. \cref{unl} focuses on the second aspect of this research, which is to make use of deep semantics for sentiment analysis. \cref{conclusions} concludes and hints at some future work.

\section*{SUMMARY}
In this chapter, the problem statement was introduced along with the motivation. The significant contributions made so far have been listed. The structure of the report was described in \sref{sec:roadmap}. In the next chapter, we will learn about \textit{Sentiment Analysis}, several machine learning techniques, and how these are used in \textit{Sentiment Analysis}.
\clearpage
{ "alphanum_fraction": 0.7841714479, "avg_line_length": 72.2682926829, "ext": "tex", "hexsha": "03d3cde3525d3b5e54399eef2719580a2a9efc98", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3b1b8b0b3b33d3728f000a4260aa2e264df39079", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "nikolodien/Paper-submissions", "max_forks_repo_path": "Report/tex_files/Chapters/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3b1b8b0b3b33d3728f000a4260aa2e264df39079", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "nikolodien/Paper-submissions", "max_issues_repo_path": "Report/tex_files/Chapters/introduction.tex", "max_line_length": 174, "max_stars_count": null, "max_stars_repo_head_hexsha": "3b1b8b0b3b33d3728f000a4260aa2e264df39079", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "nikolodien/Paper-submissions", "max_stars_repo_path": "Report/tex_files/Chapters/introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1354, "size": 5926 }
\chapter{Quantum mechanics}
\cite{manousakis2016practical}

\section{Understanding the need for quantum mechanics}

Classical mechanics works for slow big things.
Relativity works for fast big things.
Non-relativistic quantum mechanics works for slow small things.
Relativistic quantum mechanics works for fast small things.

We abbreviate \emph{quantum mechanics} to QM.

\begin{table}[h]
  \centering
  \begin{tabular}{l|ll}
    & slow & fast \\ \hline
    small & non-relativistic QM & relativistic QM \\
    big & Galilean relativity & Einsteinian relativity
  \end{tabular}
  \caption{Four theories}
\end{table}

% https://en.wikipedia.org/wiki/History_of_quantum_mechanics
% http://theorie2.physik.uni-erlangen.de/index.php/Papers_from_the_beginning_of_quantum_mechanics
% Schrodinger's paper
% https://journals.aps.org/pr/abstract/10.1103/PhysRev.28.1049 (paywall)
% http://web.archive.org/web/20081217040121/http://home.tiscali.nl/physis/HistoricPaper/Schroedinger/Schroedinger1926c.pdf

Dirac's 1978 4th edition \emph{Principles of quantum mechanics} \cite{dirac1978principles}

Self-interference? Counterintuitive.

% https://en.wikipedia.org/wiki/Mathematical_formulation_of_quantum_mechanics#Postulates_of_quantum_mechanics
% https://en.wikipedia.org/wiki/Matter_wave
% https://en.wikipedia.org/wiki/Matter_wave#de_Broglie_relations
% https://en.wikipedia.org/wiki/Pilot_wave
% causal interpretation of quantum mechanics
% https://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory
% https://en.wikipedia.org/wiki/Louis_de_Broglie
% https://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation
% https://en.wikipedia.org/wiki/Uncertainty_principle

The \emph{Heisenberg uncertainty principle}?

% https://en.wikipedia.org/wiki/Wave_function#Definition_.28one_spinless_particle_in_1d.29

Example wave function?

Dirac's relativistic quantum mechanics

Positron

Quantum tunneling.
It boosts nuclear decay.
With it, a particle can escape a trap.
But it prevents smaller computers.

\section{Knowing the history of quantum mechanics? Not the best way to learn?}

Bohr model of hydrogen atom

From Wikipedia\footnote{\url{https://en.wikipedia.org/wiki/Photoelectric_effect\#History}}.

1887: Heinrich Hertz discovered the photoelectric effect.

1902: Philipp Lenard observed that the energy of an emitted electron increased with the frequency of the light.

% https://en.wikipedia.org/wiki/Annus_Mirabilis_papers#Photoelectric_effect

1905: Albert Einstein explained that light is a particle, and the energy of a light particle is \(E = hf\).

Bohr inferred that, because the hydrogen emission spectrum lines are discrete, the hydrogen electron can only occupy certain orbitals, with discrete spacing between them.

% https://en.wikipedia.org/wiki/Photoelectric_effect

\index{effect!photoelectric}
\emph{Photoelectric effect}:
High-frequency light shone onto metal dislodges electrons.
Low-frequency light has no effect.
The voltage required to stop the dislodged electron depends on the frequency of the light.
The current depends on the intensity of the light.

The \emph{work function} of the metal is \( \phi = h f_0 \), the energy required to dislodge its electron.
The rest of the energy becomes the kinetic energy of the electron.
The kinetic energy of an ejected electron is \( E = h f - \phi \).
The energy of a photon with frequency \(f\) is \( E = hf \).
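To make the photoelectric relation concrete, here is a small worked example with assumed numbers (the work function below is illustrative, not that of a specific metal).
Take \( \phi = \SI{2.0}{eV} \) and light of frequency \( f = \SI{1.0e15}{Hz} \).
Since \( h \approx \SI{4.14e-15}{eV.s} \),
\[
E = hf - \phi \approx \SI{4.14}{eV} - \SI{2.0}{eV} \approx \SI{2.1}{eV},
\]
so the voltage required to stop the dislodged electron would be about \SI{2.1}{V}.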
\emph{Stern\textendash{}Gerlach experiment} % https://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach_experiment \emph{Zeeman effect: electron spin splits spectral lines} \emph{Stark effect: electric field splits spectral lines} The 1914 \emph{Franck\textendash{}Hertz experiment}\footnote{\url{https://en.wikipedia.org/wiki/Franck\%E2\%80\%93Hertz_experiment}} hurled electrons at mercury vapor. No electron lost less than \SI{4.9}{eV} of kinetic energy. This experiment supports Bohr's idea of quantized orbitals? \cite{franck1967zusammenstosse} % https://en.wikipedia.org/wiki/Spectral_line Every atom has a unique spectral line, an emission pattern. To identify the atom, we put it in a \emph{spectrometer}, and match the emitted spectral line to known patterns\footnote{We can find some known patterns in \url{https://en.wikipedia.org/wiki/Spectral_line}.}. Such activity is called \emph{spectrometry}. Spectral lines also tell us what stars are made of. It was 1925. Heisenberg was trying to explain hydrogen \emph{spectral lines}. While the \emph{old quantum theory} was trying to fix classical mechanics, Heisenberg suggested that we start afresh instead \cite{heisenberg1925quantum}. \footnote{\url{http://www.vub.ac.be/CLEA/IQSA/history.html}} \Hyperlink{https://en.wikipedia.org/wiki/Matrix_mechanics}{Understanding matrix mechanics} Born\textendash{}Heisenberg\textendash{}Jordan formulation: \footnote{\url{http://fisica.ciens.ucv.ve/~svincenz/SQM333.pdf}} infinite matrices. What? % https://en.wikipedia.org/wiki/Davisson%E2%80%93Germer_experiment 1923\textendash{}1927: Davisson\textendash{}Germer experiment of de Broglie wavelength: de Broglie wavelength is about the relationship between the momentum and the wave vector (and thus the wave number) of free particles. \( p = \hbar k \). \cite{okun2012abc} 1927: Davisson and Germer's electron diffraction experiment: % https://en.wikipedia.org/wiki/History_of_quantum_field_theory Meanwhile, Paul Dirac was trying to quantize the electromagnetic field. % https://en.wikipedia.org/wiki/Quantum_electrodynamics % https://en.wikipedia.org/wiki/Precision_tests_of_QED % quantum mechanics in one dimension % http://www.tcm.phy.cam.ac.uk/~bds10/aqp/handout_1d.pdf % QM exercises % https://www.math.temple.edu/~prisebor/qm1.pdf % https://plato.stanford.edu/entries/qt-nvd/#3 % In a letter to Birkhoff from 1935, von Neumann says: “I would like to make a confession which may seem immoral: I do not believe in Hilbert space anymore” %Heisenberg 1925 paper %scan of English translation %http://www.mat.unimi.it/users/galgani/arch/heis25ajp.pdf %backup link %http://fisica.ciens.ucv.ve/~svincenz/SQM261.pdf %scan of German original? 
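A quick worked example of the de Broglie relation above (standard textbook numbers, included for illustration): an electron accelerated through a potential difference \(V\) has momentum \( p = \sqrt{2 m_e e V} \), so
\begin{align}
  \lambda = \frac{h}{p} = \frac{h}{\sqrt{2 m_e e V}} \approx \frac{1.23\ \text{nm}}{\sqrt{V/\text{volt}}}.
\end{align}
At the \SI{54}{V} used by Davisson and Germer this gives \( \lambda \approx 0.17\ \text{nm} \), comparable to the atomic spacing in their nickel crystal, which is why diffraction was observable.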
%http://www.chemie.unibas.ch/~steinhauser/documents/Heisenberg_1925_33_879-893.pdf %on that paper %The 1925 Born and Jordan paper “On quantum mechanics” %http://people.isy.liu.se/icg/jalar/kurser/QF/references/onBornJordan1925.pdf % Heisenberg's 1924 paper % Heisenberg\textendash{}Born\textendash{}Jordan "On quantum mechanics" % http://people.isy.liu.se/icg/jalar/kurser/QF/references/onBornJordan1925.pdf %2004 paper %Understanding Heisenberg's 'Magical' Paper of July 1925: a New Look at the Calculational Details %https://arxiv.org/pdf/quant-ph/0404009.pdf %https://arxiv.org/abs/quant-ph/0404009 % https://en.wikipedia.org/wiki/Heisenberg_picture 1961: Clauss J\"onsson electron double-slit experiment: % https://en.wikipedia.org/wiki/Interaction-free_measurement % https://en.wikipedia.org/wiki/Elitzur%E2%80%93Vaidman_bomb_tester % https://en.wikipedia.org/wiki/Mach%E2%80%93Zehnder_interferometer Quantum physics and nuclear physics\footnote{\url{https://www.patana.ac.th/secondary/science/anrophysics/ntopic13/commentary.htm}} \section{Working with complex numbers} The \emph{imaginary number}\footnote{There is nothing imaginary about the imaginary number. There is nothing real about the real number either. Descartes coined these names, and they have stuck.} is \( i = \sqrt{-1} \). Thus, \(i^2 = -1\). The set of \emph{complex numbers} is \( \Complex = \{ a + bi ~|~ a, b \in \Real \} \). Example of complex number: \( 1 + 2i \). Every real number is a complex number: a real number is a complex number with zero imaginary part. Thus, \( \Real \subset \Complex \). \section{Working with complex vectors} A \emph{complex vector} is a vector where every element is a complex number. The set of \(n\)-dimensional complex vectors is \(\Complex^n\). The \emph{dual} of a vector space \(V\) is \(V^*\) which is obtained by transposing every element of \(V\). Thus, every column vector in \(V\) becomes a row vector in \(V^*\). Also, \((V^*)^* = V\). The \emph{complex Hilbert space} is a subset of \(\Complex^\infty\). \emph{Dirac bra-ket notation}: \(\bra{A}\) (read ``bra-\(A\)'') is a row vector in the dual space of \(\Complex^\infty\). \(\ket{B}\) (read ``ket-\(B\)'') is a column vector in \(\Complex^\infty\). \(\braket{A}{B}\) is matrix multiplication \(\bra{A} \cdot \ket{B}\). The result of \(\bra{A} \cdot \ket{B}\) is a complex number. A ket represents a \emph{quantum state}. \section{Working with operators} % FIXME COIK % https://en.wikipedia.org/wiki/Projective_Hilbert_space % https://en.wikipedia.org/wiki/Quantum_state A \emph{pure quantum state} is a ray in complex Hilbert space. % https://en.wikipedia.org/wiki/Configuration_space_(physics) A \emph{configuration} of a \(d\)-dimensional system with \(n\) particles consists of \(d \times n\) real numbers. The \emph{configuration space} of the system is the set of all such points. An \emph{operator} is a higher-order function? \footnote{\url{https://en.wikipedia.org/wiki/Operator_(physics)\#Examples_of_applying_quantum_operators}} Linear operator, adjoint, Hermitian, matrix % https://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality For example, consider a ball falling near the ground. How do we say it in the language of operators? \section{Working with wave functions} The \emph{wave function} of a system is a function that takes a configuration of the system and gives a complex number. Example: the wave function of a system with \(n\) particles has the shape \(\psi(x_1,\ldots,x_n,t)\) where every \(x_k : \Real^3\) and \(t:\Real\). 
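To make the bra--ket inner product from the section above concrete, here is a tiny numerical sketch (assuming Python with NumPy; purely illustrative, and in \(\Complex^3\) rather than an infinite-dimensional space):

\begin{verbatim}
import numpy as np

# A "ket" as a column vector in C^3, and <A|B> as
# (conjugate transpose of |A>) times |B>.
ket_A = np.array([[1 + 1j], [0], [2j]])
ket_B = np.array([[3], [1j], [1]])

bra_A = ket_A.conj().T            # the bra <A| (dual of |A>)
braket = (bra_A @ ket_B).item()   # a single complex number
print(braket)                     # (1-1j)*3 + 0 + (-2j)*1 = (3-5j)
\end{verbatim}

The dual (bra) is the conjugate transpose of the ket, and \(\braket{A}{B}\) is a single complex number, as stated above.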
\emph{Born's statistical interpretation}: The value of \( |\psi(c)|^2 \) is the \emph{probability density} of
measuring/finding/observing the configuration \(c\)?

Sobolev space? \(L^2\) space?

The momentum of the object? The position of the object?

% FIXME COIK
An \emph{observable} is a linear self-adjoint operator on a Hilbert space.

% https://en.wikipedia.org/wiki/Schr%C3%B6dinger_picture

The \emph{Schr\"odinger equation} constrains the wave function of a system?

% E.T. Jaynes's book
% http://bayes.wustl.edu/etj/prob/book.pdf

% https://en.m.wikipedia.org/wiki/Old_quantum_theory
% https://en.m.wikipedia.org/wiki/Transformation_theory_(quantum_mechanics)

% https://physics.stackexchange.com/questions/1417/general-relativity-gravitation-in-time-and-one-spatial-dimension

% Schutz's book?

An Introduction to Relativistic Quantum Mechanics I. From Relativity to Dirac Equation
% https://arxiv.org/pdf/0708.0052.pdf

% https://ocw.mit.edu/courses/physics/8-323-relativistic-quantum-field-theory-i-spring-2008/lecture-notes/

% https://en.wikipedia.org/wiki/Quantum_field_theory

Introduction to Relativistic Quantum Field Theory
% http://www.tep.physik.uni-freiburg.de/lectures/QFT14/qft.pdf

% quantum mechanics made simple, weng cho CHEW
% http://wcchew.ece.illinois.edu/chew/course/QMALL20121005.pdf

% rather unstructured
% http://studylib.net/

\subsection{Modeling a lone one-dimensional free particle}

The wave function of one free (not influenced by any external fields) spinless one-dimensional particle is
\( \psi(x,t) = A \exp(i(k x - \omega t)) \).

Momentum? Energy? Position?

\subsection{Modeling two one-dimensional particles}

Simulation hypothesis.
Assume the simulation hypothesis: the Universe is a simulation.
How do we overload it?
How do we crash it?
How do we glitch it?
How do we hack it?
How do we debug it?

A ket is a column vector.
A bra is a row vector.
The dual of a ket is its \emph{conjugate transpose}.
A ket represents a \emph{quantum state}?

quantum locking\footnote{\url{https://www.ted.com/talks/boaz_almog_levitates_a_superconductor/transcript}}
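Returning to the free-particle questions above (momentum, energy, position), one way to answer them, assuming the plane-wave form \( \psi(x,t) = A e^{i(kx - \omega t)} \):
\begin{align}
  \hat{p}\,\psi = -i\hbar\,\partial_x \psi = \hbar k\,\psi,
  \qquad
  i\hbar\,\partial_t \psi = \hbar\omega\,\psi,
\end{align}
so the momentum is \( p = \hbar k \) and the (non-relativistic) energy is \( E = \hbar\omega = \hbar^2 k^2 / 2m \). The position, on the other hand, is completely undetermined: \( |\psi|^2 = |A|^2 \) is the same everywhere, consistent with the uncertainty principle mentioned earlier.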
{ "alphanum_fraction": 0.7664091025, "avg_line_length": 35.9054878049, "ext": "tex", "hexsha": "39ce74c8cbaf05ca51e082f75335848fb587fa42", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-10-02T15:20:22.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-02T15:20:22.000Z", "max_forks_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_forks_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_forks_repo_name": "edom/work", "max_forks_repo_path": "research/physics/mechanics-quantum.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_issues_repo_issues_event_max_datetime": "2022-02-16T00:55:32.000Z", "max_issues_repo_issues_event_min_datetime": "2020-12-02T18:37:37.000Z", "max_issues_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_issues_repo_name": "edom/work", "max_issues_repo_path": "research/physics/mechanics-quantum.tex", "max_line_length": 156, "max_stars_count": null, "max_stars_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_stars_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_stars_repo_name": "edom/work", "max_stars_repo_path": "research/physics/mechanics-quantum.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3256, "size": 11777 }
\documentclass[12pt]{article} % Add "draft" option for draft mode \usepackage[paper=a4paper, top=25mm, bottom=25mm]{geometry} % Font and encoding \usepackage{lmodern} % Better computer modern font \usepackage[T1]{fontenc} % Better glyph encoding \usepackage[utf8]{inputenc} % Necessary for unicode characters \usepackage[english]{babel} % Necessary for correct hyphenation \usepackage{textcomp} % Necessary to display unicode characters like € \usepackage{csquotes} % Quotes (\MakeOuterQuote{"} needed!) % Other packages \usepackage[style=authoryear]{biblatex} % Bibliography \usepackage[parfill]{parskip} % Add space between paragraphs \usepackage[hidelinks]{hyperref} % Clickable \ref and \cite \usepackage{graphicx} % Needed for figures \usepackage{booktabs} % Needed for tables \usepackage{caption} % Needed for captionsetup \usepackage{subcaption} % Needed for subfigure and subtable \usepackage{amsmath} % Better math environments and \text \usepackage{physics} % e.g. for derivative formatting \usepackage{siunitx} % \si{\unit} and \SI{value}{\unit} \usepackage{titlesec} % Needed for titleformat \usepackage{authblk} % Authors formatting % Setup \MakeOuterQuote{"} % \setcounter{secnumdepth}{0} % \titleformat{\chapter}{\normalfont\bfseries\Huge}{\thechapter}{20pt}{} % \setlength{\parskip}{.5\baselineskip} \setlength\parindent{0pt} \captionsetup{width=.9\linewidth} % \captionsetup[subfigure]{width=.9\linewidth} \addbibresource{bib.bib} \title{Effects of Layer Freezing when Transferring DeepSpeech from English to German} \author[1]{\textbf{Onno Eberhard}} \author[ ]{\textbf{Torsten Zesch}} \affil[ ]{Language Technology Lab} \affil[ ]{University of Duisburg-Essen} \affil[1]{\href{mailto:[email protected]}{\texttt{[email protected]}}} \date{} \begin{document} \maketitle \begin{abstract}\noindent In this paper, we train Mozilla's DeepSpeech architecture on a German language speech dataset and compare the results of different training methods. We build on previous efforts by \textcite{agarwal-zesch-2019-german} and reproduce their results by training the model from scratch. We improve upon these results by using an English pretrained version of DeepSpeech for weight initialization and experiment with the effects of freezing different layers during training. We see that freezing even one layer already improves the results dramatically. \end{abstract} \section{Introduction} The field of automatic speech recognition is dominated by research specific to the English language. There exist plenty available text-to-speech models pretrained on (and optimized for) English data. When it comes to German, the range of available pretrained models becomes much sparser. In this paper, we train Mozilla's implementation\footnote{\url{https://github.com/mozilla/DeepSpeech}} of Baidu's DeepSpeech architecture \parencite{hannun2014deep} on German speech data. We use transfer learning to leverage the availability of a pretrained English version of DeepSpeech and observe the difference made by freezing different layers during training. The rationale for using transfer learning is not only that English and German are closely related languages. In fact, one could argue that they are very different in this context, because DeepSpeech is trained to directly infer written characters from audio data and English and German pronunciations of some characters differ greatly. 
However, the first few layers of the DeepSpeech network are likely not inferring the final output character, but rather lower lever features of the spoken input, such as phonemes, which are shared across different languages. Thus, this approach should also work for languages which are not related at all. It is to be expected that the model should give better results when trained on a small dataset than a model trained from scratch, because it does not have to learn these lower level features again. \section{Training} \subsection{DeepSpeech architecture} Mozilla's DeepSpeech implementation differs in many ways from the original model presented by \textcite{hannun2014deep}. The architecture is described in detail in the official documentation\footnote{\url{https://deepspeech.readthedocs.io/en/latest/DeepSpeech.html}} and is depicted in Figure \ref{fig:ds}. From the raw speech data, Mel-Frequency Cepstral Coefficients \parencite{imai1983cepstral} are extracted and passed to a 6-layer deep recurrent neural network. The first three layers are fully connected with a ReLU activation function. The fourth layer is a Long Short-Term Memory unit \parencite{hochreiter1997long}; the fifth layer is again fully connected and ReLU activated. The last layer outputs probabilities for each character in the language's alphabet. It is fully connected and uses a softmax activation for normalization. The character-probabilities are used to calculate a Connectionist Temporal Classification (CTC) loss function \parencite{graves2006connectionist}. The weights of the model are optimized using the Adam method \parencite{kingma2014adam} with respect to the CTC loss. \begin{figure}[ht] \centering \includegraphics[width=.85\textwidth]{ds.png} \caption{DeepSpeech architecture (adapted from the official documentation\protect\footnotemark[\value{footnote}])} \label{fig:ds} \end{figure} \subsection{Training Details} \label{sec:training} In transfer learning, all weights of the model are initialized to those of the English pretrained model, which is provided by Mozilla\footnote{\url{https://github.com/mozilla/DeepSpeech/releases}}. In addition to transfer learning, we also train the model from scratch with random weight initialization, thereby reproducing a result from \textcite{agarwal-zesch-2019-german}. In total, we train 6 different models: \begin{enumerate} \item The whole model from scratch ("Complete Training") \item The model with weights initialized to those of the English pretrained model, all weights are optimized during training ("Simple Transfer") \item The English-initialized model with the first layer frozen \item The English-initialized model with the first two layers frozen \item The English-initialized model with the first three layers frozen \item The English-initialized model with the first three and the fifth layer frozen \end{enumerate} We used Mozilla's DeepSpeech version 0.7.4 for training the model from scratch and version 0.7.3 for the transfer learning approach. The complete training script is available online\footnote{\url{https://github.com/onnoeberhard/deepspeech-paper/blob/master/training.sh}}. The modified versions of DeepSpeech that utilize layer freezing are also available online\footnote{\url{https://github.com/onnoeberhard/deepspeech-transfer}, the different versions with a different number of frozen layers are in the branches \emph{transfer-1}, \emph{\mbox{transfer-2}}, \emph{transfer} and \emph{\mbox{transfer-4}}.}. 
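To illustrate the layer-freezing mechanism used here (described in more detail below), the following is a rough, hypothetical sketch in Keras-style TensorFlow; it is not the actual modification of Mozilla's DeepSpeech code, whose training script is linked above:

\begin{verbatim}
import tensorflow as tf

def build_toy_deepspeech(n_frozen_layers=3, alphabet_size=32):
    """Toy stand-in for the 6-layer DeepSpeech-like network.

    Only meant to show how freezing the first layers can be
    expressed; layer sizes and activations are placeholders.
    """
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(2048, activation="relu"),     # layer 1
        tf.keras.layers.Dense(2048, activation="relu"),     # layer 2
        tf.keras.layers.Dense(2048, activation="relu"),     # layer 3
        tf.keras.layers.LSTM(2048, return_sequences=True),  # layer 4
        tf.keras.layers.Dense(2048, activation="relu"),     # layer 5
        tf.keras.layers.Dense(alphabet_size),               # layer 6
    ])
    # Frozen layers keep their (pretrained) weights and are
    # excluded from gradient updates during fine-tuning.
    for layer in model.layers[:n_frozen_layers]:
        layer.trainable = False
    return model
\end{verbatim}

In the actual experiments the last layer is re-initialized in any case, because the German alphabet is larger than the English one (see below).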
The weights were frozen by adding \texttt{trainable=False} at the appropriate places in the TensorFlow code. For all models, we had to reinitialize the last layer, because of the different alphabet sizes of German and English (ä, ö, ü). We trained the models on the German-language Mozilla Common Voice 2.0 speech dataset\footnote{\url{https://voice.mozilla.org/en/datasets}}. For inference and testing we used the language model KenLM \parencite{heafield-2011-kenlm}, trained on the corpus described in \parencite[Section~3.2]{Radeck-Arneth2015}. The text corpus consists of a mixture of text from the sources Wikipedia and Europarl as well as crawled sentences. The whole corpus was preprocessed with MaryTTS \parencite{schroder2003german} and cleaned in the same way as described in \textcite{agarwal-zesch-2019-german}. All these steps were chosen to be identical to those taken by \textcite{agarwal-zesch-2019-german}, such that comparison of results would be as meaningful as possible. In training each model, we used a batch size of 24, a learning rate of 0.0005 and a dropout rate of 0.4. We did not perform any hyperparameter optimization. These hyperparameters, as well as those selected for training the language model, differ from those chosen by \textcite{agarwal-zesch-2019-german}. This explains why our results differ even though the same model and training data was used. The training was done on a Linux machine with 96 Intel Xeon Platinum 8160 CPUs @ 2.10GHz, 256GB of memory and an NVIDIA GeForce GTX 1080 Ti GPU with 11GB of memory. Training the models for 30 epochs took approximately one hour per model. \section{Results} The test results from the six different models described in section \ref{sec:training} are compiled in table \ref{tab:results}. For testing, the epoch with the best validation loss during training was taken for each model. The last row is taken from \textcite[Table~3]{agarwal-zesch-2019-german} and describes only the result of training with the same data as we did. The difference between our result without transfer learning (WER 0.697) and theirs (WER 0.797) likely stems from the difference in hyperparameters used. Figures \ref{fig:3c} and \ref{fig:4c} show the learning curves for all training procedures, with the green line (3 frozen layers) being the same in both plots. The epochs used for testing are also marked in the figures. \begin{table}[ht] \centering \vspace{5mm} \begin{tabular}{lrrr} \toprule Method & WER \\ \midrule Complete Training & 0.697 \\ Simple Transfer & 0.627 \\ 1 Frozen Layer & 0.483 \\ 2 Frozen Layers & 0.443 \\ \textbf{3 Frozen Layers} & \textbf{0.437} \\ 4 Frozen Layers & 0.462 \\ \cite{agarwal-zesch-2019-german} & 0.797 \\ \bottomrule \end{tabular} \caption{Testing results (Word Error Rate)} \label{tab:results} \end{table} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{../3curves.pdf} \caption{Learning curves: With and without transfer learning and layer freezing} \label{fig:3c} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{../4curves.pdf} \caption{Learning curves: Comparison of freezing a different number of layers} \label{fig:4c} \end{figure} The best results were achieved by the model with the first three layers frozen during training. It is notable however, that the other three models that utilize layer freezing are not far off. The training curves look remarkably similar (see Figure~\ref{fig:4c}). All four models achieve much better results than the two models without layer freezing. 
The results seem to indicate that freezing the first layer brings the largest advantage in training, with diminishing returns on freezing the second and third layers. Additionally freezing the fifth layer slightly worsens the result. The model with four frozen layers could only optimize the LSTM weights and the weights of the output layer. It is surprising that it still achieves good results. It might be interesting to see what happens when the LSTM layer is frozen. It is probable that with a larger dataset the benefits of freezing weights decrease and better results are achieved with freezing fewer or no layers. However, for languages or dialects with little available training data this transfer learning approach seems promising. A next step might be to train these models with more training data and see if layer freezing is still beneficial. The chosen speech dataset (Mozilla Common Voice 2.0) is not very large; \textcite{agarwal-zesch-2019-german} achieved a best result of 0.151 WER when training the model on a large dataset, in contrast to a result of 0.797 WER when training the same model on only the dataset used here. The TensorFlow checkpoint for our best model (3 frozen layers) is available online\footnote{\url{https://onnoeberhard.com/deepspeech}}. \section{Further Research} An interesting idea for further research is to use a different pretrained model than the English one. English seems to work alright for transferring to German, but it is possible that the lower level language features extracted by a model only trained for recognizing English speech are not sufficient for transferring to certain other languages. For example, when just transcribing speech there is no need for such a model to learn intonation features. This might be a problem when trying to transfer such a pretrained model to a tonal language like Mandarin or Thai. There might also be phonemes that don't exist or are very rare in English but abundant in other languages. For these reasons one could try and build a new model that is trained on language data from many different languages, with the same task of transcribing speech to text. The model will not be good at this task, because it would have to learn all these languages at the same time and differentiate between them, which is not easy without a very large number of tunable parameters\footnote{A GPT-3-sized \parencite{brown2020language} model might be able to learn many languages simultaneously}. However, if trained correctly, it should at least extract the lower level features from these languages. This means that it should be possible to transfer this "bad" model to any language and get good results without large amounts of training data, when using layer freezing again. One might want to make the initial "bad" model much larger than DeepSpeech because it can be trained on large amounts of speech data from many languages. The size does not matter too much when transferring, because most of the weights would be frozen. To implement this idea, the DeepSpeech implementation used in this paper is not sufficient, because it does not support training on languages with different alphabet sizes at the same time. Instead, it might be possible to use a multitask learning approach, where the model is not only trained to output the correct character, but also the language of the current utterance. The predicted language can then be used to select the correct alphabet used for predicting the current character. 
\section{Summary} Transfer learning seems to be a powerful approach to train an automatic speech recognition system on a small dataset. The effect we saw when transferring DeepSpeech from English to German was not very large when just initializing the parameters, but only freezing the first layer already improved the results dramatically. Freezing more layers improved the outcome even more, but with larger training datasets this might have adverse effects. \section{Acknowledgements} We want to thank Aashish Agarwal for valuable help in setting up DeepSpeech and for providing preprocessing scripts as well as the hyperparameters we used for training. \printbibliography \end{document}
{ "alphanum_fraction": 0.7797283869, "avg_line_length": 104.1034482759, "ext": "tex", "hexsha": "6add42ad9d1554d0496ad223179a294458a09aac", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1349a5a99f00ba026dd0747d4955556701544fc3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AASHISHAG/deepspeech-paper", "max_forks_repo_path": "paper/document.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1349a5a99f00ba026dd0747d4955556701544fc3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AASHISHAG/deepspeech-paper", "max_issues_repo_path": "paper/document.tex", "max_line_length": 1493, "max_stars_count": 1, "max_stars_repo_head_hexsha": "1349a5a99f00ba026dd0747d4955556701544fc3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AASHISHAG/deepspeech-paper", "max_stars_repo_path": "paper/document.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-04T11:57:49.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-04T11:57:49.000Z", "num_tokens": 3424, "size": 15095 }
\chapter{Heap Reference Analysis}

Analyzing properties of heap data is not trivial. The spatial and temporal structure of stack and static data, by contrast, is simple to understand: stack variables have a compile-time name (alias) associated with them. This is not the case with heap data. We need to devise a flow- and context-sensitive analysis to get information about heap data.

\section{Difficulties in Analysis of Heap Data}

A program accesses data through expressions that have l-values; such expressions are hence called access expressions. An access expression can be a scalar (\emph{x}), may involve an array access such as \emph{a[2*i]}, or can be a reference expression like \emph{x.l.data}. In the case of an array or a reference, the mapping between the access expression and the l-value may change during execution. Reference expressions are primarily used to access the heap.\cite{hra} \\
Heap analysis tries to answer the following questions:
\begin{itemize}
\item Can an access expression $a_1$ at program point $p_1$ have the same l-value as an access expression $a_2$ at program point $p_2$?
\item Can there exist objects in the heap that will not be reachable from the access expressions?
\item Which of the access links will be live at a particular point?
\end{itemize}

\section{Pointer Analysis}

Pointer analysis is a static analysis technique that establishes which pointers or heap references can point to which variables. Pointer analysis collects information about indirect accesses in programs. It can enable precise data flow analysis and precise inter-procedural control flow analysis. The latter will be used in the VASCO tool that will be discussed later in the report. \\
We have two types of pointer analysis information: points-to analysis and alias analysis. Alias analysis tells if two references point to the same location on the heap. It is transitive in nature. Points-to analysis, on the other hand, tells which memory locations are pointed to by references at run-time. \\
Alias information plays an important role in liveness analysis. Must-alias information is needed to improve precision. May-aliases can be found out using information from the points-to analysis. \\
In Java, pointers are not created explicitly.\cite{mtpreport} All objects in Java are accessed using references, and these references are termed pointers here. Every time we create an object in Java, a pointer to the object is created. This pointer could then be set to a different object or to null, but the original object will still exist. Thus points-to analysis for Java programs identifies the objects pointed to by references at run time: we wish to determine the objects pointed to by a reference variable or a field. Consider the Java program in Figure 2.1.\\
\begin{figure}
\begin{minipage}[b]{0.45\linewidth}
\begin{verbatim}
class A {}
class B {
  public A f;
  public void set(A p) {
    this.f = p;
  }
}
class C {
  public B g;
\end{verbatim}
\end{minipage}
\quad
\begin{minipage}[b]{0.45\linewidth}
\begin{verbatim}
  public void set(B q) {
    this.g = q;
  }
}
s1 : A x = new A();
s2 : B y = new B();
s3 : C z = new C();
s4 : y.set(x);
s5 : z.set(y);
s6 : A a = z.g.f;
\end{verbatim}
\end{minipage}
\caption{Code to illustrate heap access and points-to analysis}
\end{figure}
In the code given in Figure 2.1, three heap objects pointed to by $x$, $y$ and $z$ are created at $s_1$, $s_2$ and $s_3$ respectively. We refer to the objects based on their allocation sites as $o_1$, $o_2$ and $o_3$. The statement $s_4$ assigns the $f$ field of $o_2$ to point to $o_1$.
Similarly, the $g$ field of object $o_3$ is made to point to object $o_2$. Finally, the variable $a$ points to the object $o_1$, through reference field indirections.\\
{\bfseries Liveness and Points-to Analysis}: We can remove some information from the points-to analysis result by considering only information for live pointers. For field references and indirections, liveness is defined using points-to information. \cite{liveness} \\
A {\bfseries points-to graph} in Java contains two types of edges. The first type of edge represents the information that a reference variable $v$ points to an object $o$. The second type of edge represents the field $f$ of $o_1$ pointing to $o_2$. An example points-to graph for the code in Figure 2.1 is shown in Figure 2.2. \\
\begin{figure}
 \centering
 \includegraphics[width=0.6\textwidth]{Figures/rsz_points_to_graph.png}
 \caption{Points-to graph for the Java program in Figure 2.1}
 \label{fig:points-to java}
\end{figure}

\section{Heap Reference Analysis}

A reference can be represented by an access path. In order to perform liveness analysis of the heap and identify the set of live links, the links must be named; this is achieved by access paths. An access path is defined as a root variable name followed by any number of field names, and is represented as x $\rightarrow$ $n_1$ $\rightarrow$ $n_2 \ldots n_k$, where x is the root variable and $n_1, n_2, \ldots$ are field names. If the access path \emph{x $\rightarrow$ f $\rightarrow$ d} is live, then the objects pointed to by \emph{x}, \emph{x.f} and \emph{x.f.d} are live. An example access path for the expression \emph{x.left.right.data} is shown in Figure 2.3. \\
\begin{figure}
 \centering
 \includegraphics[width=0.6\textwidth]{Figures/hra_access_path.png}
 \caption{Heap reference using access expression \emph{x.left.right.data}.}
 \label{fig:access_path_example}
\end{figure}
An access path can be unbounded in the case of loops. Thus, we need to bound the representation of access paths for liveness information. This is achieved using access graphs, which summarize information based on program points. An access graph is a directed graph representing access paths starting from a root variable. The root node is connected to any number of nodes, each having a unique label of the form $n_i$, where $n$ is the name of the field and $i$ is the program point. The inclusion of program points in access graphs helps in summarization and in performing the merge operation.\cite{slides} An example of an access graph and the corresponding liveness analysis is shown in Figure 2.4. \\
\begin{figure}
 \centering
 \includegraphics[width=0.6\textwidth]{Figures/heap_summarization_liveness.png}
 \caption{Example of use of access graph and liveness data flow values}
 \label{fig:access_graph_example}
\end{figure}
Availability and anticipability analysis of heap data: An access path $\rho$ is said to be available at a program point $p$ if the target of each prefix of $\rho$ is guaranteed to be created along each path reaching $p$. An access path $\rho$ is said to be anticipable at $p$ if the target of each prefix of $\rho$ will be dereferenced along every path starting from $p$. Note that access graphs are not needed to carry out availability and anticipability analysis over heap data, because the sets involved are bounded as a result of the every-control-flow-path nature of these problems.
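To tie these notions back to the example of Figure 2.1: the access expression \emph{z.g.f} in statement $s_6$ corresponds to the access path z $\rightarrow$ g $\rightarrow$ f. If this access path is live just before $s_6$, then the objects reachable through its prefixes, namely $o_3$ (through z), $o_2$ (through z $\rightarrow$ g) and $o_1$ (through z $\rightarrow$ g $\rightarrow$ f), must all be treated as live, and none of these links may be nullified at that point.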
{ "alphanum_fraction": 0.7619458837, "avg_line_length": 71.6288659794, "ext": "tex", "hexsha": "4a0316d332db10ac5595d1b3ccf0138237907a81", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a6b54fc995ec5a50f29746e8d2a9c2a87b0aec90", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "anshulp93/btp-report-slides", "max_forks_repo_path": "Chapters/chapter2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a6b54fc995ec5a50f29746e8d2a9c2a87b0aec90", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "anshulp93/btp-report-slides", "max_issues_repo_path": "Chapters/chapter2.tex", "max_line_length": 631, "max_stars_count": null, "max_stars_repo_head_hexsha": "a6b54fc995ec5a50f29746e8d2a9c2a87b0aec90", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "anshulp93/btp-report-slides", "max_stars_repo_path": "Chapters/chapter2.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1721, "size": 6948 }
\documentclass[11pt]{article}
\usepackage{amsmath,amssymb,amsthm}
\usepackage{fancyhdr}
\usepackage{tikz-qtree}
\usepackage{enumerate}
\usepackage{graphicx}

% margins
\usepackage[vmargin=1in,hmargin=1.5in]{geometry}

% Config
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\ass}{2}
\newcommand{\name}{Connor Taffe}
\newcommand{\tno}{3742} % last 4 digits of T number.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\title{
$A_{\ass}$ \\
{\large Assignment \ass\\
CS 3482; Professor Tang}
}
\author{
\name. T no. \tno
}

\pagestyle{fancy}
\rhead{Homework \ass}
\lhead{{\name}. T no. \tno}

\begin{document}
\maketitle

{\bf Notice}: I have used a format for binary number addition where extraneous places are shown when they would illustrate the generation of a $C_{\text{in}}$ bit. For example, a one bit adder might be accompanied by an example expression such as the following:
\[
1.1_2 + 0.1_2 = 10_2
\]
The topmost bit of the rightmost number in the expression is the $C_{\text{out}}$ value, and the lowest bit is the $S$ (sum) for the current bit addition. The lower bit in the operands is only shown to illustrate that a carry ($C_{\text{in}}$) bit was generated on some previous addition.

\section{Single-bit full adder}
\begin{enumerate}[(a)]
\item{
Show the truth table of $C$ (carry) and S(sum) of single-bit adder with input $X$, $Y$ and $C_{\text{in}}$ (carry-in) simulating the single-bit binary arithmetic addition.

The truth table of $C$ (carry) is given in table \ref{fig:1a}. $C$ is the carry bit, so it is true if $XY$ is true ($1_2+1_2=10_2$) or if $(X \oplus Y)C_{\text{in}}$ is true ($1.1_2+0.1_2=10_2$). The truth table of $S$ (sum) is illustrated in the same table. It can be seen that the sum bit is only on if $CXYC_{\text{in}} + C'(X+Y+C_{\text{in}})$ is true. This last statement is purely observational. This is because two set bits ($1_2+1_2=10_2$) roll over to the next place (generating a carry without a sum bit), three set bits ($1.1_2+1.1_2=11_2$) roll over and leave one behind (generating a carry with a sum bit), and a single set bit ($0_2+1_2=1_2$) does not roll over (generating no carry, but a sum bit).

\begin{table}[h]
\centering
\begin{tabular}{lllll}
$X$ & $Y$ & $C_{\text{in}}$ & $C$ & $S$ \\ \hline
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 \\
0 & 1 & 1 & 1 & 0 \\
1 & 0 & 0 & 0 & 1 \\
1 & 0 & 1 & 1 & 0 \\
1 & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 \\
\end{tabular}
\caption{\label{fig:1a} Truth table for $C$ and $S$ in single-bit full adder.}
\end{table}
}
\item{
Show the sum of min-term expression of $C$ and $S$.

The minterms of $C$ are as follows:
\[
C_{\sum_{\text{minterm}}} = X'YC_{\text{in}} + XY'C_{\text{in}} + XYC_{\text{in}}' + XYC_\text{in}
\]
As we can see more clearly, two bits must be set (first three min-terms) to roll over, or all three bits (last min-term) to generate both a carry and a sum.

The minterms of $S$ are as follows:
\[
S_{\sum_{\text{minterm}}} = X'Y'C_{\text{in}} + X'YC_{\text{in}}' + XY'C_{\text{in}}' + XYC_\text{in}
\]
As we can see more clearly, only one bit (first three min-terms) or all three bits (last min-term) can be set for the sum bit to be on.
}
\item{
Implement the single-bit full adder on LogicWorks using the following Boolean expressions:
\[
C = XY + XC_{\text{in}} + YC_{\text{in}}
\]
\[
S = X \oplus Y \oplus C_{\text{in}}
\]

For illustrative purposes, here follow the parse trees of the above Boolean expressions (figures \ref{fig:1c}, \ref{fig:1c2}). They can be used to show the structure of the resulting logic circuit.
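As a quick cross-check of these expressions outside LogicWorks (purely illustrative, not part of the required circuit work), a few lines of Python can enumerate all eight input combinations and confirm that $C$ and $S$ reproduce the truth table above:

\begin{verbatim}
# Sanity check of C = XY + X*Cin + Y*Cin and S = X xor Y xor Cin.
for x in (0, 1):
    for y in (0, 1):
        for cin in (0, 1):
            c = (x & y) | (x & cin) | (y & cin)
            s = x ^ y ^ cin
            # Together, C and S must equal the binary sum X + Y + Cin.
            assert 2 * c + s == x + y + cin
            print(x, y, cin, "->", c, s)
\end{verbatim}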
\begin{figure}[h] \centering \Tree [.$+$ [.$*$ $X$ $Y$ ] [.$*$ $X$ $C_{\text{in}}$ ] [.$*$ $Y$ $C_\text{in}$ ] ] \caption{\label{fig:1c} Parse tree of boolean expression for $C$.} \end{figure} \begin{figure}[h] \centering \Tree [.$\oplus$ $X$ $Y$ $C_{\text{in}}$ ] \caption{\label{fig:1c2} Parse tree of boolean expression for $S$.} \end{figure} Figure \ref{fig:1cc} shows the LogicWorks circuit implementation and figure \ref{fig:1ct} shows the testing of the circuit. The testing shows that the circuit is properly functioning for each of 8 inputs. It correlates to the truth table in figure \ref{fig:1a}. In the first column, $X=0,Y=0,C_{\text{in}}=0$, and $S=0,C=0$ which correlates with the truth table. The second column shows $X=0,Y=1,C_{\text{in}}=0$, and $S=1,C=0$, which correlates with the truth table. The remaining columns also correlate with the truth table, as $S$ is true only if one or three of the inputs are true, and $C$ is true only if two or three of the inputs are true, which is the same as was discussed above. \begin{figure}[h] \centering \includegraphics[width=300pt]{1c} \caption{\label{fig:1cc} LogicWorks circuit implementation.} \end{figure} \begin{figure}[h] \centering \includegraphics[width=200pt]{1t} \caption{\label{fig:1ct} Testing of circuit.} \end{figure} } \item{ Pack your single-bit full adder and name it as {\tt fa-1}. Show the testing of your packaged single-bit full adder. The LogicWorks circuit is shown in figure \ref{fig:1dc}, while the testing is shown in figure \ref{fig:1dt}. The $C_{\text{in}}$, $X$, and $Y$ inputs are tested in the same order as the previous test, and we can see that it has the exact same output for $S$ and $C$. Since we have established that the output for the previous testing was correct, this packaged circuit must also be correct. \begin{figure}[h] \centering \includegraphics[width=200pt]{1pc} \caption{\label{fig:1dc} LogicWorks circuit implementation.} \end{figure} \begin{figure}[h] \centering \includegraphics[width=200pt]{1pt} \caption{\label{fig:1dt} Testing of circuit.} \end{figure} } \end{enumerate} \section{Ripple carry binary adder} \begin{enumerate}[(a)] \item{ Implement a 4-bit ripple carry binary adder with inputs $A_{3\text{-}0}$, $B_{3\text{-}0}$ and $C_{\text{in}}$ and outputs $S_{3\text{-}0}$ and $C_{\text{out}}$, using the singe-bit full adder you built in the previous question. Show three test cases to demonstrate that your adder works correctly (You need to do binary add yourself and then show that your adder delivers the correct result, for each of the three input cases). The three cases for testing are as follows: \begin{enumerate}[1.] \item{ \[ 1010.1_2 + 1100.1_2 = 10111_2 \] \[ C_{\text{in}} = 1, A_0 = 0, A_1 = 1, A_2 = 0, A_3 = 1, B_0 = 0, B_1 = 0, B_2 = 1, B_3 = 1, \] \[ S_0 = 1, S_1 = 1, S_2 = 1, S_3 = 0, C_{\text{out}} = 1 \] Testing shown in figure \ref{fig:2atc}. \begin{figure}[h] \centering \includegraphics[width=150pt]{2at1} \caption{\label{fig:2atc} LogicWorks circuit test 1.} \end{figure} } \item{ \[ 1010.0_2 + 0100.1_2 = 01110_2 \] \[ C_{\text{in}} = 0, A_0 = 0, A_1 = 1, A_2 = 0, A_3 = 1, B_0 = 0, B_1 = 0, B_2 = 1, B_3 = 0, \] \[ S_0 = 0, S_1 = 1, S_2 = 1, S_3 = 1, C_{\text{out}} = 0 \] Testing shown in figure \ref{fig:2atc1}. 
\begin{figure}[h]
\centering
\includegraphics[width=150pt]{2at2}
\caption{\label{fig:2atc1} LogicWorks circuit test 2.}
\end{figure}
}
\item{
\[
0000.1_2 + 1111.1_2 = 10000_2
\]
\[
C_{\text{in}} = 1, A_0 = 0, A_1 = 0, A_2 = 0, A_3 = 0, B_0 = 1, B_1 = 1, B_2 = 1, B_3 = 1,
\]
\[
S_0 = 0, S_1 = 0, S_2 = 0, S_3 = 0, C_{\text{out}} = 1
\]
Testing shown in figure \ref{fig:2atc2}.
\begin{figure}[h]
\centering
\includegraphics[width=150pt]{2at3}
\caption{\label{fig:2atc2} LogicWorks circuit test 3.}
\end{figure}
}
\end{enumerate}
}
\item{
Figures \ref{fig:2btc}, \ref{fig:2btc1}, \ref{fig:2btc2} depict testing of the packaged circuit corresponding to the above tests.
\begin{figure}[h]
\centering
\includegraphics[width=200pt]{2pt1}
\caption{\label{fig:2btc} LogicWorks circuit test 1.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=200pt]{2pt2}
\caption{\label{fig:2btc1} LogicWorks circuit test 2.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=200pt]{2pt3}
\caption{\label{fig:2btc2} LogicWorks circuit test 3.}
\end{figure}
}
\item{
Test the longest propagation delays of your {\tt RP Adder-4} from the inputs to $C_{\text{out}}$ and $S_3$ (the highest bit of the sum) by setting $A_{3\text{-}0}$ and $B_{3\text{-}0}$ to make their sum to be 1111 and connecting $C_{\text{in}}$ to a clock.

The circuit is shown in figure \ref{fig:2cc} and the output is shown in figure \ref{fig:2ct}. The testing shows a 1ns delay between $C_{\text{in}}$ and $S_0$, and a 2ns delay between $S_0$ and the next subcircuit ($S_1$), which continues between subcircuits. The delay between the last subcircuit ($S_3$) and its carry $C_{\text{out}}$ is 1ns. This totals 8ns. This is because each circuit has a 1ns delay between its $C_{\text{in}}$ input and its $S$ output, and an additional 1ns (2ns total) between its input and its carry output ($C$). Since this circuit is a ripple adder, each single-bit adder depends on the carry from the last single-bit adder. This leads to a 2ns delay for each additional bit. This type of circuit has $O(n)$ gate delay, where $n$ is the number of bits.
\begin{enumerate}[i.]
\item{
Figure \ref{fig:2cc} shows the testing circuit.
}
\item {
Figure \ref{fig:2ct} shows the waveform of the testing circuit with $C_{\text{in}}$ as a reference line.
}
\item{
The delay from $C_{\text{in}}$ to $C_{\text{out}}$ is 8ns. The delay from $C_{\text{in}}$ to $S_3$ is 7ns. The reason these differ by one nanosecond is that the carry ($C$) bit takes 2ns to compute while the $S$ bit takes only 1ns for each subcircuit. This is because the carry ($C$) bit is calculated with two gates in a row ($*$ then $+$), while the $S$ bit is calculated with a single gate ($\oplus$). The overall delay is caused by the dependency chain of each subsequent bit adder on the last carry bit output. Thus this circuit takes $O(n)$ gate operations, meaning the delay scales with the number of bits. The delay is $D_t = \sum\limits_{i=1}^{n} d_i$ where $d_i$ is the delay of each single-bit addition (in this case 2ns), $D_t$ is the total delay, and $n$ is the number of bits.
}
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[width=200pt]{2pct}
\caption{\label{fig:2cc} LogicWorks circuit delay test.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=200pt]{2ptt}
\caption{\label{fig:2ct} Output showing delay.}
\end{figure}
}
\end{enumerate}

\section{Single-bit full adder with $p$ (carry propagation) and $g$ (carry generation) outputs.}
\begin{enumerate}[(a)]
\item{
Show the truth tables of $g$ (carry generation) and $p$ (carry propagation) of single-bit adder as functions of input $X$ and $Y$ (They do not depend on $C_{\text{in}}$).

The truth table is shown in table \ref{fig:3a}.
\begin{table}[h]
\centering
\begin{tabular}{llll}
$X$ & $Y$ & $g$ & $p$ \\ \hline
0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 \\
1 & 0 & 0 & 1 \\
1 & 1 & 1 & 0 \\
\end{tabular}
\caption{\label{fig:3a} Truth table for $g$ and $p$ for a single bit adder.}
\end{table}
}
\item{
Implement and pack the single-bit full adder with $g$ and $p$ outputs on LogicWorks. Show the testing of your packaged single-bit full adder with $g$ and $p$ outputs. (You need to test and show all 8 input cases.)

Figure \ref{fig:3bc} shows the testing circuit for the packaged single-bit full adder with $p$ (carry propagation) and $g$ (carry generation) outputs. Figure \ref{fig:3bt} shows the testing. As you can see, $S$ is only on if exactly one of the inputs $X$, $Y$, or $C_{\text{in}}$ is set or all three are set, which is the same as the original single-bit adder, so it is correct. $p$ is correct because it should be on when either $X$ or $Y$ is on, but not both, and it is. $g$ is correct because it should be on if both $X$ and $Y$ are on, and it is. Thus we can conclude that this single-bit adder with $p$ (carry propagation) and $g$ (carry generation) is correct. This shows all eight test cases are correct.

\begin{figure}[h]
\centering
\includegraphics[width=200pt]{3pc}
\caption{\label{fig:3bc} LogicWorks circuit.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=200pt]{3pt}
\caption{\label{fig:3bt} Waveform test.}
\end{figure}
}
\end{enumerate}

\section{4-bit carry look-ahead unit ({\tt LA-4})}
\begin{enumerate}[(a)]
\item{
Derive the Boolean expressions for carries $c1, \ldots, c4$ as functions of $g0, \ldots, g3$, $p0, \ldots, p3$, and $c0$.

The formula for a carry bit is $C=g+pC_{\text{in}}$. For the first bit, the carry in is given, and for the others, the carry in is generated recursively based on the formula.
\[
C_1 = g_0+p_0C_0
\]
\[
C_2 = g_1+p_1C_1 = g_1+p_1(g_0+p_0C_0)
\]
\[
C_3 = g_2+p_2C_2 = g_2+p_2(g_1+p_1C_1) = g_2+p_2(g_1+p_1(g_0+p_0C_0))
\]
\[
C_4 = g_3+p_3C_3 = g_3+p_3(g_2+p_2C_2) = g_3+p_3(g_2+p_2(g_1+p_1C_1)) = g_3+p_3(g_2+p_2(g_1+p_1(g_0+p_0C_0)))
\]
These can be simplified using the distributive property to derive:
\[
C_1 = g_0+p_0C_0
\]
\[
C_2 = g_1 + p_1(g_0+p_0C_0) = g_1 + p_1g_0 + p_1p_0C_0
\]
\[
C_3 = g_2+p_2(g_1+p_1g_0 + p_1p_0C_0) = g_2 + p_2g_1 + p_2p_1g_0 + p_2p_1p_0C_0
\]
\[
C_4 = g_3+p_3(g_2 + p_2g_1 + p_2p_1g_0 + p_2p_1p_0C_0) = g_3 + p_3g_2 + p_3p_2g_1 + p_3p_2p_1g_0 + p_3p_2p_1p_0C_0
\]
This second form is better because it is the sum of min-terms approach, which we know reduces the gate delay to a maximum of 3.
}
\item{
Derive the Boolean expressions for block carry generation $G$ and block carry propagation $P$ as functions of $g0, \ldots, g3$ and $p0, \ldots, p3$.
The formula for $G$ and $P$ is $C_4=G+PC_0$, so using the $C_4$ derived in the previous step, we can read off the following values for $G$ and $P$:
\[
G = g_3 + p_3g_2 + p_3p_2g_1 + p_3p_2p_1g_0
\]
\[
P = p_3p_2p_1p_0
\]
}
\item{
Implement and pack the {\tt LA-4} unit in LogicWorks. Show the internal circuit of the unit.

The circuit can be seen in figure \ref{fig:4c}.
\begin{figure}[h]
\centering
\includegraphics[width=200pt]{4c}
\caption{\label{fig:4c} LogicWorks circuit implementation.}
\end{figure}
}
\item{
Show the testing of $c2$ of the {\tt LA-4}.

Figure \ref{fig:4dc} shows the circuit in its testing setup. Figure \ref{fig:4dt} shows the testing output waveforms for all 32 applicable input combinations. $C_2 = g_1 + p_1(g_0+p_0C_0)$, so we can confirm the correctness of the circuit by checking that $C_2$ is on exactly when $g_1$ is on, or when $p_1$ is on together with either $g_0$ or both $p_0$ and $C_0$. We can see for the first half, when $C_0$ is off, that $C_2$ is off while $g_1$ and $p_1$ are off, and then turns on while $g_1$ is on. When $g_1$ is next off, $C_2$ reflects $g_0$, as $p_0$'s value is of no importance whilst $C_0$ is off. It continues to follow the formula for the remainder of the waveform, and we can confirm it is correct.

\begin{figure}[h]
\centering
\includegraphics[width=200pt]{4ct}
\caption{\label{fig:4dc} LogicWorks circuit implementation.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=200pt]{4t}
\caption{\label{fig:4dt} Waveform recording.}
\end{figure}
}
\end{enumerate}

\section{4-bit carry look-ahead adder}
\begin{enumerate}[(a)]
\item{
Implement the 4-bit carry look-ahead adder with inputs $A_{3\text{-}0}$, $B_{3\text{-}0}$ and $C_{\text{in}}$ and outputs $S_{3\text{-}0}$, $C_{\text{out}}$, $G$ and $P$. Show the internal circuit of the adder.

Figure \ref{fig:5ac} shows the internal circuit of the adder.
\begin{figure}[h]
\centering
\includegraphics[width=200pt]{5c}
\caption{\label{fig:5ac} LogicWorks circuit implementation.}
\end{figure}
}
\item{
Pack the 4-bit carry look-ahead and name it as {\tt Adder-4}. Show three test cases to demonstrate that your adder works correctly.

The three cases for testing are as follows:
\begin{enumerate}[1.]
\item{
\[
1010.1_2 + 1100.1_2 = 10111_2
\]
\[
C_{\text{in}} = 1, X_0 = 0, X_1 = 1, X_2 = 0, X_3 = 1, Y_0 = 0, Y_1 = 0, Y_2 = 1, Y_3 = 1,
\]
\[
S_0 = 1, S_1 = 1, S_2 = 1, S_3 = 0, C_{\text{out}} = 1
\]
Testing shown in figure \ref{fig:5t}.
\begin{figure}[h]
\centering
\includegraphics[width=150pt]{5t}
\caption{\label{fig:5t} LogicWorks circuit test 1.}
\end{figure}
}
\item{
\[
1010.0_2 + 0100.1_2 = 01110_2
\]
\[
C_{\text{in}} = 0, X_0 = 0, X_1 = 1, X_2 = 0, X_3 = 1, Y_0 = 0, Y_1 = 0, Y_2 = 1, Y_3 = 0,
\]
\[
S_0 = 0, S_1 = 1, S_2 = 1, S_3 = 1, C_{\text{out}} = 0
\]
Testing shown in figure \ref{fig:5t1}.
\begin{figure}[h]
\centering
\includegraphics[width=150pt]{5t1}
\caption{\label{fig:5t1} LogicWorks circuit test 2.}
\end{figure}
}
\item{
\[
0000.1_2 + 1111.1_2 = 10000_2
\]
\[
C_{\text{in}} = 1, X_0 = 0, X_1 = 0, X_2 = 0, X_3 = 0, Y_0 = 1, Y_1 = 1, Y_2 = 1, Y_3 = 1,
\]
\[
S_0 = 0, S_1 = 0, S_2 = 0, S_3 = 0, C_{\text{out}} = 1
\]
Testing shown in figure \ref{fig:5t2}.
\begin{figure}[h]
\centering
\includegraphics[width=150pt]{5t2}
\caption{\label{fig:5t2} LogicWorks circuit test 3.}
\end{figure}
}
\end{enumerate}
}
\item{
Test the longest propagation delays of your carry look-ahead Adder-4 from the inputs to $C_{\text{out}}$ and $S_3$ (the highest bit of the sum) by setting $A_{3\text{-}0}$ and $B_{3\text{-}0}$ to make their sum to be 1111 and connecting $C_{\text{in}}$ to a clock.
\begin{enumerate}[i.]
\item{
Show your testing circuit.

Figure \ref{fig:5dtc} depicts the circuit.
\begin{figure}[h]
\centering
\includegraphics[width=150pt]{5dtc}
\caption{\label{fig:5dtc} LogicWorks delay-testing circuit.}
\end{figure}
}
\item{
Show the waveforms of the inputs and outputs with the rising edges of $C_{\text{in}}$ as reference lines.

Shown in figure \ref{fig:5dt}.
\begin{figure}[h]
\centering
\includegraphics[width=150pt]{5dt}
\caption{\label{fig:5dt} Waveform.}
\end{figure}
}
\item{
Show the propagation delays from $C_{\text{in}}$ to $C_{\text{out}}$ and $S_3$. Explain why these delays are such as you observed.

The propagation delay from $C_{\text{in}}$ to $C_{\text{out}}$ is 3ns, while from $C_{\text{in}}$ to $S_3$ it is 3ns. This is because the new way of calculating carry bits with the look-ahead unit does not depend on the previous cascading carry, and is thus $O(1)$ in the number of bits $n$. The $S$ bit takes an additional 1ns because it must wait for the carry bit to come back from the look-ahead unit.
}
\end{enumerate}
}
\end{enumerate}
\end{document}
{ "alphanum_fraction": 0.6497393118, "avg_line_length": 38.0555555556, "ext": "tex", "hexsha": "c8f38d7a47abd0cef70875da05a532ae2075003d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c11949646ec2dd7a30581cd2108a183ec3e65403", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cptaffe/hw", "max_forks_repo_path": "corg/ass2/ass.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "c11949646ec2dd7a30581cd2108a183ec3e65403", "max_issues_repo_issues_event_max_datetime": "2015-03-16T20:54:08.000Z", "max_issues_repo_issues_event_min_datetime": "2015-03-16T20:54:08.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cptaffe/os_hw", "max_issues_repo_path": "corg/ass2/ass.tex", "max_line_length": 691, "max_stars_count": null, "max_stars_repo_head_hexsha": "c11949646ec2dd7a30581cd2108a183ec3e65403", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cptaffe/os_hw", "max_stars_repo_path": "corg/ass2/ass.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7308, "size": 19180 }
\section{Pointers}
{ "alphanum_fraction": 0.7142857143, "avg_line_length": 5.25, "ext": "tex", "hexsha": "a93c94986e36acd1816a70d9c5a0274de5441581", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/computer/pointers/01-00-Strings.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/computer/pointers/01-00-Strings.tex", "max_line_length": 18, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/computer/pointers/01-00-Strings.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 21 }
\section{Quadratic Hamiltonians and the Equipartition Theorem}
\begin{align}
  H &= \sum_{i=1}^{N}\left( \frac{p_i^2}{2m_i} + \frac{m_i\omega_i^2}{2}x_i^2 \right)
\end{align}
The partition function is
\begin{align}
  Z &= \int_{-\infty}^{\infty} \prod_{i=1}^{N} \frac{dx_i\, dp_i}{h} e^{-\beta\sum_{i}\left( \frac{p_i^2}{2m_i} + \frac{m_i\omega_i^2}{2} x_i^2 \right)}\\
  &= \prod_{i=1}^{N}\frac{1}{h}\int_{-\infty}^{\infty} dx_i\, e^{-\frac{\beta m_i\omega_i^2}{2}x_i^2}
  \int_{-\infty}^{\infty} dp_i\, e^{-\beta \frac{p_i^2}{2m_i}}\\
  &= {\left(\frac{k_B T}{\hbar}\right)}^{N} \frac{1}{\prod_{i=1}^{N}\omega_i}
\end{align}
Then the free energy is
\begin{align}
  F &= -k_B T \ln Z \\
  &= -N k_B T \ln\left( \frac{k_B T}{\hbar} \right) + k_B T \sum_{i=1}^{N} \ln\omega_i
\end{align}
The energy is
\begin{align}
  E &= F + TS
\end{align}
where
\begin{align}
  S &= -\frac{\partial F}{\partial T}\\
  &= Nk_B \ln\left( \frac{k_B T}{\hbar} \right) + \frac{N k_B T}{T} - k_B \sum_{i=1}^{N}\ln \omega_i
\end{align}
and so the energy is
\begin{align}
  E &= Nk_B T = \left( \frac{k_B T}{2} + \frac{k_B T}{2} \right) N
\end{align}

Textbooks are confusing and students fall for it. I think a 1D harmonic oscillator has 1 degree of freedom. I think a 1D free particle also has 1 degree of freedom. Forget degrees of freedom; what matters is the number of quadratic terms in the Hamiltonian. Each quadratic term in the Hamiltonian has energy $\frac{1}{2}k_B T$.

\begin{example}
1D array of masses connected by springs with periodic boundary conditions.
\end{example}

The Hamiltonian is
\begin{align}
  H &= \sum_{i=1}^{N}\left[ \frac{p_i^2}{2m} + \frac{k}{2} {\left( x_i - x_{i-1} \right)}^2 \right]
\end{align}
So this is quadratic but a slightly more complicated quadratic. You see there are cross-terms like $-2x_2x_3 + \cdots$ in there. You have to account for boundary conditions too. You have to say that $x_N = x_0$, including the one that wraps around. There are 3 terms and they are all there.

What you learn in Chacko's class is that you can do a change of variables that makes this quadratic Hamiltonian decoupled. This is called finding the normal modes. It's something you're going to relearn in Chacko's class. I'm going to state here that you can do a change of variables. That's going to be equivalent to this
\begin{align}
  H &= \sum_{i=0}^{N-1}\left[ \frac{p_i^2}{2m} + \frac{m\omega_i^2}{2} q_i^2 \right]
\end{align}
Basically the $q_i$'s are Fourier transforms of the $x_i$'s. You will find that the magic works and the $q$'s are independent of one another. If there's one canonical transformation to learn, it's this one. The range of applicability of this calculation is huge. Basically, physics is doing more and more complicated cases of harmonic oscillators.

Now I can compute the partition function of these guys, taking both terms and just writing the answer. What is the average energy of this guy at temperature $T$? It's $\frac{1}{2}k_B T$ for every quadratic term. There are $2N$ terms. This is going to be again
\begin{align}
  E &= \frac{1}{2}k_B T \times 2 \times N = Nk_BT
\end{align}
All you needed to know was how many normal modes there are. You didn't even need to find the frequencies. In physics 101, they shake a slinky. There are transverse modes, longitudinal modes, and more complicated modes I can't do with my hands. Each one has a different frequency. Some long wavelength ones move fast or slow, but it doesn't matter for the average energy $E$; it's just $\frac{1}{2}k_BT$ for every quadratic term in the Hamiltonian.
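For completeness, here is the short Gaussian-integral calculation behind the claim that every quadratic term carries $\frac{1}{2}k_B T$ (this step is usually left implicit). For a single term $H = a q^2$ with any constant $a>0$ (for instance $a = m\omega^2/2$),
\begin{align}
  \langle a q^2 \rangle
  &= \frac{\int_{-\infty}^{\infty} dq\, a q^2\, e^{-\beta a q^2}}
          {\int_{-\infty}^{\infty} dq\, e^{-\beta a q^2}}
   = -\frac{\partial}{\partial\beta}
     \ln\int_{-\infty}^{\infty} dq\, e^{-\beta a q^2}
   = -\frac{\partial}{\partial\beta}\ln\sqrt{\frac{\pi}{\beta a}}
   = \frac{1}{2\beta} = \frac{k_B T}{2}.
\end{align}
The answer is independent of $a$, which is exactly why the frequencies of the normal modes never matter.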
We can now state the equipartition theorem.
\begin{question}
How do you know how many terms?
\end{question}
If you start with $N$ coordinates, you get $N$ normal modes no matter how complicated.
\begin{theorem}[Equipartition]
The thermal average energy of a Hamiltonian $H$ is
\begin{align}
E &= \frac{1}{2} k_B T N
\end{align}
where $N$ is the number of quadratic terms in $H$.
\end{theorem}
This is true for classical systems. Let's now consider a few systems as examples.
\begin{example}
Imagine you have a solid insulator. What is the average energy of an insulating solid?
\end{example}
Please give an example of a solid. You give 5 things that are not solids with crystal structures. Glass, styrofoam, plastic. They're all rigid, but they're not solids. The reason I'm not talking about conductors is that they have electrons that move around, and that changes the energy. I'm thinking of an insulating solid, just a bunch of molecules in a lattice that shake. The molecules have an equilibrium position, and they shake around it in a complicated way. To first approximation, you expand the energy in powers of the distance between the atoms, and the energy is quadratic. You want to know how many molecules there are. Suppose you have $N$ molecules in my chunk of rock. How many quadratic terms do I have in the Hamiltonian? There's a 3D kinetic term, so that's 3 quadratic terms for the kinetic energy. What about the potential energy? Again, every molecule has 3 coordinates $x,y,z$, so there are 3 quadratic terms per molecule. So the energy should be
\begin{align}
E &= \left( \underbrace{3N}_{\text{kinetic}} + \underbrace{3N}_{\text{potential}} \right) \frac{1}{2}k_B T\\
&= 3Nk_B T
\end{align}
\begin{question}
Are you assuming a cubic lattice?
\end{question}
No, it could be a really complicated crystal structure, but you still have the same number of normal modes. The specific heat of the substance is
\begin{align}
c &= \frac{1}{N}\frac{\partial E}{\partial T} = 3k_B
\end{align}
We could check this. Every insulating solid should have exactly the same specific heat. That's amazing. And it's not just any number, it's a specific number. You can see this thing here. What I have here is a bunch of pure elements, and here I have their specific heats. The line in the middle is $3k_B$. This starts from 30, and they're all around 23. So you know there are bad ones like potassium at 29, there's another one at 23. It's an amazing result. See the work we did.
\begin{question}
Are we assuming only neighbouring particles are talking to each other?
\end{question}
No, I could have $x_1$ connected to $x_{42}$. As long as the energy is quadratic in distance, it doesn't matter. You could find the normal modes, and you find there are $3N$ normal modes, and that's the only thing that matters.
\begin{question}
At higher temperatures in a crystal lattice, you'll have anharmonic effects like thermal expansion; does that influence the specific heat?
\end{question}
There are no non-linear effects that contribute at high temperature. How much does a rock expand when you heat it? This is the Dulong-Petit law. Its validity is important because it really convinced people that statistical mechanics was right. Its failure is that it suggested quantum mechanics. Before we do that, let's do a really classic case. Classic in classical mechanics and classic in that you do it in high school.
\begin{example}[diatomic gas]
Ideal, nonrelativistic, classical gas. What if it's diatomic?
\end{example}
The molecules now look like this. [picture]
So what's the answer? If there are $N$ molecules, how many quadratic terms are you going to have? Each of the two atoms in a molecule can move in the $x,y,z$ directions, which gives $6N$ kinetic terms. The two atoms don't move independently, though: there is a potential term for the spring between them, so
\begin{align}
E &= (6N + N)\frac{1}{2}k_B T = \frac{7}{2}Nk_B T
\end{align}
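For the specific heat per molecule this gives
\begin{align}
c &= \frac{1}{N}\frac{\partial E}{\partial T} = \frac{7}{2}k_B .
\end{align}
Measured diatomic gases at room temperature sit closer to $\frac{5}{2}k_B$, with the vibrational contribution frozen out; this gap is another classical result that points towards quantum mechanics.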
{ "alphanum_fraction": 0.7120978121, "avg_line_length": 30.3515625, "ext": "tex", "hexsha": "099d1654580d1f6942903024726bdecb9f6fd243", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ehua7365/umdphysnotes", "max_forks_repo_path": "phys612/lecture20.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ehua7365/umdphysnotes", "max_issues_repo_path": "phys612/lecture20.tex", "max_line_length": 80, "max_stars_count": 1, "max_stars_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ehua7365/umdphysnotes", "max_stars_repo_path": "phys612/lecture20.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-11T12:53:46.000Z", "max_stars_repo_stars_event_min_datetime": "2021-12-11T12:53:46.000Z", "num_tokens": 2294, "size": 7770 }
\section{Evaluation}
\label{section:evaluation}
In this chapter, the system designed in Chapter~\ref{section:design_and_implementation} and implemented in Chapter~\ref{sec:implementation_and_deployment_details} is evaluated. First, the methodology of this evaluation is documented, and then the results are presented. The evaluation refers to the particular use case presented in Section~\ref{subsec:implementation_details}. The evaluation of the general SSI with IoT architecture presented in Section~\ref{subsec:architecture_for_ssi_with_iot} is discussed at the end of this chapter.

\subsection{Methodology}
\label{subsec:methodology}
In order to evaluate a system that relies heavily on open-source software, and whether the requirements are satisfied under the assumptions that were made, the evaluation does not focus on benchmarking the actual proof-of-concept system, but rather on assessing whether the main components used can comply with the requirements listed in Chapter~\ref{sec:requirement_elicitation}. The evaluation will be split into three distinct steps:
\begin{itemize}
\item Benchmark the "Hyperledger Indy Network" and "Hyperledger Aries Cloud Agent - Python" in order to assess performance-, scalability- and availability-related NFRs;
\item Qualitative evaluation of the proposed solution from domain experts;
\item Cross-check requirements against the decisions taken throughout the project (and evaluation results).
\end{itemize}
\paragraph{Benchmark "Hyperledger Indy Network" and "Hyperledger Aries Cloud Agent - Python"}
This part of the evaluation is made possible with the help of the development team of Hyperledger Indy, since performance and scalability tests have already been performed on the network. The results from these tests will be presented to validate the Non-Functional Requirements of the system. Given that the agents are a core part of the system, knowing the time they take to perform their tasks is imperative to determine whether they can be used at scale.
\paragraph{Qualitative evaluation from domain experts}
In order to measure the validity of the new system, and whether it can actually be considered an improvement over the current implementation (dissected in Section~\ref{subsec:case_description}), a qualitative assessment from domain experts is a vital part of understanding whether certain assumptions are valid, feasible or impossible. This evaluation process consisted of:
\begin{itemize}
\item Gathering contacts of various domain-specific experts related to the targeted use case (Energy, Blockchain, Self-Sovereign Identity).
\item Inviting the contacts to participate in the evaluation process, by having them sign up via a Google Form with preliminary assessment questions, to profile the participants.
\item Recording a presentation of the project with a small demo to send to the interested participants.
\item Sending the interested evaluators the recorded presentation and demo, together with a Google Form to assess their perspective on certain aspects of the system.
\end{itemize}
\paragraph{Cross-check requirements}
For this part of the evaluation, each requirement will be analyzed and discussed, to determine whether the given requirement is (partially) met in the novel architecture of the system.
\subsection{Benchmark "Hyperledger Indy Network" and "Hyperledger Aries Cloud Agent - Python"} \label{subsec:benchmark_hyperledger_indy_and_aries} \paragraph{Hyperledger Indy Network evaluation} As mentioned before, assessing the capability of the network that is the backbone of the proposed solution is a major part of understanding if it can handle the proposed use case requirements. Investigating performance tests made to the network landed on an issue on the Indy's JIRA repository. Issue \textbf{INDY-1343 - Prove production stability of an Indy network}\footnote{\url{https://jira.hyperledger.org/browse/INDY-1343}} arose from the following need: \textit{"Before encouraging people to use the Sovrin network for live loads, we need to prove that it will be stable under conditions similar to production use."} Sovrin's name has been highlighted previously in the Related Work chapter (Section~\ref{subsubsec:hyperledger_indy_sovrin}), and corresponds to the production level network created by the Sovrin Foundation. The tests had mixed conclusions, with certain scenarios underperforming while others achieving the expected TPS throughput. After these tests, three more issues were lifted on the project's JIRA repository related to testing the performance of the network, INDY-1388\footnote{\url{https://jira.hyperledger.org/browse/INDY-1388}}, INDY-1607\footnote{\url{https://jira.hyperledger.org/browse/INDY-1607}} and INDY-2214\footnote{\url{https://jira.hyperledger.org/browse/INDY-2214}}. Table~\ref{tab:benchmarks_indy} contains information on the issues regarding network stability and their current status. It is worth noting that the latest issue (INDY-2214) has yet to be resolved, since December 2019. A detailed description of the tests can be found in a Google Sheets shared by the development team, that highlights their findings\footnote{\url{https://docs.google.com/spreadsheets/d/1DTjDsLSysFBiKU-9z4-IzunJk4wEy44hE_PGZYxnN_8/edit\#gid=1813415708}} (Available on 13th June 2021). \begin{table}[t] \centering \begin{tabular}{|ccc|} \hline Issue \# & Title & Status \\ \hline INDY-1343 & Prove production stability of an Indy network & \greencheck \\ INDY-1388 & Prove stability under a DOS of an Indy network & \greencheck \\ INDY-1607 & Proof of stability under load & \greencheck \\ INDY-2214 & Repeat: Prove production stability of an Indy network & - \\ \hline \end{tabular} \caption{Issues on Indy's JIRA Repository related to benchmarking the network} \label{tab:benchmarks_indy} \end{table} Although arguably the JIRA or the project's GitHub repositories are usually the best source of information regarding benchmarking tests, the best source to validate the current state of the Hyperledger Indy network comes directly from the Sovrin's foundation network status dashboard\footnote{\url{https://sovrin.org/ssi-metrics-dashboards/}}. Figure~\ref{fig:sovrin_metrics} demonstrates the network status from 12th June 2020 to 12th June 2021, regarding various metrics. The most relevant information regards the read and write availability, that achieves an outstanding \textbf{99.999\% read and write availability}. 
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{images/sovrin_metrics.png}
\caption{Sovrin Network (Hyperledger Indy nodes) metrics from 12 June 2020 to 12 June 2021}
\label{fig:sovrin_metrics}
\end{figure}
\paragraph{Hyperledger Aries Cloud Agent - Python evaluation}
In order to assess the performance of the agent implementation, two different methods were used: using the performance benchmark tests provided by the development team, and measuring the times when conducting actions with the agents in the proof-of-concept implementation. For the first, in the original Aries Cloud Agent - Python repository the developers included a basic \textit{performance.py} script that deploys two agents, and has one agent issue a predefined number of credentials to the other. These benchmarks are therefore a good indication of the times taken to issue credentials from one agent to another, and can vary depending on whether there is mediation between the two agents or not, and also on whether the credentials are revoked after they are issued. The latter also indicates the feasibility of revoking credentials, something that was not explored in depth with the proof-of-concept. For the second set of benchmarks on the agent implementation, the interactions of the agents were timed manually to assess the performance requirements. The commands used to execute each of the runs, as well as more details extracted from the runs, are listed in Appendix~\ref{app:benchmark_runs}. In this appendix the computer specifications are also listed for reference. Looking at the results found in Table~\ref{tab:overview_of_different_attempts}, it is possible to see that an ACA-Py agent is able to issue credentials to another agent in an acceptable time, averaging 0.23s in three of the runs and 0.26s in the last. Note that these results are limited by the fact that the agents are running on the same piece of hardware, which may make the times better than in a distributed deployment. Still, the results are roughly 10 times faster than the validation times of the current system, as listed in \textbf{Requirement NFR-2.4} - \textit{The process of validating the client's credentials should not take longer than the original method (designed for 2 seconds, with the worst case scenario being 30 seconds)}. For this reason, the times may increase in a production environment, but they should not exceed the acceptable range of values.
\begin{table}[t]
\centering
\begin{tabular}{|ccccc|}
\hline
Run \# & Revocation & Mediation & Count & Average Credential Time \\
\hline
1 & \redcheckk & \greencheck & 1000 & 0.23s \\
2 & \greencheck & \redcheckk & 1000 & 0.23s \\
3 & \redcheckk & \redcheckk & 1000 & 0.23s \\
4 & \greencheck & \greencheck & 1000 & 0.26s \\
\hline
\end{tabular}
\caption{Overview of the different attempts}
\label{tab:overview_of_different_attempts}
\end{table}
Besides these tests made on the credential issuance process, the actions needed for the charging process were also manually timed. When performing the charging process, the "Request Proofs" action from the EV Owner took on average less than 2 seconds. Note that this step occurs between an ACA-Py agent and a Trinsic Wallet agent. The "Request EV Credentials" process from the EV agent also takes less than 2 seconds on average. This process involves the CS agent sending the request for the proof, the EV agent generating the proof, sending it back to the CS agent, and then it being verified.
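Because these interaction timings were collected by hand, they are only indicative. A minimal sketch of how such an average could be collected programmatically is shown below; the \lstinline|issue_credential| callable is a hypothetical placeholder for whatever call triggers a single issuance against the agent, and is not part of ACA-Py itself.
\begin{lstlisting}[language=Python]
import statistics
import time

def time_runs(action, count=1000):
    # Measure the average wall-clock time of `action` over `count` runs.
    durations = []
    for _ in range(count):
        start = time.perf_counter()
        action()  # e.g. trigger one credential issuance between the two agents
        durations.append(time.perf_counter() - start)
    return statistics.mean(durations), statistics.stdev(durations)

# `issue_credential` is a hypothetical wrapper around the agent's
# issuance request; it is not provided by ACA-Py itself.
# mean, stdev = time_runs(issue_credential, count=1000)
# print(f"average credential time: {mean:.2f}s")
\end{lstlisting}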
At last, the "Issue Receipt Credential" process from the CS to the EV Owner's Trinsic Wallet agent also takes on average less than 2 seconds, but depends on the internet signal at the time of receiving the credential proposal on the Trinsic Wallet App dashboard. \subsection{Qualitative Evaluation from domain experts} \label{subsec:qualitative_evaluatin_from_domain_experts} In this section the results from the evaluation provided by the domain experts will be presented to extract results that can validate the requirements. \subsubsection{Preliminary Form Results} \label{subsubsec:preliminary_form} In the preliminary form shared with potential participants to the evaluation, the questions mostly related to profile questions, to understand the area expertise of the evaluators and how well they are aware of the charging process of EVs. A total of six (6) people responded to the form, confirming their willingness to evaluate the proposed solution. Note that some of the participants had expertise in more than one area as demonstrated by Table~\ref{tab:expertise_of_evaluators}. \begin{table}[!htb] \centering \begin{tabular}{|cc|} \hline Field of Expertise & Count \\ \hline Blockchain & 4 \\ Energy & 3 \\ Self-Sovereign Identity & 1 \\ Architecture & 1 \\ EV Automotive & 1 \\ \hline \end{tabular} \caption{Expertise of the Evaluators} \label{tab:expertise_of_evaluators} \end{table} From the six respondents, the follow data was extracted: \begin{itemize} \item 3 out of 6 responded to be able to "teach someone else" about blockchain. \item 2 out of 6 responded to be able to "teach someone else" about Self-Sovereign Identity. \item None of the respondents own an EV, but more than half (4) of them are knowledged in the process of charging an EV. Out of those 4 people, 4 are aware of (most of) the parties in the process, 3 know how to charge an EV, and only 2 of them are aware of the current payment model. \end{itemize} \subsubsection{Results from experts' evaluation of system} \label{subsubsec:results_from_experts_evaluation_of_system} After gathering the profiles of the six evaluators, a video containing a pre-recorded slide presentation explaining the project was shared with the latters. In this video it was included part of the proof-of-concept's demonstration of how an EV Owner is able to charge its EV making use of Verifiable Credentials to attest the claims explained in the flow present in Section~\ref{paragraph:charging_and_billing_flow_with_ssi}. This expert evaluation was mostly used to assess the usability requirements as well as the business value of the novel architecture. From the six (6) evaluators that agreed to answer the preliminary form, the following results arose: When asked to prioritize "Ease of Use", "Data Privacy", "Cost Transparency", "Availability" and "Integration with Mobile Applications" in an EV Charging Network from the least important (1) to the most important (5), these were the results: \begin{itemize} \item Availability and Cost Transparency both obtained 23 points, making these two the most prioritized quality attributes; \item Ease of Use obtained 22 points, close to the first two quality attributes. \item Data Privacy and Integration with Mobile Applications scored 14 and 8 points, respectively, demonstrating that these two are not the highest priority for the evaluators. \end{itemize} The questions below constitute the questionnaire made to the evaluators. Each of them was given an ID. 
\begin{itemize}
\item \#1 - \textit{How understandable is the presented solution, from a user and business perspective?}
\item \#2 - \textit{How do you think the solution compares with the current implementation of the network?}
\item \#3 - \textit{With this solution, would you feel confident your personal data is protected?}
\item \#4 - \textit{With this solution, would you consider the process of charging the vehicle more transparent?}
\item \#5 - \textit{Do you consider the employed payment model user-friendly?}
\item \#6 - \textit{Which feature(s) from this solution did you appreciate the most?}
\item \#7 - \textit{Which feature(s) would you like to see added to this solution?}
\item \#8 - \textit{Do you have any other comments on the presented solution?}
\end{itemize}
From the aforementioned questions, the most relevant findings were:
\begin{itemize}
\item 4 out of 6 believe that the solution is "Very easy to understand" from a user perspective, while 3 out of 6 thought the same from a business perspective.
\item 4 out of 6 believe that the new solution is much better than the current implementation of the network.
\item All 6 evaluators at least "Agree" with question \#3, with 3 of them selecting "Strongly Agree".
\item The same results as for question \#3 were obtained for question \#4.
\item 5 out of 6 "Agree" with question \#5 regarding the user-friendliness of the payment model.
\item The feature with the most appreciation was the "Added more transparency with the addition of the kWh price" feature, with 4 evaluators selecting that option for question \#6. 3 evaluators also enjoyed the "Removal of Login", 2 selected "Added Vehicle and Owner verification for extra security" and 1 liked the "Removal of external party - Maximising revenue" feature. None of the evaluators saw great value in the "Added receipt-credential to the end of the transaction".
\end{itemize}
For question \#7 regarding possible future features to add, the comments made by the evaluators are found below.
\begin{spverbatim}
"I know that right now there is a solution to have multiple suppliers per charge poll and users can charge their cars in different location. So an approach is based on you credentials and your supplier contract, this should be easy to identify and get a message saying you are now charging your car with supplier A in this rate."

"Using a unique vehicle identity in the process might give away the owner or driver, compromising privacy. Zero-knowledge might help alleviate that iff the vehicle ID is not anyway shared with the charging station."

"Include price options in case you are not in a hurry to charge and the network can use the optimal moment to charge, although that is not really related to the SSI part."

"try to remove as much steps in the user interaction as possible."
\end{spverbatim}
For question \#8 regarding additional comments or the overall appraisal of the system, the comments made by the evaluators are found below.
\begin{spverbatim}
"The overall process is well thought-out and nicely implemented."

"I understand the value of SSI for identifying myself as the owner or the car and the one charging it, however what is the benefit of sharing the receipt as a VC? That is also shared with the energy supplier, right? Or is that to resolve a dispute on the invoice in case the supplier charges you more? Or do you see scenario's where I would need to share that with some other party?"
"Excelent demo showing the way forward for EV charging" "Although data privacy is not a major requirement for me, I would still prefer to have something that protects me as per your solution than what is currently in the market today" "Need to think about multiple users of a single vehicle and also Fleet driver use case." \end{spverbatim} The overall assessment of this evaluation method allowed to understand how domain experts in the area perceive the system, which in turn granted good prospects to fulfill the necessary requirements. All evaluators provided valuable feedback and the consensus was that the solution is an improvement to the previous iteration, providing more transparency to the system and improved the user's experience with the system. The privacy limitations regarded in the comments will be addressed at a later stage, when cross-checking the requirements. \subsection{Cross-check requirements} \label{subsec:cross-check-requirements} In this section, the requirements listed in Section~\ref{subsec:list_of_requirements_and_decisions} will be evaluated, and a link to the respective motivation and rationales is traced to a more detailed description that will be found in Appendix~\ref{app:requirement_evaluation}. These motivations reflect whether the specific requirement is satisfied or not using the decisions taken across the document (from technological, architectural or assumptions). For more information regarding the assessment of each requirement, the appendix contains all the necessary information for that purpose. Despite that, a few of the most important requirements will be discussed to highlight the most important decisions. For both upcoming sections, a table has been made containing a mapping between the RequirementID, the priority of the requirement, the approval status and a link to the respective detailed table on the appendix. Below it is possible to find a legend for the different status symbols and their meaning. \begin{itemize} \item \greencheck : Requirement fulfilled \item \textbf{-} : Needs further investigation or inconclusive \item \redcheckk : Requirement not fulfilled \end{itemize} \subsubsection{Functional Requirements} \label{subsubsec:evaluation_functional_requirements} Table~\ref{tab:full_requirements_evaluation_table(FR)} contains the aforementioned table, and it is possible to observe that regarding the Functional Requirements, almost all of the requirements were met, with the exception of one whose priority is a "Could have" and another which is prioritized as a "Must have". Starting with the most important requirements that were fulfilled, the ones that add more value to the novel system and eliminate the liabilities presented in the case description in Section~\ref{subsubsec:current_liabities} are \ref{evaluation:FR-1.2}, \ref{evaluation:FR-1.4}, \ref{evaluation:FR-3.1} and \ref{evaluation:FR-5.1}. Requirement~\ref{evaluation:FR-1.2} reflects on the fact that now the EV Owner is presented with a kWh rate before the charging session takes place, allowing the owner to decide whether to charge the EV at that time. This price is dependant on the contracted eMSP company, given that different companies might obtain better contracts with the CPOs and provide better prices for their clients. Requirement~\ref{evaluation:FR-1.4}'s fulfillment provides a way for the EV Owners to obtain a receipt at the end of each charging session, in the form of a Verifiable Credential. 
This allows the client to obtain proof of that transaction, with information regarding the full price of the charging session, providing more transparency to the client. Requirement~\ref{evaluation:FR-3.1} addresses the fact that the EV Owner is now required to prove ownership of the EV in order to charge the vehicle, giving the system an extra layer of security and preventing thieves from charging stolen EVs. Requirement~\ref{evaluation:FR-5.1} concerns matters of privacy, since Personally Identifiable Information (PII) should not be written to any publicly accessible repository, to comply with current privacy regulations. This matter is addressed by the current SSI implementation, which only writes credentials and private DIDs (PII) to the entity's wallet and not to the DLT itself, complying with this requirement. For the requirements that were not addressed by the system (\ref{evaluation:FR-3.3} and \ref{evaluation:FR-6.1}), the rationale behind their absence from the system will be detailed when explaining the limitations of the system in Section~\ref{subsubsec:limitations}.
\begin{longtable}{|p{.15\textwidth}p{.1\textwidth}p{0.05\textwidth}p{.2\textwidth}|}
\hline
\textbf{ID} & \textbf{Priority} & \textbf{Status} & \textbf{Appendix}\\
\hline \hline
\textbf{FR-1.1} & Must & \greencheck & \ref{evaluation:FR-1.1} \\
\textbf{FR-1.2} & Should & \greencheck & \ref{evaluation:FR-1.2}\\
\textbf{FR-1.3} & Must & \greencheck & \ref{evaluation:FR-1.3}\\
\textbf{FR-1.4} & Could & \greencheck & \ref{evaluation:FR-1.4} \\
\hline
\textbf{FR-2.1} & Must & \greencheck & \ref{evaluation:FR-2.1} \\
\textbf{FR-2.2} & Must & \greencheck & \ref{evaluation:FR-2.2} \\
\textbf{FR-2.3} & Must & \greencheck & \ref{evaluation:FR-2.3} \\
\hline
\textbf{FR-3.1} & Should & \greencheck & \ref{evaluation:FR-3.1} \\
\textbf{FR-3.2} & Must & \greencheck & \ref{evaluation:FR-3.2} \\
\textbf{FR-3.3} & Could & \redcheckk & \ref{evaluation:FR-3.3} \\
\textbf{FR-3.4} & Should & \greencheck & \ref{evaluation:FR-3.4} \\
\hline
\textbf{FR-4.1} & Must & \greencheck & \ref{evaluation:FR-4.1} \\
\hline
\textbf{FR-5.1} & Must & \greencheck & \ref{evaluation:FR-5.1} \\
\textbf{FR-5.2} & Must & \greencheck & \ref{evaluation:FR-5.2} \\
\hline
\textbf{FR-6.1} & Must & \redcheckk & \ref{evaluation:FR-6.1} \\
\hline
\caption{Requirements Verification (Functional Requirements)}
\label{tab:full_requirements_evaluation_table(FR)}
\end{longtable}
\subsubsection{Non-Functional Requirements}
\label{subsubsec:evaluation_non_functional_requirements}
Table~\ref{tab:full_requirements_evaluation_table(NFR)} contains the aforementioned mapping, and it is possible to observe that almost all of the Non-Functional Requirements were met, with the exception of one whose priority is a "Must have". Two of the requirements were marked with \textbf{"-"} to indicate that they were inconclusive or would need further investigation. From the requirements that were fulfilled, the most important findings are related to \ref{evaluation:NFR-2.4}, \ref{evaluation:NFR-3.3} and \ref{evaluation:NFR-4.3}. These assess some of the most important quality attributes listed in Chapter~\ref{sec:requirement_elicitation}: Usability, Security/Privacy and also Availability. Requirement~\ref{evaluation:NFR-2.4} reflects on the time taken by the system to verify credentials.
Following the evaluation of the Hyperledger Indy and ACA-Py infrastructures in Section~\ref{subsec:benchmark_hyperledger_indy_and_aries}, it was evident that the credential issuance and verification process is aligned with the time taken in the current non-SSI implementation of the system. For this reason, it was concluded that this requirement was met. Requirement~\ref{evaluation:NFR-3.3}'s fulfillment provides the system with an indisputable way to prevent forgery of credentials. Based on the concepts of Self-Sovereign Identity and Verifiable Credentials, each credential held by the EV Owner/client is practically impossible to forge, tamper with, or spoof, given that it is based on advanced cryptographic methods. Requirement~\ref{evaluation:NFR-4.3} addresses the system's availability. Current surveys on EV charging systems propose that these systems should guarantee an uptime of at least 97\%. After the benchmarks discussed in Section~\ref{subsec:benchmark_hyperledger_indy_and_aries}, it was possible to see that the best example of a Hyperledger Indy network is able to guarantee a 99.999\% uptime at a production level, fulfilling this requirement. A note also goes to \ref{evaluation:NFR-4.1}, given that the CSs need to be online in order to communicate with the ACA-Py agents. This is a difficult requirement to evaluate, given that it mostly relies on the area where the solution shall be deployed, and its network coverage. Assuming an adoption in the Netherlands, and according to an analysis report conducted by Eurostat\footnote{\url{https://www.cbs.nl/en-gb/news/2018/05/the-netherlands-leads-europe-in-internet-access}} in 2018, the Netherlands is the country with the greatest network coverage among the 28 European Union countries, with 98\% of Dutch households having internet access. From this data, it is possible to foresee that this requirement shall be achieved with relative ease, and that is why it was marked with a \greencheck. Two of the requirements still need more research to assess their fulfillment, namely \ref{evaluation:NFR-1.1} and \ref{evaluation:NFR-3.2}. The first regards the scalability of the system, which heavily relies on the supporting network (Hyperledger Indy). The second, regarding GDPR compliance, was tagged with \textbf{"-"} in consideration of two choices/simplifications made in the current system. These limitations will be detailed in Section~\ref{subsubsec:limitations}.
\begin{longtable}{|p{.15\textwidth}p{.1\textwidth}p{0.05\textwidth}p{.2\textwidth}|}
\hline
\textbf{ID} & \textbf{Priority} & \textbf{Status} & \textbf{Appendix}\\
\hline \hline
\textbf{NFR-1.1} & Must & \textbf{-} & \ref{evaluation:NFR-1.1} \\
\hline
\textbf{NFR-2.1} & Should & \greencheck & \ref{evaluation:NFR-2.1} \\
\textbf{NFR-2.2} & Must & \greencheck & \ref{evaluation:NFR-2.2} \\
\textbf{NFR-2.3} & Must & \greencheck & \ref{evaluation:NFR-2.3} \\
\textbf{NFR-2.4} & Should & \greencheck & \ref{evaluation:NFR-2.4} \\
\hline
\textbf{NFR-3.1} & Should & \greencheck & \ref{evaluation:NFR-3.1} \\
\textbf{NFR-3.2} & Must & \textbf{-} & \ref{evaluation:NFR-3.2}\\
\textbf{NFR-3.3} & Must & \greencheck & \ref{evaluation:NFR-3.3} \\
\hline
\textbf{NFR-4.1} & Must & \greencheck & \ref{evaluation:NFR-4.1} \\
\textbf{NFR-4.2} & Must & \greencheck & \ref{evaluation:NFR-4.2} \\
\textbf{NFR-4.3} & Must & \greencheck & \ref{evaluation:NFR-4.3} \\
\hline
\caption{Requirements Verification (Non-Functional Requirements)}
\label{tab:full_requirements_evaluation_table(NFR)}
\end{longtable}
\subsubsection{Limitations}
\label{subsubsec:limitations}
In this section, the limitations of the system are discussed, following the assessment of the requirements. For some of the requirements, assumptions or simplifications were made, bearing in mind that work is being done to mitigate these assumptions or that the ecosystem is not yet mature enough to handle these features.
\paragraph{EV Owner not requesting information from the Charging Station}
Requirement \ref{evaluation:FR-3.3} - \textit{An EV Owner could be made aware that the CS belongs to the CPO it claims}, was not fulfilled given that the current implementation of the Mobile agent is very limited and does not allow requesting credentials from another agent. This may eventually be achievable if a custom agent is made for this purpose, or if the act of presenting the credential is initiated by the CS. Additionally, although this requirement would add a bit more transparency to the process, it could have harmed user-friendliness.
\paragraph{Lack of ability to delegate credentials in current agent implementations}
Requirement \ref{evaluation:FR-6.1} - \textit{An EV Owner must be able to delegate (temporary) possession of its EV to another driver}, was not fulfilled because the current agents do not implement a feature called "Delegated Credentials". Delegated Credentials are a topic which has not been addressed previously in this document, since their implementation may differ from the current idea and prospect. Essentially, delegated credentials are credentials which are delegated to another entity by the holder of that credential. In the scope of the case handled in this study, a delegated credential would allow an individual to have temporary ownership of the EV, giving that individual power to charge the vehicle. This delegated credential should be chained to the original credential, and, in case the original credential is revoked, the delegated credential also loses its validity. "Delegated Credentials" can also go by the name of "Chained Credentials", but the core idea remains the same. This concept has been presented as an RFC in the Hyperledger Aries ecosystem, labelled \textit{Aries RFC 0104: Chained Credentials}\footnote{\url{https://github.com/hyperledger/aries-rfcs/blob/master/concepts/0104-chained-credentials/README.md}}.
Whether this feature would be present in the ACA-Py agents was discussed with the development team on the official ACA-Py Rocket.Chat channel. The discussion led to the response presented in Figure~\ref{fig:delegated_creds}, which can be seen in the following thread\footnote{\url{https://chat.hyperledger.org/channel/aries-cloudagent-python/thread/QSkomLPYHQsvzyHS6}}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{images/Delegated_creds.png}
\caption{Thread discussing future plans for ACA-Py on the RFC-104: Chained Credentials}
\label{fig:delegated_creds}
\end{figure}
\paragraph{Scalability of the Hyperledger Indy Network}
Requirement \ref{evaluation:NFR-1.1} - \textit{The system must scale according to the growth of the market. Supposing an adoption in the Dutch Market (300k EVs in 2020), the system must be able to support this number of vehicles and the estimated growth (3M EVs by 2030)}, was assessed to the extent possible (as seen in Section~\ref{subsec:benchmark_hyperledger_indy_and_aries}), but the results were inconclusive since they are no longer updated and do not reflect the current scalability of the Hyperledger Indy Network. Hence it is difficult to understand whether the current implementation of the Hyperledger Indy Network would sustain that many EVs. This was the main reason why this requirement was tagged with the \textbf{"-"} tag.
\paragraph{Link between EV/EV Owner and GDPR}
Requirement~\ref{evaluation:NFR-3.2} - \textit{The system must comply with Dutch regulations, including GDPR} was tagged with \textbf{"-"} regarding its fulfillment in the novel system, given the assumption made to link both the EV and the EV Owner's registrations at the TA. The method used in this system is oversimplified, since the CS essentially validates the ownership of the EV by matching the credentialIDs in both credentials. This introduces linkability issues that are not GDPR compliant, so this matter needs to be investigated in the future.
\paragraph{Mobile Agent and GDPR concerns}
Again regarding Requirement~\ref{evaluation:NFR-3.2} - \textit{The system must comply with Dutch regulations, including GDPR}, there was another reason why it was tagged with \textbf{-}: the option to use the Trinsic Wallet agent has its drawbacks, since this agent is regarded as a \textit{thin agent}. This definition comes from the fact that it stores all of the data (credentials) in the cloud, which is not the best privacy-preserving solution and ultimately defeats the goal of SSI, which declares that all PII should be held by the owner of the data. This issue is known, and with a mobile agent that stores the credentials on the device, compliance with GDPR regarding this aspect would ultimately be achieved.
\paragraph{Connection of Charging Station and Electric Vehicle}
In the current implementation of the system, a hard assumption has been made in the way the CS agent connects to the EV. The two are manually connected to demonstrate how the request between the agents can occur, so that the CS can request proof from the EV regarding its registration with a TA. This assumption was made given that the EV has no way to connect to the CS in an out-of-band fashion (using a QR Code), and the fact that the EV does not make use of a Public DID (for security and privacy matters), which in turn rules out the possibility of the CS being the one to initiate the connection with the EV.
Work has been done on how data can be shared between EVs and CSs, as shown in a video presentation by Vector\footnote{\url{https://www.vector.com/int/en/}}, an eMobility-dedicated company that demonstrated the current state-of-the-art protocols, seen in Figure~\ref{fig:ev_cs_protocols}. Here it is possible to see the use of the ISO 15118 standard, named \textit{“Road Vehicles – Vehicle to grid communication interface"}, which specifies how the EV can connect to the grid via CSs and accounts for a bilateral exchange of information between the two parties. Given the current state of the standards, it may be a matter of time until the information that is shared includes a connection request from the EV to the CS, which would then power the rest of the use case.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\linewidth]{images/EV_CS_Protocols.png}
\caption[Current Electric Vehicle and Charging Station protocols]{Current Electric Vehicle and Charging Station protocols \protect \footnotemark}
\label{fig:ev_cs_protocols}
\end{figure}
\footnotetext{\url{https://www.youtube.com/watch?v=S\_llUuyAcqs}}
\subsection{Evaluation of the General Architecture of SSI with IoT}
\label{subsec:evaluation_of_the_general_architecture}
It was evident that the Verifiable Electric Vehicle Charging System as proposed in this work is able to fulfill most of the requirements, while documentation was provided for the ones that were not fulfilled, so that they can be addressed in the future. Despite having presented one use case of Self-Sovereign Identity with Internet of Things devices (EVs and CSs) that shows good prospects for adoption, assessing the general architecture presented in Section~\ref{subsec:architecture_for_ssi_with_iot} requires a more in-depth analysis, making use of other case studies. To this end, more use cases need to be discussed to strengthen this idea. Nonetheless, it was possible to verify that, for IoT devices that do not lack computational power or electricity, such as an EV, there are existing implementations of agents and networks that could be used. On top of that, the general architecture presents the necessary components for any Self-Sovereign Identity system that has agents acting on behalf of any entity, whether humans, organizations, or IoT devices.
{ "alphanum_fraction": 0.774662513, "avg_line_length": 101.65, "ext": "tex", "hexsha": "472513f8b28747a498e4b117cb70b767aa44021f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b1761565ed551c0ffdad4c1d6b9aa652eda3dfed", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "FilipeCapela98/SSI-IoT-PoC", "max_forks_repo_path": "thesis/chapters/8-evaluation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b1761565ed551c0ffdad4c1d6b9aa652eda3dfed", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "FilipeCapela98/SSI-IoT-PoC", "max_issues_repo_path": "thesis/chapters/8-evaluation.tex", "max_line_length": 1997, "max_stars_count": null, "max_stars_repo_head_hexsha": "b1761565ed551c0ffdad4c1d6b9aa652eda3dfed", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "FilipeCapela98/SSI-IoT-PoC", "max_stars_repo_path": "thesis/chapters/8-evaluation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8548, "size": 36594 }
% !TeX spellcheck = en_US
\chapter{Plugin Design and Implementation}
\label{chap:ch4}
This chapter describes how the concept was implemented and presents the design of the plugin. First, a decision had to be made for which \gls{IDE} the extension would be implemented. This decision, as well as some reasoning for it, can be found in \cref{sec:ch4:s1}. Then, the icons conceived in \cref{sec:ch3:s2} needed to be created, which is described in \cref{sec:ch4:s2}. \Cref{sec:ch4:s3} presents the architecture for the prototype. Finally, some implementation details of the plugin, as well as interesting parts of the implementation process, are described and the final prototype presented in \cref{sec:ch4:s4}.
\section{Choosing the IDE}
\label{sec:ch4:s1}
The concept presented in \cref{chap:ch3} is independent of any \gls{IDE}. However, the extension can only be implemented for one \gls{IDE} during the course of this thesis. Therefore, a decision had to be made as to which \gls{IDE} to choose. To support this decision, the popularity, the support for creating plugins, and my prior experience were considered. To determine the popularity, a current online ranking\footnote{\url{https://pypl.github.io/IDE.html}}, which is based on Google statistics, was consulted. According to it, at the time of writing, Visual Studio was the most popular with a 25\% share. The second best was \gls{Eclipse} with 16\%, followed by Android Studio and Visual Studio Code (completely separate from Visual Studio) with 11\% and 9\%. The last two considered were pyCharm on rank 5 with 8\% and IntelliJ IDEA on rank 6 with 6\%. We decided against Android Studio and pyCharm, as they only support the development of Android Apps and Python programs, respectively. All the other \glspl{IDE} support multiple languages, allowing the extension to be used in a more diverse set of scenarios. Of the remaining four candidates, the ability and complexity of creating extensions were researched. The \gls{IDE} with the easiest process to create plugins is Visual Studio Code\footnote{\url{https://code.visualstudio.com/api/get-started/your-first-extension}}. The second best is \gls{Eclipse}, which also has a lot of material helping with creating plugins for it, such as the book by Clayberg et al. \cite{clayberg2006eclipse}. Plugins are a core concept of both and are used to provide almost all features available in these \glspl{IDE}. Next, IntelliJ IDEA also has a fairly simple process for creating plugins\footnote{\url{https://jetbrains.org/intellij/sdk/docs/basics/getting_started.html}}. Finally, Visual Studio also supports creating plugins, but the process is more complex than with the previous three\footnote{\url{https://docs.microsoft.com/en-us/visualstudio/extensibility/starting-to-develop-visual-studio-extensions?view=vs-2019}}. Additionally, Visual Studio Extensions can only be developed on Windows. \Gls{Eclipse} scored second in both of the above criteria, while the winner of each category scores worse in the other. Additionally, I already had experience with developing plugins for it. Therefore, the \gls{Eclipse} \gls{IDE} was chosen as the one for which the extension will be implemented. As the prototype plugin integrates the \gls{Gropius} system into the \gls{Eclipse} \gls{IDE}, it has been named \gls{GropiusEI}.
\section{Icon Creation}
\label{sec:ch4:s2}
To create the icons described in \cref{sec:ch3:s2}, the plan was to combine multiple individual icons, one for each property, into many finished icons.
The center of each icon indicates the type of issue, as that is the most relevant property. The main symbol's color shows the state of the issue, as that is the least essential property. However, to not disadvantage people with partial or full colorblindness, a small black symbol is also overlaid on the main icon. The remaining three properties are visualized as annotations, small symbols displayed in the corners of the icon. All of these parts are drawn as vector graphics so that the final icons can easily be scaled to any size. One challenge while drawing was that Eclipse uses 16x16 pixel icons. Therefore, all finished icons need to convey their information even if they are just 16x16 pixels. This significantly reduces the possible detail in the parts.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{graphics/iconParts.png}
\caption{Issue Icon Parts}
\label{fig:c4:icon_parts}
\end{figure}
\Cref{fig:c4:icon_parts} shows all the parts described above. The first row contains the bugs in all three possible states: open, work-in-progress, and closed. In the second row are the enhancements with the same three states. The third row contains the annotations for the other three properties. The first annotation is used when this issue is the cause of another issue in some other component. In the finished icon, it is located in the top right corner. The second symbol is the annotation for when the issue is caused by another issue and is, therefore, just a symptom of that issue. On the final icons, this is shown in the top left. The third symbol represents the fact that the issue is assigned to the current developer. It is added in the bottom left of the final icon.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{graphics/iconCombinations.png}
\caption{Issue Icon Examples}
\label{fig:c4:icon_combinations}
\end{figure}
In \cref{fig:c4:icon_combinations} some examples of the final icons can be seen. In total, there are 48 different combinations of the above parts. All of those need to be created and then converted into a pixel-based image format in multiple sizes so they can be used. This would be a very tedious manual task for one person, especially as the icons were changed multiple times during the development process. Therefore, a small script was created, which does all that automatically and results in a folder with all combinations as \gls{PNG} files in the specified sizes.
\section{Eclipse Plugin Architecture}
\label{sec:ch4:s3}
One possible approach that was investigated is to implement the plugin as an extension to Mylyn. As described in \cref{ssec:ch2:ss2.3}, Mylyn is an \gls{Eclipse} plugin for managing tasks and can, therefore, also be used to manage issues. However, many of the required features from \cref{sec:ch3:s1} could, to our knowledge, not be implemented in a Mylyn extension without contributing some significant changes to Mylyn itself. Any changes proposed for Mylyn would need to be discussed with the community and justified to the maintainers, causing, in the best case, a major delay for the implementation of the extension. In the worst case, some changes may be rejected entirely, preventing the implementation of some of the required features. Therefore, it was decided to create a new \gls{Eclipse} extension. As described in \cref{sec:ch3:s3}, the plugin is split into several components for portability reasons.
Therefore, the prototype actually consists of a total of four Eclipse plugins, as can be seen in \cref{fig:c4:component_diagram}. Moreover, it was decided to use a model-driven software development approach similar to the one described by Beydeda et al. \cite{beydeda2005model}. As a big part of the tool is a list and a form for viewing and manipulating data, which can be described formally, large parts of these two \gls{UI} elements can be automatically generated that way.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{graphics/Component_Diagram.png}
\caption{Component Diagram of the Eclipse Extension}
\caption*{\footnotesize{The same figure can be found enlarged in \cref{chap:appendix:design_docs}}}
\label{fig:c4:component_diagram}
\end{figure}
The plugin on the top left of \cref{fig:c4:component_diagram}, called \lstinline|data|, contains the data model for the plugin. It is first modeled as a \gls{UML} model, then converted to an \gls{EMF} Ecore model, and then the model code is generated based on that. Ecore is the metamodel used to represent models in the \gls{EMF} world \cite{steinberg2008emf}. The plugin \lstinline|api-binding| is responsible for all communication with the \gls{Gropius} back-end through the \gls{Gropius} \gls{API}. Additionally, it converts between the data format defined in the \lstinline|data| plugin and the format used by the \gls{API}. As mentioned in \cref{ssec:ch2:ss1.2}, the \gls{API} provided by the Gropius system uses \gls{graphql}. Therefore, the client needs to specify exactly what data is required and create mutations for any changes to be sent to the server. To be able to create these mutations, the difference between the old and the new version of the data needs to be recorded. Both the specification of the required data and the creation of mutations are also done by this component. The third plugin, just called \lstinline|ui|, implements all the \gls{UI} elements. It is the only plugin of the four that directly interacts with the \gls{Eclipse} \gls{IDE}. This way, it is the only plugin that needs major changes when the tool is ported to another \gls{IDE} that supports Java-based plugins. It uses the \gls{Parsley} framework introduced by Lorenzo Bettini \cite{bettini2014developing} to generate the issue list and issue details view based on the data model from the \lstinline|data| component. The \lstinline|logic| plugin holds all classes containing logic needed by the \lstinline|ui| that is not \gls{Eclipse}-specific. One example would be the classes for filtering the issues based on some properties. They are used by the filters for the issue list but do not require any \gls{IDE}-specific functions. A separate \gls{Eclipse} plugin for the messaging component would likely make sense. However, it was not modeled, as the idea of the messaging component was dropped from this thesis due to time constraints.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{graphics/dataClassDiagram.png}
\caption{Class Diagram of the Data Model}
\caption*{\footnotesize{The same figure can be found enlarged in \cref{chap:appendix:design_docs}}}
\label{fig:c4:data_class_diagram}
\end{figure}
The class diagram of the data model for \gls{GropiusEI} used to generate the \lstinline|data| component can be seen in \cref{fig:c4:data_class_diagram}. It is based on the Domain Meta Model shown in \cref{ssec:ch2:ss1.2}.
At the core of the data model is the \lstinline|CrossComponentIssue|, with a title, text body, and a flag indicating whether it is open. Additionally, it has various references to other objects, such as the assigned developers. All \lstinline|CrossComponentIssue|s are contained within a \lstinline|CrossComponentIssueManagementSystem|. It also has a list of all Labels, Developers, and Components relevant for the associated issues. The tool always works with the issues contained in exactly one \lstinline|CrossComponentIssueManagementSystem|. One important reference for the functionality of the tool is the \lstinline|linkedIssues| list of a \lstinline|CrossComponentIssue|. It allows \gls{GropiusEI} to show users linked issues as well as to let users navigate between them. Another core concept of this model is the \lstinline|Location|. One issue can be at zero or more \lstinline|Location|s, which specify the resource and line the \lstinline|Location| is at. Additionally, a \lstinline|Location| can be in a \lstinline|Component| or an \lstinline|Interface|. \lstinline|Interface|s can be provided by exactly one \lstinline|Component| and consumed by any number of them.
\section{Eclipse Plugin Implementation}
\label{sec:ch4:s4}
During the implementation phase, one of the biggest challenges was to understand and to correctly use \gls{Parsley} as well as \gls{EMF}. Not a lot of documentation or tutorials could be found for either of them. However, as both are open source, answers to specific questions could be searched for and found in the code itself. Yet, this process is rather tedious and slow, so a lot of time was spent just looking through these frameworks' source code. \Gls{Parsley} is well suited for generating default \glspl{UI} for data models. It has basic support to customize them, but offers very little existing customization out of the box. Therefore, a lot of the implementation work during this thesis was improving classes that already exist in \gls{Parsley} and adding new variants of them that allow further customization. Most of them could be contributed to \gls{Parsley} after a little cleanup.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{graphics/screenshot_improvement_fromControl_arrow.png}
\caption{Difference between original and new Control for Lists}
\label{fig:c4:screenshot_improvement_formControl}
\end{figure}
One example is the \gls{UI} element displayed inside a form for the value of properties that are lists. Parsley only supports displaying the string representation of the complete list in one label. The new implementation of the same class allows child classes to customize this by overriding some methods. Based on this, some other classes were implemented using a grid of controls instead of a single label. This allows for nicer formatting when there are too many elements for one line. Additionally, this allows the use of links as the control for each element, which is used, for example, by the \lstinline|linkedIssues| property. The difference between the original and the improved version can be seen in \cref{fig:c4:screenshot_improvement_formControl}. The control for the list \lstinline|Labels| is created by the original, and you can see the last label is not entirely readable. The other controls are the new ones, with \lstinline|Developers| using a grid of labels and the others a grid of links. A significant amount of time was also spent working on the \lstinline|api-binding| plugin.
As the \gls{Gropius} back-end was not ready in time, the component was implemented to work with a simple mock-up of the back-end. Because the data from the \gls{API} is transformed into the data model generated with \gls{EMF}, the \gls{graphql} \gls{API} cannot be used as intended. Instead, all relevant data needs to be queried at once every time the data is reloaded from the server. This problem was detected too late to change the overall architecture of the plugin, mainly because the decision to switch to \gls{graphql} for the \gls{Gropius} \gls{API} was made after the concept for \gls{GropiusEI} was already done. However, to achieve this, a complex query needs to be sent to the server. Additionally, no appropriate \gls{java} library could be found for using \gls{graphql} \glspl{API} without manually generating the queries and manually parsing the responses. In the end, some open-source Ruby scripts\footnote{\url{https://github.com/Shopify/graphql_java_gen}} were used to generate helper classes from the \gls{Gropius} \gls{graphql} schema. The next problem was that the mock-up of the back-end used to implement the \lstinline|api-binding| component is not sophisticated enough. The returned data is not consistent in itself, preventing a correct transformation into the \gls{GropiusEI} data model. As the real back-end would not be ready in time, the decision was made to halt implementing the \lstinline|api-binding| plugin for now. For the remainder of the thesis, the data is generated by a mock-data generator and stored in a file instead.
\subsection{The Prototype}
Due to the lack of time, many of the features described in \cref{sec:ch3:s3} were not implemented. Everything that was implemented is presented in the next few paragraphs.
\begin{figure}[!h]
\centering
\includegraphics[width=0.45\textwidth]{graphics/screenshot_gropius_ei_issue_list.png}
\caption{Prototype Issue List}
\label{fig:c4:screenshot_issue_list}
\end{figure}
As stated in \cref{sec:ch3:s3}, the main elements of \gls{GropiusEI} are the issue list and the issue detail view. Deviating from the concept, those two elements are implemented in one big view element, as shown in \cref{fig:c4:screenshot_issue_list}. This is the case because \gls{Parsley} provides such a view, and implementing interactions between the two parts is easier that way. The issue list is in the upper half of the picture. It consists of multiple columns, each responsible for one property of the issues. By default, only the title and label columns are shown, but the user can change this through the menu on the top right. In theory, all properties that can be seen in the form in the lower half of the image can also be displayed as columns in the list. However, some of them, especially the text body, do not make much sense to display in the list, as they can take a lot of space. In the first column of the list, the icons from \ref{sec:ch4:s2} are displayed in addition to the property of that column. Currently, the annotations for an issue being caused by another or causing another issue are not used because that feature has not yet been implemented. The entries of columns with a list of values are not sorted in any way but display the values in the order they are contained in the respective list for this property. This list is initially ordered by when the value was added but can be reordered by the user in the dialog for editing that property.
As can be seen in \cref{fig:c4:screenshot_issue_list_filtered_sorted}, the issue list also supports sorting by a column. This is done by clicking on the column header. Furthermore, filtering is supported using the buttons in the bar on the top edge of the view. Currently, only the filters \lstinline|Only show open issues| and \lstinline|Only show issues assigned to me| are implemented. However, all the other filters specified in \cref{sec:ch3:s3} should be fairly straightforward to include.

\begin{figure}[!h]
	\centering
	\includegraphics[width=0.45\textwidth]{graphics/screenshot_gropius_ei_issue_list_filtered_sorted.png}
	\caption{Prototype Issue List with Filter and Sorting}
	\label{fig:c4:screenshot_issue_list_filtered_sorted}
\end{figure}

\Cref{fig:c4:screenshot_issue_list} also shows the detail view, which additionally serves as the form for editing the selected issue. At the top of that form, the icon and the title of the issue can be seen. Below that, all properties are displayed. As in the issue list, properties with multiple values are shown in the order they appear in the data rather than being sorted. For \lstinline|Linked Issues| and \lstinline|Locations|, each value is a link that selects the corresponding issue in the list or jumps to that location in the code, respectively. To change a textual property, the user can simply type in the corresponding text area. Properties that contain a list of values can be modified using the button to their right.

\begin{figure}[!h]
	\centering
	\includegraphics[width=0.8\textwidth]{graphics/screenshot_gropius_ei_edit_list.png}
	\caption{Prototype Edit Developers Dialog}
	\label{fig:c4:screenshot_edit_list}
\end{figure}

When that button is pressed, a dialog similar to the one in \cref{fig:c4:screenshot_edit_list} is opened. With it, the user can select the values for that issue from a list of all possible values for the property. This dialog also allows the reordering of the values mentioned above. Changes made in the dialog are applied when the \lstinline|Ok| button is pressed.

\begin{figure}[!h]
	\centering
	\includegraphics[width=0.8\textwidth]{graphics/screenshot_gropius_ei_edit_locations.png}
	\caption{Prototype Edit Locations Dialog}
	\label{fig:c4:screenshot_edit_locations}
\end{figure}

The only exception among the editing dialogs is the \lstinline|locations| property. For it, a dialog (shown in \cref{fig:c4:screenshot_edit_locations}) is opened that allows creating and editing locations. Using the \lstinline|New| button, a new location is added to the list of locations for that issue. It, as well as any existing location, can then be edited in the left half of the dialog. After any of these changes, the issue list is in an unsaved state, indicated by a star next to the view name in the tab of the view. To save the changes, all the usual ways to save a resource in \gls{Eclipse} can be used, such as the save button in the toolbar. The green plus button seen above the issue list in \cref{fig:c4:screenshot_issue_list} can be used to create new issues. It adds a new issue with empty properties to the list, which can then be edited using the form below. The feature for creating issues directly from some lines of code has not yet been implemented.
{ "alphanum_fraction": 0.7949178022, "avg_line_length": 74.7137681159, "ext": "tex", "hexsha": "8c952dc986f18e72a1c8ddc5bf5bff402dd190a5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b51b7445cf949314c322f4d0acdabdbc0162cefc", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "neumantm/BA-thesis", "max_forks_repo_path": "content/design_and_implementation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b51b7445cf949314c322f4d0acdabdbc0162cefc", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "neumantm/BA-thesis", "max_issues_repo_path": "content/design_and_implementation.tex", "max_line_length": 250, "max_stars_count": null, "max_stars_repo_head_hexsha": "b51b7445cf949314c322f4d0acdabdbc0162cefc", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "neumantm/BA-thesis", "max_stars_repo_path": "content/design_and_implementation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4933, "size": 20621 }
\newcommand{\Val}{\fun{Val}} \newcommand{\POV}[1]{\ensuremath{\mathsf{PresOfVal}(\mathsf{#1})}} \newcommand{\DBE}[2]{\ensuremath{\mathsf{DBE}({#1},~{#2})}} \newcommand{\DGO}[2]{\ensuremath{\mathsf{DGO}({#1},~{#2})}} \newcommand{\transtar}[2]{\xlongrightarrow[\textsc{#1}]{#2}\negthickspace^{*}} \section{Properties} \label{sec:properties} This section describes the properties that the ledger should have. The goal is to to include these properties in the executable specification to enable e.g. property-based testing or formal verification. % TODO - the hand proofs of preservation of ada and non-negative deposit % pot are woefully out of date, mostly due to the removal of decaying deposits. %\input{hand_proofs} \subsection{Header-Only Validation} \label{sec:header-only-validation} The header-only validation properties of the Shelley Ledger are the analogs of those from Section 8.1 of \cite{byron_chain_spec}. In any given chain state, the consensus layer needs to be able to validate the block headers without having to download the block bodies. Property~\ref{prop:header-only-validation} states that if an extension of a chain that spans less than $\StabilityWindow$ slots is valid, then validating the headers of that extension is also valid. This property is useful for its converse: if the header validation check for a sequence of headers does not pass, then we know that the block validation that corresponds to those headers will not pass either. First we define the header-only version of the $\mathsf{CHAIN}$ transition, which we call $\mathsf{CHAINHEAD}$. It is very similiar to $\mathsf{CHAIN}$, the only difference being that it does not call $\mathsf{BBODY}$. \begin{figure}[ht] \begin{equation}\label{eq:chain-head} \inference[ChainHead] { \var{bh} \leteq \bheader{block} & \var{gkeys} \leteq \fun{getGKeys}~\var{nes} & \var{s} \leteq \bslot{(\bhbody{bh})} \\ (\wcard,~\wcard,~\wcard,~(\wcard,~\wcard,~\wcard,~\var{pp}),~\wcard,~\wcard,\wcard) \leteq \var{nes} \\~\\ \fun{chainChecks}~\var{pp}~\var{bh} \\~\\ { {\begin{array}{c} \var{gkeys} \\ \end{array}} \vdash\var{nes}\trans{\hyperref[fig:rules:tick]{tick}}{\var{s}}\var{nes'} } \\~\\ (\var{e_1},~\wcard,~\wcard,~\wcard,~\wcard,~\wcard,\wcard) \leteq\var{nes} \\ (\var{e_2},~\wcard,~\wcard,~\var{es},~\wcard,~\var{pd},\var{osched}) \leteq\var{nes'} \\ (\wcard,~\wcard,\var{ls},~\wcard,~\var{pp'})\leteq\var{es}\\ ( \wcard, ( (\wcard,~\wcard,~\wcard,~\wcard,~\wcard,~\var{genDelegs}),~ (\wcard,~\wcard,~\wcard)))\leteq\var{ls}\\ \var{ne} \leteq \var{e_1} \neq \var{e_2}\\ { {\begin{array}{c} \var{pp'} \\ \var{osched} \\ \var{pd} \\ \var{genDelegs} \\ \var{s_{now}} \\ \var{ne} \end{array}} \vdash {\left(\begin{array}{c} \var{cs} \\ \var{lab} \\ \eta_0 \\ \eta_v \\ \eta_c \\ \eta_h \\ \end{array}\right)} \trans{\hyperref[fig:rules:prtcl]{prtcl}}{\var{bh}} {\left(\begin{array}{c} \var{cs'} \\ \var{lab'} \\ \eta_0' \\ \eta_v' \\ \eta_c' \\ \eta_h' \\ \end{array}\right)} } \\~\\~\\ } { \var{s_{now}} \vdash {\left(\begin{array}{c} \var{nes} \\ \var{cs} \\ \eta_0 \\ \eta_v \\ \eta_c \\ \eta_h \\ \var{lab} \\ \end{array}\right)} \trans{chainhead}{\var{bh}} {\left(\begin{array}{c} \varUpdate{\var{nes}'} \\ \varUpdate{\var{cs}'} \\ \varUpdate{\eta_0'} \\ \varUpdate{\eta_v'} \\ \varUpdate{\eta_c'} \\ \varUpdate{\eta_h'} \\ \varUpdate{\var{lab}'} \\ \end{array}\right)} } \end{equation} \caption{Chain-Head rules} \label{fig:rules:chainhead} \end{figure} \begin{property}[Header only validation]\label{prop:header-only-validation} For all environments $e$, states $s$ with 
slot number $t$\footnote{i.e. the component $\var{s_\ell}$ of the last applied block of $s$ equals $t$}, and chain extensions $E$ with corresponding headers $H$ such that:
%
$$ 0 \leq t_E - t \leq \StabilityWindow $$
%
we have:
%
$$ e \vdash s \transtar{\hyperref[fig:rules:chain]{chain}}{E} s' \implies e \vdash s \transtar{\hyperref[fig:rules:chainhead]{chainhead}}{H} s'' $$
where $t_E$ is the maximum slot number appearing in the blocks contained in $E$, and $H$ is obtained from $E$ by applying $\fun{bheader}$ to each block in $E$.
\end{property}

\begin{property}[Body only validation]\label{prop:body-only-validation}
For all environments $e$, states $s$ with slot number $t$, and chain extensions $E = [b_0, \ldots, b_n]$ with corresponding headers $H$ such that:
$$ 0 \leq t_E - t \leq \StabilityWindow $$
we have that for all $i \in [1, n]$:
$$ e \vdash s \transtar{\hyperref[fig:rules:chainhead]{chainhead}}{H} s_{h} \wedge e \vdash s \transtar{\hyperref[fig:rules:chain]{chain}}{[b_0 \ldots b_{i-1}]} s_{i-1} \implies e_{i-1} \vdash s_{i-1}\trans{\hyperref[fig:rules:chainhead]{chainhead}}{h_i} s'_{h} $$
where $t_E$ is the maximum slot number appearing in the blocks contained in $E$.
\end{property}

Property~\ref{prop:body-only-validation} states that if we validate a sequence of headers, we can validate their bodies independently and be sure that the blocks will pass the chain validation rule. To see this, given an environment $e$ and initial state $s$, assume that a sequence of headers $H = [h_0, \ldots, h_n]$ corresponding to blocks in $E = [b_0, \ldots, b_n]$ is valid according to the $\mathsf{chainhead}$ transition system:
%
$$ e \vdash s \transtar{\hyperref[fig:rules:chainhead]{chainhead}}{H} s' $$
%
Assume the bodies of $E$ are valid according to the $\mathsf{bbody}$ rules, but $E$ is not valid according to the $\mathsf{chain}$ rule. Assume that there is a $b_j \in E$ such that it is \textbf{the first block} that does not pass the $\mathsf{chain}$ validation. Then:
%
$$ e \vdash s \transtar{\hyperref[fig:rules:chain]{chain}}{[b_0, \ldots b_{j-1}]} s_j $$
But by Property~\ref{prop:body-only-validation} we know that
%
$$ e_j \vdash s_j \trans{\hyperref[fig:rules:chainhead]{chainhead}}{h_j} s_{j+1} $$
which means that block $b_j$ has valid headers, and this in turn means that the validation of $b_j$ according to the chain rules must have failed because it contained an invalid block body. But this contradicts our assumption that the block bodies were valid.

\begin{property}[Existence of roll back function]\label{prop:roll-back-funk}
There exists a function $\fun{f}$ such that for all chains
$$C = C_0 ; b; C_1$$
we have that if for all alternative chains $C'_1$, $\size{C'_1} \leq \frac{\StabilityWindow}{2}$, with corresponding headers $H'_1$
$$ e \vdash s_0 \transtar{\hyperref[fig:rules:chain]{chain}}{C_0;b} s_1 \transtar{\hyperref[fig:rules:chain]{chain}}{C_1} s_2 \wedge e \vdash s_1 \transtar{\hyperref[fig:rules:chain]{chain}}{C_1'} s'_1 \implies (\fun{f}~(\bheader{b})~s_2) \transtar{\hyperref[fig:rules:chainhead]{chainhead}}{H'_1} s_h $$
\end{property}

Property~\ref{prop:roll-back-funk} expresses the fact that there is a function that allows us to recover the header-only state by rolling back at most $k$ blocks, and to use this state to validate the headers of an alternate chain. Note that this property is not inherent to the $\mathsf{chain}$ rules and can be trivially satisfied by any function that keeps track of the history of the intermediate chain states up to $k$ blocks back.
This property is stated here so that it can be used as a reference for the tests in the consensus layer, which uses the rules presented in this document.

\subsection{Validity of a Ledger State}
\label{sec:valid-ledg-state}

Many properties only make sense when applied to a valid ledger state. In informal terms, a valid ledger state $l$ can only be reached when starting from an initial state $l_{0}$ (ledger in the genesis state) and only executing LEDGER state transition rules as specified in Section~\ref{sec:ledger-trans}, which change either the UTxO or the delegation state.

\begin{figure}[ht]
  \centering
  \begin{align*}
    \genesisId & \in & \TxId \\
    \genesisTxOut & \in & \TxOut \\
    \genesisUTxO & \coloneqq & (\genesisId, 0) \mapsto \genesisTxOut \\
    \ledgerState & \in &
    \left(
    \begin{array}{c}
      \UTxOState \\
      \DPState
    \end{array}
    \right)\\
    && \\
    \fun{getUTxO} & \in & \UTxOState \to \UTxO \\
    \fun{getUTxO} & \coloneqq & (\var{utxo}, \wcard, \wcard, \wcard) \to \var{utxo}
  \end{align*}
  \caption{Definitions and Functions for Valid Ledger State}
  \label{fig:valid-ledger}
\end{figure}

In Figure~\ref{fig:valid-ledger}, \genesisId{} marks the transaction identifier of the initial coin distribution, and \genesisTxOut{} represents the initial UTxO. It should be noted that no corresponding inputs exist, i.e., the transaction inputs are the empty set for the initial transaction. The function \fun{getUTxO} extracts the UTxO from a UTxO state.

\begin{definition}[\textbf{Valid Ledger State}]
  \begin{multline*}
    \forall l_{0},\ldots,l_{n} \in \LState, lenv_{0},\ldots,lenv_{n} \in \LEnv,
    l_{0} =
    \left(
    \begin{array}{c}
      \genesisUTxOState \\
      \left(
      \begin{array}{c}
        \emptyset\\
        \emptyset
      \end{array}
      \right)
    \end{array}
    \right) \\
    \implies \forall 0 < i \leq n, (\exists tx_{i} \in \Tx, lenv_{i-1}\vdash l_{i-1} \trans{ledger}{tx_{i}} l_{i}) \implies \applyFun{validLedgerState} l_{n}
  \end{multline*}
  \label{def:valid-ledger-state}
\end{definition}

Definition~\ref{def:valid-ledger-state} defines a valid ledger state as one reachable from the genesis state via valid LEDGER STS transitions. This gives a constructive rule for how to reach a valid ledger state.

\subsection{Ledger Properties}
\label{sec:ledger-properties}

The following properties state the desired features of updating a valid ledger state.

\begin{property}[\textbf{Preserve Balance}]
  \begin{multline*}
    \forall \var{l}, \var{l'} \in \LState: \applyFun{validLedgerstate}{l}, l=(u,\wcard,\wcard,\wcard), l' = (u',\wcard,\wcard,\wcard)\\
    \implies \forall \var{tx} \in \Tx, lenv \in\LEnv, lenv \vdash\var{u} \trans{utxow}{tx} \var{u'} \\
    \implies \applyFun{destroyed}{pc~utxo~stkCreds~rewards~tx} = \applyFun{created}{pc~stPools~tx}
  \end{multline*}
  \label{prop:ledger-properties-1}
\end{property}

Property~\ref{prop:ledger-properties-1} states that for each valid ledger $l$, if a transaction $tx$ is added to the ledger via the state transition rule UTXOW, yielding the new ledger state $l'$, the balance of the UTxOs in $l$ equals the balance of the UTxOs in $l'$ in the sense that the amount of created value in $l'$ equals the amount of destroyed value in $l$. This means that the total amount of value is left unchanged by a transaction.
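To indicate how a property of this kind can be turned into an executable check, the sketch below shows a property-based test of value preservation for a deliberately simplified, stand-alone ledger model. It is purely illustrative: the dictionary-based UTxO, the helper functions, and the use of the Python \texttt{hypothesis} library are assumptions made for this example only and are not part of the executable specification referred to above.

\begin{verbatim}
# Illustrative only: a toy value-preservation property in the style of
# property-based testing; this is NOT the executable specification.
from hypothesis import given, strategies as st

def apply_tx(utxo, tx):
    # Remove the spent entries and add the new outputs (toy semantics).
    remaining = {k: v for k, v in utxo.items() if k not in tx["inputs"]}
    remaining.update(tx["outputs"])
    return remaining

def balance(utxo):
    return sum(utxo.values())

@given(st.dictionaries(st.integers(),
                       st.integers(min_value=0, max_value=10**6),
                       min_size=1))
def test_value_is_preserved(utxo):
    # Spend every entry, pay a fee of 1 coin when possible, and return the
    # remainder as a single new output; destroyed value must equal
    # created value plus the fee.
    fee = min(1, balance(utxo))
    tx = {"inputs": set(utxo), "outputs": {("new", 0): balance(utxo) - fee}}
    assert balance(apply_tx(utxo, tx)) + fee == balance(utxo)
\end{verbatim}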
\begin{property}[\textbf{Preserve Balance Restricted to TxIns in Balance of TxOuts}]
  \begin{multline*}
    \forall \var{l}, \var{l'} \in \ledgerState: \applyFun{validLedgerstate}{l}, l=(u,\wcard,\wcard,\wcard), l' = (u',\wcard,\wcard,\wcard)\\
    \implies \forall \var{tx} \in \Tx, lenv \in\LEnv, lenv \vdash \var{u} \trans{utxow}{tx} \var{u'} \\
    \implies \fun{ubalance}(\applyFun{txins}{tx} \restrictdom \applyFun{getUTxO}{u}) = \fun{ubalance}(\applyFun{outs}{tx}) + \applyFun{txfee}{tx} + depositChange
  \end{multline*}
  \label{prop:ledger-properties-2}
\end{property}

Property~\ref{prop:ledger-properties-2} states a slightly more detailed relation between the balance changes. For ledgers $l, l'$ and a transaction $tx$ as above, the balance of the UTxOs of $l$ restricted to those whose domain is in the set of transaction inputs of $tx$ equals the balance of the transaction outputs of $tx$ plus the transaction fee and the change in the deposits $depositChange$~(cf.~Fig.~\ref{fig:rules:utxo-shelley}).

\begin{property}[\textbf{Preserve Outputs of Transaction}]
  \begin{multline*}
    \forall \var{l}, \var{l'} \in \ledgerState: \applyFun{validLedgerstate}{l}, l=(u,\wcard,\wcard,\wcard), l' = (u',\wcard,\wcard,\wcard)\\
    \implies \forall \var{tx} \in \Tx, lenv \in\LEnv, lenv \vdash \var{u} \trans{utxow}{tx} \var{u'}
    \implies \forall \var{out} \in \applyFun{outs}{tx}, out \in \applyFun{getUTxO}{u'}
  \end{multline*}
  \label{prop:ledger-properties-3}
\end{property}

Property~\ref{prop:ledger-properties-3} states that for all ledger states $l, l'$ and transaction $tx$ as above, all output UTxOs of $tx$ are in the UTxO set of $l'$, i.e., they are now available as unspent transaction outputs.

\begin{property}[\textbf{Eliminate Inputs of Transaction}]
  \begin{multline*}
    \forall \var{l}, \var{l'} \in \ledgerState: \applyFun{validLedgerstate}{l}, l=(u,\wcard,\wcard,\wcard), l' = (u',\wcard,\wcard,\wcard)\\
    \implies \forall \var{tx} \in \Tx, lenv \in\LEnv, lenv \vdash \var{u} \trans{utxow}{tx} \var{u'}
    \implies \forall \var{in} \in \applyFun{txins}{tx}, in \not\in \fun{dom}(\applyFun{getUTxO}{u'})
  \end{multline*}
  \label{prop:ledger-properties-4}
\end{property}

Property~\ref{prop:ledger-properties-4} states that for all ledger states $l, l'$ and transaction $tx$ as above, none of the transaction inputs $in$ of $tx$ is in the domain of the UTxO of $l'$, i.e., they are no longer available to spend.

\begin{property}[\textbf{Completeness and Collision-Freeness of new Transaction Ids}]
  \begin{multline*}
    \forall \var{l}, \var{l'} \in \ledgerState: \applyFun{validLedgerstate}{l}, l=(u,\wcard,\wcard,\wcard), l' = (u',\wcard,\wcard,\wcard)\\
    \implies \forall \var{tx} \in \Tx, lenv \in\LEnv, lenv \vdash \var{u} \trans{utxow}{tx} \var{u'} \\
    \implies \forall ((txId', \wcard) \mapsto \wcard) \in \applyFun{outs}{tx}, ((txId, \wcard) \mapsto \wcard) \in\applyFun{getUTxO}{u} \implies \var{txId'} \neq \var{txId}
  \end{multline*}
  \label{prop:ledger-properties-5}
\end{property}

Property~\ref{prop:ledger-properties-5} states that for ledger states $l, l'$ and a transaction $tx$ as above, the UTxOs of $l'$ contain all newly created UTxOs and the transaction id referred to by each new UTxO is not used in the UTxO set of $l$.
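As a concrete illustration of Properties~\ref{prop:ledger-properties-2}--\ref{prop:ledger-properties-5} (the numbers are purely illustrative), consider a transaction $tx$ that consumes inputs worth $10$ coin, creates outputs worth $7$ coin, pays a fee of $1$ coin and registers a staking credential whose deposit is $2$ coin, so that $depositChange = 2$. Then
\[
\fun{ubalance}(\applyFun{txins}{tx} \restrictdom \applyFun{getUTxO}{u}) = 10 = 7 + 1 + 2 = \fun{ubalance}(\applyFun{outs}{tx}) + \applyFun{txfee}{tx} + depositChange,
\]
the outputs worth $7$ coin all appear in the UTxO of $l'$ (Property~\ref{prop:ledger-properties-3}), the consumed inputs are no longer spendable in $l'$ (Property~\ref{prop:ledger-properties-4}), and the id of $tx$ under which these new outputs are recorded does not collide with any transaction id already present in the UTxO of $l$ (Property~\ref{prop:ledger-properties-5}).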
\begin{property}[\textbf{Absence of Double-Spend}] \begin{multline*} \forall l_{0},\ldots,l_{n} \in \ledgerState, l_{0} = \left( \begin{array}{c} \left\{ \genesisUTxO \right\} \\ \left( \begin{array}{c} \emptyset\\ \emptyset \end{array} \right) \end{array} \right) \wedge \applyFun{validLedgerState} l_{n}, l_{i}=(u_{i},\wcard,\wcard,\wcard)\\ \implies \forall 0 < i \leq n, tx_{i} \in \Tx, lenv_{i}\in\LEnv, lenv_{i} \vdash u_{i-1} \trans{ledger}{tx_{i}} u_{i} \wedge \applyFun{validLedgerState} l_{i} \\ \implies \forall j < i, \applyFun{txins}{tx_{j}} \cap \applyFun{txins}{tx_{i}} = \emptyset \end{multline*} \label{prop:ledger-properties-no-double-spend} \end{property} Property~\ref{prop:ledger-properties-no-double-spend} states that for each valid ledger state $l_{n}$ reachable from the genesis state, each transaction $t_{i}$ does not share any input with any previous transaction $t_{j}$. This means that each output of a transition is spent at most once. \subsection{Ledger State Properties for Delegation Transitions} \label{sec:ledg-prop-deleg} \begin{figure}[ht] \centering \begin{align*} \fun{getStDelegs} & \in & \DState \to \powerset \Credential \\ \fun{getStDelegs} & \coloneqq & (\var{stkCreds}, \wcard, \wcard,\wcard,\wcard,\wcard) \to \var{stkCreds} \\ &&\\ \fun{getRewards} & \in & \DState \to (\AddrRWD \mapsto \Coin) \\ \fun{getRewards} & \coloneqq & (\wcard, \var{rewards}, \wcard,\wcard,\wcard,\wcard) \to \var{rewards} \\ &&\\ \fun{getDelegations} & \in & \DState \to (\Credential \mapsto \KeyHash) \\ \fun{getDelegations} & \coloneqq & (\wcard, \wcard, \var{delegations},\wcard,\wcard,\wcard) \to \var{delegations} \\ &&\\ \fun{getStPools} & \in & \LState \to (\KeyHash \mapsto \DCertRegPool) \\ \fun{getStPools} & \coloneqq & (\wcard, (\wcard, (\var{stpools},\wcard,\wcard,\wcard))) \to \var{stpools} \\ &&\\ \fun{getRetiring} & \in & \LState \to (\KeyHash \mapsto \Epoch) \\ \fun{getRetiring} & \coloneqq & (\wcard, (\wcard, (\wcard, \wcard, \var{retiring},\wcard))) \to \var{retiring} \\ \end{align*} \caption{Definitions and Functions for Stake Delegation in Ledger States} \label{fig:stake-delegation-functions} \end{figure} \begin{property}[\textbf{Registered Staking Credential with Zero Rewards}] \begin{multline*} \forall \var{l}, \var{l'} \in \ledgerState: \applyFun{validLedgerstate}{l}, l = (\wcard, ((d, \wcard), \wcard)), l' = (\wcard, ((d',\wcard), \wcard)), dEnv\in\DEnv \\ \implies \forall \var{c} \in \DCertRegKey, dEnv\vdash \var{d} \trans{deleg}{c} \var{d'} \implies \applyFun{cwitness}{c} = \var{hk}\\ \implies hk\not\in \fun{getStDelegs}~\var{d} \implies \var{hk} \in \applyFun{getStDelegs}{d'} \wedge (\applyFun{getRewards}\var{d'})[\fun{addr_{rwd}}{hk}] = 0 \end{multline*} \label{prop:ledger-properties-6} \end{property} Property~\ref{prop:ledger-properties-6} states that for each valid ledger state $l$, if a delegation transaction of type $\DCertRegKey$ is executed, then in the resulting ledger state $l'$, the set of staking credential of $l'$ includes the credential $hk$ associated with the key registration certificate and the associated reward is set to 0 in $l'$. 
\begin{property}[\textbf{Deregistered Staking Credential}]
  \begin{multline*}
    \forall \var{l}, \var{l'} \in \ledgerState: \applyFun{validLedgerstate}{l}, l = (\wcard, (d, \wcard)), l' = (\wcard, (d', \wcard)), dEnv\in\DEnv \\
    \implies \forall \var{c} \in \DCertDeRegKey, dEnv\vdash\var{d} \trans{deleg}{c} \var{d'} \implies \applyFun{cwitness}{c} = \var{hk}\\
    \implies \var{hk} \not\in \applyFun{getStDelegs}{d'} \wedge hk\not\in \left\{ \fun{stakeCred_{r}}~sc\vert sc\in\fun{dom}(\applyFun{getRewards}{d'}) \right\}\\
    \wedge hk \not\in \fun{dom}(\applyFun{getDelegations}{d'})
  \end{multline*}
  \label{prop:ledger-properties-7}
\end{property}

Property~\ref{prop:ledger-properties-7} states that for $l, l'$ as above but with a delegation transition of type $\DCertDeRegKey$, the staking credential $hk$ associated with the deregistration certificate is not in the set of staking credentials of $l'$ and is not in the domain of either the rewards or the delegation map of $l'$.

\begin{property}[\textbf{Delegated Stake}]
  \begin{multline*}
    \forall \var{l}, \var{l'} \in \ledgerState: \applyFun{validLedgerstate}{l}, l = (\wcard, (d,\wcard)), l' = (\wcard, (d',\wcard)), dEnv\in\DEnv \\
    \implies \forall \var{c} \in \DCertDeleg, dEnv \vdash\var{d} \trans{deleg}{c} \var{d'} \implies \applyFun{cwitness}{c} = \var{hk}\\
    \implies \var{hk} \in \applyFun{getStDelegs}{d'} \wedge (\applyFun{getDelegations}{d'})[hk] = \applyFun{dpool}{c}
  \end{multline*}
  \label{prop:ledger-properties-8}
\end{property}

Property~\ref{prop:ledger-properties-8} states that for $l, l'$ as above but with a delegation transition of type $\DCertDeleg$, the staking credential $hk$ associated with the delegation certificate is in the set of staking credentials of $l'$ and delegates to the staking pool associated with the delegation certificate in $l'$.

\begin{property}[\textbf{Genesis Keys are Always All Delegated}]
  \label{prop:genkeys-delegated}
  \begin{multline*}
    \forall \var{l}, \var{l'} \in \LState: \applyFun{validLedgerstate}{l},\\
    \implies \forall \Gamma \in \seqof{\Tx}, env \in (\Slot \times \PParams), \\
    env \vdash\var{l} \trans{ledgers}{\Gamma} \var{l'} \implies |genDelegs| = 7
  \end{multline*}
\end{property}

Property \ref{prop:genkeys-delegated} states that all seven genesis keys remain delegated after applying a list of transactions to a valid ledger state.

\subsection{Ledger State Properties for Staking Pool Transitions}
\label{sec:ledg-state-prop}

\begin{property}[\textbf{Registered Staking Pool}]
  \begin{multline*}
    \forall \var{l}, \var{l'} \in \ledgerState: \applyFun{validLedgerstate}{l}, l = (\wcard, (\wcard, p)), l' = (\wcard, (\wcard, p')), pEnv\in\PEnv \\
    \implies \forall \var{c} \in \DCertRegPool, \var{p} \trans{pool}{c} \var{p'} \implies \applyFun{cwitness}{c} = \var{hk}\\
    \implies \var{hk}\in\applyFun{getStPools}{p'} \wedge \var{hk} \not\in \applyFun{getRetiring}{p'}
  \end{multline*}
  \label{prop:ledger-properties-9}
\end{property}

Property~\ref{prop:ledger-properties-9} states that for $l, l'$ as above but with a delegation transition of type $\DCertRegPool$, the key $hk$ associated with the author of the pool registration certificate is in $\var{stpools}$ of $l'$, and $hk$ is not in the set of retiring stake pools in $l'$.
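Taken together, Properties~\ref{prop:ledger-properties-6}--\ref{prop:ledger-properties-8} describe the life-cycle of a single staking credential. As a purely illustrative trace for a hypothetical credential $hk$ applied to the delegation state $d$: after a $\DCertRegKey$ certificate, $hk \in \applyFun{getStDelegs}{d}$ and its reward account carries a balance of $0$; after a subsequent $\DCertDeleg$ certificate, $\applyFun{getDelegations}{d}$ maps $hk$ to the chosen pool; and after a final $\DCertDeRegKey$ certificate, $hk$ no longer appears in the staking credentials, the rewards map, or the delegation map. Properties~\ref{prop:ledger-properties-9}--\ref{prop:ledger-properties-11} state the analogous life-cycle for stake pools (registration, retirement, and reaping).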
\begin{property}[\textbf{Start Staking Pool Retirement}] \begin{multline*} \forall \var{l}, \var{l'} \in \ledgerState, \var{cepoch} \in \Epoch: \applyFun{validLedgerstate}{l}, l = (\wcard, (\wcard,p)), l' = (\wcard, (\wcard,p')), pEnv\in\PEnv \\ \implies \forall \var{c} \in \DCertRetirePool, pEnv\vdash\var{p} \trans{POOL}{c} \var{p'} \\ \implies e = \applyFun{retire}{c} \wedge \var{cepoch} < e < \var{cepoch} + \emax \wedge \applyFun{cwitness}{c} = \var{hk}\\ \implies (\applyFun{getRetiring}{p'})[\var{hk}] = e \wedge \var{hk} \in \fun{dom}(\applyFun{getStPools}{p})\wedge\fun{dom}(\applyFun{getStPools}{p'} ) \end{multline*} \label{prop:ledger-properties-10} \end{property} Property~\ref{prop:ledger-properties-10} states that for $l, l'$ as above but with a delegation transition of type $\DCertRetirePool$, the key $hk$ is associated with the author of the pool registration certificate in $\var{stpools}$ of $l'$ and that $hk$ is in the map of retiring staking pools of $l'$ with retirement epoch $e$, as well as that $hk$ is in the map of stake pools in $l$ and $l'$. \begin{property}[\textbf{Stake Pool Reaping}] \begin{multline*} \forall \var{l}, \var{l'} \in \ledgerState, \var{e} \in \Epoch: \applyFun{validLedgerstate}{l},\\ l = (\wcard, (d, p)), l' = (\wcard, (d', p')), pp\in\PParams, acnt, acnt'\in\Acnt \\ \implies pp\vdash\var{(acnt, d, p} \trans{poolreap}{e} \var{(acnt, d', p')} \implies \forall \var{retire}\in{(\fun{getRetiring}~p)}^{-1}[e], retire \neq \emptyset \\ \wedge \var{retire} \subseteq \fun{dom}(\applyFun{getStPool}{p}) \wedge \var{retire} \cap\fun{dom}(\applyFun{getStPool}{p'})=\emptyset \\ \wedge\var{retire} \cap \fun{dom}(\applyFun{getRetiring}{p'}) = \emptyset \end{multline*} \label{prop:ledger-properties-11} \end{property} Property~\ref{prop:ledger-properties-11} states that for $l, l'$ as above but with a delegation transition of type POOLREAP, there exist registered stake pools in $l$ which are associated to stake pool registration certificates and which are to be retired at the current epoch $\var{e}$. In $l'$ all those stake pools are removed from the maps $stpools$ and $retiring$. \subsection{Properties of Numerical Calculations} \label{sec:prop-numer-calc} The numerical calculations for refunds and rewards in (see Section~\ref{sec:epoch}) are also required to have certain properties. In particular we need to make sure that the functions that use non-integral arithmetic have properties which guarantee consistency of the system. Here, we state those properties and formulate them in a way that makes them usable in properties-based testing for validation in the executable spec. \begin{property}[\textbf{Minimal Refund}] \label{prop:minimal-refund} The function $\fun{refund}$ takes a value, a minimal percentage, a decay parameter and a duration. It must guarantee that the refunded amount is within the minimal refund (off-by-one for rounding / floor) and the original value. \begin{multline*} \forall d_{val} \in \mathbb{N}, d_{min} \in [0,1], \lambda \in (0, \infty), \delta \in \mathbb{N} \\ \implies \max(0,d_{val}\cdot d_{min} - 1) \leq \floor*{d_{val}\cdot(d_{min} + (1-d_{min})\cdot e^{-\lambda\cdot\delta})} \leq d_{val} \end{multline*} \end{property} \begin{property}[\textbf{Maximal Pool Reward}] \label{prop:maximal-pool-reward} The maximal pool reward is the expected maximal reward paid to a stake pool. 
The sum of all these rewards cannot exceed the total available reward $R$; let $Pools$ be the set of active stake pools:
\begin{equation*}
  \forall R \in Coin:\sum_{p \in Pools} \floor*{\frac{R}{1+p_{a_{0}}}\cdot \left( p_{\sigma'}+p_{p'}\cdot p_{a_{0}}\cdot\frac{p_{\sigma'}-p_{p'}\cdot\frac{p_{z_{0}}-p_{\sigma'}}{p_{z_{0}}}}{p_{z_{0}}} \right)}\leq R
\end{equation*}
\end{property}

\begin{property}[\textbf{Actual Reward}]
\label{prop:actual-reward}
The actual reward for a stake pool in an epoch is calculated by the function $\fun{poolReward}$. The actual reward per stake pool is non-negative and bounded by the maximal reward for the stake pool, with $\overline{p}$ being the ratio $\frac{n}{\max(1, \overline{N})}$ of the number of blocks $n$ produced by one pool to the total number $\overline{N}$ of blocks produced in an epoch, and $maxP$ being the maximal reward for the stake pool. This gives us:
\begin{equation*}
  \forall \gamma \in [0,1] \implies 0\leq \floor*{\overline{p}\cdot maxP} \leq maxP
\end{equation*}
\end{property}

The two functions $\fun{r_{operator}}$ and $\fun{r_{member}}$ are closely related, as they both split the reward between the pool leader and the members.

\begin{property}[\textbf{Reward Splitting}]
\label{prop:reward-splitting}
The reward splitting is done via $\fun{r_{operator}}$ and $\fun{r_{member}}$, i.e., a split between the pool leader and the pool members using the pool cost $c$ and the pool margin $m$. Therefore, the property relates the total reward $\hat{f}$ to the split rewards in the following way:
\begin{multline*}
  \forall m\in [0,1], c\in Coin \implies
  c + \floor*{(\hat{f} - c)\cdot (m + (1 - m)) \cdot \frac{s}{\sigma}} + \sum_{j}\floor*{(\hat{f} - c)\cdot(1-m)\cdot\frac{t_{j}}{\sigma}} \leq \hat{f}
\end{multline*}
\end{property}

\clearpage

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "ledger-spec"
%%% End:
{ "alphanum_fraction": 0.6519849531, "avg_line_length": 42.1895223421, "ext": "tex", "hexsha": "5c33ac10d694161e545c8522455e7398bddd714a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d371ad2ebf5d1ddff93776fac2dfd1045aa6b06c", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "michaelpj/cardano-ledger-specs", "max_forks_repo_path": "shelley/chain-and-ledger/formal-spec/properties.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d371ad2ebf5d1ddff93776fac2dfd1045aa6b06c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "michaelpj/cardano-ledger-specs", "max_issues_repo_path": "shelley/chain-and-ledger/formal-spec/properties.tex", "max_line_length": 125, "max_stars_count": null, "max_stars_repo_head_hexsha": "d371ad2ebf5d1ddff93776fac2dfd1045aa6b06c", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "michaelpj/cardano-ledger-specs", "max_stars_repo_path": "shelley/chain-and-ledger/formal-spec/properties.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8984, "size": 27381 }
\documentclass{report}

%Packages
\usepackage{amsmath} %for math
\usepackage{tabularx} %for tables
\usepackage{graphicx} %for figures
\usepackage[export]{adjustbox} %adjust placement of figures left and right
\usepackage{float} %force placement of figures on page [H]
\usepackage{caption} %for captions
\usepackage{subcaption} %for subcaptions
\usepackage[T1]{fontenc} %for underscore in captions
\usepackage{setspace} %to set line spacing
\usepackage[margin=1in]{geometry} %set margins
\usepackage{indentfirst} %Indents the first paragraph of each section
\usepackage{hyperref} %for hyperlinks
\hypersetup{
    colorlinks=true,
    linkcolor=black,
    filecolor=black,
    urlcolor=black,
    citecolor=black,
}
\usepackage[utf8]{inputenc} %Code
\usepackage{xcolor} %Code
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\usepackage{listings} %Code
\lstdefinestyle{mystyle}{
    backgroundcolor=\color{backcolour},
    commentstyle=\color{codegreen},
    keywordstyle=\color{magenta},
    numberstyle=\tiny\color{codegray},
    stringstyle=\color{codepurple},
    basicstyle=\ttfamily\footnotesize,
    breakatwhitespace=false,
    breaklines=true,
    captionpos=b,
    keepspaces=true,
    numbers=left,
    numbersep=5pt,
    showspaces=false,
    showstringspaces=false,
    showtabs=false,
    tabsize=2
}
\lstset{style=mystyle}

%Table of contents settings
%\setcounter{tocdepth}{1} % Show sections
\setcounter{tocdepth}{2} % + subsections
%\setcounter{tocdepth}{3} % + subsubsections
%\setcounter{tocdepth}{4} % + paragraphs
%\setcounter{tocdepth}{5} % + subparagraphs

\begin{document}

%Title page
\pagenumbering{gobble} %No page number (will apply to title page and contents page)
\begin{titlepage}
	\begin{center}
		\vspace*{3cm}
		\LARGE{QPORT\\}
		\medskip
		\Large{An Open Source Quantitative Portfolio Management System\\}
		\medskip
		\begin{figure}[!htb] %Prioritises image placement according to "Here", "Top" or "Bottom" in that order of preference. Can force placement HERE by replacing with [H]
			\center{\includegraphics[width=50mm]{Figures/icon.jpg}} %If image in same directory, no need for pathway
		\end{figure}
		\LARGE{Developer's Guide\\}
		\large{\today\\}
		\vspace{3cm}
		\large{Contributors\\Akirabha Chanuntranont: [email protected]}
	\end{center}
\end{titlepage}

%Contents page
\tableofcontents
\newpage
\pagenumbering{arabic} %From here, pages will be numbered using arabic numbers

\chapter{Code Structure}
\label{Chapter: Code Structure}

\begin{figure}[!htb]
	\center{\includegraphics[width=100mm]{Figures/Code_Structure/general_structure.jpg}}
	\caption{General structure of the code}
	\label{Figure: General structure of the code}
\end{figure}

A core principle in the structure of QPORT's code is modularity. As seen in fig.\ref{Figure: General structure of the code}, there are only two levels in the code: the top level, which contains the Main component, and the bottom level, which contains the GUI component and multiple Module components. The responsibilities of each component are as follows.\par
The Main component is the top level of the code and is responsible for managing data transfer between the lower level components, such as the GUI and the Modules. In particular, the Main component is where the GUI and all of the modules get wired up and where their routines are defined. For example, the Main component decides what functions to call in a module when the user presses, say, a `Calculate' button, and so on. No calculations occur in the Main component.
The main component only decides how to pass data laterally between the Modules and the GUI. It is best to think of the Main component as a central processor. The Main component is implemented in the file \verb|main.py|.\par The GUI component is on the bottom level of the code and is in many ways just another module. In fact, the GUI component may be removed and (with some light editing in the Main component), QPORT can be made into a lightweight terminal program. The GUI component is responsible only for displaying outputs to the user and providing the user with an interface to send data to the Main component, which then decides which module to reroute the data to. No calculations occur in the GUI (apart from GUI-specific calculations, such as determining the window size etc.), nor are any routines specified. It is best to think of the GUI component as a canvas on which to lay out the components. The GUI component is implemented in the file \verb|gui.py| and is based on the PySimpleGUI project, whose documentation can be found here \url{https://pysimplegui.readthedocs.io/en/latest/}.\par The Modules component comprise the bulk of bottom layer of the code and are responsible for the calculations required by QPORT. As QPORT is a quantitative finance toolbox, it is best to think of each module as one tool. For example, the Portfolio Optimizer module, implemented in the file \verb|PortOptimizer.py|, is only responsible for the tool to calculate optimal portfolio configurations in accordance with Modern Portfolio Theory. Each module should be self-contained and should not import any of the other modules. However, as seen in fig.\ref{Figure: General structure of the code}, a module may share data with other modules through routines defined in the Main component. This convention is in place to prevent instantiating duplicate class objects and also to avoid high dependency problems between modules. Remember, modularity is key.\par \chapter{Class: Routines} \label{Chapter: Routines} \begin{description} \item[Contributors: ] A. Chanuntranont \item[Component: ] Main Component \item[Purpose: ] Handles data transfers between the modules and defines module routines. \item[Remarks: ] Since this class is a member of the Main component, it necessarily has dependencies with other modules. \item[File: ] \verb|main.py| \item[Directory: ] \verb|.\src| \item[Dependencies: ] \verb|gui.py|, \verb|PortOptimizer.py| \item[Attributes: ] \verb| | \begin{description} \item[\texttt{Routines.gui}: ] Class GUI from \verb|gui.py| \item[\texttt{Routines.port\_optimizer}: ] Class PortOptimizer from \verb|PortOptimizer.py| \end{description} \end{description} \section{Portfolio Optimizer Module Routines} \label{Section: Portfolio Optimizer Module Routines} \subsection{\texttt{Routines.display\char`_port\char`_optimizer\char`_reference()}} \label{Subsection: Routines.display_port_optimizer_reference} \begin{description} \item[Purpose: ] Passes the current values of query dates, reference asset ticker, and the risk free rate stored in Class PortOptimizer (see Section \ref{Chapter: PortOptimizer}) to the GUI to be displayed. 
\item[Arguments: ] None \item[Returns: ] None \item[Interactions: ] \verb| |\\ \verb|gui.window[`-TERMOUT-']|, \verb|gui.window[`-DATE START IN-']|, \verb|gui.window[`-DATE END IN-']|,\\ \verb|gui.window[`-RISK FREE ASSET IN-']|, \verb|gui.window[`-RISK FREE RATE-']| \item[Outcome: ] \verb| |\\ Updates input and text boxes in the Portfolio Optimizer window's \verb|reference_layout|, defined in \verb|gui.py\port_optimizer_window()| \end{description} \subsection{\texttt{Routines.update\char`_port\char`_optimizer\char`_stats()}} \label{Subsection: Routines.update_port_optimizer_stats} \begin{description} \item[Purpose: ] Requests the Portfolio Optimizer module (see Section \ref{Chapter: PortOptimizer}) to calculate optimal portfolios and corresponding statistics. The output is retrieved and passed to the GUI to be displayed. \item[Arguments: ] None \item[Returns: ] None \item[Interactions: ] \verb| |\\ \verb|gui.window[`-TERMOUT-']|, \verb|gui.window[`-CURRENT PORT-']|, \verb|gui.window[`-CASH IN-']|, \\ \verb|gui.window[`-CORRMAT-']|, \verb|gui.window[`-COVMAT-']|, \verb|gui.window[`-CURRENT STDEV-']|, \\ \verb|gui.window[`-CURRENT MARKET VALUE-']|, \verb|gui.window[`-CURRENT EXPECTED RETURN-']|, \\ \verb|gui.window[`-CURRENT SHARPE-']|, \verb|gui.window[`-EFFICIENT FRONTIER GRAPH-']|, \\ \verb|gui.window[`-STOCK RETURNS GRAPH-']|, \verb|gui.window[`-LOWEST RISK PORT-']|, \\ \verb|gui.window[`-LOWEST RISK EXPECTED RETURN-']|, \verb|gui.window[`-LOWEST RISK STDEV-']|, \\ \verb|gui.window[`-LOWEST RISK SHARPE-']|, \verb|gui.window[`-BEST SHARPE PORT-']|, \\ \verb|gui.window[`-BEST SHARPE EXPECTED RETURN-']|, \verb|gui.window[`-BEST SHARPE STDEV-']|, \\ \verb|gui.window[`-BEST SHARPE SHARPE-']|, \verb|gui.window[`-HIGHEST RETURN PORT-']|, \\ \verb|gui.window[`-HIGHEST RETURN EXPECTED RETURN-']|, \verb|gui.window[`-HIGHEST RETURN STDEV-']|, \\ \verb|gui.window[`-HIGHEST RETURN SHARPE-']| \verb|PortOptimizer.curr_port|, \verb|PortOptimizer.corr_matrix|, \verb|PortOptimizer.cov_matrix|, \\ \verb|PortOptimizer.lowest_risk_port|, \verb|PortOptimizer.best_sharpe_port|, \\ \verb|PortOptimizer.highest_returns_port| \verb|PortOptimizer.port_stats()|, \\ \verb|PortOptimizer.plot_eff_frontier()|, \verb|PortOptimizer.plot_stock_returns()|, \item[Outcome: ] \verb| |\\ The optimal portfolios are calculated by the Portfolio Optimizer module and stored in\\ \verb|PortOptimizer.lowest_risk_port|, \verb|PortOptimizer.best_sharpe_port| and\\ \verb|PortOptimizer.highest_returns_port|. The portfolio statistics for each portfolio are calculated by the Portfolio Optimizer module and stored in their respective containers. The Portfolio Optimizer Module then produces graphs of the efficient frontier and the cumulative returns on each asset, writes the figures to \verb|.\media\graphs| and returns the path. Routines then passes the data and path to the GUI to be displayed. \end{description} \subsection{\texttt{Routines.port\char`_optimizer\char`_routine()}} \label{Subsection: Routines.port_optimizer_routine} \begin{description} \item[Purpose: ] To manage how the user interacts with the Portfolio Optimizer module. 
\item[Arguments: ] (\verb|event|, \verb|values|)
	\begin{description}
		\item[\texttt{event}:] str type variable holding a GUI \verb|[`-KEY-']|
		\item[\texttt{values}:] str type variable from a GUI input box
	\end{description}
\item[Returns: ] None
\item[Interactions: ] \verb| |\\
\verb|Routines.display_port_optimizer_reference()|, \verb|Routines.update_port_optimizer_stats()|

\verb|gui.window[`-RESET-']|, \verb|gui.window[`-SET REFERENCE-']|, \verb|gui.window[`-ADD POSITION-']|,\\
\verb|gui.window[`-LOAD PORT-']|, \verb|gui.window[`-SAVE PORT-']|, \verb|gui.window[`-DATE START IN-']|,\\
\verb|gui.window[`-DATE END IN-']|, \verb|gui.window[`-RISK FREE ASSET IN-']|, \verb|gui.window[`-CASH IN-']|,\\
\verb|gui.window[`-SET CASH IN-']|, \verb|gui.window[`-TERMOUT-']|, \verb|gui.window[`-SYMBOL IN-']|,\\
\verb|gui.window[`-VOL IN-']|, \verb|gui.window[`-CURRENT PORT-']|, \\
\verb|gui.sg.popup_get_file()|, \verb|gui.port_optimizer_window()|,

\verb|PortOptimizer.historical|, \verb|PortOptimizer.curr_port|, \verb|PortOptimizer.get_data()|
\item[Outcome: ] \verb| |\\
	\begin{description}
		\item[If User clicks \texttt{[`-LOAD PORT-']}: ] \verb| |\\
		A popup is launched that allows the user to select a .txt file containing the desired portfolio configuration. When the file is chosen, the path to the file is returned to \verb|Routines|, which then reads the file, converts and assigns the values to their respective variables in the Portfolio Optimizer module. The Portfolio Optimizer module then calculates the optimal portfolios and corresponding statistics and passes the values to \verb|Routines|, which then reroutes the outputs to the GUI to be displayed.
		\item[If User clicks \texttt{[`-SAVE PORT-']}: ] \verb| |\\
		A popup is launched that allows the user to specify the path to the directory in which they will save the portfolio configuration. When the path is chosen, it is returned to \verb|Routines|, which then creates a .txt file at the location and writes the values from \verb|PortOptimizer.historical[`riskfreeasset']|, \verb|PortOptimizer.curr_port[`cash']|, \verb|PortOptimizer.curr_port[`symbols']| and\\ \verb|PortOptimizer.curr_port[`volumes']| into the .txt file.
		\item[If User clicks \texttt{[`-ADD POSITION-']}: ] \verb| |\\
		The value of the ticker symbol is received from \verb|gui.window[`-SYMBOL IN-']| as a string. The corresponding volume is received from \verb|gui.window[`-VOL IN-']| and converted into an int. Then, \verb|Routines| checks to see if the symbol is already in \verb|PortOptimizer.curr_port|. If it is, the volume is added to the existing volume. Otherwise, the symbol and volume are added to \verb|PortOptimizer.curr_port| and the historical data is queried. An exception is when the input volume is \verb|`remove'|, in which case the symbol and corresponding volume are removed from the portfolio. The Portfolio Optimizer module is then requested to recalculate the optimal portfolios. The results are rerouted to the GUI for display.
		\item[If User clicks \texttt{[`-SET CASH IN-']}: ] \verb| |\\
		The value of the cash position is retrieved from \verb|gui.window[`-CASH IN-']| as a string and converted into a float. The value is then input into \verb|PortOptimizer.curr_port[`cash']| and the Portfolio Optimizer module is requested to recalculate the optimal portfolios. The results are rerouted to the GUI for display.
\item[If User clicks \texttt{[`-SET REFERENCE-']}: ] \verb| |\\ The values in \verb|gui.window[`-DATE START IN-']|, \verb|gui.window[`-DATE END IN-']| and\\ \verb|gui.window[`-RISK FREE ASSET IN-']| are assigned to \verb|PortOptimizer.historical[`period'][0]|, \verb|PortOptimizer.historical[`period'][1]| and \verb|PortOptimizer.historical[`riskfreeasset']| respectively. The historical data for the risk free asset and all of the portfolio assets are then queried over the set period. The Portfolio Optimizer module is requested to recalculate the optimal portfolios. The results are rerouted to the GUI for display. \item[If User clicks \texttt{[`-RESET-']}: ] \verb| |\\ The \verb|Routines.port_optimizer| attribute is reinitialized to a new \verb|PortOptimizer| object. The GUI is then requested to close the current window and reinitialize a new \verb|port_optimizer_window()|. \end{description} \end{description} \chapter{Class: PortOptimizer} \label{Chapter: PortOptimizer} \begin{description} \item[Contributors: ] A. Chanuntranont \item[Component: ] Portfolio Optimizer Module \item[Purpose: ] Calculates optimal portfolio configurations for a given set of assets, in accordance with Modern Portfolio Theory. \item[Remarks: ] The current approach is to stochastically find the efficient frontier by sampling portfolios from a uniform distribution between $(0, 1]$. Research is being done to develop a more analytical approach. \item[File: ] \verb|PortOptimizer.py| \item[Directory: ] \verb|.\src| \item[Dependencies: ] \verb|numpy|, \verb|pandas_datareader|, \verb|matplotlib.pyplot|, \verb|datetime|, \verb|dateutil.relativedelta| \item[Attributes: ] \verb| | \begin{description} \item[\texttt{PortOptimizer.n\char`_eff\char`_samples}: ] int that specifies the number of samples to draw. 
\item[\texttt{PortOptimizer.curr\char`_port}: ] python dictionary with elements, \begin{description} \item[\texttt{`symbols'}: ] python list of strings that stores each ticker symbol \item[\texttt{`volumes'}: ] python list of ints that stores the volume held of each asset \item[\texttt{`marketvalues'}: ] python list of floats that stores the total market value of the position of each asset in the portfolio \item[\texttt{`weights'}: ] a numpy array of the percentage each asset represents of the total portfolio value \item[\texttt{`cash'}: ] float that specifies the total cash on hand available to be allocated \item[\texttt{`totalvalue'}: ] float that specifies the total value of the portfolio, which includes both assets and cash \item[\texttt{`returns'}: ] float that specifies the returns of the total portfolio, in percent \item[\texttt{`stdev'}: ] float that specifies the standard deviations of the total portfolio returns, in percent \item[\texttt{`sharpe'}: ] float that specifies the Sharpe ratio of the portfolio \end{description} \item[\texttt{PortOptimizer.lowest\char`_risk\char`_port}: ] python dictionary with same elements as \texttt{PortOptimizer.curr\char`_port} \item[\texttt{PortOptimizer.best\char`_sharpe\char`_port}: ] python dictionary with same elements as \texttt{PortOptimizer.curr\char`_port} \item[\texttt{PortOptimizer.highest\char`_returns\char`_port}: ] python dictionary with same elements as \texttt{PortOptimizer.curr\char`_port} \item[\texttt{PortOptimizer.historical}: ] python dictionary with elements, \begin{description} \item[\texttt{`period'}: ] python list of strings that stores \verb|[query-start-date, query-end-date]| \item[\texttt{`resolution'}: ] char that specifies the bin-width of the queried data \item[\texttt{`prices'}: ] pandas array of prices for each asset over the query period \item[\texttt{`returns'}: ] pandas array of historical returns for each asset over the query period \item[\texttt{`avgreturns'}: ] pandas array of average historical returns for each asset \item[\texttt{`stdevreturns'}: ] pandas array of the standard deviation of historical returns for each asset \item[\texttt{`riskfreeasset'}: ] str of the risk-free asset ticker symbol \item[\texttt{`riskfree'}: ] float that specifies the yield on the risk-free asset over the query period \end{description} \item[\texttt{PortOptimizer.corr\char`_matrix}: ] pandas 2D array that stores the correlation matrix of the assets in the portfolio \item[\texttt{PortOptimizer.cov\char`_matrix}: ] pandas 2D array that stores the covariance matrix of the assets in the portfolio \end{description} \end{description} \section{\texttt{PortOptimizer.get\char`_data()}} \label{Section: PortOptimizer.get_data} \begin{description} \item[Purpose: ] To query historical prices of each asset in the portfolio and the yield of the risk-free asset over the query period. Also calculates key statistics such as average return, standard deviation of returns, correlation and covariance matrices. \item[Arguments: ] None \item[Returns: ] None \item[Interactions: ] \verb| |\\ \verb|PortOptimizer.historical|, \verb|PortOptimizer.curr_port|, \verb|PortOptimizer.corr_matrix|,\\ \verb|PortOptimizer.cov_matrix|, \verb|pandas_datareader.get_data_yahoo()| \item[Outcome: ] \verb| |\\ The historical yield of the risk-free asset over the query period is stored in\\ \verb|PortOptimizer.historical[`riskfree']|. The historical prices of each asset in the portfolio are queried over the period and stored in \verb|PortOptimizer.historical[`returns']|. 
The average and standard deviation of the returns of each asset are then calculated and stored in\\ \verb|PortOptimizer.historical[`avgreturns']| and \verb|PortOptimizer.historical[`stdevreturns']| respectively. The correlation and covariance matrices of the assets in the portfolio are calculated and stored in \verb|PortOptimizer.corr_matrix| and \verb|PortOptimizer.cov_matrix| respectively.
\end{description}

\section{\texttt{PortOptimizer.port\char`_stats()}}
\label{Section: PortOptimizer.port_stats}
\begin{description}
	\item[Purpose: ] Calculates portfolio statistics such as total market value, asset weights, total portfolio return, standard deviation of portfolio returns, and the portfolio's Sharpe ratio.
	\item[Arguments: ] (\verb|port|)
	\begin{description}
		\item[\texttt{port}: ] \verb|PortOptimizer.port| object, such as \verb|PortOptimizer.curr_port|
	\end{description}
	\item[Returns: ] \verb|PortOptimizer.port| object, such as \verb|PortOptimizer.curr_port|
	\item[Interactions: ] \verb| |\\
	\verb|PortOptimizer.port| object, \verb|PortOptimizer.historical|, \verb|PortOptimizer.cov_matrix|
	\item[Outcome: ] \verb| |\\
	The values of keys \verb|[`marketvalues']|, \verb|[`totalvalue']|, \verb|[`weights']|, \verb|[`returns']|, \verb|[`stdev']|, \verb|[`sharpe']| are calculated and stored in the \verb|PortOptimizer.port| object that was passed in. The \verb|PortOptimizer.port| is then returned.
\end{description}

\section{\texttt{PortOptimizer.find\char`_eff\char`_frontier()}}
\label{Section: PortOptimizer.find_eff_frontier}
\begin{description}
	\item[Purpose: ] Stochastically finds the efficient frontier by sampling portfolio weights from a uniform distribution and extracts portfolios with the lowest standard deviation, highest Sharpe ratio and highest returns.
	\item[Arguments: ] None
	\item[Returns: ] A python list of \verb|PortOptimizer.port| objects, such as \verb|PortOptimizer.curr_port|
	\item[Interactions: ] \verb| |\\
	\verb|PortOptimizer.n_eff_samples|, \verb|PortOptimizer.curr_port|, \verb|PortOptimizer.lowest_risk_port|,\\
	\verb|PortOptimizer.best_sharpe_port|, \verb|PortOptimizer.highest_returns_port|, \verb|PortOptimizer.historical|,

	\verb|PortOptimizer.port_stats()|, \verb|np.sum()|, \verb|np.floor()|
	\item[Outcome: ] \verb| |\\
	The portfolios with the lowest standard deviation, highest Sharpe ratio and highest returns are found and assigned to \verb|PortOptimizer.lowest_risk_port|, \verb|PortOptimizer.best_sharpe_port| and \verb|PortOptimizer.highest_returns_port| respectively. The array \verb|rand_ports| of stochastically generated portfolios is returned.
\end{description}

\section{\texttt{PortOptimizer.plot\char`_eff\char`_frontier()}}
\label{Section: PortOptimizer.plot_eff_frontier}
\begin{description}
	\item[Purpose: ] Produces a plot of the efficient frontier from stochastically generated portfolios.
	\item[Arguments: ] None
	\item[Returns: ] str path to saved plot.
	\item[Interactions: ] \verb| |\\
	\verb|PortOptimizer.find_eff_frontier()|, \verb|PortOptimizer.curr_port|, \verb|PortOptimizer.lowest_risk_port|,\\
	\verb|PortOptimizer.best_sharpe_port|, \verb|PortOptimizer.highest_returns_port|, \verb|PortOptimizer.historical|,

	\verb|plt.plot()|, \verb|plt.scatter()|, \verb|np.arange()|
	\item[Outcome: ] \verb| |\\
	The method \verb|PortOptimizer.find_eff_frontier()| is called and the collection of stochastically generated portfolios is retrieved. The standard deviations and returns of these portfolios are plotted on a scatter plot to show the efficient frontier.
The Sharpe ratio of the \verb|PortOptimizer.best_sharpe_port| portfolio is used to produce the Capital Allocation Line (CAL). The current portfolio, lowest risk portfolio, best sharpe portfolio and highest returns portfolio are marked on the scatter plot. The scatter plot is saved to \verb|.\media\graphs\eff_frontier_graph.PNG| and the path is returned. \end{description} \section{\texttt{PortOptimizer.plot\char`_stock\char`_returns()}} \label{Section: PortOptimizer.plot_stock_returns} \begin{description} \item[Purpose: ] Produces a plot of the cumulative returns of each asset in the portfolio over the query period. \item[Arguments: ] None \item[Returns: ] str path to saved plot. \item[Interactions: ] \verb| |\\ \verb|self.historical|, \verb|plt.plot()| \item[Outcome: ] \verb| |\\ The cumulative returns of each asset in the portfolio are calculated over the query period and are plotted onto a graph. The plot is saved to \verb|.\media\graphs\stock_returns_graph.PNG| and the path is returned. \end{description} \end{document}
{ "alphanum_fraction": 0.6613196688, "avg_line_length": 73.9862637363, "ext": "tex", "hexsha": "d8243ca6733d042bcf04ddf4c060ecb49f53510b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0cc7123d74bd135607e92caa0ff600bbc872b127", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "bynkchan/QPORT", "max_forks_repo_path": "tex/Dev_Guide/Developers_Guide.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0cc7123d74bd135607e92caa0ff600bbc872b127", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "bynkchan/QPORT", "max_issues_repo_path": "tex/Dev_Guide/Developers_Guide.tex", "max_line_length": 890, "max_stars_count": null, "max_stars_repo_head_hexsha": "0cc7123d74bd135607e92caa0ff600bbc872b127", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "bynkchan/QPORT", "max_stars_repo_path": "tex/Dev_Guide/Developers_Guide.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6489, "size": 26931 }
\documentclass[10pt,a4paper]{article} \usepackage[latin1]{inputenc} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{graphicx} \usepackage{subcaption} \usepackage{pdfpages} \usepackage[left=2.00cm, right=2.00cm, top=2.00cm, bottom=2.00cm]{geometry} \begin{document} \begin{center} \Large{\underline{\bf Generating Random Graphs}} \end{center} \section{Introduction} \end{document}
{ "alphanum_fraction": 0.728929385, "avg_line_length": 25.8235294118, "ext": "tex", "hexsha": "6d52c1db6f09c471e1585494bec32cd05690b58d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b3bebd84524e3198c89723f47272560f0a15de91", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Tom271/DeltaLink", "max_forks_repo_path": "Generating_Random_Graphs.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b3bebd84524e3198c89723f47272560f0a15de91", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Tom271/DeltaLink", "max_issues_repo_path": "Generating_Random_Graphs.tex", "max_line_length": 76, "max_stars_count": null, "max_stars_repo_head_hexsha": "b3bebd84524e3198c89723f47272560f0a15de91", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Tom271/DeltaLink", "max_stars_repo_path": "Generating_Random_Graphs.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 142, "size": 439 }
\chapter{Generic Implementation} \label{chap-gen-impl} \input{sections/generic_implementation/regular.tex} \input{sections/generic_implementation/generic_zipper.tex} \input{sections/generic_implementation/complexity.tex} \input{sections/generic_implementation/hashmap_trie.tex} \input{sections/generic_implementation/mem_strategy.tex} \input{sections/generic_implementation/pattern_synonym.tex}
{ "alphanum_fraction": 0.855, "avg_line_length": 28.5714285714, "ext": "tex", "hexsha": "d2da8b835b8def0532417dfe31ab0c415018021b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "897946211f14901b656a89b2f56c624c84b4e810", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jortvangorkum/thesis-paper", "max_forks_repo_path": "sections/generic_implementation/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "897946211f14901b656a89b2f56c624c84b4e810", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jortvangorkum/thesis-paper", "max_issues_repo_path": "sections/generic_implementation/main.tex", "max_line_length": 59, "max_stars_count": null, "max_stars_repo_head_hexsha": "897946211f14901b656a89b2f56c624c84b4e810", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jortvangorkum/thesis-paper", "max_stars_repo_path": "sections/generic_implementation/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 94, "size": 400 }
\filetitle{solve}{Calculate first-order accurate solution of the model}{model/solve}

\paragraph{Syntax}\label{syntax}

\begin{verbatim}
M = solve(M,...)
\end{verbatim}

\paragraph{Input arguments}\label{input-arguments}

\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
  \texttt{M} {[} model {]} - Parameterised model object. Non-linear models must also have steady-state values assigned.
\end{itemize}

\paragraph{Output arguments}\label{output-arguments}

\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
  \texttt{M} {[} model {]} - Model with newly computed solution.
\end{itemize}

\paragraph{Options}\label{options}

\begin{itemize}
\item
  \texttt{'expand='} {[} numeric \textbar{} \emph{\texttt{0}} \textbar{} \texttt{NaN} {]} - Number of periods ahead up to which the model solution will be expanded; if \texttt{NaN} the matrices needed to support solution expansion are not calculated and stored at all, and the model cannot be used later in simulations or forecasts with anticipated shocks or plans.
\item
  \texttt{'eqtn='} {[} \emph{\texttt{'all'}} \textbar{} \texttt{'measurement'} \textbar{} \texttt{'transition'} {]} - Update the existing solution in the measurement block, or the transition block, or both.
\item
  \texttt{'error='} {[} \texttt{true} \textbar{} \emph{\texttt{false}} {]} - Throw an error if no unique stable solution exists; if \texttt{false}, only a warning message will be displayed.
\item
  \texttt{'linear='} {[} \emph{\texttt{@auto}} \textbar{} \texttt{true} \textbar{} \texttt{false} {]} - Solve the model using a linear approach, i.e.~differentiating around zero and not the currently assigned steady state.
\item
  \texttt{'progress='} {[} \texttt{true} \textbar{} \emph{\texttt{false}} {]} - Display a progress bar in the command window.
\item
  \texttt{'refresh='} {[} \emph{\texttt{true}} \textbar{} \texttt{false} {]} - Refresh dynamic links before computing the solution.
\item
  \texttt{'select='} {[} \emph{\texttt{true}} \textbar{} \texttt{false} {]} - Automatically detect which equations need to be re-differentiated based on parameter changes since the last time the system matrices were calculated.
\item
  \texttt{'warning='} {[} \emph{\texttt{true}} \textbar{} \texttt{false} {]} - Display warnings produced by this function.
\end{itemize}

\paragraph{Description}\label{description}

The IRIS solver uses an ordered QZ (or generalised Schur) decomposition to integrate out future expectations. The QZ may (very rarely) fail for numerical reasons. IRIS includes two patches to handle some of the QZ failures: a SEVN2 patch (Sum-of-EigenValues-Near-Two), and an E2C2S patch (Eigenvalues-Too-Close-To-Swap).

\begin{itemize}
\item
  The SEVN2 patch: The model contains two or more unit roots, and the QZ algorithm interprets some of them incorrectly as pairs of eigenvalues that sum up accurately to 2, but with one of them significantly below 1 and the other significantly above 1. IRIS replaces the entries on the diagonal of one of the QZ factor matrices with numbers that evaluate to two unit roots.
\item
  The E2C2S patch: The re-ordering of the QZ matrices fails with a warning \texttt{'Reordering failed because some eigenvalues are too close to swap.'} IRIS attempts to re-order the equations until QZ works. The number of attempts is limited to at most \texttt{N-1}, where \texttt{N} is the total number of equations.
\end{itemize}

\paragraph{Example}\label{example}
{ "alphanum_fraction": 0.7292912041, "avg_line_length": 36.2164948454, "ext": "tex", "hexsha": "3c15983dc69a4b8e82b7f9a03e43d4d890136bb9", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_path": "-help/model/solve.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_path": "-help/model/solve.tex", "max_line_length": 88, "max_stars_count": 1, "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_path": "-help/model/solve.tex", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "num_tokens": 1021, "size": 3513 }
\documentclass[utf8,bachelor,english]{gradu3}
% If you are writing a Master's Thesis, use the following instead:
%\documentclass[utf8,english]{gradu3}

\usepackage{graphicx} % for including pictures
\usepackage{amsmath} % useful for math (optional)
\usepackage{booktabs} % good for beautiful tables
% NOTE: This must be the last \usepackage in the whole document!
\usepackage[bookmarksopen,bookmarksnumbered,linktocpage]{hyperref}

\addbibresource{malliopas.bib} % The file name of your bibliography database

\begin{document}

\title{Machine Learning \\ DIVY PATEL 171080056 \\Under the guidance of Pranav Sir}
\translatedtitle{\LaTeX-tutkielmapohjan {gradu3} käyttö}
\studyline{All study lines}
\avainsanat{%
  \LaTeX, {gradu3}, pro gradu -tutkielmat, kandidaatintutkielmat, käyttöohje}
\keywords{\LaTeX, {gradu3}, Master's Theses, Bachelor's Theses, user's guide}
\tiivistelma{%
  Tämä kirjoitelma on esimerkki siitä, kuinka {gradu3}-tutkielmapohjaa käytetään. Se sisältää myös käyttöohjeet ja tutkielman rakennetta koskevia ohjeita. Tutkielman tiivistelmä on tyypillisesti lyhyt esitys, jossa kerrotaan tutkielman taustoista, tavoitteesta, tutkimusmenetelmistä, saavutetuista tuloksista, tulosten tulkinnasta ja johtopäätöksistä. Tiivistelmän tulee olla niin lyhyt, että se, englanninkielinen abstrakti ja muut metatiedot mahtuvat kaikki samalle sivulle. Sen tulee kertoa täsmälleen samat asiat kuin englanninkielinen abstrakti.
}
\abstract{%
  This document is a sample {gradu3} thesis document class document. It also functions as a user manual and supplies guidelines for structuring a thesis document. The abstract is typically short and discusses the background, the aims, the research methods, the obtained results, the interpretation of the results and the conclusions of the thesis. It should be so short that it, the Finnish translation, and all other meta information fit on the same page. The Finnish tiivistelmä of a thesis should usually say exactly the same things as the abstract.
}
% use a separate \author command for each author, if there is more than one
\supervisor{Supervised work}
% use a separate \supervisor command for each supervisor, if there
% is more than one
% you don't need this line in a thesis
\type{Thesis}
\maketitle

\preface

This is where you can write a preface for your thesis. Most theses don't have prefaces, but if you write one, keep it short (at most one page). The preface should discuss the thesis process more than the content of the thesis. For example, if there is something out of the ordinary in your choice of a thesis topic or if something out of the ordinary happened during its preparation, the preface is where you could write about it. It is also customary in a preface to thank by name those persons who helped you with your thesis -- at least your supervisor, your spouse and your children, if any. (Your family likely will have helped you by encouraging and supporting you.)

The preface is typically in the first person (``I''). It is also common to sign it.

\today

\bigskip

The Author

\begin{thetermlist}
\item[\TeX] A batch-oriented typesetting system written by Donald Knuth in 1977--1989 \parencite[see][]{knuth86:_texbook}.
\item[\LaTeX] A system, built on top of \TeX\ \parencite{knuth86:_texbook}, for typesetting structured documents \parencite[see][]{lamport94:_latex}. Its current version is \LaTeXe.
\end{thetermlist}

\mainmatter

\chapter{Introduction}

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model of sample data, known as ``training data'', in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in the applications of email filtering, detection of network intruders, and computer vision, where it is infeasible to develop an algorithm of specific instructions for performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.

The name machine learning was coined in 1959 by Arthur Samuel.[5] Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: ``A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.''[6] This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper ``Computing Machinery and Intelligence'', in which the question ``Can machines think?'' is replaced with the question ``Can machines do what we (as thinking entities) can do?''.[7] In Turing's proposal the various characteristics that could be possessed by a thinking machine and the various implications in constructing one are exposed.

\chapter{Machine Learning Tasks}

Machine learning tasks are classified into several broad categories. In supervised learning, the algorithm builds a mathematical model from a set of data that contains both the inputs and the desired outputs. For example, if the task were determining whether an image contained a certain object, the training data for a supervised learning algorithm would include images with and without that object (the input), and each image would have a label (the output) designating whether it contained the object. In special cases, the input may be only partially available, or restricted to special feedback. Semi-supervised learning algorithms develop mathematical models from incomplete training data, where a portion of the sample input doesn't have labels.

Classification algorithms and regression algorithms are types of supervised learning. Classification algorithms are used when the outputs are restricted to a limited set of values. For a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email. For an algorithm that identifies spam emails, the output would be the prediction of either ``spam'' or ``not spam'', represented by the Boolean values true and false.
Regression algorithms are named for their continuous outputs, meaning they may have any value within a range. Examples of a continuous value are the temperature, length, or price of an object.

In unsupervised learning, the algorithm builds a mathematical model from a set of data which contains only inputs and no desired output labels. Unsupervised learning algorithms are used to find structure in the data, like grouping or clustering of data points. Unsupervised learning can discover patterns in the data, and can group the inputs into categories, as in feature learning. Dimensionality reduction is the process of reducing the number of ``features'', or inputs, in a set of data.

Active learning algorithms access the desired outputs (training labels) for a limited set of inputs based on a budget, and optimize the choice of inputs for which they will acquire training labels. When used interactively, these can be presented to a human user for labeling. Reinforcement learning algorithms are given feedback in the form of positive or negative reinforcement in a dynamic environment, and are used in autonomous vehicles or in learning to play a game against a human opponent.[2]:3 Other specialized algorithms in machine learning include topic modeling, where the computer program is given a set of natural language documents and finds other documents that cover similar topics. Machine learning algorithms can be used to find the unobservable probability density function in density estimation problems. Meta learning algorithms learn their own inductive bias based on previous experience. In developmental robotics, robot learning algorithms generate their own sequences of learning experiences, also known as a curriculum, to cumulatively acquire new skills through self-guided exploration and social interaction with humans. These robots use guidance mechanisms such as active learning, maturation, motor synergies, and imitation.

\chapter{History and relationships to other fields}

Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term ``Machine Learning'' in 1959 while at IBM[8]. As a scientific endeavour, machine learning grew out of the quest for artificial intelligence. Already in the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed ``neural networks''; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics.[9] Probabilistic reasoning was also employed, especially in automated medical diagnosis.[10]:488

However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.[10]:488 By 1980, expert systems had come to dominate AI, and statistics was out of favor.[11] Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval.[10]:708–710; 755 Neural networks research had been abandoned by AI and computer science around the same time.
This line, too, was continued outside the AI/CS field, as ``connectionism'', by researchers from other disciplines including Hopfield, Rumelhart and Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.[10]:25

Machine learning, reorganized as a separate field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics and probability theory.[11] It also benefited from the increasing availability of digitized information, and the ability to distribute it via the Internet.

\section{Relation to data mining}

Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as ``unsupervised learning'' or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.

\section{Relation to optimisation}

Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples). The difference between the two fields arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.[12]

\section{Relation to statistics}

Machine learning and statistics are closely related fields. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics.[13] He also suggested the term data science as a placeholder to call the overall field.[13] Leo Breiman distinguished two statistical modelling paradigms: data model and algorithmic model,[14] wherein ``algorithmic model'' means more or less the machine learning algorithms like Random forest. Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.[15]
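To make the loss-minimisation view discussed above concrete, the following short example fits a one-dimensional linear model by gradient descent on the squared loss of a training set. The data and hyper-parameters are synthetic and invented purely for illustration.

\begin{verbatim}
import numpy as np

# synthetic training data: y = 2x + 1 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0          # model parameters
lr = 0.1                 # learning rate
for _ in range(500):     # gradient descent on the mean squared error
    pred = w * x + b
    w -= lr * 2 * np.mean((pred - y) * x)
    b -= lr * 2 * np.mean(pred - y)

print(w, b)              # close to the true values 2 and 1
\end{verbatim}

The training loss decreases steadily here, but, as noted above, a small training loss does not by itself guarantee a small loss on unseen samples.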
\chapter{Theory}

A core objective of a learner is to generalize from its experience.[2][16] Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.

The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalization error.

For the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfit the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalization will be poorer.[17]

In addition to performance bounds, computational learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results. Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.

\chapter{Approaches}

\section{Supervised and Semi-Supervised Learning}

Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs.[18] The data is known as training data, and consists of a set of training examples. Each training example has one or more inputs and a desired output, also known as a supervisory signal. In the case of semi-supervised learning algorithms, some of the training examples are missing the desired output. In the mathematical model, each training example is represented by an array or vector, and the training data by a matrix. Through iterative optimization of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs.[19] An optimal function will allow the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.[6]

Supervised learning algorithms include classification and regression.[20] Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may have any numerical value within a range.
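As a small illustration of this distinction, a classifier and a regressor can be fit on synthetic data with essentially the same interface. The use of scikit-learn here is only one possible choice, and the data and numbers are invented for the example.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))

# classification: outputs restricted to a finite label set {0, 1}
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X[:5]))     # labels such as [1 0 0 1 1]

# regression: outputs may take any value within a continuous range
targets = 3.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=200)
reg = LinearRegression().fit(X, targets)
print(reg.predict(X[:5]))     # real-valued predictions
\end{verbatim}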
Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.

\section{Unsupervised Learning}

Unsupervised learning algorithms take a set of data that contains only inputs, and find structure in the data, like grouping or clustering of data points. The algorithms therefore learn from test data that has not been labeled, classified or categorized. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. A central application of unsupervised learning is in the field of density estimation in statistics,[21] though unsupervised learning encompasses other domains involving summarizing and explaining data features.

Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity.

\section{Reinforcement Learning}

Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms.[22][23] In machine learning, the environment is typically represented as a Markov Decision Process (MDP). Many reinforcement learning algorithms use dynamic programming techniques.[22][23][24] Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP, and are used when exact models are infeasible.[22][23] Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent.

\printbibliography

\appendix

\section{Moving from gradu2 to gradu3}

Moving an incomplete thesis from gradu2 to gradu3 is not particularly difficult. The first thing to do is to change gradu2 into gradu3 in the \string\documentclass\ command. Most of the options given to it must be removed, as they are not supported. A ``kandi'' option is changed into ``bachelor''; any ``english'' option is retained, and so is ``utf8'', ``latin1'', or ``latin9''.

Table~\ref{tbl:cmdchange} lists the command name changes that are needed. A dash indicates that there is no corresponding command. Note especially the new commands.
\begin{table}[h]\centering
  \begin{tabular}{ll}
    \toprule
    gradu2 & gradu3 \\
    \midrule
    --- & \string\maketitle \\
    --- & \string\supervisor \\
    \string\acmccs & --- \\
    \string\aine & \string\subject\\
    \string\copyrightowner & --- \\
    \string\fulltitle & --- \\
    \string\laitos & \string\department\\
    \string\license & --- \\
    \string\linja & \string\studyline\\
    \string\paikka & --- \\
    \string\setauthor & \string\author\\
    \string\termlist & thetermlist environment\\
    \string\tyyppi & \string\type\\
    \string\yhteystiedot & \string\contactinformation\\
    \string\yliopisto & \string\university\\
    \string\ysa & --- \\
    \bottomrule
  \end{tabular}
  \caption{Command changes from gradu2 to gradu3}
  \label{tbl:cmdchange}
\end{table}

The most effort is likely needed for converting citations and the bibliography.

\section{Rarely needed features}

In addition to the features already mentioned, gradu3 offers the following:
\begin{itemize}
\item The standard options ``draft'' and ``final'' work.
\item The option ``finnish'' works (but is not needed, as it is the default).
\item You can change the University of the thesis by using the \string\university\ command.
\item You can change the Department of the thesis by using the \string\department\ command.
\item You can change the formal subject of the thesis by using the \string\subject\ command. In English theses, the subject should be prefixed by ``in'' (for example, ``in Information Technology''); in Finnish theses, use a capital initial letter and the genitive form (``Tietotekniikan'').
\item You can change the type of the thesis by using the \string\type\ command.
\item You can set the date of the thesis by using the \string\setdate\ command. Give it three parameters (day of month, month, and year) in numerical form.
\item The chapterquote environment can be used to give an epigraph to a chapter. There is one mandatory parameter (the attribution of the epigraph).
\item The command \string\graduclsdate\ prints the release date of the current version of gradu3, and the command \string\graduclsversion\ prints its version number.
\end{itemize}
\end{document}
{ "alphanum_fraction": 0.7977532982, "avg_line_length": 92.6088709677, "ext": "tex", "hexsha": "e53920bc08d0ba993bbf46693e4c30dd12ffe738", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6ae3857f02fc1a4d0e3e2760f501e7b4909f24aa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "DivyPatel9881/OSS_Lab", "max_forks_repo_path": "OSS_Lab/Thesis/enthesis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6ae3857f02fc1a4d0e3e2760f501e7b4909f24aa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "DivyPatel9881/OSS_Lab", "max_issues_repo_path": "OSS_Lab/Thesis/enthesis.tex", "max_line_length": 1275, "max_stars_count": null, "max_stars_repo_head_hexsha": "6ae3857f02fc1a4d0e3e2760f501e7b4909f24aa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "DivyPatel9881/OSS_Lab", "max_stars_repo_path": "OSS_Lab/Thesis/enthesis.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4869, "size": 22967 }
% $Header$
% $Author: fager $
% $Date: 2009-04-28 20:51:04 +0200 (ti, 28 apr 2009) $
% $Revision: 99 $
% $Log$
% Revision 1.2  2004/10/21 18:59:06  fager
% Version logging added. Comments from KA implemented.
%

\section{Installation}%\hypertarget{Installation}

MuWave has been developed using \matlab v.6.5 (Release 13), which is assumed to be installed on the user's computer. Please check the Help/About menu to make sure you are running this or a later version.

The installation of the MuWave-toolbox consists of a few simple steps:
\begin{enumerate}
\item Download the latest version from the MuWave web page at \newline
http://www.chalmers.se/mc2/EN/laboratories/microwave-electronics/education/graduate-courses/empirical-modeling. \newline
The downloaded zip-file contains all \matlab m-files and documentation necessary.
\item Extract all files to a directory of your choice. In WinZip, make sure to have the ``Use folder names'' option checked.
\item Add MuWave to \matlab's path. This is done from the File menu by selecting ``Set Path...''. Click ``Add Folder'' in the dialog box that appears and browse to select the ``muwave'' directory, which was created where you extracted the zip-file. Please make sure to press the ``Save'' button before closing the Set Path dialog box.
\item Normally, MuWave is now ready for action. However, it may in rare cases be necessary to execute the following line before MuWave is used the first time:
\begin{verbatim}
>> clear classes; rehash;
\end{verbatim}
\end{enumerate}

You can verify that MuWave has been successfully installed by creating an empty measurement object. This is done by executing \verb">> meassp". The following should be displayed at the command prompt:
\begin{verbatim}
>> meassp
 Measurement info
    Date :      18-Oct-2004 20:33:57
    Origin :
    Operator :
    Info :
 Measurement state
    MeasType:   SP
 Measurement Data
   xparam-object
    type:       S
    reference:  50
    ports:      2
    elements:   0
\end{verbatim}
{ "alphanum_fraction": 0.7071665876, "avg_line_length": 37.625, "ext": "tex", "hexsha": "b885c2aaf0652ff64463692106aac29d5234fbec", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a91c034b0383dda5458eb4935926b8b65a211311", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "extrakteon/muwave", "max_forks_repo_path": "doc/usersmanual/Installation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a91c034b0383dda5458eb4935926b8b65a211311", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "extrakteon/muwave", "max_issues_repo_path": "doc/usersmanual/Installation.tex", "max_line_length": 186, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a91c034b0383dda5458eb4935926b8b65a211311", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "extrakteon/muwave", "max_stars_repo_path": "doc/usersmanual/Installation.tex", "max_stars_repo_stars_event_max_datetime": "2017-04-30T17:44:29.000Z", "max_stars_repo_stars_event_min_datetime": "2017-04-30T17:44:29.000Z", "num_tokens": 556, "size": 2107 }
\section{Graphs}
\begin{frame}{Graphs}{Paths}
  For a graph {\color{MainA}$G = (V, E)$}:
  \begin{itemize}
    \item A path in {\color{MainA}$G$} is a sequence of vertices
      \begin{math}
        {\color{MainA}u_1}, {\color{MainA}u_2}, \dots, {\color{MainA}u_i} \in {\color{MainA}V}
      \end{math}
      with
      \begin{itemize}
        \item Undirected graph:
          \begin{math}
            \{{\color{MainA}u_1}, {\color{MainA}u_2}\}, \{{\color{MainA}u_2}, {\color{MainA}u_3}\}, \dots, \{{\color{MainA}u_{i-1}}, {\color{MainA}u_i}\} \in {\color{MainA}E}
          \end{math}
        \item Directed graph:
          \begin{math}
            ({\color{MainA}u_1}, {\color{MainA}u_2}), ({\color{MainA}u_2}, {\color{MainA}u_3}), \dots, ({\color{MainA}u_{i-1}}, {\color{MainA}u_i}) \in {\color{MainA}E}
          \end{math}
      \end{itemize}
    \item The {\color{MainA}length of a path} is
      \begin{itemize}
        \item Without weights: {\color{MainA}number of edges} taken
        \item With weights: {\color{MainA}sum of weights of edges} taken
      \end{itemize}
  \end{itemize}
\end{frame}
%-------------------------------------------------------------------------------
\begin{frame}{Graphs}{Paths}
  For a graph {\color{MainA}$G = (V, E)$}:
  \begin{itemize}
    \item The {\color{MainA}shortest path} between two vertices ${\color{MainA}u}, {\color{MainA}v}$ is the path
      \begin{math}
        {\color{MainA}P} = ({\color{MainA}u}, \dots, {\color{MainA}v})
      \end{math}
      with the shortest length ${\color{MainA}d(u,v)}$ or lowest cost
    \item The {\color{MainA}diameter} of a graph is the {\color{MainA}longest shortest path}
  \end{itemize}
\end{frame}
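% Illustrative addition (not part of the original slides): a minimal sketch of
% computing shortest-path lengths in an unweighted graph by breadth-first
% search. Python and the function/variable names are assumptions made for the
% example only.
\begin{frame}[fragile]{Graphs}{Shortest paths: a minimal BFS sketch}
  One possible way to compute the shortest-path length ${\color{MainA}d(u,v)}$ in an
  unweighted graph, assuming the graph is stored as an adjacency-list dictionary:
\begin{verbatim}
from collections import deque

def shortest_path_length(adj, u, v):
    # adj maps each vertex to the list of its neighbours
    dist = {u: 0}
    queue = deque([u])
    while queue:
        w = queue.popleft()
        if w == v:
            return dist[w]
        for n in adj[w]:
            if n not in dist:
                dist[n] = dist[w] + 1
                queue.append(n)
    return None  # v is not reachable from u
\end{verbatim}
\end{frame}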
{ "alphanum_fraction": 0.4807886754, "avg_line_length": 29.9696969697, "ext": "tex", "hexsha": "12afaa0fac194a8eaf180064c00b8acc97d91915", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2021-04-05T08:36:38.000Z", "max_forks_repo_forks_event_min_datetime": "2016-10-07T11:55:23.000Z", "max_forks_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "TobiOnline/AlgoDat", "max_forks_repo_path": "Lecture-13/Chapter/eng/010_Graphs.tex", "max_issues_count": 23, "max_issues_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_issues_repo_issues_event_max_datetime": "2019-10-20T15:40:10.000Z", "max_issues_repo_issues_event_min_datetime": "2016-10-08T09:27:41.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "TobiOnline/AlgoDat", "max_issues_repo_path": "Lecture-13/Chapter/eng/010_Graphs.tex", "max_line_length": 81, "max_stars_count": 17, "max_stars_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "TobiOnline/AlgoDat", "max_stars_repo_path": "Lecture-13/Chapter/eng/010_Graphs.tex", "max_stars_repo_stars_event_max_datetime": "2020-08-26T11:07:16.000Z", "max_stars_repo_stars_event_min_datetime": "2016-12-16T17:42:34.000Z", "num_tokens": 637, "size": 1978 }
\documentclass[10pt, a4paper, twocolumn, titlepage]{article} \usepackage[utf8]{inputenc} %\usepackage{ngerman} \usepackage{datetime} \usepackage{xcharter} \usepackage{graphicx} \usepackage[section]{placeins} \usepackage{spverbatim} \usepackage{hyperref} \usepackage{fancyvrb} \usepackage{vhistory} \usepackage{lipsum} %\usepackage{fontspec} %\newfontfamily\collfont[Path=./]{Valkyrie T3 Regular.otf} \makeatletter \renewcommand\paragraph{\@startsection{paragraph}{4}{\z@}% {1em}% {-1em}% {\normalfont\normalsize\itshape}} \makeatother \newcommand{\formaturl}[1]{\underline{#1}} \newcommand{\inlinecode}{\texttt} \renewcommand*{\ttdefault}{qcr} \newcommand{\AW}{Andreas Wiedel} \title{LaTex Template} \author{Andreas Wiedel \\ [email protected] \\ } \date{\today} \begin{document} \maketitle \begin{abstract} The abstract text. \end{abstract} \onecolumn \begin{versionhistory} \vhEntry{1.0.0}{2018-06-27}{\AW}{Creation of the document} \end{versionhistory} \newpage \clearpage \tableofcontents % create a table of contents \listoffigures \listoftables \newpage \clearpage \twocolumn \section{Sample Text} \cite{sample} \lipsum[10-30] \onecolumn \lipsum[3-9] \twocolumn \lipsum[10-30] \newpage \clearpage %\onecolumn \bibliographystyle{elsarticle-num-custom} \bibliography{references} \end{document}
{ "alphanum_fraction": 0.7468448404, "avg_line_length": 14.329787234, "ext": "tex", "hexsha": "9c92ee779e98d264ce5df6c952b9d97771da5dd6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "03f13f5fcce57b0a9706276806fa3573b599a10b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Kaishiyoku/kaishiyoku-latex-article-template", "max_forks_repo_path": "template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "03f13f5fcce57b0a9706276806fa3573b599a10b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Kaishiyoku/kaishiyoku-latex-article-template", "max_issues_repo_path": "template.tex", "max_line_length": 60, "max_stars_count": null, "max_stars_repo_head_hexsha": "03f13f5fcce57b0a9706276806fa3573b599a10b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Kaishiyoku/kaishiyoku-latex-article-template", "max_stars_repo_path": "template.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 454, "size": 1347 }
\chapter{Gravity and Geometry}

The observable universe is stable. There are two obvious configurations in which this is possible:
\begin{enumerate}
  \item Static universe: masses are arranged in a grid, all net forces cancel. However, small fluctuations cause the system to collapse; therefore this is not a possible description of the universe.
  %TODO picture
  \item Expanding universe: all masses move away from each other, overcoming the gravitational attraction. Theoretically such a system can be described using Newtonian physics by introducing additional energy contributions. This turns out to be inconsistent.
\end{enumerate}
Since in the second description all particles are accelerated relative to each other, there are no inertial systems. A theory in which all observers are equal must therefore be local and thus be described by means of differential geometry.

We claim that the laws of physics are the same in every system. If we assume that \name{Maxwell}'s equations are right, the Newtonian theory of gravity must be wrong. Implications: all free falling systems are equivalent (i.e. indistinguishable by the observer). Light must bend, otherwise a beam could be used to deduce whether your system is inertial.

The following example illustrates that Euclidean geometry is not an adequate description of space-time.
\begin{example}[Rotating Sphere]
  see Introduction to tensor calculus %TODO copy or reference page
\end{example}

\section{Coordinate Systems}
We will start by studying coordinate systems in the flat space $\Reals^2$, which should be familiar.

\subsection*{Cartesian Coordinates}
Cartesian coordinates are described by two coordinates $x,y$ that are measured in two orthogonal directions from the origin. The distance $s$ between two arbitrary points $(x_1,y_1)$ and $(x_2,y_2)$ can be calculated using \name{Pythagoras}' theorem
\begin{equation}
  s^2=(x_1-x_2)^2+(y_1-y_2)^2\,.
\end{equation}
An infinitesimal distance is likewise given by
\begin{equation}
  \dif s^2=\dif x^2+\dif y^2\,.
  \label{eq:cartline}
\end{equation}

\subsection*{Polar Coordinates}
If we describe a point in flat space by an angle $\varphi$ and a distance $r$ from the origin, we get polar coordinates. The conversion between the systems reads
\begin{equation}
  x= r\cos\varphi\quad y= r\sin\varphi\,.
\end{equation}
An infinitesimal change in the polar coordinates therefore results in
\begin{align}
  \dif x&= \dpd{x}{r}\dif r+\dpd{x}{\varphi}\dif \varphi = \cos\varphi\dif r-r\sin\varphi\dif \varphi\,,\\
  \dif y&= \dpd{y}{r}\dif r+\dpd{y}{\varphi}\dif \varphi = \sin\varphi\dif r+r\cos\varphi\dif \varphi\,.
\end{align}
Plugging this into \eqref{eq:cartline} gives the line element in polar coordinates
\begin{equation}
  \dif s^2=\dif r^2+r^2\dif \varphi^2
\end{equation}
\begin{figure}[hbtp!]
  \centering
  \includegraphics{cartcoord.pdf}
  \includegraphics{polarcoord.pdf}
  \caption{} %TODO Caption
\end{figure}
\begin{figure}[hbtp!]
  \centering
  \includegraphics[scale=0.75]{CoordinateGridCartesian.pdf}\quad
  \includegraphics[scale=0.75]{CoordinateGridPolar.pdf}
  \caption{Coordinate grids.} %TODO Caption
\end{figure}
In matrix form
\begin{equation}
  \dif s^2=
  \begin{bmatrix}
    \dif r& \dif \varphi
  \end{bmatrix}
  \begin{bmatrix}
    1& 0\\
    0& r^2\\
  \end{bmatrix}
  \begin{bmatrix}
    \dif r\\
    \dif \varphi
  \end{bmatrix}\, .
\end{equation}
The matrix
\begin{equation}
  g(\vec{r})=
  \begin{bmatrix}
    1& 0\\
    0& r^2\\
  \end{bmatrix}\, ,
\end{equation}
is called the \emph{metric}. In general we have
\begin{equation}
  \dif s^2 = g_{ij}\dif x^i\dif x^j\, .
\end{equation}
The idea is to keep the law of inertia, i.e. particles still move on straight lines. However, we need to generalize the concept of a `straight' line in a curved space.

\section{Variation Principle}
\label{sec:varprinc}
We know that straight lines are curves minimizing the distance between two points. We generalize this concept to curved space by a variation principle. Again we take a look at flat space, but with curved coordinates.

The length $S$ of a curve $\gamma$ with $\gamma^i(\lambda) = x^i(\lambda)$ is given by the integral
\begin{equation}
  S=\int_{\gamma}\sqrt{\dif s^2} = \int_{\gamma}\sqrt{\tensor{g}{_i_j}\dif \tensor{x}{^i}\dif \tensor{x}{^j}}=\int_{a}^{b}\sqrt{\tensor{g}{_i_j}\dod{\tensor{x}{^i}}{\lambda} \dod{\tensor{x}{^j}}{\lambda}}\dif \lambda\, .
\end{equation}
As stated above, generalised straight lines satisfy $\delta S = 0$. If we define $L:=\left(\tensor{g}{_i_j}\od{\tensor{x}{^i}}{\lambda} \od{\tensor{x}{^j}}{\lambda}\right)^{\nicefrac{1}{2}}$, $S$ takes a form familiar from classical mechanics:
\begin{equation}
  S=\int_a^b L\dif \lambda\, .
\end{equation}
The extremal condition implies the Euler--Lagrange equations
\begin{equation}
  \dod{}{\lambda}\pd{L}{\left(\pd{\tensor{x}{^i}}{\lambda}\right)} -\pd{L}{\tensor{x}{^i}} =0\, .
\end{equation}
We can calculate the relevant terms to be
\begin{align}
  \dpd{L}{\tensor{x}{^i}}&=\frac{1}{2\sqrt{g_{ij}\od{x^i}{\lambda} \od{x^j}{\lambda}}}\tensor{g}{_j_k_{,i}}\dod{x^j}{\lambda} \dod{x^k}{\lambda}\,,\\
  \dpd{L}{\left(\pd{\tensor{x}{^i}}{\lambda}\right)} &=\frac{1}{\sqrt{g_{ij}\od{x^i}{\lambda} \od{x^j}{\lambda}}}\tensor{g}{_j_i}\dod{x^j}{\lambda}\, .
\end{align}
If we choose the parameter $\lambda$ so that we are parametrised by the arc length\footnote{This is impossible for null, i.e.\ lightlike, geodesics; it can be shown, however, that the resulting equation also holds true for null geodesics.}, i.e.
\begin{equation}
  \od{}{\lambda}\left(\sqrt{g_{ij}\od{x^i}{\lambda}\od{x^j}{\lambda}}\right)=0\,,
\end{equation}
the Euler--Lagrange equations simplify to
\begin{equation}
  0=\frac{1}{\sqrt{\tensor{g}{_i_j}\od{\tensor{x}{^i}}{\lambda} \od{\tensor{x}{^j}}{\lambda}}}\dod{}{\lambda} \left(\tensor{g}{_j_i}\dod{\tensor{x}{^j}}{\lambda}\right) -\frac{1}{2\sqrt{\tensor{g}{_i_j}\od{x^i}{\lambda} \od{\tensor{x}{^j}}{\lambda}}}\tensor{g}{_j_k_{,i}}\dod{\tensor{x}{^j}}{\lambda} \dod{\tensor{x}{^k}}{\lambda}\,,
\end{equation}
or equivalently
\begin{equation}
  \begin{split}
    0 &=\dod{}{\lambda}\left(\tensor{g}{_j_i}\dod{\tensor{x}{^j}}{\lambda}\right) -\frac{1}{2}\tensor{g}{_j_k_{,i}}\dod{\tensor{x}{^j}}{\lambda} \dod{\tensor{x}{^k}}{\lambda}\\
    &=\tensor{g}{_j_i_{,k}}\dod{\tensor{x}{^j}}{\lambda}\dod{\tensor{x}{^k}}{\lambda} +\tensor{g}{_j_i}\dod[2]{\tensor{x}{^j}}{\lambda} -\frac{1}{2}\tensor{g}{_j_k_{,i}}\dod{\tensor{x}{^j}}{\lambda}\dod{\tensor{x}{^k}}{\lambda}\\
    &=\tensor{g}{_j_i}\dod[2]{\tensor{x}{^j}}{\lambda} +\frac{1}{2}\left(\tensor{g}{_j_i_{,k}}+\tensor{g}{_k_i_{,j}} -\tensor{g}{_j_k_{,i}}\right)\dod{\tensor{x}{^j}}{\lambda} \dod{\tensor{x}{^k}}{\lambda}\label{eq:PreGeo}\,.
  \end{split}
\end{equation}
The term involving derivatives of the metric defines the \emph{Christoffel symbols of the first kind}
\begin{equation}
  \csym{j}{k}{i}:=\frac{1}{2} \left(\tensor{g}{_j_i_{,k}}+\tensor{g}{_k_i_{,j}} -\tensor{g}{_j_k_{,i}}\right)\, .
\end{equation}
It is convenient to multiply \eqref{eq:PreGeo} by the inverse metric $g^{li}$ so that we obtain the \emph{geodesic equation}
\begin{equation}
  0 = \od[2]{\tensor{x}{^l}}{\lambda} +\cSym{l}{j}{k}\od{\tensor{x}{^j}}{\lambda}\od{\tensor{x}{^k}}{\lambda}\, .\label{eq:geodeq}
\end{equation}
Here $\cSym{l}{j}{k}$ are the \emph{Christoffel symbols of the second kind}
\begin{equation}
  \cSym{l}{j}{k}:=g^{li}\csym{j}{k}{i}=\frac{1}{2}g^{li} \left(\tensor{g}{_j_i_{,k}}+\tensor{g}{_k_i_{,j}} -\tensor{g}{_j_k_{,i}}\right)\, .
\end{equation}
% remark is obsolete as long as bracket notation for christoffel symbols is used
%\begin{remark}
%Although the notation looks as the Christoffel symbols form a tensor, however
% they do not.
%\end{remark}
In flat space we have $\tensor{g}{_i_j}=\tensor{\eta}{_i_j}$ and can easily check that all Christoffel symbols vanish. We therefore recover the ordinary equation of motion for a free particle
\begin{equation}
  0 = \od[2]{\tensor{x}{^i}}{\lambda}\, .
\end{equation}
\begin{figure}[hbtp!]
  \centering
  \includegraphics{sphere_geodesics1.pdf}
  \caption{Great circles are geodesics, i.e. shortest connections of points, on a sphere.} %TODO Caption
\end{figure}
\begin{figure}[hbtp!]
  \centering
  \includegraphics{WorldlineLightcones.pdf}
  \caption{} %TODO Caption
\end{figure}
% \begin{example}
%   Suppose an observer follows a free falling body in a homogeneous field.
%   Therefore a transformation between the system of the earth and the one of the body are given by
%   (for simplicity we only consider the coordinate along which it is falling)
%   \begin{equation}
%     (t,x)\to\left(t,x-\frac{1}{2}gt^2\right)
%   \end{equation}
%   Analogous to the Riemannian case discussed before, the line element takes the form
%   \begin{equation}
%     \begin{split}
%       \dif s^2&=-\dif t^2 +\dif x^2\\
%       &=-\dif t'^2+(\dif x'- gt\dif t')(\dif x'- gt\dif t')\\
%       &=(g^2t'^2-1)\dif t'^2-2gt\dif x'\dif t'+\dif x'^2
%     \end{split}
%   \end{equation}
% \end{example}
% \section{Newtonian Limit}
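As a quick illustration of these formulas, consider again the flat plane in polar coordinates, where $\dif s^2=\dif r^2+r^2\dif \varphi^2$, i.e. $g_{rr}=1$ and $g_{\varphi\varphi}=r^2$. The only non-vanishing Christoffel symbols of the second kind are
\begin{equation}
  \cSym{r}{\varphi}{\varphi}=-r\,,\qquad
  \cSym{\varphi}{r}{\varphi}=\cSym{\varphi}{\varphi}{r}=\frac{1}{r}\,,
\end{equation}
so the geodesic equation \eqref{eq:geodeq} becomes
\begin{equation}
  0=\dod[2]{r}{\lambda}-r\left(\dod{\varphi}{\lambda}\right)^2\,,\qquad
  0=\dod[2]{\varphi}{\lambda}+\frac{2}{r}\dod{r}{\lambda}\dod{\varphi}{\lambda}\,,
\end{equation}
which is nothing but the equation of a straight line written in polar coordinates, as expected for flat space.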
{ "alphanum_fraction": 0.7006916555, "avg_line_length": 38.8051948052, "ext": "tex", "hexsha": "e53a857303e886e69fd01bd0b155045ecadfdf36", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c3ca730b97d2f90a6e74da296cf1b5bb0305126b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "QuantumDancer/GeneralRelativity", "max_forks_repo_path": "src/03-gravity-and-geometry.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c3ca730b97d2f90a6e74da296cf1b5bb0305126b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "QuantumDancer/GeneralRelativity", "max_issues_repo_path": "src/03-gravity-and-geometry.tex", "max_line_length": 122, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c3ca730b97d2f90a6e74da296cf1b5bb0305126b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Bigben37/GeneralRelativity", "max_stars_repo_path": "src/03-gravity-and-geometry.tex", "max_stars_repo_stars_event_max_datetime": "2020-07-31T13:18:57.000Z", "max_stars_repo_stars_event_min_datetime": "2020-07-31T13:18:57.000Z", "num_tokens": 3001, "size": 8964 }
\chapter{Sample definitions for derived forms} \label{derivedformsappendix} This appendix contains sample definitions for some of the keywords described in this report in terms of simpler forms: \subsubsection*{{\tt cond}} The {\cf cond} keyword (section~\ref{cond}) could be defined in terms of {\cf if}, {\cf let} and {\cf begin} using {\cf syntax-rules} as follows: \begin{scheme} (define-syntax \ide{cond} (syntax-rules (else =>) ((cond (else result1 result2 ...)) (begin result1 result2 ...)) ((cond (test => result)) (let ((temp test)) (if temp (result temp)))) ((cond (test => result) clause1 clause2 ...) (let ((temp test)) (if temp (result temp) (cond clause1 clause2 ...)))) ((cond (test)) test) ((cond (test) clause1 clause2 ...) (let ((temp test)) (if temp temp (cond clause1 clause2 ...)))) ((cond (test result1 result2 ...)) (if test (begin result1 result2 ...))) ((cond (test result1 result2 ...) clause1 clause2 ...) (if test (begin result1 result2 ...) (cond clause1 clause2 ...)))))% \end{scheme} \subsubsection*{{\tt case}} The {\cf case} keyword (section~\ref{case}) could be defined in terms of {\cf let}, {\cf cond}, and {\cf memv} (see library chapter~\extref{lib:listutilities}{List utilities}) using {\cf syntax-rules} as follows: \begin{scheme} (define-syntax \ide{case} (syntax-rules (else) ((case expr0 ((key ...) res1 res2 ...) ... (else else-res1 else-res2 ...)) (let ((tmp expr0)) (cond ((memv tmp '(key ...)) res1 res2 ...) ... (else else-res1 else-res2 ...)))) ((case expr0 ((keya ...) res1a res2a ...) ((keyb ...) res1b res2b ...) ...) (let ((tmp expr0)) (cond ((memv tmp '(keya ...)) res1a res2a ...) ((memv tmp '(keyb ...)) res1b res2b ...) ...)))))% \end{scheme} \subsubsection*{{\tt let*}} The {\cf let*} keyword (section~\ref{let*}) could be defined in terms of {\cf let} using {\cf syntax-rules} as follows: \begin{scheme} (define-syntax \ide{let*} (syntax-rules () ((let* () body1 body2 ...) (let () body1 body2 ...)) ((let* ((name1 expr1) (name2 expr2) ...) body1 body2 ...) (let ((name1 expr1)) (let* ((name2 expr2) ...) body1 body2 ...)))))% \end{scheme} \subsubsection*{{\tt letrec}} The {\cf letrec} keyword (section~\ref{letrec}) could be defined approximately in terms of {\cf let} and {\cf set!} using {\cf syntax-rules}, using a helper to generate the temporary variables needed to hold the values before the assignments are made, as follows: \begin{scheme} (define-syntax \ide{letrec} (syntax-rules () ((letrec () body1 body2 ...) (let () body1 body2 ...)) ((letrec ((var init) ...) body1 body2 ...) (letrec-helper (var ...) () ((var init) ...) body1 body2 ...)))) (define-syntax letrec-helper (syntax-rules () ((letrec-helper () (temp ...) ((var init) ...) body1 body2 ...) (let ((var <undefined>) ...) (let ((temp init) ...) (set! var temp) ...) (let () body1 body2 ...))) ((letrec-helper (x y ...) (temp ...) ((var init) ...) body1 body2 ...) (letrec-helper (y ...) (newtemp temp ...) ((var init) ...) body1 body2 ...))))% \end{scheme} The syntax {\cf <undefined>} represents an expression that returns something that, when stored in a location, causes an exception with condition type {\cf\&assertion} to be raised if an attempt to read from or write to the location occurs before the assignments generated by the {\cf letrec} transformation take place. (No such expression is defined in Scheme.) A simpler definition using {\cf syntax-case} and {\cf generate-\hp{}temporaries} is given in library chapter~\extref{lib:syntaxcasechapter}{{\cf syntax-case}}. 
\subsubsection*{{\tt letrec*}} The {\cf letrec*} keyword could be defined approximately in terms of {\cf let} and {\cf set!} using {\cf syntax-rules} as follows: \begin{scheme} (define-syntax \ide{letrec*} (syntax-rules () ((letrec* ((var1 init1) ...) body1 body2 ...) (let ((var1 <undefined>) ...) (set! var1 init1) ... (let () body1 body2 ...)))))% \end{scheme} The syntax {\cf <undefined>} is as in the definition of {\cf letrec} above. \subsubsection*{{\tt let-values}} The following definition of {\cf let-values} (section~\ref{let-values}) using {\cf syntax-rules} employs a pair of helpers to create temporary names for the formals. \begin{scheme} (define-syntax let-values (syntax-rules () ((let-values (binding ...) body1 body2 ...) (let-values-helper1 () (binding ...) body1 body2 ...)))) (define-syntax let-values-helper1 ;; map over the bindings (syntax-rules () ((let-values ((id temp) ...) () body1 body2 ...) (let ((id temp) ...) body1 body2 ...)) ((let-values assocs ((formals1 expr1) (formals2 expr2) ...) body1 body2 ...) (let-values-helper2 formals1 () expr1 assocs ((formals2 expr2) ...) body1 body2 ...)))) (define-syntax let-values-helper2 ;; create temporaries for the formals (syntax-rules () ((let-values-helper2 () temp-formals expr1 assocs bindings body1 body2 ...) (call-with-values (lambda () expr1) (lambda temp-formals (let-values-helper1 assocs bindings body1 body2 ...)))) ((let-values-helper2 (first . rest) (temp ...) expr1 (assoc ...) bindings body1 body2 ...) (let-values-helper2 rest (temp ... newtemp) expr1 (assoc ... (first newtemp)) bindings body1 body2 ...)) ((let-values-helper2 rest-formal (temp ...) expr1 (assoc ...) bindings body1 body2 ...) (call-with-values (lambda () expr1) (lambda (temp ... . newtemp) (let-values-helper1 (assoc ... (rest-formal newtemp)) bindings body1 body2 ...))))))% \end{scheme} \subsubsection*{{\tt let*-values}} The following macro defines {\cf let*-values} in terms of {\cf let} and {\cf let-values} using {\cf syntax-rules}: \begin{scheme} (define-syntax let*-values (syntax-rules () ((let*-values () body1 body2 ...) (let () body1 body2 ...)) ((let*-values (binding1 binding2 ...) body1 body2 ...) (let-values (binding1) (let*-values (binding2 ...) body1 body2 ...)))))% \end{scheme} \subsubsection*{{\tt let}} The {\cf let} keyword could be defined in terms of {\cf lambda} and {\cf letrec} using {\cf syntax-rules} as follows: \begin{scheme} (define-syntax \ide{let} (syntax-rules () ((let ((name val) ...) body1 body2 ...) ((lambda (name ...) body1 body2 ...) val ...)) ((let tag ((name val) ...) body1 body2 ...) ((letrec ((tag (lambda (name ...) body1 body2 ...))) tag) val ...))))% \end{scheme} %%% Local Variables: %%% mode: latex %%% TeX-master: "r6rs" %%% End:
{ "alphanum_fraction": 0.5566675683, "avg_line_length": 26.4071428571, "ext": "tex", "hexsha": "52dfb64a6da2ab6239b26db43c7badb4995aff61", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2f998d354177dc41a8d3147fd15c056a14ffabda", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "schemedoc/scheme-rnrs-metadata", "max_forks_repo_path": "r6rs/derived.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "2f998d354177dc41a8d3147fd15c056a14ffabda", "max_issues_repo_issues_event_max_datetime": "2019-09-26T17:56:02.000Z", "max_issues_repo_issues_event_min_datetime": "2019-03-27T22:24:05.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "schemedoc/scheme-rnrs-metadata", "max_issues_repo_path": "r6rs/derived.tex", "max_line_length": 112, "max_stars_count": 1, "max_stars_repo_head_hexsha": "2f998d354177dc41a8d3147fd15c056a14ffabda", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "schemedoc/scheme-rnrs-metadata", "max_stars_repo_path": "r6rs/derived.tex", "max_stars_repo_stars_event_max_datetime": "2019-03-27T20:28:48.000Z", "max_stars_repo_stars_event_min_datetime": "2019-03-27T20:28:48.000Z", "num_tokens": 2013, "size": 7394 }
\documentclass[10pt, compress]{beamer} \PassOptionsToPackage{export}{adjustbox} \usepackage{ bm, tikz, tikzscale, float, subfloat, caption, subcaption, fancybox, xmpmulti, empheq, framed, } \usetikzlibrary{fit, positioning, bayesnet} \usepackage[mode=buildnew]{standalone} %\usepackage[export]{adjustbox} \usepackage{booktabs} \usepackage[scale=2]{ccicons} \usefonttheme[onlymath]{serif} \usepackage{pdfpcnotes} %\usemintedstyle{trac} \title{Bayesian learning of structured embeddings} \subtitle{Thesis proposal} \date{May 11, 2018} \author{Sharad Vikram} \institute{UCSD} %\setcounter{tocdepth}{1} \include{defs} \usetheme{metropolis} % Use metropolis theme \begin{document} \begin{frame} \titlepage \end{frame} \section{Overview} \begin{frame}{Unsupervised learning} \begin{columns}[T] \begin{column}{0.5\textwidth} \begin{overlayarea}{\textwidth}{7cm} \begin{center} \Large \textbf{Structure}\\ \vspace{10pt} \only<1>{\includegraphics[width=\textwidth]{img/intro-0}} \only<2>{\includegraphics[width=\textwidth]{img/intro-1}} \only<3>{\includegraphics[width=\textwidth]{img/intro-2}} \only<4->{\includegraphics[width=\textwidth]{img/intro-3}} \end{center} \end{overlayarea} \end{column} \begin{column}{0.5\textwidth} \begin{overlayarea}{\textwidth}{5cm} \centering \Large \textbf{Embedding} \\ \vspace{10pt} \only<5->{\includegraphics[width=\textwidth]{img/intro-right-1}} \end{overlayarea} \end{column} \end{columns} \end{frame} \begin{frame}{Bayesian unsupervised learning} \centering \begin{columns}[T] \begin{column}{0.5\textwidth} \begin{overlayarea}{\textwidth}{7cm} \begin{center} \Large \textbf{Structure}\\ \vspace{10pt} \multiinclude[<+->][format=png, start=1]{img/bayesian-intro-left} \end{center} \end{overlayarea} \end{column} \begin{column}{0.5\textwidth} \begin{overlayarea}{\textwidth}{5cm} \centering \Large \textbf{Embedding} \\ \vspace{60pt} \onslide<4>{\includegraphics[]{img/bayesian-intro-right-1}} \end{overlayarea} \end{column} \end{columns} \end{frame} \begin{frame}{Research overview} \centering \multiinclude[<+>][format=png, graphics={width=0.8\textwidth}]{img/overview} \end{frame} \section{Structure learning} \begin{frame}{Structure learning} \centering What is structure learning? 
\pause \centering \begin{figure} \centering \pause \begin{subfigure}[t]{0.27\textwidth} \centering \includegraphics[width=\textwidth]{img/clustering} \end{subfigure} \pause \hfill \begin{subfigure}[t]{0.27\textwidth} \centering \includegraphics[width=\textwidth]{img/tree-1234-balanced} \end{subfigure} \pause \hfill \begin{subfigure}[t]{0.27\textwidth} \centering \includegraphics[width=\textwidth]{img/lds} \end{subfigure} \end{figure} \pause Structure learning seeks out patterns in data that has intuitive meaning to us \end{frame} \begin{frame}{Challenge: ambiguity} \centering \only<1-4>{Consider the clustering scenario.} \only<5>{A Bayesian approach models this ambiguity} \multiinclude[<+->][format=png, start=0, end=4]{img/ambiguous-bayes} \end{frame} \begin{frame}{Can we do better?} \centering \only<-3>{ \vspace{20pt} \multiinclude[<+-3>][format=png]{img/cluster-interaction} \vspace{20pt} } \only<4>{\includegraphics{img/ambiguous-cluster3-2}} \onslide<3-4>{User provides constraint $1$ must link with $2$.} \end{frame} \begin{frame}{Interactive structure learning} \multiinclude[<+>][graphics={width=\textwidth}, format=png, start=1]{img/structure-interaction} \onslide<2->{Algorithm provides a structure $S$} \onslide<3->{and human provides a constraint $c$} \end{frame} \begin{frame}{Bayesian interactive structure learning} \centering How are these problems framed in the Bayesian setting? \pause \vspace{10pt} \only<2>{\includegraphics{tikz/structure}} \only<3>{\includegraphics{tikz/interactive-structure}} \onslide<2>{\textbf{Structure learning:} We are interested in $p(S | \bm{y})$} \onslide<3>{\textbf{Interactive structure learning:} We are interested in $p(S | \bm{y}, c)$} \end{frame} \begin{frame}{Work in Bayesian interactive structure learning} \begin{itemize} \item Constrained clustering - \cite{Wagstaff2000ClusteringConstraints, NelsonRevisitingConstraints, ShentalComputingConstraints, LangeLearningData} \\ \pause \begin{center} \includegraphics[]{img/must-link} \hspace{20pt} \includegraphics[]{img/mustnot-link} \end{center} \pause \item Constrained Bayesian hierarchical clustering - \cite{Vikram2016} \pause $\begin{array}{l}\includegraphics{img/overview-digit-1}\end{array}$ \end{itemize} \end{frame} \begin{frame}{Interactive hierarchical clustering} \begin{center} \includegraphics<1>[width=\textwidth]{img/interaction-0} \includegraphics<2>[width=\textwidth]{img/interaction-1} \includegraphics<3>[width=\textwidth]{img/interaction-2} \includegraphics<4->[width=\textwidth]{img/interaction-3} \end{center} \begin{itemize} \item<4-> User provides a \alert{triplet}. $a$ and $b$ should be in a subtree together without $c$. \item<5-> We show the user the induced tree on a subset of the data $T|_S$. \end{itemize} \end{frame} \begin{frame}{Bayesian hierarchical clustering} We desire a probability distribution over all possible trees that explain the data. \pause \begin{center} %\includegraphics<2>[width=0.7\textwidth]{img/3-cluster-distribution.png} \includegraphics[width=0.7\textwidth]{img/3-cluster-linear-distribution.png} \end{center} \end{frame} \begin{frame}{Bayesian hierarchical clustering: generative process} Define a generative model for the data. 
\begin{center} $\begin{array}{l}\includegraphics{tikz/tmc}\end{array}$ \end{center} \begin{itemize} \pause \item Prior distribution $p(\tau)$, (Dirichlet diffusion tree, Kingman's coalescent, Time-marginalized coalescent) \pause \item Likelihood model $p(\bm{x} | \tau)$, (Brownian motion, Dirichlet-multinomial diffusion) \end{itemize} \begin{center} \includegraphics<2-3>[width=0.6\textwidth]{img/tree-data-0} \includegraphics<4>[width=0.6\textwidth]{img/tree-data-1} \end{center} \end{frame} \begin{frame}{Inference in BHC} We are interested in the posterior distribution $p(\tau | \bm{x})$, which we compute via MCMC methods like Metropolis-Hastings. \pause A popular local search step is \alert{subtree-prune and regraft} (SPR). \begin{center} \multiinclude[<+>][format=png, graphics={width=\textwidth}, start=1]{img/spr} \end{center} \end{frame} \begin{frame}{Incorporating triplet feedback} Recall the interaction model. \begin{center} \includegraphics[width=0.7\textwidth]{img/interaction-3} \end{center} \pause \textbf{Idea:} enforce triplet constraints with modified SPR move \pause \begin{center} \includegraphics[width=\textwidth]{img/cspr-animation} \end{center} \end{frame} \begin{frame}{Results} \centering \vspace{10pt} \begin{figure} \frame{\includegraphics[width=0.7\textwidth]{img/trees/MNIST-result}} \caption*{Results of interactive Bayesian hierarchical clustering on MNIST} \end{figure} \end{frame} \begin{frame}{Overview} \centering \includegraphics[width=0.8\textwidth]{img/overview-1} \end{frame} \section{Embedding learning (background)} \iffalse \begin{frame}{Bayesian embedding learning} \centering What's the difference between embedding and structure learning? \pause \begin{center} \begin{columns} \begin{column}{0.3\textwidth} \includegraphics{tikz/structure} \end{column} \pause \begin{column}{0.3\textwidth} \includegraphics{tikz/vae} \end{column} \end{columns} \end{center} \end{frame} \fi \begin{frame}{Variational autoencoder} The VAE is a generative model for data. \begin{align*} x_i &\sim \N(0, I) \\ y_i &\sim \N(\mu_\gamma(x_i), \Sigma_\gamma(x_i)) \end{align*} where $\mu_\gamma$ and $\Sigma_\gamma$ are neural networks parameterized by $\gamma$. \pause \centering \includegraphics{tikz/vae} \pause How do we do inference? \alert<+>{Variational inference} \end{frame} \begin{frame}{Variational inference at a high level} For models where sampling is too slow, \textbf{variational inference} is a viable approach. \pause Consider a latent variable model with latent variable $x$ and observation $y$. Our desired posterior is $p(x | y)$. \pause \textbf{Strategy}: convert inference into optimization \begin{itemize} \pause \item Instantiate \emph{variational distribution} $q_\phi(x)$ where $\phi$ are free parameters \pause \item Define objective $\elbo[q_\phi(x)]$. \pause \item Maximize objective $\phi^* = \argmax_\phi \elbo[q_\phi(x)]$ \end{itemize} \pause If $q_\phi(x)$ is sufficiently expressive, it can approximate $p(x | y)$ quite well. \end{frame} \begin{frame}{Evidence lower bound} This objective is called the \alert{evidence lower bound} (ELBO). \pause \begin{align*} \elbo[q(x)] &\triangleq \E_{q(x)}\left[\log\frac{p(y, x)}{q(x)}\right] \end{align*} It has a unique relationship to the KL divergence $\KL(q(x)\|p(x|y))$. 
We can rewrite the KL divergence as \begin{align*} \KL(q(x)\|p(x|y)) &= \int q(x)\log\frac{q(x)}{p(x | y)}dx \\ &= \log p(y) - \E_{q(x)}\left[\log\frac{p(y, x)}{q(x)}\right] \end{align*} \end{frame} \begin{frame}{Variational autoencoder} \textbf{Basic strategy:} gradient-based variational inference \pause \begin{center} \begin{framed} Pick $q_\phi(x | y)$ (can be conditional) to be a \emph{neural network} parameterized by $\phi$ \end{framed} \end{center} \pause Let $r_\phi(y)$ be a neural network (with weights $\phi$) that outputs the parameters to a distribution, for example Gaussian. \begin{align*} q_\phi(x | y) = \N(r_\phi(y)) \end{align*} \pause Goal: learn the weights for the two neural networks $(\mu_\gamma(x), \Sigma_\gamma(x))$ and $r_\phi(y)$ via SGD on the ELBO. \pause \begin{align*} \elbo[q_\phi(x | y)] &= \E_{q_\phi(x | y)}\left[\log\frac{p(y, x)}{q_\phi(x | y)}\right] \end{align*} \end{frame} \begin{frame}{Sidebar: Monte-Carlo ELBO} We can't compute the ELBO because of the expectation w.r.t $q_\phi(x|y)$. \begin{align*} \elbo[q_\phi(x | y)] &= \E_{q_\phi(x | y)}\left[\log \frac{p(y, x)}{q_\phi(x | y)}\right] \end{align*} We can sample from $q$ however: \begin{align*} \hat{\elbo}[q_\phi(x | y)] &\triangleq \frac{1}{L}\sum_{l = 1}^L \log \frac{p(y, x^{(l)})}{q_\phi(x^{(l)} | y)} \end{align*} Now, we can compute gradients. \begin{align*} \gamma^{(t + 1)} &\leftarrow \gamma^t + \rho \nabla_\gamma \hat{\elbo} \\ \phi^{(t + 1)} &\leftarrow \phi^t + \rho \nabla_\phi \hat{\elbo} \\ \end{align*} \end{frame} \section{Structured embedding learning} \begin{frame}{Structured embedding models} \begin{figure} \centering \pause \begin{subfigure}[t]{0.27\textwidth} \centering \includegraphics{tikz/lgmm} \caption*{Latent GMM} \end{subfigure} \pause \hfill \begin{subfigure}[t]{0.27\textwidth} \centering \includegraphics{tikz/ltmc} \caption*{Latent hierarchical clustering} \end{subfigure} \pause \hfill \begin{subfigure}[t]{0.27\textwidth} \centering \includegraphics{tikz/llds} \caption*{Latent LDS} \end{subfigure} \end{figure} \end{frame} \begin{frame}{Structured variational autoencoder} SVAE - \cite{Johnson2016} \centering \begin{columns} \begin{column}{0.3\textwidth} \includegraphics{tikz/lgmm} \end{column} \begin{column}{0.3\textwidth} \begin{align*} \pi &\sim \textrm{Dirichlet}(\alpha) \\ \{\mu_k, \Sigma_k\}_{k = 1}^K &\sim \NIW(\Psi, \mu_0, \kappa, \nu) \\ z_i | \pi &\sim \textrm{Categorical}(\pi) \\ x_i | z_i, \{\mu_k, \Sigma_k\}_{k = 1}^K \ &\sim \N(\mu_{z_i}, \Sigma_{z_i}) \\ y_i | x_i &\sim \N(\mu_\gamma(x_i), \Sigma_\gamma(x_i)) \end{align*} \end{column} \end{columns} \end{frame} \begin{frame}{Why SVAE?} \begin{center} \begin{figure} \includegraphics[frame,width=\textwidth]{img/svae-example} \caption{\cite{Johnson2016}} \end{figure} \end{center} \pause In this scenario, the SVAE enables modeling non-Gaussian cluster shapes. \end{frame} \begin{frame}{Structured variational autoencoder} \centering \begin{columns} \begin{column}{0.3\textwidth} \includegraphics{tikz/lgmm} \end{column} \begin{column}{0.3\textwidth} \begin{align*} \pi &\sim \textrm{Dirichlet}(\alpha) \\ \{\mu_k, \Sigma_k\}_{k = 1}^K &\sim \NIW(\Psi, \mu_0, \kappa, \nu) \\ z_i | \pi &\sim \textrm{Categorical}(\pi) \\ x_i | z_i, \{\mu_k, \Sigma_k\}_{k = 1}^K \ &\sim \N(\mu_{z_i}, \Sigma_{z_i}) \\ y_i | x_i &\sim \N(\mu_\gamma(x_i), \Sigma_\gamma(x_i)) \end{align*} \end{column} \end{columns} \pause How do we do inference? 
\end{frame} \begin{frame}{What makes inference in these models hard?} \begin{figure} \centering \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics{tikz/lgmm} \caption*{Latent GMM} \end{subfigure} \hfill \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics{tikz/ltmc} \caption*{Latent hierarchical clustering} \end{subfigure} \hfill \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics{tikz/llds} \caption*{Latent LDS} \end{subfigure} \end{figure} \pause \centering All of these models have a \alert{nonconjugate} edge! \end{frame} \begin{frame}[standout] \centering Can we generalize using neural networks as a tool for approximate inference? \end{frame} \begin{frame}{Mean-field variational inference} \centering What happens if the model is simple (conjugate-exponential)? \pause Mean-field variational inference \begin{columns} \begin{column}{0.3\textwidth} \includegraphics{tikz/gmm} \end{column} \begin{column}{0.5\textwidth} \begin{itemize} \item Initialize individual distributions: \\ $q(\pi), \{q(z_i)\}_{i = 1}^N, \{q(\mu_k, \Sigma_k)\}_{k = 1}^K$. \item Perform coordinate ascent on the ELBO over each $q$. \end{itemize} \end{column} \end{columns} \end{frame} \begin{frame}{Variational message passing} Mean-field variational inference has a corresponding graph-based algorithm, called \emph{variational message passing} (VMP). \pause Each individual coordinate ascent corresponds to computing ``messages'' from a node's neighbors. \begin{center} \multiinclude[graphics={width=0.5\textwidth},start=1]{img/vmp} \end{center} \end{frame} \begin{frame}{Inference in structured embedding models} First of all, a neural network is neither conjugate nor exponential. How do we use VMP? \pause \textbf{Strategy:} Run VMP, but use ``fake'' messages for the neural network observation model. \begin{center} \multiinclude[<+>][graphics={width=0.25\textwidth}, start=1]{img/nvmp} \end{center} \end{frame} \begin{frame}{Neural variational message passing} Neural variational message passing - Vikram 2018 (under review for UAI 2018) $\begin{array}{l}\includegraphics{img/overview-digit-2}\end{array}$ \begin{itemize} \pause \item Generalizes VAE and SVAE \pause \item Extends it to new models (non-linear dynamical system, Bayesian logistic regression) \pause \item Optimizes a lower bound on the ELBO \pause \item Enables very simple software package \begin{center} \begin{figure}[H] \includegraphics[width=0.8\textwidth]{img/nvmp-code} \end{figure} \end{center} \end{itemize} \end{frame} \begin{frame}{Example: Bayesian logistic regression} \centering \begin{columns} \begin{column}{0.3\textwidth} \includegraphics{tikz/blr} \end{column} \begin{column}{0.3\textwidth} \begin{align*} w &\sim \N(0, I) \\ y_i | w, x_i &\sim \mathrm{Bernoulli}(\sigma(w^Tx_i)) \end{align*} \end{column} \end{columns} \pause \alert{Problematic message:} $m^{y_i \rightarrow w}$ \pause \textbf{Replace with:} $r_\phi(y_i, x_i)$ \pause NVMP achieves parity with gradient descent! \end{frame} \begin{frame}{Overview} \centering \includegraphics[width=0.8\textwidth]{img/overview-2} \end{frame} \section{Applications} \begin{frame}{Model-based reinforcement learning} Joint work with Marvin Zhang and Sergey Levine of UC Berkeley \pause Consider an agent in a system, and a set of its possible actions. \begin{center} \includegraphics[width=0.5\textwidth]{img/agent-env-1.png} \end{center} \pause Model learning is the problem of estimating the dynamics of the system when we don't know it beforehand. 
\end{frame} \begin{frame}{Model learning} Formally, consider an agent in a system with state space $\mathcal{S}$ with action space $\mathcal{A}$ with underlying dynamics function $p(s_{t + 1} | s_t, a_t)$. We are interested in learning an approximate dynamics function \begin{align*}\hat{p}(s_{t + 1} | s_t, a_t)\end{align*} from a dataset of trajectories \begin{align*}\bm{\tau} = \{(s^{(i)}_0, a^{(i)}_0, s^{(i)}_1, a^{(i)}_1, \ldots, s^{(i)}_T)\}_{i = 1}^N\end{align*} \end{frame} \iffalse \begin{frame}{Bayesian linear dynamical system} \begin{columns} \begin{column}{0.3\textwidth} \includegraphics{tikz/blds} \end{column} \begin{column}{0.3\textwidth} \begin{align*} A, \Sigma &\sim \MNIW(\Psi, \nu, M_0, V) \\ s_{t + 1} | s_t, a_t, A, \Sigma &\sim \N(A\begin{bmatrix}s_t\\a_t\end{bmatrix}, \Sigma) \end{align*} \end{column} \end{columns} We are interested in the posterior distribution $ p(A, \Sigma | {s_0, a_0, \ldots, s_T}) $ which can be computed analytically. \end{frame} \fi \begin{frame}{Latent linear dynamical system} \begin{columns} \begin{column}{0.3\textwidth} \includegraphics{tikz/lblds} \end{column} \hspace{10pt} \begin{column}{0.6\textwidth} \begin{align*} A, \Sigma &\sim \MNIW(\Psi, \nu, M_0, V) \\ x_{t + 1} | x_t, a_t, A, \Sigma &\sim \N(A\begin{bmatrix}x_t\\a_t\end{bmatrix}, \Sigma) \\ s_t | x_t &\sim \N(\mu_\gamma(x_t), \Sigma_\gamma(x_t)) \end{align*} \end{column} \end{columns} The posterior distribution $ p(A, \Sigma, x_0, \ldots, x_T | {s_0, a_0, \ldots, s_T}) $ is no longer tractable, but can be approximated via NVMP. \end{frame} \begin{frame}{Reinforcement learning} \begin{center} Learning to Model: Deep Structured Representations for Dynamics Inference in Reinforcement Learning - Zhang, Vikram, \& Levine (submitting to NIPS 2018) $\begin{array}{l}\includegraphics{img/overview-digit-3}\end{array}$ \end{center} \pause High level approach: \begin{itemize} \pause \item Learns a mapping from non-linear dynamical system to an LDS \pause \item Model-predictive control in the latent space to minimize a cost function ( \href{7.mp4}{demo1}, \href{10-2.mp4}{demo2} ) \end{itemize} \pause Why is this approach good? \begin{itemize} \pause \item Sample efficiency \pause \item Works on image-based domains and complex robotics tasks \end{itemize} \end{frame} \begin{frame}{Overview} \centering \includegraphics[width=0.8\textwidth]{img/overview-3} \end{frame} \begin{frame}{Hierarchical clustering in latent spaces (WIP)} \begin{center} \includegraphics{tikz/ltmc} \end{center} \pause Existing work: \begin{itemize} \item Nested Chinese restaurant process - \cite{Goyal2017NonparametricLearning} \item Dirichlet diffusion tree - ? \item Kingman Coalescent - ? \item Time marginalized coalescent - ? \end{itemize} \pause Why? Inference in these models is harder! $\begin{array}{l}\includegraphics{img/overview-digit-4}\end{array}$ \end{frame} \begin{frame}{Challenges} Most challenges in using these models come from scalability: \begin{itemize} \item Representing a full binary tree over a million data points isn't practical \item Searching the space of trees if your dataset is large is impractical \item How do you train a tree with minibatches? Variational inference? \item How do you incorporate interaction? 
\end{itemize} My solutions: \begin{itemize} \item $q(\tau)$ - pruned tree \item Variational particle approximation - \cite{Saeedi2014VariationalApproximations} \item Neural variational message passing \item Interaction fits in naturally with a particle approximation \end{itemize} \end{frame} \begin{frame}{Overview} \centering \includegraphics[width=0.8\textwidth]{img/overview-4} \end{frame} \begin{frame}{Future work} Interactive learning \begin{itemize} \item Noisy constraints \end{itemize} \pause Structured embeddings \begin{itemize} \pause \item Language modeling - latent Markov model \pause \item More complex dynamical systems - switching states \end{itemize} \end{frame} \begin{frame}{Acknowledgements} \begin{itemize} \item Thanks to my advisor Sanjoy Dasgupta \item Thanks to Gary Cottrell, Julian McAuley, Lawrence Saul and Zhuowen Tu for serving on my committee \item Thanks to Chris Tosh and Stefanos Poulis for the helpful feedback on the talk \end{itemize} \end{frame} \begin{frame}[standout] Questions? \end{frame} \begin{frame}[allowframebreaks]{References} \bibliographystyle{unsrtnat} \bibliography{mendeley} \end{frame} \appendix \begin{frame}{Intelligent subset queries} Recall the interaction model. \begin{center} \includegraphics[width=\textwidth]{img/interaction-3} \end{center} \pause %We pick a subset of data $S$ and show a user the candidate tree %restricted to $S$, $T|_S$. %\pause \textbf{Idea:} pick subsets that have high variance under the posterior distribution \end{frame} \begin{frame}{Measuring subtree variance} First, sample $N$ trees from posterior distribution. \pause \metroset{block=fill} \begin{block}{Definition: tree distance variance (TDV)} Given a subset of data $S$ and tree samples $\mathcal{T} = T_1, \ldots, T_N$, \begin{align} \mathrm{TDV}(S, \mathcal{T}) = \max_{i, j \in S} \mathrm{Var}_{T \in \mathcal{T}}\left[\texttt{tree-dist}_{T|S}(i, j)\right] \end{align} where $\texttt{tree-dist}_T$ is the number of edges needed to get from leaf $i$ to leaf $j$ in tree $T$. \end{block} \pause Instantiate $L$ subsets randomly and pick the one with the highest variance. \end{frame} \begin{frame}{KL-divergence} Kullback-Leibler (KL) divergence is a measure of how far one probability distribution is from another. \pause For distributions $q(x)$ and $p(x)$, \begin{align*} \textrm{KL}(q(x)\|p(x)) = \int q(x)\log \frac{q(x)}{p(x)}dx = \E_{q(x)}\left[\log\frac{q(x)}{p(x)}\right] \end{align*} \pause \textbf{Properties:} \begin{itemize} \item $\textrm{KL}(q(x)\|p(x)) = 0$ if $q(x) = p(x)$. \item Asymmetric \end{itemize} \end{frame} \begin{frame}{Exponential family} A probability distribution is in the \emph{exponential family} if it can be parametrized in the following way. 
\pause \begin{align*} \log p(y | x) = \left\langle\eta(x), t_y(y)\right\rangle - \log Z(\eta(x)) \end{align*} where \begin{itemize} \pause \item $\eta(x)$ - natural parameter function \pause \item $t_y(y)$ - sufficient statistic function \pause \item $\log Z(\eta(x))$ - log-partition function \end{itemize} \pause \textbf{Examples:} Gaussian, Categorical, Dirichlet, inverse-Wishart \end{frame} \begin{frame}{Conjugacy} \begin{center} \includegraphics{tikz/conjugate-pair} \end{center} A pair of random variables are conjugate if you can rearrange $p(y | x)$ from \pause \begin{align*} \log p(x) &= \left\langle\eta_x, t_x(x)\right\rangle - \log Z(\eta_x) \\ \log p(y | x) &= \left\langle\eta_y(x), t_y(y)\right\rangle - \log Z(\eta_y(x)) \end{align*} to \pause \begin{align*} \log p(y | x) \propto \left\langle t_x(x), \eta^*_y(y)\right\rangle \end{align*} \end{frame} \begin{frame}{Posterior distribution} \begin{center} \includegraphics{tikz/conjugate-pair} \end{center} \begin{align*} \log p(y | x) \propto \left\langle t_x(x), \eta^*_y(y)\right\rangle \end{align*} \pause The corresponding posterior distribution is: \begin{align*} \log p(x | y) = \left\langle t_x(x), \eta^*_y(y) + \eta_x\right\rangle + \log Z(\eta^*_y(y) + \eta_x) \end{align*} \pause \centering \fbox{ \scalebox{2}{$\eta^*_y(y) + \eta_x$} } \end{frame} \begin{frame}{Messages} Remember that $\eta_j$ is the natural parameter of the distribution $q(\mathbf{H}_j)$. \pause The message from a parent $Y$ to a child $X$ is \begin{align*} m_{Y \rightarrow X} = f(\eta_Y) \end{align*} \pause The message from a child $X$ to a parent $Y$ is \begin{align*} m_{X \rightarrow Y} = g(\eta_X, \{m_{i \rightarrow X}\}_{i \in \textrm{cp}_Y}) \end{align*} \pause In conjugate-exponential PGMs, messages can be computed in closed-form. \end{frame} \begin{frame}{Summary of VMP} \textbf{Setup:} Conjugate-exponential graphical model \pause \textbf{Problem:} Compute posterior $p(\mathbf{H} | \mathbf{V})$, approximated with $q(\mathbf{H}) = \prod_j q(\mathbf{H}_j)$ \pause \textbf{Solution:} Until converged, for each hidden node $\mathbf{H}_j$: \begin{enumerate} \pause \item Collect messages from children and parents \pause \item Compute updated distribution parameters from messages \end{enumerate} \pause \textbf{Benefits:} efficient, simple, can incorporate mini-batches (stochastic variational inference) \pause \textbf{Drawbacks:} can be underexpressive (conjugate-exponential requirement) \end{frame} \begin{frame}{Messages in SVAE} The message from a \textbf{non-conjugate, non-exponential family} observation $X$ to a parent $Y$ is \begin{align*} m_{X \rightarrow Y} = r_\phi(t_X(X)) \end{align*} where $r_\phi$ is a neural network whose output is the same shape of a message from a conjugate-exponential child. \end{frame} \end{document}
{ "alphanum_fraction": 0.6607522546, "avg_line_length": 27.8916155419, "ext": "tex", "hexsha": "09edb2962a6792346559477111243b777003df89", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "db97f6da8385d2876f8662012841284ce3036959", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sharadmv/thesis-proposal", "max_forks_repo_path": "presentation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "db97f6da8385d2876f8662012841284ce3036959", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sharadmv/thesis-proposal", "max_issues_repo_path": "presentation.tex", "max_line_length": 160, "max_stars_count": null, "max_stars_repo_head_hexsha": "db97f6da8385d2876f8662012841284ce3036959", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sharadmv/thesis-proposal", "max_stars_repo_path": "presentation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8746, "size": 27278 }
\section{Thursday, 18 October 2018}
\subsection{Steps}
\begin{itemize}
	\item We keep the columns that end with ``MEAN'' because those columns represent the average values, which makes the dataset more representative. After that, we work on the visualization.
	\item We attempt a \texttt{PCA}, but it initially fails because we have not deleted the version column: there are rows where the version number is not a float but a string like ``x.x.x''. We therefore eliminate the version column. The covariance is high, about $0.85$.
	\begin{figure}[!htb]
		\centering
		\includegraphics[width=0.5\textwidth]{../../reports/figures/PCA_AccelerometerStat.png}
		\caption{Principal Component Analysis}
		\label{fig:pca}
	\end{figure}
	\item Next, we start clustering with k-means. We analyse the different groups in order to interpret the plots. We split the work: Ivan analyses the magnetic field, Benjamin the gyroscope and Dídimo the accelerometer.
	\item If we run k-means on the complete dataset, the computer runs out of memory, which is why we only consider the values from 5 days.
	\item We run k-means for 2 to 50 clusters in order to determine the ideal number of clusters. To choose it, we use the \textit{silhouette} method and select the number of clusters with the best coefficient. Once we know the ideal number of clusters, we plot the resulting groups (a minimal sketch of this procedure is shown after this list).
	\begin{figure}[!htb]
		\centering
		\includegraphics[width=0.5\textwidth]{../../reports/figures/KMeans_AccelerometerStat.png}
		\caption{K-means clustering with the best number of clusters}
		\label{fig:k-means}
	\end{figure}
\end{itemize}
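The following is a minimal sketch of the procedure described above. It assumes the data has been loaded from a hypothetical CSV file into a \texttt{pandas} DataFrame; the file name and column names are illustrative assumptions rather than the exact ones used in this project.

\begin{verbatim}
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

data = pd.read_csv("AccelerometerStat.csv")   # hypothetical file name

# Drop the non-numeric version column and keep the averaged features.
features = data.drop(columns=["version"], errors="ignore")
features = features[[c for c in features.columns if c.endswith("MEAN")]].dropna()

# Two-component PCA for visualisation.
projected = PCA(n_components=2).fit_transform(features)

# Choose the number of clusters with the best silhouette coefficient.
scores = {}
for k in range(2, 51):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(features)
    scores[k] = silhouette_score(features, labels)
best_k = max(scores, key=scores.get)
\end{verbatim}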
{ "alphanum_fraction": 0.7753357753, "avg_line_length": 60.6666666667, "ext": "tex", "hexsha": "3065af2ce7d4794622c65e7aaa401a6e8e0f1789", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c13584cdcb7c4df1ab2814cf42a3c2bd3c203e75", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ivangarrera/MachineLearning", "max_forks_repo_path": "docs/LabBook/18_10_18.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c13584cdcb7c4df1ab2814cf42a3c2bd3c203e75", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ivangarrera/MachineLearning", "max_issues_repo_path": "docs/LabBook/18_10_18.tex", "max_line_length": 303, "max_stars_count": null, "max_stars_repo_head_hexsha": "c13584cdcb7c4df1ab2814cf42a3c2bd3c203e75", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ivangarrera/MachineLearning", "max_stars_repo_path": "docs/LabBook/18_10_18.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 419, "size": 1638 }
% !Mode:: "TeX:UTF-8"
\documentclass[11pt, a4paper]{article}
%\usepackage{xltxtra,fontspec,xunicode}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{breqn}
\usepackage{autobreak}
\usepackage{braket,mleftright}
\usepackage{amsfonts}
\usepackage[section]{placeins}
\usepackage{float}
\usepackage{siunitx}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{mwe}
\usepackage{indentfirst}
\usepackage{caption}
\usepackage{physics}
\usepackage{geometry}
\usepackage{algorithm}% http://ctan.org/pkg/algorithm
\usepackage{algpseudocode}% http://ctan.org/pkg/algorithmicx
\geometry{top=1in, bottom=1in, left=1in, right=1in}
%\linespread{1.5}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\argmin}{argmin}
\newcommand{\degc}{$\,^\circ$C}
\begin{document}
\title{Work Report}
\author{Zhuoran Qiao}
\date{\today}
\maketitle

\section{Introduction}
\paragraph{1} Developed a genetic-algorithm-based approach to simulate the kinetics of co-transcriptional folding.
\paragraph{2} Tested the effect of folding rate variation on folding population dynamics and $p_{unbound}$.

\section{Progress}
\subsection{Framework}
Various algorithms and programs have been developed to predict RNA folding pathways utilizing force-field based simulations and multiple coarse-grained energy landscape sampling methods built on Markov state models. These existing methods have succeeded in revealing multiscale dynamic events during RNA folding; however, they are either designed only for predicting annealing dynamics or limited to RNA segments up to a hundred bases long. To quantitatively predict folding dynamics coupled with transcription, we developed a genetic algorithm and chemical master equation based approach, which is capable of capturing kilobase-level kinetics. Our method is built on the following two assumptions:
\paragraph{1} All populated RNA secondary structures (SS) are linkages of locally optimal or sub-optimal structures at various folding sites;
\paragraph{2} Global structural rearrangement of a partial RNA segment is permitted only if it is folding to the optimal SS on that segment. \\
Formally, we denote by a domain $D_{A,\,B}$ a segment between bases $A$ and $B$ on which all contacts are local. For simplicity, we denote as a \textbf{foldon} a domain with the optimal secondary structure: $D^{foldon}_{A,\,B} = \text{MFE(sequence[A,B])}$. Note that $'.'$ is a trivial example of a foldon. Our assumption 1 can be rewritten as
\begin{equation}
D_{A,\,B} = D^{foldon}_{A, i_1} \oplus D^{foldon}_{i_1, i_2} \oplus ... \oplus D^{foldon}_{i_n, B}
\end{equation}
where $\oplus$ represents a link operation. Note that all structural information of $D_{A,\,B}$ is encoded by the sequential representation $[A,\, i_1,\, ...,\, i_n,\, B]$; as a foldon is also a linkage of smaller foldons, there could be multiple ways to represent $D_{A,\,B}$. Here we introduce the \textbf{Irreducible Foldon Representation} (IFR) as the sequential representation for which the linkage of every two adjacent foldons is not another foldon: $\forall k,\, D^{foldon}_{i_k, i_{k+1}} \oplus D^{foldon}_{i_{k+1}, i_{k+2}} \neq D^{foldon}_{i_{k}, i_{k+2}}$.
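For illustration, with hypothetical indices, suppose $D_{0,\,90} = D^{foldon}_{0,\,30} \oplus D^{foldon}_{30,\,60} \oplus D^{foldon}_{60,\,90}$, and that neither $D^{foldon}_{0,\,30} \oplus D^{foldon}_{30,\,60}$ nor $D^{foldon}_{30,\,60} \oplus D^{foldon}_{60,\,90}$ is itself a foldon; then $[0,\, 30,\, 60,\, 90]$ is the IFR of $D_{0,\,90}$.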
Then the necessary and sufficient condition for structural rearrangement is
\begin{center}
$\langle D^{u}_{A,\,B} | \hat{\mathbf{T}} | D^{v}_{A,\,B} \rangle \neq 0$ if and only if there exist $i,\,j$ satisfying $i,\,j \in D^{u}_{A,\,B}$.IFR, $i,\,j \in D^{v}_{A,\,B}$.IFR; $D^{u}_{A,\,i} = D^{v}_{A,\,i},\, D^{u}_{j,\,B} = D^{v}_{j,\,B}$; and $D^{u}_{i,\,j} = D^{foldon}_{i,\,j}$ or $D^{v}_{i,\,j} = D^{foldon}_{i,\,j}$. Then $\langle D^{u}_{A,\,B} | \hat{\mathbf{T}} | D^{v}_{A,\,B} \rangle = \langle D^{u}_{i,\,j} | \hat{\mathbf{T}} | D^{v}_{i,\,j} \rangle$.
\end{center}

\subsection{Folding pathway identification \& Rate calculation}
\label{section:rate}
Given two domains between which rearrangement is allowed, the task is to compute the forward and backward rate constants linking them. Multiple methods to rigorously calculate maximum-likelihood pathways during RNA folding have been reported; here we propose a computationally feasible approach: the forward free energy barrier is estimated by summing up all free energies associated with unzipping old stacks and forming new loops; the rate constant $k_{uv} = \langle D^{u}_{A,\,B} | \hat{\mathbf{T}} | D^{v}_{A,\,B} \rangle$ is then calculated with the Arrhenius approximation $k_{uv} = k_0 \exp[-\frac{1}{RT}(\Delta G^{Stack}_{u}+\Delta G^{Loop}_{v})]$. 'New' and 'old' helices are identified by comparing elementary domains (defined as domains that cannot be decomposed into smaller valid domains) between reactant and product domains; identical elementary domains are excluded.

\subsection{Algorithm procedure}
During every iterative elongation step, an active species pool of strands with unique SS and different populations is updated. New candidate strands $D^{Candidate}_{0,\, L+\Delta L}$ with length $L+\Delta L$ are generated by a recombination process: for every old strand $D^{Strand}_{0, \text{ L}}$, all indices in its IFR are identified as possible rearrangement sites, and its child strands are generated by linking the partial domain $D^{Strand}_{0, \text{ Site}}$ with a foldon $D^{foldon}_{\text{Site}, L+\Delta L}$ that terminates at $L+\Delta L$. We assume that elongation will not change the initial population distribution of secondary structures: child strands with the exact parental SS on $[0, L]$ ($D^{child}_{0, L+\Delta L} = D^{strand}_{0, L} \oplus D^{foldon}_{L, L+\Delta L}$) will also inherit the population of their parents. After structural generation, the rate matrix among all candidate strands within the new active species pool is calculated (see part \ref{section:rate}). The population distribution of strands after elongation is then computed by propagating the chemical master equation. For the sake of computational efficiency, we introduce a cutoff $N$ as the size limit of the active species pool. After each elongation step, we impose a selection sweep on all active strands; the $N$ species with the highest fitness are retained. In the current version, we simply use population as the fitness function. The population of the remaining strands within the active pool is renormalized after selection.
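As an illustration, the propagation-and-selection step described above can be sketched as follows. This is a minimal sketch rather than the production implementation; it assumes a dense rate matrix and the row-vector convention $\langle p(t+\Delta t)| = \langle p(t)|\exp(\Delta t\,\hat{\mathbf{T}})$ used in Algorithm~\ref{algorithm:1}.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def propagate_and_select(populations, rate_matrix, dt, pool_size):
    """Propagate the chemical master equation for a time dt, then keep
    the pool_size most populated strands and renormalize."""
    # Master equation step: <p(t + dt)| = <p(t)| exp(dt * T)
    populations = populations @ expm(dt * rate_matrix)
    # Selection sweep: indices of the most populated strands.
    keep = np.argsort(populations)[::-1][:pool_size]
    populations = populations[keep]
    return keep, populations / populations.sum()
\end{verbatim}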
Pseudocode for the whole simulation procedure is given in Algorithm~\ref{algorithm:1}.
\begin{algorithm}
\begin{algorithmic}[1]
\State Initialize ActivePool
\While{$\text{sequence length} > \text{current length}$}
\State $\text{OldPool} \gets \text{ActivePool}$
\State $\text{renew ActivePool}$
\State $\text{Current length} \gets \text{Current length} + dL$
\State $\text{dt} \gets dL / k_T$ \Comment{Transcription time}
\For{$\text{left boundary} \in \text{\{0, dL, 2dL, ..., Current length - dL\}}$}\Comment{Get all new foldons}
\State $D^{foldon}_{\text{left boundary}, \text{ Current length}} \gets \text{nupack.mfe(sequence[left boundary, Current length])}$
\EndFor
\For{$\text{Strand} \in \text{OldPool}$}\Comment{Recombination}
\For{$\text{Site} \in \text{Strand.IFR}$}
\State $D^{Candidate}_{0,\text{Current length}} \gets D^{Strand}_{0, \text{ Site}} \oplus D^{foldon}_{\text{Site}, \text{ Current length}}$
\If {$D^{Candidate}_{0,\text{Current length}} \in \text{ActivePool}$}
\State $\text{update $D^{Candidate}_{0,\text{Current length}}$.IFR}$
\Else
\State $\text{add $D^{Candidate}_{0,\text{Current length}}$ to ActivePool}$
\EndIf
\If {$\text{Site} = \text{Current length} - dL$}
\State $\langle \text{ActivePool}.\textbf{population}\,|D^{Candidate}_{0,\text{Current length}}\rangle \gets \langle \text{OldPool}.\textbf{population}\,|D^{Strand}_{0, \text{ Site}}\rangle$
\EndIf
\EndFor
\EndFor
\For{$D^{\text{u}}_{0,\text{Current length}} \neq D^{\text{v}}_{0,\text{Current length}} \in \text{ActivePool}$}\Comment{Calculate new rate matrix}
\State $\text{calculate } D^{u}_{\text{rearrange}} ,\ D^{v}_{\text{rearrange}}$\Comment{Find all helices involved in rearrangement}
\State $\langle D^{u}_{\text{rearrange}} | \hat{\mathbf{T}} |D^{v}_{\text{rearrange}} \rangle \gets k_0 \exp(-\frac{1}{RT}(\Delta G^{Stack}_{u}+\Delta G^{Loop}_{v}) )$
\EndFor
\State $\langle \text{ActivePool}.\textbf{population}\,| \gets \langle \text{ActivePool}.\textbf{population}\,|\, \exp(\text{dt}\times\hat{\mathbf{T}})$\Comment{Master equation}
\State $\text{reserve the top $N$ populated strands in ActivePool}$\Comment{Selection}
\State $\text{renormalize }\langle \text{ActivePool}.\textbf{population}\,|$
\EndWhile
\end{algorithmic}
\caption{Co-transcriptional folding elongation procedure}\label{algorithm:1}
\end{algorithm}

\subsection{Results}
The only remaining free parameter to be determined is $k_0/k_T$, the ratio of the pre-exponential factor in the Arrhenius rate formulation to the transcription rate ($\mathrm{nt}\cdot \mathrm{s}^{-1}$). I tuned $k_0/k_T$ from $10^{1}$ to $10^{15}$, and obtained the data for $k_0/k_T=\infty$ by calculating the stationary distribution ($\frac{1}{Q}\exp(-G_i)$ for strand $i$ in the active pool) after every elongation step.

\paragraph{Population analysis}
For folA-WT, four predominant local folding motifs within the SD sequence are identified. Figure \ref{fig:local_foldings} shows exemplary secondary structures containing these motifs; figure \ref{fig:populations} shows the evolution of these structural motifs during co-transcriptional folding with different $k_0/k_T$. Identical motifs are marked by the same color as in figure \ref{fig:local_foldings}. Surprisingly, we noticed that when $k_0/k_T=\infty$, exchange between dominant motifs is very frequent at the early stage of transcription, indicating the sensitivity of local structures to long-range contacts and the dependence of motif predominance on the limited timescale available for folding.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{folding_example_purple}
\caption[]%
{{\small Motif 1}}
\label{fig:mean and std of net14}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{folding_example_blue}
\caption[]%
{{\small Motif 2}}
\label{fig:mean and std of net24}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{folding_example_red}
\caption[]%
{{\small Motif 3}}
\label{fig:mean and std of net34}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{folding_example_grey}
\caption[]%
{{\small Motif 4}}
\label{fig:mean and std of net44}
\end{subfigure}
\caption[Exemplary SS containing folding motifs within the folA-WT Shine-Dalgarno sequence]
{\small Exemplary SS containing folding motifs within the folA-WT Shine-Dalgarno sequence}
\label{fig:local_foldings}
\end{figure*}

\begin{figure*}[tph]
\centering
\includegraphics[width=\textwidth]{RNA_local_population_evolution_summary}
\caption[Population dynamics]{\small Population dynamics of four structural motifs during co-transcriptional folding.}
\label{fig:populations}
\end{figure*}

\paragraph{$p_{unbound}$ analysis}
We calculated $p_{unbound}$ with respect to transcription time and $k_0/k_T$ (Figures \ref{fig:RNA_p_unbound_SD_k_tuning}--\ref{fig:RNA_p_unbound_base[-14]_k_tuning}). The deviation of the asymptotic behavior from the equilibrium value (calculated with nupack.ppairs) is possibly due to the limited set of foldons (only the MFE was used to obtain the current results).

\begin{figure}
\centering
\includegraphics[width=\linewidth]{p_unbound/RNA_p_unbound_SD_k_tuning}
\caption{}
\label{fig:RNA_p_unbound_SD_k_tuning}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{p_unbound/RNA_p_unbound_base[-8]_k_tuning}
\caption{}
\label{fig:RNA_p_unbound_base[-8]_k_tuning}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{p_unbound/RNA_p_unbound_base[-9]_k_tuning}
\caption{}
\label{fig:RNA_p_unbound_base[-9]_k_tuning}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{p_unbound/RNA_p_unbound_base[-10]_k_tuning}
\caption{}
\label{fig:RNA_p_unbound_base[-10]_k_tuning}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{p_unbound/RNA_p_unbound_base[-11]_k_tuning}
\caption{}
\label{fig:RNA_p_unbound_base[-11]_k_tuning}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{p_unbound/RNA_p_unbound_base[-12]_k_tuning}
\caption{}
\label{fig:RNA_p_unbound_base[-12]_k_tuning}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{p_unbound/RNA_p_unbound_base[-13]_k_tuning}
\caption{}
\label{fig:RNA_p_unbound_base[-13]_k_tuning}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{p_unbound/RNA_p_unbound_base[-14]_k_tuning}
\caption{}
\label{fig:RNA_p_unbound_base[-14]_k_tuning}
\end{figure}
%\paragraph{a.} Using a genetic-algorithm based method and NUPACK to predict populated RNA configurations during cotranscriptional folding (in progress);
%\paragraph{b.} Using master equation method to simulate evolution of folding configurations and SD sequence accessibility.
%In order to examine when the one-dimensional coordinate projection could be recognized as effective reaction coordinate, %we then examine the probability distribution of committors for transition path trajectories $p(q|TP)$, for which a single peak of probability $p(q|TP)$ have been ultilized as a indicator for 'good' reaction coordinates. % Firstly, We found that for harmonic toy model, the shape of $p(q|TP)$ is very sensitive to the definition of source/sink region. For illustration, $p(q|TP)$ for two different selection % of source/sink regions was compared: in the first case, only two free energy minimum was indentified as source or sink\textbf{(S1)}; in the second case, only the barrier top was % defined as the transition path, while other two region of the free energy landscape was calssified as source/sink\textbf{(S2)}. %\cite{Jacobs2018,Chaudhury2010,Vanden-Eijnden2010,Metzner,Krivov} \small %\bibliographystyle{plain} %\bibliography{0727} \end{document}
{ "alphanum_fraction": 0.7353165523, "avg_line_length": 55.2528735632, "ext": "tex", "hexsha": "1848f4d54fa3be311e11c828a07e6bb0bd9d719c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "31b626ebf3d08b16b1cdf6544c36bbea75147719", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Utenaq/GenoFold", "max_forks_repo_path": "Reports/0815/Work_Report_Zhuoran_Qiao_0726-0815.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "31b626ebf3d08b16b1cdf6544c36bbea75147719", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Utenaq/GenoFold", "max_issues_repo_path": "Reports/0815/Work_Report_Zhuoran_Qiao_0726-0815.tex", "max_line_length": 707, "max_stars_count": null, "max_stars_repo_head_hexsha": "31b626ebf3d08b16b1cdf6544c36bbea75147719", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Utenaq/GenoFold", "max_stars_repo_path": "Reports/0815/Work_Report_Zhuoran_Qiao_0726-0815.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4159, "size": 14421 }
% **************************** Define Graphics Path **************************
\ifpdf
    \graphicspath{{Chapter4/Figs/Raster/}{Chapter4/Figs/PDF/}{Chapter4/Figs/}}
\else
    \graphicspath{{Chapter4/Figs/Vector/}{Chapter4/Figs/}}
\fi

\chapter{What do you mean?! The predictive power of vocabulary manipulation for abuse detection}\label{chap:liwc}
\chaptermark{What do you mean?!}

One of the key issues in machine learning for content moderation is that such systems, both in deployed settings (see \cref{chap:filter}) and in research (see \cref{chap:intro} and \cref{chap:nlp}), over-fit to individual tokens that see over-representation in the positive and negative classes respectively. While research efforts have been made to address such issues \cite{CITE: cite papers that try to address overfitting}, the problem of over-fitting to words and identity markers remains an open question for the field. Some such approaches have addressed this problem by replacing certain words and phrases to balance identity distributions \cite{Dixon:2018}. Here, I propose a different approach which serves to address the issue of models over-fitting to tokens by 1) minimising the size of the vocabulary in order to avoid over-fitting to distributional skews of low-frequency tokens across classes; 2) representing documents in terms of how they represent thoughts, feelings, and personality; and 3) through such vocabulary minimisation, highlighting the importance of how words are used rather than their surface forms while retaining model performance. An additional benefit of such vocabulary reduction is a proportional reduction in model size and training time for complex models such as neural networks, resulting in models that have a smaller environmental impact \citep{Strubell:2019}.

Through the use of the Linguistic Inquiry and Word Count (LIWC) dictionary \cite{LIWC:2015,Pennebaker:2001}, I pre-process documents from large vocabularies that are riddled with obfuscations and intentionally and unintentionally misspelled words into a smaller vocabulary set representing instead psycholinguistic properties of words. Through a reduction of thousands, or in some cases hundreds of thousands, of unique tokens to hundreds of LIWC categories, I aim for models to gain deeper insight into language patterns of abuse than simply selecting the most frequently used tokens. Moreover, I show that such a reduction is accompanied by a negligible reduction in intra- and inter-dataset performance in comparison to models using the full surface-token vocabularies.

\zw{Update Model parameter reduction number when the Wulczyn models have rerun}
Through the use of simple deep neural networks and `shallow' linear models, I show that by reducing the vocabulary sizes by up to $99\%$ and the number of model parameters by up to $99\%$, while increasing the depth of the information in the remaining vocabulary, it is possible to achieve comparable performances within datasets and mild improvements on out-of-domain datasets. This holds two strong implications for future research on computational hate speech detection: first, that current approaches, through an over-reliance on surface forms, are computationally inefficient; and second, that the exclusive use of surface forms of tokens can lead models to overly attend to the occurrence of certain tokens and variations (e.g. prominent misspellings) \citep{Rottger:2021}.
Finally, as datasets for hate speech detection frequently contain biases along racialised and dialectal lines \citep{Waseem:2018,Davidson:2019}, the use of LIWC can serve as a small aid in avoiding such biases, as dialectal spellings of words are unlikely to appear in the dictionary and are thus relegated to unknown tokens (see \cref{tab:liwc_tok} for synthetic examples of LIWC representations).

Thus, this chapter seeks to provide answers to the following research questions:

\begin{minipage}{0.9\textwidth}
\vspace{5mm}
\begin{enumerate}[start=1, label={\textbf{RQ \arabic*}}]
\item{\textit{Can LIWC provide a meaningful substitute to using words or sub-word tokens as input tokens and how is model performance affected by such a substitution?}}
\item{\textit{What are the implications of using LIWC as input on model development in terms of training time?}}
\item{\textit{What are the implications on generalisability of LIWC-based models?}}
\end{enumerate}
\end{minipage}

\section{Previous work}
In the interest of curtailing the spread of online abuse, a large number of technical approaches have been considered in the ever-increasing body of research on the topic (please see \cref{sec:nlp} for a broad overview). Here, I briefly introduce the LIWC dictionary before focusing on three different strands of research. First, I consider manual development of features for machine learning models, as it is necessary to form hypotheses for what might serve as indicators of abuse on the basis of the dataset and problem in question. Second, I examine neural network approaches for abusive language detection. Finally, I consider the growing body of research devoted to examining the generalisability of computational models for abusive language detection. I restrict my attention to studies conducted on abuse in English, as it is most pertinent to this work.

\subsection{Linguistic Inquiry and Word Count}
The Linguistic Inquiry and Word Count dictionary and software was initially developed by \citet{Pennebaker:2001} in an effort to address the issue of high disagreement between judges and the negative effects on their well-being as they reviewed essays written on people's experiences of emotional upheaval. In order to minimise such costs, \citet{Pennebaker:2001} turned to computationally counting words that were in $80$ ``psychology-relevant categories'' in order to gain an understanding of the emotional states and cognition of the authors at the time of writing. By passing over a large body of text within a single document, e.g. personal essays, \citet{Pennebaker:2001} compute the percentage occurrence of each invoked category. While there are some examples that appear clear cut, e.g. the categorisation of articles such as `a' and `the',% TODO review footnote
\footnote{Though the categorisation of word classes may seem trivial, which class a word is categorised into depends on the linguistic theory that a given classification is based on \cite{CITE: Wait for Adina}.}
other word classes, such as ``emotion word categories'', are more clearly subjective and require deeper human consideration \citep{Tauscik:2010}. Though LIWC was initially developed using long-form texts, the version of the dictionary that I use in this dissertation is an expanded version that also used Twitter and ``blogs'' in the development of the dictionary \citep{Pennebaker:2015}.
As such, though not originally intended for use on short-form messages, LIWC has evolved with the rise of new forms of communication in an effort for the dictionary to accurately reflect language use in short-form documents. As LIWC was originally developed using long-form documents in the United States of America, the language that is reflected in the dictionary is predominantly white American English, and thus it excludes other languages and many dialects within American English. Through such exclusion, the dictionary does not accommodate different forms of communication; in particular, it is likely to cover the language use of a variety of marginalised communities only insufficiently. Such a lack of recognition, however, has a benefit. By not learning the language patterns of, e.g., African American English speakers, who are disproportionately represented in the positive classes of several datasets for abuse \citep{Waseem:2018,Davidson:2019}, it is possible that models trained on this representation are less likely to be biased against those groups.\footnote{Although such lack of recognition can have positive effects, such as a lower false positive rate, the politics of not being recognised, as argued by \citet{Benjamin:2019}, are not straightforward, and the lack of recognition does not provide a guarantee that systemic harm will not occur. For instance, if systems developed to detect abuse did not recognise Multi-cultural London English due to vocabulary reductions, any abuse that was written in that dialect would not be recognised, leaving those users in harm's way. Given that LIWC was developed using ``dictionaries, thesauruses, questionnaires, and lists made by research assistants'' \citep{Tauscik:2010} in a North American context, it is highly unlikely that word forms that differ from mainstream usage were included. For instance, the commonly used `brotha' and `bruva' in North American and British contexts, respectively, are absent from the dictionary.}

\vspace{5mm}
In this thesis, I utilise LIWC to provide the word categories that each word invokes, and rather than compute the overall word classes exhibited in a document, I use the LIWC categories of each word in a document as an alternative representation of the document, from which the percentages of word categories invoked can be recovered. Thus, my approach diverges slightly in the goals of using LIWC; however, it does not diverge in the method for obtaining information about the psychological state of the author.

\subsection{Modelling}
\subsubsection{Manually Crafted Features}
A large body of work has sought to use manually developed features for online abuse detection \citep{Davidson:2017,Waseem:2017,Ibrohim:2019,Vega:2019,Wiegand:2018,Tian:2020,Kumar:2019,Fortuna:2018}, showing performance boosts from using manually developed features such as the predicted author gender \citep{Waseem-Hovy:2016} or Part-of-Speech (POS) tags \citep{Davidson:2017}. The primary reasons for using manually crafted features are two-fold. First, using manually crafted features requires some understanding of the data at hand and some intuition about which features may distinguish the classes in the data from one another. Second, as manual features are frequently used with models that do not use neural architectures, they allow for interpretable machine learning models, in the sense that one can often identify how each token contributed towards a final prediction.
Moreover, as features are often computationally fast to compute, the use of features, along with their expressive interpretability, allows for quickly testing hypotheses surrounding online abuse and its nature. Considering a handful of systems that use some of the most frequently applied features for the development of automated systems for detecting various forms of online abuse, distinct modelling choices, features, and rationales for their use become prominent. Here I provide a brief overview of prominent features; how they are used, including which models and feature weighting schemes they are used with; and the explicit and implicit rationales for the use of each feature.

First, the most common feature used, and one rarely used on its own, is a Bag-of-Words (BoW) \citep{Fortuna:2018,Davidson:2017}, where each token in a document is treated as independent from the remainder of the document. The use of this feature frequently relies on the use of stop-word lists to remove tokens that are bound to occur frequently across a majority of documents, such as determiners, to prevent models from learning spurious correlations between such words and an individual class due to fluctuations in the data. The understanding of abuse that underlies this feature is that some tokens are likely to occur disproportionately in abusive contexts, and that those tokens, in isolation, will indicate abuse. Several works have complicated this notion \citep[e.g.][]{Waseem:2018,Davidson:2019}, arguing that tokens, in isolation, do not provide the necessary context to determine whether a text is abusive, and that, due to certain perspectives on abuse being overly represented in annotation guidelines and annotations \citep{Waseem:2016}, some words that have been reclaimed, and thus potentially have an innocuous usage in addition to an abusive one, may be disproportionately represented in the positive classes.

To address the issue of token independence, several approaches use n-grams, often bi-grams \citep{Waseem:2016} and tri-grams \citep{Davidson:2017}, to aid with identifying abuse. Here, by considering groups of sequential token occurrences independently from one another, a step is taken away from the independence of individual tokens, and instead towards the independence of short sequences of tokens. Due to this remaining independence assumption, limitations similar to those of BoW hold for n-grams.

POS tags have also seen frequent use in abusive language detection tasks \citep{Fortuna:2018} and are often used as n-grams. The intuition behind the use of POS tags for abuse detection is that abuse may differ from non-abuse in terms of linguistic structure. While n-grams of POS tags with an independence assumption may not reveal the full depth of the linguistic syntax available through POS tagged data (in contrast to the POS tags of the entire sequence being treated as a single feature), they do relay \textit{some} information on the linguistic structure, which has been proven helpful for predicting abuse \citep{Fortuna:2018}.

Another frequently used feature is sentiment analysis \citep{Fortuna:2018}, with the underlying assumption that abuse and negative sentiment are correlated, and that sentiment can thus aid in detecting some forms of abuse. Similarly to BoW and n-grams, this is a feature that is most frequently used in combination with other features, as sentiment alone is not presumed to be a good predictor of abuse \citep{Fortuna:2018}.
Sentiment as a feature, like the use of LIWC proposed in this dissertation, assumes that some higher-level reasoning about the data can be helpful for automatically detecting abuse. Specifically, its use suggests that the concepts of negativity and hostility towards entities will be relevant to detecting abuse in texts. Notably, some previous work that uses sentiment as a feature for abuse detection \citep{Davidson:2017} relies on previously built systems for detecting sentiment. An implication of using previously trained systems for computing sentiment, rather than assuming that sentiment can be extrapolated only from the dataset, is that sentiment and abuse, while correlated, are not equated, and thus that the task of detecting sentiment, while related, is distinct from the task of detecting abuse. As such, sentiment and abuse detection are tasks that in some cases co-constitute each other, while in other cases there may be no correlation.

Finally, LIWC has previously been proposed as a feature for the classification of abuse in a small number of studies \citep{Nina-Alcocer:2019,Joksimovic:2019}. In these studies, LIWC has been used in conjunction with other features such as lexical features (e.g. word n-grams) and syntactic features (e.g. POS tags) \citep{Joksimovic:2019}. This use of LIWC, similar to the motivations for its use in this chapter, relies on an assumption that the mental states of the speaker and the interpretations of readers will relay information on the intention of the speaker to cause offence. For instance, \citet{Nina-Alcocer:2019} compute the percentages of emotions that are expressed in abusive documents in an effort to identify correlations between impassioned speech and abuse, asserting an intuition that abusive speech is likely to occur in individual moments dominated by emotion rather than rationality. This is a position that \citet{Waseem:2016} argue is likely, as they find that considering the top $100$ most frequently occurring tokens, ranked using Term Frequency - Inverse Document Frequency (TF-IDF), does not aid in the prediction of hate speech, suggesting that in many cases it may be a question of moments of abuse rather than consistently abusive people.

All features must be weighted, either through raw counts or through their relative frequency. One such frequently used weighting scheme is TF-IDF, which weights features by their relative frequency in the corpus \citep{Fortuna:2018}, assigning higher weights to features that are rare corpus-wide and lower weights to those that are common. As such, TF-IDF can be a useful measure to address the dominance of high-frequency tokens. At the same time, TF-IDF also increases the capacity for models to overfit to the corpus and generalise poorly, as tokens that are unique to a corpus may not exist in other data, or may even be common in other data. The use of n-grams as features provides a similar double-edged benefit, where models learn sequences of words. In abuse detection, the most common n-grams are unigrams, bi-grams, and trigrams. Such word-sequences can be helpful for models in uncovering patterns of language use in the corpus, but are also sensitive to vocabulary changes that occur across datasets. For instance, \citet{Waseem-Hovy:2016} train a logistic regression classifier and identify that character n-grams of innocuous words such as `Islam' and `Muslim' rank as some of the most predictive features due to the disproportionate occurrences of such terms in the hateful classes.
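To make the kind of feature-based pipeline discussed above concrete, the following is a minimal sketch of a TF-IDF weighted word and character n-gram baseline with a logistic regression classifier. It is an illustrative sketch rather than the exact configuration of any of the cited systems, and the variables \texttt{documents} and \texttt{labels} are assumed to already hold the texts and their annotations.

\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

# TF-IDF weighted word uni-/bi-grams and character n-grams,
# mirroring the feature sets discussed above.
features = FeatureUnion([
    ("word_ngrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("char_ngrams", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
])

model = Pipeline([
    ("features", features),
    ("classifier", LogisticRegression(max_iter=1000)),
])

model.fit(documents, labels)       # documents: list of str, labels: list of int
predictions = model.predict(documents)
\end{verbatim}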
Many of the previously mentioned works use similar machine learning models, with a particular dominance of Logistic Regression and Support Vector Machines (SVMs) (please see \cref{chap:nlp} for more detail). One notable exception to this is the work of \citet{Gorrell:2018}. In this work, the authors use a ``set of NLP tools, combining them into a semantic pipeline'' \citep[pp. 601]{Gorrell:2018}. Rather than using supervised classification techniques, a rule-based system was developed to detect abuse, which they argue allows for an interpretable and easy-to-modify method for addressing weaknesses of the approach without the need for large quantities of additional data.\footnote{This detail on the rule-based nature of the classification system was provided by Genevieve Gorrell in personal communications.} However, this approach is a laborious one, as it requires the researchers to manually identify patterns of abuse and construct rules that can address such patterns along with any exceptions to the patterns that are not abusive. \vspace{5mm} In this chapter, we take inspiration from the use of manually crafted features as a way to provide testable hypotheses while departing from the notion of feature generation. Specifically, we hypothesise that LIWC categories can provide deep information for predictive modelling that can allow for high performance in spite of token sparsity when using neural network methods. \subsubsection{Neural Networks}\label{sec:liwc_nn} Though the earliest models for the tasks were predominantly linear models that used manually generated features \citep{Waseem-Hovy:2016,Davidson:2017,Warner:2012}, more recent work has been dominated by the development of neural network based models for automated abuse detection, posting ever-evolving state-of-the-art models and classification performances \citep[e.g.][]{Park:2017,Badjatiya:2017,Zimmerman:2018,Stoop:2019,Isaksen:2020}. Here I consider a handful of neural network methods for detecting abuse, focusing on the distinct implications following from the modelling choices and the logics that underpin them. As all neural network based methods that I examine receive only the text as input, the primary differences between the models are in their use and organisation of different types of layers and the loss function selected for the respective models. The most commonly used neural network architecture in the surveyed literature is the CNN \citep{Park:2017,Gamback:2017,Wulczyn:2017,Kolhatkar:2021,Zimmerman:2018,Wang:2020}. As CNNs have been the subject of particular interest, a number of distinct modelling approaches have been proposed. First, relying on a simple neural network architecture, \citet{Kolhatkar:2021} use GloVe embeddings as the first layer, followed by three convolutional layers (with window sizes $3, 4,$ and $5$, respectively) with global maximum pooling layers. Prior to passing to an output layer, dropout is applied to the output of the convolutional layers, which is then passed to a dense layer. All layers prior to the output layer use a ReLU activation function (see \cref{chap:nlp} for more detail). The output layer applies the sigmoid function to provide a prediction from the model. This model most closely resembles the CNN architecture used in this chapter. As this model uses a pre-trained word-embedding layer as its input layer, the input the model receives consists of documents that have been tokenised. A different architecture is proposed by \citet{Park:2017}.
In their work they compare a single classifier, what they dub a `one-step classifier', that predicts the final classes directly, with a stacked architecture of two models, or a `two-step classifier' in their vernacular, that first predicts whether content is abusive and then predicts which type of abuse the documents predicted as abusive contain. There are two governing distinctions between the two-step architecture proposed by \citet{Park:2017} and the architecture proposed by \citet{Kolhatkar:2021}. First, the architecture of \citet{Kolhatkar:2021} acts as a one-step classifier whereas the architecture proposed by \citet{Park:2017} acts in two steps. Second, the model of \citet{Kolhatkar:2021} only acts on documents tokenised into words and punctuation, whereas \citet{Park:2017} propose a CNN that takes documents tokenised into words and punctuation in addition to documents tokenised into characters. \citet{Park:2017} show that through the use of a one-step CNN trained on word and character input, they achieve a performance boost, obtaining an F1-score of $0.827$ on the datasets proposed by \citet{Waseem-Hovy:2016} and \citet{Waseem:2016}, though the performance boost is lost once a two-step hybrid CNN is used. As CNNs build feature mappings by passing over the data using filters, they come with certain assumptions built into them. As researchers define the number of filters and the stride size, they also define the range within which they believe relevant terms are likely to occur. The implication of this is that there will likely be some, potentially overlapping, ranges that the models learn patterns from. Depending on how researchers define these, the models will develop feature mappings corresponding to the ranges provided. Another frequently used architecture is the LSTM \citep{Badjatiya:2017,Kolhatkar:2021,Meyer:2019}. Here, \citet{Kolhatkar:2021} propose using a bi-directional LSTM that, like their CNN, has a pre-trained embedding layer, a recurrent layer, a dropout layer, and a fully connected output layer with a sigmoid activation to predict the output classes. \citet{Meyer:2019}, on the other hand, build on the idea of a hybrid CNN, developing an LSTM architecture that takes documents tokenised into words and characters as input. The word representation is obtained through tokenisation passed through an embedding layer, and the character representation is obtained by processing the documents with a CNN. Using these approaches, \citet{Kolhatkar:2021} show comparable performances between the CNN and the bi-directional LSTM on their dataset. \citet{Meyer:2019}, on the other hand, show that a baseline model using only character-level information performs comparably with other more complex approaches, obtaining a macro F1-score of $0.7923$ for the baseline and $0.7924$ for the final system on the dataset proposed by \citet{Waseem-Hovy:2016}, and notably outperforms several other previously proposed methods. The use of LSTMs, which rely on recurrence, breaks with the independence assumption of the manual feature-based models. By recurring over a document, each new token is considered in conjunction with the previous tokens that have not been forgotten. In this way, an assumption is built into the models that, through processing enough token sequences, it will be possible to identify patterns that connote abuse.
Such a reliance on the text alone does not consider the positionality of abuse; \citet{Waseem:2018} argue that only through understanding the context within which the speaker and audience exist is it possible to deem something abusive. For instance, it is only through an understanding of the speaker that one can deem whether the \textit{n-word} is weaponised as abuse or is reclaimed to connote complex social identity.\vspace{5mm} All described methods that rely on documents tokenised into words rely on pre-trained embedding layers (most frequently GloVe \citep{Pennington:2014}), which come with their own benefits and costs. For instance, word embeddings that are trained on web text are likely to harbour social biases \citep{Bolukbasi:2016} that have proven hard to address \citep{Gonen:2019}. On the other hand, they also allow for better representations of related concepts and will be less susceptible to creating different representations for closely related concepts as a result of dataset biases. For instance, the concepts `Television' and `T.V.' might only be distantly related, if at all, in a small dataset due to few co-occurrences within the dataset. In a larger dataset, spanning millions of documents, these two concepts are likely to appear as closely related, as a robust language representation will likely have been achieved for such commonly occurring tokens. The methods that rely on character embeddings are also subject to similar distributional concerns; however, this can be a benefit when used in conjunction with word embeddings. As there is a much smaller set of possible characters than words, less data is needed to train robust embedding layers, though the trained character embeddings will be particularly attuned to the dataset at hand. On the other hand, due to such particularity of the character embeddings, they are less likely to map well onto other domains even if they show good performance on the dataset that they are derived from.\vspace{5mm} For the work in this chapter, the use of pre-trained embeddings is not appropriate for some models. Specifically, for the models that use LIWC-represented documents, pre-trained LIWC embeddings are not publicly available nor, to the best of my knowledge, have they been developed. Moreover, documents represented through LIWC categories are poorly suited for training general embeddings, as only a small set of tokens is defined and they are not necessarily distributed in a fashion suitable for developing such generalised embeddings. Further, I do not use pre-trained embeddings in the architectures of any of the other models, to ensure that any comparison with the LIWC-based models is a direct comparison of the influence of using LIWC as input tokens, avoiding potentially confounding factors. \subsection{Datasets}\label{sub:liwc_datasets} In order to understand and validate my approach, I train a model on multiple datasets. Moreover, I take each model that is learned on a given dataset and apply it to all other datasets. To accommodate prediction by a model trained on one dataset on the others, I reduce all classification tasks to a binary task of abusive and not-abusive. This has downstream implications for the construction of the datasets and for the validity of the prediction task on the auxiliary datasets. First, the dataset distributions are modified, as tasks with more than two classes see their data collapsed. For some datasets, this means that the class imbalances are improved, as the majority class is non-abusive.
The exception to this is the dataset proposed by \citet{Davidson:2017}, where the largest class is the `offensive' class, which I combine with `hateful', further minimising the relative size of the negative class. Second, as each dataset has been collected with different rationales and annotated with distinct purposes (please see \cref{sub:abuse_data} for more detail), direct comparisons, and subsequently model predictions on each dataset, can be at odds with the goals of the datasets. For this reason, high scores on prediction metrics on external datasets should be viewed as a weak indication of the ability to identify general patterns, while low scores can indicate a number of factors including, but not limited to, highly distinct data sources, annotation strategies, and lastly the questions each dataset inherently seeks to ask. With these concerns in mind, I decide to use datasets with distinct sources that are developed for different purposes. Rather than resist or seek to minimise the modelling concerns, I choose to lean into them to allow space for understanding how LIWC-based modelling may influence the training and model performance on each dataset, as well as to gain an understanding of the axes along which model generalisation may be afforded using LIWC-based modelling (see \cref{tab:vocab_sizes} for the vocabulary sizes for each dataset and input type). In this chapter I use the \textit{StormFront} dataset~\citep{Garcia:2019}, the \textit{Offence} dataset \citep{Davidson:2017}, the \textit{Hate Speech} dataset~\citep{Waseem-Hovy:2016}, the \textit{Expert Hate} dataset~\citep{Waseem:2016}, and finally the \textit{Toxicity} dataset~\citep{Wulczyn:2017} (please see \cref{sub:abuse_data} for a detailed overview of each dataset). \subsubsection{StormFront} First, I use the \textit{StormFront} dataset, which was collected from the white supremacist web forum of the same name by \citet{Garcia:2019}. The data consists of $2,392$ documents, split into $1,531$ training documents, $383$ documents for validation, and $478$ test documents. While the full published dataset consists of $10,000$ documents annotated as `hate' and `not-hate', with a large class imbalance towards non-abusive comments, I choose to use a balanced subset of the data provided by the authors to test how LIWC-based models perform when trained on a) small data and b) a balanced data distribution. The dataset is initially split into a training and an evaluation set; I created a validation set by extracting a stratified sample from the training data, retaining the class balance from the balanced subset. \subsubsection{Offence} The second dataset used to train and evaluate my models is the \textit{Offence} dataset collected from Twitter by \citet{Davidson:2017}. This dataset was collected to distinguish offensive tweets from hateful ones. It is distinguished from all other datasets in that the positive classes, i.e. `hateful' and `offensive', form the majority, accounting for $1,430$ documents and $19,190$ documents, respectively. This leaves only $4,163$ documents in the negative class. Once binarised, the dataset consists of $4,163$ documents in the negative class and $20,620$ documents in the positive class. The dataset is provided by the authors as a single file containing all documents, so I create stratified splits of the data into a training set ($80\%$ or $19,826$ documents), a validation set ($10\%$ or $2,478$ documents), and an evaluation set ($10\%$ or $2,479$ documents), retaining the original class distribution in each split.
Using this dataset further allows for an investigation into how sensitive LIWC-based modelling is to dataset skews. \subsubsection{Hate Speech} I also use the \textit{Hate Speech} dataset, which was collected from Twitter by \citet{Waseem-Hovy:2016}. This dataset contains $16,914$ documents that follow a more traditional class distribution for abusive language data. In this dataset the positive classes of `racism' and `sexism' are collapsed into a single positive class, `abuse', consisting of $5,355$ documents, with the negative class occupying the remaining $11,559$ documents. The primary function this dataset serves in this chapter is to allow some insight into whether the LIWC-based models would function under distinct annotation criteria that are motivated by academic work in Gender Studies and Critical Race Theory on marginalisation, rather than by social media guidelines for acceptable behaviour. \subsubsection{Expert Hate} The \textit{Expert Hate} dataset proposed by \citet{Waseem:2016} contains $6,909$ documents, is also collected from Twitter, and is likewise designed as a multi-class classification task. In this dataset the positive classes consist of `sexism' ($13\%$ or $898$ documents), `racism' ($1.41\%$ or $97$ documents) and `both' ($0.70\%$ or $48$ documents), while the negative class makes up $84.19\%$ of the dataset. I reduce this to a binary classification task and split the dataset into a training set ($80\%$ or $5,527$ documents), a validation set ($10\%$ or $690$ documents), and an evaluation set ($10\%$ or $692$ documents), ensuring that the binary class distribution is retained. This dataset is annotated following the annotation guidelines proposed by \citet{Waseem-Hovy:2016}; however, it is annotated using intersectional feminist activists as crowd-workers. This dataset then allows for testing the influence of LIWC-based models on data annotated by experts. \subsubsection{Toxicity} Finally, I use the \textit{Toxicity} dataset published by \citet{Wulczyn:2017}. This dataset was collected from Wikipedia editor discussion pages and annotated as `toxic' and `not-toxic', and it is the largest dataset with $159,686$ documents. The documents are provided split into a training set consisting of $95,692$ documents, a validation set with $32,128$ documents, and an evaluation set containing $31,866$ documents. Similarly to the \textit{Hate Speech} and \textit{Expert Hate} datasets, this dataset is highly imbalanced, with the positive class accounting for approximately $16\%$ of the entire dataset. I use this dataset to gain an understanding of how large-scale datasets can influence the performance, size, and training time of LIWC-based models. \section{Modelling}\label{sec:liwc_modelling} In order to understand the impact of LIWC-based modelling, I design feature-based and neural network models. I develop a Logistic Regression and an SVM model with a linear kernel for each type of input data (word unigrams, BPE unigrams, and LIWC unigrams) to allow for feature-based analysis of what patterns are identified.
To investigate how neural network models operate on the input data, I develop three types of neural networks for each input type: first, I train an MLP to provide an initial insight into whether neural network approaches might be appropriate; second, I develop an LSTM model to investigate whether there are any benefits from its recurrent nature; and finally, I develop a CNN model due to the dominance of CNNs in the literature.\vspace{5mm} I specify two different training procedures, one for the linear baseline models and one for the neural networks. For the linear baseline models, I tokenise and pre-process the data and perform a grid-search over the parameter space. For the neural network models, I similarly tokenise and pre-process the data and perform a Bayesian hyper-parameter search to identify the best performing parameter setting as given by the macro F1-score. I then reuse this best performing parameter setting and re-run the model with 5 different random seeds to ensure that the behaviour of the model on the dataset is not an artefact of the random seed propagated into the model's initialisation of its tensors. \subsection{Pre-processing} Prior to providing any model with data, it is necessary to pre-process the data to make it suitable for the experiment conducted. In my experiments I examine how modifying the vocabulary that a model relies on might influence model construction. To this effect, it is necessary to have some distinct pre-processing steps for the datasets depending on the experiment, while other steps are shared. For the shared pre-processing steps, I lower-case all documents, replace all usernames, which follow the Twitter standard of an `@' followed by a string, with a generic \textit{<USER>} token, replace all website URLs with a generic \textit{<URL>} token, and finally, replace all hashtags with a generic \textit{<HASHTAG>} token. The resulting vocabulary sizes for each dataset and data type can be seen in \cref{tab:vocab_sizes}. For the LIWC-based models and the word-based models, I pre-process documents using the python library Ekphrasis \citep{baziotis:2017}, which was developed specifically to handle the particularities of social media texts. For instance, elongated words are mapped to their unelongated form, e.g. `heyyyy' is mapped to `hey' (see \cref{tab:bpe_tok} and \cref{tab:liwc_tok} for examples of tokenisation). No further processing is done for experiments using word tokens as input. \zw{Once models are re-run update \cref{tab:vocab_sizes}} \begin{table}[] \centering \begin{tabular}{llll} Dataset & Word Vocabulary & BPE Vocabulary & LIWC Vocabulary \\\hline Offence & $16,768$ & $16,663$ & $857$ \\ Toxicity & $134,946$ & $95,712$ & $1,022$ \\ Hate Expert & $9,110$ & $9,181$ & $739$ \\ Hate Speech & $14,730$ & $14,834$ & $837$ \\ StormFront & $5,566$ & $5,510$ & $622$ \end{tabular} \caption{Vocabulary sizes for each input type and the training set for each dataset.} \label{tab:vocab_sizes} \end{table} For the LIWC experiments, on the other hand, I take another step after the initial tokenisation to compute the LIWC categories invoked by each word. Each token obtained is passed through a function which identifies all LIWC categories that the token invokes and combines them into a single token, where each LIWC category is separated by an underscore. All tokens that are not recognised by LIWC are replaced with a generic \textit{<UNK>} token (see \cref{tab:liwc_tok} for examples of the result of the pre-processing of documents).
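The LIWC mapping step can be sketched as follows; the lexicon lookup is a hypothetical stand-in for the actual LIWC dictionary interface, and the toy entries are taken from the examples in \cref{tab:liwc_tok}.

\begin{verbatim}
# A minimal sketch of the LIWC mapping step. `liwc_lookup` is a hypothetical
# stand-in for whatever interface exposes the LIWC dictionary
# (word -> list of categories); the toy entries mirror Table liwc_tok.
def liwc_lookup(token):
    toy_lexicon = {"i": ["PPRON", "FUNCTION", "I", "PRONOUN"],
                   "hate": ["AFFECT", "NEGEMO", "ANGER", "VERB", "FOCUSPRESENT"],
                   "man": ["MALE", "SOCIAL"]}
    return toy_lexicon.get(token, [])

def to_liwc_tokens(tokens):
    """Map each surface token to a single underscore-joined LIWC token,
    falling back to <UNK> for tokens outside the LIWC dictionary."""
    liwc_tokens = []
    for token in tokens:
        categories = liwc_lookup(token.lower())
        liwc_tokens.append("_".join(categories) if categories else "<UNK>")
    return liwc_tokens

print(to_liwc_tokens(["man", "i", "fking", "hate", "animals"]))
# ['MALE_SOCIAL', 'PPRON_FUNCTION_I_PRONOUN', '<UNK>',
#  'AFFECT_NEGEMO_ANGER_VERB_FOCUSPRESENT', '<UNK>']
\end{verbatim}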
For the BPE-based models, on the other hand, I pre-process documents by encoding them with the 200-dimensional Byte-Pair Embeddings from the BPE python library \citep{Heinzerling:2018}. Byte-Pair Encoding is well suited to handle the particularities of social media text, as it breaks unrecognised words into subwords, thus minimising unknown tokens in the validation and evaluation sets. Through this process, the hope is that even if part of a word is out-of-vocabulary for the model, some of its subwords will be within the model's vocabulary, allowing the remaining subwords to be used for inference. \begin{table} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|l|l} Document & Word Token Representation & Byte-Pair Representation\\\hline Man I fucking hate animals! & Man I fucking hate animals ! & \_man \_i \_fucking \_hate \_animals !\\ Man I fking h8 animals! & Man I fking h8 animals ! & \_man \_i \_f king \_h 0 \_animals !\\ Bruv I fking hate animals! & Bruv I fking hate animals ! & \_br uv \_i \_f king \_hate \_animals ! \end{tabular}% } \caption{Word token and BPE representation.} \label{tab:bpe_tok} \end{table} In reviewing \cref{tab:vocab_sizes}, it is clear that computing LIWC representations results in smaller vocabularies, while BPE representations of the documents result in similarly sized vocabularies for all datasets, with the exception of the BPE representations of the \textit{StormFront} dataset and the \textit{Offence} dataset, where the vocabulary sizes decrease slightly. It is unsurprising that the vocabulary size would grow using BPE, as subwords for all unrecognised tokens are computed using the Byte-Pair Encoding. More surprising is the small drop in BPE vocabulary sizes. These drops suggest that the set of words that are unrecognised by the pre-trained Byte-Pair Embeddings as complete words share a relatively small set of subwords. \begin{table}[] \centering \footnotesize \begin{tabular}{l|p{10.5cm}} Document & LIWC Representation \\ \hline Man I fucking hate animals & MALE\_SOCIAL PPRON\_FUNCTION\_I\_PRONOUN AFFECT\_SEXUAL\_BIO\_INFORMAL\_NEGEMO\_ANGER\_ADJ\_SWEAR AFFECT\_NEGEMO\_ANGER\_VERB\_FOCUSPRESENT UNK UNK \\\hline Man I fking h8 animals & MALE\_SOCIAL PPRON\_FUNCTION\_I\_PRONOUN UNK NUM UNK UNK \\\hline Bruv I fking hate animals & UNK PPRON\_FUNCTION\_I\_PRONOUN UNK AFFECT\_NEGEMO\_ANGER\_VERB\_FOCUSPRESENT UNK UNK \end{tabular} \caption{Examples of LIWC representations.} \label{tab:liwc_tok} \end{table} \zw{Double check these numbers after retraining} For the documents represented through the LIWC categories that they invoke, I observe a sharp decline in the sizes of the vocabularies, with the smallest decrease in vocabulary being an $88.6\%$ decrease while the largest decrease is $99.7\%$. This is expected, as the LIWC dictionary only encompasses a small number of words, and many words used in informal conversations on online platforms are likely to fall outside of those considered when developing the dictionary. Moreover, it is also not surprising that a drop would occur, as many of the datasets are created and published after the creation of the LIWC dictionary and examine domains that are unlikely to be well represented within it; consequently, they will be subject to some language drift in addition to domain shifts.
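For concreteness, the byte-pair segmentation step described above can be sketched as follows, assuming the \texttt{bpemb} package accompanying \citet{Heinzerling:2018}; the pre-trained model is downloaded on first use and the settings shown are illustrative rather than the exact configuration used in this chapter.

\begin{verbatim}
# A minimal sketch of byte-pair segmentation, assuming the bpemb package;
# unrecognised words are split into known subwords rather than mapped to a
# single unknown token.
from bpemb import BPEmb

bpe = BPEmb(lang="en", dim=200)   # pre-trained English byte-pair model

for doc in ["man i fucking hate animals !", "man i fking h8 animals !"]:
    print(bpe.encode(doc))        # list of subword tokens for each document
\end{verbatim}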
\begin{table}[h] \centering \resizebox{0.8\textwidth}{!}{% \begin{tabular}{lllll} & Not Abuse & Abuse & Intersection & Vocab size\\\hline Offence & $24\; (2.8\%)$ & $150\; (17.6\%)$ & $677\; (79.6\%)$ & $851$\\ Toxicity & $131\; (12.8\%)$ & $5\; (0.5\%)$ & $886\; (86.9\%)$ & $1,022$\\ Hate Expert & $241\; (32.6\%)$ & $25\; (3.4\%)$ & $473\; (64\%)$ & $739$ \\ Hate Speech & $116\; (13.9\%)$ & $47\; (5.62\%)$ & $674\; (80.5\%)$ & $837$\\ StormFront & $74\; (11.9\%)$ & $117\ (18.8\%)$ & $431\; (69.3\$)$ & $622$ \end{tabular}% } \caption{Number of unique LIWC tokens in each class for each dataset and the size of their intersection.} \label{tab:liwc_vocab_overlaps} \end{table} \begin{table}[h] \centering \resizebox{0.8\textwidth}{!}{% \begin{tabular}{lllll} & Not Abuse & Abuse & Intersection & Vocab size\\\hline Offence & $3,303\; (19.7\%)$ & $8,656\; (51.6\%)$ & $4,809\; (28.7\%)$ & $16,768$\\ Toxicity & $71,491\; (74.7\%)$ & $1,560\; (1.6\%)$ & $22,659\; (23.7\%)$ & $95,710$\\ Hate Expert & $6,155\; (67.6\%)$ & $953\; (10.5\%)$ & $2,002\; (22.98\%)$ & $9,110$\\ Hate Speech & $7,042\; (47.8\%)$ & $2,599\; (17.6\%)$ & $5,089\; (34.6\%)$ & $14,730$\\ StormFront & $1,834\; (32.9\%)$ & $2,273\; (40.8\%)$ & $1,459\; (26.2\%)$ & $5,566$ \end{tabular}% } \centering \caption{Number of unique word tokens in each class for each dataset and the size of their intersection.} \label{tab:word_vocab_overlaps} \end{table} \begin{table}[h] \centering \resizebox{0.8\textwidth}{!}{% \begin{tabular}{lllll} & Not Abuse & Abuse & Intersection & Vocab size\\\hline Offence & $3,199\; (19.2\%)$ & $7,978\; (47.9\%)$ & $5,486\; (32.9\%)$ & $16,663$\\ Toxicity & $71,493\; (74.7\%)$ & $1,560\; (1.6\%)$ & $22,659\; (23.4\%)$ & $95,712$\\ Hate Expert & $6,231\; (67.9\%)$ & $971\; (10.6\%)$ & $1,979\; (21.6\%)$ & $9,181$\\ Hate Speech & $7,074\; (47.7\%)$ & $2,653\; (17.9\%)$ & $5,107\; (34.4\%)$ & $14,834$\\ StormFront & $1,804\; (32.7\%)$ & $2,240\; (40.7\%)$ & $1,466\; (26.6\%)$ & $5,510$ \end{tabular}% } \caption{Number of unique BPE tokens in each class for each dataset and the size of their intersection.} \label{tab:bpe_vocab_overlaps} \end{table} Considering \cref{tab:liwc_vocab_overlaps,tab:word_vocab_overlaps,tab:bpe_vocab_overlaps} that display the token distribution for each type of input on each dataset, there are some clear implications for my research questions. First, only minor distributional shifts between word-based vocabularies (see \cref{tab:word_vocab_overlaps}) and BPE-based vocabularies (see \cref{tab:bpe_vocab_overlaps}). As processing and representing documents as their byte-pair represented counter-parts results in the computation of subwords, such small distributional discrepancies are to be expected. Second, observing the differences between LIWC vocabulary distributions (see \cref{tab:liwc_vocab_overlaps}) and the word vocabulary distributions, it is clear that the distributional changes are large and that the LIWC-based representation has large ramifications on the datasets and subsequently models trained for the task. For instance, as the vast majority of tokens are shared between both classes, there are fewer potential signals for models to overfit to, e.g. 
where a word-based model trained on the \textit{Toxicity} dataset may have at least $73,051$ unique tokens, that is $76.3\%$ of all unique tokens in the dataset, that it can potentially learn spurious correlations on, a LIWC-based model is only provided with $136$ unique tokens, or $13.3\%$ of all unique tokens, that it is likely to overfit to.\footnote{In both instances disregarding the possibility of over-fitting to the patterns of occurrences between different tokens.} Similarly to n-gram character-based modelling, a smaller set of unique tokens is likely to result in a matrix that is, in places, more dense, allowing for a model to identify patterns based on the interaction of tokens rather than individual tokens. This particular case is likely for LIWC-based models as the vast majority (between $64\%$ and $86\%$) of tokens are shared between both classes. \subsection{Linear Baseline Models}\label{sec:baseline_models} For the linear baseline models, I train several different linear models (i.e. Logistic Regression models and SVMs) that function as baselines, using the scikit-learn library \citep{Pedregosa:2015}. For each algorithm, I train three different models: a) a surface-token based model that uses documents tokenised into words, b) a model that uses the Byte-Pair encoded representation, and c) a model that uses the LIWC-based representation as its input data. For all baseline models, I only use token unigrams as features, as these provide competitive baselines for many of the datasets (see \autoref{tab:liwc_baseline_linear_scores}). To ensure that the baseline models use the most appropriate parameters, I perform a cross-validated grid-search (as implemented by \citet{Pedregosa:2015}) over all possible settings of the model parameters for each model. For both SVM and Logistic Regression models, I explore values of $C\in [0.1, 0.2, 0.3, \ldots, 0.9]$ to examine the strength of regularisation, and I experiment with $L1$ and $L2$ regularisers. For Logistic Regression, I also set the parameter search to consider \textit{Elasticnet} as a third regulariser option. All linear baseline models prefer an $L2$ regularisation. Considering \cref{tab:liwc_baseline_linear_params}, it is clear that in most cases using a word token input results in the best scores on the development set, the exception to the rule being the \textit{Hate Expert} dataset. However, an interesting pattern emerges: for many of the models, using the LIWC-tokenised input provides highly competitive results, suggesting the efficacy of LIWC-based tokenisation even for linear models, a promising sign for the subsequent experiments. In the training procedure for the linear models, I first fit a count vectoriser on the training data and fit a model to the vectorised training data. For prediction on other datasets, all datasets are passed through the vectoriser fitted to the training data. This ensures that all datasets are processed and indexed in accordance with the vocabulary of the training dataset and the model. A notable difference between the training of linear models and their neural network counterparts is that linear models are only provided with the dataset once for each cross-validation set and the order of the documents in the dataset is not randomised, whereas the neural network models iterate multiple times over the training dataset and the order of the documents is shuffled between each iteration.
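As a concrete illustration of this baseline set-up, the following is a minimal sketch of a unigram count vectoriser and a cross-validated grid-search with scikit-learn; the toy data is a placeholder, and the grid mirrors, but does not reproduce exactly, the search space described above (an analogous grid is run with \texttt{sklearn.svm.LinearSVC} for the SVM baselines).

\begin{verbatim}
# A minimal sketch of the linear baseline training procedure: unigram counts,
# cross-validated grid search over C and the regulariser, and reuse of the
# fitted vectoriser for prediction on other datasets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

train_docs = ["an innocuous example", "an abusive example",
              "more innocuous text", "more abusive text"]   # placeholder data
train_labels = [0, 1, 0, 1]

pipeline = Pipeline([
    ("vectoriser", CountVectorizer(ngram_range=(1, 1))),    # unigram features only
    ("clf", LogisticRegression(solver="liblinear")),
])

param_grid = {
    "clf__C": [0.1 * i for i in range(1, 10)],               # 0.1, ..., 0.9
    "clf__penalty": ["l1", "l2"],                             # liblinear supports both
}

search = GridSearchCV(pipeline, param_grid, scoring="f1_macro", cv=2)
search.fit(train_docs, train_labels)
print(search.best_params_, search.best_score_)

# Prediction on an external dataset reuses the vectoriser fitted on the
# training data, so documents are indexed against the training vocabulary.
print(search.best_estimator_.predict(["an unseen document"]))
\end{verbatim}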
\begin{table}[] \centering \resizebox{0.5\textwidth}{!}{% \begin{tabular}{llccc} & & Model & C & Evaluation F1-score \\\hline \multirow{6}{*}{\rotatebox{90}{Offence}} & \multirow{2}{*}{Word} & SVM & 0.1 & \textbf{0.9222} \\ & & LR & 0.9 & 0.9093 \\ & \multirow{2}{*}{BPE} & SVM & 0.1 & 0.9216 \\ & & LR & 0.8 & 0.9119 \\ & \multirow{2}{*}{LIWC} & SVM & 0.1 & 0.9207 \\ & & LR & 0.2 & 0.9140 \\\hline \multirow{6}{*}{\rotatebox{90}{Toxicity}} & \multirow{2}{*}{Word} & SVM & 0.2 & \textbf{0.8678} \\ & & LR & 1.0 & 0.8660 \\ & \multirow{2}{*}{BPE} & SVM & 0.1 & 0.8664 \\ & & LR & 1.0 & 0.8673 \\ & \multirow{2}{*}{LIWC} & SVM & 0.9 & 0.8514 \\ & & LR & 1.0 & 0.8374 \\\hline \multirow{6}{*}{\rotatebox{90}{Hate Expert}} & \multirow{2}{*}{Word} & SVM & 0.1 & 0.7587 \\ & & LR & 1.0 & 0.7653 \\ & \multirow{2}{*}{BPE} & SVM & 0.1 & \textbf{0.8090} \\ & & LR & 0.8 & 0.7974 \\ & \multirow{2}{*}{LIWC} & SVM & 1.0 & 0.6378 \\ & & LR & 0.7 & 0.6354 \\\hline \multirow{6}{*}{\rotatebox{90}{Hate Speech}} & \multirow{2}{*}{Word} & SVM & 0.1 & \textbf{0.7995} \\ & & LR & 0.9 & 0.7928 \\ & \multirow{2}{*}{BPE} & SVM & 0.1 & 0.7853 \\ & & LR & 0.5 & 0.7676 \\ & \multirow{2}{*}{LIWC} & SVM & 0.4 & 0.7214 \\ & & LR & 1.0 & 0.7265 \\\hline \multirow{6}{*}{\rotatebox{90}{StormFront}} & \multirow{2}{*}{Word} & SVM & 0.1 & 0.7485 \\ & & LR & 0.9 & \textbf{0.7508} \\ & \multirow{2}{*}{BPE} & SVM & 0.3 & 0.7041 \\ & & LR & 1.0 & 0.7406 \\ & \multirow{2}{*}{LIWC} & SVM & 0.1 & 0.7068 \\ & & LR & 0.1 & 0.7249 \end{tabular}% } \caption{Optimal parameter values for linear baselines by macro F1-score on in-domain evaluation sets.} \label{tab:liwc_baseline_linear_params} \end{table} \subsection{Neural Models}\label{sec:redux_neural} I implement three different neural network model types using PyTorch~\citep{Paszke:2019} and perform a hyper-parameter search on each model type for every dataset and input type. Specifically, I implement a Multi-Layered Perceptron (MLP) model, a Long-Short Term Memory (LSTM) network, and a Convolutional Neural Network (CNN). I choose to implement an MLP as it is the simplest form of neural networks and it can provide early insights into the applicability of neural network based architectures. As the LIWC tokens are distributed such that the vast majority of tokens are shared by both classes, I also train a LSTM network to take long-range dependencies into account. I choose a LSTM over a basic RNN model as unknown tokens are likely to occur frequently in the LIWC-based data due to the small size of the dictionary and resulting vocabularies, and it may be desirable for any recurrent model trained on the data to be afforded the ability to forget sequences of unknown tokens. Finally, I train a CNN model as this model type has been dominant in the literature. In order to focus on the utility of the different document representations, I train models with simple architectures. To this end, I also don't use pre-trained embeddings as embedding layers within the model as I am not aware of any general purpose pre-trained LIWC-embeddings and, as is apparent from \cref{tab:liwc_tok}, the LIWC tokens generated for each token would most likely be out-of-vocabulary for most pre-trained word embeddings. Instead I opt to train the embedding layer along with all other layers. 
To address the issue of the model over-fitting the data, either by identifying spurious correlations in the data or by over-training the model, I subject each model to dropout and early stopping (see \cref{sec:dropoutearly} for more detail on dropout and early stopping). To address the issues of exploding and vanishing gradients, I employ gradient clipping \citep{Bengio:1994} to normalise the value of the gradients in the training procedures.\vspace{5mm} I use a single training procedure for all models to control the influence of confounding factors in the training process. The models are given data, which they iterate over for a pre-defined number of epochs, shuffling the dataset between each epoch. Within each epoch, batches of the data are passed through the model for prediction during training. The loss following the model's predictions is then back-propagated through the model, updating the internal representation in the process. This process is repeated for the assigned maximum number of epochs, or until the model triggers the early stopping \citep{Early-stopping paper here} criterion, which is that the computed loss on the validation set has been strictly increasing for at least $15$ epochs. Once a model has finished training, performance in terms of macro F1-score, precision, recall and accuracy is computed on the validation and evaluation sets. To be able to speak to the training time, I start a timer when the model training procedure is initiated and stop the timer when the model has fully completed its training, but prior to any inference made using the model. I repeat this process for at least $200$ unique trials for each model and dataset combination and use the F1-score on the validation set to identify the best configuration of hyper-parameters. Once the best hyper-parameters are selected, I rerun the models with $5$ different random seeds and obtain these models' performance on all evaluation sets, including the in-domain test set for the dataset the models are trained on and the out-of-domain evaluation sets from the remaining datasets. \subsection{Hyper-Parameter Search}\label{sub:liwc_hyperparam} In an effort to identify the best hyper-parameters without performing a grid-search over all possible combinations, I turn towards Bayesian Hyper-Parameter Tuning \citep{Neal:1996}. Briefly, Bayesian Optimisation allows for estimating the best hyper-parameters for a model through a series of trials with different hyper-parameter settings. I use the implementation of Bayesian Hyper-Parameter Optimisation offered through \citet{Wandb} and set the objective of the hyper-parameter optimisation to maximise the macro F1-score on the development data (please refer to \cref{sub:bho} for more detail). The parameters that I perform the optimisation over vary across the different model types, as they require different hyper-parameters to be defined. A set of hyper-parameters is constant across models: the size of mini-batches provided to the model for training, the learning rate, the number of epochs, and the embedding size. For each dataset and model type, I perform at least $200$ trials with different parameter settings, leading to a final set of hyper-parameters that I run with five different random seeds. The values for the learning rate are sampled from a uniform distribution, while the batch size and number of training epochs are sampled from a categorical distribution.
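Before turning to the hyper-parameter ranges in more detail, the shared training procedure described above, mini-batch updates with gradient clipping and early stopping once the validation loss has been strictly increasing for $15$ consecutive epochs, can be sketched as follows; the toy model, data, and hyper-parameter values are placeholders rather than the settings used in the experiments.

\begin{verbatim}
# A minimal sketch of the shared training loop: shuffled mini-batches,
# gradient clipping, and early stopping on strictly increasing validation loss.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(42)
model = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 2))  # toy model
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

train_loader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
                          batch_size=16, shuffle=True)   # re-shuffled every epoch
val_loader = DataLoader(TensorDataset(torch.randn(32, 10), torch.randint(0, 2, (32,))),
                        batch_size=16)

patience, rising_epochs, previous_val_loss = 15, 0, float("inf")
for epoch in range(200):                                  # maximum epoch count
    model.train()
    for batch_x, batch_y in train_loader:
        optimiser.zero_grad()
        loss = criterion(model(batch_x), batch_y)
        loss.backward()
        nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
        optimiser.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(criterion(model(x), y).item() for x, y in val_loader)

    # Early stopping: count consecutive epochs with strictly increasing val. loss.
    rising_epochs = rising_epochs + 1 if val_loss > previous_val_loss else 0
    previous_val_loss = val_loss
    if rising_epochs >= patience:
        break
\end{verbatim}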
More generally, the values for all hyper-parameters, asides from dropout and the learning rate are sampled from a categorical distribution. \begin{itemize} \item Maximum epoch count: $\{50, 100, 200\}$, \item Batch size: $\{16, 32, 64\}$, \item learning rate: $[0.00001, 1.0]$ \end{itemize} \zw{Update \cref{tab:redux_embedding_offence_params,tab:redux_embedding_toxicity_params,tab:redux_embedding_hate_expert_params,tab:redux_embedding_hatespeech_params,tab:redux_embedding_stormfront_params} when the models have rerun} \zw{Update \cref{ig:davidson_dev_loss,fig:davidson_dev_f1} \cref{fig:wulczyn_dev_loss,fig:wulczyn_dev_f1} \cref{fig:waseem_dev_loss,fig:waseem_dev_f1} \cref{fig:waseem_hovy_dev_loss,fig:waseem_hovy_dev_f1} \cref{fig:garcia_dev_loss,fig:garcia_dev_f1} when models have rerun} \begin{landscape} \begin{table}[b] \centering \resizebox{0.75\paperheight}{!}{% \begin{tabular}{llcccccccccc} & Model & Embedding Dimension & Hidden Dimension & Window Size & Filters & Batch Size & Learning Rate & Dropout & Activation Function & \# Epochs & Validation F1-score \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{WORD}}} & MLP & 100 & 200 & N/A & N/A & 64 & 0.005804 & 0.3976 & tanh & 200 & 0.9616 \\ & CNN & 300 & N/A & 1, 2, 3 & 128 & 32 & 0.6387 & N/A & tanh & 200 & 0.4841 \\ & LSTM & 64 & 300 & N/A & N/A & 64 & 0.001574 & 0.09701 & tanh & 50 & 0.9742 \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{BPE}}} & MLP & 100 & 300 & N/A & N/A & 64 & 0.004817 & 0.2367 & relu & 200 & 0.9698 \\ & CNN & 64 & N/A & 1, 2, 3 & 128 & 16 & 0.007243 & N/A & tanh & 50 & 0.9664 \\ & LSTM & 200 & 200 & N/A & N/A & 16 & 0.002144 & 0.05892 & tanh & 50 & \bf{0.9783} \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{LIWC}}} & MLP & 300 & 300 & N/A & N/A & 64 & 0.002731 & 0.4415 & relu & 100 & 0.8827 \\ & CNN & 64 & N/A & 3, 4, 5 & 256 & 64 & 0.002429 & N/A & relu & 200 & 0.9625 \\ & LSTM & 200 & 64 & N/A & N/A & 64 & 0.0009199 & 0.06586 & tanh & 100 & 0.9644 \end{tabular}% } \caption{Best hyper-parameters for models trained on the \textit{Offence} dataset.} \label{tab:redux_embedding_offence_params} \end{table} \begin{table}[b] \centering \resizebox{0.75\paperheight}{!}{% \begin{tabular}{llcccccccccc} & Model & Embedding Dimension & Hidden Dimension & Window Size & Filters & Batch Size & Learning Rate & Dropout & Activation Function & \# Epochs & Validation F1-score \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{WORD}}} & MLP & 100 & 200 & N/A & N/A & 64 & 0.000853 & 0.1695 & tanh & 100 & 0.8367 \\ & CNN & 100 & N/A & 1, 2, 3 & 256 & 64 & 0.2427 & N/A & relu & 50 & 0.845 \\ & LSTM & 300 & 300 & N/A & N/A & 16 & 0.8862 & 0.167 & tanh & 200 & 0.841 \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{BPE}}} & MLP & 64 & 300 & N/A & N/A & 64 & 0.00113 & 0.3273 & tanh & 200 & 0.854 \\ & CNN & 200 & N/A & 3, 4, 5 & 256 & 64 & 0.00009431 & N/A & relu & 100 & 0.8637 \\ & LSTM & 200 & 200 & N/A & N/A & 32 & 0.4259 & 0.2247 & tanh & 200 & \bf{0.8733} \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{LIWC}}} & MLP & 64 & 100 & N/A & N/A & 64 & 0.01256 & 0.0489 & relu & 200 & 0.8233 \\ & CNN & 64 & N/A & 2, 3, 4 & 128 & 64 & 0.03826 & N/A & tanh & 100 & 0.7895 \\ & LSTM & 100 & 200 & N/A & N/A & 16 & 0.0698 & 0.2282 & tanh & 50 & 0.8322 \end{tabular}% } \caption{Best hyper-parameters for models trained on the \textit{Toxicity} dataset.} \label{tab:redux_embedding_toxicity_params} \end{table} \begin{table}[b] \centering \resizebox{0.75\paperheight}{!}{% \begin{tabular}{llcccccccccc} & Model & Embedding Dimension & Hidden Dimension & Window 
Size & Filters & Batch Size & Learning Rate & Dropout & Activation Function & \# Epochs & Validation F1-score \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{WORD}}} & MLP & 100 & 100 & N/A & N/A & 32 & 0.2576 & 0.1904 & tanh & 50 & 0.77 \\ & CNN & 64 & N/A & 2, 3, 4 & 128 & 64 & 0.005644 & N/A & tanh & 200 & 0.8007 \\ & LSTM & 300 & 300 & N/A & N/A & 64 & 0.9014 & 0.4473 & tanh & 50 & 0.8347 \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{BPE}}} & MLP & 64 & 200 & N/A & N/A & 16 & 0.7323 & 0.3823 & tanh & 200 & 0.8007 \\ & CNN & 300 & N/A & 2, 3, 4 & 256 & 64 & 0.01062 & N/A & relu & 200 & 0.7995 \\ & LSTM & 64 & 64 & N/A & N/A & 32 & 0.9958 & N/A0.1481 & tanh & 200 & \bf{0.8389} \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{LIWC}}} & MLP & 300 & 200 & N/A & N/A & 54 & 0.0391 & 0.07923 & tanh & 50 & 0.6543 \\ & CNN & 100 & N/A & 3, 4, 5 & 64 & 32 & 0.01064 & N/A & tanh & 200 & 0.6103 \\ & LSTM & 300 & 300 & N/A & N/A & 16 & 0.2163 & 5789 & tanh & 50 & 0.6546 \end{tabular}% } \caption{Best hyper-parameters for models trained on the \textit{Hate Expert} dataset.} \label{tab:redux_embedding_hate_expert_params} \end{table} \end{landscape} \begin{landscape} \begin{table}[] \centering \resizebox{0.75\paperheight}{!}{% \begin{tabular}{llcccccccccc} & Model & Embedding Dimension & Hidden Dimension & Window Size & Filters & Batch Size & Learning Rate & Dropout & Activation Function & \# Epochs & Validation F1-score \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{WORD}}} & MLP & 200 & 64 & N/A & N/A & 32 & 0.251 & 0.4156 & tanh & 50 & 0.7536 \\ & CNN & 200 & N/A & 2, 3, 4 & 256 & 16 & 0.004019 & N/A & tanh & 200 & 0.7365 \\ & LSTM & 200 & 200 & N/A & N/A & 16 & 0.7499 & 0.4364 & tanh & 50 & 0.7766 \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{BPE}}} & MLP & 200 & 300 & N/A & N/A & 64 & 0.0005028 & 0.3471 & relu & 50 & 0.7756 \\ & CNN & 200 & N/A & 1, 2, 3 & 128 & 64 & 0.002686 & N/A & relu & 100 & 0.7717 \\ & LSTM & 300 & 300 & N/A & N/A & 16 & 0.2446 & 0.03933 & tanh & 200 & \bf{0.7972} \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{LIWC}}} & MLP & 100 & 100 & N/A & N/A & 64 & 0.01481 & 0.214 & tanh & 100 & 0.7348 \\ & CNN & 100 & N/A & 2, 3, 4 & 128 & 16 & 0.002787 & N/A & tanh & 50 & 0.738 \\ & LSTM & 64 & 64 & N/A & N/A & 32 & 0.5301 & 0.4377 & tanh & 50 & 0.7592 \end{tabular}% } \caption{Best hyper-parameters for models trained on the \textit{Hate Speech} dataset.} \label{tab:redux_embedding_hatespeech_params} \end{table} \begin{table}[] \centering \resizebox{0.75\paperheight}{!}{% \begin{tabular}{llllllllllll} & Model & Embedding Dimension & Hidden Dimension & Window Size & Filters & Batch Size & Learning Rate & Dropout & Activation Function & \# Epochs & Validation F1-score \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{WORD}}} & MLP & 64 & 200 & N/A & N/A & 32 & 0.4665 & 0.4862 & tanh & 200 & 0.7395 \\ & CNN & 200 & N/A & 1, 2, 3 & 64 & 32 & 0.07107 & N/A & tanh & 100 & 0.7198 \\ & LSTM & 200 & 64 & N/A & N/A & 32 & 0.8046 & 0.2547 & tanh & 200 & 0.7297 \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{BPE}}} & MLP & 64 & 100 & N/A & N/A & 32 & 0.000325 & 0.1971 & relu & 200 & 0.7538 \\ & CNN & 200 & N/A & 3, 4, 5 & 256 & 64 & 0.009577 & N/A & relu & 100 & 0.7356 \\ & LSTM & 100 & 300 & N/A & N/A & 16 & 0.5939 & 0.4588 & tanh & 200 & 0.7503 \\\hline \multirow{3}{*}{\rotatebox{90}{\textit{LIWC}}} & MLP & 64 & 100 & N/A & N/A & 64 & 0.01948 & 0.3201 & tanh & 50 & 0.7118 \\ & CNN & 64 & N/A & 2, 3, 4 & 128 & 64 & 0.01204 & N/A & tanh & 100 & 0.5873 \\ & LSTM & 64 & 200 & N/A & N/A & 16 & 
0.3154 & 0.4174 & tanh & 200 & 0.7272 \end{tabular}% } \caption{Best hyper-parameters for models train on the \textit{StormFront} dataset.} \label{tab:redux_embedding_stormfront_params} \end{table} \end{landscape} \begin{figure} \centering \includegraphics[width=\textwidth]{davidson_dev_f1.pdf} \caption{In-domain macro F1-score on validation set for models trained on the \textit{Offence} dataset.} \label{fig:davidson_dev_f1} \end{figure} As a result of the $200$ trials, several parameter settings compete to be the best performing model, with little difference in their scores on the validation set. Although only the model with the highest macro F1-score performance on the validation set is chosen, I make note of the top $5$ models, should one of the best performing models result in inconsistent results across different values of the random seed. The hyper-parameters for the best and most stably performing model are presented in \cref{tab:redux_embedding_offence_params,tab:redux_embedding_toxicity_params,tab:redux_embedding_hate_expert_params,tab:redux_embedding_hatespeech_params,tab:redux_embedding_stormfront_params}. Taking the best performing hyper-parameters, I re-run each model and dataset combination with five different random seeds ($22$, $32$, $42$, $310$, and $922$) and display the macro F1 scores and the loss on the in-domain validation sets in \cref{fig:davidson_dev_f1,fig:davidson_dev_loss,fig:wulczyn_dev_f1,fig:wulczyn_dev_loss,fig:waseem_dev_f1,fig:waseem_dev_loss,fig:waseem_hovy_dev_f1,fig:waseem_hovy_dev_loss,fig:garcia_dev_f1,fig:garcia_dev_loss} as the models are trained. Note, that in \cref{fig:davidson_dev_f1,fig:wulczyn_dev_f1,fig:waseem_dev_f1,fig:waseem_hovy_dev_f1,fig:garcia_dev_f1} the standard deviations is depicted through shading whereas in \cref{fig:davidson_dev_loss,fig:wulczyn_dev_loss,fig:waseem_dev_loss,fig:waseem_hovy_dev_loss,fig:garcia_dev_loss} the shading represents the standard error.\footnote{I display the standard error here, rather than the standard deviation, as displaying the standard deviation on a log scale results in unreadable graphs due to outliers. Ignoring outliers in producing the graph has the natural consequence that parts of the graph exist outside of the displayed area.} Moreover, as loss values most frequently are small, I display the losses on a logarithmic scale to allow for more readable figures.\vspace{5mm} In observing the F1-scores on the validation sets in \cref{fig:davidson_dev_f1,fig:wulczyn_dev_f1,fig:waseem_dev_f1,fig:waseem_hovy_dev_f1,fig:garcia_dev_f1}, a general pattern emerges in which most models, regardless of input type, display similar learning curves. A notable exceptions to these are the models trained on the \textit{StormFront} dataset (see \cref{fig:garcia_dev_f1}), in which two general patterns emerge: the first which has large variability in the models' performance from epoch to epoch and tends to trigger the early stopping criteria earlier, and the second which has a more stable learning curve with smaller performance drops and improvements between each epoch and trains for more epochs. Another exception can be observed in the models trained on \textit{Toxicity} dataset (see \cref{fig:wulczyn_dev_f1}). Here three salient trends occur. In the first, models start with a high F1-score and show little improvement as over the epochs and trigger early stopping. 
In the second, models start with a lower F1-score and show steady improvements until early stopping is triggered, % TODO Move this down to when I specifically look at the Wulczyn graphs? with the exception of the LIWC CNN models which starts with relatively stable performance before having a large variability in performance from epoch to epoch, finally stabilising briefly before early stopping is triggered. The third trend, starts with a relatively low model performance (below $0.4$ in F1-score) and only trigger early stopping late in the process (if ever). As these models train, there is an early sharp increase in the model performances followed by reaching a plateau and only post minor improvements in the later epochs. Turning to the loss developments in \cref{fig:davidson_dev_loss,fig:wulczyn_dev_loss,fig:waseem_dev_loss,fig:waseem_hovy_dev_loss,fig:garcia_dev_loss} there are three unique patterns: first, the loss rises throughout the training process; second, the loss remains almost entirely unchanged throughout the entire training process, and finally the third pattern; the loss is volatile throughout the process, rising or dropping from epoch to epoch. \begin{figure} \centering \includegraphics[width=\textwidth]{davidson_dev_loss_stderr_logscale.pdf} \caption{Validation losses for models trained on the \textit{Offence} dataset.} \label{fig:davidson_dev_loss} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{wulczyn_dev_f1.pdf} \caption{In-domain macro F1-score on validation set for models trained on the \textit{Toxicity} dataset.} \label{fig:wulczyn_dev_f1} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{wulczyn_dev_loss_stderr_logscale.pdf} \caption{Validation losses for models trained on the \textit{Toxicity} dataset.} \label{fig:wulczyn_dev_loss} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{waseem_dev_f1.pdf} \caption{In-domain macro F1-score on validation set for models trained on the \textit{Hate Expert} dataset.} \label{fig:waseem_dev_f1} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{waseem_dev_loss_stderr_logscale.pdf} \caption{Validation losses for models trained on the \textit{Hate Expert} dataset.} \label{fig:waseem_dev_loss} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{waseem_hovy_dev_f1.pdf} \caption{In-domain macro F1-score on validation set for models trained on the \textit{Hate Speech} dataset.} \label{fig:waseem_hovy_dev_f1} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{waseem_hovy_dev_loss_stderr_logscale.pdf} \caption{Validation losses for models trained on the \textit{Hate Speech} dataset.} \label{fig:waseem_hovy_dev_loss} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{garcia_dev_f1.pdf} \caption{In-domain macro F1-score on validation set for models trained on the \textit{StormFront} dataset.} \label{fig:garcia_dev_f1} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{garcia_dev_loss_stderr_logscale.pdf} \caption{Validation losses for models trained on the \textit{StormFront} dataset.} \label{fig:garcia_dev_loss} \end{figure} \subsubsection{Multi-Layered Perceptron} The first neural architecture that I implement is a Multi-Layered Perceptron. I choose this model as it is the simplest form of a neural network and thus well suited for an initial investigation into the feasibility of neural network approaches for the LIWC-based representations. 
The MLP architecture that I use also lays the basis for the architectures for all other neural network based models in this chapter. The network consists of an embedding input layer, a hidden layer, an output layer, and a softmax layer which produces the probabilities for each class (please refer to \cref{fig:liwc_mlp} for a depiction of the model). I subject the model's representation to a dropout layer and a non-linear activation function between the input layer and hidden layer, and the hidden and output layer. \afterpage{ \begin{figure} \centering \includegraphics[scale=0.75]{mlp.jpg} \caption{Multi-Layered Perceptron model architecture.} \label{fig:liwc_mlp} \end{figure} } I perform a hyper-parameter search over shared the hyper-parameters detailed in \cref{sub:liwc_hyperparam} in addition to the following hyper-parameter values, that are specific to the MLP: \begin{itemize} \item dropout probability: $[0.0, 0.5]$, \item hidden layer dimension: $\{64, 100, 200, 300\}$, and \item the activation function: $\{ReLU, Tanh\}$ \end{itemize} \subsubsection{Long-Short Term Memory} The second model I choose to implement is a LSTM model. This model was chosen as its recurrent nature allows for experiments that can offer insights into the efficacy of the model recalling long and short-range interactions between tokens in the data. I choose the LSTM variant of an RNN over a vanilla RNN as LIWC encoded documents are likely to have several out-of-vocabulary tokens between each known token and thus retaining all information about such unknown tokens may not be conducive to identifying salient patterns in LIWC token representation. \begin{figure} \centering \includegraphics[scale=0.75]{lstm.jpg} \caption{Long-Short Term Memory Network model architecture.} \label{fig:liwc_lstm} \end{figure} I implement a simple architecture for the LSTM model to retain focus on the input types and to avoid the influence of confounding factors such as attention. The LSTM consists of an input embedding layer, a uni-directional LSTM layer, a linear output layer, and a softmax to produce the class probabilities (see \autoref{fig:liwc_lstm} for a depiction of the LSTM model). I apply a dropout to the output of the input layer and the output of the LSTM layer in efforts to prevent the model from over-fitting on any particular pattern. I use the PyTorch implementation of an LSTM which uses a Tanh activation function \citep{Paszke:2019}, for which reason I do not apply other non-linearities to the model. Thus, for the LSTM our hyper-parameter tuning considers the following parameters and values: \begin{itemize} \item dropout probability: $[0.0, 0.5]$, \item embedding layer dimension: $\{64, 100, 200, 300\}$, and \item hidden layer dimension: $\{64, 100, 200, 300\}$ \end{itemize} \subsubsection{Convolutional Neural Network} For the final neural network architecture, I use a Convolutional Neural Network due to its dominance in the literature (see \cref{sec:liwc_nn}) and its use in commercial tools (e.g. the Perspective API). Similarly to the MLP and LSTM, the input layer is an embedding layer, followed by three two-dimensional convolutional layers, each of which are subject to a non-linear activation function, a one dimensional max-pooling layer, and a output layer. The representation obtained through the output layer is subjected to a softmax function that computes the probability distribution for the classes (see \cref{fig:liwc_cnn} for a depiction of the network). 
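A minimal PyTorch sketch of such a CNN classifier is given below; the vocabulary size, embedding dimension, window sizes, and number of filters are illustrative values drawn from the grids above rather than the tuned settings.

\begin{verbatim}
# A minimal sketch of the CNN architecture described above: an embedding layer,
# parallel convolutions over several window sizes, max pooling over positions,
# and a softmax output producing class probabilities.
import torch
from torch import nn


class TextCNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim=100, window_sizes=(2, 3, 4),
                 num_filters=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
        # One 2-D convolution per window size, sliding over the token dimension.
        self.convs = nn.ModuleList([
            nn.Conv2d(1, num_filters, kernel_size=(w, embedding_dim))
            for w in window_sizes
        ])
        self.activation = nn.ReLU()
        self.output = nn.Linear(num_filters * len(window_sizes), num_classes)

    def forward(self, token_ids):                        # (batch, seq_len)
        x = self.embedding(token_ids).unsqueeze(1)       # (batch, 1, seq_len, emb_dim)
        pooled = []
        for conv in self.convs:
            feature_map = self.activation(conv(x)).squeeze(3)    # (batch, filters, positions)
            pooled.append(torch.max(feature_map, dim=2).values)  # max pool over positions
        features = torch.cat(pooled, dim=1)
        return torch.softmax(self.output(features), dim=1)       # class probabilities


model = TextCNN(vocab_size=1000)
print(model(torch.randint(1, 1000, (8, 30))).shape)      # torch.Size([8, 2])
\end{verbatim}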
\begin{figure} \centering \includegraphics[scale=0.75]{cnn.jpg} \caption{Convolutional Neural Network model architecture.} \label{fig:liwc_cnn} \end{figure} Unlike the MLP and LSTM models, the CNN models are not subject to dropout. However, as the CNN requires the window sizes for the sliding windows and the number of filters applied by the convolutional layers to be set, these are added as hyper-parameters to tune. The hyper-parameters tuned for this model are thus: \begin{itemize} \item window size: $\{(1, 2, 3), (2, 3, 4), (3, 4, 5)\}$, \item number of filters: $\{64, 128, 256\}$, \item hidden layer dimension: $\{64, 100, 200, 300\}$, and \item the activation function: $\{ReLU, Tanh\}$ \end{itemize} \section{Results} To answer the research questions set forth, I examine three different aspects of my models: First, I examine the model performances on a held-out evaluation set from the dataset they are trained on, in an effort to answer whether a LIWC representation is appropriate for modelling. Second, I compare the model performances on the evaluation sets of the datasets that they are not trained on. Finally, I observe the time it takes for models to be trained using the different representations. To answer these questions, I develop three different neural network types and train each one on each of the five datasets introduced in \cref{sub:liwc_datasets}, resulting in $45$ different model architectures trained. These $45$ model architectures are then subject to at least $200$ hyper-parameter selection trials, resulting in over $9,000$ models trained in the process of identifying the best hyper-parameters. Once the hyper-parameters are determined, I perform an additional $5$ runs for each model architecture, examining the influence of the random seed. \subsection{Baseline Models} Though they are only baselines, a number of interesting patterns emerge in \cref{tab:linear_offence_baselines,tab:linear_toxicity_baselines,tab:linear_hateExp_baselines,tab:linear_hatespeech_baselines,tab:linear_stormfront_baselines}. First, the baselines validate the hypothesis that a LIWC-based representation can serve as a viable input to machine learning models, as the LIWC-based models often achieve competitive scores, and in some instances outperform models with other data representations, e.g. in recall on the in-domain prediction on the \textit{Offence} test data (see \cref{tab:linear_offence_baselines}) and in the F1-scores achieved on the out-of-domain datasets, for instance for the models trained on the \textit{Toxicity} dataset (see \cref{tab:linear_toxicity_baselines}). % The validation scores described in \cref{tab:linear_offence_baselines,tab:linear_toxicity_baselines,tab:linear_hateExp_baselines,tab:linear_hatespeech_baselines,tab:linear_stormfront_baselines} are not as strong as the state-of-the-art in-domain models \citep{Fortuna:2018}; in fact, they are comparable to the scores reported on the evaluation set in the original paper \citep{Davidson:2017}, which provided initial baseline scores.\footnote{We do not report these baseline scores as their work does not identify which weighting of their F1-score was used.} While several previous works augment the textual data with syntactic knowledge \citep{Davidson:2017} or advanced token representations \citep{Salminen:2020} to boost classification performance, we only use the byte-pair encoded documents and the LIWC encoded documents, to ensure comparability with our experimental models.
Moreover, as our primary concern is learning classifiers whose performance generalise to other datasets, unlike much prior work which concerns itself with learning classifiers that perform well within the dataset, we do not take further steps towards boosting our baseline classifiers in-domain performances. \begin{table}[] \centering \begin{minipage}{0.42\paperheight} \resizebox{0.42\paperheight}{!}{% \begin{tabular}{ll|ll|ll|ll} & & \multicolumn{2}{c|}{Word} & \multicolumn{2}{c|}{BPE} & \multicolumn{2}{c}{LIWC} \\ & & LR & SVM & LR & SVM & LR & SVM \\ \hline \multirow{4}{*}{\it{Offence}} & Accuracy & 0.9504 & \bf{0.9544} & 0.9484 & 0.9504 & 0.9399 & 0.9407 \\ & Precision & 0.9098 & \bf{0.9140} & 0.9064 & 0.9073 & 0.8772 & 0.8755 \\ & Recall & 0.9137 & 0.9257 & 0.9095 & 0.9174 & 0.9275 & \bf{0.9385} \\ & F1-score & 0.9118 & \bf{0.9197} & 0.9079 & 0.9123 & 0.8995 & 0.9025 \\ \hline \multirow{4}{*}{Toxicity} & Accuracy & 0.8319 & 0.8311 & 0.8882 & 0.8904 & \bf{0.9342} & 0.9308 \\ & Precision & 0.6505 & 0.6467 & 0.7055 & 0.7077 & \bf{0.8108} & 0.7988 \\ & Recall & 0.7941 & 0.7828 & 0.8006 & 0.7949 & 0.8062 & \bf{0.8067} \\ & F1-score & 0.6800 & 0.6752 & 0.7393 & 0.7397 & \bf{0.8085} & 0.8027 \\ \hline \multirow{4}{*}{Hate Expert} & Accuracy & 0.6416 & 0.6792 & 0.6922 & 0.7413 & 0.7630 & \bf{0.7688} \\ & Precision & 0.4892 & 0.4927 & 0.5223 & 0.5365 & \bf{0.5738} & 0.5685 \\ & Recall & 0.4832 & 0.4899 & 0.5316 & 0.5416 & \bf{0.5857} & 0.5737 \\ & F1-score & 0.4748 & 0.4864 & 0.5193 & 0.5381 & \bf{0.5783} & 0.5708 \\ \hline \multirow{4}{*}{Hate Speech} & Accuracy & 0.6413 & 0.6519 & 0.6294 & 0.6554 & \bf{0.6738} & 0.6696 \\ & Precision & 0.5835 & 0.5834 & 0.5565 & 0.5778 & \bf{0.5941} & 0.5838 \\ & Recall & \bf{0.5810} & 0.5722 & 0.5503 & 0.5601 & 0.5564 & 0.5475 \\ & F1-score & \bf{0.5820} & 0.5741 & 0.5510 & 0.5597 & 0.5490 & 0.5364 \\ \hline \multirow{4}{*}{StormFront} & Accuracy & 0.5879 & 0.5921 & 0.5795 & \bf{0.6172} & 0.5000 & 0.5314 \\ & Precision & 0.5887 & 0.5946 & 0.5798 & \bf{0.6181} & 0.5000 & 0.5705 \\ & Recall & 0.5879 & 0.5921 & 0.5795 & \bf{0.6172} & 0.5000 & 0.5314 \\ & F1-score & 0.5869 & 0.5893 & 0.5791 & \bf{0.6164} & 0.4226 & 0.4559 \end{tabular}% } \caption{Baseline scores for linear models trained on the \textit{Offence} dataset.} \label{tab:linear_offence_baselines} \vfill \resizebox{0.42\paperheight}{!}{% \begin{tabular}{ll|ll|ll|ll} & & \multicolumn{2}{c|}{Word} & \multicolumn{2}{c|}{BPE} & \multicolumn{2}{c}{LIWC} \\ & & LR & SVM & LR & SVM & LR & SVM \\ \hline \multirow{4}{*}{Offence} & Accuracy & 0.6414 & 0.7019 & 0.6563 & 0.6962 & 0.8072 & \bf{0.8665} \\ & Precision & 0.6442 & 0.6609 & 0.6501 & 0.6636 & 0.7291 & \bf{0.7763} \\ & Recall & 0.7576 & 0.7825 & 0.7686 & 0.7897 & 0.8736 & \bf{0.9083} \\ & F1-score & 0.5983 & 0.6459 & 0.6109 & 0.6437 & 0.7496 & \bf{0.8116} \\ \hline \multirow{4}{*}{\it{Toxicity}} & Accuracy & 0.9577 & \bf{0.9577} & 0.9584 & 0.9583 & 0.9530 & 0.9549 \\ & Precision & 0.9058 & 0.8961 & 0.9052 & 0.9015 & 0.9135 & \bf{0.9210} \\ & Recall & 0.8352 & \bf{0.8478} & 0.8406 & 0.8446 & 0.7952 & 0.8008 \\ & F1-score & 0.8662 & 0.8700 & 0.8693 & \bf{0.8703} & 0.8420 & 0.8484 \\ \hline \multirow{4}{*}{Hate Expert} & Accuracy & 0.8020 & 0.8035 & \bf{0.8353} & 0.8309 & 0.8309 & 0.8309 \\ & Precision & 0.4834 & 0.5111 & \bf{0.6325} & 0.6131 & 0.6167 & 0.6262 \\ & Recall & 0.4929 & 0.5053 & 0.5513 & 0.5449 & 0.5524 & \bf{0.5640} \\ & F1-score & 0.4786 & 0.4974 & 0.5582 & 0.5493 & 0.5600 & \bf{0.5750} \\ \hline \multirow{4}{*}{Hate Speech} & Accuracy & 
0.6761 & 0.6767 & 0.6684 & 0.6690 & 0.6785 & \bf{0.6838} \\ & Precision & 0.5876 & 0.5916 & 0.5606 & 0.5650 & 0.5925 & \bf{0.6099} \\ & Recall & 0.5325 & 0.5378 & 0.5217 & 0.5246 & 0.5325 & \bf{0.5423} \\ & F1-score & 0.5023 & 0.5130 & 0.4866 & 0.4925 & 0.5005 & \bf{0.5167} \\ \hline \multirow{4}{*}{StormFront} & Accuracy & 0.5544 & 0.5649 & \bf{0.5649} & 0.5628 & 0.5126 & 0.5146 \\ & Precision & 0.6697 & 0.6645 & \bf{0.6829} & 0.6675 & 0.5866 & 0.5799 \\ & Recall & 0.5544 & 0.5649 & \bf{0.5649} & 0.5628 & 0.5126 & 0.5146 \\ & F1-score & 0.4632 & \bf{0.4872} & 0.4811 & 0.4817 & 0.3800 & 0.3901 \end{tabular}% } \caption{Baseline scores for linear models trained on the \textit{Toxicity} dataset.} \label{tab:linear_toxicity_baselines} \end{minipage} \end{table} Second, though the baselines often produce substandard classification performances on out-of-domain data, ranging from random performance to worse than a majority baseline, they sometimes do yield surprisingly good results along individual metrics. E.g. in \cref{tab:linear_hateExp_baselines} a word-token model achieves a high performance on the \textit{Toxicity} dataset while a LIWC-based model posts surprisingly high scores on the \textit{Offence} dataset. \begin{table}[] \centering \begin{minipage}{0.42\paperheight} \resizebox{0.42\paperheight}{!}{% \begin{tabular}{ll|ll|ll|ll} & & \multicolumn{2}{c|}{Word} & \multicolumn{2}{c|}{BPE} & \multicolumn{2}{c}{LIWC} \\ & & LR & SVM & LR & SVM & LR & SVM \\ \hline \multirow{4}{*}{Offence} & Accuracy & 0.3651 & 0.3836 & 0.2925 & 0.3066 & 0.6006 & \bf{0.6575} \\ & Precision & 0.5537 & 0.5551 & 0.5495 & 0.5511 & 0.6213 & \bf{0.6317} \\ & Recall & 0.5695 & 0.5759 & 0.5451 & 0.5507 & 0.7159 & \bf{0.7339} \\ & F1-score & 0.3620 & 0.3783 & 0.2922 & 0.3066 & 0.5608 & \bf{0.6025} \\ \hline \multirow{4}{*}{Toxicity} & Accuracy & \bf{0.9054} & 0.9029 & 0.9022 & 0.9004 & 0.8482 & 0.8419 \\ & Precision & \bf{0.7241} & 0.6800 & 0.6698 & 0.6525 & 0.6011 & 0.5973 \\ & Recall & 0.5339 & 0.5299 & 0.5291 & 0.5323 & 0.6265 & \bf{0.6293} \\ & F1-score & 0.5404 & 0.5338 & 0.5325 & 0.5381 & \bf{0.6111} & 0.6090 \\ \hline \multirow{4}{*}{\it{Hate Expert}} & Accuracy & 0.8960 & \bf{0.8988} & 0.8960 & 0.8974 & 0.8671 & 0.8584 \\ & Precision & 0.8162 & 0.8193 & \bf{0.8327} & 0.8270 & 0.7537 & 0.7249 \\ & Recall & \bf{0.7492} & 0.7625 & 0.7285 & 0.7446 & 0.6549 & 0.6536 \\ & F1-score & 0.7764 & \bf{0.7865} & 0.7657 & 0.7765 & 0.6851 & 0.6779 \\ \hline \multirow{4}{*}{Hate Speech} & Accuracy & 0.6885 & 0.6885 & 0.6791 & 0.6850 & \bf{0.7063} & 0.7009 \\ & Precision & 0.6463 & 0.6463 & 0.5866 & 0.6199 & \bf{0.6658} & 0.6474 \\ & Recall & 0.5285 & 0.5285 & 0.5158 & 0.5260 & 0.5774 & \bf{0.5828} \\ & F1-score & 0.4777 & 0.4777 & 0.4577 & 0.4770 & 0.5681 & \bf{0.5792} \\ \hline \multirow{4}{*}{StormFront} & Accuracy & 0.5063 & 0.5063 & 0.5000 & 0.4979 & 0.5418 & \bf{0.5460} \\ & Precision & \bf{0.6516} & 0.6087 & 0.5000 & 0.2495 & 0.6364 & 0.6048 \\ & Recall & 0.5063 & 0.5063 & 0.5000 & 0.4979 & 0.5418 & \bf{0.5460} \\ & F1-score & 0.3507 & 0.3541 & 0.3370 & 0.3324 & 0.4458 & \bf{0.4720} \end{tabular}% } \caption{Baseline scores for linear models trained on the \textit{Hate Expert} dataset.} \label{tab:linear_hateExp_baselines} \vfill \resizebox{0.42\paperheight}{!}{% \begin{tabular}{ll|ll|ll|ll} & & \multicolumn{2}{c|}{Word} & \multicolumn{2}{c|}{BPE} & \multicolumn{2}{c}{LIWC} \\ & & LR & SVM & LR & SVM & LR & SVM \\ \hline \multirow{4}{*}{Offence} & Accuracy & 0.4490 & 0.4748 & 0.2727 & 0.4986 & \bf{0.5450} & 0.5365 \\ & 
Precision & 0.5887 & 0.5926 & 0.5483 & \bf{0.5955} & 0.6083 & 0.6073 \\ & Recall & 0.6353 & 0.6479 & 0.5381 & 0.6584 & 0.6873 & \bf{0.6841} \\ & F1-score & 0.4378 & 0.4593 & 0.2714 & 0.4784 & \bf{0.5168} & 0.5102 \\ \hline \multirow{4}{*}{Toxicity} & Accuracy & \bf{0.8786} & 0.8654 & 0.9006 & 0.8711 & 0.7278 & 0.7277 \\ & Precision & 0.5730 & 0.5571 & \bf{0.6462} & 0.5788 & 0.5325 & 0.5333 \\ & Recall & 0.5390 & 0.5404 & 0.5268 & 0.5551 & 0.5702 & \bf{0.5719} \\ & F1-score & \bf{0.5463} & 0.5456 & 0.5290 & 0.5630 & 0.5222 & 0.5230 \\ \hline \multirow{4}{*}{Hate Expert} & Accuracy & 0.8468 & 0.8555 & \bf{0.8931} & 0.8540 & 0.8309 & 0.8324 \\ & Precision & 0.6881 & 0.7187 & \bf{0.8246} & 0.7167 & 0.6447 & 0.6443 \\ & Recall & 0.6043 & 0.6172 & \bf{0.7229} & 0.6273 & 0.5949 & 0.5881 \\ & F1-score & 0.6254 & 0.6428 & \bf{0.7591} & 0.6525 & 0.6097 & 0.6030 \\ \hline \multirow{4}{*}{\textit{Hate Speech}} & Accuracy & \bf{0.8381} & 0.8422 & 0.6838 & 0.8191 & 0.7719 & 0.7701 \\ & Precision & 0.8280 & \bf{0.8332} & 0.6187 & 0.7968 & 0.7464 & 0.7446 \\ & Recall & 0.7882 & \bf{0.7932} & 0.5193 & 0.7769 & 0.7006 & 0.6973 \\ & F1-score & 0.8030 & \bf{0.8081} & 0.4602 & 0.7853 & 0.7138 & 0.7106 \\ \hline \multirow{4}{*}{StormFront} & Accuracy & 0.5544 & \bf{0.5816} & 0.5000 & 0.5565 & 0.5523 & 0.5586 \\ & Precision & 0.6858 & \bf{0.7069} & 0.5000 & 0.6593 & 0.5929 & 0.6051 \\ & Recall & 0.5544 & \bf{0.5816} & 0.5000 & 0.5565 & 0.5523 & 0.5586 \\ & F1-score & 0.4587 & \bf{0.5069} & 0.3370 & 0.4712 & 0.4974 & 0.5037 \end{tabular}% } \caption{Baseline scores for linear models trained on the \textit{Hate Speech} dataset.} \label{tab:linear_hatespeech_baselines} \end{minipage} \end{table} Third, many of the LIWC-based models post strong performances on in-domain data, suggesting that simple linear models may be highly appropriate for the LIWC-based input. Moreover, these strong in-domain performances also provide a weak suggestion that neural network architectures as most of the of the datasets that are under consideration are, in neural network terms, small datasets consisting of thousands, rather than several hundred thousand documents. Fourth, an interesting trend of the models trained on the \textit{Offence} and the \textit{Toxicity} datasets obtain surprisingly good scores on each other. There are two potential reasons for this trend, 1) the two datasets are the largest in datasets so more general patterns may be learned, 2) as the notion of ``offensive'' as constructed by \citet{Davidson:2017} bears strong similarities with the notion of ``toxicity'' constructed by \citet{Wulczyn:2017}, yielding in subsets of the datasets that strongly share similarities with each other. 
\begin{table} \centering \resizebox{0.42\paperheight}{!}{% \begin{tabular}{ll|ll|ll|ll} & & \multicolumn{2}{c|}{Word} & \multicolumn{2}{c|}{BPE} & \multicolumn{2}{c}{LIWC} \\\hline & & LR & SVM & LR & SVM & LR & SVM \\ \hline \multirow{4}{*}{Offence} & Accuracy & 0.2929 & 0.2945 & 0.2912 & 0.3142 & 0.4248 & \bf{0.6821} \\ & Precision & 0.5308 & 0.5372 & 0.5392 & 0.5398 & 0.5405 & 0.\bf{6185} \\ & Recall & 0.5300 & 0.5357 & 0.5367 & 0.5429 & 0.5643 & \bf{0.7027} \\ & F1-score & 0.2928 & 0.2944 & 0.2911 & 0.3141 & 0.4088 & \bf{0.6078} \\\hline \multirow{4}{*}{Toxicity} & Accuracy & 0.6271 & \bf{0.6393} & 0.6370 & 0.6233 & 0.4075 & 0.4606 \\ & Precision & 0.5085 & 0.5129 & 0.4987 & 0.5001 & 0.4959 & \bf{0.5208} \\ & Recall & 0.5224 & 0.5336 & 0.4967 & 0.5002 & 0.4886 & \bf{0.5587} \\ & F1-score & 0.4637 & \bf{0.4726} & 0.4576 & 0.4541 & 0.3511 & 0.3944 \\\hline \multirow{4}{*}{Hate Expert} & Accuracy & 0.7731 & 0.7702 & \bf{0.7905} & 0.7789 & 0.6792 & 0.6734 \\ & Precision & \bf{0.5181} & 0.5111 & 0.5157 & 0.4923 & 0.4983 & 0.5091 \\ & Recall & \bf{0.5144} & 0.5089 & 0.5095 & 0.4950 & 0.4976 & 0.5135 \\ & F1-score & \bf{0.5146} & 0.5085 & 0.5063 & 0.4896 & 0.4921 & 0.5020 \\\hline \multirow{4}{*}{Hate Speech} & Accuracy & 0.6696 & 0.6732 & \bf{0.6820} & 0.6803 & 0.6389 & 0.6294 \\ & Precision & 0.5821 & 0.5888 & \bf{0.6081} & 0.6032 & 0.5928 & 0.5920 \\ & Recall & 0.5438 & 0.5459 & 0.5586 & 0.5524 & 0.5969 & \bf{0.5998} \\ & F1-score & 0.5297 & 0.5312 & 0.5486 & 0.5388 & \bf{0.5943} & 0.5932 \\\hline \multirow{4}{*}{\textit{StormFront}} & Accuracy & 0.7259 & \bf{0.7427} & 0.7280 & 0.7197 & 0.6987 & 0.6820 \\ & Precision & 0.7260 & \bf{0.7428} & 0.7280 & 0.7201 & 0.7015 & 0.6830 \\ & Recall & 0.7259 & \bf{0.7427} & 0.7280 & 0.7197 & 0.6987 & 0.6820 \\ & F1-score & 0.7259 & \bf{0.7426} & 0.7280 & 0.7195 & 0.6977 & 0.6816 \end{tabular}% } \caption{Baseline scores for linear models trained on the \textit{StormFront} dataset.} \label{tab:linear_stormfront_baselines} \end{table} Finally, although these baseline models do obtain surprisingly good scores for many of the models, there are several instances of noteworthy performance drops between the word-token and BPE-token models and their LIWC counterpart, notably the LIWC-based models often perform well on some metrics, but fall short on others. For instance in \cref{tab:linear_stormfront_baselines,tab:linear_hatespeech_baselines}, the LIWC-based models perform well on recall and perform poorly on F1-score. Thus, within the space of these strong results there is room for improvement for more stable performances on other metrics. As most of the datasets are imbalanced, I focus my attention on the macro F1-score performance due to its particular ability to handle imbalanced data well and its use in the previous literature \citep{Macro F1 papers}. \subsection{Neural models} In \cref{fig:davidson_mlp_test,fig:wulczyn_mlp_test,fig:waseem_mlp_test,fig:waseem_mlp_test,fig:garcia_mlp_test,fig:davidson_lstm_test,fig:wulczyn_lstm_test,fig:waseem_lstm_test,fig:waseem_lstm_test,fig:garcia_lstm_test, fig:davidson_cnn_test,fig:wulczyn_cnn_test,fig:waseem_cnn_test,fig:waseem_cnn_test,fig:garcia_cnn_test} , I show the in-domain and out-of-domain results of using the neural network architectures described in \cref{sec:liwc_modelling} for modelling abuse using the three different document representations. The bars in each figure represent the scores of each models on the test set in question, e.g. 
in \cref{fig:davidson_mlp_test} I show the macro F1-scores achieved by all models on the test set for the \textit{Offence} dataset.

Considering the figures collectively, it is clear that in-domain models in most cases, predictably, outperform models trained on out-of-domain datasets. Interestingly, it is also clear that LIWC-based models are in many cases comparable to models using the full surface-form vocabulary. Moreover, this similarity in performance largely also holds out-of-domain, where the models remain worthy of note in comparison with in-domain models, other out-of-domain models, and the linear baselines. Curiously, in spite of improved performance on in-domain evaluation sets, there is a general trend of slightly decreased performance on out-of-domain data.

One dataset, however, is notable for its in-domain and all of its out-of-domain predictions: the \textit{StormFront} dataset. Here, linear models perform on par with, or better than, all configurations of neural models. The most likely explanation for this is the small dataset size of less than $3,000$ documents. One way to address such a shortcoming would be to increase the dataset size. Although the experiments I conduct with this dataset keep the number of documents lower than the total annotated set in order to maintain a balanced dataset, models trained on it do in some cases outperform models trained on larger datasets for out-of-domain prediction. Specifically, models trained on the \textit{StormFront} dataset have out-of-domain classification performance comparable to models trained on the \textit{Offence} dataset on the evaluation sets for the \textit{Hate Expert} and \textit{Hate Speech} datasets (see \cref{fig:waseem_mlp_test,fig:waseem_lstm_test,fig:waseem_cnn_test,fig:waseem_hovy_mlp_test,fig:waseem_hovy_lstm_test,fig:waseem_hovy_cnn_test}).

More generally, from the out-of-domain classification performances, there seems to be a correlation between annotation guideline similarity and out-of-domain performance. For instance, in \cref{fig:davidson_mlp_test,fig:davidson_lstm_test,fig:davidson_cnn_test,fig:wulczyn_mlp_test,fig:wulczyn_lstm_test,fig:wulczyn_cnn_test} I observe that models trained on the \textit{Offence} and \textit{Toxicity} datasets outperform models trained on other datasets. For both of these datasets, the governing understanding of abuse and hate speech is that not all speech that is offensive is necessarily also problematic: the \textit{Offence} dataset specifically seeks to disentangle hate from offence, and \textit{Toxicity} asks its annotators to identify the documents that may make participants in a conversation exit the conversation. Thus, for these two datasets, the governing question is not necessarily the protection of marginalised communities and identities but instead identifying a degree of acceptable abuse and hostility. In contrast, the \textit{Hate Expert} and \textit{Hate Speech} datasets seek to identify communications that are harmful to marginalised communities. Thus, it is no surprise that models trained on these two datasets transfer reasonably well to each other out-of-domain. However, here it is also clear that dataset size has an influence.
Where the \textit{Hate Speech} dataset consists of roughly $16,000$ documents, the \textit{Hate Expert} dataset consists of roughly $7,000$ documents; this influence of size is also apparent from the fact that the models trained on the \textit{Hate Speech} dataset perform better on the \textit{Hate Expert} evaluation set than the models trained on the \textit{Hate Expert} dataset perform on the \textit{Hate Speech} evaluation set. The \textit{StormFront} dataset, on the other hand, is annotated to identify deliberate attacks against ``specific group[s] of people'' on the basis of their group membership or characteristics of the group's identity \citep{Garcia:2019}. This annotation criterion forms a subset of the annotation guidelines that are used for the \textit{Hate Speech} and \textit{Hate Expert} datasets. Moreover, the collection strategies for the three datasets also share common characteristics. Where \citet{Garcia:2019} specifically seek out content from a white supremacist forum for their dataset, \citet{Waseem:2016,Waseem-Hovy:2016} sample from Twitter by searching for keywords that were likely to result in a large amount of abusive content. Thus, while the domain of the data and the annotation guidelines are not the same, there are likely to be similarities in the content and annotations produced.
\begin{figure}
\begin{minipage}{\textwidth}
\centering
\includegraphics[width=\textwidth]{all_mlp_davidson_test.pdf}
\caption{Macro F1 scores for all MLP models on the \textit{Offence} evaluation set with the standard deviation represented in error bars.}
\label{fig:davidson_mlp_test}
\vfill
\includegraphics[width=\textwidth]{all_mlp_wulczyn_test.pdf}
\caption{Macro F1 scores for all MLP models on the \textit{Toxicity} evaluation set with the standard deviation represented in error bars.}
\label{fig:wulczyn_mlp_test}
\vfill
\includegraphics[width=\textwidth]{all_mlp_waseem_test.pdf}
\caption{Macro F1 scores for all MLP models on the \textit{Hate Expert} evaluation set with the standard deviation represented in error bars.}
\label{fig:waseem_mlp_test}
\end{minipage}
\end{figure}
Attending to the questions surrounding the use of LIWC to represent documents for modelling abuse, I turn to the baseline and experimental model performances on the validation and evaluation sets of the LIWC-based models. In the validation sets for the linear baselines (see \cref{tab:liwc_baseline_linear_params}), the LIWC-based methods do not outperform any other model type, though for some datasets a highly competitive score is obtained (e.g. $0.9207$ for the LIWC-based SVM model against $0.9222$ for a word-token based SVM). For other datasets, however, the score obtained by LIWC-based models is much lower, suggesting that LIWC-based modelling may be an appropriate means of modelling abuse under some conditions. For the neural network based models a similar story presents itself, although the LIWC-based models perform reasonably well in comparison to models that use a larger vocabulary (e.g. $0.9644$ for the LIWC-based model in \cref{tab:redux_embedding_offence_params} and $0.9783$ for the BPE-based model). On the evaluation sets, however, a slightly different pattern plays out. For most evaluation datasets, at least one of the LIWC-based models outperforms some of the surface token based models, and in some cases outperforms all other models. In particular, in \cref{fig:wulczyn_mlp_test}, the in-domain LIWC-based model outperforms all other model types, though it notably posts lower scores than the out-of-domain LIWC-based \textit{Offence} model.
\begin{figure}
\begin{minipage}{\textwidth}
\centering
\includegraphics[width=\textwidth]{all_mlp_waseem_hovy_test.pdf}
\caption{Macro F1 scores for all MLP models on the \textit{Hate Speech} evaluation set with the standard deviation represented in error bars.}
\label{fig:waseem_hovy_mlp_test}
\vfill
\includegraphics[width=\textwidth]{all_mlp_garcia_test.pdf}
\caption{Macro F1 scores for all MLP models on the \textit{StormFront} evaluation set with the standard deviation represented in error bars.}
\label{fig:garcia_mlp_test}
\vfill
\includegraphics[width=\textwidth]{all_lstm_davidson_test.pdf}
\caption{Macro F1 scores for all LSTM models on the \textit{Offence} evaluation set with the standard deviation represented in error bars.}
\label{fig:davidson_lstm_test}
\end{minipage}
\end{figure}
Overall, the patterns displayed by the MLPs (see \cref{fig:garcia_mlp_test,fig:waseem_mlp_test,fig:waseem_hovy_mlp_test,fig:wulczyn_mlp_test,fig:davidson_mlp_test}) indicate that LIWC-based document representations are appropriate for the development of neural networks for abuse detection, in spite of the large reduction in vocabulary size. \zw{Revisit after models have re-run} In fact, the model performances also weakly indicate that there are benefits to be found in medium-sized and large datasets.\footnote{Medium and large are relative here to the sizes of hate speech and abuse data, often ranging from $5,000$ to $100,000$ samples, rather than large-scale datasets for computing that contain millions of samples.} Considering the LIWC-based \textit{Toxicity} and \textit{Hate Speech} models, their performance seems slightly less volatile under domain shifts in comparison to their surface token counterparts, although they still exhibit a high degree of volatility as the goals of the datasets change.
\begin{figure}
\begin{minipage}{\textwidth}
\centering
\includegraphics[width=\textwidth]{all_lstm_wulczyn_test.pdf}
\caption{Macro F1 scores for all LSTM models on the \textit{Toxicity} evaluation set with the standard deviation represented in error bars.}
\label{fig:wulczyn_lstm_test}
\vfill
\includegraphics[width=\textwidth]{all_lstm_waseem_test.pdf}
\caption{Macro F1 scores for all LSTM models on the \textit{Hate Expert} evaluation set with the standard deviation represented in error bars.}
\label{fig:waseem_lstm_test}
\vfill
\includegraphics[width=\textwidth]{all_lstm_waseem_hovy_test.pdf}
\caption{Macro F1 scores for all LSTM models on the \textit{Hate Speech} evaluation set with the standard deviation represented in error bars.}
\label{fig:waseem_hovy_lstm_test}
\end{minipage}
\end{figure}
Having established that neural network methods are appropriate for modelling with the LIWC representation, I turn to the influence of recurrence on model performance when predicting on in-domain and out-of-domain datasets, through the use of LSTM models.
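As a point of reference for the comparisons that follow, a minimal PyTorch-style sketch of such an LSTM classifier is given below. As with the MLP sketch earlier, the dimensions and the use of the final hidden state as the document representation are illustrative assumptions rather than the exact configuration used in the experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Embedding -> uni-directional LSTM -> linear output -> softmax."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=100,
                 num_classes=2, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.drop = nn.Dropout(dropout)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):            # (batch, seq_len)
        emb = self.drop(self.embedding(token_ids))
        _, (h_n, _) = self.lstm(emb)          # final hidden state
        logits = self.out(self.drop(h_n[-1]))
        return torch.softmax(logits, dim=-1)
\end{verbatim}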
In examining the test scores of the LSTM models (see \cref{fig:davidson_lstm_test,fig:wulczyn_lstm_test,fig:waseem_lstm_test,fig:waseem_hovy_lstm_test,fig:garcia_lstm_test}), it is immediately clear that in most cases the MLPs outperform the LSTMs for in-domain prediction. The exceptions to this rule are the models trained on the \textit{Toxicity} dataset, where the LSTM models outperform all MLP models. To the point of this chapter, the LSTM model trained with a LIWC representation of the \textit{Offence} dataset also outperforms all in-domain MLP models on the \textit{Toxicity} evaluation set. Moreover, comparing only the LIWC-based LSTMs with the LIWC-based MLPs, the latter consistently outperform the former across all training datasets, for most in-domain evaluations, and for all out-of-domain evaluations. Although it is slightly surprising that recurrence seems to have a small negative effect on the in-domain and out-of-domain generalisability of the models, in many cases the drops are small. For instance, for the LIWC-based models trained on the \textit{Offence} dataset, the MLP obtains a macro F1-score of $0.8987$ while the LSTM achieves a score of $0.8748$. Taken in concert with the fact that the in-domain LIWC-based LSTM model only outperforms its corresponding MLP model on the largest dataset, this suggests that the size of the dataset may have an influence on the performance of LIWC-based models.
\begin{figure}
\begin{minipage}{\textwidth}
\centering
\includegraphics[width=\textwidth]{all_lstm_garcia_test.pdf}
\caption{Macro F1 scores for all LSTM models on the \textit{StormFront} evaluation set with the standard deviation represented in error bars.}
\label{fig:garcia_lstm_test}
\vfill
\includegraphics[width=\textwidth]{all_cnn_davidson_test.pdf}
\caption{Macro F1 scores for all CNN models on the \textit{Offence} evaluation set with the standard deviation represented in error bars.}
\label{fig:davidson_cnn_test}
\vfill
\includegraphics[width=\textwidth]{all_cnn_wulczyn_test.pdf}
\caption{Macro F1 scores for all CNN models on the \textit{Toxicity} evaluation set with the standard deviation represented in error bars.}
\label{fig:wulczyn_cnn_test}
\end{minipage}
\end{figure}
For the LIWC-based CNN models, the opposite effect appears to hold: for small and medium-sized datasets, the LIWC-based CNN models improve on the in-domain evaluations. The exception to this pattern is the \textit{StormFront} dataset, where the LIWC-based MLP outperforms all other LIWC-based models. For the largest dataset, the \textit{Toxicity} dataset, the LIWC-based CNN model slightly improves on the LIWC-based MLP ($0.6722$ for the MLP and $0.6803$ for the CNN) but falls short in comparison to the LSTM model ($0.7870$). Curiously, the LIWC-based CNN outperforms all other LIWC-based models on the in-domain evaluation of the \textit{Offence} dataset. In comparison to the surface-form based CNN models, the LIWC-based models underperform, achieving lower scores in almost all cases. On the other hand, the surface-form based CNNs outperform their LSTM and MLP counterparts. Once again, such under-performance appears to be correlated with dataset size, as the drop in LIWC-based CNN performance shrinks as the datasets grow in size. Considering how CNNs identify feature maps, this intuitively makes sense, as the larger the dataset, the more variety and stability there will be in the patterns of LIWC tokens that are encountered.
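To make the role of the window sizes and filters concrete, the following is a minimal PyTorch-style sketch of the convolutional classifier described in the modelling section. The specific window sizes, the filter count, and the omission of the additional hidden layer implied by the hidden-layer-dimension hyper-parameter are illustrative simplifications rather than the exact experimental configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class CNNClassifier(nn.Module):
    """Embedding -> convolutions over token windows -> max-pool -> output."""
    def __init__(self, vocab_size, embed_dim=100, n_filters=128,
                 windows=(2, 3, 4), num_classes=2, activation=nn.ReLU):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, n_filters, (w, embed_dim)) for w in windows])
        self.act = activation()
        self.out = nn.Linear(n_filters * len(windows), num_classes)

    def forward(self, token_ids):                   # (batch, seq_len)
        x = self.embedding(token_ids).unsqueeze(1)  # (batch, 1, seq, emb)
        pooled = []
        for conv in self.convs:
            c = self.act(conv(x)).squeeze(3)        # (batch, filters, seq-w+1)
            pooled.append(torch.max(c, dim=2).values)  # max-pool over time
        logits = self.out(torch.cat(pooled, dim=1))
        return torch.softmax(logits, dim=-1)
\end{verbatim}
Note that input sequences are assumed to be at least as long as the largest window.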
\begin{figure}
\begin{minipage}{\textwidth}
\centering
\includegraphics[width=\textwidth]{all_cnn_waseem_test.pdf}
\caption{Macro F1 scores for all CNN models on the \textit{Hate Expert} evaluation set with the standard deviation represented in error bars.}
\label{fig:waseem_cnn_test}
\vfill
\includegraphics[width=\textwidth]{all_cnn_waseem_hovy_test.pdf}
\caption{Macro F1 scores for all CNN models on the \textit{Hate Speech} evaluation set with the standard deviation represented in error bars.}
\label{fig:waseem_hovy_cnn_test}
\includegraphics[width=\textwidth]{all_cnn_garcia_test.pdf}
\caption{Macro F1 scores for all CNN models on the \textit{StormFront} evaluation set with the standard deviation represented in error bars.}
\label{fig:garcia_cnn_test}
\end{minipage}
\end{figure}

\subsubsection{Computational costs}
The final motivation for using LIWC representations over surface forms is the hypothesis that, with a smaller vocabulary, the models will train faster, thus reducing the environmental impact of developing machine learning models for abuse detection. Figures \ref{fig:davidson_train_time} to \ref{fig:garcia_train_time} show the number of minutes taken for each model to train on each dataset, with the error bars representing the standard deviation across $5$ runs. First, there is a predictable correlation between the complexity of the machine learning model and the time required to train it, with the MLP models being the quickest to train and the LSTM models taking the longest, with the exception of the CNN models trained on the \textit{Hate Expert} dataset, where the LSTM and CNN models mostly take roughly the same amount of time to train (see \cref{fig:waseem_train_time}). Considering the influence of document representation on training time, the results point in multiple directions. Using LIWC-based representations results in faster training times for some models and slower training times for others. Unsurprisingly, the improvements in training time grow with the size of the surface-token vocabularies: as the size of the surface-token vocabulary increases, the training time of the LIWC-based models decreases relative to their surface-token counterparts.
\begin{figure}[h]
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{davidson_train_time.pdf}
\caption{Training time for each model type on the \textit{Offence} dataset.}
\label{fig:davidson_train_time}
\vfill
\includegraphics[width=\textwidth]{wulczyn_train_time.pdf}
\caption{Training time for each model type on the \textit{Toxicity} dataset.}
\label{fig:wulczyn_train_time}
\end{minipage}%
\hfill
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{waseem_train_time.pdf}
\caption{Training time for each model type on the \textit{Hate Expert} dataset.}
\label{fig:waseem_train_time}
\vfill
\includegraphics[width=\textwidth]{waseem_hovy_train_time.pdf}
\caption{Training time for each model type on the \textit{Hate Speech} dataset.}
\label{fig:waseem_hovy_train_time}
\end{minipage}
\vfill
\centering
\includegraphics[width=0.50\textwidth]{garcia_train_time.pdf}
\caption{Training time for each model type on the \textit{StormFront} dataset.}
\label{fig:garcia_train_time}
\end{figure}
For the medium-sized datasets, however, the relation between training time and vocabulary minimisation is less clear, as is apparent from \cref{fig:davidson_train_time,fig:waseem_train_time,fig:waseem_hovy_train_time}. For each of these datasets, some LIWC-based models train more quickly while others are slower.
In part, this appears to be connected to model complexity: the more complex the underlying model, the slower the training. For instance, in \cref{fig:waseem_train_time} the LIWC-based MLP is the quickest MLP to be trained, while the LIWC-based CNN and LSTM models are slower than all other CNN and LSTM models, respectively. \zw{Talk about training time vs. model performance}

Reflecting on the feasibility of using LIWC-based document representations for training neural networks, I turn to the two largest datasets, the \textit{Offence} and \textit{Toxicity} datasets, and the performance of LIWC-based models while bearing in mind their training time (see \cref{tab:time_spent_davidson_wulczyn}). Beyond being the two largest datasets, I choose these to compare as their operationalisations of abuse share strong similarities. Thus, one can consider each dataset a domain-shifted counterpart of the other, providing a more reasonable point of comparison than a dataset which is annotated with a fundamentally different goal. In \cref{tab:time_spent_davidson_wulczyn}, it immediately stands out that some LIWC-based models take longer to train than their surface token-based counterparts. Moreover, it is clear that many of the surface token-based models that perform very well on the in-domain evaluation set do not see that performance transfer to other datasets. Conversely, many of the LIWC-based models see a smaller drop in performance on external data. This suggests that LIWC-based modelling may capture more general patterns of abuse, which surface token-based models are otherwise prone to trade away by overfitting in pursuit of improved in-domain performance. \zw{Revisit after models have been retrained}
\begin{table}[h]
\centering
\begin{tabular}{c|llll}
Dataset & Model & \textit{Offence} & \textit{Toxicity} & Training time \\\hline
\multirow{9}{*}{\rot{\textit{Offence}}} & Word MLP & 0.9544 & 0.091 & 1m 12s \\
 & BPE MLP & 0.9726 & 0.4872 & 2m 48s \\
 & LIWC MLP & 0.8914 & 0.7383 & 3m 30s \\
 & Word LSTM & 0.8734 & 0.2662 & 2m 0s \\
 & BPE LSTM & 0.8664 & 0.5996 & 15m 12s \\
 & LIWC LSTM & 0.8747 & 0.6833 & 4m 6s \\
 & Word CNN & 0.3495 & 0.3478 & 4m 30s \\
 & BPE CNN & 0.9736 & 0.5512 & 10m 18s \\
 & LIWC CNN & 0.9565 & 0.6803 & 10m 24s \\\hline
\multirow{9}{*}{\rot{\textit{Toxicity}}} & Word MLP & 0.6312 & 0.5787 & 40m 18s \\
 & BPE MLP & 0.6012 & 0.5404 & 28m 12s \\
 & LIWC MLP & 0.8269 & 0.6722 & 12m 54s \\
 & Word LSTM & 0.5901 & 0.8281 & 4h 22m 30s \\
 & BPE LSTM & 0.6343 & 0.8612 & 5h 40m 42s \\
 & LIWC LSTM & 0.6592 & 0.6833 & 3h 25m 48s \\
 & Word CNN & 0.6905 & 0.8414 & 33m 6s \\
 & BPE CNN & 0.5867 & 0.8616 & 1h 9m 24s \\
 & LIWC CNN & 0.5164 & 0.6803 & 29m 30s
\end{tabular}%
\caption{Time to train models on the \textit{Offence} and \textit{Toxicity} datasets with their in-domain and cross-domain macro F1-scores.}
\label{tab:time_spent_davidson_wulczyn}
\end{table}
Moreover, from the table it also stands out that while the training-time gains from using the LIWC-based representation are, at best, modest for the \textit{Offence} dataset, the reduction in training time is much more substantial for the \textit{Toxicity} dataset, in many cases with only a small drop in in-domain performance. Thus, the larger the original dataset, the greater the reduction in training time that LIWC-based representations provide.
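For transparency about how such timings can be gathered, the following is a small sketch of the kind of measurement loop assumed in this comparison: each configuration is trained several times and the wall-clock time is summarised by its mean and standard deviation. Only the use of five runs and minutes as the unit mirror the set-up described above; everything else is illustrative.
\begin{verbatim}
import statistics
import time

def timed_runs(train_fn, n_runs=5):
    """Train a model n_runs times; return mean and std of minutes taken."""
    minutes = []
    for seed in range(n_runs):
        start = time.perf_counter()
        train_fn(seed)                 # any callable that trains one model
        minutes.append((time.perf_counter() - start) / 60.0)
    return statistics.mean(minutes), statistics.stdev(minutes)
\end{verbatim}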
\section{Conclusions and future work}
One of the core concerns surrounding content moderation technologies is that machine learning models for the task of identifying abuse overfit to spurious correlations and unique tokens in the datasets. In this chapter, I have sought to examine how alternative forms of document representations can alleviate such issues. In addition, I examine how a reduction in the vocabulary size can affect the time it takes to train machine learning models for detecting abuse. Through the use of LIWC, I perform a vocabulary reduction of up to $98.9\%$ of the surface-form vocabulary and show that, in spite of such a reduction, reasonable in-domain and out-of-domain model performances can be achieved. In particular, I find that out-of-domain model performances are contingent on similarities in the data sampling process or in the goals and objectives of annotating the data. For instance, the \textit{Toxicity} dataset and the \textit{Offence} dataset are sampled from two different sources, Wikipedia editor discussion pages and Twitter, respectively. However, the goals of the annotation tasks for both datasets share similarities in the operationalisation of ``toxic'' and ``offensive'', allowing models to generalise onto the out-of-domain evaluation sets.

By using simple neural network architectures, I show how LIWC-based models can reach similar levels of performance to surface token-based models. In this chapter, I make use neither of pre-trained embedding layers \citep{Park:2017,Kolhatkar:2021} nor of language models (e.g. BERT \citep{Devlin:2019}) fine-tuned to a specific task, which many contemporary models rely on \citep{Vidgen:2021,Isaksen:2020}. I avoid these as they are not compatible with the LIWC vocabulary and thus would not be applicable to the core questions in this chapter.

Addressing the second aim of this chapter, to investigate the implications of using LIWC to represent documents for the computational, and thus environmental, costs of developing machine learning models, I show that the training time of neural network models is related to the size of the surface-token vocabulary, and that models that make use of LIWC can provide competitive in-domain results and, in some instances, outperform other models on out-of-domain evaluation sets. Moreover, I find that the question of whether LIWC-based modelling consumes more or less time than surface token-based modelling can be reframed as a question of in-domain validity versus generalisability onto an unseen sample. As the goal for machine learning models is ultimately to generalise onto unseen data, where the distributions may not mirror those the models have been trained on, LIWC-based document representations may prove to be a valuable direction, as they train models to identify patterns between the cognitive processes and emotional state of the author and the output labels, while reducing the number of tokens that can act as confounding factors.

The results in this chapter have several implications for research into detecting online abuse. First, the positive results using LIWC suggest that thinking carefully about document representation and vocabulary reduction can have beneficial outcomes. Second, the generally strong performances of the LIWC-based linear baselines suggest that although the field has moved on to non-linear modelling, there is still room for improvement using classical machine learning models.
Moreover, the results for the LIWC-based models leave open questions about how the interaction with surface-form tokens would influence the in-domain and out-of-domain generalisability of machine learning models. Therefore, I plan to address these questions in future work by using pre-trained word embedding layers to examine the efficacy of combining LIWC with the surface forms of tokens, to minimise the number of unknown tokens while retaining the depth of information provided by LIWC.
{ "alphanum_fraction": 0.665247097, "avg_line_length": 135.4315352697, "ext": "tex", "hexsha": "28b25047fd6a8e329da1702ecc199bd052d4a1fb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "25f7f157a0d334e3bd55be13174b2cb1a1fd3329", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ZeerakW/Dissertation", "max_forks_repo_path": "Chapter4/chapter4.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "25f7f157a0d334e3bd55be13174b2cb1a1fd3329", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ZeerakW/Dissertation", "max_issues_repo_path": "Chapter4/chapter4.tex", "max_line_length": 1753, "max_stars_count": 1, "max_stars_repo_head_hexsha": "25f7f157a0d334e3bd55be13174b2cb1a1fd3329", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "zeeraktalat/Dissertation", "max_stars_repo_path": "Chapter4/chapter4.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-28T14:05:32.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-28T14:05:32.000Z", "num_tokens": 33222, "size": 130556 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Deedy - One Page Two Column Resume % LaTeX Template % Version 1.1 (30/4/2014) % % Original author: % Debarghya Das (http://debarghyadas.com) % % Original repository: % https://github.com/deedydas/Deedy-Resume % % IMPORTANT: THIS TEMPLATE NEEDS TO BE COMPILED WITH XeLaTeX % % This template uses several fonts not included with Windows/Linux by % default. If you get compilation errors saying a font is missing, find the line % on which the font is used and either change it to a font included with your % operating system or comment the line out to use the default font. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TODO: % 1. Integrate biber/bibtex for article citation under publications. % 2. Figure out a smoother way for the document to flow onto the next page. % 3. Add styling information for a "Projects/Hacks" section. % 4. Add location/address information % 5. Merge OpenFont and MacFonts as a single sty with options. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % CHANGELOG: % v1.1: % 1. Fixed several compilation bugs with \renewcommand % 2. Got Open-source fonts (Windows/Linux support) % 3. Added Last Updated % 4. Move Title styling into .sty % 5. Commented .sty file. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Known Issues: % 1. Overflows onto second page if any column's contents are more than the % vertical limit % 2. Hacky space on the first bullet point on the second column. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[]{deedy-resume-openfont} \usepackage{enumitem} \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % LAST UPDATED DATE % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \lastupdated %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TITLE NAME % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \namesection{}{Jack Deadman}{ %\href{mailto:}{[email protected]} Award winning graduate with experience in working in teams to deliver high quality software to solve real world problems. } \vspace{0pt}{ %A research focused student with an interest in speech and audio processing, experience in delivering high quality solutions to solve real world problems. %Award winning graduate with experience in working in teams to deliver high quality software to solve real world challenges. 
} %\namesection{Jack}{Deadman}{ \urlstyle{same}\url{http://jackdeadman.com}} %\href{mailto:}{[email protected]} %} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % COLUMN ONE % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{minipage}[t]{0.33\textwidth} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % EDUCATION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Education} \subsection{University of Sheffield} \descript{MComp in Computer Science} \location{First Class (Hons).} \location{Year 1: 82.1\%} \location{Year 2: 82.7\%} \location{Year 3: 83.2\%} \location{Year 4: 79.5\%} \sectionsep \subsection{Sussex Downs College} \location{Software Development: Distinction*} \location{Computing: A} \location{Mathematics: B} \sectionsep \subsection{Bishop Bell School} \location{11 GCSEs A*-C including Maths and English.} \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % LINKS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Links} Github:// \href{https://github.com/jackdeadman}{\custombold{jackdeadman}} \\ LinkedIn:// \href{https://www.linkedin.com/in/jack-deadman-19101297}{\custombold{jackdeadman}} \\ Twitter:// \href{https://twitter.com/jackdeadman96}{\custombold{@jackdeadman96}} \\ \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % COURSEWORK %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Projects} \subsection{University} Sound Event Detection\\ Open GL 3D Graphics Scene\\ Genesys and Software Hut \textit{(Software Engineering modules with real client)} \sectionsep \subsection{Side Projects} Natural Deduction tool\\ Quad (Custom Linux login screen)\\ Chat client with \LaTeX \ support\\ \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % SKILLS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Skills} \subsection{Programming} \location{Skilled:} Java \textbullet{} JavaScript \textbullet{} Ruby\\ Rails \textbullet{} Python \textbullet{} Numpy\\ CSS \textbullet{} Sass \textbullet{} \LaTeX\ \\ PHP \textbullet{} React \textbullet{} Node \\ \location{Familiar:} Bash \textbullet{} Haskell \textbullet{} Prolog \textbullet{} C \sectionsep \subsection{Non-Technical} Teaching\\ Presenting\\ \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % COLUMN TWO % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{minipage} \hfill \begin{minipage}[t]{0.66\textwidth} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % EXPERIENCE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Experience} \runsubsection{University of Sheffield} \descript{| Postgraduate Researcher } \location{September 2018 – Present} %\vspace{\topsep} % Hacky fix for awkward extra vertical space \begin{enumerate}[leftmargin=0.4cm, itemsep=-0.3ex, topsep=0pt] \item[--] Working in the Speech and Hearing group under the supervision of Prof. Jon Barker researching applying video information to inform beamforming algorithms for pre-processing in automatic speech recognition. \end{enumerate} \sectionsep \runsubsection{University of Sheffield} \descript{| Summer Research Intern } \location{June 2017 – July 2017} \begin{enumerate}[leftmargin=0.4cm, itemsep=-0.3ex, topsep=0pt] \item[--] Worked in the Testing and Security research group under the supervision of Dr. Achim Brucker developing analysis tools to detect malicious browser extensions. \item[--] Successfully deployed the software on around 60,000 browser extensions using the Univesity's high performance computing facility. \item[--] Successfully finished an academic paper reporting our results. 
\end{enumerate} \sectionsep \runsubsection{Visual Wind} \descript{| Software Engineer } \location{June 2016 – Sep 2016} \begin{enumerate}[leftmargin=0.4cm, itemsep=-0.3ex, topsep=0pt] \item[--] Worked as part of a small team of people to produce visualisation software for wind turbines. \item[--] Introduced agile methodologies into the company. \item[--] Produced a proof of concept application which successfully aided a joint venture secure funding. \end{enumerate} \sectionsep \runsubsection{University of Sheffield} \descript{| Student Ambassador} \location{December 2015 – June 2018} \begin{enumerate}[leftmargin=0.4cm, itemsep=-0.3ex, topsep=0pt] %\item[--] Introduce prospective students to the University on open days and assist lecturers during outreach events. \item[--] Regularly gave a 15 minute presentation to a group of around 15 students explaining what University life is like and answered queries they have. \end{enumerate} \sectionsep \runsubsection{University of Sheffield} \descript{| Student Teaching Assistant} \location{September 2016 – Present} \begin{enumerate}[leftmargin=0.4cm, itemsep=-0.3ex, topsep=0pt] \item[--] Assist in teaching undergraduate lab sessions. \item[--] Invigilate assessments. \item[--] Mark undergraduate assignments. \end{enumerate} \sectionsep \runsubsection{University of Sheffield} \descript{| Support Worker} \location{September 2017 – Present} \begin{enumerate}[leftmargin=0.4cm, itemsep=-0.3ex, topsep=0pt] \item[--] Take lecture notes for a visually impaired individual as well as assist their learning by explaining visual concepts and mathematical equations. \end{enumerate} \sectionsep \runsubsection{University of Sheffield} \descript{| Mentor} \location{September 2016 – June 2018} \begin{enumerate}[lefftmargin=0.4cm, itemsep=-0.3ex, topsep=0pt] \item[--] As a welfare mentor I helped students settle into University as they transition into a new environment. As a programming mentor I helped student settle in to Univesity academically. %\item[--] Successfully helped 6 individuals settle in with University life whilst being a welfare mentor. %\item[--] Successfully helped 2 individuals with programming difficulties whist being a programming mentor as part of the Computer Science society. \end{enumerate} \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % AWARDS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Awards} \begin{tabular}{rll} 2018 & Full Ph.D scholarship & EPSRC\\ 2018 & Mappin Medal & University of Sheffield\\ 2017 & Summer Research Scholarship & EPSRC\\ 2017 & Highly Commended paper in Computer Science & The Undergraduate awards\\ 2016 & Winning team in "Software Hut" module & Prize chosen by client \\ \end{tabular} \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % SOCIETIES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{minipage} \end{document} \documentclass[]{article}
{ "alphanum_fraction": 0.6672433679, "avg_line_length": 32.3507462687, "ext": "tex", "hexsha": "50dfbce86f6a47a172bb0b6c2284002b8a48f179", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "397a6d630947544ff65b35d774ddd9e393fbf78d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "jackdeadman/Curriculum-Vitae", "max_forks_repo_path": "main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "397a6d630947544ff65b35d774ddd9e393fbf78d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "jackdeadman/Curriculum-Vitae", "max_issues_repo_path": "main.tex", "max_line_length": 215, "max_stars_count": 1, "max_stars_repo_head_hexsha": "397a6d630947544ff65b35d774ddd9e393fbf78d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "jackdeadman/Curriculum-Vitae", "max_stars_repo_path": "main.tex", "max_stars_repo_stars_event_max_datetime": "2018-03-18T21:51:45.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-18T21:51:45.000Z", "num_tokens": 2104, "size": 8670 }
\documentclass[a4paper, 12pt]{article}
%\usepackage{enumerate}
\usepackage{graphicx}
\usepackage{url} % allow url in bibtex
\usepackage{float} % force picture to place [H] HERE
\usepackage{graphicx} %load package for graphic
\usepackage{subcaption} % two figures side by side
\usepackage{indentfirst} % indent the first paragraph in section

\title{CS412 - Final Report\\ Face expression recognition}
\date{\today}

\begin{document}

\begin{center}
\large VNUHCM - University of Science\\
Faculty of Information Technology\\
Advanced Program in Computer Science
\end{center}

\begingroup
\let\newpage\relax
\maketitle
\endgroup

\textbf{Group members:}
\begin{enumerate}
\item 1351040 : Thai Thien
\item 1351059 : Ho Ngoc Huynh Mai
\end{enumerate}

\section{Introduction}
We choose to work on face expression recognition. Our implementation is based on the paper Robust Facial Expression Classification Using Shape and Appearance Features by S L Happy and Aurobinda Routray \cite{7050661}. We adopt the preprocessing process and the feature extraction, but we do not intend to work on extracting active patches. For the model, we plan to implement two models: a Support Vector Machine (as in \cite{7050661}) and a simple neural network.

\section{Project Details}
The purpose of this project is to implement a system that recognizes facial expressions, including, but not limited to, anger, disgust, fear, happiness, sadness and surprise. \\
Input: An image of someone's face. \\
Output: The facial expression. \\

\section{Methodology}
\subsection{Preprocessing}
\label{sec:preprocessing}
\subsubsection{Gaussian blur}
\label{sec:preprocessing_blur}
By default, OpenCV loads images with BGR channel ordering. We convert the image to grayscale, then apply a Gaussian blur with kernel size 5x5 and $\sigma = 1$ to filter out noise.

\subsubsection{Face Detection}
\label{sec:preprocessing_face_detect}
The purpose of this step is to detect the face in the input image. We use the image smoothed in the step above as input for face detection. We use Haar feature-based cascade classifiers \cite{viola2001rapid}, with the pre-trained model haarcascade\_frontalface\_default.xml provided by OpenCV \cite{bradski2000opencv}.

\subsubsection{Extract and resize}
\label{sec:preprocessing_resize}
We create the region of interest (ROI) from the face position detected by the Haar classifier, crop a new image from the ROI, and resize it to 96x96.

\subsubsection{Result}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=0.9\textwidth]{./raw/img1.png}
\end{subfigure}
%
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=0.9\textwidth]{./raw/img2.png}
\end{subfigure}
\caption[]{Before preprocessing}
\label{fig:beforepre}
\end{figure}

\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=0.9\textwidth]{./processed/img1.png}
\end{subfigure}
%
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=0.9\textwidth]{./processed/img2.png}
\end{subfigure}
\caption[]{After preprocessing}
\label{fig:afterpre}
\end{figure}

\subsubsection{Detect and extract facial patches}
\label{sec:preprocessing_facial_patches}
We use Haar feature-based cascade classifiers \cite{viola2001rapid} with the OpenCV \cite{bradski2000opencv} pre-trained classifiers for the eyes, nose and mouth. For each face, we extract two eyes, one nose and one mouth, and save them to the same folder. For faces where we cannot extract all 4 patches, we discard the sample.
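As a rough illustration of the preprocessing pipeline above, the following OpenCV sketch performs the blur, face detection, cropping and resizing steps. The detection parameters passed to detectMultiScale and the file handling are assumptions made for illustration; only the cascade file name, the kernel size, $\sigma$ and the output size come from the description above.
\begin{verbatim}
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(path, size=96):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1)      # 5x5 kernel, sigma = 1
    faces = face_cascade.detectMultiScale(blurred, 1.3, 5)
    if len(faces) == 0:
        return None                                  # no face detected
    x, y, w, h = faces[0]                            # first detected face
    roi = blurred[y:y + h, x:x + w]                  # region of interest
    return cv2.resize(roi, (size, size))
\end{verbatim}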
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{./raw/KAAN139.png}
\caption[]{Original image}
\label{fig:original_image}
\cite{lbptutorial}
\end{figure}

\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=0.9\textwidth]{./processed/eye0.png}
\end{subfigure}
%
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=0.9\textwidth]{./processed/eye1.png}
\end{subfigure}
%
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=0.9\textwidth]{./processed/nose.png}
\end{subfigure}
%
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=0.9\textwidth]{./processed/mouth.png}
\end{subfigure}
\caption[]{Facial patches}
\label{fig:facial_patches}
\end{figure}

\subsection{Feature selection}
In this step we select features to represent the image. Local Binary Patterns (LBP) are a type of texture descriptor \cite{pietikainen2010local}. Before constructing the LBP, the image must be converted to grayscale; we already did that in \ref{sec:preprocessing}. For each pixel in our image, we consider its 8 neighbor pixels.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{./figure/lbp_thresholding.jpg}
\caption[]{Local binary patterns (LBP)}
\label{fig:lbp}
\cite{lbptutorial}
\end{figure}
We compare the intensity of the center pixel with each of its neighbors. Whenever the intensity of a neighbor pixel is greater than or equal to that of the middle pixel, we set it to 1, otherwise we set it to 0. We then take the 8-bit representation of the neighbor pixels and convert it into a decimal value. For example, in \textbf{figure \ref{fig:lbp}}, we have $00010111_2 = 23_{10}$. We can then store the values in another 2D array as in \textbf{figure \ref{fig:lbp_to_output}}. There are 8 neighbor pixels, therefore there are $2^8 = 256$ different LBP patterns.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{./figure/lbp_to_output.jpg}
\caption[]{Local binary patterns (LBP)}
\label{fig:lbp_to_output}
\cite{lbptutorial}
\end{figure}
Finally, we calculate the histogram of pattern occurrences and treat this histogram as our feature vector. For the Local Binary Patterns Histogram (LBPH) implementation, we use the scikit-image library \cite{van2014scikit}.

\subsubsection{Feature extraction for full face}
\label{sec:extract_full}
We take a full face preprocessed as in \ref{sec:preprocessing_blur}, \ref{sec:preprocessing_face_detect} and \ref{sec:preprocessing_resize}, and then extract the LBPH feature from that image to form the feature vector of a sample.

\subsubsection{Feature extraction for facial patches}
\label{sec:extract_patches}
For each sample image, we apply the preprocessing in \ref{sec:preprocessing_blur}, \ref{sec:preprocessing_face_detect}, \ref{sec:preprocessing_resize} and \ref{sec:preprocessing_facial_patches} to obtain 4 patches. We then extract the LBPH feature for each patch and concatenate the feature vectors of the patches to create the feature vector of a sample.

\subsection{The model}
\subsubsection{Support Vector Machine}
\label{sec:model_svm}
The model is a classifier which takes a feature vector as input and outputs the class it belongs to. There are 7 classes: anger, disgust, fear, happiness, sadness, surprise and neutral. We implement One-Against-One (OAO) Support Vector Machines; since each classifier separates one pair of classes, we need a total of $\binom{7}{2} = 21$ OAO SVMs for the 7 classes. We use scikit-learn \cite{pedregosa2011scikit} to implement our SVM model. We use 3 different kernels: linear, polynomial and RBF. For each kernel, we run hyperparameter tuning for recall and precision.
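The feature extraction and classification steps can be sketched as follows. This is a simplified illustration under assumptions: the LBP parameters (8 points, radius 1, the default pattern encoding) and the use of scikit-learn's SVC, which trains its multi-class models one-against-one internally, stand in for the exact settings selected by the hyperparameter search.
\begin{verbatim}
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_img, n_points=8, radius=1):
    # 8 neighbors -> 256 possible patterns, summarised as a histogram
    lbp = local_binary_pattern(gray_img, n_points, radius)
    hist, _ = np.histogram(lbp.ravel(), bins=256, range=(0, 256),
                           density=True)
    return hist

# X: preprocessed 96x96 grayscale faces, y: one of the 7 expression labels
# X_feat = np.array([lbp_histogram(img) for img in X])
# clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X_feat, y)
\end{verbatim}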
\subsection{Experiment results}
\subsubsection{Experiment 1}
We set up this experiment using the pipeline of \ref{sec:preprocessing_blur}, \ref{sec:preprocessing_face_detect}, \ref{sec:preprocessing_resize}, \ref{sec:extract_full} and \ref{sec:model_svm}.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{./figure/experiment_result.png}
\caption[]{Experiment results on the full face}
\label{fig:result}
\end{figure}

\subsubsection{Experiment 2}
We set up this experiment using the pipeline of \ref{sec:preprocessing_blur}, \ref{sec:preprocessing_face_detect}, \ref{sec:preprocessing_resize}, \ref{sec:preprocessing_facial_patches}, \ref{sec:extract_patches} and \ref{sec:model_svm}.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{./figure/experiment2.png}
\caption[]{Experiment results on the facial patches}
\label{fig:result2}
\end{figure}

\section{Dataset}
Dataset: JAFFE \cite{lyons1998japanese}.

\bibliographystyle{ieeetr}
\bibliography{Bibtex}
\end{document}
{ "alphanum_fraction": 0.767542237, "avg_line_length": 40.1435643564, "ext": "tex", "hexsha": "cc38bfc6ce714b73408ca92689928f2063d786d1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "89aa252f349561b6d4a6a4a9c8d0ae6e00429f92", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ttpro1995/CV_FinalProject", "max_forks_repo_path": "Doc/ReportFinal/finalreport.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "89aa252f349561b6d4a6a4a9c8d0ae6e00429f92", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ttpro1995/CV_FinalProject", "max_issues_repo_path": "Doc/ReportFinal/finalreport.tex", "max_line_length": 533, "max_stars_count": 1, "max_stars_repo_head_hexsha": "89aa252f349561b6d4a6a4a9c8d0ae6e00429f92", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ttpro1995/CV_FinalProject", "max_stars_repo_path": "Doc/ReportFinal/finalreport.tex", "max_stars_repo_stars_event_max_datetime": "2018-01-16T17:29:34.000Z", "max_stars_repo_stars_event_min_datetime": "2018-01-16T17:29:34.000Z", "num_tokens": 2309, "size": 8109 }
\chapter{Miscellaneous}
\section{Some Math}
\renewcommand{\thepage}{}
Again, we present a small mathematical example. Get familiar with this syntax in order to create your own formulas\footnote{\url{https://en.wikibooks.org/wiki/LaTeX/Mathematics}}.
\begin{equation}
\varpi_q(k,l)=\begin{cases}
&1\quad \quad G_q(k,l)>\rho_{\text{th}}\\
&0\quad \quad \text{otherwise}\\
\end{cases}
\end{equation}
with $\rho_{\text{th}}$ defined as a constant threshold. The remixing error spreading is performed as follows:
\begin{eqnarray}
\hat{X}^{(i+1)}_q(k,l)&=&{\varpi}_q(k,l)\left(\tilde{{X}}^{(i)}_q(k,l)+\frac{E^{(i)}(k,l)}{\sum_{q=1}^Q{{\varpi}_q(k,l)}}\right)\nonumber\\
\tilde{\mathbf{X}}^{(i)}&=&\mathcal{G}(\hat{X}_q^{(i)}),
\end{eqnarray}
where $\sum_{q=1}^Q{{\varpi}_q(k,l)}$ accounts for the overall contributions of the sources in the time-frequency error distribution, and the remixing error $E^{(i)}(k,l)$ is defined as
\begin{equation}
E^{(i)}(k,l)=Y(k,l)-\sum_{q=1}^{Q}{\hat{X}_q(k,l)}.
\end{equation}
\begin{mdframed}
\begin{lstlisting}[caption={A new math example}]
\begin{equation}
\varpi_q(k,l)=\begin{cases}
&1\quad \quad G_q(k,l)>\rho_{\text{th}}\\
&0\quad \quad \text{otherwise}\\
\end{cases}
\end{equation}
with $\rho_{\text{th}}$ defined as a constant threshold.
The remixing error spreading is performed as follows:
\begin{eqnarray}
\hat{X}^{(i+1)}_q(k,l)&=&{\varpi}_q(k,l)\left(\tilde{{X}}^{(i)}_q(k,l)+\frac{E^{(i)}(k,l)}{\sum_{q=1}^Q{{\varpi}_q(k,l)}}\right)\nonumber\\
\tilde{\mathbf{X}}^{(i)}&=&\mathcal{G}(\hat{X}_q^{(i)}),
\end{eqnarray}
where $\sum_{q=1}^Q{{\varpi}_q(k,l)}$ accounts for the overall contributions of the
sources in the time-frequency error distribution, and the remixing error
$E^{(i)}(k,l)$ is defined as
\begin{equation}
E^{(i)}(k,l)=Y(k,l)-\sum_{q=1}^{Q}{\hat{X}_q(k,l)}.
\end{equation}
\end{lstlisting}
By adding a label to your equation, you will be able to refer to the equation within the text!
\end{mdframed}

\section{Program code / listing}
Three types of source code listings are supported: code snippets, code segments, and listings of stand-alone files. Snippets are placed inside paragraphs and the others as separate paragraphs; the difference is the same as between text-style and display-style formulas\footnote{\url{https://en.wikibooks.org/wiki/LaTeX/Source_Code_Listings}}.
In the following, we will give you a short introduction to all three code listing types.
\subsection{Individually added program code}
\begin{lstlisting}[caption = {Single code}]
#include <stdio.h>
#define N 10
int main()
{
  int i;
  puts("Hello world!");
  for (i = 0; i < N; i++)
  {
    puts("LaTeX is also great for programmers!");
  }
  return 0;
}
\end{lstlisting}
\begin{mdframed}
\begin{lstlisting}[caption = {Example of \emph{Single code}}]
\begin{lstlisting}[caption = {Single code}]
#include <stdio.h>
#define N 10
int main()
{
  int i;
  puts("Hello world!");
  for (i = 0; i < N; i++)
  {
    puts("LaTeX is also great for programmers!");
  }
  return 0;
}
\end {lstlisting}
\end{lstlisting}
\end{mdframed}

\subsection{Add specific code file}
\lstinputlisting[language=C, caption = {Same code but now we added the code file instead of copying the code into latex}]{code/test.c}
\begin{mdframed}
\begin{lstlisting}[caption = {Example of \emph{Adding specific code file}}]
\lstinputlisting[language=C, caption = {Same code but now we added the code file
instead of copying the code into latex}]{code/test.c}
\end{lstlisting}
\end{mdframed}

\subsection{Scope on specific code file}
\lstinputlisting[language=C, firstline=6, lastline=13, caption = {Scope on specific code file}]{code/test.c}
\begin{mdframed}
\begin{lstlisting}[caption = {Example of \emph{Scope on specific code file}}]
\lstinputlisting[language=C, firstline=6, lastline=13, caption = {Scope on
specific code file}]{code/test.c}
\end{lstlisting}
\end{mdframed}
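\subsection{Inline code snippet}
The listing types above are all display style. As an added illustration (assuming the default \texttt{listings} setup used in this document), a snippet can also be placed directly inside a paragraph with \texttt{\textbackslash lstinline}; for example \lstinline|printf("Hello world!");| stays in the text flow.
\begin{mdframed}
\begin{lstlisting}[caption = {Example of an \emph{inline snippet}}]
A snippet can be placed inside a paragraph with \lstinline; for example
\lstinline|printf("Hello world!");| stays in the text flow.
\end{lstlisting}
\end{mdframed}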
{ "alphanum_fraction": 0.687849517, "avg_line_length": 34.814159292, "ext": "tex", "hexsha": "b292d18f24e2792a7a4e062ff8e62a8c87460e8c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "572c39fa3c3926f327d9f1f4fd8e670d050e93bf", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "thealexinator2904/Industrial-automation", "max_forks_repo_path": "sensorik/Chapters/appendixA.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "572c39fa3c3926f327d9f1f4fd8e670d050e93bf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "thealexinator2904/Industrial-automation", "max_issues_repo_path": "sensorik/Chapters/appendixA.tex", "max_line_length": 184, "max_stars_count": null, "max_stars_repo_head_hexsha": "572c39fa3c3926f327d9f1f4fd8e670d050e93bf", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "thealexinator2904/Industrial-automation", "max_stars_repo_path": "sensorik/Chapters/appendixA.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1332, "size": 3934 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% CS624: Analysis of Algorithms
% Copyright 2015 Pejman Ghorbanzade <[email protected]>
% Creative Commons Attribution-ShareAlike 4.0 International License
% More info: https://github.com/ghorbanzade/beacon
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section*{Question 4}
Prove that if $f = O(g)$ and $g = O(h)$ then $f = O(h)$.

\subsection*{Solution}
Since $f = O(g)$, there are constants $c$ and $x_0$ such that
\begin{equation}
f(x) \leq c \times g(x)
\label{eq9}
\end{equation}
for any $x \geq x_0$.
Since $g = O(h)$, there are constants $d$ and $x_1$ such that
\begin{equation}
g(x) \leq d \times h(x)
\end{equation}
for any $x \geq x_1$.
We take $x_2 = \max(x_0,x_1)$ and $e = c \times d$ as constants. Applying the first bound and then the second, the following holds for any $x \geq x_2$:
\begin{equation}
f(x) \leq c \times g(x) \leq c \times d \times h(x) = e \times h(x).
\end{equation}
By the definition of Big-Oh, $f = O(h)$ is proved.
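As a concrete instance of this transitivity (an added illustration, not part of the original question): take $f(x)=x$, $g(x)=x^2$ and $h(x)=x^3$. Then $f = O(g)$ with $c=1$, $x_0=1$, and $g = O(h)$ with $d=1$, $x_1=1$; the argument above gives $x \leq x^2 \leq x^3$ for every $x \geq 1$, hence $x = O(x^3)$ with $e=1$ and $x_2=1$.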
{ "alphanum_fraction": 0.5831683168, "avg_line_length": 30.6060606061, "ext": "tex", "hexsha": "bd2e6fe19a829139ad7f7071c623bad0ced0c137", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-12-06T17:18:05.000Z", "max_forks_repo_forks_event_min_datetime": "2019-09-20T05:58:32.000Z", "max_forks_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ghorbanzade/beacon", "max_forks_repo_path": "umb-cs624-2015s/src/tex/hw01/hw01q04.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ghorbanzade/beacon", "max_issues_repo_path": "umb-cs624-2015s/src/tex/hw01/hw01q04.tex", "max_line_length": 116, "max_stars_count": 2, "max_stars_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ghorbanzade/beacon", "max_stars_repo_path": "umb-cs624-2015s/src/tex/hw01/hw01q04.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-01T11:16:51.000Z", "max_stars_repo_stars_event_min_datetime": "2019-11-13T20:00:10.000Z", "num_tokens": 333, "size": 1010 }
\documentclass[12pt]{report}
\usepackage{graphicx}
\graphicspath{ {images/} }
\usepackage[utf8]{inputenc}
\usepackage[a4paper,width=150mm,top=25mm,bottom=25mm]{geometry}

%bibliography
\usepackage[american]{babel}
\usepackage{csquotes}
\usepackage[style=mla,backend=biber,autocite=footnote]{biblatex}
\addbibresource{mla-test-bib.bib}
\DeclareAutoCiteCommand{footnote}[f]{\footcite}{\footcites}

\newcommand{\mychapter}[2]{
    \setcounter{chapter}{#1}
    \setcounter{section}{0}
    \chapter*{#2}
    \addcontentsline{toc}{chapter}{#2}
}

\begin{document}

\begin{titlepage}
	\centering
	\includegraphics[width=0.15\textwidth]{example-image-1x1}\par\vspace{1cm}
	{\scshape\LARGE Vienna International School \par}
	\vspace{1cm}
	{\scshape\Large Extended Essay \par}
	\vspace{1.5cm}
	{\huge\bfseries Predicting the breakdown probability for a plasma discharge in the GOLEM tokamak\par}
	\vspace{2cm}
	{\Large\itshape Jakub Mandula\par}
	\vfill
	supervised by\par
	Marcia \textsc{Clarke}

	\vfill

	% Bottom of the page
	{\large \today\par}
\end{titlepage}

\mychapter{0}{Abstract}
This is my abstract

\tableofcontents

\mychapter{1}{Introduction}
\input{chapters/introduction}

\mychapter{2}{Background information}
\input{chapters/background}

\mychapter{3}{Theory}
\input{chapters/theory}

\mychapter{4}{Conclusion}
\input{chapters/conclusion}

\mychapter{5}{Bibliography}
\printbibliography

\appendix
\mychapter{6}{Appendix Title}
\input{chapters/appendixA}

\end{document}
{ "alphanum_fraction": 0.7559002023, "avg_line_length": 20.5972222222, "ext": "tex", "hexsha": "1b01f7005201c0e7adfb3625abe8018b0a463a1a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7f4c0ed950b0c1695d4358f8c500e86ea918ea3f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "zpiman/EE", "max_forks_repo_path": "Essay/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7f4c0ed950b0c1695d4358f8c500e86ea918ea3f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "zpiman/EE", "max_issues_repo_path": "Essay/main.tex", "max_line_length": 102, "max_stars_count": null, "max_stars_repo_head_hexsha": "7f4c0ed950b0c1695d4358f8c500e86ea918ea3f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "zpiman/EE", "max_stars_repo_path": "Essay/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 487, "size": 1483 }
\documentclass{article}
\usepackage{fullpage}
\usepackage[top=2.1cm, bottom=2.1cm, left=1cm, right=1cm]{geometry}
\begin{document}

\section*{Runoffs}
\subsection*{sbcrnf.F90}
From sbc\_rnf, read the runoff temperature into sf\_t\_rnf:
\begin{verbatim}
IF( ln_rnf_tem ) CALL fld_read ( kt, nn_fsbc, sf_t_rnf ) ! idem for runoffs temperature if required
\end{verbatim}
Set the ``heat'' ($t \times$ water flux) added into rnf\_tsc:
\begin{verbatim}
IF( ln_rnf_tem ) THEN ! use runoffs temperature data
  rnf_tsc(:,:,jp_tem) = ( sf_t_rnf(1)%fnow(:,:,1) ) * rnf(:,:) * r1_rau0
\end{verbatim}
\subsection*{trasbc}
Add the ``heat'' to the temperature trend and distribute it in depth according to h\_rnh:
\begin{verbatim}
tsa(ji,jj,jk,jp_tem) = tsa(ji,jj,jk,jp_tem) &
  & + ( rnf_tsc_b(ji,jj,jp_tem) + rnf_tsc(ji,jj,jp_tem) ) * zdep
\end{verbatim}
\subsection*{tranxt}
Include it in the Asselin filter.

\section*{Sensible/LW Heat Flux}
\subsection*{sbcssm.F90}
In sbc\_oce, sst\_m is calculated from tsn, using eos\_pt\_from\_ct if necessary:
\begin{verbatim}
zts(ji,jj,jp_tem) = tsn(ji,jj,mikt(ji,jj),jp_tem)
...
IF( ln_useCT ) THEN ; sst_m(:,:) = eos_pt_from_ct( zts(:,:,jp_tem), zts(:,:,jp_sal) )
\end{verbatim}
\subsection*{sbcblk\_core.F90}
This sst\_m is then passed to sbc\_blk\_core as pst and used in blk\_oce\_core to calculate qns, e.g.
\begin{verbatim}
zst(:,:) = pst(:,:) + rt0 ! convert SST from Celcius to Kelvin (and set minimum value far above 0 K)
...
zqlw(:,:) = (sf(jp_qlw)%fnow(:,:,1) - Stef * zst(:,:)*zst(:,:)*zst(:,:)*zst(:,:) ) * tmask(:,:,1) ! Long Wave
\end{verbatim}
So, river runoffs need to be in Conservative Temperature.

\end{document}
{ "alphanum_fraction": 0.6605609616, "avg_line_length": 31.1964285714, "ext": "tex", "hexsha": "4d18f8b503a3ce3a2e9a7776c3fe8426c46b4f66", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "52633f4fe82af6d7c69dff58f69f0da4f7933f48", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "SalishSeaCast/analysis-susan", "max_forks_repo_path": "TEOS10_runoff_tracking.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "52633f4fe82af6d7c69dff58f69f0da4f7933f48", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "SalishSeaCast/analysis-susan", "max_issues_repo_path": "TEOS10_runoff_tracking.tex", "max_line_length": 111, "max_stars_count": null, "max_stars_repo_head_hexsha": "52633f4fe82af6d7c69dff58f69f0da4f7933f48", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "SalishSeaCast/analysis-susan", "max_stars_repo_path": "TEOS10_runoff_tracking.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 601, "size": 1747 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Friggeri Resume/CV
% XeLaTeX Template
% Version 1.0 (5/5/13)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original author:
% Adrien Friggeri ([email protected])
% https://github.com/afriggeri/CV
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
% \addbibresource{bibliography.bib}
% Important notes:
% This template needs to be compiled with XeLaTeX and the bibliography, if used,
% needs to be compiled with biber rather than bibtex.
% Specify the bibliography file to include publications
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\RequirePackage{fixltx2e}
\documentclass[]{friggeri-cv} % Add 'print' as an option into the square bracket to remove colors from this template for printing

\usepackage{rotating}
\usepackage{unicode-math}

\begin{document}

\header{rui}{Oliveira}{Software Engineer} % Your name and current job title/field

%----------------------------------------------------------------------------------------
%	SIDEBAR SECTION
%----------------------------------------------------------------------------------------

\begin{aside} % In the aside, each new line forces a line break
\section{info}
Rui Pedro A. Oliveira
Portuguese, EU
1994.11.10
Male
MBTI: INFP
\section{contact}
Based in Porto, Portugal
~
+(351) 915015377
~
\href{mailto:[email protected]}{[email protected]}
~
\href{http://ruioliveiras.com}{http://ruioliveiras.com}
~
\section{languages}
portuguese
english
\section{Tech}
Scala, Java, Akka,
Kubernetes, Helm,
SQL, NoSQL (Couchbase)
Git, Linux {\color{red} $\varheartsuit$}
A little bit of Bash
~
\end{aside}

%---------------------------------------------------------------------------------------
%	Resumo
%---------------------------------------------------------------------------------------

\section{aboutMe}

I have been a programmer since 2010; my first steps were in high school, in a programming course. I continued my studies and earned a BSc in Informatics Engineering.
I started working as a full-stack developer in a research and development role, and in 2016 I went to Web Summit.
In 2017 I started working at Jumia as a full-stack developer, where I learned best practices in a large enterprise.
I am currently working at dott.pt, a Portuguese marketplace, where I have grown a lot: I started as a backend developer in a team of 3, became the leader of a team, and still help other teams.

% Currently attending the 3rd year of the BSc in Informatics Engineering at Minho University (Braga, Portugal),
% after concluding a high school professional programming course. Throughout the BSc, the students are required
% to do several practical assignments, being some of them detailed in the portfolio below.
% The high school final project is also highlighted. In addition to a good educational background, a short term
% summer internship was held at the end of the 2nd year of study.

%Student of BSc in Informatics Engineering (some practical works on my Portfolio page)\\
High school: Professional course in programming (internship and final project).
%---------------------------------------------------------------------------------------
%	WORK EXPERIENCE SECTION
%---------------------------------------------------------------------------------------

\section{experience}

\begin{entrylist}
%------------------------------------------------
\entry
{2018 now}
{dott.pt - Portuguese Marketplace }
{Porto, Portugal}
{\emph{\textbf{Team Leader - 2021/01 - now}} \\
Managing a team of 3 backend developers; we worked in Scrum, first using Jira and then ClickUp.
\begin{itemize}
\item We normally had one bigger project per sprint, plus room for quick wins.
\item Team members participated in the analysis of new features.
\item We worked with a PO, and sometimes directly with the stakeholders.
\end{itemize}
\emph{\textbf{Senior Backend - 2018/06 - 2021/01}} \\
I was one of the 3 backend developers who started the marketplace from scratch*:
\begin{itemize}
\item Scala, Lagom (microservice framework)
\item Akka Streams
\item Strongly oriented towards events, asynchronous processing and parallelism.
\end{itemize}
{\footnotesize *with some parts produced by a software factory in Go. }
\emph{\textbf{Backend Architect 2018-now \normalfont{Un-official} }} \\
Even though I never had the title of \textit{Backend Architect}, I was always responsible for finding solutions to core problems and defining best practices.
\begin{itemize}
\item Authentication and authorization, with plans to move them into an API gateway
\item Log traceability between services: every log had a TraceId, RequestId...
\item Adapter layer for Couchbase: we could run N1QL on Couchbase or "mocked in memory" for unit tests
\item Components to consume messages using Pub/Sub, or in memory for unit tests.
\item Resilient consumers: on failure, the consumer would recover the step
\item Components to validate JSON dynamically
\end{itemize}
\emph{\textbf{DevOps 2018-now \normalfont{Un-official}}} \\
I never had the title of DevOps either, but I also worked a lot on this front:
\begin{itemize}
\item Google cloud computing (GCP), staging and production
\item Managing Kubernetes, creation of Helm deployments
\item Managing the Couchbase cluster, first on Kubernetes, then on VM instances.
\item Building continuous integration, first in Cloud Build, then in CircleCI.
\item Script utilities to help the team manage production / staging from their machines
\end{itemize}
{\footnotesize Until very recently there was no person dedicated to DevOps. }
}
\end{entrylist}

\newpage
\section{experience}
\begin{entrylist}
\entry
{2017 2018}
{Jumia - African Marketplace }
{Porto,Portugal}
{\emph{Software Engineer - Full-Stack Developer (1 year 4 months)} \\
Development of the Order Management System (OMS), in an agile team.
\begin{itemize}
\item Microservice architecture in Java
\item Legacy support of the old monolithic PHP application
\item Code evangelist – introduced the teams to best practices and helped my colleagues adopt them too.
\end{itemize}
keywords: Play Framework, Akka, Java, Scala, SBT, Rabbit, MariaDb, NoSQL (couchbase / mongodb / redis)
}
\entry
{2015 2016}
{Impacting Group }
{Porto,Portugal}
{\emph{Software Engineer - Full-Stack Developer (1 year 7 months)} \\
% Development of Saas4saaS. Saas4saaS is a set of services to create Software as a service, the implemented services was User management and Subscription Management.
% I was responsible
% R\&D about the possibility of create a new product.
% Development of saas4saas, a project made in scala using Spray and Akka.
Research and development of Software as a Service:
\begin{itemize}
\item Centralized log solution (working prototype)
\item Centralized user management (production ready)
\item Subscription management (production ready)
\end{itemize}
keywords: Java, Spring, Maven, Scala, Akka, Play, SBT, NoSQL (couchbase), Swagger
}
\entry
{2014}
{Impacting group - {\normalfont Wondeotec: emailbidding.com}}
{Porto,Portugal}
{\emph{Summer Internship (2 months)} \\
Participation in the development of EmailBidding, an email marketing platform, made in Symfony2.
Highlights: PHP, Symfony2, team work with Scrum and JIRA.
}
%	\entry
%		{2012}
%		{Codevision}
%		{Braga,Portugal}
%		{\emph{Professional Internship (2 months)} \\
% Development of a mobile web application (Javascript - Sencha Touch framework, Visual Studio, MVC Architecture)
%		}
\end{entrylist}

% --------
% New Page
% -------
%\newpage

%---------------------------------------------------------------------------------------
%	EDUCATION SECTION
%---------------------------------------------------------------------------------------

\section{education}

\begin{entrylist}
\entry
{2021}
{Leadership and Negotiation Skills Training (total 40h)}
{Business School - one of the top Portuguese management universities}
{
\textbf{Leadership}: MBTI; kinds of leadership and the elements of the team; how important it is for the leader to adapt to the team members.

\textbf{Negotiation skills}: Two sisters want an orange, but there is only one at home. There are 3 solutions to this problem:
\begin{itemize}
\item One of the sisters takes the orange, the other doesn't (win-lose)
\item They split the orange into two halves; neither gets the full orange, but it is good enough (win-win)
\item They understand each other's needs and discover that one wants the orange peel while the other wants to make orange juice (100\% win-win)
\end{itemize}
The art of negotiation is really about understanding the other's needs and finding the best win-win.
}
% LICENCIATURA------------------------------------------
\entry
{2012--2015}
{BSc in Informatics Engineering }
{University of Minho, Portugal}
{
Comprehensive and prestigious undergraduate course in computer science and informatics engineering.
Highlights: object-oriented, functional and imperative programming; algorithms and data structures; relational database architecture and management; distributed systems - concurrency and synchronization.
Degree grade: 16
}
%CURSO PROFISSIONAL------------------------------------------
\entry
{2009--2012}
{Professional Course {\normalfont High School}}
{Didaxis, Portugal}
{\emph{Professional Course in Management and Programming of Computer Systems}.
Highlights: introduction to the C, .Net and Java programming languages.
Final project: development of an Android application (see below in awards).
Internship: Codevision (see below in experience).
Degree grade: 17}
%------------------------------------------------
\end{entrylist}

% --------
% New Page
% -------
% \newpage

%-------------------------------------------------------------------------------------
%	AWARDS SECTION
%-------------------------------------------------------------------------------------

\section{other skills}

\begin{entrylist}
%------------------------------------------------
\entry
{2016}
{Market Prospection: Web Summit 2016}
{Lisbon, Portugal}
{ I went to Web Summit 2016 to get market feedback about the SaaS created by my company, the Impacting Group. }
\entry
{2014}
{Speaking: Php Introduction}
{University of Minho, Portugal}
{Workshop-based course. Subjects: HTML, PHP and the Laravel Framework.}
\entry
{2013}
{Entrepreneur: My Project is Entrepreneur}
{Municipal award, Vila Nova de Famalicão (Portugal)}
{Final project of the Professional Course, an Android application. It won an entrepreneurship competition. }
% Android Application to manage people wardrobe.}
%------------------------------------------------
% \entry
% {2012}
% {TECLA (1\textsuperscript{st} place)}
% {Programming Competition, Aveiro (Portugal)}
% {TECLA is a programming competition for teams of two elements. The objective is to solve algorithmic challenges.}
%------------------------------------------------
\entry
{2011}
{Competitor: ONI (5\textsuperscript{th} place)}
{Programming Competition, Porto (Portugal)}
{ONI is the Portuguese National Olympiad in Informatics.}
\end{entrylist}

%----------------------------------------------------------------------------------------
%	COMMUNICATION SKILLS SECTION
%----------------------------------------------------------------------------------------

%\section{speaking}
%\begin{entrylist}
%	\entry
%		{2014}
%		{Php Introduction}
%		{University of Minho, Portugal}
%		{Workshop based course. Subjects: HTML, PHP and Laravel framework.}
%\end{entrylist}

\section{portfolio {\normalfont \normalsize (\href{https://bitbucket.org/ruioliveiras}{https://bitbucket.org/ruioliveiras})}}

\begin{entrylist}
% SEI
\entry
{2016}
{Personal Blog {\normalfont \normalsize (\href{http://ruioliveiras.com}{http://ruioliveiras.com})}}
{Porto, Portugal}
{ I built this blog using a static site generator (initially HarpJS, now Gatsby). I am writing a Scala tutorial at the following link:
\href{http://ruioliveiras.com/scala}{http://ruioliveiras.com/scala}
% My blog is responsible, the side-menu in mobile collapse. And support a summary in the posts.
}
\entry
{2015}
{Musical challenge application Client/Server}
{University of Minho, Portugal}
{ Application created in Java. It allows many inter-connected servers, with each client connecting to one server. Technical highlights:
\begin{itemize}
\item Client/Server communication using UDP datagrams
\item Server/Server communication using TCP sockets
\item UDP datagram fragmentation system
\end{itemize}
}
% SEI
% \entry
% {2014}
% {SEI 2015 web {\normalfont \normalsize (\href{https://github.com/cesium/SEI15}{https://github.com/cesium/SEI15})}}
% {University of Minho, Portugal}
% {Participation in the development of SEI 2015 Web-Site. Implemented angularJS usage to avoid html repetitions.}
% distributed systems:
% \entry
% {2014}
% {Distributed System \"Warehouse\" Client/Server }
% {University of Minho, Portugal}
% {Client/Server made with Java using socket. Technical highlights:
% \begin{itemize}
% \item Concurrency control with reentrant locks and conditions.
% \item Communication inside sockets with pure text using serializing and deserializing.
% \end{itemize}
% }
% habitat application
\entry
{2014}
{CRUD management Application for Habitat Portugal}
{University of Minho, Portugal}
{CRUD management application made with Java and a MySQL database. Technical highlights:
\begin{itemize}
\item Three layers: View, Business, Data.
\item Database with triggers
\item Generic interface design
\end{itemize}
}
% Dpum
% \entry
% {2014}
% {Dpum CESIUM code validator}
% {University of Minho, Portugal}
% {Web site created to manage a programming conquest at Minho university.}
%------------------------------------------------
%\begin{itemize}
%	\item Login, using session token with time out;
%	\item Manage Items, Users, purchases;
%	\item Add pictures to items;
%\end{itemize}
%------------------------------------------------
\entry
{2013}
{C++ File Compressor, Shannon-Fano algorithm}
{University of Minho, Portugal}
{Implementation highlights: % Graph based algorithms and Bitwise operations.
\begin{itemize}
\item Graph algorithms;
\item Dynamically allocated memory;
\item Bitwise operations;
\end{itemize}
}
%------------------------------------------------
\end{entrylist}

\end{document}
{ "alphanum_fraction": 0.6761805556, "avg_line_length": 35.6435643564, "ext": "tex", "hexsha": "ded19fec7c3288bcd3dbba8db8f76dfe4aa50a57", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-05-21T16:36:12.000Z", "max_forks_repo_forks_event_min_datetime": "2018-05-21T16:36:12.000Z", "max_forks_repo_head_hexsha": "854caa82f1e0d4b8403907af0a0e2153877e0383", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ruioliveiras/ruioliveiras.com", "max_forks_repo_path": "curriculum-vitae/cv_RuiOLiveira_en.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "854caa82f1e0d4b8403907af0a0e2153877e0383", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ruioliveiras/ruioliveiras.com", "max_issues_repo_path": "curriculum-vitae/cv_RuiOLiveira_en.tex", "max_line_length": 321, "max_stars_count": null, "max_stars_repo_head_hexsha": "854caa82f1e0d4b8403907af0a0e2153877e0383", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ruioliveiras/ruioliveiras.com", "max_stars_repo_path": "curriculum-vitae/cv_RuiOLiveira_en.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3396, "size": 14400 }
\section{Conclusion}

We have presented Legion, a programming model and type system for
expressing locality and independence to target heterogeneous,
distributed parallel architectures. To support the Legion programming
model, we implemented both a portable high-level runtime and a
low-level runtime that abstracts the machine. Our implementation of
Legion demonstrated speedups of up to 5.9X on a cluster of GPUs.
{ "alphanum_fraction": 0.825, "avg_line_length": 40, "ext": "tex", "hexsha": "3265048dc62814de7eb6f435e1bb7d211fa0db15", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0180bb3a8ee6efd0d2efdb743f75d3fba86f18f7", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "lightsighter/LegionOrigins", "max_forks_repo_path": "doc/pldi2012/conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0180bb3a8ee6efd0d2efdb743f75d3fba86f18f7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "lightsighter/LegionOrigins", "max_issues_repo_path": "doc/pldi2012/conclusion.tex", "max_line_length": 62, "max_stars_count": 2, "max_stars_repo_head_hexsha": "0180bb3a8ee6efd0d2efdb743f75d3fba86f18f7", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "lightsighter/LegionOrigins", "max_stars_repo_path": "doc/pldi2012/conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-14T20:56:13.000Z", "max_stars_repo_stars_event_min_datetime": "2021-11-10T06:29:39.000Z", "num_tokens": 78, "size": 400 }
\setcounter{secnumdepth}{3}

\chapter{Compiling OTB from source}
\label{chapter:Installation}
\index{Installation}

There are two ways to install the OTB library on your system: installing from a binary distribution or compiling from sources.
You can find information about the installation of binary packages for OTB and Monteverdi in the OTB-Cookbook.

This chapter covers compilation of the OTB library from source. Note that it also covers the compilation of Monteverdi, which is integrated as an OTB module since version 5.8.

OTB has been developed and tested across different combinations of operating systems, compilers, and hardware platforms including Windows, Linux and Mac OSX. It is known to work with the following compilers in 32/64 bit:
\begin{itemize}
  \item Visual Studio 2015 on Windows
  \item GCC 4.x, 5.x or CLang 3.x on GNU/Linux
  \item AppleClang on Mac~OS~X (10.8 or higher)
\end{itemize}

Since release version 6.2.0, OTB is compiled using the C++14 standard by default.

\index{CMake}
The challenge of supporting OTB across platforms has been solved through the use of CMake, a cross-platform, open-source build system. CMake is used to control the software compilation process using simple platform and compiler independent configuration files. CMake generates native makefiles and workspaces that can be used in the compiler environment of your choice. CMake is quite sophisticated: it supports complex environments requiring system configuration, compiler feature testing, and code generation.

CMake supports several generators to produce the compilation scripts, depending on the platform and compiler. It can use:
\begin{itemize}
  \item Makefiles for Unix systems
  \item Visual Studio workspaces for Windows
  \item NMake Makefiles for Windows
  \item Ninja scripts
  \item and many more ...
\end{itemize}

The information used by CMake is provided by \code{CMakeLists.txt} files that are present in every directory of the OTB source tree. These files contain information that the user provides to CMake at configuration time. Typical information includes paths to utilities in the system and the selection of software options specified by the user.

There are (at least) two ways to use CMake:
\begin{itemize}
  \item Using the command \texttt{ccmake} (on Unix) or \texttt{cmake-gui} (on Windows): it provides an interactive mode in which you iteratively select options and configure according to these options. The iteration proceeds until no more options remain to be selected. At this point, a generation step produces the appropriate build files for your configuration. This is the easiest way to start.
  \item Using the command \texttt{cmake}: a non-interactive, versatile tool designed for scripting. It can run both the \textit{configure} and \textit{generate} steps.
\end{itemize}

As shown in figure \ref{fig:CMakeGUI}, CMake has different interfaces depending on your system. Refer to section~\ref{sec:compiling-linux} for Linux and Mac~OS~X build instructions and \ref{sec:compiling-windows} for Windows.

\begin{figure}[tpb]
\centering
\includegraphics[width=0.8\textwidth]{ccmakeScreenShot.eps}
\includegraphics[width=0.8\textwidth]{CMakeSetupScreenShot.eps}
\itkcaption[Cmake user interface]{CMake interface. Top) \texttt{ccmake}, the UNIX version based on \texttt{curses}.
Bottom) \texttt{CMakeSetup}, the MS-Windows version based on MFC.} \label{fig:CMakeGUI} \end{figure} For more information on CMake, check : \begin{center} \url{http://www.cmake.org} \end{center} \index{Dependencies} OTB depends on a number of external libraries. Some are mandatory, meaning that OTB cannot be compiled without them, while others (the majority) are optional and can be activated or not during the build process. See table \ref{tab:otb-dependencies} for the full list of dependencies. \begin{center} \begin{tiny} \begin{table}[!htbp] \begin{tabular}{|p{0.15\textwidth}|p{0.45\textwidth}|p{0.1\textwidth}|p{0.1\textwidth}|} \hline \textbf{Library} & \textbf{Web site} & \textbf{Mandatory} & \textbf{Minimum version} \\ \hline \textbf{ITK} & \url{http://www.itk.org} & yes & 4.6.0 \\ \hline \textbf{GDAL} & \url{http://www.gdal.org} & yes & 1.10 (2.x also supported) \\ \hline \textbf{OSSIM} & \url{http://www.ossim.org} & yes & 1.8.20-3 \\ \hline \textbf{libgeotiff} & \url{http://trac.osgeo.org/geotiff/} & yes & - \\ \hline \textbf{boost} & \url{http://www.boost.org} & yes & - \\ \hline \textbf{openthreads} & \url{http://www.openscenegraph.org} & yes & - \\ \hline \textbf{tinyXML} & \url{http://www.grinninglizard.com/tinyxml} & yes & - \\ \hline \textbf{6S} & \url{http://6s.ltdri.org} & no & - \\ \hline \textbf{Curl} & \url{http://www.curl.haxx.se} & no & - \\ \hline \textbf{FFTW} & \url{http://www.fftw.org} & no & - \\ \hline \textbf{GLEW} & \url{http://glew.sourceforge.net/} & no & - \\ \hline \textbf{GLFW} & \url{http://www.glfw.org/} & no & 3 \\ \hline \textbf{GLUT} & \url{https://www.opengl.org/resources/libraries/glut/} & no & - \\ \hline \textbf{libKML} & \url{https://github.com/google/libkml} & no & 1.2 \\ \hline \textbf{libSVM} & \url{http://www.csie.ntu.edu.tw/~cjlin/libsvm} & no & 2.0 \\ \hline \textbf{Mapnik} & \url{http://www.mapnik.org} & no & 2.x \\ \hline \textbf{MPI} & \url{https://www.open-mpi.org/} & no & - \\ \hline \textbf{MuParser} & \url{http://www.muparser.sourceforge.net} & no & - \\ \hline \textbf{MuParserX} & \url{http://muparserx.beltoforion.de} & no & 4.0.7 \\ \hline \textbf{OpenCV} & \url{http://opencv.org} & no & 2 (3.x also supported) \\ \hline \textbf{OPENGL} & \url{https://www.opengl.org/} & no & - \\ \hline \textbf{Qt} & \url{http://qt-project.org} & no & 4 \\ \hline \textbf{QWT} & \url{http://qwt.sourceforge.net} & no & 6 \\ \hline \textbf{Shark} & \url{http://image.diku.dk/shark/} & no & 3.1 \\ \hline \textbf{SiftFast} & \url{http://libsift.sourceforge.net} & no & - \\ \hline \textbf{SPTW} & \url{https://github.com/remicres/sptw.git} & no & - \\ \hline \end{tabular} \caption{External libraries used in OTB.} \label{tab:otb-dependencies} \end{table} \end{tiny} \end{center} \section{Linux and Mac OS X} \label{sec:compiling-linux} \subsection{Setting up the build environment} The first thing to do is to create a directory for working with OTB. This guide will use \texttt{$\sim$/OTB} but you are free to choose something else. In this directory, there will be three locations: \begin{itemize} \item \texttt{$\sim$/OTB/otb} for the source file obtained from the git repository \item \texttt{$\sim$/OTB/build} for the intermediate build objects, CMake specific files, libraries and binaries. \item \texttt{$\sim$/OTB/install}, the installation directory for OTB once it is built. A system location (\texttt{/usr/local} for example) can also be used, but installing locally is more flexible and does not require root access. 
\end{itemize}

To set up this structure, the following commands can be used:

\begin{verbatim}
$ mkdir ~/OTB
$ cd ~/OTB
$ git clone https://[email protected]/git/otb.git
$ mkdir build
$ mkdir install
\end{verbatim}

The OTB project uses a git branching model where \texttt{develop} is the current development version. It contains the latest patches and represents the work in progress towards the next release. For more information on OTB and git, including how to decide which branch you want to compile, please see the OTB wiki page at \url{http://wiki.orfeo-toolbox.org/index.php/Git}. Check out the relevant branch now:

\begin{verbatim}
$ cd ~/OTB/otb
$ git checkout develop
\end{verbatim}

Now you must decide which build method you will use. There are two ways of compiling OTB from source, depending on how you want to manage dependencies. Both methods rely on CMake.

\begin{itemize}
\item SuperBuild (go to section~\ref{sec:installation-linux-superbuild}). All OTB dependencies are automatically downloaded and compiled. This method is the easiest to use and provides a complete OTB with minimal effort.
\item Normal build (go to section~\ref{sec:installation-linux-normalbuild}). OTB dependencies must already be compiled and available on your system. This method requires more work but provides more flexibility.
\end{itemize}

If you do not know which method to use and just want to compile OTB with all its modules, use SuperBuild.

\begin{center}
\begin{tiny}
\begin{table}[!htbp]
\begin{tabular}{p{0.35\textwidth}p{0.65\textwidth}}
\hline
\textbf{CMake variable} & \textbf{Value} \\
\hline
\texttt{CMAKE\_INSTALL\_PREFIX} & Installation directory, target for \texttt{make install} \\
\texttt{BUILD\_EXAMPLES} & Activate compilation of OTB examples \\
\texttt{BUILD\_TESTING} & Activate compilation of the tests \\
\texttt{OTB\_BUILD\_DEFAULT\_MODULES} & Activate all usual modules, required to build the examples \\
\texttt{OTB\_USE\_\textit{XXX}} & Activate module \textit{XXX} \\
\texttt{OTBGroup\_\textit{XXX}} & Enable modules in the group \textit{XXX} \\
\texttt{OTB\_DATA\_ROOT} & otb-data repository \\
\texttt{OTB\_WRAP\_PYTHON} & Enable Python wrapper \\
\texttt{OTB\_WRAP\_JAVA} & Enable Java wrapper \\
\hline
\multicolumn{2}{l}{\small \textbf{SuperBuild only}} \\
\texttt{DOWNLOAD\_LOCATION} & Location to download dependencies \\
\texttt{USE\_SYSTEM\_\textit{XXX}} & Use the system's \textit{XXX} library \\
\hline
\end{tabular}
\caption{Important CMake configuration variables in OTB}
\label{tab:installation-cmake-variables}
\end{table}
\end{tiny}
\end{center}

If you want to use a standalone binary package, a lot of dependencies are already supplied in it. In this case, it is advised to use all of the dependencies from that package. Mixing system libraries with libraries from the OTB package may not be safe. When you call the \textit{otbenv} script in the package, it will add an environment variable \texttt{CMAKE\_PREFIX\_PATH}, pointing to the root of the OTB package. This variable is used by CMake as a hint to detect the location of the dependencies.

\subsection{SuperBuild: Build OTB and all dependencies}
\label{sec:installation-linux-superbuild}

The SuperBuild is a way of compiling the dependencies of a project just before building the project itself. Thanks to CMake and its ExternalProject module, it is possible to download a source archive, configure and compile it when building the main project. This feature has been used in other CMake-based projects (ITK, Slicer, ParaView,...).
In OTB, the SuperBuild is implemented with no impact on the library sources: the sources for the SuperBuild are located in the 'OTB/SuperBuild' subdirectory. It is made of CMake scripts and source patches that allow all the dependencies necessary for OTB to be compiled. Once all the dependencies are compiled and installed, the OTB library is built using those dependencies.

OTB's compilation is customized by specifying configuration variables. The most important configuration variables are shown in table~\ref{tab:installation-cmake-variables}. The simplest way to provide configuration variables is via the command line \texttt{-D} option:
\begin{verbatim}
$ cd ~/OTB/build
$ cmake -D CMAKE_INSTALL_PREFIX=~/OTB/install ../otb/SuperBuild
\end{verbatim}
A pre-load script can also be used with the \texttt{-C} option (see \url{https://cmake.org/cmake/help/v3.4/manual/cmake.1.html#options}). Another option is to set variables manually with \texttt{cmake-gui} or \texttt{ccmake}.

Please note that the \texttt{CMAKE\_INSTALL\_PREFIX} variable is important because the SuperBuild will install some targets during the compilation step. Therefore this directory will be used even if you don't use the make install target. In fact there is no make install target for the SuperBuild. Also note that if it is not specified to cmake, a default install directory will be used, located in \texttt{../superbuild\_install}.

By default, the SuperBuild will not use any of the libraries installed on the system: all \texttt{USE\_SYSTEM\_\textit{XXX}} variables are set to FALSE. This is our recommended way of using the SuperBuild. You are however free to use a system library if you want, but you must be well aware of the dependencies of the libraries you take from the system. For example, if libjpeg is not used from the SuperBuild then you should not use zlib from the SuperBuild either, because zlib is a dependency of libjpeg. In this case the SuperBuild will NOT set \texttt{USE\_SYSTEM\_ZLIB=FALSE} for you; one must re-run cmake with \texttt{-DUSE\_SYSTEM\_ZLIB=FALSE}. The libjpeg-zlib example above is a simple one; imagine the case of GDAL, which depends on zlib, libjpeg, libtiff (with BigTIFF support), geotiff, sqlite, curl, geos, libkml and openjpeg. This is one of the reasons we recommend using the SuperBuild exclusively.

All dependencies are configured and built in a way that helps us get an efficient OTB build: for instance, we enable geotiff (with proj4 support), openjpeg and geos in the GDAL build (see table~\ref{tab:installation-cmake-variables}).

The SuperBuild downloads dependencies into the \texttt{DOWNLOAD\_LOCATION} directory, which will be \texttt{$\sim$/OTB/build/Downloads} in our example. Dependencies can be downloaded manually into this directory before the compilation step. This can be useful if you wish to bypass a proxy, intend to compile OTB without an internet connection, or have some other network constraint. You can find an archive with the sources of all our dependencies on the Orfeo ToolBox website (pick the 'SuperBuild-archives' corresponding to the OTB version you want to build):
\begin{center}
\url{https://www.orfeo-toolbox.org/packages}
\end{center}

Qt library: unlike the other dependencies, building Qt4 on all platforms is not a trivial task, but the OTB SuperBuild makes its best effort to make it easier for you. Still, some additional packages have to be installed as a prerequisite for the SuperBuild. On GNU/Linux you must have the Qt X11 dependencies installed; see the Qt 4.8 documentation for the list of packages that need to be installed before starting the SuperBuild:
http://doc.qt.io/qt-4.8/requirements-x11.html

On a Debian 8.1 system, all Qt4 dependencies can be installed with the following 'apt-get install' command:

\texttt{apt-get install libx11-dev libxext-dev libxt-dev libxi-dev libxrandr-dev libgl-dev libglu-dev}

You can also deactivate Qt4 and skip this by passing \texttt{-DOTB\_USE\_QT4=OFF} to cmake. This will give you an OTB install without Monteverdi, Mapla and the GUI application launchers.

For Mac OS X you need to install XCode, while Windows 7, 8.1 and 10 require MSVC 2015 or higher.

You are now ready to compile OTB! Simply use the make command (other targets can be generated with CMake's \texttt{-G} option):

\begin{verbatim}
$ cd ~/OTB/build
$ make
\end{verbatim}

Applications will be located in the \texttt{bin/} directory of the CMAKE\_INSTALL\_PREFIX directory, which in our case is \texttt{~/OTB/install/bin/}. For example:
\begin{verbatim}
~/OTB/install/bin/otbcli_ExtractROI
\end{verbatim}
will launch the command line version of the \textbf{ExtractROI} application, while:
\begin{verbatim}
~/OTB/install/bin/otbgui_ExtractROI
\end{verbatim}
will launch the graphical version.

To be able to use your OTB build from everywhere, we recommend the following. First, add the \texttt{bin/} directory to your PATH for easy access:
\begin{verbatim}
export PATH=$PATH:~/OTB/install/bin
\end{verbatim}
Second, add the \texttt{lib/} directory to your LD\_LIBRARY\_PATH:
\begin{verbatim}
export LD_LIBRARY_PATH=~/OTB/install/lib:$LD_LIBRARY_PATH
\end{verbatim}

Monteverdi is integrated as an OTB module since release 5.8 and it is compiled by the SuperBuild (as long as the GLEW, GLUT, OPENGL, Qt and QWT modules are activated). To use OTB applications from within Monteverdi you will need to define the OTB\_APPLICATION\_PATH environment variable:
\begin{verbatim}
export OTB_APPLICATION_PATH=~/OTB/install/lib/otb/applications
monteverdi
\end{verbatim}

A wiki page detailing the status of the SuperBuild on various platforms is also available here: \url{http://wiki.orfeo-toolbox.org/index.php/SuperBuild}.

\subsection{Normal build: Build only OTB}
\label{sec:installation-linux-normalbuild}

Once all OTB dependencies are available on your system, use CMake to generate a Makefile:
\begin{verbatim}
$ cd ~/OTB/build
$ cmake -C configuration.cmake ../otb
\end{verbatim}
The script \texttt{configuration.cmake} needs to contain the location of the dependencies if CMake cannot find them automatically. This can be done with the \texttt{\textit{XXX}\_DIR} variables containing the directories which contain the FindXXX.cmake scripts, or with the \texttt{\textit{XXX}\_INCLUDEDIR} and \texttt{\textit{XXX}\_LIBRARY} variables.

Additionally, decide which modules you wish to enable, together with tests and examples. Refer to table~\ref{tab:installation-cmake-variables} for the list of CMake variables.

Since OTB is modularized, it is possible to build only some modules instead of the whole set. To deactivate a module (and the ones that depend on it), switch off the CMake variable OTB\_BUILD\_DEFAULT\_MODULES, configure, and then switch off each \texttt{Module\_module\_name} variable. To provide an overview of how things work, the option \texttt{COMPONENTS} of the CMake command find\_package is used in order to only load the requested modules. This module-specific list prevents CMake from performing a blind search; it is also a convenient way to monitor the dependencies of each module.
\begin{verbatim}
find_package(OTB COMPONENTS OTBCommon OTBTransform [...])
\end{verbatim}

Some of the OTB capabilities are considered optional, and you can deactivate the related modules thanks to a set of CMake variables starting with \texttt{OTB\_USE\_\textit{XXX}}. Table~\ref{tab:optional} shows which modules are associated with these variables. It is very important to notice that these variables override the variable OTB\_BUILD\_DEFAULT\_MODULES.

You are now ready to compile OTB! Simply use the make command (other targets can be generated with CMake's \texttt{-G} option):
\begin{verbatim}
$ make
\end{verbatim}

The installation target will copy the binaries and libraries to the installation location:
\begin{verbatim}
$ make install
\end{verbatim}

\begin{center}
\begin{tiny}
\begin{table}[!htbp]
\begin{tabular}{|l|l|p{0.52\textwidth}|}
\hline
\textbf{CMake variable} & \textbf{3rd party module} & \textbf{Modules depending on it} \\
\hline
\textbf{OTB\_USE\_LIBKML} & OTBlibkml & OTBKMZWriter OTBIOKML OTBAppKMZ \\
\hline
\textbf{OTB\_USE\_QT4} & OTBQt4 & OTBQtWidget \\
\hline
\textbf{OTB\_USE\_QWT} & OTBQwt & OTBMonteverdiGUI OTBMonteverdi \\
\hline
\textbf{OTB\_USE\_GLEW} & OTBGlew & OTBIce OTBMonteverdiGUI OTBMonteverdi \\
\hline
\textbf{OTB\_USE\_OPENGL} & OTBOpenGL & OTBIce OTBMonteverdiGUI OTBMonteverdi \\
\hline
\textbf{OTB\_USE\_CURL} & OTBCurl & \\
\hline
\textbf{OTB\_USE\_MUPARSER} & OTBMuParser & OTBMathParser OTBDempsterShafer OTBAppClassification OTBAppMathParser OTBAppStereo OTBAppProjection OTBAppSegmentation OTBRoadExtraction OTBRCC8 OTBCCOBIA OTBMeanShift \\
\hline
\textbf{OTB\_USE\_MUPARSERX} & OTBMuParserX & OTBMathParserX OTBAppMathParserX \\
\hline
\textbf{OTB\_USE\_LIBSVM} & OTBLibSVM & optional for OTBSupervised OTBAppClassification \\
\hline
\textbf{OTB\_USE\_OPENCV} & OTBOpenCV & optional for OTBSupervised OTBAppClassification \\
\hline
\textbf{OTB\_USE\_SHARK} & OTBShark & optional for OTBSupervised OTBAppClassification \\
\hline
\textbf{OTB\_USE\_MAPNIK} & OTBMapnik & OTBVectorDataRendering \\
\hline
\textbf{OTB\_USE\_6S} & OTB6S & OTBOpticalCalibration OTBAppOpticalCalibration OTBSimulation \\
\hline
\textbf{OTB\_USE\_SIFTFAST} & OTBSiftFast & \\
\hline
\end{tabular}
\caption{Third parties and related modules.}
\label{tab:optional}
\end{table}
\end{tiny}
\end{center}

\section{Windows}
\label{sec:compiling-windows}

Everything that is needed for OTB development on Windows, including compiling from source, is covered in detail on the OTB wiki at:
\begin{center}
\url{http://wiki.orfeo-toolbox.org/index.php/OTB_development_on_Windows}
\end{center}

\section{Known issues}
\label{sec:knownissues}

\begin{itemize}
\item openjpeg/ITK
\end{itemize}

It is important to know that the OpenJpeg library doesn't support name mangling since version 2.0. As a consequence, if other libraries linked by your project already contain OpenJpeg, there may be a symbol conflict at run-time. For instance, this was observed with an OTB build on a recent ITK version (ver. 4). The ITK library already had a version of OpenJpeg in libitkopenjpeg-*.so, which contained the OpenJpeg symbols un-wrapped. These symbols were also loaded by the GDAL driver, but only the first ones were used, which caused a crash. Fortunately, thanks to the modular architecture of ITK, the library libitkopenjpeg-*.so is no longer imported inside OTB. However the OpenJPEG headers may still be present in the ITK include directory.
As the current architecture doesn't allow tuning the include order between modules, the OpenJPEG header from ITK can be included before your own OpenJPEG install. There are two ways to avoid this situation:
\begin{itemize}
\item Use an ITK build without GDCM or ITKReview (only these modules depend on OpenJPEG)
\item Hide the header openjpeg.h in the ITK include directory.
\end{itemize}

More information can be found here:
\url{http://wiki.orfeo-toolbox.org/index.php/JPEG2000_with_GDAL_OpenJpeg_plugin}

\begin{itemize}
\item libkml / Ubuntu 12.04
\end{itemize}

Another issue is related to the official package of libkml under Ubuntu 12.04. Until this problem is addressed, users of this platform should disable the option OTB\_USE\_LIBKML, so that OTB won't be built with this third party.
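For instance, with the normal build described above, this third party can be switched off at configuration time (shown here only as an illustration; the exact set of variables may differ with your OTB version):
\begin{verbatim}
$ cd ~/OTB/build
$ cmake -D OTB_USE_LIBKML=OFF ../otb
\end{verbatim}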
{ "alphanum_fraction": 0.7662217707, "avg_line_length": 44.6453608247, "ext": "tex", "hexsha": "ea869b753148edff4d213bf066a1de0b26a24271", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-12-03T02:54:36.000Z", "max_forks_repo_forks_event_min_datetime": "2019-01-17T10:36:14.000Z", "max_forks_repo_head_hexsha": "eb3d4d56089065b99641d8ae7338d2ed0358d28a", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "lfyater/Orfeo", "max_forks_repo_path": "Documentation/SoftwareGuide/Latex/Installation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "eb3d4d56089065b99641d8ae7338d2ed0358d28a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "lfyater/Orfeo", "max_issues_repo_path": "Documentation/SoftwareGuide/Latex/Installation.tex", "max_line_length": 214, "max_stars_count": 2, "max_stars_repo_head_hexsha": "eb3d4d56089065b99641d8ae7338d2ed0358d28a", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "lfyater/Orfeo", "max_stars_repo_path": "Documentation/SoftwareGuide/Latex/Installation.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-03T02:54:28.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-13T14:48:19.000Z", "num_tokens": 6099, "size": 21653 }
\subsection{Matrix exponentials}

The exponential of a complex number is defined as:

\(e^x=\sum_{j=0}^{\infty} \dfrac{1}{j!}x^j\)

We can extend this definition to square matrices:

\(e^X:=\sum_{j=0}^{\infty} \dfrac{1}{j!}X^j\)

The dimensions of a matrix and of its exponential are the same.
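As a small worked example (added here for illustration), if \(X\) is diagonal the series can be summed entry by entry:

\(X=\begin{pmatrix}a & 0\\0 & b\end{pmatrix} \implies e^X=\sum_{j=0}^{\infty}\dfrac{1}{j!}\begin{pmatrix}a^j & 0\\0 & b^j\end{pmatrix}=\begin{pmatrix}e^a & 0\\0 & e^b\end{pmatrix}\)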
{ "alphanum_fraction": 0.7058823529, "avg_line_length": 18.2142857143, "ext": "tex", "hexsha": "a0d47cc2551f2f75dcba8d70714e5d9d03feefd7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/algebra/linearSystemsOperations/02-02-matrixExponential.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/algebra/linearSystemsOperations/02-02-matrixExponential.tex", "max_line_length": 59, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/algebra/linearSystemsOperations/02-02-matrixExponential.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 79, "size": 255 }
\documentclass[journal]{Imperial_lab_report} \ifCLASSINFOpdf \else \fi \hyphenation{op-tical net-works semi-conduc-tor} \begin{document} \title{Template for Imperial College Physics 1st Year Laboratory Reports} \author{An I.C. Student}% <-this % stops a space % The paper headers \markboth{A. Student}% {Shell \MakeLowercase{\textit{et al.}}:} \maketitle \begin{abstract} The abstract goes here. \end{abstract} \section{Introduction} \IEEEPARstart{T}{his} demo file is intended to serve as a ``starter file'' for preparing reports for Imperial College Physics 1st year lab, using \LaTeX\ and the Imperial\_lab\_report.cls class file. Happy \TeX ing! \subsection{Subsection Heading Here} Subsection text here. % needed in second column of first page if using \IEEEpubid %\IEEEpubidadjcol \subsubsection{Subsubsection Heading Here} Subsubsection text here. \section{Section 1} LaTeX is a software system for document preparation. When writing, the writer uses plain text as opposed to the formatted text found in "What You See Is What You Get" word processors like Microsoft Word, LibreOffice Writer and Apple Pages. The writer uses markup tagging conventions to define the general structure of a document (such as article, book, and letter), to stylise text throughout a document (such as bold and italics), and to add citations and cross-references. A TeX distribution such as TeX Live or MiKTeX is used to produce an output file (such as PDF or DVI) suitable for printing or digital distribution. LaTeX is widely used in academia for the communication and publication of scientific documents in many fields, including mathematics, statistics, computer science, engineering, physics, economics, linguistics, quantitative psychology, philosophy, and political science. It also has a prominent role in the preparation and publication of books and articles that contain complex multilingual materials, such as Sanskrit and Greek. LaTeX uses the TeX typesetting program for formatting its output, and is itself written in the TeX macro language. \subsubsection{Subsubsection Heading Here} $$ E = mc^2$$ \section{Section 2} LaTeX is a software system for document preparation. When writing, the writer uses plain text as opposed to the formatted text found in "What You See Is What You Get" word processors like Microsoft Word, LibreOffice Writer and Apple Pages. The writer uses markup tagging conventions to define the general structure of a document (such as article, book, and letter), to stylise text throughout a document (such as bold and italics), and to add citations and cross-references. A TeX distribution such as TeX Live or MiKTeX is used to produce an output file (such as PDF or DVI) suitable for printing or digital distribution. LaTeX is widely used in academia for the communication and publication of scientific documents in many fields, including mathematics, statistics, computer science, engineering, physics, economics, linguistics, quantitative psychology, philosophy, and political science. It also has a prominent role in the preparation and publication of books and articles that contain complex multilingual materials, such as Sanskrit and Greek. LaTeX uses the TeX typesetting program for formatting its output, and is itself written in the TeX macro language. \subsubsection{Subsubsection Heading Here} $$ E = mc^2$$ \subsubsection{Subsubsection Heading Here} $$ E = mc^2$$ \section{Section 3} LaTeX is a software system for document preparation. 
When writing, the writer uses plain text as opposed to the formatted text found in "What You See Is What You Get" word processors like Microsoft Word, LibreOffice Writer and Apple Pages. The writer uses markup tagging conventions to define the general structure of a document (such as article, book, and letter), to stylise text throughout a document (such as bold and italics), and to add citations and cross-references. A TeX distribution such as TeX Live or MiKTeX is used to produce an output file (such as PDF or DVI) suitable for printing or digital distribution. LaTeX is widely used in academia for the communication and publication of scientific documents in many fields, including mathematics, statistics, computer science, engineering, physics, economics, linguistics, quantitative psychology, philosophy, and political science. It also has a prominent role in the preparation and publication of books and articles that contain complex multilingual materials, such as Sanskrit and Greek. LaTeX uses the TeX typesetting program for formatting its output, and is itself written in the TeX macro language. \subsubsection{Subsubsection Heading Here} $$ E = mc^2$$ \subsubsection{Subsubsection Heading Here} $$ E = mc^2$$ \section{Section 4} LaTeX is a software system for document preparation. When writing, the writer uses plain text as opposed to the formatted text found in "What You See Is What You Get" word processors like Microsoft Word, LibreOffice Writer and Apple Pages. The writer uses markup tagging conventions to define the general structure of a document (such as article, book, and letter), to stylise text throughout a document (such as bold and italics), and to add citations and cross-references. A TeX distribution such as TeX Live or MiKTeX is used to produce an output file (such as PDF or DVI) suitable for printing or digital distribution. LaTeX is widely used in academia for the communication and publication of scientific documents in many fields, including mathematics, statistics, computer science, engineering, physics, economics, linguistics, quantitative psychology, philosophy, and political science. It also has a prominent role in the preparation and publication of books and articles that contain complex multilingual materials, such as Sanskrit and Greek. LaTeX uses the TeX typesetting program for formatting its output, and is itself written in the TeX macro language. \subsubsection{Subsubsection Heading Here} $$ E = mc^2$$ \subsubsection{Subsubsection Heading Here} $$ E = mc^2$$ \section{Conclusion} The conclusion goes here. \appendices \section{Proof of Einstein's Famous Equation} The famous equation $$ E = mc^2$$ can be derived. % you can choose not to have a title for an appendix % if you want by leaving the argument blank \section{} [Appendix two text goes here.] % use section* for acknowledgment \section*{Acknowledgment} [acknowledgments go here] % Can use something like this to put references on a page % by themselves when using endfloat and the captionsoff option. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{ieeetran} % that's all folks \end{document}
{ "alphanum_fraction": 0.797952424, "avg_line_length": 67.0909090909, "ext": "tex", "hexsha": "2af8f7f818ba4ed0a55ba970cdcffddf39ec6eb3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4f6e5631748669557ec47d81520f54548c5f02f5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kancheng/latex_practice_gather", "max_forks_repo_path": "Imperial_lab_report/bare_report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4f6e5631748669557ec47d81520f54548c5f02f5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kancheng/latex_practice_gather", "max_issues_repo_path": "Imperial_lab_report/bare_report.tex", "max_line_length": 622, "max_stars_count": null, "max_stars_repo_head_hexsha": "4f6e5631748669557ec47d81520f54548c5f02f5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kancheng/latex_practice_gather", "max_stars_repo_path": "Imperial_lab_report/bare_report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1468, "size": 6642 }
\documentclass{article}
\usepackage{graphicx}

\title{Five Programming Languages and Their Origins}
\author{Une Umoh}

\begin{document}
\maketitle
\newpage

\section*{What are Programming Languages?}

According to Wikipedia, a programming language is a language comprising a set of strings that produce various kinds of machine code output.
\paragraph{}
Put more simply, a programming language can be defined as a language understood by computers, consisting of sets of instructions that allow computers to carry out tasks.

\newpage
\section*{HTML}
\begin{figure}
\includegraphics[width=\linewidth]{html.jpeg}
\label{html.01}
\end{figure}
\newpage
HTML stands for Hyper Text Markup Language. It was invented by Sir Tim Berners-Lee in the early '90s.
\paragraph{}
It wasn't published officially until 1993, so that people around the world could share and access information easily.
\paragraph{}
It is today one of the most popular programming languages. The latest version of HTML is HTML 5.0. HTML's main purpose is to create websites.
\paragraph{}
IDEs: Notepad, Notepad++, Brackets, MS FrontPage...
\paragraph{}
Related languages: CSS, JavaScript and PHP...

\newpage
\section*{Java}
\begin{figure}
\includegraphics[width=\linewidth]{java.jpg}
\end{figure}
\newpage
The father of Java, James Gosling, invented the language in 1995.
\paragraph{}
Java is used for internet programming, mobile devices, the creation of games and many more.
\paragraph{}
IDEs: BlueJ, Eclipse, Greenfoot etc...
\paragraph{}
Related languages: Smalltalk, Lisp, Cypher.

\newpage
\section*{CSS}
\begin{figure}
\includegraphics[width=\linewidth]{CSS.jpg}
\end{figure}
\newpage
CSS stands for Cascading Style Sheets. It was proposed by Hakon Wium Lie in 1994 and first published in 1996.
\paragraph{}
This language was made to add color, layouts, fonts and many other styles to websites.
\paragraph{}
IDEs: NetBeans, Komodo Edit, Notepad++ etc...
\paragraph{}
Related languages: HTML, SVG, XML etc...

\newpage
\section*{PHP}
\begin{figure}
\includegraphics[width=\linewidth]{php.png}
\end{figure}
\newpage
PHP stands for Hypertext Preprocessor (it originally stood for Personal Home Page). It is a scripting language majorly used for web development.
\paragraph{}
It was invented by Rasmus Lerdorf in 1994. The first official release was in 1995.
\paragraph{}
IDEs: NetBeans, Eclipse, PhpStorm etc...
\paragraph{}
Related languages: JavaScript, Python, C etc...

\newpage
\section*{JavaScript}
\begin{figure}
\includegraphics[width=\linewidth]{javascript.jpg}
\end{figure}
\newpage
\paragraph{}
Netscape programmer Brendan Eich invented JavaScript in 10 days in 1995.
\paragraph{}
JavaScript was made to make web design easier and more attractive.
\paragraph{}
Applications: games, web servers, web applications etc...
\paragraph{}
Related languages: HTML, CSS, Java etc...

\end{document}
{ "alphanum_fraction": 0.7273641851, "avg_line_length": 31.7234042553, "ext": "tex", "hexsha": "24bb5c07fbbcaa4ef29880a7b91b3a0cd5d39e37", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "38be08d22924751526dfc80a7963eb13c77046fb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "uneumoh/uneCSC101", "max_forks_repo_path": "document.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "38be08d22924751526dfc80a7963eb13c77046fb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "uneumoh/uneCSC101", "max_issues_repo_path": "document.tex", "max_line_length": 167, "max_stars_count": null, "max_stars_repo_head_hexsha": "38be08d22924751526dfc80a7963eb13c77046fb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "uneumoh/uneCSC101", "max_stars_repo_path": "document.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 802, "size": 2982 }
\documentclass[11pt,twoside]{article} \usepackage[headings]{fullpage} \usepackage[utopia]{mathdesign} \pagestyle{myheadings} \markboth{Be rational}{Be rational} \input{../../fncextra} \begin{document} \begin{center} \bf Let's be rational about this \end{center} Our go-to form of interpolating function is the polynomial, \begin{equation} p(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_{n-1} x^{n-1}. \end{equation} When the monomials are evaluated at $m$ nodes, we get the Vandermonde matrix \begin{equation} \mV = \begin{bmatrix} 1 & t_0 & t_0^{2} & \cdots & t_0^{m-1} \\[1mm] 1 & t_1 & t_1^{2} & \cdots & t_1^{m-1} \\[1mm] 1 & t_2 & t_2^{2} & \cdots & t_2^{m-1} \\[1mm] \vdots & \vdots & \vdots & & \vdots \\ 1 & t_{m-1} & t_{m-1}^{2} & \cdots & t_{m-1}^{m-1} \end{bmatrix} . \end{equation} Given a function $f$, setting $y_i=f(t_i)$ and solving $\mV \bfc = \bfy$ produces the coefficients of the interpolating polynomial. Polynomials can, in principle, converge pointwise to any continuous function. However, they are not equally efficient at doing so for all functions. One way to obtain faster convergence in some cases is to turn to a different category of functions. An interesting choice are the \textbf{rational functions}, \begin{equation} \label{ratdef} r(x) = \frac{a_0 + a_1 x + \cdots + a_{n-1} x^{n-1}}{b_0 + b_1 x + \cdots + b_{n-1} x^{n-1}+ b_nx^n}. \end{equation} Because they can have zeros in the denominator, rational functions are often superior for approximating functions that blow up or have steep gradients. Observe that in~\eqref{ratdef}, multiplying all of the coefficients by the same constant leaves $r$ unchanged. Thus we make a normalization, $b_n=1$. This leaves a total of $n+n=2n$ coefficients to be determined in $r$, so we require $2n$ interpolation nodes. From~\eqref{ratdef}, setting $r(t_i)=f(t_i)=y_i$, for $i=0,\ldots,2n-1$ and clearing the denominator, we eventually obtain \begin{equation} \label{eq:sys1} \m{W} \bfa - \m{Y} \m{W} \bfb = \m{Y} \begin{bmatrix} t_0^n \\ t_1^n \\ \vdots \\ t_{2n-1}^n \end{bmatrix}, \end{equation} where $\m{W}$ is a $2n\times n$ Vandermonde-style matrix, $\m{Y}=\diag(y_0,\ldots,y_{2n-1})$, and $\bfa$ and $\bfb$ collect the polynomial coefficients in the numerator and denominator respectively. This equation is actually a square linear system, \begin{equation} \label{eq:sys2} \begin{bmatrix} \m{W} & - \m{Y} \m{W} \end{bmatrix} \begin{bmatrix} \bfa \\ \bfb \end{bmatrix} = \m{Y} \begin{bmatrix} t_0^n \\ t_1^n \\ \vdots \\ t_{2n-1}^n \end{bmatrix}, \end{equation} easily solved for the vector $\bfc = \bigl[ \bfa\,; \, \bfb \bigr]$. \subsection*{Goals} You will compute a rational interpolant and compare it to a polynomial interpolant for the same points. \subsection*{Preparation} Read section 9.1 and answer the following questions. \begin{enumerate} \item Derive~\eqref{eq:sys1}. \item Write out the linear system~\eqref{eq:sys2} for the rational interpolant to the four points $(-1,0)$, $(0,-1)$, $(1,1)$, $(2,1)$. \end{enumerate} \subsection*{Procedure} Download templates for the script and for the function \texttt{ratinterp.m}. \begin{enumerate} \item Complete the function \texttt{ratinterp} that computes a rational interpolant to given data using the algorithm outlined above. \item Define the function $g(x) = \tanh(10x) + 2x^2$ and plot it over the interval $[-1,1]$. \item To your plot add the polynomial interpolant using $m=18$ equally spaced nodes in $[1,1]$. (It will not be a good result.) \item Find the rational interpolant with $n=9$ and the same nodes. 
Start a new plot and plot the error $g(x)-r(x)$ over $[-1,1]$. It should be fairly small over the whole interval. \end{enumerate} \subsection*{Discussion} Returning to the $n=9$ rational interpolant of $g$, find the poles of $r$ (i.e., the roots of the denominator). Several of them are conjugate pairs that lie essentially on the imaginary axis of the complex plane. Referring to a property of $g$, explain the two poles closest to the origin. You may want to prove and then use the identity $\cosh(x)=\cos(ix)$. \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End:
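For reference only, the following NumPy sketch shows one way to set up and solve the linear system [W  -YW][a; b] = Y[t_i^n] described above. The lab itself asks for a MATLAB implementation in the ratinterp.m template; the function and variable names below (ratinterp, rateval) are illustrative and assume only that NumPy is available.

import numpy as np

def ratinterp(t, y):
    """Coefficients (a, b) of r(x) = (a0 + ... + a_{n-1} x^{n-1}) /
    (b0 + ... + b_{n-1} x^{n-1} + x^n) interpolating y_i at 2n nodes t_i."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    n = t.size // 2
    W = np.vander(t, n, increasing=True)    # columns t^0, ..., t^(n-1)
    Y = np.diag(y)
    A = np.hstack([W, -Y @ W])              # the 2n-by-2n system matrix
    c = np.linalg.solve(A, y * t**n)        # right-hand side is Y * t^n
    return c[:n], c[n:]                     # (a, b)

def rateval(a, b, x):
    """Evaluate the rational interpolant at the points x."""
    x = np.asarray(x, dtype=float)
    num = np.polyval(a[::-1], x)                    # numerator
    den = np.polyval(np.append(b, 1.0)[::-1], x)    # denominator, with b_n = 1
    return num / den

# Quick check on the lab's test function with 18 equally spaced nodes (n = 9):
g = lambda x: np.tanh(10 * x) + 2 * x**2
t = np.linspace(-1.0, 1.0, 18)
a, b = ratinterp(t, g(t))
x = np.linspace(-1.0, 1.0, 1001)
print("max |g - r| on [-1,1]:", np.max(np.abs(g(x) - rateval(a, b, x))))

For the Discussion question, the poles of r are the roots of the denominator and can be obtained, for example, with np.roots(np.append(b, 1.0)[::-1]).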
{ "alphanum_fraction": 0.69003517, "avg_line_length": 41.4077669903, "ext": "tex", "hexsha": "ef2bbc6b562aab19e143f0f36e635a9015fe3e3d", "lang": "TeX", "max_forks_count": 49, "max_forks_repo_forks_event_max_datetime": "2022-03-29T17:19:57.000Z", "max_forks_repo_forks_event_min_datetime": "2017-04-02T17:21:33.000Z", "max_forks_repo_head_hexsha": "ef51fada748de1326a4ce645fbcb0c2499cb2b8a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "snowdj/fnc-extras", "max_forks_repo_path": "labs/chapter09/BeRational/BeRational.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "ef51fada748de1326a4ce645fbcb0c2499cb2b8a", "max_issues_repo_issues_event_max_datetime": "2022-02-03T21:04:47.000Z", "max_issues_repo_issues_event_min_datetime": "2018-12-04T22:17:44.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "snowdj/fnc-extras", "max_issues_repo_path": "labs/chapter09/BeRational/BeRational.tex", "max_line_length": 382, "max_stars_count": 109, "max_stars_repo_head_hexsha": "ef51fada748de1326a4ce645fbcb0c2499cb2b8a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "snowdj/fnc-extras", "max_stars_repo_path": "labs/chapter09/BeRational/BeRational.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-20T19:03:54.000Z", "max_stars_repo_stars_event_min_datetime": "2018-04-21T09:02:50.000Z", "num_tokens": 1410, "size": 4265 }
%% SECTION HEADER ///////////////////////////////////////////////////////////////////////////////////// \section{Modeling of the GW Propagation in the \acp{hsc}} \label{sec:modelling} %% SECTION CONTENT //////////////////////////////////////////////////////////////////////////////////// %% SUBSECTION HEADER ////////////////////////////////////////////////////////////////////////////////// The most common numerical modeling of the phenomenon of \ac{gw} in \acp{hsc} found in the literature is a calculation of the effective material properties of the honeycomb structure \cite{baid2015dispersion, mustapha2014leaky, qi2008ultrasonic, shi1995derivation, sikdar2016guided}. The properties are obtained from the analytical \cite{gibson1982mechanics, malek2015effective} or \ac{fem} \cite{catapano2014multi, chen2014analysis} analysis of the honeycomb \ac{rve}. A comprehensive literature review on the homogenisation of the honeycomb structure is presented in the work of Ahmed \cite{ahmed2019homogenization}. Replacing the core geometry with a homogeneous material has many advantages. First and foremost, it simplifies the domain mesh so that convergence of the solution requires fewer working memory resources and increases the value of the critical time step. In addition, the wave propagation velocity determined by the simulation is in good agreement with the experiment. However, this method cannot adequately represent the phenomenon of propagating wave interaction in honeycomb cells. It causes the signal energy not to dissipate as it would in a real structure. A more precise model is the \ac{fgm} of the core. Ruzzenne et al. presented a parametric study to evaluate the dynamic behavior of the honeycomb and cellular structures through the \ac{fem} and the application of the theory of periodic structures \cite{ruzzene2003wave}. Recently, the simulations of the wave propagation in the \acp{hsc} have been conducted with commercially available finite element code~\cite{song2009guided, hosseini2013numerical, tian2015wavenumber, zhao2018wave}. While the \ac{fem} based modelling of \ac{gw} requires a significant amount of memory and is time-consuming, this method becomes inefficient in the case of \ac{fgm}. Kudela increased the computational efficiency with the model based on the time-domain \ac{sem} \cite{kudela2016parallel}. In addition, the algorithm has been adapted for parallel computing on the \ac{gpu}, making the simulations fourteen times as fast as on the \ac{cpu}. However, this approach has two major drawbacks. One is employing solid elements with three \acp{dof} at each node to model the core walls. As a result, a $179\times160$ mm sandwich panel has over 1.5 million \acp{dof}. Secondly, no \ac{pzt} sensors were considered in the simulation, so a concentrated force was used to generate the \ac{gw}. To attach the transducers, the grids of the sensor and the host plate must coincide or use an interface between them. The disadvantages mentioned above were motivation to propose a new model of the \ac{hsc}. In the proposed model, the core of the plate consists of \ac{2d} elements, one per each cell wall. Since the neutral plane of the elements is oriented differently concerning the global coordinate system, the local displacements vector has to be transformed accordingly. In addition, two interfaces are used to connect the individual \ac{hsc} components. One with the non-matching grid was developed to join the sensors with the panel. 
This was done with a novel method based on the element shape functions, described in detail in Chapter \ref{ch:sem}. The connection between the core and the skins was implemented with a perfectly matching interface. To the best of the authors' knowledge, the present model has not yet been implemented for \acp{hsc}. The parametric study conducted in this work leads to the determination of a \ac{madif}, which quantifies the influence of the size of a composite defect on wave propagation. In this case, the defect is assumed to be a disbond between the skin and the core.
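As an illustration of the local-to-global transformation of the wall elements mentioned above (the exact form used in the model depends on the element formulation and on the orientation of each wall, so only a sketch is given here), a nodal displacement vector expressed in the local frame of a core wall can be rotated into the global frame of the panel as
\[
\mathbf{u}^{\mathrm{glob}} = \mathbf{T}(\theta)\,\mathbf{u}^{\mathrm{loc}},
\qquad
\mathbf{T}(\theta) =
\left[
\begin{array}{ccc}
\cos\theta & -\sin\theta & 0 \\
\sin\theta & \cos\theta  & 0 \\
0          & 0           & 1
\end{array}
\right],
\]
where $\theta$ is the in-plane angle between the wall and the global $x$-axis, so that the element matrices assembled for differently oriented walls refer all \acp{dof} to one common coordinate system.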
{ "alphanum_fraction": 0.7552482094, "avg_line_length": 101.225, "ext": "tex", "hexsha": "6541b52f7e667b93ba54155692550e68a7e72ed0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "pfiborek/model_hc", "max_forks_repo_path": "docs/proposal/Dissertation/Chapters/Chapter3/sec:modelling.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "pfiborek/model_hc", "max_issues_repo_path": "docs/proposal/Dissertation/Chapters/Chapter3/sec:modelling.tex", "max_line_length": 283, "max_stars_count": null, "max_stars_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "pfiborek/model_hc", "max_stars_repo_path": "docs/proposal/Dissertation/Chapters/Chapter3/sec:modelling.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 886, "size": 4049 }
\clearpage
\subsection{Pascal Procedure Declaration (with Parameters)} % (fold)
\label{sub:pas_procedure_declaration_with_parameters_}

In Pascal, \nameref{sub:parameter}s can be declared in any function or procedure declaration.

\passyntax{passynt:parameter-procedure-decl}{procedure declarations (with parameters)}{parameters/procedure-decl-with-params}

\mynote{
\begin{itemize}
  \item The syntax in \fref{passynt:parameter-procedure-decl} shows how to declare procedures with \nameref{sub:parameter}s in Pascal.
  \item Parameters in Pascal are declared in a similar way to other variables.
  \item Pascal supports passing parameters by reference and by value. The parameters in \lref{lst:parameter-test-pas} are all passed by value.
  \item There are three ways of passing parameters by reference in Pascal (see \lref{lst:parameter-types-pas} for examples of these):
  \begin{itemize}
    \item \textbf{const}: The parameter is passed a reference to the variable, but the procedure cannot change its value. This can be used to pass a value \textbf{in} to the procedure by reference.
    \item \textbf{out}: The parameter is passed a reference to the variable, but the procedure cannot assume it contains a meaningful value. This can be used to pass a value \textbf{out} of the procedure by reference.
    \item \textbf{var}: The parameter is passed a reference to the variable. This can be used to pass a value both \textbf{in} and \textbf{out} of the procedure by reference.
  \end{itemize}
\end{itemize}
}

\clearpage
\passection{\pascode{lst:parameter-test-pas}{Example procedure with parameters}{code/pascal/storing-using-data/ParameterTest.pas}}
\passection{\pascode{lst:parameter-types-pas}{Example of the different parameter types}{code/pascal/storing-using-data/ParameterTypes.pas}}

% subsection pas_procedure_declaration_with_parameters_ (end)
{ "alphanum_fraction": 0.7875477881, "avg_line_length": 63.1379310345, "ext": "tex", "hexsha": "f8baee5f3e72b1f2e5ebf154f8ac7299b131f784", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2022-03-24T07:42:53.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-02T03:18:37.000Z", "max_forks_repo_head_hexsha": "8f3040983d420129f90bcc4bd69a96d8743c412c", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "macite/programming-arcana", "max_forks_repo_path": "topics/storing-using-data/pascal/pas-procedure-decl-with-params.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_issues_repo_issues_event_max_datetime": "2021-12-29T19:45:10.000Z", "max_issues_repo_issues_event_min_datetime": "2021-12-29T19:45:10.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "thoth-tech/programming-arcana", "max_issues_repo_path": "topics/storing-using-data/pascal/pas-procedure-decl-with-params.tex", "max_line_length": 203, "max_stars_count": 1, "max_stars_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "thoth-tech/programming-arcana", "max_stars_repo_path": "topics/storing-using-data/pascal/pas-procedure-decl-with-params.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-10T04:50:54.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-10T04:50:54.000Z", "num_tokens": 453, "size": 1831 }
\documentclass[12pt]{article} \usepackage{graphicx} \begin{document} \section{Test mesh} Test mesh comprises 7 triangles and 2 quadrilaterals. This mesh can be created with \texttt{lf::mesh::test\_utils::GenerateHybrid2DTestMesh()}. \includegraphics[width=\textwidth,clip]{multiref_L0} \section{Regular refinement} \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=\textwidth]{fine_mesh} \end{minipage}% \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=\textwidth]{fine_mesh_pi} \end{minipage}% \section{Barycentric refinement} \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=\textwidth]{barycentric_ref} \end{minipage}% \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=\textwidth]{barycentric_ref_pi} \end{minipage}% \section{Global bisection refinement} \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=\textwidth]{allref} \end{minipage}% \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=\textwidth]{allref_pi} \end{minipage}% \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End:
{ "alphanum_fraction": 0.7513761468, "avg_line_length": 23.6956521739, "ext": "tex", "hexsha": "0b80667cda00c890e6f362de1dd59b185d081b93", "lang": "TeX", "max_forks_count": 20, "max_forks_repo_forks_event_max_datetime": "2022-02-18T17:33:52.000Z", "max_forks_repo_forks_event_min_datetime": "2018-11-13T13:46:38.000Z", "max_forks_repo_head_hexsha": "e2716e914169eec7ee59e822ea3ab303143eacd1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Pascal-So/lehrfempp", "max_forks_repo_path": "lib/lf/refinement/test/refinement_test.tex", "max_issues_count": 151, "max_issues_repo_head_hexsha": "e2716e914169eec7ee59e822ea3ab303143eacd1", "max_issues_repo_issues_event_max_datetime": "2021-08-04T14:50:50.000Z", "max_issues_repo_issues_event_min_datetime": "2018-05-27T13:01:50.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Pascal-So/lehrfempp", "max_issues_repo_path": "lib/lf/refinement/test/refinement_test.tex", "max_line_length": 72, "max_stars_count": 16, "max_stars_repo_head_hexsha": "e2716e914169eec7ee59e822ea3ab303143eacd1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Pascal-So/lehrfempp", "max_stars_repo_path": "lib/lf/refinement/test/refinement_test.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-16T16:38:06.000Z", "max_stars_repo_stars_event_min_datetime": "2018-08-30T19:55:43.000Z", "num_tokens": 365, "size": 1090 }
\documentclass{article} \usepackage{fullpage} \title{Lecture Notes: Python Lecture for Yun's Family} \author{Sunghee Yun} \begin{document} \maketitle \tableofcontents \newpage \section{Introduction} \subsection{Why learn computer programming?} The following quote from a book by Briggs well explains why we should learn computer programming. \begin{quote} Programming fosters creatively, reasoning, and problem solving. The programmer gets the opportunity to create something from nothing, use logic to turn programming constructs into a form that a computer can run, and, when things don't work quite as well as expected, use problem solving to figure out what has gone wrong. Programming is a fun, sometimes challenging (and occasionally frustrating) activity, and the skills learned from it can be useful both in school and at work \ldots\ even if your career has nothing to do with computers. And, if nothing else, programming is a great way to spend an afternoon when the weather outside is dreary. -- Jason R. Briggs, Python for Kids \end{quote} \nocite{JR:12} \subsection{Why Python?} Again here I quote some sentensces from the same book instead of my own explanation. \begin{quote} Python is an easy-to-learn programming language that has some really useful features for a beginning programmer. The code is quite easy to read when compared to other programming languages, and it has an interactive shell into which you can enter your programs and see them run. In addition to its simple language structure and an interactive shell with which to experiment. Python has some features that greatly augment the learning process and allow you to put together simple animations for creating your own games. One is the {\tt turtle} module, inspired by Turtle graphics (used by the Logo programming language back in the 1960s) and designed for educational use. Another is the {\tt tkinter} module, and interface for the Tk GUI toolkit, which provides a simple way to create programs with slightly more advanced graphics and animation. -- Jason R. Briggs, Python for Kids \end{quote} \subsection{Background} This lecture series started by Ghayoung's suggestion that I teach Beth how to code. I'm not sure whether I suggested it or she said it herself, but we also agreed that Ghayoung would join us, too. I had believed that it would be a bad idea to teach something to my family members, and that's why I'd never thought of this possibility. However, I've changed my mind after a few lectures that I gave them regarding Python coding. And here's why. First and foremost, they had almost no problem understanding the concepts I explained to them. Also their attitude was very good. They tried their best to understand and follow my lead and they actually did great in absorbing the lecture contents and doing coding practice. Secondly, I've realized that it is more valuable than anything else in the world for me to spend my precious time and energy to convey my knowledge and experience to the people I love and care most. I've done many talks, seminars, and lectures in my life, but none of them gave me the satisfaction that I feel now when seeing them improving and growing on coding. In short, unlike my long-lasting conviction that it's never a good idea to teach something to my family members, it's turned out that it is indeed possible and it is extremely pleasing experience to do the lecture to my loved ones. 
Then I realized that I've already taught many things to them including all those math tricks and insight I tried to show Beth and numerous other concepts and stories I've told her, and all those (boring) conversation that I shared with Ghayoung about various topics such as math, science, engineering, optimization, and machine learning. The fact is that I have been doing sort of teaching already, and this is just one of those examples with difference that, this time, we have official schedules and formats. So I'm planning to keep this lecture series as long as I can or until they can learn themselves without my help. \subsection{Purpose of this lecture series} XXX \newpage \section{Python Basics} \subsection{Python Dictionary} \subsubsection{Dictionary Manipulation in Python} \paragraph{Overview} \begin{itemize} \item A dictionary is a collection of key-value pairs. \item A dictionary is a set of key:value pairs. \item All keys in a dictionary must be unique. \item In a dictionary, a key and its value are separated by a colon. \item The key, value pairs are separated with commas. \item The key \& value pairs are listed between curly brackets ``\{ \}'' \item We query the dictionary using square brackets ``[\ ]'' \end{itemize} \paragraph{Dictionary Manipulation} \begin{itemize} \item Dictionaries are useful whenever you have to items that you wish to link together, and for example storing results for quick lookup. \item Create an empty dictionary \begin{verbatim} months = {} \end{verbatim} \item Create a dictionary with some pairs (Note: Each key must be unique) \begin{verbatim} months = { 1 : "January", 2 : "February", 3 : "March", 4 : "April", 5 : "May", 6 : "June", 7 : "July", 8 : "August", 9 : "September", 10 : "October", 11 : "November", 12 : "December" } \end{verbatim} \item months ``1--12'' are keys and ``January-December'' are the values \item Print all keys \begin{verbatim} print("The dictionary contains the following keys: ", list(months.keys())) \end{verbatim} Output: \begin{verbatim} The dictionary contains the following keys: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] \end{verbatim} \item Accessing To get a value out of a dictionary, you must supply its key, you cannot provide the value and get the key \begin{verbatim} whichMonth = months[1] print(whichMonth) \end{verbatim} Output: \begin{verbatim} January \end{verbatim} \item To delete an element from a dictionary, use {\tt dict.pop} \begin{verbatim} print(months.pop(5)) print(months.keys()) \end{verbatim} Output: \begin{verbatim} May [1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12] \end{verbatim} \item To add a new element to a dictionary, assign a value to a new key \begin{verbatim} months[5] = "MAY" print(months.keys()) \end{verbatim} Output: \begin{verbatim} [1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 5] \end{verbatim} \item To update an element of a dictionary, assign a new value to its key \begin{verbatim} months[1] = "Jan" print(months) \end{verbatim} Output: \begin{verbatim} {1: 'Jan', 2: 'February', 3: 'March', 4: 'April', 5... 
}
\end{verbatim}

\item Sorting
\begin{verbatim}
sortedkeys = sorted(months.keys())
print(sortedkeys)
\end{verbatim}
Output:
\begin{verbatim}
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
\end{verbatim}

\item Dictionaries and Loops
\begin{itemize}
\item Iterating over keys
\begin{verbatim}
for key in months:
    print(key, months[key])
\end{verbatim}
Output:
\begin{verbatim}
1 Jan
2 February
3 March
4 April
5 MAY
6 June
7 July
8 August
9 September
10 October
11 November
12 December
\end{verbatim}

\item Iterating over (key, value) pairs
\begin{verbatim}
for key, value in months.items():
    print(key, value)

print("The entries in the dictionary are:")
for item in months.keys():
    print("months[ ", item, " ] = ", months[item])
\end{verbatim}
\end{itemize}

\item Combining List and Dictionary

Example of a list of dictionaries (each of the following snippets starts again from this original three-entry list)
\begin{verbatim}
customers = [
    {"uid":1,"name":"John"},
    {"uid":2,"name":"Smith"},
    {"uid":3,"name":"Andersson"},
]
print(customers)
\end{verbatim}
Output:
\begin{verbatim}
[{'uid': 1, 'name': 'John'}, {'uid': 2, 'name': 'Smith'}, {'uid': 3, 'name': 'Andersson'}]
\end{verbatim}

\item Print the uid and name of each customer
\begin{verbatim}
for x in customers:
    print(x["uid"], x["name"])
\end{verbatim}
Output:
\begin{verbatim}
1 John
2 Smith
3 Andersson
\end{verbatim}

\item Modify an entry

This will change the name of the third customer (list index 2, uid 3) from Andersson to charlie
\begin{verbatim}
customers[2]["name"]="charlie"
print(customers)
\end{verbatim}
Output:
\begin{verbatim}
[{'uid': 1, 'name': 'John'}, {'uid': 2, 'name': 'Smith'}, {'uid': 3, 'name': 'charlie'}]
\end{verbatim}

\item Add a new field to each entry
\begin{verbatim}
for x in customers:
    x["password"]="123456" # any initial value
print(customers)
\end{verbatim}
Output:
\begin{verbatim}
[{'password': '123456', 'uid': 1, 'name': 'John'}, {'password': '123456', 'uid': 2, 'name': 'Smith'}, {'password': '123456', 'uid': 3, 'name': 'Andersson'}]
\end{verbatim}

\item Delete an entry

This will delete the second entry (list index 1) from the list
\begin{verbatim}
del customers[1]
print(customers)
\end{verbatim}
Output:
\begin{verbatim}
[{'uid': 1, 'name': 'John'}, {'uid': 3, 'name': 'Andersson'}]
\end{verbatim}

\item Delete a field from all entries

This will delete the uid field of each entry.
\begin{verbatim}
for x in customers:
    del x["uid"]
\end{verbatim}
Output:
\begin{verbatim}
[{'name': 'John'}, {'name': 'Smith'}, {'name': 'Andersson'}]
\end{verbatim}
\end{itemize}

\newpage
\bibliography{mybib}
\bibliographystyle{plain}

\end{document}
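For readers who want to run the chapter as one program, here is a consolidated, runnable Python 3 version of the month and customer snippets above; it is only a convenience script and uses exactly the values from the examples.

# Consolidated version of the dictionary examples above (Python 3).
months = {1: "January", 2: "February", 3: "March", 4: "April",
          5: "May", 6: "June", 7: "July", 8: "August",
          9: "September", 10: "October", 11: "November", 12: "December"}

print("Keys:", sorted(months.keys()))   # all keys, in sorted order
print(months.pop(5))                    # remove key 5 and print "May"
months[5] = "MAY"                       # add the key back with a new value
months[1] = "Jan"                       # update an existing entry

for key in sorted(months):              # iterate over the keys in order
    print(key, months[key])

customers = [{"uid": 1, "name": "John"},
             {"uid": 2, "name": "Smith"},
             {"uid": 3, "name": "Andersson"}]

for c in customers:                     # add a new field to every entry
    c["password"] = "123456"
customers[2]["name"] = "charlie"        # modify the entry with uid 3
del customers[1]                        # delete the second entry (uid 2)
for c in customers:                     # remove a field from every entry
    del c["password"]
print(customers)                        # [{'uid': 1, 'name': 'John'}, {'uid': 3, 'name': 'charlie'}]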
{ "alphanum_fraction": 0.7260379012, "avg_line_length": 27.2507462687, "ext": "tex", "hexsha": "fc86416b35a11bd39e04fffc6b810eadc376f2b0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3a748672bf5b39568b2f42e813a0b9402711ad8e", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "sungheeyun/PythonLectures", "max_forks_repo_path": "lecture_notes/python_lecture_for_yun_family_Main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3a748672bf5b39568b2f42e813a0b9402711ad8e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "sungheeyun/PythonLectures", "max_issues_repo_path": "lecture_notes/python_lecture_for_yun_family_Main.tex", "max_line_length": 118, "max_stars_count": null, "max_stars_repo_head_hexsha": "3a748672bf5b39568b2f42e813a0b9402711ad8e", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "sungheeyun/PythonLectures", "max_stars_repo_path": "lecture_notes/python_lecture_for_yun_family_Main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2495, "size": 9129 }
% !TEX program = xelatex \documentclass{resume} \usepackage{comment} \begin{document} \pagenumbering{gobble} % suppress displaying page number \name{Jinhui Shen} \basicInfo{ \email{[email protected]} \textperiodcentered\ \phone{6479377982} \textperiodcentered\ \github[github.com/TheaaaShen]{} } \section{\faCogs\ Skills} \begin{itemize}[parsep=0.5ex] \item Programming Languages: Java Spring, C++, SQL \item Architectures: OOP, CQRS, Event Sourcing, RESTful API, CRUD, Microservices \item Others: Linux, IntelliJ, Git, Postman, Docker, JPA, Hibernate, AWS, Kubernetes \end{itemize} \section{\faUsers\ Experiences} \datedsubsection{\textbf{Software Engineer}}{Jan. 2019 - Apr. 2019} \datedsubsection{\textit{Autonomic - Transportation as a Service Team}}{Toronto, Canada} Used Java to deploy Spring microservices to Kubernetes on AWS \begin{itemize} \item Deployed microservices with CQRS design and Event Sourcing \item Wrote Java Spring code to interact with messages from live vehicle data via Kafka streams \item Experienced with Extreme Programming and Pair Programming \end{itemize} \datedsubsection{\textbf{Associate Programmer Analyst}}{Jan. 2018 - Apr. 2018} \datedsubsection{\textit{Scotiabank - Risk Management Information Technology Team}}{Toronto, Canada} Used Python and Java to support internal regulatory reporting system \begin{itemize} \item Automated the process of fetching reports using Java \item Regulatory project management using JIRA and Leankit \end{itemize} \datedsubsection{\textbf{Software Developer Intern}}{May 2017 - Aug. 2017} \datedsubsection{\textit{Chinese Academy of Sciences - Institute of Computing Technology}}{Beijing, China} Used Java to optimize the algorithm for Glycan Structure Identification Software (GIPS) \begin{itemize} \item Rewrote and optimized the core data structure and algorithm for the software \item Reduced time consumption to 0.002\%, from 117.87s to 0.0029s \item Reduced memory consumption to 2.048\%. from 1763 Mb to 36.1 Mb \end{itemize} \section{\faStar\ Projects} \datedsubsection{\textbf{Hotel Management}}{Sept. 2019 - Present} \role{Individual Project}{} A Restful CRUD application using Java Spring and MySQL that manages room information for a hotel \begin{itemize} \item Implemented basic create, read, update, delete features that can be accessed through Postman \item Database is built with MySQL, JPA and Hibernate \end{itemize} \datedsubsection{\textbf{Quadris}}{Nov. 2017 - Dec. 2017} \role{Course Project}{} Implemented object-oriented command-line Tetris game with 5 difficulty levels with C++ \begin{itemize} \item Utilized Subject Observer design pattern and Visitor design pattern \item Implemented visualization through XWindows \end{itemize} \section{\faGraduationCap\ Education} \datedsubsection{\textbf{University of Waterloo}}{Sept. 2016 -- Present} Candidate for Bachelor of Computer Science (Software Engineering Option) \datedsubsection{\textbf{Wilfrid Laurier University}}{Sept. 2016 -- Present} Candidate for Bachelor of Business Administration (Marketing Concentration) \begin{comment} \section{\faHeartO\ Activities and Awards} \begin{itemize}[parsep=0.5ex] \item Hack the North 2017 Participant \item University of Waterloo President’s Scholarship of Distinction \item Euclid Mathematics Competition Scholarship \end{itemize} \section{\faInfo\ Miscellaneous} \begin{itemize}[parsep=0.5ex] % \item Website: \item Languages: English - Fluent, Mandarin - Native speaker, Korean - fluent \end{itemize} \end{comment} \end{document}
{ "alphanum_fraction": 0.7813888889, "avg_line_length": 39.5604395604, "ext": "tex", "hexsha": "91f256124e0fd89f56f28d32ac51dab258edcd2e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "599bff7e590efa798eedf9ea87a577b640e0feb7", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "TheaaaShen/Resume", "max_forks_repo_path": "resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "599bff7e590efa798eedf9ea87a577b640e0feb7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "TheaaaShen/Resume", "max_issues_repo_path": "resume.tex", "max_line_length": 106, "max_stars_count": null, "max_stars_repo_head_hexsha": "599bff7e590efa798eedf9ea87a577b640e0feb7", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "TheaaaShen/Resume", "max_stars_repo_path": "resume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 946, "size": 3600 }
\documentclass[12pt,openany,onecolumn]{book} \usepackage{graphicx} \usepackage{caption} \usepackage{subcaption} \usepackage{color} \usepackage[table]{xcolor} \pagestyle{plain} \usepackage{hyperref} \usepackage{fancyhdr} \usepackage[headings]{fullpage} \pagestyle{fancy} \lhead{{\small ASAP FP7 Project}} \rhead{{\small \delivtitle}} \usepackage{times} %\usepackage{helvet} %\renewcommand{\familydefault}{\sfdefault} \usepackage{amssymb,amsthm,amsmath,amsfonts} \usepackage{lmodern} \usepackage{textcomp} \usepackage[T1]{fontenc} \usepackage{multirow} \usepackage{xspace} \usepackage{pgfgantt} \usepackage{pdflscape} \definecolor{rowcolor}{rgb}{0.8,0.9,1} \newtheorem{lemma}{Lemma} \newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}} \usepackage{mathtools} % Bonus \DeclarePairedDelimiter\norm\lVert\rVert \usepackage[acronym,nonumberlist,toc]{glossaries} \usepackage{glossary-superragged} % \newglossarystyle{modsuper}{% % \glossarystyle{super}% % \renewcommand{\glsgroupskip}{} % } \makeglossaries \renewcommand*{\acronymname}{List of Acronyms and Abbreviations} \renewcommand*{\glspostdescription}{} % remove trailing dot \renewcommand{\glsnamefont}[1]{\textbf{#1}} \newacronym{dag}{DAG}{Directed Acyclic Graph} \newacronym{dg}{DG}{Directed Graph} \newacronym{gui}{GUI}{Grafical User Interface} \newacronym{wms}{WMS}{Workflow Management System} \newacronym{wmt}{WMT}{Workflow Management Tool} \newacronym{ipo}{IPO}{Input Processor Output} \newacronym{es}{ES}{Exhaustive Search} % \newacronym{bsp}{BSP}{Bulk Synchronous Parallel} % \newacronym{hdfs}{HDFS}{Hadoop File System} \newtheorem{definition}{Definition} \newtheorem{proposition}{Proposition} \newtheorem{theorem}{Theorem} \newtheorem{example}{Example} \usepackage{listings} \renewcommand{\lstlistingname}{Algorithm}% Listing -> Algorithm \lstset{ backgroundcolor=\color{white}, basicstyle=\footnotesize, breaklines=true, commentstyle=\color{lightgray}, frame=single, numbers=left, numbersep=8pt, stepnumber=1, tabsize=2, } \def\delivtitle{Workflow Management Tool\\ {\small ASAP, {\rm \it technical report}}} \begin{document} \title{\delivtitle} \author{} \date{} \newcommand{\eat}[1]{} \thispagestyle{empty} \begin{center} \large \textbf{FP7 Project ASAP} \\ Adaptable Scalable Analytics Platform\\[2\baselineskip] \includegraphics[width=4cm]{asap-logo.png}\\[3\baselineskip] {\Huge\bf \delivtitle}\\[3\baselineskip] \end{center} \textbf{Version History} \begin{center} \begin{tabular}{|l|p{3cm}|p{5cm}|p{3cm}|} \rowcolor{gray} \hline Version & Date & Author & Comments \\ \hline 1.0 & 18 Aug 2015 & V. Kantere, M. Filatov & First Version \\ 2.0 & 22 Aug 2015 & V. Kantere, M. Filatov & Final Version \\ \hline \end{tabular} \end{center} \paragraph{Acknowledgement} This project has received funding from the European Union's 7th Framework Programme for research, technological development and demonstration under grant agreement number 619706. \paragraph{Abstract} This work is a technical report on the first version of the Workflow Management Tool (WMT). This version incorporates the prototypes of three core modules of the WMT architecture, namely the Workflow Design, Analysis and Optimization modules. The report first gives a quick overview of the WMT architecture and then delves into the implementation details of each involved module. \smallskip \paragraph{Keywords} Workflow model, analysis, optimization, design tool. 
\tableofcontents \newpage \phantomsection \addcontentsline{toc}{chapter}{\listfigurename} \listoffigures \begingroup \renewcommand{\cleardoublepage}{} \renewcommand{\clearpage}{} \phantomsection \addcontentsline{toc}{chapter}{\listtablename} \listoftables \endgroup % \printglossary[type=\acronymtype,title={List of Abbreviations}] % \newpage \chapter{Introduction} \section{Workflow management tool overview} The Workflow Management Tool (WMT) is a component of the ASAP system architecture. It is used for workflow creation, modification, analysis and optimisation. WMT provides a GUI for workflow design. The model underlying WMT combines simplicity of expression of application logic and adaptation of the level of description of execution semantics. It enables the separation of task dependencies from task functionality. In this way, WMT can be easily, i.e. intuitively and in a straightforward manner, used by many types of users, with various levels of data management expertise and interest in the implementation. The workflow and the including tasks are described using a JSON-based metadata language. The workflow is analysed and the result of the analysis can be a list of detected errors of a workflow or an analysed workflow which is actually an enhancement of the initial workflow with more vertices, substitution of vertices and/or edges in the initial workflow with others, and addition of metadata to the tasks. WMT is a fully open-source\footnote{\url{https://github.com/maxfil/wmt} - this will be changed to the ASAP repository} instrument that includes the designing interface, as well as analysis and optimisation modules. \section{Purpose of the document} This document serves as a report on the first version of the WMT and accompanies its prototype implementation. Its purpose is to delve into the implementation details of the core architectural modules. This includes the declaration semantics of the workflow, analysis and optimization techniques. Furthermore, we demonstrate the work of WMT on specific use-cases from D8.2 \cite{d8.2} and D9.2 \cite{d9.2}. Finally, we make an initial discussion on multi-workflow optimisation. \section{Document structure} The rest of this document is structured as follows: \begin{itemize} \item Chapter \ref{chapt:theory} gives a brief overview of the workflow model, including manipulation as described in D5.1 \cite{d5.1}. Moreover, it gives details on the workflow analysis and the workflow optimisation. \item Chapter \ref{chapt:architect} presents details of the current architecture and implementation of WMT modules. \item Chapter \ref{chapt:functionality} guides the reader on how to use the tool from creation up to optimisation, describing the functionalities and then gives examples of workflows driven by the use-case scenarios of D8.2 \cite{d8.2} and D9.2 \cite{d9.2}. \item Chapter \ref{chapt:future} gives details on the current status of the formulation of some other optimisation objectives and multi-workflow optimisation and outlines our next steps. \item Chapter \ref{chapt:related} summarises related work in the topic of the workflow optimization. \item Chapter \ref{chapt:summary} concludes the technical report. \end{itemize} \chapter{Workflow management}\label{chapt:theory} This chapter gives an overview of the workflow model, its analysis and its optimisation, as it is defined in D5.1 \cite{d5.1}, and gives details on what it is currently implemented in WMT. Moreover, it gives an update on our work in workflow optimisation. 
\section{Workflow model}\label{sec:model} The workflow represents applications as a directed acyclic graph (DAG) $G = (V, E)$. The vertices $V$ in the graph represent application logic and the edges represent the flow of data. Application logic includes (a) the analysis of data, and (b) the modification of data. The edges $E$ are directed and connect the vertices that produce and consume data. Each vertex in a workflow represents one or more tasks of data processing. Each task $T$ is a set of \emph{inputs}, \emph{outputs} and a \emph{processor}. In this text we also use the term \emph{operator} instead of the term \emph{processor}. Tasks may share or not inputs, but they do not share processors and outputs. The inputs and outputs of the tasks of a vertex can be related to incoming and outgoing edges of this vertex, but they do not identify with edges: inputs and outputs represent consumption and production of data, respectively, and edges represent flow of data. \begin{definition} In a digraph $G$, the out-degree $d_G^{+}(v)$ is the number of edges leaving a vertex $v$, and the in-degree $d_G^{-}(v)$ is the number of edges entering a vertex $v$. \end{definition} \begin{definition} A root is a vertex $v$ with in-degree $d_G^{-}(v) = 0$; and a sink is a vertex with out-degree $d_G^{+}(v) = 0$. \end{definition} A workflow has one root and several sink vertices. Data and operators can be either abstract or materialized. Abstract are the operators and datasets that are described partially or at a high level by the user when composing her workflow whereas materialized are the actual operator implementations and existing datasets, either provided by the user or residing in a repository. A generic tree-metadata format depicted as a subtree with a root \emph{`operator'} is shown in Figure \ref{fig:wtree}. In Section 3.1.1 (Tree-metadata framework) of the report D3.2 \cite{d3.2} meta-data of data and operators are described in more detail. \subsection{Workflow representation}\label{sec:format} In WMT a workflow is represented in a single JSON file. This representation captures structural information, design metadata (e.g., functional and non-functional requirements, physical characteristics like resource allocation), operator properties (e.g., type, schemata, statistics, engine and implementation details, physical characteristics like memory budget), and so on. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figs/wtree} \caption{Workflow metada tree}\label{fig:wtree} \end{figure} The first levels of the metadata tree of the workflow are the following (shown in Figure \ref{fig:wtree}): \begin{itemize} \item \textbf{Nodes} Each node contains a list of task IDs which belong to this node. \item \textbf{Edges} This is a list of pairs of node IDs - $(sourceId, targetId)$. An edge defines the flow of data from one vertex to another. These nodes are called the source and the target, respectively. \item \textbf{Tasks} This part contains a list of task meta-data. The task meta-data consists of the information that is used to match abstract and materialized operators and datasets and the ID of a node to which this task belongs. \item \textbf{TaskLinks} (optional) This part contains links between tasks lying within a single node. \end{itemize} \section{Workflow analysis}\label{sec:analysis} The workflow structure alleviates from the user the burden of determining any or some execution semantics for the application logic. 
The execution semantics of the workflow includes the execution of tasks in vertices and the execution of input-output dependencies of edges. The determination of the execution semantics of vertices and edges leads to an execution plan of the workflow. We refer to this plan as the \emph{analysed} workflow. The latter is actually an enhancement of the initial workflow with more vertices, and substitution of vertices and/or edges in the initial workflow with others. More specifically, in the analysed workflow, an edge with different input and output metadata, may be replaced with two edges and a new vertex; the new vertex corresponds to a new task that takes the data and metadata of the input of the initial edge and produces the data and metadata of the output of the initial edge. In other words, since the data of the input and the output of an edge are equivalent, this task changes only the metadata. Such vertices are \emph{associative}, as they encompass associative tasks. Also, a vertex that includes multiple tasks, in the original workflow, is replaced, in the analysed workflow, with a set of new vertices that each includes one task of the original vertex. The new vertices may or may not be connected with new edges. Furthermore, in the analysed workflow, a vertex that corresponds to multiple tasks is replaced with an \emph{associative subgraph} that contains a set of new vertices that correspond to these tasks. This set contains vertices that correspond to the tasks of the initial vertex: each new vertex corresponds to one task; vertices may correspond 1-1 to tasks, but it can be the case that two or more vertices correspond to the same task\footnote{Replication of tasks using many associative vertices that correspond to the same task of an original vertex may be necessary for the optimisation of the workflow execution.}. Naturally, the incoming edges of the initial vertex may have to be replicated, since they may correspond to the input of more than one tasks. The outgoing edges, however, remain the same, as each corresponds to the output of one task. The replacing subgraph may also contain new edges that connect the replacing vertices. Such edges represent the dependencies between tasks related to their execution semantics, and not related to the semantics of the application logic, as expressed by the user. \section{Workflow optimisation}\label{sec:optimisation} In D5.1 \cite{d5.1} we have proposed two axes of a workflow optimization, namely: optimisation via graph reconfiguration and via optimal resource management. Currently, we focus on the first axis of optimisation, i.e. via the reconfiguration of the graph of the analysed workflow. To do this we employ methods for the manipulation of the workflow. A brief overview of these methods is provided below. \subsection{Workflow manipulation}\label{subsec:transitions} A workflow is manipulated so that it can be executed more efficiently than originally designed. Manipulation is performed using the following operations: \begin{itemize} \item \textbf{Swap.} The \emph{swap} operation applies to a pair of vertices, $v1$ and $v2$, which occur in adjacent position in an workflow graph $G$, and produces a new workflow graph $G'$ in which the positions of $v1$ and $v2$ have been interchanged. The goal of the \emph{swap} operation is to change the execution order of tasks. Currently, the WMT uses the \emph{swap} operation in the optimisation module. 
\item \textbf{Merge.} The \emph{merge} operation takes as an input two vertices and produces one new vertex that includes the tasks of both initial vertices. The vertices that are merged can be connected with an edge, i.e. together they represent some task dependency(ies), or not, i.e. there is no task dependency between them. The goal of the \emph{merge} operation is to allow for a united optimisation of the tasks included in the two initial vertices, e.g. joint micro-optimisation on an execution engine. Currently, this operation is not yet implemented in the WMT. \item \textbf{Split.} The \emph{split} operation takes as input one initial vertex and produces two new vertices that, together, include all the tasks included in the initial vertex. The two new vertices may or may not be connected. The goal of the \emph{split} operation is to lead to separate optimisation of subgroups of tasks included in the initial vertex. Currently, this operation is implemented in WMT and used to produce the \emph{analysed} workflow. \end{itemize} % Currently, we assume that we can \emph{swap} adjacent vertices for which their in-degree and out-degree equal to 1. Thus, the vertices order can be changed only within the line of sequential vertices without branching. \subsection{Operator characteristics}\label{subsec:categories} Workflow manipulation can be performed selectively depending on operator characteristics: \begin{itemize} \item \textbf{Blocking operators} require knowledge of the whole dataset, e.g., a grouping operator or an operator \emph{join} or \emph{sort}. \item \textbf{Non-blocking operators} that process each tuple separately, e.g., operators \emph{filter} or \emph{calc}. \item \textbf{Restrictive operators} output a smaller data volume than the incoming data volume, e.g. \emph{filter}. \end{itemize} Table \ref{table:categories} shows the operators that are currently in the operator library in WMT and their categorization. \begin{table}[h] \centering \begin{tabular}{ |P{3cm}|P{2.5cm}|P{2.5cm}|P{2.5cm}| } \hline Operator & Blocking & Non-blocking & Restrictive \\ \hline Filter & & x & x \\ \hline Calc & & x & \\ \hline Filter Join & x & & \\ \hline groupBy Sort & x & & \\ \hline PeakDetection & x & & \\ \hline Tf-idf & x & & \\ \hline k-Means & x & & \\ \hline \end{tabular} \caption{Operator categorization} \label{table:categories} \end{table} The \emph{filter} operator returns all rows for which the <filter\_predicate> is `True'. The \emph{calc} operator produces data with new attribute <calc\_attr> and value calculated by <calc\_expression>. The others are obtained from the IRES platform. For any operator that is added in WMT, the user has to define combinations of the new operator and operators already in the library, on which the \emph{swap} operation can be applied, if there are any such combinations. The WMT can use apply swapping on these predetermined combinations during the optimization stage. For example, the \emph{groupBy Sort}/\emph{filter} combination is swappable. However, the \emph{calc}/\emph{filter} combination is swappable or not depending on the specific implementation of \emph{calc}. \subsection{Single-workflow optimization}\label{sec:singleworkopt} We formulate the problem of optimizing a workflow for a single engine as a state space search problem. Starting with an initial workflow graph, we apply a set of graph transitions (see Section \ref{subsec:transitions}) to create new, equivalent graphs with (possibly) different costs. 
Applying transitions creates a large state space, and the goal is to find an optimal workflow in this space with respect to the objective function (see Chapter \ref{chapt:future}). The state space can be explored exhaustively using Exhaustive Search (ES), which generates all possible states by applying all applicable transitions to every state. However, the vastness of the state space calls for more efficient exploration methods than ES. One way to improve search performance is to prune the state space; in the following, we propose techniques for achieving this. The following heuristics can drive the search close to the optimal solution quickly:
\begin{itemize}
\item \textbf{H1:} Move restrictive operators towards the root of the workflow to reduce the data volume early, e.g., rather than \emph{extract \textrightarrow\ function \textrightarrow\ filter} do \emph{extract \textrightarrow\ filter \textrightarrow\ function}.
\item \textbf{H2:} Place non-blocking operators together and separately from blocking operators, e.g., rather than \emph{filter \textrightarrow\ sort \textrightarrow\ function \textrightarrow\ group} do \emph{filter \textrightarrow\ function \textrightarrow\ sort \textrightarrow\ group}.
\item \textbf{H3:} Parallelize non-blocking operators. Place adjacent operators on parallel paths, so that they can be executed on separate physical processors, e.g., if there are adjacent operators \emph{filter1 \textrightarrow\ filter2}, create two new paths in the workflow and assign the \emph{filter1} and \emph{filter2} operators each to a different path, so that these workflow paths can run on separate processors concurrently. The workflow paths or, in general, parts to be executed in parallel should be chosen such that their latencies are approximately equal, i.e., if parallelization is possible on $n$ processors, break the flow into $n$ workflow parts $x_1, \ldots, x_n$ of equal (or near-equal) execution time, such that $\max(time(x_i))$ is minimized.
\end{itemize}
Currently, the workflow optimization in WMT is carried out using the operation \emph{swap} and following the heuristics $H1$ and $H2$ (for the current form of the optimization algorithm see Section \ref{sec:optimizer}).
\chapter{Workflow management tool architecture and implementation}\label{chapt:architect}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/architect}
\caption{The architecture of the WMT}\label{fig:architect}
\end{figure}
In this chapter, we describe in detail the current implementation of WMT. We discuss the architecture, the functionality provided by the different modules, and the integration of WMT with other parts of ASAP. Figure \ref{fig:architect} depicts the architecture of WMT as well as its interaction with external components. The main components of the architecture are:
\begin{itemize}
\item \textbf{Interface.} The interface accepts a workflow definition in the representation described in Section \ref{sec:format}. It enables users to interactively create and/or modify a workflow.
\item \textbf{Analyzer.} The analyzer parses the workflow, identifies operators and data stores and maps them to the library of operators supported in WMT (See Section \ref{sec:integration}), generates metadata of edges, finds edges where data conversion should be applied, and adds the appropriate conversions.
\item \textbf{Optimizer.} The optimizer generates a functionally equivalent workflow graph optimized towards the performance objective (for more on the performance objective see Section \ref{sec:objectives}).
\end{itemize}
Together, these components provide for workflow design, analysis and optimization. After design, analysis and optimization, workflows are ready for execution. They can be executed on independent engines and storage repositories, i.e. engines and repositories that may be accessed through other paths in ASAP (Section \ref{sec:integration}) or through another third-party platform besides WMT.
\section{WMT architecture}\label{sec:interface}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/tech_stack}
\caption{Technology stack used in WMT}\label{fig:tech_stack}
\end{figure}
The WMT interface is a web application. It provides full functionality for designing a workflow even in the absence of the server side (the Analyzer and Optimizer modules). It is encoded in the Hypertext Markup Language (HTML \cite{html}). To deliver content, WMT uses the Nginx \cite{nginx} web server. To encode business logic, WMT uses JavaScript \cite{javascript} and PHP-FPM \cite{php-fpm}. The pages and scripts are compiled from Jade \cite{jade} and CoffeeScript \cite{coffescript} sources, respectively, using Grunt \cite{grunt}. The Analysis and Optimisation modules are scripts in Python \cite{python}. Figure \ref{fig:tech_stack} depicts the technology stack used in WMT and the interaction of its parts.
\section{Analyzer}\label{sec:analizer}
WMT analyses a workflow in several steps:
\begin{enumerate}
\item Parsing the workflow.
\item Categorizing operators (see Section \ref{subsec:categories}).
\item Validating consistency. The workflow is checked for the existence of cycles and for the correspondence of the metadata of adjacent nodes. Discovered cycles and metadata mismatches between adjacent nodes are added to an error list. Cycle errors cannot be resolved, so the analysis stops and returns the list of errors back to the Interface. If possible, the data-flow errors are resolved by adding associative tasks in Step 6.
\item Generating metadata of edges. These are a joint result of the output and input metadata of the source and target nodes, respectively.
\item Splitting nodes that contain several tasks into several single-task nodes.
\item Augmenting the workflow with associative tasks. Currently, the implemented tasks convert the data flow: buffering and format conversion.
\end{enumerate}
\section{Optimizer}\label{sec:optimizer}
The Optimizer works on the analysed workflow, in which tasks have been categorized and each node has a single task. Currently, the Optimizer uses the heuristics $H1$ and $H2$ described in Section \ref{sec:singleworkopt}. These move restrictive operators towards the root of the workflow and place non-blocking operators together and separately from blocking operators. The current form of the optimization algorithm follows all paths in a workflow by visiting all edges. It swaps nodes following the heuristics and stops when no more swaps can be performed, at which point an optimized version is produced (See Algorithm \ref{lst:optimisation}).
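To make the behaviour of the swap loop concrete, the following short Python sketch reproduces the logic of Algorithm \ref{lst:optimisation} (shown below) in runnable form. It is purely illustrative: the data structures and names (\texttt{CATEGORY}, \texttt{degrees}, \texttt{optimize}, and the \texttt{(source, target)} edge tuples) are assumptions made for this example and are not taken from the WMT code base. The degree test mirrors the $d_G^{+}/d_G^{-}$ conditions of the algorithm, and swapping is simulated by exchanging the tasks of two adjacent single-input/single-output nodes, which leaves the graph structure itself unchanged.
\begin{lstlisting}[language=Python, caption={Illustrative Python sketch of the heuristic swap loop (not part of the WMT implementation)}, label={lst:swapsketchexample}]
# Hypothetical operator categorization, mirroring the operator categorization table.
CATEGORY = {"filter": "restrictive", "calc": "non-blocking", "sort": "blocking"}

def degrees(edges):
    """Return (in-degree, out-degree) maps for the node ids occurring in edges."""
    indeg, outdeg = {}, {}
    for src, dst in edges:
        outdeg[src] = outdeg.get(src, 0) + 1
        indeg[dst] = indeg.get(dst, 0) + 1
    return indeg, outdeg

def optimize(nodes, edges):
    """nodes: {node_id: task_name}; edges: list of (source_id, target_id) pairs.
    Repeatedly swap adjacent single-in/single-out nodes following H1 and H2."""
    swaps = 1
    while swaps:
        swaps = 0
        indeg, outdeg = degrees(edges)
        for src, dst in edges:
            # Only swap inside an unbranched chain (in- and out-degree equal to 1).
            if indeg.get(src, 0) != 1 or outdeg.get(src, 0) != 1:
                continue
            if indeg.get(dst, 0) != 1 or outdeg.get(dst, 0) != 1:
                continue
            cs = CATEGORY.get(nodes[src])
            ct = CATEGORY.get(nodes[dst])
            # H1: pull a restrictive target before a non-restrictive source.
            # H2: pull a non-blocking target before a blocking source.
            if (cs != "restrictive" and ct == "restrictive") or \
               (cs == "blocking" and ct == "non-blocking"):
                nodes[src], nodes[dst] = nodes[dst], nodes[src]  # exchange the tasks
                swaps += 1
    return nodes, edges

if __name__ == "__main__":
    nodes = {0: "input", 1: "sort", 2: "calc", 3: "filter", 4: "output"}
    edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
    print(optimize(nodes, edges)[0])
    # {0: 'input', 1: 'filter', 2: 'calc', 3: 'sort', 4: 'output'}
\end{lstlisting}
Run on the chain \emph{input \textrightarrow\ sort \textrightarrow\ calc \textrightarrow\ filter \textrightarrow\ output}, the sketch rewrites it to \emph{input \textrightarrow\ filter \textrightarrow\ calc \textrightarrow\ sort \textrightarrow\ output}, which is consistent with $H1$ (the restrictive \emph{filter} moves towards the root) and $H2$ (the non-blocking \emph{calc} is placed before the blocking \emph{sort}). The pseudocode as used in the current WMT implementation follows.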
\begin{lstlisting}[caption={Optimization algorithm}, label={lst:optimisation}, language=Python, firstnumber=1, mathescape=true]
Input: An initial workflow $W$
Output: An equivalent workflow $W^o$, optimized by the two heuristics
$W = (nodes, edges, tasks)$
$swaps = 1$
while $swaps <> 0$ do
    $swaps = 0$
    for edge in edges do
        $source = nodes[edge[sourceId]]$
        $target = nodes[edge[targetId]]$
        if $d_G^{+}(source) <> 1$ or $d_G^{-}(source) <> 1$ then continue
        if $d_G^{+}(target) <> 1$ or $d_G^{-}(target) <> 1$ then continue
        # heuristic 1
        if $category(task(source))$ is $`restrictive'$ then continue
        if $category(task(target))$ is $`restrictive'$ then
            $swap(source, target)$
            $swaps++$
            continue
        # heuristic 2
        if $category(task(source))$ is $`non-blocking'$ then continue
        if $category(task(target))$ is $`non-blocking'$ then
            $swap(source, target)$
            $swaps++$
            continue
$W^o = (nodes, edges, tasks)$
return $W^o$
end
\end{lstlisting}
%On the next step of the integration
%In future
In the future, the Optimizer will be based on processing-cost estimations of a workflow, which will be derived from the IRES Executor and Profiler. The heuristics will then be used to prune the state space, as described in Section \ref{sec:optimisation}.
\section{Integration}\label{sec:integration}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/ASAP_scheme}
\caption{WMT interaction with the other components of ASAP}\label{fig:ASAP_scheme}
\end{figure}
IRES is the component of ASAP that executes workflows (See D3.2 \cite{d3.2}); therefore, WMT will be tightly integrated with it (Figure \ref{fig:ASAP_scheme}). Currently, WMT uses the Tree-metadata language (See Section 3.1.1 in D3.2 \cite{d3.2}) to express operators and receives the list of existing operators from IRES (See Section 2.3 in D3.2 \cite{d3.2}).
\chapter{Workflow management tool functionality}\label{chapt:functionality}
\section{GUI}\label{sec:gui}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/scheme}
\caption{GUI of WMT}\label{fig:scheme}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{figs/lib_buttons}
\caption{List of library operators}\label{fig:lib_buttons}
\end{figure}
WMT provides a GUI to enable users to design workflows and perform analysis and optimization. The GUI consists of several areas (Figure \ref{fig:scheme}) that perform the following functions:
\begin{itemize}
\item Display the workflow (Area 1).
\item Add nodes and edges (Area 2). This process is depicted in Figure \ref{fig:steps}. First, the user adds a node, then she adds two tasks to it from the operator library, and finally she connects nodes and tasks.
\item Create a new workflow from scratch, save it and load it.
\item Perform workflow analysis or optimization.
\item Add tasks from the library (see Figure \ref{fig:lib_buttons}) or create new ones (Area 3). If the user adds a task from the library, it is accompanied by a set of metadata, i.e., properties that describe it. If a new task is created, the first levels of the metadata tree are predefined, but users can add ad-hoc subtrees to define their custom data or operators.
\item Display the metadata of the selected task (Area 4).
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/steps.png}
\caption{Steps of adding a node}\label{fig:steps}
\end{figure}
\section{Examples}\label{sec:examples}
ASAP focuses on the real-time analysis of Web content and telecommunications data.
This section presents several indicative use cases selected based on their relevance to the ASAP research.
\subsection{Web content analytics}
The use cases in this domain are centered on the services provided by Internet Memory Research as part of the Mignify platform (\url{www.mignify.com}). These services provide access to a very large collection of content extracted from the Web, cleaned, annotated and indexed in a distributed infrastructure mainly based on Hadoop components. ASAP focuses on extending and enriching the public workflow interface supplied by Mignify, referred to as pipes (queries associated with a set of intelligent agents to extract or transform large-scale web data).
\subsubsection{Stream processing}
This use case captures a typical form of the current IMR Web analytics pipeline. Figure \ref{fig:use_case_nlp} presents a workflow for this use case. Data are selected from the document store based on some conditions (\emph{select}). These data are processed in order to extract some text (\emph{calc}), and the extracted text is moved to a different data store and stored there with other text and annotations (\emph{move}). The output data are further processed via NLP-classification.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/nlp}
\caption{Workflow for NLP-classification}\label{fig:use_case_nlp}
\end{figure}
Figure \ref{fig:use_case_nlp_analysed} shows the result of the analysis module. The node containing the two tasks \emph{select} and \emph{calc} has been split into two single-task nodes.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/nlp-a}
\caption{Analysed workflow for NLP-classification}\label{fig:use_case_nlp_analysed}
\end{figure}
\subsection{Telecommunications data}
Call Detail Record (CDR) data are a good proxy for understanding human mobility. The sheer volume of this data poses new challenges when extracting and visualizing specific indicators. ASAP investigates applications such as the following:
\subsubsection{Peak detection}
This use case involves processing the anonymised CDR data of the past day by first selecting a spatial region and a temporal period (\emph{select}). For this region and period, the number of calls is calculated (\emph{calc}). Data and calculations from the CDR are archived (\emph{archive}) in a separate storage (\emph{history}). After calls are counted, the application proceeds with algorithmic processing that detects peaks (\emph{calc2}). The objective of this processing is to detect peaks in load according to a set of criteria. Criteria may include the minimum size of a region and/or period, the cut-off distance, or other parameters for selecting regions and periods. These parameters should be adjustable by the analytics engineer, marketing expert, etc., who uses the peak analysis results. The results of this workflow are added to a database (relational or graph DBMS) that contains peaks detected in previous data. The database of peaks can then be queried by a user to discover clusters of calls that occur with regularity, e.g., every week, to discover clusters of calls that occur without any regularity, or to run similar ad-hoc queries based on the pre-computed peak data. The workflow for this use case is shown in Figure \ref{fig:use_case_peaks}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{figs/peakdetection}
\caption{Workflow for the detection of peaks}\label{fig:use_case_peaks}
\end{figure}
Figure \ref{fig:use_case_peaks_analysed} shows the result of the analysis module.
The node containing the two tasks \emph{calc} and \emph{archive} has been split into two single-task nodes.
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{figs/peakdetection-a}
\caption{Analysed workflow for the detection of peaks}\label{fig:use_case_peaks_analysed}
\end{figure}
\subsection{Marketing analytics}
Figure~\ref{fig:use_tweets} displays the workflow of an analysis of a product marketing campaign. It combines sales data with sentiments about that product gleaned from tweets crawled from the Web. The result consists of the total sales and the average sentiment for each day of the campaign. Campaigns promote a specific product and are targeted at non-overlapping geographical regions. To simplify the presentation, we assume that the sentiment analysis of a tweet yields a single metric, i.e., a like/dislike score for the product on a scale from $-5$ to $+5$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/tweets}
\caption{Workflow analysing a product marketing campaign}\label{fig:use_tweets}
\end{figure}
Figure~\ref{fig:use_tweets_analysed} shows the analysed workflow. The Analyzer has added an associative task, \emph{buffer}, in order to convert the streaming data to batch data.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/tweets-a}
\caption{Analysed workflow for the marketing analytics}\label{fig:use_tweets_analysed}
\end{figure}
Figure~\ref{fig:use_tweets_optimised} depicts the result of the Optimizer's work. It has swapped the vertices for the totalSales calculation and the filtering of the sales data.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/tweets-o}
\caption{Optimised workflow for the marketing analytics}\label{fig:use_tweets_optimised}
\end{figure}
\chapter{Ongoing and future work}\label{chapt:future}
This chapter presents ideas leading towards multi-workflow optimization.
\section{Performance objective}\label{sec:objectives}
\textbf{Performance} is a metric of workflow execution expressed in terms of time and resources used. Let $W = \{W_1,\ldots, W_n\}$ be a list of workflows. These workflows arrive at time points $T^a = \{t^a_1,\ldots, t^a_n\}$. These two lists are not fixed, but change over time as new workflows are added. The two lists have equal size, initially the list of times consists of zeros, and $\forall i < j$, $t^a_i \leq t^a_j$. Workflow $W_{n+1}$ arrives at $t^a_{n+1}$, these elements are appended to the lists, and so on. A workflow $W_i$ is a graph $G$ as defined by the workflow model. Each manipulation operation (see Section \ref{subsec:transitions}) transforms a workflow graph $G$ into an equivalent graph $G'$. In other words, if an operation $f$ is applied to a workflow $W_i$, it produces a new workflow $W'_i$: $W'_i = f(W_i)$. Two workflows are equivalent if they produce the same output, given the same input. All possible equivalent workflows, or workflow states, can be produced by applying all applicable operations to every equivalent workflow state. We denote by $SL(W_i) = \{W^1_i,\ldots, W^k_i\}$ the list of all equivalent workflow states.
% $T^s = \{t^s_1,\ldots t^s_n\}$ and $ T^e = \{t^e_1,\ldots t^e_n\}$ are the lists of start and end times of execution for workflows respectively.
The goal is to minimize the total elapsed time between the arrival and the completion of the execution of the workflows:
$$\min \sum_{i=1}^{n} (t^e_i-t^a_i)$$
%As the workflows arrives continuously, the limit superior may not exist.
The end time of execution $t^e$ of a workflow depends on the starting time of its execution $t^s$, as well as on the \emph{processing cost} $C$ of this workflow.
The cost $C$ is related to the time intervals during which the workflow is processed on one or several machines (i.e. a parallelized execution) and depends on the state of the system (i.e. the utilization of resources by a pool of workflows) at a specific point in time. Consequently, the objective function is:
$$OF(t) = \min\sum_{i=1}^{n(t)} (t^s_i-t^a_i+C(W_i,t))$$
%As the processing cost $C(W_i,t)$ of a workflow depends on the state of the system (pool of workflows etc.).
The estimations of the processing cost of workflows will be provided as input by the IRES platform.
\section{Multi-workflow optimization}
Beyond optimising single workflows, we will explore the optimisation of multiple workflows. We will aim to find common or similar subgraphs that we can optimise once and execute once or a few times.
\begin{definition}{A `common part'} \label{def:commonpart} is a subgraph that occurs in two or more workflows; there is a bijection between these parts, and corresponding operators in vertices are equivalent. \end{definition}
Thus, our approach to multi-workflow optimization will be the joint execution of common parts (Definition \ref{def:commonpart}). We will try to rearrange workflows with common parts in order to enable such execution (see Figure \ref{fig:synccom}).
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{figs/synccommonparts}
\caption{Synchronisation of common parts of multiple workflows}\label{fig:synccom}
\end{figure}
However, there are cases of workflows with common parts for which such rearrangement, and, therefore, joint execution of common parts, is not possible. Such cases occur if subgraphs of common parts have dependencies on subgraphs of other common parts. We introduce the notion of the `mutual arrangement' of subgraphs:
\begin{definition} A vertex $v$ is reachable from another vertex $u$ if there is a directed path that starts at $u$ and ends at $v$. \end{definition}
\begin{definition} A subgraph $S$ depends on a vertex $v$ if there exists a vertex $u$ in $S$ such that $u$ is reachable from $v$. \end{definition}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/mutualpositions}
\caption{Mutual arrangement of subgraphs $A$ and $B$}\label{fig:mutpos}
\end{figure}
\begin{definition} \label{def:mutpos} The mutual arrangement of two subgraphs $A$ and $B$ may be (Figure \ref{fig:mutpos}):
\begin{itemize}
\item \textbf{MA1:} $A$ and $B$ are independent if there does not exist a pair of vertices of the two subgraphs for which one vertex depends on the other.
\item \textbf{MA2:} $A$ and $B$ are uni-dependent ($A$ depends on $B$) if there exists a vertex $v$ in $B$ such that $A$ depends on $v$, but there does not exist a vertex in $A$ on which $B$ depends.
\item \textbf{MA3:} $A$ and $B$ are cross-dependent if there exist vertices $v$ in $A$ and $u$ in $B$ such that $B$ depends on $v$ and $A$ depends on $u$.
\end{itemize}
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/cross-dep_of_2_w}
\caption{Cross-dependency of common parts $A$ and $B$ in workflows $W_1$ and $W_2$}\label{fig:cross-dep_of_2_w}
\end{figure}
Depending on the mutual arrangement of the subgraphs that belong to common parts of workflows (within or across workflows), it may or may not be possible to perform joint execution of more than one common part. We will illustrate this with an example. Let us consider two workflows $W_1$ and $W_2$ that have two common parts, $A$ and $B$.
Joint execution of both $A$ and $B$ is not possible if:
\begin{itemize}
\item In at least one workflow, the subgraphs of the common parts $A$ and $B$ are cross-dependent (MA3).
\item In workflow $W_1$, common part $A$ depends on common part $B$ (MA2), while in $W_2$, $B$ depends on $A$ (MA2), as depicted in Figure \ref{fig:cross-dep_of_2_w}.
\end{itemize}
We will devise techniques to efficiently discover such dependencies between common parts. Towards this goal, we will draw on the problem of the Longest and Heaviest Increasing Subsequence and its solutions \cite{Jacobson:1992:HIS:647812.738135,Yang:2005:FAC:1710865.1711092}, which seem very promising for dealing with such situations.
\chapter{Related Work}\label{chapt:related}
Most Workflow Management Systems (WMS) are described in the report D5.1~\cite{d5.1}. In this chapter we discuss only works that engage in optimization. Workflow management systems have emerged in order to provide easy-to-use specification of the tasks that have to be performed during data analysis. An essential problem that these systems need to solve is the combination of various tools for workflow execution and optimization over multiple engines into a single analysis system. The field of workflow management is relatively new, but there are already some promising results.
The HFMS system~\cite{6544907} builds on top of previous work on multi-engine execution optimization~\cite{Simitsis:2012:OAD:2213836.2213963}. This research focuses on optimization and execution across multiple engines. The design of flows in HFMS is agnostic to the physical implementation: data sets need not be bound to a data store, and operators need not be bound to an execution engine. HFMS handles flows as DAGs encoded in xLM, a proprietary language for expressing data flows. Work related to HFMS~\cite{5447816} and~\cite{Simitsis:2012:OAD:2213836.2213963} focuses on optimizing flows for several objectives: performance, fault-tolerance and freshness over multiple execution engines. Optimization is defined as a state space search problem. The states are obtained by a large number of possible transitions: swap, factorize, distribute, compose, decompose, partition, add recovery point, replicate, parallelization, function shipping, data shipping, etc. Due to the vastness of the state space, several techniques and heuristics are proposed to prune it. To assess the cost of states, cost functions are defined both for operations (each vertex in a flow graph) and for transitions. The work in~\cite{Simitsis:2013:XPA:2463676.2465247} describes the construction of cost functions in more detail, and shows how they can be defined by an engineer, during test runs, or using micro-benchmarks. The experiments demonstrate the feasibility of their technique by comparing the heuristic solutions to those found by exhaustive search.
Thus, a significant difference between the HFMS system and related works on the one hand and our research on the other is that we tackle, and aim to propose solutions for, a wider statement of the problem: the development of WMT in ASAP aims to manage workflows that have a variety of heterogeneous formats and can be processed on a variety of heterogeneous engines that may, furthermore, be distributed. Beyond this, the main difference between our research and HFMS and related works is that the latter focus only on single-workflow optimization, whereas our work will also address multi-workflow optimization.
%Nevertheless, their approach to optimization is extremely relevant to the objectives of ASAP and we believe that we can leverage this knowledge and generalize it in a more all-around framework.
\chapter{Summary}\label{chapt:summary}
This document describes the first version of WMT. This includes the declaration semantics of workflows as well as the design, analysis and optimization modules. Furthermore, we illustrate the operation of WMT on specific use cases from the telecommunications and web analytics domains. Finally, we present an initial discussion of multi-workflow optimization.
\bibliographystyle{plain}
\bibliography{biblio}
\end{document}
{ "alphanum_fraction": 0.7858937885, "avg_line_length": 73.7412587413, "ext": "tex", "hexsha": "a69463d22896bcef68728c0244b8bea1224aa890", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "09b2401f8f7a5c44c9dede95d5191cc79dfa559b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "project-asap/workflow", "max_forks_repo_path": "tech_report/report.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "09b2401f8f7a5c44c9dede95d5191cc79dfa559b", "max_issues_repo_issues_event_max_datetime": "2017-04-25T22:38:07.000Z", "max_issues_repo_issues_event_min_datetime": "2016-12-14T17:04:00.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "project-asap/workflow", "max_issues_repo_path": "tech_report/report.tex", "max_line_length": 1252, "max_stars_count": 2, "max_stars_repo_head_hexsha": "09b2401f8f7a5c44c9dede95d5191cc79dfa559b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "project-asap/workflow", "max_stars_repo_path": "tech_report/report.tex", "max_stars_repo_stars_event_max_datetime": "2018-04-10T21:45:29.000Z", "max_stars_repo_stars_event_min_datetime": "2017-11-08T22:55:53.000Z", "num_tokens": 10268, "size": 42180 }
\documentclass{article} \usepackage{igo} \usepackage{color} \usepackage{gooemacs} \usepackage{gnos} % \usepackage{jigofonts} -- Doesn't work currently. \usepackage[T1]{fontenc} \newcommand{\subtext}[1]{\centerline{\textit{#1}}} \begin{document} \title{Font Examples} \author{Kashomon} \maketitle \begin{center} \section*{Go Type-1 Fonts} \vspace{12pt} \begin{minipage}[t]{240pt} \gnosfontsize{12} {\gnos% <(((((((((((((((((>\\ \char91+++++++++++@\gnosOverlap{@}{\color{white}\footnotesize{45}}\gnosOverlap{@}{\color{white}\footnotesize{47}}+++]\\ \char91++!+@+++!@@!!\gnosOverlap{!}{\footnotesize{46}}!@+]\\ \char91++*!@+!+!@!++@*@+]\\ \char91+@+!@+@+@!!++++++]\\ \char91+++!!@@+@++++++++]\\ \char91+++++!+++++++++++]\\ \char91+@+++++++++++++++]\\ \char91+++++++++++++++!+]\\ \char91++*+++++*+++++*++]\\ \char91+++++++++++++++++]\\ \char91+++++++++++++++++]\\ \char91++++++++++++++!++]\\ \char91+!+++++++++++@!!+]\\ \char91++++++++++++++!@+]\\ \char91++*+++++*++++@*@+]\\ \char91++!+++++++++++@++]\\ \char91+++++++++++++++++]\\ ,))))))))))))))))).\\ } \centerline{\textit{\textbf{Diagram 23}}} \subtext{(Moves: 45-47)} \end{minipage} \end{center} \newpage \begin{center} \section*{Gnos} {\gnos% <((!((((((\\ !!!!@@++++\\ \char91\char64@@++++++\\ \char91\char43\char43\char42+++++*\\ \char91@++++++++\\ \char91+++++++++\\ } A Simple Go Problem! \end{center} \begin{center} \section*{Gooe} {\goo \0??<\0??(\0??(\- !(\0??(\0??(\0??(\0??(\0??(\0??( \- ![\- !+\- !+\- !+\- @+\- @+\0??+\0??+\0??+\0??+ \0??[\- @+\- @+\- @+\0??+\0??+\0??+\0??+\0??+\0??+ \0??[\0??+\0??+\0??*\0??+\0??+\0??+\0??+\0??+\0??* \0??[\- @+\0??+\0??+\0??+\0??+\0??+\0??+\0?l+\0??+ \0??[\0??+\0??+\0??+\0??+\0??+\0??+\0??+\0??+\0??+ } A Simple Go Problem! \end{center} \white{a18,b18,c18,d18,d19} \black{b17,c17,d17,e18,f18,b15} \begin{center} \section*{Igo} \shortstack{\showgoban[a14,k19]\\ A Simple Go Problem} \end{center} \newpage \begin{center} \section*{Gnos: Full Symbol Set} \vspace{12pt} { \gnosfontsize{14} { \gnos% <!((((((@((!(((((!>\\ !+1Cc2Ss3Tt4Xx++++!\\ \char91+@+++++++++++++++]\\ \char91+@*+++++*+++++*++]\\} {\gnosb\char1\char2\char3\char4\char5\char6\char7\char8\char9\char10\char11\char12\char13\char14\char15\char16\char17\char18}{\gnos]\\} {\gnosw\char1\char2\char3\char4\char5\char6\char7\char8\char9\char10\char11\char12\char13\char14\char15\char16\char17\char18}{\gnos]\\ \char91+++++++++++++++++]\\ \char91+++++++++++++++++]\\ \char91+++++++++++++++++]\\ \char91++*+++++*+++++*++]\\ \char91+++++++++++++++++]\\ \char91+++++++++++++++++]\\ \char91+++++++++++++++++]\\ \char91+++++++++++++++++]\\ \char91+++++++++++++++++]\\ \char91++*{\gnoswi\char11}{\gnoswii\char11}{\gnoswiii\char11}{\gnosbi\char11}{\gnosbii\char11}{\gnosbiii\char11}+++++*++]\\ \char91++++{\gnoswl\char1}{\gnosbl\char1}++++++\gnosEmptyLbl{\sffamily{10}}\gnosOverlap{!}{Z}\gnosOverlap{+}{\gnos 2}\gnosOverlap{@}{\color{white}Z}+]\\ !++++++++\gnosOverlap{+}{\gnos 3}\gnosOverlap{+}{\gnos 4}\gnosOverlap{+}{\gnos 1}++++++!\\ ,!))))@)!))))))))!.\\ }} 14pt: A board full of stuff! 
% Jigo is currently experimental % \section*{JIgo} % \jigosetfontsize{14} % {\jigogoban % {\jigofont% % \char6{\jigofontwi\char0}\char4\char4\char4\char4\char4{\jigofontbi \char0}\char4\char4{\jigofontwi \char 0}\char4\char4\char4\char4\char4\char4\char7\\ % {\jigofontwi\char0}\char1\char88{\jigofontbiii\char125}\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1{\jigofontwi\char0}\\ % \char2\char1{\jigofontbi\char0}\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1{\jigofontwi\char0}\\ % \char2\char1{\jigofontbi\char0}\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char1\char3\\ % } % } % {\rlap{\igo@gobanchar{#1}{#2}}\hskip\wd\igo@box}% \end{center} \end{document}
{ "alphanum_fraction": 0.5443369082, "avg_line_length": 28.7443609023, "ext": "tex", "hexsha": "210d998999063bf425359763fdcbab8c7cde8cf4", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2022-02-12T01:19:46.000Z", "max_forks_repo_forks_event_min_datetime": "2015-12-20T20:56:18.000Z", "max_forks_repo_head_hexsha": "75c8cecc878df4d31f2cd8d21527bf96aa29bcea", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Kashomon/go-type1", "max_forks_repo_path": "examples/full_comparison.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "75c8cecc878df4d31f2cd8d21527bf96aa29bcea", "max_issues_repo_issues_event_max_datetime": "2018-09-19T01:57:59.000Z", "max_issues_repo_issues_event_min_datetime": "2015-12-20T21:16:05.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Kashomon/go-type1", "max_issues_repo_path": "examples/full_comparison.tex", "max_line_length": 156, "max_stars_count": 25, "max_stars_repo_head_hexsha": "75c8cecc878df4d31f2cd8d21527bf96aa29bcea", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "otrego/go-type1", "max_stars_repo_path": "examples/full_comparison.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-15T14:52:33.000Z", "max_stars_repo_stars_event_min_datetime": "2020-11-27T22:25:38.000Z", "num_tokens": 1661, "size": 3823 }
%\include{preambleSDL}\include{newcommandsSDL}\include{hyphenationSDL}\begin{document}\tableofcontents\clearpage %%%%%%%%%%%%%%%%%%%% ALL THE ABOVE TO BE COMMENTED OUT FOR COMPLETE DOCUMENT! %%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% C H A P T E R %%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% C H A P T E R %%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Prosody}\label{ProsodicStructure} This description of the phonology of \PS\ begins with a discussion of prosodic structures before the segmental phonology is described. This choice of ordering is motivated by the important role that prosodic positions play in the distribution of phonemes (as well as in morphophonology). It is useful to first understand the prosodic structure of \PS\ words before looking at their segmental composition here, and later to better understand morphophonology. While there are a number of monosyllabic functional words, all \PS\ lexical forms and many functional words are minimally bisyllabic. The first two sections (\SEC\ref{monosyllabicWords} and \SEC\ref{polysyllabicWords}) describe the prosodic structures of these two groups of words. Then, utterance-level prosodic phenomena are dealt with in \SEC\ref{utteranceProsody}. %\vfill \section{Monosyllabic word structure}\label{monosyllabicWords} While the majority of \PS\ words are polysyllabic, a small set of functional words are monosyllabic. This set includes, for instance, some interjections\is{interjection}, conjunctions\is{conjunction} and pronouns\is{pronoun}. These monosyllabic words consist of at least one vowel\footnote{All vowel phonemes except \ipa{/u͡a/} are attested in monosyllabic words.} and one consonant. This consonant can be in either onset or coda position; it is also possible for both consonant positions to be filled. Consonant clusters\is{consonant cluster} are licensed in coda position as well. The possible segmental structure templates for monosyllabic words are listed with examples in Table~\vref{monosyllabicTemplates}. %\footnote{Here and below, “C” stands for a consonant phoneme and “V” for a vowel phoneme in representations of prosodic template structures.}%JW: removed footnote b/c abbrevs standard and in abbrev-list %\vfill %\hbox{} \begin{table}[htb]\centering \caption[Segmental templates for monosyllabic words]{Segmental templates for monosyllabic words}\label{monosyllabicTemplates} \begin{tabular}{lll l }\mytoprule &\MC{2}{c}{{examples}}& \\ {template} & {IPA} & {orth.} & {gloss} \\\hline {VC} &\ipa{aj} &\It{aj} & ‘also’ \\ &\ipa{ij} &\It{ij} & ‘isn’t’ (\Sc{neg\BS3sg.prs}) \\ {CV} &\ipa{tɛ} &\It{dä} & ‘then’ \\% &\ipa{lɛ} &\It{lä} & ‘is’ (be\BS\Sc{3sg.prs})\\%(be\BS{\sc 3sg.prs}) \\ &\ipa{jo} &\It{juo} & ‘already’ \\ {CVC} &\ipa{jus} &\It{jus} & ‘if’ \\ &\ipa{taːt} &\It{dát} & ‘that’ (\Sc{nom.sg}) \\ &\ipa{men} &\It{men} & ‘but’ \\ &\ipa{vaɲ } &\It{vanj} & ‘really’ \\ {CVCC}& \ipa{kujt} &\It{gujt} & ‘definitely’ \\ &\ipa{mejt} &\It{mejd} & ‘what’ (\Sc{acc.pl}) \\ {CVCCC}& \ipa{taːjst} &\It{dájst} & ‘from these’ (\Sc{dem}-\Sc{prox}-\Sc{elat.pl}) \\\mybottomrule \end{tabular} \end{table} %\FB \section{Multisyllabic word structure}\label{polysyllabicWords} All lexical forms and a large number of functional words in \PS\ are minimally bisyllabic. The smallest prosodic segmental structure attested for polysyllabic words is \begin{center}VCV\end{center} but larger words are both possible and common, and expand upon this minimal foundation; examples are provided throughout the following discussion. 
Due to a number of phenomena, it is sensible to posit a phonological domain, which, in following basic principles of phonology %\marginpar{get pages?} %(cf.~e.g. \citet[280--283]{dixon2010a1}) %citet{GussenJacobs1998}) and the prosodic hierarchy\is{prosodic hierarchy} %\marginpar{check Pros-Hier for best!} (cf.~e.g.,~\citealt[280--283]{dixon2010a}, \citealt{Selkirk1980}, \citealt{Hayes1989}, \citealt{NesporVogel1986}), I will refer to as a {foot}\is{foot}. A \PS\ foot is trochaic\is{stress} (counting from left to right) and essentially bisyllabic. Multisyllabic words with an odd number of syllables thus have a final (unstressed) syllable which falls outside of the last foot. \enlargethispage{\baselineskip} Whether such a final syllable should belong to the preceding foot or not is a theoretical question which will not be addressed here, but it should be noted that the segments in such syllables are subject to highly restrictive phonotactics compared to those clearly located within a trochaic foot.\footnote{Cf.~\SEC\ref{Vallophones} on the phonotactics of vowel phonemes.} Evidence for the foot as a domain can be found in prosodic (intonation, cf.~\SEC\ref{wordStress}; minimal size restrictions as described here), phonological (segmental restrictions, cf.~Chapter \ref{csANDvs}) and morphophonological (stem alternations and vowel harmony, cf.~\SEC\ref{morphophonology}) phenomena. \subsection{Word stress}\label{wordStress} The initial syllable\is{syllable} (cf.~\SEC\ref{syllabification} on syllabification) of a \PS\ foot always receives main stress\is{stress}. All other foot-initial syllables receive secondary stress. If a final syllable is odd, it does not receive any stress. As a result, the patterns of stressed and unstressed syllables presented in Figure~\vref{trochees} are attested in \PS; Table~\vref{syllTempExs} provides some examples for the syllabic structure\is{syllable!syllabic structure} of words with up to five syllables. \begin{figure}[ht] \centering \begin{tabular}{l} ˈσσ \\ ˈσσσ \\ ˈσσˌσσ \\ ˈσσˌσσσ \\ ˈσσˌσσˌσσ \\ ˈσσˌσσˌσσσ \\ \end{tabular} \caption[Trochaic rhythmic patterns in Pite Saami]{Trochaic rhythmic patterns in Pite Saami; here, σ stands for a syllable}\label{trochees} \end{figure} \begin{table}[htb]\centering \caption{Four common syllable structure patterns}\label{syllTempExs} \begin{tabular}{llll}\mytoprule {pattern} &\MC{2}{l}{{example}} &{gloss}\\\hline ˈσσ & \ipa{/ˈko.le/} &\It{guole} & fish\BS\Sc{nom.pl} \\ ˈσσσ & \ipa{/ˈbet.na.ka/} &\It{bednag-a} & dog-\Sc{nom.pl}\\ ˈσσˌσσ & \ipa{/ˈsaːlp.ma.ˌkirː.je/} &\It{sálbma-girrje} & psalm-book\BS\Sc{nom.sg}\\% (hymnal)\\ ˈσσˌσσσ & \ipa{/ˈkuh.ka.ˌjol.ki.kijt/} &\It{guhka-juolgi-gi-jd} & long-leg-\Sc{nmlz}-\Sc{acc.pl}\\\mybottomrule% (moose)\\ \end{tabular} \end{table} Note that some recent borrowings from Swedish\is{language contact} deviate from these structures by having an initial unstressed syllable, as they do in Swedish. For instance, the example in \REF{loanerSylTemp} is from Swedish \It{departement}. \PhonGlossWL{loanerSylTemp}{deˈparteˌmɛnːta}{deˈparteˌmɛnːta}{departemännta}{department\BS\Sc{nom.sg}}{3583} The acoustic correlates for stress seem to be intensity\is{intensity} and pitch\is{pitch}. Note that vowel length\is{vowel length} does not play a role in stress\is{stress}. Indeed, there are words with a short first vowel and a long second vowel that receive stress on the first syllable, as in the two examples in \REF{have2SGPRSa} and \REF{fishDIM1}. 
\PhonGloss{have2SGPRSa}{ˈanaː}{ˈanaː}{aná}{have\BS\Sc{2sg.prs}}{101208}{.246} \PhonGloss{fishDIM1}{ˈkolaː-ʧ}{ˈku͡ɔlaːʧ}{guolátj}{fish-\Sc{dim}}{110413a}{.067} However, more detailed analyses is required to fully describe the acous\-tic\--pho\-ne\-tic behavior of word-level stress, including the difference between primary and secondary stress. \subsection{Relevant prosodic domains}\label{prosodicDomains}\is{prosodic domains} Due to systematic restrictions on the distribution of a number of segments and consonant clusters as well as to the prosodic domains of \is{morphophonology}morphophonological processes, it is useful to name and describe various prosodic positions for polysyllabic words. The domains themselves are described below, while the relevant phonological restrictions and morphophonological processes are described in the pertinent sections on consonant phonemes (\SEC\ref{consonants}), vowel phonemes (\SEC\ref{vowelPhonemes}) and morphophonology (\SEC\ref{morphophonology}). Only a very limited number of recent loan words do not adhere to this structure. The schema in Figure~\vref{prosodicDomainFigure} shows these prosodic positions, and they are described further below. In the schema, only segments represented by bold capital letters are obligatory. %%%new commands for the following figure only: %%mit defaul-font: (prefer sans-font for these) \newcommand{\Cyes}{\Bf{C}} \newcommand{\Cno}{c} \newcommand{\Vyes}{\Bf{V}} \newcommand{\Vno}{v} \begin{figure}[htb]\centering \tikzset{terminal/.style={rectangle,minimum size=10mm,rounded corners=1mm,thick,draw=red!50!black!80,top color=white,bottom color=red!40!black!20}} \tikzset{terminalBIG/.style={rectangle,minimum size=20mm,rounded corners=1mm,thick,draw=red!50!black!80,top color=white,bottom color=red!70!black!20}} \tikzset{terminalLONG/.style={rectangle,minimum width=55mm,rounded corners=3pt,thick,left color=white,right color=white,middle color=red!50!black!50}} %\newcommand{\Slant}{anchor=east,xshift=3mm,yshift=4mm} \begin{tikzpicture}[node distance=1mm] %boxed in Cs and Vs: \node (initium1) [terminal] {\Cno\Cno\Cno};\node (Vcenter1) [terminal,right=of initium1] {\Vyes};\node (Ccenter1) [terminal,right=of Vcenter1] {\Cno\Cno\Cyes};\node (latus1) [terminal,right=of Ccenter1] {\Vyes};\node (Cmargin1) [terminal,right=of latus1] {\Cno\Cno\Cno};\node (Vmargin1) [terminal,right=of Cmargin1] {\Vno};\node (finis1) [terminal,right=of Vmargin1] {\Cno\Cno\Cno}; %break between levels %labels: \node (initium2) [rotate=65,below=of initium1,anchor=north east,xshift=1mm,yshift=3mm] {foot onset};\node (Vcenter2) [rotate=65,below=of Vcenter1,anchor=north east,xshift=1mm,yshift=3mm] {V1};\node (Ccenter2) [rotate=65,below=of Ccenter1,anchor=north east,xshift=1mm,yshift=3mm] {consonant center};\node (latus2) [rotate=65,below=of latus1,anchor=north east,xshift=1mm,yshift=3mm] {V2};\node (Cmargin2) [rotate=65,below=of Cmargin1,anchor=north east,xshift=1mm,yshift=3mm] {C2};\node (Vmargin2) [rotate=65,below=of Vmargin1,anchor=north east,xshift=1mm,yshift=3mm] {V3};\node (finis2) [rotate=65,below=of finis1,anchor=north east,xshift=1mm,yshift=3mm] {C3};%\node (Ccenter1) [terminalBIG,right=of Vcenter1,xshift=-5mm] {\Huge CCC}; %footification: \node (foot) [terminalLONG,above=of Ccenter1,anchor=south,xshift=0mm,yshift=0mm] {foot}; \end{tikzpicture} \caption[Illustration of prosodic domains for segments and the foot]{Illustration of prosodic domains for segments and the foot; segments represented by \Cyes\ and \Vyes\ are obligatory, while \Cno\ and \Vno\ are 
not.}\label{prosodicDomainFigure}% (based on \cite{Sammallahti1998}:39); this illustration does not reflect any quantity distinctions for these slots.} \end{figure} Note that Table~\vref{domainExamples} provides examples of how a word’s segments fill these positions; it may be useful to refer to this to better understand these prosodic domains. \subsubsection{Foot}\label{foot}\is{foot}\is{stress} A {foot} in \PS\ is a prosodic unit consisting of a stressed syllable and the following unstressed syllable, and is thus trochaic. Every multi-syllabic \PS\ word consists of at least one foot. \subsubsection{Foot onset}\label{footOnset} {Foot onset} position is the first consonant or consonant cluster of a foot. It is not obligatorily filled. In Saamic linguistics, this has typically been referred to as the ‘initium’ \citep[cf.][39]{Sammallahti1998}. \subsubsection{V1}\label{v1} {V1} is the first vowel of a foot, and is the peak of the stress-carrying syllable for the foot. It can be long or short, and can be a monophthong or a \is{diphthong}diphthong. The vowel in the final V1 position\footnote{Only words of four or more syllables can have more than one V1 position.} of a word is the location for umlaut and ablaut/\It{j}-harmony (cf.~\SEC\ref{umlaut} and \SEC\ref{VH}). In Saamic linguistics, this has been referred to as the ‘vowel center’ \citep[cf.][39]{Sammallahti1998}. \subsubsection{The consonant center}\label{CCent}\is{consonant center} The {consonant center} is the consonant or consonants that follow V1 (the initial vowel) and precede V2 (the second vowel), and essentially form the core of a foot. Every foot has a consonant center. The final consonant segment of the consonant center is the onset of the second syllable due to syllabification (cf.~\SEC\ref{syllabification}). The final consonant center of a word is the location for consonant gradation. The term ‘consonant center’ is commonly used in Saami linguistics \citep[cf.][39]{Sammallahti1998}. \subsubsection{V2}\label{v2} {V2} is the second vowel of a foot. It never carries stress. Every foot has a vowel in this position. With the exception of the \is{diphthong}diphthong phoneme \ipa{/u͡a/}, all vowel phonemes are attested here. In Saamic linguistics, this has been referred to as the ‘latus’ \citep[cf.][39]{Sammallahti1998}. \subsubsection{C2}\label{CMarg} {C2} is the consonant or consonants following V2. It is not obligatorily filled. If it is followed by a V3, then its final segment is resyllabified as the onset of the following syllable. In Saamic linguistics, this has also been referred to as the ‘consonant margin’ \citep[cf.][39]{Sammallahti1998}. \subsubsection{V3}\label{v3} {V3} is the unstressed vowel of any syllable following the final foot of a polysyllabic word form, and is thus the last syllable nucleus of a word (when present). Only a limited set of vowel phonemes can occur in this position. If the V3 position is filled, then there is always a consonant margin as well. In Saamic linguistics, this has been referred to as the ‘vowel margin’ \citep[cf.][39]{Sammallahti1998}. \subsubsection{C3}\label{c3} {C3} is a consonant or one of a limited set of consonant clusters (cf.~\SEC\ref{CCsWordfinal}) following V3 of a polysyllabic word, and is thus always word-final (when present). It is not obligatorily filled. If the C3 position is filled, then there is always a vowel in V3 and a consonant margin as well. In Saamic linguistics, this has been referred to as the ‘finis’ \citep[cf.][39]{Sammallahti1998}. 
\subsubsection{Discussion and examples}\label{exampleFootedness} Table~\vref{domainExamples} provides several examples for \PS\ polysyllabic words and how their segments fill the prosodic domains described above. Note that there are always segments in V1, the consonant center\is{consonant center} and V2, forming a sort of ‘minimal core.’ \begin{table}[htb]\centering \caption{Examples showing how the segments of \PS\ words fill prosodic domains}\label{domainExamples} \resizebox{1\linewidth}{!} { \begin{tabular}{ l c |c c c| c c c l }\mytoprule &\MC{7}{c}{{p\,r\,o\,s\,o\,d\,i\,c\hspace{3em}d\,o\,m\,a\,i\,n\,s}} &\\\cline{3-5} % &\MC{7}{c}{\It{prosodic domains}} &\\\cline{3-5} % &\MC{1}{c}{\It{foot}} &\MC{3}{c}{}& & & &\\\cline{3-5} {IPA} &{foot\,onset}&{V1}&{C-center}&{V2}&{C2} &{V3}&{C3}&{gloss} \\\hline \ipa{ane } &\ipa{ } &\ipa{a } &\ipa{n } &\ipa{e } &\ipa{ } &\ipa{ } &\ipa{ } & have\BS\Sc{sg.imp}\\ %\ipa{eno } &\ipa{ } &\ipa{e } &\ipa{n } &\ipa{o } &\ipa{ } &\ipa{ } &\ipa{ } & river\BS\Sc{nom.pl}\\ \ipa{pena } &\ipa{p } &\ipa{e } &\ipa{n } &\ipa{a } &\ipa{ } &\ipa{ } &\ipa{ } & dog\BS\Sc{nom.sg}\\ \ipa{atne-t } &\ipa{ } &\ipa{a } &\ipa{tn } &\ipa{e } &\ipa{t } &\ipa{ } &\ipa{ } & have-\Sc{inf}\\ \ipa{kolːe } &\ipa{k } &\ipa{o } &\ipa{lː } &\ipa{e } &\ipa{ } &\ipa{ } &\ipa{ } & fish\BS\Sc{nom.sg}\\ \ipa{kolːaː-j } &\ipa{k } &\ipa{o } &\ipa{lː } &\ipa{aː } &\ipa{j } &\ipa{ } &\ipa{ } & fish-\Sc{ill.sg}\\ \ipa{vaːjpmo } &\ipa{v } &\ipa{aː } &\ipa{jpm } &\ipa{o } &\ipa{ } &\ipa{ } &\ipa{ } & heart\BS\Sc{nom.sg}\\ \ipa{lu͡akːta-j } &\ipa{l } &\ipa{u͡a } &\ipa{kːt } &\ipa{a } &\ipa{j } &\ipa{ } &\ipa{ } & bay-\Sc{ill.sg}\\ \ipa{ʃɲerːa } &\ipa{ʃɲ } &\ipa{e } &\ipa{rː } &\ipa{a } &\ipa{ } &\ipa{ } &\ipa{ } & rat\BS\Sc{nom.sg}\\ \ipa{uvːata } &\ipa{ } &\ipa{u } &\ipa{vː } &\ipa{a } &\ipa{t } &\ipa{a } &\ipa{ } & kiss\BS\Sc{2sg.prs}\\ \ipa{puʰʦu-jta } &\ipa{p } &\ipa{u } &\ipa{ʰʦ } &\ipa{u } &\ipa{jt } &\ipa{a } &\ipa{ } & reindeer-\Sc{ill.pl}\\ \ipa{saːkasta-v } &\ipa{s } &\ipa{aː } &\ipa{k } &\ipa{a } &\ipa{st } &\ipa{a } &\ipa{v } & say-\Sc{1sg.prs}\\ \ipa{petnaki-st } &\ipa{p } &\ipa{e } &\ipa{tn } &\ipa{a } &\ipa{k } &\ipa{i } &\ipa{st } & dog-\Sc{elat.sg}\\\mybottomrule & &\MC{3}{c|}{{minimal core}}& & & &\\\cline{3-5}%\dline \end{tabular}} \end{table}Only the final foot of a word can be followed by a single, odd syllable with V3 and potentially C3 segments. Similarly, only the final foot of a word is subject to morphophonological\is{morphophonology} phenomena (cf.~\SEC\ref{morphophonology}). For instance, \It{sálbmagirrje} ‘book of psalms, hymnal’ is a compound consisting of \It{sálbma} ‘psalm’ and \It{girrje} ‘book’. It consists of two feet: \It{sálbma-} and \It{-girrje}. %\enlargethispage{\baselineskip} \begin{table}[h!]\centering \caption{Examples showing how the scope of consonant gradation is limited to the final foot of a word}\label{hymnalExample} \begin{tabular}{lll}\mytoprule \Sc{nom.sg} &\Sc{acc.sg} &{gloss} \\\hline \It{sálbma} &\It{sálmav} & ‘psalm’ \\ \It{girrje} &\It{girjev} & ‘book’ \\ \It{sálbmagirrje} &\It{sálbmagirjev} & ‘hymnal’ \\ &\It{*sálmagirjev} & \\\mybottomrule \end{tabular} \end{table}It is not possible to add another syllable (e.g.,~via suffixation) between these two feet because they belong to the same compound noun. 
Furthermore, the inflected form for \Sc{acc.sg} is \It{sálbmagirjev}, in which consonant gradation (weakening of \It{rr} to \It{r}) is only triggered in the second foot, even though the first foot undergoes gradation (weakening of the cluster \ipa{/lpm/} to \ipa{/lm/}) in non-compound environments, cf.~\It{sálmav} ‘psalm-\Sc{acc.sg}’. This is illustrated by the word forms in Table~\vref{hymnalExample}. %\FB \subsection{Syllabification}\label{syllabification}\is{syllable!syllabification} The distribution of vowel phonemes between consonant phoneme slots patterns clearly, particularly with respect to intonation and the distribution of vowel phonemes. This, along with the sonority sequencing principle \citep[cf.~e.g.,~][]{Selkirk1984}, indicates that vowels are the nuclei of \PS\ syllables. However, the location of syllable boundaries is not as easy to determine; in fact, syllable boundaries are not highly relevant in \PS\ prosody. Because the consonant center\is{consonant center} has by far the widest variety of consonants and consonant combinations of any of the consonant positions, it is best to consider this position first. \enlargethispage{.5\baselineskip} Although the consonant center spans the preceding and following syllabic nuclei, there is no solid phonotactic or phonological evidence for where the syllable boundary is located inside the consonant center. Table~\vref{syllabificationPatternsInCCent} lists the possible syllabification patterns for the consonant center. \newcommand{\HSP}{\hspace*{4pt}}%new command to coordinate spacing in following table only \begin{table}[ht]\centering \caption{Logically possible syllabification patterns for the consonant center}\label{syllabificationPatternsInCCent} \begin{tabular}{ll}\mytoprule%\centering {C-center segment count} &{possible patterns} \\\hline one C & V.CV \HSP VC.V \\%\hline two Cs & V.CCV \HSP VC.CV \HSP VCC.V \\ three Cs & V.CCCV \HSP VC.CCV \HSP VCC.CV \HSP VCCC.V \\\mybottomrule \end{tabular} \end{table} Maximizing onsets, the patterns V.CCCV, VC.CCV and V.CCV would create highly unusual onsets (such as \ipa{/vkŋ/}, \ipa{/pm/} or \ipa{/vɲ/}) unattested in any other onset positions. Similarly, trying to maximize codas, the patterns VCCC.V and VCC.V would also result in highly unusual codas (such as \ipa{/vkŋ/} or \ipa{/vɲ/}) unattested in any other coda positions. While the pattern VCC.CV would also create some otherwise unattested codas (such as \ipa{/vt/} or \ipa{/rk/}), these are phonologically similar to attested word-final codas such as \ipa{/st/} or \ipa{/jk/} (also fricative\PLUS plosive and oral-sonorant\PLUS plosive, respectively). The patterns VC.CV, V.CV and VC.V result in onsets and codas which are not unusual. However, keeping in mind the far greater diversity of single consonant phonemes licensed in word-initial onset position compared to word-final coda position, syllabification favoring singleton onset consonants results in onsets and codas which most resemble word-initial onsets and word-final codas. Note that even then, onsets and codas in positions other than the consonant center form subsets of the possibilities in consonant center position (with the exception of a few non-native word-onset clusters). It is therefore most plausible that syllables are assigned a single consonant segment as an onset in syllabification. 
The examples in \REF{have2SGPRSb} through \REF{laughINF} %\footnote{Cf.~\SEC\ref{examplesExample} for an explanation of the sources in the examples in the current chapter.}
show some results of this syllabification for a variety of consonant constellations in the consonant center.
\PhonGlossWL{have2SGPRSb} {anaː} {a.naː} {aná} {have\BS\Sc{2sg.prs}} {6278}
\PhonGlossWL{haveINF} {atne-t} {at̚.net} {adnet} {have-\Sc{inf}} {0006}
\PhonGloss{bayNOMPL} {lokta} {lʊ͡okʰ.ta} {luokta} {bay\BS\Sc{nom.pl}} {080702b}{.54m38s}
%\PhonGlossLongSource{bayNOMPL} {lokta} {lʊ͡okʰ.ta} {luokta} {bay\BS\Sc{nom.pl}} {080702b}{.54m38s}
\PhonGlossWL{boxDIM} {kisto-ʧ} {kis.toʧ} {gistotj} {box-\Sc{dim}} {6048}
%\PhonGloss{spoonNOMPL} {piste} {pis.te} {biste} {spoon\BS\Sc{nom.pl}} {-}
\PhonGlossWL{work1SGPRS} {parka-v} {par.kaʋ} {bargav} {work-\Sc{1sg.prs}} {6241}
\PhonGloss{laughINF} {ʧaːjpma-t}{ʧaːjp̚.matʰ} {tjájbmat} {laugh-\Sc{inf}} {100323a}{.001}
This syllabification preference for a single onset segment can be applied to syllable boundaries outside of the consonant center, as shown in \REF{speakINFa} and \REF{happinessCOMSG}.
\PhonGlossWL{speakINFa} {saːkasti-t} {saː.kas.titʰ} {ságastit} {speak-\Sc{inf}} {1480}
\PhonGlossWL{happinessCOMSG}{ɛvu-jna} {ɛ.vuj.na} {ävujna} {happiness-\Sc{com.sg}} {4372}
When the consonant center consists of a geminate\is{geminate} phoneme (cf.~\SEC\ref{geminateCs}), there is no phonological test which indicates where the syllable boundary is located; indeed, this syllable boundary is likely not relevant in \PS\ phonology. With this in mind, a symbolic syllable boundary is postulated somewhere within a phonological geminate; this divides the geminate symbolically into two component parts and results in syllables conforming to a syllable template with a singleton in the onset. In the examples in \REF{whileNOMSG} through \REF{fatherNOMSGa}, this symbolic boundary is placed in the middle of the geminate.
\PhonGlossWL{whileNOMSG} {pɔtːɔ} {pɔt̚.tɔ} {båddå} {while\BS\Sc{nom.sg}} {0231}
\PhonGlossWL{nameNOMSG} {namːa} {nam.ma} {namma} {name\BS\Sc{nom.sg}} {3433}
\PhonGloss{milkingbowlNOMSG} {naːʰpːe} {naːhp̚.pe} {náhppe} {milking.cup\BS\Sc{nom.sg}} {080621}{.54m38s}
\PhonGlossWL{fatherNOMSGa} {aːʰʧːe} {aːht̚.ʧe} {áhttje} {father\BS\Sc{nom.sg}}{0016}
If a geminate precedes another consonant segment in the consonant cluster, the syllabification border is after the geminate, as in \REF{spoonNOMSG}:
\PhonGlossWL{spoonNOMSG} {pisːte} {pisː.te} {bisste} {spoon\BS\Sc{nom.sg}} {0190}
When the final consonant in the consonant center is preaspirated\is{aspiration!preaspiration}, there is again no test for the location of a syllable boundary, as with geminates. With this in mind, a symbolic boundary is posited between the realization of preaspiration (cf.~\SEC\ref{preaspiration}) and the rest of the preaspirated segment, which results in syllables conforming to a syllable template with a singleton in the onset. The examples in \REF{houseNOMSGa} and \REF{endNOMSG} show a preaspirated plosive and affricate, respectively, with a syllable boundary indicated between the glottal fricative (preaspiration) and the stop or affricate component.
\PhonGlossWL{houseNOMSGa} {tɔʰpe} {tɔh.pe} {dåhpe} {house\BS\Sc{nom.sg}} {0416}
\PhonGlossWL{endNOMSG} {keʰʧe} {keh.ʧe} {gehtje} {end\BS\Sc{nom.sg}} {0594}
The same is the case when a geminate precedes a preaspirated segment, as in the example in \REF{churchNOMSG}. % and \REF{wantINF}.
% \PhonGlossWL{churchNOMSG} {kirːʰko} {kirr̥.ko} {girrko} {church\BS\Sc{nom.sg}} {0640} Note, however, as pointed out above, the actual position of syllable boundaries in \PS\ does not seem to be relevant in other areas of prosody. For this reason, the consonant center is a preferable prosodic domain to consider when describing prosody and phonotactics, and thus is referred to regularly in the following descriptions. \subsection{A note on syllables and feet}\label{footedness}\is{stress} With the above description on stress in polysyllabic word structure in mind, it should become clear that syllables\is{syllable} are only relevant for bearing word stress and creating feet\is{foot}, while feet form a relevant unit on several levels (prosodic, phonological, morphophonological). As a result, it could be more useful to rephrase the ‘bisyllabic minimal word structure’ as ‘obligatory footedness’ for \PS\ lexical items and many functional words. Furthermore, the choice of the term ‘foot’ to describe this minimal size requirement may not be ideal because the edges of the \PS\ foot are quite irrelevant to morphophonological\is{morphophonology} processes. Instead, the \It{V1+Consonant-Center+V2} core is a vital domain for morphophonology, while segments at the edges are not relevant. Perhaps a better descriptive term would be ‘minimal core’. \section{Utterance-level prosody}\label{utteranceProsody} \subsection{Intonation in utterances}\label{utteranceIntonation}\is{intonation}\is{stress} While the following observations are of a preliminary nature, and a more thorough study must be left for future investigation, the relative intensity of stressed syllables in declarative utterances in \PS\ tends to decrease towards the end of the utterance, with the final stressed lexical item, and particularly the final syllable, being realized with noticeably lower intensity than the beginning syllables. As an example, the waveform and intensity trace for the utterance glossed in \REF{intonationDropEx} are provided in Figure~\vref{intonationGraphic}. % \setlength\fboxrule{0pt} \begin{figure}[htb] \fbox{\includegraphics[width=.95\textwidth]{images/pit090826-003.pdf}} \caption{Waveform and intensity trace illustrating the drop in intensity at the end of an utterance}\label{intonationGraphic} \end{figure} \ea\label{intonationDropEx} \glll da lä tjakktja ja gillgijmä sarvajd njuovvat\\ da lä tjakktja ja gillgi-jmä sarva-jd njuovva-t\\ then be\BS\Sc{3sg.prs} autumn\BS\Sc{nom.sg} and will-\Sc{1pl.pst} reindeer.bull-\Sc{acc.pl} slaughter-\Sc{inf}\\\nopagebreak \Transl{It is autumn and we will slaughter the reindeer bulls.}{} \Corpus{090826}{003} \z \FB Here, the initial syllable nucleus of the first lexical item in the sentence \It{tjakttja} has an intensity of 69.7 dB. The other lexical items hover between 63.5 dB and 70.3 dB. The final lexical item \It{njuovvat} begins at 64 dB on the initial syllable nucleus, and drops abruptly to 50 dB on the final syllable nucleus. \subsection{Utterance-final weakening}\label{utteranceFinalDevoicing}\is{devoicing} The final two or three syllables of a declarative utterance in \PS\ can be weakened as a way to mark the end of an utterance.\footnote{\It{Arjeplogsmål}, the local \is{language contact}Swedish dialect, features a similar phenomenon.} This weakening is typically realized by completely devoicing the final one, two or three syllables, often to the point that these are essentially whispered. Alternatively, this can be realized as creaky voice instead of voicelessness. 
For instance, in the example utterance depicted in the waveform in Figure~\vref{utteranceFinalDevoicingGraphic} and transcribed in \REF{utteranceFinalDevoicing1}, devoicing occurs in the final lexical word \It{giesev} ‘summer’, which contains the last two syllables of the utterance.
\begin{figure}[htb]
\fbox{\includegraphics[width=.95\textwidth]{images/pit090826-012.pdf}}
\caption{Waveform illustrating utterance-final weakening}\label{utteranceFinalDevoicingGraphic}
\end{figure}
\ea\label{utteranceFinalDevoicing1}%JW: not really good example because ’s’ is voiceless anyway, although the Vs are obviously devoiced - the point is that it’s nearly whispered, which is clearer in the wave-form
\glll da’l adnam buorak, buorak giesev\\
\ipa{t-a=l} \ipa{at̚na-m} \ipa{pʊ͡ɔrakʰ} \ipa{pʊ͡ɔrak} \ipa{k{ɪ̥͡e̥se̥-v̥}}\\
\Sc{dem}-\Sc{3pl.nom}=be\BS\Sc{3pl.prs} have-\Sc{prf} good good summer-\Sc{acc.sg}\\\nopagebreak
\Transl{They have had a good summer.}{}
\Corpus{090826}{012}
%\ex\label{utteranceFinalDevoicing1}
%\glll dä dieda nubbe sábme diehta guk botsoj vuojdnuj\\
% tɛ tɪ͡eta nupːe saːp̚me tɪ͡eʰta kukʰ poʦoj vʊ͡oj\Bf{j̥t̚n̥ɔ̥j̥}\\
% then know\BS\Sc{2sg.prs} other Saami\BS\Sc{nom.sg} know\BS\Sc{3sg.prs} how reindeer\BS\Sc{nom.sg} look-\Sc{3sg.pst}\\
%\Transl{then you know, the other Saami knows what the reindeer looked like}{} \Corpus{100405b.067}
\z
The lack of energy in the waveform corresponding to \It{giesev} shows clearly that the word is weakened significantly compared to the rest of the utterance. Note that even the vowels are completely devoiced. It is not clear what triggers this devoicing; future study, particularly into the possibility that it functions as a turn marker in discourse, is needed.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% S E G M E N T A L   P H O N O L O G Y %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% S E G M E N T A L   P H O N O L O G Y %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% S E G M E N T A L   P H O N O L O G Y %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% S E G M E N T A L   P H O N O L O G Y %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% S E G M E N T A L   P H O N O L O G Y %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% S E G M E N T A L   P H O N O L O G Y %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Segmental phonology}\label{csANDvs}
\PS\ has 43 consonant phonemes and 9 vowel phonemes. In the present chapter, the first section (\SEC\ref{consonants}) deals with consonant phonemes, their allophonic variation, and consonant clusters. The second section (\SEC\ref{vowelPhonemes}) covers vowel phonemes, their allophonic variation, and schwa-epenthesis.
%\Red{Note that \SEC\ref{orthography} provides an overview of the working \PS\ orthography and how it relates to \PS\ phonology. While most of the discussion in this chapter on segmental phonology includes both phonemic and phonetic transcriptions, other chapters only feature the orthographic representation when phonology is not directly relevant. }%moved from intro to Ch. 2, now part of explaining examples in Ch. 1
\section{Consonants}\label{consonants}\label{CphoneInventory}
The consonant phoneme inventory of Pite Saami can be %\marginpar{is the C-inventory too small? better sideways table on single page?}
found in Table~\vref{Cphonemes}. There are plain and preaspirated phonemes for all plosive and affricate positions, and geminate and singleton pairs for all categories except the glottal fricative \ipa{/h/}. Preaspirated, geminate and preaspirated geminate phonemes are restricted to the consonant center position.
\begin{table}[ht]\centering
\caption[Consonant phoneme inventory]{Consonant phoneme inventory}\label{Cphonemes}\is{phoneme inventory!consonants}
\resizebox{1\linewidth}{!}
{ %\begin{tabular}{l| l l l l l l l|}
\begin{tabular}{r| c c c c c c c|}
\MC{1}{l}{} & {bilabial} & {labiodental} & {alveolar} & {post-alveolar} & {palatal} & {velar} & \MC{1}{c}{{glottal}}\\\cline{2-8}
{plosive} &\ipa{p ʰp pː ʰpː} &&\ipa{t ʰt tː ʰtː}&&&\ipa{k ʰk kː ʰkː}&\\%6
{affricate} &&&\ipa{ʦ ʰʦ ʦː ʰʦː} &\ipa{ʧ ʰʧ ʧː ʰʧː}&&&\\%4
{fricative} &&\ipa{f fː v vː} &\ipa{s sː}&\ipa{ʃ ʃː}&&&\ipa{h}\\%5
{nasal} &\ipa{m mː}&&\ipa{n nː} &&\ipa{ɲ ɲː} &\ipa{ŋ ŋː}&\\%4
{trill}&&&\ipa{r rː}&&&&\\%1
{approx.}&&&\ipa{l lː}&&\ipa{j jː}&&\\\cline{2-8}%2
%total: 22
\end{tabular}}
\end{table}
A description of the consonant phonemes and the distribution of their relevant allophones can be found in \SEC\ref{Callophones} for each manner of articulation. This is followed by a discussion of consonant clusters. For the sake of clarity, the term \It{postaspiration}\is{aspiration!postaspiration} will be used here to refer to what is commonly referred to simply as \It{aspiration}\is{aspiration}; this decision also emphasizes the contrast to \It{preaspiration}\is{aspiration!preaspiration}, which is, in fact, more relevant for Pite Saami than postaspiration.
%\FB
\subsection{Consonant phonemes and allophonic variations}\label{Callophones}
After a brief note on preaspiration in the following section (\SEC\ref{preaspiration}) and a discussion of gemination in \SEC\ref{geminateCs}, the consonant phonemes and their allophones are described in the remaining sections (\SEC\ref{Plosives} through \SEC\ref{oralSonorants}). They are grouped based on manner of articulation.
\subsubsection{Preaspiration}\label{preaspiration}\is{aspiration!preaspiration}\is{preaspiration|see {aspiration}}\noindent
In Pite Saami, preaspirated\footnote{As is hopefully evident from this discussion on preaspiration, the term ‘preaspiration’ is not entirely accurate from a phonetic-acoustic point of view since the acoustic correlate of this phenomenon is not actually aspiration in all cases. Nonetheless, there are several reasons to select this term: 1. in the majority of cases, the acoustic correlate is in fact preaspiration, 2. this phonemic phenomenon is often referred to as preaspiration in the literature on %\PS\ \citep{}
other Saami languages (cf.~e.g.,~\citealt[54--55]{Sammallahti1998} %\citet[15,18]{Svonni2009}
%Svonni2009 never defines preasp, just uses the word here and there
or \citealt[57,67]{Feist2010}), and 3. preaspiration can be reconstructed for the half-long and long Proto-Saami plosive and affricate phonemes \citep[cf.][54]{Sammallahti1998} that became the current \PS\ preaspirated phonemes.} phonemes only occur in consonant center position. While preaspirated phonemes can be plosives or affricates, the phenomenon that goes along with them is essentially the same.
%, and does not interact with the length of the phoneme it is part of.
The period of aspiration\is{aspiration}, i.e., the voicelessness preceding the formation of the oral closure, is realized in different ways and depends on the preceding segment. If the preceding segment is a voiced continuant %\marginpar{check for better term than ‘continuant’?}
%JW: continuant probably best; Marijn can’t think of a better term
consonant, then the final part of that segment is devoiced. While a minimal contrast between a voiceless obstruent preceding a preaspirated consonant phoneme and a voiceless obstruent preceding a plain counterpart phoneme (e.g.,~\ipa{/st/} vs. \ipa{/sʰt/}) is theoretically possible, this cannot be detected because such a consonant cannot be devoiced as it is already voiceless (e.g.,~\mbox{\ipa{/st/\ARROW [st]}} and \mbox{\ipa{/sʰt/\ARROW [st]}}). %\footnote{Obstruents never immediately precede \ipa{/ʰp ʰt ʰk/.}
When following a high front vowel \ipa{/i/}, preaspiration is realized as a voiceless palatal fricative \ipa{[ç]}. In all other cases, preaspiration is a voiceless glottal fricative \ipa{[h]}. This is summarized in Table~\vref{preaspRealization}. %on page \pageref{preaspRealization}.
\begin{table}[ht]\centering
\caption{The phonetic realizations of preaspiration}\label{preaspRealization}
\begin{tabular}{lll}\mytoprule
{preceding segment} & {realization of preaspiration} &{example} \\\hline
voiced consonant & end devoicing of voiced consonant &\ipa{/mʰp/\ARROW\ [mm̥p]} \\%\hline
%\MC{2}{|c|}{ex.: /...mʰp.../}\ARROW [...mm̥p...]}\\
front high vowel \ipa{/i/} & voiceless palatal fricative \ipa{[ç]} &\ipa{/iʰp/\ARROW\ [içp]}\\%\hline
%\MC{2}{|c|}{ex.: /...iʰp.../}\ARROW [...çp...]}\\
other vowels & voiceless glottal fricative \ipa{[h]} &\ipa{/aʰp/\ARROW\ [ahp]}\\\mybottomrule%\hline
%\MC{2}{|c|}{ex.: /...aʰp.../}\ARROW [...ahp...]}\\
\end{tabular}
\end{table}
\subsubsection{Geminates}\label{geminateCs}\is{geminate}
The occurrence of geminate consonants is restricted to the consonant center\is{consonant center}. %and indeed common
As illustrated in Table~\vref{Cphonemes}, only the glottal fricative \ipa{/h/} does not have a comparable geminate phoneme. %In such cases, the two phonemes are realized together as a geminate.
Geminate segments are realized with a longer overall duration than the corresponding singleton phonemes. This observation is based not only on speakers’ own judgments that such sounds are ‘longer’, but also on my own observations and analyses, including an acoustic-phonetic comparison of duration in geminate phonemes; cf.\,\SEC\ref{plosiveDurationComparison} for a detailed comparison of non-voiced plosive durations in the consonant center. For plosives\is{plosive} and affricates\is{affricate}, only one stop closure is formed, and the overall duration of the stop closure is longer than the duration of a corresponding single consonant. Some examples are provided in \REF{outside} through \REF{motherNOMSG}.
\PhonGlossWL{outside}{ta{pː}en}{ta{pː}en}{dabben}{\Sc{dem}\BS\Sc{iness.sg}}{3672}
\PhonGloss{ribbonNOMSG}{pa{tː}e}{pa{tː}e}{badde}{ribbon\BS\Sc{nom.sg}}{080701b}{.082}%{swadesh?}
\PhonGloss{windNOMSG1}{pɛ{kː}a}{pɛ{kː}a}{bägga}{wind\BS\Sc{nom.sg}}{080702b}{.067}%{swadesh?}
\PhonGlossWL{goINF}{vaː{ʦː}e-t}{vaː{tːs}etʰ}{vádtset}{go-\Sc{inf}}{2049}
\PhonGlossWL{motherNOMSG}{ʧi{ʧː}e}{ʧi{tːʃ}e}{tjidtje}{mother\BS\Sc{nom.sg}}{3618}
Preaspirated plosive and affricate geminates also exist.
%the first segment can be preaspirated, in which case, the period of preaspiration tends to be longer than for a single preaspirated segment not followed by the corresponding homorganic plosive or affricate.
%Similarly, a preaspirated plosive or affricate can be followed by the corresponding plain plosive or affricate.
In such geminates, both the preaspiration phase and the following stop closure are longer than in the corresponding preaspirated singleton consonants. Some examples are found in \REF{cleverPREDSG} through \REF{fatherNOMSGb}.
\PhonGloss{cleverPREDSG} {ʧɛ{ʰpː}e} {ʧɛ{hpː}e} {tjähppe} {clever\BS\Sc{pred.sg}} {090930b}{.077}
%\PhonGloss{milkingbowlNOMSG} {naː{ʰpp}e} {naː{hpː}e} {náhppe} {milking\_cup\BS\Sc{nom.sg}} {080621}
\PhonGloss{canINF} {maː{ʰtː}e-t} {maː{htː}etʰ} {máhttet} {can-\Sc{inf}} {080926}{.03m21s}
%\PhonGloss{wolfNOMSG} {kum{ʰpp}e} {kum{m̥pː}e} {gummpe} {wolf\BS\Sc{nom.sg}} {0671}
%\PhonGloss{workINF} {par{ʰkk}at} {par{r̥kː}atʰ} {barrgat} {work-\Sc{inf}} {101208.005}
\PhonGlossWL{turnAround3SGPST}{maː{ʰʦː}a-j}{maː{htːs}aj}{máhttsaj}{turn.around-\Sc{3sg.pst}}{6613}
\PhonGlossWL{fatherNOMSGb}{aː{ʰʧː}e}{aː{htːʃ}e}{áhttje}{father\BS\Sc{nom.sg}}{0016}
For all other consonants, the overall duration of a geminate is longer than for the corresponding singleton consonant. Examples of such geminates are provided in \REF{areaNOMSG} through \REF{letINF}.
\PhonGlossWL{areaNOMSG} {taː{fː}o} {taː{fː}o} {dáffo} {area\BS\Sc{nom.sg}} {2367}%
%\PhonGloss{rapids} {raː{ff}e} {raː{fː}e} {ráffe} {rapids\BS\Sc{nom.sg}} {2713}%
\PhonGlossWL{peaceNOMSG} {raː{vː}e} {raː{vː}e} {rávve} {peace\BS\Sc{nom.sg}} {1366}%
\PhonGlossWL{partNOMSG} {ɔ{sː}e} {ɔ{sː}e} {åsse} {part\BS\Sc{nom.sg}} {2269}
\PhonGlossWL{horsetailNOMSG} {ɔ{ʃː}e} {ɔ{ʃː}e} {åssje} {horsetail\BS\Sc{nom.sg}} {2270}
\PhonGloss{suck3SGPRS}{ɲa{mː}a}{ɲa{mː}a}{njamma}{suck\BS\Sc{3sg.prs}}{080701b}{.005a}
\PhonGlossWL{littleBitNOMSG}{pi{nː}a}{pi{nː}a}{binna}{little.bit\BS\Sc{nom.sg}}{2446}
%\PhonGloss{eggNOMSG}{mɔ{nn}e}{mɔ{nː}e}{månne}{egg\BS\Sc{nom.sg}}{080621}
\PhonGloss{daughterInLawNOMSG}{ma{ɲː}e}{ma{ɲː}e}{mannje}{daughter.in.law\BS\Sc{nom.sg}}{080621}{.74m04s}
%\PhonGlossLongGloss{daughterInLawNOMSG}{ma{ɲː}e}{ma{ɲː}e}{mannje}{daughter\_in\_law\BS\Sc{nom.sg}}{080621}{.74m04s}
\PhonGloss{after}{ma{ŋː}el}{ma{ŋː}el}{maŋŋel}{after}{080924}{.529}
\PhonGlossWL{iceOverINF}{kɔ{rː}oti-t}{kɔ{rː}otɪtʰ}{gårrodit}{ice.over-\Sc{inf}}{4693}
\PhonGlossWL{fireNOMSG}{tɔ{lː}ɔ}{tɔ{lː}ɔ}{dållå}{fire\BS\Sc{nom.sg}}{0421}%KORRECTED from NOM:PL to SG -- 18.09.2013
\PhonGlossWL{letINF}{paː{jː}a-t}{paː{jː}atʰ}{bájjat}{let-\Sc{inf}}{3439}
Due mostly to the nature of morphophonemic\is{morphophonology} stem alternations, there are numerous minimal pairs differing only in the presence of a singleton versus a geminate consonant; cf.~\SEC\ref{Cgrad} on consonant gradation for examples.
%, and the series of two identical segments has often been treated as a phonemic geminates or as morphologically triggered gemination in the literature on both \PS\ and other Saami languages with similar phenomena (cf.~\cite{Lehtiranta1992}:46-47; \cite{Sammallahti1998}:46-50; ).
It is also possible to have geminate fricative or sonorant phonemes, as described above, followed by a plosive or affricate phoneme, as illustrated by the examples in \REF{tableNOMSG} through \REF{tightPREDSG}.
\PhonGlossWL{tableNOMSG} {pɛ{vː}te} {pɛ{vː}te} {bävvde} {table\BS\Sc{nom.sg}} {0289}% \PhonGlossWL{rapidsNOMSG} {lu{sː}pe} {lu{sː}pe} {lusspe} {rapids\BS\Sc{nom.sg}} {1077}% \PhonGlossWL{shall3SGPRS} {ka{lː}ka} {ka{lː}ka} {gallga} {will\BS\Sc{3sg.prs}} {6626}% \PhonGlossWL{tightPREDSG} {kaː{rː}ʰʧe} {kaː{rr̥}ʧe} {gárrtje} {tight\BS\Sc{pred.sg}} {0554}% A series of two identical consonant phonemes can arise at the internal stem boundary of a compound. In such cases, the resulting duration can be longer than for a single plosive, but is not necessarily so, as the two phonemes are often realized as a singleton, as in \REF{cervicalVertebraeNOMSG}. \PhonGlossWL{cervicalVertebraeNOMSG}{ʧepo{t}\PLUS{t}aːk:te}{ʧi͡epo{t}aːkʰte}{tjiebotdákkte\footnotemark}{cervical.vertebra\BS\Sc{nom.sg}}{3771} %\PhonGlossLong{cervicalVertebraeNOMSG}{ʧepo{t}\PLUS{t}a:k:te}{ʧi͡epo{t}a:kʰte}{tjiebotdákkte\footnotemark}{cervical\_vertebra\BS\Sc{nom.sg}}{}{3771} \footnotetext{The word \It{tjiebotdákkte} literally means ‘throat-bone’, cf.~\It{tjiebot} ‘throat’ and \It{dákkte} ‘bone’.} Due to the morpheme boundary separating such segments, it is clear that this is not a case of a geminate phoneme, even if the realization may resemble that of a geminate. \subsubsection{Plosives}\label{Plosives}\is{plosive|(}%%%%%%%% PLOSIVES %%%%%%%%%%%%%%% The plosive series in Pite Saami consists of the phonemes and their phonetic realizations shown in Table~\vref{PlosivePhonemes}. The distribution of the allophones will be discussed here. As all three relevant places of articulation behave in much the same way, the various manners of articulation for each place will be treated together. \begin{table}\centering \caption{Plosive phonemes and their realizations}\label{PlosivePhonemes} \begin{tabular}{l c l}\mytoprule \ipa{/p/}&:&\ipa{[p] [pʰ] [p̚\,]} \\ %should be written with <b> \ipa{/pː/}&:&\ipa{[pː]} \\ %should be written with <bb> \ipa{/ʰp/}&:&\ipa{[hp] [ ̥p] [çp] } \\ %should be written with <hp> \ipa{/ʰpː/}&:&\ipa{[hpː]} \\ %should be written with <hpp> %\ipa{/b/}&:&\ipa{[b]} \\ %only loans (?) \ipa{/t/}&:&\ipa{[t] [tʰ] [t̚\,]} \\%unreleased even before palatal (see butter.sg.nom)! probably just +CORONAL \ipa{/tː/}&:&\ipa{[tː]} \\ \ipa{/ʰt/}&:&\ipa{[ht] [ ̥t] [çt]} \\ \ipa{/ʰtː/}&:&\ipa{[htː]} \\ %\ipa{/d/}&:&\ipa{[d]} \\ %only loans (?) \ipa{/k/}&:&\ipa{[k] [kʰ] [k̚\,]} \\ \ipa{/kː/}&:&\ipa{[kː]} \\ \ipa{/ʰk/}&:&\ipa{[hk] [ ̥k] [çk] } \\ \ipa{/ʰkː/}&:&\ipa{[ʰkː]} \\ %\ipa{%/g/}&:&\ipa{[g]} \\ %only loans (?) \mybottomrule \end{tabular} \end{table} \paragraph{Voiceless singleton plosives}\label{ptk} The segments \ipa{/p\,t\,k/} are bilabial, alveolar and velar (respectively) voiceless singleton plosive phonemes. %With the exception of their individual places of articulation, they essentially exhibit the same behavior and will be described together here. %JW: redundant, see previous paragraph The voiceless singleton plosives can occur in all prosodic consonant positions and are subject to allophonic variation, depending on the prosodic environment. In syllable-onset position, a plain (unaspirated) voiceless plosive \ipa{[p\,t\,k]} is produced, as seen in examples \REF{dogNOMSG} through \REF{goodADV1}. %\footnote{In these and the examples in the following sections on other consonant and vowel phonemes, the underlying phonological representation, the phonetic form, the orthographic form, the gloss, and the source of the phonetic form are listed for each instance and in that order. 
%The relevant phones and phonemes are in bold face.}
\PhonGloss{dogNOMSG}{{p}ena}{{p}i͡ena}{bena}{dog\BS\Sc{nom.sg}}{090926}{.057}
\PhonGloss{wish3DUPRS}{saːvːa-{p}a}{saːʋːa{p}aʰ}{sávvabah}{wish-\Sc{3du.prs}}{100323a}{.060}
\PhonGloss{2SGNOM}{{t}ɔj}{{t}ɔj}{dåj}{\Sc{2sg.nom}}{100323a}{.014}
\PhonGloss{cheeseNOMPL}{vos{t}a}{ʋu͡ɔs{t}a}{vuosta}{cheese\BS\Sc{nom.pl}}{080917c}{.09m47s}
\PhonGloss{widePREDSG}{{k}op{t}ok}{{k}opʰ{t}okʰ}{gåbdåk}{wide\BS\Sc{pred.sg}}{091001}{.035}
\PhonGloss{drink1SGPRS}{ju{k}a-v}{jʊ{k}ɑʋ}{jugav}{drink-\Sc{1sg.prs}}{100323a}{.115}
\PhonGloss{goodADV1}{{p}ora-{k}it}{{p}u͡ora{k}ɪtʰ}{buoragit}{good-\Sc{advz}}{100323a}{.213}
%\PhonGloss{queenNOMSG}{\Bf{t}rotnək}{\Bf{t}rot̚nɘkʰ}{drodnik\footnotemark}{queen.\Sc{nom.sg}}{missing}
%\footnotetext{from Swedish \It{drottning}}
%\PhonGloss{icecreamNOMSG}{\Bf{k}laːsːa}{\Bf{k}laːsːa}{glássa\footnotemark}{ice\_cream.\Sc{nom.sg}}{missing}
%\footnotetext{from Swedish \It{glass} (<French \It{glace})}
The plain voiceless singleton pronunciations \ipa{[p\,t\,k]} %\marginpar{mention that {[k]} is farther back in velar position (sounds almost uvular) than in English/German/Swedish?}
are also found in consonant clusters in word-onset position, an environment usually found in recent and older loan words from (North) Germanic, as in examples \REF{queenNOMSG} %\marginpar{Check placement of footnote in \REF{queenNOMSG}}
and \REF{icecreamNOMSG}. % \clearpage
%\PhonGloss{queenNOMSG}{\Bf{t}rotnik}{\Bf{t}rot̚nikʰ}{drodnik}{queen\BS\Sc{nom.sg}}{missing}%\footnotetext{from Swedish \It{drottning}.}
%\PhonGloss{icecreamNOMSG}{\Bf{k}laːsːa}{\Bf{k}laːsːa}{glássa}{ice\_cream\BS\Sc{nom.sg}}{missing}%\footnotetext{from Swedish \It{glass} (<French \It{glace}).}
\PhonGlossWL{queenNOMSG}{{t}rotnik}{{t}rot̚\,nikʰ}{drodnik\footnotemark}{queen\BS\Sc{nom.sg}}{0377}\footnotetext{From Swedish\is{language contact} \It{drottning}.}
\vspace{-1em}\PhonGlossWL{icecreamNOMSG}{{k}laːsːa}{{k}laːsːa}{glássa\footnotemark}{ice.cream\BS\Sc{nom.sg}}{0529}\footnotetext{From Swedish\is{language contact} \It{glass} (<French \It{glace}).}
The voiceless singleton plosive phonemes are postaspirated word-finally\footnote{Note, however, that speakers from the northern parts of \PS\ territory tend to voice the voiceless plosive phonemes as \ipa{[b d g]} in word-final position.} as \ipa{[pʰ\,tʰ\,kʰ]}, as in examples \REF{be1PLPRS} through \REF{alonePRED}. This is also the case across an internal compound boundary, as in \REF{whoopingCoughNOMSG}.\footnote{The word \It{båktjanitgåssås} in \REF{whoopingCoughNOMSG} refers to the whooping cough, and is a compound composed of \It{båktjanit} ‘to suffocate’ and \It{gåssås} ‘cough’.}
%\marginpar{check if this is true at internal border of compounds or before nonhomorganic nasal? also check grade of ex. \REF{hairNOMPL} and \REF{marshNOMSG} - maybe this is true for all phonemic lengths!}%maybe also at the internal word-border of a compound (but what are the criteria for compounds then?); maybe also before a nonhomorganic nasal?
\PhonGloss{be1PLPRS}{orːo-{p}}{orːo{pʰ}}{årrop}{be-\Sc{1pl.prs}}{100323a}{.158} \PhonGloss{goodADV2}{pora-ki{t}}{pʊ͡ɔrakɪ{tʰ}}{buoragit}{good-\Sc{advz}}{100323a}{.213} \PhonGloss{alonePRED}{ikto{k}}{ikʰto{kʰ}}{iktuk}{alone}{100323a}{.169} %\PhonGloss{marshNOMSG}{kɔ{tː}kɔ}{kɔ{tʰ}kːɔ}{gåttkå}{marsh\BS\Sc{nom.sg}}{}{0762}%check if this is grade I %\PhonGlossLong{1}{2}{3}{4}{5}{6} \PhonGlossWL{whoopingCoughNOMSG}{pɔ{kʧ}ani{t}\PLUS{k}ɔsːɔs}{pɔ{kʰʧ}ani{tʰk}ɔsːɔs}{båktjanitgåssås}{whooping.cough\BS\Sc{nom.sg}}{5993} %\PhonGlossLong{whoopingCoughNOMSG}{pɔ{kʧ}ani{t}\PLUS{k}ɔs:ɔs}{pɔ{kʰʧ}ani{tʰk}ɔs:ɔs}{båktjanitgåssås\footnotemark}{whooping\_cough\BS\Sc{nom.sg}}{}{5993} %\footnotetext{The word \It{båktjanitgåssås} refers to the whooping cough, and is a compound composed of \It{båktjanit} ‘to suffocate’ and \It{gåssås} ‘cough’.} When preceding a non-homorganic plosive or affricate, a short aspiration occurs between the two segments, as in \REF{hairNOMPL} here, and in \REF{alonePRED} and \REF{whoopingCoughNOMSG} above. \PhonGloss{hairNOMPL}{vo{p}ta}{ʋu͡o{p}ʰta}{vuopta}{hair\BS\Sc{nom.pl}}{080701b}{.092}%check if this is grade I For all three plosive singleton phonemes, the closure is not released when a homorganic consonant %sonorant follows; they are then realized as \ipa{[p̚ t̚ k̚\,]}. Word-internally, this situation is only found in the consonant center\is{consonant center} and with a homorganic sonorant, as shown in examples \REF{heartNOMSG1} through \REF{butterNOMSG}. \PhonGloss{heartNOMSG1}{vaːj{p}mo}{ʋaːj{p̚}mo}{vájbmo}{heart\BS\Sc{nom.sg}}{080701b}{.115} \PhonGloss{have2DUIMP}%different example here for unreleased alveolar voiceless plosive before another alveolar C! {e{t}ni-t}{e{t̚}nɪtʰ}{ednit}{have-\Sc{2du.imp}}{100323a}{.251} \PhonGloss{iceNOMSG}{je{k}ŋa}{je{k̚}ŋa}{jegŋa}{ice\BS\Sc{nom.sg}}{080702b}{.070} \PhonGloss{butterNOMSG}{vo{t}ja}{vʊ͡ɔ{t̚}ja}{vuodja}{butter\BS\Sc{nom.sg}}{080926}{.01m15s} %Check this transcription! %\PhonGloss{be3PLPSTa}{li{t}jin}{li{t̚}jɪn}{lidjen}{be-\Sc{3pl.pst}}{081011.049}%JW: decided this example isn’t necessary, plus it’s a big weird b/c it’s listed as lijjin in the worlist %\PhonGlossLong{cervicalVertebraeNOMSG}{ʧepo{t}\PLUS{t}a:k:te}{ʧi͡epo{t}a:kʰte}{tjiebotdákkte\footnotemark}{cervical\_vertebra\BS\Sc{nom.sg}}{3771} %\footnotetext{\It{tjiebotdákkte} is a cervical vertebra, literally \It{tjiebot} ‘throat’ and \It{dákkte} ‘bone’.} \paragraph{Voiceless geminate plosives} The segments \ipa{/pː\,tː\,kː/} are bilabial, alveolar and velar (respectively) geminate plosive phonemes. They are very restricted in their distribution as they only occur in the consonant center\is{consonant center} %(cf.~\SEC\ref{CCent}) % in the literature on Saami languages, but this refers to morphophonological status, which I try to avoid here in this purely phonological description. One could also refer to this position in terms of feet; then, Pite Saami would have trochaic feet counting from the left, in which case the geminate plosive phonemes only occur foot-internally.} and never occur word-initially or word-finally. %In speech, the oral closure of these phonemes tends to have a duration of between 200 and 300 milliseconds.\marginpar{does this need more solid statistical evidence?} Examples for the geminate plosive phonemes can be found in examples \REF{there} through \REF{windNOMSG1b}. 
\PhonGlossWL{there}{to{pː}en}{to{pː}en}{dobben}{yonder}{3581} \PhonGloss{ribbonNOMSGb}{pa{tː}e}{pa{tː}e}{badde}{ribbon\BS\Sc{nom.sg}}{090930a}{.215} \PhonGloss{windNOMSG1b}{pɛ{kː}a}{pɛ{kː}a}{bägga}{wind\BS\Sc{nom.sg}}{080621}{.77m27s} When preceding a non-homorganic plosive or affricate, a short aspiration occurs between the two segments, as in \REF{bayNOMSG1}. \PhonGloss{bayNOMSG1}{lu͡a{kː}ta}{lʊ͡a{kː}ʰta}{luakkta}{bay\BS\Sc{nom.sg}}{080917c}{.03m51s} %%%%These are examples for \ipa{/ʰp ʰt ʰk/!!! %\begin{exe} %\PhonGloss{milkingbowlNOMPL}%not relevant for \ipa{/pː/! %\begin{tabular}{p{2cm} p{2cm} p{1.5cm} p{4cm} l} %/naː\Bf{ʰp}e}{naː\Bf{ʰp}ɘ}{náhpe} &'milking\_cup-\Sc{nom.pl}}{080621} %\end{tabular} %\PhonGloss{want3SGPRS} %\begin{tabular}{p{2cm} p{2cm} p{1.5cm} p{4cm} l} %[si\Bf{ʰt}a}{si\Bf{tː}a}{sihta} &'want.\Sc{3sg.prs}}{080926} %\end{tabular} %\PhonGloss{drinkINF} %\begin{tabular}{p{2cm} p{2cm} p{1.5cm} p{4cm} l} %[ju\Bf{ʰk}atʰ}{ju\Bf{kː}at}{juhgat} &'drink-\Sc{inf}}{100323a.114} %\end{tabular} %\end{exe} \paragraph{Preaspirated singleton plosives} %\paragraph[Preaspirated singleton plosives \\/ʰp ʰt ʰk/]{Preaspirated singleton plosives \ipa{/ʰp ʰt ʰk/} The segments \mbox{\ipa{/ʰp\,ʰt\,ʰk/}} are preaspirated\is{aspiration!preaspiration} bilabial, alveolar and velar (respectively) singleton plosive phonemes. They are only licensed as the sole consonant or the final consonant segment in the consonant center\is{consonant center}. The phonetic realization of preaspiration depends on the preceding phoneme, as described in \SEC\ref{preaspiration}. Examples can be found in \REF{milkingbowlNOMPL} through \REF{louseNOMSG}. \PhonGloss{milkingbowlNOMPL} {naː{ʰp}e} {naː{hp}e} {náhpe} {milking.cup\BS\Sc{nom.pl}} {080621}{.55m16s} \PhonGloss{houseNOMSGb} {tɔ{ʰp}e} {tɔ{hp}e} {dåhpe} {house\BS\Sc{nom.sg}} {100310b}{.083} \PhonGloss{towardsNorth} {nurː{ʰt}as} {nur{r̥t}as} {nurrtas} {towards.the.north} {081011}{.177} \PhonGloss{thankACCSG} {kij{ʰt}o-v} {kij{j̥t}oʋ} {gijtov} {thanks-\Sc{acc.sg}} {080621}{.11m45s} \PhonGlossWL{louseNOMSG} {ti{ʰk}e} {ti{çk}e} {dihke} {louse\BS\Sc{nom.sg}} {2359} \paragraph{Preaspirated geminate plosives} The plosive segments \mbox{\ipa{/ʰpː\,ʰtː\,ʰkː/}} are preaspirated\is{aspiration!preaspiration} bilabial, alveolar and velar (respectively) geminate\is{geminate} phonemes. They are only licensed as the sole consonant in the consonant center. Examples can be found in \REF{cleverPREDSGb} through \REF{canINFb}; cf.~\SEC\ref{preaspiration} on allophonic variation in preaspiration. \PhonGloss{cleverPREDSGb} {ʧɛ{ʰpː}e} {ʧɛ{hpː}e} {tjähppe} {clever\BS\Sc{pred.sg}} {090930b}{.077} \PhonGloss{milkingbowlNOMSGb} {naː{ʰpː}e} {naː{hpː}e} {náhppe} {milking.cup\BS\Sc{nom.sg}} {080621}{.54m38s} \PhonGloss{canINFb} {maː{ʰtː}e-t} {maː{htː}etʰ} {máhttet} {can-\Sc{inf}} {080926}{.03m21s} %% the following two are actually geminate sonorants+singleton-plosive!! %\PhonGlossWL{wolfNOMSG} {kum{ʰpː}e} {kum{m̥pː}e} {gummpe} {wolf\BS\Sc{nom.sg}} {0671} %\PhonGloss{workINF} {par{ʰkː}a-t} {par{r̥kː}atʰ} {barrgat} {work-\Sc{inf}} {101208}{.005} \paragraph[Non-voiced plosive durations in the consonant center]{Comparison of non-voiced plosive durations in the consonant center}\label{plosiveDurationComparison} While a thorough study of length phenomena in \PS\ is beyond the scope of the current study, it is worth noting the actual duration which the various plosive phonemes are realized with in the consonant center. 
%\footnote{The consonant center is the only position in which all plosive and affricate phonemes occur.}%JW: footnote not relevant! Specifically, %\marginpar{add praat illustrations for these, at the latest for the final/print version} %JW: not worth it, wouldn’t really show much anyway, durations sufficient! a plain geminate\is{geminate} plosive and a singleton preaspirated plosive have approximately the same duration for the period of stop closure (around 300ms), and are at least 100ms longer than plain singleton plosives and 100ms shorter than a preaspirated geminate plosive. In this respect, plain geminate plosives and preaspirated singleton phonemes seem to group together concerning stop closure duration. %, plain singletons are significantly shorter than preaspirated singletons, and plain geminates are significantly shorter than preaspirated geminates. Table~\vref{stopClosureComparision} %on page \pageref{stopClosureComparision} shows some (near) minimal sets and duration measurements as a comparison. However, this alignment seems to be irrelevant phonologically. \begin{table}[htb]\centering \caption{Minimal sets or near minimal sets for comparison of stop closure durations (in milliseconds) for plain and preaspirated singletons and geminates}\label{stopClosureComparision} \resizebox{\columnwidth}{!}{% \begin{tabular}{c c c c c }\mytoprule%\cline{2-5}%\hline \MC{1}{c}{} &{plain single} & {plain geminate} &{preasp. single} &{preasp. geminate}\\%\dline%\multirow{2}{*}{{preasp. geminate}\\ \MC{1}{c}{} &\ipa{/p/} & \ipa{/pː/} & \ipa{/ʰp/} & \ipa{/ʰpː/}\\%\cline{2-5} \MC{1}{c}{} &145-150ms &320-340ms &280-340ms &460-490ms\\%\cline{2-5}%\hline \MC{1}{c}{} &{short} &\MC{2}{c}{{medium}} &{long} \\\hline \MR{4}{*}{\begin{sideways}{set 1}\end{sideways}} & \ipa{[tɔ{p}e]} & \ipa{[tu{pː}en]} & \ipa{[tɔ{hp}e]} & \ipa{[tɔ{hpː}o]} \\ % &tɔ.{p}e & tu{p.p}en & tɔ{ʰ.p}e & tɔ{ʰp.p}o\\%JW: here with syllable boundaries! &150ms &320ms & 340ms &490ms\\ &house\BS\Sc{gen.sg} &outside &house\BS\Sc{nom.sg} &sheath\BS\Sc{nom.sg}\\ &\hyperlink{pit100310b}{\small[pit100310b]} & \small[3581] &\hyperlink{pit100310b}{\small[pit100310b]} & \small[3627]\\\cline{2-5} \MR{4}{*}{\begin{sideways}{set 2}\end{sideways}} & \ipa{[naː{p}erti-t]} & \ipa{[nu{pː}e]} & \ipa{[naː{hp}e]} & \ipa{[naː{hpː}e]} \\ % &naː{.p}er.ti-t &nu{p.p}e & naː{ʰ.p}e & naː{ʰp.p}e\\%JW: here with syllable boundaries! &145ms &340ms &280ms &460ms\\ &drill-\Sc{inf} &other &milking.bowl\BS\Sc{nom.pl} &milking.bowl\BS\Sc{nom.sg} \\ &\small[3378] &\small[1317] &\hyperlink{pit080917a}{\small[pit080917a]} &\hyperlink{pit080917a}{\small[pit080917a]} \\\mybottomrule%\cline{2-5} % &145-150ms &\MC{2}{c}{280-340ms} &460-490ms\\\hline %\MC{1}{c}{} &\It{short &\MC{2}{c}{\It{medium} &\It{long \\\cline{2-5} \end{tabular}} \end{table} \is{plosive|)} %\FB %%%%%%%% AFFRICATES %%%%%%%%%%%%%%% %%%%%%%% AFFRICATES %%%%%%%%%%%%%%% \subsubsection{Affricates}\label{Affricates}\is{affricate|(} The affricate series in Pite Saami consists of the phones and their phonetic realizations shown in Table~\vref{AffricatePhonemes}. As affricates, they begin as a plosive stop, which is then released into a sibilant fricative. %They very closely resemble the plosive series, as discussed above in \SEC\ref{Plosives}, in their phonotactic restrictions. %\marginpar{or just include these with the regular plosives?} These are described on the following pages. 
\begin{table}[htb]\centering \caption{Affricate phonemes and their realizations}\label{AffricatePhonemes} \begin{tabular}{l c l}\mytoprule \ipa{/ʦ/}&:&\ipa{[ʦ]} \\ %should be written as <ts> \ipa{/ʦː/}&:&\ipa{[tːs]} \\ %should be written as <hts> \ipa{/ʰʦ/}&:&\ipa{[hʦ] [ ̥ʦ] [çʦ]} \\ %should be written as <hhts> \ipa{/ʰʦː/}&:&\ipa{[htːs]} \\ %should be written as <hhts> \ipa{/ʧ/}&:&\ipa{[ʧ]} \\ %should be written as <ts> \ipa{/ʧː/}&:&\ipa{[tːʃ]} \\ %should be written as <hts> \ipa{/ʰʧ/}&:&\ipa{[hʧ] [ ̥ʧ] [çʧ] } \\ %should be written as <hhts> \ipa{/ʰʧː/}&:&\ipa{[htːʃ]} \\ %should be written as <hhts> \mybottomrule \end{tabular} \end{table} \paragraph{Plain singleton affricates}\label{tstj} %\marginpar{\Bf{BOLD} for ʦ ligature doesn’t typeset!} The segments \ipa{/ʦ\,ʧ/} are unvoiced alveolar and postalveolar (respectively) singleton affricate phonemes. Both can occur in syllable onset position. Examples can be found in \REF{peeINF} through \REF{wringINF}. \PhonGlossWL{peeINF}{{ʦ}isːa-t}{{ʦ}isːatʰ}{tsissat}{pee-\Sc{inf}}{3700} \PhonGlossWL{reindeerNOMSG}{pɔ{ʦ}oj}{pɔ{ʦ}oj}{båtsoj}{reindeer\BS\Sc{nom.sg}}{0263} \PhonGlossWL{waterGENSG}{{ʧ}aː{ʦ}e-v}{{ʧ}aː{ʦ}ev}{tjátsev}{water-\Sc{gen.sg}}{1861} \PhonGlossWL{wringINF}{pɔ{ʧ}esti-t}{pɔ{ʧ}estitʰ}{båtjestit}{wring-\Sc{inf}}{0262}%probably a derivation of båhtjet - ‘milk’! The postalveolar affricate \ipa{/ʧ/} can also occur in word-final\footnote{There is one particle \It{guts} with the alveolar affricate in final position, but it is not clear what this is or whether it is \ipa{/ʦ/} or \ipa{/tʦ/} in the consonant center (it is spelled inconsistently, as well). Noticeably, it is monosyllabic, and could be an abbreviated form of a typical bisyllabic word which has been lexicalized in its rapid-speech form, in which case it was historically in word-medial position.} % but the second syllable is lost in rapid speech in this particular phrase.} position, frequently as the diminutive suffix \It{-tj}, as in \REF{dogDIM}. \PhonGlossWL{dogDIM}{petnaka-{ʧ}}{pet̚naka{ʧ}}{bednagatj}{dog-\Sc{dim}\BS\Sc{nom.sg}}{5717} \paragraph{Plain geminate affricates} The segments \ipa{/ʦː\,ʧː/} are unvoiced alveolar and postalveolar (respectively) geminate\is{geminate} affricate phonemes. As with all other geminates, the affricate geminates only occur in the consonant center. The duration of the stop closure is longer in geminate affricates compared to their singleton affricate counterparts, while the duration of the fricative element is not relevant. Examples can be found in \REF{goINFb} and \REF{motherNOMSGb}. \PhonGlossWL{goINFb}{vaː{ʦː}e-t}{va:{tːs}etʰ}{vádtset}{go-\Sc{inf}}{2049} \PhonGlossWL{motherNOMSGb}{ʧi{ʧː}e}{ʧi{tːʃ}e}{tjidtje}{mother\BS\Sc{nom.sg}}{3618} \paragraph{Preaspirated singleton affricates} %\paragraph[Preaspirated singleton affricates \\/ʰʦ ʰʧ/]{Preaspirated singleton affricates \ipa{/ʰʦ ʰʧ/} The segments \mbox{\ipa{/ʰʦ\,ʰʧ/}} are preaspirated\is{aspiration!preaspiration} alveolar and postalveolar (respectively) singleton affricate phonemes. Just as with the preaspirated plosives, the preaspirated affricates only occur as the sole consonant or the final consonant in the consonant center. The phonetic realization of preaspiration depends on the preceding phoneme, as described in \SEC\ref{preaspiration}. Examples can be found in \REF{reindeerNOMPL} through \REF{henNOMSG}. 
\PhonGloss{reindeerNOMPL}{pu{ʰʦ}u}{pu{hʦ}u}{buhtsu}{reindeer\BS\Sc{nom.pl}}{110413b}{.085} %\PhonGloss{waterNOMSG}{ʧaː{ʰʦ}e}{ʧaː{hʦ}e}{tjáhtse}{water\BS\Sc{nom.sg}}{??}%JW: too many examples, removed this one \PhonGlossWL{milkINF}{pɔ{ʰʧ}e-t}{pɔ{hʧ}etʰ}{båhtjet}{milk-\Sc{inf}}{0239} \PhonGlossWL{henNOMSG}{vu͡anː{ʰʦ}a}{vʊ͡an{n̥ʦ}a}{vuanntsa}{hen\BS\Sc{nom.sg}}{2140} \paragraph{Preaspirated geminate affricates} %\paragraph[Preaspirated geminate affricates \\/ʰʦː ʰʧː/]{Preaspirated geminate affricates \ipa{/ʰʦː ʰʧː/} The segments \mbox{\ipa{/ʰʦː ʰʧː/}} are preaspirated\is{aspiration!preaspiration} alveolar and postalveolar (respectively) geminate\is{geminate} affricate phonemes. Just as with the preaspirated geminate plosives, the preaspirated geminate affricates only occur in the consonant center. The duration of the preaspiration and stop closure is longer in geminate affricates compared to their singleton affricate counterparts, while the duration of the fricative element is not phonologically relevant. Examples can be found in \REF{turnAround3SGPSTb} and \REF{fatherNOMSG}. \PhonGlossWL{turnAround3SGPSTb}{maː{ʰʦː}a-j}{maː{htːs}aj}{máhttsaj}{turn.around-\Sc{3sg.pst}}{6613} \PhonGloss{fatherNOMSG}{aː{ʰʧː}e}{aː{htːʃ}e}{áhttje}{father\BS\Sc{nom.sg}}{110415}{.19m16s} \is{affricate|)} %%%%%%%% FRICATIVES %%%%%%%%%%%%%%% %%%%%%%% FRICATIVES %%%%%%%%%%%%%%% %%%%%%%% FRICATIVES %%%%%%%%%%%%%%% \subsubsection{Fricatives}\label{Fricatives}\is{fricative|(} The fricative series in Pite Saami consists of the phonemes and their phonetic realizations shown in Table~\vref{FricativePhonemes}. %Just as with the other consonants in Pite Saami, the fricative phonemes include singletons and geminates. %The distribution of the relevant allophones will be discussed here. \paragraph{Singleton fricative consonants}\label{fvssjh} %\paragraph[Singleton fricative consonants \\/f v s ʃ h/]{Singleton fricative consonants \ipa{/f v s ʃ h/} The segments \mbox{\ipa{/f\,v\,s\,ʃ\,h/}} are singleton labiodental (unvoiced and voiced), alveolar, post-alveolar and glottal fricatives, all of which are attested in syllable onset and word-internal coda position. %and in the consonant center. Some examples are provided in \REF{areaELATSG} through \REF{evilNOMSG}. %\PhonGloss{beautifulATTR} {{f}aː{vː}ro} {{f}aː{vː}ro} {fávvro} {beautiful\BS\Sc{attr}} {}{0472}%JW: this -v- maybe not in bold because it’s in a CC? 
\PhonGlossWL{areaELATSG} {taː{f}o-st} {taː{f}ostʰ} {dáfost} {area-\Sc{elat.sg}} {6803} \PhonGloss{sonInLawACCSG} {{v}i{v}a-v} {{v}i{v}ɑʋ} {vivav} {son.in.law-\Sc{acc.sg}} {110415}{.08m08s} \PhonGlossWL{speakINFb} {{s}aːka{s}ti-t} {{s}aːka{s}tɪtʰ} {ságastit} {speak-\Sc{inf}} {1480} \PhonGloss{summerINESSSG} {ke{s}e-n} {ki͡e{s}en} {giesen} {summer-\Sc{iness.sg}} {100310b}{.019} \PhonGlossWL{ugly} {{ʃ}ulːo} {{ʃ}ulːo} {sjullo} {ugly\Sc{}} {1598} \PhonGlossWL{think1SGPRS} {u{ʃ}uta-v} {u{ʃ}utɑʋ} {usjudav} {think-\Sc{1sg.prs}} {6815} \PhonGlossWL{sayINF} {{h}ɔlːɔ-t} {{h}ɔlːotʰ} {hållåt} {say-\Sc{inf}} {0856} \PhonGlossWL{evilNOMSG} {pa{h}aː} {pa{h}aː} {bahá} {evil\BS\Sc{nom.sg}} {0101} \begin{table}[tb]\centering \caption{Fricative phonemes and their realizations}\label{FricativePhonemes} \begin{tabular}{l c l}\mytoprule \ipa{/f/}&:&\ipa{[f]} \\ %should be written as <f> \ipa{/fː/}&:&\ipa{[fː]} \\ %should be written as <ff> \ipa{/v/}&:&\ipa{[v] [vv̥] [ʋ]} \\ %should be written as <v> \ipa{/vː/}&:&\ipa{[vː] [vv̥ː] [ʋː]} \\ %should be written as <vv> \ipa{/s/}&:&\ipa{[s]} \\ %should be written as <s> \ipa{/sː/}&:&\ipa{[sː]} \\ %should be written as <ss> \ipa{/ʃ/}&:&\ipa{[ʃ]} \\ %should be written as <sj> \ipa{/ʃː/}&:&\ipa{[ʃː]} \\ %should be written as <ssj> \ipa{/h/}&:&\ipa{[h]} \\ %should be written as <h> \mybottomrule \end{tabular} \end{table} The phonemes \ipa{/v s/} can also occur in word-final position, as in examples \REF{fordNOMSG} and \REF{bad}.%\marginpar{mention that -sj (-ʃ) is DIM for some speakers?} \PhonGlossWL{fordNOMSG} {kaːlaː{v}} {kaːlɑː{v}} {gáláv} {ford\BS\Sc{nom.sg}} {4332} %\PhonGloss{grandchildNOMSG} {aːtjo{v}} {aːt̚jo{v}} {ádjov} {grandchild\BS\Sc{nom.sg}} {}{0005} \PhonGlossWL{bad} {nevre{s}} {ni͡evre{s}} {nievres} {bad\Sc{}} {5101} %\PhonGloss{evilNOMSG} {pa{h}aː} {pa{h}aː} {bahá} {evil\BS\Sc{nom.sg}} {0101} For some speakers from the north-eastern parts of \PS\ territory, \ipa{/ʃ/} is also possible word-finally because the diminutive suffix is sometimes \ipa{/-ʃ/}(instead of \ipa{/-ʧ/}); however there are not enough data in the corpus to determine when the diminutive suffix is \ipa{/-ʃ/}. Two of these phonemes require further explanation. First of all, the labiodental voiced fricative \ipa{/v/} is often realized as a labiodental approximant \ipa{[ʋ]} when following an open front vowel \ipa{/a/} or \ipa{/aː/}, as illustrated by examples \REF{sonInLawACCSG} and \REF{think1SGPRS} above.\footnote{This fact is even evident in some Swedish\is{language contact} place name spellings which use the more open vowel-like spelling <au> instead of the fricative spelling <av>, such as in \It{Båtsjaur}, a small community near Arjeplog whose name is likely based on the (Pite) Saami words \It{båtsoj} ‘reindeer’ and \It{jávvre} ‘lake’.} %EVIDENCE FOR THIS ETYMOLOGY?? In this case, the open front vowel is realized as an open back vowel \ipa{[ɑ]}. Furthermore, \ipa{/v/} is frequently realized as either \ipa{[ʋ]} or as a voiced labial-velar approximant \ipa{[w]} in word-initial position as well, particularly in the Pite Saami \is{dialect variation}dialects along the Pite River to the north. However, the \ipa{[ʋ]} pronunciation seems to be in free variation with a fricative \ipa{[v]} sound. %Because careful pronunciations prefer [v] over [ʋ] and for the sake of symmetry in the phoneme inventory, I have chosen to represent this sound phonemically with \ipa{/v/. This is illustrated by the example in \REF{cheeseNOMSG}.% and \ref{wish1SGPRS}. 
\ea\label{cheeseNOMSG}\Tn{ \begin{tabular}{p{18mm}p{22mm}l}\ipa{/vu͡asːta/}&\ipa{[{w}u͡asːta]}\TILDE&\It{vuassta}\\\end{tabular}\hfill\hyperlink{110517b2}{\small[pit110517b2.038]}\\ \hspace{1mm}\begin{tabular}{p{18mm}p{22mm}l}&\ipa{[{v}u͡asːta]}& ‘cheese\BS\Sc{nom.sg}’\\\end{tabular}\hfill\hyperlink{080917c}{\small[pit080917c.9m42s]}} \z Secondly, a glottal fricative \ipa{[h]} as the sole consonant in word-final position is possible, but it seems to be limited to certain morphological conditions in contemporary Pite Saami, and only realized by some speakers, and then inconsistently. However, some of the literature describing older stages of Pite Saami\footnote{Cf.~the paradigms in \citet[150--159]{Lehtiranta1992} indicate that there is an \It{-h} suffix for \Sc{nom.pl}, \Sc{gen.sg}, \Sc{conneg} and \Sc{2sg.prs}, among others. Note that \citet{Lehtiranta1992} describes \PS\ up through 1950, but not after that. Also note that \citet[104,120]{Lagercrantz1926} does not include any such suffix in these morphosyntactic contexts.} indicates that at a previous stage of the language, a word-final \ipa{/h/} was obligatory when it had morphological status%(cf.~the morphological environments triggered by \Sc{2sg.prs} or \Sc{gen.sg}, among others)%\marginpar{Perhaps this refers too much to morphophonology - but except in the morphological environments mentioned, \ipa{/h/} is never word-final and never optional; perhaps a better analysis would be to say that these suffixes are optional, and it has nothing to do with phonology.} . Two examples for this variation can be seen in \REF{work2SGPRS} and \REF{fishDIMGENSG}. \ea\label{work2SGPRS}\Tn{ \begin{tabular}{p{18mm}p{22mm}l}\ipa{/parka/}&\ipa{[parka]}\TILDE&\It{barga}\\\end{tabular}\hfill\hyperlink{pit101208}{\small[pit101208.032]}\\ \hspace{1mm}\begin{tabular}{p{18mm}p{22mm}l}&\ipa{[parka{h}]}& ‘work\BS\Sc{conneg}’\\\end{tabular}\hfill\hyperlink{pit101208}{\small[pit101208.029]}} \z \ea\label{fishDIMGENSG}\Tn{ \begin{tabular}{p{18mm}p{22mm}l}\ipa{/kolaː-ʧ-a/}&\ipa{[kʊ͡ɔlaːʧa]}\TILDE&\It{guolatja}\\\end{tabular}\hfill\hyperlink{pit110413a}{\small[pit110413a.077]}\\ \hspace{1mm}\begin{tabular}{p{18mm}p{22mm}l}&\ipa{[kʊ͡ɔlaːʧa{h}]}& ‘fish-\Sc{dim}-\Sc{gen.sg}’\\\end{tabular}\hfill\hyperlink{pit110413a}{\small[pit110413a.079]}} \z \paragraph{Geminate fricative consonants} %\paragraph[Geminate fricative consonant \\/fː vː sː ʃː/]{Geminate fricative consonant \ipa{/fː vː sː ʃː/} The segments \mbox{\ipa{/fː\,vː\,sː\,ʃː/}} are geminate\is{geminate} labiodental (unvoiced and voiced), alveolar and post-alveolar fricatives. As with all geminate phonemes, these fricatives only occur in consonant center\is{consonant center} position. The unvoiced labiodental geminate \ipa{/fː/} is rather uncommon and unique in this series in that it never occurs as part of a consonant cluster\is{consonant cluster}. Note that, unlike the singleton fricative series, there is no glottal geminate fricative phoneme \ipa{/hː/}. Some examples are provided in \REF{areaNOMSGb} through \REF{horsetailNOMSGb}. 
\PhonGlossWL{areaNOMSGb} {taː{fː}o} {taː{fː}o} {dáffo} {area\BS\Sc{nom.sg}} {2367}% \PhonGlossWL{rapids} {raː{fː}e} {raː{fː}e} {ráffe} {rapids\BS\Sc{nom.sg}} {2713}% \PhonGlossWL{peaceNOMSGb} {raː{vː}e} {raː{vː}e} {rávve} {peace\BS\Sc{nom.sg}} {1366}% \PhonGlossWL{partNOMSGb} {ɔ{sː}e} {ɔ{sː}e} {åsse} {part\BS\Sc{nom.sg}} {2269} \PhonGlossWL{horsetailNOMSGb} {ɔ{ʃː}e} {ɔ{ʃː}e} {åssje} {horsetail\BS\Sc{nom.sg}} {2270} \paragraph{Fricatives and preaspiration}\label{fricativesAndPreaspiration} When the voiced fricative phoneme \ipa{/v/} precedes a preaspirated\is{aspiration!preaspiration} segment, it becomes devoiced towards the end of its realization as \ipa{[vv̥]}.\footnote{Cf.~\SEC\ref{sonorantsAndPreaspiration} for essentially the same phenomenon in sonorant phonemes.} The near minimal pair illustrated by the examples in \REF{predatorNOMSG} and \REF{likeThis} shows a voiced fricative preceding a plain and a preaspirated plosive, respectively. \PhonGlossWL{predatorNOMSG} {naː{vːt}e} {naː{vːt}e} {návvde} {predator\BS\Sc{nom.sg}} {6042}% \PhonGlossWL{likeThis} {naː{vʰt}ɛ} {naː{vv̥t}ɛ} {návte} {like this} {1252}% Evidence for a preaspirated segment following the other fricatives is impossible to ascertain due to their inherent voicelessness. \paragraph{Dialect variation and the historical voiced dental fricative}\label{ethVariation}\is{dialect variation|(}%/r\TILDE d\TILDE ð/ A number of \PS\ lexemes historically featured a voiced dental fricative \ipa{*ð}\is{fricative!voiced dental fricative} in Proto-Saamic. These items are subject to variation in the corresponding synchronic phoneme across \PS\ territory. Specifically, Proto-Saamic \ipa{*ð} can correspond to a singleton or geminate alveolar voiceless plosive \ipa{/t\,tː/}, alveolar trill \ipa{/r\,rː/}, or voiced dental fricative \ipa{/ð\,ðː/}; the selection of phoneme varies from speaker to speaker. The alveolar plosives and trills \ipa{/t\,tː\,r\,rː/} are realized as described in \SEC\ref{Plosives} and \SEC\ref{oralSonorants}. For speakers with \ipa{/ð\,ðː/}, these are realized as a voiced dental fricative singleton \ipa{[ð]} or geminate \ipa{[ðː]}, respectively.\footnote{The phonological system of the few speakers who have the voiced dental fricatives \ipa{/ð\,ðː/} actually has two more phonemes than the systems of speakers with \ipa{/t\,tː/} or \ipa{/r\,rː/}, as these latter four phonemes are already present in the phonology of all speakers. Note that, because \ipa{/ð/} is very uncommon and not a feature that all speakers share, it is not included in the consonant inventory presented in Table~\vref{Cphonemes}.} %, in all cases both as a singleton or as a geminate. %For speakers with \ipa{/t,\,tː/, the affected words are realized just as with any \ipa{/t\,tː/}phonemes; similarly, speakers with \ipa{/r\,rː/}realize %These phonemes are realized accordingly as an alveolar voiceless plosive [t], an alveolar trill [r] or a voiced dental fricative [ð], or the corresponding geminates [tː\,rː\,ðː]. Phonemes subject to this variation are only found in the consonant center. To illustrate this, the phonemic variation in the word for stone, which goes back to Proto-Saamic~\ipa{\It{*keaðɢē}} \cite[243]{Sammallahti1998}, is presented in Table~\vref{ethVariationTable}. 
\begin{table}[ht]\centering \caption{Phonemic variation in the historical voiced dental fricative}\label{ethVariationTable} \begin{tabular}{llll}\mytoprule %\hline {variant} &{phonemic form}&{phonetic form} &{gloss}\\\hline t&\ipa{/kɛ{t}ːke/} &\ipa{[kɛtː̚\,ke]} & \\%\cline{1-3} r&\ipa{/kɛ{r}ːke/} &\ipa{[kɛrːke]} & ‘stone\BS\Sc{nom.sg}’ \\%\cline{1-3} ð&\ipa{/kɛ{ð}ːke/} &\ipa{[kɛðːke]} & \\\mybottomrule \end{tabular} \end{table} Generally speaking, the phoneme \ipa{/t/} is found on the northern side and the phoneme \ipa{/r/} on the southern side, although the borders are not absolutely clear. The phoneme \ipa{/ð/} is least common, and seems to only be found in the speech of the eldest speakers. Speakers are quite aware of this variation. In the current working version of the \PS\ orthography, the grapheme <r> has been chosen to represent all three variants (thus the word for stone is spelled \It{gärrge}), although spellings using the grapheme <d> or even <ð> may be used as well. Other lexemes subject to this variation include (here in the orthographic representations): \It{åddet}/\It{årret}/\It{åððet} ‘sleep’, \It{åddå}/\It{årrå}/\It{åððå} ‘new’ and \It{gidda}/\It{girra}/ \It{giðða} ‘spring (season)’. \is{dialect variation|)} \is{fricative|)} \FB %%%%%%%% NASALS %%%%%%%%%%%%%%% %%%%%%%% NASALS %%%%%%%%%%%%%%% \subsubsection{Nasals}\label{Nasals}\is{nasal|(}%%%%%%%% NASALS %%%%%%%%%%%%%%% The nasal series in Pite Saami consists of the phones and their phonetic realizations shown in Table~\vref{NasalPhonemes}. The distribution of the allophones will be discussed here, as well as in \SEC\ref{sonorantsAndPreaspiration} concerning the devoiced allophones. \begin{table}[ht]\centering \caption{Nasal phonemes and their realizations}\label{NasalPhonemes} \begin{tabular}{l c l}\mytoprule \ipa{/m/}&:& \ipa{[m] [mm̥] }\\ %should be written as <m> \ipa{/mː/}&:& \ipa{[mː] [mm̥ː] }\\ %should be written as <mm> \ipa{/n/}&:& \ipa{[n] [nn̥] }\\ %should be written as <n> \ipa{/nː/}&:& \ipa{[nː] [nn̥ː]}\\ %should be written as <nn> \ipa{/ɲ/}&:& \ipa{[ɲ] [ɲɲ̥]}\\ %should be written as <nj> \ipa{/ɲː/}&:& \ipa{[ɲː] [ɲɲ̥ː]}\\ %should be written as <nnj> %does this really exist? \ipa{/ŋ/}&:& \ipa{[ŋ] [ŋŋ̥]}\\ %should be written as <ng> \ipa{/ŋː/}&:& \ipa{[ŋː] [ŋŋ̥ː]}\\ %should be written as <nng> %does this really exist? \mybottomrule \end{tabular} \end{table} %\FB \paragraph{Singleton nasal consonants} The segments \ipa{/m\,n\,ɲ\,ŋ/} are singleton bilabial, alveolar, palatal and velar nasal consonant phonemes. They can be found in onset and coda positions, with the exception of the velar nasal, which cannot appear word-initially\footnote{A phonotactic restriction barring a phonemic velar nasal in word-initial position is a common trait for languages spoken in Europe and western Asia \citep[cf.][]{Anderson2008a}.} and is only attested once word-finally (shown in example \REF{sometimes}). Note that the devoiced allophones are triggered by a following preaspirated phoneme (cf.~\SEC\ref{sonorantsAndPreaspiration}). Some examples for singleton nasal phonemes in various positions within words can be found in \REF{1SGNOM} through \REF{sometimes}. 
\PhonGloss{1SGNOM} {{m}ɔ{n}}{{m}ɔ{n}}{mån} {\Sc{1sg.nom}} {100323a}{.004}
\PhonGloss{reside1DUPST} {ɔro-j{m}en} {ɔroj{m}ɘn} {årojmen} {reside-\Sc{1du.pst}} {100323a}{.181}
\PhonGloss{eatPRF} {pɔrːɔ-{m}} {pɔrːɔ{m}} {bårråm} {eat-\Sc{prf}} {100323a}{.103}
\PhonGlossWL{glueINF} {ɲi{m}ki-t} {ɲɪ{m}kitʰ} {njimgit} {glue-\Sc{inf}} {1287}
\PhonGloss{girlNOMSG} {{n}ɛjːta} {{n}ɛj̥ːta} {näjjda} {girl\BS\Sc{nom.sg}} {110415}{.06m31s}
\PhonGloss{bearNOMSG} {pɛrt{n}a} {pɛr̥t̚{n}a} {bärdna} {bear\BS\Sc{nom.sg}} {080926}{.01m19s}
\PhonGloss{breastNOMSG} {{ɲ}iʧːe} {{ɲ}iʧːe} {njidtje} {breast\BS\Sc{nom.sg}} {080701b}{.114}
\PhonGloss{daughterInLawNOMPL} {ma{ɲ}e} {ma{ɲ}e} {manje} {daughter.in.law\BS\Sc{nom.pl}}{080621}{.74m26s}
%\PhonGlossLongGloss{daughterInLawNOMPL} {ma{ɲ}e} {ma{ɲ}e} {manje} {daughter\_in\_law\BS\Sc{nom.pl}}{080621}{.74m26s}
\PhonGloss{namely} {va{ɲ}} {va{ɲ}} {vanj} {really} {090702}{.035}
\PhonGloss{iceNOMSGb} {jek{ŋ}a} {ji͡ek{ŋ}a} {jegŋa} {ice\BS\Sc{nom.sg}} {080702b}{.070}
\PhonGloss{sometimes} {mudi{ŋ}} {mudi{ŋ}} {mudiŋ} {sometimes} {080708\_Session02}{.026}%
%\PhonGlossLongSource{sometimes} {mudi{ŋ}} {mudi{ŋ}} {mudiŋ} {sometimes} {080708\_Session02}{.026}%
\paragraph{Geminate nasal consonants}
%\paragraph[Geminate nasal consonants \\/mː nː ɲː ŋː/]{Geminate nasal consonants \ipa{/mː nː ɲː ŋː/}
The segments \ipa{/mː\,nː\,ɲː\,ŋː/} are geminate\is{geminate} bilabial, alveolar, palatal and velar nasal consonant phonemes. As is the case for all other geminate phonemes, their distribution is restricted to the consonant center\is{consonant center}. Note that the devoiced allophones are triggered by a following preaspirated\is{aspiration!preaspiration} phoneme (cf.~\SEC\ref{sonorantsAndPreaspiration}). Some examples with the geminate nasal phonemes can be found in \REF{suck3SGPRSb} through \REF{workINF}.
\PhonGloss{suck3SGPRSb}{ɲa{mː}a}{ɲa{mː}a}{njamma}{suck\BS\Sc{3sg.prs}}{080701b}{.01m38s}
\PhonGlossWL{littleBitNOMSGb}{pi{nː}a}{pi{nː}a}{binna}{little.bit\BS\Sc{nom.sg}}{2446}
%\PhonGloss{eggNOMSG}{mɔ{nː}e}{mɔ{nː}e}{månne}{egg\BS\Sc{nom.sg}}{080621}
\PhonGloss{daughterInLawNOMSGb}{ma{ɲː}e}{ma{ɲː}e}{mannje}{daughter.in.law\BS\Sc{nom.sg}}{080621}{.74m05s}
%\PhonGlossLongGloss{daughterInLawNOMSGb}{ma{ɲː}e}{ma{ɲː}e}{mannje}{daughter\_in\_law\BS\Sc{nom.sg}}{080621}{.74m05s}
\PhonGloss{afterB}{ma{ŋː}el}{ma{ŋː}el}{maŋŋel}{after}{080924}{.529}
\PhonGlossWL{wolfNOMSG}{ku{mː}ʰpe} {ku{mm̥}pe} {gummpe} {wolf\BS\Sc{nom.sg}} {0671}
\PhonGloss{workINF}{pa{rː}ʰka-t} {pa{rr̥}katʰ} {barrgat} {work-\Sc{inf}} {101208}{.005}
\is{nasal|)}
%%%%%%%% ORAL SONORANTS %%%%%%%%%%%%%%%
%%%%%%%% ORAL SONORANTS %%%%%%%%%%%%%%%
%%%%%%%% ORAL SONORANTS %%%%%%%%%%%%%%%
\subsubsection{Oral sonorants}\label{oralSonorants}%\is{lateral approximants}
\PS\ has three singleton--geminate pairs of oral sonorant phonemes; because their behavior is very similar, they will be described together in the rest of this section. Their phonetic realizations are shown in Table~\vref{oralSonorantPhonemes}, as well as in \SEC\ref{sonorantsAndPreaspiration} concerning the devoiced allophones.
\begin{table}[ht]\centering \caption{Oral sonorant phonemes and their realizations}\label{oralSonorantPhonemes}%\caption{Trill phonemes and their realizaitons.}\label{TrillPhonemes} \begin{tabular}{l c l}\mytoprule \ipa{/r/}&:& \ipa{[r] [rr̥] [ɾ]}\\ %JW: removed [ɽ] \ipa{/rː/}&:& \ipa{[rː] [rr̥ː] }\\ %JW: removed [ɽ] \ipa{/l/}&:& \ipa{[l] [ll̥]}\\ %JW: removed [lʎ] \ipa{/lː/}&:& \ipa{[lː] [ll̥ː]}\\ %%JW: removed [ɬ] [lʎ] %\ipa{%/ʎ/? &:& \ipa{[lj] }\\ %should be written as <lj> %\ipa{%/ʎː/? &:& \ipa{[lːj] }\\ %should be written as <llj> \ipa{/j/}&:& \ipa{[j] [jj̥] }\\ % \ipa{/jː/}&:& \ipa{[jː] [jj̥ː] }\\ % \mybottomrule \end{tabular} \end{table} %%%%%%%% TRILLS %%%%%%%%%%%%%%% %%%%%%%% TRILLS %%%%%%%%%%%%%%% %%%%%%%% TRILLS %%%%%%%%%%%%%%% \paragraph{Singleton trill consonant}\is{trill|(} The segment \ipa{/r/} is a singleton alveolar trill. It can occur in syllable onset and coda positions. %\Red{Some consonant clusters include \ipa{/r/; see \SEC\ref{CClusters}.??} In rapid speech, it is often realized as an alveolar tap \ipa{[ɾ]}, particularly intervocalically. Some examples are found in \REF{rainNOMSG} through \REF{snowdriftNOMSG}. It becomes devoiced \ipa{[rr̥]} towards the end of its realization when preceding a preaspirated phoneme. Word-finally, it is also optionally completely devoiced as \ipa{[r̥]}.%However, only in word-internal position can it occur in a consonant cluster. \PhonGlossWL{rainNOMSG}{{r}aːʃːo}{{r}aːʃːo}{rássjo}{rain\BS\Sc{nom.sg}}{1385} %\PhonGloss{steep}{p{r}a{r}es}{p{r}a{r}es}{prares}{steep}{}{5789} \PhonGlossWL{badB}{kɔ{r}ɔ}{kɔ{r}ɔ}{gårå}{bad}{0768} \PhonGlossWL{wireCOMSG}{ɛ{r}ʰpo-jn}{ɛ{rr̥}pojn}{ärpojn}{wire-\Sc{com.sg}}{6794} \PhonGlossWL{snowdriftNOMSG}{felpa{r}}{felpa{r}}{fielbar}{snowdrift\BS\Sc{nom.sg}}{0473} \paragraph{Geminate trill consonant} The segment \ipa{/rː/} is a geminate\is{geminate} alveolar trill. It only occurs in the consonant center\is{consonant center}. It becomes devoiced \ipa{[rr̥ː]} towards the end of its realization when preceding a preaspirated\is{aspiration!preaspiration} phoneme. Examples can be found in \REF{iceOverINFb} and \REF{barkNOMSGb}. \PhonGlossWL{iceOverINFb}{kɔ{rː}oti-t}{kɔ{rː}otɪtʰ}{gårrodit}{ice.over-\Sc{inf}}{4693} \PhonGlossWL{barkNOMSGb}{paː{rːʰk}o}{paː{rr̥ːk}o}{bárrko}{bark\BS\Sc{nom.sg}}{0147} \is{trill|)} \paragraph{Singleton lateral approximant}\is{approximant|(} The segment \ipa{/l/} is a lateral approximant. It can occur in syllable onset and coda positions. It becomes devoiced \ipa{[ll̥]} towards the end of its realization when preceding a preaspirated\is{aspiration!preaspiration} phoneme. Some examples are found in \REF{tenCARD} through \REF{antlerToolNOMSG}. \PhonGlossWL{tenCARD}{{l}ɔkev}{{l}ɔkev}{lågev}{ten}{2313}%ten\BS\Sc{card} \PhonGlossWL{getScared2SGPRS}{pa{l}a}{pa{l}a}{bala}{become.scared\BS\Sc{2sg.prs}}{6332} %\PhonGlossLongGloss{getScared2SGPRS}{pa{l}a}{pa{l}a}{bala}{become\_scared\BS\Sc{2sg.prs}}{}{6332} %\PhonGloss{fireACCSG}{tɔ{l}ɔv}{tɔ{l}ɔv}{dålåv}{fire-\Sc{acc.sg}}{??} \PhonGlossWL{shall1SGPRS}{ka{l}ka-v}{ka{l}kɑʋ}{galgav}{will-\Sc{1sg.prs}}{6627} \PhonGlossWL{jawNOMSG}{ɔ{l}o{l}}{ɔ{l}o{l}}{ålol}{jaw\BS\Sc{nom.sg}}{2257} \PhonGlossWL{antlerToolNOMSG}{sa{l}ʰpek}{sa{ll̥}pekʰ}{salpek}{antler.tool\BS\Sc{nom.sg}}{4773} \paragraph{Geminate lateral approximant} The segment \ipa{/lː/} is a geminate\is{geminate} lateral approximant. 
It only occurs in the consonant center, and it becomes devoiced \ipa{[ll̥ː]} towards the end of its realization when preceding a preaspirated\is{aspiration!preaspiration} phoneme. This phoneme is found in the examples in \REF{fireNOMSGb} and \REF{stubborn}. \PhonGlossWL{fireNOMSGb}{tɔ{lː}ɔ}{tɔ{lː}ɔ}{dållå}{fire\BS\Sc{nom.sg}}{0421}%KORRECTED from NOM:PL to SG -- 18.09.2013 \PhonGlossWL{stubborn}{i{lː}ʰʧak}{i{ll̥ː}ʧakʰ}{iltjak}{stubborn}{0894} \paragraph{Singleton central approximant} The segment \ipa{/j/} is a central approximant phoneme. It can occur in syllable onset and coda positions. It becomes devoiced \ipa{[jj̥]} towards the end of its realization when preceding a preaspirated\is{aspiration!preaspiration} phoneme. Some examples are found in \REF{iceNOMSGc} through \REF{shedILLSG}. \PhonGlossWL{iceNOMSGc}{{j}ekŋa}{{j}i͡ek̚ŋa}{jegŋa}{ice\BS\Sc{nom.sg}}{0922}% \PhonGlossWL{waterSpringNOMSG}{aː{j}a}{aː{j}a}{ája}{spring\BS\Sc{nom.sg}}{2685} \PhonGloss{tuesdayNOMSG}{ti{j}stak}{ti{j}stakʰ}{dijstak}{Tuesday\BS\Sc{nom.sg}}{081017}{.00m57s}%JW: no example for this! %\PhonGloss{peakGENSG}{kaː{j}se}{kaː{j}se}{gájse}{mountain\_peak\BS\Sc{gen.sg}}{-}%JW: no example for this! %\PhonGloss{shedILLSG}{aː{jː}ʰtaː{j}}{aː{jj̥}taː{j}}{ájjtáj}{shed-\Sc{ill.sg}}{}{6676} \PhonGloss{shedINESSSG}{aː{j}ʰte-n}{aː{jj̥}ten}{ájten}{shed-\Sc{iness.sg}}{100310b}{.100} \PhonGlossWL{shedILLSG}{aːjːʰtaː-{j}}{aːjj̥ːtaː{j}}{ájjtáj}{shed-\Sc{ill.sg}}{6676} \paragraph{Geminate central approximant} The segment \ipa{/jː/} is a geminate\is{geminate} central approximant. It only occurs in the consonant center, and becomes \is{devoicing}devoiced \ipa{[jj̥ː]} towards the end of its realization when preceding a preaspirated\is{aspiration!preaspiration} phoneme. Examples are found in \REF{letINFb} as well as \REF{shedILLSG} above. \PhonGlossWL{letINFb}{paː{jː}a-t}{paː{jː}atʰ}{bájjat}{let-\Sc{inf}}{3439} \is{approximant|)} \subsubsection{Sonorants and preaspiration}\label{sonorantsAndPreaspiration} All sonorant phonemes become devoiced towards the end of their realization when preceding a preaspirated\is{aspiration!preaspiration} plosive\is{plosive} or affricate\is{affricate}% in the consonant center .\footnote{Cf.~\SEC\ref{fricativesAndPreaspiration} for essentially the same phenomenon in voiced fricative phonemes.} Since preaspiration is limited to the consonant center, this devoicing\is{devoicing} is (with the exception of word-final devoiced \ipa{/r/}) also limited to the consonant center. Some near minimal pairs are listed in \REF{jobNOMSG} through \REF{shedNOMPL}. 
\PhonGlossWL{jobNOMSG}{pa{rːk}o}{pa{rːk}o}{barrgo}{job\BS\Sc{nom.sg}}{0146} \PhonGlossWL{barkNOMSG}{paː{rːʰk}o}{paː{rr̥ːk}o}{bárrko}{bark\BS\Sc{nom.sg}}{0147} %\PhonGloss{ironPoleNOMSG}{ga{ŋʰk}a}{ka{ŋŋ̥k}a}{gaŋka}{iron\_pole\BS\Sc{nom.sg}}{4067} \PhonGlossWL{old}{kaː{mːp}al}{kaː{mːp}al}{gámbal}{old}{2493} \PhonGlossWL{wolfNOMSGb}{ku{mːʰp}e}{ku{mm̥ːp}e}{gummpe}{wolf\BS\Sc{nom.sg}}{0671} %\PhonGloss{glueINF}{ɲi{mʰk}i-t}{ɲɪ{mm̥k}itʰ}{njimgit}{glue-\Sc{INF}}{1287} \PhonGlossWL{lassoRingNOMSG}{ri{ŋːk}o}{rɪ{ŋːk}o}{riŋŋgo}{lasso.ring\BS\Sc{nom.sg}}{2326} \PhonGlossWL{ravenNOMSG}{ru{ŋːʰk}a}{ru{ŋŋ̥ːk}a}{ruŋŋka}{raven\BS\Sc{nom.sg}}{1428} %\PhonGloss{tentPoleNOMSG}{ʦa{ŋk}a}{ʦa{ŋk}a}{tsaŋga}{tent\_pole\BS\Sc{nom.sg}}{3639} %\PhonGloss{snowPathNOMSG}{aː{j}tːo}{aː{j}tːo}{ájjdo}{path\_in\_snow\BS\Sc{nom.sg}}{0023} %\PhonGloss{shedNOMSG}{aː{j}ʰtːe}{aː{jj̥}tːe}{ájjte}{shed\BS\Sc{nom.sg}}{0034} \PhonGlossWL{snowPathNOMPL}{aː{j}to}{aː{j}to}{ájdo}{path.in.snow\BS\Sc{nom.pl}}{0023} \PhonGlossWL{shedNOMPL}{aː{j}ʰte}{aː{jj̥}te}{ájte}{shed\BS\Sc{nom.pl}}{6677} These examples all show a sonorant preceding a preaspirated plosive; note that a preaspirated affricate triggers the same devoicing in the preceding sonorant. \subsection{Consonant clusters}\label{CClusters}\is{consonant cluster|(} In \PS, it is frequently the case that up to three consonants can occur consecutively, particularly in the consonant center. Because syllabification\is{syllable!syllabification} does not cross word boundaries, consonant clusters in word-initial and word-final position are necessarily tautosyllabic. However, word-internally, syllabification of the final consonant as a syllable onset (cf.~\SEC\ref{syllabification} on syllabification) creates a syllable boundary within a group of consecutive consonants. There are two ways of approaching such word-internal consecutive consonant groups: on the one hand, one can consider the syllable boundary to be a significant fissure dividing such a consonant grouping into two units, and then only study any tautosyllabic consonant clusters that result. On the other hand, one can disregard any syllable boundaries, and thus treat any consonant groupings, even those spanning a syllable boundary (heterosyllabic consonant clusters), as a unit. In determining whether syllable boundaries are a meaningful part of \PS\ phonotactics, a discussion of the inventories for both tautosyllabic and heterosyllabic consonant clusters is provided below. In the following, tautosyllabic consonant clusters will be described first, before moving on to heterosyllabic consonant groupings. Note that this does not include consecutive consonants which arise in compounding at an internal root-boundary. 
\subsubsection[CCs in onset position]{Consonant clusters in syllable onset position} In syllable onset position, 21 CCs and 2 CCCs are attested, as listed in Table~\ref{syllableOnsetCCCsA} and Table~\vref{syllableOnsetCCCsB}; %below and Table~\vref{syllableOnsetCCCs} on page \pageref{syllableOnsetCCCs}, all are in word-initial position.\footnote{This is because syllabification results in onsets consisting of a single consonant word-internally; thus tautosyllabic consonant clusters can only occur in word-initial onset position.} Words with an onset cluster tend to be of either unknown or of Germanic origin, %(Proto-Saami did not have any consonant clusters in word-onset position\marginpar{citation for PS w/o CC in onset?}) which helps explain why eleven of the word-initial CCs and both of the CCCs would not be attested in word-internal onsets even if syllabification allowed tautosyllabic consonant clusters word-internally. %cannot be found in the significantly larger cluster inventory for the consonant center (cf.~the 191 consonant-center CCs in Figure~\vref{CCenterCCGs} below). %Clusters that are unique to word-initial position are in italics in Figure~\vref{syllableOnsetCCs}. \begin{table}[ht]\centering \caption{Bipartite consonant clusters in syllable onset position}\label{syllableOnsetCCCsA} \begin{tabular}{lll l}\mytoprule \MC{1}{c}{C\sub{1}} && \MC{1}{c}{C\sub{2}} &{attested CCs}\\\hline {plosive} &\PLUS& \MC{1}{l}{sonorant} &\ipa{pr, pl, tr, kn, kr, kl} \\%\hline%\cline{3-4}%6 {sibilant} &\PLUS& \MC{1}{l}{obstruent} &\ipa{sp, st, sk, sv, ʃk, ʃv} \\%\hline%\cline{3-4}%6 {fricative} &\PLUS& \MC{1}{l}{sonorant} &\ipa{fr, fl, sm, sn, sɲ, sl, ʃm, ʃɲ, ʃl} \\\mybottomrule%9 Total: 21 \end{tabular}%\caption{}\label{} \end{table} \begin{table}[ht]\centering \caption{Tripartite consonant clusters in syllable onset position}\label{syllableOnsetCCCsB} \begin{tabular}{lllll l}\mytoprule \MC{1}{c}{C\sub{1}} && \MC{1}{c}{C\sub{2}} && C\sub{3} &{attested CCCs}\\\hline s&\PLUS& plosive &\PLUS& r &\ipa{str, skr}\\\mybottomrule%2 \end{tabular}%\caption{}\label{} \end{table} \subsubsection[CCs in coda position]{Consonant clusters in syllable coda position}\label{CCsWordfinal} %\subsubsection[Consonant clusters in consonant-center position]{True tautosyllabic consonant clusters in consonant-center position} Because syllabification results in syllable onsets of a single consonant segment (cf.~\SEC\ref{syllabification}), only the coda of the initial syllable can host tautosyllabic consonant clusters in the consonant center. An inventory of these CCs is provided in Table~\vref{tautoCCenterCCs}. %on page \pageref{tautoCCenterCCs}. \begin{table}[ht]\centering \caption[Tautosyllabic CC clusters in the consonant center]{Tautosyllabic CC clusters in the consonant center (all in coda position)}\label{tautoCCenterCCs} %\resizebox{\columnwidth}{!}{% \begin{tabular}{lll l }\mytoprule \MC{1}{c}{C\sub{1}} && C\sub{2}&{attested CCs}\\\hline %\MC{3}{|c|}{geminates}&pp, tt, kk, ff, vv, ss, ʃʃ, mm, nn, ɲɲ, ŋŋ, rr, ll, jj \\\hline%JW: in this analysis, geminates are not combinations of Cs! 
%\MR{1}{*}{plosive \PLUS} & plosive & pp, tt, kk \\\hline%\cline{2-3}%8 \MR{1}{*}{fricative} &\PLUS& plosive &\ipa{vt, vk} \\ % & fricative & ff, vv, ss, ʃʃ, \\\hline%\cline{2-3}%4 %\MR{1}{*}{nasal \PLUS} & nasal & mm, nn, ɲɲ, ŋŋ \\\hline \MR{1}{*}{oral sonorant}&\PLUS& plosive &\ipa{rp, lp, jp, rt, lt, jt, rk, lk} \\\mybottomrule % & oral sonorant & rr, ll, jj \\\hline%\cline{2-3}}%8%TOTAL: 75 %%GRAND-TOTAL: 197 \end{tabular}%}%\caption{}\label{} \end{table} Syllable codas in word-final position are more restrictive, as only three CCs and one CCC are frequently found in this position, while four other CCs are found in a very limited set of words; the attested clusters in word-final syllable codas are listed in Table~\vref{wordFinalCCs}. The regularly occurring word-final coda clusters are quite common suffixes.\footnote{The suffixes \It{-st} \Sc{elat.sg}, \It{-jst} \Sc{elat.sg}, \It{-jt} \Sc{acc.pl} form an integral part of any noun paradigm. The suffix \It{-lt} is limited to a handful of directional particles and may be an old case suffix. It should also be noted that \PS\ speakers from the northern side\index{dialect variation} of \PS\ territory use \It{-s} and \It{-js} for elative case marking.} The clusters \ipa{/rt rm lm jk/} are limited to a single, seemingly native lexical item each, but there are not enough data at this point to make any further conclusions. \begin{table}[ht]\centering \caption{Word-final consonant clusters}\label{wordFinalCCs} \begin{tabular}{cl}\mytoprule {type} &{attested}\\\hline CCs &\ipa{st, jt, lt} (\It{rare:} \ipa{rt, rm, lm, jk})\\ CCC &\ipa{jst} \\\mybottomrule \end{tabular}%\caption{}\label{} \end{table} %Singleton affricates can occur as a member of a consonant cluster in word-medial position. In such cases, the affricate is always the second member of the cluster; only the following phonemes\marginpar{move to section on CCs?} are possible\marginpar{possible that \ipa{/p k/}only in CCluster with \ipa{/ʰʧ/}and \ipa{/ʰʦ/?} in a consonant cluster with an affricate: \ipa{/p b k v n r l j/. %\begin{exe} %\PhonGloss{wetClayNOMSG}{stɛ\Bf{nʧː}o}{stɛ\Bf{nʧː}o}{stänntjo}{wet\_clay\BS\Sc{nom.sg}}{4497} %\PhonGloss{henNOMSG}{vua\Bf{nʦː}a}{vʊ͡a\Bf{nʦː}a}{vuanntsa}{hen\BS\Sc{nom.sg}}{2140} %\end{exe} \subsubsection[Heterosyllabic CCs in the consonant center]{Heterosyllabic consonant clusters in the consonant center}\label{consecutiveCs} %\subsubsection[Consecutive consonant groups in the consonant-center]{Consecutive consonant groups (CCGs) in the consonant-center}\label{consecutiveCs} The inventories of tautosyllabic consonant clusters in various word positions detailed above are lacking any regularity concerning position within syllable structure (onset or coda). In other words, the sets of coda clusters licensed word-internally only overlap with the coda clusters licensed word-finally to a very limited extent. Specifically, only the clusters \ipa{/jt lt rt/} are attested both word-internally and word-finally, while all the other clusters are unique to either word-internal or to word-final position. Furthermore, the relatively large number of word-initial consonant clusters, but complete lack of consonant clusters in other syllable-onset positions word-internally is also asymmetrical. These facts indicate that perhaps a different approach to explaining the data would be more fruitful. 
Keeping the above in mind, as well as the exceptional role that the consonant center\is{consonant center} plays in morphophonology\is{morphophonology} (consonant gradation, cf.~\SEC\ref{Cgrad}) and phonotactics (geminates, preaspiration, overall length), an inventory of the possible heterosyllabic consonant clusters, e.g.,~disregarding syllable boundaries, that occur in the consonant center as a unit proves more insightful in describing \PS\ phonology. In addition to the 21 geminate consonants that can occur alone in the consonant center, there are a total of 213 %197 + 16 real CCCs heterosyllabic CCs attested in the consonant center. Table~\vref{CCenterCCs} %JW: nec. for output of entire document! lists the 197 heterosyllabic CCs with either two singleton consonants or a geminate consonant and a singleton consonant. %, and in Figure~\vref{CCenterCCCs} for groups with a double plus a single consonant. \begin{table}[htb]\centering \caption{Heterosyllabic consonant clusters in the consonant center}\label{CCenterCCs} \resizebox{1\linewidth}{!} { \begin{tabular}{ l l p{229pt} }\mytoprule \MC{1}{l}{C\sub{1}} & C\sub{2} &{possible clusters}\\\hline {plosive \PLUS} & plosive &\ipa{pt, pːt, pk, pːk, tk, tːk, kt, kːt} \\%\cline{2-3}%8 & affricate &\ipa{pʦ, pːʦ, pʧ, pːʧ, kʦ, kːʦ, kʧ, kːʧ } \\%\cline{2-3}%8 & fricative &\ipa{ps, pːs, tv, tːv, ks, kːs, kʃ, kːʃ } \\%\cline{2-3}%8 &{nasal}&\ipa{pm, pːm, pn, pːn, pɲ, pːɲ, tm, tːm, tn, tːn, tɲ, tːɲ, kŋ, kːŋ } \\%\cline{2-3}%14 & oral sonorant &\ipa{pr, pːr, pl, pːl, pj, pːj, tj, tːj, kl, kːl } \\%\hline%10%TOTAL: 48 %&&} \\ {fricative \PLUS} &{plosive}&\ipa{vt, vːt, vʰt, vːʰt, vk, vːk, vʰk, vːʰk, sp, sːp, st, sːt, sk, sːk, ʃk, ʃːk} \\%\cline{2-3}%16 & affricate &\ipa{vʦ, vːʦ, vʰʦ, vːʰʦ, vʧ, vːʧ, vʰʧ, vːʰʧ } \\%\cline{2-3}%8 & fricative &\ipa{vs, vːs, vʃ, vːʃ } \\%\cline{2-3}%4 & nasal &\ipa{fn, fːn, vn, vɲ, vŋ, sm, sːm, sn, sːn, sŋ, ʃm } \\%\cline{2-3}%11 & oral sonorant &\ipa{vr, vːr, vl, vːl, vj, vːj } \\%\hline%6%TOTAL: 44 %&&} \\ {nasal \PLUS} &{plosive}&\ipa{mp, mːp, mʰp, mːʰp, mk, mːk, mʰk, mːʰk, nt, nnt, nʰt, nːʰt, nʰk, nːʰk, ɲk, ŋk, ŋːk, ŋʰk, ŋːʰk} \\%\cline{2-3}%%\hline%20%, ɲkː? & fricative &\ipa{ms, mːs, mʃ, mːʃ } \\%\hline%4%TOTAL: 24 %&&} \\ {oral sonorant \PLUS}&{plosive}&\ipa{rp, rːp, rʰp, rːʰp, rt, rːt, rʰt, rːʰt, rk, rːk, rʰk, rːʰk, lp, lːp, lʰp, lːʰp, lt, lːt, lʰt, lːʰt, lk, lːk, lʰk, lːʰk, jp, jːp, jʰp, jːʰp, jt, jːt, jʰt, jːʰt, jk, jːk, jʰk, jːʰk } \\%\cline{2-3}%%\hline%36 & affricate &\ipa{rʰʦ, rːʰʦ, rʰʧ, rːʰʧ, lʰʧ, lːʰʧ, jʰʦ, jːʰʦ } \\%\cline{2-3} %8 &{fricative}&\ipa{rf, rːf, rv, rːv, rs, rːs, rʃ, rːʃ, lf, lːf, lv, lːv, ls, lːs, jv, jːv, js, jːs } \\%\cline{2-3} %18 & nasal &\ipa{lm, ln, lɲ, lŋ, rm, rn, rŋ, jm, jn, jŋ } \\%\cline{2-3}%10 & oral sonorant &\ipa{rj, rːj, lj, lːj, jr, jːr, jl, jːl } \\\mybottomrule%\cline{2-3}}%8%TOTAL: 75 \end{tabular}}%\caption{}\label{} \end{table} Most combinations of various natural classes are found; however, it is striking that a nasal as the first element can only have an obstruent as the second element. It is also noteworthy that a single oral sonorant plus a nasal is attested, but no double oral sonorant plus a nasal. Turning to heterosyllabic consonant clusters with three members (tripartite CCs), there are 16 attested in the consonant center; these clusters are listed in Table~\vref{CCenterCCCs}. %on page \pageref{CCenterCCCs}. 
The heterosyllabic consonant clusters in the first two rows of this table %(fricative/sonorant\PLUS plosive\PLUS sonorant) are fairly common, and correspond paradigmatically to consonant clusters lacking the plosive but with only the fricative and sonorant (see above on heterosyllabic CCs and \SEC\ref{Cgrad} on consonant gradation). The other three tripartite consonant clusters \ipa{/jst mst rtm/} are only attested in one or two words each,\footnote{Recent loan words from Swedish\is{language contact} may also contain tripartite CCs not found elsewhere in \PS, e.g.,~\It{kɔ{nst}ɔ} ‘art’ <Swedish \It{konst}.} and there are not enough data to reach any further conclusions at this point. \begin{table}[t]\centering \caption{Tripartite consonant clusters in the consonant center}\label{CCenterCCCs} \resizebox{1\linewidth}{!} { \begin{tabular}{ c c c c c p{201pt} }\mytoprule \MC{1}{c}{C\sub{1}} &&\MC{1}{c}{C\sub{2}}&& C\sub{3} &{possible CCCs}\\\hline fricative &\PLUS & plosive &\PLUS & sonorant &\ipa{vtn, vtɲ, vkŋ} \\%\cline{2-4}%3 sonorant &\PLUS & plosive &\PLUS &sonorant &\ipa{rpm, lpm, jpm, rtn, ltn, jtn, rtɲ, ltɲ, rkŋ, lkŋ} \\%\cline{?}%10 %sonorant &\PLUS &s &\PLUS &t & jst, mst, nst \\\hline%\cline{2-3}% \MC{5}{l}{other limited CCCs} &\ipa{jst, mst, rtm}\\\mybottomrule%3 %%GRAND-TOTAL: 16 \end{tabular}}%\caption{}\label{} \end{table} Due to the morphophonological\is{morphophonology} process of consonant gradation, which features paradigmatic stem allomorphy characterized by quantitative alternations in the consonant center in many cases (cf.~\SEC\ref{Cgrad}), almost all of the heterosyllabic CCs can be grouped into short\TILDE long pairs, e.g.,~\ipa{/pt\TILDE pːt/} or \ipa{/jʰʦ\TILDE jːʰʦ/}. There are only 16 heterosyllabic CCs which do not seem to have a corresponding quantitative partner; for reasons explained below, it is useful to divide these into the two groups listed in \REF{CCgroups}: %(the motivation for these groupings is provided below): %\begin{center} %\begin{tabular}{c c} \ea\label{CCgroups}\Tn{ Group A: \ipa{/vn vɲ vŋ lm ln lɲ lŋ rm rn rŋ jm jn jŋ/}\\ \hspace{2.6mm}Group B: \ipa{/sŋ ʃm ɲk/} }\z %\end{tabular} %\end{center} Members of the first and larger group all have a corresponding morphophonemic partner, but this corresponding partner is a consonant cluster consisting of three consonant segments, and differs qualitatively as well. Specifically, tripartite CCs consisting of \ipa{/v l r j/} followed by a plosive\PLUS sonorant pair %\footnote{When a plosive precedes a homorganic sonorant, or when an alveolar plosive precedes a palatal sonorant, it is unreleased, e.g.: \ipa{/lpm/}\ARROW [lp̚m], \ipa{/vtn/}\ARROW [vt̚n], \ipa{/ltŋ/}\ARROW [lt̚ɲ],} correspond to Group A (those lacking the plosive element of the relevant tripartite CC); these pairings are listed in Table~\vref{CCandCCCpairs}. 
\begin{table}[ht]\centering \caption{Quantitative and qualitative CC\TILDE CCC pairs}\label{CCandCCCpairs} \begin{tabular}{ccc p{30pt} ccc}\mytoprule CC &\TILDE & CCC && CC &\TILDE & CCC\\\hline%\cline{1-3}\cline{5-7} \ipa{vn } &:&\ipa{vtn } &&\ipa{ lm } &:&\ipa{lpm } \\ \ipa{vɲ } &:&\ipa{vtɲ } &&\ipa{ ln } &:&\ipa{ltn } \\ \ipa{vŋ } &:&\ipa{vkŋ } &&\ipa{ lɲ } &:&\ipa{ltɲ } \\ \ipa{rm } &:&\ipa{rpm } &&\ipa{ lŋ } &:&\ipa{lkŋ } \\ \ipa{rn } &:&\ipa{rtn } &&\ipa{ jm } &:&\ipa{jpm } \\ \ipa{rŋ } &:&\ipa{rkŋ } &&\ipa{ jn } &:&\ipa{jtn } \\\mybottomrule%\cline{1-3}\cline{5-7}% \end{tabular} \end{table} The remaining heterosyllabic CCs \ipa{/sŋ ʃm ŋk/}(Group B) seem to lack a quantitative partner. It is likely that the corresponding long CCs \ipa{/sːŋ ʃːm ŋːk/} would be acceptable since CCs with very similar phonemic structures in quantitative pairs exist. However, a lack of data at this point prevents this from being ascertained for certain. It is worth noting that the only consonant cluster which occurs in a word-medial position other than the consonant center is \ipa{/st/} from the suffix \mbox{\It{-st-},} a derivational morpheme which derives a verb, e.g.,~\It{base{st}it} ‘to fry quickly’ (cf.~{\It{basset}} ‘to fry’).\footnote{Cf.~\SEC\ref{vblzST} for more on this verbalizer.} % \It{ájle{st}it} ‘to sanctify’. Ultimately, this massive inventory of consonants and consonant clusters in the middle of words can only really make sense when one understands the extent to which morphology is expressed in this position, as becomes clear in Chapters \ref{morphWordClassCh} through \ref{derivMorph}. \is{consonant cluster|)} %%%%%%%%%%%%% V O W E L S %%%%%%%%%%%% %%%%%%%%%%%%% V O W E L S %%%%%%%%%%%% %%%%%%%%%%%%% V O W E L S %%%%%%%%%%%% %%%%%%%%%%%%% V O W E L S %%%%%%%%%%%% %%%%%%%%%%%%% V O W E L S %%%%%%%%%%%% %%%%%%%%%%%%% V O W E L S %%%%%%%%%%%% %%%%%%%%%%%%% V O W E L S %%%%%%%%%%%% %%%%%%%%%%%%% V O W E L S %%%%%%%%%%%% %%%%%%%%%%%%% V O W E L S %%%%%%%%%%%% %%%%%%%%%%%%% V O W E L S %%%%%%%%%%%% %\clearpage \FB \section{Vowels}\label{vowelPhonemes}%%%%%%%% VOWELS %%%%%%%%%%%%%%% \PS\ has eight monophthong vowel phonemes and one diphthong\is{diphthong} vowel phoneme. These are listed in %the vowel chart\is{phoneme inventory!vowels} in Figure~\vref{monophthongs}, with the monophthongs in the vowel chart on the left, and the diphthong on the right. %, and the diphthong is listed in Figure~\vref{diphthongs}. \begin{figure}[ht] \centering \begin{tabular}{p{50mm}c} \begin{vowel} %\fontspec{Charis SIL}%add variables [l] or [r] after command before value to place V left or right of plot. \putcvowel{\ipa{i}}{1} %%\putcvowel[r]{y}{1} \putcvowel{\ipa{e}}{2} %JW: usually realized as [i͡e] in V1 %%\putcvowel[r]{ø}{2} \putcvowel{\ipa{ɛ}}{3} % ɛ %%\putcvowel[r]{œ}{3} \putcvowel{\ipa{a/aː}}{4} %%\putcvowel[r]{ɶ}{4} %\putcvowel{\ipa{(\A)}{5} % ɑ - only in SV loans %%\putcvowel[r]{ɒ}{5} %%\putcvowel[l]{ʌ}{6} \putcvowel{\ipa{ɔ}}{6} % ɔ %%\putcvowel[l]{ɤ}{7} \putcvowel{\ipa{o}}{7} % o - realized as [u͡o] in V1! %%\putcvowel[l]{ɯ}{8} \putcvowel{\ipa{u}}{8} %%\putcvowel[l]{ɨ}{9} %\putcvowel{\ipa{\UB}{9}% ʉ - only DS has this in certain words where others have [ɪ], cf.~iktu (yesterday) %\putcvowel[l]{\RE}{10}% ɘ - \ipa{/a/}before \ipa{/v/, especially DS in ACC.SG/1SG suffixes %%\putcvowel[r]{ɵ}{10} %\putcvowel{\ipa{ə}{11}%{\SCHWA}{11}% ə - epenthetic, non-contrastive? %%\putcvowel[l]{ɜ}{12} %%\putcvowel[r]{ɞ}{12} %\putcvowel[l]{\I}{13}%ɪ - phonemic \ipa{/i/? 
%\putcvowel{\ipa{\U}{14}% ʊ - normal realization of \ipa{/u/ %%\putcvowel{\ipa{ɐ}{15} %%\putcvowel{\ipa{æ}{16} \end{vowel} &\MC{1}{c}{\ipa{u͡a}}\\ %monophthongs&diphthong\\ \end{tabular} \caption{Vowel phoneme inventory}\label{monophthongs} \end{figure} %\begin{figure}[ht] %\centering %%\begin{tabular}{c} %u͡a %/}u͡ɛ %\\ %%\end{tabular} %\caption[Diphthong inventory]{The diphthong vowel phoneme} \label{diphthongs} %\end{figure} A discussion of these phonemes and the distribution of the relevant allophones follows. Note that there is a short open front vowel \ipa{/a/} and a long open front vowel \ipa{/aː/}; for the latter case, length is marked with a triangular colon <aː> in phonemic transcription, according to IPA standards, and with an acute accent <á> when represented in orthography. The monophthong phonemes \ipa{/a aː ɛ i u ɔ/} are realized as monophthongs in all cases. The monophthong phonemes \ipa{/e/} and \ipa{/o/} are realized as slight diphthongs in V1 position, namely as \ipa{[i͡e]} and \ipa{[u͡o]}, %\marginpar{{describe [i] and [u] in [i͡e] and [u͡o] as ‘onglides’?}} respectively. They can be very short in duration. There is a relatively minimal difference between the beginning and ending positions of each of the \ipa{[i͡e]} and \ipa{[u͡o]} phones in the oral cavity, a closeness which is even reflected in inconsistencies in the working version of the \PS\ orthography: both <e> and <ie> are used for \ipa{/e/}, and <o> and <uo> for \ipa{/o/}. The diphthong phoneme \ipa{/u͡a/} is restricted to V1 position. Due to vowel harmony, \ipa{/u͡a/} can be realized as \ipa{[ʊ͡ɛ]} or \ipa{[ʊ͡a]}, and need not be particularly long in duration. Note that farther north within the \PS\ language territory, \is{dialect variation}monophthong phonemes are often realized as slight diphthongs\is{diphthong}. This is closer to Lule Saami within the Saami dialect continuum; indeed, many Lule Saami monophthong vowel phonemes are realized as slight diphthongs \citep[cf., e.g.,~][11]{Spiik1989}. Because there is no significant difference in behavior between monophthongs and diphthongs which would justify treating them separately, all nine vowel phonemes are dealt with together in the following section. \subsection{Vowel phonemes and allophonic variations}\label{Vallophones} The following sections describe the \PS\ vowel phonemes and their allophonic realizations in the three vowel positions\is{prosodic domains} V1, V2 and V3. Table~\vref{VsSlots} summarizes the distribution of the vowel phonemes in these three prosodic vowel slots. All vowel phonemes are licensed in V1, and all except \ipa{/ɛ/} and \ipa{/u͡a/} occur in V2.\footnote{The phoneme \ipa{/ɛ/} only occurs in a limited phonological context in V2 of grammatical words, and is therefore in parentheses in Table~\ref{VsSlots}; cf.~\SEC\ref{openE}.} However, V3 position is the most restrictive and allows only \ipa{/i ɛ a u/}. This distribution reflects the fact that V1 is the most prosodically relevant slot, and V3 the least significant. 
\begin{table}[ht]\centering \caption[Distribution of vowel phonemes in the prosodic vowel slots]{Distribution of vowel phonemes in the three prosodic vowel slots}\label{VsSlots} \begin{tabular}{ c c c c }\mytoprule {vowel} &{V1} &{V2} &{V3} \\\hline \ipa{i} &\PLUS &\PLUS &\PLUS \\ \ipa{e} &\PLUS &\PLUS &\MINUS \\ \ipa{ɛ} &\PLUS &(\MINUS)&\PLUS \\ \ipa{a} &\PLUS &\PLUS &\PLUS \\ \ipa{aː} &\PLUS &\PLUS &\MINUS \\ \ipa{ɔ} &\PLUS &\PLUS &\MINUS \\ \ipa{o} &\PLUS &\PLUS &\MINUS \\ \ipa{u} &\PLUS &\PLUS &\PLUS \\ \ipa{u͡a} &\PLUS &\MINUS &\MINUS \\\mybottomrule \end{tabular} \end{table} \subsubsection{Close front high vowel} The segment \ipa{/i/} is a close front high vowel. In V1 position, it is not realized with a very narrow oral cavity, but closer to \ipa{[ɪ]}, while in V2 and V3 positions it is even less close and essentially \ipa{[ɪ]}, and tends to be shorter in duration. When a palatal approximant \ipa{/j/} immediately follows, this triggers a slight raising of \ipa{/i/} so that it is closer to \ipa{[i]}. Some examples are found in \REF{decreaseINF} through \REF{eyeACCPL}. \PhonGlossWL{decreaseINF}{karʧet{i}-t}{karʧet{ɪ}tʰ}{gartjedit}{decrease-\Sc{inf}}{0555} \PhonGlossWL{blinkINF}{j{i}mk{i}-t}{j{i}mk{ɪ}tʰ}{jimkit}{blink-\Sc{inf}}{2775} \PhonGlossWL{be3PLPSTb}{l{i}-j{i}n}{l{i}j{ɪ}n}{lijin}{be-\Sc{3pl.pst}}{4114}%JW: lijjin in wordlist, but lidjin in pit081011.049/050! \PhonGlossWL{eyeACCPL}{ʧ{i}lm{i}-jt}{ʧ{i}lm{i}jt}{tjilmijt}{eye-\Sc{acc.pl}}{1877} \subsubsection{Close-mid front vowel} The segment \ipa{/e/} is a close-mid front vowel. In V1 position, it is realized as a slight \is{diphthong}diphthong \ipa{[i͡e]}, while in V2 position it is normally a monophthong \ipa{[e]}. It is not attested in V3 position.\footnote{Note that, in the current working orthography, the V3 position can contain the grapheme <e>, but this is in fact the phoneme \ipa{/i/} realized as \ipa{[ɪ]}.} Some examples are found in \REF{herdNOMSG} through \REF{fishNOMPL}. \PhonGlossWL{herdNOMSG}{{e}lːo}{{i͡e}lːo}{ello}{reindeer.herd\BS\Sc{nom.sg}}{0449} \PhonGlossWL{redFoxNOMPL}{r{e}ʰp{e}-ha}{r{i͡e}hp{e}ha}{rehpeha}{red.fox-\Sc{nom.pl}}{2790} \PhonGloss{fishNOMPL}{kol{e}}{kʊ͡ol{e}}{guole}{fish\BS\Sc{nom.pl}}{110413a}{.002} %In a limited number of bisyllabic grammatical items, \ipa{/e/}has a special behavior.\marginpar{is this a different phoneme? \ipa{/ɛ/?} For monosyllabics, it is realized as [e] and not [i͡e], while in certain bisyllabic grammatical words ending in \ipa{/e/, it is realized as [ɛ \TILDE ɛh]; in the latter case, it is slightly more closed than the realization of the phoneme \ipa{/ɛ/. %\begin{exe} %\PhonGloss{whoNOMSG}{k{e}}{k{e}}{ge}{who\BS\Sc{nom.sg}}{-} %\PhonGloss{whoACCSG}{k{e}v}{k{ev}}{gev}{who\BS\Sc{acc.sg}}{-} %\PhonGloss{now}{tal{e}}{taːl{ɛh}}{dale}{now}{-} %\end{exe} \subsubsection{Open-mid front vowel}\label{openE} The segment \ipa{/ɛ/} is an open-mid front vowel. It normally occurs in V1 position. %However, \ipa{/ɛ/}is only found in V2 position in grammatical words, and in such cases it is realized slightly more open than when in V1 position. Examples can be found in \REF{windNOMSG2} and \REF{iceFishingLineNOMSG}. 
\PhonGlossWL{windNOMSG2}{p{ɛ}kːa}{p{ɛ}kːa}{bägga}{wind\BS\Sc{nom.sg}}{2302} \PhonGlossWL{iceFishingLineNOMSG}{r{ɛ}ʃːme}{r{ɛ}ʃːme}{rässjme}{ice.fishing.line\BS\Sc{nom.sg}}{2754} %\PhonGlossLongGloss{iceFishingLineNOMSG}{r{ɛ}ʃːme}{r{ɛ}ʃːme}{rässjme}{ice\_fishing\_line\BS\Sc{nom.sg}}{}{2754} %\PhonGloss{windNOMSG2}{p{ɛ}kːa}{p{ɛ}kːa}{bägga}{wind\BS\Sc{nom.sg}}{2302} The phoneme \ipa{/ɛ/} can also be found in V2 position, but this is limited to grammatical words and it is never followed by a final consonant.\footnote{The current working \PS\ orthography is rather inconsistent with the spelling of the \ipa{/ɛ/} phoneme. In V1 position, it is spelled with <ä>, while in V2 (in grammatical words) it is spelled <e>.} %, with the exception of the copula \ipa{/lɛ/, which is spelled \It{lä}. Perhaps \It{lä} is a special case because it is the only monosyllabic word with \ipa{/ɛ/.} %However, \ipa{/ɛ/}is only found in V2 position in grammatical words, and In such cases, \ipa{/ɛ/} is realized slightly more open than when in V1 position. Examples can be found in \REF{where} through \REF{inside}. \PhonGlossWL{where}{kɔnː{ɛ}}{kɔnː{ɛ}}{gånne}{where}{0759} \PhonGlossWL{inOrderTo}{aʰt{ɛ}}{aht{ɛ}}{ahte}{in.order.to}{0014} \PhonGlossWL{now}{taːl{ɛ}}{taːl{ɛ}}{dále}{now}{2303} \PhonGloss{inside}{sinː{ɛ}}{sɪnː{ɛ}}{sinne}{inside}{080702b}{.144} In a few recent loan words from Swedish\is{language contact} which are originally French or Latin loan words in Swedish\is{language contact} that have retained their second syllable stress\is{stress}, \ipa{/ɛ/} can occur as the vowel of the second syllable. However, as this is the stressed syllable in such cases, it is, from a prosodic perspective, still in the V1 position of a normal \PS\ trochaic foot. Two examples are provided in \REF{addressNOMSG} and \REF{professorNOMSG}. \PhonGlossWL{addressNOMSG}{adr{ɛ}sːa}{adr{ɛ}sːa}{adrässa}{address\BS\Sc{nom.sg}}{2683} \PhonGlossWL{professorNOMSG}{prof{ɛ}sːor}{prof{ɛ}sːor}{profässor}{professor\BS\Sc{nom.sg}}{4268} Due to the verbal suffixes \It{-jmä} ‘\Sc{1pl.pst}’ and \It{-jdä} ‘\Sc{2pl.pst}’, verbs with a bisyllabic stem in these forms thus feature \ipa{/ɛ/} in the third and final syllable, as in \REF{drink1PLPST} and \REF{reside1PLPST}. Note also that the current working orthography inconsistently spells \ipa{/ɛ/} as <e> or <ä> in these suffixes. \PhonGloss{drink1PLPST}{juga-jm{ɛ}}{jʊkajm{ɛ}}{jugajmä}{drink-\Sc{1pl.pst}}{100323a}{.138} %\PhonGloss{reside1PLPST}{årojm{ɛ}}{ɔrojm{ɛ}ʰ}{årojmä}{reside-\Sc{1pl.pst}}{100323a.187} \PhonGloss{reside1PLPST}{åro-jt{ɛ}}{ɔrojt{ɛ}ʰ}{årojdä}{reside-\Sc{2pl.pst}}{100323a}{.223} Otherwise, \ipa{/ɛ/} is unattested in V3 position. \subsubsection{Short open front vowel} The segment \ipa{/a/} is an open front vowel of short duration. It can occur in V1, V2 and V3 positions. When preceding \ipa{/v/}, such as before the suffix \It{-v} \Sc{acc.sg} or \It{-v} \Sc{1sg.prs}, \ipa{/a/} is usually pronounced more to the back towards \ipa{[ɑ]}.\footnote{Simultaneously, \ipa{/v/} is optionally but frequently pronounced as \ipa{[ʋ]} when following \ipa{/a/} or \ipa{/aː/}; cf.~\SEC\ref{fvssjh}.} Examples can be found in \REF{want1SGPRS} through \REF{say2SGPRS}. 
\PhonGloss{want1SGPRS}{sit{a}-v}{sit{ɑ}ʋ}{sidav}{want-\Sc{1sg.prs}}{080926}{.03m14s} \PhonGlossWL{getScared2SGPRSb}{p{a}la}{p{a}la}{bala}{become.scared\BS\Sc{2sg.prs}}{6332} %\PhonGlossLongGloss{getScared2SGPRSb}{p{a}la}{p{a}la}{bala}{become\_scared\BS\Sc{2sg.prs}}{}{6332} \PhonGloss{testicleDIM}{kol{a}-ʧ}{kʊ͡ol{a}ʧ}{guolatj}{testicle-\Sc{dim}\BS\Sc{nom.sg}}{110413a}{.150} \PhonGloss{say2SGPRS}{saːk{a}st{a}}{saːk{a}st{a}}{ságasta}{say\BS\Sc{2sg.prs}}{101208}{.228} %Due to the morphophonologically triggered vowel harmony\marginpar{this may only apply to \ipa{/aː/!!} described in \SEC\ref{vowelRaising}, \ipa{/a/}can also be realized as [ɛ], as illustrated in the examples \REF{} and \REF{}. %%\begin{exe} %%\PhonGloss{getScared2SGPRS}{p{a}la}{p{a}la}{bala}{become\_scared\BS\Sc{2sg.prs}}{6332} %%\PhonGloss{testicleDIM}{kol{a}ʧ}{kʊ͡ol{a}ʧ}{guolatj}{testicle-\Sc{dim}}{-} %%\PhonGloss{want1SGPRS}{sit{a}v}{sɪt{ɑ}ʋ}{sidav}{want-\Sc{1sg.prs}}{080926.???} %%\PhonGloss{say2SGPRS}{saːk{a}st{a}}{saːk{a}st{a}}{ságasta}{say\BS\Sc{2sg.prs}}{101208.228} %%\end{exe} \subsubsection{Long open front vowel} The segment \ipa{/aː/} is an open front vowel of long quantity. It can occur in V1 and V2 positions. %JW: not evidence for V3 anywhere in the wordlist - not enough data to know about V3; bednagatj, fämolatj etc. usually short When preceding a \ipa{/v/}, such as before the suffix \It{-v} \Sc{acc.sg} or \It{-v} \Sc{1sg.prs}, \ipa{/aː/} is usually pronounced more to the back towards \ipa{[ɑ]}.\footnote{See previous footnote.}%Simultaneously, \ipa{/v/}is optionally but frequently pronounced as [ʋ] when following \ipa{/aː/}or \ipa{/a/; cf.~\SEC\ref{fvssjh}.} Examples can be found in \REF{dig2SGPRS} through \REF{fordNOMSGb}. \PhonGlossWL{dig2SGPRS}{p{aː}la}{p{aː}la}{bála}{dig\BS\Sc{2sg.prs}}{6314} \PhonGloss{fishDIM2}{kol{aː}-ʧ}{kʊ͡ol{aː}ʧ}{guolátj}{fish-\Sc{dim}\BS\Sc{nom.sg}}{110413a}{.066} \PhonGlossWL{childNOMSG}{m{aː}nː{aː}}{m{aː}nː{aː}}{mánná}{child\BS\Sc{nom.sg}}{1129} \PhonGloss{have1SGPRS}{an{aː}-v}{an{aː}v}{anáv}{have-\Sc{1sg.prs}}{101208}{.263}% \ipa{/}6277}%JW: no recording with [ʋ] for this word so far - coincidence? or rule? \PhonGlossWL{fordNOMSGb}{k{aː}l{aː}v}{k{aː}l{ɑː}ʋ}{gáláv}{ford\BS\Sc{nom.sg}}{4332} %\ex máddáráddjá - really 4x á? %Due to the morphophonologically triggered vowel harmony\marginpar{does this really belong in phonology? morphology is very relevant for it! vaːrːe\ARROW vɛrijt \\(aː\ARROW ɛ, rː\ARROW r)} described in \SEC\ref{vowelRaising}, \ipa{/a/}can also be realized as [ɛ], as illustrated in the examples \REF{} and \REF{}. %\begin{exe} %\PhonGloss{mountainACCPL}{v{aː}rːe-jt}{v{ɛ}rijtʰ}{värijt}{mountain-\Sc{acc.pl}}{-} %%\PhonGloss{testicleDIM}{kol{a}ʧ}{kʊ͡ol{a}ʧ}{guolatj}{testicle-\Sc{dim}}{-} %%\PhonGloss{want1SGPRS}{sit{a}v}{sɪt{ɑ}ʋ}{sidav}{want-\Sc{1sg.prs}}{080926.???} %%\PhonGloss{say2SGPRS}{saːk{a}st{a}}{saːk{a}st{a}}{ságasta}{say\BS\Sc{2sg.prs}}{101208.228} %\end{exe} \subsubsection{Open-mid back vowel} The segment \ipa{/ɔ/} is an open-mid back vowel. It can occur in V1 or V2 position. However, if it is in V2 position, then V1 is also \ipa{/ɔ/}. Examples can be found in \REF{nineCARD} through \REF{streamDIM}. 
\PhonGlossWL{nineCARD}{{ɔ}kʦe}{{ɔ}kʰʦe}{åktse}{nine}{2823}%nine\BS\Sc{card} \PhonGlossWL{husbandNOMSG}{p{ɔ}tːɲe}{p{ɔ}t̚ːɲe}{båddnje}{husband\BS\Sc{nom.sg}}{0230} \PhonGlossWL{wrong}{p{ɔ}jʰtot}{p{ɔ}jj̥totʰ}{båjtot}{wrong}{0242} \PhonGlossWL{boyNOMSG2}{p{ɔ}jːʰʧ{ɔ}}{p{ɔ}jj̥ːʧ{ɔ}}{båjjtjå}{boy\BS\Sc{nom.sg}}{2569} \PhonGlossWL{streamDIM}{j{ɔ}k{ɔ}-ʧ}{j{ɔ}k{ɔ}ʧ}{jågåtj}{stream-\Sc{dim}\BS\Sc{nom.sg}}{3435} %\PhonGloss{wrong}{p{ɔ}jʰtot}{p{ɔ}jj̥totʰ}{båjtot}{wrong}{0242} \subsubsection{Close-mid back vowel} The segment \ipa{/o/} is a close-mid back vowel. %\marginpar{{BOLD} for combining macro (here in ʊ͡o) does not typeset correctly!} In V1 position, it is realized as a slight \is{diphthong}diphthong \ipa{[ʊ͡o]}, while in V2 position it is the monophthong \ipa{[o]}. It is not attested in V3 position. Some examples are found in \REF{good} through \REF{request1PLPRS}. \PhonGlossWL{good}{p{o}rːe}{p{ʊ͡o}rːe}{buorre}{good}{0213} \PhonGloss{fishNOMSG}{k{o}lːe}{k{ʊ͡o}lːe}{guolle}{fish\BS\Sc{nom.sg}}{110413a}{.003} \PhonGloss{heartNOMSG2}{vaːjpm{o}}{vaːjp̚m{o}}{vájbmo}{heart\BS\Sc{nom.sg}}{080701b}{.115} \PhonGlossWL{request1PLPRS}{aːn{o}ti-p}{aːn{o}tɪpʰ}{ánodip}{request-\Sc{1pl.prs}}{6301} \subsubsection{Close back vowel} The segment \ipa{/u/} is a close back vowel. In V1 position, it is not realized with a completely narrow oral cavity, %but closer to [ʊ], while in V2 and V3 positions it is even less close and essentially \ipa{[ʊ]}, and tends to be shorter in duration. %A palatal consonant, particularly \ipa{/j/, may trigger slight raising of \ipa{/i/}so that it is closer to [i]. %It does not occur in V3 position. %wrong!! ex.: tons of SUPERL forms! changed 20131015. Some examples are found in \REF{skiBindingNOMSG} through \REF{islandDIM}. \PhonGlossWL{skiBindingNOMSG}{j{u}kːsa}{j{u}kːsa}{jukksa}{ski.binding\BS\Sc{nom.sg}}{0934} \PhonGloss{fogNOMSG}{m{u}rːʰko}{m{u}rr̥ːko}{murrko}{fog\BS\Sc{nom.sg}}{080702b}{.065}% \ipa{/}1192} \PhonGloss{reindeerGENSG}{p{u}ʰʦ{u}}{p{u}hʦ{ʊ}}{buhtsu}{reindeer\BS\Sc{gen.sg}}{110413b}{.088} \PhonGlossWL{pinchINF}{ʦipʦ{u}-t}{ʦipʰʦ{ʊ}tʰ}{tsibtsut}{pinch-\Sc{inf}}{5712} \PhonGlossWL{tastySUPERL}{ɲalːka-jm{u}s}{ɲalᵊkajm{ʊ}s}{njallgajmus}{tasty-\Sc{superl}}{4462} \PhonGlossWL{islandDIM}{s{u}lː{u}-ʧ}{s{u}lː{ʊ}ʧ}{sullutj}{island-\Sc{dim}\BS\Sc{nom.sg}}{5148} \subsubsection{Close back to open front vowel}\label{Vua} The segment \ipa{/u͡a/} is a diphthong which begins as a close but slightly centralized back vowel and opens to an open front vowel \ipa{[ʊ͡a]} in most cases. However, the vowel in V2 position can trigger vowel harmony that slightly closes the end position of the oral closure so that it is realized as \ipa{[ʊ͡ɛ]},\footnote{Despite the orthography being quite phonemic rather than phonetic, these two allophones of \ipa{/u͡a/} are reflected in spelling: \ipa{[u͡a]} is spelled <ua> and \ipa{[u͡ɛ]} is spelled <uä>.} but the triggering vowels vary between \PS\ dialects. For southern dialects, only a close \ipa{/i/} or close-mid vowel \ipa{/e/} in V2 position can trigger this harmony. In northern dialects\is{dialect variation}, an open front \ipa{/a/} in V2 position\footnote{There are not enough data at this time to know %claim %definitively conclude whether a long \ipa{/aː/} also triggers this in northern dialects.} also triggers this vowel harmony. A few cognate pairs are provided in Table~\vref{uaVSuae}. %on page \pageref{uaVSuae}. 
%\vfill \begin{table}[htb]\centering \caption{Examples of allophony for the diphthong \ipa{/u͡a/}}\label{uaVSuae} \resizebox{1\linewidth}{!} { \begin{tabular}{llllll}\mytoprule % &\MC{2}{c}{\It{dialect}} & &\\ %\It{phonemic}&\It{northern} &\It{southern}&\It{orthography}&\It{gloss} \\\hline {allophone}&{phonemic}&\MC{2}{c}{{phonetic}}&{orthography}&{gloss} \\\hline \ipa{[ʊ͡a]}&%\\%\hline \ipa{k{u͡a}lːto} &\MC{2}{l}{\ipa{k{ʊ͡a}lːto}} &\It{gualldo} & ‘snow.flurry\BS\Sc{nom.sg}’\\%\hline &\ipa{ʧ{u͡a}rːvo-t} &\MC{2}{l}{\ipa{ʧ{ʊ͡a}rːvotʰ}} &\It{tjuarrvot} & ‘call.out-\Sc{inf}’\\%\hline \ipa{[ʊ͡ɛ]}&%\\%\hline \ipa{j{u͡a}tke-t} &\MC{2}{l}{\ipa{j{ʊ͡ɛ}tʰketʰ}} &\It{juätkit} & ‘extend-\Sc{inf}’\\%\hline &\ipa{p{u͡a}jːte} &\MC{2}{l}{\ipa{p{ʊ͡ɛ}jːte}} &\It{buäjjde} & ‘fat\BS\Sc{nom.sg}’\\%\hline % &\MC{2}{l}{\It{dialect}} & &\\ & &{northern} &{southern}&& \\\cline{3-4}%\hline \ipa{[ʊ͡ɛ]\TILDE[ʊ͡a]}&%\\%\hline \ipa{l{u͡a}kːta} &\ipa{l{ʊ͡ɛ}ktːa} &\ipa{l{ʊ͡a}ktːa} &\It{luakkta} & ‘bay\BS\Sc{nom.sg}’\\%\hline &\ipa{v{u͡a}sta} &\ipa{v{ʊ͡ɛ}sta} &\ipa{v{ʊ͡a}sta} &\It{vuasta} & ‘cheese\BS\Sc{nom.sg}’\\\mybottomrule %other? &perhaps? &a &verb?\\\hline \end{tabular}} \end{table} %\begin{table}\centering %\caption{The diphthong \ipa{/u͡a/}and its allophones}\label{uaVSuae} %\begin{tabular}{lllll}\mytoprule %% &\MC{2}{c}{\It{dialect}} & &\\ %%\It{phonemic}&\It{northern} &\It{southern}&\It{orthography}&\It{gloss} \\\hline %{phonemic}&\MC{2}{c}{{phonetic}}&{orthography}&{gloss} \\\hline %\MC{5}{l}{/u͡a/}\ARROW\ [ʊ͡a]}\\%\hline %k\Bf{u͡a}lːto &\MC{2}{c}{k\Bf{ʊ͡a}lːto} &\It{gualldo} & ‘snow.flurry\BS\Sc{nom.sg}’\\%\hline %ʧ\Bf{u͡a}rːvo-t &\MC{2}{c}{ʧ\Bf{ʊ͡a}rːvotʰ} &\It{tjuarrvot} & ‘call.out-\Sc{inf}’\\%\hline %\MC{5}{l}{/u͡a/}\ARROW\ [ʊ͡ɛ]}\\%\hline %j\Bf{u͡a}tke-t &\MC{2}{c}{j\Bf{ʊ͡ɛ}tʰketʰ} &\It{juätkit} & ‘extend-\Sc{inf}’\\%\hline %p\Bf{u͡a}jːte &\MC{2}{c}{p\Bf{ʊ͡ɛ}jːte} &\It{buäjjde} & ‘fat\BS\Sc{nom.sg}’\\%\hline %% &\MC{2}{c}{\It{dialect}} & &\\ %\MC{5}{l}{/u͡a/}\ARROW\ [ʊ͡ɛ] \TILDE\ [ʊ͡a]}\\%\hline % &{northern} &{southern}&& \\%\hline %l\Bf{u͡a}kːta &l\Bf{ʊ͡ɛ}ktːa &l\Bf{ʊ͡a}ktːa &\It{luakkta} & ‘bay\BS\Sc{nom.sg}’\\%\hline %v\Bf{u͡a}sta &v\Bf{ʊ͡ɛ}sta &v\Bf{ʊ͡a}sta &\It{vuasta} & ‘cheese\BS\Sc{nom.sg}’\\\mybottomrule %%other? &perhaps? &a &verb?\\\hline %\end{tabular} %\end{table} Note that this vowel harmony is triggered by a purely phonological context, as opposed to the vowel harmony described in \SEC\ref{VH} in the chapter on morphology, which is triggered morphologically. \subsection{Epenthetic schwa}\label{epentheticSchwa}\is{epenthesis}\is{schwa} In a small number of \PS\ words, a vowel is inserted between two non-homorganic consonants in the consonant center\is{consonant center} position. %the consonants of a consonant cluster in consonant center position. This centralized vowel is exceptionally short in duration, and is transcribed here with a %\marginpar{superscript schwa ᵊ is only bold \It{and} superscript using \Tn{\BS Bf\{\BS superS\{ə\}\}}}%bold just isn’t necessary here, so I’ll leave this out superscript schwa \ipa{[ᵊ]}. Waveforms for the two examples provided in \REF{liverNOMSG} and \REF{candyNOMPL} are found in Figure~\vref{epentheticSchwaWaveforms}. \PhonGlossWL{liverNOMSG}{ripːre}{ripᵊre}{ribbre}{liver\BS\Sc{nom.sg}}{1403}%{080701b.001} \PhonGlossWL{candyNOMPL}{ɲaːlka}{ɲaːlᵊkah}{njálga}{candy\BS\Sc{nom.pl}}{1277} %The waveforms in Figure~\vref{epentheticSchwaWaveforms} illustrate the examples in \REF{liverNOMSG} and \REF{candyNOMPL}. 
\begin{figure}[ht] \resizebox{\columnwidth}{!}{% \fbox{\includegraphics[width=\textwidth]{images/liverNOMSG-1403.pdf}}} \resizebox{\columnwidth}{!}{% \fbox{\includegraphics[width=\textwidth]{images/candyNOMSG-1277.pdf}}} \caption[Waveforms of two words with an epenthetic schwa]{Waveforms of two words (\It{ribbre} ‘liver’ and \It{njálga} ‘candy’) with an epenthetic schwa, including segmental durations}\label{epentheticSchwaWaveforms} \end{figure} %\setlength\fboxrule{0pt} In both cases, the epenthetic schwa is clearly linked to more energy, and it stands out from the surrounding consonants. %after the release of the preceding plosive’s closure and before the following trill. These waveforms also make visible the shorter duration of the epenthetic schwa (59ms and 47ms, respectively) compared to the other vowels (the shortest of which is 108ms). Speakers are rarely conscious of this vowel, and it is not reflected in the orthography. In neighboring Lule Saami, a similar epenthetic vowel exists and is predictable based on the prosodic and phonological structure of a word \citep[cf.][14--15]{Spiik1989}. It therefore seems likely that this epenthetic schwa is not phonemic in \PS\ either. However, more data are needed to confirm this and thoroughly describe its distribution. The fact that this epenthetic vowel seems to be significantly more prevalent in northern \PS\ dialects\is{dialect variation} complicates the situation further. The examples in \REF{quick} through \REF{whitefishNOMSG} provide dialectal variants, with the more southern variant first (lacking the epenthetic vowel), and the more northern variant second (with the epenthetic schwa). \ea\label{quick}\Tn{ \begin{tabular}{p{18mm}p{22mm}l}\ipa{/spaːjːta/}&\ipa{[spaːjj̥ːta]}\TILDE&\It{spájta}\\\end{tabular}\hfill\hyperlink{explExs}{\small[1711]}\\ \hspace{2.8mm}\begin{tabular}{p{18mm}p{22mm}l}&\ipa{[spaːjᵊta]}& ‘fast’\\\end{tabular}\hfill\hyperlink{pit110518a}{\small[pit110518a.3m22s]}} \z \ea\label{deliciousPRED}\Tn{ \begin{tabular}{p{18mm}p{22mm}l}\ipa{/ɲalːke/}&\ipa{[ɲalːke]}\TILDE&\It{njallge}\\\end{tabular}\hfill\hyperlink{pit091001}{\small[pit091001.087]}\\* \hspace{2.8mm}\begin{tabular}{p{18mm}p{22mm}l}&\ipa{[ɲalᵊkːe]}& ‘tasty’ \\\end{tabular}\hfill\hyperlink{pit081111}{\small[pit081111.2m59s]}} \z \ea\label{whitefishNOMSG}\Tn{ \begin{tabular}{p{18mm}p{22mm}l}\ipa{/ʧu͡avːʧa/}&\ipa{[ʧu͡aʋːʧa]}\TILDE&\It{tjuavvtja}\\\end{tabular}\hfill\hyperlink{explExs}{\small[1954]}\\ \hspace{2.8mm}\begin{tabular}{p{18mm}p{22mm}l}&\ipa{[ʧu͡oʋᵊʧa]}& ‘whitefish\BS\Sc{nom.sg}’\\\end{tabular}\hfill\hyperlink{pit0906_Ahkajavvre_a}{\small[pit0906\_Ahka-}\\ \hfill\hyperlink{pit0906_Ahkajavvre_a}{\small javvre\_a.168]}} \z %\begin{figure}[ht] %\resizebox{\columnwidth}{!}{% %\fbox{\includegraphics[width=\textwidth]{images/liverNOMSG-1403.pdf}}} %\resizebox{\columnwidth}{!}{% %\fbox{\includegraphics[width=\textwidth]{images/candyNOMSG-1277.pdf}}} %\caption[Waveforms of two words with an epenthetic schwa]{Waveforms of two words (\It{ribbre} ‘liver’ and \It{njálga} ‘candy’) with an epenthetic schwa, including segmental durations}\label{epentheticSchwaWaveforms} %\end{figure} %\include{postambleSDL}
% Changing book to article will make the footers match on each page, % rather than alternate every other. % % Note that the article class does not have chapters. \documentclass[letterpaper,10pt,twoside]{book} % Use babel or polyglossia to automatically redefine macros for terms % Armor Class, Level, etc... % Default output is in English; captions are located in lib/wlstring-captions.sty. % If no captions exist for a language, English will be used. %1. To load a language with babel: % \usepackage[<lang>]{babel} %2. To load a language with polyglossia: % \usepackage{polyglossia} % \setdefaultlanguage{<lang>} \usepackage[english]{babel} \let\latinencoding\relax \errorcontextlines 10000 %\usepackage[italian]{babel} % For further options (multilanguage documents, hyphenations, language environments...) % please refer to babel/polyglossia's documentation. %\usepackage[utf8]{inputenc} \usepackage{fontspec} \newfontfamily\jpfont{IPAexMincho} \newfontfamily\specialchar{Junicode} \usepackage{wl} \usepackage{hyperref} \lstset{% basicstyle=\ttfamily, language=[LaTeX]{TeX}, } % Start document \begin{document} \chapter*{DISCLAIMER} This is a sample version of the Warp Lords™ RPG. Formatting, images, and text may not reflect the final product. It contains only the minimum amount of info needed to play, plus a small selection of the final character options.\footnote{This is an early version preview. It might be incorrec{\emph k}t and have bad format\hspace{-0.5ex}ting} For more info, visit our website at \href{www.banditgang.com}{www.banditgang.com} or email us at \href{mailto:[email protected]}{[email protected]} \input{basics} % \input{races/intro} \input{races/asura} %\input{races/caithsith} %\input{races/dryad} %\input{races/dullahan} %\input{races/dvergr} %\input{races/fairy} %\input{races/gloop} \input{races/human} %\input{races/jotunn} %\input{races/merfolk} \input{races/naga} %\input{races/orc} \input{races/sidhe} \input{races/tengu} %\input{races/ungoloth} % \chapter{Mage} \input{mage/mageintro} \input{mage/bard} \input{mage/elementalRedux} \input{priest/wildmagictable} %\input{mage/healer} %\input{mage/necromancer} %\input{mage/presti} %\input{mage/runes} %\input{mage/summoner} %\input{mage/witch} % \chapter{Priest} \input{priest/priestintro} %\input{priest/bokor} %\input{priest/cultist} %\input{priest/druid} %\input{priest/inquisitor} \input{priest/kannagi} %\input{priest/pagan} \input{priest/paladin} %\input{priest/prophet} %\input{priest/skinwalker} % \chapter{Scholar} \input{scholar/scholarintro} %\input{scholar/alchemist} %\input{scholar/artisan} %\input{scholar/engineer} \input{scholar/gambler} %\input{scholar/librarian} %\input{scholar/merchant} \input{scholar/physicist} %\input{scholar/psychic} %\input{scholar/socialite} % \chapter{Warrior} \input{warrior/warriorintro} \input{warrior/duelist} %\input{warrior/guardian} \input{warrior/martialartist} %\input{warrior/quartermaster} %\input{warrior/ranger} %\input{warrior/reiki} %\input{warrior/rogue} %\input{warrior/sniper} %\input{warrior/tactician} \input{combat} \end{document}
\documentclass[conference]{IEEEtran} \IEEEoverridecommandlockouts \usepackage{cite} \usepackage{amsmath,amssymb,amsfonts} \usepackage{algorithmic} \usepackage{graphicx} \usepackage{textcomp} \usepackage{color} \usepackage{enumerate} \def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}} \begin{document} \title{CS6290: Reading Summary} \author{\IEEEauthorblockN{Ji Yang} \IEEEauthorblockA{ID:\ 56064832 \\ [email protected] \\ Dept.\ of Computer Science} } \maketitle \section{Summary of Paper\cite{cryptobook}} \subsection{Background} In the first two chapters of this book\cite{cryptobook}, the authors give a preliminary introduction to the bitcoin system structure and its workflow. % To make these concepts more understandable, they also explain some basic concepts in detail. % Glossing over some implementation details, I will summarize these two chapters from the two perspectives of rudimentary concepts and system structure. \subsection{Preliminaries} \subsubsection{Cryptographic Hash Functions} A cryptographic hash function $H(\cdot)$ is a mathematical function which accepts an arbitrary-length string as its input and produces a fixed-length bit string. % A cryptographic hash function has the following three properties: \begin{enumerate}[(i)] \item \textbf{Collision-resistance.} It is infeasible to find two different inputs $m_{1}$, $m_{2}$ such that $H(m_1) = H(m_2)$. \item \textbf{Hiding.} Given $H(m)$, there is no efficient algorithm that can find its original input $m$. \item \textbf{Puzzle-friendly.} Given a uniformly chosen random value $k$ and a target output $y$, it is computationally infeasible to find $x$ such that $H(k \mid x) = y$. \end{enumerate} Bitcoin adopts the classical hash function SHA-256 to construct its more complicated data structures. Its fixed-length output also keeps these data structures compact. \subsubsection{Merkle Hash Tree\cite{merkle1989certified}} A Merkle Hash Tree (MHT) is a hierarchical data structure which can be used to authenticate a set of data objects with logarithmic complexity. % Each leaf node contains a hash digest of a data record and each non-leaf node is assigned a hash digest computed from its child nodes. % Figure~\ref{Merkle} shows an example of an MHT with four data blocks. % With the hash digests of its child nodes, the node $N_1$ can construct its digest $H_{N_1} = hash(H_{N_{1-0}} \mid H_{N_{1-1}})$. % Therefore, just keeping the root digest can ensure the integrity of all data blocks. % Any manipulation of any data record will be detected immediately unless the root is changed correspondingly. \begin{figure}[ht] \centering \includegraphics[width= 0.95\linewidth]{pic/merkle-tree.pdf} \caption{Merkle Hash Tree} \label{Merkle} \end{figure} \subsubsection{Digital Signature} Basically, a digital signature scheme consists of three probabilistic polynomial-time algorithms: key generation, signing and verification. \begin{enumerate} \item \textbf{Key generation.} This step generates a public key $K_{pk}$ which is broadcast to everyone and a private key $K_{sk}$ which is kept privately by its owner. \item \textbf{Signing.} Given a message $m$, the prover can use their own private key to construct a signature $Sig = Signing(K_{sk},m)$. \item \textbf{Verifying.} Given a digital signature $Sig$, a message and the prover's public key, the verifier can decide whether this signature is valid. By verifying the signature, the verifier is convinced that the message was sent by the prover and has not been tampered with. 
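\end{enumerate}

To make the Merkle Hash Tree construction above concrete, the following Python sketch (added to this summary purely for illustration; the example blocks and the duplicate-last-digest padding rule are assumptions, not bitcoin's exact serialization) derives a root digest with SHA-256, combining child digests as $H_{N_1} = hash(H_{N_{1-0}} \mid H_{N_{1-1}})$:

\begin{verbatim}
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    # Leaf level: hash each data block.
    level = [sha256(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate last digest if odd
            level.append(level[-1])
        # Each parent digest = hash(left child | right child).
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]                  # root digest

print(merkle_root([b"tx0", b"tx1", b"tx2", b"tx3"]).hex())
\end{verbatim}

Keeping only this root digest is enough to later authenticate any single block with a logarithmic number of sibling digests.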
In the bitcoin system, public keys are also used as identities, which can be generated as needed. \subsection{System Structure and Workflow} Since the bitcoin network is a de facto distributed, peer-to-peer communication network, there are no authorities that mediate multi-party disputes. % To compensate for the absence of a central authority, a distributed consensus protocol must be applied in the network to achieve coordination among miners and users. % Each node only accepts valid transactions into its own block, and other nodes accept a new block only if all transactions in this block are valid. % The chained data structure of the blockchain ensures the integrity of all previous blocks. % Following these protocols, the bitcoin system can defend against classical attacks such as denial-of-service and double-spending attacks, provided the majority of nodes are honest. Another significant issue in bitcoin's decentralization is incentive engineering. % Since bitcoin guarantees unlinkability between accounts and real identities, we cannot reward honest nodes or penalize malicious nodes directly. % Instead, bitcoin uses hash puzzles to incentivize miners, who can find a new block only by exhaustive search. % Once a miner successfully finds a nonce that solves this hash puzzle and the new block is proved to be valid, he/she receives the corresponding block reward and transaction fees. \subsection{Contributions} The first two chapters briefly introduce the preliminaries and the way bitcoin achieves decentralization. \section{Summary of Paper\cite{bonneau2015sok}} \subsection{Problem Statement} In recent years, blockchain technologies have gained considerable attention across the world due to the huge success of bitcoin. % Apart from its potential effects on industry transformation, bitcoin, as a decentralized platform, has also attracted great attention from academia. % In this context, this paper\cite{bonneau2015sok} aims to distill research perspectives and challenges in bitcoin from the mass of scattered literature on the Internet. % By surveying the state of the art in bitcoin and its related cryptocurrencies, the paper offers detailed insights from the perspectives of system structure and workflow, security analysis, privacy issues and system compatibility. \subsection{Problem Significance} It is a great challenge to capture the core techniques of cryptocurrencies well, given how rapidly new techniques and applications appear. % What's more, how to analyze the most pressing issues and propose promising research directions in a systematic manner also remains an open question. \subsection{State of the Art} In this section, I summarize this paper from the following perspectives: system structure and workflow, security analysis, privacy issues and system compatibility. \subsubsection{System Structure and Workflow} In the decentralized bitcoin system, the key problems are twofold. % One problem is to ensure the security of public transactions, since there is no centralized infrastructure such as a bank that can maintain the transaction service. % The other problem lies in how to design an efficient consensus protocol that can incentivize all users to behave honestly. In a technical sense, classical cryptographic tools such as hash functions and digital signatures are employed in the bitcoin network. % For example, SHA-256 serves as the message digest function that ensures the integrity of bitcoin's data structures. 
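To make the hash-puzzle idea behind proof-of-work (discussed above) concrete, the following Python sketch (added for illustration; the header fields and the artificially low difficulty are assumptions, not bitcoin's actual block-header format or target encoding) brute-forces a nonce whose SHA-256 digest falls below a target:

\begin{verbatim}
import hashlib

def solve_puzzle(header: bytes, difficulty_bits: int) -> int:
    # Accept a nonce only if SHA-256(header | nonce) has at
    # least `difficulty_bits` leading zero bits.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:                      # exhaustive search
        digest = hashlib.sha256(
            header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

header = b"prev-hash|merkle-root|timestamp"
print(solve_puzzle(header, 16))
\end{verbatim}

Each additional difficulty bit doubles the expected number of hash attempts, which is exactly what reduces mining competition to raw computational power.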
% And users can apply the Elliptic Curve Digital Signature Algorithm (ECDSA)\cite{johnson2001elliptic} to construct and verify digital signatures with their private and public keys. % The whole bitcoin system can be seen as a public, distributed and immutable ledger recording all valid transactions. % This ledger, which is constructed as a sequence of blocks by miners, contains the following four components: (i) the hash value of the previous block, which ensures the integrity of the previous block; (ii) a timestamp; (iii) a consensus-proof nonce, which guarantees the validity of each block; (iv) a Merkle hash tree, which efficiently aggregates the transactions inside the block. In order to address coordination problems, bitcoin adopts the consensus mechanism called proof-of-work. % In this setting, miners can append a new block to the longest chain by solving a computational hash puzzle. % Therefore, simple exhaustive search reduces all competition to computational power, which requires specialized hardware and expensive electricity. % Miners are also incentivized to behave honestly by monetary rewards, which include block rewards and transaction fees. % It is noteworthy that the difficulty of new block generation is adjusted every 2016 blocks for the purpose of avoiding frequent forks. \subsubsection{Security Analysis} Since the security and stability of bitcoin are mostly demonstrated empirically, it remains an open question whether it can defend against all possible novel attacks. % In this paper, some notions of stability and security are proposed for each component of bitcoin. Here, I try to refine the argument further and state my own understanding in detail. First, since bitcoin is a de facto decentralized, ad hoc peer-to-peer network, security mechanisms must be deployed at the node and network levels against various attacks such as delay attacks, cyber-attacks and denial-of-service (DoS) attacks. % Therefore, bitcoin is designed as a well-connected random network with low latency for ease of rapid message diffusion. % What's more, at the node level, a screening process refuses to accept invalid transactions such as double-spending transactions. Stability of the consensus algorithms is a more subtle problem in bitcoin. % All the existing analyses are based on the strong assumption that all miners will follow the prescribed incentive mechanism. % However, the realistic situation might be more complex and hard to balance perfectly. % There exists a variety of potential problems, such as the excessive power of a majority miner and collusion among smaller miners. % For example, in January 2019 the Ethereum network\cite{wood2014ethereum} suffered from a majority attack which was once deemed impossible. % By renting a great quantity of external computational power, attackers stole around \$16 million worth of Ethereum and ERC20 tokens in this shrinking network. \subsubsection{Privacy Issues} In terms of users' privacy, sensitive transactions should be protected with an access control layer. % However, bitcoin only provides a limited form of unlinkability. % By applying transaction graph analysis, adversaries can still link transactions to specific users. % Even worse, all these transactions are transparent and anyone can access them. % Therefore, many blockchains are designed to address this privacy issue. % Zerocoin\cite{miers2013zerocoin} is the first blockchain that provides unlinkability of transactions; it also has its own special coin, the zerocoin. 
%
Zerocash\cite{sasson2014zerocash} inherits many of bitcoin's advantages and further improves efficiency by refining the underlying cryptographic operations.

\subsubsection{System Compatibility}
This section mainly involves existing and conceivable schemes for implementing changes in bitcoin.
%
In the communication network, a soft-fork change is preferred due to its backward compatibility with existing clients.
%
But sometimes it is also essential for bitcoin developers to adopt hard-fork measures to close severe security gaps.
%
For instance, an anonymous hacker exploited a loophole in the code to steal 3.6 million Ether within just a few hours in July 2016.
%
As an urgent remedy, the Ethereum community almost unanimously agreed to implement a hard fork.
%
In addition, alternative consensus algorithms with different system parameters and computational puzzles (e.g. PoS\cite{king2012ppcoin}, PoA\cite{gunnam2008next}, Threshold delay\cite{Dfinity}, PoB\cite{Slimcoin}) are also constantly emerging.

\subsection{Contributions}
This survey introduces the primary technologies and possible future directions and provides an extensive analysis of each component of bitcoin.
%
In general, it collects a mass of scattered materials and arranges them in an innovative and reasonable manner.

\subsection{Remaining Questions}
This survey focuses on public blockchains like bitcoin and zerocoin.
%
Since blockchain is a hot field in both academia and industry, the survey could easily become outdated.
%
Nowadays, private blockchain platforms such as Hyperledger\cite{androulaki2018hyperledger}, which adopt a deterministic protocol to ensure the stability of the whole system, are drawing the world's attention.
%
Many financial institutions consider them a promising tool to realize an access control mechanism.

\bibliographystyle{IEEEtran}
\bibliography{references}
\end{document}
{ "alphanum_fraction": 0.8063537676, "avg_line_length": 55.4336283186, "ext": "tex", "hexsha": "a8b5fe110b5d024ad690f1db9a3e2809219ff512", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d5d980e6d16d808192a4375954a82c2348a47a48", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "yangji12138/blockchain-course", "max_forks_repo_path": "workflow/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d5d980e6d16d808192a4375954a82c2348a47a48", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "yangji12138/blockchain-course", "max_issues_repo_path": "workflow/main.tex", "max_line_length": 258, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d5d980e6d16d808192a4375954a82c2348a47a48", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "yangji12138/CS6290-Summary", "max_stars_repo_path": "workflow/main.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-05T16:21:10.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-05T16:21:04.000Z", "num_tokens": 2603, "size": 12528 }
%============================================================================== \documentclass[11pt,oneside,onecolumn,letterpaper]{article} \usepackage{times} \usepackage[paperwidth=8.5in, paperheight=11in, top=2.5cm, bottom=2.6cm, left=2.58cm, right=2.53cm]{geometry} %\setlength{\textheight} {9.00in} %\setlength{\textwidth} {6.40in} %\setlength{\topmargin} {-0.50in} %%\setlength{\headheight} {0.00in} %%\setlength{\headsep} {0.40in} %\setlength{\oddsidemargin}{-0.010in} %\setlength{\evensidemargin}{-0.00in} %============================================================================== %\usepackage{algorithm} \usepackage{amssymb} \usepackage{color} \usepackage{booktabs} \usepackage{graphicx} \usepackage{latexsym} \usepackage{subfigure} \usepackage{wrapfig} \usepackage{amsmath} \usepackage{amsthm} \usepackage[hyphens]{url} \usepackage{pifont} \usepackage{color} \usepackage{colortbl} \usepackage[lined, boxed, linesnumbered]{algorithm2e} \usepackage[square, comma, sort&compress, numbers]{natbib} \newcounter{alg} \newenvironment{enum-ref}{ \begin{list}% {[\arabic{alg}]} {\usecounter{alg} \setlength{\leftmargin} {0.25in} \setlength{\labelwidth} {0.30in} \setlength{\rightmargin}{0.00in} \setlength{\topsep} {0.00in}} }{\end{list}} \newenvironment{enum-number}{ \begin{list}% {\arabic{alg})} {\usecounter{alg} \setlength{\leftmargin} {0.25in} \setlength{\labelwidth} {0.30in} \setlength{\rightmargin}{0.00in} \setlength{\topsep} {0.00in}} }{\end{list}} \newenvironment{enum-nonum}{ \begin{list}% {$\bullet$} { \setlength{\leftmargin} {0.25in} \setlength{\labelwidth} {0.30in} \setlength{\rightmargin}{0.00in} \setlength{\topsep} {0.00in}} }{\end{list}} \let\chapter\section %============================================================================== \pagestyle{plain} %============================================================================== \title{Secure UAV Communications System Design} \author{MITRE eCTF 2021\\Team Cacti\\ University at Buffalo} \date{} \begin{document} %% %============================================================================= \normalsize \maketitle %\date{} \renewcommand{\thepage}{System Design, Team Cacti, University at Buffalo--\arabic{page}} \setcounter{page}{1} \normalsize % %\renewcommand{\baselinestretch}{1.2} %\normalsize %\vspace{0.1in} %\centerline{\textbf{\Large }} %\renewcommand{\baselinestretch}{1.0} %\normalsize \newcommand{\flagRollback}{\textsf{Rollback}\xspace} \section{Introduction} This section presents the entities and communication channels in the system. \subsection{Entities} The following summarizes the entities in the system. \begin{itemize} \item A SED CPU works at the application layer. UAV ID is used to identify drone SED CPU at the application layer. UAV ID is a secret like any other data fields at the application layer. \item A SED controller is identified by a \verb|SCEWL_ID|, which is 16 bits in length. Note that a UAV ID is different from a \verb|SCEWL_ID|. The UAV ID is a secret that controlled by the CPU. A SED (SCEWL-Enabled Devices) is a device with a SCEWL Bus installed. A SED is implemented in 2 parts: 1) CPU, which is an userland application running on ARM Cortex-A and Linux. The application cannot be modified; 2) Microcontroller, which is bare-metal firmware running on ARM Cortex-M. SEDs are identified with \verb|SCEWL_ID|s. At the design stage, the \verb|SCEWL_ID|s are unknown. 
The SED devices include: 1) a Command and Control (C2) SED at a fixed location; 2) many drop-zone SEDs at fixed locations; 3) many drone SEDs that fly to places.
\item SSS (SCEWL Security Server) manages SEDs through registration and deregistration. SSS does not communicate with any SED besides registration and deregistration. SSS has a SCEWL ID of 1.
\item The FAA Transceiver is a device that allows the FAA to communicate with any SED, bypassing the secure SCEWL protocol that we design. The FAA Transceiver has a SCEWL ID of 2.
\end{itemize}

\subsection{Communication Channels}
Following the OSI network model, we divide the network into 3 layers.
Layer 1: the physical layer, which includes radio and wired links. These are implemented by UNIX sockets, radio.py, etc.
Layer 2: a combination of the data link layer, network layer and transport layer, which is implemented at the controller. Our job is to provide security mechanisms at layer 2 before forwarding the message to layer 1 or layer 3.
Layer 3: the application layer, which is out of our control and implemented at the CPU. The application layer has its own checksum.

The following summarizes the communication channels in the system.
\begin{itemize}
\item A SED CPU can send a targeted message to another SED CPU via radio. This message will go through the SED controller. This message must be encrypted and authenticated at the controller, so that only the targeted SED controller can decrypt the message and any tampering with the message by an attacker can be detected.
\item A SED CPU can send out a broadcast message to all other registered SED CPUs. This message will go through the SED controller. This message must be encrypted and authenticated as well. Only registered SED controllers should be able to decrypt this message, and any tampering with the message by an attacker should be detected.
\item In registration and deregistration, a SED only talks with the SSS over a wired and secured channel. No further protection is needed for this channel. But the SSS needs to confirm that only provisioned SEDs can be registered.
\item A SED CPU uses the FAA channel to talk with us, sending status notifications to us.
\end{itemize}

\section{Attack Models}
The attackers can carry out the following attacks:
\begin{itemize}
\item Intercept a targeted message and try to decrypt it, getting the Package Recovery flag.
\item Intercept a broadcast message and try to decrypt it, getting the UAV ID Recovery flag.
\item Send a targeted message to any drone SED to make it drop a package, getting the Drop Package flag.
\item Replay the redirect message from the FAA to make a drone SED fly above its altitude ceiling.
\item Extract secrets from a crashed drone SED's firmware.
\item Spoof an FAA transceiver. Note that the controller must pass all FAA messages without authentication.
\item Launch their own spoofed SEDs onto the network, which may run malicious images on the CPU and controller.
\end{itemize}

\section{Our Design}
% SSS stores all public keys ($pk_k$) of all provisioned SEDs.
Each SED controller $k$ has a public key pair ($pk_k, sk_k$). Each SED stores some of the other provisioned SEDs' $pk_k$.
ENC and DEC stand for symmetric encryption and decryption with AES256-GCM. AENC and ADEC stand for asymmetric encryption and decryption with RSA512. SIG and AUTH stand for signing and verification with RSA512.

\subsection{Targeted Transmission}
Whenever SED$_a$ sends a targeted message to SED$_b$, it checks whether it has SED$_b$'s public key $pk_b$. If yes, go to Case 2. Otherwise, go to Case 1.
Case 1: SED$_a$ sends a special targeted message to SED$_b$ to ask for its public key. In return, SED$_b$ sends its public key to SED$_a$. Then, SED$_a$ goes to Case 2.

Case 2: Whenever SED$_a$ sends a targeted message to SED$_b$, it first generates an AES key $k_a$ and IV $iv_a$. Then, it uses AES-GCM to encrypt and authenticate the SCEWL header ($\mathcal{H}$) and body ($\mathcal{B}$), which outputs the ciphertext ($\mathcal{C}$) and tag ($\mathcal{T}$), e.g. $\mathcal{C}, \mathcal{T}$ = ENC$_{k_a, iv_a}(\mathcal{H} || \mathcal{B})$. Then, it encrypts $k_a$ with $pk_b$, e.g. $\mathcal{E}=$AENC$_{pk_b}(k_a)$. Then, it sends out $\mathcal{M}$ = $\mathcal{H} || \mathcal{E}||iv_a||\mathcal{T}||\mathcal{C}$ (Figure \ref{fig:msg}). A caveat here is to calculate the body length before the AES operation.

Whenever SED$_b$ receives a targeted message $\mathcal{M}$, it first checks $\mathcal{H}$ to see if the message is intended for it. If not, it discards the message. Then, it uses its private key $sk_b$ to decrypt the received ciphertext of the AES key $k_a$, e.g. $k_a$ = ADEC$_{sk_b}($AENC$_{pk_b}(k_a))$. Then, it uses $k_a, iv_a$ to decrypt and authenticate the message, e.g. $\mathcal{C'}, \mathcal{T'}$ = DEC$_{k_a, iv_a}(\mathcal{C})$. It compares $\mathcal{T'}$ and $\mathcal{T}$. If they do not match, it discards the message. Otherwise, $\mathcal{C'}=\mathcal{H}||\mathcal{B}$.

\subsection{Broadcast Transmission}
When SED$_a$ does a broadcast, it first generates an AES key $k_a$ and IV $iv_a$. Then, it uses AES-GCM to encrypt and authenticate the SCEWL header ($\mathcal{H}$) and body ($\mathcal{B}$), which outputs the ciphertext ($\mathcal{C}$) and tag ($\mathcal{T}$), e.g. $\mathcal{C}, \mathcal{T}$ = ENC$_{k_a, iv_a}(\mathcal{H} || \mathcal{B})$. Then, it encrypts $k_a$ with its private key $sk_a$, e.g. $\mathcal{E}$=AENC$_{sk_a}(k_a)$. Then, it sends out $\mathcal{M}$ = $\mathcal{H} || \mathcal{E}||iv_a||\mathcal{T}||\mathcal{C}$. A caveat here is to calculate the body length before the AES operation.

Whenever SED$_b$ receives a broadcast message $\mathcal{M}$, it first checks $\mathcal{H}$ to see if the message is a broadcast. Then, it uses SED$_a$'s public key $pk_a$ to decrypt the received ciphertext of the AES key $k_a$, e.g. $k_a$ = ADEC$_{pk_a}($AENC$_{sk_a}(k_a))$. Then, it uses $k_a, iv_a$ to decrypt and authenticate the message, e.g. $\mathcal{C'}, \mathcal{T'}$ = DEC$_{k_a, iv_a}(\mathcal{C})$. It compares $\mathcal{T'}$ and $\mathcal{T}$. If they do not match, it discards the message. Otherwise, $\mathcal{C'}$ = $\mathcal{H}||\mathcal{B}$.
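To make the hybrid pattern of the two subsections above concrete, the following is a minimal Python sketch of the sender and receiver steps for a targeted message, written against the \verb|cryptography| package. It is only an illustration of the AES-GCM-plus-RSA idea: the actual design uses RSA512 with our own C implementation and the exact layout of Figure \ref{fig:msg}, whereas this sketch uses 2048-bit RSA-OAEP and made-up variable and function names.
\begin{verbatim}
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Illustrative keys only; the real design provisions RSA512 keys via the SSS.
sk_b = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pk_b = sk_b.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def send_targeted(header: bytes, body: bytes):
    k_a = AESGCM.generate_key(bit_length=256)   # fresh AES key per message
    iv_a = os.urandom(12)                       # 12-byte IV, as in the design
    # The library returns C with the 16-byte tag T appended; the design
    # instead lays the fields out as H || E || iv || T || C.
    c_and_tag = AESGCM(k_a).encrypt(iv_a, header + body, None)
    e = pk_b.encrypt(k_a, oaep)                 # E = AENC_{pk_b}(k_a)
    return header, e, iv_a, c_and_tag

def recv_targeted(header, e, iv_a, c_and_tag):
    k_a = sk_b.decrypt(e, oaep)                 # k_a = ADEC_{sk_b}(E)
    # decrypt() verifies the GCM tag and raises InvalidTag on any mismatch,
    # which corresponds to discarding a tampered message.
    return AESGCM(k_a).decrypt(iv_a, c_and_tag, None)   # H || B
\end{verbatim}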
\subsection{Key Generation and Storage}
Below are the details of key generation for asymmetric encryption:
\begin{itemize}
% \item[Step1.]
\item[Step 1.] When adding SED$_k$ ($k$ refers to \verb|SCEWL_ID|) to a deployment, SSS generates a pair of RSA keys ($pk_k$, $sk_k$) for SED$_k$. SSS stores the keys in files as $k.pub$, $k.pri$ in its local filesystem. We implement the \textbf{keygen} application (with \verb|SCEWL_ID| as a parameter) to generate the key pair ($pk_k$ and $sk_k$) for each provisioned SED$_k$ first, then invoke \textbf{create\_secrets.py} to output \textbf{key.h}, which will be used to compile the SED$_k$ controller. This step runs the script from the \textbf{dockerfiles/2b\_create\_sed\_secrets.Dockerfile} file.
\item[Step 2.] When building the controller for SED$_k$, the key pair ($pk_k$, $sk_k$) will be copied from the SSS container to this controller container. At controller build time, we change \textbf{dockerfiles/2c\_build\_controller.Dockerfile} to copy \textbf{key.h} from the SSS container to the controller container.
\item[Step 3.] At the registration stage, SED$_k$ uses its own private key $sk_k$ to sign the registration message (with padding to 512 bits) and sends it to the SSS. It sends out $\mathcal{M}$ = $\mathcal{H} || $SIG$_{sk_k}(\mathcal{B}, \mathcal{H})$, $\mathcal{B}$ = $k$ $||$ $op$ ($op$ refers to the register operation). If $k$ is in our provisioned SED list, SSS will use the associated public key $pk_k$ to verify this message, e.g. $\mathcal{H'}, \mathcal{B'}$ = VER$_{pk_k}$(SIG$_{sk_k}(\mathcal{H}, \mathcal{B})$). Upon a successful verification ($\mathcal{H}$ == $\mathcal{H'}$), SSS finishes the registration for SED$_k$, adds $k$ to the registered list, and sends the public keys of all other registered (not merely provisioned) SEDs to SED$_k$. Then the registered SED will notify the other SEDs of its own $pk_k$. We implement the \textbf{auth} application to verify the signed message, and modify \textbf{sss.py} to verify the provisioned SEDs and send out the public keys of the other provisioned SEDs.
\item[Step 4.] When a SED receives a $pk_k$ notification, it unpacks the message and stores the key. If a SED wants to send out a message but does not have the target's $pk_k$, or receives a broadcast but does not have the source's $pk_k$, it will send out a \verb|PK_REQUEST| to request the key.
\item[Step 5.] The deregistration stage is similar to registration. SED$_k$ will delete the other public keys first, then use its own private key $sk_k$ to sign the deregistration message and send it to the SSS. It sends out $\mathcal{M}$ = $\mathcal{H} || $SIG$_{sk_k}(\mathcal{B}, \mathcal{H})$, $\mathcal{B}$ = $k$ $||$ $op$ ($op$ refers to the deregister operation). If $k$ is in our provisioned SED list and in our registered list, SSS will use the associated public key $pk_k$ to verify this message. Upon a successful verification, SSS will remove SED$_k$ from the registered list.
\item[Remove SED.] When removing SED$_k$, SSS will remove the key pair ($pk_k$, $sk_k$).
\end{itemize}

\subsection{Message Format}
Figure \ref{fig:msg} shows the message format. The layer 2 frame has a header described in Section 4.6 of the Rules. The header has 8 bytes: 2 bytes of magic number, 2 bytes of destination SCEWL ID, 2 bytes of source SCEWL ID, and 2 bytes of body length in bytes. The header cannot be encrypted, since it is used for routing.

Following the 8-byte header is the SCEWL message body, which includes:
\begin{itemize}
\item A pair of magic numbers (2 bytes) used to indicate the subtype of the message: $\mathcal{NO}$ means a $pk_k$ request, $\mathcal{PK}$ means a $pk_k$ notification, and $\mathcal{XX}$ means a normal message, which is either a broadcast or a targeted message.
\item The AES $key$, padded and RSA-encrypted into 64 bytes, along with the 12-byte $iv$ and the 16-byte $tag$.
\item The encrypted message body, whose plaintext includes 2 bytes of destination SCEWL ID, 2 bytes of source SCEWL ID, 2 bytes of true message length, 4 bytes of sequence number (in, out), and at most 0x4000 bytes of true message.
\end{itemize}

\begin{figure}[!htbp]
\begin{centering}
\includegraphics[width = .80\textwidth]{pic/msg-format.pdf}
\caption{Message Format}
\label{fig:msg}
\end{centering}
\end{figure}

\subsection{Prevent Replay Attack}
For communication between the SEDs, a 4-byte sequence number is added in front of the message body. Each SED maintains a table of sequence-number pairs in memory for the other SEDs.
Two sequence numbers represent the bidirectional transmission between two SEDs. In the case of broadcast messages, each SED maintains a single sending sequence number and a receiving sequence number for each of the other SEDs. To prevent replay attacks, the receiving SED cross-checks the value received in the message against the value stored in its table. Explanation:
\begin{itemize}
\item Suppose there are 5 SEDs ($a, b, c, d, e$) in the system.
\item For unicast messages, each SED maintains a table of sequence-number pairs for the other SEDs, with initial values of zero. For broadcast messages, each SED maintains a count of sent broadcasts and a table of received-broadcast counts for each SED.
\item Example: at the beginning, for targeted messages, the sequence table for SED $b$ is:
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
\textbf{SED ID} & \textbf{SQ\_send} & \textbf{SQ\_receive}\\
\hline
\hline
a & 0 & 0 \\
c & 0 & 0 \\
d & 0 & 0 \\
e & 0 & 0 \\
\hline
\end{tabular}
\end{center}
Here each row represents a pair of sequence numbers for unicast messages. Row $a$ signifies the sequence numbers for communication between the SED pair $(a,b)$. $SQ\_send$ is the number of messages sent from SED $b$ to SED $a$. $SQ\_receive$ denotes the number of messages that $b$ has received from $a$. SED $a$ will have the same values in the opposite positions.
\item Suppose SED$_a$ sends a message to SED $b$. SED $a$ gets the $SQ\_send$ value for SED$_b$ from its table, increments it by one, and then adds it in front of the message body.
\item After successful decryption of the message, SED$_b$ compares the sequence number received in the message with the $SQ\_receive$ sequence number for SED$_a$ from its table in memory.
\item The received message is valid only if the message sequence number is greater than the sequence number from the table. For a valid message, the receiver SED $b$ copies the sequence number from the message into its table. So, the $SQ\_send$ for $b$ in SED$_a$'s table will be the same as $SQ\_receive$ for $a$ in SED$_b$'s table.
\item After this communication, the sequence table for SED $b$ is:
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
\textbf{SED ID} & \textbf{SQ\_send} & \textbf{SQ\_receive}\\
\hline
\hline
a & 0 & 1 \\
c & 0 & 0 \\
d & 0 & 0 \\
e & 0 & 0 \\
\hline
\end{tabular}
\end{center}
\item Similarly, SED$_a$'s table after this communication is:
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
\textbf{SED ID} & \textbf{SQ\_send} & \textbf{SQ\_receive}\\
\hline
\hline
b & 1 & 0 \\
c & 0 & 0 \\
d & 0 & 0 \\
e & 0 & 0 \\
\hline
\end{tabular}
\end{center}
\item Similarly, suppose SED$_b$ sends two messages to SED$_c$ and receives one message from SED$_c$. Now the table for SED $b$ should be:
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
\textbf{SED ID} & \textbf{SQ\_send} & \textbf{SQ\_receive}\\
\hline
\hline
a & 0 & 1 \\
c & 2 & 1 \\
d & 0 & 0 \\
e & 0 & 0 \\
\hline
\end{tabular}
\end{center}
\item In the case of broadcast messages, the initial receiving sequence-number table for SED $b$ is:
\begin{center}
\begin{tabular}{ |c|c| }
\hline
\textbf{SED ID} & \textbf{SQ\_broadcast\_receive}\\
\hline
\hline
a & 0\\
c & 0 \\
d & 0 \\
e & 0 \\
\hline
\end{tabular}
\end{center}
Whenever SED $b$ receives a broadcast, it simply increments the received sequence number of the corresponding SED in the table by one. A SED concludes that a received message is a replay attack if the received sequence number is not greater than the stored number for that SED in the table. In addition to this, each SED maintains a global sending broadcast number, which is updated after each broadcast send. A minimal sketch of the receive-side check is given after this list.
\end{itemize}
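The sketch below shows the sequence-number bookkeeping described above in Python. It is illustrative only: the real check is implemented in C inside \textbf{controller.c}, and the class and function names here are assumptions made for readability.
\begin{verbatim}
# Illustrative sketch of the replay-protection bookkeeping (not controller.c).
class SeqTable:
    def __init__(self, peers):
        # one (SQ_send, SQ_receive) pair per peer, initialised to zero
        self.send = {p: 0 for p in peers}
        self.recv = {p: 0 for p in peers}

    def next_send(self, dst):
        """Sender side: increment and return the sequence number for dst."""
        self.send[dst] += 1
        return self.send[dst]

    def accept(self, src, seq):
        """Receiver side: accept only strictly increasing sequence numbers."""
        if seq <= self.recv[src]:
            return False          # replayed or stale message: discard
        self.recv[src] = seq      # remember the newest accepted number
        return True

table_b = SeqTable(peers=['a', 'c', 'd', 'e'])
assert table_b.accept('a', 1) is True    # first message from a is accepted
assert table_b.accept('a', 1) is False   # a replayed message is rejected
\end{verbatim}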
\subsection{Build Process}
\begin{itemize}
\item \textbf{dockerfiles/1a\_create\_sss.Dockerfile}
\begin{enumerate}
\item Copy the \textbf{rsa} folder, \textbf{create\_secret.py}, and \textbf{sss.py} to the SSS container.
\item Compile \textbf{rsa} to get the crypto lib.
\item Create a file (\textbf{provisioned\_list}) to store the provisioned SEDs.
\end{enumerate}
\item \textbf{dockerfiles/2b\_create\_sed\_secrets.Dockerfile}
\begin{enumerate}
\item Create a folder in the SSS container to store the provisioned SEDs' secrets (key pairs).
\item Write the provisioned SCEWL\_ID to \textbf{provisioned\_list}.
\item \textbf{keygen} is invoked to generate the key pair.
\item \textbf{create\_secret.py} is invoked to generate the \textbf{\$\{SCEWL\_ID\}\_publickKey}.
\end{enumerate}
\item \textbf{dockerfiles/2c\_build\_controller.Dockerfile}
\begin{enumerate}
\item Copy the \textbf{\$\{SCEWL\_ID\}\_publickKey} to the controller container.
\end{enumerate}
\end{itemize}

\section{Implementation}
\subsection{Critical Functions and Files}
\begin{itemize}
\item \textbf{aes-gcm crypto lib}: message encryption and decryption on the controller side.
\item \textbf{rsa}
\begin{enumerate}
\item \textbf{keygen}: key pair generation on the SSS side.
\item \textbf{auth}: authenticate the registration and deregistration messages on the SSS side.
\item \textbf{crypto lib}: encryption and decryption, message signing and authentication.
\end{enumerate}
\item \textbf{create\_secret.py}: key file generation and deletion.
\item \textbf{provisioned\_list}: stores the provisioned SCEWL IDs.
\item \textbf{sss.py}: handles registration and deregistration, checks the provisioned SEDs, and responds with the $pk_k$ of the registered SEDs.
\item \textbf{controller.c}: adds message encryption, decryption, and authentication, and updates keys.
\item \textbf{controller.h}: critical structures and macros.
\item other key files.
\end{itemize}
% \section{Flag Protection}
% \subsection{\textsf{D}}
\end{document}
%==============================================================================
{ "alphanum_fraction": 0.7095780108, "avg_line_length": 50.5502512563, "ext": "tex", "hexsha": "9cc427bac8502507f88715fb0edd995e07a713bb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "641e1c7ca3a96a831437b31452cd973dc58a5dc2", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "CactiLab/2021-ectf-UB-Cacti-design", "max_forks_repo_path": "designdoc/SystemDesign.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "641e1c7ca3a96a831437b31452cd973dc58a5dc2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "CactiLab/2021-ectf-UB-Cacti-design", "max_issues_repo_path": "designdoc/SystemDesign.tex", "max_line_length": 522, "max_stars_count": null, "max_stars_repo_head_hexsha": "641e1c7ca3a96a831437b31452cd973dc58a5dc2", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "CactiLab/2021-ectf-UB-Cacti-design", "max_stars_repo_path": "designdoc/SystemDesign.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5898, "size": 20119 }
\section{Scaling of contact rates}
The base mixing matrix can be disaggregated into age-specific contacts that occur in the following four settings:
\begin{itemize}
    \item Home
    \item Other locations
    \item Work
    \item School
\end{itemize}
The contact rates in each of these settings are scaled as indicated in Table \ref{tab:location_scaling}.
The quantity indicated in the table is a proportion relative to a baseline value of one.
This quantity is then squared to account for the fact that reductions in mobility affect both the potential infector and the potential infectee, neither of whom will visit the setting considered.
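As an illustration of this scaling (with notation chosen here for convenience rather than taken from elsewhere in this description), if $C_s$ denotes the contact matrix for setting $s$ and $l_s$ the proportion from Table \ref{tab:location_scaling} for that setting, the scaled total mixing matrix would take the form
\[
C = \sum_{s \,\in\, \{\mathrm{home},\,\mathrm{other},\,\mathrm{work},\,\mathrm{school}\}} l_s^{\,2}\, C_s ,
\]
so that a setting left at its baseline proportion of one contributes its full contact rates, while any reduction suppresses that setting's contribution quadratically.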
{ "alphanum_fraction": 0.7920634921, "avg_line_length": 39.375, "ext": "tex", "hexsha": "08827323ed0287e7222b5f36208d8c29c95c41a6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f", "max_forks_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_forks_repo_name": "emmamcbryde/AuTuMN-1", "max_forks_repo_path": "docs/tex/tex_descriptions/models/sm_sir/mixing_matrix/mixing_matrix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_issues_repo_name": "emmamcbryde/AuTuMN-1", "max_issues_repo_path": "docs/tex/tex_descriptions/models/sm_sir/mixing_matrix/mixing_matrix.tex", "max_line_length": 83, "max_stars_count": null, "max_stars_repo_head_hexsha": "b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f", "max_stars_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_stars_repo_name": "emmamcbryde/AuTuMN-1", "max_stars_repo_path": "docs/tex/tex_descriptions/models/sm_sir/mixing_matrix/mixing_matrix.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 135, "size": 630 }
\documentclass[english]{sig-alternate-05-2015} \usepackage[T1]{fontenc} \usepackage[latin9]{inputenc} \usepackage{verbatim} %% without following commands, amsthm complains proof being already defined \let\proof\relax \let\endproof\relax \usepackage{amsthm} \usepackage{amstext} \usepackage{graphicx} \usepackage{mdwlist} % compact lists \usepackage{relsize} % relative font sizes \usepackage{todonotes} % private notes \usepackage{soul} \usepackage{color} \usepackage{hyperref} \hypersetup{bookmarks=true,bookmarksnumbered=true,pdfstartview=FitB,colorlinks=true,pdfborder=0 0 0} \usepackage{amsmath} \usepackage{xspace} \usepackage[sort&compress]{natbib} \usepackage{comment} \makeatletter %% for manually breaking cells in tables. \newcommand{\specialcell}[2][c]{% \begin{tabular}[#1]{@{}c@{}}#2\end{tabular}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% LyX specific LaTeX commands. %% Because html converters don't know tabularnewline \providecommand{\tabularnewline}{\\} %%%% \usepackage{booktabs} \newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Textclass specific LaTeX commands. \theoremstyle{plain} \newtheorem{thm}{\protect\theoremname} \theoremstyle{definition} \newtheorem{defn}{\protect\definitionname} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands. \usepackage{algorithm} \usepackage{algpseudocode} \@ifundefined{showcaptionsetup}{}{% \PassOptionsToPackage{caption=false}{subfig}} \usepackage{subfig} \makeatother \usepackage{babel} \providecommand{\definitionname}{Definition} \providecommand{\theoremname}{Theorem} \DeclareCaptionType{copyrightbox} %========== Custom typographic commands ========== \newcommand{\emailAddr}[1]{\texttt{\smaller {#1}}} \newcommand{\mailtoLink}[2]{\href{mailto:#2}{\emailAddr{#1}}} \newcommand{\refSection}[1]{Section~\ref{#1}} \newcommand{\refsec}[1]{\S~\ref{#1}} \newcommand{\refequ}[1]{eq.~\eqref{#1}} \newcommand{\refEquation}[1]{Equation~\eqref{#1}} \newcommand{\refFigure}[1]{Figure~\ref{#1}} \newcommand{\reffig}[1]{fig.~\ref{#1}} \newcommand{\refDefinition}[1]{Definition~\ref{#1}} \newcommand{\refdef}[1]{defn.~\ref{#1}} \newcommand{\refAlgorithm}[1]{Algorithm~\ref{#1}} \newcommand{\refalg}[1]{alg.~\ref{#1}} \newcommand{\refTable}[1]{Table~\ref{#1}} \newcommand{\reftab}[1]{table~\ref{#1}} \newcommand{\refTheorem}[1]{Theorem~\ref{#1}} \newcommand{\refthm}[1]{thm.~\ref{#1}} %========== Document Specific commands ========== \newcommand{\sv}{SV} \newcommand{\ftsv}{\textsc{FT-SV}} \newcommand{\graphname}[1]{\emph{#1}} \newcommand{\CCVAL}{$CC$} \newcommand{\icomment}[1]{\textbf{ \textcolor{red} {#1}}} \begin{document} \setcopyright{acmcopyright} \title{A Fault Tolerant Connected Component Algorithm } \numberofauthors{1} \author{ \alignauthor Piyush Sao, Oded Green, Chirag Jain, Richard Vuduc \\ \affaddr{School of Computational Science and Engineering}\\ \affaddr{Georgia Institute of Technology}\\ \email{\{piyush3,ogreen3,cjain3,richie\}@gatech.edu} % % \alignauthor % Xiaoye S. 
Li\\ % \affaddr{Computational Research Division }\\ % \affaddr{Lawrence Berkeley National Laboratory}\\ % \email{[email protected]} } \maketitle \input{abstract} \keywords{Fault-tolerance, Graph-algorithm, Connected Component, Bit-flip} \section{Introduction} \input{intro} \label{sec:intro} %ODED I think that the fault section should come after the connected components section \section{Fault Model} \input{fault} \label{sec:fault} \section{Connected components} \input{connected} \label{sec:connected} \section{Fault Tolerant \sv~ Algorithm} \input{ft-connected} \label{sec:ft-connected} \section{Results} \input{results} \label{sec:results} \section{Related work} \input{related} \label{sec:related} \section{Conclusion and future work} \input{conclusion} \label{sec:conclusion} \section{Acknowledgment} \input{ack} % \balance \bibliographystyle{abbrv} % \addbibresource{sao.bib} \bibliography{sao} \end{document}
{ "alphanum_fraction": 0.7123938031, "avg_line_length": 23.5411764706, "ext": "tex", "hexsha": "f96e0be879dcf4271a356ca48f2ce83d2415e42f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ffe871d526c2a7e93121be61132356e9c77df6e5", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "ogreen/approxGraphComp", "max_forks_repo_path": "paper/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ffe871d526c2a7e93121be61132356e9c77df6e5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "ogreen/approxGraphComp", "max_issues_repo_path": "paper/main.tex", "max_line_length": 100, "max_stars_count": null, "max_stars_repo_head_hexsha": "ffe871d526c2a7e93121be61132356e9c77df6e5", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "ogreen/approxGraphComp", "max_stars_repo_path": "paper/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1251, "size": 4002 }
\section {Foundations of dataflow analysis}
\setlength{\parindent}{0pt}
(Prepared by Vardhan Jain)
\vspace{0.3cm}
\subsection{Advantages of a common Dataflow Analysis framework}
\begin{itemize}
\item \textbf{Prove properties for an entire family of problems}: We prove properties for the framework once, and they then hold for all the dataflow analysis problems that fit the framework.
\item \textbf{Aids in software engineering}: We can write the basic logic in a base class, and all our dataflow analysis algorithms can be derived from that base class. We won't have to repeat the same logic.
\end{itemize}

\subsection{Dataflow Analysis problems \textbf{($F$, $V$, \^{})} are defined by}
\begin{itemize}
\item \textbf{A semilattice ($V$, \^{})}: A semilattice is defined by the domain of values, represented by $V$, and the meet operator, represented by the caret symbol \^{}.
\item \textbf{A family of transfer functions $F:V\rightarrow V$}: A transfer function is a function that takes a value from the set of values $V$ and returns a value in the same set $V$. $F$ represents the family of all such functions. They need to satisfy certain properties in order to be admissible.
\end{itemize}

\subsection{Semilattice}
A semilattice S = \textless a set of values $V$, a meet operator \^{}\textgreater, where the meet operator \^{} has the following properties:
\begin{itemize}
\item \textbf{Idempotent} x \^{} x = x
\item \textbf{Commutative} x \^{} y = y \^{} x
\item \textbf{Associative} x \^{} (y \^{} z) = (x \^{} y) \^{} z
\end{itemize}
Examples of meet operators \^{} are set-union, set-intersection, and, or, min, max. \\
Some non-examples of meet operators are add, subtract, multiply, divide, etc.

\subsection{Semilattice examples}
V = \{x {\textbar} x is a subset of \{$d_{1}$, $d_{2}$, $d_{3}$\}\}, \^{} $\triangleq$ set-union, Ordering $\leq$ $\triangleq$ $\supseteq$. Figure \ref{fig:semilattice_union} shows the semilattice diagram; the nodes represent the values and the edges represent the ordering. If there is an edge from a to b then a $\geq$ b. More precisely, there is a path from node a to node b iff a $\geq$ b. This is a partial ordering, as not all values are comparable. The meet of two values $v_{1}$ and $v_{2}$ can be read off the semilattice diagram as the first common descendant of the nodes with values $v_{1}$ and $v_{2}$.\par
Another example uses 2-tuples of booleans as values. The four possible values are \{true, true\}, \{false, true\}, \{true, false\} and \{false, false\}. The semilattice diagram is shown in figure \ref{fig:semilattice_and}. Here \^{} is component-wise logical AND.

\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/semilatticeSetUnion.png}
\caption{Set union semilattice example}
\label{fig:semilattice_union}
\end{figure}

\begin{figure}[h!]
\caption{Boolean tuples logical AND semilattice example} \begin{center} \begin{tikzpicture}[-latex ,auto ,node distance =3.5cm and 5cm ,on grid , semithick , state/.style ={ rectangle ,top color =white , bottom color = blue!20 , draw, blue , text=blue , scale = 0.7 ,minimum width =3.5 cm, minimum height = 1.5 cm}] \node[state] (A){} node [label = {}, rectangle split,rectangle split parts=2]{% \{true, true\}% }; \node[state] (B) [below left = of A]{} node [label = {},rectangle split,rectangle split parts=2] [below left = of A] {% \{false, true\}% }; \node[state] (D) [below right =of A]{} node [label = {},rectangle split,rectangle split parts=2] [below right = of A] {% \{true, false\}% }; \node[state] (E) [below left =of D]{} node [label = {},rectangle split,rectangle split parts=2] [below left = of D] {% \{false, false\}% }; \path[->] (A) edge node [above = 0.3 cm] {} (D); \path[->] (A) edge node [above = 0.3 cm] {} (B); \path[->] (B) edge (E); \path[->] (D) edge (E); \end{tikzpicture} \end{center} \label{fig:semilattice_and} \end{figure} \subsection{Semilattice properties} \begin{itemize} \item x \^{} y is the first common descendant of x and y. \item Define top value T such that x \^{} T = x for all x \item Define bottom ($\bot$) such that x \^{} $\bot$ = $\bot$ for all x \item Semilattice diagram = picture of partial orders \end{itemize} % \subsection{Example of \^{} and $\leq$} % Set-union
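To make these definitions concrete, the following is a small Python sketch (added here only as an illustration) of the set-union semilattice example above. The meet is set union, the induced order x $\leq$ y iff x \^{} y = x coincides with $\supseteq$, the top element is the empty set, and the bottom element is \{$d_{1}$, $d_{2}$, $d_{3}$\}.
\begin{verbatim}
from itertools import combinations

FULL = frozenset({'d1', 'd2', 'd3'})   # bottom: x ^ FULL = FULL for all x
TOP = frozenset()                       # top:    x ^ TOP  = x    for all x

def meet(x, y):
    """Meet of the example semilattice: set union."""
    return x | y

def leq(x, y):
    """Induced partial order: x <= y iff x ^ y = x (here: y is a subset of x)."""
    return meet(x, y) == x

# V = all subsets of {d1, d2, d3}
values = [frozenset(s) for r in range(4)
          for s in combinations(sorted(FULL), r)]

for x in values:
    assert meet(x, x) == x                                     # idempotent
    assert meet(x, TOP) == x and meet(x, FULL) == FULL         # top / bottom
    for y in values:
        assert meet(x, y) == meet(y, x)                        # commutative
        for z in values:
            assert meet(x, meet(y, z)) == meet(meet(x, y), z)  # associative
\end{verbatim}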
{ "alphanum_fraction": 0.6755295178, "avg_line_length": 52.8333333333, "ext": "tex", "hexsha": "7f347fdb64cfd67b4bf7ce865bc7990534de46ad", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-04-12T19:11:33.000Z", "max_forks_repo_forks_event_min_datetime": "2021-02-16T08:32:53.000Z", "max_forks_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "arpit-saxena/compiler-notes", "max_forks_repo_path": "module95.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "arpit-saxena/compiler-notes", "max_issues_repo_path": "module95.tex", "max_line_length": 520, "max_stars_count": null, "max_stars_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "arpit-saxena/compiler-notes", "max_stars_repo_path": "module95.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1337, "size": 4438 }
\title{Part-of-speech tagging (1)}
\author{Steve Renals \newline \mbox{ }[email protected]\mbox{ }}
\date{ICL --- 13 October 2005}

\begin{document}

\frame{\titlepage}

\mode<article>{\section[Outline]{ICL/PoS Tagging 1/2005--10--13}}
\mode<presentation>{ \section[Outline]{} }
\frame{\tableofcontents}

\section{Parts of Speech}
\subsection{Introduction}

\begin{frame}
\mode<presentation>{\frametitle{Parts of speech}}
\begin{itemize}
\item How can we predict the behaviour of a previously unseen word?
\item Words can be divided into classes that behave similarly.
\item Traditionally eight parts of speech: noun, verb, pronoun, preposition, adverb, conjunction, adjective and article.
\item More recently larger sets have been used: eg Penn Treebank (45 tags), Susanne (353 tags).
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Parts of Speech}
\begin{alertblock}{What use are parts of speech?}
They tell us a lot about a word (and the words near it).
\end{alertblock}
\pause
\begin{itemize}
\item Tell us what words are likely to occur in the neighbourhood (eg adjectives often followed by nouns, personal pronouns often followed by verbs, possessive pronouns by nouns)
\item Pronunciations can be dependent on part of speech, eg \textcolor{black}{object, content, discount} (useful for speech synthesis and speech recognition)
\item Can help information retrieval and extraction (stemming, partial parsing)
\item Useful component in many NLP systems
\end{itemize}
\end{frame}

\subsection{Open and closed classes}

\begin{frame}
\mode<presentation>{\frametitle{Closed and open classes}}
\begin{itemize}
\item Parts of speech may be categorised as \emph{open} or \emph{closed} classes
\item Closed classes have a fixed membership of words (more or less), eg determiners, pronouns, prepositions
\item Closed class words are usually \emph{function words} --- frequently occurring, grammatically important, often short (eg \textcolor{black}{of,it,the,in})
\item The major open classes are \emph{nouns}, \emph{verbs}, \emph{adjectives} and \emph{adverbs}
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Closed classes in English}
\begin{description}
\item[prepositions] on, under, over, to, with, by
\item[determiners] the, a, an, some
\item[pronouns] she, you, I, who
\item[conjunctions] and, but, or, as, when, if
\item[auxiliary verbs] can, may, are
\item[particles] up, down, at, by
\item[numerals] one, two, first, second
\end{description}
\end{frame}

\begin{frame}
\frametitle{Open classes}
\begin{description}
\item[nouns] Proper nouns (\textcolor{black}{Scotland}, \textcolor{black}{BBC}),\\ common nouns:
  \begin{itemize}
  \item count nouns (\textcolor{black}{goat}, \textcolor{black}{glass})
  \item mass nouns (\textcolor{black}{snow}, \textcolor{black}{pacifism})
  \end{itemize}
\item[verbs] actions and processes (\textcolor{black}{run}, \textcolor{black}{hope}), also auxiliary verbs
\item[adjectives] properties and qualities (age, colour, value)
\item[adverbs] modify verbs, or verb phrases, or other adverbs:\\ \textcolor{black}{\emph{Unfortunately} John walked \emph{home extremely slowly yesterday}}
\end{description}
\end{frame}

\subsection{Tagsets}

\begin{frame}
\frametitle{The Penn Treebank tagset (1)}
{\footnotesize
\begin{tabular}{|lll|lll|}
\hline
CC & Coord Conjuncn & \emph{and,but,or} & NN & Noun, sing. or mass & \emph{dog} \\
CD & Cardinal number & \emph{one,two} & NNS & Noun, plural & \emph{dogs} \\
DT & Determiner & \emph{the,some} & NNP & Proper noun, sing.
& \emph{Edinburgh} \\ EX & Existential there & \emph{there} & NNPS & Proper noun, plural & \emph{Orkneys} \\ FW & Foreign Word & \emph{mon dieu} & PDT & Predeterminer & \emph{all, both}\\ IN & Preposition & \emph{of,in,by} & POS & Possessive ending & \emph{'s} \\ JJ & Adjective & \emph{big} & PP & Personal pronoun & \emph{I,you,she} \\ JJR & Adj., comparative & \emph{bigger} & PP\$ & Possessive pronoun & \emph{my,one's} \\ JJS & Adj., superlative & \emph{biggest} & RB & Adverb & \emph{quickly} \\ LS & List item marker & \emph{1,One} & RBR & Adverb, comparative & \emph{faster} \\ MD & Modal & \emph{can,should} & RBS & Adverb, superlative & \emph{fastest} \\ \hline \end{tabular} } \end{frame} \begin{frame} \frametitle{The Penn Treebank tagset (2)} {\footnotesize \begin{tabular}{|lll|lll|} \hline RP & Particle & \emph{up,off} & WP\$ & Possessive-Wh & \emph{whose} \\ SYM & Symbol & \emph{+,\%,\&} & WRB & Wh-adverb & \emph{how,where} \\ TO & ``to'' & \emph{to} & \$ & Dollar sign & \emph{\$}\\ UH & Interjection & \emph{oh, oops} & \# & Pound sign & \emph{\#} \\ VB & verb, base form & \emph{eat} & `` & Left quote & ` , `` \\ VBD & verb, past tense & \emph{ate} & '' & Right quote & ', '' \\ VBG & verb, gerund & \emph{eating} & ( & Left paren & ( \\ VBN & verb, past part & \emph{eaten} & ) & Right paren & ) \\ VBP & Verb, non-3sg, pres & \emph{eat} & , & Comma & , \\ VBZ & Verb, 3sg, pres & \emph{eats} & . & Sent-final punct & . ! ? \\ WDT & Wh-determiner & \emph{which,that} & : & Mid-sent punct. & : ; --- ... \\ WP & Wh-pronoun & \emph{what,who} & & & \\ \hline \end{tabular} } \end{frame} \section{PoS Tagging in NLTK} \subsection{Tagging} \begin{frame} \mode<presentation>{\frametitle{Tagging}} \begin{itemize} \item Definition: Tagging is the assignment of a single part-of-speech tag to each word (and punctuation marker) in a corpus. For example:\\ {\scriptsize \textcolor{black}{``/`` The/DT guys/NNS that/WDT make/VBP traditional/JJ hardware/NN are/VBP really/RB being/VBG obsoleted/VBN by/IN microprocessor-based/JJ machines/NNS ,/, ''/'' said/VBD Mr./NNP Benton/NNP ./.}} \item Non-trivial: POS tagging must resolve ambiguities since the same word can have different tags in different contexts \item In the Brown corpus 11.5\% of word types and 40\% of word tokens are ambiguous \item In many cases one tag is much more likely for a given word than any other \item Limited scope: only supplying a tag for each word, no larger structures created (eg prepositional phrase attachment) \end{itemize} \end{frame} %There EX %are VBP %11 CD %players NNS %in IN %a DT %football NN %team NN %. . \begin{frame} \frametitle{Information sources for tagging} What information can help decide the correct PoS tag for a word? 
\begin{description}
\item[Other PoS tags] Even though the PoS tags of other words may be uncertain too, we can use information that some tag sequences are more likely than others (eg \emph{the/AT red/JJ drink/NN} vs \emph{the/AT red/JJ drink/VBP}).\newline Using \emph{only} information about the most likely PoS tag sequence does not result in an accurate tagger (about 77\% correct)
\item[The word identity] Many words can have multiple possible tags, but some are more likely than others (eg \emph{fall/VBP} vs \emph{fall/NN}) \newline Tagging each word with its most common tag results in a tagger with about 90\% accuracy
\end{description}
\end{frame}

\subsection{Simple taggers}

%%% Assume Ewan will introduce the idea of a tagged token being a double

\begin{frame}[fragile]
\frametitle{Tagging in NLTK}
The simplest possible tagger tags everything as a noun:
{\small
\begin{verbatim}
from nltk_lite import tokenize
text = 'There are 11 players in a football team'
text_tokens = list(tokenize.whitespace(text))
# ['There', 'are', '11', 'players', 'in', 'a', 'football', 'team']
\end{verbatim}
\pause
\begin{verbatim}
from nltk_lite import tag
mytagger = tag.Default('nn')
for t in mytagger.tag(text_tokens): print t
# ('There', 'NN')
# ('are', 'NN')
# ...
\end{verbatim}}
\end{frame}

\begin{frame}[fragile]
\frametitle{A regular expression tagger}
We can use regular expressions to tag tokens based on regularities in the text, eg numerals:
{\small
\begin{verbatim}
default_pattern = (r'.*', 'NN')
cd_pattern = (r' ^[0-9]+(.[0-9]+)?$', 'CD')
patterns = [cd_pattern, default_pattern]
NN_CD_tagger = tag.Regexp(patterns)
re_tagged = list(NN_CD_tagger.tag(text_tokens))
# [('There', 'NN'), ('are', 'NN'), ('11', 'NN'), ('players', 'NN'),
#  ('in', 'NN'), ('a', 'NN'), ('football', 'NN'), ('team', 'NN')]
\end{verbatim}
}%$
\end{frame}

\subsection{Unigram taggers}

\begin{frame}[fragile]
\frametitle{A unigram tagger}
The NLTK UnigramTagger class implements a tagging algorithm based on a table of unigram probabilities:
\[ \mbox{tag}(w) = \arg\max_{t_i} P(t_i|w) \]
\pause
Training a UnigramTagger on the Penn Treebank:
{\small
\begin{verbatim}
from nltk_lite.corpora import treebank
from itertools import islice

# sentences 0-2999
train_sents = list(islice(treebank.tagged(), 3000))
# from sentence 3000 to the end
test_sents = list(islice(treebank.tagged(), 3000, None))

unigram_tagger = tag.Unigram()
unigram_tagger.train(train_sents)
\end{verbatim}}
\end{frame}
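% Illustrative extra slide: a plain-Python sketch of the table that the
% unigram tagger builds; the variable names are made up for this sketch.
\begin{frame}[fragile]
\frametitle{What the unigram tagger stores (sketch)}
A minimal sketch of the same idea without NLTK, assuming each training sentence is a list of (word, tag) pairs (names below are made up):
{\small
\begin{verbatim}
from collections import Counter, defaultdict

counts = defaultdict(Counter)
for sent in train_sents:          # each sent: [(word, tag), ...]
    for word, tag_ in sent:
        counts[word][tag_] += 1

# tag(w) = argmax_t P(t|w), estimated from the counts
most_common_tag = dict(
    (w, c.most_common(1)[0][0]) for w, c in counts.items())
\end{verbatim}}
Unseen words are missing from the table, hence the need for a backoff tagger (next slide).
\end{frame}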
\begin{frame}[fragile]
\frametitle{Unigram tagging}
{\small
\begin{verbatim}
>>> list(unigram_tagger.tag(tokenize.whitespace(
...     "Mr. Jones saw the book on the shelf")))
[('Mr.', 'NNP'), ('Jones', 'NNP'), ('saw', 'VBD'), ('the', 'DT'),
 ('book', 'NN'), ('on', 'IN'), ('the', 'DT'), ('shelf', None)]
\end{verbatim}}
The UnigramTagger assigns the default tag \texttt{None} to words that are not in the training data (eg \emph{shelf})
\pause
We can combine taggers to ensure every word is tagged:
{\small
\begin{verbatim}
>>> unigram_tagger = tag.Unigram(cutoff=0, backoff=NN_CD_tagger)
>>> unigram_tagger.train(train_sents)
>>> list(unigram_tagger.tag(tokenize.whitespace(
...     "Mr. Jones saw the book on the shelf")))
[('Mr.', 'NNP'), ('Jones', 'NNP'), ('saw', 'VBD'), ('the', 'DT'),
 ('book', 'VB'), ('on', 'IN'), ('the', 'DT'), ('shelf', 'NN')]
\end{verbatim}}
\end{frame}

\section{Evaluating taggers}
\subsection{Accuracy and gold standard}

\begin{frame}
\frametitle{Evaluating taggers}
\begin{itemize}
\item Basic idea: compare the output of a tagger with a human-labelled \emph{gold standard}
\item Need to compare how well an automatic method does with the agreement between people
\item The best automatic methods have an accuracy of about 96-97\% when using the (small) Penn treebank tagset (but this is still an average of one error every couple of sentences...)
\item Inter-annotator agreement is also only about 97\%
\item A good unigram baseline (with smoothing) can obtain 90-91\%!
\end{itemize}
\end{frame}

\begin{frame}[fragile]
\frametitle{Evaluating taggers in NLTK}
NLTK provides a function \texttt{tag.accuracy} to automate evaluation. It needs to be provided with a tagger, together with some text to be tagged and the gold standard tags.
\pause
We can make the output print more prettily:
\begin{verbatim}
def print_accuracy(tagger, data):
    print '%3.1f%%' % (100 * tag.accuracy(tagger, data))
\end{verbatim}
\pause
\begin{verbatim}
>>> print_accuracy(NN_CD_tagger, test_sents)
15.0%
>>> print_accuracy(unigram_tagger, train_sents)
93.8%
>>> print_accuracy(unigram_tagger, test_sents)
82.8%
\end{verbatim}
\end{frame}

\subsection{Error analysis}

\begin{frame}
\mode<presentation>{\frametitle{Error analysis}}
\begin{itemize}
\item The \% correct score doesn't tell you everything --- it is useful to know what is misclassified as what
\item \emph{Confusion matrix}: A matrix (ntags x ntags) where the rows correspond to the correct tags and the columns correspond to the tagger output. Cell $(i,j)$ gives the count of the number of times tag $i$ was classified as tag $j$
\item The leading diagonal elements correspond to correct classifications
\item Off-diagonal elements correspond to misclassifications
\item Thus a confusion matrix gives information on the major problems facing a tagger (eg NNP vs. NN vs. JJ)
\item See section 3 of the NLTK tutorial on Tagging
\end{itemize}
\end{frame}

\section{Summary}

\begin{frame}
\mode<presentation>{\frametitle{Summary}}
\begin{itemize}
\item \textbf{Reading:} Chapter 8 of Jurafsky and Martin
\item Parts of speech and tagsets
\item Tagging
\item Constructing simple taggers in NLTK
\item Evaluating taggers
\item Next lecture: n-grams
\end{itemize}
\end{frame}

\end{document}
{ "alphanum_fraction": 0.6626107047, "avg_line_length": 31.5169902913, "ext": "tex", "hexsha": "8fe2c0aadeeefaca485cdbaebbb974790e3dc8f3", "lang": "TeX", "max_forks_count": 114, "max_forks_repo_forks_event_max_datetime": "2021-11-13T08:16:02.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-13T04:47:49.000Z", "max_forks_repo_head_hexsha": "c152bde901f05915e90b07a615b232adb123bed8", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "silky/nltk_contrib", "max_forks_repo_path": "doc/klein-renals/tag.d/pos1.tex", "max_issues_count": 12, "max_issues_repo_head_hexsha": "c152bde901f05915e90b07a615b232adb123bed8", "max_issues_repo_issues_event_max_datetime": "2020-07-30T23:00:41.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-13T06:27:18.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "silky/nltk_contrib", "max_issues_repo_path": "doc/klein-renals/tag.d/pos1.tex", "max_line_length": 84, "max_stars_count": 123, "max_stars_repo_head_hexsha": "15d82cb187f0a859fa1366fb0d69dc25c365a3ff", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "peresolb/nltk_contrib", "max_stars_repo_path": "doc/klein-renals/tag.d/pos1.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-01T10:05:16.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-06T10:46:18.000Z", "num_tokens": 3935, "size": 12985 }
% This is an example for a chapter, additional chapter can be added in the % skeleton-thesis. % To generate the final document, run latex, build and quick build commands % on the skeleton-thesis file not this one. \chapter{Microscopic predictions for production of neutron rich nuclei in the reaction $^\mathbf{176}\mathbf{Yb}$+${}^\mathbf{176}\mathbf{Yb}$}\label{chapters:chapter_8} %\vspace{-7mm} \title{Microscopic predictions for production of neutron rich nuclei in the reaction $^\mathbf{176}\mathbf{Yb}$+${}^\mathbf{176}\mathbf{Yb}$} \author[1]{K. Godbey} \affil[1]{Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235} \author[2]{C. Simenel} \affil[2]{Department of Theoretical Physics and Department of Nuclear Physics, Research School of Physics, The Australian National University, Canberra ACT 2601, Australia} \author[1]{A. S. Umar} { \makeatletter \begin{center} \AB@authlist \AB@affillist \end{center} \makeatother \bfseries\centering The following work has been accepted by Physical Review C~\citep{godbey2020b} and is reprinted below in its entirety.\\ \textcopyright2020 American Physical Society\\ } \makeatletter \renewcommand{\AB@affillist}{} \renewcommand{\AB@authlist}{} \setcounter{authors}{0} \makeatother %------------------------------------------------------------------------------ \section{Abstract} {\bf [Background]} Production of neutron-rich nuclei is of vital importance to both understanding nuclear structure far from stability and to informing astrophysical models of the rapid neutron capture process (r-process). Multinucleon transfer (MNT) in heavy-ion collisions offers a possibility to produce neutron-rich nuclei far from stability. {\bf [Purpose]} The $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ reaction has been suggested as a potential candidate to explore the neutron-rich region surrounding the principal fragments. The current study has been conducted with the goal of providing guidance for future experiments wishing to study this (or similar) system. {\bf [Methods]} Time-dependent Hartree-Fock (TDHF) and its time-dependent random-phase approximation (TDRPA) extension are used to examine both scattering and MNT characteristics in $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$. TDRPA calculations are performed to compute fluctuations and correlations of the neutron and proton numbers, allowing for estimates of primary fragment production probabilities. {\bf [Results]} Both scattering results from TDHF and transfer results from the TDRPA are presented for different energies, orientations, and impact parameters. In addition to fragment composition, scattering angles and total kinetic energies, as well as correlations between these observables are presented. {\bf [Conclusions]} $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ appears to be an interesting probe for the mid-mass neutron-rich region of the chart of nuclides. The predictions of both TDHF and TDRPA are speculative, and will benefit from future experimental results to test the validity of this approach to studying MNT in heavy, symmetric collisions. %------------------------------------------------------------------------------ \section{Introduction} The synthesis of neutron-rich nuclei is one of the most exciting and challenging tasks in both experimental and theoretical nuclear physics. From the lightest systems to the superheavy regime, knowledge about the nuclei at the extremes of the chart of nuclides is vital to understanding physical phenomena at multiple scales. 
At the foremost, neutron-rich nuclei are at the literal and figurative center of the rapid neutron capture process (r-process). Attempts at modeling the r-process utilize input from nuclear models to inform threshold energies for the reaction types that characterize this process~\citep{cowan2020}. Thus, strong theoretical understanding of both the static and dynamic properties of nuclei far from stability can give vital insight into the formation of stable heavy nuclei. The production of neutron-rich nuclei is also of interest for studying nuclear structure, where exploring this region of the nuclear landscape clearly probes the edges of our current understanding of how finite nuclei form and are composed~\citep{otsuka2020}. %,heyde2011}. This includes studies of neutron-rich nuclei of all masses, ranging from oxygen~\citep{desouza2013} up to the superheavy element (SHE) region. SHEs are of particular note, as the formation and static properties of said nuclei have been the focus of many experimental~\citep{hofmann2002,munzenberg2015,morita2015,oganessian2015,roberto2015} and theoretical~\citep{bender1999,nazarewicz2002, cwiok2005,pei2009a,stone2019} studies. Over the years, many theoretical approaches to studying neutron-rich nuclei formation have been pursued for various reaction types. One such technique is to use models to study neutron enrichment via multinucleon transfer (MNT) in deep-inelastic collisions (DIC) and quasifission reactions~\citep{adamian2003,zagrebaev2007,umar2008a,golabek2009,aritomo2009,kedziora2010,zhao2016,sekizawa2017a,wu2019}. While quasifission occurs at a much shorter time-scale than fusion-fission~\citep{toke1985,durietz2011} and is the primary reaction mechanism that limits the formation of superheavy nuclei, the fragments produced may still be neutron-rich. Quasifission reactions are often studied in asymmetric systems with, e.g., an actinide target \citep{toke1985,hinde1992,hinde1995,itkis2004,wakhle2014}. However, quasifission can also be present in symmetric systems. In fact, the extreme case of quasifission in actinide-actinide collisions has been suggested as a possible reaction mechanism to obtain neutron-rich isotopes of high $Z$ nuclei in particular as well as a possible means to search for SHE~\citep{majka2018,wuenschel2018}. Theoretically, the investigation of actinide-actinide collisions has a rich history with various approaches, including the dinuclear system (DNS) model~\citep{penionzhkevich2005,adamian2008,feng2009a,adamian2010,adamian2010b,feng2017,zhu2017,bao2018b}, relativistic mean-field (RMF) and Skyrme HF studies~\citep{gupta2007b}, reduced density-matrix formalism~\citep{sargsyan2009}, quantum molecular dynamics (QMD)~\citep{zhao2009}, and improved quantum molecular dynamics (ImQMD)~\citep{tian2008,zhao2016,wang2016,yao2017,li2018} calculations, as well as time-dependent Hartree-Fock (TDHF) studies~\citep{cusson1980,golabek2009,kedziora2010}. Over recent years, TDHF has proved to be a tool of choice to investigate fragment properties produced in various reactions, such as DIC~\citep{umar2017,wu2019}, quasifission~\citep{wakhle2014,oberacker2014,hammerton2015,umar2015c,umar2016,wang2016,sekizawa2017a,godbey2019,jiang2020}, and fission~\citep{simenel2014a,scamps2015a,goddard2015,tanimura2015,goddard2016,bulgac2016,tanimura2017,scamps2018,bulgac2018,scamps2019}. Recent reviews~\citep{simenel2018,sekizawa2019} succinctly summarize the current state of TDHF (and its extensions) as it has been applied to various MNT reactions. 
In this work, we present a study of the $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ system using TDHF and the time-dependent random phase approximation (TDRPA) extension that considers the effect of one-body fluctuations around the TDHF trajectory. As discussed before, microscopic approaches such as TDHF and its extensions are commonly used in heavy-ion collision studies in different regions of the nuclear chart, positioning TDHF and TDRPA as tools of choice for the current investigation. Symmetric $^{176}$Yb reactions were chosen because they are considered a potential candidate to explore the neutron-rich region around $A\sim 170-180$ of the nuclear chart. Specifically, an experimental investigation of this reaction is being considered in Dubna by Oganessian \textit{et al.}, and the work presented here was undertaken at their suggestion~\citep{priv_oganessian}.

The base theory (TDHF) and the primary extension (TDRPA) are briefly described in Section~\ref{sec:TDHF}. Results for both scattering characteristics and transfer characteristics are discussed in Section~\ref{sec:scat} and Section~\ref{sec:tran}, respectively. A summary and outlook are then presented in Section~\ref{sec:conclusions}.
%------------------------------------------------------------------------------
%\citep{prasad2016}
\section{Formalism: TDHF and TDRPA}
\label{sec:TDHF}

The TDHF theory provides a microscopic approach with which one may investigate a wide range of phenomena observed in low energy nuclear physics~\citep{negele1982,simenel2012,simenel2018,sekizawa2019}. Specifically, TDHF provides a dynamic quantum many-body description of nuclear reactions in the vicinity of the Coulomb barrier, such as fusion~\citep{bonche1978,flocard1978,simenel2001,umar2006d,washiyama2008,umar2010a,umar2009a,guo2012,keser2012,simenel2013a,oberacker2012,oberacker2010,umar2012a,simenel2013b,umar2014a,jiang2014} and transfer reactions~\citep{koonin1977,simenel2010,simenel2011,umar2008a,sekizawa2013,scamps2013a,sekizawa2014,bourgin2016,umar2017,sekizawa2019}. The TDHF equations for the single-particle wave functions
\begin{equation}
h(\{\phi_{\mu}\}) \ \phi_{\lambda} (r,t) = i \hbar \frac{\partial}{\partial t} \phi_{\lambda} (r,t) \ \ \ \ (\lambda = 1,...,A) \ ,
\label{eq:TDHF}
\end{equation}
can be derived from a variational principle. The principal approximation in TDHF is that the many-body wave function $\Phi(t)$ is assumed to be a single time-dependent Slater determinant at all times. It describes the time evolution of the single-particle wave functions in a mean field corresponding to the dominant reaction channel. During the past decade it has become numerically feasible to perform TDHF calculations on a 3D Cartesian grid without any symmetry restrictions and with much more accurate numerical methods~\citep{bottcher1989,umar2006c,sekizawa2013,maruhn2014}.

The main limitation of the TDHF theory when studying features like particle transfer, however, is that it is optimized for the prediction of expectation values of one-body observables~\citep{balian1981} and will under-predict fluctuations of those observables~\citep{dasso1979}. This is due to the fact that the fluctuation of one-body operators (such as the particle number operator) includes the expectation value of the square of a one-body operator,
\begin{equation}
\sigma_{XX}=\sqrt{\langle\hat{X}^2\rangle - \langle\hat{X}\rangle^2},
\label{eq:fluc}
\end{equation}
that is outside the variational space of TDHF \citep{balian1981}.
To obtain such quantities one needs to go beyond standard TDHF and consider the fluctuations around the TDHF mean-field trajectory using techniques like the stochastic mean-field theory (SMF)~\citep{ayik2008,lacroix2014} or TDRPA~\citep{balian1984}; both of these approaches have been used to investigate MNT and fragment production~\citep{ayik2016,ayik2017,ayik2018,ayik2019,ayik2019b,marston1985,bonche1985,broomfield2008,broomfield2009,simenel2011,williams2018}. The advantage of these methods compared to others mentioned in the Introduction is that they do not rely on empirical parameters and are fully microscopic. In this work we follow an approach similar to that of~\citep{simenel2011,williams2018} to obtain particle number fluctuations and distributions of the outgoing fragments.

The foundation of the method is to consider an alternate variational principle for generating the mean-field theory. In particular, the Balian-V\`en\`eroni (BV) variational principle provides a powerful technique that optimizes the evaluation of expectation values for arbitrary operators~\citep{balian1984,bonche1985}. When the operator chosen is a one-body operator, the method produces the TDHF equations exactly, suggesting that TDHF is the mean-field theory that is best suited for the calculation of one-body expectation values. However, as mentioned above, the calculation of fluctuations and correlations involves the square of a one-body operator. For TDHF alone, Eq.~(\ref{eq:fluc}) results in the following expression for two generic operators $\hat{X}$ and $\hat{Y}$,
\begin{equation}
\sigma_{XY}^2(t_f)=\mathrm{Tr}\left\{Y\rho(t_f)X[I-\rho(t_f)] \right\},
\end{equation}
where $I$ is the identity matrix and $t_f$ is the final time. By utilizing the BV variational principle and extending the variational space to optimize for the expectation value of squares of one-body operators, one obtains
\begin{equation}
\sigma_{XY}^2(t_f)=\lim_{\epsilon\rightarrow0}\frac{\mathrm{Tr}\left\{[\rho(t_i)-\rho_X(t_i,\epsilon)][\rho(t_i)-\rho_Y(t_i,\epsilon)]\right\}}{2\epsilon^2}
\label{eq:BVfluc}
\end{equation}
which now depends on the one-body density matrices at the initial time $t_i$. Equation~(\ref{eq:BVfluc}) also contains the density matrices $\rho_{X,Y}(t_i,\epsilon)$ which have been boosted at $t_f$ and evolved back to $t_i$. The procedure to compute Eq.~(\ref{eq:BVfluc}) involves first transforming the states after the collision as
\begin{equation}
\tilde{\phi}^X_\alpha(r,t_f)=\exp[-i\epsilon N_X\Theta_V({r})]\phi_\alpha(r,t_f),\label{eq:phi_x}
\end{equation}
where $X$ stands for neutron ($N$), proton ($Z$), or total nucleon number ($A$). The operator $N_X$ ensures that the transformation acts only on nucleons with the correct isospin, with $N_A=1$, $N_Z=\frac{1-\tau_3}{2}$, and $N_N=\frac{1+\tau_3}{2}$. The operator $\Theta_V(r)$ is a step function that is either $1$ or $0$ depending on whether $r$ is within a volume of space, $V$, delimiting the fragment of interest. Finally, $\epsilon$ is a small number that is varied to achieve convergence. These transformed states are then propagated backwards in time from the final time $t_f$ to the initial time $t_i$.
The trace in Eq.~(\ref{eq:BVfluc}) can then be calculated, obtaining \begin{equation}\label{ybyb:eq:sigma} \sigma_{XY} = \sqrt{\lim_{\epsilon\rightarrow0}\frac{\eta_{00}+\eta_{XY}-\eta_{0X}-\eta_{0Y}}{2\epsilon^2}}, \end{equation} with $\eta_{XY}$ describing the overlap between the states at time $t=t_i$, \begin{equation} \eta_{XY}=\sum_{\alpha \beta}\left|\langle\phi_\alpha^X(t_i)|\phi_\beta^Y(t_i)\rangle\right|^2. \end{equation} In the case of $X,Y=0$, this refers to states obtained with $\epsilon=0$ in Eq.~(\ref{eq:phi_x}). In principle, one should recover exactly the initial state as the evolution is unitary. However, using states that have been evolved forward and then backward in time with $\varepsilon=0$ minimizes systematic errors from numerical inaccuracies \citep{bonche1985,broomfield2009}. \begin{figure} \includegraphics*[width=\textwidth]{../Figures/YbYb/Potentials.pdf} \caption{Static nuclear potentials for $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ in the side-side (blue (dark) lines) and tip-tip (green (light) lines) orientations from FHF and DCFHF.} \label{fig:pot} \end{figure} The SLy4$d$ parametrization of the Skyrme functional is used~\citep{kim1997} and all calculations were performed in a numerical box with $66 \times 66$ points in the reaction plane, and 36 points along the axis perpendicular to the reaction plane. The grid spacing used was a standard $1.0$~fm which provides an excellent numerical representation of spatial quantities using the basis spline collocation method~\citep{umar1991a}. For the TDRPA calculations, each initial orientation, energy, and impact parameter resulted in three additional TDHF evolutions (one for each $X$) for the time reversed evolution at one value of $\epsilon=2\times10^{-3}$ in addition to occasionally scanning $\epsilon$ to ensure convergence of Eq.~(\ref{ybyb:eq:sigma}). In total, $200$ full TDHF evolutions were required for the results presented in this work with each taking on the order of $10\sim55$~hours of wall time due to the large, three-dimensional box size chosen. This corresponds to roughly $250$ days of computation time split among multiple nodes for the $^{176}$Yb HF ground state configuration with a prolate deformation. %of the $^{176}$Yb nucleus. %Additional calculations were also performed for the alternate deformation, requiring nearly the same computational effort. The proton and neutron numbers correlations and fluctuations computed with TDRPA are used to estimate probabilities for the formation of a given nuclide using Gaussian bivariate normal distributions of the form \begin{equation} \mathcal{P}(n,z) = \mathcal{P}(0,0)\exp\left[ -\frac{1}{1-\rho^2} \left( \frac{n^2}{\sigma_{NN}^2}+\frac{z^2}{\sigma_{ZZ}^2} - \frac{2\rho nz}{\sigma_{NN}\sigma_{ZZ}}\right) \right], \label{eq:Pnz} \end{equation} where $n$ and $z$ are the number of transferred neutrons and protons, respectively. The correlations between $N$ and $Z$ are quantified by the parameter \begin{equation} \rho = \mbox{sign}(\sigma_{NZ})\frac{\sigma_{NZ}^2}{\sigma_{NN}\sigma_{ZZ}}=\frac{\langle nz\rangle}{\sqrt{\langle n^2\rangle\langle z^2\rangle}}. \end{equation} In principle, $n$ and $z$ could be very large and lead to unphysical predictions with fragments having, e.g., a negative number of protons and neutrons, or more nucleons than available. In practice, such spurious results could only happen for the most violent collisions where the fluctuations are large. 
To avoid the spurious effects mentioned above, the probabilities are shifted so that $\mathcal{P}$ is zero when one fragment has all (or more) protons or neutrons. The resulting distribution is then normalized.

Although the $^{176}$Yb nuclide is in a region where shape coexistence is often found~\citep{fu2018,nomura2011,robledo2009,sarriguren2008,xu2011}, TDHF calculations can only be performed with one well-defined deformation (and orientation) of each collision partner in the entrance channel. In our calculations, the HF ground state is found to have a prolate deformation with $\beta_2\simeq 0.33$. A higher-energy oblate solution is also found, with a difference of around $5$~MeV in total binding energy. A set of calculations was also performed for the oblate solution, though the overall transfer behavior was found to be similar for both deformations despite the oblate one resulting in slightly lower fluctuations. In the following, we thus only show results for the prolate ground state. This deformation allows for different possible choices of the orientation of the nuclei. Extreme orientations are called ``side'' (``tip'') when the deformation axis is initially perpendicular (parallel) to the collision axis. Although various intermediate orientations could be considered~\citep{godbey2019}, we limit our study to tip-tip and side-side orientations where the initial orientations of both nuclei are identical. In addition to saving computational time, this restriction is necessary to ensure fully symmetric collisions and to avoid unphysical results in TDRPA~\citep{williams2018}.

Figure~\ref{fig:pot} shows the nucleus-nucleus potentials computed using the frozen Hartree-Fock (FHF)~\citep{simenel2008,washiyama2008} and density-constrained frozen Hartree-Fock (DCFHF)~\citep{simenel2017} methods, respectively neglecting and including the Pauli exclusion principle between the nucleons of different nuclei. Due to Pauli repulsion in DCFHF, the inner pocket potential is very shallow in the side-side configuration, and disappears in the tip-tip one. In this work, the effect of the orientation is studied by comparing tip-tip and side-side configurations at a center of mass energy $E_\mathrm{c.m.}=660$~MeV. In addition, calculations are also performed at $E_\mathrm{c.m.}=880$~MeV for both orientations to investigate the role of the energy in the reaction outcome.
%------------------------------------------------------------------------------
\section{Results}
In this section we present the results of TDHF and TDRPA studies of $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ reactions at different center of mass energies and initial orientations for a range of impact parameters. Both scattering features and quantities derived from particle number fluctuations were calculated and are shown below.
% Add subsections for different types of results
\subsection{Scattering Characteristics}\label{sec:scat}
This section presents scattering results from the standard TDHF calculations of $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ collisions. The TDRPA extension to TDHF is not needed for these results, though this means the points can only be interpreted as the most likely outcome for each initial condition.
\begin{figure}
\includegraphics*[width=\textwidth]{../Figures/YbYb/Scattering.pdf}
\caption{Scattering angles for $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ collisions at center of mass energies (a) $\mathrm{E_{c.m.}}=660$~MeV and (b) $\mathrm{E_{c.m.}}=880$~MeV in the side-side (circles) and tip-tip (squares) orientations.
The dotted (dashed) line plots the Rutherford scattering angle for $\mathrm{E_{c.m.}}=660$~MeV ($880$~MeV).} \label{fig:bvstheta} \end{figure} Scattering angles for the $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ system for both orientations are presented in Fig.~\ref{fig:bvstheta}. A similar deviation from Rutherford scattering is observed at impact parameters $b\le8$~fm for both orientations. These deviations are due to nuclear deflection and partial orbiting of the system. Note that no fusion is observed. The relatively flat shape of the curve around $50-60^\circ$ at 660~MeV and $20-40^\circ$ at 880~MeV implies a large number of events in these particular angular ranges. \begin{figure} \includegraphics*[width=\textwidth]{../Figures/YbYb/TKEL.pdf} \caption{Total kinetic energies of the outgoing fragments in $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ collisions at center of mass energies $\mathrm{E_{c.m.}}=660$~MeV (blue circles) and $\mathrm{E_{c.m.}}=880$~MeV (red squares) in the side-side orientation.} \label{fig:tke} \end{figure} The TKE of the outgoing fragments is plotted in Fig.~\ref{fig:tke} as a function of the impact parameter $b$ for side-side collisions at the two center of mass energies. Although dissipation occurs at different impact parameter ranges ($b<10$~fm at $E_{c.m.}=660$~MeV and $b<12$~fm at $E_{c.m.}=880$~MeV), both curves exhibit similar behavior. In particular, the TKEs saturate at roughly the same energy ($\sim350-400$~MeV) indicating full damping of the initial TKE for the most central collisions. Among the mechanisms responsible for energy dissipation, nucleon transfer is expected to play an important role. Of course, in symmetric collisions the average number of nucleons in the fragments does not change. Nevertheless, multinucleon transfer is possible thanks to fluctuations, leading to finite widths in the fragment particle number distributions. These fluctuations are explored in the following section. %and thus the lower energy collision reaches the elastic regime at a lower impact parameter. %Knowledge of which initial configurations lead to quasi-elastic scattering provides insight as to which reactions will lead to large fluctuations in particle numbers as seen in the following section. % due to the increased contact between fragments, though fluctuations equilibrate at a certain point \subsection{Transfer Characteristics}\label{sec:tran} This section focuses on the results obtained by extending TDHF to recover particle number fluctuations and correlations with the TDRPA. \begin{figure} \centering \includegraphics*[width=0.9\textwidth]{../Figures/YbYb/Fluctuations.pdf} \caption{TDRPA predictions of correlations $\sigma_{NZ}$ (a) and fluctuations $\sigma_{NN}$ (b) and $\sigma_{ZZ}$ (c) for $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ collisions for four initial configurations over a range of impact parameters.} \label{fig:fluc} \end{figure} Particle number fluctuations ($\sigma_{ZZ}$ and $\sigma_{NN}$) and correlations ($\sigma_{NZ}$) calculated from Eq.~(\ref{ybyb:eq:sigma}) are shown in Fig.~\ref{fig:fluc} as a function of impact parameters for different initial conditions. The fluctuations are greater in general at the smaller impact parameters, though they do not converge to a single value. Similar variations in fluctuations were already observed in earlier TDRPA studies of deep inelastic collisions in lighter systems~\citep{simenel2011,williams2018}. 
Particularly large values are sometimes obtained, such as at 660~MeV in tip-tip central ($b=0$) collisions, indicating approximately flat distributions around the TDHF average.
\begin{figure}
\includegraphics*[width=\textwidth]{../Figures/YbYb/MAD.pdf}
\caption{Mass angle distributions for $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ collisions at (a) $\mathrm{E_{c.m.}}=660$~MeV in the side-side orientation, (b) $\mathrm{E_{c.m.}}=660$~MeV in the tip-tip orientation, (c) $\mathrm{E_{c.m.}}=880$~MeV in the side-side orientation, and (d) $\mathrm{E_{c.m.}}=880$~MeV in the tip-tip orientation. The colorbar represents cross sections in millibarns per bin of mass ratio and degree.}
\label{fig:mad}
\end{figure}
\begin{figure}
\includegraphics*[width=\textwidth]{../Figures/YbYb/MED.pdf}
\caption{Mass energy distributions for $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ collisions at (a) $\mathrm{E_{c.m.}}=660$~MeV in the side-side orientation, (b) $\mathrm{E_{c.m.}}=660$~MeV in the tip-tip orientation, (c) $\mathrm{E_{c.m.}}=880$~MeV in the side-side orientation, and (d) $\mathrm{E_{c.m.}}=880$~MeV in the tip-tip orientation. The colorbar represents cross sections in millibarns per bin of mass ratio and MeV.}
\label{fig:med}
\end{figure}
Fragment mass-angle distributions (MADs) are a standard tool used experimentally to interpret the dynamics of heavy-ion collisions \citep{toke1985,shen1987,hinde2008,simenel2012b,durietz2013,wakhle2014,hammerton2015,morjean2017,mohanto2018,hinde2018}. Although TDHF has been used to help interpret these distributions theoretically \citep{wakhle2014,hammerton2015,umar2016,sekizawa2016}, these earlier calculations only incorporate fluctuations coming from the distribution of initial conditions (e.g., different orientations). Here, we go beyond the mean-field prediction by including the fragment mass fluctuations from TDRPA. Note that we only include mass fluctuations, not fluctuations in scattering angle, which are still determined solely by TDHF. Calculating quantum fluctuations of scattering angles is beyond the scope of this work, although they might be necessary for a more detailed comparison with experimental MADs.

The resulting MADs for $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ reactions are shown in Fig.~\ref{fig:mad}. The mass ratio $M_R$ is defined as the ratio of the fragment mass over the total mass of the system. The distributions of mass ratios are determined assuming Gaussian distributions with standard deviation $\sigma_{M_R}=\sigma_{AA}/A$, limited and normalized to the physical region $0\le M_R\le1$ (see section~\ref{sec:TDHF}). There is then an $M_R$ distribution per initial condition (defined by $E_{c.m.}$, $b$, and the orientations), but only a single scattering angle $\theta_{c.m.}$. To obtain a continuous representation of the scattering angle, $\theta_{c.m.}$ is discretized into bins of $\Delta\theta=1$ degree and interpolated between the values obtained by TDHF.
% This is handled by the integration step, no?
%A weighting factor $(b+\Delta b)^2-b^2$ is used so that the distribution is proportional to the differential cross-section.
The figures are symmetric about 90$^{\circ}$ because the two outgoing fragments are identical and travel outwards at complementary angles. Specific orientations such as side-side and tip-tip will, of course, not be accessible in an experimental setting.
%Nevertheless, the relative effect of varying the initial configuration is interesting and indicative of an influence of transfer.
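To give a rough sense of scale (again with a hypothetical value rather than one extracted from Fig.~\ref{fig:fluc}), a mass fluctuation of $\sigma_{AA}=15$ nucleons in this system of $A=352$ nucleons corresponds to
\[
\sigma_{M_R} = \frac{\sigma_{AA}}{A} = \frac{15}{352} \simeq 0.043 ,
\]
i.e., a full width at half maximum of roughly $2\sqrt{2\ln 2}\,\sigma_{M_R}\simeq 0.10$ in the mass-ratio distribution associated with that single initial condition.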
Interestingly, when investigating initial energy dependence of the MAD (compare panels (a) and (c), (b) and (d) in Fig.~\ref{fig:mad}), it can be seen that different outgoing angles are preferred depending on the incoming center of mass energy with back (and forward) scattering events being more prevalent in the higher energy regime. %Conversely, the lower energy reaction tends towards a wider distribution at intermediate angles off the collision axis. This agrees well with what is seen in Fig.~\ref{fig:bvstheta}, where many impact parameters result in scattering angles around $50-60$~degrees at $E_{c.m.}=660$~MeV and around $20-40$~degrees at 880~MeV. This is the case for both tip-tip and side-side orientations, though the tip-tip results tend further towards the intermediate angles than side-side at the same energy. While the predictive capability of this method needs to be compared with experimental results and tested, this suggests a strong energy dependence and that detection of fragment production will greatly benefit from large angle detectors. The energy dependence seen in the MAD is not intuitive, and may prove to be useful for informing experimental setups. %In examining the effect of the initial fragment orientations it can be seen that the tip-tip collisions ((b) and (d)) %the dependence on initial fragment orientation is more clear with glancing off-axis collisions around 90$^{\circ}$ having a larger peak in correlations at thebeing more prevalent in the Side-Side configuration where contact times \begin{figure*} \includegraphics*[width=\textwidth]{../Figures/YbYb/combinedchart.pdf} \caption{Primary fragments production cross sections for $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ collisions at $\mathrm{E_{c.m.}}=660$~MeV in the side-side orientation overlaid onto the chart of nuclides. The innermost contour corresponds to a cross section of 1~millibarn, with subsequent contours drawn every 0.2~mb. Finally, we also plot a boundary contour drawn at the microbarn level. Chart from~\protect\citep{anu_chart}.} \label{fig:chart} \end{figure*} Useful information can also be obtained from the correlations between fragment mass and kinetic energy \citep{itkis2004,itkis2011,itkis2015,kozulin2019,banerjee2019}. Figure~\ref{fig:med} presents mass energy distributions (MED) that detail the predicted TKE of outgoing fragments. It should be noted here that, while the theory provides particle number fluctuations, the values for TKE are single points (as in the case of $\theta_{c.m.}$) as predicted by TDHF alone. That is, widths of the TKE distributions are currently unknown with the method used here. This would make for an excellent extension to the theory, bringing it more in line with what can be experimentally observed. %With this limitation the distribution will under predict The MEDs exhibit a continuous broadening of the mass distribution with increasing energy dissipation. The saturation of TKE lies around $350-400$~MeV for side-side collisions (see also Fig.~\ref{fig:tke}) and around $250-300$~MeV for tip-tip. This difference between orientations is interesting as it indicates a larger kinetic energy dissipation with less compact configurations. % as expected in that the saturation of TKE lies around the same values as seen in Fig.~\ref{fig:tke}. A possible explanation is that the nuclei overlap at a larger distance in the tip-tip configuration, thus producing energy dissipation earlier in the collision process than in the side-side orientation. 
In general, the MEDs show peaks around the elastic and fully damped regions, which results from the large range of impact parameters contributing to both mechanisms.
%Additionally, the tip-tip collisions saturate at a slightly lower energy and thus contribute more to this range of TKEs.
%Like the MAD above, this technique is speculative and experimental confirmation is vital to ensure the validity of the approach to this particular class of systems.
\subsection{Primary fragments production}
Using the correlations and fluctuations shown in Fig.~\ref{fig:fluc}, a map of probabilities can be made in the $N$--$Z$ plane assuming a modified Gaussian bivariate normal distribution (see section~\ref{sec:TDHF} and Eq.~(\ref{eq:Pnz})). This choice of using a Gaussian is the primary assumption when calculating probabilities and related quantities and may not accurately describe the true distribution far from the center. These probability distributions at multiple impact parameters can then be integrated over to produce a map of primary fragment production cross sections, which is presented in Fig.~\ref{fig:chart} overlaid atop a section of the chart of nuclides in the region surrounding $^{176}$Yb~\citep{anu_chart}. As the probability distributions for each impact parameter will be centered around the $^{176}$Yb ($Z=70$, $N=106$) nuclide, the resulting cross sections are also symmetric about $^{176}$Yb. The inclusion of correlations between protons and neutrons via $\sigma_{NZ}$ more or less aligns the distribution parallel to the valley of stability due to the symmetry energy.

Subsequent decay of the fragments would inevitably bring the final products closer to the valley of stability. Here, our focus is on primary fragment production; the prediction of evaporation residue cross-sections is beyond the scope of this work. In fact, experimental measurements of mass-angle distributions using time of flight techniques are for primary fragments as they assume two-body kinematics \citep{thomas2008}. To estimate the evaporation residue cross-sections would require first computing the excitation energy of the fragments and then predicting their decay with a statistical model \citep{umar2017,sekizawa2017}. One way to minimize evaporation is to consider less violent collisions. In terms of primary fragment production, 660 and 880~MeV center of mass energies are quite similar (as can be seen from the relatively similar particle number fluctuations in Fig.~\ref{fig:fluc}). However, the higher energy will lead to more neutron evaporation and thus to less exotic evaporation residues. Use of relatively neutron-rich $^{176}$Yb nuclei in symmetric collisions may then allow for this reaction to act as a probe of the neutron-rich region surrounding the principal outgoing fragment.
%------------------------------------------------------------------------------
\section{Summary and discussion}\label{sec:conclusions}
Multiple TDHF and TDRPA calculations have been performed for the $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ system with various initial orientations, energies, and impact parameters. Standard TDHF allows for the classification of general scattering characteristics, while the TDRPA technique extends the approach to include correlations and fluctuations of particle numbers of the reaction fragments. This extension provides a theoretical framework that more closely resembles what will be seen in experimental investigations of this system (and similar ones).
In examining figures such as the mass-angle distributions in Fig.~\ref{fig:mad}, information regarding the angular distribution of fragments can be gleaned, suggesting the use of large-acceptance detectors to maximize measurement capability. Mass-energy distributions shown in Fig.~\ref{fig:med} are also useful to investigate, e.g., the interplay between dissipation and fluctuations. In both cases, however, fluctuations of $\theta_{c.m.}$ and of TKE are not predicted in the present study. The latter would require new implementations of the TDRPA for these observables, or the use of alternative approaches such as the stochastic mean-field theory \citep{tanimura2017} or an extension of the Langevin equation \citep{bulgac2019}. Both methods have been recently used to investigate kinetic energy distributions in fission fragments.

In order to benchmark our theoretical methods as applied to symmetric heavy nuclei, all predictions presented in this study would greatly benefit from experimental verification. The methods used here provide a very powerful tool for investigating symmetric systems, though an important caveat should be discussed regarding the interpretation of these results. TDRPA produces only correlations and fluctuations, not the actual distributions themselves, which are then taken to be of a Gaussian nature. This assumption may break down far from the center of the distribution, or if the shape at the center itself is too flat and deviates sufficiently from a Gaussian behavior. It is then extremely important to compare with observations made in experimental studies such that we may better understand how to interpret the results coming from these methods. Regardless, the $^{176}\mathrm{Yb}+{}^{176}\mathrm{Yb}$ system presents itself as a viable candidate for studies of MNT processes and production of neutron-rich nuclei in the region around $A\sim176$. The map of possible primary fragments loosely painted in Fig.~\ref{fig:chart} presents an exciting range of previously inaccessible nuclei, with the above caveat applying increasingly the further one goes from the center of the distribution. Another caveat is that the predicted distribution is for primary fragments only and that statistical decay should be included in order to predict the fragments produced after evaporation, e.g., following \citep{sekizawa2017,umar2017,wu2019}.
%------------------------------------------------------------------------------
\section{Acknowledgments}
We thank Yu. Ts. Oganessian and D. J. Hinde for stimulating discussions. This work has been supported by the U.S. Department of Energy under grant No. DE-SC0013847 with Vanderbilt University and by the Australian Research Council Grant No. DP190100256.
\clearpage
{ "alphanum_fraction": 0.7825106157, "avg_line_length": 93.9650872818, "ext": "tex", "hexsha": "411edeb43822fe6cfc7e8c881dc1d5671af585f8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "63048eecc834936896cb762f7d05b3faa945af78", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kylegodbey/Dissertation", "max_forks_repo_path": "Chapters/chapter_8.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "63048eecc834936896cb762f7d05b3faa945af78", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kylegodbey/Dissertation", "max_issues_repo_path": "Chapters/chapter_8.tex", "max_line_length": 641, "max_stars_count": null, "max_stars_repo_head_hexsha": "63048eecc834936896cb762f7d05b3faa945af78", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kylegodbey/Dissertation", "max_stars_repo_path": "Chapters/chapter_8.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9323, "size": 37680 }
{ "alphanum_fraction": 0.7719298246, "avg_line_length": 11.4, "ext": "tex", "hexsha": "289d83c78094156212b2046e7c73b5732fcbce81", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f33dfc9267b967c9e3e40fda391679ab1464520b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Shuo625/CV", "max_forks_repo_path": "CV/en/_experiences.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f33dfc9267b967c9e3e40fda391679ab1464520b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Shuo625/CV", "max_issues_repo_path": "CV/en/_experiences.tex", "max_line_length": 21, "max_stars_count": null, "max_stars_repo_head_hexsha": "f33dfc9267b967c9e3e40fda391679ab1464520b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Shuo625/CV", "max_stars_repo_path": "CV/en/_experiences.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 20, "size": 57 }
\input{common/conf_top.tex} \input{common/conf_titles.tex} \begin{document} \input{common/conf_listings.tex} \thispagestyle{empty} %removes page number \begin{center} \textsc{\Large\bf{Gradient Boosted Machines with H2O's R Package}} \\ \bigskip \textsc{\small{Cliff Click \hspace{40pt} Michal Malohlava \hspace{40pt} Viraj Parmar \hspace{40pt} Jessica Lanford}} \\ \bigskip \line(1,0){250} %inserts horizontal line {\url{https://www.gitbook.com/book/h2o/gbm-with-h2o-r}} \bigskip August 2015: Third Edition \\%add front page image here? (wavy lines) \bigskip \end{center} {\raggedright\vfill\ Gradient Boosted Machines with H2O's R Package\\ by Cliff Click, Michal Malohlava, Viraj Parmar, \&\ Jessica Lanford\\ \bigskip Published by H2O.ai, Inc. \\ 2307 Leghorn St. \\ Mountain View, CA 94043\\ \bigskip \textcopyright 2015 H2O.ai, Inc. All Rights Reserved. \bigskip August 2015: Third Edition \bigskip Photos by \textcopyright H2O.ai, Inc. \bigskip While every precaution has been taken in the\\ preparation of this book, the publisher and\\ authors assume no responsibility for errors or\\ omissions, or for damages resulting from the\\ use of the information contained herein.\\ \bigskip Printed in the United States of America. }\par \newpage \tableofcontents \newpage \input{common/what_is_h2o.tex} \section{Introduction} This vignette presents the gradient boosted machine (GBM) framework in the H2O package at {\url{http://cran.r-project.org/web/packages/h2o/index.html}}. Further documentation on H2O's system and algorithms can be found at the h2o.ai website at {\url{http://docs.h2o.ai}} and the R package manual at {\url{http://cran.r-project.org/web/packages/h2o/h2o.pdf}}. The full datasets and code of this vignette can be found at the H2O Github at {\url{http://github.com/h2oai/h2o}}. This introductory section provides instructions on getting H2O started, followed by a brief overview of gradient boosting. \section{Installation} To use H2O with R, you can start H2O outside of R and connect to it, or you can launch H2O from R. However, if you launch H2O from R and close the R session, the H2O instance is closed as well. The client object is used to direct R to datasets and models located in H2O. \subsection{Installing R or R Studio} To download R: \begin{enumerate} \item Go to {\url{http://cran.r-project.org/mirrors.html}}. \item Select your closest local mirror. \item Select your operating system (Linux, OS X, or Windows). \item Depending on your OS, download the appropriate file, along with any required packages. \item When the download is complete, unzip the file and install. \\ \end{enumerate} To download R Studio: \begin{enumerate} \item Go to {\url{http://www.rstudio.com/products/rstudio/}}. \item Select your deployment type (desktop or server). \item Download the file. \item When the download is complete, unzip the file and install. \end{enumerate} \subsection{Installing H2O in R} \begin{enumerate} \item Load the latest CRAN H2O package by running \begin{spverbatim} install.packages("h2o") \end{spverbatim}\footnote{Note: Our push to CRAN will be behind the bleeding edge version and due to resource constraints, may be behind the published version. However, there is a best-effort to keep the versions the same.} \begin{lstlisting}[style=R] # The following two commands remove any previously installed H2O packages for R. 
if ("package:h2o" %in% search()) { detach("package:h2o", unload=TRUE) } if ("h2o" %in% rownames(installed.packages())) { remove.packages("h2o") } # Next, we download packages that H2O depends on. if (! ("methods" %in% rownames(installed.packages()))) { install.packages("methods") } if (! ("statmod" %in% rownames(installed.packages()))) { install.packages("statmod") } if (! ("stats" %in% rownames(installed.packages()))) { install.packages("stats") } if (! ("graphics" %in% rownames(installed.packages()))) { install.packages("graphics") } if (! ("RCurl" %in% rownames(installed.packages()))) { install.packages("RCurl") } if (! ("jsonlite" %in% rownames(installed.packages()))) { install.packages("jsonlite") } if (! ("tools" %in% rownames(installed.packages()))) { install.packages("tools") } if (! ("utils" %in% rownames(installed.packages()))) { install.packages("utils") } \end{lstlisting} \begin{lstlisting}[style=R] # Now we download, install and initialize the H2O package for R (replacing the * with the latest version number obtained from the H2O download page) install.packages("h2o", type="source", repos=(c("http://h2o-release.s3.amazonaws.com/h2o/master/*/R"))) library(h2o) localH2O = h2o.init(nthreads = -1) \end{lstlisting} \normalsize%restore default text size To see {\texttt{h2o.gbm}} at work, run the following command to observe an automatic demo of an example classification model built using H2O's GBM. \begin{lstlisting}[style=R] demo(h2o.gbm) \end{lstlisting} \end{enumerate} \subsection{Making a build from Source Code}
If you are a developer who wants to make changes to the R package before building and installing it, pull the source code from Git (\url{https://github.com/h2oai/h2o-3}) and follow the instructions in From Source Code (Github) at {\url{http://docs.h2o.ai/developuser/quickstart\_git.html}}. After making the build, navigate to the Rcran folder with the R package in the build's directory, then run the following command to install it:
\begin{lstlisting}[style=R]
cd ~/Documents/h2o/target/Rcran
(master) $ R CMD INSTALL h2o_version#.tar.gz
* installing to library 'C:/Users/H2O-User/Documents/R/win-library/3.0'
* installing *source* package 'h2o' ...
** R
** demo
** inst
** preparing package for lazy loading
Warning: package 'statmod' was built under R version 3.0.3
Creating a generic function for 'summary' from package 'base' in package 'h2o'
Creating a generic function for 'colnames' from package 'base' in package 'h2o'
Creating a generic function for 't' from package 'base' in package 'h2o'
Creating a generic function for 'colnames<-' from package 'base' in package 'h2o'
Creating a generic function for 'nrow' from package 'base' in package 'h2o'
Creating a generic function for 'ncol' from package 'base' in package 'h2o'
Creating a generic function for 'sd' from package 'stats' in package 'h2o'
Creating a generic function for 'var' from package 'stats' in package 'h2o'
Creating a generic function for 'as.factor' from package 'base' in package 'h2o'
Creating a generic function for 'is.factor' from package 'base' in package 'h2o'
Creating a generic function for 'levels' from package 'base' in package 'h2o'
Creating a generic function for 'apply' from package 'base' in package 'h2o'
Creating a generic function for 'findInterval' from package 'base' in package 'h2o'
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded
*** arch - i386
Warning: package 'statmod' was built under R version 3.0.3
*** arch - x64
Warning: package 'statmod' was built under R version 3.0.3
* DONE (h2o)
\end{lstlisting}

\section{H2O Initialization}
Use the command {\texttt{h2o.init()}} to launch H2O. The following section describes the parameters for the {\texttt{h2o.init()}} command.
\subsection{H2O Initialization Parameters}
The {\texttt{h2o.init()}} command creates an H2O cluster or establishes a connection to an existing cluster. The following parameters are supported:
\begin{itemize}
\item {\texttt{ip}}: Specify an IP address for H2O
\item {\texttt{port}}: Specify a port for H2O
\item {\texttt{startH2O}}: Enable ({\texttt{startH2O = TRUE}}) or disable ({\texttt{startH2O = FALSE}}) H2O initialization from R if no connection with H2O is detected. This option is enabled only if {\texttt{ip = "localhost"}} or {\texttt{ip = "127.0.0.1"}}. If an existing connection is detected, R does not initialize H2O.
\item {\texttt{forceDL}}: Enable ({\texttt{forceDL = TRUE}}) or disable ({\texttt{forceDL = FALSE}}) forced downloads of the H2O executable. The default value is {\texttt{FALSE}}, so the executable is only downloaded if it does not already exist in the H2O R library resources directory, {\texttt{h2o/java/h2o.jar}}. This option is only enabled when H2O is started from R.
\item {\texttt{beta}}: Enable ({\texttt{beta = TRUE}}) or disable ({\texttt{beta = FALSE}}) beta mode.
\item {\texttt{assertion}}: Launch H2O with ({\texttt{assertion = TRUE}}) or without ({\texttt{assertion = FALSE}}) assertions. Use this mode for error checking and debugging.
\item {\texttt{license}}: Specify the full path of the license file to enable additional features.
\item {\texttt{nthreads}}: Specify the number of threads in the thread pool, which is related to the number of CPUs used. To use the CRAN default (2 CPUs), enter {\texttt{-2}}. To use all CPUs on the host, enter {\texttt{-1}}. To specify the number of CPUs, enter a positive integer.
\item {\texttt{max\_mem\_size}}: Specify the maximum amount (in bytes) of memory to allocate to H2O. The value must be greater than 2MB and a multiple of 1024. Enter {\texttt{m}} or {\texttt{M}} to specify megabytes or enter {\texttt{g}} or {\texttt{G}} to specify gigabytes.
\item {\texttt{min\_mem\_size}}: Specify the minimum amount (in bytes) of memory to allocate to H2O. The value must be greater than 2MB and a multiple of 1024. Enter {\texttt{m}} or {\texttt{M}} to specify megabytes or enter {\texttt{g}} or {\texttt{G}} to specify gigabytes.
\end{itemize}

\subsection{Launching from R}
If you do not specify the argument {\texttt{max\_mem\_size}} when you run {\texttt{h2o.init()}}, the default heap size of the H2O instance running on 32-bit Java is 1g. H2O checks the Java version and suggests an upgrade if you are running 32-bit Java. On 64-bit Java, the heap size is 1/4 of the total memory available on the machine. For best performance, the allocated memory should be 4x the size of your data, but never more than the total amount of memory on your computer. For larger data sets, we recommend running on a server or service with more memory available for computing.

To launch H2O from R, run the following in R:
\begin{lstlisting}[style=R]
> library(h2o)
# Loads required files for H2O
localH2O <- h2o.init(ip = 'localhost', port = 54321, nthreads = -1, max_mem_size = '4g')
# Starts H2O on the localhost, port 54321, with 4g of memory using all CPUs on the host
\end{lstlisting}
R displays the following output:
\begin{lstlisting}[style=R]
Successfully connected to http://localhost:54321

R is connected to H2O cluster:
    H2O cluster uptime:        11 minutes 35 seconds
    H2O cluster version:       2.7.0.1497
    H2O cluster name:          H2O_started_from_R
    H2O cluster total nodes:   1
    H2O cluster total memory:  3.56 GB
    H2O cluster total cores:   8
    H2O cluster allowed cores: 8
    H2O cluster healthy:       TRUE
\end{lstlisting}
If you are operating on a single node, initialize H2O using
\begin{lstlisting}[style=R]
h2o_server = h2o.init()
\end{lstlisting}
To connect with an existing H2O cluster node other than the default localhost:54321, specify the IP address and port number in the parentheses. For example:
\begin{lstlisting}[style=R]
h2o_cluster = h2o.init(ip = "192.555.1.123", port = 12345)
\end{lstlisting}

\subsection{Launching from the Command Line}
After launching the H2O instance, initialize the connection by running {\texttt{h2o.init()}} with the IP address and port number of a node in the cluster. In the following example, change 192.168.1.161 to your local host.
\begin{lstlisting}[style=R]
library(h2o)
localH2O <- h2o.init(ip = '192.168.1.161', port = 54321)
\end{lstlisting}

\subsection{Launching on Hadoop}
To launch H2O nodes and form a cluster on the Hadoop cluster, run:
\begin{lstlisting}[style=R]
hadoop jar h2odriver.jar -nodes 1 -mapperXmx 6g -output hdfsOutputDirName
\end{lstlisting}
\begin{itemize}
\item For each major release of each distribution of Hadoop, there is a driver jar file that the user will need to launch H2O with. Currently available driver jar files in each build of H2O include {\texttt{h2odriver\_cdh5.3.jar, h2odriver\_hdp2.1.jar}}, and {\texttt{mapr3.1.1.jar}}.
\item The above command launches exactly one 6g node of H2O; however, we recommend launching the cluster with 4 times the memory of your data file.
\item {\texttt{mapperXmx}} is the mapper size or the amount of memory allocated to each node.
\item {\texttt{nodes}} is the number of nodes requested to form the cluster.
\item {\texttt{output}} is the name of the directory created each time an H2O cloud is created, so the name must be unique for each launch.
\end{itemize}

\subsection{Launching on EC2}
Launch the EC2 instances using the H2O AMI by running {\texttt{h2o-cluster-launch-instances.py}}.
\begin{lstlisting}[style=R]
python h2o-cluster-launch-instances.py
Using boto version 2.27.0
Launching 2 instances.
Waiting for instance 1 of 2 ...
  .
  .
instance 1 of 2 is up.
Waiting for instance 2 of 2 ...
instance 2 of 2 is up.
\end{lstlisting}

\subsection{Checking Cluster Status}
To check the status and health of the H2O cluster, use {\texttt{h2o.clusterInfo()}}.
\begin{lstlisting}[style=R]
library(h2o)
localH2O = h2o.init(ip = 'localhost', port = 54321)
h2o.clusterInfo(localH2O)
\end{lstlisting}
An easy-to-read summary of information about the cluster is displayed.
\begin{lstlisting}[style=R]
R is connected to H2O cluster:
    H2O cluster uptime:        43 minutes 43 seconds
    H2O cluster version:       2.7.0.1497
    H2O cluster name:          H2O_started_from_R
    H2O cluster total nodes:   1
    H2O cluster total memory:  3.56 GB
    H2O cluster total cores:   8
    H2O cluster allowed cores: 8
    H2O cluster healthy:       TRUE
\end{lstlisting}
\noindent
\subsection{Support}
Users of the H2O package may submit general enquiries and bug reports to h2o.ai support at {\url{[email protected]}}. Alternatively, specific bugs or issues may be filed in the h2o.ai JIRA at {\url{https://0xdata.atlassian.net/secure/Dashboard.jspa}}.

\section{Gradient Boosting Overview}
A gradient boosted model is an ensemble of tree models that can be used for either regression or classification. Both are forward-learning ensemble methods that obtain predictive results through gradually improved estimations. Boosting is a flexible nonlinear regression procedure that helps improve the accuracy of trees. By sequentially applying weak classification algorithms to the incrementally changed data, a series of decision trees is created that produces an ensemble of weak prediction models. While boosting trees increases their accuracy, it also decreases speed and interpretability. The gradient boosting method generalizes tree boosting to minimize these issues.
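Before describing H2O's implementation, the following toy R sketch may help fix ideas. It is purely illustrative (synthetic data, the \texttt{rpart} package as the weak learner, and none of H2O's distributed machinery) and is not how \texttt{h2o.gbm} is implemented: each iteration fits a shallow regression tree to the residuals of the current model and adds its shrunken predictions to the ensemble.
\begin{lstlisting}[style=R]
# Toy illustration of stagewise boosting with shrinkage (not H2O code)
library(rpart)

set.seed(42)
n <- 500
dat <- data.frame(x = runif(n, -3, 3))
dat$y <- sin(dat$x) + rnorm(n, sd = 0.3)

M  <- 100                          # number of boosting iterations (trees)
nu <- 0.1                          # shrinkage (learning rate)
pred <- rep(mean(dat$y), n)        # start from a constant model

for (m in seq_len(M)) {
  dat$res <- dat$y - pred          # residuals = negative gradient of squared error
  tree <- rpart(res ~ x, data = dat, control = rpart.control(maxdepth = 2))
  pred <- pred + nu * predict(tree, dat)   # shrunken additive update
}

mean((dat$y - pred)^2)             # training MSE after boosting
\end{lstlisting}
H2O's \texttt{h2o.gbm} follows the same additive logic, but builds each tree level in a distributed fashion using the binned histograms described in the following sections.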
\subsection{Summary of features}
H2O's GBM functionalities include:
\begin{itemize}
\item supervised learning for regression and classification tasks
\item distributed and parallelized computation on either a single node or a multi-node cluster
\item fast and memory-efficient Java implementations of the underlying algorithms
\item user-friendly web interface to mirror the model building and scoring process running in R
\item grid search for hyperparameter optimization and model selection
\item model export in plain Java code for deployment in production environments
\item additional parameters for model tuning
\end{itemize}

\subsection{Common Model Parameters}
This section describes the functions of the parameters for GBM:
\begin{itemize}
\item {\texttt{x}}: The names or indices of the predictor variables for use in building the gradient boosted model.
\item {\texttt{y}}: The name or index of the response variable (must be an integer or a categorical variable). If the data does not contain a header, {\texttt{y}} is the column index number starting at 0 and increasing from left to right.
\item {\texttt{key}}: The unique hex key assigned to the generated model. If left blank, a key is automatically generated.
\item {\texttt{distribution}}: Enter {\texttt{AUTO, bernoulli, multinomial, gaussian, poisson, gamma}} or {\texttt{tweedie}} to select the distribution function. The default is {\texttt{AUTO}}.
\item {\texttt{ntrees}}: A non-negative integer that defines the number of trees. The default is 50.
\item {\texttt{max\_depth}}: The user-defined tree depth. The default is 5.
\item {\texttt{min\_rows}}: The minimum number of rows to assign to the terminal nodes. The default is 10.
\item {\texttt{learn\_rate}}: A value between 0.0 and 1.0 that defines the learning rate. The default is 0.1.
\item {\texttt{nbins}}: The number of bins to use for building the histogram. The default is 20.
\item {\texttt{validation\_frame}}: An {\texttt{H2OFrame}} object that defines the validation dataset used to construct the confusion matrix. If left blank and {\texttt{nfolds = 0}}, the default is the training dataset.
\item {\texttt{balance\_classes}}: Balance training data class counts via over- or undersampling for imbalanced data. The default is {\texttt{FALSE}}.
\item {\texttt{max\_after\_balance\_size}}: Maximum relative size of the training data after balancing class counts; can be less than 1.0. The default is 5.
\item {\texttt{seed}}: Seed for random numbers that affects sampling.
\item {\texttt{data}}: An {\texttt{H2OFrame}} object that contains the variables in the model.
\item {\texttt{nfolds}}: Number of folds for cross-validation. If {\texttt{nfolds >= 2}}, then {\texttt{validation}} must be blank.
\end{itemize}

\subsection{Theory and framework}
Gradient boosting is a machine learning technique that combines two powerful tools: gradient-based optimization and boosting. Gradient-based optimization uses gradient computations to minimize a model's loss function with respect to the training data. Boosting additively collects an ensemble of weak models in order to ultimately create a strong learning system for predictive tasks. Here we consider gradient boosting in the example of K-class classification, although the model for regression follows a similar logic. The following analysis follows from the discussion in Hastie et al.\ (2010) at {\url{http://statweb.stanford.edu/~tibs/ElemStatLearn/}}.
\newline \\
{\bf{\footnotesize{GBM for classification}}} \\
\line(1,0){500} \\
1.
Initialize $f_{k0} = 0, k = 1,2,\dots,K$ \\
2. For $m=1$ to $M$ \\
\hspace*{1cm} a. Set $p_k(x) = \frac{e^{f_k(x)}}{\sum_{l=1}^K e^{f_l(x)}}$ for all $k = 1,2,\dots, K$ \\
\hspace*{1cm} b. For $k=1$ to $K$ \\
\hspace*{2cm} i. Compute $r_{ikm} = y_{ik} - p_k(x_i), i = 1,2,\dots,N$ \\
\hspace*{2cm} ii. Fit a regression tree to the targets $r_{ikm}, i = 1,2,\dots,N$, \\
\hspace*{2.5cm} giving terminal regions $R_{jkm}, j = 1,2,\dots,J_m$ \\
\hspace*{2cm} iii. Compute
$$\gamma_{jkm} = \frac{K-1}{K} \frac{\sum_{x_i \in R_{jkm}} (r_{ikm})}{\sum_{x_i \in R_{jkm}} |r_{ikm}| (1 - |r_{ikm}|)} , \quad j=1,2,\dots,J_m$$
\hspace*{2cm} iv. Update $f_{km}(x) = f_{k,m-1}(x) + \sum_{j=1}^{J_m} \gamma_{jkm} I(x \in R_{jkm})$ \\
3. Output $\hat{f}_k(x) = f_{kM}(x), k=1,2,\dots,K$ \\
\line(1,0){500}

In the above algorithm, for $K$-class classification, H2O builds $K$ regression trees to represent one classification tree. The index $m$ tracks the number of weak learners added to the current ensemble. Within this outer loop, there is an inner loop across each of the $K$ classes. In this inner loop, the first step is to compute the residuals, $r_{ikm}$, which are actually the gradient values, for each of the $N$ bins in the CART model, and then to fit a regression tree to these gradient computations. This fitting process is distributed and parallelized, and details on this framework can be found on the h2o.ai blog at {\url{http://h2o.ai/blog/2013/10/building-distributed-gbm-h2o/}}.
\\ \\
The final procedure in the inner loop is to add the fitted regression tree to the current model, which improves the accuracy of the model during the inherent gradient descent step. After $M$ iterations, the final ``boosted'' model can be tested out on new data.

\subsection{Loss Function}
The AdaBoost method builds an additive logistic regression model:
$$F(x) = \log\frac{\Pr(Y = 1|x)}{\Pr(Y = -1|x)} = \sum_{m=1}^{M} \alpha_m f_m (x) $$
by stagewise fitting using the loss function:
$$L(y, F(x)) = \exp(-y F (x)) $$

\subsection{Distributed Trees}
H2O's implementation of GBM uses distributed trees. H2O overlays trees on the data by assigning a tree node to each row. The nodes are numbered and the number of each node is stored in a temporary vector as ``Node\_ID'' for each row. H2O makes a pass over all the rows using the most efficient method, which may not necessarily be numerical order.

A local histogram using only local data is created in parallel for each row on each node. The histograms are then assembled and a split column is selected to make the decision. The rows are re-assigned to nodes and the entire process is repeated. For example, for an initial tree, all rows start on node 0. A MapReduce (MR) task computes the statistics and uses them to make an algorithmically-based decision, such as lowest mean squared error (MSE). In the next layer in the tree (and the next MR task), a decision is made for each row: if X $<$ 1.5, go right in the tree; otherwise, go left. H2O computes the stats for each new leaf in the tree, and each pass across all the rows builds the entire layer.

For multinomial or binomial, the split is determined by the number of columns. The number of columns is evaluated to find the best split out of the possible combinations. For example, for a hundred-column dataset that uses twenty bins, there are 2000 ($20\times100$) possible split points. Each layer represents another MR task; therefore, a tree that is five layers deep requires five passes. Each tree level is fully data-parallelized.
Each pass is over one layer in the tree, and builds a per-node histogram in the MR calls. As each pass analyzes a tree level, H2O then decides how to build the next level. H2O reassigns rows to new levels in another pass by merging the two passes and builds a histogram for each node. Each per-level histogram is done in parallel.

Scoring and building are done in one pass. Each row is tested against the decision from the previous pass, assigned to a new leaf, and a histogram is built on that leaf. To score, H2O traverses the tree and obtains the results. The tree is compressed to a smaller object that can still be traversed, scored, and printed.

Although the GBM algorithm builds each tree one level at a time, H2O is able to quickly run the entire level in a parallel and distributed fashion. The processing requirements for more data can be offset by more CPUs or nodes. Since H2O does the per-level compute in parallel, which requires sending histograms over the network, the amount of data can become very large for a very deep tree. High max-bin sizes slow down generation of deep trees, but this is not a problem for GBM tree depths as GBM typically uses very shallow trees. Bin limits over 20 have been shown to provide very little improvement in the quality of the GBM model.

For the MSE reports, the zero-tree report uses the class distribution as the prediction. The one-tree report actually uses the first tree, so the first two reports are not equal. The reported MSE is the inclusive effect of all prior trees, and generally decreases monotonically on the training dataset. However, the curve will generally bottom out and then begin to slowly rise on the validation set as overfitting sets in. The computing cost is based on the number of leaves, but depending on the dataset, the number of leaves can be difficult to predict. The maximum number of leaves is $2^d$, where $d$ represents the tree depth.

\subsection{Treatment of factors}
When the specified GBM model includes factors, those factors are analyzed by assigning an integer to each distinct factor level, and then binning the ordered integers according to the user-specified number of bins ($N$ bins). Split points are determined by considering the end points of each bin and the one-versus-many split for each bin. For example, if the factor is split into five bins, H2O orders the bins by bin number, then considers the split between the first and second bin, then the second and third, then the third and fourth, and the fourth and fifth. Additionally, the split that results from splitting the first bin from the other four and all analogous splits for the other four bins are considered. To specify a model that considers all factors individually, set the value for $N$ bins equal to the number of factor levels. This can be done for over 1024 levels (the maximum number of levels that can be handled in R), though this increases the time to fully generate a model. Increasing the number of bins is less useful for covering factor columns, but is more important for the one-versus-many approach. The ``split-by-a-numerical-value'' approach is basically a random split of the factors, so the number of bins is less important. Top-level tree splits (shallow splits) use the maximum allotment as their bin size, so the top split uses 1024 bins, the next level in the tree uses 512 bins, and so on.
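As a small illustration of the factor binning just described (an illustrative sketch in plain R, not H2O code), the levels of a factor can be integer-coded and the ordered codes grouped into $N$ bins; candidate splits are then the boundaries between adjacent bins plus each bin versus the rest:
\begin{lstlisting}[style=R]
# Illustrative sketch of binning factor levels (not H2O code)
set.seed(1)
f <- factor(sample(letters[1:12], 200, replace = TRUE))  # a factor with 12 levels
codes <- as.integer(f)                                   # integer code assigned to each level
nbins <- 5
bin <- cut(codes, breaks = nbins, labels = FALSE)        # group the ordered codes into 5 bins
table(f, bin)                                            # which levels land in which bin
# Candidate splits: the 4 boundaries between adjacent bins,
# plus each bin split off against the remaining four (one-versus-many)
\end{lstlisting}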
Factors for binary classification have a third (and optimal) choice: to split all bins (and factors within those bins) that have a mean of less than 0.5 one way, and the rest of the bins and factors the other way, creating an arbitrary set of factors with a known optimal split. This is represented as a bitset in the embedded Java model. Therefore, factor columns with less than the limit ($\mathtt{\sim}$ 1024) always get an optimal split for categorical problems. For categorical problems with $N$ possible values, the number of split candidates is given by $2^{N-1}-1$. For binary classification and regression problems, the number of split candidates is reduced to $N-1$ by sorting the categorical feature values by label average.
\subsection{Key parameters}
In the above example, an important user-specified value is $N$, which represents the number of bins that the data are partitioned into before the tree's best split point is determined. Split points are determined by considering the end points of each bin, and the one-versus-many split for each bin. To model all factors individually, you can specify high $N$ values, but this will slow down the modeling process. For shallow trees, we recommend keeping the total count of bins across all splits at 1024 (so that a top-level split uses 1024 bins, but a 2nd-level split uses 512 bins, and so forth). This value is then maxed with the input bin count. \\
\\
Another important parameter to specify is the size ($J$) of the trees, which must be controlled in order to avoid overfitting. Increasing $J$ enables larger variable interaction effects, so knowing about these effects is helpful in setting the value for $J$. Large values of $J$ have also been found to have excessive computational cost, since Cost = \#columns $\cdot N \cdot K \cdot 2^{J}$. However, lower values generally give the best performance; models with $4 \leq J \leq 8$ and a larger number of trees $M$ reflect this trade-off. Later, we will discuss how to use grid search models to tune these parameters in the model selection process. \\
\\
You can also specify the shrinkage constant, which controls the learning rate of the model and is actually a form of regularization. Shrinkage modifies the algorithm's update of $f_{km}(x)$ to use the scaled addition $\nu \cdot \sum_{j=1}^{J_m} \gamma_{jkm} I(x \in R_{jkm})$, where the constant $\nu$ is between 0 and 1. Smaller values of $\nu$ lead to greater rates of training error, assuming that $M$ is fixed, but in general $\nu$ and $M$ are inversely related when the error is fixed. However, despite the greater rate of training error with small values of $\nu$, very small values ($\nu < 0.1$) typically lead to better generalization and performance on test data.
\section{Use case: Classification with Airline data}
Download the Airline dataset from: {\url{https://github.com/h2oai/h2o/blob/master/smalldata/airlines/allyears2k_headers.zip}} and save the .csv file to your working directory. Before running the Airline demo, review how to load data with H2O.
\subsection{Loading data}
Loading a dataset in R for use with H2O is slightly different from the usual methodology, as we must convert our datasets into \texttt{H2OParsedData} objects. For this example, download the toy weather dataset from {\url{https://raw.githubusercontent.com/h2oai/h2o/master/smalldata/weather.csv}}. Save the data to your current working directory (do this for any future dataset downloads), and then run the following command in your R console.
\begin{lstlisting}[style=R] weather.hex = h2o.uploadFile(h2o_server, path = "weather.csv", header = TRUE, sep = ",", key = "weather.hex") \end{lstlisting} \bigskip \noindent To see a brief summary of the data, run the following command. \begin{lstlisting}[style=R] summary(weather.hex) \end{lstlisting} \subsection{Performing a trial run} Returning to the Airline dataset, load the dataset with H2O and select the variables to use to predict a chosen response. For example, model whether flights are delayed based on the departure's scheduled day of the week and day of the month. \begin{lstlisting}[style=R] # Load the data and prepare for modeling air_train.hex = h2o.uploadFile(h2o_server, path = "AirlinesTrain.csv", header = TRUE, sep = ",", key = "airline_train.hex") air_test.hex = h2o.uploadFile(h2o_server, path = "AirlinesTest.csv", header = TRUE, sep = ",", key = "airline_test.hex") myX <- c("fDayofMonth", "fDayOfWeek") \end{lstlisting} Now, train the GBM model: \begin{lstlisting}[style=R] air.model <- h2o.gbm(y = "IsDepDelayed", x = myX, distribution="multinomial", data = air_train.hex, n.trees=100, interaction.depth=4, shrinkage=0.1, importance=TRUE) \end{lstlisting} \noindent Since it is meant just as a trial run, the model contains only 100 trees. In this trial run, no validation set was specified, so by default, the model evaluates the entire training set. To use n-fold validation, specify, for example, \texttt{nfolds=5}. \subsection{Extracting and handling the results} Now, extract the parameters of the model, examine the scoring process, and make predictions on the new data. \begin{lstlisting}[style=R] # View the specified parameters of your GBM model air.model@model$params # Examine the performance of the trained model air.model \end{lstlisting} \noindent The second command ({\texttt{air.model}}) returns the trained model's training and validation errors. \\ \\ After generating a satisfactory model, use the \texttt{h2o.predict()} command to compute and store predictions on the new data, which can then be used for further tasks in the interactive modeling process. \begin{lstlisting}[style=R] # Perform classification on the held out data prediction = h2o.predict(air.model, newdata=air_test.hex) # Copy predictions from H2O to R pred = as.data.frame(prediction) head(pred) \end{lstlisting} \subsection{Web interface} H2O R users have access to an intuitive web interface for H2O, Flow, to mirror the model building process in R. After loading data or training a model in R, point your browser to your IP address and port number (e.g., localhost:12345) to launch the web interface. From here, you can click on \textsc{Admin} $>$ \textsc{Jobs} to view specific details about your model. You can also click on \textsc{Data} $>$ \textsc{List All Frames} to view all current H2O frames. \subsection{Variable importances} To enable the variable importances option, use the additional argument \texttt{importance=TRUE}. This displays the absolute and relative predictive strength of each feature in the prediction task. From R, access these strengths using the command \texttt{air.model@model\$varimp}. You can also view a visualization of the variable importances on the web interface. 
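As a small follow-up sketch (not part of the original demo; the exact layout of the \texttt{varimp} slot depends on the h2o package version, so the call below is an assumption about its structure), the importances can be copied into a plain R object and saved for later comparison between models:
\begin{lstlisting}[style=R]
# Hypothetical follow-up: keep a copy of the variable importances.
imp <- air.model@model$varimp
print(imp)
write.csv(imp, file = "airline_gbm_varimp.csv", row.names = TRUE)
\end{lstlisting}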
\subsection{Supported Output} The following algorithm outputs are supported: \begin{itemize} \item {\bf{Regression}}: Mean Squared Error (MSE), with an option to output variable importances or a Java POJO model \item {\bf{Binary Classification}}: Confusion Matrix or Area Under Curve (AUC), with an option to output variable importances or a Java POJO model \item {\bf{Classification}}: Confusion Matrix (with an option to output variable importances or a Java POJO model) \end{itemize} \subsection{Java model} To access Java (POJO) code to use to build the current model in Java, click the \textsc{Preview POJO} button at the bottom of the model results. If the model is small enough, the code for the model displays within the GUI; larger models can be inspected after downloading the model. \\ \\ To download the model: \begin{enumerate} \item Open the terminal window. \item Create a directory where the model will be saved. \item Set the new directory as the working directory. \item Follow the curl and java compile commands displayed in the instructions at the top of the Java model. \end{enumerate} \subsection{Grid search for model comparison} To support grid search capabilities for model tuning, specify sets of values for parameter arguments to tweak certain parameters and observe changes in model behavior. The following is an example of a grid search: \begin{lstlisting}[style=R] air.grid <- h2o.gbm(y = "IsDepDelayed", x = myX, distribution="multinomial", data = air_train.hex, n.trees=c(5,10,15), interaction.depth=c(2,3,4), shrinkage=c(0.1,0.2)) \end{lstlisting} \noindent This example specifies three different tree numbers, three different tree sizes, and two different shrinkage values. This grid search model effectively trains eighteen different models over the possible combinations of these parameters. Of course, sets of other parameters can be specified for a larger space of models. This allows for more subtle insights in the model tuning and selection process, especially during inspection and comparison of the trained models after the grid search process is complete. To decide how and when to choose different parameter configurations in a grid search, refer to the beginning section for parameter descriptions and suggested values. \begin{lstlisting}[style=R] # print out all prediction errors and run times of the models air.grid air.grid@model # print out a *short* summary of each of the models (indexed by parameter) air.grid@sumtable # print out *full* summary of each of the models all_params = lapply(air.grid@model, function(x) { x@model$params }) all_params # access a particular parameter across all models shrinkages = lapply(air.grid@model, function(x) { x@model$params$shrinkage }) shrinkages \end{lstlisting} \section{Conclusion} Gradient boosted machines sequentially fit new models to provide a more accurate estimate of a response variable in supervised learning tasks such as regression and classification. Though notorious for being difficult to distribute and parallelize, H2O's GBM offers both features in its framework, along with a straightforward environment for model tuning and selection. \newpage \section{References} Click, Cliff and SriSatish Ambati. {\textbf{``Cliff Click Explains GBM at Netflix October 10 2013``}}\\ \url{http://www.slideshare.net/0xdata/cliff-click-explains-gbm} SlideShare (2013). \\ \\ Dietterich, Thomas G, and Eun Bae Kong. 
{\textbf{``Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms.``}} \\ \url{http://www.iiia.csic.es/~vtorra/tr-bias.pdf} ML-95 255 (1995). \\ \\ Elith, Jane, John R Leathwick, and Trevor Hastie. {\textbf{``A Working Guide to Boosted Regression Trees.``}} \\\url{http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2656.2008.01390.x/abstract} Journal of Animal Ecology 77.4 (2008): 802-813 \\ \\ Friedman, Jerome H. {\textbf{``Greedy Function Approximation: A Gradient Boosting Machine.``}}\\ \url{http://statweb.stanford.edu/~jhf/ftp/trebst.pdf} Annals of Statistics (2001): 1189-1232. \\ \\ Friedman, Jerome, Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. {\textbf{``Discussion of Boosting Papers.``}}\\ \url{http://web.stanford.edu/~hastie/Papers/boost_discussion.pdf} Ann. Statist 32 (2004): 102-107 \\ \\ Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. {\textbf{``Additive Logistic Regression: A Statistical View of Boosting (With Discussion and a Rejoinder by the Authors).``}}\\ \url{http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1016218223} The Annals of Statistics 28.2 (2000): 337-407 \\ \\ Hastie, Trevor, Robert Tibshirani, and J Jerome H Friedman. {\textbf{``The Elements of Statistical Learning``}}\\ \url{http://statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf}. Vol.1. N.p., page 339: Springer New York, 2001. \\ \\ \end{document}
{ "alphanum_fraction": 0.7458594331, "avg_line_length": 60.9644012945, "ext": "tex", "hexsha": "4cf2db24f32a5bb15e23587939a93e512a53ccdd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "77b27109c84c4739f9f1b7a3078f8992beefc813", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "kyoren/https-github.com-h2oai-h2o-3", "max_forks_repo_path": "h2o-docs/src/booklets/v2_2015/source/gbmRVignette.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "77b27109c84c4739f9f1b7a3078f8992beefc813", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "kyoren/https-github.com-h2oai-h2o-3", "max_issues_repo_path": "h2o-docs/src/booklets/v2_2015/source/gbmRVignette.tex", "max_line_length": 690, "max_stars_count": 1, "max_stars_repo_head_hexsha": "77b27109c84c4739f9f1b7a3078f8992beefc813", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "kyoren/https-github.com-h2oai-h2o-3", "max_stars_repo_path": "h2o-docs/src/booklets/v2_2015/source/gbmRVignette.tex", "max_stars_repo_stars_event_max_datetime": "2016-09-30T05:58:18.000Z", "max_stars_repo_stars_event_min_datetime": "2016-09-30T05:58:18.000Z", "num_tokens": 9782, "size": 37676 }
%!TEX root=report.tex
\subsection{Kernel PCA}
Before delving into kernel methods, the standard PCA method will briefly be recapped. PCA as introduced in this report was done via the SVD ($X = U\Sigma V^T$). The columns of $U$ are the eigenvectors of $XX^T$. The projected space is then $Z = XV = U\Sigma$. Looking at the dependencies of $U$, it is apparent that the principal component scores will end up being linear combinations of the basis of the original input space $X$. In most cases this suffices, but if there are no linear relationships in the $X$ space, standard PCA is not appropriate.
\subsubsection{Motivating example}
\begin{figure}[H]
\center
\includegraphics[width=\textwidth]{figures/kernel-pca-example}
\caption{Motivating kernel PCA example. Image courtesy of scikit-learn. License: BSD 3 clause. Authors: Mathieu Blondel and Andreas Mueller.}
\label{fig:kernel-pca-example}
\end{figure}
As is seen in Figure \ref{fig:kernel-pca-example}, classes that were not linearly separable in the original space can become linearly separable in the kernel space. This in turn makes clustering much easier and hopefully improves results.
\subsubsection{The Kernel Trick}
In kernel PCA, instead of working on $X X^T$, a nonlinear mapping $\Phi: X\rightarrow Y$ is used, such that the SVD is carried out on $\Phi(X)\Phi(X)^T = K(X, X^T)$. Specifically, one defines the inner product as a function of $(x_i, x_j)$, so that $\Phi(X)\Phi(X)^T$ is calculated without the need for the nonlinear mapping function $\Phi$. This inner product function is the kernel. For something to be a valid kernel, all the kernel needs to satisfy is that it is an inner product in some vector space.
For the above to become more clear, let us give an example of a polynomial kernel in a 2-dimensional space. In the following, $X$ is a vector and $X'$ is simply some other vector in the same space as $X$:
\begin{equation}
\begin{split}
K(X,X') &= (1+X^T X')^2 = (1+x_1 x'_1+x_2 x'_2)^2 \\
&= 1+x_1^2 {x'}_1^2 + x_2^2 {x'}_2^2 + 2 x_1 x'_1 + 2 x_2 x'_2 + 2 x_1 x'_1 x_2 x'_2
\end{split}
\end{equation}
For the above to actually be a kernel there would have to be some transformed space in which the above is an inner product. From the coefficients above it can be deduced that the basis of this transformed space must be given as
\begin{equation}
(1,x_1^2,x_2^2,\sqrt{2} x_1, \sqrt{2} x_2 , \sqrt{2} x_1 x_2)
\end{equation}
Now consider if one used a power of 100 instead of 2. Calculating the kernel in the non-transformed space is easy; $(1+X^T X')$ is just a number and raising it to the power of 100 can be done quickly. On the other hand, if one were to explicitly calculate the kernel as the inner product in the transformed space, a huge vector would have to be computed, transposed and subjected to an inner product in this space. This is clearly not very efficient. The shortcut of defining $K$ instead of $\Phi$ is called the Kernel Trick.
\subsubsection{The Radial Basis Function (RBF) kernel}
The RBF kernel is commonly used when the number of samples is much larger than the number of dimensions in the original space. The RBF kernel is defined as
\begin{equation}
K(x,x')=\mathrm{exp}(-\gamma ||x-x'||^p_2)
\end{equation}
The above can easily be calculated. The following shows that the above is indeed an inner product. It is not a complete proof, as $\gamma=1$, $p=2$ and $x$ is a scalar, not a vector.
\begin{equation}
\begin{split}
K(x,x')&=\mathrm{exp}(- ||x-x'||^2_2)=\mathrm{exp}(- (x-x')^2) \\
&= \mathrm{exp}(-x^2) \mathrm{exp}(-{x'}^2) \mathrm{exp}(2 x x')
\end{split}
\end{equation}
Since $\mathrm{exp}(2 x x')$ can be Taylor expanded to $\sum_{k=0}^\infty \frac{2^k(x)^k (x')^k}{k!}$, it follows that:
\begin{equation}
K(x,x') = \sum_{k=0}^\infty \left(\sqrt{\frac{2^k}{k!}} (x)^k \mathrm{exp}(-x^2)\right)\left(\sqrt{\frac{2^k}{k!}} (x')^k \mathrm{exp}(-{x'}^2)\right)
\end{equation}
From the above it is seen that, for any $k$, the term in the right parenthesis is exactly equal to the term in the left parenthesis if $x'$ is substituted with $x$. This means that, in an expanded vector space, this corresponds to the inner product of the $\ell^2$ Hilbert space. Note also that since the sum is infinite, the transformed vector space is infinite-dimensional and is thus impossible to calculate explicitly without the kernel trick.
\subsubsection{Mercer's Condition}
Proving that something is a kernel is actually easier than shown above. It is done using Mercer's Condition, though it does not say anything about the size of the transformed space when $\Phi$ is applied. The theorem considers all real-valued square-integrable functions $g$ (${g \in \{\mathbb{R} \rightarrow \mathbb{R}\} \cap L^2(\mathbb{R})}$), that is, all functions for which the following holds:
\begin{equation}
\int_{-\infty}^{\infty} g(x)^2 \mathrm{d}x < \infty
\end{equation}
If for all these $g$ functions the following condition holds:
\begin{equation}
\int \int K(x, y) g(x) g(y) \mathrm{d}x \mathrm{d}y \ge 0
\end{equation}
then $K$ is a valid kernel.
\subsubsection{Final notes on kernel PCA}
In kernel PCA the columns of $U$ are the eigenvectors of $K(X,X^T)$ and the columns of $V$ are the eigenvectors of $K(X^T,X)$. The sizes of $X X^T$ and $X^T X$ are $n \times n$ and $p \times p$ respectively, where $n$ is the number of samples and $p$ is the number of dimensions in the transformed space. Thus, when dealing with kernel PCA, the calculation of $V$ is omitted and the calculation of $U$ is expensive, but possible. In the case of the GRACE data the size of $X$ is ($64800 \times 341$). Memory-wise this results in a $U$ matrix of size ($64800 \times 64800$); with float32 numbers, that is approximately $15.64\text{ GB}$. Furthermore, kernel PCA also involves a large amount of simple computation, so for practical purposes one should use a computer cluster (e.g.\ the DTU HPC cluster).
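To make the procedure concrete, the following small sketch (not taken from the GRACE analysis; the toy data, the value of $\gamma$ and the use of R are arbitrary choices for illustration) performs an RBF kernel PCA directly: build the $n \times n$ kernel matrix, centre it in the transformed space, and take the leading eigenvectors. For the full $64800 \times 64800$ problem the steps are the same, only distributed.
\begin{verbatim}
# Illustrative sketch of RBF kernel PCA on a small toy matrix (base R).
set.seed(1)
X     <- matrix(rnorm(200 * 5), nrow = 200)  # 200 samples, 5 dimensions
gamma <- 0.1
K     <- exp(-gamma * as.matrix(dist(X))^2)  # n x n kernel matrix K(X, X^T)
n     <- nrow(K)
J     <- diag(n) - matrix(1 / n, n, n)       # centering matrix
Kc    <- J %*% K %*% J                       # centre in the transformed space
eig   <- eigen(Kc, symmetric = TRUE)
# scores: eigenvectors scaled by the square roots of the eigenvalues
Z     <- eig$vectors[, 1:2] %*% diag(sqrt(pmax(eig$values[1:2], 0)))
\end{verbatim}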
{ "alphanum_fraction": 0.7321733197, "avg_line_length": 68.1627906977, "ext": "tex", "hexsha": "d07a3f9ddac1aee21acc48a9f573e16b00a7ea03", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bf472d30a2fac76145d3f68e819c92da4a1970ba", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AndreasMadsen/grace", "max_forks_repo_path": "Rapport/theory-kernelpca.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bf472d30a2fac76145d3f68e819c92da4a1970ba", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AndreasMadsen/grace", "max_issues_repo_path": "Rapport/theory-kernelpca.tex", "max_line_length": 439, "max_stars_count": 1, "max_stars_repo_head_hexsha": "bf472d30a2fac76145d3f68e819c92da4a1970ba", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AndreasMadsen/grace", "max_stars_repo_path": "Rapport/theory-kernelpca.tex", "max_stars_repo_stars_event_max_datetime": "2016-05-17T22:52:19.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-17T22:52:19.000Z", "num_tokens": 1689, "size": 5862 }
\section{Related Work} \label{sec:related-work} \subsection{Coq and Other Tools} \label{sec:comparison-with-coq} Coq provides complete support for theorem-proving and creating trusted code. Often one writes code in Coq's functional language% \iflong (values whose types are \texttt{Set}s) \fi % \iflong , states and proves theorems that the code behaves correctly% \iflong (where the theorems are Coq values whose types are \texttt{Prop}s)% \fi % \iflong , and has Coq extract correct code. In such cases RZ is complementary to Coq; it can \iflong clarify the constructive content of mathematical structures and hence \fi % \iflong suggest the appropriate division between code and theorems. We hope RZ will soon be able to produce output in Coq's input syntax. \iflong In general, RZ is a smaller and more lightweight system and thus more flexible where it applies. It is not always practical or necessary to do theorem proving in order to provide an implementation; interfaces generated by RZ can be implemented in any manner. And, RZ provides a way to talk with programmers about constructive mathematics without bringing in full theorem proving. \fi % \iflong Komagata and Schmidt~\cite{komagata+:tr95} describe a system that uses a realizability in a way similar to RZ. Like Coq, it extracts code from proofs. % An interesting implementation difference is that the algorithm they use (attributed to John Hatcliff) does thinning as it goes along, rather than making a separate pass as RZ does. % \iflong (For example, the translation of the conjunction-introduction rule has four cases, depending on whether the left and/or right propositions being proved are almost negative, in which case the trivial contribution can be immediately discarded.) \fi % Unlike RZ, their system needs full formal proofs as input; it checks the proofs, and generates executable code. RZ also handles a much richer input language (including function, subset, quotient, and dependent types; quantification over theories; and parameterized theories) that goes well beyond simple predicate logic over integers and lists. The idea of annotating ML signatures with assertions is not new (e.g., \cite{kahrs+:tcs97}). \subsection{Other Models of Computability} \label{sec:models-of-computability} Many formulations of computable mathematics are based on realizability models~\cite{Bauer:00}, even though they were not initially developed, (nor are they usually presented) within the framework of realizability: Recursive Mathematics~\cite{ershov98:_handb_recur_mathem} is based on the original realizability by Turing machines~\cite{KleeneSC:intint}; Type Two Effectivity~\cite{Wei00} on function realizability~\cite{KleeneSC:fouim} and relative function realizability~\cite{BirkedalL:devttc}, while topological and domain representations~\cite{Bla97a,Bauer:Birkedal:Scott:98} are based on realizability over the graph model $\mathcal{P}\omega$~\cite{ScottD:dattl}. A common feature is that they use models of computation which are well suited for the theoretical studies of computability. \iflong Other approaches are based on simple programming languages augmented with datatypes for real numbers~\cite{escardo97:_pcf,marcial-romero04:_seman} and topological algebras~\cite{TZ98}, or machine models augmented with (suitably chosen subsets of) real numbers such as Real RAM~\cite{borodin75}, the Blum-Shub-Smale model~\cite{blum98:_compl_real_comput}, and the Exact Geometric Computation model~\cite{yap06:_theor_real_comput_egc}. 
The motivation behind these ranges from purely theoretical concerns about computability and complexity to practical issues in the design of programming languages and algorithms in computational geometry. RZ attempts to improve practicality by using an actual real-world programming language, and by providing an input language which is rich enough to allow descriptions of involved mathematical structures that go well beyond the real numbers. Finally, we hope that RZ and, hopefully, its forthcoming applications, give plenty of evidence for the \emph{practical} value of Constructive Mathematics~\cite{Bishop:Bridges:85}. \else % \iflong Approaches based on simple programming languages with datatypes for real numbers~\cite{escardo97:_pcf,marcial-romero04:_seman} and topological algebras~\cite{TZ98}, and machines augmented with (suitably chosen subsets of) real numbers~\cite{borodin75,blum98:_compl_real_comput,yap06:_theor_real_comput_egc} are motivated by issues ranging from theoretical concerns about computability/complexity to practical questions in computational geometry. RZ attempts to improve practicality by using a real-world language, and by providing an input language rich enough for descriptions of mathematical structures going well beyond the real numbers. Finally, we hope that RZ and, hopefully, its forthcoming applications, give plenty of evidence for the \emph{practical} value of Constructive Mathematics~\cite{Bishop:Bridges:85}. \fi %%% Local Variables: %%% mode: latex %%% TeX-master: "cie" %%% End:
{ "alphanum_fraction": 0.8079652997, "avg_line_length": 41.2357723577, "ext": "tex", "hexsha": "c135e39f205a32bc7fac9eab4572040255361383", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "andrejbauer/rz", "max_forks_repo_path": "private/cie/comparison.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "andrejbauer/rz", "max_issues_repo_path": "private/cie/comparison.tex", "max_line_length": 92, "max_stars_count": 7, "max_stars_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "andrejbauer/rz", "max_stars_repo_path": "private/cie/comparison.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-07T21:04:22.000Z", "max_stars_repo_stars_event_min_datetime": "2019-08-28T10:12:29.000Z", "num_tokens": 1266, "size": 5072 }
\documentclass[a4paper,11pt,oneside,openany,report]{jsbook} \usepackage[dvipdfmx]{hyperref} \usepackage{pxjahyper} \usepackage[dvipdfmx]{graphicx} \usepackage{listings} \title{TeX Escape} \begin{document} \maketitle \chapter{System special characters}\label{ch:Systemspecialcharacters} \section{Backslash}\label{sec:Systemspecialcharacters/Backslash} foo{\textbackslash}bar \section{Sharp}\label{sec:Systemspecialcharacters/Sharp} foo{\#}bar \section{Dollar}\label{sec:Systemspecialcharacters/Dollar} foo{\$}bar \section{Percent}\label{sec:Systemspecialcharacters/Percent} foo{\%}bar \section{Ampersand}\label{sec:Systemspecialcharacters/Ampersand} foo{\&}bar \section{Under bar}\label{sec:Systemspecialcharacters/Underbar} foo{\_}bar \section{Braces}\label{sec:Systemspecialcharacters/Braces} foo\{bar foo\}bar bar\{foo\}bar \chapter{Special characters}\label{ch:Specialcharacters} \section{Bar}\label{sec:Specialcharacters/Bar} foo{\textbar}bar \section{Asterisk}\label{sec:Specialcharacters/Asterisk} foo*bar \section{Less}\label{sec:Specialcharacters/Less} foo{\textless}bar \section{Greater}\label{sec:Specialcharacters/Greater} foo{\textgreater}bar \section{Hat}\label{sec:Specialcharacters/Hat} foo{\textasciitilde}bar \section{Tilde}\label{sec:Specialcharacters/Tilde} foo{\textasciicircum}bar \end{document}
{ "alphanum_fraction": 0.8057014254, "avg_line_length": 23.8035714286, "ext": "tex", "hexsha": "ef324ec8e56d91ffb62f9eee12753c0b607c7321", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b5809e171acf41a804b2fbdb4ddb21185bd93e8b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Durun/lateko", "max_forks_repo_path": "src/test/resources/testDocuments/escape/texEscape.kts.tex", "max_issues_count": 20, "max_issues_repo_head_hexsha": "b5809e171acf41a804b2fbdb4ddb21185bd93e8b", "max_issues_repo_issues_event_max_datetime": "2020-04-10T13:57:02.000Z", "max_issues_repo_issues_event_min_datetime": "2020-03-04T13:10:08.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Durun/lateko", "max_issues_repo_path": "src/test/resources/testDocuments/escape/texEscape.kts.tex", "max_line_length": 69, "max_stars_count": 1, "max_stars_repo_head_hexsha": "b5809e171acf41a804b2fbdb4ddb21185bd93e8b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Durun/lateko", "max_stars_repo_path": "src/test/resources/testDocuments/escape/texEscape.kts.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-16T10:50:55.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-16T10:50:55.000Z", "num_tokens": 405, "size": 1333 }
%%% Template originaly created by Karol Kozioł ([email protected]) and modified for ShareLaTeX use \documentclass[a4paper,12pt]{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{graphicx} \usepackage{xcolor} \renewcommand\familydefault{\sfdefault} \usepackage{tgheros} \usepackage{url} \usepackage{enumerate} \usepackage{amsmath,amssymb,amsthm,textcomp} \usepackage{enumerate} \usepackage{multicol} \usepackage{tikz} \usepackage{geometry} \geometry{left=25mm,right=25mm,% bindingoffset=0mm, top=20mm,bottom=20mm} \linespread{1.3} \newcommand{\linia}{\rule{\linewidth}{0.5pt}} % custom theorems if needed \newtheoremstyle{mytheor} {1ex}{1ex}{\normalfont}{0pt}{\scshape}{.}{1ex} {{\thmname{#1 }}{\thmnumber{#2}}{\thmnote{ (#3)}}} \theoremstyle{mytheor} \newtheorem{defi}{Definition} % my own titles \makeatletter \renewcommand{\maketitle}{ \begin{center} \vspace{2ex} {\huge \textsc{\@title}} \vspace{1ex} \\ \linia\\ \@author \hfill \@date \vspace{4ex} \end{center} } \makeatother %%% % custom footers and headers \usepackage{fancyhdr} \pagestyle{fancy} \lhead{} \chead{} \rhead{} \lfoot{Projectlet puzzle} \cfoot{} \rfoot{Page \thepage} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0pt} % % code listing settings \usepackage{listings} \lstset{ language=Python, basicstyle=\ttfamily\small, aboveskip={1.0\baselineskip}, belowskip={1.0\baselineskip}, columns=fixed, extendedchars=true, breaklines=true, tabsize=4, prebreak=\raisebox{0ex}[0ex][0ex]{\ensuremath{\hookleftarrow}}, frame=lines, showtabs=false, showspaces=false, showstringspaces=false, keywordstyle=\color[rgb]{0.627,0.126,0.941}, commentstyle=\color[rgb]{0.133,0.545,0.133}, stringstyle=\color[rgb]{01,0,0}, numbers=left, numberstyle=\small, stepnumber=1, numbersep=10pt, captionpos=t, escapeinside={\%*}{*)} } %%%----------%%%----------%%%----------%%%----------%%% \begin{document} \title{Puzzle} \author{[email protected]} \date{\today} \maketitle \section*{Introduction} In this projectlet, we study the classic \textbf{Game of Fifteen} (\url{https://en.wikipedia.org/wiki/15_puzzle}). Apart from the numerous avenues for theoretical studies, this affords the practising engineer special challenges as well. \paragraph{UI} This requires a fairly simple user interface. Implementation will only have to identify and deal with mouse clicks and very little else. \paragraph{Abstraction} The variations of this game span different sizes of the puzzle i.e. 4x4, 5x5 and so on. Thus this is an introduction to parametrising the implementation. \paragraph{Variation} We will introduce a variation in this projectlet namely instead of numbered tiles, a supplied image is used. Thus solution then requires the original image to be reproduced by sliding the individual tiles according to the stated rules. \section*{Specification} Based on the above broad ideas, the following specifications will guide the exploration. \paragraph{Choice of faces} The number of tiles and an image file in a standard format like png or jpg are the basic parameters to an implementation. The user should be allowed to dynamically change the tile face. \paragraph{User commands} The user shall be able to approach the puzzle in a different ways. Starting with a jumbled puzzle, the user can slide the tiles to produce the solution. Alternatively starting with a solution, the tiles can be manually jumbled to produce interesting patterns. \paragraph{Graphic libraries} Quite a few GUI libraries are available to choose from. E.g. 
\url{https://www.gtk.org/} is a platform, language agnostic choice. So is \url{https://www.qt.io/} though with more licensing restrictions. There are more language specific choices as in \textbf{fyne} (\url{https://fyne.io/}) which may be very \textbf{go} specific. \textbf{Dotnet} \url{https://dotnet.microsoft.com/} rounds out this list as a whole platform including user interfaces. \section*{Implementation} \subsection*{Challenges} \paragraph{Concurrency Model} The gtk library specifies that all changes to the GUI be performed from a single thread of execution. Others such as fyne do not have such an explicit specification. In this example, since every action in the puzzle is driven by the user - by mouse clicks or button presses, the design is quite simple. Except when we need to update the think time which counts up independently. \paragraph{Image manipulation} The need to support images as tile faces exposes us to some basic image processing needs eg to crop, recombine images. More interesting however will be the investigation of the marriage between the static image representation in files (.png or .jpg) and the more dynamic GUI environment and the widget sets presented by the library. \paragraph{Mouse Clicks} Mouse clicks are the way the sliding tiles are identified. Each library supports a different way of identifying mouse clicks and their position. \paragraph{Algorithm Hiding} In order to experiment with different GUI libraries, the algorithmic aspects need to be isolated and a clean interface established. \subsection*{Example Project} An example project has been developed to illustrate the concepts - with the following specifications: \paragraph{} \fbox{% \parbox{6in}{% usage: puzzle [no of segments] [imagename] [fyne|gtk]\\ use . for segments=4 as well as etc/flower.png }% } \paragraph{} An implementation in \textbf{go} language has been developed using fyne and gotk3 bindings to gtk \url{https://github.com/gotk3/gotk3} and is available for exploration and enhancements. \fbox{% \parbox{5in}{% \url{https://[email protected]/ToprLLC/puzzle.git} }% } \subsection*{Implementation with fyne graphical library} \includegraphics[width=6in]{fyne_5_begin.png} \subsection*{Implementation with gtk - Jumbled puzzle} \includegraphics[width=6in]{gtk_6_jumbled.png} \section*{Potential Improvements} \paragraph{Undo} Keep track of each move and be able to retract. \paragraph{Patterns} Instead of random Jumbles, arrange them in a different predictable pattern. For instance, in the equivalent numeric view, the numbers will be placed along the periphery clockwise and so on eventually with the last empty tile being close to the center. \paragraph{Solver} Include a solver which finds the best solution from a given state presenting an animated view. \end{document}
{ "alphanum_fraction": 0.7574310796, "avg_line_length": 37.316091954, "ext": "tex", "hexsha": "2a68633e44df0804bd7f0ea09e051378cf74660b", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2022-01-27T13:17:57.000Z", "max_forks_repo_forks_event_min_datetime": "2020-05-19T18:00:05.000Z", "max_forks_repo_head_hexsha": "011182b5cf9b81fbf9b3bc8ea0c3c830cc6cb5b4", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "RajaSrinivasan/assignments", "max_forks_repo_path": "Puzzle/assignment.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "011182b5cf9b81fbf9b3bc8ea0c3c830cc6cb5b4", "max_issues_repo_issues_event_max_datetime": "2021-01-21T22:41:51.000Z", "max_issues_repo_issues_event_min_datetime": "2021-01-21T19:52:04.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "RajaSrinivasan/assignments", "max_issues_repo_path": "Puzzle/assignment.tex", "max_line_length": 476, "max_stars_count": 117, "max_stars_repo_head_hexsha": "011182b5cf9b81fbf9b3bc8ea0c3c830cc6cb5b4", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "RajaSrinivasan/assignments", "max_stars_repo_path": "Puzzle/assignment.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-22T05:10:29.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-06T19:15:18.000Z", "num_tokens": 1685, "size": 6493 }
\documentclass[11pt]{cmuthesis} % \documentclass[11pt, draft]{cmuthesis} % \sisetup{ % round-mode = places, % Rounds numbers % round-precision = 2, % to 2 places % } \input{content/macros} \begin{document} %> First pages. Do not modify. \input{content/preconfig} \input{content/frontmatter} \input{content/postconfig} %> Content. This section contains references to the chapters in your thesis. %> Modify as you please. \input{methodology/body} \input{traffic_profile/body} \input{building_energy_model/body_thesis} \input{marginal_energy_cost/body_thesis} \input{embodied_cost_model/body} \input{conclusion/body} % \input{embodied_cost_model/body} %> Page style for appendices. Do not modify. \appendix\pagestyle{plain} %> Appendices. If you have any appendices, put them here. Otherwise, remove these lines. % \input{chp-studyone/app-material} % \input{chp-studytwo/app-difussionmodel} %> Bibliography style. Do not modify. \bibliographystyle{IEEEtran} %> If you come up with a different bibtex file, and don't want to change the name of the file, %> you should change the name and location here (i.e., replace content/references for whatever %> is your bib file). Otherwise, leave it as it is. \chapter{Bibliography} \bibliography{content/bibs} \end{document}
{ "alphanum_fraction": 0.7546583851, "avg_line_length": 29.9534883721, "ext": "tex", "hexsha": "5956c3497fa403e23e2957e1cb965e2002192697", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "44e882b65c2925707c4728a9be84abfd95c07532", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "myndtrust/CMU-Thesis-master", "max_forks_repo_path": "thesis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "44e882b65c2925707c4728a9be84abfd95c07532", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "myndtrust/CMU-Thesis-master", "max_issues_repo_path": "thesis.tex", "max_line_length": 94, "max_stars_count": 1, "max_stars_repo_head_hexsha": "44e882b65c2925707c4728a9be84abfd95c07532", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "myndtrust/CMU-Thesis-master", "max_stars_repo_path": "thesis.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-09T14:41:05.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-09T14:41:05.000Z", "num_tokens": 359, "size": 1288 }